context: string (lengths 5k to 202k)
question: string (lengths 17 to 163)
answer: string (lengths 1 to 619)
length: int64 (819 to 31.7k)
dataset: string (5 classes)
context_range: string (5 classes)
INTRODUCTION The idea of language identification is to classify a given audio signal into a particular class using a classification algorithm. Traditionally, the language identification task was performed using i-vector systems [1]. A well known approach for language identification, proposed by N. Dehak et al. [1], uses the GMM-UBM model to obtain utterance level features called i-vectors. Recent advances in deep learning [15,16] have helped to improve the language identification task using many different neural network architectures which can be trained efficiently on large scale datasets using GPUs. These neural networks can be configured in various ways to obtain better accuracy for the language identification task. Early work on using deep learning for language identification was published by Pavel Matejka et al. [2], where they used stacked bottleneck features extracted from deep neural networks for the language identification task and showed that the bottleneck features learned by deep neural networks are better than simple MFCC or PLP features. Later, the work by I. Lopez-Moreno et al. [3] from Google showed how to use deep neural networks to directly map a sequence of MFCC frames to its language class, so that language identification can be applied at the frame level. Speech signals contain both spatial and temporal information, but simple DNNs are not able to capture temporal information. Work by J. Gonzalez-Dominguez et al. [4] at Google developed an LSTM based language identification model which improves the accuracy over the DNN based models. Work by Alicia et al. [5] used CNNs to improve upon i-vector [1] and other previously developed systems. The work by Daniel Garcia-Romero et al. [6] combined an acoustic model trained for speech recognition with time-delay neural networks, where the TDNN model is trained by feeding the stacked bottleneck features from the acoustic model to predict the language labels at the frame level. Recently, x-vectors [7] were proposed for the speaker identification task, were shown to outperform all previous state of the art speaker identification algorithms, and were also used for language identification by David Snyder et al. [8]. In this paper, we explore multiple pooling strategies for the language identification task. In particular, we propose a Ghost-VLAD based pooling method for language identification. Inspired by the recent work by W. Xie et al. [9] and Y. Zhong et al. [10], we use Ghost-VLAD to improve the accuracy of the language identification task for Indian languages. We explore multiple pooling strategies, including NetVLAD pooling [11], average pooling and statistics pooling (as proposed in x-vectors [7]), and show that Ghost-VLAD pooling is the best pooling strategy for language identification. Our model obtains the best accuracy of 98.24%, and it outperforms all the other previously proposed pooling methods. We conduct all our experiments on 635 hrs of audio data for 7 Indian languages collected from the $\textbf {All India Radio}$ news channel. The paper is organized as follows. In section 2, we explain the proposed pooling method for language identification. In section 3, we explain our dataset. In section 4, we describe the experiments, and in section 5, we describe the results. POOLING STRATEGIES In any language identification model, we want to obtain an utterance level representation which has good language discriminative features. These representations should be compact and should be easily separable by a linear classifier.
The idea of any pooling strategy is to pool the frame-level representations into a single utterance level representation. Previous work [7] used simple mean and standard deviation aggregation to pool the frame-level features from the top layer of the neural network to obtain the utterance level features. Recently, [9] used a VLAD based pooling strategy for speaker identification, inspired by [10], which was proposed for face recognition. The NetVLAD [11] and Ghost-VLAD [10] methods were proposed for place recognition and face recognition, respectively, and in both cases they aggregate local descriptors into global features. In our case, the local descriptors are features extracted from ResNet [15], and the global utterance level feature is obtained by using GhostVLAD pooling. In this section, we explain the different pooling methods, including NetVLAD, Ghost-VLAD, statistic pooling, and average pooling. POOLING STRATEGIES ::: NetVLAD pooling The NetVLAD pooling strategy was initially developed for place recognition by R. Arandjelovic et al. [11]. NetVLAD is an extension of the VLAD [18] approach in which the hard assignment based clustering is replaced with soft assignment based clustering, so that it can be trained with a neural network in an end-to-end fashion. In our case, we use the NetVLAD layer to map N local features of dimension D into a fixed dimensional vector, as shown in Figure 1 (Left side). The model takes a spectrogram as input and feeds it into a CNN based ResNet architecture. The ResNet is used to map the spectrogram into a 3D feature map of dimension HxWxD. We convert this 3D feature map into 2D by unfolding the H and W dimensions, creating an NxD dimensional feature map, where N=HxW. The NetVLAD layer is kept on top of the feature extraction layer of ResNet, as shown in Figure 1. The NetVLAD layer now takes N feature vectors of dimension D and computes a matrix V of dimension KxD, where K is the number of clusters in the NetVLAD layer, and D is the dimension of the feature vector. The matrix V is computed as follows: $V(j,k) = \sum _{i=1}^{N} \frac{e^{w_k^T x_i + b_k}}{\sum _{k^{\prime }} e^{w_{k^{\prime }}^T x_i + b_{k^{\prime }}}} \left(x_i(j) - c_k(j)\right) \quad (1)$ where $w_k$, $b_k$ and $c_k$ are trainable parameters for the cluster $k$ and V(j,k) represents a point in the V matrix for the (j,k)th location. The matrix is constructed using equation (1), where the first term corresponds to the soft assignment of the input $x_i$ to the cluster $c_k$, whereas the second term corresponds to the residual term which tells how far the input descriptor $x_i$ is from the cluster center $c_k$. POOLING STRATEGIES ::: GhostVLAD pooling GhostVLAD is an extension of the NetVLAD approach, which we discussed in the previous section. The GhostVLAD model was proposed for face recognition by Y. Zhong et al. [10]. GhostVLAD works exactly like NetVLAD except that it adds Ghost clusters along with the NetVLAD clusters. So we now have K+G clusters instead of K clusters, where G is the number of ghost clusters we want to add (typically 2-4). The ghost clusters are added so that any noisy or irrelevant content is mapped into them, and they are not included during the feature aggregation stage, as shown in Figure 1 (Right side). This means that we compute the matrix V for both the K normal clusters and the G ghost clusters, but the vectors belonging to the ghost clusters are excluded from V during concatenation of the features. As a result, during the feature aggregation stage the noisy and unwanted features contribute little weight to the normal VLAD clusters, while the ghost clusters absorb most of their weight.
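The following is a minimal PyTorch sketch of the GhostVLAD pooling described above (setting the number of ghost clusters to zero recovers plain NetVLAD). The authors' implementation is in Keras, so the layer layout, initialisation and normalisation choices below are assumptions for illustration rather than a reproduction of their code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GhostVLAD(nn.Module):
    """Aggregates N local D-dim descriptors into a (K*D)-dim utterance vector.
    Setting num_ghost=0 recovers plain NetVLAD."""
    def __init__(self, num_clusters=8, num_ghost=2, dim=512):
        super().__init__()
        self.K, self.G, self.D = num_clusters, num_ghost, dim
        # Soft-assignment parameters (w_k, b_k) for K real + G ghost clusters.
        self.assign = nn.Linear(dim, num_clusters + num_ghost)
        # Cluster centres c_k are kept only for the K real clusters.
        self.centers = nn.Parameter(torch.randn(num_clusters, dim) * 0.01)

    def forward(self, x):                      # x: (batch, N, D)
        a = F.softmax(self.assign(x), dim=-1)  # (batch, N, K+G) soft assignments
        a = a[..., :self.K]                    # drop ghost columns after the softmax
        # Residuals between every descriptor and every real cluster centre.
        residuals = x.unsqueeze(2) - self.centers.view(1, 1, self.K, self.D)
        v = (a.unsqueeze(-1) * residuals).sum(dim=1)        # (batch, K, D)
        v = F.normalize(v, p=2, dim=-1)                     # intra-normalisation
        return F.normalize(v.flatten(1), p=2, dim=-1)       # (batch, K*D)

# Usage: 32 local descriptors of dimension 512 from the unfolded ResNet feature map.
feats = torch.randn(4, 32, 512)
pooled = GhostVLAD(num_clusters=8, num_ghost=2, dim=512)(feats)
print(pooled.shape)  # torch.Size([4, 4096])
```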
We illustrate this in Figure 1 (Right side), where the ghost clusters are shown in red. We use the ghost clusters when computing the V matrix, but they are excluded during the concatenation stage. The concatenated features are fed into the projection layer, followed by softmax to predict the language label. POOLING STRATEGIES ::: Statistic and average pooling In statistic pooling, we compute the first and second order statistics of the local features from the top layer of the ResNet model. The 3D feature map is unfolded to create N features of D dimensions, and then we compute the mean and standard deviation of all these N vectors to get two D dimensional vectors, one for the mean and the other for the standard deviation. We then concatenate these 2 features and feed them to the projection layer for predicting the language label. In the average pooling layer, we compute only the first-order statistics (mean) of the local features from the top layer of the CNN model. The feature map from the top layer of the CNN is unfolded to create N features of D dimensions, and then we compute the mean of all these N vectors to get a D dimensional representation. We then feed this feature to the projection layer followed by softmax for predicting the language label. DATASET In this section, we describe our dataset collection process. We collected and curated around 635 hrs of audio data for 7 Indian languages, namely Kannada, Hindi, Telugu, Malayalam, Bengali, and English. We collected the data from the All India Radio news channel, where a speaker reads the news for about 5-10 mins. To cover many speakers in the dataset, we crawled data from 2010 to 2019. Since the audio clips are too long to train any deep neural network on directly, we segment them into smaller chunks using a voice activity detector. Since the audio clips have music embedded during the news, we use an in-house music detection model to remove the music segments and keep the dataset clean. Our dataset contains 635 hrs of clean audio, which is divided into 520 hrs of training data containing 165K utterances and 115 hrs of testing data containing 35K utterances. The amount of audio data for training and testing for each language is shown in the table below. EXPERIMENTS In this section, we describe the feature extraction process and network architecture in detail. We use spectral features of 256 dimensions computed using a 512 point FFT for every frame, and we add an energy feature for every frame, giving a total of 257 features per frame. We use a window size of 25ms and a frame shift of 10ms during feature computation. We crop random 5sec audio segments from each utterance during training, which results in a spectrogram of size 257x500 (features x number of frames). We use these spectrograms as input to our CNN model during training. During testing, we compute the prediction score irrespective of the audio length. For the network architecture, we use the ResNet-34 architecture, as described in [9]. The model uses convolution layers with ReLU activations to map the input spectrogram of size 257x500 into a 3D feature map of size 1x32x512. This feature cube is converted into a 2D feature map of dimension 32x512 and fed into the Ghost-VLAD/NetVLAD layer to generate a representation that has more language discrimination capacity. We use the Adam optimizer with an initial learning rate of 0.01 and a final learning rate of 0.00001 for training. Each model is trained for 15 epochs with an early stopping criterion.
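For comparison, here is a minimal sketch of the statistic and average pooling heads described in this section; PyTorch is assumed, and the 32x512 shape follows the unfolded ResNet feature map mentioned in the experiments.

```python
import torch

def average_pooling(x):
    """x: (batch, N, D) local features -> (batch, D) mean vector."""
    return x.mean(dim=1)

def statistic_pooling(x, eps=1e-5):
    """x: (batch, N, D) -> (batch, 2*D) concatenation of mean and standard deviation."""
    mean = x.mean(dim=1)
    std = torch.sqrt(x.var(dim=1, unbiased=False) + eps)  # eps for numerical stability
    return torch.cat([mean, std], dim=-1)

feats = torch.randn(4, 32, 512)          # unfolded 32x512 ResNet feature map
print(average_pooling(feats).shape)      # torch.Size([4, 512])
print(statistic_pooling(feats).shape)    # torch.Size([4, 1024])
```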
For the baseline, we train an i-vector model using GMM-UBM. We fit a small classifier on top of the generated i-vectors to measure the accuracy. This model is referred to as i-vector+svm. To compare our model with the previous state of the art system, we set up the x-vector language identification system [8]. The x-vector model uses time-delay neural networks (TDNN) along with statistics pooling. We use a 7 layer TDNN architecture similar to [8] for training. We refer to this model as tdnn+stat-pool. Finally, we set up a deep LSTM based language identification system similar to [4], but with a slight modification: we add statistics pooling over the last layer's hidden activations before classification. We use a 3 layer Bi-LSTM with 256 hidden units at each layer. We refer to this model as LSTM+stat-pool. We train our i-vector+svm and TDNN+stat-pool models using the Kaldi toolkit. We run our NetVLAD and GhostVLAD experiments using Keras, by modifying the code given by [9] for language identification. We train LSTM+stat-pool and the remaining experiments using the PyTorch [14] toolkit, and we will open-source all the code and data soon. RESULTS In this section, we compare the performance of our system with recent state of the art language identification approaches. We also compare different pooling strategies and, finally, evaluate the robustness of our system to the length of the input spectrogram during training. We visualize the embeddings generated by the GhostVLAD method and conclude that the GhostVLAD embeddings show very good feature discrimination capabilities. RESULTS ::: Comparison with different approaches We compare our system performance with the previous state of the art language identification approaches, as shown in Table 2. The i-vector+svm system is trained using GMM-UBM models to generate i-vectors as proposed in [1]. Once the i-vectors are extracted, we fit an SVM classifier to classify the audio. The TDNN+stat-pool system is trained with a statistics pooling layer and is called the x-vector system as proposed by David Snyder et al. [8]; to the best of our knowledge, it is currently the state of the art language identification approach. Our methods outperform the state of the art x-vector system by an absolute 1.88% improvement in F1-score, as shown in Table 2. RESULTS ::: Comparison with different pooling techniques We compare our approach with different pooling strategies in Table 3. We use ResNet as our base feature extraction network. We keep the base network the same and change only the pooling layers to see which pooling approach performs better for the language identification task. Our experiments show that GhostVLAD pooling outperforms all the other pooling methods by achieving a 98.43% F1-score. RESULTS ::: Duration analysis To observe the performance of our method with different input durations, we conducted an experiment where we train our model on different input durations. Since our model uses ResNet as the base feature extractor, we need to feed fixed-length spectrograms. We conducted 4 different experiments where we trained the model using 2sec, 3sec, 4sec and 5sec spectrograms containing 200, 300, 400 and 500 frames, respectively. We observed that the model trained with 5sec spectrograms is the best model, as shown in Table 4. RESULTS ::: Visualization of embeddings We visualize the embeddings generated by our approach to assess their effectiveness. We extracted 512-dimensional embeddings for our testing data and reduced the dimensionality using t-SNE projection.
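A minimal sketch of this visualisation step follows; scikit-learn and matplotlib are assumed, and the random arrays are placeholders for the actual 512-dimensional test embeddings and language labels.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# embeddings: (num_utterances, 512) utterance vectors, labels: language ids (placeholders here)
embeddings = np.random.randn(1000, 512)
labels = np.random.randint(0, 7, size=1000)   # 7 Indian languages

# Project to 2D with t-SNE and plot, coloured by language label.
proj = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(embeddings)
plt.scatter(proj[:, 0], proj[:, 1], c=labels, cmap="tab10", s=4)
plt.title("t-SNE projection of utterance embeddings")
plt.savefig("tsne_embeddings.png", dpi=150)
```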
The t-SNE plot of the embedding space is shown in Figure 3. The plot shows that the embeddings learned by our approach have very good discriminative properties. Conclusion In this work, we use the Ghost-VLAD pooling approach, originally proposed for face recognition, to improve language identification performance for Indian languages. We collected and curated 630 hrs of audio data from the All India Radio news channel for 7 Indian languages. Our experimental results show that our approach outperforms the previous state of the art methods by an absolute 1.88% F1-score. We have also conducted experiments with different pooling strategies proposed in the past, and the GhostVLAD pooling approach turns out to be the best approach for aggregating frame-level features into a single utterance level feature. Our experiments also show that our approach works well even when the training inputs have shorter durations. Finally, we see that the embeddings generated by our method have very good language discriminative features and help to improve the performance of language identification.
What is the GhostVLAD approach?
extension of the NetVLAD, adds Ghost clusters along with the NetVLAD clusters
2,454
qasper
3k
Introduction Abusive language refers to any type of insult, vulgarity, or profanity that debases the target; it also can be anything that causes aggravation BIBREF0 , BIBREF1 . Abusive language is often reframed as, but not limited to, offensive language BIBREF2 , cyberbullying BIBREF3 , othering language BIBREF4 , and hate speech BIBREF5 . Recently, an increasing number of users have been subjected to harassment, or have witnessed offensive behaviors online BIBREF6 . Major social media companies (i.e. Facebook, Twitter) have utilized multiple resources—artificial intelligence, human reviewers, user reporting processes, etc.—in effort to censor offensive language, yet it seems nearly impossible to successfully resolve the issue BIBREF7 , BIBREF8 . The major reason of the failure in abusive language detection comes from its subjectivity and context-dependent characteristics BIBREF9 . For instance, a message can be regarded as harmless on its own, but when taking previous threads into account it may be seen as abusive, and vice versa. This aspect makes detecting abusive language extremely laborious even for human annotators; therefore it is difficult to build a large and reliable dataset BIBREF10 . Previously, datasets openly available in abusive language detection research on Twitter ranged from 10K to 35K in size BIBREF9 , BIBREF11 . This quantity is not sufficient to train the significant number of parameters in deep learning models. Due to this reason, these datasets have been mainly studied by traditional machine learning methods. Most recently, Founta et al. founta2018large introduced Hate and Abusive Speech on Twitter, a dataset containing 100K tweets with cross-validated labels. Although this corpus has great potential in training deep models with its significant size, there are no baseline reports to date. This paper investigates the efficacy of different learning models in detecting abusive language. We compare accuracy using the most frequently studied machine learning classifiers as well as recent neural network models. Reliable baseline results are presented with the first comparative study on this dataset. Additionally, we demonstrate the effect of different features and variants, and describe the possibility for further improvements with the use of ensemble models. Related Work The research community introduced various approaches on abusive language detection. Razavi et al. razavi2010offensive applied Naïve Bayes, and Warner and Hirschberg warner2012detecting used Support Vector Machine (SVM), both with word-level features to classify offensive language. Xiang et al. xiang2012detecting generated topic distributions with Latent Dirichlet Allocation BIBREF12 , also using word-level features in order to classify offensive tweets. More recently, distributed word representations and neural network models have been widely applied for abusive language detection. Djuric et al. djuric2015hate used the Continuous Bag Of Words model with paragraph2vec algorithm BIBREF13 to more accurately detect hate speech than that of the plain Bag Of Words models. Badjatiya et al. badjatiya2017deep implemented Gradient Boosted Decision Trees classifiers using word representations trained by deep learning models. Other researchers have investigated character-level representations and their effectiveness compared to word-level representations BIBREF14 , BIBREF15 . As traditional machine learning methods have relied on feature engineering, (i.e. 
n-grams, POS tags, user information) BIBREF1 , researchers have proposed neural-based models with the advent of larger datasets. Convolutional Neural Networks and Recurrent Neural Networks have been applied to detect abusive language, and they have outperformed traditional machine learning classifiers such as Logistic Regression and SVM BIBREF15 , BIBREF16 . However, there are no studies investigating the efficiency of neural models with large-scale datasets over 100K. Methodology This section illustrates our implementations on traditional machine learning classifiers and neural network based models in detail. Furthermore, we describe additional features and variant models investigated. Traditional Machine Learning Models We implement five feature engineering based machine learning classifiers that are most often used for abusive language detection. In data preprocessing, text sequences are converted into Bag Of Words (BOW) representations, and normalized with Term Frequency-Inverse Document Frequency (TF-IDF) values. We experiment with word-level features using n-grams ranging from 1 to 3, and character-level features from 3 to 8-grams. Each classifier is implemented with the following specifications: Naïve Bayes (NB): Multinomial NB with additive smoothing constant 1 Logistic Regression (LR): Linear LR with L2 regularization constant 1 and limited-memory BFGS optimization Support Vector Machine (SVM): Linear SVM with L2 regularization constant 1 and logistic loss function Random Forests (RF): Averaging probabilistic predictions of 10 randomized decision trees Gradient Boosted Trees (GBT): Tree boosting with learning rate 1 and logistic loss function Neural Network based Models Along with traditional machine learning approaches, we investigate neural network based models to evaluate their efficacy within a larger dataset. In particular, we explore Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and their variant models. A pre-trained GloVe BIBREF17 representation is used for word-level features. CNN: We adopt Kim's kim2014convolutional implementation as the baseline. The word-level CNN models have 3 convolutional filters of different sizes [1,2,3] with ReLU activation, and a max-pooling layer. For the character-level CNN, we use 6 convolutional filters of various sizes [3,4,5,6,7,8], then add max-pooling layers followed by 1 fully-connected layer with a dimension of 1024. Park and Fung park2017one proposed a HybridCNN model which outperformed both word-level and character-level CNNs in abusive language detection. In order to evaluate the HybridCNN for this dataset, we concatenate the output of max-pooled layers from word-level and character-level CNN, and feed this vector to a fully-connected layer in order to predict the output. All three CNN models (word-level, character-level, and hybrid) use cross entropy with softmax as their loss function and Adam BIBREF18 as the optimizer. RNN: We use bidirectional RNN BIBREF19 as the baseline, implementing a GRU BIBREF20 cell for each recurrent unit. From extensive parameter-search experiments, we chose 1 encoding layer with 50 dimensional hidden states and an input dropout probability of 0.3. The RNN models use cross entropy with sigmoid as their loss function and Adam as the optimizer. For a possible improvement, we apply a self-matching attention mechanism on RNN baseline models BIBREF21 so that they may better understand the data by retrieving text sequences twice. 
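A minimal scikit-learn sketch of the feature-engineering baselines described above, using word-level TF-IDF n-grams with the logistic regression configuration; the toy data and the exact pipeline details are illustrative assumptions, not the authors' code.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Word-level 1-3 gram TF-IDF features, as in the paper; character-level features
# would instead use analyzer="char" with ngram_range=(3, 8).
model = Pipeline([
    ("tfidf", TfidfVectorizer(analyzer="word", ngram_range=(1, 3))),
    ("clf", LogisticRegression(penalty="l2", C=1.0, solver="lbfgs", max_iter=1000)),
])

tweets = ["you are awesome", "you are an idiot"]   # toy examples, not dataset samples
labels = ["normal", "abusive"]
model.fit(tweets, labels)
print(model.predict(["what an idiot"]))
```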
We also investigate a recently introduced method, Latent Topic Clustering (LTC) BIBREF22 . The LTC method extracts latent topic information from the hidden states of RNN, and uses it for additional information in classifying the text data. Feature Extension While manually analyzing the raw dataset, we noticed that looking at the tweet one has replied to or has quoted, provides significant contextual information. We call these, “context tweets". As humans can better understand a tweet with the reference of its context, our assumption is that computers also benefit from taking context tweets into account in detecting abusive language. As shown in the examples below, (2) is labeled abusive due to the use of vulgar language. However, the intention of the user can be better understood with its context tweet (1). (1) I hate when I'm sitting in front of the bus and somebody with a wheelchair get on. INLINEFORM0 (2) I hate it when I'm trying to board a bus and there's already an as**ole on it. Similarly, context tweet (3) is important in understanding the abusive tweet (4), especially in identifying the target of the malice. (3) Survivors of #Syria Gas Attack Recount `a Cruel Scene'. INLINEFORM0 (4) Who the HELL is “LIKE" ING this post? Sick people.... Huang et al. huang2016modeling used several attributes of context tweets for sentiment analysis in order to improve the baseline LSTM model. However, their approach was limited because the meta-information they focused on—author information, conversation type, use of the same hashtags or emojis—are all highly dependent on data. In order to avoid data dependency, text sequences of context tweets are directly used as an additional feature of neural network models. We use the same baseline model to convert context tweets to vectors, then concatenate these vectors with outputs of their corresponding labeled tweets. More specifically, we concatenate max-pooled layers of context and labeled tweets for the CNN baseline model. As for RNN, the last hidden states of context and labeled tweets are concatenated. Dataset Hate and Abusive Speech on Twitter BIBREF10 classifies tweets into 4 labels, “normal", “spam", “hateful" and “abusive". We were only able to crawl 70,904 tweets out of 99,996 tweet IDs, mainly because the tweet was deleted or the user account had been suspended. Table shows the distribution of labels of the crawled data. Data Preprocessing In the data preprocessing steps, user IDs, URLs, and frequently used emojis are replaced as special tokens. Since hashtags tend to have a high correlation with the content of the tweet BIBREF23 , we use a segmentation library BIBREF24 for hashtags to extract more information. For character-level representations, we apply the method Zhang et al. zhang2015character proposed. Tweets are transformed into one-hot encoded vectors using 70 character dimensions—26 lower-cased alphabets, 10 digits, and 34 special characters including whitespace. Training and Evaluation In training the feature engineering based machine learning classifiers, we truncate vector representations according to the TF-IDF values (the top 14,000 and 53,000 for word-level and character-level representations, respectively) to avoid overfitting. For neural network models, words that appear only once are replaced as unknown tokens. Since the dataset used is not split into train, development, and test sets, we perform 10-fold cross validation, obtaining the average of 5 tries; we divide the dataset randomly by a ratio of 85:5:10, respectively. 
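A minimal sketch of the character-level one-hot encoding used in preprocessing. The 70-dimensional alphabet below (26 lower-case letters, 10 digits, 34 special characters including whitespace) matches the counts given above, but the concrete set of special characters is an assumption.

```python
import string
import numpy as np

# 26 lower-case letters + 10 digits + 34 special characters (incl. whitespace) = 70 dims.
# The concrete special-character set is an assumption, not taken from the paper.
ALPHABET = string.ascii_lowercase + string.digits + " !\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~\n"
assert len(ALPHABET) == 70
CHAR2IDX = {c: i for i, c in enumerate(ALPHABET)}

def one_hot_encode(tweet, max_len=140):
    """Return a (max_len, 70) one-hot matrix; characters outside the alphabet map to all-zeros."""
    mat = np.zeros((max_len, len(ALPHABET)), dtype=np.float32)
    for pos, ch in enumerate(tweet.lower()[:max_len]):
        idx = CHAR2IDX.get(ch)
        if idx is not None:
            mat[pos, idx] = 1.0
    return mat

print(one_hot_encode("Who is LIKE-ing this post?").shape)  # (140, 70)
```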
In order to evaluate the overall performance, we calculate the weighted average of precision, recall, and F1 scores of all four labels, “normal”, “spam”, “hateful”, and “abusive”. Empirical Results As shown in Table , neural network models are more accurate than feature engineering based models (i.e. NB, SVM, etc.) except for the LR model—the best LR model has the same F1 score as the best CNN model. Among traditional machine learning models, the most accurate in classifying abusive language is the LR model followed by ensemble models such as GBT and RF. Character-level representations improve F1 scores of SVM and RF classifiers, but they have no positive effect on other models. For neural network models, RNN with LTC modules have the highest accuracy score, but there are no significant improvements from its baseline model and its attention-added model. Similarly, HybridCNN does not improve the baseline CNN model. For both CNN and RNN models, character-level features significantly decrease the accuracy of classification. The use of context tweets generally have little effect on baseline models, however they noticeably improve the scores of several metrics. For instance, CNN with context tweets score the highest recall and F1 for “hateful" labels, and RNN models with context tweets have the highest recall for “abusive" tweets. Discussion and Conclusion While character-level features are known to improve the accuracy of neural network models BIBREF16 , they reduce classification accuracy for Hate and Abusive Speech on Twitter. We conclude this is because of the lack of labeled data as well as the significant imbalance among the different labels. Unlike neural network models, character-level features in traditional machine learning classifiers have positive results because we have trained the models only with the most significant character elements using TF-IDF values. Variants of neural network models also suffer from data insufficiency. However, these models show positive performances on “spam" (14%) and “hateful" (4%) tweets—the lower distributed labels. The highest F1 score for “spam" is from the RNN-LTC model (0.551), and the highest for “hateful" is CNN with context tweets (0.309). Since each variant model excels in different metrics, we expect to see additional improvements with the use of ensemble models of these variants in future works. In this paper, we report the baseline accuracy of different learning models as well as their variants on the recently introduced dataset, Hate and Abusive Speech on Twitter. Experimental results show that bidirectional GRU networks with LTC provide the most accurate results in detecting abusive language. Additionally, we present the possibility of using ensemble models of variant models and features for further improvements. Acknowledgments K. Jung is with the Department of Electrical and Computer Engineering, ASRI, Seoul National University, Seoul, Korea. This work was supported by the National Research Foundation of Korea (NRF) funded by the Korea government (MSIT) (No. 2016M3C4A7952632), the Technology Innovation Program (10073144) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea). We would also like to thank Yongkeun Hwang and Ji Ho Park for helpful discussions and their valuable insights.
What additional features and context are proposed?
using tweets that one has replied or quoted to as contextual information
2,060
qasper
3k
Introduction In its inception, language modelling used one-hot vector encodings of words. However, such encodings capture only alphabetic ordering and not semantic similarity between words. Vector space models help to learn word representations in a lower dimensional space and also capture semantic similarity. Learning word embeddings aids natural language processing tasks such as question answering and reasoning BIBREF0, stance detection BIBREF1, and claim verification BIBREF2. Recent models BIBREF3, BIBREF4 work on the basis that words with similar contexts share semantic similarity. BIBREF4 proposes a neural probabilistic model which models the target word probability conditioned on the previous words using a recurrent neural network. Word2Vec models BIBREF3 such as continuous bag-of-words (CBOW) predict the target word given the context, while the skip-gram model works in reverse, predicting the context given the target word. GloVe embeddings, in turn, are based on global matrix factorization over local contexts BIBREF5. However, the aforementioned models do not handle words with multiple meanings (polysemy). BIBREF6 proposes a neural network approach considering both local and global contexts in learning word embeddings (point estimates). Their multiple prototype model handles polysemous words by providing a priori heuristics about word senses in the dataset. BIBREF7 proposes an alternative for handling polysemous words through a modified skip-gram model and the EM algorithm. BIBREF8 presents a non-parametric alternative for handling polysemy. However, these approaches fail to consider entailment relations among the words. BIBREF9 learn a Gaussian distribution per word using the expected likelihood kernel. However, for polysemous words, this may lead to word distributions with larger variances as a single distribution may have to cover various senses. BIBREF10 proposes a multimodal word distribution approach, which captures polysemy. However, its energy based objective function fails to consider asymmetry and hence entailment. Textual entailment recognition is necessary to capture lexical inference relations such as causality (for example, mosquito $\rightarrow $ malaria), hypernymy (for example, dog $\models $ animal) etc. In this paper, we propose to obtain multi-sense word embedding distributions by using a variant of the max margin objective based on an asymmetric KL divergence energy function to capture textual entailment. Multi-sense distributions are advantageous in capturing the polysemous nature of words and in reducing the uncertainty per word by distributing it across senses. However, computing the KL divergence between mixtures of Gaussians is intractable, and we use a KL divergence approximation based on stricter upper and lower bounds. While capturing textual entailment (asymmetry), we have also not compromised on capturing symmetrical similarity between words (for example, funny and hilarious), which will be elucidated in Section $3.1$. We also show the effectiveness of the proposed approach on benchmark word similarity and entailment datasets in the experimental section. Methodology ::: Word Representation Probabilistic representation of words helps one model uncertainty in word representation, and polysemy. Given a corpus $V$, containing a list of words each represented as $w$, the probability density for a word $w$ can be represented as a mixture of Gaussians with $C$ components BIBREF10: $f_w(\operatorname{\mathbf {x}}) = \sum _{j=1}^{C} p_{w,j} \; \mathcal {N}\left(\operatorname{\mathbf {x}};\, \operatorname{\mathbf {\mu }}_{w,j}, \Sigma _{w,j}\right)$
Here, $p_{w,j}$ represents the probability of word $w$ belonging to the component $j$, $\operatorname{\mathbf {\mu }}_{w,j}$ represents $D$ dimensional word representation corresponding to the $j^{th}$ component sense of the word $w$, and $\Sigma _{w,j}$ represents the uncertainty in representation for word $w$ belonging to component $j$. Objective function The model parameters (means, covariances and mixture weights) $\theta $ can be learnt using a variant of max-margin objective BIBREF11. Here $E_\theta (\cdot , \cdot )$ represents an energy function which assigns a score to the pair of words, $w$ is a particular word under consideration, $c$ its positive context (same context), and $c^{\prime }$ the negative context. The objective aims to push the margin of the difference between the energy function of a word $w$ to its positive context $c$ higher than its negative context $c$ by a threshold of $m$. Thus, word pairs in the same context gets a higher energy than the word pairs in the dissimilar context. BIBREF10 consider the energy function to be an expected likelihood kernel which is defined as follows. This is similar to the cosine similarity metric over vectors and the energy between two words is maximum when they have similar distributions. But, the expected likelihood kernel is a symmetric metric which will not be suitable for capturing ordering among words and hence entailment. Objective function ::: Proposed Energy function As each word is represented by a mixture of Gaussian distributions, KL divergence is a better choice of energy function to capture distance between distributions. Since, KL divergence is minimum when the distributions are similar and maximum when they are dissimilar, energy function is taken as exponentiated negative KL divergence. However, computing KL divergence between Gaussian mixtures is intractable and obtaining exact KL value is not possible. One way of approximating the KL is by Monte-Carlo approximation but it requires large number of samples to get a good approximation and is computationally expensive on high dimensional embedding space. Alternatively, BIBREF12 presents a KL approximation between Gaussian mixtures where they obtain an upper bound through product of Gaussian approximation method and a lower bound through variational approximation method. In BIBREF13, the authors combine the lower and upper bounds from approximation methods of BIBREF12 to provide a stricter bound on KL between Gaussian mixtures. Lets consider Gaussian mixtures for the words $w$ and $v$ as follows. The approximate KL divergence between the Gaussian mixture representations over the words $w$ and $v$ is shown in equation DISPLAY_FORM8. More details on approximation is included in the Supplementary Material. where $EL_{ik}(w,w) = \int f_{w,i} (\operatorname{\mathbf {x}}) f_{w,k} (\operatorname{\mathbf {x}}) d\operatorname{\mathbf {x}}$ and $EL_{ij}(w,v) = \int f_{w,i} (\operatorname{\mathbf {x}}) f_{v,k} (\operatorname{\mathbf {x}}) d\operatorname{\mathbf {x}}$. Note that the expected likelihood kernel appears component wise inside the approximate KL divergence derivation. One advantage of using KL as energy function is that it enables to capture asymmetry in entailment datasets. For eg., let us consider the words 'chair' with two senses as 'bench' and 'sling', and 'wood' with two senses as 'trees' and 'furniture'. The word chair ($w$) is entailed within wood ($v$), i.e. chair $\models $ wood. 
Now, minimizing the KL divergence necessitates maximizing $\log {\sum _j p_{v,j} \exp ({-KL(f_{w,i} (\operatorname{\mathbf {x}})||f_{v,j}(\operatorname{\mathbf {x}}))})}$ which in turn minimizes $KL(f_{w,i}(\operatorname{\mathbf {x}})||f_{v,j}(\operatorname{\mathbf {x}}))$. This will result in the support of the $i^{th}$ component of $w$ to be within the $j^{th}$ component of $v$, and holds for all component pairs leading to the entailment of $w$ within $v$. Consequently, we can see that bench $\models $ trees, bench $\models $ furniture, sling $\models $ trees, and sling $\models $ furniture. Thus, it introduces lexical relationship between the senses of child word and that of the parent word. Minimizing the KL also necessitates maximizing $\log {\sum _j {p_{v,j}} EL_{ij}(w,v)}$ term for all component pairs among $w$ and $v$. This is similar to maximizing expected likelihood kernel, which brings the means of $f_{w,i}(\operatorname{\mathbf {x}})$ and $f_{v,j}(\operatorname{\mathbf {x}})$ closer (weighted by their co-variances) as discussed in BIBREF10. Hence, the proposed approach captures the best of both worlds, thereby catering to both word similarity and entailment. We also note that minimizing the KL divergence necessitates minimizing $\log {\sum _k p_{w,k} \exp ({-KL(f_{w,i}||f_{w,k})})}$ which in turn maximizes $KL(f_{w,i}||f_{w,k})$. This prevents the different mixture components of a word converging to single Gaussian and encourages capturing different possible senses of the word. The same is also achieved by minimizing $\sum _k {p_{w,k}} EL_{ik}(w,w)$ term and act as a regularization term which promotes diversity in learning senses of a word. Experimentation and Results We train our proposed model GM$\_$KL (Gaussian Mixture using KL Divergence) on the Text8 dataset BIBREF14 which is a pre-processed data of $17M$ words from wikipedia. Of which, 71290 unique and frequent words are chosen using the subsampling trick in BIBREF15. We compare GM$\_$KL with the previous approaches w2g BIBREF9 ( single Gaussian model) and w2gm BIBREF10 (mixture of Gaussian model with expected likelihood kernel). For all the models used for experimentation, the embedding size ($D$) was set to 50, number of mixtures to 2, context window length to 10, batch size to 128. The word embeddings were initialized using a uniform distribution in the range of $[-\sqrt{\frac{3}{D}}$, $\sqrt{\frac{3}{D}}]$ such that the expectation of variance is 1 and mean 0 BIBREF16. One could also consider initializing the word embeddings using other contextual representations such as BERT BIBREF17 and ELMo BIBREF18 in the proposed approach. In order to purely analyze the performance of $\emph {GM\_KL}$ over the other models, we have chosen initialization using uniform distribution for experiments. For computational benefits, diagonal covariance is used similar to BIBREF10. Each mixture probability is constrained in the range $[0,1]$, summing to 1 by optimizing over unconstrained scores in the range $(-\infty ,\infty )$ and converting scores to probability using softmax function. The mixture scores are initialized to 0 to ensure fairness among all the components. The threshold for negative sampling was set to $10^{-5}$, as recommended in BIBREF3. Mini-batch gradient descent with Adagrad optimizer BIBREF19 was used with initial learning rate set to $0.05$. Table TABREF9 shows the qualitative results of GM$\_$KL. Given a query word and component id, the set of nearest neighbours along with their respective component ids are listed. 
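As an illustration of the quantities discussed above, the sketch below computes the closed-form KL divergence between two diagonal Gaussians and the variational approximation of BIBREF12 between two Gaussian mixtures, whose numerator and denominator are exactly the sums appearing in this section. The paper's actual energy uses the stricter combined bound described in the appendix, so this NumPy code is a simplified, assumed formulation rather than the exact objective.

```python
import numpy as np

def kl_diag_gauss(mu1, var1, mu2, var2):
    """KL( N(mu1, diag(var1)) || N(mu2, diag(var2)) ) in closed form."""
    return 0.5 * np.sum(var1 / var2 + (mu2 - mu1) ** 2 / var2 - 1.0 + np.log(var2 / var1))

def kl_mixture_variational(pw, mus_w, vars_w, pv, mus_v, vars_v):
    """Variational approximation to KL(f_w || f_v) for two diagonal Gaussian mixtures.

    pw, pv        : (C,) mixture weights
    mus_*, vars_* : (C, D) component means and variances
    """
    kl = 0.0
    for i in range(len(pw)):
        # self term: sum_k p_{w,k} exp(-KL(f_{w,i} || f_{w,k}))
        num = sum(pw[k] * np.exp(-kl_diag_gauss(mus_w[i], vars_w[i], mus_w[k], vars_w[k]))
                  for k in range(len(pw)))
        # cross term: sum_j p_{v,j} exp(-KL(f_{w,i} || f_{v,j}))
        den = sum(pv[j] * np.exp(-kl_diag_gauss(mus_w[i], vars_w[i], mus_v[j], vars_v[j]))
                  for j in range(len(pv)))
        kl += pw[i] * (np.log(num) - np.log(den))
    return kl

# Toy example: two words, two senses each, 50-dimensional embeddings.
rng = np.random.default_rng(0)
pw = pv = np.array([0.5, 0.5])
mus_w, mus_v = rng.normal(size=(2, 50)), rng.normal(size=(2, 50))
vars_w = vars_v = np.ones((2, 50))
print(kl_mixture_variational(pw, mus_w, vars_w, pv, mus_v, vars_v))
```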
For eg., the word `plane' in its 0th component captures the `geometry' sense and so are its neighbours, and its 1st component captures `vehicle' sense and so are its corresponding neighbours. Other words such as `rock' captures both the `metal' and `music' senses, `star' captures `celebrity' and `astronomical' senses, and `phone' captures `telephony' and `internet' senses. We quantitatively compare the performance of the GM$\_$KL, w2g, and w2gm approaches on the SCWS dataset BIBREF6. The dataset consists of 2003 word pairs of polysemous and homonymous words with labels obtained by an average of 10 human scores. The Spearman correlation between the human scores and the model scores are computed. To obtain the model score, the following metrics are used: MaxCos: Maximum cosine similarity among all component pairs of words $w$ and $v$: AvgCos: Average component-wise cosine similarity between the words $w$ and $v$. KL$\_$approx: Formulated as shown in (DISPLAY_FORM8) between the words $w$ and $v$. KL$\_$comp: Maximum component-wise negative KL between words $w$ and $v$: Table TABREF17 compares the performance of the approaches on the SCWS dataset. It is evident from Table TABREF17 that GM$\_$KL achieves better correlation than existing approaches for various metrics on SCWS dataset. Table TABREF18 shows the Spearman correlation values of GM$\_$KL model evaluated on the benchmark word similarity datasets: SL BIBREF20, WS, WS-R, WS-S BIBREF21, MEN BIBREF22, MC BIBREF23, RG BIBREF24, YP BIBREF25, MTurk-287 and MTurk-771 BIBREF26, BIBREF27, and RW BIBREF28. The metric used for comparison is 'AvgCos'. It can be seen that for most of the datasets, GM$\_$KL achieves significantly better correlation score than w2g and w2gm approaches. Other datasets such as MC and RW consist of only a single sense, and hence w2g model performs better and GM$\_$KL achieves next better performance. The YP dataset have multiple senses but does not contain entailed data and hence could not make use of entailment benefits of GM$\_$KL. Table TABREF19 shows the evaluation results of GM$\_$KL model on the entailment datasets such as entailment pairs dataset BIBREF29 created from WordNet with both positive and negative labels, a crowdsourced dataset BIBREF30 of 79 semantic relations labelled as entailed or not and annotated distributionally similar nouns dataset BIBREF31. The 'MaxCos' similarity metric is used for evaluation and the best precision and best F1-score is shown, by picking the optimal threshold. Overall, GM$\_$KL performs better than both w2g and w2gm approaches. Conclusion We proposed a KL divergence based energy function for learning multi-sense word embedding distributions modelled as Gaussian mixtures. Due to the intractability of the Gaussian mixtures for the KL divergence measure, we use an approximate KL divergence function. We also demonstrated that the proposed GM$\_$KL approaches performed better than other approaches on the benchmark word similarity and entailment datasets. tocsectionAppendices Approximation for KL divergence between mixtures of gaussians KL between gaussian mixtures $f_{w}(\operatorname{\mathbf {x}})$ and $f_{v}(\operatorname{\mathbf {x}})$ can be decomposed as: BIBREF12 presents KL approximation between gaussian mixtures using product of gaussian approximation method where KL is approximated using product of component gaussians and variational approximation method where KL is approximated by introducing some variational parameters. 
The product of component gaussian approximation method using Jensen's inequality provides upper bounds as shown in equations DISPLAY_FORM23 and . The variational approximation method provides lower bounds as shown in equations DISPLAY_FORM24 and DISPLAY_FORM25. where $H$ represents the entropy term and the entropy of $i^{th}$ component of word $w$ with dimension $D$ is given as In BIBREF13, the authors combine the lower and upper bounds from approximation methods of BIBREF12 to formulate a stricter bound on KL between gaussian mixtures. From equations DISPLAY_FORM23 and DISPLAY_FORM25, a stricter lower bound for KL between gaussian mixtures is obtained as shown in equation DISPLAY_FORM26 From equations and DISPLAY_FORM24, a stricter upper bound for KL between gaussian mixtures is obtained as shown in equation DISPLAY_FORM27 Finally, the KL between gaussian mixtures is taken as the mean of KL upper and lower bounds as shown in equation DISPLAY_FORM28.
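A small sketch of the 'MaxCos' and 'AvgCos' scoring metrics used in the evaluation, computed from the component means of two words; for simplicity the mixture weights are ignored here, which is an assumption rather than necessarily the exact scoring used in the paper.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def max_cos(mus_w, mus_v):
    """Maximum cosine similarity over all component pairs of words w and v."""
    return max(cosine(mw, mv) for mw in mus_w for mv in mus_v)

def avg_cos(mus_w, mus_v):
    """Average cosine similarity over all component pairs of words w and v."""
    sims = [cosine(mw, mv) for mw in mus_w for mv in mus_v]
    return sum(sims) / len(sims)

# mus_*: (C, D) arrays of component means for each word (toy values here).
mus_w = np.random.randn(2, 50)
mus_v = np.random.randn(2, 50)
print(max_cos(mus_w, mus_v), avg_cos(mus_w, mus_v))
```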
How does this approach compare to other WSD approaches employing word embeddings?
GM$\_$KL achieves better correlation than existing approaches for various metrics on SCWS dataset.
2,189
qasper
3k
Introduction Text simplification aims to reduce the lexical and structural complexity of a text while still retaining the semantic meaning, which can help children, non-native speakers, and people with cognitive disabilities to understand text better. Methods for automatic text simplification can be generally divided into three categories: lexical simplification (LS) BIBREF0, BIBREF1, rule-based BIBREF2, and machine translation (MT) BIBREF3, BIBREF4. LS is mainly used to simplify text by substituting infrequent and difficult words with frequent and easier words. However, there are several challenges for the LS approach: a great number of transformation rules are required for reasonable coverage; the rules should be applied based on the specific context; and the syntax and semantic meaning of the sentence are hard to retain. Rule-based approaches use hand-crafted rules for lexical and syntactic simplification, for example, substituting difficult words from a predefined vocabulary. However, such approaches need a lot of human involvement to manually define these rules, and it is impossible to give all possible simplification rules. The MT-based approach, which addresses text simplification as a monolingual machine translation problem translating from 'ordinary' to 'simplified' sentences, has attracted great attention in the last several years. Neural machine translation (NMT) is a recently proposed deep learning approach that achieves very impressive results BIBREF5, BIBREF6, BIBREF7. Unlike traditional phrase-based machine translation systems, which operate on small components separately, an NMT system is trained end-to-end, without the need for external decoders, language models or phrase tables. Therefore, existing NMT architectures have been used for text simplification BIBREF8, BIBREF4. However, most recent work using NMT is limited by training data that are scarce and expensive to build. Language models trained on simplified corpora have played a central role in statistical text simplification BIBREF9, BIBREF10. One main reason is that the amount of available simplified corpora typically far exceeds the amount of parallel data. The performance of models can typically be improved when trained on more data. Therefore, we expect simplified corpora to be especially helpful for NMT models. In contrast to previous work, which uses existing NMT models, we explore a strategy to include simplified training corpora in the training process without changing the neural network architecture. We first propose to pair simplified training sentences with synthetic ordinary sentences during training, and treat this synthetic data as additional training data. We obtain synthetic ordinary sentences through back-translation, i.e. an automatic translation of the simplified sentence into the ordinary sentence BIBREF11. Then, we mix the synthetic data into the original (simplified-ordinary) data to train the NMT model. Experimental results on two publicly available datasets show that mixing simplified sentences into the training set improves the text simplification quality of NMT models over an NMT model using only the original training data. Related Work Automatic TS is a complicated natural language processing (NLP) task, which consists of lexical and syntactic simplification levels BIBREF12.
It has attracted much attention recently as it could make texts more accessible to wider audiences and, used as a pre-processing step, improve the performance of various NLP tasks and systems BIBREF13, BIBREF14, BIBREF15. Usually, hand-crafted, supervised, and unsupervised methods based on resources like English Wikipedia and Simple English Wikipedia (EW-SEW) BIBREF10 are utilized for extracting simplification rules. It is very easy to mix up the automatic TS task and the automatic summarization task BIBREF3, BIBREF16, BIBREF6. TS is different from text summarization, as the focus of text summarization is to reduce the length and redundant content. At the lexical level, lexical simplification systems often substitute difficult words with more common words, which only requires a large corpus of regular text to obtain word embeddings and find words similar to the complex word BIBREF1, BIBREF9. Biran et al. BIBREF0 adopted an unsupervised method for learning pairs of complex and simpler synonyms from a corpus consisting of Wikipedia and Simple Wikipedia. At the sentence level, a sentence simplification model using tree transformation based on statistical machine translation (SMT) was proposed BIBREF3. Woodsend and Lapata BIBREF17 presented a data-driven model based on a quasi-synchronous grammar, a formalism that can naturally capture structural mismatches and complex rewrite operations. Wubben et al. BIBREF18 proposed a phrase-based machine translation (PBMT) model that is trained on ordinary-simplified sentence pairs. Xu et al. BIBREF19 proposed a syntax-based machine translation model using simplification-specific objective functions and features to encourage simpler output. Compared with SMT, neural machine translation (NMT) has been shown to produce state-of-the-art results BIBREF5, BIBREF7. The central approach of NMT is an encoder-decoder architecture implemented by recurrent neural networks, which can represent the input sequence as a vector and then decode that vector into an output sequence. Therefore, NMT models have been used for the text simplification task and achieved good results BIBREF8, BIBREF4, BIBREF20. The main limitation of the aforementioned NMT models for text simplification is their dependence on parallel ordinary-simplified sentence pairs. Because ordinary-simplified sentence pairs are expensive and time-consuming to build, the largest available dataset is EW-SEW, which has only 296,402 sentence pairs. This dataset is insufficient for an NMT model to obtain its best parameters. Considering that simplified data plays an important role in boosting fluency for phrase-based text simplification, we investigate the use of simplified data for text simplification. We are the first to show that we can effectively adapt neural translation models for text simplification with simplified corpora. Simplified Corpora We collected a simplified dataset from Simple English Wikipedia, which is freely available and has been previously used for many text simplification methods BIBREF0, BIBREF10, BIBREF3. Simple English Wikipedia is much easier to understand than normal English Wikipedia. We downloaded all articles from Simple English Wikipedia. For these articles, we removed stubs, navigation pages and any article that consisted of a single sentence. We then split them into sentences with Stanford CoreNLP BIBREF21, and deleted sentences with fewer than 10 or more than 40 words.
After removing repeated sentences, we chose 600K sentences as the simplified data, with 11.6M words and a vocabulary size of 82K. Text Simplification using Neural Machine Translation Our work is built on attention-based NMT BIBREF5 as an encoder-decoder network with recurrent neural networks (RNN), which simultaneously conducts dynamic alignment and generation of the target simplified sentence. The encoder uses a bidirectional RNN that consists of a forward and a backward RNN. Given a source sentence $x = (x_1, \dots , x_{T_x})$, the forward RNN and backward RNN calculate forward hidden states $(\overrightarrow{h}_1, \dots , \overrightarrow{h}_{T_x})$ and backward hidden states $(\overleftarrow{h}_1, \dots , \overleftarrow{h}_{T_x})$, respectively. The annotation vector $h_j$ is obtained by concatenating $\overrightarrow{h}_j$ and $\overleftarrow{h}_j$. The decoder is an RNN that predicts the target simplified sentence with a Gated Recurrent Unit (GRU) BIBREF22. Given the previously generated target (simplified) words $y_1, \dots , y_{t-1}$, the probability of the next target word $y_t$ is $p(y_t \mid y_1, \dots , y_{t-1}, x) = g(e_{y_{t-1}}, s_t, c_t)$, where $g$ is a non-linear function, $e_{y_{t-1}}$ is the embedding of $y_{t-1}$, and $s_t$ is a decoding state for time step $t$. State $s_t$ is calculated by $s_t = f(s_{t-1}, e_{y_{t-1}}, c_t)$, where $f$ is the activation function GRU. The context vector $c_t$ is computed as a weighted sum of the annotations $h_j$, $c_t = \sum _{j=1}^{T_x} \alpha _{tj} h_j$, where the weight $\alpha _{tj}$ is computed by $\alpha _{tj} = \frac{\exp (e_{tj})}{\sum _{k=1}^{T_x} \exp (e_{tk})}$ with $e_{tj} = v_a^{\top } \tanh (W_a s_{t-1} + U_a h_j)$, where $v_a$, $W_a$ and $U_a$ are weight matrices. The training objective is to maximize the likelihood of the training data. Beam search is employed for decoding. Synthetic Simplified Sentences We train an auxiliary NMT system from the simplified sentences to the ordinary sentences, which is first trained on the available parallel data. To leverage simplified sentences for improving the quality of the NMT model for text simplification, we propose to adapt the back-translation approach proposed by Sennrich et al. BIBREF11 to our scenario. More concretely, given a sentence from the simplified corpus, we use the simplified-to-ordinary system in translate mode with greedy decoding to translate it into an ordinary sentence; this is denoted as back-translation. This way, we obtain synthetic parallel simplified-ordinary sentence pairs. Both the synthetic sentence pairs and the available parallel data are used as training data for the original NMT system. Evaluation We evaluate the performance of text simplification using neural machine translation on available parallel sentences and additional simplified sentences. Dataset. We use two simplification datasets (WikiSmall and WikiLarge). WikiSmall consists of ordinary and simplified sentences from the ordinary and simple English Wikipedias, and has been used as a benchmark for evaluating text simplification BIBREF17, BIBREF18, BIBREF8. The training set has 89,042 sentence pairs, and the test set has 100 pairs. WikiLarge is also derived from the Wikipedia corpus; its training set contains 296,402 sentence pairs BIBREF19, BIBREF20. WikiLarge includes 8 (reference) simplifications for 2,359 sentences, split into 2,000 for development and 359 for testing. Metrics. Three text simplification metrics are chosen in this paper. BLEU BIBREF5 is a traditional machine translation metric that assesses the degree to which translated simplifications differ from reference simplifications. FKGL measures the readability of the output BIBREF23; a smaller FKGL represents simpler output.
SARI is a recent text-simplification metric that compares the output against the source and reference simplifications BIBREF20. We also evaluate the output of all systems using human evaluation; this metric is denoted as Simplicity BIBREF8. Three non-native fluent English speakers are shown reference sentences and output sentences. They are asked whether the output sentence is much simpler (+2), somewhat simpler (+1), equally difficult (0), somewhat more difficult (-1), or much more difficult (-2) than the reference sentence. Methods. We use OpenNMT BIBREF24 as the implementation of the NMT system for all experiments BIBREF5. We generally follow the default settings and training procedure described by Klein et al. (2017). We replace out-of-vocabulary words with a special UNK symbol. At prediction time, we replace UNK words using the highest probability score from the attention layer. The OpenNMT system trained on the parallel data is the baseline system. To obtain a synthetic parallel training set, we back-translate a random sample of 100K sentences from the collected simplified corpora. OpenNMT trained on the parallel data plus the synthetic data is our model. The benchmarks are run on an Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz with 32GB memory, trained on 1 GeForce GTX 1080 (Pascal) GPU with CUDA v. 8.0. We choose three statistical text simplification systems. PBMT-R is a phrase-based method with a reranking post-processing step BIBREF18. Hybrid performs sentence splitting and deletion operations based on discourse representation structures, and then simplifies sentences with PBMT-R BIBREF25. SBMT-SARI BIBREF19 is a syntax-based translation model using the PPDB paraphrase database BIBREF26 and a modified tuning function (using SARI). We choose two neural text simplification systems. NMT is a basic attention-based encoder-decoder model which uses the OpenNMT framework and is trained with two LSTM layers, hidden states of size 500, the SGD optimizer, and a dropout rate of 0.3 BIBREF8. Dress is an encoder-decoder model coupled with a deep reinforcement learning framework, with parameters chosen according to the original paper BIBREF20. For the experiments with synthetic parallel data, we back-translate a random sample of 60,000 sentences from the collected simplified sentences into ordinary sentences. Our model is trained on the synthetic data and the available parallel data, denoted as NMT+synthetic. Results. Table 1 shows the results of all models on the WikiLarge dataset. We can see that our method (NMT+synthetic) obtains higher BLEU, lower FKGL and higher SARI compared with the other models, except for Dress on FKGL and SBMT-SARI on SARI. This verifies that including synthetic data during training is very effective, yielding an improvement over our baseline NMT of 2.11 BLEU, 1.7 FKGL and 1.07 SARI. We also substantially outperform Dress, which previously reported the state-of-the-art result. The results of our human evaluation using Simplicity are also presented in Table 1. NMT with synthetic data is significantly better than PBMT-R, Dress, and SBMT-SARI on Simplicity. This indicates that our method with simplified data is effective at creating simpler output. Results on the WikiSmall dataset are shown in Table 2. We see substantial improvements (6.37 BLEU) over NMT from adding simplified training data with synthetic ordinary sentences. Compared with statistical machine translation models (PBMT-R, Hybrid, SBMT-SARI), our method (NMT+synthetic) still achieves better results, but slightly worse FKGL and SARI.
Similar to the results on WikiLarge, our method outperforms the other models in the human evaluation using Simplicity. In conclusion, our method produces better results compared with the baselines, which demonstrates the effectiveness of adding simplified training data. Conclusion In this paper, we propose a simple method to use simplified corpora during the training of NMT systems, with no changes to the network architecture. In experiments on two datasets, we achieve substantial gains in all tasks, and new SOTA results, via back-translation of simplified sentences into ordinary sentences and treating this synthetic data as additional training data. Because we do not change the neural network architecture to integrate simplified corpora, our method can be easily applied to other Neural Text Simplification (NTS) systems. We expect that the effectiveness of our method not only varies with the quality of the NTS system used for back-translation, but also depends on the amount of available parallel and simplified corpora. In this paper, we have only utilized data from Wikipedia for simplified sentences. Many other text sources are available, and in future work the impact not only of size but also of domain should be investigated.
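As a concrete illustration of the back-translation procedure described above, the sketch below shows how synthetic simplified-ordinary pairs could be produced and mixed with the parallel data. The `translate_greedy` helper and the overall interface are hypothetical placeholders, not the authors' system; a real setup would call the OpenNMT command-line tools.

```python
# Hedged sketch of back-translation for text simplification (not the authors' code).
# Assumes a simplified->ordinary NMT model is already trained on the parallel data,
# and that `translate_greedy` wraps its greedy-decoding translate mode (hypothetical).

def build_synthetic_parallel(simplified_sentences, translate_greedy):
    """Back-translate monolingual simplified sentences into synthetic ordinary sentences."""
    synthetic_pairs = []
    for simple in simplified_sentences:
        ordinary = translate_greedy(simple)         # simplified -> ordinary, greedy decoding
        synthetic_pairs.append((ordinary, simple))  # training direction: ordinary -> simplified
    return synthetic_pairs

def build_training_set(parallel_pairs, simplified_sentences, translate_greedy):
    # Mix the real parallel data with the synthetic pairs; the ordinary->simplified
    # system is then retrained on the combined set.
    return parallel_pairs + build_synthetic_parallel(simplified_sentences, translate_greedy)
```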
what language does this paper focus on?
English
2,243
qasper
3k
Introduction There have been many implementations of the word2vec model in either of the two architectures it provides: continuous skipgram and CBoW (BIBREF0). Similar distributed models of word or subword embeddings (or vector representations) find usage in SOTA deep neural networks like BERT and its successors (BIBREF1, BIBREF2, BIBREF3). These deep networks generate contextual representations of words after being trained for extended periods on large corpora, unsupervised, using attention mechanisms (BIBREF4). It has been observed that various hyper-parameter combinations have been used in different research involving word2vec, with the possibility of many of them being sub-optimal (BIBREF5, BIBREF6, BIBREF7). Therefore, the authors seek to address the research question: what is the optimal combination of word2vec hyper-parameters for intrinsic and extrinsic NLP purposes? There is an astronomically high number of possible hyper-parameter combinations for neural networks, even with just a few layers. Hence, the scope of our extensive work over three corpora is on dimension size, training epochs, window size and vocabulary size for the training algorithms (hierarchical softmax and negative sampling) of both skipgram and CBoW. The corpora used for word embeddings are English Wiki News Abstract by BIBREF8 of about 15MB, English Wiki Simple (SW) Articles by BIBREF9 of about 711MB and the Billion Word (BW) of 3.9GB by BIBREF10. The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11, while that for NER is the Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples. The IMDb dataset used has a total of 25,000 sentences, with half being positive sentiments and the other half being negative sentiments. The GMB dataset has 17 labels, with 9 main labels and 2 context tags. It is, however, unbalanced due to the high percentage of tokens with the label 'O'. This skew in the GMB dataset is typical of NER datasets. The objective of this work is to determine the optimal combinations of word2vec hyper-parameters for intrinsic evaluation (semantic and syntactic analogies) and extrinsic evaluation tasks (BIBREF13, BIBREF14), like SA and NER. It is not our objective in this work to record SOTA results. Some of the main contributions of this research are the empirical establishment of optimal combinations of word2vec hyper-parameters for NLP tasks, discovering the behaviour of vector quality vis-à-vis increasing dimensions, and the confirmation that embeddings are task-specific for the downstream task. The rest of this paper is organised as follows: the literature review, which briefly surveys distributed representation of words, particularly word2vec; the methodology employed in this research work; the results obtained; and the conclusion. Literature Review Breaking away from the non-distributed (high-dimensional, sparse) representations of words, typical of traditional bag-of-words or one-hot-encoding (BIBREF15), BIBREF0 created word2vec. Word2Vec consists of two shallow neural network architectures: continuous skipgram and CBoW. It uses distributed (low-dimensional, dense) representations of words that group similar words. This new model traded the complexity of the deep neural network architectures proposed by other researchers for more efficient training over large corpora. Its architectures have two training algorithms: negative sampling and hierarchical softmax (BIBREF16). The released model was trained on the Google News dataset of 100 billion words. 
Implementations of the model have been undertaken by researchers in the programming languages Python and C++, though the original was done in C (BIBREF17). Continuous skipgram predicts (by maximizing classification of) words before and after the center word, for a given range. Since distant words are less connected to a center word in a sentence, less weight is assigned to such distant words in training. CBoW, on the other hand, uses words from the history and future in a sequence, with the objective of correctly classifying the target word in the middle. It works by projecting all history or future words within a chosen window into the same position, averaging their vectors. Hence, the order of words in the history or future does not influence the averaged vector. This is similar to the traditional bag-of-words, which is oblivious of the order of words in its sequence. A log-linear classifier is used in both architectures (BIBREF0). In further work, they extended the model to be able to do phrase representations and subsample frequent words (BIBREF16). Being a NNLM, word2vec assigns probabilities to words in a sequence, like other NNLMs such as feedforward networks or recurrent neural networks (BIBREF15). Earlier models like latent dirichlet allocation (LDA) and latent semantic analysis (LSA) exist and effectively achieve low dimensional vectors by matrix factorization (BIBREF18, BIBREF19). It's been shown that word vectors are beneficial for NLP tasks (BIBREF15), such as sentiment analysis and named entity recognition. Besides, BIBREF0 showed with vector space algebra that relationships among words can be evaluated, expressing the quality of vectors produced from the model. The famous, semantic example: vector("King") - vector("Man") + vector("Woman") $\approx $ vector("Queen") can be verified using cosine distance. Another type of semantic meaning is the relationship between a capital city and its corresponding country. Syntactic relationship examples include plural verbs and past tense, among others. Combination of both syntactic and semantic analyses is possible and provided (totaling over 19,000 questions) as Google analogy test set by BIBREF0. WordSimilarity-353 test set is another analysis tool for word vectors (BIBREF20). Unlike Google analogy score, which is based on vector space algebra, WordSimilarity is based on human expert-assigned semantic similarity on two sets of English word pairs. Both tools rank from 0 (totally dissimilar) to 1 (very much similar or exact, in Google analogy case). A typical artificial neural network (ANN) has very many hyper-parameters which may be tuned. Hyper-parameters are values which may be manually adjusted and include vector dimension size, type of algorithm and learning rate (BIBREF19). BIBREF0 tried various hyper-parameters with both architectures of their model, ranging from 50 to 1,000 dimensions, 30,000 to 3,000,000 vocabulary sizes, 1 to 3 epochs, among others. In our work, we extended research to 3,000 dimensions. Different observations were noted from the many trials. They observed diminishing returns after a certain point, despite additional dimensions or larger, unstructured training data. However, quality increased when both dimensions and data size were increased together. Although BIBREF16 pointed out that choice of optimal hyper-parameter configurations depends on the NLP problem at hand, they identified the most important factors are architecture, dimension size, subsampling rate, and the window size. 
In addition, it has been observed that variables like dataset size improve the quality of word vectors and, potentially, performance on downstream tasks (BIBREF21, BIBREF0). Methodology The models were generated in a shared cluster running Ubuntu 16 with 32 CPUs of 32x Intel Xeon 4110 at 2.1GHz. The Gensim (BIBREF17) Python library implementation of word2vec was used with parallelization to utilize all 32 CPUs. The downstream experiments were run on a Tesla GPU on a shared DGX cluster running Ubuntu 18. The Pytorch deep learning framework was used. Gensim was chosen because of its relative stability and popular support, and to minimize the time required to write and test a new implementation in Python from scratch. To form the vocabulary, words occurring less than 5 times in the corpora were dropped, stop words were removed using the Natural Language Toolkit (NLTK) (BIBREF22), and data pre-processing was carried out. Table TABREF2 describes most of the hyper-parameters explored for each dataset. In all, 80 runs (of about 160 minutes) were conducted for the 15MB Wiki Abstract dataset, with 80 serialized models totaling 15.136GB, while 80 runs (for over 320 hours) were conducted for the 711MB SW dataset, with 80 serialized models totaling over 145GB. Experiments for all combinations for 300 dimensions were conducted on the 3.9GB training set of the BW corpus, with additional runs for other dimensions for the window 8 + skipgram + hierarchical softmax combination to verify the trend in word vector quality as dimensions are increased. Google (semantic and syntactic) analogy tests and WordSimilarity-353 (with Spearman correlation) by BIBREF20 were chosen for intrinsic evaluations. They measure the quality of word vectors. The analogy scores are averages of both semantic and syntactic tests. NER and SA were chosen for extrinsic evaluations. The GMB dataset for NER was trained in an LSTM network, which had an embedding layer for input. The network diagram is shown in fig. FIGREF4. The IMDb dataset for SA was trained in a BiLSTM network, which also used an embedding layer for input. Its network diagram is given in fig. FIGREF4. It includes an additional hidden linear layer. Hyper-parameter details of the two networks for the downstream tasks are given in table TABREF3. The metrics for extrinsic evaluation include F1, precision, recall and accuracy scores. In both tasks, the default pytorch embedding was tested before being replaced by the pre-trained embeddings released by BIBREF0 and by ours. In each case, the dataset was shuffled before training and split in the ratio 70:15:15 for training, validation (dev) and test sets. A batch size of 64 was used. For each task, experiments for each embedding were conducted four times, and an average value was calculated and reported in the next section. Results and Discussion Table TABREF5 summarizes key results from the intrinsic evaluations for 300 dimensions. Table TABREF6 reveals the training time (in hours) and average embedding loading time (in seconds) representative of the various models used. Tables TABREF11 and TABREF12 summarize key results for the extrinsic evaluations. Figures FIGREF7, FIGREF9, FIGREF10, FIGREF13 and FIGREF14 present line graphs of the eight combinations for different dimension sizes for Simple Wiki, the trend of the Simple Wiki and Billion Word corpora over several dimension sizes, analogy score comparisons for models across datasets, NER mean F1 scores on the GMB dataset, and SA mean F1 scores on the IMDb dataset, respectively. 
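A minimal sketch of how one such hyper-parameter combination might be trained and intrinsically evaluated with Gensim is shown below. The corpus path, the specific parameter values, and the analogy/word-pair file names are illustrative assumptions, not the exact configuration used in this work.

```python
# Hedged sketch (not the authors' code): training one word2vec combination with Gensim
# and running the intrinsic evaluations described above.
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

sentences = LineSentence("simple_wiki.txt")   # hypothetical pre-processed corpus file

model = Word2Vec(
    sentences,
    vector_size=300,   # dimension size under study
    window=8,          # window size under study
    sg=1,              # 1 = skipgram, 0 = CBoW
    hs=1,              # 1 = hierarchical softmax, 0 = negative sampling
    min_count=5,       # drop words occurring fewer than 5 times
    workers=32,        # parallelize over the 32 CPUs
    epochs=10,
)

# Intrinsic evaluation: Google analogy test set and WordSimilarity-353.
analogy_score, _ = model.wv.evaluate_word_analogies("questions-words.txt")
pearson, spearman, oov = model.wv.evaluate_word_pairs("wordsim353.tsv")
print(analogy_score, spearman)
```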
The combination of skipgram using hierarchical softmax and a window size of 8 for 300 dimensions outperformed the others in analogy scores for the Wiki Abstract. However, because of the tiny file size, its results are too poor to be worth reporting here. Hence, we focus on results from the Simple Wiki and Billion Word corpora. The best combination changes as corpus size increases, as can be seen from table TABREF5. In terms of analogy score, for 10 epochs, w8s0h0 performs best, while w8s1h0 performs best in terms of WordSim and the corresponding Spearman correlation. Meanwhile, increasing the corpus size to BW, w4s1h0 performs best in terms of analogy score, while w8s1h0 maintains its position as the best in terms of WordSim and Spearman correlation. Besides the quality metrics, it can be observed from table TABREF6 that the comparative ratio of values between the models is not commensurate with the differences in intrinsic or extrinsic results, especially when we consider the amount of time and energy spent, since more training time results in more energy consumption (BIBREF23). Information on the length of training time for the released Mikolov model is not readily available. However, it is interesting to note that their presumed best model, which was released, is also s1h0. Its analogy score, which we tested and report, is confirmed in their paper. It beats our best models only in analogy score (even for Simple Wiki), performing worse on the other measures. This is in spite of using a much bigger corpus with a vocabulary size of 3,000,000 and 100 billion words, while Simple Wiki had a vocabulary size of 367,811 and is 711MB. It is very likely our analogy scores will improve when we use a much larger corpus, as can be observed from table TABREF5, which involves just one billion words. Although the two best combinations in analogy (w8s0h0 & w4s0h0) for SW, as shown in fig. FIGREF7, decreased only slightly compared to the others with increasing dimensions, the increased training time and much larger serialized model size render any possible minimal score advantage at higher dimensions undesirable. As can be observed in fig. FIGREF9, from 100 dimensions, scores improve but start to drop after over 300 dimensions for SW and after over 400 dimensions for BW. More becomes worse! This trend is true for all combinations for all tests. Polynomial interpolation may be used to determine the optimal dimension in both corpora. Our models are available for confirmation, and the source code is available on GitHub. With regard to NER, most pretrained embeddings outperformed the default pytorch embedding, with our BW w4s1h0 model (which is best in BW analogy score) performing best in F1 score, closely followed by the BIBREF0 model. On the other hand, with regard to SA, the default pytorch embedding outperformed the pretrained embeddings but was closely followed by our SW w8s0h0 model (which also had the best SW analogy score). BIBREF0 performed second worst of all, despite originating from a very large corpus. The combinations w8s0h0 & w4s0h0 of SW performed reasonably well in both extrinsic tasks, just as the default pytorch embedding did. Conclusion This work empirically analyses optimal combinations of hyper-parameters for embeddings, specifically for word2vec. It further shows that for downstream tasks, like NER and SA, there's no silver bullet! However, some combinations show strong performance across tasks. 
The performance of embeddings is task-specific, and high analogy scores do not necessarily correlate positively with performance on downstream tasks. This point about correlation is somewhat similar to the results of BIBREF24 and BIBREF14. It was discovered that increasing dimension size degrades performance after a point. If saving time, energy and the environment is a strong consideration, then reasonably smaller corpora may suffice or even be better in some cases. The on-going drive by many researchers to use ever-growing data to train deep neural networks can benefit from the findings of this work. Indeed, hyper-parameter choices are very important in neural network systems (BIBREF19). Future work may investigate the performance of other architectures of word or sub-word embeddings, the performance and comparison of embeddings applied to languages other than English, and how embeddings perform in other downstream tasks. In addition, since the actual reason for the change in the best model as corpus size increases is not clear, this is also a suitable topic for further research. The work on this project is partially funded by Vinnova under the project number 2019-02996 "Språkmodeller för svenska myndigheter".
What sentiment analysis dataset is used?
IMDb dataset of movie reviews
2,327
qasper
3k
Introduction Automatic classification of sentiment has mainly focused on categorizing tweets in either two (binary sentiment analysis) or three (ternary sentiment analysis) categories BIBREF0 . In this work we study the problem of fine-grained sentiment classification where tweets are classified according to a five-point scale ranging from VeryNegative to VeryPositive. To illustrate this, Table TABREF3 presents examples of tweets associated with each of these categories. Five-point scales are widely adopted in review sites like Amazon and TripAdvisor, where a user's sentiment is ordered with respect to its intensity. From a sentiment analysis perspective, this defines a classification problem with five categories. In particular, Sebastiani et al. BIBREF1 defined such classification problems whose categories are explicitly ordered to be ordinal classification problems. To account for the ordering of the categories, learners are penalized according to how far from the true class their predictions are. Although considering different scales, the various settings of sentiment classification are related. First, one may use the same feature extraction and engineering approaches to represent the text spans such as word membership in lexicons, morpho-syntactic statistics like punctuation or elongated word counts BIBREF2 , BIBREF3 . Second, one would expect that knowledge from one task can be transfered to the others and this would benefit the performance. Knowing that a tweet is “Positive” in the ternary setting narrows the classification decision between the VeryPositive and Positive categories in the fine-grained setting. From a research perspective this raises the question of whether and how one may benefit when tackling such related tasks and how one can transfer knowledge from one task to another during the training phase. Our focus in this work is to exploit the relation between the sentiment classification settings and demonstrate the benefits stemming from combining them. To this end, we propose to formulate the different classification problems as a multitask learning problem and jointly learn them. Multitask learning BIBREF4 has shown great potential in various domains and its benefits have been empirically validated BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 using different types of data and learning approaches. An important benefit of multitask learning is that it provides an elegant way to access resources developed for similar tasks. By jointly learning correlated tasks, the amount of usable data increases. For instance, while for ternary classification one can label data using distant supervision with emoticons BIBREF9 , there is no straightforward way to do so for the fine-grained problem. However, the latter can benefit indirectly, if the ternary and fine-grained tasks are learned jointly. The research question that the paper attempts to answer is the following: Can twitter sentiment classification problems, and fine-grained sentiment classification in particular, benefit from multitask learning? To answer the question, the paper brings the following two main contributions: (i) we show how jointly learning the ternary and fine-grained sentiment classification problems in a multitask setting improves the state-of-the-art performance, and (ii) we demonstrate that recurrent neural networks outperform models previously proposed without access to huge corpora while being flexible to incorporate different sources of data. 
Multitask Learning for Twitter Sentiment Classification In his work, Caruana BIBREF4 proposed a multitask approach in which a learner takes advantage of the multiplicity of interdependent tasks while jointly learning them. The intuition is that if the tasks are correlated, the learner can learn a model jointly for them while taking into account the shared information which is expected to improve its generalization ability. People express their opinions online on various subjects (events, products..), on several languages and in several styles (tweets, paragraph-sized reviews..), and it is exactly this variety that motivates the multitask approaches. Specifically for Twitter for instance, the different settings of classification like binary, ternary and fine-grained are correlated since their difference lies in the sentiment granularity of the classes which increases while moving from binary to fine-grained problems. There are two main decisions to be made in our approach: the learning algorithm, which learns a decision function, and the data representation. With respect to the former, neural networks are particularly suitable as one can design architectures with different properties and arbitrary complexity. Also, as training neural network usually relies on back-propagation of errors, one can have shared parts of the network trained by estimating errors on the joint tasks and others specialized for particular tasks. Concerning the data representation, it strongly depends on the data type available. For the task of sentiment classification of tweets with neural networks, distributed embeddings of words have shown great potential. Embeddings are defined as low-dimensional, dense representations of words that can be obtained in an unsupervised fashion by training on large quantities of text BIBREF10 . Concerning the neural network architecture, we focus on Recurrent Neural Networks (RNNs) that are capable of modeling short-range and long-range dependencies like those exhibited in sequence data of arbitrary length like text. While in the traditional information retrieval paradigm such dependencies are captured using INLINEFORM0 -grams and skip-grams, RNNs learn to capture them automatically BIBREF11 . To circumvent the problems with capturing long-range dependencies and preventing gradients from vanishing, the long short-term memory network (LSTM) was proposed BIBREF12 . In this work, we use an extended version of LSTM called bidirectional LSTM (biLSTM). While standard LSTMs access information only from the past (previous words), biLSTMs capture both past and future information effectively BIBREF13 , BIBREF11 . They consist of two LSTM networks, for propagating text forward and backwards with the goal being to capture the dependencies better. Indeed, previous work on multitask learning showed the effectiveness of biLSTMs in a variety of problems: BIBREF14 tackled sequence prediction, while BIBREF6 and BIBREF15 used biLSTMs for Named Entity Recognition and dependency parsing respectively. Figure FIGREF2 presents the architecture we use for multitask learning. In the top-left of the figure a biLSTM network (enclosed by the dashed line) is fed with embeddings INLINEFORM0 that correspond to the INLINEFORM1 words of a tokenized tweet. Notice, as discussed above, the biLSTM consists of two LSTMs that are fed with the word sequence forward and backwards. On top of the biLSTM network one (or more) hidden layers INLINEFORM2 transform its output. 
The output of INLINEFORM3 is fed to the softmax layers for the prediction step. There are INLINEFORM4 softmax layers and each is used for one of the INLINEFORM5 tasks of the multitask setting. In tasks such as sentiment classification, additional features like membership of words in sentiment lexicons or counts of elongated/capitalized words can be used to enrich the representation of tweets before the classification step BIBREF3 . The lower part of the network illustrates how such sources of information can be incorporated into the process. An “Additional Features” vector for each tweet is transformed by the hidden layer(s) INLINEFORM6 and is then combined by concatenation with the transformed biLSTM output in the INLINEFORM7 layer. Experimental setup Our goal is to demonstrate how multitask learning can be successfully applied to the task of sentiment classification of tweets. The particularities of tweets are that they are short and informal text spans. The common use of abbreviations, creative language, etc., makes the sentiment classification problem challenging. To validate our hypothesis that learning the tasks jointly can benefit performance, we propose an experimental setting with data from two different Twitter sentiment classification problems: a fine-grained one and a ternary one. We consider the fine-grained task to be our primary task, as it is more challenging and obtaining bigger datasets, e.g. by distant supervision, is not straightforward; hence, we report the performance achieved for this task. Ternary and fine-grained sentiment classification were part of the SemEval-2016 “Sentiment Analysis in Twitter” task BIBREF16 . We use the high-quality datasets the challenge organizers released. The dataset for fine-grained classification is split into training, development, development_test and test parts. In the rest of the paper, we refer to these splits as train, development and test, where train is composed of the training and the development instances. Table TABREF7 presents an overview of the data. As discussed in BIBREF16 and illustrated in the Table, the fine-grained dataset is highly unbalanced and skewed towards the positive sentiment: only INLINEFORM0 of the training examples are labeled with one of the negative classes. Feature representation We report results using two different feature sets. The first one, dubbed nbow, is a neural bag-of-words that uses text embeddings to generate low-dimensional, dense representations of the tweets. To construct the nbow representation, given the word embeddings dictionary where each word is associated with a vector, we apply the average compositional function that averages the embeddings of the words that compose a tweet. Simple compositional functions like average were shown to be robust and efficient in previous work BIBREF17 . Instead of training embeddings from scratch, we use the GloVe embeddings of BIBREF10 , pre-trained on tweets. In terms of resources required, using only nbow is efficient as it does not require any domain knowledge. However, previous research on sentiment analysis showed that using extra resources, like sentiment lexicons, can significantly benefit performance BIBREF3 , BIBREF2 . To validate this and examine to what extent neural networks and multitask learning benefit from such features, we evaluate the models using an augmented version of nbow, dubbed nbow+. 
The feature space of the latter is augmented using 1,368 extra features, consisting mostly of counts of punctuation symbols ('!?#@'), emoticons and elongated words, and word membership features in several sentiment lexicons. Due to space limitations, for a complete presentation of these features, we refer the interested reader to BIBREF2 , whose open implementation we used to extract them. Evaluation measure To reproduce the setting of the SemEval challenges BIBREF16 , we optimize our systems using as primary measure the macro-averaged Mean Absolute Error ( INLINEFORM0 ) given by: INLINEFORM1 where INLINEFORM0 is the number of categories, INLINEFORM1 is the set of instances whose true class is INLINEFORM2 , INLINEFORM3 is the true label of the instance INLINEFORM4 and INLINEFORM5 the predicted label. The measure penalizes decisions far from the true ones and is macro-averaged to account for the fact that the data are unbalanced. Complementary to INLINEFORM6 , we report the performance achieved on the micro-averaged INLINEFORM7 measure, which is a commonly used measure for classification. The models To evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with a linear kernel and a one-vs-rest approach for the multi-class problem. Also, SVM INLINEFORM1 is an SVM with a linear kernel that employs the Crammer-Singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 , which uses a one-vs-rest approach, and multinomial Logistic Regression, also known as the MaxEnt classifier, which uses a multinomial criterion. Both SVMs and LRs as discussed above treat the problem as a multi-class one, without considering the ordering of the classes. For these four models, we tuned the hyper-parameter INLINEFORM0 that controls the importance of the L INLINEFORM1 regularization term in the optimization problem with grid search over INLINEFORM2 , using 10-fold cross-validation on the union of the training and development data, and then retrained the models with the selected values. Also, to account for the unbalanced classification problem, we used class weights to penalize errors made on the rare classes more heavily. These weights were inversely proportional to the frequency of each class. For the four models we used the implementations of Scikit-learn BIBREF19 . For multitask learning we use the architecture shown in Figure FIGREF2 , which we implemented with Keras BIBREF20 . The embeddings are initialized with the 50-dimensional GloVe embeddings, while the output of the biLSTM network is set to dimension 50. The activation function of the hidden layers is the hyperbolic tangent. The weights of the layers were initialized from a uniform distribution, scaled as described in BIBREF21 . We used the Root Mean Square Propagation optimization method. We used dropout for regularizing the network. We trained the network using batches of 128 examples as follows: before selecting the batch, we perform a Bernoulli trial with probability INLINEFORM0 to select the task to train for. 
With probability INLINEFORM1 we pick a batch for the fine-grained sentiment classification problem, while with probability INLINEFORM2 we pick a batch for the ternary problem. As shown in Figure FIGREF2 , the error is backpropagated down to the embeddings, which we fine-tune during the learning process. Notice also that the weights of the network up to the layer INLINEFORM3 are shared and therefore affected by both tasks. To tune the neural network hyper-parameters we used 5-fold cross validation. We tuned the probability INLINEFORM0 of dropout after the hidden layers INLINEFORM1 and for the biLSTM for INLINEFORM2 , the size of the hidden layer INLINEFORM3 and the probability INLINEFORM4 of the Bernoulli trials from INLINEFORM5 . During training, we monitor the network's performance on the development set and apply early stopping if the performance on the validation set does not improve for 5 consecutive epochs. Experimental results Table TABREF9 illustrates the performance of the models for the different data representations. The upper part of the Table summarizes the performance of the baselines. The entry “Balikas et al.” stands for the winning system of the 2016 edition of the challenge BIBREF2 , which, to the best of our knowledge, holds the state-of-the-art result. Due to the stochasticity of training the biLSTM models, we repeat the experiment 10 times and report the average and the standard deviation of the performance achieved. Several observations can be made from the table. First, notice that, overall, the best performance is achieved by the neural network architecture that uses multitask learning. This entails that the system makes use of the available resources efficiently and improves the state-of-the-art performance. In conjunction with the fact that we found the optimal probability INLINEFORM0 , this highlights the benefits of multitask learning over single task learning. Furthermore, as described above, the neural network-based models only have access to the training data, as the development data are held out for early stopping. On the other hand, the baseline systems were retrained on the union of the train and development sets. Hence, even with fewer resources available for training on the fine-grained problem, the neural networks outperform the baselines. We also highlight the positive effect of the additional features that previous research proposed. Adding the features to both the baselines and the biLSTM-based architectures improves the INLINEFORM1 scores by several points. Lastly, we compare the performance of the baseline systems with the performance of the state-of-the-art system of BIBREF2 . While BIBREF2 uses n-grams (and character-grams) with INLINEFORM0 , the baseline systems (SVMs, LRs) used in this work use the nbow+ representation, which relies on unigrams. Although they perform on par, the competitive performance of nbow highlights the potential of distributed representations for short-text classification. Further, incorporating structure and distributed representations leads to the gains of the biLSTM network, in both the multitask and the single-task setting. Similar observations can be drawn from Figure FIGREF10 , which presents the INLINEFORM0 scores. Again, the biLSTM network with multitask learning achieves the best performance. It is also to be noted that although the two evaluation measures are correlated in the sense that the ranking of the models is the same, small differences in the INLINEFORM1 have a large effect on the scores of the INLINEFORM2 measure. 
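A minimal sketch of the multitask setup evaluated above — a shared biLSTM encoder with one softmax head per task and Bernoulli task sampling per batch — is given below. It uses Keras-style layers for illustration; the vocabulary size, layer names and the `sample_batch` helper are assumptions rather than the authors' exact implementation.

```python
# Hedged sketch (not the authors' exact code): shared biLSTM with two task-specific
# softmax heads, trained by sampling one task per batch with a Bernoulli trial.
import numpy as np
from tensorflow.keras import layers, Model

tokens = layers.Input(shape=(None,), dtype="int32")
emb = layers.Embedding(input_dim=50000, output_dim=50)(tokens)   # init with GloVe in practice
encoded = layers.Bidirectional(layers.LSTM(50))(emb)             # shared biLSTM
hidden = layers.Dense(50, activation="tanh")(encoded)            # shared hidden layer

fine_out = layers.Dense(5, activation="softmax", name="fine")(hidden)       # 5-point scale
ternary_out = layers.Dense(3, activation="softmax", name="ternary")(hidden)

fine_model = Model(tokens, fine_out)        # the two models share the encoder layers,
ternary_model = Model(tokens, ternary_out)  # so training either one updates the shared weights
fine_model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy")
ternary_model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy")

def train_step(p, fine_data, ternary_data, sample_batch, batch_size=128):
    # Bernoulli trial: with probability p train on the fine-grained task, else on the ternary one.
    if np.random.rand() < p:
        x, y = sample_batch(fine_data, batch_size)   # hypothetical batching helper
        fine_model.train_on_batch(x, y)
    else:
        x, y = sample_batch(ternary_data, batch_size)
        ternary_model.train_on_batch(x, y)
```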
Conclusion In this paper, we showed that by jointly learning the tasks of ternary and fine-grained classification with a multitask learning model, one can greatly improve the performance on the latter. This opens several avenues for future research. Since sentiment is expressed in different textual types like tweets and paragraph-sized reviews, in different languages (English, German, ...) and at different granularity levels (binary, ternary, ...), one can imagine multitask approaches that could benefit from combining such resources. Also, while we opted for biLSTM networks here, one could use convolutional neural networks or even try to combine different types of networks and tasks to investigate the performance effect of multitask learning. Lastly, while our approach mainly relied on the foundations of BIBREF4 , the internal mechanisms and the theoretical guarantees of multitask learning remain to be better understood. Acknowledgements This work is partially supported by the CIFRE N 28/2015.
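For reference, the macro-averaged mean absolute error used as the primary evaluation measure above can be computed as in the following short sketch — a straightforward reading of the definition, not the official SemEval scorer.

```python
# Hedged sketch: macro-averaged Mean Absolute Error over ordinal classes.
# Labels are assumed to be integers encoding the ordinal scale (e.g. -2..2).
def macro_mae(y_true, y_pred):
    classes = sorted(set(y_true))
    per_class = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]       # instances whose true class is c
        per_class.append(sum(abs(y_true[i] - y_pred[i]) for i in idx) / len(idx))
    return sum(per_class) / len(per_class)                      # average over classes

print(macro_mae([0, 0, 1, 2, 2], [0, 1, 1, 1, 2]))  # toy example
```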
By how much did they improve?
They decrease MAE in 0.34
2,735
qasper
3k
Introduction This paper describes our approach and results for Task 2 of the CoNLL–SIGMORPHON 2018 shared task on universal morphological reinflection BIBREF0 . The task is to generate an inflected word form given its lemma and the context in which it occurs. Morphological (re)inflection from context is of particular relevance to the field of computational linguistics: it is compelling to estimate how well a machine-learned system can capture the morphosyntactic properties of a word given its context, and map those properties to the correct surface form for a given lemma. There are two tracks of Task 2 of CoNLL–SIGMORPHON 2018: in Track 1 the context is given in terms of word forms, lemmas and morphosyntactic descriptions (MSD); in Track 2 only word forms are available. See Table TABREF1 for an example. Task 2 is additionally split in three settings based on data size: high, medium and low, with high-resource datasets consisting of up to 70K instances per language, and low-resource datasets consisting of only about 1K instances. The baseline provided by the shared task organisers is a seq2seq model with attention (similar to the winning system for reinflection in CoNLL–SIGMORPHON 2016, BIBREF1 ), which receives information about context through an embedding of the two words immediately adjacent to the target form. We use this baseline implementation as a starting point and achieve the best overall accuracy of 49.87 on Task 2 by introducing three augmentations to the provided baseline system: (1) We use an LSTM to encode the entire available context; (2) We employ a multi-task learning approach with the auxiliary objective of MSD prediction; and (3) We train the auxiliary component in a multilingual fashion, over sets of two to three languages. In analysing the performance of our system, we found that encoding the full context improves performance considerably for all languages: 11.15 percentage points on average, although it also highly increases the variance in results. Multi-task learning, paired with multilingual training and subsequent monolingual finetuning, scored highest for five out of seven languages, improving accuracy by another 9.86% on average. System Description Our system is a modification of the provided CoNLL–SIGMORPHON 2018 baseline system, so we begin this section with a reiteration of the baseline system architecture, followed by a description of the three augmentations we introduce. Baseline The CoNLL–SIGMORPHON 2018 baseline is described as follows: The system is an encoder-decoder on character sequences. It takes a lemma as input and generates a word form. The process is conditioned on the context of the lemma [...] The baseline treats the lemma, word form and MSD of the previous and following word as context in track 1. In track 2, the baseline only considers the word forms of the previous and next word. [...] The baseline system concatenates embeddings for context word forms, lemmas and MSDs into a context vector. The baseline then computes character embeddings for each character in the input lemma. Each of these is concatenated with a copy of the context vector. The resulting sequence of vectors is encoded using an LSTM encoder. Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism. 
To that we add a few details regarding model size and training schedule: the number of LSTM layers is one; embedding size, LSTM layer size and attention layer size is 100; models are trained for 20 epochs; on every epoch, training data is subsampled at a rate of 0.3; LSTM dropout is applied at a rate 0.3; context word forms are randomly dropped at a rate of 0.1; the Adam optimiser is used, with a default learning rate of 0.001; and trained models are evaluated on the development data (the data for the shared task comes already split in train and dev sets). Our system Here we compare and contrast our system to the baseline system. A diagram of our system is shown in Figure FIGREF4 . The idea behind this modification is to provide the encoder with access to all morpho-syntactic cues present in the sentence. In contrast to the baseline, which only encodes the immediately adjacent context of a target word, we encode the entire context. All context word forms, lemmas, and MSD tags (in Track 1) are embedded in their respective high-dimensional spaces as before, and their embeddings are concatenated. However, we now reduce the entire past context to a fixed-size vector by encoding it with a forward LSTM, and we similarly represent the future context by encoding it with a backwards LSTM. We introduce an auxiliary objective that is meant to increase the morpho-syntactic awareness of the encoder and to regularise the learning process—the task is to predict the MSD tag of the target form. MSD tag predictions are conditioned on the context encoding, as described in UID15 . Tags are generated with an LSTM one component at a time, e.g. the tag PRO;NOM;SG;1 is predicted as a sequence of four components, INLINEFORM0 PRO, NOM, SG, 1 INLINEFORM1 . For every training instance, we backpropagate the sum of the main loss and the auxiliary loss without any weighting. As MSD tags are only available in Track 1, this augmentation only applies to this track. The parameters of the entire MSD (auxiliary-task) decoder are shared across languages. Since a grouping of the languages based on language family would have left several languages in single-member groups (e.g. Russian is the sole representative of the Slavic family), we experiment with random groupings of two to three languages. Multilingual training is performed by randomly alternating between languages for every new minibatch. We do not pass any information to the auxiliary decoder as to the source language of the signal it is receiving, as we assume abstract morpho-syntactic features are shared across languages. After 20 epochs of multilingual training, we perform 5 epochs of monolingual finetuning for each language. For this phase, we reduce the learning rate to a tenth of the original learning rate, i.e. 0.0001, to ensure that the models are indeed being finetuned rather than retrained. We keep all hyperparameters the same as in the baseline. Training data is split 90:10 for training and validation. We train our models for 50 epochs, adding early stopping with a tolerance of five epochs of no improvement in the validation loss. We do not subsample from the training data. We train models for 50 different random combinations of two to three languages in Track 1, and 50 monolingual models for each language in Track 2. 
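A minimal sketch of the joint training loop described above — summing the inflection and MSD losses without weighting, alternating languages at random for each minibatch, and finetuning monolingually at a tenth of the learning rate — is shown below. The model interface and batching helpers are hypothetical; this is not the shared-task code.

```python
# Hedged sketch (not the shared-task implementation): multilingual multi-task training
# with an unweighted sum of the main (inflection) loss and the auxiliary (MSD) loss.
import copy
import random
import torch

def train(model, batches_by_language, epochs=20, lr=0.001, finetune_epochs=5):
    # `model(batch)` is assumed to return (inflection_loss, msd_loss); the MSD decoder
    # parameters inside `model` are shared across all languages.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    languages = list(batches_by_language)
    batches_per_epoch = sum(len(b) for b in batches_by_language.values())

    for _ in range(epochs):                        # multilingual phase
        for _ in range(batches_per_epoch):
            lang = random.choice(languages)        # randomly alternate languages per minibatch
            main_loss, aux_loss = model(random.choice(batches_by_language[lang]))
            loss = main_loss + aux_loss            # unweighted sum of the two losses
            opt.zero_grad()
            loss.backward()
            opt.step()

    finetuned = {}                                 # monolingual finetuning at lr / 10
    for lang in languages:
        m = copy.deepcopy(model)                   # one finetuned copy per language
        f_opt = torch.optim.Adam(m.parameters(), lr=lr / 10)
        for _ in range(finetune_epochs):
            for batch in batches_by_language[lang]:
                main_loss, aux_loss = m(batch)
                f_opt.zero_grad()
                (main_loss + aux_loss).backward()
                f_opt.step()
        finetuned[lang] = m
    return finetuned
```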
Instead of picking the single model that performs best on the development set and thus risking to select a model that highly overfits that data, we use an ensemble of the five best models, and make the final prediction for a given target form with a majority vote over the five predictions. Results and Discussion Test results are listed in Table TABREF17 . Our system outperforms the baseline for all settings and languages in Track 1 and for almost all in Track 2—only in the high resource setting is our system not definitively superior to the baseline. Interestingly, our results in the low resource setting are often higher for Track 2 than for Track 1, even though contextual information is less explicit in the Track 2 data and the multilingual multi-tasking approach does not apply to this track. We interpret this finding as an indicator that a simpler model with fewer parameters works better in a setting of limited training data. Nevertheless, we focus on the low resource setting in the analysis below due to time limitations. As our Track 1 results are still substantially higher than the baseline results, we consider this analysis valid and insightful. Ablation Study We analyse the incremental effect of the different features in our system, focusing on the low-resource setting in Track 1 and using development data. Encoding the entire context with an LSTM highly increases the variance of the observed results. So we trained fifty models for each language and each architecture. Figure FIGREF23 visualises the means and standard deviations over the trained models. In addition, we visualise the average accuracy for the five best models for each language and architecture, as these are the models we use in the final ensemble prediction. Below we refer to these numbers only. The results indicate that encoding the full context with an LSTM highly enhances the performance of the model, by 11.15% on average. This observation explains the high results we obtain also for Track 2. Adding the auxiliary objective of MSD prediction has a variable effect: for four languages (de, en, es, and sv) the effect is positive, while for the rest it is negative. We consider this to be an issue of insufficient data for the training of the auxiliary component in the low resource setting we are working with. We indeed see results improving drastically with the introduction of multilingual training, with multilingual results being 7.96% higher than monolingual ones on average. We studied the five best models for each language as emerging from the multilingual training (listed in Table TABREF27 ) and found no strong linguistic patterns. The en–sv pairing seems to yield good models for these languages, which could be explained in terms of their common language family and similar morphology. The other natural pairings, however, fr–es, and de–sv, are not so frequent among the best models for these pairs of languages. Finally, monolingual finetuning improves accuracy across the board, as one would expect, by 2.72% on average. The final observation to be made based on this breakdown of results is that the multi-tasking approach paired with multilingual training and subsequent monolingual finetuning outperforms the other architectures for five out of seven languages: de, en, fr, ru and sv. For the other two languages in the dataset, es and fi, the difference between this approach and the approach that emerged as best for them is less than 1%. 
The overall improvement of the multilingual multi-tasking approach over the baseline is 18.30%. Error analysis Here we study the errors produced by our system on the English test set to better understand the remaining shortcomings of the approach. A small portion of the wrong predictions points to an incorrect interpretation of the morpho-syntactic conditioning of the context, e.g. the system predicted plan instead of plans in the context Our _ include raising private capital. The majority of wrong predictions, however, are nonsensical, like bomb for job, fify for fixing, and gnderrate for understand. This observation suggests that the system generally did not learn to copy the characters of the lemma into the inflected form, which is all it needs to do in a large number of cases. This issue could be alleviated with simple data augmentation techniques that encourage autoencoding BIBREF2 . MSD prediction Figure FIGREF32 summarises the average MSD-prediction accuracy for the multi-tasking experiments discussed above. Accuracy here is generally higher than on the main task, with the multilingual finetuned setup for Spanish and the monolingual setup for French scoring best: 66.59% and 65.35%, respectively. This observation illustrates the added difficulty of generating the correct surface form even when the morphosyntactic description has been identified correctly. We observe some correlation between these numbers and accuracy on the main task: for de, en, ru and sv, the brown, pink and blue bars here pattern in the same way as the corresponding INLINEFORM0 's in Figure FIGREF23 . One notable exception to this pattern is fr, where inflection gains a lot from multilingual training while MSD prediction suffers greatly. Notice that the magnitude of change is not always the same, however, even when the general direction matches: for ru, for example, multilingual training benefits inflection much more than it benefits MSD prediction, even though the MSD decoder is the only component that is actually shared between languages. This observation illustrates the two-fold effect of multi-task training: an auxiliary task can either inform the main task through the parameters the two tasks share, or it can help the main task learn through its regularising effect. Related Work Our system is inspired by previous work on multi-task learning and multi-lingual learning, mainly building on two intuitions: (1) jointly learning related tasks tends to be beneficial BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 ; and (2) jointly learning related languages in an MTL-inspired framework tends to be beneficial BIBREF8 , BIBREF9 , BIBREF10 . In the context of computational morphology, multi-lingual approaches have previously been employed for morphological reinflection BIBREF2 and for paradigm completion BIBREF11 . In both of these cases, however, the available datasets covered more languages, 40 and 21, respectively, which allowed for linguistically-motivated language groupings and for parameter sharing directly on the level of characters. BIBREF10 explore parameter sharing between related languages for dependency parsing, and find that sharing is more beneficial in the case of closely related languages. Conclusions In this paper we described our system for the CoNLL–SIGMORPHON 2018 shared task on Universal Morphological Reinflection, Task 2, which achieved the best performance out of all systems submitted, an overall accuracy of 49.87. 
We showed in an ablation study that this is due to three core innovations, which extend a character-based encoder-decoder model: (1) a wide context window, encoding the entire available context; (2) multi-task learning with the auxiliary task of MSD prediction, which acts as a regulariser; (3) a multilingual approach, exploiting information across languages. In future work we aim to gain a better understanding of the increase in the variance of the results introduced by each of our modifications and of the reasons for the varying effect of multi-task learning for different languages. Acknowledgements We gratefully acknowledge the support of the NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
What architecture does the encoder have?
LSTM
2,289
qasper
3k
Introduction Conventional automatic speech recognition (ASR) systems typically consist of several independently learned components: an acoustic model to predict context-dependent sub-phoneme states (senones) from audio, a graph structure to map senones to phonemes, and a pronunciation model to map phonemes to words. Hybrid systems combine hidden Markov models to model state dependencies with neural networks to predict states BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Newer approaches such as end-to-end (E2E) systems reduce the overall complexity of the final system. Our research builds on prior work that has explored using time-delay neural networks (TDNN), other forms of convolutional neural networks, and Connectionist Temporal Classification (CTC) loss BIBREF4 , BIBREF5 , BIBREF6 . We took inspiration from wav2letter BIBREF6 , which uses 1D-convolution layers. Liptchinsky et al. BIBREF7 improved wav2letter by increasing the model depth to 19 convolutional layers and adding Gated Linear Units (GLU) BIBREF8 , weight normalization BIBREF9 and dropout. By building a deeper and larger capacity network, we aim to demonstrate that we can match or outperform non end-to-end models on the LibriSpeech and 2000hr Fisher+Switchboard tasks. Like wav2letter, our architecture, Jasper, uses a stack of 1D-convolution layers, but with ReLU and batch normalization BIBREF10 . We find that ReLU and batch normalization outperform other activation and normalization schemes that we tested for convolutional ASR. As a result, Jasper's architecture contains only 1D convolution, batch normalization, ReLU, and dropout layers – operators highly optimized for training and inference on GPUs. It is possible to increase the capacity of the Jasper model by stacking these operations. Our largest version uses 54 convolutional layers (333M parameters), while our small model uses 34 (201M parameters). We use residual connections to enable this level of depth. We investigate a number of residual options and propose a new residual connection topology we call Dense Residual (DR). Integrating our best acoustic model with a Transformer-XL BIBREF11 language model allows us to obtain new state-of-the-art (SOTA) results on LibriSpeech BIBREF12 test-clean of 2.95% WER and SOTA results among end-to-end models on LibriSpeech test-other. We show competitive results on Wall Street Journal (WSJ), and 2000hr Fisher+Switchboard (F+S). Using only greedy decoding without a language model we achieve 3.86% WER on LibriSpeech test-clean. This paper makes the following contributions: Jasper Architecture Jasper is a family of end-to-end ASR models that replace acoustic and pronunciation models with a convolutional neural network. Jasper uses mel-filterbank features calculated from 20ms windows with a 10ms overlap, and outputs a probability distribution over characters per frame. Jasper has a block architecture: a Jasper INLINEFORM0 x INLINEFORM1 model has INLINEFORM2 blocks, each with INLINEFORM3 sub-blocks. Each sub-block applies the following operations: a 1D-convolution, batch norm, ReLU, and dropout. All sub-blocks in a block have the same number of output channels. Each block input is connected directly into the last sub-block via a residual connection. The residual connection is first projected through a 1x1 convolution to account for different numbers of input and output channels, then through a batch norm layer. The output of this batch norm layer is added to the output of the batch norm layer in the last sub-block. 
The result of this sum is passed through the activation function and dropout to produce the output of the sub-block. The sub-block architecture of Jasper was designed to facilitate fast GPU inference. Each sub-block can be fused into a single GPU kernel: dropout is not used at inference time and is eliminated, batch norm can be fused with the preceding convolution, ReLU clamps the result, and residual summation can be treated as a modified bias term in this fused operation. All Jasper models have four additional convolutional blocks: one pre-processing and three post-processing. See Figure FIGREF7 and Table TABREF8 for details. We also build a variant of Jasper, Jasper Dense Residual (DR). Jasper DR follows DenseNet BIBREF15 and DenseRNet BIBREF16 , but instead of having dense connections within a block, the output of a convolution block is added to the inputs of all the following blocks. While DenseNet and DenseRNet concatenate the outputs of different layers, Jasper DR adds them in the same way that residuals are added in ResNet. As explained below, we find addition to be as effective as concatenation. Normalization and Activation In our study, we evaluate the performance of models with: 3 types of normalization (batch norm BIBREF10 , weight norm BIBREF9 , and layer norm BIBREF17 ); 3 types of rectified linear units (ReLU, clipped ReLU (cReLU), and leaky ReLU (lReLU)); and 2 types of gated units (gated linear units (GLU) BIBREF8 , and gated activation units (GAU) BIBREF18 ). All experimental results are shown in Table TABREF15 . We first experimented with a smaller Jasper5x3 model to pick the top 3 settings before training on larger Jasper models. We found that layer norm with GAU performed the best on the smaller model. Layer norm with ReLU and batch norm with ReLU came second and third in our tests. Using these 3, we conducted further experiments on a larger Jasper10x4. For larger models, we noticed that batch norm with ReLU outperformed the other choices, leading us to settle on batch normalization and ReLU for our architecture. During batching, all sequences are padded to match the longest sequence. These padded values caused issues when using layer norm. We applied a sequence mask to exclude padding values from the mean and variance calculation. Further, we computed mean and variance over both the time dimension and channels, similar to the sequence-wise normalization proposed by Laurent et al. BIBREF19 . In addition to masking layer norm, we applied masking prior to the convolution operation, and masked the mean and variance calculation in batch norm. These results are shown in Table TABREF16 . Interestingly, we found that while masking before convolution gives a lower WER, using masks for both convolutions and batch norm results in worse performance. As a final note, we found that training with weight norm was very unstable, leading to exploding activations. Residual Connections For models deeper than Jasper 5x3, we consistently observe that residual connections are necessary for training to converge. In addition to the simple residual and dense residual models described above, we investigated DenseNet BIBREF15 and DenseRNet BIBREF16 variants of Jasper. Both connect the outputs of each sub-block to the inputs of following sub-blocks within a block. DenseRNet, similar to Dense Residual, connects the output of each block to the inputs of all following blocks. 
DenseNet and DenseRNet combine residual connections using concatenation whereas Residual and Dense Residual use addition. We found that Dense Residual and DenseRNet perform similarly, with each performing better on specific subsets of LibriSpeech. We decided to use Dense Residual for subsequent experiments. The main reason is that, due to concatenation, the growth factor for DenseNet and DenseRNet requires tuning for deeper models, whereas Dense Residual simply repeats its sub-blocks. Language Model A language model (LM) is a probability distribution over arbitrary symbol sequences INLINEFORM0 such that more likely sequences are assigned high probabilities. LMs are frequently used to condition beam search. During decoding, candidates are evaluated using both acoustic scores and LM scores. Traditional N-gram LMs have been augmented with neural LMs in recent work BIBREF20 , BIBREF21 , BIBREF22 . We experiment with statistical N-gram language models BIBREF23 and neural Transformer-XL BIBREF11 models. Our best results use acoustic and word-level N-gram language models to generate a candidate list using beam search with a width of 2048. Next, an external Transformer-XL LM rescores the final list. All LMs were trained independently from the acoustic models. We show results with the neural LM in our Results section. We observed a strong correlation between the quality of the neural LM (measured by perplexity) and WER, as shown in Figure FIGREF20 . NovoGrad For training, we use either Stochastic Gradient Descent (SGD) with momentum or our own NovoGrad, an optimizer similar to Adam BIBREF14 , except that its second moments are computed per layer instead of per weight. Compared to Adam, it reduces memory consumption and we find it to be more numerically stable. At each step INLINEFORM0 , NovoGrad computes the stochastic gradient INLINEFORM1 following the regular forward-backward pass. Then the second-order moment INLINEFORM2 is computed for each layer INLINEFORM3 similar to ND-Adam BIBREF27 : DISPLAYFORM0 The second-order moment INLINEFORM0 is used to re-scale gradients INLINEFORM1 before calculating the first-order moment INLINEFORM2 : DISPLAYFORM0 If L2-regularization is used, a weight decay INLINEFORM0 is added to the re-scaled gradient (as in AdamW BIBREF28 ): DISPLAYFORM0 Finally, new weights are computed using the learning rate INLINEFORM0 : DISPLAYFORM0 Using NovoGrad instead of SGD with momentum, we decreased the WER on dev-clean LibriSpeech from 4.00% to 3.64%, a relative improvement of 9% for Jasper DR 10x5. We will further analyze NovoGrad in forthcoming work. Results We evaluate Jasper across a number of datasets in various domains. In all experiments, we use dropout and weight decay as regularization. At training time, we use speed perturbation with fixed +/-10% BIBREF29 for LibriSpeech. For WSJ and Hub5'00, we use a random speed perturbation factor between [-10%, 10%] as each utterance is fed into the model. All models have been trained on NVIDIA DGX-1 in mixed precision BIBREF30 using OpenSeq2Seq BIBREF31 . Source code, training configurations, and pretrained models are available. Read Speech We evaluated the performance of Jasper on two read speech datasets: LibriSpeech and Wall Street Journal (WSJ). For LibriSpeech, we trained Jasper DR 10x5 using our NovoGrad optimizer for 400 epochs. We achieve SOTA performance on the test-clean subset and SOTA among end-to-end speech recognition models on test-other. 
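A minimal sketch of a NovoGrad-style update consistent with the description earlier in this section is given below. It follows the per-layer second moment and the AdamW-style decoupled weight decay, but the exact moment updates, default coefficients, and momentum handling are assumptions based on the prose, not the released implementation.

```python
# Hedged sketch of a NovoGrad-style step (illustrative, not the OpenSeq2Seq implementation).
import torch

@torch.no_grad()
def novograd_step(layers, state, lr=0.01, beta1=0.95, beta2=0.98, eps=1e-8, wd=0.001):
    # `layers` is a list of parameter tensors with .grad populated; `state` holds per-layer moments.
    for i, w in enumerate(layers):
        g = w.grad
        g_norm_sq = g.pow(2).sum()                  # one scalar second moment per layer
        v = state.get(("v", i), g_norm_sq)
        v = beta2 * v + (1 - beta2) * g_norm_sq
        rescaled = g / (v.sqrt() + eps) + wd * w    # re-scale gradient, add decoupled weight decay
        m = state.get(("m", i), torch.zeros_like(w))
        m = beta1 * m + rescaled                    # first-order moment on the re-scaled gradient
        w -= lr * m
        state[("v", i)], state[("m", i)] = v, m
```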
We trained a smaller Jasper 10x3 model using the SGD-with-momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29. Conversational Speech We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31. We obtain good results for SWB. However, there is work to be done to improve WER on harder tasks such as CHM. Conclusions We have presented a new family of neural architectures for end-to-end speech recognition. Inspired by wav2letter's convolutional approach, we build a deep and scalable model, which requires a well-designed residual topology, effective regularization, and a strong optimizer. As our architecture studies demonstrated, a combination of standard components leads to SOTA results on LibriSpeech and competitive results on other benchmarks. Our Jasper architecture is highly efficient for training and inference, and serves as a good baseline approach on top of which to explore more sophisticated regularization, data augmentation, loss functions, language models, and optimization strategies. We are interested to see whether our approach can continue to scale to deeper models and larger datasets.
what were the baselines?
Unanswerable
1,856
qasper
3k
Introduction In the kitchen, we increasingly rely on instructions from cooking websites: recipes. A cook with a predilection for Asian cuisine may wish to prepare chicken curry, but may not know all necessary ingredients apart from a few basics. These users with limited knowledge cannot rely on existing recipe generation approaches that focus on creating coherent recipes given all ingredients and a recipe name BIBREF0. Such models do not address issues of personal preference (e.g. culinary tastes, garnish choices) and incomplete recipe details. We propose to approach both problems via personalized generation of plausible, user-specific recipes using user preferences extracted from previously consumed recipes. Our work combines two important tasks from natural language processing and recommender systems: data-to-text generation BIBREF1 and personalized recommendation BIBREF2. Our model takes as user input the name of a specific dish, a few key ingredients, and a calorie level. We pass these loose input specifications to an encoder-decoder framework and attend on user profiles—learned latent representations of recipes previously consumed by a user—to generate a recipe personalized to the user's tastes. We fuse these `user-aware' representations with decoder output in an attention fusion layer to jointly determine text generation. Quantitative (perplexity, user-ranking) and qualitative analysis on user-aware model outputs confirm that personalization indeed assists in generating plausible recipes from incomplete ingredients. While personalized text generation has seen success in conveying user writing styles in the product review BIBREF3, BIBREF4 and dialogue BIBREF5 spaces, we are the first to consider it for the problem of recipe generation, where output quality is heavily dependent on the content of the instructions—such as ingredients and cooking techniques. To summarize, our main contributions are as follows: We explore a new task of generating plausible and personalized recipes from incomplete input specifications by leveraging historical user preferences; We release a new dataset of 180K+ recipes and 700K+ user reviews for this task; We introduce new evaluation strategies for generation quality in instructional texts, centering on quantitative measures of coherence. We also show qualitatively and quantitatively that personalized models generate high-quality and specific recipes that align with historical user preferences. Related Work Large-scale transformer-based language models have shown surprising expressivity and fluency in creative and conditional long-text generation BIBREF6, BIBREF7. Recent works have proposed hierarchical methods that condition on narrative frameworks to generate internally consistent long texts BIBREF8, BIBREF9, BIBREF10. Here, we generate procedurally structured recipes instead of free-form narratives. Recipe generation belongs to the field of data-to-text natural language generation BIBREF1, which sees other applications in automated journalism BIBREF11, question-answering BIBREF12, and abstractive summarization BIBREF13, among others. BIBREF14, BIBREF15 model recipes as a structured collection of ingredient entities acted upon by cooking actions. BIBREF0 imposes a `checklist' attention constraint emphasizing hitherto unused ingredients during generation. BIBREF16 attend over explicit ingredient references in the prior recipe step. 
Similar hierarchical approaches that infer a full ingredient list to constrain generation will not help personalize recipes, and would be infeasible in our setting due to the potentially unconstrained number of ingredients (from a space of 10K+) in a recipe. We instead learn historical preferences to guide full recipe generation. A recent line of work has explored user- and item-dependent aspect-aware review generation BIBREF3, BIBREF4. This work is related to ours in that it combines contextual language generation with personalization. Here, we attend over historical user preferences from previously consumed recipes to generate recipe content, rather than writing styles. Approach Our model's input specification consists of: the recipe name as a sequence of tokens, a partial list of ingredients, and a caloric level (high, medium, low). It outputs the recipe instructions as a token sequence: $\mathcal {W}_r=\lbrace w_{r,0}, \dots , w_{r,T}\rbrace $ for a recipe $r$ of length $T$. To personalize output, we use historical recipe interactions of a user $u \in \mathcal {U}$. Encoder: Our encoder has three embedding layers: vocabulary embedding $\mathcal {V}$, ingredient embedding $\mathcal {I}$, and caloric-level embedding $\mathcal {C}$. Each token in the (length $L_n$) recipe name is embedded via $\mathcal {V}$; the embedded token sequence is passed to a two-layered bidirectional GRU (BiGRU) BIBREF17, which outputs hidden states for names $\lbrace \mathbf {n}_{\text{enc},j} \in \mathbb {R}^{2d_h}\rbrace $, with hidden size $d_h$. Similarly each of the $L_i$ input ingredients is embedded via $\mathcal {I}$, and the embedded ingredient sequence is passed to another two-layered BiGRU to output ingredient hidden states as $\lbrace \mathbf {i}_{\text{enc},j} \in \mathbb {R}^{2d_h}\rbrace $. The caloric level is embedded via $\mathcal {C}$ and passed through a projection layer with weights $W_c$ to generate calorie hidden representation $\mathbf {c}_{\text{enc}} \in \mathbb {R}^{2d_h}$. Ingredient Attention: We apply attention BIBREF18 over the encoded ingredients to use encoder outputs at each decoding time step. We define an attention-score function $\alpha $ with key $K$ and query $Q$: with trainable weights $W_{\alpha }$, bias $\mathbf {b}_{\alpha }$, and normalization term $Z$. At decoding time $t$, we calculate the ingredient context $\mathbf {a}_{t}^{i} \in \mathbb {R}^{d_h}$ as: Decoder: The decoder is a two-layer GRU with hidden state $h_t$ conditioned on previous hidden state $h_{t-1}$ and input token $w_{r, t}$ from the original recipe text. We project the concatenated encoder outputs as the initial decoder hidden state: To bias generation toward user preferences, we attend over a user's previously reviewed recipes to jointly determine the final output token distribution. We consider two different schemes to model preferences from user histories: (1) recipe interactions, and (2) techniques seen therein (defined in data). BIBREF19, BIBREF20, BIBREF21 explore similar schemes for personalized recommendation. Prior Recipe Attention: We obtain the set of prior recipes for a user $u$: $R^+_u$, where each recipe can be represented by an embedding from a recipe embedding layer $\mathcal {R}$ or an average of the name tokens embedded by $\mathcal {V}$. We attend over the $k$-most recent prior recipes, $R^{k+}_u$, to account for temporal drift of user preferences BIBREF22. These embeddings are used in the `Prior Recipe' and `Prior Name' models, respectively. 
Given a recipe representation $\mathbf {r} \in \mathbb {R}^{d_r}$ (where $d_r$ is recipe- or vocabulary-embedding size depending on the recipe representation) the prior recipe attention context $\mathbf {a}_{t}^{r_u}$ is calculated as Prior Technique Attention: We calculate prior technique preference (used in the `Prior Tech` model) by normalizing co-occurrence between users and techniques seen in $R^+_u$, to obtain a preference vector $\rho _{u}$. Each technique $x$ is embedded via a technique embedding layer $\mathcal {X}$ to $\mathbf {x}\in \mathbb {R}^{d_x}$. Prior technique attention is calculated as where, inspired by copy mechanisms BIBREF23, BIBREF24, we add $\rho _{u,x}$ for technique $x$ to emphasize the attention by the user's prior technique preference. Attention Fusion Layer: We fuse all contexts calculated at time $t$, concatenating them with decoder GRU output and previous token embedding: We then calculate the token probability: and maximize the log-likelihood of the generated sequence conditioned on input specifications and user preferences. fig:ex shows a case where the Prior Name model attends strongly on previously consumed savory recipes to suggest the usage of an additional ingredient (`cilantro'). Recipe Dataset: Food.com We collect a novel dataset of 230K+ recipe texts and 1M+ user interactions (reviews) over 18 years (2000-2018) from Food.com. Here, we restrict to recipes with at least 3 steps, and at least 4 and no more than 20 ingredients. We discard users with fewer than 4 reviews, giving 180K+ recipes and 700K+ reviews, with splits as in tab:recipeixnstats. Our model must learn to generate from a diverse recipe space: in our training data, the average recipe length is 117 tokens with a maximum of 256. There are 13K unique ingredients across all recipes. Rare words dominate the vocabulary: 95% of words appear $<$100 times, accounting for only 1.65% of all word usage. As such, we perform Byte-Pair Encoding (BPE) tokenization BIBREF25, BIBREF26, giving a training vocabulary of 15K tokens across 19M total mentions. User profiles are similarly diverse: 50% of users have consumed $\le $6 recipes, while 10% of users have consumed $>$45 recipes. We order reviews by timestamp, keeping the most recent review for each user as the test set, the second most recent for validation, and the remainder for training (sequential leave-one-out evaluation BIBREF27). We evaluate only on recipes not in the training set. We manually construct a list of 58 cooking techniques from 384 cooking actions collected by BIBREF15; the most common techniques (bake, combine, pour, boil) account for 36.5% of technique mentions. We approximate technique adherence via string match between the recipe text and technique list. Experiments and Results For training and evaluation, we provide our model with the first 3-5 ingredients listed in each recipe. We decode recipe text via top-$k$ sampling BIBREF7, finding $k=3$ to produce satisfactory results. We use a hidden size $d_h=256$ for both the encoder and decoder. Embedding dimensions for vocabulary, ingredient, recipe, techniques, and caloric level are 300, 10, 50, 50, and 5 (respectively). For prior recipe attention, we set $k=20$, the 80th %-ile for the number of user interactions. We use the Adam optimizer BIBREF28 with a learning rate of $10^{-3}$, annealed with a decay rate of 0.9 BIBREF29. We also use teacher-forcing BIBREF30 in all training epochs. 
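The attention fusion layer described above concatenates the decoder GRU output, the attention contexts, and the previous token embedding before projecting to the vocabulary. The PyTorch sketch below illustrates one plausible implementation; the layer sizes and the single output projection are assumptions, since the exact parameterization is not given in the excerpt.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Fuses attention contexts with the decoder state to predict the next token."""

    def __init__(self, d_h, d_contexts, d_emb, vocab_size):
        super().__init__()
        # d_contexts is the summed dimensionality of all attention contexts.
        self.out = nn.Linear(d_h + d_contexts + d_emb, vocab_size)

    def forward(self, dec_h, contexts, prev_emb):
        # dec_h    : (batch, d_h) decoder GRU output at time t
        # contexts : list of attention contexts, each of shape (batch, d_c_i)
        # prev_emb : (batch, d_emb) embedding of the previous token
        fused = torch.cat([dec_h, *contexts, prev_emb], dim=-1)
        return F.log_softmax(self.out(fused), dim=-1)  # token log-probabilities
```

Training would then maximize the log-likelihood of the reference recipe tokens under these output distributions, conditioned on the input specification and user history.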
In this work, we investigate how leveraging historical user preferences can improve generation quality over strong baselines in our setting. We compare our personalized models against two baselines. The first is a name-based Nearest-Neighbor model (NN). We initially adapted the Neural Checklist Model of BIBREF0 as a baseline; however, we ultimately use a simple Encoder-Decoder baseline with ingredient attention (Enc-Dec), which provides comparable performance and lower complexity. All personalized models outperform baseline in BPE perplexity (tab:metricsontest) with Prior Name performing the best. While our models exhibit comparable performance to baseline in BLEU-1/4 and ROUGE-L, we generate more diverse (Distinct-1/2: percentage of distinct unigrams and bigrams) and acceptable recipes. BLEU and ROUGE are not the most appropriate metrics for generation quality. A `correct' recipe can be written in many ways with the same main entities (ingredients). As BLEU-1/4 capture structural information via n-gram matching, they are not correlated with subjective recipe quality. This mirrors observations from BIBREF31, BIBREF8. We observe that personalized models make more diverse recipes than baseline. They thus perform better in BLEU-1 with more key entities (ingredient mentions) present, but worse in BLEU-4, as these recipes are written in a personalized way and deviate from gold on the phrasal level. Similarly, the `Prior Name' model generates more unigram-diverse recipes than other personalized models and obtains a correspondingly lower BLEU-1 score. Qualitative Analysis: We present sample outputs for a cocktail recipe in tab:samplerecipes, and additional recipes in the appendix. Generation quality progressively improves from generic baseline output to a blended cocktail produced by our best performing model. Models attending over prior recipes explicitly reference ingredients. The Prior Name model further suggests the addition of lemon and mint, which are reasonably associated with previously consumed recipes like coconut mousse and pork skewers. Personalization: To measure personalization, we evaluate how closely the generated text corresponds to a particular user profile. We compute the likelihood of generated recipes using identical input specifications but conditioned on ten different user profiles—one `gold' user who consumed the original recipe, and nine randomly generated user profiles. Following BIBREF8, we expect the highest likelihood for the recipe conditioned on the gold user. We measure user matching accuracy (UMA)—the proportion where the gold user is ranked highest—and Mean Reciprocal Rank (MRR) BIBREF32 of the gold user. All personalized models beat baselines in both metrics, showing our models personalize generated recipes to the given user profiles. The Prior Name model achieves the best UMA and MRR by a large margin, revealing that prior recipe names are strong signals for personalization. Moreover, the addition of attention mechanisms to capture these signals improves language modeling performance over a strong non-personalized baseline. Recipe Level Coherence: A plausible recipe should possess a coherent step order, and we evaluate this via a metric for recipe-level coherence. We use the neural scoring model from BIBREF33 to measure recipe-level coherence for each generated recipe. Each recipe step is encoded by BERT BIBREF34. 
Our scoring model is a GRU network that learns the overall recipe step ordering structure by minimizing the cosine similarity of recipe step hidden representations presented in the correct and reverse orders. Once pretrained, our scorer calculates the similarity of a generated recipe to the forward and backward orderings of its corresponding gold label, giving a score equal to the difference between the former and the latter. A higher score indicates better step ordering (with a maximum score of 2). tab:coherencemetrics shows that our personalized models achieve average recipe-level coherence scores of 1.78-1.82, surpassing the baseline at 1.77. Recipe Step Entailment: Local coherence also matters to a user following a recipe: subsequent steps must be logically consistent with prior ones. We model local coherence as an entailment task: predicting the likelihood that a recipe step follows the preceding one. We sample several consecutive (positive) and non-consecutive (negative) pairs of steps from each recipe. We train a BERT BIBREF34 model to predict the entailment score of a pair of steps separated by a [SEP] token, using the final representation of the [CLS] token. The step entailment score is computed as the average of scores for each set of consecutive steps in each recipe, averaged over every generated recipe for a model, as shown in tab:coherencemetrics. Human Evaluation: We presented 310 pairs of recipes for pairwise comparison BIBREF8 (details in appendix) between the baseline and each personalized model, with results shown in tab:metricsontest. On average, human evaluators preferred personalized model outputs to the baseline 63% of the time, confirming that personalized attention improves the semantic plausibility of generated recipes. We also performed a small-scale human coherence survey over 90 recipes, in which 60% of users found recipes generated by personalized models to be more coherent than, and preferable to, those generated by baseline models. Conclusion In this paper, we propose a novel task: to generate personalized recipes from incomplete input specifications and user histories. On a large novel dataset of 180K recipes and 700K reviews, we show that our personalized generative models can generate plausible, personalized, and coherent recipes preferred by human evaluators for consumption. We also introduce a set of automatic coherence measures for instructional texts as well as personalization metrics to support our claims. Our future work includes generating structured representations of recipes to handle ingredient properties, as well as accounting for references to collections of ingredients (e.g. "dry mix"). Acknowledgements. This work is partly supported by NSF #1750063. We thank all reviewers for their constructive suggestions, as well as Rei M., Sujoy P., Alicia L., Eric H., Tim S., Kathy C., Allen C., and Micah I. for their feedback. Appendix ::: Food.com: Dataset Details Our raw data consists of 270K recipes and 1.4M user-recipe interactions (reviews) scraped from Food.com, covering a period of 18 years (January 2000 to December 2018). See tab:int-stats for dataset summary statistics, and tab:samplegk for sample information about one user-recipe interaction and the recipe involved. Appendix ::: Generated Examples See tab:samplechx for a sample recipe for chicken chili and tab:samplewaffle for a sample recipe for sweet waffles.
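As a rough illustration of the recipe-level coherence score described above, the sketch below computes the difference between a generated recipe's cosine similarity to the forward-ordered and reverse-ordered gold representations. The vector inputs stand in for the pretrained GRU scorer's hidden representations, whose exact construction is not detailed in the excerpt.

```python
import numpy as np

def step_order_coherence(gen_vec, fwd_gold_vec, rev_gold_vec):
    """Recipe-level coherence: cosine similarity to the forward-ordered gold
    minus cosine similarity to the reverse-ordered gold (maximum value is 2)."""
    def cos(a, b):
        return float(np.dot(a, b) /
                     (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return cos(gen_vec, fwd_gold_vec) - cos(gen_vec, rev_gold_vec)
```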
Human Evaluation We prepared a set of 15 pairwise comparisons per evaluation session, and collected 930 pairwise evaluations (310 per personalized model) over 62 sessions. For each pair, users were given a partial recipe specification (name and 3-5 key ingredients), as well as two generated recipes labeled `A' and `B'. One recipe is generated from our baseline encoder-decoder model and one recipe is generated by one of our three personalized models (Prior Tech, Prior Name, Prior Recipe). The order of recipe presentation (A/B) is randomly selected for each question. A screenshot of the user evaluation interface is given in fig:exeval. We ask the user to indicate which recipe they find more coherent, and which recipe best accomplishes the goal indicated by the recipe name. A screenshot of this survey interface is given in fig:exeval2.
What metrics are used for evaluation?
Byte-Pair Encoding perplexity (BPE PPL), BLEU-1, BLEU-4, ROUGE-L, percentage of distinct unigrams (D-1), percentage of distinct bigrams (D-2), user matching accuracy (UMA), Mean Reciprocal Rank (MRR), pairwise preference over baseline (PP)
2,673
qasper
3k
Introduction Microblogging services such as Twitter and Weibo are popular social networking platforms that allow users to post messages of up to 140 characters. There are millions of active users on these platforms who stay connected with friends. Unfortunately, spammers also use them as a tool to post malicious links, send unsolicited messages to legitimate users, etc. A certain number of spammers could sway public opinion and cause distrust of the social platform. Despite the use of rigid anti-spam rules, human-like spammers, whose homepages have photos, detailed profiles, etc., have emerged. Unlike previous "simple" spammers, whose tweets contain only malicious links, these "smart" spammers are more difficult to distinguish from legitimate users via content-based features alone BIBREF0. There is a considerable amount of previous work on spammer detection on social platforms. Researchers from Twitter Inc. BIBREF1 collect bot accounts and analyze their user behavior and user profile features. Lee et al. lee2011seven use so-called social honeypots, which lure social spammers into retweeting, to build a benchmark dataset that has been extensively explored in our paper. Some researchers focus on the clustering of URLs in tweets and the network graph of social spammers BIBREF2, BIBREF3, BIBREF4, BIBREF5, showing the power of social relationship features. As for content information modeling, BIBREF6 apply improved sparse learning methods. However, few studies have adopted topic-based features. Some researchers BIBREF7 discuss topic characteristics of spamming posts, indicating that spammers are highly likely to dwell on certain topics such as promotion. But this may not be applicable to the current scenario of smart spammers. In this paper, we propose an efficient feature extraction method in which two new topic-based features are extracted and used to discriminate human-like spammers from legitimate users. We consider the historical tweets of each user as a document and use the Latent Dirichlet Allocation (LDA) model to compute the topic distribution for each user. Based on the calculated topic probabilities, two topic-based features are extracted: the Local Outlier Standard Score (LOSS), which captures the user's interests in different topics, and the Global Outlier Standard Score (GOSS), which reveals the user's interest in a specific topic in comparison with other users'. The two features contain both local and global information, and their combination can distinguish human-like spammers effectively. To the best of our knowledge, this is the first time that features based on topic distributions are used for spammer classification. Experimental results on one public dataset and one self-collected dataset further validate that the two sets of extracted topic-based features achieve excellent performance on the human-like spammer classification problem compared with other state-of-the-art methods. In addition, we build a Weibo dataset, which contains both legitimate users and spammers. To summarize, our major contributions are two-fold: In the following sections, we first propose the topic-based feature extraction method in Section 2, and then introduce the two datasets in Section 3. Experimental results are discussed in Section 4, and we conclude the paper in Section 5. Future work is presented in Section 6. Methodology In this section, we first provide some observations we obtained after carefully exploring the social network, and then the LDA model is introduced.
Based on the LDA model, the ways to obtain the topic probability vector for each user and the two topic-based features are then provided. Observation After exploring the homepages of a substantial number of spammers, we have two observations. 1) Social spammers can be divided into two categories. One is content polluters, whose tweets are all about certain kinds of advertisements and campaigns. The other is fake accounts, whose tweets resemble legitimate users' but appear to be simply random copies of others to avoid being detected by anti-spam rules. 2) Legitimate users, content polluters and fake accounts show different patterns in the topics which interest them. Legitimate users mainly focus on a limited set of topics which interest them and seldom post content unrelated to their concerns. Content polluters concentrate on certain topics. Fake accounts focus on a wide range of topics due to random copying and retweeting of other users' tweets. Spammers and legitimate users also show different interests in some topics, e.g. commercial, weather, etc. To better illustrate our observation, Figure 1 shows the topic distribution of spammers and legitimate users in the two employed datasets (the Honeypot dataset and the Weibo dataset). We can see that on both topics (topic-3 and topic-11) there exists an obvious difference between the red bars and the green bars, representing spammers and legitimate users. On the Honeypot dataset, spammers have a narrower distribution (the outliers on the red bar tail are not counted) than legitimate users. This is because there are more content polluters than fake accounts; in other words, spammers in this dataset tend to concentrate on limited topics. On the Weibo dataset, fake accounts, who are interested in diverse topics, make up a large proportion of spammers, and their distribution is flatter (i.e. the red bars) than that of the legitimate users. Therefore, we can detect spammers by means of the differences in their topic distribution patterns. LDA model Blei et al. blei2003latent first presented Latent Dirichlet Allocation (LDA) as an example of a topic model. Each document $i$ is treated as a bag of words $W=\left\lbrace w_{i1},w_{i2},...,w_{iM}\right\rbrace $ and $M$ is the number of words. Each word is attributable to one of the document's topics $Z=\left\lbrace z_{i1},z_{i2},...,z_{iK}\right\rbrace $ and $K$ is the number of topics. $\psi _{k}$ is a multinomial distribution over words for topic $k$. $\theta _i$ is another multinomial distribution over topics for document $i$. The smoothed generative model is illustrated in Figure 2. $\alpha $ and $\beta $ are hyperparameters that affect the sparsity of the document-topic and topic-word distributions. In this paper, $\alpha $, $\beta $ and $K$ are empirically set to 0.3, 0.01 and 15. The entire content of each Twitter user is regarded as one document. We adopt Gibbs Sampling BIBREF8 to speed up the inference of LDA. Based on LDA, we can get the topic probabilities for all users in the employed dataset as $X=\left\lbrace X_{1},X_{2},...,X_{n}\right\rbrace $, where $n$ is the number of users. Each element $X_{i}$ is the topic probability vector for the $i^{th}$ document.
$X_{i}$ is the raw topic probability vector, and our features are developed on top of it. Topic-based Features Using the LDA model, each person in the dataset is associated with a topic probability vector $X_i$. Assume $x_{ik}\in X_{i}$ denotes the likelihood that the $\emph {i}^{th}$ tweet account favors the $\emph {k}^{th}$ topic in the dataset. Our topic-based features can be calculated as below. Global Outlier Standard Score measures the degree to which a user's tweet content is related to a certain topic compared to the other users. Specifically, the "GOSS" score of user $i$ on topic $k$ can be calculated as Eq. (12): $$\centering \begin{array}{ll} \mu \left(x_{k}\right)=\frac{\sum _{i=1}^{n} x_{ik}}{n},\\ GOSS\left(x_{ik}\right)=\frac{x_{ik}-\mu \left(x_k\right)}{\sqrt{\underset{i}{\sum }\left(x_{ik}-\mu \left(x_{k}\right)\right)^{2}}}. \end{array}$$ (Eq. 12) The value of $GOSS\left(x_{ik}\right)$ indicates the degree of interest of this person in the $\emph {k}^{th}$ topic. Specifically, if $GOSS\left(x_{ik}\right)$ > $GOSS\left(x_{jk}\right)$, it means that the $\emph {i}^{th}$ person has more interest in topic $k$ than the $\emph {j}^{th}$ person. If the value $GOSS\left(x_{ik}\right)$ is extremely high or low, the $\emph {i}^{th}$ person shows extreme interest or no interest in topic $k$, which will probably be a distinctive pattern in the following classification. Therefore, the topics liked or disliked by the $\emph {i}^{th}$ person can be manifested by $f_{GOSS}^{i}=[GOSS(x_{i1})\cdots GOSS(x_{iK})]$, from which the pattern of the topics of interest to this person is found. We denote $f_{GOSS}^{i}$ as our first topic-based feature, and it hopefully can achieve good performance on spammer detection. Local Outlier Standard Score measures the degree of interest someone shows in a certain topic by considering his own homepage content only. For instance, the "LOSS" score of account $i$ on topic $k$ can be calculated as Eq. (13): $$\centering \begin{array}{ll} \mu \left(x_{i}\right)=\frac{\sum _{k=1}^{K} x_{ik}}{K},\\ LOSS\left(x_{ik}\right)=\frac{x_{ik}-\mu \left(x_i\right)}{\sqrt{\underset{k}{\sum }\left(x_{ik}-\mu \left(x_{i}\right)\right)^{2}}}. \end{array}$$ (Eq. 13) $\mu (x_i)$ represents the averaged degree of interest over all topics with regard to the $\emph {i}^{th}$ user and his tweet content. Similarly to $GOSS$, the topics liked or disliked by the $\emph {i}^{th}$ person, considering his own posts only, can be manifested by $f_{LOSS}^{i}=[LOSS(x_{i1})\cdots LOSS(x_{iK})]$, and $LOSS$ becomes our second topic-based feature for the $\emph {i}^{th}$ person. Dataset We use one public dataset, the Social Honeypot dataset, and one self-collected dataset, the Weibo dataset, to validate the effectiveness of our proposed features. Social Honeypot Dataset: Lee et al. lee2010devils created and deployed 60 seed social accounts on Twitter to attract spammers by reporting back which accounts interacted with them. They collected 19,276 legitimate users and 22,223 spammers in their dataset, along with their tweet content, over 7 months. This is our first test dataset. Our Weibo Dataset: Sina Weibo is one of the most famous social platforms in China. It has implemented many features from Twitter. The 2197 legitimate user accounts in this dataset are provided by the Tianchi Competition held by Sina Weibo. The spammers are all purchased commercially from multiple vendors on the Internet. We checked them manually and collected 802 suitable "smart" spammer accounts.
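Since Eq. (12) and Eq. (13) are column- and row-wise standardizations of the topic-probability matrix, they translate directly into a few lines of numpy. The sketch below is an illustrative transcription of the two equations, not the authors' code; the small epsilon is added only for numerical safety.

```python
import numpy as np

def goss_loss_features(X, eps=1e-12):
    """X: (n_users, K) matrix of LDA topic probabilities, X[i, k] = x_ik.

    Returns GOSS and LOSS feature matrices of the same shape, following
    Eq. (12) and Eq. (13): each entry is centred and divided by the root of
    the summed squared deviations.
    """
    # GOSS: centre each topic (column) over all users.
    mu_k = X.mean(axis=0, keepdims=True)                       # (1, K)
    dev_k = X - mu_k
    goss = dev_k / (np.sqrt((dev_k ** 2).sum(axis=0, keepdims=True)) + eps)

    # LOSS: centre each user's (row) distribution over topics.
    mu_i = X.mean(axis=1, keepdims=True)                       # (n, 1)
    dev_i = X - mu_i
    loss = dev_i / (np.sqrt((dev_i ** 2).sum(axis=1, keepdims=True)) + eps)

    return goss, loss
```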
Preprocessing: Before directly performing the experiments on the employed datasets, we first delete accounts with few posts from the two employed datasets, since the number of tweets is highly indicative of spammers. For the English Honeypot dataset, we remove stopwords, punctuation and non-ASCII words, and apply stemming. For the Chinese Weibo dataset, we perform segmentation with "Jieba", a Chinese text segmentation tool. After these preprocessing steps, the Weibo dataset contains 2197 legitimate users and 802 spammers, and the Honeypot dataset contains 2218 legitimate users and 2947 spammers. It is worth mentioning that the Honeypot dataset has been reduced substantially because most of the Twitter accounts only have a limited number of posts, which are not enough to reveal their interest inclinations. Evaluation Metrics The evaluation indicators used in our model are shown in Table 2. We calculate precision, recall and F1-score as in Eq. (19). Precision is the ratio of selected accounts that are spammers. Recall is the ratio of spammers that are detected. F1-score is the harmonic mean of precision and recall. $$\text{precision} =\frac{TP}{TP+FP},\quad \text{recall} =\frac{TP}{TP+FN},\quad \text{F1-score} = \frac{2\times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}}$$ (Eq. 19) Performance Comparisons with Baseline Three baseline classification methods, namely Support Vector Machines (SVM), Adaboost, and Random Forests, are adopted to evaluate our extracted features. We test each classification algorithm with scikit-learn BIBREF9 and run 10-fold cross-validation. On each dataset, the employed classifiers are trained with each individual feature first, and then with the combination of the two features. From Table 1, we can see that GOSS+LOSS achieves the best F1-score among all feature sets. Moreover, combining LOSS and GOSS increases accuracy by more than 3% compared with the raw topic distribution probabilities. Comparison with Other Features To compare our extracted features with previously used features for spammer detection, we use the three most discriminative feature sets according to Lee et al. lee2011seven (Table 4). Two classifiers (Adaboost and SVM) are selected to conduct feature performance comparisons. Using Adaboost, our LOSS+GOSS features outperform all other features except for UFN, which is 2% higher than ours in precision on the Honeypot dataset. This is caused by incorrectly classified spammers, most of which turn out to be news sources after our manual check. They keep posting all kinds of news pieces covering diverse topics, which is similar to the behavior of fake accounts. However, UFN, which is based on friendship networks, is more useful for public accounts that possess a large number of followers. The best recall value of our LOSS+GOSS features using SVM is up to 6% higher than the results of the other feature groups. Regarding F1-score, our features outperform all other features. To further show the advantages of our proposed features, we compare our combined LOSS+GOSS with the combination of all the features from Lee et al. lee2011seven (UFN+UC+UH). It is obvious that LOSS+GOSS has a clear advantage over UFN+UC+UH in terms of recall and F1-score. Moreover, by combining our LOSS+GOSS features with the UFN+UC+UH features, we obtain another 7.1% and 2.3% performance gain in precision and F1-score with Adaboost, though there is a slight decline in recall. With SVM, we get comparable results on recall and F1-score but about a 3.5% improvement in precision.
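The evaluation protocol just described (scikit-learn classifiers with 10-fold cross-validation, reporting precision, recall and F1-score) can be sketched as follows. The classifier settings are left at their library defaults as an assumption, since the excerpt does not specify hyperparameters.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_validate

def evaluate_features(features, labels):
    """features: e.g. np.hstack([goss, loss]); labels: 1 = spammer, 0 = legitimate."""
    classifiers = {
        "SVM": SVC(),
        "Adaboost": AdaBoostClassifier(),
        "RandomForest": RandomForestClassifier(),
    }
    scoring = ["precision", "recall", "f1"]
    results = {}
    for name, clf in classifiers.items():
        scores = cross_validate(clf, features, labels, cv=10, scoring=scoring)
        results[name] = {m: float(np.mean(scores["test_" + m])) for m in scoring}
    return results
```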
Conclusion In this paper, we propose a novel feature extraction method to effectively detect "smart" spammers who post seemingly legitimate tweets and are thus difficult to identify with existing spammer classification methods. Using the LDA model, we obtain the topic probability for each Twitter user. By utilizing the topic probability result, we extract our two topic-based features, GOSS and LOSS, which represent an account with global and local information respectively. Experimental results on a public dataset and a self-built Chinese microblog dataset validate the effectiveness of the proposed features. Future Work In future work, the combination method of local and global information can be further improved to maximize their individual strengths. We will also apply decision theory to enhance the performance of our proposed features. Moreover, we are building larger datasets on both Twitter and Weibo to further validate our method.
LDA is an unsupervised method; is this paper introducing an unsupervised approach to spam detection?
No
2,239
qasper
3k
Introduction Accurate language identification (LID) is the first step in many natural language processing and machine comprehension pipelines. If the language of a piece of text is known, then appropriate downstream models such as part-of-speech taggers and language models can be applied as required. LID is also an important step in harvesting scarce language resources. Harvested data can be used to bootstrap more accurate LID models and, in doing so, continually improve the quality of the harvested data. Availability of data is still one of the big roadblocks for applying data-driven approaches like supervised machine learning in developing countries. South Africa's 11 official languages have led to initiatives (discussed in the next section) that have had a positive effect on the availability of language resources for research. However, many of the South African languages are still under-resourced from the point of view of building data-driven models for machine comprehension and process automation. Table TABREF2 shows the percentages of first language speakers for each of the official languages of South Africa. These are four conjunctively written Nguni languages (zul, xho, nbl, ssw), Afrikaans (afr) and English (eng), three disjunctively written Sotho languages (nso, sot, tsn), as well as tshiVenda (ven) and Xitsonga (tso). The Nguni languages are similar to each other and harder to distinguish. The same is true of the Sotho languages. This paper presents a hierarchical naive Bayesian and lexicon-based classifier for LID of short pieces of text 15-20 characters long. The algorithm is evaluated against recent approaches using existing test sets from previous works on South African languages as well as the Discriminating between Similar Languages (DSL) 2015 and 2017 shared tasks. Section SECREF2 reviews existing works on the topic and summarises the remaining research problems. Section SECREF3 of the paper discusses the proposed algorithm and Section SECREF4 presents comparative results. Related Works The focus of this section is on recently published datasets and LID research applicable to the South African context. An in-depth survey of algorithms, features, datasets, shared tasks and evaluation methods may be found in BIBREF0. The datasets for the DSL 2015 & DSL 2017 shared tasks BIBREF1 are often used in LID benchmarks and are also available on Kaggle. The DSL datasets, like other LID datasets, consist of text sentences labelled by language. The 2017 dataset, for example, contains 14 languages over 6 language groups with 18000 training samples and 1000 testing samples per language. The recently published JW300 parallel corpus BIBREF2 covers over 300 languages with around 100 thousand parallel sentences per language pair on average. In South Africa, a multilingual corpus of academic texts produced by university students with different mother tongues is being developed BIBREF3. The WiLI-2018 benchmark dataset BIBREF4 for monolingual written natural language identification includes around 1000 paragraphs for each of 235 languages. A possibly useful link can also be made BIBREF5 between Native Language Identification (NLI) (determining the native language of the author of a text) and Language Variety Identification (LVI) (classification of different varieties of a single language), which opens up more datasets. The Leipzig Corpora Collection BIBREF6, the Universal Declaration of Human Rights and Tatoeba are also often-used sources of data.
The NCHLT text corpora BIBREF7 are likely a good starting point for a shared LID task dataset for the South African languages BIBREF8. The NCHLT text corpora contain enough data to have 3500 training samples and 600 testing samples of 300+ character sentences per language. Researchers have recently started applying existing algorithms for tasks like neural machine translation in earnest to such South African language datasets BIBREF9. Existing NLP datasets, models and services BIBREF10 are available for South African languages. These include an LID algorithm BIBREF11 that uses a character-level n-gram language model. Multiple papers have shown that 'shallow' naive Bayes classifiers BIBREF12, BIBREF8, BIBREF13, BIBREF14, SVMs BIBREF15 and similar models work very well for LID. The DSL 2017 paper BIBREF1, for example, gives an overview of the solutions of all of the teams that competed in the shared task; the winning approach BIBREF16 used an SVM with character n-gram features, part-of-speech tag features and some other engineered features. The winning approach for DSL 2015 used an ensemble naive Bayes classifier. The fasttext classifier BIBREF17 is perhaps one of the best known efficient 'shallow' text classifiers that have been used for LID. Multiple papers have proposed hierarchical stacked classifiers (including lexicons) that, for example, first classify a piece of text by language group and then by exact language BIBREF18, BIBREF19, BIBREF8, BIBREF0. Some work has also been done on classifying surnames among Tshivenda, Xitsonga and Sepedi BIBREF20. Additionally, data augmentation BIBREF21 and adversarial training BIBREF22 approaches are potentially very useful to reduce the requirement for data. Researchers have investigated deeper LID models like bidirectional recurrent neural networks BIBREF23 or ensembles of recurrent neural networks BIBREF24. The latter is reported to achieve 95.12% in the DSL 2015 shared task. In these models, text features can include character and word n-grams as well as informative character and word-level features learnt BIBREF25 from the training data. The neural methods seem to work well in tasks where more training data is available. In summary, LID of short texts, informal styles and similar languages remains a difficult problem which is actively being researched. In general, increased confusion can be expected for shorter pieces of text and for languages that are more closely related. Shallow methods still seem to work well compared to deeper models for LID. Other remaining research opportunities seem to be data harvesting, building standardised datasets and creating shared tasks for South Africa and Africa. Support for language codes that include more languages seems to be growing, and discoverability of research is improving with more survey papers coming out. Paywalls also seem to no longer be a problem; the references used in this paper were either openly published or available as preprints. Methodology The proposed LID algorithm builds on the work in BIBREF8 and BIBREF26. We apply a naive Bayesian classifier with character (2, 4 & 6)-gram, word unigram and word bigram features, combined with a hierarchical lexicon-based classifier. The naive Bayesian classifier is trained to predict the specific language label of a piece of text, but it is used first to classify text as belonging to either the Nguni family, the Sotho family, English, Afrikaans, Xitsonga or Tshivenda.
The scikit-learn multinomial naive Bayes classifier is used for the implementation, with an alpha smoothing value of 0.01 and hashed text features. The lexicon-based classifier is then used to predict the specific language within a language group. For the South African languages, this is done for the Nguni and Sotho groups. If the lexicon prediction of the specific language has high confidence, then its result is used as the final label; otherwise, the naive Bayesian classifier's specific-language prediction is used as the final result. The lexicon is built over all the data and therefore includes the vocabulary from both the training and testing sets. The lexicon-based classifier is designed to trade higher precision for lower recall. The proposed implementation is considered confident if the number of words from the winning language is at least one more than the number of words considered to be from the language scored in second place. The stacked classifier is tested against three public LID implementations BIBREF17, BIBREF23, BIBREF8. The LID implementation described in BIBREF17 is available on GitHub and is trained and tested according to a post on the fasttext blog. Character (5-6)-gram features with 16-dimensional vectors worked the best. The implementation discussed in BIBREF23 is available from https://github.com/tomkocmi/LanideNN. Following the instructions for an OSX pip install of an old r0.8 release of TensorFlow, the LanideNN code could be executed in Python 3.7.4. Settings were left at their defaults and a learning rate of 0.001 was used, followed by a refinement with a learning rate of 0.0001. Only one code modification was applied, to return the results from a method that previously just printed to screen. The LID algorithm described in BIBREF8 is also available on GitHub. The stacked classifier is also tested against the results reported for four other algorithms BIBREF16, BIBREF26, BIBREF24, BIBREF15. All the comparisons are done using the NCHLT BIBREF7, DSL 2015 BIBREF19 and DSL 2017 BIBREF1 datasets discussed in Section SECREF2. Results and Analysis The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label. Classifying text only by language group or family is a much easier task, as reported in BIBREF8. Different variations of the proposed classifier were evaluated: a single NB classifier (NB), a stack of two NB classifiers (NB+NB), a stack of an NB classifier and a lexicon (NB+Lex), and a lexicon (Lex) by itself. A lexicon with a 50% training token dropout is also listed to show the impact of the lexicon support on the accuracy. From the results, it seems that the DSL 2017 task might be harder than the DSL 2015 and NCHLT tasks. Also, the results for the implementation discussed in BIBREF23 might seem low, but the results reported in that paper are generated on longer pieces of text, so lower scores on the shorter pieces of text derived from the NCHLT corpora are expected. The accuracy of the proposed algorithm seems to be dependent on the support of the lexicon. Without a good lexicon, a non-stacked naive Bayesian classifier might even perform better. The execution performance of some of the LID implementations is shown in Table TABREF10. Results were generated on an early 2015 13-inch Retina MacBook Pro with a 2.9 GHz CPU (Turbo Boosted to 3.4 GHz) and 8GB RAM. The C++ implementation in BIBREF17 is the fastest.
The implementation in BIBREF8 makes use of un-hashed feature representations, which causes it to be slower than the proposed sklearn implementation. The execution performance of BIBREF23 might improve by a factor of five to ten when executed on a GPU. Conclusion LID of short texts, informal styles and similar languages remains a difficult problem which is actively being researched. The proposed algorithm was evaluated on three existing datasets and compared to three public LID implementations as well as to the reported results of four other algorithms. It performed well relative to the other methods, beating their results. However, its performance is dependent on the support of the lexicon. We would like to investigate the value of a lexicon in a production system and how to possibly maintain it using self-supervised learning. We are investigating the application of deeper language models, some of which have been used in more recent DSL shared tasks. We would also like to investigate data augmentation strategies to reduce the amount of training data that is required. Further research opportunities include data harvesting, building standardised datasets and creating shared tasks for South Africa as well as the rest of Africa. In general, the support for language codes that include more languages seems to be growing, discoverability of research is improving, and paywalls seem to no longer be a big problem in getting access to published research.
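To make the stacked decision rule from the methodology section above concrete, the sketch below combines a hashed character n-gram multinomial naive Bayes model with a simple lexicon vote. The n-gram range, the omission of the word-level features, and the `lexicons` structure (a dict mapping language code to a set of known word types) are simplifying assumptions rather than the authors' exact implementation.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# The pipeline must be fitted first on training texts with specific-language
# labels, e.g. nb.fit(train_texts, train_labels).
nb = make_pipeline(
    HashingVectorizer(analyzer="char", ngram_range=(2, 6), alternate_sign=False),
    MultinomialNB(alpha=0.01),
)

GROUPS = {"zul": "nguni", "xho": "nguni", "nbl": "nguni", "ssw": "nguni",
          "nso": "sotho", "sot": "sotho", "tsn": "sotho",
          "afr": "afr", "eng": "eng", "tso": "tso", "ven": "ven"}

def lexicon_vote(text, lexicons, candidates):
    counts = {lang: sum(w in lexicons[lang] for w in text.split())
              for lang in candidates}
    ranked = sorted(counts, key=counts.get, reverse=True)
    best, second = ranked[0], ranked[1]
    # Confident only if the winner has at least one more matching word
    # than the language in second place.
    return best, counts[best] >= counts[second] + 1

def classify(text, lexicons):
    specific = nb.predict([text])[0]     # specific-language prediction
    group = GROUPS[specific]             # ...used first only for its group
    if group in ("nguni", "sotho"):
        members = [l for l, g in GROUPS.items() if g == group]
        lex_lang, confident = lexicon_vote(text, lexicons, members)
        return lex_lang if confident else specific
    return specific
```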
Which languages are similar to each other?
Nguni languages (zul, xho, nbl, ssw), Sotho languages (nso, sot, tsn)
1,877
qasper
3k
Introduction Suppose a user wants to write a sentence “I will be 10 minutes late.” Ideally, she would type just a few keywords such as “10 minutes late” and an autocomplete system would be able to infer the intended sentence (Figure FIGREF1). Existing left-to-right autocomplete systems BIBREF0, BIBREF1 can often be inefficient, as the prefix of a sentence (e.g. “I will be”) fails to capture the core meaning of the sentence. Besides the practical goal of building a better autocomplete system, we are interested in exploring the tradeoffs inherent to such communication schemes between the efficiency of typing keywords, accuracy of reconstruction, and interpretability of keywords. One approach to learn such schemes is to collect a supervised dataset of keywords-sentence pairs as a training set, but (i) it would be expensive to collect such data from users, and (ii) a static dataset would not capture a real user's natural predilection to adapt to the system BIBREF2. Another approach is to avoid supervision and jointly learn a user-system communication scheme to directly optimize the combination of efficiency and accuracy. However, learning in this way can lead to communication schemes that are uninterpretable to humans BIBREF3, BIBREF4 (see Appendix for additional related work). In this work, we propose a simple, unsupervised approach to an autocomplete system that is efficient, accurate, and interpretable. For interpretability, we restrict keywords to be subsequences of their source sentences based on the intuition that humans can infer most of the original meaning from a few keywords. We then apply multi-objective optimization approaches to directly control and achieve desirable tradeoffs between efficiency and accuracy. We observe that naively optimizing a linear combination of efficiency and accuracy terms is unstable and leads to suboptimal schemes. Thus, we propose a new objective which optimizes for communication efficiency under an accuracy constraint. We show this new objective is more stable and efficient than the linear objective at all accuracy levels. As a proof-of-concept, we build an autocomplete system within this framework which allows a user to write sentences by specifying keywords. We empirically show that our framework produces communication schemes that are 52.16% more accurate than rule-based baselines when specifying 77.37% of sentences, and 11.73% more accurate than a naive, weighted optimization approach when specifying 53.38% of sentences. Finally, we demonstrate that humans can easily adapt to the keyword-based autocomplete system and save nearly 50% of time compared to typing a full sentence in our user study. Approach Consider a communication game in which the goal is for a user to communicate a target sequence $x= (x_1, ..., x_m)$ to a system by passing a sequence of keywords $z= (z_1, ..., z_n)$. The user generates keywords $z$ using an encoding strategy $q_{\alpha }(z\mid x)$, and the system attempts to guess the target sequence $x$ via a decoding strategy $p_{\beta }(x\mid z)$. A good communication scheme $(q_{\alpha }, p_{\beta })$ should be both efficient and accurate. Specifically, we prefer schemes that use fewer keywords (cost), and the target sentence $x$ to be reconstructed with high probability (loss) where Based on our assumption that humans have an intuitive sense of retaining important keywords, we restrict the set of schemes to be a (potentially noncontiguous) subsequence of the target sentence. 
Our hypothesis is that such subsequence schemes naturally ensure interpretability, as efficient human and machine communication schemes are both likely to involve keeping important content words. Approach ::: Modeling with autoencoders. To learn communication schemes without supervision, we model the cooperative communication between a user and the system through an encoder-decoder framework. Concretely, we model the user's encoding strategy $q_{\alpha }(z\mid x)$ with an encoder which encodes the target sentence $x$ into the keywords $z$ by keeping a subset of the tokens. This stochastic encoder $q_{\alpha }(z\mid x)$ is defined by a model which returns the probability of each token being retained in the final subsequence $z$. Then, we sample from Bernoulli distributions according to these probabilities to either keep or drop the tokens independently (see Appendix for an example). We model the autocomplete system's decoding strategy $p_{\beta }(x\mid z)$ as a probabilistic model which conditions on the keywords $z$ and returns a distribution over predictions $x$. We use a standard sequence-to-sequence model with attention and copying for the decoder, but any model architecture can be used (see Appendix for details). Approach ::: Multi-objective optimization. Our goal now is to learn encoder-decoder pairs which optimally balance the communication cost and reconstruction loss. The simplest approach to balancing efficiency and accuracy is to weight $\mathrm {cost}(x, \alpha )$ and $\mathrm {loss}(x, \alpha , \beta )$ linearly using a weight $\lambda $ as follows, where the expectation is taken over the population distribution of source sentences $x$, which is omitted to simplify notation. However, we observe that naively weighting and searching over $\lambda $ is suboptimal and highly unstable—even slight changes to the weighting result in degenerate schemes which keep all or none of the tokens. This instability motivates us to develop a new stable objective. Our main technical contribution is to draw inspiration from the multi-objective optimization literature and view the tradeoff as a sequence of constrained optimization problems, where we minimize the expected cost subject to varying expected reconstruction error constraints $\epsilon $, This greatly improves the stability of the training procedure. We empirically observe that the model initially keeps most of the tokens to meet the constraints, and slowly learns to drop uninformative words from the keywords to minimize the cost. Furthermore, $\epsilon $ in Eq (DISPLAY_FORM6) allows us to directly control the maximum reconstruction error of resulting schemes, whereas $\lambda $ in Eq (DISPLAY_FORM5) is not directly related to any of our desiderata. To optimize the constrained objective, we consider the Lagrangian of Eq (DISPLAY_FORM6), Much like for the objective in Eq (DISPLAY_FORM5), we can compute unbiased gradients by replacing the expectations with their averages over random minibatches. Although gradient descent guarantees convergence on Eq (DISPLAY_FORM7) only when the objective is convex, we find that not only is the optimization stable, but the resulting solution also achieves better performance than the weighting approach in Eq (DISPLAY_FORM5). Approach ::: Optimization. Optimization with respect to $q_{\alpha }(z\mid x)$ is challenging as $z$ is discrete, and thus, we cannot differentiate $\alpha $ through $z$ via the chain rule.
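As a small illustration of the stochastic encoder described above, the sketch below samples a keyword subsequence from per-token keep probabilities; the function and variable names are illustrative and not taken from the paper's code.

```python
import numpy as np

def sample_keywords(tokens, keep_probs, rng=np.random.default_rng()):
    """Sample a keyword subsequence z from a target sentence x.

    tokens     : list of tokens in the target sentence x
    keep_probs : per-token keep probabilities produced by the encoder q_alpha
    Returns the (possibly noncontiguous) retained subsequence and the binary
    keep mask, whose mean is the retention rate (the communication cost).
    """
    keep = rng.random(len(tokens)) < np.asarray(keep_probs)
    keywords = [t for t, k in zip(tokens, keep) if k]
    return keywords, keep
```

The sampled binary mask is what makes $z$ discrete and blocks the chain rule mentioned above.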
Because of this, we use the stochastic REINFORCE estimate BIBREF5 as follows: We perform joint updates on $(\alpha , \beta , \lambda )$, where $\beta $ and $\lambda $ are updated via standard gradient computations, while $\alpha $ uses an unbiased, stochastic gradient estimate where we approximate the expectation in Eq (DISPLAY_FORM9). We use a single sample from $q_{\alpha }(z\mid x)$ and a moving average of rewards as a baseline to reduce variance. Experiments We evaluate our approach by training an autocomplete system on 500K randomly sampled sentences from Yelp reviews BIBREF6 (see Appendix for details). We quantify the efficiency of a communication scheme $(q_{\alpha },p_{\beta })$ by the retention rate of tokens, which is measured as the fraction of tokens that are kept in the keywords. The accuracy of a scheme is measured as the fraction of greedily decoded sentences that exactly match the target sentence. Experiments ::: Effectiveness of constrained objective. We first show that the linear objective in Eq (DISPLAY_FORM5) is suboptimal compared to the constrained objective in Eq (DISPLAY_FORM6). Figure FIGREF10 compares the achievable accuracy and efficiency tradeoffs for the two objectives, which shows that the constrained objective results in more efficient schemes than the linear objective at every accuracy level (e.g. 11.73% more accurate at a 53.38% retention rate). We also observe that the linear objective is highly unstable as a function of the tradeoff parameter $\lambda $ and requires careful tuning. Even slight changes to $\lambda $ result in degenerate schemes that keep all or none of the tokens (e.g. $\lambda \le 4.2$ and $\lambda \ge 4.4$). On the other hand, the constrained objective is substantially more stable as a function of $\epsilon $ (e.g. points for $\epsilon $ are more evenly spaced than those for $\lambda $). Experiments ::: Efficiency-accuracy tradeoff. We quantify the efficiency-accuracy tradeoff compared to two rule-based baselines: Unif and Stopword. The Unif encoder randomly keeps tokens with probability $\delta $ to generate keywords. The Stopword encoder keeps all tokens but drops stop words (e.g. `the', `a', `or') all the time ($\delta =0$) or half of the time ($\delta =0.5$). The corresponding decoders for these encoders are optimized using gradient descent to minimize the reconstruction error (i.e. $\mathrm {loss}(x, \alpha , \beta )$). Figure FIGREF10 shows that the two baselines achieve similar tradeoff curves, while the constrained model achieves a substantial 52.16% improvement in accuracy at a 77.37% retention rate compared to Unif, thereby showing the benefits of jointly training the encoder and decoder. Experiments ::: Robustness and analysis. We provide additional experimental results on the robustness of learned communication schemes as well as an in-depth analysis of the correlation between the retention rates of tokens and their properties, which we defer to the Appendix for space. Experiments ::: User study. We recruited 100 crowdworkers on Amazon Mechanical Turk (AMT) and measured completion times and accuracies for typing randomly sampled sentences from the Yelp corpus. Each user was shown alternating autocomplete and writing tasks across 50 sentences (see Appendix for user interface). For the autocomplete task, we gave users a target sentence and asked them to type a set of keywords into the system.
The users were shown the top three suggestions from the autocomplete system, and were asked to mark whether each of these three suggestions was semantically equivalent to the target sentence. For the writing task, we gave users a target sentence and asked them to either type the sentence verbatim or a sentence that preserves the meaning of the target sentence. Table TABREF13 shows two examples of the autocomplete task and actual user-provided keywords. Each column contains a set of keywords and its corresponding top three suggestions generated by the autocomplete system with beam search. We observe that the system is likely to propose generic sentences for under-specified keywords (left column) and almost the same sentences for over-specified keywords (right column). For properly specified keywords (middle column), the system completes sentences accordingly by adding a verb, adverb, adjective, preposition, capitalization, and punctuation. Overall, the autocomplete system achieved high accuracy in reconstructing the keywords. Users marked the top suggestion from the autocomplete system to be semantically equivalent to the target $80.6$% of the time, and one of the top 3 was semantically equivalent $90.11$% of the time. The model also achieved a high exact match accuracy of 18.39%. Furthermore, the system was efficient, as users spent $3.86$ seconds typing keywords compared to $5.76$ seconds for full sentences on average. The variance of the typing time was $0.08$ second for keywords and $0.12$ second for full sentences, indicating that choosing and typing keywords for the system did not incur much overhead. Experiments ::: Acknowledgments We thank the reviewers and Yunseok Jang for their insightful comments. This work was supported by NSF CAREER Award IIS-1552635 and an Intuit Research Award. Experiments ::: Reproducibility All code, data and experiments are available on CodaLab at https://bit.ly/353fbyn.
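To make the training procedure above concrete, the following is a minimal PyTorch sketch of the Bernoulli keep/drop encoder and a single REINFORCE-style update of the Lagrangian in Eq (3). It is a sketch under stated assumptions: the toy BiLSTM keep-probability network, the retention-rate cost, the step sizes, and the placeholder `decoder_nll_fn` (standing in for the sequence-to-sequence decoder's negative log-likelihood) are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn

class KeywordEncoder(nn.Module):
    """Toy stand-in for q_alpha(z|x): returns a keep probability for every token."""
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.keep = nn.Linear(2 * dim, 1)

    def forward(self, x):                          # x: (batch, seq_len) token ids
        h, _ = self.rnn(self.emb(x))
        return torch.sigmoid(self.keep(h)).squeeze(-1)

def reinforce_step(encoder, decoder_nll_fn, x, lam, eps, opt, baseline):
    """One joint update of (alpha, beta, lambda) for E[cost] + lam * (E[loss] - eps).

    lam is a scalar tensor without gradient tracking; decoder_nll_fn(x, mask) is a
    placeholder that returns -log p_beta(x|z) per example, differentiable w.r.t. beta.
    """
    probs = encoder(x)                             # per-token keep probabilities
    mask = torch.bernoulli(probs)                  # sample z: keep/drop tokens independently
    log_q = (mask * torch.log(probs + 1e-8)
             + (1 - mask) * torch.log(1 - probs + 1e-8)).sum(-1)

    cost = mask.mean(dim=-1)                       # retention rate as the communication cost
    loss = decoder_nll_fn(x, mask)                 # reconstruction loss per example

    reward = (cost + lam * loss).detach()          # per-example Lagrangian value
    encoder_obj = ((reward - baseline) * log_q).mean()   # REINFORCE surrogate for alpha
    decoder_obj = loss.mean()                      # standard gradient for beta

    opt.zero_grad()
    (encoder_obj + decoder_obj).backward()
    opt.step()

    with torch.no_grad():                          # dual ascent on lambda, kept non-negative
        lam += 1e-3 * (loss.mean() - eps)
        lam.clamp_(min=0.0)

    return 0.9 * baseline + 0.1 * reward.mean().item()   # moving-average baseline
```

In this sketch the dual variable $\lambda$ is updated by gradient ascent on the constraint violation, mirroring the joint updates on $(\alpha, \beta, \lambda)$ described above.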
How are models evaluated in this human-machine communication game?
by training an autocomplete system on 500K randomly sampled sentences from Yelp reviews
1,873
qasper
3k
Introduction Performance appraisal (PA) is an important HR process, particularly for modern organizations that crucially depend on the skills and expertise of their workforce. The PA process enables an organization to periodically measure and evaluate every employee's performance. It also provides a mechanism to link the goals established by the organization to its each employee's day-to-day activities and performance. Design and analysis of PA processes is a lively area of research within the HR community BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . The PA process in any modern organization is nowadays implemented and tracked through an IT system (the PA system) that records the interactions that happen in various steps. Availability of this data in a computer-readable database opens up opportunities to analyze it using automated statistical, data-mining and text-mining techniques, to generate novel and actionable insights / patterns and to help in improving the quality and effectiveness of the PA process BIBREF4 , BIBREF5 , BIBREF6 . Automated analysis of large-scale PA data is now facilitated by technological and algorithmic advances, and is becoming essential for large organizations containing thousands of geographically distributed employees handling a wide variety of roles and tasks. A typical PA process involves purposeful multi-step multi-modal communication between employees, their supervisors and their peers. In most PA processes, the communication includes the following steps: (i) in self-appraisal, an employee records his/her achievements, activities, tasks handled etc.; (ii) in supervisor assessment, the supervisor provides the criticism, evaluation and suggestions for improvement of performance etc.; and (iii) in peer feedback (aka INLINEFORM0 view), the peers of the employee provide their feedback. There are several business questions that managers are interested in. Examples: In this paper, we develop text mining techniques that can automatically produce answers to these questions. Since the intended users are HR executives, ideally, the techniques should work with minimum training data and experimentation with parameter setting. These techniques have been implemented and are being used in a PA system in a large multi-national IT company. The rest of the paper is organized as follows. Section SECREF2 summarizes related work. Section SECREF3 summarizes the PA dataset used in this paper. Section SECREF4 applies sentence classification algorithms to automatically discover three important classes of sentences in the PA corpus viz., sentences that discuss strengths, weaknesses of employees and contain suggestions for improving her performance. Section SECREF5 considers the problem of mapping the actual targets mentioned in strengths, weaknesses and suggestions to a fixed set of attributes. In Section SECREF6 , we discuss how the feedback from peers for a particular employee can be summarized. In Section SECREF7 we draw conclusions and identify some further work. Related Work We first review some work related to sentence classification. Semantically classifying sentences (based on the sentence's purpose) is a much harder task, and is gaining increasing attention from linguists and NLP researchers. McKnight and Srinivasan BIBREF7 and Yamamoto and Takagi BIBREF8 used SVM to classify sentences in biomedical abstracts into classes such as INTRODUCTION, BACKGROUND, PURPOSE, METHOD, RESULT, CONCLUSION. Cohen et al. 
BIBREF9 applied SVM and other techniques to learn classifiers for sentences in emails into classes, which are speech acts defined by a verb-noun pair, with verbs such as request, propose, amend, commit, deliver and nouns such as meeting, document, committee; see also BIBREF10 . Khoo et al. BIBREF11 uses various classifiers to classify sentences in emails into classes such as APOLOGY, INSTRUCTION, QUESTION, REQUEST, SALUTATION, STATEMENT, SUGGESTION, THANKING etc. Qadir and Riloff BIBREF12 proposes several filters and classifiers to classify sentences on message boards (community QA systems) into 4 speech acts: COMMISSIVE (speaker commits to a future action), DIRECTIVE (speaker expects listener to take some action), EXPRESSIVE (speaker expresses his or her psychological state to the listener), REPRESENTATIVE (represents the speaker's belief of something). Hachey and Grover BIBREF13 used SVM and maximum entropy classifiers to classify sentences in legal documents into classes such as FACT, PROCEEDINGS, BACKGROUND, FRAMING, DISPOSAL; see also BIBREF14 . Deshpande et al. BIBREF15 proposes unsupervised linguistic patterns to classify sentences into classes SUGGESTION, COMPLAINT. There is much work on a closely related problem viz., classifying sentences in dialogues through dialogue-specific categories called dialogue acts BIBREF16 , which we will not review here. Just as one example, Cotterill BIBREF17 classifies questions in emails into the dialogue acts of YES_NO_QUESTION, WH_QUESTION, ACTION_REQUEST, RHETORICAL, MULTIPLE_CHOICE etc. We could not find much work related to mining of performance appraisals data. Pawar et al. BIBREF18 uses kernel-based classification to classify sentences in both performance appraisal text and product reviews into classes SUGGESTION, APPRECIATION, COMPLAINT. Apte et al. BIBREF6 provides two algorithms for matching the descriptions of goals or tasks assigned to employees to a standard template of model goals. One algorithm is based on the co-training framework and uses goal descriptions and self-appraisal comments as two separate perspectives. The second approach uses semantic similarity under a weak supervision framework. Ramrakhiyani et al. BIBREF5 proposes label propagation algorithms to discover aspects in supervisor assessments in performance appraisals, where an aspect is modelled as a verb-noun pair (e.g. conduct training, improve coding). Dataset In this paper, we used the supervisor assessment and peer feedback text produced during the performance appraisal of 4528 employees in a large multi-national IT company. The corpus of supervisor assessment has 26972 sentences. The summary statistics about the number of words in a sentence is: min:4 max:217 average:15.5 STDEV:9.2 Q1:9 Q2:14 Q3:19. Sentence Classification The PA corpus contains several classes of sentences that are of interest. In this paper, we focus on three important classes of sentences viz., sentences that discuss strengths (class STRENGTH), weaknesses of employees (class WEAKNESS) and suggestions for improving her performance (class SUGGESTION). The strengths or weaknesses are mostly about the performance in work carried out, but sometimes they can be about the working style or other personal qualities. The classes WEAKNESS and SUGGESTION are somewhat overlapping; e.g., a suggestion may address a perceived weakness. Following are two example sentences in each class. STRENGTH: WEAKNESS: SUGGESTION: Several linguistic aspects of these classes of sentences are apparent. 
The subject is implicit in many sentences. The strengths are often mentioned as either noun phrases (NP) with positive adjectives (Excellent technology leadership) or positive nouns (engineering strength) or through verbs with positive polarity (dedicated) or as verb phrases containing positive adjectives (delivers innovative solutions). Similarly for weaknesses, where negation is more frequently used (presentations are not his forte), or alternatively, the polarities of verbs (avoid) or adjectives (poor) tend to be negative. However, sometimes the form of both the strengths and weaknesses is the same, typically a stand-alone sentiment-neutral NP, making it difficult to distinguish between them; e.g., adherence to timing or timely closure. Suggestions often have an imperative mood and contain secondary verbs such as need to, should, has to. Suggestions are sometimes expressed using comparatives (better process compliance). We built a simple set of patterns for each of the 3 classes on the POS-tagged form of the sentences. We use each set of these patterns as an unsupervised sentence classifier for that class. If a particular sentence matched with patterns for multiple classes, then we have simple tie-breaking rules for picking the final class. The pattern for the STRENGTH class looks for the presence of positive words / phrases like takes ownership, excellent, hard working, commitment, etc. Similarly, the pattern for the WEAKNESS class looks for the presence of negative words / phrases like lacking, diffident, slow learner, less focused, etc. The SUGGESTION pattern not only looks for keywords like should, needs to but also for POS based pattern like “a verb in the base form (VB) in the beginning of a sentence”. We randomly selected 2000 sentences from the supervisor assessment corpus and manually tagged them (dataset D1). This labelled dataset contained 705, 103, 822 and 370 sentences having the class labels STRENGTH, WEAKNESS, SUGGESTION or OTHER respectively. We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation. The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. Note that the pattern-based approach is unsupervised i.e., it did not use any training data. Hence, the results shown for it are for the entire dataset and not based on cross-validation. Comparison with Sentiment Analyzer We also explored whether a sentiment analyzer can be used as a baseline for identifying the class labels STRENGTH and WEAKNESS. We used an implementation of sentiment analyzer from TextBlob to get a polarity score for each sentence. Table TABREF13 shows the distribution of positive, negative and neutral sentiments across the 3 class labels STRENGTH, WEAKNESS and SUGGESTION. It can be observed that distribution of positive and negative sentiments is almost similar in STRENGTH as well as SUGGESTION sentences, hence we can conclude that the information about sentiments is not much useful for our classification problem. Discovering Clusters within Sentence Classes After identifying sentences in each class, we can now answer question (1) in Section SECREF1 . 
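Before moving on to clustering, here is a minimal sketch of an unsupervised pattern-based classifier in the spirit of the one described above; the cue lists, the imperative-mood check, and the tie-breaking order are illustrative assumptions rather than the authors' actual patterns, which operate on richer POS-tagged phrase patterns.

```python
# Illustrative cue lists only; the paper's actual patterns are richer and POS-based.
STRENGTH_CUES = ["takes ownership", "excellent", "hard working", "commitment", "dedicated"]
WEAKNESS_CUES = ["lacking", "diffident", "slow learner", "less focused", "not his forte"]
SUGGESTION_CUES = ["should", "needs to", "need to", "has to"]

def classify_sentence(sentence, pos_tags):
    """Assign STRENGTH / WEAKNESS / SUGGESTION / OTHER by simple keyword and POS patterns.

    pos_tags is a list of (token, tag) pairs, e.g. from nltk.pos_tag().
    """
    s = sentence.lower()
    matches = set()
    # Imperative mood: the sentence starts with a base-form verb (VB), or contains a modal cue.
    if any(cue in s for cue in SUGGESTION_CUES) or (pos_tags and pos_tags[0][1] == "VB"):
        matches.add("SUGGESTION")
    if any(cue in s for cue in WEAKNESS_CUES):
        matches.add("WEAKNESS")
    if any(cue in s for cue in STRENGTH_CUES):
        matches.add("STRENGTH")
    # Hypothetical tie-breaking order; the paper only states that simple rules are used.
    for label in ("SUGGESTION", "WEAKNESS", "STRENGTH"):
        if label in matches:
            return label
    return "OTHER"

print(classify_sentence("Needs to improve presentation skills", [("Needs", "VBZ")]))  # SUGGESTION
```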
From 12742 sentences predicted to have label STRENGTH, we extract nouns that indicate the actual strength, and cluster them using a simple clustering algorithm which uses the cosine similarity between word embeddings of these nouns. We repeat this for the 9160 sentences with predicted label WEAKNESS or SUGGESTION as a single class. Tables TABREF15 and TABREF16 show a few representative clusters in strengths and in weaknesses, respectively. We also explored clustering 12742 STRENGTH sentences directly using CLUTO BIBREF19 and Carrot2 Lingo BIBREF20 clustering algorithms. Carrot2 Lingo discovered 167 clusters and also assigned labels to these clusters. We then generated 167 clusters using CLUTO as well. CLUTO does not generate cluster labels automatically, hence we used 5 most frequent words within the cluster as its labels. Table TABREF19 shows the largest 5 clusters by both the algorithms. It was observed that the clusters created by CLUTO were more meaningful and informative as compared to those by Carrot2 Lingo. Also, it was observed that there is some correspondence between noun clusters and sentence clusters. E.g. the nouns cluster motivation expertise knowledge talent skill (Table TABREF15 ) corresponds to the CLUTO sentence cluster skill customer management knowledge team (Table TABREF19 ). But overall, users found the nouns clusters to be more meaningful than the sentence clusters. PA along Attributes In many organizations, PA is done from a predefined set of perspectives, which we call attributes. Each attribute covers one specific aspect of the work done by the employees. This has the advantage that we can easily compare the performance of any two employees (or groups of employees) along any given attribute. We can correlate various performance attributes and find dependencies among them. We can also cluster employees in the workforce using their supervisor ratings for each attribute to discover interesting insights into the workforce. The HR managers in the organization considered in this paper have defined 15 attributes (Table TABREF20 ). Each attribute is essentially a work item or work category described at an abstract level. For example, FUNCTIONAL_EXCELLENCE covers any tasks, goals or activities related to the software engineering life-cycle (e.g., requirements analysis, design, coding, testing etc.) as well as technologies such as databases, web services and GUI. In the example in Section SECREF4 , the first sentence (which has class STRENGTH) can be mapped to two attributes: FUNCTIONAL_EXCELLENCE and BUILDING_EFFECTIVE_TEAMS. Similarly, the third sentence (which has class WEAKNESS) can be mapped to the attribute INTERPERSONAL_EFFECTIVENESS and so forth. Thus, in order to answer the second question in Section SECREF1 , we need to map each sentence in each of the 3 classes to zero, one, two or more attributes, which is a multi-class multi-label classification problem. We manually tagged the same 2000 sentences in Dataset D1 with attributes, where each sentence may get 0, 1, 2, etc. up to 15 class labels (this is dataset D2). This labelled dataset contained 749, 206, 289, 207, 91, 223, 191, 144, 103, 80, 82, 42, 29, 15, 24 sentences having the class labels listed in Table TABREF20 in the same order. The number of sentences having 0, 1, 2, or more than 2 attributes are: 321, 1070, 470 and 139 respectively. We trained several multi-class multi-label classifiers on this dataset. Table TABREF21 shows the results of 5-fold cross-validation experiments on dataset D2. 
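A minimal scikit-learn sketch of such a multi-class multi-label setup is shown below. The one-vs-rest wrapper, the logistic regression base classifier, and the word-frequency features mirror the kind of classifiers reported for dataset D2, but the exact pipeline and the toy examples are assumptions for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Toy stand-ins for dataset D2: each sentence carries zero or more of the 15 attribute labels.
sentences = ["Delivers innovative solutions on the database layer",
             "Needs to communicate status to stakeholders more often"]
labels = [["FUNCTIONAL_EXCELLENCE"], ["INTERPERSONAL_EFFECTIVENESS"]]

binarizer = MultiLabelBinarizer()                   # attribute set -> binary indicator columns
Y = binarizer.fit_transform(labels)

model = make_pipeline(
    CountVectorizer(),                              # word-frequency features, as in the paper
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(sentences, Y)

pred = model.predict(["Excellent technology leadership and team building"])
print(binarizer.inverse_transform(pred))            # predicted attribute set (possibly empty)
```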
Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21. Let $Z_i$ be the set of predicted labels and $Y_i$ be the set of actual labels for the $i$-th instance. Precision and recall for this instance are computed as follows:
$$P_i = \frac{|Y_i \cap Z_i|}{|Z_i|}, \qquad R_i = \frac{|Y_i \cap Z_i|}{|Y_i|}.$$
It can be observed that $P_i$ would be undefined if $Z_i$ is empty and, similarly, $R_i$ would be undefined when $Y_i$ is empty. Hence, overall precision and recall are computed by averaging over all the instances except where they are undefined. Instance-level F-measure cannot be computed for instances where either precision or recall is undefined. Therefore, overall F-measure is computed using the overall precision and recall. Summarization of Peer Feedback using ILP The PA system includes a set of peer feedback comments for each employee. To answer the third question in Section SECREF1, we need to create a summary of all the peer feedback comments about a given employee. As an example, following are the feedback comments from 5 peers of an employee. The individual sentences in the comments written by each peer are first identified, and then POS tags are assigned to each sentence. We hypothesize that a good summary of these multiple comments can be constructed by identifying a set of important text fragments or phrases. Initially, a set of candidate phrases is extracted from these comments, and a subset of these candidate phrases is chosen as the final summary using Integer Linear Programming (ILP). The details of the ILP formulation are shown in Table TABREF36. As an example, following is the summary generated for the above 5 peer comments: humble nature, effective communication, technical expertise, always supportive, vast knowledge. The following rules are used to identify candidate phrases: Various parameters are used to evaluate a candidate phrase for its importance. A candidate phrase is more important: A complete list of parameters is described in detail in Table TABREF36. There is a trivial cardinality constraint which makes sure that only $K$ out of $N$ candidate phrases are chosen; a suitable value of $K$ is used for each employee depending on the number of candidate phrases identified across all peers (see Algorithm SECREF6). Another set of constraints makes sure that at least one phrase is selected for each of the leadership attributes. A further constraint makes sure that multiple phrases sharing the same headword are not chosen at a time. Also, single-word candidate phrases are chosen only if they are adjectives or nouns with lexical category noun.attribute; this is imposed by a separate constraint. It is important to note that all the constraints except the cardinality constraint are soft constraints, i.e., there may be feasible solutions which do not satisfy some of these constraints, but each constraint which is not satisfied results in a penalty through the use of slack variables. These constraints are described in detail in Table TABREF36. The objective function maximizes the total importance score of the selected candidate phrases. At the same time, it also minimizes the sum of all slack variables so that the minimum number of constraints are broken.
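A minimal sketch of the phrase-selection ILP using the PuLP library is given below. It encodes only the hard cardinality constraint and the one-phrase-per-headword constraint; the attribute-coverage and lexical constraints with slack variables from Table TABREF36 are omitted, and the importance scores and example phrases are placeholders.

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary, PULP_CBC_CMD

def select_phrases(phrases, scores, headwords, k):
    """Choose k phrases maximizing total importance, with at most one phrase per headword.

    phrases:   candidate phrase strings extracted from the peer comments
    scores:    importance score per phrase (any heuristic, e.g. frequency across peers)
    headwords: headword of each phrase, used to avoid near-duplicate selections
    """
    prob = LpProblem("peer_feedback_summary", LpMaximize)
    x = [LpVariable(f"x_{i}", cat=LpBinary) for i in range(len(phrases))]

    prob += lpSum(scores[i] * x[i] for i in range(len(phrases)))      # objective
    prob += lpSum(x) == k                                             # hard cardinality constraint
    for hw in set(headwords):                                         # one phrase per headword
        prob += lpSum(x[i] for i in range(len(phrases)) if headwords[i] == hw) <= 1

    prob.solve(PULP_CBC_CMD(msg=False))
    return [phrases[i] for i in range(len(phrases)) if x[i].value() == 1]

print(select_phrases(
    ["technical expertise", "vast knowledge", "always supportive", "effective communication"],
    [3.0, 2.5, 2.0, 2.8],
    ["expertise", "knowledge", "supportive", "communication"],
    k=3))
```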
(Algorithm SECREF6: given the total number of candidate phrases $N$ identified across all peers, determine the number of phrases $K$ to select for the summary.) Evaluation of auto-generated summaries We considered a dataset of 100 employees, where for each employee multiple peer comments were recorded. Also, for each employee, a manual summary was generated by an HR professional. The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. To compare the performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter required by all these algorithms is the number of sentences to keep in the final summary. ILP-based summarization requires a similar parameter K, which is automatically decided based on the total number of candidate phrases. Assuming a sentence is equivalent to roughly 3 phrases, for the Sumy algorithms we set the number-of-sentences parameter to the ceiling of K/3. Table TABREF51 shows the average and standard deviation of ROUGE unigram F1 scores for each algorithm over the 100 summaries. The performance of ILP-based summarization is comparable with the other algorithms, as a two-sample t-test does not show a statistically significant difference. Also, human evaluators preferred the phrase-based summaries generated by our approach to the other, sentence-based summaries. Conclusions and Further Work In this paper, we presented an analysis of the text generated in the Performance Appraisal (PA) process in a large multi-national IT company. We performed sentence classification to identify strengths, weaknesses and suggestions for improvements found in the supervisor assessments, and then used clustering to discover broad categories among them. As this is non-topical classification, we found that SVM with the ADWS kernel BIBREF18 produced the best results. We also used multi-class multi-label classification techniques to match supervisor assessments to predefined broad perspectives on performance. The Logistic Regression classifier was observed to produce the best results for this topical classification. Finally, we proposed an ILP-based summarization technique to produce a summary of peer feedback comments for a given employee and compared it with manual summaries. The PA process also generates much structured data, such as supervisor ratings. It is an interesting problem to compare and combine the insights discovered from structured data and unstructured text. Also, we are planning to automatically discover any additional performance attributes beyond the list of 15 attributes currently used by HR.
What evaluation metrics are looked at for classification tasks?
Precision, Recall, F-measure, accuracy
3,044
qasper
3k
Introduction Deep Neural Networks (DNN) have been widely employed in industry for solving various Natural Language Processing (NLP) tasks, such as text classification, sequence labeling, question answering, etc. However, when engineers apply DNN models to address specific NLP tasks, they often face the following challenges. The above challenges often hinder the productivity of engineers, and result in less optimal solutions to their given tasks. This motivates us to develop an NLP toolkit for DNN models, which facilitates engineers to develop DNN approaches. Before designing this NLP toolkit, we conducted a survey among engineers and identified a spectrum of three typical personas. To satisfy the requirements of all the above three personas, the NLP toolkit has to be generic enough to cover as many tasks as possible. At the same time, it also needs to be flexible enough to allow alternative network architectures as well as customized modules. Therefore, we analyzed the NLP jobs submitted to a commercial centralized GPU cluster. Table TABREF11 showed that about 87.5% NLP related jobs belong to a few common tasks, including sentence classification, text matching, sequence labeling, MRC, etc. It further suggested that more than 90% of the networks were composed of several common components, such as embedding, CNN/RNN, Transformer and so on. Based on the above observations, we developed NeuronBlocks, a DNN toolkit for NLP tasks. The basic idea is to provide two layers of support to the engineers. The upper layer targets common NLP tasks. For each task, the toolkit contains several end-to-end network templates, which can be immediately instantiated with simple configuration. The bottom layer consists of a suite of reusable and standard components, which can be adopted as building blocks to construct networks with complex architecture. By following the interface guidelines, users can also contribute to this gallery of components with their own modules. The technical contributions of NeuronBlocks are summarized into the following three aspects. Related Work There are several general-purpose deep learning frameworks, such as TensorFlow, PyTorch and Keras, which have gained popularity in NLP community. These frameworks offer huge flexibility in DNN model design and support various NLP tasks. However, building models under these frameworks requires a large overhead of mastering these framework details. Therefore, higher level abstraction to hide the framework details is favored by many engineers. There are also several popular deep learning toolkits in NLP, including OpenNMT BIBREF0 , AllenNLP BIBREF1 etc. OpenNMT is an open-source toolkit mainly targeting neural machine translation or other natural language generation tasks. AllenNLP provides several pre-built models for NLP tasks, such as semantic role labeling, machine comprehension, textual entailment, etc. Although these toolkits reduce the development cost, they are limited to certain tasks, and thus not flexible enough to support new network architectures or new components. Design The Neuronblocks is built on PyTorch. The overall framework is illustrated in Figure FIGREF16 . It consists of two layers: the Block Zoo and the Model Zoo. In Block Zoo, the most commonly used components of deep neural networks are categorized into several groups according to their functions. Within each category, several alternative components are encapsulated into standard and reusable blocks with a consistent interface. 
These blocks serve as basic and exchangeable units to construct complex network architectures for different NLP tasks. In Model Zoo, the most popular NLP tasks are identified. For each task, several end-to-end network templates are provided in the form of JSON configuration files. Users can simply browse these configurations and choose one or more to instantiate. The whole task can be completed without any coding effort. Block Zoo We recognize the following major functional categories of neural network components. Each category covers as many commonly used modules as possible. The Block Zoo is an open framework, and more modules can be added in the future. Embedding Layer: Word/character embeddings and extra handcrafted feature embeddings such as POS tags are supported. Neural Network Layers: The Block Zoo provides common layers like RNN, CNN, QRNN BIBREF2, Transformer BIBREF3, Highway network, Encoder-Decoder architecture, etc. Furthermore, attention mechanisms are widely used in neural networks. Thus we also support multiple attention layers, such as Linear/Bi-linear Attention, Full Attention BIBREF4, Bidirectional Attention Flow BIBREF5, etc. Meanwhile, regularization layers such as Dropout, Layer Norm, Batch Norm, etc., are also supported for improving generalization ability. Loss Function: Besides the loss functions built into PyTorch, we offer more options such as Focal Loss BIBREF6. Metrics: For classification tasks, AUC, Accuracy, Precision/Recall and F1 metrics are supported. For sequence labeling tasks, F1/Accuracy are supported. For knowledge distillation tasks, MSE/RMSE are supported. For MRC tasks, ExactMatch/F1 are supported. Model Zoo In NeuronBlocks, we identify four types of the most popular NLP tasks. For each task, we provide various end-to-end network templates. Text Classification and Matching: Tasks such as domain/intent classification and question-answer matching are supported. Sequence Labeling: Predict the type of each token in a sequence from a set of predefined types. Common tasks include NER, POS tagging, slot tagging, etc. Knowledge Distillation BIBREF7: Teacher-student based knowledge distillation is one common approach for model compression. NeuronBlocks provides a knowledge distillation template to improve the inference speed of heavy DNN models like BERT/GPT. Extractive Machine Reading Comprehension: Given a pair of question and passage, predict the start and end positions of the answer spans in the passage. User Interface NeuronBlocks provides a convenient user interface for users to build, train, and test DNN models. The details are described in the following. I/O interface: This part defines model input/output, such as training data, pre-trained models/embeddings, model saving path, etc. Model Architecture interface: This is the key part of the configuration file, which defines the whole model architecture. Figure FIGREF19 shows an example of how to specify a model architecture using the blocks in NeuronBlocks. To be more specific, it consists of a list of layers/blocks to construct the architecture, where the blocks are supplied in the gallery of the Block Zoo. Training Parameters interface: In this part, the model optimizer as well as all other training hyper-parameters are indicated.
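For illustration only, the sketch below shows what a config-driven setup of this kind can look like, written as a Python dictionary plus a toy block registry; the field names and block names are assumptions and do not follow NeuronBlocks' actual JSON schema (see Figure FIGREF19 and the toolkit documentation for the real format).

```python
import torch.nn as nn

# Hypothetical, simplified configuration in the spirit of a Model Zoo template.
config = {
    "inputs": {"train": "train.tsv", "valid": "valid.tsv", "pretrained_emb": "glove.txt"},
    "outputs": {"save_dir": "./models/intent_clf"},
    "architecture": [
        {"layer": "Embedding", "dim": 300},
        {"layer": "BiLSTM", "hidden_dim": 256},
        {"layer": "Linear", "in_dim": 512, "num_classes": 10},
    ],
    "training": {"optimizer": "Adam", "lr": 1e-3, "batch_size": 32, "metrics": ["accuracy", "f1"]},
}

# A toy "block zoo": map block names used in the config to module constructors.
BLOCK_ZOO = {
    "Embedding": lambda c: nn.Embedding(30000, c["dim"]),
    "BiLSTM": lambda c: nn.LSTM(300, c["hidden_dim"], batch_first=True, bidirectional=True),
    "Linear": lambda c: nn.Linear(c["in_dim"], c["num_classes"]),
}

blocks = nn.ModuleList(BLOCK_ZOO[spec["layer"]](spec) for spec in config["architecture"])
print(blocks)
```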
Workflow Figure FIGREF34 shows the workflow of building DNN models in NeuronBlocks. Users only need to write a JSON configuration file. They can either instantiate an existing template from Model Zoo, or construct a new architecture based on the blocks from Block Zoo. This configuration file is shared across training, test, and prediction. For model hyper-parameter tuning or architecture modification, users just need to change the JSON configuration file. Advanced users can also contribute novel customized blocks into Block Zoo, as long as they follow the same interface guidelines with the existing blocks. These new blocks can be further shared across all users for model architecture design. Moreover, NeuronBlocks has flexible platform support, such as GPU/CPU, GPU management platforms like PAI. Experiments To verify the performance of NeuronBlocks, we conducted extensive experiments for common NLP tasks on public data sets including CoNLL-2003 BIBREF14 , GLUE benchmark BIBREF13 , and WikiQA corpus BIBREF15 . The experimental results showed that the models built with NeuronBlocks can achieve reliable and competitive results on various tasks, with productivity greatly improved. Sequence Labeling For sequence labeling task, we evaluated NeuronBlocks on CoNLL-2003 BIBREF14 English NER dataset, following most works on the same task. This dataset includes four types of named entities, namely, PERSON, LOCATION, ORGANIZATION, and MISC. We adopted the BIOES tagging scheme instead of IOB, as many previous works indicated meaningful improvement with BIOES scheme BIBREF16 , BIBREF17 . Table TABREF28 shows the results on CoNLL-2003 Englist testb dataset, with 12 different combinations of network layers/blocks, such as word/character embedding, CNN/LSTM and CRF. The results suggest that the flexible combination of layers/blocks in NeuronBlocks can easily reproduce the performance of original models, with comparative or slightly better performance. GLUE Benchmark The General Language Understanding Evaluation (GLUE) benchmark BIBREF13 is a collection of natural language understanding tasks. We experimented on the GLUE benchmark tasks using BiLSTM and Attention based models. As shown in Table TABREF29 , the models built by NeuronBlocks can achieve competitive or even better results on GLUE tasks with minimal coding efforts. Knowledge Distillation We evaluated Knowledge Distillation task in NeuronBlocks on a dataset collected from one commercial search engine. We refer to this dataset as Domain Classification Dataset. Each sample in this dataset consists of two parts, i.e., a question and a binary label indicating whether the question belongs to a specific domain. Table TABREF36 shows the results, where Area Under Curve (AUC) metric is used as the performance evaluation criteria and Queries per Second (QPS) is used to measure inference speed. By knowledge distillation training approach, the student model by NeuronBlocks managed to get 23-27 times inference speedup with only small performance regression compared with BERTbase fine-tuned classifier. WikiQA The WikiQA corpus BIBREF15 is a publicly available dataset for open-domain question answering. This dataset contains 3,047 questions from Bing query logs, each associated with some candidate answer sentences from Wikipedia. We conducted experiments on WikiQA dataset using CNN, BiLSTM, and Attention based models. The results are shown in Table TABREF41 . 
The models built in NeuronBlocks achieved competitive or even better results with simple model configurations. Conclusion and Future Work In this paper, we introduce NeuronBlocks, a DNN toolkit for NLP tasks built on PyTorch. NeuronBlocks targets three types of engineers, and provides a two-layer solution to satisfy the requirements from all three types of users. To be more specific, the Model Zoo consists of various templates for the most common NLP tasks, while the Block Zoo supplies a gallery of alternative layers/modules for the networks. Such design achieves a balance between generality and flexibility. Extensive experiments have verified the effectiveness of this approach. NeuronBlocks has been widely used in a product team of a commercial search engine, and significantly improved the productivity for developing NLP DNN approaches. As an open-source toolkit, we will further extend it in various directions. The following names a few examples.
What neural network modules are included in NeuronBlocks?
Embedding Layer, Neural Network Layers, Loss Function, Metrics
1,678
qasper
3k
Introduction The task of speculation detection and scope resolution is critical in distinguishing factual information from speculative information. This has multiple use-cases, like systems that determine the veracity of information, and those that involve requirement analysis. This task is particularly important to the biomedical domain, where patient reports and medical articles often use this feature of natural language. This task is commonly broken down into two subtasks: the first subtask, speculation cue detection, is to identify the uncertainty cue in a sentence, while the second subtask: scope resolution, is to identify the scope of that cue. For instance, consider the example: It might rain tomorrow. The speculation cue in the sentence above is ‘might’ and the scope of the cue ‘might’ is ‘rain tomorrow’. Thus, the speculation cue is the word that expresses the speculation, while the words affected by the speculation are in the scope of that cue. This task was the CoNLL-2010 Shared Task (BIBREF0), which had 3 different subtasks. Task 1B was speculation cue detection on the BioScope Corpus, Task 1W was weasel identification from Wikipedia articles, and Task 2 was speculation scope resolution from the BioScope Corpus. For each task, the participants were provided the train and test set, which is henceforth referred to as Task 1B CoNLL and Task 2 CoNLL throughout this paper. For our experimentation, we use the sub corpora of the BioScope Corpus (BIBREF1), namely the BioScope Abstracts sub corpora, which is referred to as BA, and the BioScope Full Papers sub corpora, which is referred to as BF. We also use the SFU Review Corpus (BIBREF2), which is referred to as SFU. This subtask of natural language processing, along with another similar subtask, negation detection and scope resolution, have been the subject of a body of work over the years. The approaches used to solve them have evolved from simple rule-based systems (BIBREF3) based on linguistic information extracted from the sentences, to modern deep-learning based methods. The Machine Learning techniques used varied from Maximum Entropy Classifiers (BIBREF4) to Support Vector Machines (BIBREF5,BIBREF6,BIBREF7,BIBREF8), while the deep learning approaches included Recursive Neural Networks (BIBREF9,BIBREF10), Convolutional Neural Networks (BIBREF11) and most recently transfer learning-based architectures like Bidirectional Encoder Representation from Transformers (BERT) (BIBREF12). Figures FIGREF1 and FIGREF1 contain a summary of the papers addressing speculation detection and scope resolution (BIBREF13, BIBREF5, BIBREF9, BIBREF3, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF6, BIBREF11, BIBREF18, BIBREF10, BIBREF19, BIBREF7, BIBREF4, BIBREF8). Inspired by the most recent approach of applying BERT to negation detection and scope resolution (BIBREF12), we take this approach one step further by performing a comparative analysis of three popular transformer-based architectures: BERT (BIBREF20), XLNet (BIBREF21) and RoBERTa (BIBREF22), applied to speculation detection and scope resolution. We evaluate the performance of each model across all datasets via the single dataset training approach, and report all scores including inter-dataset scores (i.e. train on one dataset, evaluate on another) to test the generalizability of the models. This approach outperforms all existing systems on the task of speculation detection and scope resolution. 
Further, we jointly train on multiple datasets and obtain improvements over the single dataset training approach on most datasets. Contrary to results observed on benchmark GLUE tasks, we observe XLNet consistently outperforming RoBERTa. To confirm this observation, we apply these models to the negation detection and scope resolution task, and observe a continuity in this trend, reporting state-of-the-art results on three of four datasets on the negation scope resolution task. The rest of the paper is organized as follows: In Section 2, we provide a detailed description of our methodology and elaborate on the experimentation details. In Section 3, we present our results and analysis on the speculation detection and scope resolution task, using the single dataset and the multiple dataset training approach. In Section 4, we show the results of applying XLNet and RoBERTa on negation detection and scope resolution and propose a few reasons to explain why XLNet performs better than RoBERTa. Finally, the future scope and conclusion is mentioned in Section 5. Methodology and Experimental Setup We use the methodology by Khandelwal and Sawant (BIBREF12), and modify it to support experimentation with multiple models. For Speculation Cue Detection: Input Sentence: It might rain tomorrow. True Labels: Not-A-Cue, Cue, Not-A-Cue, Not-A-Cue. First, this sentence is preprocessed to get the target labels as per the following annotation schema: 1 – Normal Cue 2 – Multiword Cue 3 – Not a cue 4 – Pad token Thus, the preprocessed sequence is as follows: Input Sentence: [It, might, rain, tomorrow] True Labels: [3,1,3,3] Then, we preprocess the input using the tokenizer for the model being used (BERT, XLNet or RoBERTa): splitting each word into one or more tokens, and converting each token to it’s corresponding tokenID, and padding it to the maximum input length of the model. Thus, Input Sentence: [wtt(It), wtt(might), wtt(rain), wtt(tom), wtt(## or), wtt(## row), wtt(〈pad 〉),wtt(〈pad 〉)...] True Labels: [3,1,3,3,3,3,4,4,4,4,...] The word ‘tomorrow' has been split into 3 tokens, ‘tom', ‘##or' and ‘##row'. The function to convert the word to tokenID is represented by wtt. For Speculation Scope Resolution: If a sentence has multiple cues, each cue's scope will be resolved individually. Input Sentence: It might rain tomorrow. True Labels: Out-Of-Scope, Out-Of-Scope, In-Scope, In-Scope. First, this sentence is preprocessed to get the target labels as per the following annotation schema: 0 – Out-Of-Scope 1 – In-Scope Thus, the preprocessed sequence is as follows: True Scope Labels: [0,0,1,1] As for cue detection, we preprocess the input using the tokenizer for the model being used. Additionally, we need to indicate which cue's scope we want to find in the input sentence. We do this by inserting a special token representing the token type (according to the cue detection annotation schema) before the cue word whose scope is being resolved. Here, we want to find the scope of the cue ‘might'. Thus, Input Sentence: [wtt(It), wtt(〈token[1]〉), wtt(might), wtt(rain), wtt(tom), wtt(##or), wtt(##row), wtt(〈pad〉), wtt(〈pad〉)...] True Scope Labels: [0,0,1,1,1,1,0,0,0,0,...] Now, the preprocessed input for cue detection and similarly for scope detection is fed as input to our model as follows: X = Model (Input) Y = W*X + b The W matrix is a matrix of size n_hidden x num_classes (n_hidden is the size of the representation of a token within the model). These logits are fed to the loss function. 
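The sketch below illustrates this scope-resolution preprocessing with the Hugging Face tokenizer; the marker-token format, the padding length, and the choice to label the marker tokens as out-of-scope are assumptions for illustration rather than the authors' exact code.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def preprocess_scope(words, cue_index, cue_type, scope_labels, max_len=16):
    """Insert a cue-type marker before the cue word and expand labels to wordpieces.

    words:        ["It", "might", "rain", "tomorrow"]
    cue_index:    position of the cue whose scope is being resolved (1 for "might")
    cue_type:     1 (normal cue) or 2 (multiword cue), per the cue annotation schema
    scope_labels: per-word 0/1 scope labels, e.g. [0, 0, 1, 1]
    """
    token_ids, labels = [], []
    for i, (word, lab) in enumerate(zip(words, scope_labels)):
        if i == cue_index:
            # The marker itself is labelled out-of-scope here (an assumption).
            token_ids += tokenizer.encode(f"[token{cue_type}]", add_special_tokens=False)
            labels += [0] * (len(token_ids) - len(labels))
        pieces = tokenizer.encode(word, add_special_tokens=False)
        token_ids += pieces
        labels += [lab] * len(pieces)          # every wordpiece inherits its word's label
    pad = max_len - len(token_ids)
    return token_ids + [tokenizer.pad_token_id] * pad, labels + [0] * pad

ids, labs = preprocess_scope(["It", "might", "rain", "tomorrow"], 1, 1, [0, 0, 1, 1])
print(labs)   # scope labels aligned to wordpieces, padded with 0
```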
We use the following variants of each model: BERT: bert-base-uncaseds3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz (The model used by BIBREF12) RoBERTa: roberta-bases3.amazonaws.com/models.huggingface.co/bert/roberta-base-pytorch_model.bin (RoBERTa-base does not have an uncased variant) XLNet: xlnet-base-caseds3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-pytorch_model.bin (XLNet-base does not have an uncased variant) The output of the model is a vector of probabilities per token. The loss is calculated for each token, by using the output vector and the true label for that token. We use class weights for the loss function, setting the weight for label 4 to 0 and all other labels to 1 (for cue detection only) to avoid training on padding token’s output. We calculate the scores (Precision, Recall, F1) for the model per word of the input sentence, not per token that was fed to the model, as the tokens could be different for different models leading to inaccurate scores. For the above example, we calculate the output label for the word ‘tomorrow', not for each token it was split into (‘tom', ‘##or' and ‘##row'). To find the label for each word from the tokens it was split into, we experiment with 2 methods: Average: We average the output vectors (softmax probabilities) for each token that the word was split into by the model's tokenizer. In the example above, we average the output of ‘tom', ‘##or' and ‘##row' to get the output for ‘tomorrow'. Then, we take an argmax over the resultant vector. This is then compared with the true label for the original word. First Token: Here, we only consider the first token's probability vector (among all tokens the word was split into) as the output for that word, and get the label by an argmax over this vector. In the example above, we would consider the output vector corresponding to the token ‘tom' as the output for the word ‘tomorrow'. For cue detection, the results are reported for the Average method only, while we report the scores for both Average and First Token for Scope Resolution. For fair comparison, we use the same hyperparameters for the entire architecture for all 3 models. Only the tokenizer and the model are changed for each model. All other hyperparameters are kept same. We finetune the models for 60 epochs, using early stopping with a patience of 6 on the F1 score (word level) on the validation dataset. We use an initial learning rate of 3e-5, with a batch size of 8. We use the Categorical Cross Entropy loss function. We use the Huggingface’s Pytorch Transformer library (BIBREF23) for the models and train all our models on Google Colaboratory. Results: Speculation Cue Detection and Scope Resolution We use a default train-validation-test split of 70-15-15 for each dataset. For the speculation detection and scope resolution subtasks using single-dataset training, we report the results as an average of 5 runs of the model. For training the model on multiple datasets, we perform a 70-15-15 split of each training dataset, after which the train and validation part of the individual datasets are merged while the scores are reported for the test part of the individual datasets, which is not used for training or validation. We report the results as an average of 3 runs of the model. Figure FIGREF8 contains results for speculation cue detection and scope resolution when trained on a single dataset. 
All models perform the best when trained on the same dataset as they are evaluated on, except for BF, which gets the best results when trained on BA. This is because of the transfer learning capabilities of the models and the fact that BF is a smaller dataset than BA (BF: 2670 sentences, BA: 11871 sentences). For speculation cue detection, there is lesser generalizability for models trained on BF or BA, while there is more generalizability for models trained on SFU. This could be because of the different nature of the biomedical domain. Figure FIGREF11 contains the results for speculation detection and scope resolution for models trained jointly on multiple datasets. We observe that training on multiple datasets helps the performance of all models on each dataset, as the quantity of data available to train the model increases. We also observe that XLNet consistently outperforms BERT and RoBERTa. To confirm this observation, we apply the 2 models to the related task of negation detection and scope resolution Negation Cue Detection and Scope Resolution We use a default train-validation-test split of 70-15-15 for each dataset, and use all 4 datasets (BF, BA, SFU and Sherlock). The results for BERT are taken from BIBREF12. The results for XLNet and RoBERTa are averaged across 5 runs for statistical significance. Figure FIGREF14 contains results for negation cue detection and scope resolution. We report state-of-the-art results on negation scope resolution on BF, BA and SFU datasets. Contrary to popular opinion, we observe that XLNet is better than RoBERTa for the cue detection and scope resolution tasks. A few possible reasons for this trend are: Domain specificity, as both negation and speculation are closely related subtasks. Further experimentation on different tasks is needed to verify this. Most benchmark tasks are sentence classification tasks, whereas the subtasks we experiment on are sequence labelling tasks. Given the pre-training objective of XLNet (training on permutations of the input), it may be able to capture long-term dependencies better, essential for sequence labelling tasks. We work with the base variants of the models, while most results are reported with the large variants of the models. Conclusion and Future Scope In this paper, we expanded on the work of Khandelwal and Sawant (BIBREF12) by looking at alternative transfer-learning models and experimented with training on multiple datasets. On the speculation detection task, we obtained a gain of 0.42 F1 points on BF, 1.98 F1 points on BA and 0.29 F1 points on SFU, while on the scope resolution task, we obtained a gain of 8.06 F1 points on BF, 4.27 F1 points on BA and 11.87 F1 points on SFU, when trained on a single dataset. While training on multiple datasets, we observed a gain of 10.6 F1 points on BF and 1.94 F1 points on BA on the speculation detection task and 2.16 F1 points on BF and 0.25 F1 points on SFU on the scope resolution task over the single dataset training approach. We thus significantly advance the state-of-the-art for speculation detection and scope resolution. On the negation scope resolution task, we applied the XLNet and RoBERTa and obtained a gain of 3.16 F1 points on BF, 0.06 F1 points on BA and 0.3 F1 points on SFU. Thus, we demonstrated the usefulness of transformer-based architectures in the field of negation and speculation detection and scope resolution. 
We believe that a larger and more general dataset would go a long way in bolstering future research and would help create better systems that are not domain-specific.
What were the baselines?
varied from Maximum Entropy Classifiers (BIBREF4) to Support Vector Machines (BIBREF5,BIBREF6,BIBREF7,BIBREF8), Recursive Neural Networks (BIBREF9,BIBREF10), Convolutional Neural Networks (BIBREF11) and most recently transfer learning-based architectures like Bidirectional Encoder Representation from Transformers (BERT) (BIBREF12)
2,215
qasper
3k
Introduction We understand from Zipf's Law that in any natural language corpus a majority of the vocabulary word types will either be absent or occur in low frequency. Estimating the statistical properties of these rare word types is naturally a difficult task. This is analogous to the curse of dimensionality when we deal with sequences of tokens - most sequences will occur only once in the training data. Neural network architectures overcome this problem by defining non-linear compositional models over vector space representations of tokens and hence assign non-zero probability even to sequences not seen during training BIBREF0 , BIBREF1 . In this work, we explore a similar approach to learning distributed representations of social media posts by composing them from their constituent characters, with the goal of generalizing to out-of-vocabulary words as well as sequences at test time. Traditional Neural Network Language Models (NNLMs) treat words as the basic units of language and assign independent vectors to each word type. To constrain memory requirements, the vocabulary size is fixed before-hand; therefore, rare and out-of-vocabulary words are all grouped together under a common type `UNKNOWN'. This choice is motivated by the assumption of arbitrariness in language, which means that surface forms of words have little to do with their semantic roles. Recently, BIBREF2 challenge this assumption and present a bidirectional Long Short Term Memory (LSTM) BIBREF3 for composing word vectors from their constituent characters which can memorize the arbitrary aspects of word orthography as well as generalize to rare and out-of-vocabulary words. Encouraged by their findings, we extend their approach to a much larger unicode character set, and model long sequences of text as functions of their constituent characters (including white-space). We focus on social media posts from the website Twitter, which are an excellent testing ground for character based models due to the noisy nature of text. Heavy use of slang and abundant misspellings means that there are many orthographically and semantically similar tokens, and special characters such as emojis are also immensely popular and carry useful semantic information. In our moderately sized training dataset of 2 million tweets, there were about 0.92 million unique word types. It would be expensive to capture all these phenomena in a word based model in terms of both the memory requirement (for the increased vocabulary) and the amount of training data required for effective learning. Additional benefits of the character based approach include language independence of the methods, and no requirement of NLP preprocessing such as word-segmentation. A crucial step in learning good text representations is to choose an appropriate objective function to optimize. Unsupervised approaches attempt to reconstruct the original text from its latent representation BIBREF4 , BIBREF0 . Social media posts however, come with their own form of supervision annotated by millions of users, in the form of hashtags which link posts about the same topic together. A natural assumption is that the posts with the same hashtags should have embeddings which are close to each other. Hence, we formulate our training objective to maximize cross-entropy loss at the task of predicting hashtags for a post from its latent representation. We propose a Bi-directional Gated Recurrent Unit (Bi-GRU) BIBREF5 neural network for learning tweet representations. 
Treating white-space as a special character itself, the model does a forward and backward pass over the entire sequence, and the final GRU states are linearly combined to get the tweet embedding. Posterior probabilities over hashtags are computed by projecting this embedding to a softmax output layer. Compared to a word-level baseline this model shows improved performance at predicting hashtags for a held-out set of posts. Inspired by recent work in learning vector space text representations, we name our model tweet2vec. Related Work Using neural networks to learn distributed representations of words dates back to BIBREF0 . More recently, BIBREF4 released word2vec - a collection of word vectors trained using a recurrent neural network. These word vectors are in widespread use in the NLP community, and the original work has since been extended to sentences BIBREF1 , documents and paragraphs BIBREF6 , topics BIBREF7 and queries BIBREF8 . All these methods require storing an extremely large table of vectors for all word types and cannot be easily generalized to unseen words at test time BIBREF2 . They also require preprocessing to find word boundaries which is non-trivial for a social network domain like Twitter. In BIBREF2 , the authors present a compositional character model based on bidirectional LSTMs as a potential solution to these problems. A major benefit of this approach is that large word lookup tables can be compacted into character lookup tables and the compositional model scales to large data sets better than other state-of-the-art approaches. While BIBREF2 generate word embeddings from character representations, we propose to generate vector representations of entire tweets from characters in our tweet2vec model. Our work adds to the growing body of work showing the applicability of character models for a variety of NLP tasks such as Named Entity Recognition BIBREF9 , POS tagging BIBREF10 , text classification BIBREF11 and language modeling BIBREF12 , BIBREF13 . Previously, BIBREF14 dealt with the problem of estimating rare word representations by building them from their constituent morphemes. While they show improved performance over word-based models, their approach requires a morpheme parser for preprocessing which may not perform well on noisy text like Twitter. Also the space of all morphemes, though smaller than the space of all words, is still large enough that modelling all morphemes is impractical. Hashtag prediction for social media has been addressed earlier, for example in BIBREF15 , BIBREF16 . BIBREF15 also use a neural architecture, but compose text embeddings from a lookup table of words. They also show that the learned embeddings can generalize to an unrelated task of document recommendation, justifying the use of hashtags as supervision for learning text representations. Tweet2Vec Bi-GRU Encoder: Figure 1 shows our model for encoding tweets. It uses a similar structure to the C2W model in BIBREF2 , with LSTM units replaced with GRU units. The input to the network is defined by an alphabet of characters $C$ (this may include the entire unicode character set). The input tweet is broken into a stream of characters $c_1, c_2, ... c_m$ each of which is represented by a 1-by- $|C|$ encoding. These one-hot vectors are then projected to a character space by multiplying with the matrix $P_C \in \mathbb {R}^{|C| \times d_c}$ , where $d_c$ is the dimension of the character vector space. Let $x_1, x_2, ... 
x_m$ be the sequence of character vectors for the input tweet after the lookup. The encoder consists of a forward-GRU and a backward-GRU. Both have the same architecture, except the backward-GRU processes the sequence in reverse order. Each of the GRU units processes these vectors sequentially and, starting with the initial state $h_0$, computes the sequence $h_1, h_2, ... h_m$ as follows:
$$\begin{aligned} r_t &= \sigma (W_r x_t + U_r h_{t-1} + b_r), \\ z_t &= \sigma (W_z x_t + U_z h_{t-1} + b_z), \\ \tilde{h}_t &= \tanh (W_h x_t + U_h (r_t \odot h_{t-1}) + b_h), \\ h_t &= (1-z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t. \end{aligned}$$
Here $r_t$, $z_t$ are called the reset and update gates respectively, and $\tilde{h}_t$ is the candidate output state which is converted to the actual output state $h_t$. $W_r, W_z, W_h$ are $d_h \times d_c$ matrices and $U_r, U_z, U_h$ are $d_h \times d_h$ matrices, where $d_h$ is the hidden state dimension of the GRU. The final states $h_m^f$ from the forward-GRU and $h_0^b$ from the backward-GRU are combined using a fully-connected layer to give the final tweet embedding $e_t$: $$e_t = W^f h_m^f + W^b h_0^b + b$$ (Eq. 3) Here $W^f, W^b$ are $d_t \times d_h$ matrices and $b$ is a $d_t \times 1$ bias term, where $d_t$ is the dimension of the final tweet embedding. In our experiments we set $d_t=d_h$. All parameters are learned using gradient descent. Softmax: Finally, the tweet embedding is passed through a linear layer whose output is the same size as the number of hashtags $L$ in the data set. We use a softmax layer to compute the posterior hashtag probabilities: $$P(y=j \mid e) = \frac{\exp (w_j^T e + b_j)}{\sum _{i=1}^{L} \exp (w_i^T e + b_i)}.$$ (Eq. 4) Objective Function: We optimize the categorical cross-entropy loss between predicted and true hashtags: $$J = \frac{1}{B} \sum _{i=1}^{B} \sum _{j=1}^{L} -t_{i,j} \log (p_{i,j}) + \lambda \Vert \Theta \Vert ^2.$$ (Eq. 5) Here $B$ is the batch size, $L$ is the number of classes, $p_{i,j}$ is the predicted probability that the $i$-th tweet has hashtag $j$, and $t_{i,j} \in \lbrace 0,1\rbrace $ denotes the ground truth of whether the $j$-th hashtag is in the $i$-th tweet. We use L2-regularization weighted by $\lambda $. Word Level Baseline Since our objective is to compare character-based and word-based approaches, we have also implemented a simple word-level encoder for tweets. The input tweet is first split into tokens along white-spaces. A more sophisticated tokenizer may be used, but for a fair comparison we wanted to keep language-specific preprocessing to a minimum. The encoder is essentially the same as tweet2vec, with the input as words instead of characters. A lookup table stores word vectors for the $V$ (20K here) most common words, and the rest are grouped together under the `UNK' token. Data Our dataset consists of a large collection of global posts from Twitter between June 1, 2013 and June 5, 2013. Only English language posts (as detected by the lang field in the Twitter API) and posts with at least one hashtag are retained. We removed infrequent hashtags ($<500$ posts) since they do not have enough data for good generalization. We also removed very frequent tags ($>19K$ posts) which were almost always from automatically generated posts (e.g., #androidgame) which are trivial to predict. The final dataset contains 2 million tweets for training, 10K for validation and 50K for testing, with a total of 2039 distinct hashtags.
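Returning to the encoder defined above, the following is a compact PyTorch sketch of the character-level Bi-GRU encoder and hashtag softmax of Eqs. (3)-(5); the hidden sizes, vocabulary sizes, and the use of nn.CrossEntropyLoss (with the L2 term left to the optimizer's weight decay) are illustrative choices, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class Tweet2Vec(nn.Module):
    """Character-level Bi-GRU encoder with a hashtag softmax head (illustrative sketch)."""
    def __init__(self, n_chars, n_tags, d_c=150, d_h=500):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, d_c)            # character lookup table P_C
        self.gru = nn.GRU(d_c, d_h, batch_first=True, bidirectional=True)
        self.combine = nn.Linear(2 * d_h, d_h)                # plays the role of W^f, W^b and b
        self.out = nn.Linear(d_h, n_tags)                     # hashtag softmax layer

    def forward(self, chars):                                 # chars: (batch, seq_len) char ids
        x = self.char_emb(chars)
        _, h_n = self.gru(x)                                  # h_n: (2, batch, d_h)
        h_fwd, h_bwd = h_n[0], h_n[1]                         # final forward / backward states
        e = self.combine(torch.cat([h_fwd, h_bwd], dim=-1))   # tweet embedding e_t (Eq. 3)
        return self.out(e)                                    # logits; softmax applied in the loss

model = Tweet2Vec(n_chars=2829, n_tags=2039)
logits = model(torch.randint(0, 2829, (8, 140)))              # a batch of 8 character sequences
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2039, (8,)))   # cross-entropy (Eq. 5)
```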
We use simple regex to preprocess the post text and remove hashtags (since these are to be predicted) and HTML tags, and replace user-names and URLs with special tokens. We also removed retweets and convert the text to lower-case. Implementation Details Word vectors and character vectors are both set to size $d_L=150$ for their respective models. There were 2829 unique characters in the training set and we model each of these independently in a character look-up table. Embedding sizes were chosen such that each model had roughly the same number of parameters (Table 2 ). Training is performed using mini-batch gradient descent with Nesterov's momentum. We use a batch size $B=64$ , initial learning rate $\eta _0=0.01$ and momentum parameter $\mu _0=0.9$ . L2-regularization with $\lambda =0.001$ was applied to all models. Initial weights were drawn from 0-mean gaussians with $\sigma =0.1$ and initial biases were set to 0. The hyperparameters were tuned one at a time keeping others fixed, and values with the lowest validation cost were chosen. The resultant combination was used to train the models until performance on validation set stopped increasing. During training, the learning rate is halved everytime the validation set precision increases by less than 0.01 % from one epoch to the next. The models converge in about 20 epochs. Code for training both the models is publicly available on github. Results We test the character and word-level variants by predicting hashtags for a held-out test set of posts. Since there may be more than one correct hashtag per post, we generate a ranked list of tags for each post from the output posteriors, and report average precision@1, recall@10 and mean rank of the correct hashtags. These are listed in Table 3 . To see the performance of each model on posts containing rare words (RW) and frequent words (FW) we selected two test sets each containing 2,000 posts. We populated these sets with posts which had the maximum and minimum number of out-of-vocabulary words respectively, where vocabulary is defined by the 20K most frequent words. Overall, tweet2vec outperforms the word model, doing significantly better on RW test set and comparably on FW set. This improved performance comes at the cost of increased training time (see Table 2 ), since moving from words to characters results in longer input sequences to the GRU. We also study the effect of model size on the performance of these models. For the word model we set vocabulary size $V$ to 8K, 15K and 20K respectively. For tweet2vec we set the GRU hidden state size to 300, 400 and 500 respectively. Figure 2 shows precision 1 of the two models as the number of parameters is increased, for each test set described above. There is not much variation in the performance, and moreover tweet2vec always outperforms the word based model for the same number of parameters. Table 4 compares the models as complexity of the task is increased. We created 3 datasets (small, medium and large) with an increasing number of hashtags to be predicted. This was done by varying the lower threshold of the minimum number of tags per post for it to be included in the dataset. Once again we observe that tweet2vec outperforms its word-based counterpart for each of the three settings. Finally, table 1 shows some predictions from the word level model and tweet2vec. 
We selected these to highlight some strengths of the character based approach - it is robust to word segmentation errors and spelling mistakes, effectively interprets emojis and other special characters to make predictions, and also performs comparably to the word-based approach for in-vocabulary tokens. Conclusion We have presented tweet2vec - a character level encoder for social media posts trained using supervision from associated hashtags. Our result shows that tweet2vec outperforms the word based approach, doing significantly better when the input post contains many rare words. We have focused only on English language posts, but the character model requires no language specific preprocessing and can be extended to other languages. For future work, one natural extension would be to use a character-level decoder for predicting the hashtags. This will allow generation of hashtags not seen in the training dataset. Also, it will be interesting to see how our tweet2vec embeddings can be used in domains where there is a need for semantic understanding of social media, such as tracking infectious diseases BIBREF17 . Hence, we provide an off-the-shelf encoder trained on medium dataset described above to compute vector-space representations of tweets along with our code on github. Acknowledgments We would like to thank Alex Smola, Yun Fu, Hsiao-Yu Fish Tung, Ruslan Salakhutdinov, and Barnabas Poczos for useful discussions. We would also like to thank Juergen Pfeffer for providing access to the Twitter data, and the reviewers for their comments.
What other tasks do they test their method on?
null
Introduction and Background Task-oriented dialogue systems are primarily designed to search and interact with large databases which contain information pertaining to a certain dialogue domain: the main purpose of such systems is to assist the users in accomplishing a well-defined task such as flight booking BIBREF0, tourist information BIBREF1, restaurant search BIBREF2, or booking a taxi BIBREF3. These systems are typically constructed around rigid task-specific ontologies BIBREF1, BIBREF4 which enumerate the constraints the users can express using a collection of slots (e.g., price range for restaurant search) and their slot values (e.g., cheap, expensive for the aforementioned slots). Conversations are then modelled as a sequence of actions that constrain slots to particular values. This explicit semantic space is manually engineered by the system designer. It serves as the output of the natural language understanding component as well as the input to the language generation component both in traditional modular systems BIBREF5, BIBREF6 and in more recent end-to-end task-oriented dialogue systems BIBREF7, BIBREF8, BIBREF9, BIBREF3. Working with such explicit semantics for task-oriented dialogue systems poses several critical challenges on top of the manual time-consuming domain ontology design. First, it is difficult to collect domain-specific data labelled with explicit semantic representations. As a consequence, despite recent data collection efforts to enable training of task-oriented systems across multiple domains BIBREF0, BIBREF3, annotated datasets are still few and far between, as well as limited in size and the number of domains covered. Second, the current approach constrains the types of dialogue the system can support, resulting in artificial conversations, and breakdowns when the user does not understand what the system can and cannot support. In other words, training a task-based dialogue system for voice-controlled search in a new domain always implies the complex, expensive, and time-consuming process of collecting and annotating sufficient amounts of in-domain dialogue data. In this paper, we present a demo system based on an alternative approach to task-oriented dialogue. Relying on non-generative response retrieval we describe the PolyResponse conversational search engine and its application in the task of restaurant search and booking. The engine is trained on hundreds of millions of real conversations from a general domain (i.e., Reddit), using an implicit representation of semantics that directly optimizes the task at hand. It learns what responses are appropriate in different conversational contexts, and consequently ranks a large pool of responses according to their relevance to the current user utterance and previous dialogue history (i.e., dialogue context). The technical aspects of the underlying conversational search engine are explained in detail in our recent work BIBREF11, while the details concerning the Reddit training data are also available in another recent publication BIBREF12. In this demo, we put focus on the actual practical usefulness of the search engine by demonstrating its potential in the task of restaurant search, and extending it to deal with multi-modal data. We describe a PolyReponse system that assists the users in finding a relevant restaurant according to their preference, and then additionally helps them to make a booking in the selected restaurant. 
Due to its retrieval-based design, with the PolyResponse engine there is no need to engineer a structured ontology, or to solve the difficult task of general language generation. This design also bypasses the construction of dedicated decision-making policy modules. The large ranking model already encapsulates a lot of knowledge about natural language and conversational flow. Since retrieved system responses are presented visually to the user, the PolyResponse restaurant search engine is able to combine text responses with relevant visual information (e.g., photos from social media associated with the current restaurant and related to the user utterance), effectively yielding a multi-modal response. This setup of using voice as input, and responding visually is becoming more and more prevalent with the rise of smart screens like Echo Show and even mixed reality. Finally, the PolyResponse restaurant search engine is multilingual: it is currently deployed in 8 languages enabling search over restaurants in 8 cities around the world. System snapshots in four different languages are presented in Figure FIGREF16, while screencast videos that illustrate the dialogue flow with the PolyResponse engine are available at: https://tinyurl.com/y3evkcfz. PolyResponse: Conversational Search The PolyResponse system is powered by a single large conversational search engine, trained on a large amount of conversational and image data, as shown in Figure FIGREF2. In simple words, it is a ranking model that learns to score conversational replies and images in a given conversational context. The highest-scoring responses are then retrieved as system outputs. The system computes two sets of similarity scores: 1) $S(r,c)$ is the score of a candidate reply $r$ given a conversational context $c$, and 2) $S(p,c)$ is the score of a candidate photo $p$ given a conversational context $c$. These scores are computed as a scaled cosine similarity of a vector that represents the context ($h_c$), and a vector that represents the candidate response: a text reply ($h_r$) or a photo ($h_p$). For instance, $S(r,c)$ is computed as $S(r,c)=C cos(h_r,h_c)$, where $C$ is a learned constant. The part of the model dealing with text input (i.e., obtaining the encodings $h_c$ and $h_r$) follows the architecture introduced recently by Henderson:2019acl. We provide only a brief recap here; see the original paper for further details. PolyResponse: Conversational Search ::: Text Representation. The model, implemented as a deep neural network, learns to respond by training on hundreds of millions context-reply $(c,r)$ pairs. First, similar to Henderson:2017arxiv, raw text from both $c$ and $r$ is converted to unigrams and bigrams. All input text is first lower-cased and tokenised, numbers with 5 or more digits get their digits replaced by a wildcard symbol #, while words longer than 16 characters are replaced by a wildcard token LONGWORD. Sentence boundary tokens are added to each sentence. The vocabulary consists of the unigrams that occur at least 10 times in a random 10M subset of the Reddit training set (see Figure FIGREF2) plus the 200K most frequent bigrams in the same random subset. During training, we obtain $d$-dimensional feature representations ($d=320$) shared between contexts and replies for each unigram and bigram jointly with other neural net parameters. 
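As a rough illustration of this preprocessing, the sketch below builds the lower-cased unigram and bigram features with the # and LONGWORD wildcards. The tokenizer, sentence splitter and the spelling of the boundary tokens are simplifying assumptions, not the production pipeline.

```python
import re

def featurize(text):
    """Sketch of the unigram/bigram featurization described above (rules are assumptions)."""
    feats = []
    for sent in re.split(r"(?<=[.!?])\s+", text.lower()):       # crude sentence splitter
        toks = re.findall(r"\w+|[^\w\s]", sent)                  # naive tokenizer stand-in
        norm = []
        for tok in toks:
            if re.fullmatch(r"\d{5,}", tok):
                tok = re.sub(r"\d", "#", tok)                    # numbers with 5+ digits -> wildcards
            elif len(tok) > 16:
                tok = "LONGWORD"                                 # very long words -> wildcard token
            norm.append(tok)
        norm = ["<s>"] + norm + ["</s>"]                         # sentence boundary tokens (assumed form)
        feats += norm                                            # unigrams
        feats += [f"{a}_{b}" for a, b in zip(norm, norm[1:])]    # bigrams
    return feats

print(featurize("Table for two at 19:30? My booking reference is 00123456."))
```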
A state-of-the-art architecture based on transformers BIBREF13 is then applied to unigram and bigram vectors separately, which are then averaged to form the final 320-dimensional encoding. That encoding is then passed through three fully-connected non-linear hidden layers of dimensionality $1,024$. The final layer is linear and maps the text into the final $l$-dimensional ($l=512$) representation: $h_c$ and $h_r$. Other standard and more sophisticated encoder models can also be used to provide final encodings $h_c$ and $h_r$, but the current architecture shows a good trade-off between speed and efficacy with strong and robust performance in our empirical evaluations on the response retrieval task using Reddit BIBREF14, OpenSubtitles BIBREF15, and AmazonQA BIBREF16 conversational test data, see BIBREF12 for further details. In training the constant $C$ is constrained to lie between 0 and $\sqrt{l}$. Following Henderson:2017arxiv, the scoring function in the training objective aims to maximise the similarity score of context-reply pairs that go together, while minimising the score of random pairings: negative examples. Training proceeds via SGD with batches comprising 500 pairs (1 positive and 499 negatives). PolyResponse: Conversational Search ::: Photo Representation. Photos are represented using convolutional neural net (CNN) models pretrained on ImageNet BIBREF17. We use a MobileNet model with a depth multiplier of 1.4, and an input dimension of $224 \times 224$ pixels as in BIBREF18. This provides a $1,280 \times 1.4 = 1,792$-dimensional representation of a photo, which is then passed through a single hidden layer of dimensionality $1,024$ with ReLU activation, before being passed to a hidden layer of dimensionality 512 with no activation to provide the final representation $h_p$. PolyResponse: Conversational Search ::: Data Source 1: Reddit. For training text representations we use a Reddit dataset similar to AlRfou:2016arxiv. Our dataset is large and provides natural conversational structure: all Reddit data from January 2015 to December 2018, available as a public BigQuery dataset, span almost 3.7B comments BIBREF12. We preprocess the dataset to remove uninformative and long comments by retaining only sentences containing more than 8 and less than 128 word tokens. After pairing all comments/contexts $c$ with their replies $r$, we obtain more than 727M context-reply $(c,r)$ pairs for training, see Figure FIGREF2. PolyResponse: Conversational Search ::: Data Source 2: Yelp. Once the text encoding sub-networks are trained, a photo encoder is learned on top of a pretrained MobileNet CNN, using data taken from the Yelp Open dataset: it contains around 200K photos and their captions. Training of the multi-modal sub-network then maximises the similarity of captions encoded with the response encoder $h_r$ to the photo representation $h_p$. As a result, we can compute the score of a photo given a context using the cosine similarity of the respective vectors. A photo will be scored highly if it looks like its caption would be a good response to the current context. PolyResponse: Conversational Search ::: Index of Responses. The Yelp dataset is used at inference time to provide text and photo candidates to display to the user at each step in the conversation. Our restaurant search is currently deployed separately for each city, and we limit the responses to a given city. 
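The scaled-cosine scoring and the negative-sampling objective described above can be sketched as follows; this is a toy in-batch-negatives version (each context's positive reply sits at the same index, and the rest of the batch serves as negatives), not the production training code.

```python
import torch
import torch.nn.functional as F

def ranking_loss(h_c, h_r, C):
    """h_c, h_r: (batch, l) context/reply encodings; C: the learned scaling constant.
    With a batch of 500 pairs, each row sees 1 positive and 499 negatives."""
    h_c = F.normalize(h_c, dim=-1)
    h_r = F.normalize(h_r, dim=-1)
    scores = C * h_c @ h_r.t()                      # S(r, c) = C * cos(h_r, h_c), shape (batch, batch)
    targets = torch.arange(h_c.size(0), device=h_c.device)
    return F.cross_entropy(scores, targets)         # maximize matching pairs, penalize random pairings

# After each SGD step, C would be clipped to [0, sqrt(l)] as stated in the text, e.g.
#   with torch.no_grad(): C.clamp_(0.0, l ** 0.5)
```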
For instance, for our English system for Edinburgh we work with 396 restaurants, 4,225 photos (these include additional photos obtained using the Google Places API without captions), 6,725 responses created from the structured information about restaurants that Yelp provides, converted using simple templates to sentences of the form such as “Restaurant X accepts credit cards.”, 125,830 sentences extracted from online reviews. PolyResponse: Conversational Search ::: PolyResponse in a Nutshell. The system jointly trains two encoding functions (with shared word embeddings) $f(context)$ and $g(reply)$ which produce encodings $h_c$ and $h_r$, so that the similarity $S(c,r)$ is high for all $(c,r)$ pairs from the Reddit training data and low for random pairs. The encoding function $g()$ is then frozen, and an encoding function $t(photo)$ is learnt which makes the similarity between a photo and its associated caption high for all (photo, caption) pairs from the Yelp dataset, and low for random pairs. $t$ is a CNN pretrained on ImageNet, with a shallow one-layer DNN on top. Given a new context/query, we then provide its encoding $h_c$ by applying $f()$, and find plausible text replies and photo responses according to functions $g()$ and $t()$, respectively. These should be responses that look like answers to the query, and photos that look like they would have captions that would be answers to the provided query. At inference, finding relevant candidates given a context reduces to computing $h_c$ for the context $c$ , and finding nearby $h_r$ and $h_p$ vectors. The response vectors can all be pre-computed, and the nearest neighbour search can be further optimised using standard libraries such as Faiss BIBREF19 or approximate nearest neighbour retrieval BIBREF20, giving an efficient search that scales to billions of candidate responses. The system offers both voice and text input and output. Speech-to-text and text-to-speech conversion in the PolyResponse system is currently supported by the off-the-shelf Google Cloud tools. Dialogue Flow The ranking model lends itself to the one-shot task of finding the most relevant responses in a given context. However, a restaurant-browsing system needs to support a dialogue flow where the user finds a restaurant, and then asks questions about it. The dialogue state for each search scenario is represented as the set of restaurants that are considered relevant. This starts off as all the restaurants in the given city, and is assumed to monotonically decrease in size as the conversation progresses until the user converges to a single restaurant. A restaurant is only considered valid in the context of a new user input if it has relevant responses corresponding to it. This flow is summarised here: S1. Initialise $R$ as the set of all restaurants in the city. Given the user's input, rank all the responses in the response pool pertaining to restaurants in $R$. S2. Retrieve the top $N$ responses $r_1, r_2, \ldots , r_N$ with corresponding (sorted) cosine similarity scores: $s_1 \ge s_2 \ge \ldots \ge s_N$. S3. Compute probability scores $p_i \propto \exp (a \cdot s_i)$ with $\sum _{i=1}^N p_i$, where $a>0$ is a tunable constant. S4. Compute a score $q_e$ for each restaurant/entity $e \in R$, $q_e = \sum _{i: r_i \in e} p_i$. S5. Update $R$ to the smallest set of restaurants with highest $q$ whose $q$-values sum up to more than a predefined threshold $t$. S6. Display the most relevant responses associated with the updated $R$, and return to S2. 
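A minimal sketch of one pass through steps S2-S5 is shown below; the data layout (a response pool keyed by restaurant), the score function and the constants a and t are placeholders rather than the values used in the deployed engine.

```python
import math

def narrow_down(R, responses, score_fn, user_input, N=20, a=10.0, t=0.9):
    """One S2-S5 update of the relevant-restaurant set R (sketch).
    `responses` maps restaurant id -> candidate response strings;
    `score_fn(context, response)` returns the cosine-based score S(r, c)."""
    # S2: rank the pooled responses of restaurants still in R, keep the top N
    pool = [(rest, resp) for rest in R for resp in responses[rest]]
    scored = sorted(((score_fn(user_input, resp), rest, resp) for rest, resp in pool),
                    reverse=True)[:N]
    # S3: probability scores p_i proportional to exp(a * s_i), normalized to sum to 1
    z = sum(math.exp(a * s) for s, _, _ in scored)
    probs = [(math.exp(a * s) / z, rest, resp) for s, rest, resp in scored]
    # S4: per-restaurant score q_e = sum of p_i over its retrieved responses
    q = {}
    for p, rest, _ in probs:
        q[rest] = q.get(rest, 0.0) + p
    # S5: smallest set of highest-q restaurants whose q-values sum past the threshold t
    new_R, mass = [], 0.0
    for rest in sorted(q, key=q.get, reverse=True):
        new_R.append(rest)
        mass += q[rest]
        if mass > t:
            break
    # S6 is handled by the caller: display the top responses for restaurants in new_R
    return new_R, [(rest, resp) for _, rest, resp in probs if rest in new_R]
```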
If there are multiple relevant restaurants, one response is shown from each. When only one restaurant is relevant, the top $N$ responses are all shown, and relevant photos are also displayed. The system does not require dedicated understanding, decision-making, and generation modules, and this dialogue flow does not rely on explicit task-tailored semantics. The set of relevant restaurants is kept internally while the system narrows it down across multiple dialogue turns. A simple set of predefined rules is used to provide a templatic spoken system response: e.g., an example rule is “One review of $e$ said $r$”, where $e$ refers to the restaurant, and $r$ to a relevant response associated with $e$. Note that while the demo is currently focused on the restaurant search task, the described “narrowing down” dialogue flow is generic and applicable to a variety of applications dealing with similar entity search. The system can use a set of intent classifiers to allow resetting the dialogue state, or to activate the separate restaurant booking dialogue flow. These classifiers are briefly discussed in §SECREF4. Other Functionality ::: Multilinguality. The PolyResponse restaurant search is currently available in 8 languages and for 8 cities around the world: English (Edinburgh), German (Berlin), Spanish (Madrid), Mandarin (Taipei), Polish (Warsaw), Russian (Moscow), Korean (Seoul), and Serbian (Belgrade). Selected snapshots are shown in Figure FIGREF16, while we also provide videos demonstrating the use and behaviour of the systems at: https://tinyurl.com/y3evkcfz. A simple MT-based translate-to-source approach at inference time is currently used to enable the deployment of the system in other languages: 1) the pool of responses in each language is translated to English by Google Translate beforehand, and pre-computed encodings of their English translations are used as representations of each foreign language response; 2) a provided user utterance (i.e., context) is translated to English on-the-fly and its encoding $h_c$ is then learned. We plan to experiment with more sophisticated multilingual models in future work. Other Functionality ::: Voice-Controlled Menu Search. An additional functionality enables the user to get parts of the restaurant menu relevant to the current user utterance as responses. This is achieved by performing an additional ranking step of available menu items and retrieving the ones that are semantically relevant to the user utterance using exactly the same methodology as with ranking other responses. An example of this functionality is shown in Figure FIGREF21. Other Functionality ::: Resetting and Switching to Booking. The restaurant search system needs to support the discrete actions of restarting the conversation (i.e., resetting the set $R$), and should enable transferring to the slot-based table booking flow. This is achieved using two binary intent classifiers, that are run at each step in the dialogue. These classifiers make use of the already-computed $h_c$ vector that represents the user's latest text. A single-layer neural net is learned on top of the 512-dimensional encoding, with a ReLU activation and 100 hidden nodes. To train the classifiers, sets of 20 relevant paraphrases (e.g., “Start again”) are provided as positive examples. Finally, when the system successfully switches to the booking scenario, it proceeds to the slot filling task: it aims to extract all the relevant booking information from the user (e.g., date, time, number of people to dine). 
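The two binary intent classifiers mentioned above (a single 100-unit ReLU layer on top of the frozen 512-dimensional encoding $h_c$) could look like the following sketch; the loss and any training detail beyond what the text states are assumptions.

```python
import torch
import torch.nn as nn

class IntentClassifier(nn.Module):
    """Binary intent detector over the frozen context encoding h_c (sketch)."""
    def __init__(self, d_context=512, d_hidden=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_context, d_hidden),
            nn.ReLU(),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, h_c):
        return self.net(h_c).squeeze(-1)   # logit; train with nn.BCEWithLogitsLoss

# One classifier handles "restart" intents, another the switch to the booking flow;
# each is trained on ~20 positive paraphrases plus sampled negatives (the negatives are an assumption).
```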
The entire flow of the system illustrating both the search phase and the booking phase is provided as the supplemental video material. Conclusion and Future Work This paper has presented a general approach to search-based dialogue that does not rely on explicit semantic representations such as dialogue acts or slot-value ontologies, and allows for multi-modal responses. In future work, we will extend the current demo system to more tasks and languages, and work with more sophisticated encoders and ranking functions. Besides the initial dialogue flow from this work (§SECREF3), we will also work with more complex flows dealing, e.g., with user intent shifts.
Was PolyResponse evaluated against some baseline?
No
Introduction Blogging gained momentum in 1999 and became especially popular after the launch of freely available, hosted platforms such as blogger.com or livejournal.com. Blogging has progressively been used by individuals to share news, ideas, and information, but it has also developed a mainstream role to the extent that it is being used by political consultants and news services as a tool for outreach and opinion forming as well as by businesses as a marketing tool to promote products and services BIBREF0 . For this paper, we compiled a very large geolocated collection of blogs, written by individuals located in the U.S., with the purpose of creating insightful mappings of the blogging community. In particular, during May-July 2015, we gathered the profile information for all the users that have self-reported their location in the U.S., along with a number of posts for all their associated blogs. We utilize this blog collection to generate maps of the U.S. that reflect user demographics, language use, and distributions of psycholinguistic and semantic word classes. We believe that these maps can provide valuable insights and partial verification of previous claims in support of research in linguistic geography BIBREF1 , regional personality BIBREF2 , and language analysis BIBREF3 , BIBREF4 , as well as psychology and its relation to human geography BIBREF5 . Data Collection Our premise is that we can generate informative maps using geolocated information available on social media; therefore, we guide the blog collection process with the constraint that we only accept blogs that have specific location information. Moreover, we aim to find blogs belonging to writers from all 50 U.S. states, which will allow us to build U.S. maps for various dimensions of interest. We first started by collecting a set of profiles of bloggers that met our location specifications by searching individual states on the profile finder on http://www.blogger.com. Starting with this list, we can locate the profile page for a user, and subsequently extract additional information, which includes fields such as name, email, occupation, industry, and so forth. It is important to note that the profile finder only identifies users that have an exact match to the location specified in the query; we thus built and ran queries that used both state abbreviations (e.g., TX, AL), as well as the states' full names (e.g., Texas, Alabama). After completing all the processing steps, we identified 197,527 bloggers with state location information. For each of these bloggers, we found their blogs (note that a blogger can have multiple blogs), for a total of 335,698 blogs. For each of these blogs, we downloaded the 21 most recent blog postings, which were cleaned of HTML tags and tokenized, resulting in a collection of 4,600,465 blog posts. Maps from Blogs Our dataset provides mappings between location, profile information, and language use, which we can leverage to generate maps that reflect demographic, linguistic, and psycholinguistic properties of the population represented in the dataset. People Maps The first map we generate depicts the distribution of the bloggers in our dataset across the U.S. Figure FIGREF1 shows the density of users in our dataset in each of the 50 states. For instance, the densest state was found to be California with 11,701 users. The second densest is Texas, with 9,252 users, followed by New York, with 9,136. The state with the fewest bloggers is Delaware with 1,217 users. 
Not surprisingly, this distribution correlates well with the population of these states, with a Spearman's rank correlation INLINEFORM0 of 0.91 and a p-value INLINEFORM1 0.0001, and is very similar to the one reported in Lin and Halavais Lin04. Figure FIGREF1 also shows the cities mentioned most often in our dataset. In particular, it illustrates all 227 cities that have at least 100 bloggers. The bigger the dot on the map, the larger the number of users found in that city. The five top blogger-dense cities, in order, are: Chicago, New York, Portland, Seattle, and Atlanta. We also generate two maps that delineate the gender distribution in the dataset. Overall, the blogging world seems to be dominated by females: out of 153,209 users who self-reported their gender, only 52,725 are men and 100,484 are women. Figures FIGREF1 and FIGREF1 show the percentage of male and female bloggers in each of the 50 states. As seen in this figure, there are more than the average number of male bloggers in states such as California and New York, whereas Utah and Idaho have a higher percentage of women bloggers. Another profile element that can lead to interesting maps is the Industry field BIBREF6 . Using this field, we created different maps that plot the geographical distribution of industries across the country. As an example, Figure FIGREF2 shows the percentage of the users in each state working in the automotive and tourism industries respectively. Linguistic Maps Another use of the information found in our dataset is to build linguistic maps, which reflect the geographic lexical variation across the 50 states BIBREF7 . We generate maps that represent the relative frequency by which a word occurs in the different states. Figure FIGREF3 shows sample maps created for two different words. The figure shows the map generated for one location specific word, Maui, which unsurprisingly is found predominantly in Hawaii, and a map for a more common word, lake, which has a high occurrence rate in Minnesota (Land of 10,000 Lakes) and Utah (home of the Great Salt Lake). Our demo described in Section SECREF4 , can also be used to generate maps for function words, which can be very telling regarding people's personality BIBREF8 . Psycholinguistic and Semantic Maps LIWC. In addition to individual words, we can also create maps for word categories that reflect a certain psycholinguistic or semantic property. Several lexical resources, such as Roget or Linguistic Inquiry and Word Count BIBREF9 , group words into categories. Examples of such categories are Money, which includes words such as remuneration, dollar, and payment; or Positive feelings with words such as happy, cheerful, and celebration. Using the distribution of the individual words in a category, we can compile distributions for the entire category, and therefore generate maps for these word categories. For instance, figure FIGREF8 shows the maps created for two categories: Positive Feelings and Money. The maps are not surprising, and interestingly they also reflect an inverse correlation between Money and Positive Feelings . Values. We also measure the usage of words related to people's core values as reported by Boyd et al. boyd2015. The sets of words, or themes, were excavated using the Meaning Extraction Method (MEM) BIBREF10 . MEM is a topic modeling approach applied to a corpus of texts created by hundreds of survey respondents from the U.S. who were asked to freely write about their personal values. 
To illustrate, Figure FIGREF9 shows the geographical distributions of two of these value themes: Religion and Hard Work. Southeastern states often considered as the nation's “Bible Belt” BIBREF11 were found to have generally higher usage of Religion words such as God, bible, and church. Another broad trend was that western-central states (e.g., Wyoming, Nebraska, Iowa) commonly blogged about Hard Work, using words such as hard, work, and job more often than bloggers in other regions. Web Demonstration A prototype, interactive charting demo is available at http://lit.eecs.umich.edu/~geoliwc/. In addition to drawing maps of the geographical distributions on the different LIWC categories, the tool can report the three most and least correlated LIWC categories in the U.S. and compare the distributions of any two categories. Conclusions In this paper, we showed how we can effectively leverage a prodigious blog dataset. Not only does the dataset bring out the extensive linguistic content reflected in the blog posts, but also includes location information and rich metadata. These data allow for the generation of maps that reflect the demographics of the population, variations in language use, and differences in psycholinguistic and semantic categories. These mappings can be valuable to both psychologists and linguists, as well as lexicographers. A prototype demo has been made available together with the code used to collect our dataset. Acknowledgments This material is based in part upon work supported by the National Science Foundation (#1344257) and by the John Templeton Foundation (#48503). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the John Templeton Foundation. We would like to thank our colleagues Hengjing Wang, Jiatao Fan, Xinghai Zhang, and Po-Jung Huang who provided technical help with the implementation of the demo.
How do they obtain psychological dimensions of people?
using the Meaning Extraction Method
Introduction The automatic identification, extraction and representation of the information conveyed in texts is a key task nowadays. In fact, this research topic is increasing its relevance with the exponential growth of social networks and the need to have tools that are able to automatically process them BIBREF0. Some of the domains where it is more important to be able to perform this kind of action are the juridical and legal ones. Effectively, it is crucial to have the capability to analyse open access text sources, like social nets (Twitter and Facebook, for instance), blogs, online newspapers, and to be able to extract the relevant information and represent it in a knowledge base, allowing posterior inferences and reasoning. In the context of this work, we will present results of the R&D project Agatha, where we developed a pipeline of processes that analyses texts (in Portuguese, Spanish, or English) and is able to populate a specialized ontology BIBREF1 (related to criminal law) for the representation of events, depicted in such texts. Events are represented by objects having associated actions, agents, elements, places and time. After having populated the event ontology, we have an automatic process linking the identified entities to external referents, creating, this way, a linked data knowledge base. It is important to point out that, having the text information represented in an ontology allows us to perform complex queries and inferences, which can detect patterns of typical criminal actions. Another axe of innovation in this research is the development, for the Portuguese language, of a pipeline of Natural Language Processing (NLP) processes, that allows us to fully process sentences and represent their content in an ontology. Although there are several tools for the processing of the Portuguese language, the combination of all these steps in a integrated tool is a new contribution. Moreover, we have already explored other related research path, namely author profiling BIBREF2, aggression identification BIBREF3 and hate-speech detection BIBREF4 over social media, plus statute law retrieval and entailment for Japanese BIBREF5. The remainder of this paper is organized as follows: Section SECREF2 describes our proposed architecture together with the Portuguese modules for its computational processing. Section SECREF3 discusses different design options and Section SECREF4 provides our conclusions together with some pointers for future work. Framework for Processing Portuguese Text The framework for processing Portuguese texts is depicted in Fig. FIGREF2, which illustrates how relevant pieces of information are extracted from the text. Namely, input files (Portuguese texts) go through a series of modules: part-of-speech tagging, named entity recognition, dependency parsing, semantic role labeling, subject-verb-object triple extraction, and lexicon matching. The main goal of all the modules except lexicon matching is to identify events given in the text. These events are then used to populate an ontology. The lexicon matching, on the other hand, was created to link words that are found in the text source with the data available not only on Eurovoc BIBREF6 thesaurus but also on the EU's terminology database IATE BIBREF7 (see Section SECREF12 for details). Most of these modules are deeply related and are detailed in the subsequent subsections. Framework for Processing Portuguese Text ::: Part-Of-Speech Tagging Part-of-speech tagging happens after language detection. 
It labels each word with a tag that indicates its syntactic role in the sentence. For instance, a word could be a noun, verb, adjective or adverb (or other syntactic tag). We used the Freeling BIBREF8 library to provide the tags. This library resorts to a Hidden Markov Model as described by Brants BIBREF9. The end result is a tag for each word as described by the EAGLES tagset. Framework for Processing Portuguese Text ::: Named Entity Recognition We use the named entity recognition module after part-of-speech tagging. This module labels each part of the sentence with categories such as "PERSON", "LOCATION", or "ORGANIZATION". We also used Freeling to label the named entities, and the details of the algorithm are given in the paper by Carreras et al. BIBREF10. Aside from the three aforementioned categories, we also extracted "DATE/TIME" and "CURRENCY" values by looking at the part-of-speech tags: date/time words have a tag of "W", while currencies have "Zm". Framework for Processing Portuguese Text ::: Dependency Parsing Dependency parsing involves tagging a word based on different features to indicate if it is dependent on another word. The Freeling library also has dependency parsing models for Portuguese. Since we wanted to build an SRL (Semantic Role Labeling) module on top of the dependency parser and the currently released version of Freeling does not have an SRL module for Portuguese, we trained a different Portuguese dependency parsing model that was compatible (in terms of used tags) with the available annotated data. We used the dataset from System-T BIBREF11, which has SRL tags as well as the other preceding tags. It was necessary to do some pre-processing and tag mapping in order to make it viable to train a Portuguese model. We made 589 tag conversions over 14 different categories. The breakdown of tag conversions per category is given in Table TABREF7. These rules can be further seen in the corresponding GitHub repository BIBREF12. The modified training and development datasets are also available on another GitHub repository page BIBREF13 for further research and comparison purposes. Framework for Processing Portuguese Text ::: Semantic Role Labeling We execute the SRL (Semantic Role Labeling) module after obtaining the word dependencies. This module aims at giving a semantic role to a syntactic constituent of a sentence. The semantic role is always in relation to a verb, and these roles could be an actor, object, time, or location, which are then tagged as A0, A1, AM-TMP, AM-LOC, respectively. We trained a model for this module on top of the dependency parser described in the previous subsection using the modified dataset from System-T. The module also needs co-reference resolution to work and, to achieve this, we adapted the Spanish co-reference modules for Portuguese, changing the words that are equivalent (in total, we changed 253 words). Framework for Processing Portuguese Text ::: SVO Extraction From the yield of the SRL (Semantic Role Labeling) module, our framework can distinguish actors, actions, places, times and objects in the sentences. Utilizing this extracted data, we can build subject-verb-object (SVO) triples using the SVO extraction algorithm BIBREF14. The algorithm finds, for each sentence, the verb and the tuples related to that verb using Semantic Role Labeling (subsection SECREF8). After the extraction of SVOs from texts, they are inserted into a specific event ontology (see section SECREF12 for the creation of a knowledge base).
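The SVO step can be illustrated with the sketch below, which assumes the SRL output has already been gathered into one role dictionary per verb; the data layout and the Portuguese example are illustrative, not Freeling's or System-T's actual output format.

```python
def extract_events(srl_frames):
    """srl_frames: one dict per verb, e.g.
       {"verb": "roubou", "A0": "o suspeito", "A1": "a carteira",
        "AM-LOC": "na estação", "AM-TMP": "ontem"}
    Returns event records with subject-verb-object plus optional place/time slots."""
    events = []
    for frame in srl_frames:
        subj, obj = frame.get("A0"), frame.get("A1")
        if subj is None or obj is None:          # keep only complete SVO triples
            continue
        events.append({
            "actor": subj,
            "action": frame["verb"],
            "object": obj,
            "place": frame.get("AM-LOC"),
            "time": frame.get("AM-TMP"),
        })
    return events

print(extract_events([{"verb": "roubou", "A0": "o suspeito", "A1": "a carteira",
                       "AM-LOC": "na estação", "AM-TMP": "ontem"}]))
```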
Framework for Processing Portuguese Text ::: Lexicon Matching The sole purpose of this module is to find important terms and/or concepts in the extracted text. To do this, we use Eurovoc BIBREF6, a multilingual thesaurus that was developed for and by the European Union. Eurovoc has 21 fields and each field is further divided into a variable number of micro-thesauri. Here, due to the application of this work in the Agatha project (mentioned in Section SECREF1), we use the terms of the criminal law BIBREF15 micro-thesaurus. Further, we classified each term of the criminal law micro-thesaurus into four categories, namely actor, event, place and object. The term classification can be seen in Table TABREF11. After the classification of these terms, we implemented two different matching algorithms between the extracted words and the criminal law micro-thesaurus terms. The first is an exact string match, wherein the lowercased words of the input sentences are matched exactly against the lowercased predefined terms. The second matching algorithm uses Levenshtein distance BIBREF16, allowing near-matches that are close enough to the target term. Framework for Processing Portuguese Text ::: Linked Data: Ontology, Thesaurus and Terminology In the computer science field, an ontology can be defined as: a formal specification of a conceptualization; a shared vocabulary and taxonomy which models a domain with the definition of objects and/or concepts and their properties and relations; the representation of entities, ideas, and events, along with their properties and relations, according to a system of categories. A knowledge base is one kind of repository typically used to store answers to questions or solutions to problems, enabling rapid search, retrieval, and reuse, either by an ontology or directly by those requesting support. For a more detailed description of ontologies and knowledge bases, see for instance BIBREF17. For designing an ontology adequate for our goals, we referred to the Simple Event Model (SEM) BIBREF18 as a baseline model. A pictorial representation of this ontology is given in Figure FIGREF16. Considering the criminal law domain case study, we made a few changes to the original SEM ontology. The entities of the model are: Actor: person involved with the event; Place: location of the event; Time: time of the event; Object: what the actor acts upon; Organization: organization involved with the event; Currency: money involved with the event. The proposed ontology was designed in such a manner that it can incorporate information extracted from multiple documents. In this context, suppose that the source of the documents is a legal police department, where each document belongs to a particular case/crime; further, a single case can have documents in multiple languages. Now, if case 1 has 100 documents and case 2 has 100 documents, then there is not only a connection among the documents of a single case but also among all the cases, across the combined 200 documents. In this way, the proposed method is able to produce a detailed and well-connected knowledge base. Figure FIGREF23 shows the proposed ontology, which, in our evaluation procedure, was populated with 3,121 event entries from 51 documents. The Protege BIBREF19 tool was used for creating the ontology and GraphDB BIBREF20 for populating and querying the data. GraphDB is an enterprise-ready Semantic Graph Database, compliant with W3C Standards.
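Before turning to GraphDB's query interface, the following sketch shows how one extracted event could be serialized as RDF triples with rdflib. The SEM namespace and its hasActor/hasPlace/hasTime properties come from the Simple Event Model, but the project namespace and the extra predicate names used here are assumptions, not the actual Agatha ontology.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

SEM = Namespace("http://semanticweb.cs.vu.nl/2009/11/sem/")   # Simple Event Model
EX = Namespace("http://example.org/agatha/")                  # placeholder project namespace

def add_event(g, event_id, actor, action, obj, place=None, time=None):
    """Add one extracted SVO event to the graph using SEM-style properties (sketch)."""
    ev = EX[f"event/{event_id}"]
    g.add((ev, RDF.type, SEM.Event))
    g.add((ev, SEM.hasActor, Literal(actor)))
    g.add((ev, EX.hasAction, Literal(action)))   # action/object predicate names are assumed
    g.add((ev, EX.actsUpon, Literal(obj)))
    if place:
        g.add((ev, SEM.hasPlace, Literal(place)))
    if time:
        g.add((ev, SEM.hasTime, Literal(time)))
    return ev

g = Graph()
g.bind("sem", SEM)
g.bind("ex", EX)
add_event(g, 1, "o suspeito", "roubou", "a carteira", place="na estação", time="ontem")
print(g.serialize(format="turtle"))
```

A triple store such as GraphDB would then ingest this kind of serialization and expose it through the SPARQL operations listed below.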
Semantic Graph Databases (also called RDF triplestores) provide the core infrastructure for solutions where modeling agility, data integration, relationship exploration, and cross-enterprise data publishing and consumption are important. GraphDB has a SPARQL (SQL-like query language) interface for RDF graph databases with the following types: SELECT: returns tabular results CONSTRUCT: creates a new RDF graph based on query results ASK: returns "YES", if the query has a solution, otherwise "NO" DESCRIBE: returns RDF data about a resource. This is useful when the RDF data structure in the data source is not known INSERT: inserts triples into a graph DELETE: deletes triples from a graph Furthermore, we have extended the ontology BIBREF21 to connect the extracted terms with Eurovoc criminal law (discussed in subsection SECREF10) and IATE BIBREF7 terms. IATE (Interactive Terminology for Europe) is the EU's general terminology database and its aim is to provide a web-based infrastructure for all EU terminology resources, enhancing the availability and standardization of the information. The extended ontology has a number of sub-classes for Actor, Event, Object and Place classes detailed in Table TABREF30. Discussion We have defined a major design principle for our architecture: it should be modular and not rely on human made rules allowing, as much as possible, its independence from a specific language. In this way, its potential application to another language would be easier, simply by changing the modules or the models of specific modules. In fact, we have explored the use of already existing modules and adopted and integrated several of these tools into our pipeline. It is important to point out that, as far as we know, there is no integrated architecture supporting the full processing pipeline for the Portuguese language. We evaluated several systems like Rembrandt BIBREF22 or LinguaKit: the former only has the initial steps of our proposal (until NER) and the later performed worse than our system. This framework, developed within the context of the Agatha project (described in Section SECREF1) has the full processing pipeline for Portuguese texts: it receives sentences as input and outputs ontological information: a) first performs all NLP typical tasks until semantic role labelling; b) then, it extracts subject-verb-object triples; c) and, then, it performs ontology matching procedures. As a final result, the obtained output is inserted into a specialized ontology. We are aware that each of the architecture modules can, and should, be improved but our main goal was the creation of a full working text processing pipeline for the Portuguese language. Conclusions and Future Work Besides the end–to–end NLP pipeline for the Portuguese language, the other main contributions of this work can be summarize as follows: Development of an ontology for the criminal law domain; Alignment of the Eurovoc thesaurus and IATE terminology with the ontology created; Representation of the extracted events from texts in the linked knowledge base defined. The obtained results support our claim that the proposed system can be used as a base tool for information extraction for the Portuguese language. Being composed by several modules, each of them with a high level of complexity, it is certain that our approach can be improved and an overall better performance can be achieved. 
As future work we intend, not only to continue improving the individual modules, but also plan to extend this work to the: automatic creation of event timelines; incorporation in the knowledge base of information obtained from videos or pictures describing scenes relevant to criminal investigations. Acknowledgments The authors would like to thank COMPETE 2020, PORTUGAL 2020 Program, the European Union, and ALENTEJO 2020 for supporting this research as part of Agatha Project SI & IDT number 18022 (Intelligent analysis system of open of sources information for surveillance/crime control). The authors would also like to thank LISP - Laboratory of Informatics, Systems and Parallelism.
Were any of the pipeline components based on deep learning models?
No
Introduction End-to-end speech-to-text translation (ST) has attracted much attention recently BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 given its simplicity against cascading automatic speech recognition (ASR) and machine translation (MT) systems. The lack of labeled data, however, has become a major blocker for bridging the performance gaps between end-to-end models and cascading systems. Several corpora have been developed in recent years. post2013improved introduced a 38-hour Spanish-English ST corpus by augmenting the transcripts of the Fisher and Callhome corpora with English translations. di-gangi-etal-2019-must created the largest ST corpus to date from TED talks but the language pairs involved are out of English only. beilharz2019librivoxdeen created a 110-hour German-English ST corpus from LibriVox audiobooks. godard-etal-2018-low created a Moboshi-French ST corpus as part of a rare language documentation effort. woldeyohannis provided an Amharic-English ST corpus in the tourism domain. boito2019mass created a multilingual ST corpus involving 8 languages from a multilingual speech corpus based on Bible readings BIBREF7. Previous work either involves language pairs out of English, very specific domains, very low resource languages or a limited set of language pairs. This limits the scope of study, including the latest explorations on end-to-end multilingual ST BIBREF8, BIBREF9. Our work is mostly similar and concurrent to iranzosnchez2019europarlst who created a multilingual ST corpus from the European Parliament proceedings. The corpus we introduce has larger speech durations and more translation tokens. It is diversified with multiple speakers per transcript/translation. Finally, we provide additional out-of-domain test sets. In this paper, we introduce CoVoST, a multilingual ST corpus based on Common Voice BIBREF10 for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. It includes a total 708 hours of French (Fr), German (De), Dutch (Nl), Russian (Ru), Spanish (Es), Italian (It), Turkish (Tr), Persian (Fa), Swedish (Sv), Mongolian (Mn) and Chinese (Zh) speeches, with French and German ones having the largest durations among existing public corpora. We also collect an additional evaluation corpus from Tatoeba for French, German, Dutch, Russian and Spanish, resulting in a total of 9.3 hours of speech. Both corpora are created at the sentence level and do not require additional alignments or segmentation. Using the official Common Voice train-development-test split, we also provide baseline models, including, to our knowledge, the first end-to-end many-to-one multilingual ST models. CoVoST is released under CC0 license and free to use. The Tatoeba evaluation samples are also available under friendly CC licenses. All the data can be acquired at https://github.com/facebookresearch/covost. Data Collection and Processing ::: Common Voice (CoVo) Common Voice BIBREF10 is a crowdsourcing speech recognition corpus with an open CC0 license. Contributors record voice clips by reading from a bank of donated sentences. Each voice clip was validated by at least two other users. Most of the sentences are covered by multiple speakers, with potentially different genders, age groups or accents. Raw CoVo data contains samples that passed validation as well as those that did not. To build CoVoST, we only use the former one and reuse the official train-development-test partition of the validated data. 
As of January 2020, the latest CoVo 2019-06-12 release includes 29 languages. CoVoST is currently built on that release and covers the following 11 languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian and Chinese. Validated transcripts were sent to professional translators. Note that the translators had access to the transcripts but not the corresponding voice clips since clips would not carry additional information. Since transcripts were duplicated due to multiple speakers, we deduplicated the transcripts before sending them to translators. As a result, different voice clips of the same content (transcript) will have identical translations in CoVoST for train, development and test splits. In order to control the quality of the professional translations, we applied various sanity checks to the translations BIBREF11. 1) For German-English, French-English and Russian-English translations, we computed sentence-level BLEU BIBREF12 with the NLTK BIBREF13 implementation between the human translations and the automatic translations produced by a state-of-the-art system BIBREF14 (the French-English system was a Transformer big BIBREF15 separately trained on WMT14). We applied this method to these three language pairs only as we are confident about the quality of the corresponding systems. Translations with a score that was too low were manually inspected and sent back to the translators when needed. 2) We manually inspected examples where the source transcript was identical to the translation. 3) We measured the perplexity of the translations using a language model trained on a large amount of clean monolingual data BIBREF14. We manually inspected examples where the translation had a high perplexity and sent them back to translators accordingly. 4) We computed the ratio of English characters in the translations. We manually inspected examples with a low ratio and sent them back to translators accordingly. 5) Finally, we used VizSeq BIBREF16 to calculate similarity scores between transcripts and translations based on LASER cross-lingual sentence embeddings BIBREF17. Samples with low scores were manually inspected and sent back for translation when needed. We also sanity check the overlaps of train, development and test sets in terms of transcripts and voice clips (via MD5 file hashing), and confirm they are totally disjoint. Data Collection and Processing ::: Tatoeba (TT) Tatoeba (TT) is a community built language learning corpus having sentences aligned across multiple languages with the corresponding speech partially available. Its sentences are on average shorter than those in CoVoST (see also Table TABREF2) given the original purpose of language learning. Sentences in TT are licensed under CC BY 2.0 FR and part of the speeches are available under various CC licenses. We construct an evaluation set from TT (for French, German, Dutch, Russian and Spanish) as a complement to CoVoST development and test sets. We collect (speech, transcript, English translation) triplets for the 5 languages and do not include those whose speech has a broken URL or is not CC licensed. We further filter these samples by sentence lengths (minimum 4 words including punctuations) to reduce the portion of short sentences. This makes the resulting evaluation set closer to real-world scenarios and more challenging. We run the same quality checks for TT as for CoVoST but we do not find poor quality translations according to our criteria. 
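The first of these checks (sentence-level BLEU between the professional translation and a strong MT system's output) can be sketched as follows with NLTK; the smoothing choice and the flagging threshold are illustrative assumptions.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def flag_suspicious(human_translations, machine_translations, threshold=0.1):
    """Flag human translations whose sentence BLEU against the MT output is suspiciously low."""
    smooth = SmoothingFunction().method1
    flagged = []
    for i, (human, machine) in enumerate(zip(human_translations, machine_translations)):
        score = sentence_bleu([machine.lower().split()], human.lower().split(),
                              smoothing_function=smooth)
        if score < threshold:
            flagged.append((i, score))           # sent back to the translators for inspection
    return flagged
```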
Finally, we report the overlap between CoVo transcripts and TT sentences in Table TABREF5. We found a minimal overlap, which makes the TT evaluation set a suitable additional test set when training on CoVoST. Data Analysis ::: Basic Statistics Basic statistics for CoVoST and TT are listed in Table TABREF2 including (unique) sentence counts, speech durations, speaker demographics (partially available) as well as vocabulary and token statistics (based on Moses-tokenized sentences by sacreMoses) on both transcripts and translations. We see that CoVoST has over 327 hours of German speeches and over 171 hours of French speeches, which, to our knowledge, corresponds to the largest corpus among existing public ST corpora (the second largest is 110 hours BIBREF18 for German and 38 hours BIBREF19 for French). Moreover, CoVoST has a total of 18 hours of Dutch speeches, to our knowledge, contributing the first public Dutch ST resource. CoVoST also has around 27-hour Russian speeches, 37-hour Italian speeches and 67-hour Persian speeches, which is 1.8 times, 2.5 times and 13.3 times of the previous largest public one BIBREF7. Most of the sentences (transcripts) in CoVoST are covered by multiple speakers with potentially different accents, resulting in a rich diversity in the speeches. For example, there are over 1,000 speakers and over 10 accents in the French and German development / test sets. This enables good coverage of speech variations in both model training and evaluation. Data Analysis ::: Speaker Diversity As we can see from Table TABREF2, CoVoST is diversified with a rich set of speakers and accents. We further inspect the speaker demographics in terms of sample distributions with respect to speaker counts, accent counts and age groups, which is shown in Figure FIGREF6, FIGREF7 and FIGREF8. We observe that for 8 of the 11 languages, at least 60% of the sentences (transcripts) are covered by multiple speakers. Over 80% of the French sentences have at least 3 speakers. And for German sentences, even over 90% of them have at least 5 speakers. Similarly, we see that a large portion of sentences are spoken in multiple accents for French, German, Dutch and Spanish. Speakers of each language also spread widely across different age groups (below 20, 20s, 30s, 40s, 50s, 60s and 70s). Baseline Results We provide baselines using the official train-development-test split on the following tasks: automatic speech recognition (ASR), machine translation (MT) and speech translation (ST). Baseline Results ::: Experimental Settings ::: Data Preprocessing We convert raw MP3 audio files from CoVo and TT into mono-channel waveforms, and downsample them to 16,000 Hz. For transcripts and translations, we normalize the punctuation, we tokenize the text with sacreMoses and lowercase it. For transcripts, we further remove all punctuation markers except for apostrophes. We use character vocabularies on all the tasks, with 100% coverage of all the characters. Preliminary experimentation showed that character vocabularies provided more stable training than BPE. For MT, the vocabulary is created jointly on both transcripts and translations. We extract 80-channel log-mel filterbank features, computed with a 25ms window size and 10ms window shift using torchaudio. The features are normalized to 0 mean and 1.0 standard deviation. We remove samples having more than 3,000 frames or more than 256 characters for GPU memory efficiency (less than 25 samples are removed for all languages). 
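The 80-channel log-mel features described above can be reproduced roughly with torchaudio as sketched below; n_fft, the log offset and the per-utterance normalization are reasonable assumptions consistent with the stated 25 ms / 10 ms windows at 16 kHz.

```python
import torch
import torchaudio

def logmel_features(path, n_mels=80):
    """16 kHz mono waveform -> normalized log-mel filterbank features (sketch).
    25 ms window = 400 samples, 10 ms shift = 160 samples."""
    waveform, sample_rate = torchaudio.load(path)
    assert sample_rate == 16000 and waveform.size(0) == 1
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=16000, n_fft=400, win_length=400, hop_length=160, n_mels=n_mels
    )(waveform)                                      # (1, n_mels, frames)
    logmel = torch.log(mel + 1e-6).squeeze(0).t()    # (frames, n_mels)
    return (logmel - logmel.mean()) / (logmel.std() + 1e-6)
```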
Baseline Results ::: Experimental Settings ::: Model Training Our ASR and ST models follow the architecture in berard2018end, but have 3 decoder layers like that in pino2019harnessing. For MT, we use a Transformer base architecture BIBREF15, but with 3 encoder layers, 3 decoder layers and 0.3 dropout. We use a batch size of 10,000 frames for ASR and ST, and a batch size of 4,000 tokens for MT. We train all models using Fairseq BIBREF20 for up to 200,000 updates. We use SpecAugment BIBREF21 for ASR and ST to alleviate overfitting. Baseline Results ::: Experimental Settings ::: Inference and Evaluation We use a beam size of 5 for all models. We use the best checkpoint by validation loss for MT, and average the last 5 checkpoints for ASR and ST. For MT and ST, we report case-insensitive tokenized BLEU BIBREF22 using sacreBLEU BIBREF23. For ASR, we report word error rate (WER) and character error rate (CER) using VizSeq. Baseline Results ::: Automatic Speech Recognition (ASR) For simplicity, we use the same model architecture for ASR and ST, although we do not leverage ASR models to pretrain ST model encoders later. Table TABREF18 shows the word error rate (WER) and character error rate (CER) for ASR models. We see that French and German perform best, as they are the two highest-resource languages in CoVoST. The other languages are relatively low resource (especially Turkish and Swedish), and the ASR models have difficulty learning from this data. Baseline Results ::: Machine Translation (MT) MT models take transcripts (without punctuation) as input and output translations (with punctuation). For simplicity, we do not change the text preprocessing methods for MT to correct this mismatch. Moreover, this mismatch also exists in cascading ST systems, where MT model inputs are the outputs of an ASR model. Table TABREF20 shows the BLEU scores of MT models. We notice that the results are consistent with what we see from the ASR models. For example, thanks to abundant training data, French achieves a decent BLEU score of 29.8/25.4, while German performs less well because its transcripts are less rich in content. The other languages are low resource in CoVoST, and it is difficult to train decent models without additional data or pre-training techniques. Baseline Results ::: Speech Translation (ST) CoVoST is a many-to-one multilingual ST corpus. While end-to-end one-to-many and many-to-many multilingual ST models have been explored very recently BIBREF8, BIBREF9, many-to-one multilingual models, to our knowledge, have not. We hence use CoVoST to examine this setting. Tables TABREF22 and TABREF23 show the BLEU scores for both bilingual and multilingual end-to-end ST models trained on CoVoST. We observe that combining speech from multiple languages consistently brings gains to low-resource languages (all besides French and German). This includes combinations of distant languages, such as Ru+Fr, Tr+Fr and Zh+Fr. Moreover, some combinations do bring gains to the high-resource language (French) as well: Es+Fr, Tr+Fr and Mn+Fr. We simply provide the most basic many-to-one multilingual baselines here, and leave the full exploration of the best configurations to future work. Finally, we note that for some language pairs, absolute BLEU numbers are relatively low as we restrict model training to the supervised data. We encourage the community to improve upon those baselines, for example by leveraging semi-supervised training.
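The inference setup above averages the last five checkpoints for ASR and ST; below is a minimal sketch of checkpoint averaging in plain PyTorch. Fairseq ships its own averaging script, so the function and the "model" key / file names used here are illustrative assumptions, not the exact tooling.

```python
from collections import OrderedDict
import torch

def average_checkpoints(paths):
    """Average the parameters of several checkpoints (e.g. the last 5 by update number).

    Assumes each checkpoint stores its parameters under the key "model",
    as Fairseq-style checkpoints typically do; adjust for other formats.
    """
    avg_state, n = OrderedDict(), len(paths)
    for path in paths:
        state = torch.load(path, map_location="cpu")["model"]
        for name, param in state.items():
            if not torch.is_floating_point(param):
                avg_state.setdefault(name, param.clone())  # keep integer buffers as-is
                continue
            acc = param.to(torch.float64)                  # accumulate in double precision
            avg_state[name] = avg_state[name] + acc if name in avg_state else acc
    for name, value in avg_state.items():
        if value.dtype == torch.float64:
            avg_state[name] = (value / n).to(torch.float32)
    return avg_state

# Usage: load the averaged weights back into a model before decoding, e.g.
# model.load_state_dict(average_checkpoints([f"checkpoint{n}.pt" for n in range(196, 201)]))
```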
Baseline Results ::: Multi-Speaker Evaluation In CoVoST, a large portion of transcripts are covered by multiple speakers with different genders, accents and age groups. Besides the standard corpus-level BLEU scores, we also want to evaluate model output variance on the same content (transcript) but different speakers. We hence propose to group samples (and their sentence BLEU scores) by transcript, and then calculate the average per-group mean and the average coefficient of variation, defined as $\textrm {BLEU}_{MS} = \frac{1}{|G|} \sum _{g\in G} \textrm {Mean}(g)$ and $\textrm {CoefVar}_{MS} = \frac{1}{|G^{\prime }|} \sum _{g\in G^{\prime }} \frac{\textrm {Std}(g)}{\textrm {Mean}(g)}$, where $G$ is the set of sentence BLEU scores grouped by transcript and $G^{\prime } = \lbrace g | g\in G, |g|>1, \textrm {Mean}(g) > 0 \rbrace $. $\textrm {BLEU}_{MS}$ provides a normalized quality score, as opposed to corpus-level BLEU or an unnormalized average of sentence BLEU, and $\textrm {CoefVar}_{MS}$ is a standardized measure of model stability against different speakers (the lower the better). Table TABREF24 shows the $\textrm {BLEU}_{MS}$ and $\textrm {CoefVar}_{MS}$ of our ST models on the CoVoST test set. We see that German and Persian have the worst $\textrm {CoefVar}_{MS}$ (least stable), given their rich speaker diversity in the test set and relatively small train set (see also Figure FIGREF6 and Table TABREF2). Dutch also has poor $\textrm {CoefVar}_{MS}$ because of the lack of training data. Multilingual models are consistently more stable on low-resource languages. Ru+Fr, Tr+Fr, Fa+Fr and Zh+Fr even have better $\textrm {CoefVar}_{MS}$ than all individual languages. Conclusion We introduce a multilingual speech-to-text translation corpus, CoVoST, for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. We also provide baseline results, including, to our knowledge, the first end-to-end many-to-one multilingual model for spoken language translation. CoVoST is free to use with a CC0 license, and the additional Tatoeba evaluation samples are also CC-licensed.
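A minimal sketch of the multi-speaker metrics defined above, assuming sentence-level BLEU scores have already been computed per sample (e.g. with sacreBLEU or VizSeq) and that each sample carries its source transcript; whether population or sample standard deviation is used is an assumption here.

```python
from collections import defaultdict
from statistics import mean, pstdev

def multi_speaker_metrics(samples):
    """samples: iterable of (transcript, sentence_bleu) pairs.

    Returns (BLEU_MS, CoefVar_MS): the average per-group mean sentence BLEU,
    and the average coefficient of variation over groups with more than one
    sample and a positive mean, following the definitions in the text.
    """
    groups = defaultdict(list)
    for transcript, bleu in samples:
        groups[transcript].append(bleu)

    bleu_ms = mean(mean(g) for g in groups.values())

    eligible = [g for g in groups.values() if len(g) > 1 and mean(g) > 0]
    coefvar_ms = mean(pstdev(g) / mean(g) for g in eligible) if eligible else float("nan")
    return bleu_ms, coefvar_ms

# Toy example: three clips of one transcript, one clip of another.
print(multi_speaker_metrics([("bonjour", 30.0), ("bonjour", 28.0),
                             ("bonjour", 35.0), ("merci", 12.0)]))
```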
How is the quality of the data empirically evaluated?
Validated transcripts were sent to professional translators., various sanity checks to the translations, sanity check the overlaps of train, development and test sets
2,435
qasper
3k
Introduction Text simplification aims to reduce the lexical and structural complexity of a text while still retaining its semantic meaning, which can help children, non-native speakers, and people with cognitive disabilities understand text better. Methods for automatic text simplification can generally be divided into three categories: lexical simplification (LS) BIBREF0 , BIBREF1 , rule-based BIBREF2 , and machine translation (MT) BIBREF3 , BIBREF4 . LS is mainly used to simplify text by substituting infrequent and difficult words with frequent and easier words. However, the LS approach faces several challenges: first, a great number of transformation rules are required for reasonable coverage; second, these rules have to be applied according to the specific context; third, the syntax and semantic meaning of the sentence are hard to retain. Rule-based approaches use hand-crafted rules for lexical and syntactic simplification, for example, substituting difficult words in a predefined vocabulary. However, such approaches need a lot of human involvement to manually define these rules, and it is impossible to give all possible simplification rules. The MT-based approach, which addresses text simplification as a monolingual machine translation problem translating from 'ordinary' to 'simplified' sentences, has attracted great attention in the last several years. In recent years, neural machine translation (NMT), a newly proposed deep learning approach, has achieved very impressive results BIBREF5 , BIBREF6 , BIBREF7 . Unlike traditional phrase-based machine translation systems, which operate on small components separately, an NMT system is trained end-to-end, without the need for external decoders, language models or phrase tables. Therefore, existing NMT architectures have been used for text simplification BIBREF8 , BIBREF4 . However, most recent work using NMT is limited by training data that are scarce and expensive to build. Language models trained on simplified corpora have played a central role in statistical text simplification BIBREF9 , BIBREF10 . One main reason is that the amount of available simplified corpora typically far exceeds the amount of parallel data, and model performance typically improves when more training data are available. Therefore, we expect simplified corpora to be especially helpful for NMT models. In contrast to previous work, which uses existing NMT models, we explore a strategy to include simplified training corpora in the training process without changing the neural network architecture. We first propose to pair simplified training sentences with synthetic ordinary sentences during training, and treat this synthetic data as additional training data. We obtain synthetic ordinary sentences through back-translation, i.e. an automatic translation of the simplified sentence into the ordinary sentence BIBREF11 . Then, we mix the synthetic data into the original (simplified-ordinary) data to train the NMT model. Experimental results on two publicly available datasets show that mixing simplified sentences into the training set improves the text simplification quality of NMT models over an NMT model using only the original training data. Related Work Automatic TS is a complicated natural language processing (NLP) task, which consists of lexical and syntactic simplification levels BIBREF12 .
It has attracted much attention recently as it can make texts more accessible to wider audiences and, when used as a pre-processing step, improve the performance of various NLP tasks and systems BIBREF13 , BIBREF14 , BIBREF15 . Usually, hand-crafted, supervised, and unsupervised methods based on resources like English Wikipedia and Simple English Wikipedia (EW-SEW) BIBREF10 are utilized for extracting simplification rules. The automatic TS task is easily confused with the automatic summarization task BIBREF3 , BIBREF16 , BIBREF6 ; TS is different from text summarization, as the focus of summarization is to reduce length and redundant content. At the lexical level, lexical simplification systems often substitute difficult words with more common words, which only requires a large corpus of regular text to obtain word embeddings from which words similar to the complex word can be retrieved BIBREF1 , BIBREF9 . Biran et al. BIBREF0 adopted an unsupervised method for learning pairs of complex and simpler synonyms from a corpus consisting of Wikipedia and Simple Wikipedia. At the sentence level, a sentence simplification model was proposed by tree transformation based on statistical machine translation (SMT) BIBREF3 . Woodsend and Lapata BIBREF17 presented a data-driven model based on a quasi-synchronous grammar, a formalism that can naturally capture structural mismatches and complex rewrite operations. Wubben et al. BIBREF18 proposed a phrase-based machine translation (PBMT) model that is trained on ordinary-simplified sentence pairs. Xu et al. BIBREF19 proposed a syntax-based machine translation model using simplification-specific objective functions and features to encourage simpler output. Compared with SMT, neural machine translation (NMT) has been shown to produce state-of-the-art results BIBREF5 , BIBREF7 . The central approach of NMT is an encoder-decoder architecture implemented by recurrent neural networks, which can represent the input sequence as a vector and then decode that vector into an output sequence. Therefore, NMT models have been used for the text simplification task and achieved good results BIBREF8 , BIBREF4 , BIBREF20 . The main limitation of the aforementioned NMT models for text simplification is their dependence on parallel ordinary-simplified sentence pairs. Because such pairs are expensive and time-consuming to build, the largest available dataset is EW-SEW, with only 296,402 sentence pairs, which is insufficient for an NMT model to obtain its best parameters. Considering that simplified data plays an important role in boosting fluency for phrase-based text simplification, we investigate the use of simplified data for neural text simplification. We are the first to show that neural translation models can be effectively adapted for text simplification with simplified corpora. Simplified Corpora We collected a simplified dataset from Simple English Wikipedia, which is freely available and has previously been used by many text simplification methods BIBREF0 , BIBREF10 , BIBREF3 . Simple English Wikipedia is much easier to understand than the regular English Wikipedia. We downloaded all articles from Simple English Wikipedia. For these articles, we removed stubs, navigation pages and any article that consisted of a single sentence. We then split them into sentences with Stanford CoreNLP BIBREF21 , and deleted sentences with fewer than 10 or more than 40 words.
After removing repeated sentences, we chose 600K sentences as the simplified data, containing 11.6M words with a vocabulary size of 82K. Text Simplification using Neural Machine Translation Our work is built on attention-based NMT BIBREF5 as an encoder-decoder network with recurrent neural networks (RNN), which simultaneously conducts dynamic alignment and generation of the target simplified sentence. The encoder uses a bidirectional RNN that consists of a forward and a backward RNN. Given a source sentence $x=(x_1,\dots ,x_M)$, the forward RNN and backward RNN calculate forward hidden states $(\overrightarrow{h}_1,\dots ,\overrightarrow{h}_M)$ and backward hidden states $(\overleftarrow{h}_1,\dots ,\overleftarrow{h}_M)$, respectively. The annotation vector $h_j$ is obtained by concatenating $\overrightarrow{h}_j$ and $\overleftarrow{h}_j$. The decoder is an RNN that predicts the target simplified sentence with Gated Recurrent Units (GRU) BIBREF22 . Given the previously generated target (simplified) words $y_{<t}=(y_1,\dots ,y_{t-1})$, the probability of the next target word $y_t$ is $$p(y_t \mid y_{<t}, x) = g(e_{y_{t-1}}, s_t, c_t),$$ where $g$ is a non-linear function, $e_{y_{t-1}}$ is the embedding of $y_{t-1}$, and $s_t$ is a decoding state for time step $t$, calculated by $$s_t = f(s_{t-1}, e_{y_{t-1}}, c_t),$$ where $f$ is the GRU activation function. The context vector $c_t$ is computed as a weighted sum of the annotations $h_j$: $$c_t = \sum _{j=1}^{M} \alpha _{tj} h_j,$$ where the weight $\alpha _{tj}$ is computed by $$\alpha _{tj} = \frac{\exp (e_{tj})}{\sum _{k=1}^{M} \exp (e_{tk})}, \qquad e_{tj} = v_a^{\top } \tanh (W_a s_{t-1} + U_a h_j),$$ where $v_a$, $W_a$ and $U_a$ are learned weight parameters. The training objective is to maximize the likelihood of the training data. Beam search is employed for decoding. Synthetic Simplified Sentences We build an auxiliary simplified-to-ordinary NMT system, which is first trained on the available parallel data. To leverage simplified sentences for improving the quality of the NMT model for text simplification, we adapt the back-translation approach proposed by Sennrich et al. BIBREF11 to our scenario. More concretely, given a sentence from the simplified corpus, we use the simplified-to-ordinary system in translate mode with greedy decoding to translate it into an ordinary sentence; this procedure is denoted as back-translation. In this way, we obtain synthetic parallel simplified-ordinary sentence pairs. Both the synthetic pairs and the available parallel data are then used as training data for the original NMT system. Evaluation We evaluate the performance of text simplification using neural machine translation on available parallel sentences and additional simplified sentences. Dataset. We use two simplification datasets (WikiSmall and WikiLarge). WikiSmall consists of ordinary and simplified sentences from the ordinary and simple English Wikipedias, and has been used as a benchmark for evaluating text simplification BIBREF17 , BIBREF18 , BIBREF8 . The training set has 89,042 sentence pairs, and the test set has 100 pairs. WikiLarge is also built from the Wikipedia corpus, and its training set contains 296,402 sentence pairs BIBREF19 , BIBREF20 . WikiLarge includes 8 (reference) simplifications for 2,359 sentences, split into 2,000 for development and 359 for testing. Metrics. Three metrics commonly used in text simplification are chosen in this paper. BLEU BIBREF5 is a traditional machine translation metric that assesses the degree to which generated simplifications differ from reference simplifications. FKGL measures the readability of the output BIBREF23 ; a smaller FKGL represents simpler output.
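Returning to the back-translation procedure described in the previous section, here is a minimal sketch of generating and mixing synthetic pairs; `translate_greedy` stands in for the trained simplified-to-ordinary NMT system run in translate mode and is an assumed interface, not the authors' actual code.

```python
import random

def build_synthetic_pairs(simplified_corpus, translate_greedy, sample_size=100_000, seed=0):
    """Back-translate a random sample of simplified sentences into ordinary ones.

    simplified_corpus: list of simplified sentences (monolingual).
    translate_greedy:  callable mapping a simplified sentence to an ordinary
                       sentence via greedy decoding (assumed wrapper around the
                       trained simplified-to-ordinary NMT model).
    Returns (ordinary, simplified) pairs, i.e. in the same direction as the
    original ordinary-to-simplified training data.
    """
    random.seed(seed)
    sample = random.sample(simplified_corpus, min(sample_size, len(simplified_corpus)))
    return [(translate_greedy(simple), simple) for simple in sample]

def mix_training_data(parallel_pairs, synthetic_pairs, seed=0):
    """Concatenate and shuffle original and synthetic pairs before NMT training."""
    random.seed(seed)
    mixed = list(parallel_pairs) + list(synthetic_pairs)
    random.shuffle(mixed)
    return mixed
```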
SARI is a recent text simplification metric that compares the output against both the source and the reference simplifications BIBREF20 . We also evaluate the output of all systems with a human evaluation metric denoted Simplicity BIBREF8 . Three non-native fluent English speakers are shown reference sentences and output sentences, and are asked whether the output sentence is much simpler (+2), somewhat simpler (+1), equally difficult (0), somewhat more difficult (-1), or much more difficult (-2) than the reference sentence. Methods. We use OpenNMT BIBREF24 as the implementation of the NMT system for all experiments BIBREF5 . We generally follow the default settings and training procedure described by Klein et al. (2017). We replace out-of-vocabulary words with a special UNK symbol. At prediction time, we replace UNK words with the source word receiving the highest attention score. The OpenNMT system trained on the parallel data alone is the baseline system. To obtain a synthetic parallel training set, we back-translate a random sample of 100K sentences from the collected simplified corpora. OpenNMT trained on the parallel data plus the synthetic data is our model. The benchmarks are run on an Intel(R) Core(TM) i7-5930K CPU with 32GB of memory, and models are trained on a single GeForce GTX 1080 GPU (Pascal) with CUDA v8.0. We choose three statistical text simplification systems. PBMT-R is a phrase-based method with a reranking post-processing step BIBREF18 . Hybrid performs sentence splitting and deletion operations based on discourse representation structures, and then simplifies sentences with PBMT-R BIBREF25 . SBMT-SARI BIBREF19 is a syntax-based translation model that uses the PPDB paraphrase database BIBREF26 and a modified tuning function (using SARI). We choose two neural text simplification systems. NMT is a basic attention-based encoder-decoder model trained with the OpenNMT framework using two LSTM layers with 500 hidden units each, the SGD optimizer, and a dropout rate of 0.3 BIBREF8 . Dress is an encoder-decoder model coupled with a deep reinforcement learning framework, with parameters chosen according to the original paper BIBREF20 . For the experiments with synthetic parallel data, we back-translate a random sample of 60,000 sentences from the collected simplified sentences into ordinary sentences. Our model, trained on the synthetic data and the available parallel data, is denoted NMT+synthetic. Results. Table 1 shows the results of all models on the WikiLarge dataset. We can see that our method (NMT+synthetic) obtains higher BLEU, lower FKGL and higher SARI compared with the other models, except for Dress on FKGL and SBMT-SARI on SARI. This verifies that including synthetic data during training is very effective, yielding an improvement over our NMT baseline of 2.11 BLEU, 1.7 FKGL and 1.07 SARI. We also substantially outperform Dress, which previously reported state-of-the-art results. The results of our human evaluation using Simplicity are also presented in Table 1. NMT with synthetic data is significantly better than PBMT-R, Dress, and SBMT-SARI on Simplicity, indicating that our method with simplified data is effective at creating simpler output. Results on the WikiSmall dataset are shown in Table 2. We see a substantial improvement (6.37 BLEU) over NMT from adding simplified training data with synthetic ordinary sentences. Compared with statistical machine translation models (PBMT-R, Hybrid, SBMT-SARI), our method (NMT+synthetic) still achieves better results, but slightly worse FKGL and SARI.
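Since FKGL figures prominently in the comparisons above, here is a minimal sketch of the Flesch-Kincaid Grade Level computation; the vowel-group syllable counter is a rough heuristic added for illustration, whereas careful evaluations typically rely on dedicated readability tooling.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of consecutive vowels (minimum of one)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fkgl(sentences):
    """Flesch-Kincaid Grade Level over a list of tokenized sentences.

    FKGL = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    Lower scores indicate simpler (more readable) output.
    """
    words = [w for sent in sentences for w in sent if w.isalpha()]
    num_sents, num_words = len(sentences), len(words)
    num_syllables = sum(count_syllables(w) for w in words)
    return 0.39 * num_words / num_sents + 11.8 * num_syllables / num_words - 15.59

# Toy usage on two tokenized output sentences.
print(fkgl([["the", "cat", "sat", "on", "the", "mat"],
            ["comprehensive", "simplification", "requires", "careful", "evaluation"]]))
```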
Similar to the results on WikiLarge, our method outperforms the other models in the human evaluation using Simplicity. In conclusion, our method produces better results compared with the baselines, which demonstrates the effectiveness of adding simplified training data. Conclusion In this paper, we propose a simple method for using simplified corpora during the training of NMT systems, with no changes to the network architecture. In experiments on two datasets, we achieve substantial gains on all tasks, and new state-of-the-art results, by back-translating simplified sentences into ordinary sentences and treating this synthetic data as additional training data. Because we do not change the neural network architecture to integrate simplified corpora, our method can be easily applied to other Neural Text Simplification (NTS) systems. We expect that the effectiveness of our method not only varies with the quality of the NTS system used for back-translation, but also depends on the amount of available parallel and simplified corpora. In this paper, we have only utilized simplified sentences from Wikipedia. In the future, many other text sources could be exploited, and the impact not only of their size but also of their domain should be investigated.
by how much did their model improve?
For the WikiLarge dataset, the improvement over baseline NMT is 2.11 BLEU, 1.7 FKGL and 1.07 SARI. For the WikiSmall dataset, the improvement over baseline NMT is 8.37 BLEU.
2,271
qasper
3k
Introduction The age of information dissemination without quality control has enabled malicious users to spread misinformation via social media and to target individual users with propaganda campaigns in order to achieve political and financial gains as well as advance a specific agenda. Disinformation often comes in two major forms: fake news and propaganda, which differ in that propaganda may be built upon true information (e.g., biased or loaded language, repetition, etc.). Prior work BIBREF0, BIBREF1, BIBREF2 on detecting propaganda has focused primarily on the document level, typically labeling all articles from a propagandistic news outlet as propaganda; thus, non-propagandistic articles from such an outlet are often mislabeled. To this end, EMNLP19DaSanMartino focuses on analyzing the use of propaganda and detecting specific propagandistic techniques in news articles at the sentence and fragment level, respectively, and thus promotes explainable AI. For instance, the following text is propaganda of type `slogan'. Trump tweeted: $\underbrace{\text{`}`{\texttt {BUILD THE WALL!}"}}_{\text{slogan}}$ Shared Task: This work addresses the two tasks in propaganda detection BIBREF3 at different granularities: (1) Sentence-level Classification (SLC), a binary classification that predicts whether a sentence contains at least one propaganda technique, and (2) Fragment-level Classification (FLC), a token-level (multi-label) classification that identifies both the spans and the type of propaganda technique(s). Contributions: (1) To address SLC, we design an ensemble of different classifiers based on Logistic Regression, CNN and BERT, and leverage transfer learning benefits using pre-trained embeddings/models from FastText and BERT. We also employ different features such as linguistic features (sentiment, readability, emotion, part-of-speech and named-entity tags, etc.), layout and topics. (2) To address FLC, we design a multi-task neural sequence tagger based on LSTM-CRF and linguistic features to jointly detect propagandistic fragments and their type. Moreover, we investigate performing FLC and SLC jointly in a multi-granularity network based on LSTM-CRF and BERT. (3) Our system (MIC-CIS) is ranked 3rd (out of 12 participants) and 4th (out of 25 participants) in the FLC and SLC tasks, respectively. System Description ::: Linguistic, Layout and Topical Features Some of the propaganda techniques BIBREF3 involve words and phrases that express strong emotional implications, exaggeration, minimization, doubt, national feeling, labeling, stereotyping, etc. This inspires us to extract different features (Table TABREF1) capturing the complexity of the text, sentiment, emotion, lexical properties (POS, NER, etc.), layout, etc. To further investigate, we use topical features (e.g., document-topic proportion) BIBREF4, BIBREF5, BIBREF6 at the sentence and document levels in order to detect irrelevant themes introduced into the issue being discussed (e.g., Red Herring). For word and sentence representations, we use pre-trained vectors from FastText BIBREF7 and BERT BIBREF8. System Description ::: Sentence-level Propaganda Detection Figure FIGREF2 (left) describes the three components of our system for the SLC task: features, classifiers and ensemble. The arrows from features to classifiers indicate that we investigate linguistic, layout and topical features in the two binary classifiers: LogisticRegression and CNN.
For CNN, we follow the architecture of DBLP:conf/emnlp/Kim14 for sentence-level classification, initializing the word vectors with FastText or BERT. We concatenate the features in the last hidden layer before classification. One of our strongest classifiers is BERT, which has achieved state-of-the-art performance on multiple NLP benchmarks. Following DBLP:conf/naacl/DevlinCLT19, we fine-tune BERT for binary classification, initializing with a pre-trained model (i.e., BERT-base, Cased). Additionally, we apply a decision function such that a sentence is tagged as propaganda if the prediction probability of the classifier is greater than a threshold ($\tau $). We relax the binary decision boundary to boost recall, similar to pankajgupta:CrossRE2019. Ensemble of Logistic Regression, CNN and BERT: In the final component, we collect the predictions (i.e., propaganda label) for each sentence from the three ($\mathcal {M}=3$) classifiers and thus obtain $\mathcal {M}$ predictions for each sentence. We explore two ensemble strategies (Table TABREF1): majority-voting and relax-voting, to boost precision and recall, respectively. System Description ::: Fragment-level Propaganda Detection Figure FIGREF2 (right) describes our system for the FLC task, where we design sequence taggers BIBREF9, BIBREF10 in three modes: (1) LSTM-CRF BIBREF11 with word embeddings ($w\_e$), character embeddings ($c\_e$) and token-level features ($t\_f$) such as polarity, POS, NER, etc. (2) LSTM-CRF+Multi-grain, which jointly performs FLC and SLC with FastTextWordEmb and BERTSentEmb, respectively. Here, we add the binary sentence classification loss to the sequence tagging loss, weighted by a factor of $\alpha $. (3) LSTM-CRF+Multi-task, which performs propagandistic span/fragment detection (PFD) and FLC (fragment detection + 19-way classification). Ensemble of Multi-grain, Multi-task LSTM-CRF with BERT: Here, we build an ensemble by considering the propagandistic fragments (and their types) from each of the sequence taggers. In doing so, we first perform majority voting at the fragment level for fragments whose spans exactly overlap. Non-overlapping fragments are all kept. However, when spans partially overlap (with the same label), we keep the fragment with the largest span. Experiments and Evaluation Data: While the SLC task is binary, FLC involves 18 propaganda techniques BIBREF3. We split (80-20%) the annotated corpus into 5 folds and 3 folds for the SLC and FLC tasks, respectively. The development set of each of the folds is referred to as dev (internal), while the un-annotated corpus used in leaderboard comparisons is referred to as dev (external). We remove empty and single-token sentences after tokenization. Experimental Setup: We use the PyTorch framework for the pre-trained BERT model (BERT-base-cased), fine-tuned for the SLC task. In the multi-granularity loss, we set $\alpha = 0.1$ for sentence classification based on dev (internal, fold1) scores. We use the BIO tagging scheme of NER in the FLC task. For CNN, we follow DBLP:conf/emnlp/Kim14 with filter sizes of [2, 3, 4, 5, 6], 128 filters and a batch size of 16. We compute binary F1 and macro F1 BIBREF12 for SLC and FLC, respectively, on dev (internal). Experiments and Evaluation ::: Results: Sentence-Level Propaganda Table TABREF10 shows the scores on dev (internal and external) for the SLC task. Observe that the pre-trained embeddings (FastText or BERT) outperform the TF-IDF vector representation.
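Before walking through the remaining rows, here is a minimal sketch of the sentence-level decision rules described above. The threshold rule and majority vote follow the text directly; the relax-voting rule shown here (flag a sentence if at least one classifier flags it) is an illustrative assumption about how that scheme trades precision for recall, not the authors' exact implementation.

```python
def tag_by_threshold(prob: float, tau: float = 0.35) -> bool:
    """Tag a sentence as propaganda if the classifier probability exceeds tau.

    Lowering tau relaxes the decision boundary and boosts recall.
    """
    return prob > tau

def majority_vote(predictions) -> bool:
    """Propaganda if more than half of the M classifiers say so (boosts precision)."""
    return sum(bool(p) for p in predictions) > len(predictions) / 2

def relax_vote(predictions) -> bool:
    """Relaxed rule: propaganda if at least one classifier says so (boosts recall).

    NOTE: this 'at least one' criterion is an assumption made for illustration;
    the text only states that relax-voting favors recall over precision.
    """
    return any(bool(p) for p in predictions)

# Example: probabilities from the three classifiers (LogReg, CNN, BERT) for one sentence.
probs = [0.2, 0.4, 0.6]
preds = [tag_by_threshold(p) for p in probs]
print(majority_vote(preds), relax_vote(preds))  # False True
```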
In row r2, we apply a logistic regression classifier with BERTSentEmb, which leads to improved scores over FastTextSentEmb. Subsequently, we augment the sentence vector with additional features, which improves F1 on dev (external) but not on dev (internal). Next, we initialize the CNN with FastTextWordEmb or BERTWordEmb and augment the last hidden layer (before classification) with BERTSentEmb and feature vectors, leading to gains in F1 for both dev sets. Further, we fine-tune BERT and apply different thresholds when relaxing the decision boundary, where $\tau \ge 0.35$ is found to be optimal. We choose three different models for the ensemble: Logistic Regression, CNN and BERT on fold1, and subsequently an ensemble+ of r3, r6 and r12 from each of folds 1-5 (i.e., 15 models) to obtain predictions for dev (external). We investigate different ensemble schemes (r17-r19), where we observe that relax-voting improves recall and therefore yields the higher F1 (i.e., 0.673). In a post-processing step, we check for the repetition propaganda technique by computing the cosine similarity between the current sentence and its preceding $w=10$ sentence vectors (i.e., BERTSentEmb) in the document. If the cosine similarity is greater than $\lambda \in \lbrace .99, .95\rbrace $, then the current sentence is labeled as propaganda due to repetition. Comparing r19 and r21, we observe a gain in recall but an overall decrease in F1 when applying post-processing. Finally, we use the configuration of r19 on the test set. The ensemble+ of (r4, r7, r12) was analyzed after test submission. Table TABREF9 (SLC) shows that our submission is ranked at the 4th position. Experiments and Evaluation ::: Results: Fragment-Level Propaganda Table TABREF11 shows the scores on dev (internal and external) for the FLC task. Observe that the features (i.e., polarity, POS and NER in row II), when introduced into LSTM-CRF, improve F1. We run the multi-grained LSTM-CRF without BERTSentEmb (i.e., row III) and with it (i.e., row IV), where the latter improves scores on dev (internal) but not on dev (external). Finally, we perform multi-tasking with the additional auxiliary task of PFD. Given the scores on dev (internal and external) using the different configurations (rows I-V), it is difficult to infer the optimal configuration. Thus, we choose the two best configurations (II and IV) on the dev (internal) set and build an ensemble+ of their predictions (discussed in section SECREF6), leading to a boost in recall and thus an improved F1 on dev (external). Finally, we use the ensemble+ of (II and IV) from each of folds 1-3, i.e., $|{\mathcal {M}}|=6$ models, to obtain predictions on the test set. Table TABREF9 (FLC) shows that our submission is ranked at the 3rd position. Conclusion and Future Work Our system (Team: MIC-CIS) explores different neural architectures (CNN, BERT and LSTM-CRF) with linguistic, layout and topical features to address the tasks of fine-grained propaganda detection. We have demonstrated gains in performance due to the features, ensemble schemes, multi-tasking and multi-granularity architectures. Compared to the other participating systems, our submissions are ranked 3rd and 4th in the FLC and SLC tasks, respectively. In future work, we would like to enrich BERT models with linguistic, layout and topical features during their fine-tuning. Further, we would also be interested in understanding and analyzing what the neural networks learn, i.e., extracting salient fragments (or key-phrases) in a sentence that make it propaganda, similar to pankajgupta:2018LISA, in order to promote explainable AI.
Which basic neural architecture perform best by itself?
BERT
1,507
qasper
3k
Introduction Microblogging services such as Twitter and Weibo are popular social networking platforms that allow users to post messages of up to 140 characters. There are millions of active users on these platforms who stay connected with friends. Unfortunately, spammers also use them as tools to post malicious links, send unsolicited messages to legitimate users, etc. A sufficient number of spammers can sway public opinion and cause distrust of the social platform. Despite the use of rigid anti-spam rules, human-like spammers, whose homepages have photos, detailed profiles, etc., have emerged. Unlike previous "simple" spammers, whose tweets contain only malicious links, these "smart" spammers are more difficult to distinguish from legitimate users via content-based features alone BIBREF0 . There is a considerable amount of previous work on spammer detection on social platforms. Researchers from Twitter Inc. BIBREF1 collect bot accounts and perform analysis on their user behavior and user profile features. Lee et al. lee2011seven use so-called social honeypots to lure social spammers into retweeting and thereby build a benchmark dataset, which has been extensively explored in our paper. Some researchers focus on the clustering of URLs in tweets and the network graph of social spammers BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , showing the power of social relationship features. As for content information modeling, BIBREF6 apply improved sparse learning methods. However, few studies have adopted topic-based features. Some researchers BIBREF7 discuss topic characteristics of spamming posts, indicating that spammers are highly likely to dwell on certain topics such as promotion, but this may not be applicable to the current scenario of smart spammers. In this paper, we propose an efficient feature extraction method in which two new topic-based features are extracted and used to discriminate human-like spammers from legitimate users. We consider the historical tweets of each user as a document and use the Latent Dirichlet Allocation (LDA) model to compute the topic distribution for each user. Based on the calculated topic probabilities, two topic-based features are extracted: the Local Outlier Standard Score (LOSS), which captures a user's interest in different topics, and the Global Outlier Standard Score (GOSS), which reveals a user's interest in a specific topic in comparison with other users. The two features contain both local and global information, and their combination can distinguish human-like spammers effectively. To the best of our knowledge, this is the first time that features based on topic distributions are used for spammer classification. Experimental results on one public dataset and one self-collected dataset further validate that the two sets of extracted topic-based features achieve excellent performance on the human-like spammer classification problem compared with other state-of-the-art methods. In addition, we build a Weibo dataset, which contains both legitimate users and spammers. To summarize, our major contributions are two-fold: the two new topic-based features (LOSS and GOSS) for detecting human-like spammers, and the newly built Weibo dataset. In the following sections, we first propose the topic-based feature extraction method in Section 2, and then introduce the two datasets in Section 3. Experimental results are discussed in Section 4, and we conclude the paper in Section 5. Future work is presented in Section 6. Methodology In this section, we first provide some observations obtained after carefully exploring the social network, and then introduce the LDA model.
Based on the LDA model, we then describe how to obtain the topic probability vector for each user and how to compute the two topic-based features. Observation After exploring the homepages of a substantial number of spammers, we have two observations. 1) Social spammers can be divided into two categories. One is content polluters, whose tweets are all about certain kinds of advertisements and campaigns. The other is fake accounts, whose tweets resemble legitimate users' but appear to be simply random copies of others' posts made to avoid detection by anti-spam rules. 2) Legitimate users, content polluters and fake accounts show different patterns in the topics that interest them. Legitimate users mainly focus on a limited set of topics of personal interest and seldom post content unrelated to their concerns. Content polluters concentrate on certain topics. Fake accounts focus on a wide range of topics due to random copying and retweeting of other users' tweets. Spammers and legitimate users also show different levels of interest in some topics, e.g. commercial content, weather, etc. To better illustrate our observation, Figure 1 shows the topic distributions of spammers and legitimate users in the two employed datasets (the Honeypot dataset and the Weibo dataset). We can see that on both topics (topic-3 and topic-11) there is an obvious difference between the red bars and green bars, which represent spammers and legitimate users respectively. On the Honeypot dataset, spammers have a narrower distribution (the outliers in the red bar tail are not counted) than legitimate users. This is because there are more content polluters than fake accounts; in other words, spammers in this dataset tend to concentrate on limited topics. On the Weibo dataset, by contrast, fake accounts, which are interested in diverse topics, make up a large proportion of the spammers, and their distribution (red bars) is flatter than that of the legitimate users. Therefore, we can detect spammers by means of the differences in their topic distribution patterns. LDA model Blei et al. blei2003latent first presented Latent Dirichlet Allocation (LDA) as an example of a topic model. Each document $i$ is treated as a bag of words $W=\left\lbrace w_{i1},w_{i2},...,w_{iM}\right\rbrace $, where $M$ is the number of words. Each word is attributable to one of the document's topics $Z=\left\lbrace z_{i1},z_{i2},...,z_{iK}\right\rbrace $, where $K$ is the number of topics. $\psi _{k}$ is a multinomial distribution over words for topic $k$. $\theta _i$ is another multinomial distribution over topics for document $i$. The smoothed generative model is illustrated in Figure 2. $\alpha $ and $\beta $ are hyperparameters that affect the sparsity of the document-topic and topic-word distributions. In this paper, $\alpha $, $\beta $ and $K$ are empirically set to 0.3, 0.01 and 15. The entire content of each Twitter user is regarded as one document. We adopt Gibbs Sampling BIBREF8 to speed up the inference of LDA. Based on LDA, we obtain the topic probabilities for all users in the employed dataset as $X=\left\lbrace X_{1},X_{2},...,X_{n}\right\rbrace $, where $n$ is the number of users and each element $X_{i}$ is the topic probability vector for the $i^{th}$ document.
These raw topic probability vectors are the representation on top of which our features are developed. Topic-based Features Using the LDA model, each person in the dataset is associated with a topic probability vector $X_i$ . Let $x_{ik}\in X_{i}$ denote the likelihood that the $\emph {i}^{th}$ account favors the $\emph {k}^{th}$ topic in the dataset. Our topic-based features are then calculated as below. The Global Outlier Standard Score measures the degree to which a user's tweet content is related to a certain topic compared with the other users. Specifically, the "GOSS" score of user $i$ on topic $k$ can be calculated as Eq.( 12 ): $$\centering \begin{array}{ll} \mu \left(x_{k}\right)=\frac{\sum _{i=1}^{n} x_{ik}}{n},\\ GOSS\left(x_{ik}\right)=\frac{x_{ik}-\mu \left(x_k\right)}{\sqrt{\underset{i}{\sum }\left(x_{ik}-\mu \left(x_{k}\right)\right)^{2}}}. \end{array}$$ (Eq. 12) The value of $GOSS\left(x_{ik}\right)$ indicates the degree of interest of this person in the $\emph {k}^{th}$ topic. Specifically, if $GOSS\left(x_{ik}\right)$ > $GOSS\left(x_{jk}\right)$ , it means that the $\emph {i}^{th}$ person has more interest in topic $k$ than the $\emph {j}^{th}$ person. If the value of $GOSS\left(x_{ik}\right)$ is extremely high or low, the $\emph {i}^{th}$ person shows extreme interest or no interest in topic $k$ , which will probably be a distinctive pattern in the following classification. Therefore, the topics liked or disliked by the $\emph {i}^{th}$ person can be manifested by $f_{GOSS}^{i}=[GOSS(x_{i1})\cdots GOSS(x_{iK})]$ , from which the pattern of this person's topics of interest is found. We denote $f_{GOSS}^{i}$ as our first topic-based feature, which hopefully performs well on spammer detection. The Local Outlier Standard Score measures the degree of interest someone shows in a certain topic by considering his own homepage content only. For instance, the "LOSS" score of account $i$ on topic $k$ can be calculated as Eq.( 13 ): $$\centering \begin{array}{ll} \mu \left(x_{i}\right)=\frac{\sum _{k=1}^{K} x_{ik}}{K},\\ LOSS\left(x_{ik}\right)=\frac{x_{ik}-\mu \left(x_i\right)}{\sqrt{\underset{k}{\sum }\left(x_{ik}-\mu \left(x_{i}\right)\right)^{2}}}. \end{array}$$ (Eq. 13) $\mu (x_i)$ represents the average degree of interest across all topics for the $\emph {i}^{th}$ user and his tweet content. Similarly to $GOSS$ , the topics liked or disliked by the $\emph {i}^{th}$ person, considering only his own posts, can be manifested by $f_{LOSS}^{i}=[LOSS(x_{i1})\cdots LOSS(x_{iK})]$ , and $LOSS$ becomes our second topic-based feature for the $\emph {i}^{th}$ person. Dataset We use one public dataset, the Social Honeypot dataset, and one self-collected dataset, the Weibo dataset, to validate the effectiveness of our proposed features. Social Honeypot Dataset: Lee et al. lee2010devils created and deployed 60 seed social accounts on Twitter to attract spammers by reporting back which accounts interacted with them. They collected 19,276 legitimate users and 22,223 spammers in their dataset along with their tweet content over 7 months. This is our first test dataset. Our Weibo Dataset: Sina Weibo is one of the most famous social platforms in China, and it has implemented many features from Twitter. The 2197 legitimate user accounts in this dataset are provided by the Tianchi Competition held by Sina Weibo. The spammers are all purchased commercially from multiple vendors on the Internet. We checked them manually and collected 802 suitable "smart" spammer accounts.
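Returning to the GOSS and LOSS definitions above, here is a minimal numpy sketch that applies Eq. (12) and Eq. (13) to the $n \times K$ matrix of per-user topic probabilities returned by LDA; the epsilon guard against zero denominators and the toy Dirichlet data are added assumptions for illustration.

```python
import numpy as np

def goss(X: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Global Outlier Standard Score: standardize each topic column across users (Eq. 12).

    X has shape (n_users, n_topics); entry x_ik is the probability that user i
    favors topic k. Subtract the per-topic mean and divide by the square root
    of the summed squared deviations over users.
    """
    dev = X - X.mean(axis=0, keepdims=True)
    return dev / (np.sqrt((dev ** 2).sum(axis=0, keepdims=True)) + eps)

def loss(X: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Local Outlier Standard Score: standardize each user's row across topics (Eq. 13)."""
    dev = X - X.mean(axis=1, keepdims=True)
    return dev / (np.sqrt((dev ** 2).sum(axis=1, keepdims=True)) + eps)

# The classifier input simply concatenates the two feature blocks per user.
X = np.random.dirichlet(alpha=np.full(15, 0.3), size=100)  # toy topic proportions
features = np.hstack([goss(X), loss(X)])                   # shape (100, 30)
```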
Preprocessing: Before performing the experiments on the employed datasets, we first delete accounts with few posts from both datasets, since the number of tweets is itself highly indicative of spammers. For the English Honeypot dataset, we remove stopwords, punctuation and non-ASCII words, and apply stemming. For the Chinese Weibo dataset, we perform segmentation with "Jieba", a Chinese text segmentation tool. After these preprocessing steps, the Weibo dataset contains 2197 legitimate users and 802 spammers, and the Honeypot dataset contains 2218 legitimate users and 2947 spammers. It is worth mentioning that the Honeypot dataset has been reduced substantially because most of its Twitter accounts have only a limited number of posts, which is not enough to reveal their topical interests. Evaluation Metrics The evaluation indicators in our model are shown in Table 2 . We calculate precision, recall and F1-score as in Eq. ( 19 ). Precision is the ratio of selected accounts that are spammers. Recall is the ratio of spammers that are correctly detected. F1-score is the harmonic mean of precision and recall. $$precision =\frac{TP}{TP+FP}, recall =\frac{TP}{TP+FN}\nonumber \\ F1-score = \frac{2\times precision \times recall}{precision + recall}$$ (Eq. 19) Performance Comparisons with Baseline Three baseline classification methods, Support Vector Machines (SVM), Adaboost, and Random Forests, are adopted to evaluate our extracted features. We test each classification algorithm with scikit-learn BIBREF9 and run a 10-fold cross validation. On each dataset, the employed classifiers are trained with each individual feature first, and then with the combination of the two features. From Table 1 , we can see that GOSS+LOSS achieves the best F1-score among all feature sets. Moreover, classification with the combination of LOSS and GOSS increases accuracy by more than 3% compared with the raw topic distribution probabilities. Comparison with Other Features To compare our extracted features with previously used features for spammer detection, we use the three most discriminative feature sets according to Lee et al. lee2011seven (Table 4 ). Two classifiers (Adaboost and SVM) are selected to conduct the feature performance comparisons. Using Adaboost, our LOSS+GOSS features outperform all other features except for UFN, which is 2% higher than ours with regard to precision on the Honeypot dataset. This is caused by incorrectly classified spammers, which our manual check shows are mostly news sources: they keep posting all kinds of news pieces covering diverse topics, which is similar to the behavior of fake accounts. However, UFN, which is based on friendship networks, is more useful for public accounts that possess a large number of followers. The best recall value of our LOSS+GOSS features using SVM is up to 6% higher than the results of the other feature groups. Regarding F1-score, our features outperform all other features. To further show the advantages of our proposed features, we compare our combined LOSS+GOSS with the combination of all the features from Lee et al. lee2011seven (UFN+UC+UH). It is obvious that LOSS+GOSS has a great advantage over UFN+UC+UH in terms of recall and F1-score. Moreover, by combining our LOSS+GOSS features and the UFN+UC+UH features, we obtain a further 7.1% and 2.3% performance gain with regard to precision and F1-score with Adaboost, though with a slight decline in recall. With SVM, we get comparable results on recall and F1-score but about a 3.5% improvement in precision.
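A minimal scikit-learn sketch of the evaluation protocol used in these comparisons (10-fold cross-validation of SVM, Adaboost and Random Forest with precision, recall and F1); the classifiers use library-default hyperparameters here, which are not necessarily the settings used in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_validate

def evaluate_features(features: np.ndarray, labels: np.ndarray) -> None:
    """10-fold cross-validation, reporting mean precision, recall and F1 per classifier."""
    classifiers = {
        "SVM": SVC(),
        "Adaboost": AdaBoostClassifier(),
        "RandomForest": RandomForestClassifier(),
    }
    scoring = ["precision", "recall", "f1"]
    for name, clf in classifiers.items():
        scores = cross_validate(clf, features, labels, cv=10, scoring=scoring)
        print(name, {m: round(scores[f"test_{m}"].mean(), 3) for m in scoring})

# Usage (with the GOSS/LOSS features from the previous sketch and binary spam labels):
# evaluate_features(np.hstack([goss(X), loss(X)]), labels)
```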
Conclusion In this paper, we propose a novel feature extraction method to effectively detect "smart" spammers who post seemingly legitimate tweets and are thus difficult to identify by existing spammer classification methods. Using the LDA model, we obtain the topic probabilities for each Twitter user. By utilizing these topic probabilities, we extract our two topic-based features, GOSS and LOSS, which represent an account with global and local information respectively. Experimental results on a public dataset and a self-built Chinese microblog dataset validate the effectiveness of the proposed features. Future Work In future work, the method of combining local and global information can be further improved to maximize their individual strengths. We will also apply decision theory to enhance the performance of our proposed features. Moreover, we are building larger datasets on both Twitter and Weibo to further validate our method.
What is the benchmark dataset and is its quality high?
Social Honeypot dataset (public) and Weibo dataset (self-collected); yes
2,242
qasper
3k
Introduction This paper describes our approach and results for Task 2 of the CoNLL–SIGMORPHON 2018 shared task on universal morphological reinflection BIBREF0 . The task is to generate an inflected word form given its lemma and the context in which it occurs. Morphological (re)inflection from context is of particular relevance to the field of computational linguistics: it is compelling to estimate how well a machine-learned system can capture the morphosyntactic properties of a word given its context, and map those properties to the correct surface form for a given lemma. There are two tracks of Task 2 of CoNLL–SIGMORPHON 2018: in Track 1 the context is given in terms of word forms, lemmas and morphosyntactic descriptions (MSD); in Track 2 only word forms are available. See Table TABREF1 for an example. Task 2 is additionally split in three settings based on data size: high, medium and low, with high-resource datasets consisting of up to 70K instances per language, and low-resource datasets consisting of only about 1K instances. The baseline provided by the shared task organisers is a seq2seq model with attention (similar to the winning system for reinflection in CoNLL–SIGMORPHON 2016, BIBREF1 ), which receives information about context through an embedding of the two words immediately adjacent to the target form. We use this baseline implementation as a starting point and achieve the best overall accuracy of 49.87 on Task 2 by introducing three augmentations to the provided baseline system: (1) We use an LSTM to encode the entire available context; (2) We employ a multi-task learning approach with the auxiliary objective of MSD prediction; and (3) We train the auxiliary component in a multilingual fashion, over sets of two to three languages. In analysing the performance of our system, we found that encoding the full context improves performance considerably for all languages: 11.15 percentage points on average, although it also highly increases the variance in results. Multi-task learning, paired with multilingual training and subsequent monolingual finetuning, scored highest for five out of seven languages, improving accuracy by another 9.86% on average. System Description Our system is a modification of the provided CoNLL–SIGMORPHON 2018 baseline system, so we begin this section with a reiteration of the baseline system architecture, followed by a description of the three augmentations we introduce. Baseline The CoNLL–SIGMORPHON 2018 baseline is described as follows: The system is an encoder-decoder on character sequences. It takes a lemma as input and generates a word form. The process is conditioned on the context of the lemma [...] The baseline treats the lemma, word form and MSD of the previous and following word as context in track 1. In track 2, the baseline only considers the word forms of the previous and next word. [...] The baseline system concatenates embeddings for context word forms, lemmas and MSDs into a context vector. The baseline then computes character embeddings for each character in the input lemma. Each of these is concatenated with a copy of the context vector. The resulting sequence of vectors is encoded using an LSTM encoder. Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism. 
To that we add a few details regarding model size and training schedule: the number of LSTM layers is one; embedding size, LSTM layer size and attention layer size is 100; models are trained for 20 epochs; on every epoch, training data is subsampled at a rate of 0.3; LSTM dropout is applied at a rate 0.3; context word forms are randomly dropped at a rate of 0.1; the Adam optimiser is used, with a default learning rate of 0.001; and trained models are evaluated on the development data (the data for the shared task comes already split in train and dev sets). Our system Here we compare and contrast our system to the baseline system. A diagram of our system is shown in Figure FIGREF4 . The idea behind this modification is to provide the encoder with access to all morpho-syntactic cues present in the sentence. In contrast to the baseline, which only encodes the immediately adjacent context of a target word, we encode the entire context. All context word forms, lemmas, and MSD tags (in Track 1) are embedded in their respective high-dimensional spaces as before, and their embeddings are concatenated. However, we now reduce the entire past context to a fixed-size vector by encoding it with a forward LSTM, and we similarly represent the future context by encoding it with a backwards LSTM. We introduce an auxiliary objective that is meant to increase the morpho-syntactic awareness of the encoder and to regularise the learning process—the task is to predict the MSD tag of the target form. MSD tag predictions are conditioned on the context encoding, as described in UID15 . Tags are generated with an LSTM one component at a time, e.g. the tag PRO;NOM;SG;1 is predicted as a sequence of four components, INLINEFORM0 PRO, NOM, SG, 1 INLINEFORM1 . For every training instance, we backpropagate the sum of the main loss and the auxiliary loss without any weighting. As MSD tags are only available in Track 1, this augmentation only applies to this track. The parameters of the entire MSD (auxiliary-task) decoder are shared across languages. Since a grouping of the languages based on language family would have left several languages in single-member groups (e.g. Russian is the sole representative of the Slavic family), we experiment with random groupings of two to three languages. Multilingual training is performed by randomly alternating between languages for every new minibatch. We do not pass any information to the auxiliary decoder as to the source language of the signal it is receiving, as we assume abstract morpho-syntactic features are shared across languages. After 20 epochs of multilingual training, we perform 5 epochs of monolingual finetuning for each language. For this phase, we reduce the learning rate to a tenth of the original learning rate, i.e. 0.0001, to ensure that the models are indeed being finetuned rather than retrained. We keep all hyperparameters the same as in the baseline. Training data is split 90:10 for training and validation. We train our models for 50 epochs, adding early stopping with a tolerance of five epochs of no improvement in the validation loss. We do not subsample from the training data. We train models for 50 different random combinations of two to three languages in Track 1, and 50 monolingual models for each language in Track 2. 
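A minimal PyTorch-style sketch of the training regime described above: an unweighted sum of the main inflection loss and the auxiliary MSD-prediction loss, minibatches alternating randomly between languages, and monolingual finetuning at a tenth of the learning rate. The `model` interface (returning the two losses per batch) is an assumed simplification of the actual seq2seq implementation, not the authors' code.

```python
import random
import torch

def train_multilingual(model, loaders_by_lang, epochs=20, lr=1e-3):
    """Multilingual multi-task training: alternate languages per minibatch and
    backpropagate inflection loss + auxiliary MSD loss (unweighted sum)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    languages = list(loaders_by_lang)
    for _ in range(epochs):
        iterators = {lang: iter(loader) for lang, loader in loaders_by_lang.items()}
        steps = sum(len(loader) for loader in loaders_by_lang.values())
        for _ in range(steps):
            lang = random.choice(languages)
            try:
                batch = next(iterators[lang])
            except StopIteration:
                continue  # this language is exhausted for the current epoch
            # Assumed interface: the model returns the main (inflection) loss
            # and the auxiliary (MSD-prediction) loss for a batch.
            main_loss, msd_loss = model(batch, language=lang)
            loss = main_loss + msd_loss   # unweighted sum, as described above
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

def finetune_monolingual(model, loader, language, epochs=5, lr=1e-4):
    """Monolingual finetuning at a tenth of the original learning rate."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for batch in loader:
            main_loss, msd_loss = model(batch, language=language)
            optimizer.zero_grad()
            (main_loss + msd_loss).backward()
            optimizer.step()
```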
Instead of picking the single model that performs best on the development set and thus risking to select a model that highly overfits that data, we use an ensemble of the five best models, and make the final prediction for a given target form with a majority vote over the five predictions. Results and Discussion Test results are listed in Table TABREF17 . Our system outperforms the baseline for all settings and languages in Track 1 and for almost all in Track 2—only in the high resource setting is our system not definitively superior to the baseline. Interestingly, our results in the low resource setting are often higher for Track 2 than for Track 1, even though contextual information is less explicit in the Track 2 data and the multilingual multi-tasking approach does not apply to this track. We interpret this finding as an indicator that a simpler model with fewer parameters works better in a setting of limited training data. Nevertheless, we focus on the low resource setting in the analysis below due to time limitations. As our Track 1 results are still substantially higher than the baseline results, we consider this analysis valid and insightful. Ablation Study We analyse the incremental effect of the different features in our system, focusing on the low-resource setting in Track 1 and using development data. Encoding the entire context with an LSTM highly increases the variance of the observed results. So we trained fifty models for each language and each architecture. Figure FIGREF23 visualises the means and standard deviations over the trained models. In addition, we visualise the average accuracy for the five best models for each language and architecture, as these are the models we use in the final ensemble prediction. Below we refer to these numbers only. The results indicate that encoding the full context with an LSTM highly enhances the performance of the model, by 11.15% on average. This observation explains the high results we obtain also for Track 2. Adding the auxiliary objective of MSD prediction has a variable effect: for four languages (de, en, es, and sv) the effect is positive, while for the rest it is negative. We consider this to be an issue of insufficient data for the training of the auxiliary component in the low resource setting we are working with. We indeed see results improving drastically with the introduction of multilingual training, with multilingual results being 7.96% higher than monolingual ones on average. We studied the five best models for each language as emerging from the multilingual training (listed in Table TABREF27 ) and found no strong linguistic patterns. The en–sv pairing seems to yield good models for these languages, which could be explained in terms of their common language family and similar morphology. The other natural pairings, however, fr–es, and de–sv, are not so frequent among the best models for these pairs of languages. Finally, monolingual finetuning improves accuracy across the board, as one would expect, by 2.72% on average. The final observation to be made based on this breakdown of results is that the multi-tasking approach paired with multilingual training and subsequent monolingual finetuning outperforms the other architectures for five out of seven languages: de, en, fr, ru and sv. For the other two languages in the dataset, es and fi, the difference between this approach and the approach that emerged as best for them is less than 1%. 
The overall improvement of the multilingual multi-tasking approach over the baseline is 18.30%. Error analysis Here we study the errors produced by our system on the English test set to better understand the remaining shortcomings of the approach. A small portion of the wrong predictions point to an incorrect interpretation of the morpho-syntactic conditioning of the context, e.g. the system predicted plan instead of plans in the context Our _ include raising private capital. The majority of wrong predictions, however, are nonsensical, like bomb for job, fify for fixing, and gnderrate for understand. This observation suggests that generally the system did not learn to copy the characters of lemma into inflected form, which is all it needs to do in a large number of cases. This issue could be alleviated with simple data augmentation techniques that encourage autoencoding BIBREF2 . MSD prediction Figure FIGREF32 summarises the average MSD-prediction accuracy for the multi-tasking experiments discussed above. Accuracy here is generally higher than on the main task, with the multilingual finetuned setup for Spanish and the monolingual setup for French scoring best: 66.59% and 65.35%, respectively. This observation illustrates the added difficulty of generating the correct surface form even when the morphosyntactic description has been identified correctly. We observe some correlation between these numbers and accuracy on the main task: for de, en, ru and sv, the brown, pink and blue bars here pattern in the same way as the corresponding INLINEFORM0 's in Figure FIGREF23 . One notable exception to this pattern is fr where inflection gains a lot from multilingual training, while MSD prediction suffers greatly. Notice that the magnitude of change is not always the same, however, even when the general direction matches: for ru, for example, multilingual training benefits inflection much more than in benefits MSD prediction, even though the MSD decoder is the only component that is actually shared between languages. This observation illustrates the two-fold effect of multi-task training: an auxiliary task can either inform the main task through the parameters the two tasks share, or it can help the main task learning through its regularising effect. Related Work Our system is inspired by previous work on multi-task learning and multi-lingual learning, mainly building on two intuitions: (1) jointly learning related tasks tends to be beneficial BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 ; and (2) jointly learning related languages in an MTL-inspired framework tends to be beneficial BIBREF8 , BIBREF9 , BIBREF10 . In the context of computational morphology, multi-lingual approaches have previously been employed for morphological reinflection BIBREF2 and for paradigm completion BIBREF11 . In both of these cases, however, the available datasets covered more languages, 40 and 21, respectively, which allowed for linguistically-motivated language groupings and for parameter sharing directly on the level of characters. BIBREF10 explore parameter sharing between related languages for dependency parsing, and find that sharing is more beneficial in the case of closely related languages. Conclusions In this paper we described our system for the CoNLL–SIGMORPHON 2018 shared task on Universal Morphological Reinflection, Task 2, which achieved the best performance out of all systems submitted, an overall accuracy of 49.87. 
We showed in an ablation study that this is due to three core innovations, which extend a character-based encoder-decoder model: (1) a wide context window, encoding the entire available context; (2) multi-task learning with the auxiliary task of MSD prediction, which acts as a regulariser; (3) a multilingual approach, exploiting information across languages. In future work we aim to gain a better understanding of the increase in the variance of the results introduced by each of our modifications, and of the reasons for the varying effect of multi-task learning for different languages. Acknowledgements We gratefully acknowledge the support of the NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
What architecture does the decoder have?
LSTM
2,289
qasper
3k
Introduction In the age of information dissemination without quality control, malicious users are able to spread misinformation via social media and to target individual users with propaganda campaigns in order to achieve political and financial gains, as well as to advance a specific agenda. Disinformation is often compiled in two major forms: fake news and propaganda, which differ in that propaganda may be built upon true information (e.g., biased or loaded language, repetition, etc.). Prior works BIBREF0, BIBREF1, BIBREF2 in detecting propaganda have focused primarily on the document level, typically labeling all articles from a propagandistic news outlet as propaganda; thus, non-propagandistic articles from the outlet are often mislabeled. To this end, EMNLP19DaSanMartino focuses on analyzing the use of propaganda and detecting specific propagandistic techniques in news articles at the sentence and fragment levels, respectively, and thus promotes explainable AI. For instance, the following text is propaganda of type `slogan'. Trump tweeted: $\underbrace{\text{``}\texttt{BUILD THE WALL!}\text{''}}_{\text{slogan}}$ Shared Task: This work addresses the two tasks in propaganda detection BIBREF3 of different granularities: (1) Sentence-level Classification (SLC), a binary classification that predicts whether a sentence contains at least one propaganda technique, and (2) Fragment-level Classification (FLC), a token-level (multi-label) classification that identifies both the spans and the type of propaganda technique(s). Contributions: (1) To address SLC, we design an ensemble of different classifiers based on Logistic Regression, CNN and BERT, and leverage transfer learning benefits using the pre-trained embeddings/models from FastText and BERT. We also employ different features such as linguistic (sentiment, readability, emotion, part-of-speech and named entity tags, etc.), layout and topical features. (2) To address FLC, we design a multi-task neural sequence tagger based on LSTM-CRF and linguistic features to jointly detect propagandistic fragments and their types. Moreover, we investigate performing FLC and SLC jointly in a multi-granularity network based on LSTM-CRF and BERT. (3) Our system (MIC-CIS) is ranked 3rd (out of 12 participants) and 4th (out of 25 participants) in the FLC and SLC tasks, respectively. System Description ::: Linguistic, Layout and Topical Features Some of the propaganda techniques BIBREF3 involve words and phrases that express strong emotional implications, exaggeration, minimization, doubt, national feeling, labeling, stereotyping, etc. This inspires us to extract different features (Table TABREF1), including the complexity of text, sentiment, emotion, lexical features (POS, NER, etc.) and layout. To further investigate, we use topical features (e.g., document-topic proportion) BIBREF4, BIBREF5, BIBREF6 at the sentence and document levels in order to detect irrelevant themes introduced into the issue being discussed (e.g., Red Herring). For word and sentence representations, we use pre-trained vectors from FastText BIBREF7 and BERT BIBREF8. System Description ::: Sentence-level Propaganda Detection Figure FIGREF2 (left) describes the three components of our system for the SLC task: features, classifiers and ensemble. The arrows from features to classifiers indicate that we investigate linguistic, layout and topical features in the two binary classifiers: Logistic Regression and CNN.
For CNN, we follow the architecture of DBLP:conf/emnlp/Kim14 for sentence-level classification, initializing the word vectors by FastText or BERT. We concatenate features in the last hidden layer before classification. One of our strong classifiers includes BERT that has achieved state-of-the-art performance on multiple NLP benchmarks. Following DBLP:conf/naacl/DevlinCLT19, we fine-tune BERT for binary classification, initializing with a pre-trained model (i.e., BERT-base, Cased). Additionally, we apply a decision function such that a sentence is tagged as propaganda if prediction probability of the classifier is greater than a threshold ($\tau $). We relax the binary decision boundary to boost recall, similar to pankajgupta:CrossRE2019. Ensemble of Logistic Regression, CNN and BERT: In the final component, we collect predictions (i.e., propaganda label) for each sentence from the three ($\mathcal {M}=3$) classifiers and thus, obtain $\mathcal {M}$ number of predictions for each sentence. We explore two ensemble strategies (Table TABREF1): majority-voting and relax-voting to boost precision and recall, respectively. System Description ::: Fragment-level Propaganda Detection Figure FIGREF2 (right) describes our system for FLC task, where we design sequence taggers BIBREF9, BIBREF10 in three modes: (1) LSTM-CRF BIBREF11 with word embeddings ($w\_e$) and character embeddings $c\_e$, token-level features ($t\_f$) such as polarity, POS, NER, etc. (2) LSTM-CRF+Multi-grain that jointly performs FLC and SLC with FastTextWordEmb and BERTSentEmb, respectively. Here, we add binary sentence classification loss to sequence tagging weighted by a factor of $\alpha $. (3) LSTM-CRF+Multi-task that performs propagandistic span/fragment detection (PFD) and FLC (fragment detection + 19-way classification). Ensemble of Multi-grain, Multi-task LSTM-CRF with BERT: Here, we build an ensemble by considering propagandistic fragments (and its type) from each of the sequence taggers. In doing so, we first perform majority voting at the fragment level for the fragment where their spans exactly overlap. In case of non-overlapping fragments, we consider all. However, when the spans overlap (though with the same label), we consider the fragment with the largest span. Experiments and Evaluation Data: While the SLC task is binary, the FLC consists of 18 propaganda techniques BIBREF3. We split (80-20%) the annotated corpus into 5-folds and 3-folds for SLC and FLC tasks, respectively. The development set of each the folds is represented by dev (internal); however, the un-annotated corpus used in leaderboard comparisons by dev (external). We remove empty and single token sentences after tokenization. Experimental Setup: We use PyTorch framework for the pre-trained BERT model (Bert-base-cased), fine-tuned for SLC task. In the multi-granularity loss, we set $\alpha = 0.1$ for sentence classification based on dev (internal, fold1) scores. We use BIO tagging scheme of NER in FLC task. For CNN, we follow DBLP:conf/emnlp/Kim14 with filter-sizes of [2, 3, 4, 5, 6], 128 filters and 16 batch-size. We compute binary-F1and macro-F1 BIBREF12 in SLC and FLC, respectively on dev (internal). Experiments and Evaluation ::: Results: Sentence-Level Propaganda Table TABREF10 shows the scores on dev (internal and external) for SLC task. Observe that the pre-trained embeddings (FastText or BERT) outperform TF-IDF vector representation. 
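The decision function and the two voting schemes referred to above admit a compact sketch. The threshold value follows the optimal τ reported below; the "at least one positive vote" reading of relax-voting is an illustrative assumption, not the authors' exact rule.

```python
# Hedged sketch of the sentence-level decision and ensemble voting schemes.

def tag_sentence(prob, tau=0.35):
    """A sentence is tagged as propaganda if the classifier's probability
    exceeds the threshold tau; relaxing tau boosts recall."""
    return prob >= tau

def majority_vote(labels):
    """Propaganda if more than half of the M classifiers agree (boosts precision)."""
    return sum(labels) > len(labels) / 2

def relax_vote(labels, min_votes=1):
    """Propaganda if at least min_votes classifiers say so (boosts recall);
    the concrete relaxation used in the paper may differ."""
    return sum(labels) >= min_votes
```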
In row r2, we apply logistic regression classifier with BERTSentEmb that leads to improved scores over FastTextSentEmb. Subsequently, we augment the sentence vector with additional features that improves F1 on dev (external), however not dev (internal). Next, we initialize CNN by FastTextWordEmb or BERTWordEmb and augment the last hidden layer (before classification) with BERTSentEmb and feature vectors, leading to gains in F1 for both the dev sets. Further, we fine-tune BERT and apply different thresholds in relaxing the decision boundary, where $\tau \ge 0.35$ is found optimal. We choose the three different models in the ensemble: Logistic Regression, CNN and BERT on fold1 and subsequently an ensemble+ of r3, r6 and r12 from each fold1-5 (i.e., 15 models) to obtain predictions for dev (external). We investigate different ensemble schemes (r17-r19), where we observe that the relax-voting improves recall and therefore, the higher F1 (i.e., 0.673). In postprocess step, we check for repetition propaganda technique by computing cosine similarity between the current sentence and its preceding $w=10$ sentence vectors (i.e., BERTSentEmb) in the document. If the cosine-similarity is greater than $\lambda \in \lbrace .99, .95\rbrace $, then the current sentence is labeled as propaganda due to repetition. Comparing r19 and r21, we observe a gain in recall, however an overall decrease in F1 applying postprocess. Finally, we use the configuration of r19 on the test set. The ensemble+ of (r4, r7 r12) was analyzed after test submission. Table TABREF9 (SLC) shows that our submission is ranked at 4th position. Experiments and Evaluation ::: Results: Fragment-Level Propaganda Table TABREF11 shows the scores on dev (internal and external) for FLC task. Observe that the features (i.e., polarity, POS and NER in row II) when introduced in LSTM-CRF improves F1. We run multi-grained LSTM-CRF without BERTSentEmb (i.e., row III) and with it (i.e., row IV), where the latter improves scores on dev (internal), however not on dev (external). Finally, we perform multi-tasking with another auxiliary task of PFD. Given the scores on dev (internal and external) using different configurations (rows I-V), it is difficult to infer the optimal configuration. Thus, we choose the two best configurations (II and IV) on dev (internal) set and build an ensemble+ of predictions (discussed in section SECREF6), leading to a boost in recall and thus an improved F1 on dev (external). Finally, we use the ensemble+ of (II and IV) from each of the folds 1-3, i.e., $|{\mathcal {M}}|=6$ models to obtain predictions on test. Table TABREF9 (FLC) shows that our submission is ranked at 3rd position. Conclusion and Future Work Our system (Team: MIC-CIS) explores different neural architectures (CNN, BERT and LSTM-CRF) with linguistic, layout and topical features to address the tasks of fine-grained propaganda detection. We have demonstrated gains in performance due to the features, ensemble schemes, multi-tasking and multi-granularity architectures. Compared to the other participating systems, our submissions are ranked 3rd and 4th in FLC and SLC tasks, respectively. In future, we would like to enrich BERT models with linguistic, layout and topical features during their fine-tuning. Further, we would also be interested in understanding and analyzing the neural network learning, i.e., extracting salient fragments (or key-phrases) in the sentence that generate propaganda, similar to pankajgupta:2018LISA in order to promote explainable AI.
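The repetition post-processing step described above can be sketched as follows; the window size w=10 and the threshold λ follow the description, while the embedding and label formats are illustrative assumptions.

```python
# Sketch of the repetition post-processing: a sentence is re-labelled as
# propaganda if its sentence embedding is nearly identical to one of the
# previous w=10 sentence embeddings in the same document.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def flag_repetition(sent_embs, labels, w=10, lam=0.95):
    labels = list(labels)
    for i, emb in enumerate(sent_embs):
        window = sent_embs[max(0, i - w):i]
        if any(cosine(emb, prev) > lam for prev in window):
            labels[i] = "propaganda"   # repetition detected
    return labels
```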
What is best performing model among author's submissions, what performance it had?
For SLC task, the "ltuorp" team has the best performing model (0.6323/0.6028/0.6649 for F1/P/R respectively) and for FLC task the "newspeak" team has the best performing model (0.2488/0.2863/0.2201 for F1/P/R respectively).
1,541
qasper
3k
Introduction Deep Learning approaches have achieved impressive results on various NLP tasks BIBREF0 , BIBREF1 , BIBREF2 and have become the de facto approach for any NLP task. However, these deep learning techniques have found to be less effective for low-resource languages when the available training data is very less BIBREF3 . Recently, several approaches like Multi-task learning BIBREF4 , multilingual learning BIBREF5 , semi-supervised learning BIBREF2 , BIBREF6 and transfer learning BIBREF7 , BIBREF3 have been explored by the deep learning community to overcome data sparsity in low-resource languages. Transfer learning trains a model for a parent task and fine-tunes the learned parent model weights (features) for a related child task BIBREF7 , BIBREF8 . This effectively reduces the requirement on training data for the child task as the model would have learned relevant features from the parent task data thereby, improving the performance on the child task. Transfer learning has also been explored in the multilingual Neural Machine Translation BIBREF3 , BIBREF9 , BIBREF10 . The goal is to improve the NMT performance on the source to target language pair (child task) using an assisting source language (assisting to target translation is the parent task). Here, the parent model is trained on the assisting and target language parallel corpus and the trained weights are used to initialize the child model. The child model can now be fine-tuned on the source-target language pairs, if parallel corpus is available. The divergence between the source and the assisting language can adversely impact the benefits obtained from transfer learning. Multiple studies have shown that transfer learning works best when the languages are related BIBREF3 , BIBREF10 , BIBREF9 . Several studies have tried to address lexical divergence between the source and the target languages BIBREF10 , BIBREF11 , BIBREF12 . However, the effect of word order divergence and its mitigation has not been explored. In a practical setting, it is not uncommon to have source and assisting languages with different word order. For instance, it is possible to find parallel corpora between English and some Indian languages, but very little parallel corpora between Indian languages. Hence, it is natural to use English as an assisting language for inter-Indian language translation. To see how word order divergence can be detrimental, let us consider the case of the standard RNN (Bi-LSTM) encoder-attention-decoder architecture BIBREF13 . The encoder generates contextual representations (annotation vectors) for each source word, which are used by the attention network to match the source words to the current decoder state. The contextual representation is word-order dependent. Hence, if the assisting and the source languages do not have similar word order the generated contextual representations will not be consistent. The attention network (and hence the decoder) sees different contextual representations for similar words in parallel sentences across different languages. This makes it difficult to transfer knowledge learned from the assisting language to the source language. We illustrate this by visualizing the contextual representations generated by the encoder of an English to Hindi NMT system for two versions of the English input: (a) original word order (SVO) (b) word order of the source language (SOV, for Bengali). Figure FIGREF1 shows that the encoder representations obtained are very different. 
The attention network and the decoder now have to work with very different representations. Note that the plot below does not take into account further lexical and other divergences between source and assisting languages, since we demonstrated word order divergence with the same language on the source side. To address this word order divergence, we propose to pre-order the assisting language sentences to match the word order of the source language. We consider an extremely resource constrained scenario, where we do not have any parallel corpus for the child task. We are limited to a bilingual dictionary for transfer information from the assisting to the source language. From our experiments, we show that there is a significant increase in the translation accuracy for the unseen source-target language pair. Addressing Lexical Divergence BIBREF3 explored transfer learning for NMT on low-resource languages. They studied the influence of language divergence between languages chosen for training the parent and child model, and showed that choosing similar languages for training the parent and child model leads to better improvements from transfer learning. A limitation of BIBREF3 approach is that they ignore the lexical similarity between languages and also the source language embeddings are randomly initialized. BIBREF10 , BIBREF11 , BIBREF12 take advantage of lexical similarity between languages in their work. BIBREF10 proposed to use Byte-Pair Encoding (BPE) to represent the sentences in both the parent and the child language to overcome the above limitation. They show using BPE benefits transfer learning especially when the involved languages are closely-related agglutinative languages. Similarly, BIBREF11 utilize lexical similarity between the source and assisting languages by training a character-level NMT system. BIBREF12 address lexical divergence by using bilingual embeddings and mixture of universal token embeddings. One of the languages' vocabulary, usually English vocabulary is considered as universal tokens and every word in the other languages is represented as a mixture of universal tokens. They show results on extremely low-resource languages. Addressing Word Order Divergence To the best of our knowledge, no work has addressed word order divergence in transfer learning for multilingual NMT. However, some work exists for other NLP tasks that could potentially address word order. For Named Entity Recognition (NER), BIBREF14 use a self-attention layer after the Bi-LSTM layer to address word-order divergence for Named Entity Recognition (NER) task. The approach does not show any significant improvements over multiple languages. A possible reason is that the divergence has to be addressed before/during construction of the contextual embeddings in the Bi-LSTM layer, and the subsequent self-attention layer does not address word-order divergence. BIBREF15 use adversarial training for cross-lingual question-question similarity ranking in community question answering. The adversarial training tries to force the encoder representations of similar sentences from different input languages to have similar representations. Use of Pre-ordering Pre-ordering the source language sentences to match the target language word order has been useful in addressing word-order divergence for Phrase-Based SMT BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 . 
Recently, BIBREF20 proposed a way to measure and reduce the divergence between the source and target languages based on morphological and syntactic properties, also termed as anisomorphism. They demonstrated that by reducing the anisomorphism between the source and target languages, consistent improvements in NMT performance were obtained. The NMT system used additional features like word forms, POS tags and dependency relations in addition to parallel corpora. On the other hand, BIBREF21 observed a drop in performance due to pre-ordering for NMT. Unlike BIBREF20 , the NMT system was trained on pre-ordered sentences and no additional features were provided to the system. Note that all these works address source-target divergence, not divergence between source languages in multilingual NMT. Proposed Solution Consider the task of translating from an extremely low-resource language (source) to a target language. The parallel corpus between the two languages if available may be too small to train a NMT model. Similar to existing works BIBREF3 , BIBREF10 , BIBREF12 , we use transfer learning to overcome data sparsity and train a NMT model between the source and the target languages. Specifically, the NMT model (parent model) is trained on the assisting language and target language pairs. We choose English as the assisting language in all our experiments. In our resource-scarce scenario, we have no parallel corpus for the child task. Hence, at test time, the source language sentence is translated using the parent model after performing a word-by-word translation into the assisting language. Since the source language and the assisting language (English) have different word order, we hypothesize that it leads to inconsistencies in the contextual representations generated by the encoder for the two languages. In this paper, we propose to pre-order English sentences (assisting language sentences) to match the word-order of the source language and train the parent model on this pre-ordered corpus. In our experiments, we look at scenarios where the assisting language has SVO word order and the source language has SOV word order. For instance, consider the English sentence Anurag will meet Thakur. One of the pre-ordering rule swaps the position of the noun phrase followed by a transitive verb with the transitive verb. The original and the resulting re-ordered parse tree will be as shown in the Table TABREF5 . Applying this reordering rule to the above sentence Anurag will meet Thakur will yield the reordered sentence Anurag Thakur will meet. Additionally, the Table TABREF5 shows the parse trees for the above sentence with and without pre-ordering. Pre-ordering should also be beneficial for other word order divergence scenarios (e.g., SOV to SVO), but we leave verification of these additional scenarios for future work. Experimental Setup In this section, we describe the languages experimented with, datasets used, the network hyper-parameters used in our experiments. Languages We experimented with English INLINEFORM0 Hindi translation as the parent task. English is the assisting source language. Bengali, Gujarati, Marathi, Malayalam and Tamil are the primary source languages, and translation from these to Hindi constitute the child tasks. Hindi, Bengali, Gujarati and Marathi are Indo-Aryan languages, while Malayalam and Tamil are Dravidian languages. All these languages have a canonical SOV word order. 
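The effect of the reordering rule illustrated above can be sketched on a toy constituency tree; this is only an illustration of the rule, not the CFILT-preorder system used in the experiments.

```python
# Toy illustration of one SVO-to-SOV pre-ordering rule: inside a verb phrase,
# the object noun phrase is moved in front of the verb group.

def preorder(tree):
    """tree: (label, children), where children are sub-trees or word strings."""
    label, children = tree
    children = [preorder(c) if isinstance(c, tuple) else c for c in children]
    if label == "VP" and len(children) == 2 and children[1][0] == "NP":
        children = [children[1], children[0]]   # swap verb group and object NP
    return (label, children)

def words(tree):
    label, children = tree
    out = []
    for c in children:
        out.extend(words(c) if isinstance(c, tuple) else [c])
    return out

sent = ("S", [("NP", ["Anurag"]),
              ("VP", [("V", ["will", "meet"]), ("NP", ["Thakur"])])])
print(" ".join(words(preorder(sent))))   # prints: Anurag Thakur will meet
```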
Datasets For training English-Hindi NMT systems, we use the IITB English-Hindi parallel corpus BIBREF22 ( INLINEFORM0 sentences from the training set) and the ILCI English-Hindi parallel corpus ( INLINEFORM1 sentences). The ILCI (Indian Language Corpora Initiative) multilingual parallel corpus BIBREF23 spans multiple Indian languages from the health and tourism domains. We use the 520-sentence dev-set of the IITB parallel corpus for validation. For each child task, we use INLINEFORM2 sentences from ILCI corpus as the test set. Network We use OpenNMT-Torch BIBREF24 to train the NMT system. We use the standard sequence-to-sequence architecture with attention BIBREF13 . We use an encoder which contains two layers of bidirectional LSTMs with 500 neurons each. The decoder contains two LSTM layers with 500 neurons each. Input feeding approach BIBREF1 is used where the previous attention hidden state is fed as input to the decoder LSTM. We use a mini-batch of size 50 and use a dropout layer. We begin with an initial learning rate of INLINEFORM0 and decay the learning rate by a factor of INLINEFORM1 when the perplexity on validation set increases. The training is stopped when the learning rate falls below INLINEFORM2 or number of epochs=22. The English input is initialized with pre-trained embeddings trained using fastText BIBREF25 . English vocabulary consists of INLINEFORM0 tokens appearing at least 2 times in the English training corpus. For constructing the Hindi vocabulary we considered only those tokens appearing at least 5 times in the training split resulting in a vocabulary size of INLINEFORM1 tokens. For representing English and other source languages into a common space, we translate each word in the source language into English using a bilingual dictionary (Google Translate word translation in our case). In an end-to-end solution, it would have been ideal to use bilingual embeddings or obtain word-by-word translations via bilingual embeddings BIBREF14 . But, the quality of publicly available bilingual embeddings for English-Indian languages is very low for obtaining good-quality, bilingual representations BIBREF26 , BIBREF27 . We also found that these embeddings were not useful for transfer learning. We use the CFILT-preorder system for reordering English sentences to match the Indian language word order. It contains two re-ordering systems: (1) generic rules that apply to all Indian languages BIBREF17 , and (2) hindi-tuned rules which improve the generic rules by incorporating improvements found through an error analysis of English-Hindi reordering BIBREF28 . These Hindi-tuned rules have been found to improve reordering for many English to Indian language pairs BIBREF29 . Results In this section, we describe the results from our experiments on NMT task. We report the results on X-Hindi pair, where X is one of Bengali, Gujarati, Marathi, Tamil, and Malayalam. The results are presented in the Table TABREF6 . We report BLEU scores and LeBLEU scores BIBREF30 . We observe that both the pre-ordering configurations significantly improve the BLEU scores over the baseline scores. We observe larger gains when generic pre-ordering rules are used compared to the Hindi-tuned pre-ordering rules. These results support our hypothesis that word-order divergence can limit the benefits of multilingual translation. Reducing the word order divergence can improve translation in extremely low-resource scenarios. 
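The training schedule from the Network subsection above (decay the learning rate when validation perplexity rises, stop once the rate falls below a floor or 22 epochs are reached) can be sketched as follows. The concrete values of the initial rate, decay factor and floor are left as parameters because they appear only as unresolved INLINEFORM placeholders in the text, and the model interface is an assumption.

```python
# Hedged sketch of the learning-rate schedule described above.

def train(model, init_lr, decay, min_lr, max_epochs=22):
    lr, best_ppl = init_lr, float("inf")
    for epoch in range(1, max_epochs + 1):
        model.train_one_epoch(lr)               # assumed model interface
        ppl = model.validation_perplexity()     # assumed model interface
        if ppl > best_ppl:
            lr *= decay                         # decay when validation ppl worsens
        best_ppl = min(best_ppl, ppl)
        if lr < min_lr:
            break
    return model
```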
An analysis of the outputs revealed that pre-ordering significantly reduces the number of UNK tokens (placeholders for unknown words) in the test output (Table TABREF14). We hypothesize that, due to the word order divergence between English and the Indian languages, the encoder representations generated are not consistent, leading to the decoder generating unknown words. The pre-ordered models, however, generate better contextual representations, leading to fewer unknown tokens and better translations, which is also reflected in the BLEU scores. Conclusion In this paper, we show that handling word-order divergence between the source and assisting languages is crucial for the success of multilingual NMT in an extremely low-resource setting. We show that pre-ordering the assisting language to match the word order of the source language significantly improves translation quality in this setting. While the current work focused on Indian languages, we would like to validate the hypothesis on a more diverse set of languages.
How do they match words before reordering them?
Unanswerable
2,231
qasper
3k
Introduction The explosion of available scientific articles in the Biomedical domain has led to the rise of Biomedical Information Extraction (BioIE). BioIE systems aim to extract information from a wide spectrum of articles including medical literature, biological literature, electronic health records, etc. that can be used by clinicians and researchers in the field. Often the outputs of BioIE systems are used to assist in the creation of databases, or to suggest new paths for research. For example, a ranked list of interacting proteins that are extracted from biomedical literature, but are not present in existing databases, can allow researchers to make informed decisions about which protein/gene to study further. Interactions between drugs are necessary for clinicians who simultaneously administer multiple drugs to their patients. A database of diseases, treatments and tests is beneficial for doctors consulting in complicated medical cases. The main problems in BioIE are similar to those in Information Extraction: This paper discusses, in each section, various methods that have been adopted to solve the listed problems. Each section also highlights the difficulty of Information Extraction tasks in the biomedical domain. This paper is intended as a primer to Biomedical Information Extraction for current NLP researchers. It aims to highlight the diversity of the various techniques from Information Extraction that have been applied in the Biomedical domain. The state of biomedical text mining is reviewed regularly. For more extensive surveys, consult BIBREF0 , BIBREF1 , BIBREF2 . Named Entity Recognition and Fact Extraction Named Entity Recognition (NER) in the Biomedical domain usually includes recognition of entities such as proteins, genes, diseases, treatments, drugs, etc. Fact extraction involves extraction of Named Entities from a corpus, usually given a certain ontology. When compared to NER in the domain of general text, the biomedical domain has some characteristic challenges: Some of the earliest systems were heavily dependent on hand-crafted features. The method proposed in BIBREF4 for recognition of protein names in text does not require any prepared dictionary. The work gives examples of diversity in protein names and lists multiple rules depending on simple word features as well as POS tags. BIBREF5 adopt a machine learning approach for NER. Their NER system extracts medical problems, tests and treatments from discharge summaries and progress notes. They use a semi-Conditional Random Field (semi-CRF) BIBREF6 to output labels over all tokens in the sentence. They use a variety of token, context and sentence level features. They also use some concept mapping features using existing annotation tools, as well as Brown clustering to form 128 clusters over the unlabelled data. The dataset used is the i2b2 2010 challenge dataset. Their system achieves an F-Score of 0.85. BIBREF7 is an incremental paper on NER taggers. It uses 3 types of word-representation techniques (Brown clustering, distributional clustering, word vectors) to improve performance of the NER Conditional Random Field tagger, and achieves marginal F-Score improvements. BIBREF8 propose a boostrapping mechanism to bootstrap biomedical ontologies using NELL BIBREF9 , which uses a coupled semi-supervised bootstrapping approach to extract facts from text, given an ontology and a small number of “seed” examples for each category. This interesting approach (called BioNELL) uses an ontology of over 100 categories. 
In contrast to NELL, BioNELL does not contain any relations in the ontology. BioNELL is motivated by the fact that a lot of scientific literature available online is highly reliable due to peer-review. The authors note that the algorithm used by NELL to bootstrap fails in BioNELL due to ambiguities in biomedical literature, and heavy semantic drift. One of the causes for this is that often common words such as “white”, “dad”, “arm” are used as names of genes- this can easily result in semantic drift in one iteration of the bootstrapping. In order to mitigate this, they use Pointwise Mutual Information scores for corpus level statistics, which attributes a small score to common words. In addition, in contrast to NELL, BioNELL only uses high instances as seeds in the next iteration, but adds low ranking instances to the knowledge base. Since evaluation is not possible using Mechanical Turk or a small number of experts (due to the complexity of the task), they use Freebase BIBREF10 , a knowledge base that has some biomedical concepts as well. The lexicon learned using BioNELL is used to train an NER system. The system shows a very high precision, thereby showing that BioNELL learns very few ambiguous terms. More recently, deep learning techniques have been developed to further enhance the performance of NER systems. BIBREF11 explore recurrent neural networks for the problem of NER in biomedical text. Relation Extraction In Biomedical Information Extraction, Relation Extraction involves finding related entities of many different kinds. Some of these include protein-protein interactions, disease-gene relations and drug-drug interactions. Due to the explosion of available biomedical literature, it is impossible for one person to extract relevant relations from published material. Automatic extraction of relations assists in the process of database creation, by suggesting potentially related entities with links to the source article. For example, a database of drug-drug interactions is important for clinicians who administer multiple drugs simultaneously to their patients- it is imperative to know if one drug will have an adverse effect on the other. A variety of methods have been developed for relation extractions, and are often inspired by Relation Extraction in NLP tasks. These include rule-based approaches, hand-crafted patterns, feature-based and kernel machine learning methods, and more recently deep learning architectures. Relation Extraction systems over Biomedical Corpora are often affected by noisy extraction of entities, due to ambiguities in names of proteins, genes, drugs etc. BIBREF12 was one of the first large scale Information Extraction efforts to study the feasibility of extraction of protein-protein interactions (such as “protein A activates protein B") from Biomedical text. Using 8 hand-crafted regular expressions over a fixed vocabulary, the authors were able to achieve a recall of 30% for interactions present in The Dictionary of Interacting Proteins (DIP) from abstracts in Medline. The method did not differentiate between the type of relation. The reasons for the low recall were the inconsistency in protein nomenclature, information not present in the abstract, and due to specificity of the hand-crafted patterns. On a small subset of extracted relations, they found that about 60% were true interactions between proteins not present in DIP. BIBREF13 combine sentence level relation extraction for protein interactions with corpus level statistics. 
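The corpus-level PMI statistic used by BioNELL, and the co-occurrence statistics mentioned just above, can be computed from raw counts as in the sketch below; this is an illustrative computation, not the code of either system.

```python
# Illustrative PMI computation from co-occurrence counts. PMI is low for
# terms that are frequent everywhere (e.g. "white", "arm"), which is why it
# helps to downweight ambiguous common words during bootstrapping.
import math

def pmi(pair_counts, x_counts, y_counts, total):
    """pair_counts[(x, y)]: joint counts; x_counts, y_counts: marginal counts."""
    scores = {}
    for (x, y), n_xy in pair_counts.items():
        p_xy = n_xy / total
        p_x, p_y = x_counts[x] / total, y_counts[y] / total
        scores[(x, y)] = math.log(p_xy / (p_x * p_y))
    return scores
```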
Similar to BIBREF12 , they do not consider the type of interaction between proteins- only whether they interact in the general sense of the word. They also do not differentiate between genes and their protein products (which may share the same name). They use Pointwise Mutual Information (PMI) for corpus level statistics to determine whether a pair of proteins occur together by chance or because they interact. They combine this with a confidence aggregator that takes the maximum of the confidence of the extractor over all extractions for the same protein-pair. The extraction uses a subsequence kernel based on BIBREF14 . The integrated model, that combines PMI with aggregate confidence, gives the best performance. Kernel methods have widely been studied for Relation Extraction in Biomedical Literature. Common kernels used usually exploit linguistic information by utilising kernels based on the dependency tree BIBREF15 , BIBREF16 , BIBREF17 . BIBREF18 look at the extraction of diseases and their relevant genes. They use a dictionary from six public databases to annotate genes and diseases in Medline abstracts. In their work, the authors note that when both genes and diseases are correctly identified, they are related in 94% of the cases. The problem then reduces to filtering incorrect matches using the dictionary, which occurs due to false positives resulting from ambiguities in the names as well as ambiguities in abbreviations. To this end, they train a Max-Ent based NER classifier for the task, and get a 26% gain in precision over the unfiltered baseline, with a slight hit in recall. They use POS tags, expanded forms of abbreviations, indicators for Greek letters as well as suffixes and prefixes commonly used in biomedical terms. BIBREF19 adopt a supervised feature-based approach for the extraction of drug-drug interaction (DDI) for the DDI-2013 dataset BIBREF20 . They partition the data in subsets depending on the syntactic features, and train a different model for each. They use lexical, syntactic and verb based features on top of shallow parse features, in addition to a hand-crafted list of trigger words to define their features. An SVM classifier is then trained on the feature vectors, with a positive label if the drug pair interacts, and negative otherwise. Their method beats other systems on the DDI-2013 dataset. Some other feature-based approaches are described in BIBREF21 , BIBREF22 . Distant supervision methods have also been applied to relation extraction over biomedical corpora. In BIBREF23 , 10,000 neuroscience articles are distantly supervised using information from UMLS Semantic Network to classify brain-gene relations into geneExpression and otherRelation. They use lexical (bag of words, contextual) features as well as syntactic (dependency parse features). They make the “at-least one” assumption, i.e. at least one of the sentences extracted for a given entity-pair contains the relation in database. They model it as a multi-instance learning problem and adopt a graphical model similar to BIBREF24 . They test using manually annotated examples. They note that the F-score achieved are much lesser than that achieved in the general domain in BIBREF24 , and attribute to generally poorer performance of NER tools in the biomedical domain, as well as less training examples. BIBREF25 explore distant supervision methods for protein-protein interaction extraction. More recently, deep learning methods have been applied to relation extraction in the biomedical domain. 
One of the main advantages of such methods over traditional feature or kernel based learning methods is that they require minimal feature engineering. In BIBREF26 , skip-gram vectors BIBREF27 are trained over 5.6Gb of unlabelled text. They use these vectors to extract protein-protein interactions by converting them into features for entities, context and the entire sentence. Using an SVM for classification, their method is able to outperform many kernel and feature based methods over a variety of datasets. BIBREF28 follow a similar method by using word vectors trained on PubMed articles. They use it for the task of relation extraction from clinical text for entities that include problem, treatment and medical test. For a given sentence, given labelled entities, they predict the type of relation exhibited (or None) by the entity pair. These types include “treatment caused medical problem”, “test conducted to investigate medical problem”, “medical problem indicates medical problems”, etc. They use a Convolutional Neural Network (CNN) followed by feedforward neural network architecture for prediction. In addition to pre-trained word vectors as features, for each token they also add features for POS tags, distance from both the entities in the sentence, as well BIO tags for the entities. Their model performs better than a feature based SVM baseline that they train themselves. The BioNLP'16 Shared Tasks has also introduced some Relation Extraction tasks, in particular the BB3-event subtask that involves predicting whether a “lives-in” relation holds for a Bacteria in a location. Some of the top performing models for this task are deep learning models. BIBREF29 train word embeddings with six billions words of scientific texts from PubMed. They then consider the shortest dependency path between the two entities (Bacteria and location). For each token in the path, they use word embedding features, POS type embeddings and dependency type embeddings. They train a unidirectional LSTM BIBREF30 over the dependency path, that achieves an F-Score of 52.1% on the test set. BIBREF31 improve the performance by making modifications to the above model. Instead of using the shortest dependency path, they modify the parse tree based on some pruning strategies. They also add feature embeddings for each token to represent the distance from the entities in the shortest path. They then train a Bidirectional LSTM on the path, and obtain an F-Score of 57.1%. The recent success of deep learning models in Biomedical Relation Extraction that require minimal feature engineering is promising. This also suggests new avenues of research in the field. An approach as in BIBREF32 can be used to combine multi-instance learning and distant supervision with a neural architecture. Event Extraction Event Extraction in the Biomedical domain is a task that has gained more importance recently. Event Extraction goes beyond Relation Extraction. In Biomedical Event Extraction, events generally refer to a change in the state of biological molecules such as proteins and DNA. Generally, it includes detection of targeted event types such as gene expression, regulation, localisation and transcription. Each event type in addition can have multiple arguments that need to be detected. An additional layer of complexity comes from the fact that events can also be arguments of other events, giving rise to a nested structure. This helps to capture the underlying biology better BIBREF1 . 
Detecting the event type often involves recognising and classifying trigger words. Often, these words are verbs such as “activates”, “inhibits”, “phosphorylation” that may indicate a single, or sometimes multiple event types. In this section, we will discuss some of the successful models for Event Extraction in some detail. Event Extraction gained a lot of interest with the availability of an annotated corpus with the BioNLP'09 Shared Task on Event Extraction BIBREF34 . The task involves prediction of trigger words over nine event types such as expression, transcription, catabolism, binding, etc. given only annotation of named entities (proteins, genes, etc.). For each event, its class, trigger expression and arguments need to be extracted. Since the events can be arguments to other events, the final output in general is a graph representation with events and named entities as nodes, and edges that correspond to event arguments. BIBREF33 present a pipeline based method that is heavily dependent on dependency parsing. Their pipeline approach consists of three steps: trigger detection, argument detection and semantic post-processing. While the first two components are learning based systems, the last component is a rule based system. For the BioNLP'09 corpus, only 5% of the events span multiple sentences. Hence the approach does not get affected severely by considering only single sentences. It is important to note that trigger words cannot simply be reduced to a dictionary lookup. This is because a specific word may belong to multiple classes, or may not always be a trigger word for an event. For example, “activate” is found to not be a trigger word in over 70% of the cases. A multi-class SVM is trained for trigger detection on each token, using a large feature set consisting of semantic and syntactic features. It is interesting to note that the hyperparameters of this classifier are optimised based on the performance of the entire end-to-end system. For the second component to detect arguments, labels for edges between entities must be predicted. For the BioNLP'09 Shared Task, each directed edge from one event node to another event node, or from an event node to a named entity node are classified as “theme”, “cause”, or None. The second component of the pipeline makes these predictions independently. This is also trained using a multi-class SVM which involves heavy use of syntactic features, including the shortest dependency path between the nodes. The authors note that the precision-recall choice of the first component affects the performance of the second component: since the second component is only trained on Gold examples, any error by the first component will lead to a cascading of errors. The final component, which is a semantic post-processing step, consists of rules and heuristics to correct the output of the second component. Since the edge predictions are made independently, it is possible that some event nodes do not have any edges, or have an improper combination of edges. The rule based component corrects these and applies rules to break directed cycles in the graph, and some specific heuristics for different types of events. The final model gives a cumulative F-Score of 52% on the test set, and was the best model on the task. BIBREF35 note that previous approaches on the task suffer due to the pipeline nature and the propagation of errors. To counter this, they adopt a joint inference method based on Markov Logic Networks BIBREF36 for the same task on BioNLP'09. 
The Markov Logic Network jointly predicts whether each token is a trigger word, and if yes, the class it belongs to; for each dependency edge, whether it is an argument path leading to a “theme” or a “cause”. By formulating the Event Extraction problem using an MLN, the approach becomes computationally feasible and only linear in the length of the sentence. They incorporate hard constraints to encode rules such as “an argument path must have an event”, “a cause path must start with a regulation event”, etc. In addition, they also include some domain specific soft constraints as well as some linguistically-motivated context-specific soft constraints. In order to train the MLN, stochastic gradient descent was used. Certain heuristic methods are implemented in order to deal with errors due to syntactic parsing, especially ambiguities in PP-attachment and coordination. Their final system is competitive and comes very close to the system by BIBREF33 with an average F-Score of 50%. To further improve the system, they suggest leveraging additional joint-inference opportunities and integrating the syntactic parser better. Some other more recent models for Biomedical Event Extraction include BIBREF37 , BIBREF38 . Conclusion We have discussed some of the major problems and challenges in BioIE, and seen some of the diverse approaches adopted to solve them. Some interesting problems such as Pathway Extraction for Biological Systems BIBREF39 , BIBREF40 have not been discussed. Biomedical Information Extraction is a challenging and exciting field for NLP researchers that demands application of state-of-the-art methods. Traditionally, there has been a dependence on hand-crafted features or heavily feature-engineered methods. However, with the advent of deep learning methods, a lot of BioIE tasks are seeing an improvement by adopting deep learning models such as Convolutional Neural Networks and LSTMs, which require minimal feature engineering. Rapid progress in developing better systems for BioIE will be extremely helpful for clinicians and researchers in the Biomedical domain.
Does the paper explore extraction from electronic health records?
Yes
3,035
qasper
3k
Introduction Neural networks have been successfully used to describe images with text using sequence-to-sequence models BIBREF0. However, the results are simple and dry captions which are one or two phrases long. Humans looking at a painting see more than just objects. Paintings stimulate sentiments, metaphors and stories as well. Therefore, our goal is to have a neural network describe the painting artistically in a style of choice. As a proof of concept, we present a model which generates Shakespearean prose for a given painting, as shown in Figure FIGREF1. Accomplishing this task is difficult with traditional sequence-to-sequence models since there does not exist a large collection of Shakespearean prose which describes paintings: Shakespeare's works describe only a single painting, shown in Figure FIGREF3. Fortunately, we have a dataset of modern English poems which describe images BIBREF1 and line-by-line modern paraphrases of Shakespeare's plays BIBREF2. Our solution is therefore to combine two separately trained models to synthesize Shakespearean prose for a given painting. Introduction ::: Related work A general end-to-end approach to sequence learning BIBREF3 makes minimal assumptions on the sequence structure. This model is widely used in tasks such as machine translation, text summarization, conversational modeling, and image captioning. A generative model using a deep recurrent architecture BIBREF0 has also been used for generating phrases describing an image. The task of synthesizing multiple lines of poetry for a given image BIBREF1 is accomplished by extracting poetic clues from images. Given the context image, the network associates image attributes with poetic descriptions using a convolutional neural net. The poem is generated using a recurrent neural net which is trained using multi-adversarial training via policy gradient. Transforming text from modern English to Shakespearean English using text "style transfer" is challenging. An end-to-end approach using a sequence-to-sequence model over a parallel text corpus BIBREF2 has been proposed based on machine translation. In the absence of a parallel text corpus, generative adversarial networks (GANs) have been used, which simultaneously train two models: a generative model which captures the data distribution, and a discriminative model which evaluates the performance of the generator. A target-domain language model has also been employed as a discriminator BIBREF4, providing richer and more stable token-level feedback during the learning process. A key challenge in both image and text style transfer is separating content from style BIBREF5, BIBREF6, BIBREF7. Cross-aligned auto-encoder models have focused on style transfer using non-parallel text BIBREF7. Recently, a fine-grained model for text style transfer has been proposed BIBREF8 which controls several factors of variation in textual data by using back-translation. This allows control over multiple attributes, such as gender and sentiment, and fine-grained control over the trade-off between content and style. Methods We use a total of three datasets: two datasets for generating an English poem from an image, and Shakespeare plays and their English translations for text style transfer. We train a model for generating poems from images based on two datasets BIBREF1. The first dataset consists of image and poem pairs, namely a multi-modal poem dataset (MultiM-Poem), and the second dataset is a large poem corpus, namely a uni-modal poem dataset (UniM-Poem).
The image and poem pairs are extended by adding the nearest three neighbor poems from the poem corpus without redundancy, and an extended image and poem pair dataset is constructed and denoted as MultiM-Poem(Ex) BIBREF1. We use a collection of line-by-line modern paraphrases for 16 of Shakespeare's plays BIBREF2 for training a style transfer network from English poems to Shakespearean prose. We use 18,395 sentences from the training data split, keep 1,218 sentences in the validation data set, and use 1,462 sentences as our test set. Methods ::: Image To Poem Actor-Critic Model For generating a poem from images we use an existing actor-critic architecture BIBREF1. This involves three parallel CNNs: an object CNN, a sentiment CNN, and a scene CNN, for feature extraction. These features are combined with a skip-thought model which provides poetic clues, and are then fed into a sequence-to-sequence model trained by policy gradient with two discriminator networks for rewards. As a whole, this forms a pipeline that takes in an image and outputs a poem, as shown on the top left of Figure FIGREF4. A CNN-RNN generative model acts as an agent. The parameters of this agent define a policy whose execution determines which word is selected as an action. When the agent selects all words in a poem, it receives a reward. Two discriminative networks, shown on the top right of Figure FIGREF4, provide rewards assessing whether the generated poem properly describes the input image and whether the generated poem is poetic. The goal of the poem generation model is to generate a sequence of words as a poem for an image so as to maximize the expected return. Methods ::: Shakespearizing Poetic Captions For Shakespearizing modern English texts, we experimented with various types of sequence-to-sequence models. Since the available parallel translation data is small, we leverage a dictionary providing a mapping between Shakespearean words and modern English words to retrofit pre-trained word embeddings. Incorporating this extra information improves the translation task. The large number of word types shared between the source and target sentences indicates that sharing their representation is beneficial. Methods ::: Shakespearizing Poetic Captions ::: Seq2Seq with Attention We use a sequence-to-sequence model which consists of a single-layer unidirectional LSTM encoder, a single-layer LSTM decoder, and pre-trained retrofitted word embeddings shared between source and target sentences. We experimented with two different types of attention: global attention BIBREF9, in which the model makes use of the output from the encoder and decoder for the current time step only, and Bahdanau attention BIBREF10, where computing attention requires the output of the decoder from the prior time step. We found that global attention performs better in practice for our task of text style transfer. Methods ::: Shakespearizing Poetic Captions ::: Seq2Seq with a Pointer Network Since a pair of corresponding Shakespeare and modern English sentences has significant vocabulary overlap, we extend the sequence-to-sequence model mentioned above using pointer networks BIBREF11, which provide location-based attention and have been used to enable copying of tokens directly from the input. Moreover, there are a lot of proper nouns and rare words which might not be predicted by a vanilla sequence-to-sequence model.
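Global attention in the sense used above can be sketched in PyTorch as the Luong "general" scoring variant; the module below is an illustrative assumption rather than the authors' implementation, and the dimension and variable names are ours.

```python
# Minimal PyTorch sketch of Luong-style global attention ("general" score):
# attention is computed from the current decoder hidden state and all encoder
# outputs at the same time step, unlike Bahdanau attention, which conditions
# on the decoder state from the previous step.
import torch
import torch.nn as nn

class GlobalAttention(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.W_a = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_c = nn.Linear(2 * hidden_dim, hidden_dim, bias=False)

    def forward(self, dec_h, enc_out):
        # dec_h: (batch, hidden), enc_out: (batch, src_len, hidden)
        scores = torch.bmm(enc_out, self.W_a(dec_h).unsqueeze(2))    # (B, src_len, 1)
        align = torch.softmax(scores.squeeze(2), dim=1)              # (B, src_len)
        context = torch.bmm(align.unsqueeze(1), enc_out).squeeze(1)  # (B, hidden)
        attn_h = torch.tanh(self.W_c(torch.cat([context, dec_h], dim=1)))
        return attn_h, align   # align can be reused later for UNK replacement
```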
Methods ::: Shakespearizing Poetic Captions ::: Prediction For both seq2seq models, we use the attention matrices returned at each decoder time step during inference, to compute the next word in the translated sequence if the decoder output at the current time step is the UNK token. We replace the UNKs in the target output with the highest aligned, maximum attention, source word. The seq2seq model with global attention gives the best results with an average target BLEU score of 29.65 on the style transfer dataset, compared with an average target BLEU score of 26.97 using the seq2seq model with pointer networks. Results We perform a qualitative analysis of the Shakespearean prose generated for the input paintings. We conducted a survey, in which we presented famous paintings including those shown in Figures FIGREF1 and FIGREF10 and the corresponding Shakespearean prose generated by the model, and asked 32 students to rate them on the basis of content, creativity and similarity to Shakespearean style on a Likert scale of 1-5. Figure FIGREF12 shows the result of our human evaluation. The average content score across the paintings is 3.7 which demonstrates that the prose generated is relevant to the painting. The average creativity score is 3.9 which demonstrates that the model captures more than basic objects in the painting successfully using poetic clues in the scene. The average style score is 3.9 which demonstrates that the prose generated is perceived to be in the style of Shakespeare. We also perform a quantitative analysis of style transfer by generating BLEU scores for the model output using the style transfer dataset. The variation of the BLEU scores with the source sentence lengths is shown in Figure FIGREF11. As expected, the BLEU scores decrease with increase in source sentence lengths. Results ::: Implementation All models were trained on Google Colab with a single GPU using Python 3.6 and Tensorflow 2.0. The number of hidden units for the encoder and decoder is 1,576 and 256 for seq2seq with global attention and seq2seq with pointer networks respectively. Adam optimizer was used with the default learning rate of 0.001. The model was trained for 25 epochs. We use pre-trained retrofitted word embeddings of dimension 192. Results ::: Limitations Since we do not have an end-to-end dataset, the generated English poem may not work well with Shakespeare style transfer as shown in Figure FIGREF12 for "Starry Night" with a low average content score. This happens when the style transfer dataset does not have similar words in the training set of sentences. A solution would be to expand the style transfer dataset, for a better representation of the poem data. Results ::: Conclusions and Future Work In conclusion, combining two pipelines with an intermediate representation works well in practice. We observe that a CNN-RNN based image-to-poem net combined with a seq2seq model with parallel text corpus for text style transfer synthesizes Shakespeare-style prose for a given painting. For the seq2seq model used, we observe that it performs better in practice using global attention as compared with local attention. We make our models and code publicly available BIBREF12. In future work we would like to experiment with GANs in the absence of non-parallel datasets, so that we can use varied styles for text style transfer. We would also like to experiment with cross aligned auto-encoders, which form a latent content representation, to efficiently separate style and content.
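The UNK-replacement heuristic described in the Prediction subsection above admits a short sketch; the token and attention-matrix formats below are illustrative assumptions.

```python
# Sketch of the UNK-replacement heuristic: whenever the decoder emits UNK,
# the output token is replaced by the source word receiving the highest
# attention weight at that decoder time step.
import numpy as np

def replace_unks(output_tokens, source_tokens, attention, unk="UNK"):
    """attention: array of shape (len(output_tokens), len(source_tokens))."""
    fixed = []
    for t, tok in enumerate(output_tokens):
        if tok == unk:
            tok = source_tokens[int(np.argmax(attention[t]))]
        fixed.append(tok)
    return fixed
```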
What models are used for painting embedding and what for language style transfer?
For generating a poem from images, an existing actor-critic architecture is used; for language style transfer, various types of sequence-to-sequence models are used.
Introduction Bidirectional Encoder Representations from Transformers (BERT) is a novel Transformer BIBREF0 model, which recently achieved state-of-the-art performance in several language understanding tasks, such as question answering, natural language inference, semantic similarity, sentiment analysis, and others BIBREF1. While well-suited to dealing with relatively short sequences, Transformers suffer from a major issue that hinders their applicability in classification of long sequences, i.e. they are able to consume only a limited context of symbols as their input BIBREF2. There are several natural language (NLP) processing tasks that involve such long sequences. Of particular interest are topic identification of spoken conversations BIBREF3, BIBREF4, BIBREF5 and call center customer satisfaction prediction BIBREF6, BIBREF7, BIBREF8, BIBREF9. Call center conversations, while usually quite short and to the point, often involve agents trying to solve very complex issues that the customers experience, resulting in some calls taking even an hour or more. For speech analytics purposes, these calls are typically transcribed using an automatic speech recognition (ASR) system, and processed in textual representations further down the NLP pipeline. These transcripts sometimes exceed the length of 5000 words. Furthermore, temporal information might play an important role in tasks like CSAT. For example, a customer may be angry at the beginning of the call, but after her issue is resolved, she would be very satisfied with the way it was handled. Therefore, simple bag of words models, or any model that does not include temporal dependencies between the inputs, may not be well-suited to handle this category of tasks. This motivates us to employ model such as BERT in this task. In this paper, we propose a method that builds upon BERT's architecture. We split the input text sequence into shorter segments in order to obtain a representation for each of them using BERT. Then, we use either a recurrent LSTM BIBREF10 network, or another Transformer, to perform the actual classification. We call these techniques Recurrence over BERT (RoBERT) and Transformer over BERT (ToBERT). Given that these models introduce a hierarchy of representations (segment-wise and document-wise), we refer to them as Hierarchical Transformers. To the best of our knowledge, no attempt has been done before to use the Transformer architecture for classification of such long sequences. Our novel contributions are: Two extensions - RoBERT and ToBERT - to the BERT model, which enable its application in classification of long texts by performing segmentation and using another layer on top of the segment representations. State-of-the-art results on the Fisher topic classification task. Significant improvement on the CSAT prediction task over the MS-CNN model. Related work Several dimensionality reduction algorithms such as RBM, autoencoders, subspace multinomial models (SMM) are used to obtain a low dimensional representation of documents from a simple BOW representation and then classify it using a simple linear classifiers BIBREF11, BIBREF12, BIBREF13, BIBREF4. In BIBREF14 hierarchical attention networks are used for document classification. They evaluate their model on several datasets with average number of words around 150. Character-level CNN are explored in BIBREF15 but it is prohibitive for very long documents. In BIBREF16, dataset collected from arXiv papers is used for classification. 
For classification, they sample random blocks of words and use them together for classification instead of using full document which may work well as arXiv papers are usually coherent and well written on a well defined topic. Their method may not work well on spoken conversations as random block of words usually do not represent topic of full conversation. Several researchers addressed the problem of predicting customer satisfaction BIBREF6, BIBREF7, BIBREF8, BIBREF9. In most of these works, logistic regression, SVM, CNN are applied on different kinds of representations. In BIBREF17, authors use BERT for document classification but the average document length is less than BERT maximum length 512. TransformerXL BIBREF2 is an extension to the Transformer architecture that allows it to better deal with long inputs for the language modelling task. It relies on the auto-regressive property of the model, which is not the case in our tasks. Method ::: BERT Because our work builds heavily upon BERT, we provide a brief summary of its features. BERT is built upon the Transformer architecture BIBREF0, which uses self-attention, feed-forward layers, residual connections and layer normalization as the main building blocks. It has two pre-training objectives: Masked language modelling - some of the words in a sentence are being masked and the model has to predict them based on the context (note the difference from the typical autoregressive language model training objective); Next sentence prediction - given two input sequences, decide whether the second one is the next sentence or not. BERT has been shown to beat the state-of-the-art performance on 11 tasks with no modifications to the model architecture, besides adding a task-specific output layer BIBREF1. We follow same procedure suggested in BIBREF1 for our tasks. Fig. FIGREF8 shows the BERT model for classification. We obtain two kinds of representation from BERT: pooled output from last transformer block, denoted by H, and posterior probabilities, denoted by P. There are two variants of BERT - BERT-Base and BERT-Large. In this work we are using BERT-Base for faster training and experimentation, however, our methods are applicable to BERT-Large as well. BERT-Base and BERT-Large are different in model parameters such as number of transformer blocks, number of self-attention heads. Total number of parameters in BERT-Base are 110M and 340M in BERT-Large. BERT suffers from major limitations in terms of handling long sequences. Firstly, the self-attention layer has a quadratic complexity $O(n^2)$ in terms of the sequence length $n$ BIBREF0. Secondly, BERT uses a learned positional embeddings scheme BIBREF1, which means that it won't likely be able to generalize to positions beyond those seen in the training data. To investigate the effect of fine-tuning BERT on task performance, we use either the pre-trained BERT weights, or the weights from a BERT fine-tuned on the task-specific dataset on a segment-level (i.e. we preserve the original label but fine-tune on each segment separately instead of on the whole text sequence). We compare these results to using the fine-tuned segment-level BERT predictions directly as inputs to the next layer. Method ::: Recurrence over BERT Given that BERT is limited to a particular input length, we split the input sequence into segments of a fixed size with overlap. For each of these segments, we obtain H or P from BERT model. 
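A minimal sketch of this segmentation step, assuming the HuggingFace transformers library. The segment length of 200 and shift of 50 come from the training details reported later in the paper; reading the "shift" as the stride between segment starts is our assumption, and the posteriors P would instead come from a fine-tuned classification head.

```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()

def split_into_segments(token_ids, seg_len=200, shift=50):
    """Fixed-size segments with overlap; 'shift' is read here as the stride
    between segment starts (the excerpt only says 'a shift of 50 tokens')."""
    starts = range(0, max(len(token_ids) - seg_len, 0) + 1, shift)
    return [token_ids[s:s + seg_len] for s in starts]

@torch.no_grad()
def segment_representations(text):
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    reps = []
    for seg in split_into_segments(ids):
        seg_ids = [tokenizer.cls_token_id] + seg + [tokenizer.sep_token_id]
        out = bert(input_ids=torch.tensor([seg_ids]))
        reps.append(out.pooler_output.squeeze(0))   # pooled output H per segment
    return torch.stack(reps)                        # (num_segments, 768)
```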
We then stack these segment-level representations into a sequence, which serves as input to a small (100-dimensional) LSTM layer. Its output serves as a document embedding. Finally, we use two fully connected layers with ReLU (30-dimensional) and softmax (the same dimensionality as the number of classes) activations to obtain the final predictions. With this approach, we overcome BERT's computational complexity, reducing it to $O(n/k * k^2) = O(nk)$ for RoBERT, with $k$ denoting the segment size (the LSTM component has negligible linear complexity $O(k)$). The positional embeddings are also no longer an issue. Method ::: Transformer over BERT Given that Transformers' edge over recurrent networks is their ability to effectively capture long distance relationships between words in a sequence BIBREF0, we experiment with replacing the LSTM recurrent layer in favor of a small Transformer model (2 layers of transformer building block containing self-attention, fully connected, etc.). To investigate if preserving the information about the input sequence order is important, we also build a variant of ToBERT which learns positional embeddings at the segment-level representations (but is limited to sequences of length seen during the training). ToBERT's computational complexity $O(\frac{n^2}{k^2})$ is asymptotically inferior to RoBERT, as the top-level Transformer model again suffers from quadratic complexity in the number of segments. However, in practice this number is much smaller than the input sequence length (${\frac{n}{k}} << n$), so we haven't observed performance or memory issues with our datasets. Experiments We evaluated our models on 3 different datasets: CSAT dataset for CSAT prediction, consisting of spoken transcripts (automatic via ASR). 20 newsgroups for topic identification task, consisting of written text; Fisher Phase 1 corpus for topic identification task, consisting of spoken transcripts (manual); Experiments ::: CSAT CSAT dataset consists of US English telephone speech from call centers. For each call in this dataset, customers participated in that call gave a rating on his experience with agent. Originally, this dataset has labels rated on a scale 1-9 with 9 being extremely satisfied and 1 being extremely dissatisfied. Fig. FIGREF16 shows the histogram of ratings for our dataset. As the distribution is skewed towards extremes, we choose to do binary classification with ratings above 4.5 as satisfied and below 4.5 as dissatisfied. Quantization of ratings also helped us to create a balanced dataset. This dataset contains 4331 calls and we split them into 3 sets for our experiments: 2866 calls for training, 362 calls for validation and, finally, 1103 calls for testing. We obtained the transcripts by employing an ASR system. The ASR system uses TDNN-LSTM acoustic model trained on Fisher and Switchboard datasets with lattice-free maximum mutual information criterion BIBREF18. The word error rates using four-gram language models were 9.2% and 17.3% respectively on Switchboard and CallHome portions of Eval2000 dataset. Experiments ::: 20 newsgroups 20 newsgroups data set is one of the frequently used datasets in the text processing community for text classification and text clustering. This data set contains approximately 20,000 English documents from 20 topics to be identified, with 11314 documents for training and 7532 for testing. In this work, we used only 90% of documents for training and the remaining 10% for validation. 
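The RoBERT and ToBERT heads described in this section can be sketched as follows in PyTorch. The 100-dimensional LSTM, the 30-dimensional ReLU layer and the 2-layer Transformer follow the description above; the BERT dimensionality of 768 (BERT-Base), the number of attention heads, the mean-pooling over segments in ToBERT and the reuse of the same output head are assumptions.

```python
import torch
import torch.nn as nn

class RoBERT(nn.Module):
    """LSTM over segment-level BERT representations."""
    def __init__(self, bert_dim=768, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(bert_dim, 100, batch_first=True)
        self.head = nn.Sequential(nn.Linear(100, 30), nn.ReLU(),
                                  nn.Linear(30, num_classes))

    def forward(self, segment_reps):             # (batch, num_segments, bert_dim)
        _, (h_n, _) = self.lstm(segment_reps)
        doc_embedding = h_n[-1]                  # last hidden state as document embedding
        return self.head(doc_embedding)          # logits; softmax is applied in the loss

class ToBERT(nn.Module):
    """Small 2-layer Transformer over segment-level BERT representations."""
    def __init__(self, bert_dim=768, num_classes=2, nhead=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=bert_dim, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Sequential(nn.Linear(bert_dim, 30), nn.ReLU(),
                                  nn.Linear(30, num_classes))

    def forward(self, segment_reps):
        encoded = self.encoder(segment_reps)     # (batch, num_segments, bert_dim)
        return self.head(encoded.mean(dim=1))    # pool over segments (an assumption)
```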
For fair comparison with other publications, we used 53160 words vocabulary set available in the datasets website. Experiments ::: Fisher Fisher Phase 1 US English corpus is often used for automatic speech recognition in speech community. In this work, we used it for topic identification as in BIBREF3. The documents are 10-minute long telephone conversations between two people discussing a given topic. We used same training and test splits as BIBREF3 in which 1374 and 1372 documents are used for training and testing respectively. For validation of our model, we used 10% of training dataset and the remaining 90% was used for actual model training. The number of topics in this data set is 40. Experiments ::: Dataset Statistics Table TABREF22 shows statistics of our datasets. It can be observed that average length of Fisher is much higher than 20 newsgroups and CSAT. Cumulative distribution of document lengths for each dataset is shown in Fig. FIGREF21. It can be observed that almost all of the documents in Fisher dataset have length more than 1000 words. For CSAT, more than 50% of the documents have length greater than 500 and for 20newsgroups only 10% of the documents have length greater than 500. Note that, for CSAT and 20newsgroups, there are few documents with length more than 5000. Experiments ::: Architecture and Training Details In this work, we split document into segments of 200 tokens with a shift of 50 tokens to extract features from BERT model. For RoBERT, LSTM model is trained to minimize cross-entropy loss with Adam optimizer BIBREF19. The initial learning rate is set to $0.001$ and is reduced by a factor of $0.95$ if validation loss does not decrease for 3-epochs. For ToBERT, the Transformer is trained with the default BERT version of Adam optimizer BIBREF1 with an initial learning rate of $5e$-5. We report accuracy in all of our experiments. We chose a model with the best validation accuracy to calculate accuracy on the test set. To accomodate for non-determinism of some TensorFlow GPU operations, we report accuracy averaged over 5 runs. Results Table TABREF25 presents results using pre-trained BERT features. We extracted features from the pooled output of final transformer block as these were shown to be working well for most of the tasks BIBREF1. The features extracted from a pre-trained BERT model without any fine-tuning lead to a sub-par performance. However, We also notice that ToBERT model exploited the pre-trained BERT features better than RoBERT. It also converged faster than RoBERT. Table TABREF26 shows results using features extracted after fine-tuning BERT model with our datasets. Significant improvements can be observed compared to using pre-trained BERT features. Also, it can be noticed that ToBERT outperforms RoBERT on Fisher and 20newsgroups dataset by 13.63% and 0.81% respectively. On CSAT, ToBERT performs slightly worse than RoBERT but it is not statistically significant as this dataset is small. Table TABREF27 presents results using fine-tuned BERT predictions instead of the pooled output from final transformer block. For each document, having obtained segment-wise predictions we can obtain final prediction for the whole document in three ways: Compute the average of all segment-wise predictions and find the most probable class; Find the most frequently predicted class; Train a classification model. 
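The first two of the three combination strategies listed above can be sketched as simple rules over the segment-wise posteriors P; the example probabilities below are hypothetical.

```python
import numpy as np

def combine_segment_predictions(segment_probs, strategy="average"):
    """segment_probs: (num_segments, num_classes) array of fine-tuned BERT
    posteriors P for one document. Returns the predicted class index."""
    segment_probs = np.asarray(segment_probs)
    if strategy == "average":
        return int(segment_probs.mean(axis=0).argmax())
    if strategy == "most_frequent":
        votes = segment_probs.argmax(axis=1)
        return int(np.bincount(votes).argmax())
    raise ValueError("the third option, training RoBERT/ToBERT on top, "
                     "needs a learned model rather than a rule")

# Hypothetical document with three segments and two classes.
probs = [[0.4, 0.6], [0.7, 0.3], [0.2, 0.8]]
print(combine_segment_predictions(probs, "average"))        # 1
print(combine_segment_predictions(probs, "most_frequent"))  # 1
```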
It can be observed from Table TABREF27 that a simple averaging operation or taking most frequent predicted class works competitively for CSAT and 20newsgroups but not for the Fisher dataset. We believe the improvements from using RoBERT or ToBERT, compared to simple averaging or most frequent operations, are proportional to the fraction of long documents in the dataset. CSAT and 20newsgroups have (on average) significantly shorter documents than Fisher, as seen in Fig. FIGREF21. Also, significant improvements for Fisher could be because of less confident predictions from BERT model as this dataset has 40 classes. Fig. FIGREF31 presents the comparison of average voting and ToBERT for various document length ranges for Fisher dataset. We used fine-tuned BERT segment-level predictions (P) for this analysis. It can be observed that ToBERT outperforms average voting in every interval. To the best of our knowledge, this is a state-of-the-art result reported on the Fisher dataset. Table TABREF32 presents the effect of position embeddings on the model performance. It can be observed that position embeddings did not significantly affect the model performance for Fisher and 20newsgroups, but they helped slightly in CSAT prediction (an absolute improvement of 0.64% F1-score). We think that this is explained by the fact that Fisher and 20newsgroups are topic identification tasks, and the topic does not change much throughout these documents. However, CSAT may vary during the call, and in some cases a naive assumption that the sequential nature of the transcripts is irrelevant may lead to wrong conclusions. Table TABREF33 compares our results with previous works. It can be seen that our model ToBERT outperforms CNN based experiments by significant margin on CSAT and Fisher datasets. For CSAT dataset, we used multi-scale CNN (MS-CNN) as the baseline, given its strong results on Fisher and 20newsgroups. The setup was replicated from BIBREF5 for comparison. We also see that our result on 20 newsgroups is 0.6% worse than the state-of-the-art. Conclusions In this paper, we presented two methods for long documents using BERT model: RoBERT and ToBERT. We evaluated our experiments on two classification tasks - customer satisfaction prediction and topic identification - using 3 datasets: CSAT, 20newsgroups and Fisher. We observed that ToBERT outperforms RoBERT on pre-trained BERT features and fine-tuned BERT features for all our tasks. Also, we noticed that fine-tuned BERT performs better than pre-trained BERT. We have shown that both RoBERT and ToBERT improved the simple baselines of taking an average (or the most frequent) of segment-wise predictions for long documents to obtain final prediction. Position embeddings did not significantly affect our models performance, but slightly improved the accuracy on the CSAT task. We obtained the best results on Fisher dataset and good improvements for CSAT task compared to the CNN baseline. It is interesting to note that the longer the average input in a given task, the bigger improvement we observe w.r.t. the baseline for that task. Our results confirm that both RoBERT and ToBERT can be used for long sequences with competitive performance and quick fine-tuning procedure. For future work, we shall focus on training models on long documents directly (i.e. in an end-to-end manner).
On top of BERT, does the RNN layer or the Transformer layer work better?
Transformer over BERT (ToBERT)
Introduction Relation classification is the task of assigning sentences with two marked entities to a predefined set of relations. The sentence “We poured the <e1>milk</e1> into the <e2>pumpkin mixture</e2>.”, for example, expresses the relation Entity-Destination(e1,e2). While early research mostly focused on support vector machines or maximum entropy classifiers BIBREF0 , BIBREF1 , recent research showed performance improvements by applying neural networks (NNs) BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 on the benchmark data from SemEval 2010 shared task 8 BIBREF8 . This study investigates two different types of NNs: recurrent neural networks (RNNs) and convolutional neural networks (CNNs) as well as their combination. We make the following contributions: (1) We propose extended middle context, a new context representation for CNNs for relation classification. The extended middle context uses all parts of the sentence (the relation arguments, left of the relation arguments, between the arguments, right of the arguments) and pays special attention to the middle part. (2) We present connectionist bi-directional RNN models which are especially suited for sentence classification tasks since they combine all intermediate hidden layers for their final decision. Furthermore, the ranking loss function is introduced for the RNN model optimization which has not been investigated in the literature for relation classification before. (3) Finally, we combine CNNs and RNNs using a simple voting scheme and achieve new state-of-the-art results on the SemEval 2010 benchmark dataset. Related Work In 2010, manually annotated data for relation classification was released in the context of a SemEval shared task BIBREF8 . Shared task participants used, i.a., support vector machines or maximum entropy classifiers BIBREF0 , BIBREF1 . Recently, their results on this data set were outperformed by applying NNs BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . zeng2014 built a CNN based only on the context between the relation arguments and extended it with several lexical features. kim2014 and others used convolutional filters of different sizes for CNNs. nguyen applied this to relation classification and obtained improvements over single filter sizes. deSantos2015 replaced the softmax layer of the CNN with a ranking layer. They showed improvements and published the best result so far on the SemEval dataset, to our knowledge. socher used another NN architecture for relation classification: recursive neural networks that built recursive sentence representations based on syntactic parsing. In contrast, zhang investigated a temporal structured RNN with only words as input. They used a bi-directional model with a pooling layer on top. Convolutional Neural Networks (CNN) CNNs perform a discrete convolution on an input matrix with a set of different filters. For NLP tasks, the input matrix represents a sentence: Each column of the matrix stores the word embedding of the corresponding word. By applying a filter with a width of, e.g., three columns, three neighboring words (trigram) are convolved. Afterwards, the results of the convolution are pooled. Following collobertWeston, we perform max-pooling which extracts the maximum value for each filter and, thus, the most informative n-gram for the following steps. Finally, the resulting values are concatenated and used for classifying the relation expressed in the sentence. 
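A minimal PyTorch sketch of this convolution-plus-max-pooling sentence classifier; the filter width of 3 (trigrams), the filter count and the number of relation classes are illustrative assumptions rather than the paper's exact baseline configuration.

```python
import torch
import torch.nn as nn

class SentenceCNN(nn.Module):
    """Convolution over the sentence matrix followed by max-over-time pooling."""
    def __init__(self, emb_dim=50, num_filters=1200, width=3, num_classes=19):
        super().__init__()
        # Each filter spans `width` neighbouring words and all embedding dimensions.
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size=width)
        self.out = nn.Linear(num_filters, num_classes)

    def forward(self, sent):                 # sent: (batch, seq_len, emb_dim)
        x = sent.transpose(1, 2)             # (batch, emb_dim, seq_len)
        feats = torch.relu(self.conv(x))     # (batch, num_filters, seq_len - width + 1)
        pooled = feats.max(dim=2).values     # max pooling: most informative n-gram per filter
        return self.out(pooled)              # relation scores

# Hypothetical batch of 2 sentences, 20 words each, 50-dimensional embeddings.
logits = SentenceCNN()(torch.randn(2, 20, 50))
print(logits.shape)                          # torch.Size([2, 19])
```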
Input: Extended Middle Context One of our contributions is a new input representation especially designed for relation classification. The contexts are split into three disjoint regions based on the two relation arguments: the left context, the middle context and the right context. Since in most cases the middle context contains the most relevant information for the relation, we want to focus on it but not ignore the other regions completely. Hence, we propose to use two contexts: (1) a combination of the left context, the left entity and the middle context; and (2) a combination of the middle context, the right entity and the right context. Due to the repetition of the middle context, we force the network to pay special attention to it. The two contexts are processed by two independent convolutional and max-pooling layers. After pooling, the results are concatenated to form the sentence representation. Figure FIGREF3 depicts this procedure. It shows an examplary sentence: “He had chest pain and <e1>headaches</e1> from <e2>mold</e2> in the bedroom.” If we only considered the middle context “from”, the network might be tempted to predict a relation like Entity-Origin(e1,e2). However, by also taking the left and right context into account, the model can detect the relation Cause-Effect(e2,e1). While this could also be achieved by integrating the whole context into the model, using the whole context can have disadvantages for longer sentences: The max pooling step can easily choose a value from a part of the sentence which is far away from the mention of the relation. With splitting the context into two parts, we reduce this danger. Repeating the middle context increases the chance for the max pooling step to pick a value from the middle context. Convolutional Layer Following previous work (e.g., BIBREF5 , BIBREF6 ), we use 2D filters spanning all embedding dimensions. After convolution, a max pooling operation is applied that stores only the highest activation of each filter. We apply filters with different window sizes 2-5 (multi-windows) as in BIBREF5 , i.e. spanning a different number of input words. Recurrent Neural Networks (RNN) Traditional RNNs consist of an input vector, a history vector and an output vector. Based on the representation of the current input word and the previous history vector, a new history is computed. Then, an output is predicted (e.g., using a softmax layer). In contrast to most traditional RNN architectures, we use the RNN for sentence modeling, i.e., we predict an output vector only after processing the whole sentence and not after each word. Training is performed using backpropagation through time BIBREF9 which unfolds the recurrent computations of the history vector for a certain number of time steps. To avoid exploding gradients, we use gradient clipping with a threshold of 10 BIBREF10 . Input of the RNNs Initial experiments showed that using trigrams as input instead of single words led to superior results. Hence, at timestep INLINEFORM0 we do not only give word INLINEFORM1 to the model but the trigram INLINEFORM2 by concatenating the corresponding word embeddings. Connectionist Bi-directional RNNs Especially for relation classification, the processing of the relation arguments might be easier with knowledge of the succeeding words. Therefore in bi-directional RNNs, not only a history vector of word INLINEFORM0 is regarded but also a future vector. 
This leads to the following conditioned probability for the history INLINEFORM1 at time step INLINEFORM2 : DISPLAYFORM0 Thus, the network can be split into three parts: a forward pass which processes the original sentence word by word (Equation EQREF6 ); a backward pass which processes the reversed sentence word by word (Equation ); and a combination of both (Equation ). All three parts are trained jointly. This is also depicted in Figure FIGREF7 . Combining forward and backward pass by adding their hidden layer is similar to BIBREF7 . We, however, also add a connection to the previous combined hidden layer with weight INLINEFORM0 to be able to include all intermediate hidden layers into the final decision of the network (see Equation ). We call this “connectionist bi-directional RNN”. In our experiments, we compare this RNN with uni-directional RNNs and bi-directional RNNs without additional hidden layer connections. Word Representations Words are represented by concatenated vectors: a word embedding and a position feature vector. Pretrained word embeddings. In this study, we used the word2vec toolkit BIBREF11 to train embeddings on an English Wikipedia from May 2014. We only considered words appearing more than 100 times and added a special PADDING token for convolution. This results in an embedding training text of about 485,000 terms and INLINEFORM0 tokens. During model training, the embeddings are updated. Position features. We incorporate randomly initialized position embeddings similar to zeng2014, nguyen and deSantos2015. In our RNN experiments, we investigate different possibilities of integrating position information: position embeddings, position embeddings with entity presence flags (flags indicating whether the current word is one of the relation arguments), and position indicators BIBREF7 . Objective Function: Ranking Loss Ranking. We applied the ranking loss function proposed in deSantos2015 to train our models. It maximizes the distance between the true label INLINEFORM0 and the best competitive label INLINEFORM1 given a data point INLINEFORM2 . The objective function is DISPLAYFORM0 with INLINEFORM0 and INLINEFORM1 being the scores for the classes INLINEFORM2 and INLINEFORM3 respectively. The parameter INLINEFORM4 controls the penalization of the prediction errors and INLINEFORM5 and INLINEFORM6 are margins for the correct and incorrect classes. Following deSantos2015, we set INLINEFORM7 . We do not learn a pattern for the class Other but increase its difference to the best competitive label by using only the second summand in Equation EQREF10 during training. Experiments and Results We used the relation classification dataset of the SemEval 2010 task 8 BIBREF8 . It consists of sentences which have been manually labeled with 19 relations (9 directed relations and one artificial class Other). 8,000 sentences have been distributed as training set and 2,717 sentences served as test set. For evaluation, we applied the official scoring script and report the macro F1 score which also served as the official result of the shared task. RNN and CNN models were implemented with theano BIBREF12 , BIBREF13 . For all our models, we use L2 regularization with a weight of 0.0001. For CNN training, we use mini batches of 25 training examples while we perform stochastic gradient descent for the RNN. The initial learning rates are 0.2 for the CNN and 0.01 for the RNN. We train the models for 10 (CNN) and 50 (RNN) epochs without early stopping. 
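As a concrete illustration of the ranking objective described above, the following is a minimal sketch for a single training example. The scaling factor and the margins (gamma = 2, m+ = 2.5, m- = 0.5 are commonly used with this loss) are assumptions, since the excerpt does not state the exact values; the special treatment of the class Other follows the description above.

```python
import numpy as np

def ranking_loss(scores, gold, gamma=2.0, m_pos=2.5, m_neg=0.5, other_class=None):
    """Pairwise ranking loss for one training example.

    scores: 1-D array of class scores produced by the network.
    gold:   index of the correct class.
    """
    scores = np.asarray(scores, dtype=float)
    # Best competitive class: the highest-scoring incorrect class.
    competitors = np.delete(np.arange(len(scores)), gold)
    c_neg = competitors[np.argmax(scores[competitors])]

    neg_term = np.log1p(np.exp(gamma * (m_neg + scores[c_neg])))
    if gold == other_class:
        # For the artificial class Other, only the competitive term is used.
        return neg_term
    pos_term = np.log1p(np.exp(gamma * (m_pos - scores[gold])))
    return pos_term + neg_term

print(ranking_loss([1.2, -0.3, 0.8], gold=0))
```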
As activation function, we apply tanh for the CNN and capped ReLU for the RNN. For tuning the hyperparameters, we split the training data into two parts: 6.5k (training) and 1.5k (development) sentences. We also tuned the learning rate schedule on dev. Beside of training single models, we also report ensemble results for which we combined the presented single models with a voting process. Performance of CNNs As a baseline system, we implemented a CNN similar to the one described by zeng2014. It consists of a standard convolutional layer with filters with only one window size, followed by a softmax layer. As input it uses the middle context. In contrast to zeng2014, our CNN does not have an additional fully connected hidden layer. Therefore, we increased the number of convolutional filters to 1200 to keep the number of parameters comparable. With this, we obtain a baseline result of 73.0. After including 5 dimensional position features, the performance was improved to 78.6 (comparable to 78.9 as reported by zeng2014 without linguistic features). In the next step, we investigate how this result changes if we successively add further features to our CNN: multi-windows for convolution (window sizes: 2,3,4,5 and 300 feature maps each), ranking layer instead of softmax and our proposed extended middle context. Table TABREF12 shows the results. Note that all numbers are produced by CNNs with a comparable number of parameters. We also report F1 for increasing the word embedding dimensionality from 50 to 400. The position embedding dimensionality is 5 in combination with 50 dimensional word embeddings and 35 with 400 dimensional word embeddings. Our results show that especially the ranking layer and the embedding size have an important impact on the performance. Performance of RNNs As a baseline for the RNN models, we apply a uni-directional RNN which predicts the relation after processing the whole sentence. With this model, we achieve an F1 score of 61.2 on the SemEval test set. Afterwards, we investigate the impact of different position features on the performance of uni-directional RNNs (position embeddings, position embeddings concatenated with a flag indicating whether the current word is an entity or not, and position indicators BIBREF7 ). The results indicate that position indicators (i.e. artificial words that indicate the entity presence) perform the best on the SemEval data. We achieve an F1 score of 73.4 with them. However, the difference to using position embeddings with entity flags is not statistically significant. Similar to our CNN experiments, we successively vary the RNN models by using bi-directionality, by adding connections between the hidden layers (“connectionist”), by applying ranking instead of softmax to predict the relation and by increasing the word embedding dimension to 400. The results in Table TABREF14 show that all of these variations lead to statistically significant improvements. Especially the additional hidden layer connections and the integration of the ranking layer have a large impact on the performance. Combination of CNNs and RNNs Finally, we combine our CNN and RNN models using a voting process. For each sentence in the test set, we apply several CNN and RNN models presented in Tables TABREF12 and TABREF14 and predict the class with the most votes. In case of a tie, we pick one of the most frequent classes randomly. The combination achieves an F1 score of 84.9 which is better than the performance of the two NN types alone. 
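The voting scheme used for this combination can be sketched as follows; the model outputs shown are hypothetical.

```python
import random
from collections import Counter

def vote(predictions):
    """predictions: list of class labels, one per model, for a single test sentence.
    Returns the majority class; ties are broken randomly among the most frequent."""
    counts = Counter(predictions)
    top = max(counts.values())
    tied = [label for label, c in counts.items() if c == top]
    return random.choice(tied)

# Hypothetical outputs of several CNN and RNN models for one sentence.
print(vote(["Cause-Effect(e2,e1)", "Cause-Effect(e2,e1)",
            "Entity-Origin(e1,e2)", "Cause-Effect(e2,e1)", "Other"]))
```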
It, thus, confirms our assumption that the networks provide complementary information: while the RNN computes a weighted combination of all words in the sentence, the CNN extracts the most informative n-grams for the relation and only considers their resulting activations. Comparison with State of the Art Table TABREF16 shows the results of our models ER-CNN (extended ranking CNN) and R-RNN (ranking RNN) in the context of other state-of-the-art models. Our proposed models obtain state-of-the-art results on the SemEval 2010 task 8 data set without making use of any linguistic features. Conclusion In this paper, we investigated different features and architectural choices for convolutional and recurrent neural networks for relation classification without using any linguistic features. For convolutional neural networks, we presented a new context representation for relation classification. Furthermore, we introduced connectionist recurrent neural networks for sentence classification tasks and performed the first experiments with ranking recurrent neural networks. Finally, we showed that even a simple combination of convolutional and recurrent neural networks improved results. With our neural models, we achieved new state-of-the-art results on the SemEval 2010 task 8 benchmark data. Acknowledgments Heike Adel is a recipient of the Google European Doctoral Fellowship in Natural Language Processing and this research is supported by this fellowship. This research was also supported by Deutsche Forschungsgemeinschaft: grant SCHU 2246/4-2.
How do they obtain the new context representation?
They use two independent convolutional and max-pooling layers on (1) a combination of the left context, the left entity and the middle context; and (2) a combination of the middle context, the right entity and the right context. The two results are concatenated after pooling to obtain the new context representation.
Introduction Named Entity Recognition (NER) is a foremost NLP task that labels each atomic element of a sentence with a specific category such as "PERSON", "LOCATION", "ORGANIZATION" and others BIBREF0. There has been extensive NER research on English, German, Dutch and Spanish BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, and notable research on low-resource South Asian languages like Hindi BIBREF6, Indonesian BIBREF7 and other Indian languages (Kannada, Malayalam, Tamil and Telugu) BIBREF8. However, there has been no study on developing neural NER for the Nepali language. In this paper, we propose a neural Nepali NER using a recent state-of-the-art architecture operating at the grapheme level, which requires neither hand-crafted features nor data pre-processing. Recent neural architectures like BIBREF1 are used to relax the need to hand-craft features and to rely on part-of-speech tags to determine the category of an entity. However, such architectures have been studied for languages like English and German and have not been applied to languages like Nepali, which is a low-resource language, i.e., there is only a limited dataset to train the model. Traditional methods like Hidden Markov Models (HMM) with rule-based approaches BIBREF9, BIBREF10, and Support Vector Machines (SVM) with manual feature engineering BIBREF11 have been applied, but they perform poorly compared to neural approaches. However, there has been no research on Nepali NER using neural networks. Therefore, we created a named entity annotated dataset, partly with the help of Dataturk, to train a neural model. The texts used for this dataset were collected from various daily news sources in Nepal around the years 2015-2016. Our contributions are the following: We present a novel Named Entity Recognizer (NER) for the Nepali language; to the best of our knowledge, we are the first to propose a neural Nepali NER. As there is no good-quality dataset to train NER systems, we release a dataset to support future research. We perform an empirical evaluation of our model against state-of-the-art models, with a relative improvement of up to 10%. In this paper, we present works similar to ours in Section SECREF2. We describe our approach and dataset statistics in Sections SECREF3 and SECREF4, followed by our experiments, evaluation and discussion in Sections SECREF5, SECREF6, and SECREF7. We conclude with our observations in Section SECREF8. To facilitate further research, our code and dataset will be made available at github.com/link-yet-to-be-updated Related Work There has been a handful of research on the Nepali NER task based on approaches like Support Vector Machines with gazetteer lists BIBREF11 and Hidden Markov Models with gazetteer lists BIBREF9, BIBREF10. BIBREF11 uses an SVM along with features like the first word, word length, digit features and gazetteers (person, organization, location, middle name, verb, designation and others). It uses a one-vs-rest classification model to classify each word into the different entity classes. However, it does not take the context words into account while training the model. Similarly, BIBREF9 and BIBREF10 use a Hidden Markov Model with an n-gram technique for extracting POS tags. POS tags corresponding to common nouns, proper nouns or a combination of both are merged, and a gazetteer list is then used as a look-up table to identify the named entities.
Researchers have shown that neural networks like CNNs BIBREF12, RNNs BIBREF13, LSTMs BIBREF14 and GRUs BIBREF15 can capture the semantic knowledge of a language better with the help of pre-trained embeddings like word2vec BIBREF16, GloVe BIBREF17 or fastText BIBREF18. Similar approaches have been applied to many South Asian languages like Hindi BIBREF6, Indonesian BIBREF7 and Bengali BIBREF19. In this paper, we present a neural network architecture for the NER task in the Nepali language, which requires neither manual feature engineering nor data pre-processing during training. First, we compare BiLSTM BIBREF14, BiLSTM+CNN BIBREF20, BiLSTM+CRF BIBREF1 and BiLSTM+CNN+CRF BIBREF2 models with a CNN model BIBREF0 and the Stanford CRF model BIBREF21. Second, we compare models trained on general word embeddings alone, word embeddings + character-level embeddings, word embeddings + part-of-speech (POS) one-hot encoding, and word embeddings + grapheme-clustered (sub-word) embeddings BIBREF22. The experiments were performed on the dataset that we created and on the dataset received from the ILPRL lab. Our extensive study shows that augmenting word embeddings with character- or grapheme-level representations and a POS one-hot encoding vector yields better results than using general word embeddings alone. Approach In this section, we describe our approach to building our model. This model is partly inspired by multiple models BIBREF20, BIBREF1, and BIBREF2. Approach ::: Bidirectional LSTM We use a bi-directional LSTM to capture word representations in both the forward and the reverse direction of a sentence. Generally, LSTMs take inputs from the left (past) of the sentence and compute the hidden state. However, it has proven beneficial BIBREF23 to use a bi-directional LSTM, where hidden states are also computed from the right (future) of the sentence, and both hidden states are concatenated to produce the final output $h_t$=[$\overrightarrow{h_t}$;$\overleftarrow{h_t}$], where $\overrightarrow{h_t}$ and $\overleftarrow{h_t}$ are the hidden states computed in the forward and backward directions respectively. Approach ::: Features ::: Word embeddings We have used Word2Vec BIBREF16, GloVe BIBREF17 and FastText BIBREF18 word vectors of 300 dimensions. These vectors were trained on the corpus obtained from the Nepali National Corpus. This pre-lemmatized corpus consists of 14 million words from books, web texts and newspapers. This corpus was mixed with the texts from the dataset before training the CBOW and skip-gram versions of word2vec using the gensim library BIBREF24. The trained model contains vectors for 72,782 unique words. Light pre-processing was performed on the corpus before training: for example, invalid characters and characters other than Devanagari were removed, but punctuation and numbers were kept. We set the context window to 10, and rare words whose count is below 5 were dropped. These word embeddings were not frozen during training, because fine-tuning word embeddings helps achieve better performance compared to keeping them frozen BIBREF20. We use fastText embeddings in particular because of their sub-word representation ability, which is very useful in a highly inflectional language, as shown in Table TABREF25. We trained the word embeddings in such a way that the sub-word size remains between 1 and 4.
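A minimal sketch of training these skip-gram fastText embeddings with the hyper-parameters mentioned above (300 dimensions, window of 10, minimum count of 5, sub-word size between 1 and 4), assuming the gensim 4.x API; the toy corpus below stands in for the Nepali National Corpus mixed with the dataset texts.

```python
from gensim.models import FastText

# Toy stand-in for the real corpus; any iterable of tokenized sentences works here.
corpus = [["नेपाल", "सरकार", "काठमाडौं", "उपत्यका"]] * 10

model = FastText(
    sentences=corpus,
    vector_size=300,   # 300-dimensional vectors (use size= with gensim 3.x)
    sg=1,              # skip-gram variant
    window=10,         # context window of 10
    min_count=5,       # drop rare words with count below 5
    min_n=1, max_n=4,  # sub-word (character n-gram) length between 1 and 4
)
print(model.wv["नेपाल"].shape)   # (300,)
```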
We particularly chose this size because in the Nepali language a single letter can also be a word, for example e, t, C, r, l, n and u, and a single grapheme (sub-word) can be formed by combining dependent vowel signs with consonant letters, for example C + O + = CO; here, three different consonant letters combine into a single sub-word. The two-dimensional visualization of an example word npAl is shown in FIGREF14. The Principal Component Analysis (PCA) technique was used to generate this visualization, which helps us analyze the nearest neighbor words of a given sample word. 84 and 104 nearest neighbors were observed using the word2vec and fastText embeddings respectively on the same corpus. Approach ::: Features ::: Character-level embeddings BIBREF20 and BIBREF2 showed that character-level embeddings extracted using a CNN, when combined with word embeddings, enhance NER model performance significantly, as they are able to capture the morphological features of a word. Figure FIGREF7 shows the grapheme-level CNN used in our model, where the inputs to the CNN are graphemes. The character-level CNN is built in a similar fashion, except that the inputs are characters. Grapheme- or character-level embeddings of dimension 30 are randomly initialized with real values drawn uniformly from [0,1]. Approach ::: Features ::: Grapheme-level embeddings A grapheme is an atomic meaningful unit in the writing system of any language. Since the Nepali language is highly morphologically inflectional, we compared grapheme-level representations with character-level representations to evaluate their effect. For example, in character-level embedding, each character of the word npAl, i.e. n + + p + A + l, has its own embedding. In grapheme-level embedding, however, the word npAl is clustered into graphemes, resulting in n + pA + l, and each grapheme has its own embedding. This grapheme-level embedding achieves scores on par with character-level embedding in highly inflectional languages like Nepali, because graphemes also capture syntactic information similar to characters. We created grapheme clusters using the uniseg package, which is helpful for Unicode text segmentation. Approach ::: Features ::: Part-of-speech (POS) one hot encoding We created a one-hot encoded vector of POS tags and concatenated it with the pre-trained word embeddings before passing it to the BiLSTM network. A sample of the data is shown in Figure FIGREF13. Dataset Statistics ::: OurNepali dataset Since there was no publicly available standard Nepali NER dataset and we did not receive any dataset from previous researchers, we had to create our own. This dataset contains sentences collected from daily newspapers of the years 2015-2016. It has three major classes: Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creating the dataset; for example, all punctuation marks and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in the standard CoNLL-2003 IO format BIBREF25. Since this dataset is not originally lemmatized, we lemmatized only the post-positions like Ek, kO, l, mA, m, my, jF, sg, aEG, which are just a few examples among the 299 post-positions in the Nepali language. We obtained these post-positions from sanjaalcorps and added a few more to match our dataset. We will release this list in our github repository. We found that lemmatizing the post-positions boosted the F1 score by almost 10%.
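The grapheme clustering described above for the grapheme-level embeddings can be sketched with the uniseg package as follows; the Devanagari example word is ours, chosen to show how dependent vowel signs attach to consonants, and the grapheme_clusters helper is assumed to be the relevant uniseg entry point.

```python
from uniseg.graphemecluster import grapheme_clusters

word = "नेपाल"                              # an example Devanagari word ("Nepal")
characters = list(word)                     # code points: न, े, प, ा, ल
graphemes = list(grapheme_clusters(word))   # clusters: ने, पा, ल

print(characters)   # character-level units, each gets its own embedding
print(graphemes)    # grapheme-level units, each gets its own embedding
```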
In order to label our dataset with POS-tags, we first created POS annotated dataset of 6946 sentences and 16225 unique words extracted from POS-tagged Nepali National Corpus and trained a BiLSTM model with 95.14% accuracy which was used to create POS-tags for our dataset. The dataset released in our github repository contains each word in newline with space separated POS-tags and Entity-tags. The sentences are separated by empty newline. A sample sentence from the dataset is presented in table FIGREF13. Dataset Statistics ::: ILPRL dataset After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23. Table TABREF24 presents the total entities (PER, LOC, ORG and MISC) from both of the dataset used in our experiments. The dataset is divided into three parts with 64%, 16% and 20% of the total dataset into training set, development set and test set respectively. Experiments In this section, we present the details about training our neural network. The neural network architecture are implemented using PyTorch framework BIBREF26. The training is performed on a single Nvidia Tesla P100 SXM2. We first run our experiment on BiLSTM, BiLSTM-CNN, BiLSTM-CRF BiLSTM-CNN-CRF using the hyper-parameters mentioned in Table TABREF30. The training and evaluation was done on sentence-level. The RNN variants are initialized randomly from $(-\sqrt{k},\sqrt{k})$ where $k=\frac{1}{hidden\_size}$. First we loaded our dataset and built vocabulary using torchtext library. This eased our process of data loading using its SequenceTaggingDataset class. We trained our model with shuffled training set using Adam optimizer with hyper-parameters mentioned in table TABREF30. All our models were trained on single layer of LSTM network. We found out Adam was giving better performance and faster convergence compared to Stochastic Gradient Descent (SGD). We chose those hyper-parameters after many ablation studies. The dropout of 0.5 is applied after LSTM layer. For CNN, we used 30 different filters of sizes 3, 4 and 5. The embeddings of each character or grapheme involved in a given word, were passed through the pipeline of Convolution, Rectified Linear Unit and Max-Pooling. The resulting vectors were concatenated and applied dropout of 0.5 before passing into linear layer to obtain the embedding size of 30 for the given word. This resulting embedding is concatenated with word embeddings, which is again concatenated with one-hot POS vector. Experiments ::: Tagging Scheme Currently, for our experiments we trained our model on IO (Inside, Outside) format for both the dataset, hence the dataset does not contain any B-type annotation unlike in BIO (Beginning, Inside, Outside) scheme. Experiments ::: Early Stopping We used simple early stopping technique where if the validation loss does not decrease after 10 epochs, the training was stopped, else the training will run upto 100 epochs. In our experience, training usually stops around 30-50 epochs. 
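A minimal PyTorch sketch of the grapheme-level CNN feature extractor described above: 30-dimensional grapheme embeddings, filters of widths 3, 4 and 5, ReLU, max-pooling, dropout of 0.5 and a linear projection to a 30-dimensional word feature that is then concatenated with the word embedding and the POS one-hot vector. Whether "30 different filters" means 30 per width or 30 in total is not fully clear from the text; 30 per width is assumed here, as are the grapheme vocabulary size and padding details.

```python
import torch
import torch.nn as nn

class GraphemeCNN(nn.Module):
    def __init__(self, num_graphemes=500, emb_dim=30, num_filters=30,
                 widths=(3, 4, 5), out_dim=30, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(num_graphemes, emb_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, num_filters, kernel_size=w, padding=w - 1)
             for w in widths])
        self.dropout = nn.Dropout(dropout)
        self.project = nn.Linear(num_filters * len(widths), out_dim)

    def forward(self, grapheme_ids):                    # (batch, max_word_len)
        x = self.embed(grapheme_ids).transpose(1, 2)    # (batch, emb_dim, len)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        features = self.dropout(torch.cat(pooled, dim=1))
        return self.project(features)                   # 30-dim word representation

# A hypothetical batch of 4 words, each padded to 8 grapheme ids.
words = torch.randint(1, 500, (4, 8))
print(GraphemeCNN()(words).shape)                       # torch.Size([4, 30])
```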
Experiments ::: Hyper-parameters Tuning We ran our experiment looking for the best hyper-parameters by changing learning rate from (0,1, 0.01, 0.001, 0.0001), weight decay from [$10^{-1}$, $10^{-2}$, $10^{-3}$, $10^{-4}$, $10^{-5}$, $10^{-6}$, $10^{-7}$], batch size from [1, 2, 4, 8, 16, 32, 64, 128], hidden size from [8, 16, 32, 64, 128, 256, 512 1024]. Table TABREF30 shows all other hyper-parameter used in our experiment for both of the dataset. Experiments ::: Effect of Dropout Figure FIGREF31 shows how we end up choosing 0.5 as dropout rate. When the dropout layer was not used, the F1 score are at the lowest. As, we slowly increase the dropout rate, the F1 score also gradually increases, however after dropout rate = 0.5, the F1 score starts falling down. Therefore, we have chosen 0.5 as dropout rate for all other experiments performed. Evaluation In this section, we present the details regarding evaluation and comparison of our models with other baselines. Table TABREF25 shows the study of various embeddings and comparison among each other in OurNepali dataset. Here, raw dataset represents such dataset where post-positions are not lemmatized. We can observe that pre-trained embeddings significantly improves the score compared to randomly initialized embedding. We can deduce that Skip Gram models perform better compared CBOW models for word2vec and fasttext. Here, fastText_Pretrained represents the embedding readily available in fastText website, while other embeddings are trained on the Nepali National Corpus as mentioned in sub-section SECREF11. From this table TABREF25, we can clearly observe that model using fastText_Skip Gram embeddings outperforms all other models. Table TABREF35 shows the model architecture comparison between all the models experimented. The features used for Stanford CRF classifier are words, letter n-grams of upto length 6, previous word and next word. This model is trained till the current function value is less than $1\mathrm {e}{-2}$. The hyper-parameters of neural network experiments are set as shown in table TABREF30. Since, word embedding of character-level and grapheme-level is random, their scores are near. All models are evaluated using CoNLL-2003 evaluation scriptBIBREF25 to calculate entity-wise precision, recall and f1 score. Discussion In this paper we present that we can exploit the power of neural network to train the model to perform downstream NLP tasks like Named Entity Recognition even in Nepali language. We showed that the word vectors learned through fasttext skip gram model performs better than other word embedding because of its capability to represent sub-word and this is particularly important to capture morphological structure of words and sentences in highly inflectional like Nepali. This concept can come handy in other Devanagari languages as well because the written scripts have similar syntactical structure. We also found out that stemming post-positions can help a lot in improving model performance because of inflectional characteristics of Nepali language. So when we separate out its inflections or morphemes, we can minimize the variations of same word which gives its root word a stronger word vector representations compared to its inflected versions. We can clearly imply from tables TABREF23, TABREF24, and TABREF35 that we need more data to get better results because OurNepali dataset volume is almost ten times bigger compared to ILPRL dataset in terms of entities. 
Conclusion and Future work In this paper, we proposed a novel NER for Nepali language and achieved relative improvement of upto 10% and studies different factors effecting the performance of the NER for Nepali language. We also present a neural architecture BiLSTM+CNN(grapheme-level) which turns out to be performing on par with BiLSTM+CNN(character-level) under the same configuration. We believe this will not only help Nepali language but also other languages falling under the umbrellas of Devanagari languages. Our model BiLSTM+CNN(grapheme-level) and BiLSTM+CNN(G)+POS outperforms all other model experimented in OurNepali and ILPRL dataset respectively. Since this is the first named entity recognition research in Nepal language using neural network, there are many rooms for improvement. We believe initializing the grapheme-level embedding with fasttext embeddings might help boosting the performance, rather than randomly initializing it. In future, we plan to apply other latest techniques like BERT, ELMo and FLAIR to study its effect on low-resource language like Nepali. We also plan to improve the model using cross-lingual or multi-lingual parameter sharing techniques by jointly training with other Devanagari languages like Hindi and Bengali. Finally, we would like to contribute our dataset to Nepali NLP community to move forward the research going on in language understanding domain. We believe there should be special committee to create and maintain such dataset for Nepali NLP and organize various competitions which would elevate the NLP research in Nepal. Some of the future works are listed below: Proper initialization of grapheme level embedding from fasttext embeddings. Apply robust POS-tagger for Nepali dataset Lemmatize the OurNepali dataset with robust and efficient lemmatizer Improve Nepali language score with cross-lingual learning techniques Create more dataset using Wikipedia/Wikidata framework Acknowledgments The authors of this paper would like to express sincere thanks to Bal Krishna Bal, Kathmandu University Professor for providing us the POS-tagged Nepali NER data.
How many different types of entities exist in the dataset?
OurNepali contains 3 different types of entities, ILPRL contains 4 different types of entities
Data We build and test our MMT models on the Multi30K dataset BIBREF21 . Each image in Multi30K contains one English (EN) description taken from Flickr30K BIBREF22 and human translations into German (DE), French (FR) and Czech BIBREF23 , BIBREF24 , BIBREF25 . The dataset contains 29,000 instances for training, 1,014 for development, and 1,000 for test. We only experiment with German and French, which are languages for which we have in-house expertise for the type of analysis we present. In addition to the official Multi30K test set (test 2016), we also use the test set from the latest WMT evaluation competition, test 2018 BIBREF25 . Degradation of source In addition to using the Multi30K dataset as is (standard setup), we probe the ability of our models to address the three linguistic phenomena where additional context has been proved important (Section ): ambiguities, gender-neutral words and noisy input. In a controlled experiment where we aim to remove the influence of frequency biases, we degrade the source sentences by masking words through three strategies to replace words by a placeholder: random source words, ambiguous source words and gender unmarked source words. The procedure is applied to the train, validation and test sets. For the resulting dataset generated for each setting, we compare models having access to text-only context versus additional text and multimodal contexts. We seek to get insights into the contribution of each type of context to address each type of degradation. In this setting (RND) we simulate erroneous source words by randomly dropping source content words. We first tag the entire source sentences using the spacy toolkit BIBREF26 and then drop nouns, verbs, adjectives and adverbs and replace these with a default BLANK token. By focusing on content words, we differ from previous work that suggests that neural machine translation is robust to non-content word noise in the source BIBREF27 . In this setting (AMB), we rely on the MLT dataset BIBREF11 which provides a list of source words with multiple translations in the Multi30k training set. We replace ambiguous words with the BLANK token in the source language, which results in two language-specific datasets. In this setting (PERS), we use the Flickr Entities dataset BIBREF28 to identify all the words that were annotated by humans as corresponding to the category person. We then replace such source words with the BLANK token. The statistics of the resulting datasets for the three degradation strategies are shown in Table TABREF10 . We note that RND and PERS are the same for language pairs as the degradation only depends on the source side, while for AMB the words replaced depend on the target language. Models Based on the models described in Section we experiment with eight variants: (a) baseline transformer model (base); (b) base with AIC (base+sum); (c) base with AIF using spacial (base+att) or object based (base+obj) image features; (d) standard deliberation model (del); (e) deliberation models enriched with image information: del+sum, del+att and del+obj. Training In all cases, we optimise our models with cross entropy loss. For deliberation network models, we first train the standard transformer model until convergence, and use it to initialise the encoder and first-pass decoder. For each of the training samples, we follow BIBREF19 and obtain a set of 10-best samples from the first pass decoder, with a beam search of size 10. We use these as the first-pass decoder samples. 
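The RND degradation described above can be sketched as follows, assuming the spacy en_core_web_sm model and a literal BLANK placeholder token; the proportion of content words that is masked is parameterised, since the excerpt does not state it.

```python
import random
import spacy

nlp = spacy.load("en_core_web_sm")   # assumed model; the paper only says spacy is used
CONTENT_POS = {"NOUN", "VERB", "ADJ", "ADV"}

def degrade_rnd(sentence, mask_prob=1.0, seed=0):
    """Replace content words (nouns, verbs, adjectives, adverbs) with BLANK.
    mask_prob < 1.0 masks only a random fraction of them (an assumption,
    since the excerpt does not state how many content words are dropped)."""
    rng = random.Random(seed)
    out = []
    for tok in nlp(sentence):
        if tok.pos_ in CONTENT_POS and rng.random() < mask_prob:
            out.append("BLANK")
        else:
            out.append(tok.text)
    return " ".join(out)

print(degrade_rnd("Two children are kayaking in a body of water ."))
```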
We use Adam as optimiser BIBREF29 and train the model until convergence. Results In this section we present results of our experiments, first in the original dataset without any source degradation (Section SECREF18 ) and then in the setup with various source degradation strategies (Section SECREF25 ). Standard setup Table TABREF14 shows the results of our main experiments on the 2016 and 2018 test sets for French and German. We use Meteor BIBREF31 as the main metric, as in the WMT tasks BIBREF25 . We compare our transformer baseline to transformer models enriched with image information, as well as to the deliberation models, with or without image information. We first note that our multimodal models achieve the state of the art performance for transformer networks (constrained models) on the English-German dataset, as compared to BIBREF30 . Second, our deliberation models lead to significant improvements over this baseline across test sets (average INLINEFORM0 , INLINEFORM1 ). Transformer-based models enriched with image information (base+sum, base+att and base+obj), on the other hand, show no major improvements with respect to the base performance. This is also the case for deliberation models with image information (del+sum, del+att, del+obj), which do not show significant improvement over the vanilla deliberation performance (del). However, as it has been shown in the WMT shared tasks on MMT BIBREF23 , BIBREF24 , BIBREF25 , automatic metrics often fail to capture nuances in translation quality, such as, the ones we expect the visual modality to help with, which – according to human perception – lead to better translations. To test this assumption in our settings, we performed human evaluation involving professional translators and native speakers of both French and German (three annotators). The annotators were asked to rank randomly selected test samples according to how well they convey the meaning of the source, given the image (50 samples per language pair per annotator). For each source segment, the annotator was shown the outputs of three systems: base+att, the current MMT state-of-the-art BIBREF30 , del and del+obj. A rank could be assigned from 1 to 3, allowing ties BIBREF32 . Annotators could assign zero rank to all translations if they were judged incomprehensible. Following the common practice in WMT BIBREF32 , each system was then assigned a score which reflects the proportion of times it was judged to be better or equal other systems. Table TABREF19 shows the human evaluation results. They are consistent with the automatic evaluation results when it comes to the preference of humans towards the deliberation-based setups, but show a more positive outlook regarding the addition of visual information (del+obj over del) for French. Manual inspection of translations suggests that deliberation setups tend to improve both the grammaticality and adequacy of the first pass outputs. For German, the most common modifications performed by the second-pass decoder are substitutions of adjectives and verbs (for test 2016, 15% and 12% respectively, of all the edit distance operations). Changes to adjectives are mainly grammatical, changes to verbs are contextual (e.g., changing laufen to rennen, both verbs mean run, but the second refers to running very fast). For French, 15% of all the changes are substitutions of nouns (for test 2016). These are again very contextual. 
For example, the French word travailleur (worker) is replaced by ouvrier (manual worker) in the contexts where tools, machinery or buildings are mentioned. For our analysis we used again spacy. The information on detected objects is particularly helpful for specific adequacy issues. Figure FIGREF15 demonstrates some such cases. In the first case, the base+att model misses the translation of race car: the German word Rennen translates only the word race. del introduces the word car (Auto) into the translation. Finally, del+obj correctly translates the expression race car (Rennwagen) by exploiting the object information. For French, del translates the source part in a body of water, missing from the base+att translation. del+obj additionally translated the word paddling according to the detected object Paddle. Source degradation setup Results of our source degradation experiments are shown in Table TABREF20 . A first observation is that – as with the standard setup – the performance of our deliberation models is overall better than that of the base models. The results of the multimodal models differ for German and French. For German, del+obj is the most successful configuration and shows statistically significant improvements over base for all setups. Moreover, for RND and AMB, it shows statistically significant improvements over del. However, especially for RND and AMB, del and del+sum are either the same or slightly worse than base. For French, all the deliberation models show statistically significant improvements over base (average INLINEFORM0 , INLINEFORM1 ), but the image information added to del only improve scores significantly for test 2018 RND. This difference in performances for French and German is potentially related to the need of more significant restructurings while translating from English into German. This is where a more complex del+obj architecture is more helpful. This is especially true for RND and AMB setups where blanked words could also be verbs, the part-of-speech most influenced by word order differences between English and German (see the decreasing complexity of translations for del and del+obj for the example (c) in Figure FIGREF21 ). To get an insight into the contribution of different contexts to the resolution of blanks, we performed manual analysis of examples coming from the English-German base, del and del+obj setups (50 random examples per setup), where we count correctly translated blanks per system. The results are shown in Table TABREF27 . As expected, they show that the RND and AMB blanks are more difficult to resolve (at most 40% resolved as compared to 61% for PERS). Translations of the majority of those blanks tend to be guessed by the textual context alone (especially for verbs). Image information is more helpful for PERS: we observe an increase of 10% in resolved blanks for del+obj as compared to del. However, for PERS the textual context is still enough in the majority of the cases: models tend to associate men with sports or women with cooking and are usually right (see Figure FIGREF21 example (c)). The cases where image helps seem to be those with rather generic contexts: see Figure FIGREF21 (b) where enjoying a summer day is not associated with any particular gender and make other models choose homme (man) or femme (woman), and only base+obj chooses enfant (child) (the option closest to the reference). 
In some cases detected objects are inaccurate or not precise enough to be helpful (e.g., when an object Person is detected) and can even harm correct translations. Conclusions We have proposed a novel approach to multimodal machine translation which makes better use of context, both textual and visual. Our results show that further exploring textual context through deliberation networks already leads to better results than the previous state of the art. Adding visual information, and in particular structural representations of this information, proved beneficial when input text contains noise and the language pair requires substantial restructuring from source to target. Our findings suggest that the combination of a deliberation approach and information from additional modalities is a promising direction for machine translation that is robust to noisy input. Our code and pre-processing scripts are available at https://github.com/ImperialNLP/MMT-Delib. Acknowledgments The authors thank the anonymous reviewers for their useful feedback. This work was supported by the MultiMT (H2020 ERC Starting Grant No. 678017) and MMVC (Newton Fund Institutional Links Grant, ID 352343575) projects. We also thank the annotators for their valuable help.
What dataset does this approach achieve state of the art results on?
the English-German dataset
1,833
qasper
3k
Introduction As social media, specially Twitter, takes on an influential role in presidential elections in the U.S., natural language processing of political tweets BIBREF0 has the potential to help with nowcasting and forecasting of election results as well as identifying the main issues with a candidate – tasks of much interest to journalists, political scientists, and campaign organizers BIBREF1. As a methodology to obtain training data for a machine learning system that analyzes political tweets, BIBREF2 devised a crowdsourcing scheme with variable crowdworker numbers based on the difficulty of the annotation task. They provided a dataset of tweets where the sentiments towards political candidates were labeled both by experts in political communication and by crowdworkers who were likely not domain experts. BIBREF2 revealed that crowdworkers can match expert performance relatively accurately and in a budget-efficient manner. Given this result, the authors envisioned future work in which groundtruth labels would be crowdsourced for a large number of tweets and then used to design an automated NLP tool for political tweet analysis. The question we address here is: How accurate are existing NLP tools for political tweet analysis? These tools would provide a baseline performance that any new machine learning system for political tweet analysis would compete against. We here explore whether existing NLP systems can answer the questions "What sentiment?" and "Towards whom?" accurately for the dataset of political tweets provided by BIBREF2. In our analysis, we include NLP tools with publicly-available APIs, even if the tools were not specifically designed for short texts like tweets, and, in particular, political tweets. Our experiments reveal that the task of entity-level sentiment analysis is difficult for existing tools to answer accurately while the recognition of the entity, here, which politician, was easier. NLP Toolkits NLP toolkits typically have the following capabilities: tokenization, part-of-speech (PoS) tagging, chunking, named entity recognition and sentiment analysis. In a study by BIBREF3, it is shown that the well-known NLP toolkits NLTK BIBREF4, Stanford CoreNLP BIBREF5, and TwitterNLP BIBREF6 have tokenization, PoS tagging and NER modules in their pipelines. There are two main approaches for NER: (1) rule-based and (2) statistical or machine learning based. The most ubiquitous algorithms for sequence tagging use Hidden Markov Models BIBREF7, Maximum Entropy Markov Models BIBREF7, BIBREF8, or Conditional Random Fields BIBREF9. Recent works BIBREF10, BIBREF11 have used recurrent neural networks with attention modules for NER. Sentiment detection tools like SentiStrength BIBREF12 and TensiStrength BIBREF13 are rule-based tools, relying on various dictionaries of emoticons, slangs, idioms, and ironic phrases, and set of rules that can detect the sentiment of a sentence overall or a targeted sentiment. Given a list of keywords, TensiStrength (similar to SentiStrength) reports the sentiment towards selected entities in a sentence, based on five levels of relaxation and five levels of stress. Among commercial NLP toolkits (e.g., BIBREF14, BIBREF15, BIBREF16), we selected BIBREF17 and BIBREF18 for our experiments, which, to the best of our knowledge, are the only publicly accessible commercial APIs for the task of entity-level sentiment analysis that is agnostic to the text domain. 
We also report results of TensiStrength BIBREF13, TwitterNLP BIBREF6, BIBREF19, CogComp-NLP BIBREF20, and Stanford NLP NER BIBREF21. Dataset and Analysis Methodology We used the 1,000-tweet dataset by BIBREF2 that contains the named-entities labels and entity-level sentiments for each of the four 2016 presidential primary candidates Bernie Sanders, Donald Trump, Hillary Clinton, and Ted Cruz, provided by crowdworkers, and by experts in political communication, whose labels are considered groundtruth. The crowdworkers were located in the US and hired on the BIBREF22 platform. For the task of entity-level sentiment analysis, a 3-scale rating of "negative," "neutral," and "positive" was used by the annotators. BIBREF2 proposed a decision tree approach for computing the number of crowdworkers who should analyze a tweet based on the difficulty of the task. Tweets are labeled by 2, 3, 5, or 7 workers based on the difficulty of the task and the level of disagreement between the crowdworkers. The model computes the number of workers based on how long a tweet is, the presence of a link in a tweet, and the number of present sarcasm signals. Sarcasm is often used in political tweets and causes disagreement between the crowdworkers. The tweets that are deemed to be sarcastic by the decision tree model, are expected to be more difficult to annotate, and hence are allocated more crowdworkers to work on. We conducted two sets of experiments. In the first set, we used BIBREF23, BIBREF17, and BIBREF18, for entity-level sentiment analysis; in the second set, BIBREF17, BIBREF19, BIBREF24, BIBREF25, and BIBREF26, BIBREF18 for named-entity recognition. In the experiments that we conducted with TwitterNLP for named-entity recognition, we worked with the default values of the model. Furthermore, we selected the 3-class Stanford NER model, which uses the classes “person,” “organization,” and “location” because it resulted in higher accuracy compared to the 7-class model. For CogComp-NLP NER we used Ontonotes 5.0 NER model BIBREF27. For spaCy NER we used the `en_core_web_lg' model. We report the experimental results for our two tasks in terms of the correct classification rate (CCR). For sentiment analysis, we have a three-class problem (positive, negative, and neutral), where the classes are mutually exclusive. The CCR, averaged for a set of tweets, is defined to be the number of correctly-predicted sentiments over the number of groundtruth sentiments in these tweets. For NER, we consider that each tweet may reference up to four candidates, i.e., targeted entities. The CCR, averaged for a set of tweets, is the number of correctly predicted entities (candidates) over the number of groundtruth entities (candidates) in this set. Results and Discussion The dataset of 1,000 randomly selected tweets contains more than twice as many tweets about Trump than about the other candidates. In the named-entity recognition experiment, the average CCR of crowdworkers was 98.6%, while the CCR of the automated systems ranged from 77.2% to 96.7%. For four of the automated systems, detecting the entity Trump was more difficult than the other entities (e.g., spaCy 72.7% for the entity Trump vs. above 91% for the other entities). An example of incorrect NER is shown in Figure FIGREF1 top. The difficulties the automated tools had in NER may be explained by the fact that the tools were not trained on tweets, except for TwitterNLP, which was not in active development when the data was created BIBREF1. 
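Both the NER numbers above and the sentiment numbers that follow are reported as CCRs. As a concrete reference, the metric reduces to the short computation below; the per-tweet record layout is hypothetical, not a structure from the paper.

```python
from typing import Dict, List

def ccr(tweets: List[Dict[str, Dict[str, str]]]) -> float:
    """Correct classification rate: correctly predicted entity-level sentiments
    divided by the number of groundtruth sentiments, over a set of tweets.
    Each tweet record looks like
    {"gold": {"Trump": "negative"}, "pred": {"Trump": "neutral"}}."""
    correct = total = 0
    for tweet in tweets:
        for entity, gold_label in tweet["gold"].items():
            total += 1
            correct += int(tweet["pred"].get(entity) == gold_label)
    return correct / total if total else 0.0
```

For NER, the same ratio is taken over correctly detected versus groundtruth candidate entities rather than over sentiment labels.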
In the sentiment analysis experiments, we found that a tweet may contain multiple sentiments. The groundtruth labels contain 210 positive sentiments, 521 neutral sentiments, and 305 negative sentiments towards the candidates. We measured the CCR, across all tweets, to be 31.7% for Rosette Text Analytics, 43.2% for Google Cloud, 44.2% for TensiStrength, and 74.7% for the crowdworkers. This means the difference between the performance of the tools and the crowdworkers is significant – more than 30 percentage points. Crowdworkers correctly identified 62% of the neutral, 85% of the positive, and 92% of the negative sentiments. Google Cloud correctly identified 88% of the neutral sentiments, but only 3% of the positive, and 19% of the negative sentiments. TensiStrength correctly identified 87.2% of the neutral sentiments, but 10.5% of the positive, and 8.1% of the negative sentiments. Rosette Text Analytics correctly identified 22.7% of neutral sentiments, 38.1% of negative sentiments and 40.9% of positive sentiments. The lowest and highest CCRs pertain to tweets about Trump and Sanders for both Google Cloud and TensiStrength, Trump and Clinton for Rosette Text Analytics, and Clinton and Cruz for crowdworkers. An example of incorrect ELS analysis is shown in Figure FIGREF1 bottom. Conclusions and Future Work Our results show that existing NLP systems cannot accurately perform sentiment analysis of political tweets in the dataset we experimented with. Labeling by humans, even non-expert crowdworkers, yields accuracy results that are well above the results of existing automated NLP systems. In future work we will therefore use a crowdworker-labeled dataset to train a new machine-learning based NLP system for tweet analysis. We will ensure that the training data is balanced among classes. Our plan is to use state-of-the-art deep neural networks and compare their performance for entity-level sentiment analysis of political tweets. Acknowledgments Partial support of this work by the Hariri Institute for Computing and Computational Science & Engineering at Boston University (to L.G.) and a Google Faculty Research Award (to M.B. and L.G.) is gratefully acknowledged. Additionally, we would like to thank Daniel Khashabi for his help in running the CogComp-NLP Python API and Mike Thelwal for his help with TensiStrength. We are also grateful to the Stanford NLP group for clarifying some of the questions we had with regard to the Stanford NER tool.
Which toolkits do they use?
BIBREF17, BIBREF18, TensiStrength BIBREF13, TwitterNLP BIBREF6, BIBREF19, CogComp-NLP BIBREF20, Stanford NLP NER BIBREF21
1,452
qasper
3k
Background Teaching machines to read and comprehend a given passage/paragraph and answer its corresponding questions is a challenging task. It is also one of the long-term goals of natural language understanding, and has important applications in, e.g., building intelligent agents for conversation and customer service support. In a real-world setting, it is necessary to judge whether the given questions are answerable from the available knowledge, and then to generate a correct answer for the questions whose answer can be inferred from the passage, or an empty answer (marking the question as unanswerable) otherwise. In comparison with many existing MRC systems BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , which extract answers by finding a sub-string in the passages/paragraphs, we propose a model that not only extracts answers but also predicts whether such an answer should exist. Using a multi-task learning approach (cf. BIBREF5 ), we extend the Stochastic Answer Network (SAN) BIBREF1 MRC answer span detector to include a classifier that predicts whether the question is unanswerable. The unanswerable classifier is a pair-wise classification model BIBREF6 which predicts a label indicating whether the given pair of a passage and a question is unanswerable. The two models share the same lower layers to reduce the number of parameters, and have separate top layers for the different tasks (the span detector and the binary classifier). Our model is simple and intuitive, yet efficient. Without relying on large pre-trained language models (ELMo) BIBREF7 , the proposed model achieves results competitive with the state of the art on the Stanford Question Answering Dataset (SQuAD) 2.0. The contribution of this work is summarized as follows. First, we propose a simple yet efficient model for MRC that handles unanswerable questions and is optimized jointly. Second, our model achieves competitive results on SQuAD v2.0. Model Machine Reading Comprehension is a task which takes a question INLINEFORM0 and a passage/paragraph INLINEFORM1 as inputs, and aims to find an answer span INLINEFORM2 in INLINEFORM3 . We assume that if the question is answerable, the answer INLINEFORM4 exists in INLINEFORM5 as a contiguous text string; otherwise, INLINEFORM6 is an empty string indicating an unanswerable question. Note that to handle unanswerable questions, we manually append a dummy text string NULL at the end of each corresponding passage/paragraph. Formally, the answer is formulated as INLINEFORM7 . In case of unanswerable questions, INLINEFORM8 points to the last token of the passage. Our model is a variation of SAN BIBREF1 , as shown in Figure FIGREF2 . The main difference is the additional binary classifier that judges whether the question is unanswerable. Roughly, the model consists of two parts: shared layers and task-specific layers. The shared part is almost identical to the lower layers of SAN, which comprise a lexicon encoding layer, a contextual layer and a memory generation layer. On top of it, there are different answer modules for the different tasks. We employ the SAN answer module for the span detector and a one-layer feed-forward neural network for the binary classification task. The model can also be viewed as multi-task learning BIBREF8 , BIBREF5 , BIBREF9 . We will briefly describe the model from the ground up as follows. Detailed descriptions can be found in BIBREF1 . Lexicon Encoding Layer.
We map the symbolic/surface features of INLINEFORM0 and INLINEFORM1 into neural space via word embeddings, 16-dim part-of-speech (POS) tagging embeddings, 8-dim named-entity embeddings and 4-dim hard-rule features. Note that we use small embedding sizes for POS and NER to reduce the model size, as these features mainly serve the role of coarse-grained word clusters. Additionally, we use question-enhanced passage word embeddings, which can be viewed as soft matching between questions and passages. Finally, we use two separate two-layer position-wise Feed-Forward Networks (FFN) BIBREF11 , BIBREF1 to map both question and passage encodings into the same dimension. As a result, we obtain the final lexicon embeddings for the tokens of INLINEFORM2 as a matrix INLINEFORM3 , and for the tokens of INLINEFORM4 as INLINEFORM5 . Contextual Encoding Layer. A shared two-layer BiLSTM is used on top to encode the contextual information of both passages and questions. To avoid overfitting, we concatenate pre-trained 600-dimensional CoVe vectors BIBREF12 , trained on a German-English machine translation dataset, with the aforementioned lexicon embeddings as the final input of the contextual encoding layer, and also with the output of the first contextual encoding layer as the input of its second encoding layer. Thus, we obtain the final representation of the contextual encoding layer by a concatenation of the outputs of the two BiLSTM layers: INLINEFORM0 for questions and INLINEFORM1 for passages. Memory Generation Layer. In this layer, we generate a working memory by fusing information from both passages INLINEFORM0 and questions INLINEFORM1 . The attention function BIBREF11 is used to compute the similarity score between passages and questions as: INLINEFORM2 Note that INLINEFORM0 and INLINEFORM1 are transformed from INLINEFORM2 and INLINEFORM3 by a one-layer neural network INLINEFORM4 , respectively. A question-aware passage representation is computed as INLINEFORM5 . After that, we use the method of BIBREF13 to apply self-attention to the passage: INLINEFORM6 where INLINEFORM0 means that we only drop diagonal elements on the similarity matrix (i.e., attention with itself). Finally, INLINEFORM1 and INLINEFORM2 are concatenated and passed through a BiLSTM to form the final memory: INLINEFORM3 . Span detector. We adopt a multi-turn answer module for the span detector BIBREF1 . Formally, at time step INLINEFORM0 in the range of INLINEFORM1 , the state is defined by INLINEFORM2 . The initial state INLINEFORM3 is the summary of INLINEFORM4 : INLINEFORM5 , where INLINEFORM6 . Here, INLINEFORM7 is computed from the previous state INLINEFORM8 and memory INLINEFORM9 : INLINEFORM10 and INLINEFORM11 . Finally, a bilinear function is used to find the begin and end points of answer spans at each reasoning step INLINEFORM12 : DISPLAYFORM0 DISPLAYFORM1 The final prediction is the average over all time steps: INLINEFORM0 . We randomly apply dropout at the step level during training, as done in BIBREF1 . Unanswerable classifier. We adopt a one-layer neural network as our unanswerable binary classifier: DISPLAYFORM0 , where INLINEFORM0 is the summary of the memory: INLINEFORM1 , where INLINEFORM2 . INLINEFORM3 denotes the probability that the question is unanswerable.
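The displayed equations for the unanswerable classifier are elided above, so the following PyTorch sketch should be read as one plausible realisation of the description (a learned attention summary of the memory followed by a one-layer network with a sigmoid), not as the authors' implementation.

```python
import torch
import torch.nn as nn

class UnanswerableClassifier(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.attn = nn.Linear(hidden_size, 1)   # scores each memory vector (assumed pooling form)
        self.out = nn.Linear(hidden_size, 1)    # the one-layer binary classifier from the text

    def forward(self, memory: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # memory: (batch, passage_len, hidden); mask: (batch, passage_len), True for real tokens
        scores = self.attn(memory).squeeze(-1)
        scores = scores.masked_fill(~mask, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)                  # summary weights over the memory
        summary = torch.bmm(alpha.unsqueeze(1), memory).squeeze(1)
        return torch.sigmoid(self.out(summary)).squeeze(-1)    # probability of being unanswerable
```

The span detector's multi-turn answer module is omitted here; only the binary head that the text adds on top of SAN is sketched.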
Objective The objective function of the joint model has two parts: DISPLAYFORM0 Following BIBREF0 , the span loss function is defined as: DISPLAYFORM0 The objective function of the binary classifier is defined as: DISPLAYFORM0 where INLINEFORM0 is a binary variable: INLINEFORM1 indicates that the question is unanswerable and INLINEFORM2 denotes that the question is answerable. Setup We evaluate our system on the SQuAD 2.0 dataset BIBREF14 , a new MRC dataset which is a combination of the Stanford Question Answering Dataset (SQuAD) 1.0 BIBREF15 and additional unanswerable question-answer pairs. There are around 100K answerable pairs and around 53K unanswerable questions. The dataset contains about 23K passages, which come from approximately 500 Wikipedia articles. All the questions and answers are obtained by crowd-sourcing. Two evaluation metrics are used: Exact Match (EM) and Macro-averaged F1 score (F1) BIBREF14 . Implementation details We utilize the spaCy tool to tokenize both passages and questions, and to generate lemma, part-of-speech and named-entity tags. The word embeddings are initialized with pre-trained 300-dimensional GloVe vectors BIBREF10 . A 2-layer BiLSTM is used to encode the contextual information of both questions and passages. Regarding the hidden size of our model, we search greedily among INLINEFORM0 . During training, Adamax BIBREF16 is used as our optimizer. The mini-batch size is set to 32. The learning rate is initialized to 0.002 and is halved after every 10 epochs. The dropout rate is set to 0.1. To prevent overfitting, we also randomly set 0.5% of the words in both passages and questions as unknown words during training. Here, we use a special token unk to indicate a word which doesn't appear in GloVe. INLINEFORM1 in Eq EQREF9 is set to 1. Results We would like to investigate the effectiveness of the proposed joint model. To do so, the same shared layer/architecture is employed in the following variants of the proposed model: The results in terms of EM and F1 are summarized in Table TABREF20 . We observe that Joint SAN outperforms the SAN baseline by a large margin, e.g., 67.89 vs 69.27 (+1.38) and 70.68 vs 72.20 (+1.52) in terms of EM and F1 scores respectively, which demonstrates the effectiveness of the joint optimization. By incorporating the output of the classifier into Joint SAN, we obtain a slight further improvement, e.g., 72.2 vs 72.66 (+0.46) in terms of F1 score. By analyzing the results, we found that in most cases where our model extracts a NULL string answer, the classifier also predicts it as an unanswerable question with a high probability. Table TABREF21 compares our results with those published in the literature. Our model achieves state-of-the-art results on the development set in the setting without a large pre-trained language model (ELMo). Compared with the much more complicated R.M.-Reader + Verifier model, which includes several components, our model still outperforms it by 0.7 in terms of F1 score. Furthermore, we observe that ELMo gives a large boost in performance, e.g., 2.8 points in terms of F1 for DocQA. This encourages us to incorporate ELMo into our model in the future. Analysis. To better understand our model, we analyze the accuracy of the classifier in our joint model. We obtain 75.3% classification accuracy on the development set with a threshold of 0.5. By increasing the value of INLINEFORM0 in Eq EQREF9 , the classification accuracy reached 76.8% ( INLINEFORM1 ); however, the final results of our model only show a small improvement (+0.2 in terms of F1 score).
This shows that it is important to balance these two components: the span detector and the unanswerable classifier. Conclusion To sum up, we proposed a simple yet efficient model based on SAN and showed that the joint learning algorithm boosts performance on SQuAD 2.0. We would also like to incorporate ELMo into our model in future work. Acknowledgments We thank Yichong Xu, Shuohang Wang and Sheng Zhang for valuable discussions and comments. We also thank Robin Jia for his help with the SQuAD evaluations.
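For completeness, here is a compact sketch of the joint objective described in the Objective section, under the assumption that the elided span loss is the usual cross-entropy over start and end positions and the classifier loss is a binary cross-entropy, with the weighting term set to 1 as in the implementation details. It is a sketch of the described training criterion, not the authors' code.

```python
import torch
import torch.nn.functional as F

def joint_loss(start_logits, end_logits, start_gold, end_gold,
               unans_prob, unans_gold, lam: float = 1.0):
    """Span loss plus weighted unanswerability loss (assumed standard forms)."""
    span_loss = (F.cross_entropy(start_logits, start_gold)
                 + F.cross_entropy(end_logits, end_gold))
    cls_loss = F.binary_cross_entropy(unans_prob, unans_gold.float())
    return span_loss + lam * cls_loss
```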
Do they use attention?
Yes
1,687
qasper
3k
Introduction Bidirectional Encoder Representations from Transformers (BERT) is a novel Transformer BIBREF0 model, which recently achieved state-of-the-art performance in several language understanding tasks, such as question answering, natural language inference, semantic similarity, sentiment analysis, and others BIBREF1. While well-suited to dealing with relatively short sequences, Transformers suffer from a major issue that hinders their applicability in classification of long sequences, i.e. they are able to consume only a limited context of symbols as their input BIBREF2. There are several natural language (NLP) processing tasks that involve such long sequences. Of particular interest are topic identification of spoken conversations BIBREF3, BIBREF4, BIBREF5 and call center customer satisfaction prediction BIBREF6, BIBREF7, BIBREF8, BIBREF9. Call center conversations, while usually quite short and to the point, often involve agents trying to solve very complex issues that the customers experience, resulting in some calls taking even an hour or more. For speech analytics purposes, these calls are typically transcribed using an automatic speech recognition (ASR) system, and processed in textual representations further down the NLP pipeline. These transcripts sometimes exceed the length of 5000 words. Furthermore, temporal information might play an important role in tasks like CSAT. For example, a customer may be angry at the beginning of the call, but after her issue is resolved, she would be very satisfied with the way it was handled. Therefore, simple bag of words models, or any model that does not include temporal dependencies between the inputs, may not be well-suited to handle this category of tasks. This motivates us to employ model such as BERT in this task. In this paper, we propose a method that builds upon BERT's architecture. We split the input text sequence into shorter segments in order to obtain a representation for each of them using BERT. Then, we use either a recurrent LSTM BIBREF10 network, or another Transformer, to perform the actual classification. We call these techniques Recurrence over BERT (RoBERT) and Transformer over BERT (ToBERT). Given that these models introduce a hierarchy of representations (segment-wise and document-wise), we refer to them as Hierarchical Transformers. To the best of our knowledge, no attempt has been done before to use the Transformer architecture for classification of such long sequences. Our novel contributions are: Two extensions - RoBERT and ToBERT - to the BERT model, which enable its application in classification of long texts by performing segmentation and using another layer on top of the segment representations. State-of-the-art results on the Fisher topic classification task. Significant improvement on the CSAT prediction task over the MS-CNN model. Related work Several dimensionality reduction algorithms such as RBM, autoencoders, subspace multinomial models (SMM) are used to obtain a low dimensional representation of documents from a simple BOW representation and then classify it using a simple linear classifiers BIBREF11, BIBREF12, BIBREF13, BIBREF4. In BIBREF14 hierarchical attention networks are used for document classification. They evaluate their model on several datasets with average number of words around 150. Character-level CNN are explored in BIBREF15 but it is prohibitive for very long documents. In BIBREF16, dataset collected from arXiv papers is used for classification. 
For classification, they sample random blocks of words and use them together for classification instead of using full document which may work well as arXiv papers are usually coherent and well written on a well defined topic. Their method may not work well on spoken conversations as random block of words usually do not represent topic of full conversation. Several researchers addressed the problem of predicting customer satisfaction BIBREF6, BIBREF7, BIBREF8, BIBREF9. In most of these works, logistic regression, SVM, CNN are applied on different kinds of representations. In BIBREF17, authors use BERT for document classification but the average document length is less than BERT maximum length 512. TransformerXL BIBREF2 is an extension to the Transformer architecture that allows it to better deal with long inputs for the language modelling task. It relies on the auto-regressive property of the model, which is not the case in our tasks. Method ::: BERT Because our work builds heavily upon BERT, we provide a brief summary of its features. BERT is built upon the Transformer architecture BIBREF0, which uses self-attention, feed-forward layers, residual connections and layer normalization as the main building blocks. It has two pre-training objectives: Masked language modelling - some of the words in a sentence are being masked and the model has to predict them based on the context (note the difference from the typical autoregressive language model training objective); Next sentence prediction - given two input sequences, decide whether the second one is the next sentence or not. BERT has been shown to beat the state-of-the-art performance on 11 tasks with no modifications to the model architecture, besides adding a task-specific output layer BIBREF1. We follow same procedure suggested in BIBREF1 for our tasks. Fig. FIGREF8 shows the BERT model for classification. We obtain two kinds of representation from BERT: pooled output from last transformer block, denoted by H, and posterior probabilities, denoted by P. There are two variants of BERT - BERT-Base and BERT-Large. In this work we are using BERT-Base for faster training and experimentation, however, our methods are applicable to BERT-Large as well. BERT-Base and BERT-Large are different in model parameters such as number of transformer blocks, number of self-attention heads. Total number of parameters in BERT-Base are 110M and 340M in BERT-Large. BERT suffers from major limitations in terms of handling long sequences. Firstly, the self-attention layer has a quadratic complexity $O(n^2)$ in terms of the sequence length $n$ BIBREF0. Secondly, BERT uses a learned positional embeddings scheme BIBREF1, which means that it won't likely be able to generalize to positions beyond those seen in the training data. To investigate the effect of fine-tuning BERT on task performance, we use either the pre-trained BERT weights, or the weights from a BERT fine-tuned on the task-specific dataset on a segment-level (i.e. we preserve the original label but fine-tune on each segment separately instead of on the whole text sequence). We compare these results to using the fine-tuned segment-level BERT predictions directly as inputs to the next layer. Method ::: Recurrence over BERT Given that BERT is limited to a particular input length, we split the input sequence into segments of a fixed size with overlap. For each of these segments, we obtain H or P from BERT model. 
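A minimal sketch of this segmentation step is given below, using the HuggingFace transformers API purely as an assumed stand-in for the authors' TensorFlow pipeline; the window size of 200 tokens and shift of 50 tokens follow the values reported later in the experiments.

```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

def segment_ids(token_ids, size: int = 200, shift: int = 50):
    """Cut a token-id sequence into overlapping windows."""
    return [token_ids[start:start + size]
            for start in range(0, max(len(token_ids) - size, 0) + 1, shift)]

def segment_representations(document: str) -> torch.Tensor:
    """Return the pooled BERT output (H) for each overlapping segment."""
    ids = tokenizer.encode(document, add_special_tokens=False)
    reps = []
    with torch.no_grad():
        for seg in segment_ids(ids):
            inputs = torch.tensor([tokenizer.build_inputs_with_special_tokens(seg)])
            out = bert(inputs)
            reps.append(out.pooler_output.squeeze(0))   # H for this segment
    return torch.stack(reps)                            # (num_segments, hidden_size)
```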
We then stack these segment-level representations into a sequence, which serves as input to a small (100-dimensional) LSTM layer. Its output serves as a document embedding. Finally, we use two fully connected layers with ReLU (30-dimensional) and softmax (the same dimensionality as the number of classes) activations to obtain the final predictions. With this approach, we overcome BERT's computational complexity, reducing it to $O(n/k * k^2) = O(nk)$ for RoBERT, with $k$ denoting the segment size (the LSTM component has negligible linear complexity $O(k)$). The positional embeddings are also no longer an issue. Method ::: Transformer over BERT Given that Transformers' edge over recurrent networks is their ability to effectively capture long-distance relationships between words in a sequence BIBREF0, we experiment with replacing the LSTM recurrent layer with a small Transformer model (2 layers of the transformer building block containing self-attention, fully connected layers, etc.). To investigate whether preserving information about the input sequence order is important, we also build a variant of ToBERT which learns positional embeddings over the segment-level representations (but is limited to sequence lengths seen during training). ToBERT's computational complexity $O(\frac{n^2}{k^2})$ is asymptotically inferior to RoBERT's, as the top-level Transformer model again suffers from quadratic complexity in the number of segments. However, in practice this number is much smaller than the input sequence length (${\frac{n}{k}} << n$), so we haven't observed performance or memory issues with our datasets. Experiments We evaluated our models on 3 different datasets: the CSAT dataset for CSAT prediction, consisting of spoken transcripts (automatic, via ASR); 20 newsgroups for the topic identification task, consisting of written text; and the Fisher Phase 1 corpus for the topic identification task, consisting of spoken transcripts (manual). Experiments ::: CSAT The CSAT dataset consists of US English telephone speech from call centers. For each call in this dataset, the customer who participated in the call gave a rating of their experience with the agent. Originally, this dataset has labels rated on a scale of 1-9, with 9 being extremely satisfied and 1 being extremely dissatisfied. Fig. FIGREF16 shows the histogram of ratings for our dataset. As the distribution is skewed towards the extremes, we choose to do binary classification, with ratings above 4.5 as satisfied and below 4.5 as dissatisfied. Quantization of the ratings also helped us to create a balanced dataset. This dataset contains 4331 calls and we split them into 3 sets for our experiments: 2866 calls for training, 362 calls for validation and, finally, 1103 calls for testing. We obtained the transcripts by employing an ASR system. The ASR system uses a TDNN-LSTM acoustic model trained on the Fisher and Switchboard datasets with the lattice-free maximum mutual information criterion BIBREF18. The word error rates using four-gram language models were 9.2% and 17.3% on the Switchboard and CallHome portions of the Eval2000 dataset, respectively. Experiments ::: 20 newsgroups The 20 newsgroups dataset is one of the most frequently used datasets in the text processing community for text classification and text clustering. It contains approximately 20,000 English documents from 20 topics to be identified, with 11314 documents for training and 7532 for testing. In this work, we used only 90% of the training documents for training and the remaining 10% for validation.
For fair comparison with other publications, we used 53160 words vocabulary set available in the datasets website. Experiments ::: Fisher Fisher Phase 1 US English corpus is often used for automatic speech recognition in speech community. In this work, we used it for topic identification as in BIBREF3. The documents are 10-minute long telephone conversations between two people discussing a given topic. We used same training and test splits as BIBREF3 in which 1374 and 1372 documents are used for training and testing respectively. For validation of our model, we used 10% of training dataset and the remaining 90% was used for actual model training. The number of topics in this data set is 40. Experiments ::: Dataset Statistics Table TABREF22 shows statistics of our datasets. It can be observed that average length of Fisher is much higher than 20 newsgroups and CSAT. Cumulative distribution of document lengths for each dataset is shown in Fig. FIGREF21. It can be observed that almost all of the documents in Fisher dataset have length more than 1000 words. For CSAT, more than 50% of the documents have length greater than 500 and for 20newsgroups only 10% of the documents have length greater than 500. Note that, for CSAT and 20newsgroups, there are few documents with length more than 5000. Experiments ::: Architecture and Training Details In this work, we split document into segments of 200 tokens with a shift of 50 tokens to extract features from BERT model. For RoBERT, LSTM model is trained to minimize cross-entropy loss with Adam optimizer BIBREF19. The initial learning rate is set to $0.001$ and is reduced by a factor of $0.95$ if validation loss does not decrease for 3-epochs. For ToBERT, the Transformer is trained with the default BERT version of Adam optimizer BIBREF1 with an initial learning rate of $5e$-5. We report accuracy in all of our experiments. We chose a model with the best validation accuracy to calculate accuracy on the test set. To accomodate for non-determinism of some TensorFlow GPU operations, we report accuracy averaged over 5 runs. Results Table TABREF25 presents results using pre-trained BERT features. We extracted features from the pooled output of final transformer block as these were shown to be working well for most of the tasks BIBREF1. The features extracted from a pre-trained BERT model without any fine-tuning lead to a sub-par performance. However, We also notice that ToBERT model exploited the pre-trained BERT features better than RoBERT. It also converged faster than RoBERT. Table TABREF26 shows results using features extracted after fine-tuning BERT model with our datasets. Significant improvements can be observed compared to using pre-trained BERT features. Also, it can be noticed that ToBERT outperforms RoBERT on Fisher and 20newsgroups dataset by 13.63% and 0.81% respectively. On CSAT, ToBERT performs slightly worse than RoBERT but it is not statistically significant as this dataset is small. Table TABREF27 presents results using fine-tuned BERT predictions instead of the pooled output from final transformer block. For each document, having obtained segment-wise predictions we can obtain final prediction for the whole document in three ways: Compute the average of all segment-wise predictions and find the most probable class; Find the most frequently predicted class; Train a classification model. 
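As a concrete reference for the first two strategies in the list above (the third, training a classification model on top of the segment representations, is RoBERT/ToBERT itself), here is a short sketch assuming the per-segment posteriors for one document are stored as a NumPy array; the array layout is an illustration, not a structure from the paper.

```python
import numpy as np

def average_vote(P: np.ndarray) -> int:
    """P: (num_segments, num_classes) segment-wise posteriors for one document."""
    return int(np.argmax(P.mean(axis=0)))         # most probable class on average

def majority_vote(P: np.ndarray) -> int:
    seg_preds = P.argmax(axis=1)                  # per-segment predicted classes
    return int(np.bincount(seg_preds).argmax())   # most frequently predicted class
```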
It can be observed from Table TABREF27 that a simple averaging operation or taking most frequent predicted class works competitively for CSAT and 20newsgroups but not for the Fisher dataset. We believe the improvements from using RoBERT or ToBERT, compared to simple averaging or most frequent operations, are proportional to the fraction of long documents in the dataset. CSAT and 20newsgroups have (on average) significantly shorter documents than Fisher, as seen in Fig. FIGREF21. Also, significant improvements for Fisher could be because of less confident predictions from BERT model as this dataset has 40 classes. Fig. FIGREF31 presents the comparison of average voting and ToBERT for various document length ranges for Fisher dataset. We used fine-tuned BERT segment-level predictions (P) for this analysis. It can be observed that ToBERT outperforms average voting in every interval. To the best of our knowledge, this is a state-of-the-art result reported on the Fisher dataset. Table TABREF32 presents the effect of position embeddings on the model performance. It can be observed that position embeddings did not significantly affect the model performance for Fisher and 20newsgroups, but they helped slightly in CSAT prediction (an absolute improvement of 0.64% F1-score). We think that this is explained by the fact that Fisher and 20newsgroups are topic identification tasks, and the topic does not change much throughout these documents. However, CSAT may vary during the call, and in some cases a naive assumption that the sequential nature of the transcripts is irrelevant may lead to wrong conclusions. Table TABREF33 compares our results with previous works. It can be seen that our model ToBERT outperforms CNN based experiments by significant margin on CSAT and Fisher datasets. For CSAT dataset, we used multi-scale CNN (MS-CNN) as the baseline, given its strong results on Fisher and 20newsgroups. The setup was replicated from BIBREF5 for comparison. We also see that our result on 20 newsgroups is 0.6% worse than the state-of-the-art. Conclusions In this paper, we presented two methods for long documents using BERT model: RoBERT and ToBERT. We evaluated our experiments on two classification tasks - customer satisfaction prediction and topic identification - using 3 datasets: CSAT, 20newsgroups and Fisher. We observed that ToBERT outperforms RoBERT on pre-trained BERT features and fine-tuned BERT features for all our tasks. Also, we noticed that fine-tuned BERT performs better than pre-trained BERT. We have shown that both RoBERT and ToBERT improved the simple baselines of taking an average (or the most frequent) of segment-wise predictions for long documents to obtain final prediction. Position embeddings did not significantly affect our models performance, but slightly improved the accuracy on the CSAT task. We obtained the best results on Fisher dataset and good improvements for CSAT task compared to the CNN baseline. It is interesting to note that the longer the average input in a given task, the bigger improvement we observe w.r.t. the baseline for that task. Our results confirm that both RoBERT and ToBERT can be used for long sequences with competitive performance and quick fine-tuning procedure. For future work, we shall focus on training models on long documents directly (i.e. in an end-to-end manner).
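To complement the description in the Method section, below is a hedged PyTorch sketch of a ToBERT-style head: a 2-layer Transformer encoder with learned segment-position embeddings over the stacked segment representations, followed by the 30-dimensional ReLU layer and a class-sized output layer. Details not stated in the text (number of attention heads, how segment outputs are pooled into a document embedding, the maximum number of segments) are assumptions.

```python
import torch
import torch.nn as nn

class ToBERT(nn.Module):
    def __init__(self, hidden: int = 768, classes: int = 40, max_segments: int = 128):
        super().__init__()
        self.pos = nn.Embedding(max_segments, hidden)       # learned segment positions
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.fc = nn.Sequential(nn.Linear(hidden, 30), nn.ReLU(), nn.Linear(30, classes))

    def forward(self, segments: torch.Tensor) -> torch.Tensor:
        # segments: (batch, num_segments, hidden) stacked BERT segment representations
        positions = torch.arange(segments.size(1), device=segments.device)
        x = self.encoder(segments + self.pos(positions))
        # mean-pooling over segments is an assumption; the softmax described in the
        # text would be applied by the cross-entropy loss on these logits
        return self.fc(x.mean(dim=1))
```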
What datasets did they use for evaluation?
CSAT dataset, 20 newsgroups, Fisher Phase 1 corpus
2,652
qasper
3k
Introduction The recently introduced BERT model BIBREF0 exhibits strong performance on several language understanding benchmarks. To what extent does it capture syntax-sensitive structures? Recent work examines the extent to which RNN-based models capture syntax-sensitive phenomena that are traditionally taken as evidence for the existence in hierarchical structure. In particular, in BIBREF1 we assess the ability of LSTMs to learn subject-verb agreement patterns in English, and evaluate on naturally occurring wikipedia sentences. BIBREF2 also consider subject-verb agreement, but in a “colorless green ideas” setting in which content words in naturally occurring sentences are replaced with random words with the same part-of-speech and inflection, thus ensuring a focus on syntax rather than on selectional-preferences based cues. BIBREF3 consider a wider range of syntactic phenomena (subject-verb agreement, reflexive anaphora, negative polarity items) using manually constructed stimuli, allowing for greater coverage and control than in the naturally occurring setting. The BERT model is based on the “Transformer” architecture BIBREF4 , which—in contrast to RNNs—relies purely on attention mechanisms, and does not have an explicit notion of word order beyond marking each word with its absolute-position embedding. This reliance on attention may lead one to expect decreased performance on syntax-sensitive tasks compared to RNN (LSTM) models that do model word order directly, and explicitly track states across the sentence. Indeed, BIBREF5 finds that transformer-based models perform worse than LSTM models on the BIBREF1 agreement prediction dataset. In contrast, BIBREF6 find that self-attention performs on par with LSTM for syntax sensitive dependencies in the context of machine-translation, and performance on syntactic tasks is correlated with the number of attention heads in multi-head attention. I adapt the evaluation protocol and stimuli of BIBREF1 , BIBREF2 and BIBREF3 to the bidirectional setting required by BERT, and evaluate the pre-trained BERT models (both the Large and the Base models). Surprisingly (at least to me), the out-of-the-box models (without any task-specific fine-tuning) perform very well on all the syntactic tasks. Methodology I use the stimuli provided by BIBREF1 , BIBREF2 , BIBREF3 , but change the experimental protocol to adapt it to the bidirectional nature of the BERT model. This requires discarding some of the stimuli, as described below. Thus, the numbers are not strictly comparable to those reported in previous work. Previous setups All three previous work use uni-directional language-model-like models. BIBREF1 start with existing sentences from wikipedia that contain a present-tense verb. They feed each sentence word by word into an LSTM, stop right before the focus verb, and ask the model to predict a binary plural/singular decision (supervised setup) or compare the probability assigned by a pre-trained language model (LM) to the plural vs singular forms of the verb (LM setup). The evaluation is then performed on sentences with “agreement attractors” in which at there is at least one noun between the verb and its subject, and all of the nouns between the verb and subject are of the opposite number from the subject. BIBREF2 also start with existing sentences. 
However, in order to control for the possibility of the model learning to rely on “semantic” selectional-preference cues rather than syntactic ones, they replace each content word with random words from the same part-of-speech and inflection. This results in “colorless green ideas” nonce sentences. The evaluation is then performed similarly to the LM setup of BIBREF1 : the sentence is fed into a pre-trained LSTM LM up to the focus verb, and the model is considered correct if the probability assigned to the correct inflection of the original verb form given the prefix is larger than that assigned to the incorrect inflection. BIBREF3 focus on manually constructed and controlled stimuli that also emphasize linguistic structure over selectional preferences. They construct minimal pairs of grammatical and ungrammatical sentences, feed each one in its entirety into a pre-trained LSTM-LM, and compare the perplexity assigned by the model to the grammatical and ungrammatical sentences. The model is “correct” if it assigns the grammatical sentence a higher probability than the ungrammatical one. Since the minimal pairs for most phenomena differ only in a single word (the focus verb), this scoring is very similar to the one used in the two previous works. However, it does consider the continuation of the sentence after the focus verb, and also allows for assessing phenomena that require changes to two or more words (like negative polarity items). Adaptation to the BERT model In contrast to these works, the BERT model is bi-directional: it is trained to predict the identity of masked words based on both the prefix and suffix surrounding these words. I adapt the uni-directional setup by feeding into BERT the complete sentence, while masking out the single focus verb. I then ask BERT for its word predictions for the masked position, and compare the score assigned to the original correct verb to the score assigned to the incorrect one. For example, for the sentence: a 2002 systemic review of herbal products found that several herbs , including peppermint and caraway , have anti-dyspeptic effects for non-ulcer dyspepsia with “ encouraging safety profiles ” . (from BIBREF1 ) I feed into BERT: [CLS] a 2002 systemic review of herbal products found that several herbs , including peppermint and caraway , [MASK] anti-dyspeptic effects for non-ulcer dyspepsia with “ encouraging safety profiles ” . and look for the score assigned to the words have and has at the masked position. Similarly, for the pair the game that the guard hates is bad . the game that the guard hates are bad . (from BIBREF3 ), I feed into BERT: [CLS] the game that the guard hates [MASK] bad . and compare the scores predicted for is and are. This differs from BIBREF1 and BIBREF2 by considering the entire sentence (excluding the verb) and not just its prefix leading to the verb, and differs from BIBREF3 by conditioning the focus verb on bidirectional context. I use the PyTorch implementation of BERT, with the pre-trained models supplied by Google. I experiment with the bert-large-uncased and bert-base-uncased models. The bi-directional setup precludes using the NPI stimuli of BIBREF3 , in which the minimal pairs differ in two word positions, so I discard these from the evaluation. I also discard the agreement cases involving the verbs is or are in BIBREF1 and in BIBREF2 , because some of them are copular constructions, in which strong agreement hints can also be found on the object following the verb.
This is not an issue in the manually constructed BIBREF3 stimuli due to the patterns they chose. Finally, I discard stimuli in which the focus verb or its plural/singular inflection does not appear as a single word in the BERT word-piece-based vocabulary (and hence cannot be predicted by the model). This include discarding BIBREF3 stimuli involving the words swims or admires, resulting in 23,368 discarded pairs (out of 152,300). I similarly discard 680 sentences from BIBREF1 where the focus verb or its inflection were one of 108 out-of-vocabulary tokens, and 28 sentence-pairs (8 tokens) from BIBREF2 . The BERT results are not directly comparable to the numbers reported in previous work. Beyond the differences due to bidirectionality and the discarded stimuli, the BERT models are also trained on a different and larger corpus (covering both wikipedia and books). Code is available at https://github.com/yoavg/bert-syntax. Results Tables 1 , 2 and 3 show the results. All cases exhibit high scores—in the vast majority of the cases substantially higher than reported in previous work. As discussed above, the results are not directly comparable to previous work: the BERT models are trained on different (and larger) data, are allowed to access the suffix of the sentence in addition to its prefix, and are evaluated on somewhat different data due to discarding OOV items. Still, taken together, the high performance numbers indicate that the purely attention-based BERT models are likely capable of capturing the same kind of syntactic regularities that LSTM-based models are capable of capturing, at least as well as the LSTM models and probably better. Another noticeable and interesting trend is that larger is not necessarily better: the BERT-Base model outperforms the BERT-Large model on many of the syntactic conditions. Discussion The BERT models perform remarkably well on all the syntactic test cases. I expected the attention-based mechanism to fail on these (compared to the LSTM-based models), and am surprised by these results. The BIBREF2 and BIBREF3 conditions rule out the possibility of overly relying on selectional preference cues or memorizing the wikipedia training data, and suggest real syntactic generalization is taking place. Exploring the extent to which deep purely-attention-based architectures such as BERT are capable of capturing hierarchy-sensitive and syntactic dependencies—as well as the mechanisms by which this is achieved—is a fascinating area for future research.
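To make the masked-verb scoring protocol from the Methodology section concrete, here is a minimal sketch. It uses the current HuggingFace transformers API as an assumption; the experiments above used the earlier PyTorch BERT port, so this is an illustration of the procedure rather than the released code.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def prefers_correct(masked_sentence: str, correct: str, wrong: str) -> bool:
    """Compare the masked-LM scores of the correct and incorrect verb inflections."""
    ids = tok(masked_sentence, return_tensors="pt")["input_ids"]
    mask_pos = (ids[0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(ids).logits[0, mask_pos].squeeze(0)
    return bool(logits[tok.convert_tokens_to_ids(correct)]
                > logits[tok.convert_tokens_to_ids(wrong)])

print(prefers_correct("the game that the guard hates [MASK] bad .", "is", "are"))
```

The stimulus in the example is the minimal pair quoted above; the comparison only makes sense when both verb forms exist as single word pieces in the vocabulary, which is exactly why out-of-vocabulary verb forms are discarded from the evaluation.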
Were any of these tasks evaluated in any previous work?
Yes
1,464
qasper
3k
Introduction As social media, specially Twitter, takes on an influential role in presidential elections in the U.S., natural language processing of political tweets BIBREF0 has the potential to help with nowcasting and forecasting of election results as well as identifying the main issues with a candidate – tasks of much interest to journalists, political scientists, and campaign organizers BIBREF1. As a methodology to obtain training data for a machine learning system that analyzes political tweets, BIBREF2 devised a crowdsourcing scheme with variable crowdworker numbers based on the difficulty of the annotation task. They provided a dataset of tweets where the sentiments towards political candidates were labeled both by experts in political communication and by crowdworkers who were likely not domain experts. BIBREF2 revealed that crowdworkers can match expert performance relatively accurately and in a budget-efficient manner. Given this result, the authors envisioned future work in which groundtruth labels would be crowdsourced for a large number of tweets and then used to design an automated NLP tool for political tweet analysis. The question we address here is: How accurate are existing NLP tools for political tweet analysis? These tools would provide a baseline performance that any new machine learning system for political tweet analysis would compete against. We here explore whether existing NLP systems can answer the questions "What sentiment?" and "Towards whom?" accurately for the dataset of political tweets provided by BIBREF2. In our analysis, we include NLP tools with publicly-available APIs, even if the tools were not specifically designed for short texts like tweets, and, in particular, political tweets. Our experiments reveal that the task of entity-level sentiment analysis is difficult for existing tools to answer accurately while the recognition of the entity, here, which politician, was easier. NLP Toolkits NLP toolkits typically have the following capabilities: tokenization, part-of-speech (PoS) tagging, chunking, named entity recognition and sentiment analysis. In a study by BIBREF3, it is shown that the well-known NLP toolkits NLTK BIBREF4, Stanford CoreNLP BIBREF5, and TwitterNLP BIBREF6 have tokenization, PoS tagging and NER modules in their pipelines. There are two main approaches for NER: (1) rule-based and (2) statistical or machine learning based. The most ubiquitous algorithms for sequence tagging use Hidden Markov Models BIBREF7, Maximum Entropy Markov Models BIBREF7, BIBREF8, or Conditional Random Fields BIBREF9. Recent works BIBREF10, BIBREF11 have used recurrent neural networks with attention modules for NER. Sentiment detection tools like SentiStrength BIBREF12 and TensiStrength BIBREF13 are rule-based tools, relying on various dictionaries of emoticons, slangs, idioms, and ironic phrases, and set of rules that can detect the sentiment of a sentence overall or a targeted sentiment. Given a list of keywords, TensiStrength (similar to SentiStrength) reports the sentiment towards selected entities in a sentence, based on five levels of relaxation and five levels of stress. Among commercial NLP toolkits (e.g., BIBREF14, BIBREF15, BIBREF16), we selected BIBREF17 and BIBREF18 for our experiments, which, to the best of our knowledge, are the only publicly accessible commercial APIs for the task of entity-level sentiment analysis that is agnostic to the text domain. 
We also report results of TensiStrength BIBREF13, TwitterNLP BIBREF6, BIBREF19, CogComp-NLP BIBREF20, and Stanford NLP NER BIBREF21. Dataset and Analysis Methodology We used the 1,000-tweet dataset by BIBREF2 that contains the named-entities labels and entity-level sentiments for each of the four 2016 presidential primary candidates Bernie Sanders, Donald Trump, Hillary Clinton, and Ted Cruz, provided by crowdworkers, and by experts in political communication, whose labels are considered groundtruth. The crowdworkers were located in the US and hired on the BIBREF22 platform. For the task of entity-level sentiment analysis, a 3-scale rating of "negative," "neutral," and "positive" was used by the annotators. BIBREF2 proposed a decision tree approach for computing the number of crowdworkers who should analyze a tweet based on the difficulty of the task. Tweets are labeled by 2, 3, 5, or 7 workers based on the difficulty of the task and the level of disagreement between the crowdworkers. The model computes the number of workers based on how long a tweet is, the presence of a link in a tweet, and the number of present sarcasm signals. Sarcasm is often used in political tweets and causes disagreement between the crowdworkers. The tweets that are deemed to be sarcastic by the decision tree model, are expected to be more difficult to annotate, and hence are allocated more crowdworkers to work on. We conducted two sets of experiments. In the first set, we used BIBREF23, BIBREF17, and BIBREF18, for entity-level sentiment analysis; in the second set, BIBREF17, BIBREF19, BIBREF24, BIBREF25, and BIBREF26, BIBREF18 for named-entity recognition. In the experiments that we conducted with TwitterNLP for named-entity recognition, we worked with the default values of the model. Furthermore, we selected the 3-class Stanford NER model, which uses the classes “person,” “organization,” and “location” because it resulted in higher accuracy compared to the 7-class model. For CogComp-NLP NER we used Ontonotes 5.0 NER model BIBREF27. For spaCy NER we used the `en_core_web_lg' model. We report the experimental results for our two tasks in terms of the correct classification rate (CCR). For sentiment analysis, we have a three-class problem (positive, negative, and neutral), where the classes are mutually exclusive. The CCR, averaged for a set of tweets, is defined to be the number of correctly-predicted sentiments over the number of groundtruth sentiments in these tweets. For NER, we consider that each tweet may reference up to four candidates, i.e., targeted entities. The CCR, averaged for a set of tweets, is the number of correctly predicted entities (candidates) over the number of groundtruth entities (candidates) in this set. Results and Discussion The dataset of 1,000 randomly selected tweets contains more than twice as many tweets about Trump than about the other candidates. In the named-entity recognition experiment, the average CCR of crowdworkers was 98.6%, while the CCR of the automated systems ranged from 77.2% to 96.7%. For four of the automated systems, detecting the entity Trump was more difficult than the other entities (e.g., spaCy 72.7% for the entity Trump vs. above 91% for the other entities). An example of incorrect NER is shown in Figure FIGREF1 top. The difficulties the automated tools had in NER may be explained by the fact that the tools were not trained on tweets, except for TwitterNLP, which was not in active development when the data was created BIBREF1. 
In the sentiment analysis experiments, we found that a tweet may contain multiple sentiments. The groundtruth labels contain 210 positive sentiments, 521 neutral sentiments, and 305 negative sentiments toward the candidates. We measured the CCR, across all tweets, to be 31.7% for Rosette Text Analytics, 43.2% for Google Cloud, 44.2% for TensiStrength, and 74.7% for the crowdworkers. This means the difference between the performance of the tools and the crowdworkers is significant – more than 30 percentage points. Crowdworkers correctly identified 62% of the neutral, 85% of the positive, and 92% of the negative sentiments. Google Cloud correctly identified 88% of the neutral sentiments, but only 3% of the positive and 19% of the negative sentiments. TensiStrength correctly identified 87.2% of the neutral sentiments, but only 10.5% of the positive and 8.1% of the negative sentiments. Rosette Text Analytics correctly identified 22.7% of neutral sentiments, 38.1% of negative sentiments, and 40.9% of positive sentiments. The lowest and highest CCR pertain to tweets about Trump and Sanders for both Google Cloud and TensiStrength, Trump and Clinton for Rosette Text Analytics, and Clinton and Cruz for crowdworkers. An example of incorrect entity-level sentiment (ELS) analysis is shown in Figure FIGREF1 bottom. Conclusions and Future Work Our results show that existing NLP systems cannot accurately perform sentiment analysis of political tweets in the dataset we experimented with. Labeling by humans, even non-expert crowdworkers, yields accuracy results that are well above those of existing automated NLP systems. In future work, we will therefore use a crowdworker-labeled dataset to train a new machine-learning-based NLP system for tweet analysis. We will ensure that the training data is balanced among classes. Our plan is to use state-of-the-art deep neural networks and compare their performance for entity-level sentiment analysis of political tweets. Acknowledgments Partial support of this work by the Hariri Institute for Computing and Computational Science & Engineering at Boston University (to L.G.) and a Google Faculty Research Award (to M.B. and L.G.) is gratefully acknowledged. Additionally, we would like to thank Daniel Khashabi for his help in running the CogComp-NLP Python API and Mike Thelwall for his help with TensiStrength. We are also grateful to the Stanford NLP group for clarifying some of the questions we had with regard to the Stanford NER tool.
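As a companion to the crowdsourcing scheme discussed above, the following is a hedged sketch of a decision-tree-style allocation of 2, 3, 5, or 7 crowdworkers based on tweet length, link presence, and sarcasm signals; the specific thresholds are invented for illustration and are not the rules of BIBREF2.

```python
def num_crowdworkers(tweet_length, has_link, sarcasm_signals):
    """Return how many crowdworkers should label a tweet (2, 3, 5, or 7).
    The branching structure follows the description in the text; the numeric
    thresholds below are hypothetical."""
    difficulty = 0
    if tweet_length > 120:      # long tweets are assumed harder to judge
        difficulty += 1
    if has_link:                # links require reading external context
        difficulty += 1
    if sarcasm_signals >= 1:    # sarcasm causes annotator disagreement
        difficulty += 2
    return {0: 2, 1: 3, 2: 5}.get(difficulty, 7)

print(num_crowdworkers(tweet_length=140, has_link=True, sarcasm_signals=1))  # 7
```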
Is the dataset for sentiment analysis balanced?
No
Introduction Text simplification aims to reduce the lexical and structural complexity of a text while still retaining the semantic meaning, which can help children, non-native speakers, and people with cognitive disabilities understand text better. Methods of automatic text simplification can generally be divided into three categories: lexical simplification (LS) BIBREF0, BIBREF1, rule-based BIBREF2, and machine translation (MT) BIBREF3, BIBREF4. LS is mainly used to simplify text by substituting infrequent and difficult words with frequent and easier words. However, the LS approach faces several challenges: a great number of transformation rules are required for reasonable coverage; the rules should be applied according to the specific context; and the syntax and semantic meaning of the sentence are hard to retain. Rule-based approaches use hand-crafted rules for lexical and syntactic simplification, for example, substituting difficult words in a predefined vocabulary. However, such approaches need a lot of human involvement to manually define these rules, and it is impossible to give all possible simplification rules. The MT-based approach has attracted great attention in the last several years, addressing text simplification as a monolingual machine translation problem that translates from 'ordinary' to 'simplified' sentences. In recent years, neural machine translation (NMT), a newly proposed deep learning approach, has achieved very impressive results BIBREF5, BIBREF6, BIBREF7. Unlike traditional phrase-based machine translation systems, which operate on small components separately, an NMT system is trained end-to-end, without the need for external decoders, language models, or phrase tables. Therefore, existing NMT architectures have been used for text simplification BIBREF8, BIBREF4. However, most recent work using NMT is limited by training data that are scarce and expensive to build. Language models trained on simplified corpora have played a central role in statistical text simplification BIBREF9, BIBREF10. One main reason is that the amount of available simplified corpora typically far exceeds the amount of parallel data, and model performance can typically be improved by training on more data. Therefore, we expect simplified corpora to be especially helpful for NMT models. In contrast to previous work, which uses existing NMT models, we explore a strategy to include simplified training corpora in the training process without changing the neural network architecture. We first propose to pair simplified training sentences with synthetic ordinary sentences during training, and treat this synthetic data as additional training data. We obtain synthetic ordinary sentences through back-translation, i.e., an automatic translation of the simplified sentence into the ordinary sentence BIBREF11. Then, we mix the synthetic data into the original (simplified-ordinary) data to train the NMT model. Experimental results on two publicly available datasets show that mixing simplified sentences into the training set improves the text simplification quality of NMT models over an NMT model that uses only the original training data. Related Work Automatic TS is a complicated natural language processing (NLP) task, which consists of lexical and syntactic simplification levels BIBREF12.
It has attracted much attention recently as it can make texts more accessible to wider audiences and, used as a pre-processing step, improve the performance of various NLP tasks and systems BIBREF13, BIBREF14, BIBREF15. Usually, hand-crafted, supervised, and unsupervised methods based on resources like English Wikipedia and Simple English Wikipedia (EW-SEW) BIBREF10 are utilized for extracting simplification rules. It is easy to mix up the automatic TS task and the automatic summarization task BIBREF3, BIBREF16, BIBREF6; TS is different from text summarization, whose focus is to reduce length and redundant content. At the lexical level, lexical simplification systems often substitute difficult words with more common words, which only requires a large corpus of regular text to obtain word embeddings and find words similar to the complex word BIBREF1, BIBREF9. Biran et al. BIBREF0 adopted an unsupervised method for learning pairs of complex and simpler synonyms from a corpus consisting of Wikipedia and Simple Wikipedia. At the sentence level, a sentence simplification model was proposed by tree transformation based on statistical machine translation (SMT) BIBREF3. Woodsend and Lapata BIBREF17 presented a data-driven model based on a quasi-synchronous grammar, a formalism that can naturally capture structural mismatches and complex rewrite operations. Wubben et al. BIBREF18 proposed a phrase-based machine translation (PBMT) model that is trained on ordinary-simplified sentence pairs. Xu et al. BIBREF19 proposed a syntax-based machine translation model using simplification-specific objective functions and features to encourage simpler output. Compared with SMT, neural machine translation (NMT) has been shown to produce state-of-the-art results BIBREF5, BIBREF7. The central approach of NMT is an encoder-decoder architecture implemented by recurrent neural networks, which can represent the input sequence as a vector and then decode that vector into an output sequence. Therefore, NMT models have been used for the text simplification task and achieved good results BIBREF8, BIBREF4, BIBREF20. The main limitation of the aforementioned NMT models for text simplification is their dependence on parallel ordinary-simplified sentence pairs. Because such pairs are expensive and time-consuming to build, the largest available dataset is EW-SEW, with only 296,402 sentence pairs, which is insufficient for an NMT model to obtain its best parameters. Considering that simplified data plays an important role in boosting fluency for phrase-based text simplification, we investigate the use of simplified data for text simplification. We are the first to show that neural translation models can be effectively adapted for text simplification with simplified corpora. Simplified Corpora We collected a simplified dataset from Simple English Wikipedia, which is freely available and has been previously used for many text simplification methods BIBREF0, BIBREF10, BIBREF3. Simple English Wikipedia is much easier to understand than normal English Wikipedia. We downloaded all articles from Simple English Wikipedia. For these articles, we removed stubs, navigation pages, and any article that consisted of a single sentence. We then split them into sentences with Stanford CoreNLP BIBREF21 and deleted sentences with fewer than 10 or more than 40 words.
After removing repeated sentences, we chose 600K sentences as the simplified data, with 11.6M words and a vocabulary size of 82K. Text Simplification using Neural Machine Translation Our work is built on attention-based NMT BIBREF5 as an encoder-decoder network with recurrent neural networks (RNN), which simultaneously conducts dynamic alignment and generation of the target simplified sentence. The encoder uses a bidirectional RNN that consists of a forward and a backward RNN. Given a source sentence $x = (x_1, \dots , x_{T_x})$, the forward RNN and backward RNN calculate forward hidden states $(\overrightarrow{h}_1, \dots , \overrightarrow{h}_{T_x})$ and backward hidden states $(\overleftarrow{h}_1, \dots , \overleftarrow{h}_{T_x})$, respectively. The annotation vector $h_j$ is obtained by concatenating $\overrightarrow{h}_j$ and $\overleftarrow{h}_j$. The decoder is an RNN that predicts the target simplified sentence with a Gated Recurrent Unit (GRU) BIBREF22. Given the previously generated target (simplified) words $y_{<t} = (y_1, \dots , y_{t-1})$, the probability of the next target word $y_t$ is $p(y_t \mid y_{<t}, x) = g(y_{t-1}, s_t, c_t)$, where $g$ is a non-linear function, $y_{t-1}$ is the embedding of the previous target word, and $s_t$ is the decoding state for time step $t$. The state $s_t$ is calculated by $s_t = f(s_{t-1}, y_{t-1}, c_t)$, where $f$ is the GRU activation function. The context vector $c_t$ is computed as a weighted sum of the annotations, $c_t = \sum _{j=1}^{T_x} \alpha _{tj} h_j$, where the weight $\alpha _{tj}$ is computed by $\alpha _{tj} = \exp (e_{tj}) / \sum _{k=1}^{T_x} \exp (e_{tk})$ with $e_{tj} = v_a^{\top } \tanh (W_a s_{t-1} + U_a h_j)$, where $v_a$, $W_a$, and $U_a$ are weight matrices. The training objective is to maximize the likelihood of the training data. Beam search is employed for decoding. Synthetic Simplified Sentences We train an auxiliary NMT system from simplified sentences to ordinary sentences, which is first trained on the available parallel data. To leverage simplified sentences for improving the quality of the NMT model for text simplification, we propose to adapt the back-translation approach of Sennrich et al. BIBREF11 to our scenario. More concretely, given a sentence from the simplified corpus, we use the simplified-to-ordinary system in translate mode with greedy decoding to translate it into an ordinary sentence; this is denoted as back-translation. In this way, we obtain synthetic parallel simplified-ordinary sentence pairs. Both the synthetic sentences and the available parallel data are used as training data for the original NMT system. Evaluation We evaluate the performance of text simplification using neural machine translation on the available parallel sentences and the additional simplified sentences. Dataset. We use two simplification datasets (WikiSmall and WikiLarge). WikiSmall consists of ordinary and simplified sentences from the ordinary and simple English Wikipedias and has been used as a benchmark for evaluating text simplification BIBREF17, BIBREF18, BIBREF8. The training set has 89,042 sentence pairs, and the test set has 100 pairs. WikiLarge is also derived from the Wikipedia corpus; its training set contains 296,402 sentence pairs BIBREF19, BIBREF20. WikiLarge includes 8 (reference) simplifications for 2,359 sentences, split into 2,000 for development and 359 for testing. Metrics. Three text simplification metrics are chosen in this paper. BLEU BIBREF5 is a traditional machine translation metric that assesses the degree to which translated simplifications differ from reference simplifications. FKGL measures the readability of the output BIBREF23; a small FKGL represents simpler output.
SARI is a recent text-simplification metric that compares the output against both the source and the reference simplifications BIBREF20. We also evaluate the output of all systems using human evaluation; this metric is denoted as Simplicity BIBREF8. Three non-native fluent English speakers are shown reference sentences and output sentences and are asked whether the output sentence is much simpler (+2), somewhat simpler (+1), equally simple (0), somewhat more difficult (-1), or much more difficult (-2) than the reference sentence. Methods. We use OpenNMT BIBREF24 as the implementation of the NMT system for all experiments BIBREF5. We generally follow the default settings and training procedure described by Klein et al. (2017). We replace out-of-vocabulary words with a special UNK symbol. At prediction time, we replace UNK words with the word receiving the highest probability score from the attention layer. The OpenNMT system trained on the parallel data is the baseline system. To obtain a synthetic parallel training set, we back-translate a random sample of 100K sentences from the collected simplified corpora. OpenNMT trained on the parallel data plus the synthetic data is our model. The benchmarks are run on an Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz with 32GB of memory, trained on one GeForce GTX 1080 (Pascal) GPU with CUDA v8.0. We choose three statistical text simplification systems. PBMT-R is a phrase-based method with a reranking post-processing step BIBREF18. Hybrid performs sentence splitting and deletion operations based on discourse representation structures, and then simplifies sentences with PBMT-R BIBREF25. SBMT-SARI BIBREF19 is a syntax-based translation model that uses the PPDB paraphrase database BIBREF26 and modifies the tuning function (using SARI). We choose two neural text simplification systems. NMT is a basic attention-based encoder-decoder model that uses the OpenNMT framework and is trained with two LSTM layers, hidden states of size 500, the SGD optimizer, and a dropout rate of 0.3 BIBREF8. Dress is an encoder-decoder model coupled with a deep reinforcement learning framework, with parameters chosen according to the original paper BIBREF20. For the experiments with synthetic parallel data, we back-translate a random sample of 60,000 sentences from the collected simplified sentences into ordinary sentences. Our model is trained on the synthetic data and the available parallel data, and is denoted as NMT+synthetic. Results. Table 1 shows the results of all models on the WikiLarge dataset. We can see that our method (NMT+synthetic) obtains higher BLEU, lower FKGL, and higher SARI compared with the other models, except for Dress on FKGL and SBMT-SARI on SARI. This verifies that including synthetic data during training is very effective, yielding an improvement over our baseline NMT of 2.11 BLEU, 1.7 FKGL, and 1.07 SARI. We also substantially outperform Dress, which previously reported state-of-the-art results. The results of our human evaluation using Simplicity are also presented in Table 1. NMT with synthetic data is significantly better than PBMT-R, Dress, and SBMT-SARI on Simplicity, indicating that our method with simplified data is effective at creating simpler output. Results on the WikiSmall dataset are shown in Table 2. We see a substantial improvement (6.37 BLEU) over NMT from adding simplified training data with synthetic ordinary sentences. Compared with the statistical machine translation models (PBMT-R, Hybrid, SBMT-SARI), our method (NMT+synthetic) still achieves better BLEU results, but slightly worse FKGL and SARI.
Similar to the results on WikiLarge, our model outperforms the other models in the human evaluation using Simplicity. In conclusion, our method produces better results than the baselines, which demonstrates the effectiveness of adding simplified training data. Conclusion In this paper, we propose a simple method to use simplified corpora during training of NMT systems, with no changes to the network architecture. In the experiments on two datasets, we achieve substantial gains in all tasks, and new state-of-the-art results, via back-translation of simplified sentences into ordinary sentences and treating this synthetic data as additional training data. Because we do not change the neural network architecture to integrate simplified corpora, our method can be easily applied to other Neural Text Simplification (NTS) systems. We expect that the effectiveness of our method not only varies with the quality of the NTS system used for back-translation, but also depends on the amount of available parallel and simplified corpora. In this paper, we have only utilized data from Wikipedia for simplified sentences. Many other text sources are available, and the impact not only of size but also of domain should be investigated in the future.
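A hedged sketch of the back-translation step at the core of the method described above; `simplified_to_ordinary` stands in for the auxiliary simplified-to-ordinary NMT system, and its interface is an assumption, not OpenNMT's actual API.

```python
import random

def build_synthetic_parallel(simplified_corpus, simplified_to_ordinary,
                             sample_size=100_000, seed=13):
    """Back-translate a random sample of monolingual simplified sentences into
    synthetic ordinary sentences, returning (ordinary, simplified) pairs that
    can be mixed with the original parallel training data."""
    random.seed(seed)
    sample = random.sample(simplified_corpus, min(sample_size, len(simplified_corpus)))
    pairs = []
    for simple_sent in sample:
        ordinary_sent = simplified_to_ordinary(simple_sent)  # greedy decoding
        pairs.append((ordinary_sent, simple_sent))
    return pairs

# training_data = original_parallel_pairs + build_synthetic_parallel(simple_wiki, aux_model)
```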
What are the sizes of both datasets?
WikiSmall: 89,042 sentence pairs for training and 100 pairs for testing; WikiLarge: 296,402 sentence pairs for training, 2,000 sentences for development, and 359 for testing
Introduction Offensive content has become pervasive in social media and a reason for concern for government organizations, online communities, and social media platforms. One of the most common strategies to tackle the problem is to train systems capable of recognizing offensive content, which can then be deleted or set aside for human moderation. In the last few years, several studies have been published on the application of computational methods to deal with this problem. Most prior work focuses on a particular aspect of offensive language such as abusive language BIBREF0, BIBREF1, (cyber-)aggression BIBREF2, (cyber-)bullying BIBREF3, BIBREF4, toxic comments, hate speech BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, and offensive language BIBREF11. Prior work has focused on these aspects of offensive language in Twitter BIBREF3, BIBREF7, BIBREF8, BIBREF11, Wikipedia comments, and Facebook posts BIBREF2. Recently, Waseem et al. (BIBREF12) acknowledged the similarities among prior work and discussed the need for a typology that differentiates between whether the (abusive) language is directed towards a specific individual or entity or towards a generalized group, and whether the abusive content is explicit or implicit. Wiegand et al. (BIBREF11) followed this trend as well on German tweets. In their evaluation, they have a task to detect offensive vs. not offensive tweets and a second task distinguishing between the offensive tweets as profanity, insult, or abuse. However, no prior work has explored the target of the offensive language, which is important in many scenarios, e.g., when studying hate speech with respect to a specific target. Therefore, we expand on these ideas by proposing a hierarchical three-level annotation model that encompasses whether the language is offensive, the type of the offense, and its target. Using this annotation model, we create a new large publicly available dataset of English tweets. The key contributions of this paper are the hierarchical annotation model, the resulting publicly available dataset, and baseline experiments on it. Related Work Different abusive and offensive language identification sub-tasks have been explored in the past few years, including aggression identification, bullying detection, hate speech, toxic comments, and offensive language. Aggression identification: The TRAC shared task on Aggression Identification BIBREF2 provided participants with a dataset containing 15,000 annotated Facebook posts and comments in English and Hindi for training and validation. For testing, two different sets, one from Facebook and one from Twitter, were provided. Systems were trained to discriminate between three classes: non-aggressive, covertly aggressive, and overtly aggressive. Bullying detection: Several studies have been published on bullying detection. One of them is by xu2012learning, who apply sentiment analysis to detect bullying in tweets and use topic models to identify relevant topics in bullying. Another related study is the one by dadvar2013improving, which uses user-related features such as the frequency of profanity in previous messages to improve bullying detection. Hate speech identification: This is perhaps the most widespread abusive language detection sub-task. Several studies have been published on this sub-task, such as kwok2013locate and djuric2015hate, who build binary classifiers to distinguish between `clean' comments and comments containing hate speech and profanity. More recently, Davidson et al.
davidson2017automated presented the hate speech detection dataset containing over 24,000 English tweets labeled as non-offensive, hate speech, and profanity. Offensive language: The GermEval BIBREF11 shared task focused on offensive language identification in German tweets. A dataset of over 8,500 annotated tweets was provided for a coarse-grained binary classification task in which systems were trained to discriminate between offensive and non-offensive tweets, and a second task where the organizers broke down the offensive class into three classes: profanity, insult, and abuse. Toxic comments: The Toxic Comment Classification Challenge was an open competition at Kaggle which provided participants with comments from Wikipedia labeled in six classes: toxic, severe toxic, obscene, threat, insult, identity hate. While each of these sub-tasks tackles a particular type of abuse or offense, they share similar properties, and the hierarchical annotation model proposed in this paper aims to capture this. Considering that, for example, an insult targeted at an individual is commonly known as cyberbullying and that insults targeted at a group are known as hate speech, we posit that OLID's hierarchical annotation model makes it a useful resource for various offensive language identification sub-tasks. Hierarchically Modelling Offensive Content In the OLID dataset, we use a hierarchical annotation model split into three levels to distinguish between whether the language is offensive or not (A), and the type (B) and target (C) of the offensive language. Each level is described in more detail in the following subsections and examples are shown in Table TABREF10. Level A: Offensive Language Detection Level A discriminates between offensive (OFF) and non-offensive (NOT) tweets. Not Offensive (NOT): Posts that do not contain offense or profanity; Offensive (OFF): We label a post as offensive if it contains any form of non-acceptable language (profanity) or a targeted offense, which can be veiled or direct. This category includes insults, threats, and posts containing profane language or swear words. Level B: Categorization of Offensive Language Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (UNT) insults and threats. Targeted Insult (TIN): Posts which contain an insult/threat to an individual, group, or others (see next layer); Untargeted (UNT): Posts containing non-targeted profanity and swearing. Posts with general profanity are not targeted, but they contain non-acceptable language. Level C: Offensive Language Target Identification Level C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH). Individual (IND): Posts targeting an individual. This can be a famous person, a named individual, or an unnamed participant in the conversation. Insults and threats targeted at individuals are often defined as cyberbullying. Group (GRP): The target of these offensive posts is a group of people considered as a unit due to the same ethnicity, gender or sexual orientation, political affiliation, religious belief, or other common characteristic. Many of the insults and threats targeted at a group correspond to what is commonly understood as hate speech. Other (OTH): The target of these offensive posts does not belong to any of the previous two categories (e.g., an organization, a situation, an event, or an issue). Data Collection The data included in OLID has been collected from Twitter.
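A small sketch of how a single annotation under the three-level scheme above (Levels A, B, and C) could be represented and validated; the class and field names are illustrative assumptions, not part of the released dataset format.

```python
from dataclasses import dataclass
from typing import Optional

LEVEL_A = {"OFF", "NOT"}          # offensive vs. not offensive
LEVEL_B = {"TIN", "UNT"}          # targeted insult/threat vs. untargeted
LEVEL_C = {"IND", "GRP", "OTH"}   # individual, group, other

@dataclass
class OlidLabel:
    level_a: str
    level_b: Optional[str] = None  # only defined when level_a == "OFF"
    level_c: Optional[str] = None  # only defined when level_b == "TIN"

    def validate(self):
        assert self.level_a in LEVEL_A
        if self.level_a == "NOT":
            assert self.level_b is None and self.level_c is None
        elif self.level_b is not None:
            assert self.level_b in LEVEL_B
            if self.level_b == "TIN":
                assert self.level_c in LEVEL_C
            else:
                assert self.level_c is None

# e.g. a targeted insult aimed at a group, i.e. what is commonly seen as hate speech:
OlidLabel("OFF", "TIN", "GRP").validate()
```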
We retrieved the data using the Twitter API by searching for keywords and constructions that are often included in offensive messages, such as `she is' or `to:BreitBartNews'. We carried out a first round of trial annotation of 300 instances with six experts. The goal of the trial annotation was to 1) evaluate the proposed tagset; 2) evaluate the data retrieval method; and 3) create a gold standard with instances that could be used as test questions in the subsequent training and test set annotation, which was carried out using crowdsourcing. The breakdown of keywords and their offensive content in the trial data of 300 tweets is shown in Table TABREF14. We included a left (@NewYorker) and a far-right (@BreitBartNews) news account because there tends to be political offense in the comments. One of the best offensive keywords was tweets that were flagged as not being safe by the Twitter `safe' filter (the `-' indicates `not safe'). The vast majority of content on Twitter is not offensive, so we tried different strategies to keep a reasonable number of tweets in the offensive class, amounting to around 30% of the dataset, including excluding some keywords that were not high in offensive content such as `they are' and `to:NewYorker'. Although `he is' is lower in offensive content, we kept it as a keyword to avoid gender bias. In addition to the keywords in the trial set, we searched for more political keywords, which tend to be higher in offensive content, and sampled our dataset such that 50% of the tweets come from political keywords and 50% come from non-political keywords. In addition to the keywords `gun control' and `to:BreitbartNews', the political keywords used to collect these tweets are `MAGA', `antifa', `conservative', and `liberal'. We computed Fleiss' $\kappa$ on the trial set for the five annotators on 21 of the tweets. $\kappa$ is .83 for Layer A (OFF vs. NOT), indicating high agreement. As to normalization and anonymization, no user metadata or Twitter IDs have been stored, and URLs and Twitter mentions have been substituted with placeholders. We follow prior work in related areas (burnap2015cyber, davidson2017automated) and annotate our data via crowdsourcing using the platform Figure Eight. We ensure data quality by 1) only accepting annotations from individuals who were experienced on the platform, and 2) using test questions to discard annotations of individuals who did not reach a certain threshold. Each instance in the dataset was annotated by multiple annotators and inter-annotator agreement has been calculated. We first acquired two annotations for each instance. In case of 100% agreement, we considered these as acceptable annotations; in case of disagreement, we requested more annotations until the agreement was above 66%. After the crowdsourcing annotation, we used expert adjudication to guarantee the quality of the annotation. The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15. Experiments and Evaluation We assess our dataset using traditional and deep learning methods. Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13. We also train a bidirectional Long Short-Term-Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead.
It consists of (1) an input embedding layer, (2) a bidirectional LSTM layer, and (3) an average pooling layer over the input features. The concatenation of the LSTM output and the average pooling layer is passed through a dense layer, and the output is passed through a softmax function. We set two input channels for the input embedding layers: pre-trained FastText embeddings BIBREF14, as well as updatable embeddings learned by the model during training. Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15, using the same multi-channel inputs as the above BiLSTM. Our models are trained on the training data and evaluated by predicting the labels for the held-out test set. The distribution is described in Table TABREF15. We evaluate and compare the models using the macro-averaged F1-score, as the label distribution is highly imbalanced. Per-class Precision (P), Recall (R), and F1-score (F1), along with other averaged metrics, are also reported. The models are compared against baselines of predicting all labels as the majority or minority classes. Offensive Language Detection The performance on discriminating between offensive (OFF) and non-offensive (NOT) posts is reported in Table TABREF18. We can see that all systems perform significantly better than chance, with the neural models being substantially better than the SVM. The CNN outperforms the RNN model, achieving a macro-F1 score of 0.80. Categorization of Offensive Language In this experiment, the two systems were trained to discriminate between targeted insults and threats (TIN) and untargeted (UNT) offenses, which generally refer to profanity. The results are shown in Table TABREF19. The CNN system achieved higher performance in this experiment compared to the BiLSTM, with a macro-F1 score of 0.69. All systems performed better at identifying targeted insults and threats (TIN) than untargeted offenses (UNT). Offensive Language Target Identification The results of the offensive target identification experiment are reported in Table TABREF20. Here the systems were trained to distinguish between three targets: a group (GRP), an individual (IND), or others (OTH). All three models achieved similar results, far surpassing the random baselines, with a slight performance edge for the neural models. The performance of all systems for the OTH class is 0. This poor performance can be explained by two main factors. First, unlike the two other classes, OTH is a heterogeneous collection of targets. It includes offensive tweets targeted at organizations, situations, events, etc., making it more challenging for systems to learn discriminative properties of this class. Second, this class contains fewer training instances than the other two: there are only 395 instances in OTH, compared to 1,075 in GRP and 2,407 in IND. Conclusion and Future Work This paper presents OLID, a new dataset with annotation of the type and target of offensive language. OLID is the official dataset of the shared task SemEval 2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval) BIBREF16. In OffensEval, each annotation level in OLID is an independent sub-task. The dataset contains 14,100 tweets and is released freely to the research community. To the best of our knowledge, this is the first dataset to contain annotation of the type and target of offenses in social media, and it opens several new avenues for research in this area.
We present baseline experiments using SVMs and neural networks to identify the offensive tweets, discriminate between insults, threats, and profanity, and finally to identify the target of the offensive messages. The results show that this is a challenging task. A CNN-based sentence classifier achieved the best results in all three sub-tasks. In future work, we would like to make a cross-corpus comparison of OLID and datasets annotated for similar tasks such as aggression identification BIBREF2 and hate speech detection BIBREF8 . This comparison is, however, far from trivial as the annotation of OLID is different. Acknowledgments The research presented in this paper was partially supported by an ERAS fellowship awarded by the University of Wolverhampton.
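A condensed PyTorch sketch of the BiLSTM baseline described in the experiments above (embedding input, bidirectional LSTM, average pooling, concatenation, dense layer, softmax); the layer sizes and the single-channel simplification are assumptions for illustration.

```python
import torch
import torch.nn as nn

class BiLSTMOffenseClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=100, num_classes=2,
                 pretrained_embeddings=None):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        if pretrained_embeddings is not None:        # e.g. FastText vectors
            self.embedding.weight.data.copy_(pretrained_embeddings)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        # dense layer over the concatenated BiLSTM and average-pooled features
        self.dense = nn.Linear(2 * hidden + emb_dim, num_classes)

    def forward(self, token_ids):
        emb = self.embedding(token_ids)              # (batch, seq, emb_dim)
        lstm_out, _ = self.bilstm(emb)               # (batch, seq, 2 * hidden)
        lstm_repr = lstm_out[:, -1, :]               # last time step
        avg_pool = emb.mean(dim=1)                   # average pooling of inputs
        features = torch.cat([lstm_repr, avg_pool], dim=-1)
        return torch.softmax(self.dense(features), dim=-1)
```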
What models are used in the experiment?
linear SVM, bidirectional Long Short-Term-Memory (BiLSTM), Convolutional Neural Network (CNN)
Introduction From a group of small users at the time of its inception in 2009, Quora has evolved in the last few years into one of the largest community driven Q&A sites with diverse user communities. With the help of efficient content moderation/review policies and active in-house review team, efficient Quora bots, this site has emerged into one of the largest and reliable sources of Q&A on the Internet. On Quora, users can post questions, follow questions, share questions, tag them with relevant topics, follow topics, follow users apart from answering, commenting, upvoting/downvoting etc. The integrated social structure at the backbone of it and the topical organization of its rich content have made Quora unique with respect to other Q&A sites like Stack Overflow, Yahoo! Answers etc. and these are some of the prime reasons behind its popularity in recent times. Quality question posting and getting them answered are the key objectives of any Q&A site. In this study we focus on the answerability of questions on Quora, i.e., whether a posted question shall eventually get answered. In Quora, the questions with no answers are referred to as “open questions”. These open questions need to be studied separately to understand the reason behind their not being answered or to be precise, are there any characteristic differences between `open' questions and the answered ones. For example, the question “What are the most promising advances in the treatment of traumatic brain injuries?” was posted on Quora on 23rd June, 2011 and got its first answer after almost 2 years on 22nd April, 2013. The reason that this question remained open so long might be the hardness of answering it and the lack of visibility and experts in the domain. Therefore, it is important to identify the open questions and take measures based on the types - poor quality questions can be removed from Quora and the good quality questions can be promoted so that they get more visibility and are eventually routed to topical experts for better answers. Characterization of the questions based on question quality requires expert human interventions often judging if a question would remain open based on factors like if it is subjective, controversial, open-ended, vague/imprecise, ill-formed, off-topic, ambiguous, uninteresting etc. Collecting judgment data for thousands of question posts is a very expensive process. Therefore, such an experiment can be done only for a small set of questions and it would be practically impossible to scale it up for the entire collection of posts on the Q&A site. In this work, we show that appropriate quantification of various linguistic activities can naturally correspond to many of the judgment factors mentioned above (see table 2 for a collection of examples). These quantities encoding such linguistic activities can be easily measured for each question post and thus helps us to have an alternative mechanism to characterize the answerability on the Q&A site. There are several research works done in Q&A focusing on content of posts. BIBREF0 exploit community feedback to identify high quality content on Yahoo! Answers. BIBREF1 use textual features to predict answer quality on Yahoo! Answers. BIBREF2 , investigate predictors of answer quality through a comparative, controlled field study of user responses. BIBREF3 study the problem of how long questions remain unanswered. BIBREF4 propose a prediction model on how many answers a question shall receive. 
BIBREF5 analyze and predict unanswered questions on Yahoo Answers. BIBREF6 study question quality in Yahoo! Answers. Dataset description We obtained our Quora dataset BIBREF7 through web-based crawls between June 2014 to August 2014. This crawling exercise has resulted in the accumulation of a massive Q&A dataset spanning over a period of over four years starting from January 2010 to May 2014. We initiated crawling with 100 questions randomly selected from different topics so that different genre of questions can be covered. The crawling of the questions follow a BFS pattern through the related question links. We obtained 822,040 unique questions across 80,253 different topics with a total of 1,833,125 answers to these questions. For each question, we separately crawl their revision logs that contain different types of edit information for the question and the activity log of the question asker. Linguistic activities on Quora In this section, we identify various linguistic activities on Quora and propose quantifications of the language usage patterns in this Q&A site. In particular, we show that there exists significant differences in the linguistic structure of the open and the answered questions. Note that most of the measures that we define are simple, intuitive and can be easily obtained automatically from the data (without manual intervention). Therefore the framework is practical, inexpensive and highly scalable. Content of a question text is important to attract people and make them engage more toward it. The linguistic structure (i.e., the usage of POS tags, the use of Out-of-Vocabulary words, character usage etc.) one adopts are key factors for answerability of questions. We shall discuss the linguistic structure that often represents the writing style of a question asker. In fig 1 (a), we observe that askers of open questions generally use more no. of words compared to answered questions. To understand the nature of words (standard English words or chat-like words frequently used in social media) used in the text, we compare the words with GNU Aspell dictionary to see whether they are present in the dictionary or not. We observe that both open questions and answered questions follow similar distribution (see fig 1 (b)). Part-of-Speech (POS) tags are indicators of grammatical aspects of texts. To observe how the Part-of-Speech tags are distributed in the question texts, we define a diversity metric. We use the standard CMU POS tagger BIBREF8 for identifying the POS tags of the constituent words in the question. We define the POS tag diversity (POSDiv) of a question $q_i$ as follows: $POSDiv(q_i) = -\sum _{j \in pos_{set}}p_j\times \log (p_j)$ where $p_j$ is the probability of the $j^{th}$ POS in the set of POS tags. Fig 1 (c) shows that the answered questions have lower POS tag diversity compared to open questions. Question texts undergo several edits so that their readability and the engagement toward them are enhanced. It is interesting to identify how far such edits can make the question different from the original version of it. To capture this phenomena, we have adopted ROUGE-LCS recall BIBREF9 from the domain of text summarization. Higher the recall value, lesser are the changes in the question text. From fig 1 (d), we observe that open questions tend to have higher recall compared to the answered ones which suggests that they have not gone through much of text editing thus allowing for almost no scope of readability enhancement. 
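A minimal sketch of the POS tag diversity measure defined above; any POS tagger that returns a tag sequence (the text uses the CMU tagger) can feed it, and the example tag sequence below is made up.

```python
import math
from collections import Counter

def pos_diversity(pos_tags):
    """Shannon entropy over the POS tag distribution of one question,
    i.e. POSDiv(q_i) = -sum_j p_j * log(p_j)."""
    counts = Counter(pos_tags)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Illustrative tag sequence for a short question:
print(pos_diversity(["O", "V", "D", "N", "P", "D", "N"]))
```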
Psycholinguistic analysis: The way an individual talks or writes, give us clue to his/her linguistic, emotional, and cognitive states. A question asker's linguistic, emotional, cognitive states are also revealed through the language he/she uses in the question text. In order to capture such psycholinguistic aspects of the asker, we use Linguistic Inquiry and Word Count (LIWC) BIBREF10 that analyzes various emotional, cognitive, and structural components present in individuals' written texts. LIWC takes a text document as input and outputs a score for the input for each of the LIWC categories such as linguistic (part-of-speech of the words, function words etc.) and psychological categories (social, anger, positive emotion, negative emotion, sadness etc.) based on the writing style and psychometric properties of the document. In table 1 , we perform a comparative analysis of the asker's psycholinguistic state while asking an open question and an answered question. Askers of open questions use more function words, impersonal pronouns, articles on an average whereas asker of answered questions use more personal pronouns, conjunctions and adverbs to describe their questions. Essentially, open questions lack content words compared to answered questions which, in turn, affects the readability of the question. As far as the psychological aspects are concerned, answered question askers tend to use more social, family, human related words on average compared to an open question asker. The open question askers express more positive emotions whereas the answered question asker tend to express more negative emotions in their texts. Also, answered question askers are more emotionally involved and their questions reveal higher usage of anger, sadness, anxiety related words compared to that of open questions. Open questions, on the other hand, contains more sexual, body, health related words which might be reasons why they do not attract answers. In table 2 , we show a collection of examples of open questions to illustrate that many of the above quantities based on the linguistic activities described in this section naturally correspond to the factors that human judges consider responsible for a question remaining unanswered. This is one of the prime reasons why these quantities qualify as appropriate indicators of answerability. Prediction model In this section, we describe the prediction framework in detail. Our goal is to predict whether a given question after a time period $t$ will be answered or not. Linguistic styles of the question asker The content and way of posing a question is important to attract answers. We have observed in the previous section that these linguistic as well as psycholinguistic aspects of the question asker are discriminatory factors. For the prediction, we use the following features:
Do the answered questions also measure the usefulness of the answers?
No
Introduction Twitter, a micro-blogging and social networking site has emerged as a platform where people express themselves and react to events in real-time. It is estimated that nearly 500 million tweets are sent per day . Twitter data is particularly interesting because of its peculiar nature where people convey messages in short sentences using hashtags, emoticons, emojis etc. In addition, each tweet has meta data like location and language used by the sender. It's challenging to analyze this data because the tweets might not be grammatically correct and the users tend to use informal and slang words all the time. Hence, this poses an interesting problem for NLP researchers. Any advances in using this abundant and diverse data can help understand and analyze information about a person, an event, a product, an organization or a country as a whole. Many notable use cases of the twitter can be found here. Along the similar lines, The Task 1 of WASSA-2017 BIBREF0 poses a problem of finding emotion intensity of four emotions namely anger, fear, joy, sadness from tweets. In this paper, we describe our approach and experiments to solve this problem. The rest of the paper is laid out as follows: Section 2 describes the system architecture, Section 3 reports results and inference from different experiments, while Section 4 points to ways that the problem can be further explored. Preprocessing The preprocessing step modifies the raw tweets before they are passed to feature extraction. Tweets are processed using tweetokenize tool. Twitter specific features are replaced as follows: username handles to USERNAME, phone numbers to PHONENUMBER, numbers to NUMBER, URLs to URL and times to TIME. A continuous sequence of emojis is broken into individual tokens. Finally, all tokens are converted to lowercase. Feature Extraction Many tasks related to sentiment or emotion analysis depend upon affect, opinion, sentiment, sense and emotion lexicons. These lexicons associate words to corresponding sentiment or emotion metrics. On the other hand, the semantic meaning of words, sentences, and documents are preserved and compactly represented using low dimensional vectors BIBREF1 instead of one hot encoding vectors which are sparse and high dimensional. Finally, there are traditional NLP features like word N-grams, character N-grams, Part-Of-Speech N-grams and word clusters which are known to perform well on various tasks. Based on these observations, the feature extraction step is implemented as a union of different independent feature extractors (featurizers) in a light-weight and easy to use Python program EmoInt . It comprises of all features available in the baseline model BIBREF2 along with additional feature extractors and bi-gram support. Fourteen such feature extractors have been implemented which can be clubbed into 3 major categories: [noitemsep] Lexicon Features Word Vectors Syntax Features Lexicon Features: AFINN BIBREF3 word list are manually rated for valence with an integer between -5 (Negative Sentiment) and +5 (Positive Sentiment). Bing Liu BIBREF4 opinion lexicon extract opinion on customer reviews. +/-EffectWordNet BIBREF5 by MPQA group are sense level lexicons. The NRC Affect Intensity BIBREF6 lexicons provide real valued affect intensity. NRC Word-Emotion Association Lexicon BIBREF7 contains 8 sense level associations (anger, fear, anticipation, trust, surprise, sadness, joy, and disgust) and 2 sentiment level associations (negative and positive). 
Expanded NRC Word-Emotion Association Lexicon BIBREF8 expands the NRC word-emotion association lexicon for twitter specific language. NRC Hashtag Emotion Lexicon BIBREF9 contains emotion word associations computed on emotion labeled twitter corpus via Hashtags. NRC Hashtag Sentiment Lexicon and Sentiment140 Lexicon BIBREF10 contains sentiment word associations computed on twitter corpus via Hashtags and Emoticons. SentiWordNet BIBREF11 assigns to each synset of WordNet three sentiment scores: positivity, negativity, objectivity. Negation lexicons collections are used to count the total occurrence of negative words. In addition to these, SentiStrength BIBREF12 application which estimates the strength of positive and negative sentiment from tweets is also added. Word Vectors: We focus primarily on the word vector representations (word embeddings) created specifically using the twitter dataset. GloVe BIBREF13 is an unsupervised learning algorithm for obtaining vector representations for words. 200-dimensional GloVe embeddings trained on 2 Billion tweets are integrated. Edinburgh embeddings BIBREF14 are obtained by training skip-gram model on Edinburgh corpus BIBREF15 . Since tweets are abundant with emojis, Emoji embeddings BIBREF16 which are learned from the emoji descriptions have been used. Embeddings for each tweet are obtained by summing up individual word vectors and then dividing by the number of tokens in the tweet. Syntactic Features: Syntax specific features such as Word N-grams, Part-Of-Speech N-grams BIBREF17 , Brown Cluster N-grams BIBREF18 obtained using TweetNLP project have been integrated into the system. The final feature vector is the concatenation of all the individual features. For example, we concatenate average word vectors, sum of NRC Affect Intensities, number of positive and negative Bing Liu lexicons, number of negation words and so on to get final feature vector. The scaling of final features is not required when used with gradient boosted trees. However, scaling steps like standard scaling (zero mean and unit normal) may be beneficial for neural networks as the optimizers work well when the data is centered around origin. A total of fourteen different feature extractors have been implemented, all of which can be enabled or disabled individually to extract features from a given tweet. Regression The dev data set BIBREF19 in the competition was small hence, the train and dev sets were merged to perform 10-fold cross validation. On each fold, a model was trained and the predictions were collected on the remaining dataset. The predictions are averaged across all the folds to generalize the solution and prevent over-fitting. As described in Section SECREF6 , different combinations of feature extractors were used. After performing feature extraction, the data was then passed to various regressors Support Vector Regression, AdaBoost, RandomForestRegressor, and, BaggingRegressor of sklearn BIBREF20 . Finally, the chosen top performing models had the least error on evaluation metrics namely Pearson's Correlation Coefficient and Spearman's rank-order correlation. Parameter Optimization In order to find the optimal parameter values for the EmoInt system, an extensive grid search was performed through the scikit-Learn framework over all subsets of the training set (shuffled), using stratified 10-fold cross validation and optimizing the Pearson's Correlation score. 
Best cross-validation results were obtained using an AdaBoost meta-regressor with XGBoost BIBREF21 as the base regressor, with 1000 estimators and a 0.1 learning rate. Experiments and analysis of results are presented in the next section. Experimental Results As described in Section SECREF6, various syntactic features were used, namely Part-of-Speech tags and Brown clusters from the TweetNLP project. However, these did not perform well in cross-validation and were hence dropped from the final system. While performing the grid search mentioned in Section SECREF14, we kept all the lexicon-based features the same and varied the combination of emoji vectors and word vectors to minimize the cross-validation metric. Table TABREF16 describes the results for experiments conducted with different combinations of word vectors. Emoji embeddings BIBREF16 give better results than using plain GloVe and Edinburgh embeddings. Edinburgh embeddings outperform GloVe embeddings in the Joy and Sadness categories but lag behind in the Anger and Fear categories. The official submission comprised the top-performing model for each emotion category. This system ranked 3rd on the entire test dataset and 2nd on the subset of the test data formed by taking every instance with a gold emotion intensity score greater than or equal to 0.5. After the competition, experiments were performed on ensembling diverse models to improve accuracy. An ensemble obtained by averaging the results of the top 2 performing models outperforms all the individual models. Feature Importance The relative feature importance can be assessed by the relative depth of the feature used as a decision node in a tree. Features used at the top of the tree contribute to the final prediction decision of a larger fraction of the input samples. The expected fraction of the samples they contribute to can thus be used as an estimate of the relative importance of the features. By averaging the measure over several randomized trees, the variance of the estimate can be reduced and used as a measure of relative feature importance. In Figure FIGREF18, feature importance graphs are plotted for each emotion to infer which features play the major role in identifying emotional intensity in tweets. +/-EffectWordNet BIBREF5, NRC Hashtag Sentiment Lexicon, Sentiment140 Lexicon BIBREF10, and NRC Hashtag Emotion Lexicon BIBREF9 play the most important roles. System Limitations It is important to understand how the model performs in different scenarios. Table TABREF20 analyzes when the system performs best and worst for each emotion. Since the features used are mostly lexicon based, the system has difficulty capturing overall sentiment, which leads to amplified or vanishing intensity signals. For instance, in the fourth Fear example, the lexicon entries louder and shaking imply fear, but the overall sentence does not. A similar pattern can be found in the fourth Anger example and the third Joy example. The system also has difficulty understanding sarcastic tweets; for instance, in the third Anger tweet the user expressed anger but used lol, which is mostly used in a positive sense, and hence the system did a poor job of predicting intensity. The system also fails at predicting sentences with deeper emotion and sentiment that humans can understand with a little context. For example, the fourth Sadness sample refers to post-travel blues, which humans can understand, but with little context it is difficult for the system to accurately estimate the intensity.
The performance is poor with very short sentences as there are fewer indicators to provide a reasonable estimate. Future Work & Conclusion The paper studies the effectiveness of various affect lexicons word embeddings to estimate emotional intensity in tweets. A light-weight easy to use affect computing framework (EmoInt) to facilitate ease of experimenting with various lexicon features for text tasks is open-sourced. It provides plug and play access to various feature extractors and handy scripts for creating ensembles. Few problems explained in the analysis section can be resolved with the help of sentence embeddings which take the context information into consideration. The features used in the system are generic enough to use them in other affective computing tasks on social media text, not just tweet data. Another interesting feature of lexicon-based systems is their good run-time performance during prediction, future work to benchmark the performance of the system can prove vital for deploying in a real-world setting. Acknowledgement We would like to thank the organizers of the WASSA-2017 Shared Task on Emotion Intensity, for providing the data, the guidelines and timely support.
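A hedged end-to-end sketch of the pipeline described above: independent featurizers concatenated into a single vector and fed to an AdaBoost meta-regressor with an XGBoost base regressor. The individual featurizer functions are stand-ins, not EmoInt's real API, and the split of the stated 1000 estimators / 0.1 learning rate between the base and meta model is an assumption.

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from xgboost import XGBRegressor

def extract_features(tokens, featurizers):
    """Concatenate the outputs of independent feature extractors; each
    featurizer maps a token list to a 1-D numpy array."""
    return np.concatenate([f(tokens) for f in featurizers])

# Illustrative stand-ins for lexicon, count, and embedding featurizers:
def afinn_score(tokens):      return np.array([0.0])                        # summed valence
def negation_count(tokens):   return np.array([float(tokens.count("not"))])
def avg_word_vector(tokens):  return np.zeros(200)                          # e.g. averaged GloVe

featurizers = [afinn_score, negation_count, avg_word_vector]
X = np.stack([extract_features(t.split(), featurizers)
              for t in ["i am not happy", "this is wonderful"]])
y = np.array([0.7, 0.9])  # toy emotion intensity targets

model = AdaBoostRegressor(XGBRegressor(n_estimators=1000, learning_rate=0.1),
                          n_estimators=10)
model.fit(X, y)
print(model.predict(X).shape)  # (2,)
```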
What pretrained word embeddings were used?
GloVe embeddings trained on 2 billion tweets, Edinburgh embeddings, and emoji embeddings
Introduction In the kitchen, we increasingly rely on instructions from cooking websites: recipes. A cook with a predilection for Asian cuisine may wish to prepare chicken curry, but may not know all necessary ingredients apart from a few basics. These users with limited knowledge cannot rely on existing recipe generation approaches that focus on creating coherent recipes given all ingredients and a recipe name BIBREF0. Such models do not address issues of personal preference (e.g. culinary tastes, garnish choices) and incomplete recipe details. We propose to approach both problems via personalized generation of plausible, user-specific recipes using user preferences extracted from previously consumed recipes. Our work combines two important tasks from natural language processing and recommender systems: data-to-text generation BIBREF1 and personalized recommendation BIBREF2. Our model takes as user input the name of a specific dish, a few key ingredients, and a calorie level. We pass these loose input specifications to an encoder-decoder framework and attend on user profiles—learned latent representations of recipes previously consumed by a user—to generate a recipe personalized to the user's tastes. We fuse these `user-aware' representations with decoder output in an attention fusion layer to jointly determine text generation. Quantitative (perplexity, user-ranking) and qualitative analysis on user-aware model outputs confirm that personalization indeed assists in generating plausible recipes from incomplete ingredients. While personalized text generation has seen success in conveying user writing styles in the product review BIBREF3, BIBREF4 and dialogue BIBREF5 spaces, we are the first to consider it for the problem of recipe generation, where output quality is heavily dependent on the content of the instructions—such as ingredients and cooking techniques. To summarize, our main contributions are as follows: We explore a new task of generating plausible and personalized recipes from incomplete input specifications by leveraging historical user preferences; We release a new dataset of 180K+ recipes and 700K+ user reviews for this task; We introduce new evaluation strategies for generation quality in instructional texts, centering on quantitative measures of coherence. We also show qualitatively and quantitatively that personalized models generate high-quality and specific recipes that align with historical user preferences. Related Work Large-scale transformer-based language models have shown surprising expressivity and fluency in creative and conditional long-text generation BIBREF6, BIBREF7. Recent works have proposed hierarchical methods that condition on narrative frameworks to generate internally consistent long texts BIBREF8, BIBREF9, BIBREF10. Here, we generate procedurally structured recipes instead of free-form narratives. Recipe generation belongs to the field of data-to-text natural language generation BIBREF1, which sees other applications in automated journalism BIBREF11, question-answering BIBREF12, and abstractive summarization BIBREF13, among others. BIBREF14, BIBREF15 model recipes as a structured collection of ingredient entities acted upon by cooking actions. BIBREF0 imposes a `checklist' attention constraint emphasizing hitherto unused ingredients during generation. BIBREF16 attend over explicit ingredient references in the prior recipe step. 
Similar hierarchical approaches that infer a full ingredient list to constrain generation will not help personalize recipes, and would be infeasible in our setting due to the potentially unconstrained number of ingredients (from a space of 10K+) in a recipe. We instead learn historical preferences to guide full recipe generation. A recent line of work has explored user- and item-dependent aspect-aware review generation BIBREF3, BIBREF4. This work is related to ours in that it combines contextual language generation with personalization. Here, we attend over historical user preferences from previously consumed recipes to generate recipe content, rather than writing styles. Approach Our model's input specification consists of: the recipe name as a sequence of tokens, a partial list of ingredients, and a caloric level (high, medium, low). It outputs the recipe instructions as a token sequence: $\mathcal {W}_r=\lbrace w_{r,0}, \dots , w_{r,T}\rbrace $ for a recipe $r$ of length $T$. To personalize output, we use historical recipe interactions of a user $u \in \mathcal {U}$. Encoder: Our encoder has three embedding layers: vocabulary embedding $\mathcal {V}$, ingredient embedding $\mathcal {I}$, and caloric-level embedding $\mathcal {C}$. Each token in the (length $L_n$) recipe name is embedded via $\mathcal {V}$; the embedded token sequence is passed to a two-layered bidirectional GRU (BiGRU) BIBREF17, which outputs hidden states for names $\lbrace \mathbf {n}_{\text{enc},j} \in \mathbb {R}^{2d_h}\rbrace $, with hidden size $d_h$. Similarly each of the $L_i$ input ingredients is embedded via $\mathcal {I}$, and the embedded ingredient sequence is passed to another two-layered BiGRU to output ingredient hidden states as $\lbrace \mathbf {i}_{\text{enc},j} \in \mathbb {R}^{2d_h}\rbrace $. The caloric level is embedded via $\mathcal {C}$ and passed through a projection layer with weights $W_c$ to generate calorie hidden representation $\mathbf {c}_{\text{enc}} \in \mathbb {R}^{2d_h}$. Ingredient Attention: We apply attention BIBREF18 over the encoded ingredients to use encoder outputs at each decoding time step. We define an attention-score function $\alpha $ with key $K$ and query $Q$: with trainable weights $W_{\alpha }$, bias $\mathbf {b}_{\alpha }$, and normalization term $Z$. At decoding time $t$, we calculate the ingredient context $\mathbf {a}_{t}^{i} \in \mathbb {R}^{d_h}$ as: Decoder: The decoder is a two-layer GRU with hidden state $h_t$ conditioned on previous hidden state $h_{t-1}$ and input token $w_{r, t}$ from the original recipe text. We project the concatenated encoder outputs as the initial decoder hidden state: To bias generation toward user preferences, we attend over a user's previously reviewed recipes to jointly determine the final output token distribution. We consider two different schemes to model preferences from user histories: (1) recipe interactions, and (2) techniques seen therein (defined in data). BIBREF19, BIBREF20, BIBREF21 explore similar schemes for personalized recommendation. Prior Recipe Attention: We obtain the set of prior recipes for a user $u$: $R^+_u$, where each recipe can be represented by an embedding from a recipe embedding layer $\mathcal {R}$ or an average of the name tokens embedded by $\mathcal {V}$. We attend over the $k$-most recent prior recipes, $R^{k+}_u$, to account for temporal drift of user preferences BIBREF22. These embeddings are used in the `Prior Recipe' and `Prior Name' models, respectively. 
Given a recipe representation $\mathbf {r} \in \mathbb {R}^{d_r}$ (where $d_r$ is recipe- or vocabulary-embedding size depending on the recipe representation) the prior recipe attention context $\mathbf {a}_{t}^{r_u}$ is calculated as Prior Technique Attention: We calculate prior technique preference (used in the `Prior Tech` model) by normalizing co-occurrence between users and techniques seen in $R^+_u$, to obtain a preference vector $\rho _{u}$. Each technique $x$ is embedded via a technique embedding layer $\mathcal {X}$ to $\mathbf {x}\in \mathbb {R}^{d_x}$. Prior technique attention is calculated as where, inspired by copy mechanisms BIBREF23, BIBREF24, we add $\rho _{u,x}$ for technique $x$ to emphasize the attention by the user's prior technique preference. Attention Fusion Layer: We fuse all contexts calculated at time $t$, concatenating them with decoder GRU output and previous token embedding: We then calculate the token probability: and maximize the log-likelihood of the generated sequence conditioned on input specifications and user preferences. fig:ex shows a case where the Prior Name model attends strongly on previously consumed savory recipes to suggest the usage of an additional ingredient (`cilantro'). Recipe Dataset: Food.com We collect a novel dataset of 230K+ recipe texts and 1M+ user interactions (reviews) over 18 years (2000-2018) from Food.com. Here, we restrict to recipes with at least 3 steps, and at least 4 and no more than 20 ingredients. We discard users with fewer than 4 reviews, giving 180K+ recipes and 700K+ reviews, with splits as in tab:recipeixnstats. Our model must learn to generate from a diverse recipe space: in our training data, the average recipe length is 117 tokens with a maximum of 256. There are 13K unique ingredients across all recipes. Rare words dominate the vocabulary: 95% of words appear $<$100 times, accounting for only 1.65% of all word usage. As such, we perform Byte-Pair Encoding (BPE) tokenization BIBREF25, BIBREF26, giving a training vocabulary of 15K tokens across 19M total mentions. User profiles are similarly diverse: 50% of users have consumed $\le $6 recipes, while 10% of users have consumed $>$45 recipes. We order reviews by timestamp, keeping the most recent review for each user as the test set, the second most recent for validation, and the remainder for training (sequential leave-one-out evaluation BIBREF27). We evaluate only on recipes not in the training set. We manually construct a list of 58 cooking techniques from 384 cooking actions collected by BIBREF15; the most common techniques (bake, combine, pour, boil) account for 36.5% of technique mentions. We approximate technique adherence via string match between the recipe text and technique list. Experiments and Results For training and evaluation, we provide our model with the first 3-5 ingredients listed in each recipe. We decode recipe text via top-$k$ sampling BIBREF7, finding $k=3$ to produce satisfactory results. We use a hidden size $d_h=256$ for both the encoder and decoder. Embedding dimensions for vocabulary, ingredient, recipe, techniques, and caloric level are 300, 10, 50, 50, and 5 (respectively). For prior recipe attention, we set $k=20$, the 80th %-ile for the number of user interactions. We use the Adam optimizer BIBREF28 with a learning rate of $10^{-3}$, annealed with a decay rate of 0.9 BIBREF29. We also use teacher-forcing BIBREF30 in all training epochs. 
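A minimal PyTorch sketch of the ingredient attention described above is given below. The exact score function $\alpha$ (with $W_{\alpha}$, $\mathbf{b}_{\alpha}$ and the normalizer $Z$) is not reproduced in the text, so the additive scoring, the projection back to $d_h$ and all layer sizes are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IngredientAttention(nn.Module):
    """Scores each encoded ingredient against the current decoder state and
    returns a weighted context vector a_t^i (sketch, not the paper's code)."""
    def __init__(self, key_dim: int, query_dim: int, out_dim: int):
        super().__init__()
        self.score = nn.Linear(key_dim + query_dim, 1)  # plays the role of W_alpha, b_alpha
        self.proj = nn.Linear(key_dim, out_dim)         # map 2*d_h keys down to d_h

    def forward(self, keys: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # keys:  (batch, L_i, key_dim)  BiGRU ingredient states
        # query: (batch, query_dim)     decoder hidden state at step t
        q = query.unsqueeze(1).expand(-1, keys.size(1), -1)
        scores = self.score(torch.cat([keys, q], dim=-1)).squeeze(-1)   # (batch, L_i)
        weights = F.softmax(scores, dim=-1)             # softmax acts as the normalizer Z
        context = torch.bmm(weights.unsqueeze(1), keys).squeeze(1)      # (batch, key_dim)
        return self.proj(context)                       # a_t^i in R^{d_h}

# toy usage with illustrative sizes (2*d_h = 512 encoder states, d_h = 256)
attn = IngredientAttention(key_dim=512, query_dim=256, out_dim=256)
context = attn(torch.randn(2, 6, 512), torch.randn(2, 256))
print(context.shape)  # torch.Size([2, 256])
```

The same pattern extends to the prior-recipe and prior-technique attentions, with recipe or technique embeddings as keys and, for techniques, the preference weight $\rho_{u,x}$ added to the scores.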
In this work, we investigate how leveraging historical user preferences can improve generation quality over strong baselines in our setting. We compare our personalized models against two baselines. The first is a name-based Nearest-Neighbor model (NN). We initially adapted the Neural Checklist Model of BIBREF0 as a baseline; however, we ultimately use a simple Encoder-Decoder baseline with ingredient attention (Enc-Dec), which provides comparable performance and lower complexity. All personalized models outperform baseline in BPE perplexity (tab:metricsontest) with Prior Name performing the best. While our models exhibit comparable performance to baseline in BLEU-1/4 and ROUGE-L, we generate more diverse (Distinct-1/2: percentage of distinct unigrams and bigrams) and acceptable recipes. BLEU and ROUGE are not the most appropriate metrics for generation quality. A `correct' recipe can be written in many ways with the same main entities (ingredients). As BLEU-1/4 capture structural information via n-gram matching, they are not correlated with subjective recipe quality. This mirrors observations from BIBREF31, BIBREF8. We observe that personalized models make more diverse recipes than baseline. They thus perform better in BLEU-1 with more key entities (ingredient mentions) present, but worse in BLEU-4, as these recipes are written in a personalized way and deviate from gold on the phrasal level. Similarly, the `Prior Name' model generates more unigram-diverse recipes than other personalized models and obtains a correspondingly lower BLEU-1 score. Qualitative Analysis: We present sample outputs for a cocktail recipe in tab:samplerecipes, and additional recipes in the appendix. Generation quality progressively improves from generic baseline output to a blended cocktail produced by our best performing model. Models attending over prior recipes explicitly reference ingredients. The Prior Name model further suggests the addition of lemon and mint, which are reasonably associated with previously consumed recipes like coconut mousse and pork skewers. Personalization: To measure personalization, we evaluate how closely the generated text corresponds to a particular user profile. We compute the likelihood of generated recipes using identical input specifications but conditioned on ten different user profiles—one `gold' user who consumed the original recipe, and nine randomly generated user profiles. Following BIBREF8, we expect the highest likelihood for the recipe conditioned on the gold user. We measure user matching accuracy (UMA)—the proportion where the gold user is ranked highest—and Mean Reciprocal Rank (MRR) BIBREF32 of the gold user. All personalized models beat baselines in both metrics, showing our models personalize generated recipes to the given user profiles. The Prior Name model achieves the best UMA and MRR by a large margin, revealing that prior recipe names are strong signals for personalization. Moreover, the addition of attention mechanisms to capture these signals improves language modeling performance over a strong non-personalized baseline. Recipe Level Coherence: A plausible recipe should possess a coherent step order, and we evaluate this via a metric for recipe-level coherence. We use the neural scoring model from BIBREF33 to measure recipe-level coherence for each generated recipe. Each recipe step is encoded by BERT BIBREF34. 
Our scoring model is a GRU network that learns the overall recipe step ordering structure by minimizing the cosine similarity of recipe step hidden representations presented in the correct and reverse orders. Once pretrained, our scorer calculates the similarity of a generated recipe to the forward and backwards ordering of its corresponding gold label, giving a score equal to the difference between the former and latter. A higher score indicates better step ordering (with a maximum score of 2). tab:coherencemetrics shows that our personalized models achieve average recipe-level coherence scores of 1.78-1.82, surpassing the baseline at 1.77. Recipe Step Entailment: Local coherence is also crucial to a user following a recipe: it is crucial that subsequent steps are logically consistent with prior ones. We model local coherence as an entailment task: predicting the likelihood that a recipe step follows the preceding. We sample several consecutive (positive) and non-consecutive (negative) pairs of steps from each recipe. We train a BERT BIBREF34 model to predict the entailment score of a pair of steps separated by a [SEP] token, using the final representation of the [CLS] token. The step entailment score is computed as the average of scores for each set of consecutive steps in each recipe, averaged over every generated recipe for a model, as shown in tab:coherencemetrics. Human Evaluation: We presented 310 pairs of recipes for pairwise comparison BIBREF8 (details in appendix) between baseline and each personalized model, with results shown in tab:metricsontest. On average, human evaluators preferred personalized model outputs to baseline 63% of the time, confirming that personalized attention improves the semantic plausibility of generated recipes. We also performed a small-scale human coherence survey over 90 recipes, in which 60% of users found recipes generated by personalized models to be more coherent and preferable to those generated by baseline models. Conclusion In this paper, we propose a novel task: to generate personalized recipes from incomplete input specifications and user histories. On a large novel dataset of 180K recipes and 700K reviews, we show that our personalized generative models can generate plausible, personalized, and coherent recipes preferred by human evaluators for consumption. We also introduce a set of automatic coherence measures for instructional texts as well as personalization metrics to support our claims. Our future work includes generating structured representations of recipes to handle ingredient properties, as well as accounting for references to collections of ingredients (e.g. “dry mix"). Acknowledgements. This work is partly supported by NSF #1750063. We thank all reviewers for their constructive suggestions, as well as Rei M., Sujoy P., Alicia L., Eric H., Tim S., Kathy C., Allen C., and Micah I. for their feedback. Appendix ::: Food.com: Dataset Details Our raw data consists of 270K recipes and 1.4M user-recipe interactions (reviews) scraped from Food.com, covering a period of 18 years (January 2000 to December 2018). See tab:int-stats for dataset summary statistics, and tab:samplegk for sample information about one user-recipe interaction and the recipe involved. Appendix ::: Generated Examples See tab:samplechx for a sample recipe for chicken chili and tab:samplewaffle for a sample recipe for sweet waffles. 
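To make the recipe-level coherence score concrete, the sketch below assumes a pretrained encoder that maps a list of steps to a single vector (in the paper, a GRU over BERT step embeddings); the toy character-count encoder is only a stand-in so the snippet runs.

```python
import torch
import torch.nn.functional as F

def coherence_score(encode, generated_steps, gold_steps):
    # `encode` is assumed to map a list of step strings to one vector
    gen = encode(generated_steps)
    forward = encode(gold_steps)
    backward = encode(list(reversed(gold_steps)))
    sim_f = F.cosine_similarity(gen, forward, dim=0)
    sim_b = F.cosine_similarity(gen, backward, dim=0)
    return (sim_f - sim_b).item()          # higher = better ordering, at most 2

# toy stand-in encoder: position-weighted bag of characters (illustration only)
def toy_encode(steps):
    vec = torch.zeros(26)
    for weight, step in enumerate(steps, start=1):
        for ch in step.lower():
            if ch.isalpha():
                vec[ord(ch) - ord("a")] += weight
    return vec

gold = ["preheat oven", "mix batter", "bake for 30 minutes"]
print(coherence_score(toy_encode, gold, gold))  # positive for the correct order
```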
Human Evaluation We prepared a set of 15 pairwise comparisons per evaluation session, and collected 930 pairwise evaluations (310 per personalized model) over 62 sessions. For each pair, users were given a partial recipe specification (name and 3-5 key ingredients), as well as two generated recipes labeled `A' and `B'. One recipe is generated from our baseline encoder-decoder model and one recipe is generated by one of our three personalized models (Prior Tech, Prior Name, Prior Recipe). The order of recipe presentation (A/B) is randomly selected for each question. A screenshot of the user evaluation interface is given in fig:exeval. We ask the user to indicate which recipe they find more coherent, and which recipe best accomplishes the goal indicated by the recipe name. A screenshot of this survey interface is given in fig:exeval2.
What were their results on the new dataset?
Average recipe-level coherence scores of 1.78-1.82; human evaluators preferred personalized model outputs to baseline 63% of the time
Introduction Neural networks have been successfully used to describe images with text using sequence-to-sequence models BIBREF0. However, the results are simple and dry captions which are one or two phrases long. Humans looking at a painting see more than just objects. Paintings stimulate sentiments, metaphors and stories as well. Therefore, our goal is to have a neural network describe the painting artistically in a style of choice. As a proof of concept, we present a model which generates Shakespearean prose for a given painting as shown in Figure FIGREF1. Accomplishing this task is difficult with traditional sequence-to-sequence models since there does not exist a large collection of Shakespearean prose which describes paintings: Shakespeare's works describe only a single painting, shown in Figure FIGREF3. Fortunately, we have a dataset of modern English poems which describe images BIBREF1 and line-by-line modern paraphrases of Shakespeare's plays BIBREF2. Our solution is therefore to combine two separately trained models to synthesize Shakespearean prose for a given painting. Introduction ::: Related work A general end-to-end approach to sequence learning BIBREF3 makes minimal assumptions on the sequence structure. This model is widely used in tasks such as machine translation, text summarization, conversational modeling, and image captioning. A generative model using a deep recurrent architecture BIBREF0 has also been used for generating phrases describing an image. The task of synthesizing multiple lines of poetry for a given image BIBREF1 is accomplished by extracting poetic clues from images. Given the context image, the network associates image attributes with poetic descriptions using a convolutional neural net. The poem is generated using a recurrent neural net which is trained using multi-adversarial training via policy gradient. Transforming text from modern English to Shakespearean English using text "style transfer" is challenging. An end-to-end approach using a sequence-to-sequence model over a parallel text corpus BIBREF2 has been proposed based on machine translation. In the absence of a parallel text corpus, generative adversarial networks (GANs) have been used, which simultaneously train two models: a generative model which captures the data distribution, and a discriminative model which evaluates the performance of the generator. Using a target-domain language model as a discriminator has also been employed BIBREF4, providing richer and more stable token-level feedback during the learning process. A key challenge in both image and text style transfer is separating content from style BIBREF5, BIBREF6, BIBREF7. Cross-aligned auto-encoder models have focused on style transfer using non-parallel text BIBREF7. Recently, a fine-grained model for text style transfer has been proposed BIBREF8 which controls several factors of variation in textual data by using back-translation. This allows control over multiple attributes, such as gender and sentiment, and fine-grained control over the trade-off between content and style. Methods We use a total of three datasets: two datasets for generating an English poem from an image, and Shakespeare plays and their English translations for text style transfer. We train a model for generating poems from images based on two datasets BIBREF1. The first dataset consists of image and poem pairs, namely a multi-modal poem dataset (MultiM-Poem), and the second dataset is a large poem corpus, namely a uni-modal poem dataset (UniM-Poem).
The image and poem pairs are extended by adding the nearest three neighbor poems from the poem corpus without redundancy, and an extended image and poem pair dataset is constructed and denoted as MultiM-Poem(Ex) BIBREF1. We use a collection of line-by-line modern paraphrases for 16 of Shakespeare’s plays BIBREF2 for training a style transfer network from English poems to Shakespearean prose. We use 18,395 sentences from the training data split. We keep 1,218 sentences in the validation data set and 1,462 sentences in our test set. Methods ::: Image To Poem Actor-Critic Model For generating a poem from images we use an existing actor-critic architecture BIBREF1. This involves 3 parallel CNNs: an object CNN, sentiment CNN, and scene CNN, for feature extraction. These features are combined with a skip-thought model which provides poetic clues, which are then fed into a sequence-to-sequence model trained by policy gradient with 2 discriminator networks for rewards. This as a whole forms a pipeline that takes in an image and outputs a poem as shown on the top left of Figure FIGREF4. A CNN-RNN generative model acts as an agent. The parameters of this agent define a policy whose execution determines which word is selected as an action. When the agent selects all words in a poem, it receives a reward. Two discriminative networks, shown on the top right of Figure FIGREF4, are defined to serve as rewards concerning whether the generated poem properly describes the input image and whether the generated poem is poetic. The goal of the poem generation model is to generate a sequence of words as a poem for an image to maximize the expected return. Methods ::: Shakespearizing Poetic Captions For Shakespearizing modern English texts, we experimented with various types of sequence-to-sequence models. Since the size of the available parallel translation data is small, we leverage a dictionary providing a mapping between Shakespearean words and modern English words to retrofit pre-trained word embeddings. Incorporating this extra information improves the translation task. The large number of shared word types between the source and target sentences indicates that sharing the representation between them is beneficial. Methods ::: Shakespearizing Poetic Captions ::: Seq2Seq with Attention We use a sequence-to-sequence model which consists of a single-layer unidirectional LSTM encoder, a single-layer LSTM decoder, and pre-trained retrofitted word embeddings shared between source and target sentences. We experimented with two different types of attention: global attention BIBREF9, in which the model makes use of the output from the encoder and decoder for the current time step only, and Bahdanau attention BIBREF10, where computing attention requires the output of the decoder from the prior time step. We found that global attention performs better in practice for our task of text style transfer. Methods ::: Shakespearizing Poetic Captions ::: Seq2Seq with a Pointer Network Since a pair of corresponding Shakespeare and modern English sentences has significant vocabulary overlap, we extend the sequence-to-sequence model mentioned above using pointer networks BIBREF11, which provide location-based attention and have been used to enable copying of tokens directly from the input. Moreover, there are a lot of proper nouns and rare words which might not be predicted by a vanilla sequence-to-sequence model.
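A minimal sketch of the global (Luong-style) attention used in the Seq2Seq model is shown below; the dot-product score, the hidden size and the toy tensors are illustrative assumptions, and the retrofitted embeddings and pointer network are omitted.

```python
import torch
import torch.nn.functional as F

def global_attention(decoder_state, encoder_outputs):
    # decoder_state:   (batch, d)       decoder output at the current time step
    # encoder_outputs: (batch, src, d)  all encoder hidden states
    scores = torch.bmm(encoder_outputs, decoder_state.unsqueeze(-1)).squeeze(-1)
    weights = F.softmax(scores, dim=-1)                      # alignment over the source
    context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)
    return context, weights

# toy tensors with illustrative sizes
ctx, attn = global_attention(torch.randn(2, 256), torch.randn(2, 7, 256))
print(ctx.shape, attn.shape)  # torch.Size([2, 256]) torch.Size([2, 7])
```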
Methods ::: Shakespearizing Poetic Captions ::: Prediction For both seq2seq models, we use the attention matrices returned at each decoder time step during inference, to compute the next word in the translated sequence if the decoder output at the current time step is the UNK token. We replace the UNKs in the target output with the highest aligned, maximum attention, source word. The seq2seq model with global attention gives the best results with an average target BLEU score of 29.65 on the style transfer dataset, compared with an average target BLEU score of 26.97 using the seq2seq model with pointer networks. Results We perform a qualitative analysis of the Shakespearean prose generated for the input paintings. We conducted a survey, in which we presented famous paintings including those shown in Figures FIGREF1 and FIGREF10 and the corresponding Shakespearean prose generated by the model, and asked 32 students to rate them on the basis of content, creativity and similarity to Shakespearean style on a Likert scale of 1-5. Figure FIGREF12 shows the result of our human evaluation. The average content score across the paintings is 3.7 which demonstrates that the prose generated is relevant to the painting. The average creativity score is 3.9 which demonstrates that the model captures more than basic objects in the painting successfully using poetic clues in the scene. The average style score is 3.9 which demonstrates that the prose generated is perceived to be in the style of Shakespeare. We also perform a quantitative analysis of style transfer by generating BLEU scores for the model output using the style transfer dataset. The variation of the BLEU scores with the source sentence lengths is shown in Figure FIGREF11. As expected, the BLEU scores decrease with increase in source sentence lengths. Results ::: Implementation All models were trained on Google Colab with a single GPU using Python 3.6 and Tensorflow 2.0. The number of hidden units for the encoder and decoder is 1,576 and 256 for seq2seq with global attention and seq2seq with pointer networks respectively. Adam optimizer was used with the default learning rate of 0.001. The model was trained for 25 epochs. We use pre-trained retrofitted word embeddings of dimension 192. Results ::: Limitations Since we do not have an end-to-end dataset, the generated English poem may not work well with Shakespeare style transfer as shown in Figure FIGREF12 for "Starry Night" with a low average content score. This happens when the style transfer dataset does not have similar words in the training set of sentences. A solution would be to expand the style transfer dataset, for a better representation of the poem data. Results ::: Conclusions and Future Work In conclusion, combining two pipelines with an intermediate representation works well in practice. We observe that a CNN-RNN based image-to-poem net combined with a seq2seq model with parallel text corpus for text style transfer synthesizes Shakespeare-style prose for a given painting. For the seq2seq model used, we observe that it performs better in practice using global attention as compared with local attention. We make our models and code publicly available BIBREF12. In future work we would like to experiment with GANs in the absence of non-parallel datasets, so that we can use varied styles for text style transfer. We would also like to experiment with cross aligned auto-encoders, which form a latent content representation, to efficiently separate style and content.
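The UNK post-processing described in the Prediction subsection can be sketched as follows, assuming per-step attention weights aligned with the source tokens; the token strings below are made up.

```python
# Sketch: when the decoder emits UNK, copy the source word with the highest
# attention weight at that decoding step.
def replace_unks(output_tokens, source_tokens, attention, unk="<unk>"):
    # attention: one list of weights per output step, aligned with source_tokens
    fixed = []
    for t, tok in enumerate(output_tokens):
        if tok == unk:
            best = max(range(len(source_tokens)), key=lambda j: attention[t][j])
            fixed.append(source_tokens[best])
        else:
            fixed.append(tok)
    return fixed

# toy example
print(replace_unks(["thou", "art", "<unk>"], ["you", "are", "Romeo"],
                   [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.05, 0.05, 0.9]]))
# ['thou', 'art', 'Romeo']
```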
What limitations of their model do the authors demonstrate?
Since we do not have an end-to-end dataset, the generated English poem may not work well with Shakespeare style transfer
Introduction Word Sense Disambiguation (WSD) is a fundamental task and long-standing challenge in Natural Language Processing (NLP), which aims to find the exact sense of an ambiguous word in a particular context BIBREF0. Previous WSD approaches can be grouped into two main categories: knowledge-based and supervised methods. Knowledge-based WSD methods rely on lexical resources like WordNet BIBREF1 and usually exploit two kinds of lexical knowledge. The gloss, which defines a word sense meaning, is first utilized in Lesk algorithm BIBREF2 and then widely taken into account in many other approaches BIBREF3, BIBREF4. Besides, structural properties of semantic graphs are mainly used in graph-based algorithms BIBREF5, BIBREF6. Traditional supervised WSD methods BIBREF7, BIBREF8, BIBREF9 focus on extracting manually designed features and then train a dedicated classifier (word expert) for every target lemma. Although word expert supervised WSD methods perform better, they are less flexible than knowledge-based methods in the all-words WSD task BIBREF10. Recent neural-based methods are devoted to dealing with this problem. BIBREF11 present a supervised classifier based on Bi-LSTM, which shares parameters among all word types except the last layer. BIBREF10 convert WSD task to a sequence labeling task, thus building a unified model for all polysemous words. However, neither of them can totally beat the best word expert supervised methods. More recently, BIBREF12 propose to leverage the gloss information from WordNet and model the semantic relationship between the context and gloss in an improved memory network. Similarly, BIBREF13 introduce a (hierarchical) co-attention mechanism to generate co-dependent representations for the context and gloss. Their attempts prove that incorporating gloss knowledge into supervised WSD approach is helpful, but they still have not achieved much improvement, because they may not make full use of gloss knowledge. In this paper, we focus on how to better leverage gloss information in a supervised neural WSD system. Recently, the pre-trained language models, such as ELMo BIBREF14 and BERT BIBREF15, have shown their effectiveness to alleviate the effort of feature engineering. Especially, BERT has achieved excellent results in question answering (QA) and natural language inference (NLI). We construct context-gloss pairs from glosses of all possible senses (in WordNet) of the target word, thus treating WSD task as a sentence-pair classification problem. We fine-tune the pre-trained BERT model and achieve new state-of-the-art results on WSD task. In particular, our contribution is two-fold: 1. We construct context-gloss pairs and propose three BERT-based models for WSD. 2. We fine-tune the pre-trained BERT model, and the experimental results on several English all-words WSD benchmark datasets show that our approach significantly outperforms the state-of-the-art systems. Methodology In this section, we describe our method in detail. Methodology ::: Task Definition In WSD, a sentence $s$ usually consists of a series of words: $\lbrace w_1,\cdots ,w_m\rbrace $, and some of the words $\lbrace w_{i_1},\cdots ,w_{i_k}\rbrace $ are targets $\lbrace t_1,\cdots ,t_k\rbrace $ need to be disambiguated. For each target $t$, its candidate senses $\lbrace c_1,\cdots ,c_n\rbrace $ come from entries of its lemma in a pre-defined sense inventory (usually WordNet). Therefore, WSD task aims to find the most suitable entry (symbolized as unique sense key) for each target in a sentence. 
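As an illustration of where candidate senses come from, the snippet below queries WordNet through NLTK; the paper does not show its lookup code, and the snippet assumes the WordNet data has been installed via nltk.download("wordnet").

```python
# Illustration only: candidate senses and glosses for a target lemma via NLTK's
# WordNet interface (assumes nltk.download("wordnet") has been run).
from nltk.corpus import wordnet as wn

def candidate_senses(lemma, pos=None):
    senses = []
    for l in wn.lemmas(lemma, pos=pos):
        senses.append((l.key(), l.synset().definition()))  # (sense key, gloss)
    return senses

for key, gloss in candidate_senses("research", pos="n"):
    print(key, "->", gloss)
```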
See a sentence example in Table TABREF1. Methodology ::: BERT BERT BIBREF15 is a new language representation model, and its architecture is a multi-layer bidirectional Transformer encoder. BERT model is pre-trained on a large corpus and two novel unsupervised prediction tasks, i.e., masked language model and next sentence prediction tasks are used in pre-training. When incorporating BERT into downstream tasks, the fine-tuning procedure is recommended. We fine-tune the pre-trained BERT model on WSD task. Methodology ::: BERT ::: BERT(Token-CLS) Since every target in a sentence needs to be disambiguated to find its exact sense, WSD task can be regarded as a token-level classification task. To incorporate BERT to WSD task, we take the final hidden state of the token corresponding to the target word (if more than one token, we average them) and add a classification layer for every target lemma, which is the same as the last layer of the Bi-LSTM model BIBREF11. Methodology ::: GlossBERT BERT can explicitly model the relationship of a pair of texts, which has shown to be beneficial to many pair-wise natural language understanding tasks. In order to fully leverage gloss information, we propose GlossBERT to construct context-gloss pairs from all possible senses of the target word in WordNet, thus treating WSD task as a sentence-pair classification problem. We describe our construction method with an example (See Table TABREF1). There are four targets in this sentence, and here we take target word research as an example: Methodology ::: GlossBERT ::: Context-Gloss Pairs The sentence containing target words is denoted as context sentence. For each target word, we extract glosses of all $N$ possible senses (here $N=4$) of the target word (research) in WordNet to obtain the gloss sentence. [CLS] and [SEP] marks are added to the context-gloss pairs to make it suitable for the input of BERT model. A similar idea is also used in aspect-based sentiment analysis BIBREF16. Methodology ::: GlossBERT ::: Context-Gloss Pairs with Weak Supervision Based on the previous construction method, we add weak supervised signals to the context-gloss pairs (see the highlighted part in Table TABREF1). The signal in the gloss sentence aims to point out the target word, and the signal in the context sentence aims to emphasize the target word considering the situation that a target word may occur more than one time in the same sentence. Therefore, each target word has $N$ context-gloss pair training instances ($label\in \lbrace yes, no\rbrace $). When testing, we output the probability of $label=yes$ of each context-gloss pair and choose the sense corresponding to the highest probability as the prediction label of the target word. We experiment with three GlossBERT models: Methodology ::: GlossBERT ::: GlossBERT(Token-CLS) We use context-gloss pairs as input. We highlight the target word by taking the final hidden state of the token corresponding to the target word (if more than one token, we average them) and add a classification layer ($label\in \lbrace yes, no\rbrace $). Methodology ::: GlossBERT ::: GlossBERT(Sent-CLS) We use context-gloss pairs as input. We take the final hidden state of the first token [CLS] as the representation of the whole sequence and add a classification layer ($label\in \lbrace yes, no\rbrace $), which does not highlight the target word. Methodology ::: GlossBERT ::: GlossBERT(Sent-CLS-WS) We use context-gloss pairs with weak supervision as input. 
We take the final hidden state of the first token [CLS] and add a classification layer ($label\in \lbrace yes, no\rbrace $), which weekly highlight the target word by the weak supervision. Experiments ::: Datasets The statistics of the WSD datasets are shown in Table TABREF12. Experiments ::: Datasets ::: Training Dataset Following previous work BIBREF13, BIBREF12, BIBREF10, BIBREF17, BIBREF9, BIBREF7, we choose SemCor3.0 as training corpus, which is the largest corpus manually annotated with WordNet sense for WSD. Experiments ::: Datasets ::: Evaluation Datasets We evaluate our method on several English all-words WSD datasets. For a fair comparison, we use the benchmark datasets proposed by BIBREF17 which include five standard all-words fine-grained WSD datasets from the Senseval and SemEval competitions: Senseval-2 (SE2), Senseval-3 (SE3), SemEval-2007 (SE07), SemEval-2013 (SE13) and SemEval-2015 (SE15). Following BIBREF13, BIBREF12 and BIBREF10, we choose SE07, the smallest among these test sets, as the development set. Experiments ::: Datasets ::: WordNet Since BIBREF17 map all the sense annotations in these datasets from their original versions to WordNet 3.0, we extract word sense glosses from WordNet 3.0. Experiments ::: Settings We use the pre-trained uncased BERT$_\mathrm {BASE}$ model for fine-tuning, because we find that BERT$_\mathrm {LARGE}$ model performs slightly worse than BERT$_\mathrm {BASE}$ in this task. The number of Transformer blocks is 12, the number of the hidden layer is 768, the number of self-attention heads is 12, and the total number of parameters of the pre-trained model is 110M. When fine-tuning, we use the development set (SE07) to find the optimal settings for our experiments. We keep the dropout probability at 0.1, set the number of epochs to 4. The initial learning rate is 2e-5, and the batch size is 64. Experiments ::: Results Table TABREF19 shows the performance of our method on the English all-words WSD benchmark datasets. We compare our approach with previous methods. The first block shows the MFS baseline, which selects the most frequent sense in the training corpus for each target word. The second block shows two knowledge-based systems. Lesk$_{ext+emb}$ BIBREF4 is a variant of Lesk algorithm BIBREF2 by calculating the gloss-context overlap of the target word. Babelfy BIBREF6 is a unified graph-based approach which exploits the semantic network structure from BabelNet. The third block shows two word expert traditional supervised systems. IMS BIBREF7 is a flexible framework which trains SVM classifiers and uses local features. And IMS$_{+emb}$ BIBREF9 is the best configuration of the IMS framework, which also integrates word embeddings as features. The fourth block shows several recent neural-based methods. Bi-LSTM BIBREF11 is a baseline for neural models. Bi-LSTM$_{+ att. + LEX + POS}$ BIBREF10 is a multi-task learning framework for WSD, POS tagging, and LEX with self-attention mechanism, which converts WSD to a sequence learning task. GAS$_{ext}$ BIBREF12 is a variant of GAS which is a gloss-augmented variant of the memory network by extending gloss knowledge. CAN$^s$ and HCAN BIBREF13 are sentence-level and hierarchical co-attention neural network models which leverage gloss knowledge. In the last block, we report the performance of our method. BERT(Token-CLS) is our baseline, which does not incorporate gloss information, and it performs slightly worse than previous traditional supervised methods and recent neural-based methods. 
It proves that directly using BERT cannot obtain performance growth. The other three methods outperform other models by a substantial margin, which proves that the improvements come from leveraging BERT to better exploit gloss information. It is worth noting that our method achieves significant improvements in SE07 and Verb over previous methods, which have the highest ambiguity level among all datasets and all POS tags respectively according to BIBREF17. Moreover, GlossBERT(Token-CLS) performs better than GlossBERT(Sent-CLS), which proves that highlighting the target word in the sentence is important. However, the weakly highlighting method GlossBERT(Sent-CLS-WS) performs best in most circumstances, which may result from its combination of the advantages of the other two methods. Experiments ::: Discussion There are two main reasons for the great improvements of our experimental results. First, we construct context-gloss pairs and convert WSD problem to a sentence-pair classification task which is similar to NLI tasks and train only one classifier, which is equivalent to expanding the corpus. Second, we leverage BERT BIBREF15 to better exploit the gloss information. BERT model shows its advantage in dealing with sentence-pair classification tasks by its amazing improvement on QA and NLI tasks. This advantage comes from both of its two novel unsupervised prediction tasks. Compared with traditional word expert supervised methods, our GlossBERT shows its effectiveness to alleviate the effort of feature engineering and does not require training a dedicated classifier for every target lemma. Up to now, it can be said that the neural network method can totally beat the traditional word expert method. Compared with recent neural-based methods, our solution is more intuitive and can make better use of gloss knowledge. Besides, our approach demonstrates that when we fine-tune BERT on a downstream task, converting it into a sentence-pair classification task may be a good choice. Conclusion In this paper, we seek to better leverage gloss knowledge in a supervised neural WSD system. We propose a new solution to WSD by constructing context-gloss pairs and then converting WSD to a sentence-pair classification task. We fine-tune the pre-trained BERT model and achieve new state-of-the-art results on WSD task. Acknowledgments We would like to thank the anonymous reviewers for their valuable comments. The research work is supported by National Natural Science Foundation of China (No. 61751201 and 61672162), Shanghai Municipal Science and Technology Commission (16JC1420401 and 17JC1404100), Shanghai Municipal Science and Technology Major Project (No.2018SHZDZX01) and ZJLab.
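To make the GlossBERT pipeline concrete, the sketch below builds labeled context-gloss pairs for one target and, at test time, picks the sense whose pair receives the highest probability of the label yes. The quotation-mark markup stands in for the weak-supervision signals of Table TABREF1, and the word-overlap scorer is only a placeholder for the fine-tuned BERT sentence-pair classifier.

```python
# Hedged sketch of GlossBERT data construction and inference (not the authors' code).
def make_context_gloss_pairs(context, target, senses, gold_key):
    # senses: list of (sense_key, gloss) pairs for the target lemma from WordNet
    marked_context = context.replace(target, f'"{target}"', 1)   # weak signal in context
    pairs = []
    for key, gloss in senses:
        marked_gloss = f"{target} : {gloss}"                     # weak signal in gloss
        pairs.append((marked_context, marked_gloss, "yes" if key == gold_key else "no"))
    return pairs

def predict_sense(score_yes, context, senses):
    # score_yes(context, gloss) -> probability that the gloss matches the context
    return max(senses, key=lambda s: score_yes(context, s[1]))[0]

def toy_scorer(context, gloss):                                  # word-overlap stand-in
    c, g = set(context.lower().split()), set(gloss.lower().split())
    return len(c & g) / (len(g) or 1)

senses = [("bank%1:17:01::", "sloping land beside a body of water"),
          ("bank%1:14:00::", "a financial institution")]
print(make_context_gloss_pairs("He sat on the bank .", "bank", senses, "bank%1:17:01::"))
print(predict_sense(toy_scorer, "he sat on the bank of the river", senses))
```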
Is SemCor3.0 reflective of English language data in general?
Yes
Introduction End-to-end speech-to-text translation (ST) has attracted much attention recently BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 given its simplicity against cascading automatic speech recognition (ASR) and machine translation (MT) systems. The lack of labeled data, however, has become a major blocker for bridging the performance gaps between end-to-end models and cascading systems. Several corpora have been developed in recent years. post2013improved introduced a 38-hour Spanish-English ST corpus by augmenting the transcripts of the Fisher and Callhome corpora with English translations. di-gangi-etal-2019-must created the largest ST corpus to date from TED talks but the language pairs involved are out of English only. beilharz2019librivoxdeen created a 110-hour German-English ST corpus from LibriVox audiobooks. godard-etal-2018-low created a Moboshi-French ST corpus as part of a rare language documentation effort. woldeyohannis provided an Amharic-English ST corpus in the tourism domain. boito2019mass created a multilingual ST corpus involving 8 languages from a multilingual speech corpus based on Bible readings BIBREF7. Previous work either involves language pairs out of English, very specific domains, very low resource languages or a limited set of language pairs. This limits the scope of study, including the latest explorations on end-to-end multilingual ST BIBREF8, BIBREF9. Our work is mostly similar and concurrent to iranzosnchez2019europarlst who created a multilingual ST corpus from the European Parliament proceedings. The corpus we introduce has larger speech durations and more translation tokens. It is diversified with multiple speakers per transcript/translation. Finally, we provide additional out-of-domain test sets. In this paper, we introduce CoVoST, a multilingual ST corpus based on Common Voice BIBREF10 for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. It includes a total 708 hours of French (Fr), German (De), Dutch (Nl), Russian (Ru), Spanish (Es), Italian (It), Turkish (Tr), Persian (Fa), Swedish (Sv), Mongolian (Mn) and Chinese (Zh) speeches, with French and German ones having the largest durations among existing public corpora. We also collect an additional evaluation corpus from Tatoeba for French, German, Dutch, Russian and Spanish, resulting in a total of 9.3 hours of speech. Both corpora are created at the sentence level and do not require additional alignments or segmentation. Using the official Common Voice train-development-test split, we also provide baseline models, including, to our knowledge, the first end-to-end many-to-one multilingual ST models. CoVoST is released under CC0 license and free to use. The Tatoeba evaluation samples are also available under friendly CC licenses. All the data can be acquired at https://github.com/facebookresearch/covost. Data Collection and Processing ::: Common Voice (CoVo) Common Voice BIBREF10 is a crowdsourcing speech recognition corpus with an open CC0 license. Contributors record voice clips by reading from a bank of donated sentences. Each voice clip was validated by at least two other users. Most of the sentences are covered by multiple speakers, with potentially different genders, age groups or accents. Raw CoVo data contains samples that passed validation as well as those that did not. To build CoVoST, we only use the former one and reuse the official train-development-test partition of the validated data. 
As of January 2020, the latest CoVo 2019-06-12 release includes 29 languages. CoVoST is currently built on that release and covers the following 11 languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian and Chinese. Validated transcripts were sent to professional translators. Note that the translators had access to the transcripts but not the corresponding voice clips since clips would not carry additional information. Since transcripts were duplicated due to multiple speakers, we deduplicated the transcripts before sending them to translators. As a result, different voice clips of the same content (transcript) will have identical translations in CoVoST for train, development and test splits. In order to control the quality of the professional translations, we applied various sanity checks to the translations BIBREF11. 1) For German-English, French-English and Russian-English translations, we computed sentence-level BLEU BIBREF12 with the NLTK BIBREF13 implementation between the human translations and the automatic translations produced by a state-of-the-art system BIBREF14 (the French-English system was a Transformer big BIBREF15 separately trained on WMT14). We applied this method to these three language pairs only as we are confident about the quality of the corresponding systems. Translations with a score that was too low were manually inspected and sent back to the translators when needed. 2) We manually inspected examples where the source transcript was identical to the translation. 3) We measured the perplexity of the translations using a language model trained on a large amount of clean monolingual data BIBREF14. We manually inspected examples where the translation had a high perplexity and sent them back to translators accordingly. 4) We computed the ratio of English characters in the translations. We manually inspected examples with a low ratio and sent them back to translators accordingly. 5) Finally, we used VizSeq BIBREF16 to calculate similarity scores between transcripts and translations based on LASER cross-lingual sentence embeddings BIBREF17. Samples with low scores were manually inspected and sent back for translation when needed. We also sanity check the overlaps of train, development and test sets in terms of transcripts and voice clips (via MD5 file hashing), and confirm they are totally disjoint. Data Collection and Processing ::: Tatoeba (TT) Tatoeba (TT) is a community built language learning corpus having sentences aligned across multiple languages with the corresponding speech partially available. Its sentences are on average shorter than those in CoVoST (see also Table TABREF2) given the original purpose of language learning. Sentences in TT are licensed under CC BY 2.0 FR and part of the speeches are available under various CC licenses. We construct an evaluation set from TT (for French, German, Dutch, Russian and Spanish) as a complement to CoVoST development and test sets. We collect (speech, transcript, English translation) triplets for the 5 languages and do not include those whose speech has a broken URL or is not CC licensed. We further filter these samples by sentence lengths (minimum 4 words including punctuations) to reduce the portion of short sentences. This makes the resulting evaluation set closer to real-world scenarios and more challenging. We run the same quality checks for TT as for CoVoST but we do not find poor quality translations according to our criteria. 
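Two of the sanity checks above can be sketched as follows, assuming NLTK for sentence-level BLEU; the smoothing choice and the "send back for review" threshold are illustrative assumptions.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def sentence_bleu_check(human_translation, machine_translation):
    # compare the professional translation against a reference MT output
    smooth = SmoothingFunction().method1
    return sentence_bleu([machine_translation.split()],
                         human_translation.split(),
                         smoothing_function=smooth)

def english_char_ratio(text):
    letters = [c for c in text if c.isalpha()]
    return sum(c.isascii() for c in letters) / len(letters) if letters else 0.0

# translations scoring under a (hypothetical) threshold would be flagged for manual review
print(sentence_bleu_check("the cat sits on the mat", "the cat is on the mat"))
print(english_char_ratio("Это хорошо"))  # Cyrillic letters are non-ASCII, ratio = 0.0
```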
Finally, we report the overlap between CoVo transcripts and TT sentences in Table TABREF5. We found a minimal overlap, which makes the TT evaluation set a suitable additional test set when training on CoVoST. Data Analysis ::: Basic Statistics Basic statistics for CoVoST and TT are listed in Table TABREF2 including (unique) sentence counts, speech durations, speaker demographics (partially available) as well as vocabulary and token statistics (based on Moses-tokenized sentences by sacreMoses) on both transcripts and translations. We see that CoVoST has over 327 hours of German speeches and over 171 hours of French speeches, which, to our knowledge, corresponds to the largest corpus among existing public ST corpora (the second largest is 110 hours BIBREF18 for German and 38 hours BIBREF19 for French). Moreover, CoVoST has a total of 18 hours of Dutch speeches, to our knowledge, contributing the first public Dutch ST resource. CoVoST also has around 27-hour Russian speeches, 37-hour Italian speeches and 67-hour Persian speeches, which is 1.8 times, 2.5 times and 13.3 times of the previous largest public one BIBREF7. Most of the sentences (transcripts) in CoVoST are covered by multiple speakers with potentially different accents, resulting in a rich diversity in the speeches. For example, there are over 1,000 speakers and over 10 accents in the French and German development / test sets. This enables good coverage of speech variations in both model training and evaluation. Data Analysis ::: Speaker Diversity As we can see from Table TABREF2, CoVoST is diversified with a rich set of speakers and accents. We further inspect the speaker demographics in terms of sample distributions with respect to speaker counts, accent counts and age groups, which is shown in Figure FIGREF6, FIGREF7 and FIGREF8. We observe that for 8 of the 11 languages, at least 60% of the sentences (transcripts) are covered by multiple speakers. Over 80% of the French sentences have at least 3 speakers. And for German sentences, even over 90% of them have at least 5 speakers. Similarly, we see that a large portion of sentences are spoken in multiple accents for French, German, Dutch and Spanish. Speakers of each language also spread widely across different age groups (below 20, 20s, 30s, 40s, 50s, 60s and 70s). Baseline Results We provide baselines using the official train-development-test split on the following tasks: automatic speech recognition (ASR), machine translation (MT) and speech translation (ST). Baseline Results ::: Experimental Settings ::: Data Preprocessing We convert raw MP3 audio files from CoVo and TT into mono-channel waveforms, and downsample them to 16,000 Hz. For transcripts and translations, we normalize the punctuation, we tokenize the text with sacreMoses and lowercase it. For transcripts, we further remove all punctuation markers except for apostrophes. We use character vocabularies on all the tasks, with 100% coverage of all the characters. Preliminary experimentation showed that character vocabularies provided more stable training than BPE. For MT, the vocabulary is created jointly on both transcripts and translations. We extract 80-channel log-mel filterbank features, computed with a 25ms window size and 10ms window shift using torchaudio. The features are normalized to 0 mean and 1.0 standard deviation. We remove samples having more than 3,000 frames or more than 256 characters for GPU memory efficiency (less than 25 samples are removed for all languages). 
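A sketch of the feature pipeline, assuming torchaudio's MelSpectrogram transform: at 16 kHz the 25 ms window and 10 ms shift correspond to 400 and 160 samples, and the log offset and utterance-level normalization shown here are illustrative choices rather than the exact recipe used in the paper.

```python
import torch
import torchaudio

# 80-channel mel filterbank, 25 ms window / 10 ms shift at 16 kHz
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=16_000, n_fft=400, win_length=400, hop_length=160, n_mels=80)

def logmel_features(waveform: torch.Tensor) -> torch.Tensor:
    feats = torch.log(mel(waveform) + 1e-6)          # (channels, 80, frames)
    feats = (feats - feats.mean()) / (feats.std() + 1e-6)   # zero mean, unit variance
    return feats.squeeze(0).transpose(0, 1)          # (frames, 80)

print(logmel_features(torch.randn(1, 16_000)).shape)  # ~100 frames of 80-dim features
```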
Baseline Results ::: Experimental Settings ::: Model Training Our ASR and ST models follow the architecture in berard2018end, but have 3 decoder layers like that in pino2019harnessing. For MT, we use a Transformer base architecture BIBREF15, but with 3 encoder layers, 3 decoder layers and 0.3 dropout. We use a batch size of 10,000 frames for ASR and ST, and a batch size of 4,000 tokens for MT. We train all models using Fairseq BIBREF20 for up to 200,000 updates. We use SpecAugment BIBREF21 for ASR and ST to alleviate overfitting. Baseline Results ::: Experimental Settings ::: Inference and Evaluation We use a beam size of 5 for all models. We use the best checkpoint by validation loss for MT, and average the last 5 checkpoints for ASR and ST. For MT and ST, we report case-insensitive tokenized BLEU BIBREF22 using sacreBLEU BIBREF23. For ASR, we report word error rate (WER) and character error rate (CER) using VizSeq. Baseline Results ::: Automatic Speech Recognition (ASR) For simplicity, we use the same model architecture for ASR and ST, although we do not leverage ASR models to pretrain ST model encoders later. Table TABREF18 shows the word error rate (WER) and character error rate (CER) for ASR models. We see that French and German perform the best given they are the two highest resource languages in CoVoST. The other languages are relatively low resource (especially Turkish and Swedish) and the ASR models are having difficulties to learn from this data. Baseline Results ::: Machine Translation (MT) MT models take transcripts (without punctuation) as inputs and outputs translations (with punctuation). For simplicity, we do not change the text preprocessing methods for MT to correct this mismatch. Moreover, this mismatch also exists in cascading ST systems, where MT model inputs are the outputs of an ASR model. Table TABREF20 shows the BLEU scores of MT models. We notice that the results are consistent with what we see from ASR models. For example thanks to abundant training data, French has a decent BLEU score of 29.8/25.4. German doesn't perform well, because of less richness of content (transcripts). The other languages are low resource in CoVoST and it is difficult to train decent models without additional data or pre-training techniques. Baseline Results ::: Speech Translation (ST) CoVoST is a many-to-one multilingual ST corpus. While end-to-end one-to-many and many-to-many multilingual ST models have been explored very recently BIBREF8, BIBREF9, many-to-one multilingual models, to our knowledge, have not. We hence use CoVoST to examine this setting. Table TABREF22 and TABREF23 show the BLEU scores for both bilingual and multilingual end-to-end ST models trained on CoVoST. We observe that combining speeches from multiple languages is consistently bringing gains to low-resource languages (all besides French and German). This includes combinations of distant languages, such as Ru+Fr, Tr+Fr and Zh+Fr. Moreover, some combinations do bring gains to high-resource language (French) as well: Es+Fr, Tr+Fr and Mn+Fr. We simply provide the most basic many-to-one multilingual baselines here, and leave the full exploration of the best configurations to future work. Finally, we note that for some language pairs, absolute BLEU numbers are relatively low as we restrict model training to the supervised data. We encourage the community to improve upon those baselines, for example by leveraging semi-supervised training. 
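Checkpoint averaging for ASR/ST inference can be sketched as below; Fairseq provides its own averaging script, and the assumption that weights live under a "model" key mirrors its checkpoint format.

```python
import torch

def average_checkpoints(paths):
    # assumes each checkpoint stores parameters under a "model" key (Fairseq-style)
    avg = None
    for p in paths:
        state = torch.load(p, map_location="cpu")["model"]
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].float()
    return {k: v / len(paths) for k, v in avg.items()}

# e.g. average the last 5 checkpoints (hypothetical file names):
# averaged = average_checkpoints([f"checkpoint{i}.pt" for i in range(196, 201)])
```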
Baseline Results ::: Multi-Speaker Evaluation In CoVoST, a large portion of transcripts are covered by multiple speakers with different genders, accents and age groups. Besides the standard corpus-level BLEU scores, we also want to evaluate model output variance on the same content (transcript) but with different speakers. We hence propose to group samples (and their sentence BLEU scores) by transcript, and then calculate the average per-group mean and the average coefficient of variation, defined as $\textrm {BLEU}_{MS} = \frac{1}{|G|}\sum _{g \in G} \textrm {Mean}(g)$ and $\textrm {CoefVar}_{MS} = \frac{1}{|G^{\prime }|}\sum _{g \in G^{\prime }} \frac{\textrm {Std}(g)}{\textrm {Mean}(g)}$, where $G$ is the set of sentence BLEU scores grouped by transcript and $G^{\prime } = \lbrace g \mid g\in G, |g|>1, \textrm {Mean}(g) > 0 \rbrace $. $\textrm {BLEU}_{MS}$ provides a normalized quality score as opposed to corpus-level BLEU or an unnormalized average of sentence BLEU, and $\textrm {CoefVar}_{MS}$ is a standardized measure of model stability against different speakers (the lower the better). Table TABREF24 shows the $\textrm {BLEU}_{MS}$ and $\textrm {CoefVar}_{MS}$ of our ST models on the CoVoST test set. We see that German and Persian have the worst $\textrm {CoefVar}_{MS}$ (least stable), given their rich speaker diversity in the test set and relatively small training sets (see also Figure FIGREF6 and Table TABREF2). Dutch also has a poor $\textrm {CoefVar}_{MS}$ because of the lack of training data. Multilingual models are consistently more stable on low-resource languages. Ru+Fr, Tr+Fr, Fa+Fr and Zh+Fr even have better $\textrm {CoefVar}_{MS}$ than all individual languages. Conclusion We introduce a multilingual speech-to-text translation corpus, CoVoST, for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. We also provide baseline results, including, to our knowledge, the first end-to-end many-to-one multilingual model for spoken language translation. CoVoST is free to use with a CC0 license, and the additional Tatoeba evaluation samples are also CC-licensed.
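The two multi-speaker metrics can be computed with a few lines of Python; the toy scores below are made up and only illustrate the grouping.

```python
from collections import defaultdict
from statistics import mean, pstdev

def multi_speaker_metrics(records):
    # records: iterable of (transcript, sentence_bleu) pairs
    groups = defaultdict(list)
    for transcript, bleu in records:
        groups[transcript].append(bleu)
    g_all = list(groups.values())
    g_var = [g for g in g_all if len(g) > 1 and mean(g) > 0]   # G'
    bleu_ms = mean([mean(g) for g in g_all])
    coefvar_ms = mean([pstdev(g) / mean(g) for g in g_var]) if g_var else 0.0
    return bleu_ms, coefvar_ms

# toy scores for two transcripts
print(multi_speaker_metrics([("hello", 20.0), ("hello", 30.0), ("bye", 25.0)]))
# (25.0, 0.2)
```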
How big is the Augmented LibriSpeech dataset?
Unanswerable
Introduction Automatic classification of sentiment has mainly focused on categorizing tweets in either two (binary sentiment analysis) or three (ternary sentiment analysis) categories BIBREF0 . In this work we study the problem of fine-grained sentiment classification where tweets are classified according to a five-point scale ranging from VeryNegative to VeryPositive. To illustrate this, Table TABREF3 presents examples of tweets associated with each of these categories. Five-point scales are widely adopted in review sites like Amazon and TripAdvisor, where a user's sentiment is ordered with respect to its intensity. From a sentiment analysis perspective, this defines a classification problem with five categories. In particular, Sebastiani et al. BIBREF1 defined such classification problems whose categories are explicitly ordered to be ordinal classification problems. To account for the ordering of the categories, learners are penalized according to how far from the true class their predictions are. Although considering different scales, the various settings of sentiment classification are related. First, one may use the same feature extraction and engineering approaches to represent the text spans such as word membership in lexicons, morpho-syntactic statistics like punctuation or elongated word counts BIBREF2 , BIBREF3 . Second, one would expect that knowledge from one task can be transfered to the others and this would benefit the performance. Knowing that a tweet is “Positive” in the ternary setting narrows the classification decision between the VeryPositive and Positive categories in the fine-grained setting. From a research perspective this raises the question of whether and how one may benefit when tackling such related tasks and how one can transfer knowledge from one task to another during the training phase. Our focus in this work is to exploit the relation between the sentiment classification settings and demonstrate the benefits stemming from combining them. To this end, we propose to formulate the different classification problems as a multitask learning problem and jointly learn them. Multitask learning BIBREF4 has shown great potential in various domains and its benefits have been empirically validated BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 using different types of data and learning approaches. An important benefit of multitask learning is that it provides an elegant way to access resources developed for similar tasks. By jointly learning correlated tasks, the amount of usable data increases. For instance, while for ternary classification one can label data using distant supervision with emoticons BIBREF9 , there is no straightforward way to do so for the fine-grained problem. However, the latter can benefit indirectly, if the ternary and fine-grained tasks are learned jointly. The research question that the paper attempts to answer is the following: Can twitter sentiment classification problems, and fine-grained sentiment classification in particular, benefit from multitask learning? To answer the question, the paper brings the following two main contributions: (i) we show how jointly learning the ternary and fine-grained sentiment classification problems in a multitask setting improves the state-of-the-art performance, and (ii) we demonstrate that recurrent neural networks outperform models previously proposed without access to huge corpora while being flexible to incorporate different sources of data. 
Multitask Learning for Twitter Sentiment Classification In his work, Caruana BIBREF4 proposed a multitask approach in which a learner takes advantage of the multiplicity of interdependent tasks while jointly learning them. The intuition is that if the tasks are correlated, the learner can learn a model jointly for them while taking into account the shared information which is expected to improve its generalization ability. People express their opinions online on various subjects (events, products..), on several languages and in several styles (tweets, paragraph-sized reviews..), and it is exactly this variety that motivates the multitask approaches. Specifically for Twitter for instance, the different settings of classification like binary, ternary and fine-grained are correlated since their difference lies in the sentiment granularity of the classes which increases while moving from binary to fine-grained problems. There are two main decisions to be made in our approach: the learning algorithm, which learns a decision function, and the data representation. With respect to the former, neural networks are particularly suitable as one can design architectures with different properties and arbitrary complexity. Also, as training neural network usually relies on back-propagation of errors, one can have shared parts of the network trained by estimating errors on the joint tasks and others specialized for particular tasks. Concerning the data representation, it strongly depends on the data type available. For the task of sentiment classification of tweets with neural networks, distributed embeddings of words have shown great potential. Embeddings are defined as low-dimensional, dense representations of words that can be obtained in an unsupervised fashion by training on large quantities of text BIBREF10 . Concerning the neural network architecture, we focus on Recurrent Neural Networks (RNNs) that are capable of modeling short-range and long-range dependencies like those exhibited in sequence data of arbitrary length like text. While in the traditional information retrieval paradigm such dependencies are captured using INLINEFORM0 -grams and skip-grams, RNNs learn to capture them automatically BIBREF11 . To circumvent the problems with capturing long-range dependencies and preventing gradients from vanishing, the long short-term memory network (LSTM) was proposed BIBREF12 . In this work, we use an extended version of LSTM called bidirectional LSTM (biLSTM). While standard LSTMs access information only from the past (previous words), biLSTMs capture both past and future information effectively BIBREF13 , BIBREF11 . They consist of two LSTM networks, for propagating text forward and backwards with the goal being to capture the dependencies better. Indeed, previous work on multitask learning showed the effectiveness of biLSTMs in a variety of problems: BIBREF14 tackled sequence prediction, while BIBREF6 and BIBREF15 used biLSTMs for Named Entity Recognition and dependency parsing respectively. Figure FIGREF2 presents the architecture we use for multitask learning. In the top-left of the figure a biLSTM network (enclosed by the dashed line) is fed with embeddings INLINEFORM0 that correspond to the INLINEFORM1 words of a tokenized tweet. Notice, as discussed above, the biLSTM consists of two LSTMs that are fed with the word sequence forward and backwards. On top of the biLSTM network one (or more) hidden layers INLINEFORM2 transform its output. 
The output of INLINEFORM3 is fed to the softmax layers for the prediction step. There are INLINEFORM4 softmax layers and each is used for one of the INLINEFORM5 tasks of the multitask setting. In tasks such as sentiment classification, additional features like membership of words in sentiment lexicons or counts of elongated/capitalized words can be used to enrich the representation of tweets before the classification step BIBREF3 . The lower part of the network illustrates how such sources of information can be incorporated into the process. A vector “Additional Features” for each tweet is transformed by the hidden layer(s) INLINEFORM6 and then combined by concatenation with the transformed biLSTM output in the INLINEFORM7 layer. Experimental setup Our goal is to demonstrate how multitask learning can be successfully applied to the task of sentiment classification of tweets. Tweets are particular in that they are short and informal text spans. The common use of abbreviations, creative language etc. makes the sentiment classification problem challenging. To validate our hypothesis that learning the tasks jointly can benefit the performance, we propose an experimental setting where there are data from two different Twitter sentiment classification problems: a fine-grained and a ternary one. We consider the fine-grained task to be our primary task as it is more challenging and obtaining bigger datasets, e.g. by distant supervision, is not straightforward; hence we report the performance achieved for this task. Ternary and fine-grained sentiment classification were part of the SemEval-2016 “Sentiment Analysis in Twitter” task BIBREF16 . We use the high-quality datasets the challenge organizers released. The dataset for fine-grained classification is split into training, development, development_test and test parts. In the rest, we refer to these splits as train, development and test, where train is composed of the training and the development instances. Table TABREF7 presents an overview of the data. As discussed in BIBREF16 and illustrated in the Table, the fine-grained dataset is highly unbalanced and skewed towards the positive sentiment: only INLINEFORM0 of the training examples are labeled with one of the negative classes. Feature representation We report results using two different feature sets. The first one, dubbed nbow, is a neural bag-of-words that uses text embeddings to generate low-dimensional, dense representations of the tweets. To construct the nbow representation, given the word embeddings dictionary where each word is associated with a vector, we apply the average compositional function that averages the embeddings of the words that compose a tweet. Simple compositional functions like the average were shown to be robust and efficient in previous work BIBREF17 . Instead of training embeddings from scratch, we use the GloVe embeddings of BIBREF10 , which are pre-trained on tweets. In terms of resources required, using only nbow is efficient as it does not require any domain knowledge. However, previous research on sentiment analysis showed that using extra resources, like sentiment lexicons, can significantly benefit the performance BIBREF3 , BIBREF2 . To validate this and examine to what extent neural networks and multitask learning benefit from such features, we evaluate the models using an augmented version of nbow, dubbed nbow+. 
The feature space of the latter, is augmented using 1,368 extra features consisting mostly of counts of punctuation symbols ('!?#@'), emoticons, elongated words and word membership features in several sentiment lexicons. Due to space limitations, for a complete presentation of these features, we refer the interested reader to BIBREF2 , whose open implementation we used to extract them. Evaluation measure To reproduce the setting of the SemEval challenges BIBREF16 , we optimize our systems using as primary measure the macro-averaged Mean Absolute Error ( INLINEFORM0 ) given by: INLINEFORM1 where INLINEFORM0 is the number of categories, INLINEFORM1 is the set of instances whose true class is INLINEFORM2 , INLINEFORM3 is the true label of the instance INLINEFORM4 and INLINEFORM5 the predicted label. The measure penalizes decisions far from the true ones and is macro-averaged to account for the fact that the data are unbalanced. Complementary to INLINEFORM6 , we report the performance achieved on the micro-averaged INLINEFORM7 measure, which is a commonly used measure for classification. The models To evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with linear kernel and an one-vs-rest approach for the multi-class problem. Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 that uses an one-vs-rest approach and multinomial Logistic Regression also known as the MaxEnt classifier that uses a multinomial criterion. Both SVMs and LRs as discussed above treat the problem as a multi-class one, without considering the ordering of the classes. For these four models, we tuned the hyper-parameter INLINEFORM0 that controls the importance of the L INLINEFORM1 regularization part in the optimization problem with grid-search over INLINEFORM2 using 10-fold cross-validation in the union of the training and development data and then retrained the models with the selected values. Also, to account for the unbalanced classification problem we used class weights to penalize more the errors made on the rare classes. These weights were inversely proportional to the frequency of each class. For the four models we used the implementations of Scikit-learn BIBREF19 . For multitask learning we use the architecture shown in Figure FIGREF2 , which we implemented with Keras BIBREF20 . The embeddings are initialized with the 50-dimensional GloVe embeddings while the output of the biLSTM network is set to dimension 50. The activation function of the hidden layers is the hyperbolic tangent. The weights of the layers were initialized from a uniform distribution, scaled as described in BIBREF21 . We used the Root Mean Square Propagation optimization method. We used dropout for regularizing the network. We trained the network using batches of 128 examples as follows: before selecting the batch, we perform a Bernoulli trial with probability INLINEFORM0 to select the task to train for. 
With probability INLINEFORM1 we pick a batch for the fine-grained sentiment classification problem, while with probability INLINEFORM2 we pick a batch for the ternary problem. As shown in Figure FIGREF2 , the error is backpropagated down to the embeddings, which we fine-tune during the learning process. Notice also that the weights of the network up to the layer INLINEFORM3 are shared and therefore affected by both tasks. To tune the neural network hyper-parameters, we used 5-fold cross validation. We tuned the probability INLINEFORM0 of dropout after the hidden layers INLINEFORM1 and for the biLSTM for INLINEFORM2 , the size of the hidden layer INLINEFORM3 and the probability INLINEFORM4 of the Bernoulli trials from INLINEFORM5 . During training, we monitor the network's performance on the development set and apply early stopping if the performance on the validation set does not improve for 5 consecutive epochs. Experimental results Table TABREF9 illustrates the performance of the models for the different data representations. The upper part of the Table summarizes the performance of the baselines. The entry “Balikas et al.” stands for the winning system of the 2016 edition of the challenge BIBREF2 , which to the best of our knowledge holds the state-of-the-art. Due to the stochasticity of training the biLSTM models, we repeat the experiment 10 times and report the average and the standard deviation of the performance achieved. Several observations can be made from the table. First notice that, overall, the best performance is achieved by the neural network architecture that uses multitask learning. This entails that the system makes use of the available resources efficiently and improves the state-of-the-art performance. In conjunction with the fact that we found the optimal probability INLINEFORM0 , this highlights the benefits of multitask learning over single task learning. Furthermore, as described above, the neural network-based models only have access to the training data, as the development data are held out for early stopping. On the other hand, the baseline systems were retrained on the union of the train and development sets. Hence, even with fewer resources available for training on the fine-grained problem, the neural networks outperform the baselines. We also highlight the positive effect of the additional features that previous research proposed. Adding the features both to the baselines and to the biLSTM-based architectures improves the INLINEFORM1 scores by several points. Lastly, we compare the performance of the baseline systems with the performance of the state-of-the-art system of BIBREF2 . While BIBREF2 uses n-grams (and character-grams) with INLINEFORM0 , the baseline systems (SVMs, LRs) used in this work use the nbow+ representation, which relies on unigrams. Although they perform on par, the competitive performance of nbow highlights the potential of distributed representations for short-text classification. Further, incorporating structure and distributed representations leads to the gains of the biLSTM network, in both the multitask and the single task setting. Similar observations can be drawn from Figure FIGREF10 , which presents the INLINEFORM0 scores. Again, the biLSTM network with multitask learning achieves the best performance. It is also to be noted that although the two evaluation measures are correlated in the sense that the ranking of the models is the same, small differences in the INLINEFORM1 have a large effect on the scores of the INLINEFORM2 measure. 
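To make the training procedure described above concrete, the following sketch reproduces the shared-parameter idea and the Bernoulli task sampling, together with the macro-averaged MAE measure computed from its verbal definition. The original system was built with Keras; this PyTorch version is purely illustrative, and the mean-pooling readout, the toy random batches and names such as MultitaskBiLSTM and p_fine are our own assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class MultitaskBiLSTM(nn.Module):
    """Shared embeddings + biLSTM + hidden layer, with one softmax head per task."""
    def __init__(self, vocab_size, emb_dim=50, hidden_dim=50, n_classes=(5, 3)):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.shared = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim), nn.Tanh(), nn.Dropout(0.3))
        # one output layer per task (fine-grained: 5 classes, ternary: 3 classes)
        self.heads = nn.ModuleList([nn.Linear(hidden_dim, c) for c in n_classes])

    def forward(self, tokens, task):
        h, _ = self.bilstm(self.emb(tokens))   # (batch, seq, 2*hidden)
        h = self.shared(h.mean(dim=1))         # simple pooling; the paper's exact readout may differ
        return self.heads[task](h)             # logits for the selected task

def macro_mae(y_true, y_pred, n_classes=5):
    """Macro-averaged MAE: mean absolute error computed per true class, then averaged."""
    errors = []
    for c in range(n_classes):
        idx = [i for i, y in enumerate(y_true) if y == c]
        if idx:
            errors.append(sum(abs(y_pred[i] - y_true[i]) for i in idx) / len(idx))
    return sum(errors) / len(errors)

# Bernoulli task sampling: with probability p_fine train on the fine-grained task,
# otherwise on the ternary task; the shared layers receive gradients from both.
model = MultitaskBiLSTM(vocab_size=10000)
optimizer = torch.optim.RMSprop(model.parameters())
loss_fn = nn.CrossEntropyLoss()
batches = {0: (torch.randint(0, 10000, (128, 20)), torch.randint(0, 5, (128,))),
           1: (torch.randint(0, 10000, (128, 20)), torch.randint(0, 3, (128,)))}
p_fine = 0.5
for step in range(100):
    task = 0 if torch.rand(1).item() < p_fine else 1
    tokens, labels = batches[task]
    optimizer.zero_grad()
    loss = loss_fn(model(tokens, task), labels)
    loss.backward()
    optimizer.step()
```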
Conclusion In this paper, we showed that by jointly learning the tasks of ternary and fine-grained classification with a multitask learning model, one can greatly improve the performance on the second. This opens several avenues for future research. Since sentiment is expressed in different textual types like tweets and paragraph-sized reviews, in different languages (English, German, ..) and in different granularity levels (binary, ternary,..) one can imagine multitask approaches that could benefit from combining such resources. Also, while we opted for biLSTM networks here, one could use convolutional neural networks or even try to combine different types of networks and tasks to investigate the performance effect of multitask learning. Lastly, while our approach mainly relied on the foundations of BIBREF4 , the internal mechanisms and the theoretical guarantees of multitask learning remain to be better understood. Acknowledgements This work is partially supported by the CIFRE N 28/2015.
What dataset did they use?
high-quality datasets from SemEval-2016 “Sentiment Analysis in Twitter” task
Introduction Word Sense Disambiguation (WSD) is a fundamental task and long-standing challenge in Natural Language Processing (NLP), which aims to find the exact sense of an ambiguous word in a particular context BIBREF0. Previous WSD approaches can be grouped into two main categories: knowledge-based and supervised methods. Knowledge-based WSD methods rely on lexical resources like WordNet BIBREF1 and usually exploit two kinds of lexical knowledge. The gloss, which defines a word sense meaning, is first utilized in Lesk algorithm BIBREF2 and then widely taken into account in many other approaches BIBREF3, BIBREF4. Besides, structural properties of semantic graphs are mainly used in graph-based algorithms BIBREF5, BIBREF6. Traditional supervised WSD methods BIBREF7, BIBREF8, BIBREF9 focus on extracting manually designed features and then train a dedicated classifier (word expert) for every target lemma. Although word expert supervised WSD methods perform better, they are less flexible than knowledge-based methods in the all-words WSD task BIBREF10. Recent neural-based methods are devoted to dealing with this problem. BIBREF11 present a supervised classifier based on Bi-LSTM, which shares parameters among all word types except the last layer. BIBREF10 convert WSD task to a sequence labeling task, thus building a unified model for all polysemous words. However, neither of them can totally beat the best word expert supervised methods. More recently, BIBREF12 propose to leverage the gloss information from WordNet and model the semantic relationship between the context and gloss in an improved memory network. Similarly, BIBREF13 introduce a (hierarchical) co-attention mechanism to generate co-dependent representations for the context and gloss. Their attempts prove that incorporating gloss knowledge into supervised WSD approach is helpful, but they still have not achieved much improvement, because they may not make full use of gloss knowledge. In this paper, we focus on how to better leverage gloss information in a supervised neural WSD system. Recently, the pre-trained language models, such as ELMo BIBREF14 and BERT BIBREF15, have shown their effectiveness to alleviate the effort of feature engineering. Especially, BERT has achieved excellent results in question answering (QA) and natural language inference (NLI). We construct context-gloss pairs from glosses of all possible senses (in WordNet) of the target word, thus treating WSD task as a sentence-pair classification problem. We fine-tune the pre-trained BERT model and achieve new state-of-the-art results on WSD task. In particular, our contribution is two-fold: 1. We construct context-gloss pairs and propose three BERT-based models for WSD. 2. We fine-tune the pre-trained BERT model, and the experimental results on several English all-words WSD benchmark datasets show that our approach significantly outperforms the state-of-the-art systems. Methodology In this section, we describe our method in detail. Methodology ::: Task Definition In WSD, a sentence $s$ usually consists of a series of words: $\lbrace w_1,\cdots ,w_m\rbrace $, and some of the words $\lbrace w_{i_1},\cdots ,w_{i_k}\rbrace $ are targets $\lbrace t_1,\cdots ,t_k\rbrace $ need to be disambiguated. For each target $t$, its candidate senses $\lbrace c_1,\cdots ,c_n\rbrace $ come from entries of its lemma in a pre-defined sense inventory (usually WordNet). Therefore, WSD task aims to find the most suitable entry (symbolized as unique sense key) for each target in a sentence. 
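As a small illustration of this setup, the candidate senses and glosses of a target lemma can be enumerated from WordNet with NLTK. This sketch is ours rather than part of the paper, ignores POS filtering for brevity, and assumes the WordNet data has already been downloaded.

```python
from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet")

def candidate_senses(lemma):
    """Return (sense_key, gloss) pairs for every WordNet sense of the given lemma."""
    candidates = []
    for synset in wn.synsets(lemma):
        for lem in synset.lemmas():
            if lem.name().lower() == lemma.lower():
                candidates.append((lem.key(), synset.definition()))
    return candidates

# e.g. candidate_senses("research") lists the inventory entries among which WSD must choose
for key, gloss in candidate_senses("research"):
    print(key, "->", gloss)
```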
See a sentence example in Table TABREF1. Methodology ::: BERT BERT BIBREF15 is a new language representation model, and its architecture is a multi-layer bidirectional Transformer encoder. BERT model is pre-trained on a large corpus and two novel unsupervised prediction tasks, i.e., masked language model and next sentence prediction tasks are used in pre-training. When incorporating BERT into downstream tasks, the fine-tuning procedure is recommended. We fine-tune the pre-trained BERT model on WSD task. Methodology ::: BERT ::: BERT(Token-CLS) Since every target in a sentence needs to be disambiguated to find its exact sense, WSD task can be regarded as a token-level classification task. To incorporate BERT to WSD task, we take the final hidden state of the token corresponding to the target word (if more than one token, we average them) and add a classification layer for every target lemma, which is the same as the last layer of the Bi-LSTM model BIBREF11. Methodology ::: GlossBERT BERT can explicitly model the relationship of a pair of texts, which has shown to be beneficial to many pair-wise natural language understanding tasks. In order to fully leverage gloss information, we propose GlossBERT to construct context-gloss pairs from all possible senses of the target word in WordNet, thus treating WSD task as a sentence-pair classification problem. We describe our construction method with an example (See Table TABREF1). There are four targets in this sentence, and here we take target word research as an example: Methodology ::: GlossBERT ::: Context-Gloss Pairs The sentence containing target words is denoted as context sentence. For each target word, we extract glosses of all $N$ possible senses (here $N=4$) of the target word (research) in WordNet to obtain the gloss sentence. [CLS] and [SEP] marks are added to the context-gloss pairs to make it suitable for the input of BERT model. A similar idea is also used in aspect-based sentiment analysis BIBREF16. Methodology ::: GlossBERT ::: Context-Gloss Pairs with Weak Supervision Based on the previous construction method, we add weak supervised signals to the context-gloss pairs (see the highlighted part in Table TABREF1). The signal in the gloss sentence aims to point out the target word, and the signal in the context sentence aims to emphasize the target word considering the situation that a target word may occur more than one time in the same sentence. Therefore, each target word has $N$ context-gloss pair training instances ($label\in \lbrace yes, no\rbrace $). When testing, we output the probability of $label=yes$ of each context-gloss pair and choose the sense corresponding to the highest probability as the prediction label of the target word. We experiment with three GlossBERT models: Methodology ::: GlossBERT ::: GlossBERT(Token-CLS) We use context-gloss pairs as input. We highlight the target word by taking the final hidden state of the token corresponding to the target word (if more than one token, we average them) and add a classification layer ($label\in \lbrace yes, no\rbrace $). Methodology ::: GlossBERT ::: GlossBERT(Sent-CLS) We use context-gloss pairs as input. We take the final hidden state of the first token [CLS] as the representation of the whole sequence and add a classification layer ($label\in \lbrace yes, no\rbrace $), which does not highlight the target word. Methodology ::: GlossBERT ::: GlossBERT(Sent-CLS-WS) We use context-gloss pairs with weak supervision as input. 
We take the final hidden state of the first token [CLS] and add a classification layer ($label\in \lbrace yes, no\rbrace $), which weakly highlights the target word through the weak supervision. Experiments ::: Datasets The statistics of the WSD datasets are shown in Table TABREF12. Experiments ::: Datasets ::: Training Dataset Following previous work BIBREF13, BIBREF12, BIBREF10, BIBREF17, BIBREF9, BIBREF7, we choose SemCor3.0 as the training corpus, which is the largest corpus manually annotated with WordNet senses for WSD. Experiments ::: Datasets ::: Evaluation Datasets We evaluate our method on several English all-words WSD datasets. For a fair comparison, we use the benchmark datasets proposed by BIBREF17, which include five standard all-words fine-grained WSD datasets from the Senseval and SemEval competitions: Senseval-2 (SE2), Senseval-3 (SE3), SemEval-2007 (SE07), SemEval-2013 (SE13) and SemEval-2015 (SE15). Following BIBREF13, BIBREF12 and BIBREF10, we choose SE07, the smallest among these test sets, as the development set. Experiments ::: Datasets ::: WordNet Since BIBREF17 map all the sense annotations in these datasets from their original versions to WordNet 3.0, we extract word sense glosses from WordNet 3.0. Experiments ::: Settings We use the pre-trained uncased BERT$_\mathrm {BASE}$ model for fine-tuning, because we find that the BERT$_\mathrm {LARGE}$ model performs slightly worse than BERT$_\mathrm {BASE}$ in this task. The number of Transformer blocks is 12, the hidden size is 768, the number of self-attention heads is 12, and the total number of parameters of the pre-trained model is 110M. When fine-tuning, we use the development set (SE07) to find the optimal settings for our experiments. We keep the dropout probability at 0.1 and set the number of epochs to 4. The initial learning rate is 2e-5, and the batch size is 64. Experiments ::: Results Table TABREF19 shows the performance of our method on the English all-words WSD benchmark datasets. We compare our approach with previous methods. The first block shows the MFS baseline, which selects the most frequent sense in the training corpus for each target word. The second block shows two knowledge-based systems. Lesk$_{ext+emb}$ BIBREF4 is a variant of the Lesk algorithm BIBREF2 that calculates the gloss-context overlap of the target word. Babelfy BIBREF6 is a unified graph-based approach which exploits the semantic network structure from BabelNet. The third block shows two traditional word expert supervised systems. IMS BIBREF7 is a flexible framework which trains SVM classifiers and uses local features. IMS$_{+emb}$ BIBREF9 is the best configuration of the IMS framework, which also integrates word embeddings as features. The fourth block shows several recent neural-based methods. Bi-LSTM BIBREF11 is a baseline for neural models. Bi-LSTM$_{+ att. + LEX + POS}$ BIBREF10 is a multi-task learning framework for WSD, POS tagging, and LEX with a self-attention mechanism, which converts WSD to a sequence learning task. GAS$_{ext}$ BIBREF12 is a variant of GAS, a gloss-augmented memory network, which extends the gloss knowledge. CAN$^s$ and HCAN BIBREF13 are sentence-level and hierarchical co-attention neural network models which leverage gloss knowledge. In the last block, we report the performance of our method. BERT(Token-CLS) is our baseline, which does not incorporate gloss information, and it performs slightly worse than previous traditional supervised methods and recent neural-based methods. 
This shows that directly applying BERT does not by itself yield performance gains. The other three methods outperform the other models by a substantial margin, which indicates that the improvements come from leveraging BERT to better exploit gloss information. It is worth noting that our method achieves significant improvements over previous methods on SE07 and on verbs, which have the highest ambiguity level among all datasets and all POS tags, respectively, according to BIBREF17. Moreover, GlossBERT(Token-CLS) performs better than GlossBERT(Sent-CLS), which shows that highlighting the target word in the sentence is important. However, GlossBERT(Sent-CLS-WS), which highlights the target word only weakly through the weak supervision, performs best in most circumstances, which may result from its combination of the advantages of the other two methods. Experiments ::: Discussion There are two main reasons for the large improvements in our experimental results. First, we construct context-gloss pairs and convert the WSD problem into a sentence-pair classification task, which is similar to NLI tasks, and train only one classifier, which is equivalent to expanding the corpus. Second, we leverage BERT BIBREF15 to better exploit the gloss information. The BERT model shows its advantage in dealing with sentence-pair classification tasks, as evidenced by its strong improvements on QA and NLI tasks. This advantage comes from both of its two novel unsupervised prediction tasks. Compared with traditional word expert supervised methods, our GlossBERT is effective in alleviating the effort of feature engineering and does not require training a dedicated classifier for every target lemma. With these results, neural network methods can now be said to clearly outperform traditional word expert methods. Compared with recent neural-based methods, our solution is more intuitive and makes better use of gloss knowledge. Besides, our approach demonstrates that when fine-tuning BERT on a downstream task, converting it into a sentence-pair classification task may be a good choice. Conclusion In this paper, we seek to better leverage gloss knowledge in a supervised neural WSD system. We propose a new solution to WSD by constructing context-gloss pairs and then converting WSD into a sentence-pair classification task. We fine-tune the pre-trained BERT model and achieve new state-of-the-art results on the WSD task. Acknowledgments We would like to thank the anonymous reviewers for their valuable comments. The research work is supported by the National Natural Science Foundation of China (No. 61751201 and 61672162), the Shanghai Municipal Science and Technology Commission (16JC1420401 and 17JC1404100), the Shanghai Municipal Science and Technology Major Project (No.2018SHZDZX01) and ZJLab.
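For readers who want to experiment with the context-gloss pair formulation, a minimal sketch with the HuggingFace transformers library is given below. It is not the authors' released code: the score_senses helper, the example context and glosses, and the omission of the weak-supervision markers are our simplifications, roughly corresponding to a GlossBERT(Sent-CLS)-style setup, and the model would first need to be fine-tuned on labeled context-gloss pairs for the scores to be meaningful.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()   # in practice, fine-tune on context-gloss pairs with yes/no labels first

def score_senses(context, glosses):
    """Score each (context, gloss) pair; label 1 ('yes') means the gloss matches the usage."""
    # The tokenizer builds [CLS] context [SEP] gloss [SEP] inputs internally.
    batch = tokenizer([context] * len(glosses), glosses,
                      padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits            # (num_glosses, 2)
    return torch.softmax(logits, dim=-1)[:, 1]    # probability of 'yes' per candidate sense

context = "Your research stopped when a convenient assertion could be made."
glosses = ["systematic investigation to establish facts",
           "a search for knowledge"]
probs = score_senses(context, glosses)
predicted = int(torch.argmax(probs))              # index of the chosen candidate sense
print(predicted, probs.tolist())
```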
Do they use large or small BERT?
small BERT
Introduction There exists a class of language construction known as pun in natural language texts and utterances, where a certain word or other lexical items are used to exploit two or more separate meanings. It has been shown that understanding of puns is an important research question with various real-world applications, such as human-computer interaction BIBREF0 , BIBREF1 and machine translation BIBREF2 . Recently, many researchers show their interests in studying puns, like detecting pun sentences BIBREF3 , locating puns in the text BIBREF4 , interpreting pun sentences BIBREF5 and generating sentences containing puns BIBREF6 , BIBREF7 , BIBREF8 . A pun is a wordplay in which a certain word suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect. Puns can be generally categorized into two groups, namely heterographic puns (where the pun and its latent target are phonologically similar) and homographic puns (where the two meanings of the pun reflect its two distinct senses) BIBREF9 . Consider the following two examples: The first punning joke exploits the sound similarity between the word “propane" and the latent target “profane", which can be categorized into the group of heterographic puns. Another categorization of English puns is homographic pun, exemplified by the second instance leveraging distinct senses of the word “gut". Pun detection is the task of detecting whether there is a pun residing in the given text. The goal of pun location is to find the exact word appearing in the text that implies more than one meanings. Most previous work addresses such two tasks separately and develop separate systems BIBREF10 , BIBREF5 . Typically, a system for pun detection is built to make a binary prediction on whether a sentence contains a pun or not, where all instances (with or without puns) are taken into account during training. For the task of pun location, a separate system is used to make a single prediction as to which word in the given sentence in the text that trigger more than one semantic interpretations of the text, where the training data involves only sentences that contain a pun. Therefore, if one is interested in solving both problems at the same time, a pipeline approach that performs pun detection followed by pun location can be used. Compared to the pipeline methods, joint learning has been shown effective BIBREF11 , BIBREF12 since it is able to reduce error propagation and allows information exchange between tasks which is potentially beneficial to all the tasks. In this work, we demonstrate that the detection and location of puns can be jointly addressed by a single model. The pun detection and location tasks can be combined as a sequence labeling problem, which allows us to jointly detect and locate a pun in a sentence by assigning each word a tag. Since each context contains a maximum of one pun BIBREF9 , we design a novel tagging scheme to capture this structural constraint. Statistics on the corpora also show that a pun tends to appear in the second half of a context. To capture such a structural property, we also incorporate word position knowledge into our structured prediction model. Experiments on the benchmark datasets show that detection and location tasks can reinforce each other, leading to new state-of-the-art performance on these two tasks. 
To the best of our knowledge, this is the first work that performs joint detection and location of English puns by using a sequence labeling approach. Problem Definition We first design a simple tagging scheme consisting of two tags { INLINEFORM0 }: INLINEFORM0 tag means the current word is not a pun. INLINEFORM0 tag means the current word is a pun. If the tag sequence of a sentence contains a INLINEFORM0 tag, then the text contains a pun and the word corresponding to INLINEFORM1 is the pun. The contexts have the characteristic that each context contains a maximum of one pun BIBREF9 . In other words, there exists only one pun if the given sentence is detected as the one containing a pun. Otherwise, there is no pun residing in the text. To capture this interesting property, we propose a new tagging scheme consisting of three tags, namely { INLINEFORM0 }. INLINEFORM0 tag indicates that the current word appears before the pun in the given context. INLINEFORM0 tag highlights the current word is a pun. INLINEFORM0 tag indicates that the current word appears after the pun. We empirically show that the INLINEFORM0 scheme can guarantee the context property that there exists a maximum of one pun residing in the text. Given a context from the training set, we will be able to generate its corresponding gold tag sequence using a deterministic procedure. Under the two schemes, if a sentence does not contain any puns, all words will be tagged with INLINEFORM0 or INLINEFORM1 , respectively. Exemplified by the second sentence “Some diets cause a gut reaction," the pun is given as “gut." Thus, under the INLINEFORM2 scheme, it should be tagged with INLINEFORM3 , while the words before it are assigned with the tag INLINEFORM4 and words after it are with INLINEFORM5 , as illustrated in Figure FIGREF8 . Likewise, the INLINEFORM6 scheme tags the word “gut" with INLINEFORM7 , while other words are tagged with INLINEFORM8 . Therefore, we can combine the pun detection and location tasks into one problem which can be solved by the sequence labeling approach. Model Neural models have shown their effectiveness on sequence labeling tasks BIBREF13 , BIBREF14 , BIBREF15 . In this work, we adopt the bidirectional Long Short Term Memory (BiLSTM) BIBREF16 networks on top of the Conditional Random Fields BIBREF17 (CRF) architecture to make labeling decisions, which is one of the classical models for sequence labeling. Our model architecture is illustrated in Figure FIGREF8 with a running example. Given a context/sentence INLINEFORM0 where INLINEFORM1 is the length of the context, we generate the corresponding tag sequence INLINEFORM2 based on our designed tagging schemes and the original annotations for pun detection and location provided by the corpora. Our model is then trained on pairs of INLINEFORM3 . Input. The contexts in the pun corpus hold the property that each pun contains exactly one content word, which can be either a noun, a verb, an adjective, or an adverb. To capture this characteristic, we consider lexical features at the character level. Similar to the work of BIBREF15 , the character embeddings are trained by the character-level LSTM networks on the unannotated input sequences. Nonlinear transformations are then applied to the character embeddings by highway networks BIBREF18 , which map the character-level features into different semantic spaces. We also observe that a pun tends to appear at the end of a sentence. 
Specifically, based on the statistics, we found that sentences with a pun that locate at the second half of the text account for around 88% and 92% in homographic and heterographic datasets, respectively. We thus introduce a binary feature that indicates if a word is located at the first or the second half of an input sentence to capture such positional information. A binary indicator can be mapped to a vector representation using a randomly initialized embedding table BIBREF19 , BIBREF20 . In this work, we directly adopt the value of the binary indicator as part of the input. The concatenation of the transformed character embeddings, the pre-trained word embeddings BIBREF21 , and the position indicators are taken as input of our model. Tagging. The input is then fed into a BiLSTM network, which will be able to capture contextual information. For a training instance INLINEFORM0 , we suppose the output by the word-level BiLSTM is INLINEFORM1 . The CRF layer is adopted to capture label dependencies and make final tagging decisions at each position, which has been included in many state-of-the-art sequence labeling models BIBREF14 , BIBREF15 . The conditional probability is defined as: where INLINEFORM0 is a set of all possible label sequences consisting of tags from INLINEFORM1 (or INLINEFORM2 ), INLINEFORM3 and INLINEFORM4 are weight and bias parameters corresponding to the label pair INLINEFORM5 . During training, we minimize the negative log-likelihood summed over all training instances: where INLINEFORM0 refers to the INLINEFORM1 -th instance in the training set. During testing, we aim to find the optimal label sequence for a new input INLINEFORM2 : This search process can be done efficiently using the Viterbi algorithm. Datasets and Settings We evaluate our model on two benchmark datasets BIBREF9 . The homographic dataset contains 2,250 contexts, 1,607 of which contain a pun. The heterographic dataset consists of 1,780 contexts with 1,271 containing a pun. We notice there is no standard splitting information provided for both datasets. Thus we apply 10-fold cross validation. To make direct comparisons with prior studies, following BIBREF4 , we accumulated the predictions for all ten folds and calculate the scores in the end. For each fold, we randomly select 10% of the instances from the training set for development. Word embeddings are initialized with the 100-dimensional Glove BIBREF21 . The dimension of character embeddings is 30 and they are randomly initialized, which can be fine tuned during training. The pre-trained word embeddings are not updated during training. The dimensions of hidden vectors for both char-level and word-level LSTM units are set to 300. We adopt stochastic gradient descent (SGD) BIBREF26 with a learning rate of 0.015. For the pun detection task, if the predicted tag sequence contains at least one INLINEFORM0 tag, we regard the output (i.e., the prediction of our pun detection model) for this task as true, otherwise false. For the pun location task, a predicted pun is regarded as correct if and only if it is labeled as the gold pun in the dataset. As to pun location, to make fair comparisons with prior studies, we only consider the instances that are labeled as the ones containing a pun. We report precision, recall and INLINEFORM1 score in Table TABREF11 . A list of prior works that did not employ joint learning are also shown in the first block of Table TABREF11 . 
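A minimal sketch of how the gold tag sequences and the position indicator described above can be constructed is shown below. It is our own illustration, and the concrete tag names (N, B, P, A) stand in for the anonymized tag symbols in the text.

```python
def tags_two_scheme(words, pun_index=None):
    """Two-tag scheme: 'P' marks the pun word, 'N' everything else."""
    return ["P" if i == pun_index else "N" for i in range(len(words))]

def tags_three_scheme(words, pun_index=None):
    """Three-tag scheme: 'B' before the pun, 'P' the pun itself, 'A' after it.
    If the sentence has no pun, every word is tagged 'B' and no 'P' appears."""
    if pun_index is None:
        return ["B"] * len(words)
    return ["B" if i < pun_index else "P" if i == pun_index else "A"
            for i in range(len(words))]

def position_indicator(words):
    """Binary feature: 0 for words in the first half of the sentence, 1 for the second half."""
    half = len(words) / 2.0
    return [0 if i < half else 1 for i in range(len(words))]

words = "Some diets cause a gut reaction".split()
print(tags_three_scheme(words, pun_index=4))   # ['B', 'B', 'B', 'B', 'P', 'A']
print(position_indicator(words))               # [0, 0, 0, 1, 1, 1]
```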
Results We also implemented a baseline model based on conditional random fields (CRF), where features like POS tags produced by the Stanford POS tagger BIBREF27 , n-grams, label transitions, word suffixes and the relative position to the end of the text are considered. We can see that our model with the INLINEFORM0 tagging scheme yields new state-of-the-art INLINEFORM1 scores on pun detection and competitive results on pun location, compared to the baselines in the first block that do not adopt joint learning. For location on heterographic puns, our model's performance is slightly lower than the system of BIBREF25 , which is a rule-based locator. Compared to CRF, we can see that our model, either with the INLINEFORM2 or the INLINEFORM3 scheme, yields significantly higher recall on both detection and location tasks, while the precisions are relatively close. This demonstrates the effectiveness of BiLSTM, which learns the contextual features of the given texts; such information appears to be helpful in recalling more puns. Compared to the INLINEFORM0 scheme, the INLINEFORM1 tagging scheme is able to yield better performance on these two tasks. After studying outputs from these two approaches, we found that one leading source of error for the INLINEFORM2 approach is that more than one word in a single instance is assigned the INLINEFORM3 tag. However, according to the description of puns in BIBREF9 , each context contains a maximum of one pun. Thus, such a useful structural constraint is not well captured by the simple approach based on the INLINEFORM4 tagging scheme. On the other hand, by applying the INLINEFORM5 tagging scheme, such a constraint is properly captured in the model. As a result, the results for such an approach are significantly better than those of the approach based on the INLINEFORM6 tagging scheme, as we can observe from the table. Under the same experimental setup, we also attempted to exclude word position features. Results are given by INLINEFORM7 - INLINEFORM8 . As expected, the performance of pun location drops, since such position features are able to capture the interesting property that a pun tends to appear in the second half of a sentence. While such knowledge is helpful for the location task, interestingly, a model without position knowledge yields improved performance on the pun detection task. One possible reason is that detecting whether a sentence contains a pun is not concerned with such word position information. Additionally, we conduct experiments over sentences containing a pun only, namely 1,607 and 1,271 instances from the homographic and heterographic pun corpora respectively. This can be regarded as a “pipeline” method where the classifier for pun detection is regarded as perfect. Following the prior work of BIBREF4 , we apply 10-fold cross validation. Since we are given that all input sentences contain a pun, we only report accumulated results on pun location, denoted as Pipeline in Table TABREF11 . Compared with our approaches, the performance of such an approach drops significantly. On the other hand, this fact demonstrates that the two tasks, detection and location of puns, can reinforce each other. These figures demonstrate the effectiveness of our sequence labeling method for detecting and locating English puns in a joint manner. Error Analysis We studied the outputs from our system and performed an error analysis. We found that the errors can be broadly categorized into several types, and we elaborate on them here. 
1) Low word coverage: since the corpora are relatively small, there exist many unseen words in the test set. Learning the representations of such unseen words is challenging, which affects the model's performance. Such errors contribute around 40% of the total errors made by our system. 2) Detection errors: we found many errors are due to the model's inability to make correct pun detection. Such inability harms both pun detection and pun location. Although our approach based on the INLINEFORM0 tagging scheme yields relatively higher scores on the detection task, we still found that 40% of the incorrectly predicted instances fall into this group. 3) Short sentences: we found it was challenging for our model to make correct predictions when the given text is short. Consider the example “Superglue! Tom rejoined," here the word rejoined is the corresponding pun. However, it would be challenging to figure out the pun with such limited contextual information. Related Work Most existing systems address pun detection and location separately. BIBREF22 applied word sense knowledge to conduct pun detection. BIBREF24 trained a bidirectional RNN classifier for detecting homographic puns. Next, a knowledge-based approach is adopted to find the exact pun. Such a system is not applicable to heterographic puns. BIBREF28 applied Google n-gram and word2vec to make decisions. The phonetic distance via the CMU Pronouncing Dictionary is computed to detect heterographic puns. BIBREF10 used the hidden Markov model and a cyclic dependency network with rich features to detect and locate puns. BIBREF23 used a supervised approach to pun detection and a weakly supervised approach to pun location based on the position within the context and part of speech features. BIBREF25 proposed a rule-based system for pun location that scores candidate words according to eleven simple heuristics. Two systems are developed to conduct detection and location separately in the system known as UWAV BIBREF3 . The pun detector combines predictions from three classifiers. The pun locator considers word2vec similarity between every pair of words in the context and position to pinpoint the pun. The state-of-the-art system for homographic pun location is a neural method BIBREF4 , where the word senses are incorporated into a bidirectional LSTM model. This method only supports the pun location task on homographic puns. Another line of research efforts related to this work is sequence labeling, such as POS tagging, chunking, word segmentation and NER. The neural methods have shown their effectiveness in this task, such as BiLSTM-CNN BIBREF13 , GRNN BIBREF29 , LSTM-CRF BIBREF30 , LSTM-CNN-CRF BIBREF14 , LM-LSTM-CRF BIBREF15 . In this work, we combine pun detection and location tasks as a single sequence labeling problem. Inspired by the work of BIBREF15 , we also adopt a LSTM-CRF with character embeddings to make labeling decisions. Conclusion In this paper, we propose to perform pun detection and location tasks in a joint manner from a sequence labeling perspective. We observe that each text in our corpora contains a maximum of one pun. Hence, we design a novel tagging scheme to incorporate such a constraint. Such a scheme guarantees that there is a maximum of one word that will be tagged as a pun during the testing phase. We also found the interesting structural property such as the fact that most puns tend to appear at the second half of the sentences can be helpful for such a task, but was not explored in previous works. 
Furthermore, unlike many previous approaches, our approach, though simple, is generally applicable to both heterographic and homographic puns. Empirical results on the benchmark datasets prove the effectiveness of the proposed approach that the two tasks of pun detection and location can be addressed by a single model from a sequence labeling perspective. Future research includes the investigations on how to make use of richer semantic and linguistic information for detection and location of puns. Research on puns for other languages such as Chinese is still under-explored, which could also be an interesting direction for our future studies. Acknowledgments We would like to thank the three anonymous reviewers for their thoughtful and constructive comments. This work is supported by Singapore Ministry of Education Academic Research Fund (AcRF) Tier 2 Project MOE2017-T2-1-156, and is partially supported by SUTD project PIE-SGP-AI-2018-01.
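As a coda to the model description, the CRF decoding step mentioned above reduces to a standard Viterbi search over emission and transition scores. The implementation below is a generic sketch rather than the authors' code, and the random score matrices are placeholders.

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Find the highest-scoring tag sequence.

    emissions:   (seq_len, num_tags) per-position tag scores from the BiLSTM.
    transitions: (num_tags, num_tags) score of moving from tag i to tag j.
    """
    seq_len, num_tags = emissions.shape
    score = emissions[0].copy()                  # best score ending in each tag so far
    backpointers = []
    for t in range(1, seq_len):
        # total[i, j] = score[i] + transitions[i, j] + emissions[t, j]
        total = score[:, None] + transitions + emissions[t][None, :]
        backpointers.append(total.argmax(axis=0))
        score = total.max(axis=0)
    best_path = [int(score.argmax())]
    for bp in reversed(backpointers):            # trace the best path backwards
        best_path.append(int(bp[best_path[-1]]))
    return list(reversed(best_path))

rng = np.random.default_rng(0)
emissions = rng.normal(size=(6, 3))              # 6 words, 3 tags (e.g. B/P/A)
transitions = rng.normal(size=(3, 3))
print(viterbi_decode(emissions, transitions))    # best tag index per word
```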
What is the tagging scheme employed?
A new tagging scheme that tags the words before and after the pun as well as the pun words.
Introduction End-to-end speech-to-text translation (ST) has attracted much attention recently BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 given its simplicity against cascading automatic speech recognition (ASR) and machine translation (MT) systems. The lack of labeled data, however, has become a major blocker for bridging the performance gaps between end-to-end models and cascading systems. Several corpora have been developed in recent years. post2013improved introduced a 38-hour Spanish-English ST corpus by augmenting the transcripts of the Fisher and Callhome corpora with English translations. di-gangi-etal-2019-must created the largest ST corpus to date from TED talks but the language pairs involved are out of English only. beilharz2019librivoxdeen created a 110-hour German-English ST corpus from LibriVox audiobooks. godard-etal-2018-low created a Moboshi-French ST corpus as part of a rare language documentation effort. woldeyohannis provided an Amharic-English ST corpus in the tourism domain. boito2019mass created a multilingual ST corpus involving 8 languages from a multilingual speech corpus based on Bible readings BIBREF7. Previous work either involves language pairs out of English, very specific domains, very low resource languages or a limited set of language pairs. This limits the scope of study, including the latest explorations on end-to-end multilingual ST BIBREF8, BIBREF9. Our work is mostly similar and concurrent to iranzosnchez2019europarlst who created a multilingual ST corpus from the European Parliament proceedings. The corpus we introduce has larger speech durations and more translation tokens. It is diversified with multiple speakers per transcript/translation. Finally, we provide additional out-of-domain test sets. In this paper, we introduce CoVoST, a multilingual ST corpus based on Common Voice BIBREF10 for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. It includes a total 708 hours of French (Fr), German (De), Dutch (Nl), Russian (Ru), Spanish (Es), Italian (It), Turkish (Tr), Persian (Fa), Swedish (Sv), Mongolian (Mn) and Chinese (Zh) speeches, with French and German ones having the largest durations among existing public corpora. We also collect an additional evaluation corpus from Tatoeba for French, German, Dutch, Russian and Spanish, resulting in a total of 9.3 hours of speech. Both corpora are created at the sentence level and do not require additional alignments or segmentation. Using the official Common Voice train-development-test split, we also provide baseline models, including, to our knowledge, the first end-to-end many-to-one multilingual ST models. CoVoST is released under CC0 license and free to use. The Tatoeba evaluation samples are also available under friendly CC licenses. All the data can be acquired at https://github.com/facebookresearch/covost. Data Collection and Processing ::: Common Voice (CoVo) Common Voice BIBREF10 is a crowdsourcing speech recognition corpus with an open CC0 license. Contributors record voice clips by reading from a bank of donated sentences. Each voice clip was validated by at least two other users. Most of the sentences are covered by multiple speakers, with potentially different genders, age groups or accents. Raw CoVo data contains samples that passed validation as well as those that did not. To build CoVoST, we only use the former one and reuse the official train-development-test partition of the validated data. 
As of January 2020, the latest CoVo 2019-06-12 release includes 29 languages. CoVoST is currently built on that release and covers the following 11 languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian and Chinese. Validated transcripts were sent to professional translators. Note that the translators had access to the transcripts but not the corresponding voice clips since clips would not carry additional information. Since transcripts were duplicated due to multiple speakers, we deduplicated the transcripts before sending them to translators. As a result, different voice clips of the same content (transcript) will have identical translations in CoVoST for train, development and test splits. In order to control the quality of the professional translations, we applied various sanity checks to the translations BIBREF11. 1) For German-English, French-English and Russian-English translations, we computed sentence-level BLEU BIBREF12 with the NLTK BIBREF13 implementation between the human translations and the automatic translations produced by a state-of-the-art system BIBREF14 (the French-English system was a Transformer big BIBREF15 separately trained on WMT14). We applied this method to these three language pairs only as we are confident about the quality of the corresponding systems. Translations with a score that was too low were manually inspected and sent back to the translators when needed. 2) We manually inspected examples where the source transcript was identical to the translation. 3) We measured the perplexity of the translations using a language model trained on a large amount of clean monolingual data BIBREF14. We manually inspected examples where the translation had a high perplexity and sent them back to translators accordingly. 4) We computed the ratio of English characters in the translations. We manually inspected examples with a low ratio and sent them back to translators accordingly. 5) Finally, we used VizSeq BIBREF16 to calculate similarity scores between transcripts and translations based on LASER cross-lingual sentence embeddings BIBREF17. Samples with low scores were manually inspected and sent back for translation when needed. We also sanity check the overlaps of train, development and test sets in terms of transcripts and voice clips (via MD5 file hashing), and confirm they are totally disjoint. Data Collection and Processing ::: Tatoeba (TT) Tatoeba (TT) is a community built language learning corpus having sentences aligned across multiple languages with the corresponding speech partially available. Its sentences are on average shorter than those in CoVoST (see also Table TABREF2) given the original purpose of language learning. Sentences in TT are licensed under CC BY 2.0 FR and part of the speeches are available under various CC licenses. We construct an evaluation set from TT (for French, German, Dutch, Russian and Spanish) as a complement to CoVoST development and test sets. We collect (speech, transcript, English translation) triplets for the 5 languages and do not include those whose speech has a broken URL or is not CC licensed. We further filter these samples by sentence lengths (minimum 4 words including punctuations) to reduce the portion of short sentences. This makes the resulting evaluation set closer to real-world scenarios and more challenging. We run the same quality checks for TT as for CoVoST but we do not find poor quality translations according to our criteria. 
Finally, we report the overlap between CoVo transcripts and TT sentences in Table TABREF5. We found a minimal overlap, which makes the TT evaluation set a suitable additional test set when training on CoVoST. Data Analysis ::: Basic Statistics Basic statistics for CoVoST and TT are listed in Table TABREF2 including (unique) sentence counts, speech durations, speaker demographics (partially available) as well as vocabulary and token statistics (based on Moses-tokenized sentences by sacreMoses) on both transcripts and translations. We see that CoVoST has over 327 hours of German speeches and over 171 hours of French speeches, which, to our knowledge, corresponds to the largest corpus among existing public ST corpora (the second largest is 110 hours BIBREF18 for German and 38 hours BIBREF19 for French). Moreover, CoVoST has a total of 18 hours of Dutch speeches, to our knowledge, contributing the first public Dutch ST resource. CoVoST also has around 27-hour Russian speeches, 37-hour Italian speeches and 67-hour Persian speeches, which is 1.8 times, 2.5 times and 13.3 times of the previous largest public one BIBREF7. Most of the sentences (transcripts) in CoVoST are covered by multiple speakers with potentially different accents, resulting in a rich diversity in the speeches. For example, there are over 1,000 speakers and over 10 accents in the French and German development / test sets. This enables good coverage of speech variations in both model training and evaluation. Data Analysis ::: Speaker Diversity As we can see from Table TABREF2, CoVoST is diversified with a rich set of speakers and accents. We further inspect the speaker demographics in terms of sample distributions with respect to speaker counts, accent counts and age groups, which is shown in Figure FIGREF6, FIGREF7 and FIGREF8. We observe that for 8 of the 11 languages, at least 60% of the sentences (transcripts) are covered by multiple speakers. Over 80% of the French sentences have at least 3 speakers. And for German sentences, even over 90% of them have at least 5 speakers. Similarly, we see that a large portion of sentences are spoken in multiple accents for French, German, Dutch and Spanish. Speakers of each language also spread widely across different age groups (below 20, 20s, 30s, 40s, 50s, 60s and 70s). Baseline Results We provide baselines using the official train-development-test split on the following tasks: automatic speech recognition (ASR), machine translation (MT) and speech translation (ST). Baseline Results ::: Experimental Settings ::: Data Preprocessing We convert raw MP3 audio files from CoVo and TT into mono-channel waveforms, and downsample them to 16,000 Hz. For transcripts and translations, we normalize the punctuation, we tokenize the text with sacreMoses and lowercase it. For transcripts, we further remove all punctuation markers except for apostrophes. We use character vocabularies on all the tasks, with 100% coverage of all the characters. Preliminary experimentation showed that character vocabularies provided more stable training than BPE. For MT, the vocabulary is created jointly on both transcripts and translations. We extract 80-channel log-mel filterbank features, computed with a 25ms window size and 10ms window shift using torchaudio. The features are normalized to 0 mean and 1.0 standard deviation. We remove samples having more than 3,000 frames or more than 256 characters for GPU memory efficiency (less than 25 samples are removed for all languages). 
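A minimal sketch of the feature extraction described above (mono-channel 16 kHz audio, 80-channel log-mel filterbanks with a 25 ms window and 10 ms shift, normalized to zero mean and unit variance) is shown below. It uses torchaudio's standard transforms and is our own illustration rather than the exact CoVoST pipeline; the file path is a placeholder.

```python
import torch
import torchaudio

def extract_features(path, target_sr=16000):
    """Load audio, downsample to 16 kHz and compute normalized 80-dim log-mel features."""
    waveform, sr = torchaudio.load(path)             # (channels, samples)
    waveform = waveform.mean(dim=0, keepdim=True)    # collapse to mono-channel
    if sr != target_sr:
        waveform = torchaudio.transforms.Resample(sr, target_sr)(waveform)
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=target_sr,
        n_fft=400,            # 25 ms window at 16 kHz
        win_length=400,
        hop_length=160,       # 10 ms shift at 16 kHz
        n_mels=80,
    )(waveform)
    log_mel = torch.log(mel + 1e-6).squeeze(0).transpose(0, 1)   # (frames, 80)
    return (log_mel - log_mel.mean()) / (log_mel.std() + 1e-6)   # utterance-level normalization

features = extract_features("sample.mp3")   # placeholder path
print(features.shape)                       # roughly (num_frames, 80)
```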
Baseline Results ::: Experimental Settings ::: Model Training Our ASR and ST models follow the architecture in berard2018end, but have 3 decoder layers as in pino2019harnessing. For MT, we use a Transformer base architecture BIBREF15, but with 3 encoder layers, 3 decoder layers and 0.3 dropout. We use a batch size of 10,000 frames for ASR and ST, and a batch size of 4,000 tokens for MT. We train all models using Fairseq BIBREF20 for up to 200,000 updates. We use SpecAugment BIBREF21 for ASR and ST to alleviate overfitting. Baseline Results ::: Experimental Settings ::: Inference and Evaluation We use a beam size of 5 for all models. We use the best checkpoint by validation loss for MT, and average the last 5 checkpoints for ASR and ST. For MT and ST, we report case-insensitive tokenized BLEU BIBREF22 using sacreBLEU BIBREF23. For ASR, we report word error rate (WER) and character error rate (CER) using VizSeq. Baseline Results ::: Automatic Speech Recognition (ASR) For simplicity, we use the same model architecture for ASR and ST, although we do not leverage the ASR models to pretrain ST model encoders later. Table TABREF18 shows the word error rate (WER) and character error rate (CER) for the ASR models. We see that French and German perform the best, given that they are the two highest-resource languages in CoVoST. The other languages are relatively low resource (especially Turkish and Swedish), and the ASR models have difficulty learning from this data. Baseline Results ::: Machine Translation (MT) MT models take transcripts (without punctuation) as inputs and output translations (with punctuation). For simplicity, we do not change the text preprocessing methods for MT to correct this mismatch. Moreover, this mismatch also exists in cascaded ST systems, where MT model inputs are the outputs of an ASR model. Table TABREF20 shows the BLEU scores of the MT models. We notice that the results are consistent with what we see from the ASR models. For example, thanks to abundant training data, French has a decent BLEU score of 29.8/25.4. German does not perform as well because its transcripts are less rich in content. The other languages are low resource in CoVoST, and it is difficult to train decent models without additional data or pre-training techniques. Baseline Results ::: Speech Translation (ST) CoVoST is a many-to-one multilingual ST corpus. While end-to-end one-to-many and many-to-many multilingual ST models have been explored very recently BIBREF8, BIBREF9, many-to-one multilingual models, to our knowledge, have not. We hence use CoVoST to examine this setting. Tables TABREF22 and TABREF23 show the BLEU scores for both bilingual and multilingual end-to-end ST models trained on CoVoST. We observe that combining speech from multiple languages consistently brings gains to the low-resource languages (all besides French and German). This includes combinations of distant languages, such as Ru+Fr, Tr+Fr and Zh+Fr. Moreover, some combinations bring gains to the high-resource language (French) as well: Es+Fr, Tr+Fr and Mn+Fr. We provide only the most basic many-to-one multilingual baselines here, and leave the full exploration of the best configurations to future work. Finally, we note that for some language pairs, absolute BLEU numbers are relatively low, as we restrict model training to the supervised data. We encourage the community to improve upon these baselines, for example by leveraging semi-supervised training.
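The checkpoint averaging used for ASR and ST inference can be done in a few lines of PyTorch. The sketch below assumes Fairseq-style checkpoints that store the parameters under a "model" key; it is not the Fairseq averaging utility itself.

```python
import torch

def average_checkpoints(paths):
    """Average the parameter tensors of several checkpoints (e.g. the last 5)."""
    avg_state = None
    for path in paths:
        state = torch.load(path, map_location="cpu")["model"]  # assumes a fairseq-style dict
        if avg_state is None:
            avg_state = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg_state[k] += v.float()
    return {k: v / len(paths) for k, v in avg_state.items()}

# Usage: torch.save({"model": average_checkpoints(last_five_paths)}, "checkpoint_avg.pt")
```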
Baseline Results ::: Multi-Speaker Evaluation In CoVoST, a large portion of transcripts are covered by multiple speakers with different genders, accents and age groups. Besides the standard corpus-level BLEU scores, we also want to evaluate model output variance on the same content (transcript) spoken by different speakers. We hence propose to group samples (and their sentence BLEU scores) by transcript, and then calculate the average per-group mean and the average per-group coefficient of variation, defined as $\textrm {BLEU}_{MS} = \frac{1}{|G|}\sum _{g\in G}\textrm {Mean}(g)$ and $\textrm {CoefVar}_{MS} = \frac{1}{|G^{\prime }|}\sum _{g\in G^{\prime }}\frac{\textrm {Std}(g)}{\textrm {Mean}(g)}$, where $G$ is the set of sentence BLEU scores grouped by transcript and $G^{\prime } = \lbrace g \mid g\in G, |g|>1, \textrm {Mean}(g) > 0 \rbrace $. $\textrm {BLEU}_{MS}$ provides a normalized quality score, as opposed to corpus-level BLEU or an unnormalized average of sentence BLEU. $\textrm {CoefVar}_{MS}$ is a standardized measure of model stability against different speakers (the lower the better). Table TABREF24 shows the $\textrm {BLEU}_{MS}$ and $\textrm {CoefVar}_{MS}$ of our ST models on the CoVoST test set. We see that German and Persian have the worst $\textrm {CoefVar}_{MS}$ (least stable), given their rich speaker diversity in the test set and relatively small train sets (see also Figure FIGREF6 and Table TABREF2). Dutch also has a poor $\textrm {CoefVar}_{MS}$ because of the lack of training data. Multilingual models are consistently more stable on low-resource languages. Ru+Fr, Tr+Fr, Fa+Fr and Zh+Fr even have better $\textrm {CoefVar}_{MS}$ than all individual languages. Conclusion We introduce a multilingual speech-to-text translation corpus, CoVoST, for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. We also provide baseline results, including, to our knowledge, the first end-to-end many-to-one multilingual model for spoken language translation. CoVoST is free to use with a CC0 license, and the additional Tatoeba evaluation samples are also CC-licensed.
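A small sketch of the multi-speaker metrics defined above, computed from (transcript, sentence BLEU) pairs. It assumes the sentence-level BLEU scores have already been computed, and it uses the population standard deviation, which is an assumption since the exact variant is not specified.

```python
from collections import defaultdict
from statistics import mean, pstdev

def multi_speaker_metrics(samples):
    """samples: iterable of (transcript, sentence_bleu) pairs.
    Returns (BLEU_MS, CoefVar_MS) as defined above."""
    groups = defaultdict(list)
    for transcript, bleu in samples:
        groups[transcript].append(bleu)

    G = list(groups.values())
    bleu_ms = mean(mean(g) for g in G)

    # Only groups with more than one speaker and a non-zero mean contribute to CoefVar_MS.
    G_prime = [g for g in G if len(g) > 1 and mean(g) > 0]
    coefvar_ms = mean(pstdev(g) / mean(g) for g in G_prime) if G_prime else float("nan")
    return bleu_ms, coefvar_ms
```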
Is Arabic one of the 11 languages in CoVoST?
No
2,413
qasper
3k
Introduction Modelling the relationship between sequences is extremely significant in most retrieval or classification problems involving two sequences. Traditionally, in Siamese networks, the Hadamard product or concatenation has been used to fuse the vector representations of two input sequences into a final representation for tasks like semantic similarity and passage retrieval. This representation has subsequently been used to compute similarity scores, which feed a variety of training objectives such as margin loss for ranking or cross-entropy error for classification. We have also witnessed the use of word- or phrase-level similarity to create alignment matrices between two sequences BIBREF0 , BIBREF1 . These alignment matrices have proved to be very useful for modelling the relationship between two word representations as well as for fusing the relevant information of one sequence into another. Empirical evidence has shown that such alignment procedures perform significantly better than simple concatenation or element-wise multiplication, especially for long sentences or paragraphs. Attention creates a neural alignment matrix using learnt weights, without pre-computing an alignment matrix and using it as features. The main objective of any attentive or alignment process is to look for matching words or phrases between two sequences and to assign a high weight to the most similar pairs, and vice versa. The notion of matching or similarity need not be semantic similarity; it depends on whatever task we have at hand. For example, for a task that requires capturing semantic similarity between two sequences like "how rich is tom cruise" and "how much wealth does tom cruise have", an attentive model should discover the high similarity between "rich" and "wealth" and assign a high weight to the pair. Likewise, for a different task like question answering, the word "long" in a question like "how long does it take to recover from a mild fever" might be aligned with the phrase "a week" from the candidate answer "it takes almost a week to recover fully from a fever". Thus, attention significantly aids in understanding the relevance of a similar user query in a similarity measurement task or of a candidate answer in a question answering task. The final prediction score depends on how well the relationship between the two sequences is modelled and established. The general process of matching one sequence with another through attention involves computing an alignment matrix containing a weight value for every pair of word representations from the two sequences. Subsequently, a softmax function is applied along one of the two dimensions of the matrix to represent the matching probabilities of all the words of one sequence with respect to one particular word in the other sequence. Since attention always looks for matching word representations, it operates under the assumption that there is always a match to be found inside the sequences. We present a theoretical limitation of this assumption and propose another technique, called conflict, that looks for contrasting relationships between words in two sequences. We empirically verify that our proposed conflict mechanism, combined with attention, can outperform attention working alone. Related Work Bahdanau et al. BIBREF2 first introduced attention in neural machine translation. It used a feed-forward network over the addition of encoder and decoder states to compute the alignment score.
Our work is very similar to this, except that we use the element-wise difference instead of addition to build our conflict function. BIBREF3 came up with a scaled dot-product attention in their Transformer model, which is fast and memory-efficient; due to the scaling factor, it does not suffer from gradients zeroing out. On the other hand, BIBREF4 experimented with global and local attention, based on how many hidden states the attention function takes into account. Their experiments revolved around three attention functions - dot, concat and general - and their findings include that the dot product works best for global attention. Our work also belongs to the global attention family, as we consider all the hidden states of the sequence. Attention has been widely used in pair-classification problems like natural language inference. Wang et al. BIBREF5 introduced BIMPM, which matches one sequence with another in four different fashions but with one single matching function, for which they use cosine similarity. Liu et al. BIBREF6 proposed SAN for language inference, which also uses dot-product attention between the sequences. Summarizing, attention has helped in achieving state-of-the-art results in NLI and QA. Prior work on attention has mostly taken similarity-based approaches, while our work focuses on non-matching sequences. How attention works Let us consider two sequences, with M and N words respectively. The objective of attention is two-fold: compute alignment scores (or weights) between every pair of word representations from the two sequences, and fuse the matching information of the second sequence with the first, thus computing a new representation of the first sequence conditioned on the second. The word representations that attention operates on can be either embeddings like GloVe or hidden states from any recurrent neural network. We denote these representations as u = $(u_1, \dots , u_M)$ and v = $(v_1, \dots , v_N)$. We provide a mathematical working of how a general attention mechanism operates between two sequences, followed by an explanation in words: $\bar{u}_i = f(W_u u_i + b_u)$, $\bar{v}_j = f(W_v v_j + b_v)$ (eqn. 1); $e_{ij} = \bar{u}_i^{\top } \bar{v}_j$ (eqn. 2); $\alpha _{ij} = \frac{\exp (e_{ij})}{\sum _{k=1}^{N} \exp (e_{ik})}$ (eqn. 3); $a_i = \sum _{j=1}^{N} \alpha _{ij} v_j$ (eqn. 4); $\tilde{u}_i = [u_i ; a_i]$ (eqn. 5), where $f(\cdot )$ is a non-linearity and $W_u$, $W_v$, $b_u$, $b_v$ are trainable parameters. Explanation: Both sequences are non-linearly projected into two different spaces (eqn. 1), and each word representation in u is matched with each one in v by computing a dot product (eqn. 2). $e$ is an M $\times $ N matrix that stores the alignment score between word $u_i$ and $v_j$ (eqn. 2). Since the scores are not normalized, a softmax function is applied to each row to convert them to probabilities (eqn. 3). Thus, each row contains the relative importance of the words in v to a particular word $u_i$. A weighted sum of v is taken (eqn. 4) and fused with the word representation $u_i$ using concatenation (eqn. 5). Limits of using only Attention Attention operates by using a dot product, or sometimes addition followed by a linear projection to a scalar, to model the similarity between two vectors. Subsequently, softmax is applied, which gives high probabilities to the most matching word representations. This assumes that some highly matched word pairs already exist and that high scores will be assigned to them. Given a vector $s = (s_1, \dots , s_n)$ to which the softmax function is applied, each output $\sigma (s)_i \in (0, 1)$ and the outputs sum to 1, so the average value of $\sigma (s)_i$ is always $\frac{1}{n}$. In other words, it is impossible to produce a vector having all $\sigma (s)_i < \frac{1}{n}$, even when the two sequences have no matching at all.
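A tiny numerical illustration of this limitation: softmax outputs over n items always sum to 1, so they cannot all fall below 1/n, no matter how low the raw alignment scores are. The scores below are made up for illustration.

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Three uniformly low (dissimilar) alignment scores for one row of the matrix:
scores = np.array([-5.0, -5.2, -4.9])
probs = softmax(scores)
print(probs, probs.sum())                   # ~[0.34 0.28 0.38], sums to 1.0
print(probs.max() >= 1.0 / len(scores))     # True: at least one weight is always >= 1/n
```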
In cases where one or more word pairs from the two sequences are highly dissimilar, it is impossible to assign a very low probability to such a pair without increasing the probability of some other pair, since the weights in each row sum to 1. For example, consider the two sequences "height of tom cruise" and "age of sun". When computing the attention weights between the word "height" and all the words in the second sequence, we can observe that there is no matching word in the latter. In this case, a standard dot-product attention with softmax cannot produce weights below 0.33 (= 1/3) for all three words in the second sequence with respect to the word "height" in the first sequence. Conflict model We propose a different mechanism that does the opposite of what attention does, that is, it computes how much two sequences repel each other. It works very similarly to attention, but inversely. We present a general model, while acknowledging that other variants may be worked out that perform better. Our approach uses the element-wise difference between two vectors followed by a linear transformation to produce a scalar weight; the remainder of the process is the same as in attention. Mathematically, we can express the conflict score as $d_{ij} = w^{\top }(\bar{u}_i - \bar{v}_j)$, where $w$ is a parameter that we introduce to provide a weight for the pair, and $\bar{u}_i$ and $\bar{v}_j$ are projections of $u_i$ and $v_j$. The two word representations are projected to a space where their element-wise difference can be used to model their dissimilarity, and a softmax applied over these scores produces high probabilities for the more dissimilar word pairs. It is worth noting that conflict suffers from a limitation symmetric to that of attention, namely when a pair of sentences is highly matching, especially with multiple associations. But when the two methods work together, each compensates for the other's shortcomings. Combination of attention and conflict We use two weighted representations of v, computed with the attention and conflict weights (Eqn. (4) and (8) respectively). Our final representation of a word representation $u_i$ conditioned on v can be expressed as $\tilde{u}_i = [u_i ; a^{A}_i ; a^{C}_i]$, where A and C denote that the contexts come from the attention and conflict models respectively. Relation to Multi-Head attention Multi-head attention, as introduced in BIBREF3 , computes multiple identical attention mechanisms in parallel on multiple linear projections of the same inputs. The parameters of each attention head and its projections are different. Finally, all the attention outputs are concatenated, which is similar to how we concatenate conflict and attention. However, multi-head attention uses the dot product in every head. Our combined model, which contains both attention and conflict, can be thought of as a 2-head attention model in which the two heads differ: the conflict head explicitly captures the difference between the inputs. Visualizing attention and conflict We observe how our conflict model learns the dissimilarities between word representations by visualizing the heatmaps of the weight matrices for attention and conflict (eqns. (3) and (8)). While attention successfully learns the alignments, the conflict matrix shows that our approach models contrasting associations like "animal" and "lake" or "australia" and "world". These two associations are the unique pairs which are instrumental in determining that the two queries are not similar.
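The following PyTorch sketch puts the attention and conflict interactions together as described above: dot-product attention weights, element-wise-difference conflict weights, and concatenation of both context vectors with the original word representation. The shared projections and the tanh non-linearity are assumptions; this is not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionConflict(nn.Module):
    """Fuse sequence v into sequence u with both attention and conflict weights."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj_u = nn.Linear(dim, dim)                # projections used by both mechanisms
        self.proj_v = nn.Linear(dim, dim)
        self.conflict_w = nn.Linear(dim, 1, bias=False)  # scalar conflict weight per pair

    def forward(self, u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        # u: (M, d), v: (N, d)
        pu, pv = torch.tanh(self.proj_u(u)), torch.tanh(self.proj_v(v))

        # Attention: dot-product scores, softmax over v, weighted sum of v.
        att_scores = pu @ pv.t()                         # (M, N)
        att_weights = F.softmax(att_scores, dim=-1)
        att_context = att_weights @ v                    # (M, d)

        # Conflict: element-wise difference -> linear -> scalar score per pair.
        diff = pu.unsqueeze(1) - pv.unsqueeze(0)         # (M, N, d)
        conf_scores = self.conflict_w(diff).squeeze(-1)  # (M, N)
        conf_weights = F.softmax(conf_scores, dim=-1)
        conf_context = conf_weights @ v                  # (M, d)

        # Final representation: concatenate u with both context vectors.
        return torch.cat([u, att_context, conf_context], dim=-1)  # (M, 3d)
```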
The model We create two models, each of which consists of three main parts: encoder, interaction and classifier, and each of which takes two sequences as input. Except for the interaction part, the two models are exactly identical. The encoder is shared between the sequences and simply uses two stacked GRU layers. The interaction part consists of attention only for one model, while for the other it consists of attention and conflict combined as shown in (eqn. 11). The classifier part is simply a stack of fully-connected layers. Figure 3 shows a block diagram of the model. Task 1: Quora Duplicate Question Pair Detection The dataset includes pairs of questions labelled as 1 or 0 depending on whether the pair is a duplicate or not. This is a popular pair-level classification task on which extensive work has already been done, e.g. BIBREF7 , BIBREF8 . For this task, the output layer of our model predicts two probabilities, for non-duplicate and duplicate. We sample the data from the original dataset so that it contains an equal number of positive and negative examples. The original dataset has some class imbalance, but for the sake of simplicity we do not address it. The final data that we use has roughly 400,000 question pairs, and we split it into train and test sets using an 8:2 ratio. We train all our models for roughly 2 epochs with a batch size of 64. We use a hidden dimension of 150 throughout the model. The embedding layer uses ELMo BIBREF9 , which has proven to be very useful in various downstream language understanding tasks. Our FC part consists of four dense layers with a non-linear activation after each layer. The dropout rate is kept at 0.2 for every recurrent and FC layer. We use the Adam optimizer with epsilon=1e-8, beta=0.9 and learning rate=1e-3. Task 2: Ranking questions in Bing's People Also Ask People Also Ask is a feature on Bing's search result page where related questions are recommended to the user. A user may click on a question to view the answer. Clicking is a positive feedback signal that shows the user's interest in the question. We use these click logs to build a question classifier using the same model as in Figure 3. The problem statement is very similar to BIBREF10 , where logistic regression is used to predict whether a user will click on an ad. Our goal is to classify whether a question is a potential high-click question for a given query. For this, we first create a labelled data set from the click logs, where any question with a CTR lower than 0.3 is labelled as 0 and any question with a CTR higher than 0.7 as 1. The resulting data resembles that of a pair-level classifier, as in Task 1, where the user query and a candidate question are the inputs. With this data set, we train a binary classifier to detect high-click and low-click questions.
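A sketch of the overall two-sequence classifier described above: a shared two-layer GRU encoder, an interchangeable interaction module (attention only, or attention plus conflict as in the previous sketch), and a stack of fully-connected layers. Hyper-parameters mentioned in the text (hidden size 150, dropout 0.2, Adam with learning rate 1e-3) are used where given; the bidirectional encoder, ReLU activations, max-pooling over words and the ELMo embedding size of 1024 are assumptions.

```python
import torch
import torch.nn as nn

class PairClassifier(nn.Module):
    """Shared GRU encoder -> interaction (attention, or attention+conflict) -> FC classifier."""

    def __init__(self, interaction: nn.Module, emb_dim: int = 1024,
                 hidden: int = 150, fused_dim: int = 900, num_classes: int = 2):
        super().__init__()
        # Shared encoder: two stacked GRU layers (bidirectionality is an assumption).
        self.encoder = nn.GRU(emb_dim, hidden, num_layers=2, bidirectional=True,
                              batch_first=True, dropout=0.2)
        self.interaction = interaction        # e.g. the AttentionConflict sketch above
        self.classifier = nn.Sequential(      # four dense layers, ReLU assumed
            nn.Linear(fused_dim, 300), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(300, 300), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(300, 300), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(300, num_classes),
        )

    def forward(self, emb_a: torch.Tensor, emb_b: torch.Tensor) -> torch.Tensor:
        # emb_a: (1, M, emb_dim), emb_b: (1, N, emb_dim) - pre-computed ELMo embeddings.
        u, _ = self.encoder(emb_a)            # (1, M, 2*hidden)
        v, _ = self.encoder(emb_b)            # (1, N, 2*hidden)
        fused = self.interaction(u.squeeze(0), v.squeeze(0))  # (M, fused_dim)
        pooled = fused.max(dim=0).values      # simple pooling over words (assumption)
        return self.classifier(pooled)        # logits for non-duplicate / duplicate

# Training setup mentioned in the text:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, eps=1e-8)
```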
It can also be noticed that in the training procedure for the model with both attention and conflict, the updates are much smoother. Qualitative Comparison We also show qualitative results where we observe that our model with attention and conflict combined does better on cases where the pairs are non-duplicate and have very small differences. We have observed that the conflict model is very sensitive to even minor differences, compensating in cases where attention is strongly biased towards the similarities already present in the sequences. Sequence 1: What are the best ways to learn French ? Sequence 2: How do I learn french genders ? Attention only: 1 Attention+Conflict: 0 Ground Truth: 0 Sequence 1: How do I prevent breast cancer ? Sequence 2: Is breast cancer preventable ? Attention only: 1 Attention+Conflict: 0 Ground Truth: 0 We provide two examples with predictions from the attention-only model and from the combination of attention and conflict. Each example is accompanied by the ground truth from our data. Analyzing the gains We analyzed the gains we obtain from the attention-conflict model in Task 1 in order to ensure that they are not due to randomness in weight initialization or simply to the additional parameters. We particularly focused on the examples which were marked incorrectly by the attention model but correctly by the attention-conflict model. We found that 70% of those cases are ones where the pair was incorrectly marked as duplicate by the previous model but correctly marked as non-duplicate by our combined model. Conclusion In this work, we highlighted the limits of attention, especially in cases where two sequences have a contradicting relationship with respect to the task at hand. To alleviate this problem and further improve performance, we propose a conflict mechanism that tries to capture how much two sequences repel each other. This acts like the inverse of attention, and we show empirically how conflict and attention together can improve performance. Future research should explore alternative designs of the conflict mechanism, using difference operators other than the element-wise difference used here.
On which tasks do they test their conflict method?
Task 1: Quora Duplicate Question Pair Detection, Task 2: Ranking questions
2,577
qasper
3k
Introduction In the kitchen, we increasingly rely on instructions from cooking websites: recipes. A cook with a predilection for Asian cuisine may wish to prepare chicken curry, but may not know all necessary ingredients apart from a few basics. These users with limited knowledge cannot rely on existing recipe generation approaches that focus on creating coherent recipes given all ingredients and a recipe name BIBREF0. Such models do not address issues of personal preference (e.g. culinary tastes, garnish choices) and incomplete recipe details. We propose to approach both problems via personalized generation of plausible, user-specific recipes using user preferences extracted from previously consumed recipes. Our work combines two important tasks from natural language processing and recommender systems: data-to-text generation BIBREF1 and personalized recommendation BIBREF2. Our model takes as user input the name of a specific dish, a few key ingredients, and a calorie level. We pass these loose input specifications to an encoder-decoder framework and attend on user profiles—learned latent representations of recipes previously consumed by a user—to generate a recipe personalized to the user's tastes. We fuse these `user-aware' representations with decoder output in an attention fusion layer to jointly determine text generation. Quantitative (perplexity, user-ranking) and qualitative analysis on user-aware model outputs confirm that personalization indeed assists in generating plausible recipes from incomplete ingredients. While personalized text generation has seen success in conveying user writing styles in the product review BIBREF3, BIBREF4 and dialogue BIBREF5 spaces, we are the first to consider it for the problem of recipe generation, where output quality is heavily dependent on the content of the instructions—such as ingredients and cooking techniques. To summarize, our main contributions are as follows: We explore a new task of generating plausible and personalized recipes from incomplete input specifications by leveraging historical user preferences; We release a new dataset of 180K+ recipes and 700K+ user reviews for this task; We introduce new evaluation strategies for generation quality in instructional texts, centering on quantitative measures of coherence. We also show qualitatively and quantitatively that personalized models generate high-quality and specific recipes that align with historical user preferences. Related Work Large-scale transformer-based language models have shown surprising expressivity and fluency in creative and conditional long-text generation BIBREF6, BIBREF7. Recent works have proposed hierarchical methods that condition on narrative frameworks to generate internally consistent long texts BIBREF8, BIBREF9, BIBREF10. Here, we generate procedurally structured recipes instead of free-form narratives. Recipe generation belongs to the field of data-to-text natural language generation BIBREF1, which sees other applications in automated journalism BIBREF11, question-answering BIBREF12, and abstractive summarization BIBREF13, among others. BIBREF14, BIBREF15 model recipes as a structured collection of ingredient entities acted upon by cooking actions. BIBREF0 imposes a `checklist' attention constraint emphasizing hitherto unused ingredients during generation. BIBREF16 attend over explicit ingredient references in the prior recipe step. 
Similar hierarchical approaches that infer a full ingredient list to constrain generation will not help personalize recipes, and would be infeasible in our setting due to the potentially unconstrained number of ingredients (from a space of 10K+) in a recipe. We instead learn historical preferences to guide full recipe generation. A recent line of work has explored user- and item-dependent aspect-aware review generation BIBREF3, BIBREF4. This work is related to ours in that it combines contextual language generation with personalization. Here, we attend over historical user preferences from previously consumed recipes to generate recipe content, rather than writing styles. Approach Our model's input specification consists of: the recipe name as a sequence of tokens, a partial list of ingredients, and a caloric level (high, medium, low). It outputs the recipe instructions as a token sequence: $\mathcal {W}_r=\lbrace w_{r,0}, \dots , w_{r,T}\rbrace $ for a recipe $r$ of length $T$. To personalize output, we use historical recipe interactions of a user $u \in \mathcal {U}$. Encoder: Our encoder has three embedding layers: vocabulary embedding $\mathcal {V}$, ingredient embedding $\mathcal {I}$, and caloric-level embedding $\mathcal {C}$. Each token in the (length $L_n$) recipe name is embedded via $\mathcal {V}$; the embedded token sequence is passed to a two-layered bidirectional GRU (BiGRU) BIBREF17, which outputs hidden states for names $\lbrace \mathbf {n}_{\text{enc},j} \in \mathbb {R}^{2d_h}\rbrace $, with hidden size $d_h$. Similarly each of the $L_i$ input ingredients is embedded via $\mathcal {I}$, and the embedded ingredient sequence is passed to another two-layered BiGRU to output ingredient hidden states as $\lbrace \mathbf {i}_{\text{enc},j} \in \mathbb {R}^{2d_h}\rbrace $. The caloric level is embedded via $\mathcal {C}$ and passed through a projection layer with weights $W_c$ to generate calorie hidden representation $\mathbf {c}_{\text{enc}} \in \mathbb {R}^{2d_h}$. Ingredient Attention: We apply attention BIBREF18 over the encoded ingredients to use encoder outputs at each decoding time step. We define an attention-score function $\alpha $ with key $K$ and query $Q$: with trainable weights $W_{\alpha }$, bias $\mathbf {b}_{\alpha }$, and normalization term $Z$. At decoding time $t$, we calculate the ingredient context $\mathbf {a}_{t}^{i} \in \mathbb {R}^{d_h}$ as: Decoder: The decoder is a two-layer GRU with hidden state $h_t$ conditioned on previous hidden state $h_{t-1}$ and input token $w_{r, t}$ from the original recipe text. We project the concatenated encoder outputs as the initial decoder hidden state: To bias generation toward user preferences, we attend over a user's previously reviewed recipes to jointly determine the final output token distribution. We consider two different schemes to model preferences from user histories: (1) recipe interactions, and (2) techniques seen therein (defined in data). BIBREF19, BIBREF20, BIBREF21 explore similar schemes for personalized recommendation. Prior Recipe Attention: We obtain the set of prior recipes for a user $u$: $R^+_u$, where each recipe can be represented by an embedding from a recipe embedding layer $\mathcal {R}$ or an average of the name tokens embedded by $\mathcal {V}$. We attend over the $k$-most recent prior recipes, $R^{k+}_u$, to account for temporal drift of user preferences BIBREF22. These embeddings are used in the `Prior Recipe' and `Prior Name' models, respectively. 
Given a recipe representation $\mathbf {r} \in \mathbb {R}^{d_r}$ (where $d_r$ is recipe- or vocabulary-embedding size depending on the recipe representation) the prior recipe attention context $\mathbf {a}_{t}^{r_u}$ is calculated as Prior Technique Attention: We calculate prior technique preference (used in the `Prior Tech` model) by normalizing co-occurrence between users and techniques seen in $R^+_u$, to obtain a preference vector $\rho _{u}$. Each technique $x$ is embedded via a technique embedding layer $\mathcal {X}$ to $\mathbf {x}\in \mathbb {R}^{d_x}$. Prior technique attention is calculated as where, inspired by copy mechanisms BIBREF23, BIBREF24, we add $\rho _{u,x}$ for technique $x$ to emphasize the attention by the user's prior technique preference. Attention Fusion Layer: We fuse all contexts calculated at time $t$, concatenating them with decoder GRU output and previous token embedding: We then calculate the token probability: and maximize the log-likelihood of the generated sequence conditioned on input specifications and user preferences. fig:ex shows a case where the Prior Name model attends strongly on previously consumed savory recipes to suggest the usage of an additional ingredient (`cilantro'). Recipe Dataset: Food.com We collect a novel dataset of 230K+ recipe texts and 1M+ user interactions (reviews) over 18 years (2000-2018) from Food.com. Here, we restrict to recipes with at least 3 steps, and at least 4 and no more than 20 ingredients. We discard users with fewer than 4 reviews, giving 180K+ recipes and 700K+ reviews, with splits as in tab:recipeixnstats. Our model must learn to generate from a diverse recipe space: in our training data, the average recipe length is 117 tokens with a maximum of 256. There are 13K unique ingredients across all recipes. Rare words dominate the vocabulary: 95% of words appear $<$100 times, accounting for only 1.65% of all word usage. As such, we perform Byte-Pair Encoding (BPE) tokenization BIBREF25, BIBREF26, giving a training vocabulary of 15K tokens across 19M total mentions. User profiles are similarly diverse: 50% of users have consumed $\le $6 recipes, while 10% of users have consumed $>$45 recipes. We order reviews by timestamp, keeping the most recent review for each user as the test set, the second most recent for validation, and the remainder for training (sequential leave-one-out evaluation BIBREF27). We evaluate only on recipes not in the training set. We manually construct a list of 58 cooking techniques from 384 cooking actions collected by BIBREF15; the most common techniques (bake, combine, pour, boil) account for 36.5% of technique mentions. We approximate technique adherence via string match between the recipe text and technique list. Experiments and Results For training and evaluation, we provide our model with the first 3-5 ingredients listed in each recipe. We decode recipe text via top-$k$ sampling BIBREF7, finding $k=3$ to produce satisfactory results. We use a hidden size $d_h=256$ for both the encoder and decoder. Embedding dimensions for vocabulary, ingredient, recipe, techniques, and caloric level are 300, 10, 50, 50, and 5 (respectively). For prior recipe attention, we set $k=20$, the 80th %-ile for the number of user interactions. We use the Adam optimizer BIBREF28 with a learning rate of $10^{-3}$, annealed with a decay rate of 0.9 BIBREF29. We also use teacher-forcing BIBREF30 in all training epochs. 
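The decoding strategy mentioned above (top-k sampling with k=3) can be sketched as follows; the decoder_step call in the usage comment is a placeholder for the model-specific logic, not an actual function of the described system.

```python
import torch
import torch.nn.functional as F

def top_k_sample(logits: torch.Tensor, k: int = 3) -> int:
    """Sample the next token id from the k highest-probability tokens (k=3 in the text)."""
    top_logits, top_idx = logits.topk(k)
    probs = F.softmax(top_logits, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)
    return top_idx[choice].item()

# Usage inside a decoding loop (model-specific details omitted):
# logits = decoder_step(prev_token, hidden, fused_context)   # (vocab_size,)
# next_token = top_k_sample(logits, k=3)
```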
In this work, we investigate how leveraging historical user preferences can improve generation quality over strong baselines in our setting. We compare our personalized models against two baselines. The first is a name-based Nearest-Neighbor model (NN). We initially adapted the Neural Checklist Model of BIBREF0 as a baseline; however, we ultimately use a simple Encoder-Decoder baseline with ingredient attention (Enc-Dec), which provides comparable performance and lower complexity. All personalized models outperform baseline in BPE perplexity (tab:metricsontest) with Prior Name performing the best. While our models exhibit comparable performance to baseline in BLEU-1/4 and ROUGE-L, we generate more diverse (Distinct-1/2: percentage of distinct unigrams and bigrams) and acceptable recipes. BLEU and ROUGE are not the most appropriate metrics for generation quality. A `correct' recipe can be written in many ways with the same main entities (ingredients). As BLEU-1/4 capture structural information via n-gram matching, they are not correlated with subjective recipe quality. This mirrors observations from BIBREF31, BIBREF8. We observe that personalized models make more diverse recipes than baseline. They thus perform better in BLEU-1 with more key entities (ingredient mentions) present, but worse in BLEU-4, as these recipes are written in a personalized way and deviate from gold on the phrasal level. Similarly, the `Prior Name' model generates more unigram-diverse recipes than other personalized models and obtains a correspondingly lower BLEU-1 score. Qualitative Analysis: We present sample outputs for a cocktail recipe in tab:samplerecipes, and additional recipes in the appendix. Generation quality progressively improves from generic baseline output to a blended cocktail produced by our best performing model. Models attending over prior recipes explicitly reference ingredients. The Prior Name model further suggests the addition of lemon and mint, which are reasonably associated with previously consumed recipes like coconut mousse and pork skewers. Personalization: To measure personalization, we evaluate how closely the generated text corresponds to a particular user profile. We compute the likelihood of generated recipes using identical input specifications but conditioned on ten different user profiles—one `gold' user who consumed the original recipe, and nine randomly generated user profiles. Following BIBREF8, we expect the highest likelihood for the recipe conditioned on the gold user. We measure user matching accuracy (UMA)—the proportion where the gold user is ranked highest—and Mean Reciprocal Rank (MRR) BIBREF32 of the gold user. All personalized models beat baselines in both metrics, showing our models personalize generated recipes to the given user profiles. The Prior Name model achieves the best UMA and MRR by a large margin, revealing that prior recipe names are strong signals for personalization. Moreover, the addition of attention mechanisms to capture these signals improves language modeling performance over a strong non-personalized baseline. Recipe Level Coherence: A plausible recipe should possess a coherent step order, and we evaluate this via a metric for recipe-level coherence. We use the neural scoring model from BIBREF33 to measure recipe-level coherence for each generated recipe. Each recipe step is encoded by BERT BIBREF34. 
Our scoring model is a GRU network that learns the overall recipe step ordering structure by minimizing the cosine similarity of recipe step hidden representations presented in the correct and reverse orders. Once pretrained, our scorer calculates the similarity of a generated recipe to the forward and backwards ordering of its corresponding gold label, giving a score equal to the difference between the former and latter. A higher score indicates better step ordering (with a maximum score of 2). tab:coherencemetrics shows that our personalized models achieve average recipe-level coherence scores of 1.78-1.82, surpassing the baseline at 1.77. Recipe Step Entailment: Local coherence is also crucial to a user following a recipe: it is crucial that subsequent steps are logically consistent with prior ones. We model local coherence as an entailment task: predicting the likelihood that a recipe step follows the preceding. We sample several consecutive (positive) and non-consecutive (negative) pairs of steps from each recipe. We train a BERT BIBREF34 model to predict the entailment score of a pair of steps separated by a [SEP] token, using the final representation of the [CLS] token. The step entailment score is computed as the average of scores for each set of consecutive steps in each recipe, averaged over every generated recipe for a model, as shown in tab:coherencemetrics. Human Evaluation: We presented 310 pairs of recipes for pairwise comparison BIBREF8 (details in appendix) between baseline and each personalized model, with results shown in tab:metricsontest. On average, human evaluators preferred personalized model outputs to baseline 63% of the time, confirming that personalized attention improves the semantic plausibility of generated recipes. We also performed a small-scale human coherence survey over 90 recipes, in which 60% of users found recipes generated by personalized models to be more coherent and preferable to those generated by baseline models. Conclusion In this paper, we propose a novel task: to generate personalized recipes from incomplete input specifications and user histories. On a large novel dataset of 180K recipes and 700K reviews, we show that our personalized generative models can generate plausible, personalized, and coherent recipes preferred by human evaluators for consumption. We also introduce a set of automatic coherence measures for instructional texts as well as personalization metrics to support our claims. Our future work includes generating structured representations of recipes to handle ingredient properties, as well as accounting for references to collections of ingredients (e.g. “dry mix"). Acknowledgements. This work is partly supported by NSF #1750063. We thank all reviewers for their constructive suggestions, as well as Rei M., Sujoy P., Alicia L., Eric H., Tim S., Kathy C., Allen C., and Micah I. for their feedback. Appendix ::: Food.com: Dataset Details Our raw data consists of 270K recipes and 1.4M user-recipe interactions (reviews) scraped from Food.com, covering a period of 18 years (January 2000 to December 2018). See tab:int-stats for dataset summary statistics, and tab:samplegk for sample information about one user-recipe interaction and the recipe involved. Appendix ::: Generated Examples See tab:samplechx for a sample recipe for chicken chili and tab:samplewaffle for a sample recipe for sweet waffles. 
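A small sketch of the personalization evaluation described earlier (user matching accuracy and mean reciprocal rank over one gold and nine random user profiles); the input format is an assumption made for illustration.

```python
def personalization_metrics(ranked_likelihoods):
    """ranked_likelihoods: list of lists; each inner list holds the model likelihood of one
    generated recipe under 10 user profiles, with the gold user's likelihood first."""
    uma_hits, reciprocal_ranks = 0, []
    for scores in ranked_likelihoods:
        gold = scores[0]
        # Rank of the gold user when profiles are sorted by likelihood (1 = best).
        rank = 1 + sum(1 for s in scores[1:] if s > gold)
        uma_hits += int(rank == 1)
        reciprocal_ranks.append(1.0 / rank)
    n = len(ranked_likelihoods)
    return uma_hits / n, sum(reciprocal_ranks) / n   # (UMA, MRR)
```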
Human Evaluation We prepared a set of 15 pairwise comparisons per evaluation session, and collected 930 pairwise evaluations (310 per personalized model) over 62 sessions. For each pair, users were given a partial recipe specification (name and 3-5 key ingredients), as well as two generated recipes labeled `A' and `B'. One recipe is generated from our baseline encoder-decoder model and one recipe is generated by one of our three personalized models (Prior Tech, Prior Name, Prior Recipe). The order of recipe presentation (A/B) is randomly selected for each question. A screenshot of the user evaluation interface is given in fig:exeval. We ask the user to indicate which recipe they find more coherent, and which recipe best accomplishes the goal indicated by the recipe name. A screenshot of this survey interface is given in fig:exeval2.
What are the baseline models?
name-based Nearest-Neighbor model (NN), Encoder-Decoder baseline with ingredient attention (Enc-Dec)
2,655
qasper
3k
Introduction The Flickr30K dataset BIBREF0 is a collection of over 30,000 images with 5 crowdsourced descriptions each. It is commonly used to train and evaluate neural network models that generate image descriptions (e.g. BIBREF2 ). An untested assumption behind the dataset is that the descriptions are based on the images, and nothing else. Here are the authors (about the Flickr8K dataset, a subset of Flickr30K): “By asking people to describe the people, objects, scenes and activities that are shown in a picture without giving them any further information about the context in which the picture was taken, we were able to obtain conceptual descriptions that focus only on the information that can be obtained from the image alone.” BIBREF1 What this assumption overlooks is the amount of interpretation or recontextualization carried out by the annotators. Let us take a concrete example. Figure FIGREF1 shows an image from the Flickr30K dataset. This image comes with the five descriptions below. All but the first one contain information that cannot come from the image alone. Relevant parts are highlighted in bold: We need to understand that the descriptions in the Flickr30K dataset are subjective descriptions of events. This can be a good thing: the descriptions tell us what are the salient parts of each image to the average human annotator. So the two humans in Figure FIGREF1 are relevant, but the two soap dispensers are not. But subjectivity can also result in stereotypical descriptions, in this case suggesting that the male is more likely to be the manager, and the female is more likely to be the subordinate. rashtchian2010collecting do note that some descriptions are speculative in nature, which they say hurts the accuracy and the consistency of the descriptions. But the problem is not with the lack of consistency here. Quite the contrary: the problem is that stereotypes may be pervasive enough for the data to be consistently biased. And so language models trained on this data may propagate harmful stereotypes, such as the idea that women are less suited for leadership positions. This paper aims to give an overview of linguistic bias and unwarranted inferences resulting from stereotypes and prejudices. I will build on earlier work on linguistic bias in general BIBREF3 , providing examples from the Flickr30K data, and present a taxonomy of unwarranted inferences. Finally, I will discuss several methods to analyze the data in order to detect biases. Stereotype-driven descriptions Stereotypes are ideas about how other (groups of) people commonly behave and what they are likely to do. These ideas guide the way we talk about the world. I distinguish two kinds of verbal behavior that result from stereotypes: (i) linguistic bias, and (ii) unwarranted inferences. The former is discussed in more detail by beukeboom2014mechanisms, who defines linguistic bias as “a systematic asymmetry in word choice as a function of the social category to which the target belongs.” So this bias becomes visible through the distribution of terms used to describe entities in a particular category. Unwarranted inferences are the result of speculation about the image; here, the annotator goes beyond what can be glanced from the image and makes use of their knowledge and expectations about the world to provide an overly specific description. Such descriptions are directly identifiable as such, and in fact we have already seen four of them (descriptions 2–5) discussed earlier. 
Linguistic bias Generally speaking, people tend to use more concrete or specific language when they have to describe a person who does not meet their expectations. beukeboom2014mechanisms lists several linguistic `tools' that people use to mark individuals who deviate from the norm. I will mention two of them. One well-studied example BIBREF4 , BIBREF5 is sexist language, where the sex of a person tends to be mentioned more frequently if their role or occupation is inconsistent with `traditional' gender roles (e.g. female surgeon, male nurse). Beukeboom also notes that adjectives are used to create “more narrow labels [or subtypes] for individuals who do not fit with general social category expectations” (p. 3). E.g. tough woman makes an exception to the `rule' that women aren't considered to be tough. Negations can be used when prior beliefs about a particular social category are violated, e.g. The garbage man was not stupid. See also BIBREF6 . These examples are similar in that the speaker has to put in additional effort to mark the subject for being unusual. But they differ in what we can conclude about the speaker, especially in the context of the Flickr30K data. Negations display the annotator's prior beliefs much more overtly. When one annotator writes that A little boy is eating pie without utensils (image 2659046789), this immediately reveals the annotator's normative beliefs about the world: pie should be eaten with utensils. But when another annotator talks about a girls basketball game (image 8245366095), this cannot be taken as an indication that the annotator is biased about the gender of basketball players; they might just be being helpful by providing a detailed description. In section 3 I will discuss how to establish whether or not there is any bias in the data regarding the use of adjectives. Unwarranted inferences Unwarranted inferences are statements about the subject(s) of an image that go beyond what the visual data alone can tell us. They are based on additional assumptions about the world. After inspecting a subset of the Flickr30K data, I have grouped these inferences into six categories (image examples between parentheses). We have already seen an example of the first kind in the introduction, where the `manager' was said to be talking about job performance and scolding [a worker] in a stern lecture (8063007). Many dark-skinned individuals are called African-American regardless of whether the picture has been taken in the USA or not (4280272), and people who look Asian are called Chinese (1434151732) or Japanese (4834664666). In image 4183120 (Figure FIGREF16 ), people sitting at a gym are said to be watching a game, even though there could be any sort of event going on; but since the location is so strongly associated with sports, crowdworkers readily make the assumption. Quite a few annotations focus on explaining the why of the situation. For example, in image 3963038375 a man is fastening his climbing harness in order to have some fun. And in an extreme case, one annotator writes about a picture of a dancing woman that the school is having a special event in order to show the american culture on how other cultures are dealt with in parties (3636329461). This is reminiscent of the Stereotypic Explanatory Bias BIBREF7 , which refers to “the tendency to provide relatively more explanations in descriptions of stereotype inconsistent, compared to consistent behavior” BIBREF6 .
So in theory, odd or surprising situations should receive more explanations, since a description alone may not make enough sense in those cases, but it is beyond the scope of this paper to test whether or not the Flickr30K data suffers from the SEB. Older people with children around them are commonly seen as parents (5287405), small children as siblings (205842), men and women as lovers (4429660), and groups of young people as friends (36979). Annotators will also often guess the status or occupation of people in an image. Sometimes these guesses are relatively general (e.g. college-aged people being called students in image 36979), but other times they are very specific (e.g. a man in a workshop being called a graphics designer, 5867606). Detecting stereotype-driven descriptions In order to get an idea of the kinds of stereotype-driven descriptions that are in the Flickr30K dataset, I made a browser-based annotation tool that shows both the images and their associated descriptions. You can simply leaf through the images by clicking `Next' or `Random' until you find an interesting pattern. Ethnicity/race One interesting pattern is that the ethnicity/race of babies doesn't seem to be mentioned unless the baby is black or asian. In other words: white seems to be the default, and others seem to be marked. How can we tell whether or not the data is actually biased? We don't know whether or not an entity belongs to a particular social class (in this case: ethnic group) until it is marked as such. But we can approximate the proportion by looking at all the images where the annotators have used a marker (in this case: adjectives like black, white, asian), and for those images counting how many descriptions (out of five) contain a marker. This gives us an upper bound that tells us how often ethnicity is indicated by the annotators. Note that this upper bound lies somewhere between 20% (one description) and 100% (five descriptions). Table TABREF22 presents count data for the ethnic marking of babies. It includes two false positives (descriptions talking about a white baby stroller rather than a white baby). In the Asian group there is an additional complication: sometimes the mother gets marked rather than the baby, e.g. An Asian woman holds a baby girl. I have counted these occurrences as well. The numbers in Table TABREF22 are striking: there seems to be a real, systematic difference in ethnicity marking between the groups. We can take this one step further and look at all 697 pictures with the word `baby' in their descriptions. If there turn out to be disproportionately many white babies, this strengthens the conclusion that the dataset is biased. I have manually categorized each of the baby images. There are 504 white, 66 asian, and 36 black babies. 73 images do not contain a baby, and 18 images do not fall into any of the other categories. While this does bring down the average number of times each category was marked, it also increases the contrast between white babies (who get marked in less than 1% of the images) and asian/black babies (who get marked much more often). A next step would be to see whether these observations also hold for other age groups, i.e. children and adults. Other methods It may be difficult to spot patterns by just looking at a collection of images. Another method is to tag all descriptions with part-of-speech information, so that it becomes possible to see e.g. which adjectives are most commonly used for particular nouns.
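A sketch of the upper-bound counting procedure described above, with an illustrative marker list; a real analysis would need a more careful marker inventory and false-positive filtering (e.g. the white baby stroller case).

```python
import re

MARKERS = re.compile(r"\b(white|black|asian)\b", re.IGNORECASE)  # illustrative marker list

def marking_counts(descriptions_by_image):
    """descriptions_by_image: dict mapping image id -> list of 5 crowdsourced descriptions.
    For every image where a marker is used at all, return how many of its descriptions
    (out of five) contain an ethnicity marker."""
    counts = {}
    for image_id, descriptions in descriptions_by_image.items():
        n_marked = sum(1 for d in descriptions if MARKERS.search(d))
        if n_marked > 0:
            counts[image_id] = n_marked
    return counts

# The per-image counts give the upper bound discussed above: between 1/5 (20%)
# and 5/5 (100%) of the descriptions of a marked image mention ethnicity.
```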
One method readers may find particularly useful is to leverage the structure of Flickr30K Entities BIBREF8 . This dataset enriches Flickr30K by adding coreference annotations, i.e. which phrase in each description refers to the same entity in the corresponding image. I have used this data to create a coreference graph by linking all phrases that refer to the same entity. Following this, I applied Louvain clustering BIBREF9 to the coreference graph, resulting in clusters of expressions that refer to similar entities. Looking at those clusters helps to get a sense of the enormous variation in referring expressions. To get an idea of the richness of this data, here is a small sample of the phrases used to describe beards (cluster 268): a scruffy beard; a thick beard; large white beard; a bubble beard; red facial hair; a braided beard; a flaming red beard. In this case, `red facial hair' really stands out as a description; why not choose the simpler `beard' instead? Discussion In the previous section, I have outlined several methods to manually detect stereotypes, biases, and odd phrases. Because there are many ways in which a phrase can be biased, it is difficult to automatically detect bias from the data. So how should we deal with stereotype-driven descriptions? Conclusion This paper provided a taxonomy of stereotype-driven descriptions in the Flickr30K dataset. I have divided these descriptions into two classes: linguistic bias and unwarranted inferences. The former corresponds to the annotators' choice of words when confronted with an image that may or may not match their stereotypical expectancies. The latter corresponds to the tendency of annotators to go beyond what the physical data can tell us, and expand their descriptions based on their past experiences and knowledge of the world. Acknowledging these phenomena is important, because on the one hand it helps us think about what is learnable from the data, and on the other hand it serves as a warning: if we train and evaluate language models on this data, we are effectively teaching them to be biased. I have also looked at methods to detect stereotype-driven descriptions, but due to the richness of language it is difficult to find an automated measure. Depending on whether your goal is production or interpretation, it may either be useful to suppress or to emphasize biases in human language. Finally, I have discussed stereotyping behavior as the addition of a contextual layer on top of a more basic description. This raises the question what kind of descriptions we would like our models to produce. Acknowledgments Thanks to Piek Vossen and Antske Fokkens for discussion, and to Desmond Elliott and an anonymous reviewer for comments on an earlier version of this paper. This research was supported by the Netherlands Organization for Scientific Research (NWO) via the Spinoza-prize awarded to Piek Vossen (SPI 30-673, 2014-2019).
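A sketch of the coreference-graph clustering described in the "Other methods" discussion above, assuming phrase-level coreference chains have already been extracted from Flickr30K Entities; it uses the Louvain implementation bundled with recent versions of networkx rather than whatever tooling the author used.

```python
import itertools
import networkx as nx

def phrase_clusters(coreference_chains):
    """coreference_chains: iterable of lists of phrases, where the phrases in one list
    all refer to the same entity in some image (as annotated in Flickr30K Entities)."""
    graph = nx.Graph()
    for chain in coreference_chains:
        # Link all phrases that refer to the same entity.
        for a, b in itertools.combinations(set(chain), 2):
            graph.add_edge(a.lower(), b.lower())
    # Louvain community detection over the coreference graph.
    return nx.community.louvain_communities(graph, seed=0)

clusters = phrase_clusters([
    ["a scruffy beard", "a thick beard", "red facial hair"],
    ["a thick beard", "a braided beard"],
])
print(clusters)  # phrases describing the same kind of entity end up in one cluster
```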
Which methods are considered to find examples of biases and unwarranted inferences?
spot patterns by just looking at a collection of images, tag all descriptions with part-of-speech information, I applied Louvain clustering
2,204
qasper
3k
Winograd Schemas A Winograd schema (Levesque, Davis, and Morgenstern 2012) is a pair of sentences, or of short texts, called the elements of the schema, that satisfy the following constraints: The following is an example of a Winograd schema: Here, the two sentences differ only in the last word: `large' vs. `small'. The ambiguous pronoun is `it'. The two antecedents are `trophy' and `brown suitcase'. A human reader will naturally interpret `it' as referring to the trophy in the first sentence and to the suitcase in the second sentence, using the world knowledge that a small object can fit in a large container, but a large object cannot fit in a small container (Davis 2013). Condition 4 is satisfied because either a trophy or a suitcase can be either large or small, and there is no reason to suppose that there would be a meaningful correlation of `small' or `large' with `trophy' or `suitcase' in a typical corpus. We will say that an element of a Winograd schema is “solved” if the referent of the pronoun is identified. An example of a pair of sentences satisfying conditions 1-3 but not 4 would be Since women cannot be carcinogenic and pills cannot be pregnant, the pronoun `they' in these sentences is easily disambiguated using selectional restrictions. This pair is therefore not a valid Winograd schema. The Winograd Schema Challenge (WSC) is a challenge for AI programs. The program is presented with a collection of sentences, each of which is one element of a Winograd schema, and is required to find the correct referent for the ambiguous pronoun. An AI program passes the challenge if its success rate is comparable to a human reader. The challenge is administered by commonsensereasoning.org and sponsored by Nuance Inc. It was offered for the first time at IJCAI-2016 (Morgenstern, Davis, and Ortiz, in preparation); the organizers plan to continue to offer it roughly once a year. Winograd schemas as the basis for challenges for machine translation programs In many cases, the identification of the referent of the prounoun in a Winograd schema is critical for finding the correct translation of that pronoun in a different language. Therefore, Winograd schemas can be used as a very difficult challenge for the depth of understanding achieved by a machine translation program. The third person plural pronoun `they' has no gender in English and most other languages (unlike the third person singular pronouns `he', `she', and `it'). However Romance languages such as French, Italian, and Spanish, and Semitic languages such as Hebrew and Arabic distinguish between the masculine and the feminine third-person plural pronouns, at least in some grammatical cases. For instance in French, the masculine pronoun is `ils'; the feminine pronoun is `elles'. In all of these cases, the masculine pronoun is standardly used for groups of mixed or unknown gender. In order to correctly translate a sentence in English containing the word `they' into one of these languages, it is necessary to determine whether or not the referent is a group of females. If it is, then the translation must be the feminine pronoun; otherwise, it must be the masculine pronoun. Therefore, if one can create a Winograd schema in English where the ambiguous pronoun is `they' and the correct referent for one element is a collection of men and for the other is a collection of women, then to translate both elements correctly requires solving the Winograd schema. 
As an example, consider the Winograd schema: If these sentences are translated into French, then `they' in the first sentence should be translated `elles', as referring to Jane and Susan, and `they' in the second sentence should be translated `ils', as referring to Fred and George. A number of the Winograd schemas already published can be “converted” quite easily to this form. Indeed, the above example was constructed in this way; the original form was “Jane knocked on Susan's door, but she did not [answer/get an answer].” Of the 144 schemas in the collection at http://www.cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WSCollection.html there are 33 that can plausibly be translated this way. (#'s 3, 4, 12, 15, 17, 18, 23, 25, 26, 27, 42, 43, 44, 45, 46, 57, 58, 60, 69, 70, 71, 77, 78, 79, 86, 97, 98, 106, 111, 114, 127, 132, 144) — essentially any schema that involves two people as possible referents where the content of the sentences makes sense as applied to groups of people rather than individuals. A similar device, in the opposite direction, relies on the fact that French does not distinguish between the possessive pronouns `his' and `her'. The pronouns `son' and `sa' are gendered, but the gender agrees with the possession, not the possessor. Therefore a Winograd schema that relies on finding a referent for a possessive pronoun can be turned into a hard pair of French-to-English translation problems by making one possible referent male and the other one female. For example, schema #124 in the online collection reads “The man lifted the boy onto his [bunk bed/shoulders].” Changing the boy to a girl and translating into French gives “L'homme leva la fille sur [ses épaules/son lit superposé]”. In the first sentence “ses” is translated “his”, in the second “son” is translated “her”. The eleven Winograd schemas #'s 112, 117, 118, 119, 124, 125, 127, 129, 130, 131, and 143 can be converted this way. Care must be taken to avoid relying on, or seeming to rely on, objectionable stereotypes about men and women. One mechanism that can sometimes be used is to include a potentially problematic sentence in both directions. For instance, schema 23 from the WSC collection can be translated into both “The girls were bullying the boys so we [punished/rescued] them” and “The boys were bullying the girls, so we [punished/rescued] them,” thus avoiding any presupposition of whether girls are more likely to bully boys or vice versa. A historical note: Winograd schemas were named after Terry Winograd because of a well-known example in his doctoral thesis (Winograd 1970). In his thesis, this example followed the above form, and the importance of the example was justified in terms of machine translation. Winograd's original schema was: “The city councilmen refused to give the women a permit for a demonstration because they [feared/advocated] violence", and Winograd explained that, in the sentence with `feared', `they' would refer to the councilmen and would be translated as `ils' in French, whereas in the sentence with `advocated', `they' would refer to the women and would be translated as `elles'. In the later versions of the thesis, published as (Winograd 1972), he changed `women' to `demonstrators'; this made the disambiguation clearer, but lost the point about translation. 
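A minimal sketch of how such a translation-oriented schema pair could be represented and scored under the WSC protocol described above; the sentence wording is an illustrative rendering of the Jane/Susan example (not the paper's exact text), and resolve_pronoun stands in for whatever system is being tested:

```python
from dataclasses import dataclass

@dataclass
class SchemaElement:
    text: str          # sentence containing the ambiguous pronoun
    pronoun: str       # e.g. "they"
    candidates: tuple  # the two possible antecedents
    answer: str        # the antecedent a human reader picks

# Illustrative plural ("they") version of the knock-on-the-door schema.
schema = [
    SchemaElement("Jane and Susan knocked on Fred and George's door, but they did not get an answer.",
                  "they", ("Jane and Susan", "Fred and George"), "Jane and Susan"),
    SchemaElement("Jane and Susan knocked on Fred and George's door, but they did not answer.",
                  "they", ("Jane and Susan", "Fred and George"), "Fred and George"),
]

def wsc_accuracy(elements, resolve_pronoun):
    """resolve_pronoun(text, pronoun, candidates) -> chosen antecedent (system under test)."""
    correct = sum(resolve_pronoun(e.text, e.pronoun, e.candidates) == e.answer
                  for e in elements)
    return correct / len(elements)
```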
Current state of the art No one familiar with the state of the art in machine translation technology or the state of the art of artificial intelligence generally will be surprised to learn that currently machine translation programs are unable to solve these Winograd schema challenge problems. What may be more surprising is that currently (July 2016), machine translation programs are unable to choose the feminine plural pronoun even when the only possible antecedent is a group of women. For instance, the sentence “The girls sang a song and they danced” is translated into French as “Les filles ont chanté une chanson et ils ont dansé” by Google Translate (GT), Bing Translate, and Yandex. In fact, I have been unable to construct any sentence in English that is translated into any language using the feminine plural pronoun. Note that, since the masculine plural pronoun is used for groups of mixed gender in all these languages, it is almost certainly more common in text than the feminine plural; hence this strategy is reasonable faute de mieux. (It also seems likely that the erroneous use of a feminine plural for a masculine antecedent sounds even more jarring than the reverse.) The same thing sometimes occurs in translating the feminine plural pronoun between languages that have it. GT translates the French word `elles' into Spanish as `ellos' (the masculine form). Curiously, in the opposite direction, it gets the right answer; the Spanish `ellas' (fem.) is translated into French as `elles'. Language-specific issues The masculine and feminine plural pronouns are distinguished in the Romance languages (French, Spanish, Italian, Portuguese, etc.) and in Semitic languages (Arabic, Hebrew, etc.). I have consulted with native speakers and experts in these languages about the degree to which the gender distinction is observed in practice. The experts say that in French, Spanish, Italian, and Portuguese, the distinction is very strictly observed; the use of a masculine pronoun for a feminine antecedent is jarringly wrong to a native or fluent speaker. “Les filles ont chanté une chanson et ils ont dansé” sounds as wrong to a French speaker as “The girl sang a song and he danced” sounds to an English speaker; in both cases, the hearer will interpret the pronoun as referring to some other persons or person, who is male. In Hebrew and Arabic, this is much less true; in speech, and even, increasingly, in writing, the masculine pronoun is often used for a feminine antecedent. Looking further ahead, it is certainly possible that gender distinctions will be abandoned in the Romance languages, or even that English will have driven all other languages out of existence, sooner than AI systems will be able to do pronoun resolution in Winograd schemas; at that point, this test will no longer be useful. In some cases, a translation program can side-step the issue by omitting the pronoun altogether. For example, GT translates the above sentence “The girls sang a song and they danced” into Spanish as “Las chicas cantaron una canción y bailaban” and into Italian as “Le ragazze hanno cantato una canzone e ballavano.” However, with the more complex sentences of the Winograd Schemas, this strategy will rarely give a plausible translation for both elements of the schema.
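A small harness in the spirit of the informal probe described above: feed English sentences whose only possible antecedent is a group of women to a translation system and check whether the French output uses `elles' rather than `ils'. The translate function is a placeholder for the MT system under test (the author used the web interfaces of Google Translate, Bing and Yandex by hand), and the second probe sentence is an added example of the same type:

```python
# Sentences where 'they' can only refer to a group of women, so a correct
# French translation must use 'elles' rather than 'ils'.
PROBES = [
    "The girls sang a song and they danced.",
    "The women finished their dinner and then they left.",
]

def uses_feminine_plural(french_sentence: str) -> bool:
    tokens = french_sentence.lower().replace(",", " ").split()
    # Note: a translation that drops the pronoun entirely also returns False here.
    return "elles" in tokens and "ils" not in tokens

def probe_mt_system(translate):
    """translate(text, src='en', tgt='fr') -> str  (placeholder for the MT system under test)."""
    return {sentence: uses_feminine_plural(translate(sentence, src="en", tgt="fr"))
            for sentence in PROBES}
```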
Other languages, other ambiguities Broadly speaking, whenever a target language T requires some distinction that is optional or non-existent in a source language S, it is possible to create a sentence A in S where the missing information is not explicit but can be inferred from background knowledge. Translating A from S to T thus requires using the background knowledge to resolve the ambiguity, and will therefore be challenging for automatic machine translation. A couple of examples: The word `sie' in German serves as the formal second person pronoun (always capitalized), the third person feminine singular, and the third person plural. Therefore, it can be translated into English as “you”, “she”, “it”, or “they”; and into French as `vous', `il', `elle', `ils', or `elles'. (The feminine third-person singular German `sie' can be translated as neuter in English and as masculine in French because the three languages do not slice up the world into genders in the same way.) Likewise, the possessive pronoun `ihr' in all its declensions can mean either `her' or `their'. In some cases, the disambiguation can be carried out on purely syntactic grounds; e.g. if `sie' is the subject of a third-person singular verb, it must mean `she'. However, in many cases, the disambiguation requires a deeper level of understanding. Thus, it should be possible to construct German Winograd schemas based on the words `sie' or `ihr' that have to be solved in order to translate them into English. For example, Marie fand fünf verlassene Kätzchen im Keller. Ihre Mutter war gestorben. (Marie found five abandoned kittens in the cellar. Their mother was dead.) vs. Marie fand fünf verlassene Kätzchen im Keller. Ihre Mutter war unzufrieden. (Marie found five abandoned kittens in the cellar. Her mother was displeased.) Another example: French distinguishes between a male friend `ami' and a female friend `amie'. Therefore, in translating the word “friend” into French, it is necessary, if possible, to determine the sex of the friend; and the clue for that can involve an inference that, as far as AI programs are concerned, is quite remote. Human readers are awfully good at picking up on those, of course. In general, with this kind of thing, if you want to break GT or another translation program, the trick is to place a large separation between the evidence and the word being disambiguated. In this case, at the present time, the separation can be pretty small. GT correctly translates “my friend Pierre” and “my friend Marie” as “mon ami Pierre” and “mon amie Marie” and it translates “She is my friend” as “Elle est mon amie”, but rather surprisingly it breaks down at “Marie is my friend,” which it translates “Marie est mon ami.” As for something like “Jacques said to Marie, `You have always been a true friend,' ” that is quite hopeless. GT can surprise one, though, in both directions; sometimes it misses a very close clue, as in “Marie is my friend”, but other times it can carry a clue further than one would have guessed. Acknowledgements Thanks to Arianna Bisazza, Gerhard Brewka, Antoine Cerfon, Joseph Davis, Gigi Dopico-Black, Nizar Habash, Leora Morgenstern, Oded Regev, Francesca Rossi, Vesna Sabljakovic-Fritz, and Manuela Veloso for help with the various languages and discussion of gendered pronouns. References E. Davis, “Qualitative Spatial Reasoning in Interpreting Text and Narrative,” Spatial Cognition and Computation, 13(4), 2013, 264-294. H.
Levesque, E. Davis, and L. Morgenstern, “The Winograd Schema Challenge,” KR 2012. L. Morgenstern, E. Davis, and C. Ortiz, “Report on the Winograd Schema Challenge, 2016”, in preparation. T. Winograd, “Procedures as a Representation for Data in a Computer Program for Understanding Natural Language," Ph.D. thesis, Department of Mathematics, MIT, August 1970. Published as MIT AITR-235, January 1971. T. Winograd, Understanding Natural Language, Academic Press, 1972.
What language do they explore?
English, French, German
2,285
qasper
3k
Introduction Performance appraisal (PA) is an important HR process, particularly for modern organizations that crucially depend on the skills and expertise of their workforce. The PA process enables an organization to periodically measure and evaluate every employee's performance. It also provides a mechanism to link the goals established by the organization to each employee's day-to-day activities and performance. Design and analysis of PA processes is a lively area of research within the HR community BIBREF0, BIBREF1, BIBREF2, BIBREF3. The PA process in any modern organization is nowadays implemented and tracked through an IT system (the PA system) that records the interactions that happen in various steps. Availability of this data in a computer-readable database opens up opportunities to analyze it using automated statistical, data-mining and text-mining techniques, to generate novel and actionable insights/patterns and to help in improving the quality and effectiveness of the PA process BIBREF4, BIBREF5, BIBREF6. Automated analysis of large-scale PA data is now facilitated by technological and algorithmic advances, and is becoming essential for large organizations containing thousands of geographically distributed employees handling a wide variety of roles and tasks. A typical PA process involves purposeful multi-step multi-modal communication between employees, their supervisors and their peers. In most PA processes, the communication includes the following steps: (i) in self-appraisal, an employee records his/her achievements, activities, tasks handled etc.; (ii) in supervisor assessment, the supervisor provides the criticism, evaluation and suggestions for improvement of performance etc.; and (iii) in peer feedback (aka 360° view), the peers of the employee provide their feedback. There are several business questions that managers are interested in. Examples: In this paper, we develop text mining techniques that can automatically produce answers to these questions. Since the intended users are HR executives, ideally, the techniques should work with minimum training data and experimentation with parameter settings. These techniques have been implemented and are being used in a PA system in a large multi-national IT company. The rest of the paper is organized as follows. Section 2 summarizes related work. Section 3 summarizes the PA dataset used in this paper. Section 4 applies sentence classification algorithms to automatically discover three important classes of sentences in the PA corpus, viz., sentences that discuss strengths of employees, sentences that discuss weaknesses, and sentences that contain suggestions for improving their performance. Section 5 considers the problem of mapping the actual targets mentioned in strengths, weaknesses and suggestions to a fixed set of attributes. In Section 6, we discuss how the feedback from peers for a particular employee can be summarized. In Section 7, we draw conclusions and identify some further work. Related Work We first review some work related to sentence classification. Semantically classifying sentences (based on the sentence's purpose) is a much harder task, and is gaining increasing attention from linguists and NLP researchers. McKnight and Srinivasan BIBREF7 and Yamamoto and Takagi BIBREF8 used SVM to classify sentences in biomedical abstracts into classes such as INTRODUCTION, BACKGROUND, PURPOSE, METHOD, RESULT, CONCLUSION. Cohen et al.
BIBREF9 applied SVM and other techniques to learn classifiers for sentences in emails into classes, which are speech acts defined by a verb-noun pair, with verbs such as request, propose, amend, commit, deliver and nouns such as meeting, document, committee; see also BIBREF10 . Khoo et al. BIBREF11 uses various classifiers to classify sentences in emails into classes such as APOLOGY, INSTRUCTION, QUESTION, REQUEST, SALUTATION, STATEMENT, SUGGESTION, THANKING etc. Qadir and Riloff BIBREF12 proposes several filters and classifiers to classify sentences on message boards (community QA systems) into 4 speech acts: COMMISSIVE (speaker commits to a future action), DIRECTIVE (speaker expects listener to take some action), EXPRESSIVE (speaker expresses his or her psychological state to the listener), REPRESENTATIVE (represents the speaker's belief of something). Hachey and Grover BIBREF13 used SVM and maximum entropy classifiers to classify sentences in legal documents into classes such as FACT, PROCEEDINGS, BACKGROUND, FRAMING, DISPOSAL; see also BIBREF14 . Deshpande et al. BIBREF15 proposes unsupervised linguistic patterns to classify sentences into classes SUGGESTION, COMPLAINT. There is much work on a closely related problem viz., classifying sentences in dialogues through dialogue-specific categories called dialogue acts BIBREF16 , which we will not review here. Just as one example, Cotterill BIBREF17 classifies questions in emails into the dialogue acts of YES_NO_QUESTION, WH_QUESTION, ACTION_REQUEST, RHETORICAL, MULTIPLE_CHOICE etc. We could not find much work related to mining of performance appraisals data. Pawar et al. BIBREF18 uses kernel-based classification to classify sentences in both performance appraisal text and product reviews into classes SUGGESTION, APPRECIATION, COMPLAINT. Apte et al. BIBREF6 provides two algorithms for matching the descriptions of goals or tasks assigned to employees to a standard template of model goals. One algorithm is based on the co-training framework and uses goal descriptions and self-appraisal comments as two separate perspectives. The second approach uses semantic similarity under a weak supervision framework. Ramrakhiyani et al. BIBREF5 proposes label propagation algorithms to discover aspects in supervisor assessments in performance appraisals, where an aspect is modelled as a verb-noun pair (e.g. conduct training, improve coding). Dataset In this paper, we used the supervisor assessment and peer feedback text produced during the performance appraisal of 4528 employees in a large multi-national IT company. The corpus of supervisor assessment has 26972 sentences. The summary statistics about the number of words in a sentence is: min:4 max:217 average:15.5 STDEV:9.2 Q1:9 Q2:14 Q3:19. Sentence Classification The PA corpus contains several classes of sentences that are of interest. In this paper, we focus on three important classes of sentences viz., sentences that discuss strengths (class STRENGTH), weaknesses of employees (class WEAKNESS) and suggestions for improving her performance (class SUGGESTION). The strengths or weaknesses are mostly about the performance in work carried out, but sometimes they can be about the working style or other personal qualities. The classes WEAKNESS and SUGGESTION are somewhat overlapping; e.g., a suggestion may address a perceived weakness. Following are two example sentences in each class. STRENGTH: WEAKNESS: SUGGESTION: Several linguistic aspects of these classes of sentences are apparent. 
The subject is implicit in many sentences. The strengths are often mentioned as either noun phrases (NP) with positive adjectives (Excellent technology leadership) or positive nouns (engineering strength) or through verbs with positive polarity (dedicated) or as verb phrases containing positive adjectives (delivers innovative solutions). Similarly for weaknesses, where negation is more frequently used (presentations are not his forte), or alternatively, the polarities of verbs (avoid) or adjectives (poor) tend to be negative. However, sometimes the form of both the strengths and weaknesses is the same, typically a stand-alone sentiment-neutral NP, making it difficult to distinguish between them; e.g., adherence to timing or timely closure. Suggestions often have an imperative mood and contain secondary verbs such as need to, should, has to. Suggestions are sometimes expressed using comparatives (better process compliance). We built a simple set of patterns for each of the 3 classes on the POS-tagged form of the sentences. We use each set of these patterns as an unsupervised sentence classifier for that class. If a particular sentence matches patterns for multiple classes, then we have simple tie-breaking rules for picking the final class. The pattern for the STRENGTH class looks for the presence of positive words / phrases like takes ownership, excellent, hard working, commitment, etc. Similarly, the pattern for the WEAKNESS class looks for the presence of negative words / phrases like lacking, diffident, slow learner, less focused, etc. The SUGGESTION pattern not only looks for keywords like should, needs to but also for a POS-based pattern like “a verb in the base form (VB) in the beginning of a sentence”. We randomly selected 2000 sentences from the supervisor assessment corpus and manually tagged them (dataset D1). This labelled dataset contained 705, 103, 822 and 370 sentences having the class labels STRENGTH, WEAKNESS, SUGGESTION or OTHER respectively. We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10), we used our own implementation. The overall accuracy for a classifier is defined as the number of correctly classified sentences divided by the total number of sentences, where the denominator is 2000 for dataset D1. Note that the pattern-based approach is unsupervised, i.e., it did not use any training data. Hence, the results shown for it are for the entire dataset and not based on cross-validation. Comparison with Sentiment Analyzer We also explored whether a sentiment analyzer can be used as a baseline for identifying the class labels STRENGTH and WEAKNESS. We used a sentiment analyzer implementation from TextBlob to get a polarity score for each sentence. Table TABREF13 shows the distribution of positive, negative and neutral sentiments across the 3 class labels STRENGTH, WEAKNESS and SUGGESTION. It can be observed that the distribution of positive and negative sentiments is similar in STRENGTH as well as SUGGESTION sentences; hence we conclude that sentiment information is not very useful for our classification problem. Discovering Clusters within Sentence Classes After identifying sentences in each class, we can now answer question (1) in Section 1.
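The supervised classifiers above use only the sentence words and their frequencies as features; a minimal scikit-learn sketch of that setup with 5-fold cross-validation (Logistic Regression stands in here for whichever of the compared classifiers is used), together with the TextBlob polarity call behind the sentiment baseline:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from textblob import TextBlob

def evaluate_word_frequency_classifier(sentences, labels):
    # sentences: list of supervisor-assessment sentences
    # labels: one of "STRENGTH", "WEAKNESS", "SUGGESTION", "OTHER" per sentence
    clf = make_pipeline(CountVectorizer(lowercase=True), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, sentences, labels, cv=5, scoring="accuracy")
    return scores.mean()

def sentiment_label(sentence, margin=0.1):
    # Baseline check: TextBlob polarity in [-1, 1] mapped to a coarse label.
    polarity = TextBlob(sentence).sentiment.polarity
    if polarity > margin:
        return "positive"
    if polarity < -margin:
        return "negative"
    return "neutral"
```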
From 12742 sentences predicted to have label STRENGTH, we extract nouns that indicate the actual strength, and cluster them using a simple clustering algorithm which uses the cosine similarity between word embeddings of these nouns. We repeat this for the 9160 sentences with predicted label WEAKNESS or SUGGESTION as a single class. Tables TABREF15 and TABREF16 show a few representative clusters in strengths and in weaknesses, respectively. We also explored clustering the 12742 STRENGTH sentences directly using the CLUTO BIBREF19 and Carrot2 Lingo BIBREF20 clustering algorithms. Carrot2 Lingo discovered 167 clusters and also assigned labels to these clusters. We then generated 167 clusters using CLUTO as well. CLUTO does not generate cluster labels automatically, hence we used the 5 most frequent words within each cluster as its labels. Table TABREF19 shows the largest 5 clusters by both the algorithms. It was observed that the clusters created by CLUTO were more meaningful and informative as compared to those by Carrot2 Lingo. Also, it was observed that there is some correspondence between noun clusters and sentence clusters. E.g., the noun cluster (motivation, expertise, knowledge, talent, skill) (Table TABREF15) corresponds to the CLUTO sentence cluster (skill, customer, management, knowledge, team) (Table TABREF19). But overall, users found the noun clusters to be more meaningful than the sentence clusters. PA along Attributes In many organizations, PA is done from a predefined set of perspectives, which we call attributes. Each attribute covers one specific aspect of the work done by the employees. This has the advantage that we can easily compare the performance of any two employees (or groups of employees) along any given attribute. We can correlate various performance attributes and find dependencies among them. We can also cluster employees in the workforce using their supervisor ratings for each attribute to discover interesting insights into the workforce. The HR managers in the organization considered in this paper have defined 15 attributes (Table TABREF20). Each attribute is essentially a work item or work category described at an abstract level. For example, FUNCTIONAL_EXCELLENCE covers any tasks, goals or activities related to the software engineering life-cycle (e.g., requirements analysis, design, coding, testing etc.) as well as technologies such as databases, web services and GUI. In the example in Section 4, the first sentence (which has class STRENGTH) can be mapped to two attributes: FUNCTIONAL_EXCELLENCE and BUILDING_EFFECTIVE_TEAMS. Similarly, the third sentence (which has class WEAKNESS) can be mapped to the attribute INTERPERSONAL_EFFECTIVENESS and so forth. Thus, in order to answer the second question in Section 1, we need to map each sentence in each of the 3 classes to zero, one, two or more attributes, which is a multi-class multi-label classification problem. We manually tagged the same 2000 sentences in Dataset D1 with attributes, where each sentence may get 0, 1, 2, etc. up to 15 class labels (this is dataset D2). This labelled dataset contained 749, 206, 289, 207, 91, 223, 191, 144, 103, 80, 82, 42, 29, 15, 24 sentences having the class labels listed in Table TABREF20 in the same order. The numbers of sentences having 0, 1, 2, or more than 2 attributes are 321, 1070, 470 and 139, respectively. We trained several multi-class multi-label classifiers on this dataset. Table TABREF21 shows the results of 5-fold cross-validation experiments on dataset D2.
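The noun clustering step is described only as "a simple clustering algorithm which uses the cosine similarity between word embeddings"; one plausible reading is a greedy centroid-based grouping, sketched below. The embedding lookup embed and the 0.6 threshold are assumptions, not values from the paper:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def greedy_cluster(nouns, embed, threshold=0.6):
    """embed(word) -> 1-D numpy vector; threshold is an assumed similarity cut-off."""
    centroids, clusters = [], []
    for noun in nouns:
        vec = embed(noun)
        sims = [cosine(vec, c) for c in centroids]
        best = int(np.argmax(sims)) if sims else -1
        if sims and sims[best] >= threshold:
            clusters[best].append(noun)
            n = len(clusters[best])
            centroids[best] = (centroids[best] * (n - 1) + vec) / n  # running mean of members
        else:
            centroids.append(vec)
            clusters.append([noun])
    return clusters
```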
Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21. Let $\hat{Y}_i$ be the set of predicted labels and $Y_i$ be the set of actual labels for the $i$-th instance. Precision and recall for this instance are computed as follows: $Precision_i = \frac{|\hat{Y}_i \cap Y_i|}{|\hat{Y}_i|}$ and $Recall_i = \frac{|\hat{Y}_i \cap Y_i|}{|Y_i|}$. It can be observed that $Precision_i$ would be undefined if $\hat{Y}_i$ is empty, and similarly $Recall_i$ would be undefined when $Y_i$ is empty. Hence, overall precision and recall are computed by averaging over all the instances except where they are undefined. Instance-level F-measure cannot be computed for instances where either precision or recall is undefined. Therefore, overall F-measure is computed using the overall precision and recall. Summarization of Peer Feedback using ILP The PA system includes a set of peer feedback comments for each employee. To answer the third question in Section 1, we need to create a summary of all the peer feedback comments about a given employee. As an example, the following are the feedback comments from 5 peers of an employee. The individual sentences in the comments written by each peer are first identified and then POS tags are assigned to each sentence. We hypothesize that a good summary of these multiple comments can be constructed by identifying a set of important text fragments or phrases. Initially, a set of candidate phrases is extracted from these comments and a subset of these candidate phrases is chosen as the final summary, using Integer Linear Programming (ILP). The details of the ILP formulation are shown in Table TABREF36. As an example, the following is the summary generated for the above 5 peer comments: humble nature, effective communication, technical expertise, always supportive, vast knowledge. The following rules are used to identify candidate phrases: Various parameters are used to evaluate a candidate phrase for its importance. A candidate phrase is more important: A complete list of parameters is described in detail in Table TABREF36. There is a trivial constraint which makes sure that only $K$ out of the $N$ candidate phrases are chosen. A suitable value of $K$ is used for each employee depending on the number of candidate phrases identified across all peers (see the algorithm below). Another set of constraints makes sure that at least one phrase is selected for each of the leadership attributes. A further constraint makes sure that multiple phrases sharing the same headword are not chosen at the same time. Also, single-word candidate phrases are chosen only if they are adjectives or nouns with lexical category noun.attribute; this is imposed by another constraint. It is important to note that all the constraints except one are soft constraints, i.e. there may be feasible solutions which do not satisfy some of these constraints. But each constraint which is not satisfied results in a penalty through the use of slack variables. These constraints are described in detail in Table TABREF36. The objective function maximizes the total importance score of the selected candidate phrases. At the same time, it also minimizes the sum of all slack variables so that the minimum number of constraints are broken. (Algorithm: given $N$, the number of candidate phrases, determine $K$, the number of phrases to select as part of the summary.) Evaluation of auto-generated summaries We considered a dataset of 100 employees, where for each employee multiple peer comments were recorded. Also, for each employee, a manual summary was generated by an HR professional. The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. To compare the performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter which is required by all these algorithms is the number of sentences to keep in the final summary. ILP-based summarization requires a similar parameter K, which is automatically decided based on the total number of candidate phrases. Assuming a sentence is equivalent to roughly 3 phrases, for the Sumy algorithms we set the number-of-sentences parameter to the ceiling of K/3. Table TABREF51 shows the average and standard deviation of ROUGE unigram F1 scores for each algorithm, over the 100 summaries. The performance of ILP-based summarization is comparable with the other algorithms, as the two-sample t-test does not show a statistically significant difference. Also, human evaluators preferred the phrase-based summaries generated by our approach to the other sentence-based summaries. Conclusions and Further Work In this paper, we presented an analysis of the text generated in the Performance Appraisal (PA) process in a large multi-national IT company. We performed sentence classification to identify strengths, weaknesses and suggestions for improvement found in the supervisor assessments and then used clustering to discover broad categories among them. As this is non-topical classification, we found that SVM with the ADWS kernel BIBREF18 produced the best results. We also used multi-class multi-label classification techniques to match supervisor assessments to predefined broad perspectives on performance. The Logistic Regression classifier was observed to produce the best results for this topical classification. Finally, we proposed an ILP-based summarization technique to produce a summary of peer feedback comments for a given employee and compared it with manual summaries. The PA process also generates much structured data, such as supervisor ratings. It is an interesting problem to compare and combine the insights discovered from structured data and unstructured text. Also, we are planning to automatically discover additional performance attributes beyond the list of 15 attributes currently used by HR.
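The instance-level precision/recall scheme described above translates directly into code; a small sketch with label sets represented as Python sets, undefined instances skipped when averaging, and the overall F-measure computed from the overall precision and recall:

```python
def multilabel_scores(predicted, actual):
    """predicted, actual: lists of label sets, one pair per instance."""
    precisions, recalls = [], []
    for pred, gold in zip(predicted, actual):
        inter = len(pred & gold)
        if pred:                          # precision undefined when nothing is predicted
            precisions.append(inter / len(pred))
        if gold:                          # recall undefined when the gold label set is empty
            recalls.append(inter / len(gold))
    precision = sum(precisions) / max(len(precisions), 1)
    recall = sum(recalls) / max(len(recalls), 1)
    f_measure = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f_measure
```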
What summarization algorithms did the authors experiment with?
LSA, TextRank, LexRank and ILP-based summary.
3,045
qasper
3k
Introduction Natural languages evolve and words have always been subject to semantic change over time BIBREF1. With the rise of large digitized text resources recent NLP technologies have made it possible to capture such change with vector space models BIBREF2, BIBREF3, BIBREF4, BIBREF5, topic models BIBREF6, BIBREF7, BIBREF8, and sense clustering models BIBREF9. However, many approaches for detecting LSC differ profoundly from each other and therefore drawing comparisons between them can be challenging BIBREF10. Not only do architectures for detecting LSC vary, their performance is also often evaluated without access to evaluation data or too sparse data sets. In cases where evaluation data is available, oftentimes LSCD systems are not evaluated on the same data set which hinders the research community to draw comparisons. For this reason we report the results of the first shared task on unsupervised lexical semantic change detection in German that is based on an annotated data set to guarantee objective reasoning throughout different approaches. The task was organized as part of the seminar 'Lexical Semantic Change Detection' at the IMS Stuttgart in the summer term of 2019. Task The goal of the shared task was to create an architecture to detect semantic change and to rank words according to their degree of change between two different time periods. Given two corpora Ca and Cb, the target words had to be ranked according to their degree of lexical semantic change between Ca and Cb as annotated by human judges. A competition was set up on Codalab and teams mostly consisting of 2 people were formed to take part in the task. There was one group consisting of 3 team members and two individuals who entered the task on their own. In total there were 12 LSCD systems participating in the shared task. The shared task was divided into three phases, i.e., development, testing and analysis phase. In the development phase each team implemented a first version of their model based on a trial data set and submitted it subsequently. In the testing phase the testing data was made public and participants applied their models to the test data with a restriction of possible result uploads to 30. The leaderboard was public at all times. Eventually, the analysis phase was entered and the models of the testing phase were evaluated in terms of the predictions they made and parameters could be tuned further. The models and results will be discussed in detail in sections 7 and 8. Corpora The task, as framed above, requires to detect the semantic change between two corpora. The two corpora used in the shared task correspond to the diachronic corpus pair from BIBREF0: DTA18 and DTA19. They consist of subparts of DTA corpus BIBREF11 which is a freely available lemmatized, POS-tagged and spelling-normalized diachronic corpus of German containing texts from the 16th to the 20th century. DTA18 contains 26 million sentences published between 1750-1799 and DTA19 40 million between 1850-1899. The corpus version used in the task has the following format: "year [tab] lemma1 lemma2 lemma3 ...". Evaluation The Diachronic Usage Relatedness (DURel) gold standard data set includes 22 target words and their varying degrees of semantic change BIBREF12. For each of these target words a random sample of use pairs from the DTA corpus was retrieved and annotated. The annotators were required to rate the pairs according to their semantic relatedness on a scale from 1 to 4 (unrelated - identical meanings) for two time periods. 
The average Spearman's $\rho $ between the five annotators was 0.66 for 1,320 use pairs. The resulting word ranking of the DURel data set is determined by the mean usage relatedness across two time periods and is used as the benchmark to compare the models’ performances in the shared task. Evaluation ::: Metric The output of a system with the target words in the predicted order is compared to the gold ranking of the DURel data set. As the metric to assess how well the model's output fits the gold ranking, Spearman's $\rho $ was used. The higher the Spearman's rank-order correlation, the better the system's performance. Evaluation ::: Baselines Models were compared to two baselines for the shared task: (i) log-transformed normalized frequency difference (FD) and (ii) count vectors with column intersection and cosine distance (CNT + CI + CD). The window size for CNT + CI + CD was 10. Find more information on these models in BIBREF0. Participating Systems Participants mostly rely on the models compared in BIBREF0 and apply modifications to improve them. In particular, most teams make use of skip-gram with negative sampling (SGNS) based on BIBREF13 to learn the semantic spaces of the two time periods and orthogonal procrustes (OP) to align these vector spaces, similar to the approach by BIBREF14. Different meaning representations such as sense clusters are used as well. As the measure to detect the degree of LSC, all teams except one chose cosine distance (CD). This team uses Jensen-Shannon distance (JSD) instead, which computes the distance between probability distributions BIBREF15. The models of each team will be briefly introduced in this section. Participating Systems ::: sorensbn Team sorensbn makes use of SGNS + OP + CD to detect LSC. They use hyperparameters similar to those in BIBREF0 to tune the SGNS model. They use an open-sourced noise-aware implementation to improve the OP alignment BIBREF16. Participating Systems ::: tidoe Team tidoe builds on SGNS + OP + CD, but they add a transformation step to obtain binarized representations of matrices BIBREF17. This step is taken to counter the bias that can occur in vector-space models based on frequencies BIBREF18. Participating Systems ::: in vain The team applies a model based on SGNS with vector initialization alignment and cosine distance (SGNS + VI + CD). Vector initialization is an alignment strategy where the vector space learning model for $t_2$ is initialized with the vectors from $t_1$ BIBREF19. Since SGNS + VI + OP does not perform as well as other models in BIBREF0, they alter the vector initialization process by initializing on the complete model instead of only the word matrix of $t_1$ to obtain improved results. Participating Systems ::: Evilly In line with previous approaches, team Evilly builds upon SGNS + OP + CD. They alter the OP step by using only high-frequency words for alignment. Participating Systems ::: DAF Team DAF uses an architecture based on learning vectors with fastText, alignment with unsupervised and supervised variations of OP, and CD, using the MUSE package BIBREF20, BIBREF21. For the supervised alignment, stop words are used. The underlying assumption is that stop words serve as functional units of language and their usage should be consistent over time. Participating Systems ::: SnakesOnAPlane The team learns vector spaces with count vectors, positive pointwise mutual information (PPMI), SGNS and uses column intersection (CI) and OP as alignment techniques where applicable.
Then they compare two distance measures (CD and JSD) for the different models CNT + CI, PPMI + CI and SGNS + OP to identify which measure performs better for these models. They also experiment with different ways to remove negative values from SGNS vectors, which is needed for JSD. Participating Systems ::: TeamKulkarni15 TeamKulkarni15 uses SGNS + OP + CD with the modification of local alignment with k nearest neighbors, since other models often use global alignment that can be prone to noise BIBREF22. Participating Systems ::: Bashmaistori They use word injection (WI) alignment on PPMI vectors with CD. This approach avoids the complex alignment procedure for embeddings and is applicable to embeddings and count-based methods. They compare two implementations of word injection BIBREF23, BIBREF0 as these showed different results on different data sets. Participating Systems ::: giki Team giki uses PPMI + CI + CD to detect LSC. They state that a word sense is determined by its context, but relevant context words can also be found outside a predefined window. Therefore, they use tf-idf to select relevant context BIBREF24. Participating Systems ::: Edu-Phil Similar to team DAF they also use fastText + OP + CD. Their hypothesis is that fastText may increase the performance for less frequent words in the corpus since generating word embeddings in fasttext is based on character n-grams. Participating Systems ::: orangefoxes They use the model by BIBREF5 which is based on SGNS, but avoids alignment by treating time as a vector that may be combined with word vectors to get time-specific word vectors. Participating Systems ::: Loud Whisper Loud Whisper base their approach on BIBREF9 which is a graph-based sense clustering model. They process the data set to receive bigrams, create a co-occurence graph representation and after clustering assess the type of change per word by comparing the results against an intersection table. Their motivation is not only to use a graph-based approach, but to extend the approach by enabling change detection for all parts of speech as opposed to the original model. Results and Discussion Table TABREF8 shows the results of the shared task. All teams receive better results than baseline 1 (FD), of which a total of 8 teams outperform baseline 2 (CNT + CI + CD). The 4 top scores with $\rho $ $>$ 0.7 are either modified versions of SGNS + OP + CD or use SGNS + VI + CD. The following 4 scores in the range of 0.5 $<$ $\rho $ $<$ 0.6 are generated by the models fastText + OP + CD, SGNS + OP + CD/JSD, and PPMI + WI + CD. Contrary to the results by BIBREF0 the modified version of vector initialization shows high performance similar to OP alignment, as previously reported by BIBREF14. Some modifications to the SGNS + OP + CD approach are able to yield better results than others, e.g. noise-aware alignment and binarized matrices as compared to frequency-driven OP alignment or local alignment with KNN. Team SnakesOnAPlane compare two distance measures and their results show that JSD ($\rho $ $=$ .561) performs minimally worse than CD ($\rho $ $=$ .565) as the semantic change measure for their model. The overall best-performing model is Skip-Gram with orthogonal alignment and cosine distance (SGNS + OP + CD) with similar hyperparameters as in the model architecture described previously BIBREF0. Said architecture was used as the basis for the two best performing models. 
Team tidoe reports that binarizing matrices leads to a generally worse performance ($\rho $ $=$ .811) compared to the unmodified version of SGNS + OP + CD ($\rho $ $=$ 0.9). The noise-aware alignment approach applied by team sorensbn obtains a higher score ($\rho $ $=$ .854) compared to the result reported by tidoe, but is unable to exceed the performance of the unmodified SGNS + OP + CD for the same set of hyperparameters (window size = 10, negative sampling = 1; subsampling = None). Of the 8 scores above the second baseline, 5 use an architecture that builds upon SGNS + OP + CD, whereas in the lower score segment ($\rho $ $<$ 0.5) none of the models use SGNS + OP + CD. These findings are in line with the results reported by BIBREF0; however, the overall best results are lower in this shared task, which is expected from the smaller number of parameter combinations explored. Additionally, in the shared task the objective was to report the best score and not to calculate the mean, which makes it more difficult to compare the robustness of the models presented here.
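For reference, a compact sketch of the SGNS + OP + CD recipe that most submissions build on, given two embedding matrices already trained on the two time periods and row-aligned to a shared vocabulary, together with the Spearman evaluation used in the shared task; the matrices, vocabulary and gold scores are assumed inputs:

```python
import numpy as np
from scipy.stats import spearmanr

def procrustes_align(X, Y):
    """Orthogonal Procrustes: rotate X onto Y (both |V| x d, rows aligned to the same vocabulary)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return X @ (U @ Vt)

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def change_scores(X_t1, X_t2, vocab, targets):
    # Higher cosine distance between aligned vectors = stronger predicted semantic change.
    X_aligned = procrustes_align(X_t1, X_t2)
    idx = {w: i for i, w in enumerate(vocab)}
    return {w: cosine_distance(X_aligned[idx[w]], X_t2[idx[w]]) for w in targets}

def evaluate(scores, gold, targets):
    """Spearman's rho between predicted change scores and the gold degrees of change."""
    rho, _ = spearmanr([scores[w] for w in targets], [gold[w] for w in targets])
    return rho
```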
What is the corpus used for the task?
DTA18, DTA19
1,908
qasper
3k
INTRODUCTION The idea of language identification is to classify a given audio signal into a particular class using a classification algorithm. Commonly language identification task was done using i-vector systems [1]. A very well known approach for language identification proposed by N. Dahek et al. [1] uses the GMM-UBM model to obtain utterance level features called i-vectors. Recent advances in deep learning [15,16] have helped to improve the language identification task using many different neural network architectures which can be trained efficiently using GPUs for large scale datasets. These neural networks can be configured in various ways to obtain better accuracy for language identification task. Early work on using Deep learning for language Identification was published by Pavel Matejka et al. [2], where they used stacked bottleneck features extracted from deep neural networks for language identification task and showed that the bottleneck features learned by Deep neural networks are better than simple MFCC or PLP features. Later the work by I. Lopez-Moreno et al. [3] from Google showed how to use Deep neural networks to directly map the sequence of MFCC frames into its language class so that we can apply language identification at the frame level. Speech signals will have both spatial and temporal information, but simple DNNs are not able to capture temporal information. Work done by J. Gonzalez-Dominguez et al. [4] by Google developed an LSTM based language identification model which improves the accuracy over the DNN based models. Work done by Alicia et al. [5] used CNNs to improve upon i-vector [1] and other previously developed systems. The work done by Daniel Garcia-Romero et al. [6] has used a combination of Acoustic model trained for speech recognition with Time-delay neural networks where they train the TDNN model by feeding the stacked bottleneck features from acoustic model to predict the language labels at the frame level. Recently X-vectors [7] is proposed for speaker identification task and are shown to outperform all the previous state of the art speaker identification algorithms and are also used for language identification by David Snyder et al. [8]. In this paper, we explore multiple pooling strategies for language identification task. Mainly we propose Ghost-VLAD based pooling method for language identification. Inspired by the recent work by W. Xie et al. [9] and Y. Zhong et al. [10], we use Ghost-VLAD to improve the accuracy of language identification task for Indian languages. We explore multiple pooling strategies including NetVLAD pooling [11], Average pooling and Statistics pooling( as proposed in X-vectors [7]) and show that Ghost-VLAD pooling is the best pooling strategy for language identification. Our model obtains the best accuracy of 98.24%, and it outperforms all the other previously proposed pooling methods. We conduct all our experiments on 635hrs of audio data for 7 Indian languages collected from $\textbf {All India Radio}$ news channel. The paper is organized as follows. In section 2, we explain the proposed pooling method for language identification. In section 3, we explain our dataset. In section 4, we describe the experiments, and in section 5, we describe the results. POOLING STRATEGIES In any language identification model, we want to obtain utterance level representation which has very good language discriminative features. These representations should be compact and should be easily separable by a linear classifier. 
The idea of any pooling strategy is to pool the frame-level representations into a single utterance-level representation. Previous works [7] have used simple mean and standard deviation aggregation to pool the frame-level features from the top layer of the neural network to obtain the utterance-level features. Recently, [9] used a VLAD-based pooling strategy for speaker identification, which is inspired by [10], proposed for face recognition. The NetVLAD [11] and Ghost-VLAD [10] methods are proposed for place recognition and face recognition, respectively, and in both cases they try to aggregate the local descriptors into global features. In our case, the local descriptors are features extracted from ResNet [15], and the global utterance-level feature is obtained by using GhostVLAD pooling. In this section, we explain the different pooling methods, including NetVLAD, Ghost-VLAD, statistic pooling, and average pooling. POOLING STRATEGIES ::: NetVLAD pooling The NetVLAD pooling strategy was initially developed for place recognition by R. Arandjelovic et al. [11]. NetVLAD is an extension of the VLAD [18] approach that replaces hard-assignment clustering with soft-assignment clustering, so that it can be trained within a neural network in an end-to-end fashion. In our case, we use the NetVLAD layer to map N local features of dimension D into a fixed-dimensional vector, as shown in Figure 1 (Left side). The model takes a spectrogram as input and feeds it into a CNN-based ResNet architecture. The ResNet is used to map the spectrogram into a 3D feature map of dimension HxWxD. We convert this 3D feature map into 2D by unfolding the H and W dimensions, creating an NxD-dimensional feature map, where N=HxW. The NetVLAD layer is kept on top of the feature extraction layer of ResNet, as shown in Figure 1. The NetVLAD layer now takes N feature vectors of dimension D and computes a matrix V of dimension KxD, where K is the number of clusters in the NetVLAD layer and D is the dimension of the feature vector. The matrix V is computed as follows: $V(j,k) = \sum_{i=1}^{N} \frac{e^{w_k^T x_i + b_k}}{\sum_{k'} e^{w_{k'}^T x_i + b_{k'}}} \left( x_i(j) - c_k(j) \right) \quad (1)$ where $w_k$, $b_k$ and $c_k$ are trainable parameters for the cluster $k$ and V(j,k) represents the point of the V matrix at the (j,k)-th location. The matrix is constructed using equation (1), where the first term corresponds to the soft assignment of the input $x_i$ to the cluster $c_k$, whereas the second term is the residual term, which tells how far the input descriptor $x_i$ is from the cluster center $c_k$. POOLING STRATEGIES ::: GhostVLAD pooling GhostVLAD is an extension of the NetVLAD approach, which we discussed in the previous section. The GhostVLAD model was proposed for face recognition by Y. Zhong et al. [10]. GhostVLAD works exactly like NetVLAD except that it adds ghost clusters along with the NetVLAD clusters, so we now have K+G clusters instead of K, where G is the number of ghost clusters we want to add (typically 2-4). The ghost clusters are added to map any noisy or irrelevant content into ghost clusters and are not included during the feature aggregation stage, as shown in Figure 1 (Right side). This means that we compute the matrix V for both the K normal clusters and the G ghost clusters, but the vectors belonging to the ghost clusters are excluded from V during concatenation of the features. As a result, during the feature aggregation stage the noisy and unwanted features contribute little weight to the normal VLAD clusters, while the ghost clusters absorb most of it.
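A minimal NumPy sketch of the (Ghost)VLAD aggregation in equation (1): each of the N local descriptors is soft-assigned over the K normal plus G ghost clusters, residuals are accumulated, and the ghost rows are dropped before flattening. Treating the last G rows as the ghost clusters and applying per-cluster L2 normalization are conventions of this sketch rather than details stated in the paper; in the real model, w, b and c are layer weights trained end to end.

```python
import numpy as np

def ghostvlad_pool(X, w, b, c, num_ghost=2):
    """
    X: (N, D) local descriptors from the CNN feature map.
    w: (K+G, D), b: (K+G,), c: (K+G, D) -- trainable cluster parameters.
    Returns a flattened (K*D,) utterance-level descriptor with ghost clusters dropped.
    """
    logits = X @ w.T + b                        # (N, K+G) soft-assignment scores
    logits -= logits.max(axis=1, keepdims=True)
    a = np.exp(logits)
    a /= a.sum(axis=1, keepdims=True)           # softmax over all K+G clusters
    # V[k] = sum_i a[i,k] * (x_i - c_k), computed for every cluster as in equation (1)
    V = a.T @ X - a.sum(axis=0)[:, None] * c    # (K+G, D)
    V = V[:-num_ghost] if num_ghost > 0 else V  # discard ghost clusters before concatenation
    V /= np.linalg.norm(V, axis=1, keepdims=True) + 1e-12  # per-cluster L2 normalization
    return V.flatten()
```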
We illustrate this weighting behaviour in Figure 1 (Right side), where the ghost clusters are shown in red. We use the ghost clusters when we are computing the V matrix, but they are excluded during the concatenation stage. The concatenated features are fed into the projection layer, followed by softmax to predict the language label. POOLING STRATEGIES ::: Statistic and average pooling In statistic pooling, we compute the first- and second-order statistics of the local features from the top layer of the ResNet model. The 3D feature map is unfolded to create N features of D dimensions, and then we compute the mean and standard deviation of all these N vectors and get two D-dimensional vectors, one for the mean and the other for the standard deviation. We then concatenate these 2 features and feed them to the projection layer for predicting the language label. In the average pooling layer, we compute only the first-order statistic (mean) of the local features from the top layer of the CNN model. The feature map from the top layer of the CNN is unfolded to create N features of D dimensions, and then we compute the mean of all these N vectors to get a D-dimensional representation. We then feed this feature to the projection layer followed by softmax for predicting the language label. DATASET In this section, we describe our dataset collection process. We collected and curated around 635Hrs of audio data for 7 Indian languages, namely Kannada, Hindi, Telugu, Malayalam, Bengali, Assamese, and English. We collected the data from the All India Radio news channel, where an actor reads the news for about 5-10 minutes. To cover many speakers for the dataset, we crawled data from 2010 to 2019. Since the audio clips are too long to train any deep neural network on directly, we segment them into smaller chunks using a voice activity detector. Since the audio clips have music embedded during the news, we use an in-house music detection model to remove the music segments and make the dataset clean. Our dataset contains 635Hrs of clean audio, which is divided into 520Hrs of training data containing 165K utterances and 115Hrs of testing data containing 35K utterances. The amount of audio data for training and testing for each of the languages is shown in the table below. EXPERIMENTS In this section, we describe the feature extraction process and network architecture in detail. We use spectral features of 256 dimensions computed using a 512-point FFT for every frame, and we add an energy feature for every frame, giving us a total of 257 features per frame. We use a window size of 25ms and a frame shift of 10ms during feature computation. We crop a random 5-second segment from each utterance during training, which results in a spectrogram of size 257x500 (features x number of frames). We use these spectrograms as input to our CNN model during training. During testing, we compute the prediction score irrespective of the audio length. For the network architecture, we use the ResNet-34 architecture, as described in [9]. The model uses convolution layers with ReLU activations to map the 257x500 input spectrogram into a 3D feature map of size 1x32x512. This feature cube is converted into a 2D feature map of dimension 32x512 and fed into the Ghost-VLAD/NetVLAD layer to generate a representation with more language discrimination capacity. We use the Adam optimizer with an initial learning rate of 0.01 and a final learning rate of 0.00001 for training. Each model is trained for 15 epochs with an early stopping criterion.
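The statistic and average pooling described above reduce to a few lines; a sketch over the unfolded (N, D) feature map:

```python
import numpy as np

def statistics_pooling(frames):
    """frames: (N, D) local features unfolded from the top layer of the network."""
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    return np.concatenate([mean, std])   # 2*D-dimensional utterance-level representation

def average_pooling(frames):
    return frames.mean(axis=0)           # D-dimensional utterance-level representation
```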
For the baseline, we train an i-vector model using GMM-UBM. We fit a small classifier on top of the generated i-vectors to measure the accuracy. This model is referred to as i-vector+svm. To compare our model with the previous state of the art system, we set up the x-vector language identification system [8]. The x-vector model uses time-delay neural networks (TDNN) along with statistics pooling. We use a 7-layer TDNN architecture similar to [8] for training. We refer to this model as tdnn+stat-pool. Finally, we set up a deep LSTM-based language identification system similar to [4], but with a small modification where we add statistics pooling over the last layer's hidden activations before classification. We use a 3-layer Bi-LSTM with 256 hidden units at each layer. We refer to this model as LSTM+stat-pool. We train our i-vector+svm and TDNN+stat-pool models using the Kaldi toolkit. We train our NetVLAD and GhostVLAD experiments using Keras by modifying the code given by [9] for language identification. We train the LSTM+stat-pool and the remaining experiments using the PyTorch [14] toolkit, and we will open-source all the code and data soon. RESULTS In this section, we compare the performance of our system with the recent state of the art language identification approaches. We also compare different pooling strategies and finally compare the robustness of our system to the length of the input spectrogram during training. We visualize the embeddings generated by the GhostVLAD method and conclude that the GhostVLAD embeddings show very good feature discrimination capabilities. RESULTS ::: Comparison with different approaches We compare our system performance with the previous state of the art language identification approaches, as shown in Table 2. The i-vector+svm system is trained using GMM-UBM models to generate i-vectors as proposed in [1]. Once the i-vectors are extracted, we fit an SVM classifier to classify the audio. The TDNN+stat-pool system is trained with a statistics pooling layer and is called the x-vector system as proposed by David Snyder et al. [8], and it is, to the best of our knowledge, currently the state of the art language identification approach. Our methods outperform the state of the art x-vector system by an absolute 1.88% improvement in F1-score, as shown in Table 2. RESULTS ::: Comparison with different pooling techniques We compare our approach with different pooling strategies in Table 3. We use ResNet as our base feature extraction network. We keep the base network the same and change only the pooling layers to see which pooling approach performs better for the language identification task. Our experiments show that GhostVLAD pooling outperforms all the other pooling methods by achieving 98.43% F1-Score. RESULTS ::: Duration analysis To observe the performance of our method with different input durations, we conducted an experiment where we train our model on different input durations. Since our model uses ResNet as the base feature extractor, we need to feed a fixed-length spectrogram. We conducted 4 different experiments where we trained the model using 2sec, 3sec, 4sec and 5sec spectrograms containing 200, 300, 400 and 500 frames, respectively. We observed that the model trained with a 5sec spectrogram is the best model, as shown in Table 4. RESULTS ::: Visualization of embeddings We visualize the embeddings generated by our approach to see their effectiveness. We extracted 512-dimensional embeddings for our testing data and reduced the dimensionality using t-SNE projection.
The t-SNE plot of the embedding space is shown in Figure 3. The plot shows that the embeddings learned by our approach have very good discriminative properties. Conclusion In this work, we use the Ghost-VLAD pooling approach, originally proposed for face recognition, to improve language identification performance for Indian languages. We collected and curated 635 hrs of audio data from the All India Radio news channel for 7 Indian languages. Our experimental results show that our approach outperforms the previous state-of-the-art methods by an absolute 1.88% F1-score. We have also conducted experiments with different pooling strategies proposed in the past, and the GhostVLAD pooling approach turns out to be the best approach for aggregating frame-level features into a single utterance-level feature. Our experiments also show that our approach remains effective even when shorter input durations are used during training. Finally, we see that the embeddings generated by our method have very good language-discriminative properties and help to improve the performance of language identification.
Which 7 Indian languages do they experiment with?
Hindi, English, Kannada, Telugu, Assamese, Bengali and Malayalam
2,453
qasper
3k
Introduction Reading Comprehension (RC) has become a central task in natural language processing, with great practical value in various industries. In recent years, many large-scale RC datasets in English BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 have nourished the development of numerous powerful and diverse RC models BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11. The state-of-the-art model BIBREF12 on SQuAD, one of the most widely used RC benchmarks, even surpasses human-level performance. Nonetheless, RC on languages other than English has been limited due to the absence of sufficient training data. Although some efforts have been made to create RC datasets for Chinese BIBREF13, BIBREF14 and Korean BIBREF15, it is not feasible to collect RC datasets for every language since annotation efforts to collect a new RC dataset are often far from trivial. Therefore, the setup of transfer learning, especially zero-shot learning, is of extraordinary importance. Existing methods BIBREF16 of cross-lingual transfer learning on RC datasets often count on machine translation (MT) to translate data from source language into target language, or vice versa. These methods may not require a well-annotated RC dataset for the target language, whereas a high-quality MT model is needed as a trade-off, which might not be available when it comes to low-resource languages. In this paper, we leverage pre-trained multilingual language representation, for example, BERT learned from multilingual un-annotated sentences (multi-BERT), in cross-lingual zero-shot RC. We fine-tune multi-BERT on the training set in source language, then test the model in target language, with a number of combinations of source-target language pair to explore the cross-lingual ability of multi-BERT. Surprisingly, we find that the models have the ability to transfer between low lexical similarity language pair, such as English and Chinese. Recent studies BIBREF17, BIBREF12, BIBREF18 show that cross-lingual language models have the ability to enable preliminary zero-shot transfer on simple natural language understanding tasks, but zero-shot transfer of RC has not been studied. To our knowledge, this is the first work systematically exploring the cross-lingual transferring ability of multi-BERT on RC tasks. Zero-shot Transfer with Multi-BERT Multi-BERT has showcased its ability to enable cross-lingual zero-shot learning on the natural language understanding tasks including XNLI BIBREF19, NER, POS, Dependency Parsing, and so on. We now seek to know if a pre-trained multi-BERT has ability to solve RC tasks in the zero-shot setting. Zero-shot Transfer with Multi-BERT ::: Experimental Setup and Data We have training and testing sets in three different languages: English, Chinese and Korean. The English dataset is SQuAD BIBREF2. The Chinese dataset is DRCD BIBREF14, a Chinese RC dataset with 30,000+ examples in the training set and 10,000+ examples in the development set. The Korean dataset is KorQuAD BIBREF15, a Korean RC dataset with 60,000+ examples in the training set and 10,000+ examples in the development set, created in exactly the same procedure as SQuAD. We always use the development sets of SQuAD, DRCD and KorQuAD for testing since the testing sets of the corpora have not been released yet. Next, to construct a diverse cross-lingual RC dataset with compromised quality, we translated the English and Chinese datasets into more languages, with Google Translate. 
An obvious issue with this method is that some examples might no longer have a recoverable span. To solve this problem, we use fuzzy matching to find the most likely answer, which calculates the minimal edit distance between the translated answer and all possible spans. If the minimal edit distance is larger than min(10, length of the translated answer - 1), we drop the example during training and treat it as noise when testing. In this way, we can recover more than 95% of the examples. The generated datasets that follow are recovered with the same setting. The pre-trained multi-BERT is the officially released one. This multilingual version of BERT was pre-trained on corpora in 104 languages. Data in different languages were simply mixed in batches during pre-training, without additional effort to align between languages. When fine-tuning, we simply adopted the official training script of BERT, with default hyperparameters, to fine-tune each model until the training loss converged. Zero-shot Transfer with Multi-BERT ::: Experimental Results Table TABREF6 shows the results of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, and it achieves competitive performance compared with QANet trained on Chinese. We also find that multi-BERT trained on English has relatively lower EM compared with models with comparable F1 scores. This shows that the model learned in the zero-shot setting can roughly identify answer spans in context but is less accurate. In row (c), we fine-tuned a BERT model pre-trained on an English monolingual corpus (English BERT) on the Chinese RC training data directly by appending fastText-initialized Chinese word embeddings to the original word embeddings of English BERT. Its F1 score is even lower than that of the zero-shot transferring multi-BERT (rows (c) vs. (e)). The result implies that multi-BERT does acquire better cross-lingual capability through pre-training on a multilingual corpus. Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English, Chinese and Korean. The top half of the table shows the results of training data without translation. It is not surprising that when the training and testing sets are in the same language, the best results are achieved, and multi-BERT shows transfer capability when the training and testing sets are in different languages, especially between Chinese and Korean. In the lower half of Table TABREF8, the results are obtained with the translated training data. First, we found that when testing on English and Chinese, translation always degrades the performance (En vs. En-XX, Zh vs. Zh-XX). Even though we translate the training data into the same language as the testing data, using the untranslated data still yields better results. For example, when testing on English, the F1 score of the model trained on Chinese (Zh) is 53.8, while the F1 score is only 44.1 for the model trained on Zh-En. This shows that translation degrades the quality of the data. There are some exceptions when testing on Korean. Translating the English training data into Chinese, Japanese and Korean still improves the performance on Korean. We also found that when translated into the same language, the English training data are always better than the Chinese data (En-XX vs. Zh-XX), with only one exception (En-Fr vs. Zh-Fr when testing on KorQuAD). This may be because we have less Chinese training data than English.
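As a concrete illustration of the fuzzy-matching recovery step described above, the sketch below enumerates candidate spans of the translated context, keeps the span with the smallest edit distance to the translated answer, and discards the example when that distance exceeds min(10, length of the translated answer - 1). The token-level span enumeration and the max_span_len cap are our assumptions; the paper does not specify these details.

    def edit_distance(a, b):
        # standard Levenshtein distance with a rolling 1-D DP table
        dp = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            prev, dp[0] = dp[0], i
            for j, cb in enumerate(b, 1):
                prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
        return dp[-1]

    def recover_span(context_tokens, translated_answer, max_span_len=30):
        # returns (start, end, text) of the closest span, or None if unrecoverable
        best, best_dist = None, float("inf")
        for i in range(len(context_tokens)):
            for j in range(i + 1, min(i + max_span_len, len(context_tokens)) + 1):
                span = " ".join(context_tokens[i:j])
                d = edit_distance(span, translated_answer)
                if d < best_dist:
                    best, best_dist = (i, j - 1, span), d
        if best_dist > min(10, len(translated_answer) - 1):
            return None   # dropped during training, treated as noise at test time
        return best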
These results show that the quality and the size of the dataset are much more important than whether the training and testing are in the same language or not. Zero-shot Transfer with Multi-BERT ::: Discussion ::: The Effect of Machine Translation Table TABREF8 shows that fine-tuning on un-translated target-language data achieves much better performance than on data translated into the target language. Because this holds across all the languages, it is strong evidence that translation degrades the performance. We notice that the translated corpus and the untranslated corpus are not the same, which may be another factor that influences the results. Conducting an experiment comparing un-translated and back-translated data may address this issue. Zero-shot Transfer with Multi-BERT ::: Discussion ::: The Effect of Other Factors Here we discuss the case where the training data are translated. We consider that each result is affected by at least three factors: (1) the training corpus, (2) the data size, and (3) whether the source corpus is translated into the target language. To study the effect of data size, we conducted an extra experiment in which we down-sampled the English data to the same size as the Chinese corpus and used the down-sampled corpus for training. We then carried out a one-way ANOVA test and found that the significance of the three factors is ranked as follows: (1) > (2) >> (3). The analysis supports the view that the characteristics of the training data are more important than whether it is translated into the target language. Therefore, although translation degrades the performance, whether the corpus is translated into the target language is not critical. What Does Zero-shot Transfer Model Learn? ::: Unseen Language Dataset It has been shown that extractive QA tasks like SQuAD may be tackled by some language-independent strategies, for example, matching words in the question and the context BIBREF20. Is zero-shot learning feasible because the model simply learns this kind of language-independent strategy on one language and applies it to the other? To verify whether multi-BERT largely counts on a language-independent strategy, we test the model on languages unseen during pre-training. To make sure the languages have never been seen before, we artificially create unseen languages by permuting the whole vocabulary of existing languages. That is, all the words in the sentences of a specific language are replaced by other words in the same language to form the sentences of the created unseen language. The assumption is that if multi-BERT found answers using a language-independent strategy, it should also do well on the unseen languages. Table TABREF14 shows that the performance of multi-BERT drops drastically on these datasets. It implies that multi-BERT might not rely solely on pattern matching when finding answers. What Does Zero-shot Transfer Model Learn? ::: Embedding in Multi-BERT PCA projections of the hidden representations of the last layer of multi-BERT before and after fine-tuning are shown in Fig. FIGREF15. The red points represent Chinese tokens, and the blue points English tokens. The results show that tokens from different languages might be embedded into the same space with a close spatial distribution. Even though only the English data are used during fine-tuning, the embeddings of the Chinese tokens change accordingly. We also quantitatively evaluate the similarities between the embeddings of the languages. The results can be found in the Appendix. What Does Zero-shot Transfer Model Learn?
::: Code-switching Dataset We observed language-agnostic representations in the last subsection. If tokens are represented in a language-agnostic way, the model may be able to handle code-switching data. Because there is no code-switching data for RC, we create artificial code-switching datasets by replacing some of the words in contexts or questions with their synonyms in another language. The synonyms are found by word-by-word translation with given dictionaries. We use the bilingual dictionaries collected and released in the facebookresearch/MUSE GitHub repository. We substitute a word if and only if it is in the bilingual dictionary. Table TABREF14 shows that on all the code-switching datasets, the EM/F1 scores drop, indicating that the semantics of the representations are not totally disentangled from language. However, examples of the model's answers (Table TABREF21) show that multi-BERT can find the correct answer spans even though some keywords in the spans have been translated into another language. What Does Zero-shot Transfer Model Learn? ::: Typology-manipulated Dataset Languages differ in their typology. For example, English follows subject-verb-object (SVO) order, while Japanese and Korean follow subject-object-verb (SOV) order. We construct a typology-manipulated dataset to examine whether the typology order of the training data influences the transfer learning results. If the model only learns the semantic mapping between different languages, changing the English typology order from SVO to SOV should improve the transfer ability from English to Japanese. The method used to generate the datasets is the same as BIBREF21. The source code is from a GitHub repository named Shaul1321/rnn_typology, which labels given sentences in CoNLL format with StanfordCoreNLP and then re-arranges them greedily. Table TABREF23 shows that when we change the English typology order to SOV or OSV order, the performance on Korean improves while the performance on English and Chinese worsens, but only very slightly. The results show that the typology manipulation of the training set has little influence. It is possible that multi-BERT normalizes the typology order of different languages to some extent. Conclusion In this paper, we systematically explore zero-shot cross-lingual transfer learning on RC with multi-BERT. The experimental results on English, Chinese and Korean corpora show that even when the languages for training and testing are different, reasonable performance can be obtained. Furthermore, we created several artificial datasets to study the cross-lingual ability of multi-BERT in the presence of typology variation and code-switching. We showed that token-level pattern matching alone is not sufficient for multi-BERT to answer questions, and that typology variation and code-switching cause only minor effects on testing performance. Supplemental Material ::: Internal Representation of multi-BERT The architecture of multi-BERT is a Transformer encoder BIBREF25. While fine-tuning on a SQuAD-like dataset, the bottom layers of multi-BERT are initialized from the Google-pretrained parameters, with an added output layer initialized from random parameters. Token representations from the last layer of the bottom part of multi-BERT are fed to the output layer, which outputs a distribution over all tokens indicating the probability of each token being the START/END of an answer span.
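The artificial code-switching construction described earlier in this section amounts to a word-by-word dictionary substitution, which can be sketched as follows. The file format assumed here is the plain "source target" word-pair format of the MUSE dictionaries, and the function names are illustrative.

    def load_muse_dict(path):
        # one "source target" pair per line; keep the first translation of each word
        bi = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2:
                    bi.setdefault(parts[0], parts[1])
        return bi

    def code_switch(tokens, bi_dict):
        # substitute a token if and only if it appears in the bilingual dictionary
        return [bi_dict.get(t, t) for t in tokens]

    # e.g. mixed = code_switch(context.split(), load_muse_dict("en-zh.txt"))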
Supplemental Material ::: Internal Representation of multi-BERT ::: Cosine Similarity Since all translated versions of SQuAD/DRCD are parallel to each other, for a given source-target language pair we calculate the cosine similarity of the mean-pooled token representations within the corresponding answer spans as a measure of how similar they are in terms of the internal representation of multi-BERT. The results are shown in Fig. FIGREF26. Supplemental Material ::: Internal Representation of multi-BERT ::: SVCCA Singular Vector Canonical Correlation Analysis (SVCCA) is a general method to compare the correlation of two sets of vector representations. SVCCA has been proposed to compare learned representations across language models BIBREF24. Here we adopt SVCCA to measure the linear similarity of two sets of representations in the same multi-BERT from different translated datasets, which are parallel to each other. The results are shown in Fig. FIGREF28. Supplemental Material ::: Improve Transferring In the paper, we show that the internal representations of multi-BERT are linearly mappable to some extent between different languages. This implies that the multi-BERT model might encode semantic and syntactic information in language-agnostic ways, which explains how zero-shot transfer learning could be done. To take a step further, while transferring the model from the source dataset to the target dataset, we align the representations in two proposed ways to improve performance on the target dataset. Supplemental Material ::: Improve Transferring ::: Linear Mapping Method Algorithms proposed in BIBREF23, BIBREF22, BIBREF26 to learn, in an unsupervised manner, a linear mapping between two sets of embeddings are used here to align the representations of the source (training data) to those of the target. We obtain the mapping from the embeddings of one specific layer of pre-trained multi-BERT and then apply this mapping to transform the internal representations of multi-BERT while fine-tuning on the training data. Supplemental Material ::: Improve Transferring ::: Adversarial Method In the adversarial method, we add an additional transform layer to transform the representations and a discrimination layer to discriminate between transformed representations from the source language (training set) and the target language (development set). The GAN loss is added to the total fine-tuning loss. Supplemental Material ::: Improve Transferring ::: Discussion As Table TABREF33 shows, none of the above methods yields improvements. Some linear mapping methods even cause a devastating drop in EM/F1 scores.
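A minimal sketch of the cosine-similarity measurement described at the start of this supplementary section is given below; hidden_states, the span indices and the choice of layer are placeholders for whatever multi-BERT layer is being probed on the parallel examples.

    import numpy as np

    def span_vector(hidden_states, start, end):
        # mean-pool the token representations inside an answer span;
        # hidden_states: (seq_len, hidden_dim) array from one multi-BERT layer
        return hidden_states[start:end + 1].mean(axis=0)

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

    # similarity of the same (parallel) answer span in two translated passages:
    # sim = cosine(span_vector(h_src, s1, e1), span_vector(h_tgt, s2, e2))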
What is the model performance on target language reading comprehension?
Table TABREF6, Table TABREF8
2,492
qasper
3k
Introduction Named Entity Recognition (NER) is a foremost NLP task that labels each atomic element of a sentence with specific categories like "PERSON", "LOCATION", "ORGANIZATION" and others BIBREF0. There has been extensive NER research on English, German, Dutch and Spanish BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, and notable research on low-resource South Asian languages like Hindi BIBREF6, Indonesian BIBREF7 and other Indian languages (Kannada, Malayalam, Tamil and Telugu) BIBREF8. However, there has been no study on developing a neural NER for the Nepali language. In this paper, we propose a neural Nepali NER using a recent state-of-the-art grapheme-level architecture which requires neither hand-crafted features nor data pre-processing. Recent neural architectures like BIBREF1 relax the need to hand-craft features or to use part-of-speech tags to determine the category of an entity. However, these architectures have been studied for languages like English and German and have not been applied to a language like Nepali, which is a low-resource language, i.e., one with a limited dataset to train the model. Traditional methods like Hidden Markov Models (HMM) with rule-based approaches BIBREF9, BIBREF10, and Support Vector Machines (SVM) with manual feature engineering BIBREF11 have been applied, but they perform poorly compared to neural approaches. Moreover, there has been no research on Nepali NER using neural networks. Therefore, we created a named-entity-annotated dataset, partly with the help of Dataturk, to train a neural model. The texts used for this dataset were collected from various daily news sources in Nepal around the years 2015-2016. Following are our contributions: We present a novel Named Entity Recognizer (NER) for the Nepali language. To the best of our knowledge, we are the first to propose a neural Nepali NER. As there is no good-quality dataset to train NER, we release a dataset to support future research. We perform an empirical evaluation of our model against state-of-the-art models, with a relative improvement of up to 10%. In this paper, we present works similar to ours in Section SECREF2. We describe our approach and dataset statistics in Sections SECREF3 and SECREF4, followed by our experiments, evaluation and discussion in Sections SECREF5, SECREF6 and SECREF7. We conclude with our observations in Section SECREF8. To facilitate further research, our code and dataset will be made available at github.com/link-yet-to-be-updated Related Work There has been a handful of research on the Nepali NER task based on approaches like Support Vector Machines with gazetteer lists BIBREF11 and Hidden Markov Models with gazetteer lists BIBREF9, BIBREF10. BIBREF11 uses an SVM along with features like the first word, word length, digit features and gazetteers (person, organization, location, middle name, verb, designation and others). It uses a one-vs-rest classification model to classify each word into different entity classes. However, it does not take the context words into account while training the model. Similarly, BIBREF9 and BIBREF10 use Hidden Markov Models with an n-gram technique for extracting POS tags. POS tags corresponding to common nouns, proper nouns or a combination of both are combined together, and a gazetteer list is then used as a look-up table to identify the named entities.
Researchers have shown that neural networks like CNNs BIBREF12, RNNs BIBREF13, LSTMs BIBREF14 and GRUs BIBREF15 can capture the semantic knowledge of language better with the help of pre-trained embeddings like word2vec BIBREF16, GloVe BIBREF17 or fastText BIBREF18. Similar approaches have been applied to many South Asian languages like Hindi BIBREF6, Indonesian BIBREF7 and Bengali BIBREF19. In this paper, we present a neural network architecture for the NER task in the Nepali language, which requires neither manual feature engineering nor data pre-processing during training. First, we compare BiLSTM BIBREF14, BiLSTM+CNN BIBREF20, BiLSTM+CRF BIBREF1 and BiLSTM+CNN+CRF BIBREF2 models with a CNN model BIBREF0 and the Stanford CRF model BIBREF21. Secondly, we show a comparison between models trained on general word embeddings, word embeddings + character-level embeddings, word embeddings + part-of-speech (POS) one-hot encoding, and word embeddings + grapheme-clustered or sub-word embeddings BIBREF22. The experiments were performed on the dataset that we created and on the dataset received from the ILPRL lab. Our extensive study shows that augmenting word embeddings with character- or grapheme-level representations and POS one-hot encoding vectors yields better results compared to using general word embeddings alone. Approach In this section, we describe our approach to building our model. This model is partly inspired by multiple models BIBREF20, BIBREF1 and BIBREF2. Approach ::: Bidirectional LSTM We used a bi-directional LSTM to capture word representations in both the forward and reverse directions of a sentence. Generally, LSTMs take inputs from the left (past) of the sentence and compute the hidden state. However, it has been shown to be beneficial BIBREF23 to use a bi-directional LSTM, where hidden states are also computed from the right (future) of the sentence and both of these hidden states are concatenated to produce the final output $h_t$=[$\overrightarrow{h_t}$;$\overleftarrow{h_t}$], where $\overrightarrow{h_t}$, $\overleftarrow{h_t}$ are the hidden states computed in the forward and backward directions, respectively. Approach ::: Features ::: Word embeddings We have used Word2Vec BIBREF16, GloVe BIBREF17 and fastText BIBREF18 word vectors of 300 dimensions. These vectors were trained on the corpus obtained from the Nepali National Corpus. This pre-lemmatized corpus consists of 14 million words from books, web texts and newspapers. This corpus was mixed with the texts from the dataset before training the CBOW and skip-gram versions of word2vec using the gensim library BIBREF24. The trained model consists of vectors for 72782 unique words. Light pre-processing was performed on the corpus before training. For example, invalid characters or characters other than Devanagari were removed, but punctuation and numbers were not removed. We set the context window to 10, and rare words with a count below 5 were dropped. These word embeddings were not frozen during training because fine-tuning word embeddings helps achieve better performance compared to keeping them frozen BIBREF20. We have used fastText embeddings in particular because of their sub-word representation ability, which is very useful for a highly inflectional language, as shown in Table TABREF25. We have trained the word embeddings in such a way that the sub-word size remains between 1 and 4.
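A minimal gensim sketch of the embedding training just described is shown below: 300-dimensional vectors, a context window of 10, a minimum count of 5, and, for fastText, sub-word n-grams of length 1 to 4. Argument names follow gensim 4.x (vector_size); the number of epochs and other settings not stated in the paper are left at library defaults.

    from gensim.models import Word2Vec, FastText

    # `sentences` is an iterable of token lists built from the Nepali National Corpus
    # mixed with the dataset text, lightly cleaned as described above.
    w2v_sg = Word2Vec(sentences, vector_size=300, window=10, min_count=5, sg=1)    # skip-gram
    w2v_cbow = Word2Vec(sentences, vector_size=300, window=10, min_count=5, sg=0)  # CBOW

    ft = FastText(sentences, vector_size=300, window=10, min_count=5, sg=1,
                  min_n=1, max_n=4)   # sub-word n-grams of length 1-4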
We particularly chose this size because in Nepali a single letter can also be a word, for example e, t, C, r, l, n, u, and a single character (grapheme) or sub-word can be formed by combining dependent vowel signs with consonant letters, for example C + O + = CO, where three different characters form a single sub-word. The two-dimensional visualization of an example word npAl is shown in FIGREF14. The Principal Component Analysis (PCA) technique was used to generate this visualization, which helps us analyze the nearest-neighbor words of a given sample word. 84 and 104 nearest neighbors were observed using the word2vec and fastText embeddings, respectively, on the same corpus. Approach ::: Features ::: Character-level embeddings BIBREF20 and BIBREF2 showed that character-level embeddings extracted using a CNN, when combined with word embeddings, enhance NER model performance significantly, as they are able to capture the morphological features of a word. Figure FIGREF7 shows the grapheme-level CNN used in our model, where the inputs to the CNN are graphemes. The character-level CNN is built in a similar fashion, except that the inputs are characters. Grapheme- or character-level embeddings of dimension 30 are randomly initialized with real values drawn uniformly from [0, 1]. Approach ::: Features ::: Grapheme-level embeddings A grapheme is the atomic meaningful unit in the writing system of any language. Since Nepali is highly morphologically inflectional, we compared grapheme-level representation with character-level representation to evaluate its effect. For example, in character-level embedding, each character of the word npAl, i.e., n + + p + A + l, has its own embedding. However, at the grapheme level, the word npAl is clustered into graphemes, resulting in n + pA + l, and each grapheme has its own embedding. This grapheme-level embedding yields scores on par with character-level embedding in highly inflectional languages like Nepali, because graphemes also capture syntactic information similar to characters. We created grapheme clusters using the uniseg package, which supports Unicode text segmentation. Approach ::: Features ::: Part-of-speech (POS) one hot encoding We created one-hot encoded vectors of POS tags and concatenated them with the pre-trained word embeddings before passing them to the BiLSTM network. A sample of the data is shown in Figure FIGREF13. Dataset Statistics ::: OurNepali dataset Since there was no publicly available standard Nepali NER dataset and we did not receive any dataset from previous researchers, we had to create our own. This dataset contains sentences collected from daily newspapers from the years 2015-2016. It has three major classes: Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset; for example, all punctuation and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in the standard CoNLL-2003 IO format BIBREF25. Since this dataset was not originally lemmatized, we lemmatized only the post-positions like Ek, kO, l, mA, m, my, jF, sg, aEG, which are just a few examples among the 299 post-positions in the Nepali language. We obtained these post-positions from sanjaalcorps and added a few more to match our dataset. We will be releasing this list in our GitHub repository. We found that lemmatizing the post-positions boosted the F1 score by almost 10%.
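As a small illustration of the grapheme clustering used for the grapheme-level embeddings described earlier in this section, the sketch below segments a word into grapheme clusters. The uniseg call reflects our reading of that package's interface; an equivalent fallback using the regex module's \X pattern is shown in a comment.

    from uniseg.graphemecluster import grapheme_clusters

    def to_graphemes(word):
        # yields Unicode extended grapheme clusters, e.g. a consonant letter
        # combined with its dependent vowel sign becomes one unit
        return list(grapheme_clusters(word))

    # alternative: import regex; graphemes = regex.findall(r"\X", word)
    # to_graphemes("नेपाल") is expected to give clusters such as ["ने", "पा", "ल"]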
In order to label our dataset with POS tags, we first created a POS-annotated dataset of 6946 sentences and 16225 unique words extracted from the POS-tagged Nepali National Corpus and trained a BiLSTM model with 95.14% accuracy, which was then used to create POS tags for our dataset. The dataset released in our GitHub repository contains one word per line with space-separated POS tags and entity tags. Sentences are separated by an empty line. A sample sentence from the dataset is presented in Figure FIGREF13. Dataset Statistics ::: ILPRL dataset After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows the standard CoNLL-2003 IOB format BIBREF25 with POS tags. It was prepared by the ILPRL Lab, KU and KEIV Technologies. A few corrections, such as fixing incorrect NER tags, had to be made to the dataset. The statistics of both datasets are presented in Table TABREF23. Table TABREF24 presents the total entities (PER, LOC, ORG and MISC) from both of the datasets used in our experiments. Each dataset is divided into three parts, with 64%, 16% and 20% of the total data used as the training set, development set and test set, respectively. Experiments In this section, we present the details of training our neural networks. The neural network architectures are implemented using the PyTorch framework BIBREF26. The training is performed on a single Nvidia Tesla P100 SXM2. We first ran our experiments on BiLSTM, BiLSTM-CNN, BiLSTM-CRF and BiLSTM-CNN-CRF using the hyper-parameters mentioned in Table TABREF30. The training and evaluation were done at the sentence level. The RNN variants are initialized randomly from $(-\sqrt{k},\sqrt{k})$ where $k=\frac{1}{hidden\_size}$. First, we loaded our dataset and built the vocabulary using the torchtext library; its SequenceTaggingDataset class eased the data loading process. We trained our models on the shuffled training set using the Adam optimizer with the hyper-parameters mentioned in Table TABREF30. All our models were trained with a single LSTM layer. We found that Adam gave better performance and faster convergence than Stochastic Gradient Descent (SGD). We chose these hyper-parameters after many ablation studies. A dropout of 0.5 is applied after the LSTM layer. For the CNN, we used 30 different filters of sizes 3, 4 and 5. The embeddings of each character or grapheme involved in a given word were passed through a pipeline of convolution, rectified linear unit and max-pooling. The resulting vectors were concatenated, and a dropout of 0.5 was applied before passing them into a linear layer to obtain an embedding of size 30 for the given word. This resulting embedding is concatenated with the word embedding, which is again concatenated with the one-hot POS vector. Experiments ::: Tagging Scheme Currently, for our experiments, we trained our models using the IO (Inside, Outside) format for both datasets; hence, the datasets do not contain any B-type annotation, unlike the BIO (Beginning, Inside, Outside) scheme. Experiments ::: Early Stopping We used a simple early stopping technique: if the validation loss does not decrease for 10 epochs, training is stopped; otherwise training runs for up to 100 epochs. In our experience, training usually stops after around 30-50 epochs.
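A minimal PyTorch sketch of the character/grapheme CNN described in this section is given below: convolution filters of widths 3, 4 and 5, ReLU, max-pooling, concatenation, dropout of 0.5 and a linear layer producing a 30-dimensional word-level feature. Reading "30 different filters of sizes 3, 4 and 5" as 30 filters per width is our interpretation; the embedding table is initialized uniformly in [0, 1] as stated earlier.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CharCNN(nn.Module):
        def __init__(self, vocab_size, emb_dim=30, n_filters=30, widths=(3, 4, 5), out_dim=30):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            nn.init.uniform_(self.emb.weight, 0.0, 1.0)          # random init from [0, 1]
            self.convs = nn.ModuleList(
                [nn.Conv1d(emb_dim, n_filters, w, padding=w - 1) for w in widths])
            self.drop = nn.Dropout(0.5)
            self.proj = nn.Linear(n_filters * len(widths), out_dim)

        def forward(self, char_ids):                   # (batch, max_chars_per_word)
            x = self.emb(char_ids).transpose(1, 2)     # (batch, emb_dim, max_chars)
            pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
            return self.proj(self.drop(torch.cat(pooled, dim=1)))   # (batch, out_dim)

The resulting 30-dimensional vector is what gets concatenated with the word embedding and the one-hot POS vector before the BiLSTM.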
Experiments ::: Hyper-parameters Tuning We searched for the best hyper-parameters by varying the learning rate over [0.1, 0.01, 0.001, 0.0001], the weight decay over [$10^{-1}$, $10^{-2}$, $10^{-3}$, $10^{-4}$, $10^{-5}$, $10^{-6}$, $10^{-7}$], the batch size over [1, 2, 4, 8, 16, 32, 64, 128], and the hidden size over [8, 16, 32, 64, 128, 256, 512, 1024]. Table TABREF30 shows all the other hyper-parameters used in our experiments for both datasets. Experiments ::: Effect of Dropout Figure FIGREF31 shows how we ended up choosing 0.5 as the dropout rate. When the dropout layer was not used, the F1 score was at its lowest. As we slowly increase the dropout rate, the F1 score gradually increases; however, after a dropout rate of 0.5, the F1 score starts to fall. Therefore, we chose 0.5 as the dropout rate for all other experiments. Evaluation In this section, we present the details regarding the evaluation and comparison of our models with other baselines. Table TABREF25 shows the study of the various embeddings and their comparison with each other on the OurNepali dataset. Here, the raw dataset refers to the version in which post-positions are not lemmatized. We can observe that pre-trained embeddings significantly improve the score compared to randomly initialized embeddings. We can deduce that skip-gram models perform better than CBOW models for word2vec and fastText. Here, fastText_Pretrained represents the embedding readily available on the fastText website, while the other embeddings are trained on the Nepali National Corpus as mentioned in sub-section SECREF11. From Table TABREF25, we can clearly observe that the model using fastText_Skip Gram embeddings outperforms all other models. Table TABREF35 shows the model architecture comparison among all the models we experimented with. The features used for the Stanford CRF classifier are words, letter n-grams of up to length 6, the previous word and the next word. This model is trained until the objective function value is less than $1\mathrm{e}{-2}$. The hyper-parameters of the neural network experiments are set as shown in Table TABREF30. Since the character-level and grapheme-level embeddings are randomly initialized, their scores are close. All models are evaluated using the CoNLL-2003 evaluation script BIBREF25 to calculate entity-wise precision, recall and F1 score. Discussion In this paper we show that the power of neural networks can be exploited to perform downstream NLP tasks like Named Entity Recognition even for the Nepali language. We showed that the word vectors learned through the fastText skip-gram model perform better than other word embeddings because of their ability to represent sub-words, which is particularly important for capturing the morphological structure of words and sentences in a highly inflectional language like Nepali. This concept can come in handy for other Devanagari languages as well, because their written scripts have a similar syntactic structure. We also found that stemming post-positions helps considerably in improving model performance because of the inflectional characteristics of the Nepali language. When we separate out the inflections or morphemes, we minimize the variations of the same word, which gives the root word a stronger word vector representation compared to its inflected versions. We can clearly infer from Tables TABREF23, TABREF24 and TABREF35 that we need more data to get better results, as the OurNepali dataset is almost ten times larger than the ILPRL dataset in terms of entities.
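To make the hyper-parameter search and early stopping described in the Experiments section concrete, a small skeleton is sketched below; train_one_epoch, validate and run_config are placeholders for the actual training and validation routines.

    import itertools

    grid = {"lr": [0.1, 0.01, 0.001, 0.0001],
            "weight_decay": [10 ** -k for k in range(1, 8)],
            "batch_size": [1, 2, 4, 8, 16, 32, 64, 128],
            "hidden_size": [8, 16, 32, 64, 128, 256, 512, 1024]}

    def train_with_early_stopping(model, train_one_epoch, validate,
                                  patience=10, max_epochs=100):
        # stop when validation loss has not improved for `patience` epochs
        best, wait = float("inf"), 0
        for epoch in range(max_epochs):
            train_one_epoch(model)
            val_loss = validate(model)
            if val_loss < best:
                best, wait = val_loss, 0
            else:
                wait += 1
                if wait >= patience:
                    break
        return best

    # configs = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]
    # best_cfg = min(configs, key=run_config)   # run_config trains a model and returns dev loss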
Conclusion and Future work In this paper, we proposed a novel NER system for the Nepali language, achieved a relative improvement of up to 10%, and studied different factors affecting NER performance for Nepali. We also presented a neural architecture, BiLSTM+CNN(grapheme-level), which performs on par with BiLSTM+CNN(character-level) under the same configuration. We believe this will help not only the Nepali language but also other languages falling under the umbrella of Devanagari languages. Our models BiLSTM+CNN(grapheme-level) and BiLSTM+CNN(G)+POS outperform all other models experimented with on the OurNepali and ILPRL datasets, respectively. Since this is the first named entity recognition research for the Nepali language using neural networks, there is much room for improvement. We believe initializing the grapheme-level embeddings with fastText embeddings, rather than randomly, might help boost performance. In the future, we plan to apply other recent techniques like BERT, ELMo and FLAIR to study their effect on a low-resource language like Nepali. We also plan to improve the model using cross-lingual or multilingual parameter-sharing techniques by jointly training with other Devanagari languages like Hindi and Bengali. Finally, we would like to contribute our dataset to the Nepali NLP community to advance ongoing research in the language understanding domain. We believe there should be a special committee to create and maintain such datasets for Nepali NLP and to organize competitions that would elevate NLP research in Nepal. Some of the future directions are listed below: proper initialization of the grapheme-level embeddings from fastText embeddings; applying a robust POS tagger to the Nepali dataset; lemmatizing the OurNepali dataset with a robust and efficient lemmatizer; improving Nepali language scores with cross-lingual learning techniques; and creating more datasets using the Wikipedia/Wikidata framework. Acknowledgments The authors of this paper would like to express sincere thanks to Bal Krishna Bal, Professor at Kathmandu University, for providing us with the POS-tagged Nepali NER data.
What is the size of the dataset?
Dataset contains 3606 total sentences and 79087 total entities.
2,843
qasper
3k
Introduction The cognitive processes involved in human language comprehension are complex and only partially identified. According to the dual-stream model of speech comprehension BIBREF1 , sound waves are first converted to phoneme-like features and further processed by a ventral stream that maps those features onto words and semantic structures, and a dorsal stream that (among other things) supports audio-short term memory. The mapping of words onto meaning is thought to be subserved by widely distributed regions of the brain that specialize in particular modalities — for example visual aspects of the word banana reside in the occipital lobe of the brain and are activated when the word banana is heard BIBREF2 — and the different representation modalities are thought to be integrated into a single coherent latent representation in the anterior temporal lobe BIBREF3 . While this part of meaning representation in human language comprehension is somewhat understood, much less is known about how the meanings of words are integrated together to form the meaning of sentences and discourses. One tool researchers use to study the integration of meaning across words is electroencephelography (EEG), which measures the electrical activity of large numbers of neurons acting in concert. EEG has the temporal resolution necessary to study the processes involved in meaning integration, and certain stereotyped electrical responses to word presentations, known as event-related potentials (ERPs), have been identified with some of the processes thought to contribute to comprehension. In this work, we consider six ERP components that have been associated in the cognitive neuroscience and psycholinguistics literature with language processing and which we analyze in the data from BIBREF0 (see Figure FIGREF1 for spatial and temporal definitions of these ERP components). Three of these — the N400, EPNP, and PNP responses — are primarily considered markers for semantic processing, while the other three — the P600, ELAN, and LAN responses — are primarily considered markers for syntactic processing. However, the neat division of the ERP responses into either semantic or syntactic categories is controversial. The N400 response has been very well studied (for an overview see BIBREF4 ) and it is well established that it is associated with semantic complexity, but the features of language that trigger the other ERP responses we consider here are poorly understood. We propose to use a neural network pretrained as a language model to probe what features of language drive these ERP responses, and in turn to probe what features of language mediate the cognitive processes that underlie human language comprehension, and especially the integration of meaning across words. Background While a full discussion of each ERP component and the features of language thought to trigger each are beyond the scope of this document (for reviews see e.g. BIBREF0 , BIBREF2 , BIBREF4 , BIBREF5 , and BIBREF6 ), we introduce some basic features of ERP components to help in the discussion later. ERP components are electrical potential responses measured with respect to a baseline that are triggered by an event (in our case the presentation of a new word to a participant in an experiment). The name of each ERP component reflects whether the potential is positive or negative relative to the baseline. 
The N400 is so-named because it is Negative relative to a baseline (the baseline is typically recorded just before a word is presented at an electrode that is not affected by the ERP response) and because it peaks in magnitude at about 400ms after a word is presented to a participant in an experiment. The P600 is Positive relative to a baseline and peaks around 600ms after a word is presented to a participant (though its overall duration is much longer and less specific in time than the N400). The post-N400 positivity is so-named because it is part of a biphasic response; it is a positivity that occurs after the negativity associated with the N400. The early post-N400 positivity (EPNP) is also part of a biphasic response, but the positivity has an eariler onset than the standard PNP. Finally, the LAN and ELAN are the left-anterior negativity and early left-anterior negativity respectively. These are named for their timing, spatial distribution on the scalp, and direction of difference from the baseline. It is important to note that ERP components can potentially cancel and mask each other, and that it is difficult to precisely localize the neural activity that causes the changes in electrical potential at the electrodes where those changes are measured. Related Work This work is most closely related to the paper from which we get the ERP data: BIBREF0 . In that work, the authors relate the surprisal of a word, i.e. the (negative log) probability of the word appearing in its context, to each of the ERP signals we consider here. The authors do not directly train a model to predict ERPs. Instead, models of the probability distribution of each word in context are used to compute a surprisal for each word, which is input into a mixed effects regression along with word frequency, word length, word position in the sentence, and sentence position in the experiment. The effect of the surprisal is assessed using a likelihood-ratio test. In BIBREF7 , the authors take an approach similar to BIBREF0 . The authors compare the explanatory power of surprisal (as computed by an LSTM or a Recurrent Neural Network Grammar (RNNG) language model) to a measure of syntactic complexity they call “distance" that counts the number of parser actions in the RNNG language model. The authors find that surprisal (as predicted by the RNNG) and distance are both significant factors in a mixed effects regression which predicts the P600, while the surprisal as computed by an LSTM is not. Unlike BIBREF0 and BIBREF7 , we do not use a linking function (e.g. surprisal) to relate a language model to ERPs. We thus lose the interpretability provided by the linking function, but we are able to predict a significant proportion of the variance for all of the ERP components, where prior work could not. We interpret our results through characterization of the ERPs in terms of how they relate to each other and to eye-tracking data rather than through a linking function. The authors in BIBREF8 also use a recurrent neural network to predict neural activity directly. In that work the authors predict magnetoencephalography (MEG) activity, a close cousin to EEG, recorded while participants read a chapter of Harry Potter and the Sorcerer’s Stone BIBREF9 . 
Their approach to characterization of processing at each MEG sensor location is to determine whether it is best predicted by the context vector of the recurrent network (prior to the current word being processed), the embedding of the current word, or the probability of the current word given the context. In future work we also intend to add these types of studies to the ERP predictions. Discussion In this work we find that all six of the ERP components from BIBREF0 can be predicted above chance by a model which has been pretrained using a language modeling objective and then directly trained to predict the components. This is in contrast to prior work which has successfully linked language models to the N400 BIBREF0 and P600 BIBREF7 but not the other ERP components. We also note that contrary to BIBREF7 , we find that an LSTM does contain information that can be used to predict EEG data, and in particular that it can predict the P600. We speculate that the analysis used in BIBREF7 did not find reliable effects because the language models were related to the EEG data through functions chosen a priori (the surprisal, and the `distance' metric). These functions, though interpretable, might be interpretable at the cost of losing much of the information in the representations learned by the network. In addition, we show through our multitask learning analysis that information is shared between ERP components, and between ERP components and behavioral data. Although these relationships must be viewed with caution until they can be verified across multiple datasets and with more variation in neural network architectures, here we consider some potential reasons for our findings. The broad point we wish to make is that by better understanding which ERP components share information with each other and with behavioral data through the type of analysis we present here (multitask learning) or other means, we can better understand what drives each ERP component and in turn the processes involved in human language comprehension. Conclusion We have shown that ERP components can be predicted from neural networks pretrained as language models and fine-tuned to directly predict those components. To the best of our knowledge, prior work has not successfully used statistical models to predict all of these components. Furthermore, we have shown that multitask learning benefits the prediction of ERP components and can suggest how components relate to each other. At present, these joint-training benefit relationships are only suggestive, but if these relationships ultimately lead to insights about what drives each ERP component, then the components become more useful tools for studying human language comprehension. By using multitask learning as a method of characterization, we have found some expected relationships (LAN+P600 and ELAN+P600) and several more surprising relationships. We believe that this is exactly the kind of finding that makes multitask learning an interesting exploratory technique in this area. Additionally, we have shown that information can be shared between heterogeneous types of data (eye-tracking, self-paced reading, and ERP components) in the domain of human language processing prediction, and in particular between behavioral and neural data. Given the small datasets associated with human language processing, using heterogeneous data is a potentially major advantage of a multitask approach. 
In future work, we will further explore what information is encoded into the model representations when neural and behavioral data are used to train neural networks, and how these representations differ from the representations in a model trained on language alone. Acknowledgments We thank our reviewers for their valuable feedback. This work is supported in part by National Institutes of Health grant number U01NS098969. Appendix Here we present a visualization (Figure FIGREF21 ) of the results presented in Table TABREF9 of the main paper, and a visualization (Figure FIGREF22 ) of a more complete set of results from which the information in Table TABREF16 of the main paper is drawn. We also show supplemental results for variants of our primary analysis on multitask learning with eye-tracking, self-paced reading time and ERP data. In the variants we modify the input representation to our decoder network to see whether the relationships between the behavioral data and neural activity appear to be consistent with different choices of encoder architectures. Additional (and more varied) choices or architectures are left to future work. The results in Table TABREF23 reflect using only the forward-encoder (rather than the bi-LSTM) in the encoder network, while the results in Table TABREF24 reflect using only the word embeddings (i.e. bypassing the LSTM entirely). While the results are clearly worse for each of these choices of architecture than for using a bi-LSTM encoder, the relationships between the behavioral data and the ERP signals is qualitatively similar. Finally, TABREF25 shows the Pearson correlation coefficient between different measures. We note that the patterns of correlation are different than the patterns of which measures benefit from joint training with each other.
What datasets are used?
Answer with content missing: (Whole Method and Results sections) The primary dataset we use is the ERP data collected and computed by Frank et al. (2015), and we also use behavioral data (eye-tracking data and self-paced reading times) from Frank et al. (2013) which were collected on the same set of 205 sentences. Select: - ERP data collected and computed by Frank et al. (2015) - behavioral data (eye-tracking data and self-paced reading times) from Frank et al. (2013)
1,971
qasper
3k
Introduction Decoding intended speech or motor activity from brain signals is one of the major research areas in Brain Computer Interface (BCI) systems BIBREF0 , BIBREF1 . In particular, speech-related BCI technologies attempt to provide effective vocal communication strategies for controlling external devices through speech commands interpreted from brain signals BIBREF2 . Not only do they provide neuro-prosthetic help for people with speaking disabilities and neuro-muscular disorders like locked-in-syndrome, nasopharyngeal cancer, and amytotropic lateral sclerosis (ALS), but also equip people with a better medium to communicate and express thoughts, thereby improving the quality of rehabilitation and clinical neurology BIBREF3 , BIBREF4 . Such devices also have applications in entertainment, preventive treatments, personal communication, games, etc. Furthermore, BCI technologies can be utilized in silent communication, as in noisy environments, or situations where any sort of audio-visual communication is infeasible. Among the various brain activity-monitoring modalities in BCI, electroencephalography (EEG) BIBREF5 , BIBREF6 has demonstrated promising potential to differentiate between various brain activities through measurement of related electric fields. EEG is non-invasive, portable, low cost, and provides satisfactory temporal resolution. This makes EEG suitable to realize BCI systems. EEG data, however, is challenging: these data are high dimensional, have poor SNR, and suffer from low spatial resolution and a multitude of artifacts. For these reasons, it is not particularly obvious how to decode the desired information from raw EEG signals. Although the area of BCI based speech intent recognition has received increasing attention among the research community in the past few years, most research has focused on classification of individual speech categories in terms of discrete vowels, phonemes and words BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . This includes categorization of imagined EEG signal into binary vowel categories like /a/, /u/ and rest BIBREF7 , BIBREF8 , BIBREF9 ; binary syllable classes like /ba/ and /ku/ BIBREF1 , BIBREF10 , BIBREF11 , BIBREF12 ; a handful of control words like 'up', 'down', 'left', 'right' and 'select' BIBREF15 or others like 'water', 'help', 'thanks', 'food', 'stop' BIBREF13 , Chinese characters BIBREF14 , etc. Such works mostly involve traditional signal processing or manual feature handcrafting along with linear classifiers (e.g., SVMs). In our recent work BIBREF16 , we introduced deep learning models for classification of vowels and words that achieved 23.45% improvement of accuracy over the baseline. Production of articulatory speech is an extremely complicated process, thereby rendering understanding of the discriminative EEG manifold corresponding to imagined speech highly challenging. As a result, most of the existing approaches failed to achieve satisfactory accuracy on decoding speech tokens from the speech imagery EEG data. Perhaps, for these reasons, very little work has been devoted to relating the brain signals to the underlying articulation. The few exceptions include BIBREF17 , BIBREF18 . In BIBREF17 , Zhao et al. used manually handcrafted features from EEG data, combined with speech audio and facial features to achieve classification of the phonological categories varying based on the articulatory steps. 
However, the imagined speech classification results based on EEG data alone, as reported in BIBREF17, BIBREF18, are not satisfactory in terms of accuracy and reliability. We now turn to describing our proposed models. Proposed Framework The cognitive learning process underlying articulatory speech production involves intermediate feedback loops and the utilization of past information stored in the form of memory, as well as a hierarchical combination of several feature extractors. To this end, we develop a mixed neural network architecture composed of three supervised learning steps and a single unsupervised learning step, discussed in the next subsections and shown in Fig. FIGREF1. We formulate the problem of categorizing EEG data based on speech imagery as a non-linear mapping INLINEFORM0 of a multivariate time-series input sequence INLINEFORM1 to a fixed output INLINEFORM2, i.e., mathematically INLINEFORM3 : INLINEFORM4, where c and t denote the EEG channels and time instants, respectively. Preprocessing step We follow pre-processing steps on the raw EEG data similar to those reported in BIBREF17 (ocular artifact removal using blind source separation, bandpass filtering and subtracting the mean value from each channel), except that we do not perform the Laplacian filtering step, since such high-pass filtering may reduce the information content of the signals in the selected bandwidth. Joint variability of electrodes Multichannel EEG data are high-dimensional multivariate time series whose dimensionality depends on the number of electrodes. It is a major hurdle to optimally encode the information from these EEG data into a lower-dimensional space. In fact, our investigation based on a development set (as we explain later) showed that well-known deep neural networks (e.g., fully connected networks, convolutional neural networks, recurrent neural networks and autoencoders) fail to individually learn such complex feature representations from single-trial EEG data. Besides, we found that instead of using the raw multi-channel high-dimensional EEG, which requires long training times and large computational resources, it is advantageous to first reduce its dimensionality by capturing the information transfer among the electrodes. Instead of the conventional approach of selecting a handful of channels as in BIBREF17, BIBREF18, we address this by computing the channel cross-covariance, resulting in positive semi-definite matrices encoding the connectivity of the electrodes. We define the channel cross-covariance (CCV) between any two electrodes INLINEFORM0 and INLINEFORM1 as: INLINEFORM2. Next, we reject the channels which have significantly lower cross-covariance than auto-covariance values (where auto-covariance refers to the CCV of an electrode with itself). We found this measure to be essential, as the higher cognitive processes underlying speech planning and synthesis involve frequent information exchange between different parts of the brain. Hence, such matrices often contain more discriminative features and hidden information than the raw signals alone. This is essentially different from our previous work BIBREF16, where we extracted per-channel 1-D covariance information and fed it to the networks. We present sample 2-D EEG cross-covariance matrices (from two individuals) in Fig. FIGREF2. CNN & LSTM In order to decode the spatial connections between the electrodes from the channel covariance matrix, we use a CNN BIBREF19, in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers.
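A minimal NumPy sketch of the channel cross-covariance computation and the channel-rejection step described above is given below. The rejection threshold ratio is an assumption, since the paper does not state a numeric criterion.

    import numpy as np

    def channel_cross_covariance(eeg):
        # eeg: (channels, time) single-trial EEG; returns a (channels x channels)
        # positive semi-definite matrix used as the 2-D input to the CNN and LSTM
        eeg = eeg - eeg.mean(axis=1, keepdims=True)   # (re-)center each channel
        return eeg @ eeg.T / (eeg.shape[1] - 1)

    def reject_channels(ccv, ratio=0.1):
        # drop channels whose cross-covariance with every other channel is much
        # smaller than their own auto-covariance (the diagonal entries)
        auto = np.diag(ccv)
        max_cross = np.abs(ccv - np.diag(auto)).max(axis=1)
        keep = max_cross >= ratio * auto
        return ccv[np.ix_(keep, keep)], keep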
The INLINEFORM0 feature map at a given CNN layer with input INLINEFORM1, weight matrix INLINEFORM2 and bias INLINEFORM3 is obtained as: INLINEFORM4. At this first level of the hierarchy, the network is trained with the corresponding labels as target outputs, optimizing a cross-entropy cost function. In parallel, we apply a four-layered recurrent neural network to the channel covariance matrices to explore the hidden temporal features of the electrodes. Namely, we exploit an LSTM BIBREF20 consisting of two fully connected hidden layers stacked with two LSTM layers and trained in a similar manner as the CNN. Deep autoencoder for spatio-temporal information As we found the individually trained parallel networks (CNN and LSTM) to be useful (see Table TABREF12), we suspected that the combination of these two networks could provide a more powerful discriminative spatial and temporal representation of the data than each independent network. As such, we concatenate the last fully connected layer of the CNN with its counterpart in the LSTM to compose a single feature vector based on these two penultimate layers. Ultimately, this forms a joint spatio-temporal encoding of the cross-covariance matrix. In order to further reduce the dimensionality of the spatio-temporal encodings and cancel background noise effects BIBREF21, we train an unsupervised deep autoencoder (DAE) on the fused heterogeneous features produced by the combined CNN and LSTM information. The DAE forms our second level of hierarchy, with 3 encoding and 3 decoding layers, and mean squared error (MSE) as the cost function. Classification with Extreme Gradient Boost At the third level of hierarchy, the discrete latent vector representation of the deep autoencoder is fed into an Extreme Gradient Boosting based classification layer BIBREF22, BIBREF23, motivated by BIBREF21. It is a regularized gradient-boosted decision tree model that performs well on structured problems. Since our EEG-phonological pairwise classification has an internal structure involving individual phonemes and words, it seems to be a reasonable choice of classifier. The classifier receives its input from the latent vectors of the deep autoencoder and is trained in a supervised manner to output the final predicted classes corresponding to the speech imagery. Dataset We evaluate our model on a publicly available dataset, KARA ONE BIBREF17, composed of multimodal data for the stimulus-based, imagined and articulated speech states corresponding to 7 phonemic/syllabic prompts (/iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/) as well as 4 words (pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that, given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18. In order to explore the problem space, we thus specifically target the five binary classification problems addressed in BIBREF17, BIBREF18, i.e., presence/absence of consonants, phonemic nasals, bilabials, high-front vowels and high-back vowels. Training and hyperparameter selection We performed two sets of experiments with the single-trial EEG data. In PHASE-ONE, our goal was to identify the best architectures and hyperparameters for our networks with a reasonable number of runs.
For PHASE-ONE, we randomly shuffled and divided the data (1913 signals from 14 individuals) into train (80%), development (10%) and test sets (10%). In PHASE-TWO, in order to perform a fair comparison with the previous methods reported on the same dataset, we perform a leave-one-subject out cross-validation experiment using the best settings we learn from PHASE-ONE. The architectural parameters and hyperparameters listed in Table TABREF6 were selected through an exhaustive grid-search based on the validation set of PHASE-ONE. We conducted a series of empirical studies starting from single hidden-layered networks for each of the blocks and, based on the validation accuracy, we increased the depth of each given network and selected the optimal parametric set from all possible combinations of parameters. For the gradient boosting classification, we fixed the maximum depth at 10, number of estimators at 5000, learning rate at 0.1, regularization coefficient at 0.3, subsample ratio at 0.8, and column-sample/iteration at 0.4. We did not find any notable change of accuracy while varying other hyperparameters while training gradient boost classifier. Performance analysis and discussion To demonstrate the significance of the hierarchical CNN-LSTM-DAE method, we conducted separate experiments with the individual networks in PHASE-ONE of experiments and summarized the results in Table TABREF12 From the average accuracy scores, we observe that the mixed network performs much better than individual blocks which is in agreement with the findings in BIBREF21 . A detailed analysis on repeated runs further shows that in most of the cases, LSTM alone does not perform better than chance. CNN, on the other hand, is heavily biased towards the class label which sees more training data corresponding to it. Though the situation improves with combined CNN-LSTM, our analysis clearly shows the necessity of a better encoding scheme to utilize the combined features rather than mere concatenation of the penultimate features of both networks. The very fact that our combined network improves the classification accuracy by a mean margin of 14.45% than the CNN-LSTM network indeed reveals that the autoencoder contributes towards filtering out the unrelated and noisy features from the concatenated penultimate feature set. It also proves that the combined supervised and unsupervised neural networks, trained hierarchically, can learn the discriminative manifold better than the individual networks and it is crucial for improving the classification accuracy. In addition to accuracy, we also provide the kappa coefficients BIBREF24 of our method in Fig. FIGREF14 . Here, a higher mean kappa value corresponding to a task implies that the network is able to find better discriminative information from the EEG data beyond random decisions. The maximum above-chance accuracy (75.92%) is recorded for presence/absence of the vowel task and the minimum (49.14%) is recorded for the INLINEFORM0 . To further investigate the feature representation achieved by our model, we plot T-distributed Stochastic Neighbor Embedding (tSNE) corresponding to INLINEFORM0 and V/C classification tasks in Fig. FIGREF8 . We particularly select these two tasks as our model exhibits respectively minimum and maximum performance for these two. The tSNE visualization reveals that the second set of features are more easily separable than the first one, thereby giving a rationale for our performance. 
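A sketch of the gradient-boost configuration and the PHASE-TWO leave-one-subject-out protocol described above, assuming the stated hyperparameters map onto the standard xgboost and scikit-learn APIs; in particular, reading the "regularization coefficient" as `reg_lambda` is an assumption, and the data below are random placeholders for the DAE latent vectors.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from xgboost import XGBClassifier

def evaluate_loso(X, y, subjects):
    """Leave-one-subject-out cross-validation (PHASE-TWO style).
    X: latent DAE vectors, y: binary labels, subjects: subject id per trial."""
    logo = LeaveOneGroupOut()
    accs = []
    for train_idx, test_idx in logo.split(X, y, groups=subjects):
        clf = XGBClassifier(
            max_depth=10,
            n_estimators=5000,
            learning_rate=0.1,
            reg_lambda=0.3,        # assumed mapping of "regularization coefficient"
            subsample=0.8,
            colsample_bytree=0.4,  # column-sample per iteration
        )
        clf.fit(X[train_idx], y[train_idx])
        accs.append(clf.score(X[test_idx], y[test_idx]))
    return np.mean(accs), np.std(accs)

# Toy usage with random data standing in for the 1913 latent vectors.
X = np.random.randn(1913, 64)
y = np.random.randint(0, 2, size=1913)
subjects = np.random.randint(0, 14, size=1913)
mean_acc, std_acc = evaluate_loso(X, y, subjects)
```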
Next, we provide performance comparison of the proposed approach with the baseline methods for PHASE-TWO of our study (cross-validation experiment) in Table TABREF15 . Since the model encounters the unseen data of a new subject for testing, and given the high inter-subject variability of the EEG data, a reduction in the accuracy was expected. However, our network still managed to achieve an improvement of 18.91, 9.95, 67.15, 2.83 and 13.70 % over BIBREF17 . Besides, our best model shows more reliability compared to previous works: The standard deviation of our model's classification accuracy across all the tasks is reduced from 22.59% BIBREF17 and 17.52% BIBREF18 to a mere 5.41%. Conclusion and future direction In an attempt to move a step towards understanding the speech information encoded in brain signals, we developed a novel mixed deep neural network scheme for a number of binary classification tasks from speech imagery EEG data. Unlike previous approaches which mostly deal with subject-dependent classification of EEG into discrete vowel or word labels, this work investigates a subject-invariant mapping of EEG data with different phonological categories, varying widely in terms of underlying articulator motions (eg: involvement or non-involvement of lips and velum, variation of tongue movements etc). Our model takes an advantage of feature extraction capability of CNN, LSTM as well as the deep learning benefit of deep autoencoders. We took BIBREF17 , BIBREF18 as the baseline works investigating the same problem and compared our performance with theirs. Our proposed method highly outperforms the existing methods across all the five binary classification tasks by a large average margin of 22.51%. Acknowledgments This work was funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada and Canadian Institutes for Health Research (CIHR).
What data was presented to the subjects to elicit event-related responses?
7 phonemic/syllabic prompts (/iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/) as well as 4 words (pat, pot, knew and gnaw)
2,379
qasper
3k
Introduction Abusive language refers to any type of insult, vulgarity, or profanity that debases the target; it also can be anything that causes aggravation BIBREF0 , BIBREF1 . Abusive language is often reframed as, but not limited to, offensive language BIBREF2 , cyberbullying BIBREF3 , othering language BIBREF4 , and hate speech BIBREF5 . Recently, an increasing number of users have been subjected to harassment, or have witnessed offensive behaviors online BIBREF6 . Major social media companies (i.e. Facebook, Twitter) have utilized multiple resources—artificial intelligence, human reviewers, user reporting processes, etc.—in effort to censor offensive language, yet it seems nearly impossible to successfully resolve the issue BIBREF7 , BIBREF8 . The major reason of the failure in abusive language detection comes from its subjectivity and context-dependent characteristics BIBREF9 . For instance, a message can be regarded as harmless on its own, but when taking previous threads into account it may be seen as abusive, and vice versa. This aspect makes detecting abusive language extremely laborious even for human annotators; therefore it is difficult to build a large and reliable dataset BIBREF10 . Previously, datasets openly available in abusive language detection research on Twitter ranged from 10K to 35K in size BIBREF9 , BIBREF11 . This quantity is not sufficient to train the significant number of parameters in deep learning models. Due to this reason, these datasets have been mainly studied by traditional machine learning methods. Most recently, Founta et al. founta2018large introduced Hate and Abusive Speech on Twitter, a dataset containing 100K tweets with cross-validated labels. Although this corpus has great potential in training deep models with its significant size, there are no baseline reports to date. This paper investigates the efficacy of different learning models in detecting abusive language. We compare accuracy using the most frequently studied machine learning classifiers as well as recent neural network models. Reliable baseline results are presented with the first comparative study on this dataset. Additionally, we demonstrate the effect of different features and variants, and describe the possibility for further improvements with the use of ensemble models. Related Work The research community introduced various approaches on abusive language detection. Razavi et al. razavi2010offensive applied Naïve Bayes, and Warner and Hirschberg warner2012detecting used Support Vector Machine (SVM), both with word-level features to classify offensive language. Xiang et al. xiang2012detecting generated topic distributions with Latent Dirichlet Allocation BIBREF12 , also using word-level features in order to classify offensive tweets. More recently, distributed word representations and neural network models have been widely applied for abusive language detection. Djuric et al. djuric2015hate used the Continuous Bag Of Words model with paragraph2vec algorithm BIBREF13 to more accurately detect hate speech than that of the plain Bag Of Words models. Badjatiya et al. badjatiya2017deep implemented Gradient Boosted Decision Trees classifiers using word representations trained by deep learning models. Other researchers have investigated character-level representations and their effectiveness compared to word-level representations BIBREF14 , BIBREF15 . As traditional machine learning methods have relied on feature engineering, (i.e. 
n-grams, POS tags, user information) BIBREF1 , researchers have proposed neural-based models with the advent of larger datasets. Convolutional Neural Networks and Recurrent Neural Networks have been applied to detect abusive language, and they have outperformed traditional machine learning classifiers such as Logistic Regression and SVM BIBREF15 , BIBREF16 . However, there are no studies investigating the efficiency of neural models with large-scale datasets over 100K. Methodology This section illustrates our implementations on traditional machine learning classifiers and neural network based models in detail. Furthermore, we describe additional features and variant models investigated. Traditional Machine Learning Models We implement five feature engineering based machine learning classifiers that are most often used for abusive language detection. In data preprocessing, text sequences are converted into Bag Of Words (BOW) representations, and normalized with Term Frequency-Inverse Document Frequency (TF-IDF) values. We experiment with word-level features using n-grams ranging from 1 to 3, and character-level features from 3 to 8-grams. Each classifier is implemented with the following specifications: Naïve Bayes (NB): Multinomial NB with additive smoothing constant 1 Logistic Regression (LR): Linear LR with L2 regularization constant 1 and limited-memory BFGS optimization Support Vector Machine (SVM): Linear SVM with L2 regularization constant 1 and logistic loss function Random Forests (RF): Averaging probabilistic predictions of 10 randomized decision trees Gradient Boosted Trees (GBT): Tree boosting with learning rate 1 and logistic loss function Neural Network based Models Along with traditional machine learning approaches, we investigate neural network based models to evaluate their efficacy within a larger dataset. In particular, we explore Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and their variant models. A pre-trained GloVe BIBREF17 representation is used for word-level features. CNN: We adopt Kim's kim2014convolutional implementation as the baseline. The word-level CNN models have 3 convolutional filters of different sizes [1,2,3] with ReLU activation, and a max-pooling layer. For the character-level CNN, we use 6 convolutional filters of various sizes [3,4,5,6,7,8], then add max-pooling layers followed by 1 fully-connected layer with a dimension of 1024. Park and Fung park2017one proposed a HybridCNN model which outperformed both word-level and character-level CNNs in abusive language detection. In order to evaluate the HybridCNN for this dataset, we concatenate the output of max-pooled layers from word-level and character-level CNN, and feed this vector to a fully-connected layer in order to predict the output. All three CNN models (word-level, character-level, and hybrid) use cross entropy with softmax as their loss function and Adam BIBREF18 as the optimizer. RNN: We use bidirectional RNN BIBREF19 as the baseline, implementing a GRU BIBREF20 cell for each recurrent unit. From extensive parameter-search experiments, we chose 1 encoding layer with 50 dimensional hidden states and an input dropout probability of 0.3. The RNN models use cross entropy with sigmoid as their loss function and Adam as the optimizer. For a possible improvement, we apply a self-matching attention mechanism on RNN baseline models BIBREF21 so that they may better understand the data by retrieving text sequences twice. 
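The feature-engineering pipeline described above can be sketched with scikit-learn as follows; reading "SVM with logistic loss" as a linear SGD model with log loss is an assumption, and the remaining settings follow the listed specifications.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

CLASSIFIERS = {
    # Multinomial NB with additive smoothing constant 1
    "NB": MultinomialNB(alpha=1.0),
    # Linear LR with L2 regularization constant 1 and L-BFGS optimization
    "LR": LogisticRegression(C=1.0, penalty="l2", solver="lbfgs", max_iter=1000),
    # "Linear SVM with L2 regularization and logistic loss", read here as a
    # linear SGD model with log loss -- the exact implementation is an assumption
    "SVM": SGDClassifier(loss="log_loss", penalty="l2"),
    # Averaged probabilistic predictions of 10 randomized decision trees
    "RF": RandomForestClassifier(n_estimators=10),
    # Tree boosting with learning rate 1 and logistic loss
    "GBT": GradientBoostingClassifier(learning_rate=1.0),
}

def build_pipeline(name, level="word"):
    """TF-IDF-normalized BOW features: word 1-3-grams or character 3-8-grams."""
    if level == "word":
        vec = TfidfVectorizer(analyzer="word", ngram_range=(1, 3))
    else:
        vec = TfidfVectorizer(analyzer="char", ngram_range=(3, 8))
    return Pipeline([("tfidf", vec), ("clf", CLASSIFIERS[name])])

# Usage: build_pipeline("LR").fit(train_texts, train_labels)
```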
We also investigate a recently introduced method, Latent Topic Clustering (LTC) BIBREF22 . The LTC method extracts latent topic information from the hidden states of RNN, and uses it for additional information in classifying the text data. Feature Extension While manually analyzing the raw dataset, we noticed that looking at the tweet one has replied to or has quoted, provides significant contextual information. We call these, “context tweets". As humans can better understand a tweet with the reference of its context, our assumption is that computers also benefit from taking context tweets into account in detecting abusive language. As shown in the examples below, (2) is labeled abusive due to the use of vulgar language. However, the intention of the user can be better understood with its context tweet (1). (1) I hate when I'm sitting in front of the bus and somebody with a wheelchair get on. INLINEFORM0 (2) I hate it when I'm trying to board a bus and there's already an as**ole on it. Similarly, context tweet (3) is important in understanding the abusive tweet (4), especially in identifying the target of the malice. (3) Survivors of #Syria Gas Attack Recount `a Cruel Scene'. INLINEFORM0 (4) Who the HELL is “LIKE" ING this post? Sick people.... Huang et al. huang2016modeling used several attributes of context tweets for sentiment analysis in order to improve the baseline LSTM model. However, their approach was limited because the meta-information they focused on—author information, conversation type, use of the same hashtags or emojis—are all highly dependent on data. In order to avoid data dependency, text sequences of context tweets are directly used as an additional feature of neural network models. We use the same baseline model to convert context tweets to vectors, then concatenate these vectors with outputs of their corresponding labeled tweets. More specifically, we concatenate max-pooled layers of context and labeled tweets for the CNN baseline model. As for RNN, the last hidden states of context and labeled tweets are concatenated. Dataset Hate and Abusive Speech on Twitter BIBREF10 classifies tweets into 4 labels, “normal", “spam", “hateful" and “abusive". We were only able to crawl 70,904 tweets out of 99,996 tweet IDs, mainly because the tweet was deleted or the user account had been suspended. Table shows the distribution of labels of the crawled data. Data Preprocessing In the data preprocessing steps, user IDs, URLs, and frequently used emojis are replaced as special tokens. Since hashtags tend to have a high correlation with the content of the tweet BIBREF23 , we use a segmentation library BIBREF24 for hashtags to extract more information. For character-level representations, we apply the method Zhang et al. zhang2015character proposed. Tweets are transformed into one-hot encoded vectors using 70 character dimensions—26 lower-cased alphabets, 10 digits, and 34 special characters including whitespace. Training and Evaluation In training the feature engineering based machine learning classifiers, we truncate vector representations according to the TF-IDF values (the top 14,000 and 53,000 for word-level and character-level representations, respectively) to avoid overfitting. For neural network models, words that appear only once are replaced as unknown tokens. Since the dataset used is not split into train, development, and test sets, we perform 10-fold cross validation, obtaining the average of 5 tries; we divide the dataset randomly by a ratio of 85:5:10, respectively. 
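A minimal PyTorch sketch of the RNN variant of the context-tweet feature described above: the labeled tweet and its context tweet are encoded by the same bidirectional GRU (one layer, 50-dimensional hidden states, input dropout 0.3, as stated), and the last hidden states are concatenated before classification. The embedding size, vocabulary size, and sharing of encoder weights between the two tweets are assumptions.

```python
import torch
import torch.nn as nn

class ContextAwareGRU(nn.Module):
    """Bidirectional GRU classifier that also encodes the context tweet
    (the tweet replied to or quoted) and concatenates the last hidden states."""
    def __init__(self, vocab_size, emb_dim=300, hidden=50, n_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.dropout = nn.Dropout(0.3)          # input dropout from the text
        self.gru = nn.GRU(emb_dim, hidden, num_layers=1,
                          bidirectional=True, batch_first=True)
        self.out = nn.Linear(4 * hidden, n_classes)  # 2 directions x 2 tweets

    def encode(self, token_ids):
        x = self.dropout(self.embed(token_ids))
        _, h_n = self.gru(x)                    # h_n: (2, batch, hidden)
        return torch.cat([h_n[0], h_n[1]], dim=1)

    def forward(self, tweet_ids, context_ids):
        fused = torch.cat([self.encode(tweet_ids),
                           self.encode(context_ids)], dim=1)
        return self.out(fused)

# Usage with dummy token-id tensors (batch of 8, sequence length 30).
model = ContextAwareGRU(vocab_size=30000)
logits = model(torch.randint(0, 30000, (8, 30)),
               torch.randint(0, 30000, (8, 30)))
```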
In order to evaluate the overall performance, we calculate the weighted average of precision, recall, and F1 scores of all four labels, “normal”, “spam”, “hateful”, and “abusive”. Empirical Results As shown in Table , neural network models are more accurate than feature engineering based models (i.e. NB, SVM, etc.) except for the LR model—the best LR model has the same F1 score as the best CNN model. Among traditional machine learning models, the most accurate in classifying abusive language is the LR model followed by ensemble models such as GBT and RF. Character-level representations improve F1 scores of SVM and RF classifiers, but they have no positive effect on other models. For neural network models, RNN with LTC modules have the highest accuracy score, but there are no significant improvements from its baseline model and its attention-added model. Similarly, HybridCNN does not improve the baseline CNN model. For both CNN and RNN models, character-level features significantly decrease the accuracy of classification. The use of context tweets generally have little effect on baseline models, however they noticeably improve the scores of several metrics. For instance, CNN with context tweets score the highest recall and F1 for “hateful" labels, and RNN models with context tweets have the highest recall for “abusive" tweets. Discussion and Conclusion While character-level features are known to improve the accuracy of neural network models BIBREF16 , they reduce classification accuracy for Hate and Abusive Speech on Twitter. We conclude this is because of the lack of labeled data as well as the significant imbalance among the different labels. Unlike neural network models, character-level features in traditional machine learning classifiers have positive results because we have trained the models only with the most significant character elements using TF-IDF values. Variants of neural network models also suffer from data insufficiency. However, these models show positive performances on “spam" (14%) and “hateful" (4%) tweets—the lower distributed labels. The highest F1 score for “spam" is from the RNN-LTC model (0.551), and the highest for “hateful" is CNN with context tweets (0.309). Since each variant model excels in different metrics, we expect to see additional improvements with the use of ensemble models of these variants in future works. In this paper, we report the baseline accuracy of different learning models as well as their variants on the recently introduced dataset, Hate and Abusive Speech on Twitter. Experimental results show that bidirectional GRU networks with LTC provide the most accurate results in detecting abusive language. Additionally, we present the possibility of using ensemble models of variant models and features for further improvements. Acknowledgments K. Jung is with the Department of Electrical and Computer Engineering, ASRI, Seoul National University, Seoul, Korea. This work was supported by the National Research Foundation of Korea (NRF) funded by the Korea government (MSIT) (No. 2016M3C4A7952632), the Technology Innovation Program (10073144) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea). We would also like to thank Yongkeun Hwang and Ji Ho Park for helpful discussions and their valuable insights.
What learning models are used on the dataset?
Naïve Bayes (NB), Logistic Regression (LR), Support Vector Machine (SVM), Random Forests (RF), Gradient Boosted Trees (GBT), Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN)
2,074
qasper
3k
Introduction Pre-training of language models has been shown to provide large improvements for a range of language understanding tasks BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . The key idea is to train a large generative model on vast corpora and use the resulting representations on tasks for which only limited amounts of labeled data is available. Pre-training of sequence to sequence models has been previously investigated for text classification BIBREF4 but not for text generation. In neural machine translation, there has been work on transferring representations from high-resource language pairs to low-resource settings BIBREF5 . In this paper, we apply pre-trained representations from language models to language generation tasks that can be modeled by sequence to sequence architectures. Previous work on integrating language models with sequence to sequence models focused on the decoder network and added language model representations right before the output of the decoder BIBREF6 . We extend their study by investigating several other strategies such as inputting ELMo-style representations BIBREF0 or fine-tuning the language model (§ SECREF2 ). Our experiments rely on strong transformer-based language models trained on up to six billion tokens (§ SECREF3 ). We present a detailed study of various strategies in different simulated labeled training data scenarios and observe the largest improvements in low-resource settings but gains of over 1 BLEU are still possible when five million sentence-pairs are available. The most successful strategy to integrate pre-trained representations is as input to the encoder network (§ SECREF4 ). Strategies to add representations We consider augmenting a standard sequence to sequence model with pre-trained representations following an ELMo-style regime (§ SECREF2 ) as well as by fine-tuning the language model (§ SECREF3 ). ELMo augmentation The ELMo approach of BIBREF0 forms contextualized word embeddings based on language model representations without adjusting the actual language model parameters. Specifically, the ELMo module contains a set of parameters INLINEFORM0 to form a linear combination of the INLINEFORM1 layers of the language model: ELMo = INLINEFORM2 where INLINEFORM3 is a learned scalar, INLINEFORM4 is a constant to normalize the INLINEFORM5 to sum to one and INLINEFORM6 is the output of the INLINEFORM7 -th language model layer; the module also considers the input word embeddings of the language model. We also apply layer normalization BIBREF7 to each INLINEFORM8 before computing ELMo vectors. We experiment with an ELMo module to input contextualized embeddings either to the encoder () or the decoder (). This provides word representations specific to the current input sentence and these representations have been trained on much more data than is available for the text generation task. Fine-tuning approach Fine-tuning the pre-trained representations adjusts the language model parameters by the learning signal of the end-task BIBREF1 , BIBREF3 . We replace learned input word embeddings in the encoder network with the output of the language model (). Specifically, we use the language model representation of the layer before the softmax and feed it to the encoder. We also add dropout to the language model output. Tuning separate learning rates for the language model and the sequence to sequence model may lead to better performance but we leave this to future work. 
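A minimal PyTorch sketch of the ELMo-style scalar mixing just described: each language-model layer is layer-normalized, combined with softmax-normalized learned weights, and scaled by a learned gamma. The layer count and hidden dimension below are placeholders, not the settings of the paper.

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """ELMo-style combination of L language-model layers:
    ELMo = gamma * sum_j softmax(w)_j * LayerNorm(h_j)."""
    def __init__(self, num_layers, dim):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))
        self.gamma = nn.Parameter(torch.ones(1))
        self.norms = nn.ModuleList(nn.LayerNorm(dim) for _ in range(num_layers))

    def forward(self, layer_outputs):
        # layer_outputs: list of (batch, seq_len, dim) tensors, one per LM layer
        # (including the input word embeddings, as in the text).
        s = torch.softmax(self.weights, dim=0)
        mixed = sum(s[j] * self.norms[j](h) for j, h in enumerate(layer_outputs))
        return self.gamma * mixed

# Usage with dummy states from a 6-layer LM plus its input embeddings.
states = [torch.randn(2, 10, 512) for _ in range(7)]
elmo_embeddings = ScalarMix(num_layers=7, dim=512)(states)
```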
However, we do tune the number of encoder blocks INLINEFORM0 as we found this important to obtain good accuracy for this setting. We apply the same strategy to the decoder: we input language model representations to the decoder network and fine-tune the language model when training the sequence to sequence model (). Datasets We train language models on two languages: One model is estimated on the German newscrawl distributed by WMT'18 comprising 260M sentences or 6B tokens. Another model is trained on the English newscrawl data comprising 193M sentences or 5B tokens. We learn a joint Byte-Pair-Encoding (BPE; Sennrich et al., 2016) vocabulary of 37K types on the German and English newscrawl and train the language models with this vocabulary. We consider two benchmarks: Most experiments are run on the WMT'18 English-German (en-de) news translation task and we validate our findings on the WMT'18 English-Turkish (en-tr) news task. For WMT'18 English-German, the training corpus consists of all available bitext excluding the ParaCrawl corpus and we remove sentences longer than 250 tokens as well as sentence-pairs with a source/target length ratio exceeding 1.5. This results in 5.18M sentence pairs. We tokenize all data with the Moses tokenizer BIBREF8 and apply the BPE vocabulary learned on the monolingual corpora. For WMT'18 English-Turkish, we use all of the available bitext comprising 208K sentence-pairs without any filtering. We develop on newstest2017 and test on newstest2018. For en-tr we only experiment with adding representations to the encoder and therefore apply the language model vocabulary to the source side. For the target vocabulary we learn a BPE code with 32K merge operations on the Turkish side of the bitext. Both datasets are evaluated in terms of case-sensitive de-tokenized BLEU BIBREF9 , BIBREF10 . We consider the abstractive document summarization task comprising over 280K news articles paired with multi-sentence summaries. is a widely used dataset for abstractive text summarization. Following BIBREF11 , we report results on the non-anonymized version of rather than the entity-anonymized version BIBREF12 , BIBREF13 because the language model was trained on full text. Articles are truncated to 400 tokens BIBREF11 and we use a BPE vocabulary of 32K types BIBREF14 . We evaluate in terms of F1-Rouge, that is Rouge-1, Rouge-2 and Rouge-L BIBREF15 . Language model pre-training We consider two types of architectures: a bi-directional language model to augment the sequence to sequence encoder and a uni-directional model to augment the decoder. Both use self-attention BIBREF16 and the uni-directional model contains INLINEFORM0 transformer blocks, followed by a word classifier to predict the next word on the right. The bi-directional model solves a cloze-style token prediction task at training time BIBREF17 . The model consists of two towers, the forward tower operates left-to-right and the tower operating right-to-left as backward tower; each tower contains INLINEFORM1 transformer blocks. The forward and backward representations are combined via a self-attention module and the output of this module is used to predict the token at position INLINEFORM2 . The model has access to the entire input surrounding the current target token. Models use the standard settings for the Big Transformer BIBREF16 . The bi-directional model contains 353M parameters and the uni-directional model 190M parameters. 
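The bitext filtering rule stated above (drop sentences longer than 250 tokens and pairs whose length ratio exceeds 1.5) can be sketched as follows; counting length by whitespace tokens after Moses tokenization is an assumption about the exact implementation.

```python
def keep_pair(src, tgt, max_len=250, max_ratio=1.5):
    """WMT'18 en-de bitext filter: drop over-long sentences and pairs with
    an extreme source/target length ratio."""
    len_s, len_t = len(src.split()), len(tgt.split())
    if len_s == 0 or len_t == 0:
        return False
    if len_s > max_len or len_t > max_len:
        return False
    return max(len_s, len_t) / min(len_s, len_t) <= max_ratio

# Usage on already-tokenized sentence pairs.
pairs = [("a small test sentence .", "ein kleiner Testsatz .")]
filtered = [(s, t) for s, t in pairs if keep_pair(s, t)]
```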
Both models were trained for 1M steps using Nesterov's accelerated gradient BIBREF18 with momentum INLINEFORM3 following BIBREF19 . The learning rate is linearly warmed up from INLINEFORM4 to 1 for 16K steps and then annealed using a cosine learning rate schedule with a single phase to 0.0001 BIBREF20 . We train on 32 Nvidia V100 SXM2 GPUs and use the NCCL2 library as well as the torch distributed package for inter-GPU communication. Training relies on 16-bit floating point operations BIBREF21 and it took six days for the bi-directional model and four days for the uni-directional model. Sequence to sequence model We use the transformer implementation of the fairseq toolkit BIBREF22 . The WMT en-de and en-tr experiments are based on the Big Transformer sequence to sequence architecture with 6 blocks in the encoder and decoder. For abstractive summarization we use a base transformer model BIBREF16 . We tune dropout values of between 0.1 and 0.4 on the validation set. Models are optimized with Adam BIBREF23 using INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 and we use the same learning rate schedule as BIBREF16 ; we perform 10K-200K depending on bitext size. All models use label smoothing with a uniform prior distribution over the vocabulary INLINEFORM3 BIBREF24 , BIBREF25 . We run experiments on 8 GPUs and generate translations with a beam of size 5. Machine translation We first present a comparison of the various strategies in different simulated parallel corpus size settings. For each experiment, we tune the dropout applied to the language model representations, and we reduce the number of optimizer steps for smaller bitext setups as models converge faster; all other hyper-parameters are equal between setups. Our baseline is a Big Transformer model and we also consider a variant where we share token embeddings between the encoder and decoder (; Inan et al., 2016; Press & Wolf, 2016). Figure FIGREF10 shows results averaged over six test sets relative to the baseline which does not share source and target embeddings (Appendix SECREF6 shows a detailed breakdown). performs very well with little labeled data but the gains erode to practically zero in large bitext settings. Pre-trained language model representations are most effective in low bitext setups. The best performing strategy is ELMo embeddings input to the encoder (). This improves the baseline by 3.8 BLEU in the 160K bitext setting and it still improves the 5.2M setting by over 1 BLEU. We further improve by sharing learned word representations in the decoder by tying input and output embeddings (). This configuration performs even better than with a gain of 5.3 BLEU in the 160K setup. Sharing decoder embeddings is equally applicable to . Language model representations are much less effective in the decoder: improves the 160K bitext setup but yields no improvements thereafter and performs even worse. We conjecture that pre-trained representations give much easier wins in the encoder. Table TABREF14 shows additional results on newstest2018. Pre-trained representations mostly impacts the training time of the sequence to sequence model (see Appendix SECREF7 ): slows throughput during training by about 5.3x and is even slower because of the need to backpropagate through the LM for fine-tuning (9.2x). However, inference is only 12-14% slower than the baseline when adding pre-trained embeddings to the encoder (, ). This is because the LM computation can be paralelized for all input tokens. 
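A sketch of the language-model learning-rate schedule described above: linear warmup to the peak rate over 16K steps, then single-phase cosine annealing to 0.0001 over the remaining 1M steps. The starting learning rate is not legible in the excerpt (it appears as INLINEFORM4), so the value below is a placeholder.

```python
import math

def lm_learning_rate(step, warmup_steps=16_000, total_steps=1_000_000,
                     lr_start=1e-7, lr_peak=1.0, lr_end=1e-4):
    """Linear warmup followed by single-phase cosine annealing.
    lr_start is a placeholder; the excerpt does not give the exact value."""
    if step < warmup_steps:
        return lr_start + (lr_peak - lr_start) * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return lr_end + 0.5 * (lr_peak - lr_end) * (1 + math.cos(math.pi * progress))

# lm_learning_rate(0), lm_learning_rate(16_000), lm_learning_rate(1_000_000)
# give roughly lr_start, lr_peak and lr_end respectively.
```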
Inference is much slower when adding representations to the decoder because the LM needs to be invoked repeatedly. Our current implementation does not cache LM operations for the previous state and can be made much faster. The baseline uses a BPE vocabulary estimated on the language model corpora (§ SECREF3 ). Appendix SECREF6 shows that this vocabulary actually leads to sligtly better performance than a joint BPE code learned on the bitext as is usual. Next, we validate our findings on the WMT'18 English-Turkish task for which the bitext is truly limited (208K sentence-pairs). We use the language model vocab for the the English side of the bitext and a BPE vocabulary learned on the Turkish side. Table TABREF15 shows that ELMo embeddings for the encoder improve English-Turkish translation. Abstractive summarization Following BIBREF11 , we experiment on the non-anonymized version of . When generating summaries, we follow standard practice of tuning the maximum output length and disallow repeating the same trigram BIBREF27 , BIBREF14 . For this task we train language model representations on the combination of newscrawl and the training data. Table TABREF16 shows that pre-trained embeddings can significantly improve on top of a strong baseline transformer. We also compare to BIBREF26 who use a task-specific architecture compared to our generic sequence to sequence baseline. Pre-trained representations are complementary to their method. Conclusion We presented an analysis of different strategies to add pre-trained language model representations to sequence to sequence models for neural machine translation and abstractive document summarization. Adding pre-trained representations is very effective for the encoder network and while returns diminish when more labeled data becomes available, we still observe improvements when millions of examples are available. In future research we will investigate ways to improve the decoder with pre-trained representations.
What language model architectures are used?
a bi-directional transformer language model to augment the encoder and a uni-directional transformer language model to augment the decoder
1,914
qasper
3k
Introduction and Background Many reinforcement learning algorithms are designed for relatively small discrete or continuous action spaces and so have trouble scaling. Text-adventure games—or interaction fictions—are simulations in which both an agents' state and action spaces are in textual natural language. An example of a one turn agent interaction in the popular text-game Zork1 can be seen in Fig. FIGREF1. Text-adventure games provide us with multiple challenges in the form of partial observability, commonsense reasoning, and a combinatorially-sized state-action space. Text-adventure games are structured as long puzzles or quests, interspersed with bottlenecks. The quests can usually be completed through multiple branching paths. However, games can also feature one or more bottlenecks. Bottlenecks are areas that an agent must pass through in order to progress to the next section of the game regardless of what path the agent has taken to complete that section of the quest BIBREF0. In this work, we focus on more effectively exploring this space and surpassing these bottlenecks—building on prior work that focuses on tackling the other problems. Formally, we use the definition of text-adventure games as seen in BIBREF1 and BIBREF2. These games are partially observable Markov decision processes (POMDPs), represented as a 7-tuple of $\langle S,T,A,\Omega , O,R, \gamma \rangle $ representing the set of environment states, mostly deterministic conditional transition probabilities between states, the vocabulary or words used to compose text commands, observations returned by the game, observation conditional probabilities, reward function, and the discount factor respectively. For our purposes, understanding the exact state and action spaces we use in this work is critical and so we define each of these in relative depth. Action-Space. To solve Zork1, the cannonical text-adventure games, requires the generation of actions consisting of up to five-words from a relatively modest vocabulary of 697 words recognized by the game’s parser. This results in $\mathcal {O}(697^5)={1.64e14}$ possible actions at every step. To facilitate text-adventure game playing, BIBREF2 introduce Jericho, a framework for interacting with text-games. They propose a template-based action space in which the agent first selects a template, consisting of an action verb and preposition, and then filling that in with relevant entities $($e.g. $[get]$ $ [from] $ $)$. Zork1 has 237 templates, each with up to two blanks, yielding a template-action space of size $\mathcal {O}(237 \times 697^2)={1.15e8}$. This space is still far larger than most used by previous approaches applying reinforcement learning to text-based games. State-Representation. Prior work has shown that knowledge graphs are effective in terms of dealing with the challenges of partial observability $($BIBREF3 BIBREF3; BIBREF4$)$. A knowledge graph is a set of 3-tuples of the form $\langle subject, relation, object \rangle $. These triples are extracted from the observations using Stanford's Open Information Extraction (OpenIE) BIBREF5. Human-made text-adventure games often contain relatively complex semi-structured information that OpenIE is not designed to parse and so they add additional rules to ensure that the correct information is parsed. The graph itself is more or less a map of the world, with information about objects' affordances and attributes linked to the rooms that they are place in a map. 
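Before continuing with the knowledge-graph representation, a quick arithmetic check of the action-space sizes quoted above reproduces the $\mathcal {O}(697^5)$ and $\mathcal {O}(237 \times 697^2)$ figures.

```python
# Quick check of the Zork1 action-space sizes quoted in the text.
vocab, max_words = 697, 5
templates, max_blanks = 237, 2

word_level_actions = vocab ** max_words              # ~1.64e14
template_actions = templates * vocab ** max_blanks   # ~1.15e8

print(f"{word_level_actions:.2e} {template_actions:.2e}")
```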
The graph also makes a distinction with respect to items that are in the agent's possession or in their immediate surrounding environment. An example of what the knowledge graph looks like and specific implementation details can be found in Appendix SECREF14. BIBREF6 introduce the KG-A2C, which uses a knowledge graph based state-representation to aid in the section of actions in a combinatorially-sized action-space—specifically they use the knowledge graph to constrain the kinds of entities that can be filled in the blanks in the template action-space. They test their approach on Zork1, showing the combination of the knowledge graph and template action selection resulted in improvements over existing methods. They note that their approach reaches a score of 40 which corresponds to a bottleneck in Zork1 where the player is eaten by a “grue” (resulting in negative reward) if the player has not first lit a lamp. The lamp must be lit many steps after first being encountered, in a different section of the game; this action is necessary to continue exploring but doesn’t immediately produce any positive reward. That is, there is a long term dependency between actions that is not immediately rewarded, as seen in Figure FIGREF1. Others using artificially constrained action spaces also report an inability to pass through this bottleneck BIBREF7, BIBREF8. They pose a significant challenge for these methods because the agent does not see the correct action sequence to pass the bottleneck enough times. This is in part due to the fact that for that sequence to be reinforced, the agent needs to reach the next possible reward beyond the bottleneck. More efficient exploration strategies are required to pass bottlenecks. Our contributions are two-fold. We first introduce a method that detects bottlenecks in text-games using the overall reward gained and the knowledge graph state. This method freezes the policy used to reach the bottleneck and restarts the training from there on out, additionally conducting a backtracking search to ensure that a sub-optimal policy has not been frozen. The second contribution explore how to leverage knowledge graphs to improve existing exploration algorithms for dealing with combinatorial action-spaces such as Go-Explore BIBREF9. We additionally present a comparative ablation study analyzing the performance of these methods on the popular text-game Zork1. Exploration Methods In this section, we describe methods to explore combinatorially sized action spaces such as text-games—focusing especially on methods that can deal with their inherent bottleneck structure. We first describe our method that explicitly attempts to detect bottlenecks and then describe how an exploration algorithm such as Go Explore BIBREF9 can leverage knowledge graphs. KG-A2C-chained An example of a bottleneck can be seen in Figure FIGREF1. We extend the KG-A2C algorithm as follows. First, we detect bottlenecks as states where the agent is unable to progress any further. We set a patience parameter and if the agent has not seen a higher score in patience steps, the agent assumes it has been limited by a bottleneck. Second, when a bottleneck is found, we freeze the policy that gets the agent to the state with the highest score. The agent then begins training a new policy from that particular state. Simply freezing the policy that led to the bottleneck, however, can potentially result in a policy one that is globally sub-optimal. 
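A minimal sketch of the patience-based bottleneck detection just described (the backtracking refinement is discussed next); the patience value and the score history below are illustrative, not the authors' settings.

```python
def detect_bottleneck(score_history, patience):
    """Return True if the agent has not improved its best score in the last
    `patience` steps -- the signal used to freeze the current policy and
    restart training from the highest-scoring state."""
    if len(score_history) <= patience:
        return False
    best_so_far = max(score_history[:-patience])
    recent_best = max(score_history[-patience:])
    return recent_best <= best_so_far

# Example: the score plateaus at 40 (the dark-cellar bottleneck in Zork1).
history = [0, 5, 10, 25, 35, 40] + [40] * 50
print(detect_bottleneck(history, patience=30))  # True
```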
We therefore employ a backtracking strategy that restarts exploration from each of the $n$ previous steps—searching for a more optimal policy that reaches that bottleneck. At each step, we keep track of a buffer of $n$ states and admissible actions that led up to that locally optimal state. We force the agent to explore from this state to attempt to drive it out of the local optima. If it is further unable to find itself out of this local optima, we refresh the training process again, but starting at the state immediately before the agent reaches the local optima. If this continues to fail, we continue to iterate through this buffer of seen states states up to that local optima until we either find a more optimal state or we run out of states to refresh from, in which we terminate the training algorithm. KG-A2C-Explore Go-Explore BIBREF9 is an algorithm that is designed to keep track of sub-optimal and under-explored states in order to allow the agent to explore upon more optimal states that may be a result of sparse rewards. The Go-Explore algorithm consists of two phases, the first to continuously explore until a set of promising states and corresponding trajectories are found on the basis of total score, and the second to robustify this found policy against potential stochasticity in the game. Promising states are defined as those states when explored from will likely result in higher reward trajectories. Since the text games we are dealing with are mostly deterministic, with the exception of Zork in later stages, we only focus on using Phase 1 of the Go-Explore algorithm to find an optimal policy. BIBREF10 look at applying Go-Explore to text-games on a set of simpler games generated using the game generation framework TextWorld BIBREF1. Instead of training a policy network in parallel to generate actions used for exploration, they use a small set of “admissible actions”—actions guaranteed to change the world state at any given step during Phase 1—to explore and find high reward trajectories. This space of actions is relatively small (of the order of $10^2$ per step) and so finding high reward trajectories in larger action-spaces such as in Zork would be infeasible Go-Explore maintains an archive of cells—defined as a set of states that map to a single representation—to keep track of promising states. BIBREF9 simply encodes each cell by keeping track of the agent's position and BIBREF10 use the textual observations encoded by recurrent neural network as a cell representation. We improve on this implementation by training the KG-A2C network in parallel, using the snapshot of the knowledge graph in conjunction with the game state to further encode the current state and use this as a cell representation. At each step, Go-Explore chooses a cell to explore at random (weighted by score to prefer more advanced cells). The KG-A2C will run for a number of steps, starting with the knowledge graph state and the last seen state of the game from the cell. This will generate a trajectory for the agent while further training the KG-A2C at each iteration, creating a new representation for the knowledge graph as well as a new game state for the cell. After expanding a cell, Go-Explore will continue to sample cells by weight to continue expanding its known states. At the same time, KG-A2C will benefit from the heuristics of selecting preferred cells and be trained on promising states more often. 
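A minimal sketch of the Phase-1 cell archive described above, where each cell is keyed by a state representation (here an opaque string standing in for the knowledge-graph snapshot plus game state) and cells are sampled with score-based weighting; the exact weighting scheme and stored fields are assumptions.

```python
import random

class CellArchive:
    """Minimal Go-Explore Phase-1 archive of promising states."""
    def __init__(self):
        self.cells = {}  # key -> dict(score, state, kg, trajectory)

    def add(self, key, score, state, kg, trajectory):
        # Keep the highest-scoring entry seen for this cell representation.
        if key not in self.cells or score > self.cells[key]["score"]:
            self.cells[key] = dict(score=score, state=state, kg=kg,
                                   trajectory=trajectory)

    def sample(self):
        # Weight by score so more advanced cells are expanded more often
        # (the exact weighting used by the authors is an assumption here).
        keys = list(self.cells)
        weights = [1.0 + max(self.cells[k]["score"], 0) for k in keys]
        return self.cells[random.choices(keys, weights=weights, k=1)[0]]

archive = CellArchive()
archive.add("kitchen|lamp-unlit", 10, state="...", kg=None, trajectory=["go north"])
archive.add("cellar|lamp-lit", 45, state="...", kg=None, trajectory=["..."])
cell_to_expand = archive.sample()
```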
Evaluation We compare our two exploration strategies to the following baselines and ablations: KG-A2C This is the exact same method presented in BIBREF6 with no modifications. A2C Represents the same approach as KG-A2C but with all the knowledge graph components removed. The state representation is text only encoded using recurrent networks. A2C-chained Is a variation on KG-A2C-chained where we use our policy chaining approach with the A2C method to train the agent instead of KG-A2C. A2C-Explore Uses A2C in addition to the exploration strategy seen in KG-A2C-Explore. The cell representations here are defined in terms of the recurrent network based encoding of the textual observation. Figure FIGREF10 shows that agents utilizing knowledge-graphs in addition to either enhanced exploration method far outperform the baseline A2C and KG-A2C. KG-A2C-chained and KG-A2C-Explore both pass the bottleneck of a score of 40, whereas A2C-Explore gets to the bottleneck but cannot surpass it. There are a couple of key insights that can be drawn from these results The first is that the knowledge graph appears to be critical; it is theorized to help with partial observability. However the knowledge graph representation isn't sufficient in that the knowledge graph representation without enhanced exploration methods cannot surpass the bottleneck. A2C-chained—which explores without a knowledge graph—fails to even outperform the baseline A2C. We hypothesize that this is due to the knowledge graph aiding implicitly in the sample efficiency of bottleneck detection and subsequent exploration. That is, exploring after backtracking from a potentially detected bottleneck is much more efficient in the knowledge graph based agent. The Go-Explore based exploration algorithm sees less of a difference between agents. A2C-Explore converges more quickly, but to a lower reward trajectory that fails to pass the bottleneck, whereas KG-A2C-Explore takes longer to reach a similar reward but consistently makes it through the bottleneck. The knowledge graph cell representation appears to thus be a better indication of what a promising state is as opposed to just the textual observation. Comparing the advanced exploration methods when using the knowledge graph, we see that both agents successfully pass the bottleneck corresponding to entering the cellar and lighting the lamp and reach comparable scores within a margin of error. KG-A2C-chained is significantly more sample efficient and converges faster. We can infer that chaining policies by explicitly detecting bottlenecks lets us pass it more quickly than attempting to find promising cell representations with Go-Explore. This form of chained exploration with backtracking is particularly suited to sequential decision making problems that can be represented as acyclic directed graphs as in Figure FIGREF1. Appendix ::: Zork1 Zork1 is one of the first text-adventure games and heavily influences games released later in terms of narrative style and game structure. It is a dungeon crawler where the player must explore a vast world and collect a series of treasures. It was identified by BIBREF2 as a moonshot game and has been the subject of much work in leaning agents BIBREF12, BIBREF7, BIBREF11, BIBREF8. Rewards are given to the player when they collect treasures as well as when important intermediate milestones needed to further explore the world are passed. Figure FIGREF15 and Figure FIGREF1 show us a map of the world of Zork1 and the corresponding quest structure. 
The bottleneck seen at a score of around 40 is when the player first enters the cellar on the right side of the map. The cellar is dark and you need to immediately light the lamp to see anything. Attempting to explore the cellar in the dark results in you being instantly killed by a monster known as a “grue”. Appendix ::: Knowledge Graph Rules We make no changes from the graph update rules used by BIBREF6. Candidate interactive objects are identified by performing part-of-speech tagging on the current observation, identifying singular and proper nouns as well as adjectives, and are then filtered by checking if they can be examined using the command $examine$ $OBJ$. Only the interactive objects not found in the inventory are linked to the node corresponding to the current room and the inventory items are linked to the “you” node. The only other rule applied uses the navigational actions performed by the agent to infer the relative positions of rooms, e.g. $\langle kitchen,down,cellar \rangle $ when the agent performs $go$ $down$ when in the kitchen to move to the cellar. Appendix ::: Hyperparameters Hyperparameters used for our agents are given below. Patience and buffer size are used for the policy chaining method as described in Section SECREF2. Cell step size is a parameter used for Go-Explore and describes how many steps are taken when exploring in a given cell state. Base hyperparameters for KG-A2C are taken from BIBREF6 and the same parameters are used for A2C.
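A small sketch of the navigation-based graph update rule described above, using networkx as an assumed graph container; only the "go <direction>" rule is shown, and the room names are illustrative.

```python
import networkx as nx

def update_graph_on_move(kg, current_room, action, new_room):
    """Add the triple <current_room, direction, new_room> when a navigational
    action changes the room, e.g. <kitchen, down, cellar>."""
    if action.startswith("go "):
        direction = action.split(" ", 1)[1]
        kg.add_edge(current_room, new_room, relation=direction)
    return kg

kg = nx.MultiDiGraph()
kg = update_graph_on_move(kg, "kitchen", "go down", "cellar")
print(list(kg.edges(data=True)))  # [('kitchen', 'cellar', {'relation': 'down'})]
```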
What are the results from these proposed strategies?
Reward of 11.8 for the A2C-chained model, 41.8 for the KG-A2C-chained model, 40 for A2C-Explore and 44 for KG-A2C-Explore.
2,443
qasper
3k
Introduction Recent years have seen unprecedented progress for Natural Language Processing (NLP) on almost every NLP subtask. Even though low-resource settings have also been explored, this progress has overwhelmingly been observed in languages with significant data resources that can be leveraged to train deep neural networks. Low-resource languages still lag behind. Endangered languages pose an additional challenge. The process of documenting an endangered language typically includes the creation of word lists, audio and video recordings, notes, or grammar fragments, with the created resources then stored in large online linguistics archives. This process is often hindered by the Transcription Bottleneck: the linguistic fieldworker and the language community may not have time to transcribe all of the recordings and may only transcribe segments that are linguistically salient for publication or culturally significant for the creation of community resources. With this work we make publicly available a large corpus in Mapudungun, a language of the indigenous Mapuche people of southern Chile and western Argentina. We hope to ameliorate the resource gap and the transcription bottleneck in two ways. First, we are providing a larger data set than has previously been available, and second, we are providing baselines for NLP tasks (speech recognition, speech synthesis, and machine translation). In providing baselines and datasets splits, we hope to further facilitate research on low-resource NLP for this language through our data set. Research on low-resource speech recognition is particularly important in relieving the transcription bottleneck, while tackling the research challenges that speech synthesis and machine translation pose for such languages could lead to such systems being deployed to serve more under-represented communities. The Mapudungun Language Mapudungun (iso 639-3: arn) is an indigenous language of the Americas spoken natively in Chile and Argentina, with an estimated 100 to 200 thousand speakers in Chile and 27 to 60 thousand speakers in Argentina BIBREF0. It is an isolate language and is classified as threatened by Ethnologue, hence the critical importance of all documentary efforts. Although the morphology of nouns is relatively simple, Mapudungun verb morphology is highly agglutinative and complex. Some analyses provide as many as 36 verb suffix slots BIBREF1. A typical complex verb form occurring in our corpus of spoken Mapudungun consists of five or six morphemes. Mapudungun has several interesting grammatical properties. It is a polysynthetic language in the sense of BIBREF2; see BIBREF3 for explicit argumentation. As with other polysynthetic languages, Mapudungun has Noun Incorporation; however, it is unique insofar as the Noun appears to the right of the Verb, instead of to the left, as in most polysynthetic languages BIBREF4. One further distinction of Mapudungun is that, whereas other polysynthetic languages are characterized by a lack of infinitives, Mapudungun has infinitival verb forms; that is, while subordinate clauses in Mapudungun closely resemble possessed nominals and may occur with an analytic marker resembling possessor agreement, there is no agreement inflection on the verb itself. One further remarkable property of Mapudungun is its inverse voice system of agreement, whereby the highest agreement is with the argument highest in an animacy hierarchy regardless of thematic role BIBREF5. 
The Resource The resource is comprised of 142 hours of spoken Mapudungun that was recorded during the AVENUE project BIBREF6 in 2001 to 2005. The data was recorded under a partnership between the AVENUE project, funded by the US National Science Foundation at Carnegie Mellon University, the Chilean Ministry of Education (Mineduc), and the Instituto de Estudios Indígenas at Universidad de La Frontera, originally spanning 170 hours of audio. We have recently cleaned the data and are releasing it publicly for the first time (although it has been shared with individual researchers in the past) along with NLP baselines. The recordings were transcribed and translated into Spanish at the Instituto de Estudios Indígenas at Universidad de La Frontera. The corpus covers three dialects of Mapudungun: about 110 hours of Nguluche, 20 hours of Lafkenche and 10 hours of Pewenche. The three dialects are quite similar, with some minor semantic and phonetic differences. The fourth traditionally distinguished dialect, Huilliche, has several grammatical differences from the other three and is classified by Ethnologue as a separate language, iso 639-3: huh, and as nearly extinct. The recordings are restricted to a single domain: primary, preventive, and treatment health care, including both Western and Mapuche traditional medicine. The recording sessions were conducted as interactive conversations so as to be natural in Mapuche culture, and they were open-ended, following an ethnographic approach. The interviewer was trained in these methods along with the use of the digital recording systems that were available at the time. We also followed human subject protocol. Each person signed a consent form to release the recordings for research purposes and the data have been accordingly anonymized. Because Machi (traditional Mapuche healers) were interviewed, we asked the transcribers to delete any culturally proprietary knowledge that a Machi may have revealed during the conversation. Similarly, we deleted any names or any information that may identify the participants. The corpus is culturally relevant because it was created by Mapuche people, using traditional ways of relating to each other in conversations. They discussed personal experiences with primary health care in the traditional Mapuche system and the Chilean health care system, talking about illnesses and the way they were cured. The participants ranged from 16 years old to 100 years old, almost in equal numbers of men and women, and they were all native speakers of Mapudungun. The Resource ::: Orthography At the time of the collection and transcription of the corpus, the orthography of Mapudungun was not standardized. The Mapuche team at the Instituto de Estudios Indígenas (IEI – Institute for Indigenous Studies) developed a supra-dialectal alphabet that comprises 28 letters that cover 32 phones used in the three Mapudungun variants. The main criterion for choosing alphabetic characters was to use the current Spanish keyboard that was available on all computers in Chilean offices and schools. The alphabet used the same letters used in Spanish for those phonemes that sound like Spanish phonemes. Diacritics such as apostrophes were used for sounds that are not found in Spanish. As a result, certain orthographic conventions that were made at the time deviate from the now-standard orthography of Mapudungun, Azumchefe. We plan to normalize the orthography of the corpus, and in fact a small sample has already been converted to the modern orthography. 
However, we believe that the original transcriptions will also be invaluable for academic, historical, and cultural purposes, hence we release the corpus using these conventions. The Resource ::: Additional Annotations In addition, the transcription includes annotations for noises and disfluencies including aborted words, mispronunciations, poor intelligibility, repeated and corrected words, false starts, hesitations, undefined sound or pronunciations, non-verbal articulations, and pauses. Foreign words, in this case Spanish words, are also labelled as such. The Resource ::: Cleaning The dialogues were originally recorded using a Sony DAT recorder (48kHz), model TCD-D8, and Sony digital stereo microphone, model ECM-DS70P. Transcription was performed with the TransEdit transcription tool v.1.1 beta 10, which synchronizes the transcribed text and the wave files. However, we found that a non-trivial number of the utterance boundaries and speaker annotations were flawed. Also some recording sessions did not have a complete set of matching audio, transcription, and translation files. Hence, in an effort to provide a relatively “clean" corpus for modern computational experiments, we converted the encoding of the textual transcription from Latin-1 to Unicode, DOS to UNIX line endings, a now more standard text encoding format than what was used when the data was first collected. Additionally, we renamed a small portion of files which had been misnamed and removed several duplicate files. Although all of the data was recorded with similar equipment in relatively quiet environments, the acoustics are not as uniform as we would like for building speech synthesizers. Thus we applied standardized power normalization. We also moved the boundaries of the turns to standardize the amount of leading and trailing silence in each turn. This is a standard procedure for speech recognition and synthesis datasets. Finally we used the techniques in BIBREF7 for found data to re-align the text to the audio and find out which turns are best (or worst) aligned so that we can select segments that give the most accurate alignments. Some of the misalignments may in part be due to varied orthography, and we intend, but have not yet, to investigate normalization of orthography (i.e. spelling correction) to mitigate this. The Resource ::: Training, Dev, and Test Splits We create two training sets, one appropriate for single-speaker speech synthesis experiments, and one appropriate for multiple-speaker speech recognition and machine translation experiments. In both cases, our training, development, and test splits are performed at the dialogue level, so that all examples from each dialogue belong to exactly one of these sets. For single-speaker speech synthesis, we only use the dialog turns of the speaker with the largest volume of data (nmlch – one of the interviewers). The training set includes $221.8$ thousand sentences from 285 dialogues, with 12 and 46 conversations reserved for the development and test set. For speech recognition experiments, we ensure that our test set includes unique speakers as well as speakers that overlap with the training set, in order to allow for comparisons of the ability of the speech recognition system to generalize over seen and new speakers. For consistency, we use the same dataset splits for the machine translation experiments. The statistics in Table reflect this split. 
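A minimal sketch of the dialogue-level splitting described above, which guarantees that every utterance of a dialogue falls into exactly one of the train, development, and test sets; the split fractions and random seed are illustrative assumptions, not the released splits.

```python
import random

def split_by_dialogue(utterances, dev_frac=0.05, test_frac=0.10, seed=13):
    """utterances: list of dicts with at least a 'dialogue_id' field."""
    dialogue_ids = sorted({u["dialogue_id"] for u in utterances})
    random.Random(seed).shuffle(dialogue_ids)
    n = len(dialogue_ids)
    n_test, n_dev = int(n * test_frac), int(n * dev_frac)
    test_ids = set(dialogue_ids[:n_test])
    dev_ids = set(dialogue_ids[n_test:n_test + n_dev])
    split = {"train": [], "dev": [], "test": []}
    for u in utterances:
        if u["dialogue_id"] in test_ids:
            split["test"].append(u)
        elif u["dialogue_id"] in dev_ids:
            split["dev"].append(u)
        else:
            split["train"].append(u)
    return split
```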
Applications Our resource has the potential to be the basis of computational research in Mapudungun across several areas. Since the collected audio has been transcribed, our resource is appropriate for the study of automatic speech recognition and speech synthesis. The Spanish translations enable the creation of machine translation systems between Mapudungun and Spanish, as well as end-to-end (or direct) speech translation. We in fact built such speech synthesis, speech recognition, and machine translation systems as a showcase of the usefulness of our corpus in that research direction. Furthermore, our annotations of the Spanish words interspersed in Mapudungun speech could allow for a study of code-switching patterns within the Mapuche community. In addition, our annotations of non-standardized orthographic transcriptions could be extremely useful in the study of historical language and orthography change as a language moves from predominantly oral to being written in a standardized orthography, as well as in building spelling normalization and correction systems. The relatively large amount of data that we collected will also allow for the training of large language models, which in turn could be used as the basis for predictive keyboards tailored to Mapudungun. Last, since all data are dialogues annotated for the different speaker turns, they could be useful for building Mapudungun dialogue systems and chatbot-like applications. The potential applications of our resource, however, are not exhausted by language technologies. The resource as a whole could be invaluable for ethnographic and sociological research, as the conversations contrast traditional and Western medicine practices, and they could reveal interesting aspects of the Mapuche culture. In addition, the corpus is a goldmine of data for studying the morphosyntax of Mapudungun BIBREF8. As Mapudungun is a polysynthetic language isolate, its study can provide insights into the range of ways in which human languages can work. Baseline Results Using the aforementioned higher quality portions of the corpus, we trained baseline systems for Mapudungun speech recognition and speech synthesis, as well as Machine Translation systems between Mapudungun and Spanish. Baseline Results ::: Speech Synthesis In our previous work on building speech systems on found data in 700 languages, BIBREF7, we addressed alignment issues (when audio is not segmented into turn- or sentence-sized chunks) and correctness issues (when the audio does not match the transcription). We used the same techniques here, as described above. For the best quality speech synthesis we need a few hours of phonetically-balanced, single-speaker, read speech. Our first step was to use the start and end points for each turn in the dialogues, and select those of the most frequent speaker, nmlch. This gave us around 18,250 segments. We further automatically removed excessive silence from the start, middle and end of these turns (based on the occurrence of F0). This gave us 13 hours and 48 minutes of speech. We phonetically aligned this data and built a CLUSTERGEN statistical speech synthesizer BIBREF9 from all of this data. We resynthesized all of the data and measured the difference between the synthesized data and the original data using Mel Cepstral Distortion (MCD), a standard method for automatically measuring the quality of speech generation BIBREF10.
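Mel Cepstral Distortion can be computed frame by frame over time-aligned mel-cepstral coefficient sequences; the following is a minimal NumPy sketch, assuming the reference and resynthesized coefficients are already time-aligned and that the 0th (energy) coefficient is excluded, as is common practice. It is an illustration rather than the exact scoring script used here.

# Minimal sketch of Mel Cepstral Distortion (MCD) between time-aligned
# mel-cepstral frames; assumes reference and synthesized coefficients are
# NumPy arrays of shape (num_frames, num_coeffs) and excludes c0.
import numpy as np

def mel_cepstral_distortion(ref_mcep, syn_mcep):
    # Constant converting the squared cepstral distance to decibels.
    k = 10.0 / np.log(10.0) * np.sqrt(2.0)
    diff = ref_mcep[:, 1:] - syn_mcep[:, 1:]          # drop c0
    per_frame = k * np.sqrt((diff ** 2).sum(axis=1))  # MCD per frame, in dB
    return per_frame.mean()                           # average over frames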
We then ordered the segments by their generation score and took the top 2000 turns to build a new synthesizer, assuming that the better scores corresponded to better alignments, following the techniques of BIBREF7. The initial build gave an MCD of 6.483 on held-out data, while the 2000-best-segment dataset gives an MCD of 5.551, a large improvement. The quality of the generated speech goes from understandable only if you can see the text, to understandable and transcribable even for non-Mapudungun speakers. We do not believe we are building the best synthesizer with our current (non-neural) techniques, but we do believe we are selecting the best training data for other statistical and neural training techniques in both speech synthesis and speech recognition. Baseline Results ::: Speech Recognition For speech recognition (ASR) we used Kaldi BIBREF11. As we do not have access to pronunciation lexica for Mapudungun, we had to approximate them with two settings. In the first setting, we make the simple assumption that each character corresponds to a pronounced phoneme. In the second setting, we instead used the generated phonetic lexicon also used in the above-mentioned speech synthesis techniques. The train/dev/test splits are across conversations, as described above. Under the first setting, we obtained a 60% character error rate, while the generated lexicon significantly boosts performance, as our systems achieve a notably reduced 30% phone error rate. Naturally, these results are relatively far from the quality of ASR systems trained on large amounts of clean data such as those available in English. Given the quality of the recordings, and the lack of additional resources, we consider our results fairly reasonable, and they would still be usable for simple dialogue-like tasks. We anticipate, though, that one could significantly improve ASR quality over our dataset by using in-domain language models, by training end-to-end neural recognizers leveraging languages with similar phonetic inventories BIBREF12, or by using the available Spanish translations in a multi-source scenario BIBREF13. Baseline Results ::: Mapudungun–Spanish Machine Translation We built neural end-to-end machine translation systems between Mapudungun and Spanish in both directions, using the state-of-the-art Transformer architecture BIBREF14 with the toolkit of BIBREF15. We train our systems at the subword level using Byte-Pair Encoding BIBREF16 with a vocabulary of 5000 subwords, shared between the source and target languages. We use five layers for each of the encoder and the decoder, an embedding size of 512, a feed-forward transformation size of 2048, and eight attention heads. We use dropout BIBREF17 with $0.4$ probability as well as label smoothing set to $0.1$. We train with the Adam optimizer BIBREF18 for up to 200 epochs, using learning rate decay with a patience of six epochs. The baseline results using different portions of the training set (10k, 50k, 100k, and all (220k) parallel sentences) in both translation directions are presented in Table , using detokenized BLEU BIBREF19 (a standard MT metric) and chrF BIBREF20 (a metric that we consider to be more appropriate for polysynthetic languages, as it does not rely on word n-grams) computed with the sacreBLEU toolkit BIBREF21.
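Detokenized BLEU and chrF scores of the kind reported above can be obtained with the sacreBLEU toolkit; a small usage sketch, assuming one detokenized hypothesis and one reference per line in plain-text files (the file names are illustrative).

# Minimal sketch of scoring MT output with sacreBLEU (BLEU and chrF),
# assuming detokenized hypotheses and references, one sentence per line.
import sacrebleu

def score(hyp_path="hyps.mapudungun-spanish.txt", ref_path="refs.spanish.txt"):
    hyps = [line.strip() for line in open(hyp_path, encoding="utf-8")]
    refs = [line.strip() for line in open(ref_path, encoding="utf-8")]

    bleu = sacrebleu.corpus_bleu(hyps, [refs])   # detokenized BLEU
    chrf = sacrebleu.corpus_chrf(hyps, [refs])   # character n-gram F-score
    return bleu.score, chrf.score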
It is worth noting the difference in quality between the two directions, with translation into Spanish reaching 20.4 (almost 21) BLEU points on the development set, while the opposite direction (translating into Mapudungun) performs about 7 BLEU points worse. This is most likely due to Mapudungun being a polysynthetic language, with its complicated morphology posing a challenge for proper generation. Related Work Mapudungun grammar has been studied since the arrival of European missionaries and colonizers hundreds of years ago. More recent descriptions of Mapudungun grammar BIBREF1 and BIBREF0 informed the collection of the resource that we are presenting in this paper. Portions of our resource have been used in early efforts to build language systems for Mapudungun. In particular, BIBREF22 focused on Mapudungun morphology in order to create spelling correction systems, while BIBREF23, BIBREF6, BIBREF24, and BIBREF25 developed hybrid rule- and phrase-based Statistical Machine Translation systems. Naturally, similar efforts to collect corpora of Indigenous languages of Latin America are abundant, but very few, if any, have the scale and potential of our resource to be useful in many downstream language-specific and inter-disciplinary applications. A general overview of the state of NLP for the under-represented languages of the Americas can be found in BIBREF26. To name a few of the many notable works, BIBREF27 created a parallel Mixtec-Spanish corpus for Machine Translation and BIBREF28 created lexical resources for Arapaho, while BIBREF29 and BIBREF30 focused on building speech corpora for Southern Quechua and Chatino, respectively. Conclusion With this work we present a resource that will be extremely useful for building language systems in an endangered, under-represented language, Mapudungun. We benchmark NLP systems for speech synthesis, speech recognition, and machine translation, providing strong baseline results. The size of our resource (142 hours, more than 260k total sentences) has the potential to alleviate many of the issues faced when building language technologies for Mapudungun, in contrast to other indigenous languages of the Americas that unfortunately remain low-resource. Our resource could also be used for ethnographic and anthropological research into the Mapuche culture, and has the potential to contribute to intercultural bilingual education, preservation activities and further general advancement of the Mapudungun-speaking community. Acknowledgements The data collection described in this paper was supported by NSF grants IIS-0121631 (AVENUE) and IIS-0534217 (LETRAS), with supplemental funding from NSF's Office of International Science and Education. Preliminary funding for work on Mapudungun was also provided by DARPA. The experimental material is based upon work generously supported by the National Science Foundation under grant 1761548.
How is non-standard pronunciation identified?
Unanswerable
Introduction Part-of-speech tagging is now a classic task in natural language processing, for which many systems have been developed or adapted for a large variety of languages. Its aim is to associate each “word” with a morphosyntactic tag, whose granularity can range from a simple morphosyntactic category, or part-of-speech (hereafter PoS), to finer categories enriched with morphological features (gender, number, case, tense, mood, etc.). The use of machine learning algorithms trained on manually annotated corpora has long become the standard way to develop PoS taggers. A large variety of algorithms have been used, such as (in approximate chronological order) bigram and trigram hidden Markov models BIBREF0 , BIBREF1 , BIBREF2 , decision trees BIBREF3 , BIBREF4 , maximum entropy Markov models (MEMMs) BIBREF5 and Conditional Random Fields (CRFs) BIBREF6 , BIBREF7 . With such machine learning algorithms, it is possible to build PoS taggers for any language, provided adequate training data is available. As a complement to annotated corpora, it has previously been shown that external lexicons are valuable sources of information, in particular morphosyntactic lexicons, which provide a large inventory of (word, PoS) pairs. Such lexical information can be used in the form of constraints at tagging time BIBREF8 , BIBREF9 or during the training process as additional features combined with standard features extracted from the training corpus BIBREF10 , BIBREF11 , BIBREF12 . In recent years, a different approach to modelling lexical information and integrating it into natural language processing systems has emerged, namely the use of vector representations for words or word sequences BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 . Such representations, which are generally extracted from large amounts of raw text, have proved very useful for numerous tasks including PoS tagging, in particular when used in recurrent neural networks (RNNs) and more specifically in mono- or bi-directional, word-level and/or character-level long short-term memory networks (LSTMs) BIBREF19 , BIBREF16 , BIBREF17 , BIBREF20 . Both approaches to representing lexical properties and to integrating them into a PoS tagger improve tagging results. Yet they rely on resources of different natures. The main advantage of word vectors is that they are built in an unsupervised way, only requiring large amounts of raw textual data. They also encode finer-grained information than usual morphosyntactic lexicons, most of which do not include any quantitative data, not even simple frequency information. Conversely, lexical resources often provide information about scarcely attested words, for which corpus-based approaches such as word vector representations are of limited relevance. Moreover, morphological or morphosyntactic lexicons already exist for a number of languages, including less-resourced languages for which it might be difficult to obtain the large amounts of raw data necessary to extract word vector representations. Our main goal is therefore to compare the respective impact of external lexicons and word vector representations on the accuracy of PoS models. This question has already been investigated for 6 languages by BIBREF18 using the state-of-the-art CRF-based tagging system MarMoT. The authors found that their best-performing word-vector-based PoS tagging models outperform their models that rely on morphosyntactic resources (lexicons or morphological analysers).
In this paper, we report on a larger comparison, carried out in a broader multilingual setting and comparing different tagging models. Using 16 different datasets, we compare the performances of two feature-based models enriched with external lexicons and of two LSTM-based models enriched with word vector representations. A secondary goal of our work is to compare the relative improvements linked to the use of external lexical information in the two feature-based models, which use different models (MEMM vs. CRF) and feature sets. More specifically, our starting point is the MElt system BIBREF12 , an MEMM tagging system. We first briefly describe this system and the way we adapted it by integrating our own set of corpus-based and lexical features. We then introduce the tagging models we have trained for 16 different languages using our adapted version of MElt. These models are trained on the Universal Dependencies (v1.2) corpus set BIBREF21 , complemented by morphosyntactic lexicons. We compare the accuracy of our models with the scores obtained by the CRF-based system MarMoT BIBREF22 , BIBREF18 , retrained on the same corpora and the same external morphosyntactic lexicons. We also compare our results to those obtained by the best bidirectional LSTM models described by BIBREF20 , which both make use of Polyglot word vector representations published by BIBREF23 . We will show that an optimised enrichment of feature-based models with morphosyntactic lexicons results in significant accuracy gains. The macro-averaged accuracy of our enriched MElt models is above that of enriched MarMoT models and virtually identical to that of LSTMs enriched with word vector representations. More precisely, per-language results indicate that lexicons provide more useful information for languages with a high lexical variability (such as morphologically rich languages), whereas word vectors are more informative for languages with a lower lexical variability (such as English). MElt MElt BIBREF12 is a tagging system based on maximum entropy Markov models (MEMM) BIBREF5 , a class of discriminative models that are suitable for sequence labelling BIBREF5 . The basic set of features used by MElt is given in BIBREF12 . It is a superset of the feature sets used by BIBREF5 and BIBREF24 and includes both local standard features (for example the current word itself and its prefixes and suffixes of length 1 to 4) and contextual standard features (for example the tag just assigned to the preceding word). In particular, with respect to Ratnaparkhi's feature set, MElt's basic feature set lifts the restriction that local standard features used to analyse the internal composition of the current word should only apply to rare words. One of the advantages of feature-based models such as MEMMs and CRFs is that complementary information can be easily added in the form of additional features. This was investigated for instance by BIBREF25 , whose best-performing model for PoS tagging dialogues was obtained with a version of MElt extended with dialogue-specific features. Yet the motivation of MElt's developers was first and foremost to investigate the best way to integrate lexical information extracted from large-scale morphosyntactic lexical resources into their models, on top of the training data BIBREF12 . They showed that performance is better when this external lexical information is integrated in the form of additional lexical features than when the external lexicon is used as constraints at tagging time.
These lexical features can also be divided into local lexical features (for example the list of possible tags known to the external lexicon for the current word) and contextual lexical features (for example the list of possible tags known to the external lexicon for surrounding words). In particular, lexical contextual features provide a means to model the right context of the current word, made of words that have not yet been tagged by the system but for which the lexicon often provides a list of possible tags. Moreover, tagging accuracy for out-of-vocabulary (OOV) words is improved, as a result of the fact that words unknown to the training corpus might be known to the external lexicon. Despite a few experiments published with MElt on languages other than French BIBREF12 , BIBREF40 , BIBREF41 , the original feature set used by MElt (standard and lexical features) was designed and tested mostly on this language, by building and evaluating tagging models on a variant of the French TreeBank. Since our goal was to carry out experiments in a multilingual setting, we have decided to design our own set of features, using the standard MElt features as a starting point. With respect to the original MElt feature set, we have added new ones, such as prefixes and suffixes of the following word, as well as a hybrid contextual feature obtained by concatenating the tag predicted for the preceding word and the tag(s) provided by the external lexicon for the following word. In order to select the best performing feature set, we carried out a series of experiments using the multilingual dataset provided during the SPMRL parsing shared task BIBREF42 . This included discarding useless or harmful features and selecting the maximal length of the prefixes and suffixes to be used as features, both for the current word and for the following word. We incorporated in MElt the best performing feature set, described in Table TABREF1 . All models discussed in this paper are based on this feature set. Corpora We carried out our experiments on the Universal Dependencies v1.2 treebanks BIBREF21 , hereafter UD1.2, from which morphosyntactically annotated corpora can be trivially extracted. All UD1.2 corpora use a common tag set, the 17 universal PoS tags, which is an extension of the tagset proposed by BIBREF43 . As our goal is to study the impact of lexical information for PoS tagging, we have restricted our experiments to UD1.2 corpora that cover languages for which we have morphosyntactic lexicons at our disposal, and for which BIBREF20 provide results. We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish. Although this language list contains only one non-Indo-European (Indonesian), four major Indo-European sub-families are represented (Germanic, Romance, Slavic, Indo-Iranian). Overall, the 16 languages considered in our experiments are typologically, morphologically and syntactically fairly diverse. Lexicons We generate our external lexicons using the set of source lexicons listed in Table TABREF3 . Since external lexical information is exploited via features, there is no need for the external lexicons and the annotated corpora to use the same PoS inventory. 
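As an illustration of the feature templates discussed above, the following sketch combines standard features (the word form, affixes of the current and following word, the previously assigned tag) with lexical features looked up in an external morphosyntactic lexicon, including a hybrid contextual feature. The feature names, the maximal affix length and the fallback values are illustrative rather than MElt's actual template.

# Minimal sketch of standard + lexical features for a feature-based tagger
# (MEMM/CRF style). `lexicon` maps a word form to the set of PoS tags it
# can take according to an external morphosyntactic lexicon.
def extract_features(words, i, prev_tag, lexicon, max_affix=4):
    word = words[i]
    nxt = words[i + 1] if i + 1 < len(words) else "<END>"
    feats = {"word": word.lower(), "prev_tag": prev_tag}

    # Standard local features: prefixes and suffixes of the current and
    # following word (length 1..max_affix).
    for n in range(1, max_affix + 1):
        feats[f"pref_{n}"] = word[:n].lower()
        feats[f"suf_{n}"] = word[-n:].lower()
        feats[f"next_pref_{n}"] = nxt[:n].lower()
        feats[f"next_suf_{n}"] = nxt[-n:].lower()

    # Lexical features: tags licensed by the external lexicon for the
    # previous, current and following word.
    for offset, name in [(-1, "lex_prev"), (0, "lex_cur"), (1, "lex_next")]:
        j = i + offset
        if 0 <= j < len(words):
            tags = sorted(lexicon.get(words[j].lower(), {"UNK"}))
            feats[name] = "|".join(tags)

    # Hybrid contextual feature: previously predicted tag concatenated with
    # the lexicon tags of the following word.
    feats["prev_tag+lex_next"] = prev_tag + "_" + feats.get("lex_next", "UNK")
    return feats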
Therefore, for each language, we simply extracted from the corresponding lexicon the PoS of each word based on its morphological tags, by removing all information provided except for its coarsest-level category. We also added entries for punctuation marks when the source lexicons did not contain any. We also performed experiments in which we retained the full original tags provided by the lexicons, with all morphological features included. On average, results were slightly better than those presented in the paper, although not statistically significantly. Moreover, the granularity of tag inventories in the lexicons is diverse, which makes it difficult to draw general conclusions about results based on full tags. This is why we only report results based on (coarse) PoS extracted from the original lexicons. Baseline models In order to assess the respective contributions of external lexicons and word vector representations, we first compared the results of the three above-mentioned systems when trained without such additional lexical information. Table TABREF11 provides the results of MElt and MarMoT retrained on UD1.2 corpora, together with the results published on the same corpora by BIBREF20 , using their best model not enhanced by external word vector representations — i.e. the model they call INLINEFORM0 , which is a bidirectional LSTM that combines both word and character embeddings. These results show that Plank et al.'s (2016) bi-LSTM performs extremely well, surpassed by MarMoT on only 3 out of 16 datasets (Czech, French and Italian), and by MElt only once (Indonesian). Models enriched with external lexical information Table TABREF13 provides the results of four systems enriched with lexical information. The feature-based systems MElt and MarMoT, respectively based on MEMMs and CRFs, are extended with the lexical information provided by our morphosyntactic lexicons. This extension takes the form of additional features, as described in Section SECREF2 for MElt. The results reported by BIBREF20 for their bidirectional LSTM when initialised with Polyglot embeddings trained on full Wikipedias are also included, together with their new system FREQBIN, also initialised with Polyglot embeddings. FREQBIN trains bi-LSTMs to predict for each input word both a PoS and a label that represents its log frequency in the training data. As they put it, “the idea behind this model is to make the representation predictive for frequency, which encourages the model not to share representations between common and rare words, thus benefiting the handling of rare tokens.” The results, which are also displayed in Figures FIGREF14 and FIGREF15 , show that all systems reach very similar results on average, although discrepancies can be observed from one dataset to another, on which we shall comment shortly. The best performing system in terms of macro-average is MElt (96.60%). Both bi-LSTM systems reach the same score (96.58%), the difference with MElt's results being non-significant, whereas MarMoT is only 0.14% behind (96.46%). Given the better baseline scores of the neural approaches, these results show that the benefits of using external lexicons in the feature-based models MElt and MarMoT are much higher than those of using Polyglot word vector representations as initialisations for bi-LSTMs. Yet these very similar overall results reflect a different picture when focusing on OOV tagging accuracy.
The best models for OOV tagging accuracy are, by far, the FREQBIN models, which are beaten by MarMoT and by MElt only once each (on English and Danish, respectively). The comparison on OOV tagging between MElt and MarMoT shows that MElt performs better on average than MarMoT, despite the fact that MarMoT's baseline results were better than those reached by MElt. This shows that the information provided by external morphosyntactic lexicons is better exploited by MElt's lexical features than by those used by MarMoT. On the other hand, the comparison of both bi-LSTM-based approaches confirms that the FREQBIN model is better by over 10% absolute on OOV tagging accuracy (94.28% vs. 83.59%), a 65% lower error rate. One of the important differences between the lexical information provided by an external lexicon and word vectors built from raw corpora, apart from the very nature of the lexical information provided, is the coverage and accuracy of this lexical information on rare words. All words in a morphosyntactic lexicon are associated with information of the same granularity and quality, which is not the case with word representations such as those provided by Polyglot. Models that take advantage of external lexicons should therefore perform comparatively better on datasets containing a higher proportion of rarer words, provided the lexicons' coverage is high. In order to confirm this intuition, we have used a lexical richness metric based on the type/token ratio. Since this ratio is well known for being sensitive to corpus length, we normalised it by computing it over the first 60,000 tokens of each training set. When this normalised type/token ratio is plotted against the difference between the results of MElt and both bi-LSTM-based models, the expected correlation is clearly visible (see Figure FIGREF16 ). This explains why MElt obtains better results on the morphologically richer Slavic datasets (average normalised type/token ratio: 0.28, average accuracy difference: 0.32 compared to both bi-LSTM+Polyglot and FREQBIN+Polyglot) and, at the other end of the spectrum, significantly worse results on the English dataset (normalised type/token ratio: 0.15, average accuracy difference: -0.56 compared to bi-LSTM+Polyglot, -0.57 compared to FREQBIN+Polyglot). Conclusion Two main conclusions can be drawn from our comparative results. First, feature-based tagging models adequately enriched with external morphosyntactic lexicons perform, on average, as well as bi-LSTMs enriched with word embeddings. Second, per-language results show that the best accuracy levels are reached by feature-based models, and in particular by our improved version of the MEMM-based system MElt, on datasets with high lexical variability (in short, for morphologically rich languages), whereas neural models perform better on datasets with lower lexical variability (e.g. for English). We have only compared the contribution of morphosyntactic lexicons to feature-based models (MEMMs, CRFs) and that of word vector representations to bi-LSTM-based models as reported by BIBREF20 . As mentioned above, work on the contribution of word vector representations to feature-based approaches has been carried out by BIBREF18 . However, the exploitation of existing morphosyntactic or morphological lexicons in neural models is a less studied question. Improvements over the state of the art might be achieved by integrating lexical information both from an external lexicon and from word vector representations into tagging models.
In that regard, further work will be required to understand which class of models performs best. An option would be to integrate feature-based models such as a CRF with an LSTM-based layer, following recent proposals such as that of BIBREF45 for named entity recognition.
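The normalised type/token ratio used in the per-language analysis above can be computed as follows; a minimal sketch, assuming the training corpus is available as a list of tokens.

# Minimal sketch of the normalised type/token ratio: the number of distinct
# word forms divided by the number of tokens, computed over a fixed-size
# prefix of the training corpus (60,000 tokens here) so that corpora of
# different lengths remain comparable.
def normalised_type_token_ratio(tokens, window=60000):
    prefix = tokens[:window]
    return len(set(prefix)) / len(prefix)

# Morphologically rich corpora tend to give a higher ratio, e.g.:
# ratio = normalised_type_token_ratio(training_tokens)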
which languages are explored?
Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish
Introduction Writing errors can occur in many different forms – from relatively simple punctuation and determiner errors, to mistakes including word tense and form, incorrect collocations and erroneous idioms. Automatically identifying all of these errors is a challenging task, especially as the amount of available annotated data is very limited. Rei2016 showed that while some error detection algorithms perform better than others, it is additional training data that has the biggest impact on improving performance. Being able to generate realistic artificial data would allow for any grammatically correct text to be transformed into annotated examples containing writing errors, producing large amounts of additional training examples. Supervised error generation systems would also provide an efficient method for anonymising the source corpus – error statistics from a private corpus can be aggregated and applied to a different target text, obscuring sensitive information in the original examination scripts. However, the task of creating incorrect data is somewhat more difficult than might initially appear – naive methods for error generation can create data that does not resemble natural errors, thereby making downstream systems learn misleading or uninformative patterns. Previous work on artificial error generation (AEG) has focused on specific error types, such as prepositions and determiners BIBREF0 , BIBREF1 , or noun number errors BIBREF2 . Felice2014a investigated the use of linguistic information when generating artificial data for error correction, but also restricting the approach to only five error types. There has been very limited research on generating artificial data for all types, which is important for general-purpose error detection systems. For example, the error types investigated by Felice2014a cover only 35.74% of all errors present in the CoNLL 2014 training dataset, providing no additional information for the majority of errors. In this paper, we investigate two supervised approaches for generating all types of artificial errors. We propose a framework for generating errors based on statistical machine translation (SMT), training a model to translate from correct into incorrect sentences. In addition, we describe a method for learning error patterns from an annotated corpus and transplanting them into error-free text. We evaluate the effect of introducing artificial data on two error detection benchmarks. Our results show that each method provides significant improvements over using only the available training set, and a combination of both gives an absolute improvement of 4.3% in INLINEFORM0 , without requiring any additional annotated data. Error Generation Methods We investigate two alternative methods for AEG. The models receive grammatically correct text as input and modify certain tokens to produce incorrect sequences. The alternative versions of each sentence are aligned using Levenshtein distance, allowing us to identify specific words that need to be marked as errors. While these alignments are not always perfect, we found them to be sufficient for practical purposes, since alternative alignments of similar sentences often result in the same binary labeling. Future work could explore more advanced alignment methods, such as proposed by felice-bryant-briscoe. In Section SECREF4 , this automatically labeled data is then used for training error detection models. 
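The token-level labelling step described above can be sketched as a Levenshtein alignment between the original and corrected token sequences, with every original token that is not kept unchanged marked as an error; this is an illustrative implementation, not the exact alignment code used in the experiments.

# Minimal sketch: align the original (incorrect) and corrected token
# sequences with Levenshtein distance and mark as "i" (incorrect) every
# original token that is not kept unchanged by the alignment.
def label_errors(incorrect_tokens, correct_tokens):
    n, m = len(incorrect_tokens), len(correct_tokens)
    # dp[i][j] = edit distance between incorrect[:i] and correct[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if incorrect_tokens[i - 1] == correct_tokens[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # match / substitution

    # Trace back and collect binary labels for the incorrect sentence.
    labels = ["c"] * n
    i, j = n, m
    while i > 0 and j > 0:
        if (incorrect_tokens[i - 1] == correct_tokens[j - 1]
                and dp[i][j] == dp[i - 1][j - 1]):
            i, j = i - 1, j - 1                      # kept token: correct
        elif dp[i][j] == dp[i - 1][j - 1] + 1:
            labels[i - 1] = "i"                      # substituted token
            i, j = i - 1, j - 1
        elif dp[i][j] == dp[i - 1][j] + 1:
            labels[i - 1] = "i"                      # token removed in correction
            i -= 1
        else:
            j -= 1                                   # token inserted in correction
    while i > 0:
        labels[i - 1] = "i"
        i -= 1
    return labels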
Machine Translation We treat AEG as a translation task – given a correct sentence as input, the system would learn to translate it to contain likely errors, based on a training corpus of parallel data. Existing SMT approaches are already optimised for identifying context patterns that correspond to specific output sequences, which is also required for generating human-like errors. The reverse of this idea, translating from incorrect to correct sentences, has been shown to work well for error correction tasks BIBREF2 , BIBREF3 , and round-trip translation has also been shown to be promising for correcting grammatical errors BIBREF4 . Following previous work BIBREF2 , BIBREF5 , we build a phrase-based SMT error generation system. During training, error-corrected sentences in the training data are treated as the source, and the original sentences written by language learners as the target. Pialign BIBREF6 is used to create a phrase translation table directly from model probabilities. In addition to default features, we add character-level Levenshtein distance to each mapping in the phrase table, as proposed by Felice:2014-CoNLL. Decoding is performed using Moses BIBREF7 and the language model used during decoding is built from the original erroneous sentences in the learner corpus. The IRSTLM Toolkit BIBREF8 is used for building a 5-gram language model with modified Kneser-Ney smoothing BIBREF9 . Pattern Extraction We also describe a method for AEG using patterns over words and part-of-speech (POS) tags, extracting known incorrect sequences from a corpus of annotated corrections. This approach is based on the best method identified by Felice2014a, using error type distributions; while they covered only 5 error types, we relax this restriction and learn patterns for generating all types of errors. The original and corrected sentences in the corpus are aligned and used to identify short transformation patterns in the form of (incorrect phrase, correct phrase). The length of each pattern is the affected phrase, plus up to one token of context on both sides. If a word form changes between the incorrect and correct text, it is fully saved in the pattern, otherwise the POS tags are used for matching. For example, the original sentence `We went shop on Saturday' and the corrected version `We went shopping on Saturday' would produce the following pattern: (VVD shop_VV0 II, VVD shopping_VVG II) After collecting statistics from the background corpus, errors can be inserted into error-free text. The learned patterns are now reversed, looking for the correct side of the tuple in the input sentence. We only use patterns with frequency INLINEFORM0 , which yields a total of 35,625 patterns from our training data. For each input sentence, we first decide how many errors will be generated (using probabilities from the background corpus) and attempt to create them by sampling from the collection of applicable patterns. This process is repeated until all the required errors have been generated or the sentence is exhausted. During generation, we try to balance the distribution of error types as well as keeping the same proportion of incorrect and correct sentences as in the background corpus BIBREF10 . The required POS tags were generated with RASP BIBREF11 , using the CLAWS2 tagset. Error Detection Model We construct a neural sequence labeling model for error detection, following the previous work BIBREF12 , BIBREF13 . 
The model receives a sequence of tokens as input and outputs a prediction for each position, indicating whether the token is correct or incorrect in the current context. The tokens are first mapped to a distributed vector space, resulting in a sequence of word embeddings. Next, the embeddings are given as input to a bidirectional LSTM BIBREF14 , in order to create context-dependent representations for every token. The hidden states from the forward- and backward-LSTMs are concatenated for each word position, resulting in representations that are conditioned on the whole sequence. This concatenated vector is then passed through an additional feedforward layer, and a softmax over the two possible labels (correct and incorrect) is used to output a probability distribution for each token. The model is optimised by minimising categorical cross-entropy with respect to the correct labels. We use AdaDelta BIBREF15 for calculating an adaptive learning rate during training, which accounts for a higher baseline performance compared to previous results. Evaluation We trained our error generation models on the public FCE training set BIBREF16 and used them to generate additional artificial training data. Grammatically correct text is needed as the starting point for inserting artificial errors, and we used two different sources: 1) the corrected version of the same FCE training set on which the system is trained (450K tokens), and 2) example sentences extracted from the English Vocabulary Profile (270K tokens). While there are other text corpora that could be used (e.g., Wikipedia and news articles), our development experiments showed that keeping the writing style and vocabulary close to the target domain gives better results compared to simply including more data. We evaluated our detection models on three benchmarks: the FCE test data (41K tokens) and the two alternative annotations of the CoNLL 2014 Shared Task dataset (30K tokens) BIBREF3 . Each artificial error generation system was used to generate 3 different versions of the artificial data, which were then combined with the original annotated dataset and used for training an error detection system. Table TABREF1 contains example sentences from the error generation systems, highlighting each of the edits that are marked as errors. The error detection results can be seen in Table TABREF4 . We use $F_{0.5}$ as the main evaluation measure, which was established as the preferred measure for error correction and detection by the CoNLL-14 shared task BIBREF3 . $F_{0.5}$ calculates a weighted harmonic mean of precision and recall, which assigns twice as much importance to precision – this is motivated by practical applications, where accurate predictions from an error detection system are more important compared to coverage. For comparison, we also report the performance of the error detection system by Rei2016, trained using the same FCE dataset. The results show that error detection performance is substantially improved by making use of artificially generated data, created by any of the described methods. When comparing the error generation system by Felice2014a (FY14) with our pattern-based (PAT) and machine translation (MT) approaches, we see that the latter methods, which cover all error types, consistently improve performance. While the added error types tend to be less frequent and more complicated to capture, the added coverage is indeed beneficial for error detection.
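The $F_{0.5}$ measure used above can be computed from token-level predictions as follows; a minimal sketch, assuming the gold and predicted binary labels are aligned lists.

# Minimal sketch of the F_0.5 measure from token-level binary predictions:
# the weighted harmonic mean of precision and recall with beta = 0.5,
# which gives precision twice the importance of recall.
def f_beta(gold, pred, positive="i", beta=0.5):
    tp = sum(1 for g, p in zip(gold, pred) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)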
Combining the pattern-based approach with the machine translation system (Ann+PAT+MT) gave the best overall performance on all datasets. The two frameworks learn to generate different types of errors, and taking advantage of both leads to substantial improvements in error detection. We used the Approximate Randomisation Test BIBREF17 , BIBREF18 to calculate statistical significance and found that the improvement for each of the systems using artificial data was significant over using only manual annotation. In addition, the final combination system is also significantly better than the Felice2014a system on all three datasets. While Rei2016 also report separate experiments that achieve even higher performance, these models were trained on a considerably larger proprietary corpus. In this paper we compare error detection frameworks trained on the same publicly available FCE dataset, thereby removing the confounding factor of dataset size and only focusing on the model architectures. The error generation methods can generate alternative versions of the same input text – the pattern-based method randomly samples the error locations, and the SMT system can provide an n-best list of alternative translations. Therefore, we also investigated the combination of multiple error-generated versions of the input files when training error detection models. Figure FIGREF6 shows the $F_{0.5}$ score on the development set, as the training data is increased by using more translations from the n-best list of the SMT system. These results reveal that allowing the model to see multiple alternative versions of the same file gives a distinct improvement – showing the model both correct and incorrect variations of the same sentences likely assists in learning a discriminative model.
By relaxing the type restrictions and generating all types of errors, our pattern-based method consistently outperformed the system by Felice2014a. The combination of the pattern-based method with the machine translation approach gave further substantial improvements and the best performance on all datasets.
What was the baseline used?
error detection system by Rei2016
Introduction Twitter is a widely used microblogging platform, where users post and interact with messages, “tweets”. Understanding the semantic representation of tweets can benefit a plethora of applications such as sentiment analysis BIBREF0 , BIBREF1 , hashtag prediction BIBREF2 , paraphrase detection BIBREF3 and microblog ranking BIBREF4 , BIBREF5 . However, tweets are difficult to model as they pose several challenges such as short length, informal words, unusual grammar and misspellings. Recently, researchers are focusing on leveraging unsupervised representation learning methods based on neural networks to solve this problem. Once these representations are learned, we can use off-the-shelf predictors taking the representation as input to solve the downstream task BIBREF6 , BIBREF7 . These methods enjoy several advantages: (1) they are cheaper to train, as they work with unlabelled data, (2) they reduce the dependence on domain level experts, and (3) they are highly effective across multiple applications, in practice. Despite this, there is a lack of prior work which surveys the tweet-specific unsupervised representation learning models. In this work, we attempt to fill this gap by investigating the models in an organized fashion. Specifically, we group the models based on the objective function it optimizes. We believe this work can aid the understanding of the existing literature. We conclude the paper by presenting interesting future research directions, which we believe are fruitful in advancing this field by building high-quality tweet representation learning models. Unsupervised Tweet Representation Models There are various models spanning across different model architectures and objective functions in the literature to compute tweet representation in an unsupervised fashion. These models work in a semi-supervised way - the representations generated by the model is fed to an off-the-shelf predictor like Support Vector Machines (SVM) to solve a particular downstream task. These models span across a wide variety of neural network based architectures including average of word vectors, convolutional-based, recurrent-based and so on. We believe that the performance of these models is highly dependent on the objective function it optimizes – predicting adjacent word (within-tweet relationships), adjacent tweet (inter-tweet relationships), the tweet itself (autoencoder), modeling from structured resources like paraphrase databases and weak supervision. In this section, we provide the first of its kind survey of the recent tweet-specific unsupervised models in an organized fashion to understand the literature. Specifically, we categorize each model based on the optimized objective function as shown in Figure FIGREF1 . Next, we study each category one by one. Modeling within-tweet relationships Motivation: Every tweet is assumed to have a latent topic vector, which influences the distribution of the words in the tweet. For example, though the appearance of the phrase catch the ball is frequent in the corpus, if we know that the topic of a tweet is about “technology”, we can expect words such as bug or exception after the word catch (ignoring the) instead of the word ball since catch the bug/exception is more plausible under the topic “technology”. On the other hand, if the topic of the tweet is about “sports”, then we can expect ball after catch. These intuitions indicate that the prediction of neighboring words for a given word strongly relies on the tweet also. 
Models: BIBREF8 's work is the first to exploit this idea to compute distributed document representations that are good at predicting words in the document. They propose two models, PV-DM and PV-DBOW, which are extensions of the Continuous Bag Of Words (CBOW) and Skip-gram variants of the popular Word2Vec model BIBREF9 , respectively – PV-DM inserts an additional document token (which can be thought of as another word) which is shared across all contexts generated from the same document; PV-DBOW attempts to predict the sampled words from the document given the document representation. Although originally employed for paragraphs and documents, these models work better than traditional models such as BOW BIBREF10 and LDA BIBREF11 for tweet classification and microblog retrieval tasks BIBREF12 . The authors in BIBREF12 make the PV-DM and PV-DBOW models concept-aware (a rich semantic signal from a tweet) by augmenting two features: attention over contextual words and a conceptual tweet embedding, which jointly exploit concept-level senses of tweets to compute better representations. Both of the discussed works have the following characteristics: (1) they use a shallow architecture, which enables fast training, (2) computing representations for test tweets requires computing gradients, which is time-consuming for real-time Twitter applications, and (3) most importantly, they fail to exploit textual information from related tweets that can bear salient semantic signals. Modeling inter-tweet relationships Motivation: To capture rich tweet semantics, researchers are attempting to exploit a type of sentence-level Distributional Hypothesis BIBREF10 , BIBREF13 . The idea is to infer the tweet representation from the content of adjacent tweets in related streams such as a user's Twitter timeline, or topical, retweet and conversational streams. This approach significantly alleviates the context insufficiency problem caused by the ambiguous and short nature of tweets BIBREF0 , BIBREF14 . Models: Skip-thought vectors BIBREF15 (STV) is a widely popular sentence encoder, which is trained to predict adjacent sentences in the book corpus BIBREF16 . Although testing is cheap, as it involves only a forward propagation of the test sentence, STV is very slow to train owing to its complicated model architecture. To combat this computational inefficiency, FastSent BIBREF17 proposes a simple additive (log-linear) sentence model, which predicts adjacent sentences (represented as BOW) taking the BOW representation of some sentence in context. This model can exploit the same signal, but at a much lower computational expense. Parallel to this work, Siamese CBOW BIBREF18 develops a model which directly compares the BOW representations of two sentences to bring the embedding of a sentence closer to that of its adjacent sentence and away from that of a randomly occurring sentence in the corpus. For FastSent and Siamese CBOW, the test sentence representation is a simple average of the word vectors obtained after training. Both of these models are general-purpose sentence representation models trained on the book corpus, yet they give competitive performance relative to previous models on the tweet semantic similarity computation task. BIBREF14 's model attempts to exploit these signals directly from Twitter. With the help of an attention mechanism and learned user representations, this log-linear model is able to capture salient semantic information from the chronologically adjacent tweets of a target tweet in a user's Twitter timeline.
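As a concrete illustration of the PV-DM and PV-DBOW models discussed above, gensim's Doc2Vec implementation can be used roughly as follows; the example tweets, tokenisation and hyperparameter values are illustrative. The sketch also makes the test-time cost visible: inferring a vector for an unseen tweet requires additional gradient steps.

# Minimal sketch of training PV-DM / PV-DBOW style tweet vectors with
# gensim's Doc2Vec; the tokenisation and hyperparameters are illustrative.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

tweets = ["caught a nasty exception in the parser",
          "great catch at the boundary today"]
corpus = [TaggedDocument(words=t.lower().split(), tags=[i])
          for i, t in enumerate(tweets)]

pv_dm = Doc2Vec(corpus, dm=1, vector_size=100, window=5,
                min_count=1, epochs=40)        # PV-DM variant
pv_dbow = Doc2Vec(corpus, dm=0, vector_size=100,
                  min_count=1, epochs=40)      # PV-DBOW variant

# Inference for an unseen tweet runs additional gradient steps, which is
# the test-time cost noted above.
vec = pv_dm.infer_vector("catch the ball".split())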
Modeling from structured resources Motivation: In recent times, building representation models based on supervision from richly structured resources such as the Paraphrase Database (PPDB) BIBREF19 (containing noisy phrase pairs) has yielded high-quality sentence representations. These methods work by maximizing the similarity of the sentences in the learned semantic space. Models: CHARAGRAM BIBREF20 embeds textual sequences by learning a character-based compositional model that involves the addition of the vectors of its character n-grams followed by an elementwise nonlinearity. This simpler architecture, trained on PPDB, is able to beat models with complex architectures such as CNNs and LSTMs on the SemEval 2015 Twitter textual similarity task by a large margin. This result emphasizes the importance of character-level models that address differences due to spelling variation and word choice. The authors in their subsequent work BIBREF21 conduct a comprehensive analysis of models spanning the range of complexity from word averaging to LSTMs, for their ability to do transfer and supervised learning after optimizing a margin-based loss on PPDB. For transfer learning, they find that models based on word averaging perform well on both the in-domain and out-of-domain textual similarity tasks, beating the LSTM model by a large margin. In the supervised setting, the word averaging models also perform well for both sentence similarity and textual entailment tasks, outperforming the LSTM. However, for the sentiment classification task, they find that the LSTM (trained on PPDB) beats the averaging models to establish a new state of the art. The above results suggest that structured resources play a vital role in computing general-purpose embeddings useful in downstream applications. Modeling as an autoencoder Motivation: The autoencoder-based approach learns latent (or compressed) representations by reconstructing its own input. Since textual data like tweets contain discrete input signals, sequence-to-sequence models BIBREF22 like STV can be used to build the solution. The encoder model which encodes the input tweet can typically be a CNN BIBREF23 , a recurrent model such as an RNN, GRU or LSTM BIBREF24 , or a memory network BIBREF25 . The decoder model which generates the output tweet is typically a recurrent model that predicts an output token at every time step. Models: Sequential Denoising Autoencoders (SDAE) BIBREF17 is an LSTM-based sequence-to-sequence model, which is trained to recover the original data from a corrupted version. SDAE produces robust representations by learning to represent the data in terms of features that explain its important factors of variation. Tweet2Vec BIBREF3 is a recent model which uses a character-level CNN-LSTM encoder-decoder architecture trained to reconstruct the input tweet directly. This model outperforms competitive models that work at the word level, such as PV-DM and PV-DBOW, on semantic similarity computation and sentiment classification tasks, thereby showing that the character-level nature of Tweet2Vec is best suited to deal with the noise and idiosyncrasies of tweets. Tweet2Vec controls the generalization error by using a data augmentation technique, wherein tweets are replicated and some of the words in the replicated tweets are replaced with their synonyms. Both SDAE and Tweet2Vec have the advantage that they don't need a coherent inter-sentence narrative (like STV), which is hard to obtain in Twitter.
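A minimal sketch of the CHARAGRAM-style composition described above: a sequence is embedded by summing the vectors of its character n-grams and applying an elementwise nonlinearity. The hashed embedding table below is only a stand-in for the learned n-gram embeddings, and the n-gram range and dimensionality are illustrative.

# Minimal sketch of a CHARAGRAM-style encoder: sum the vectors of a
# sequence's character n-grams and apply an elementwise nonlinearity.
# The randomly initialised, hashed table stands in for trained embeddings.
import numpy as np

DIM, TABLE_SIZE = 100, 20000
rng = np.random.default_rng(0)
ngram_table = rng.normal(scale=0.1, size=(TABLE_SIZE, DIM))

def char_ngrams(text, n_min=2, n_max=4):
    padded = "#" + text + "#"          # boundary markers
    for n in range(n_min, n_max + 1):
        for i in range(len(padded) - n + 1):
            yield padded[i:i + n]

def charagram_embed(text):
    vec = np.zeros(DIM)
    for ng in char_ngrams(text.lower()):
        vec += ngram_table[hash(ng) % TABLE_SIZE]  # hashed n-gram lookup
    return np.tanh(vec)                # elementwise nonlinearity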
Modeling using weak supervision Motivation: In a weakly supervised setup, we create labels for a tweet automatically and predict them to learn potentially sophisticated models than those obtained by unsupervised learning alone. Examples of labels include sentiment of the overall tweet, words like hashtag present in the tweet and so on. This technique can create a huge labeled dataset especially for building data-hungry, sophisticated deep learning models. Models: BIBREF26 learns sentiment-specific word embedding (SSWE), which encodes the polarity information in the word representations so that words with contrasting polarities and similar syntactic context (like good and bad) are pushed away from each other in the semantic space that it learns. SSWE utilizes the massive distant-supervised tweets collected by positive and negative emoticons to build a powerful tweet representation, which are shown to be useful in tasks such as sentiment classification and word similarity computation in sentiment lexicon. BIBREF2 observes that hashtags in tweets can be considered as topics and hence tweets with similar hashtags must come closer to each other. Their model predicts the hashtags by using a Bi-GRU layer to embed the tweets from its characters. Due to subword modeling, such character-level models can approximate the representations for rare words and new words (words not seen during training) in the test tweets really well. This model outperforms the word-level baselines for hashtag prediction task, thereby concluding that exploring character-level models for tweets is a worthy research direction to pursue. Both these works fail to study the model's generality BIBREF27 , i.e., the ability of the model to transfer the learned representations to diverse tasks. Future Directions In this section we present the future research directions which we believe can be worth pursuing to generate high quality tweet embeddings. Conclusion In this work we study the problem of learning unsupervised tweet representations. We believe our survey of the existing works based on the objective function can give vital perspectives to researchers and aid their understanding of the field. We also believe the future research directions studied in this work can help in breaking the barriers in building high quality, general purpose tweet representation models.
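As an illustration of the distant supervision idea used by SSWE above, weak sentiment labels can be derived from emoticons roughly as follows; the emoticon lists and the labelling rule are simplified placeholders, not the actual collection procedure.

# Minimal sketch of distant supervision for tweets: emoticons act as weak
# sentiment labels and are removed from the text, so that a representation
# model can be trained to predict the label from the remaining words.
POSITIVE = {":)", ":-)", ":D", "(:"}
NEGATIVE = {":(", ":-(", "):"}

def weak_label(tweet):
    tokens = tweet.split()
    has_pos = any(t in POSITIVE for t in tokens)
    has_neg = any(t in NEGATIVE for t in tokens)
    if has_pos == has_neg:          # no emoticon, or conflicting signals
        return None
    text = " ".join(t for t in tokens if t not in POSITIVE | NEGATIVE)
    return text, ("positive" if has_pos else "negative")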
Which dataset do they use?
Unanswerable
Introduction Cancer is one of the leading causes of death in the world, with over 80,000 deaths registered in Canada in 2017 (Canadian Cancer Statistics 2017). A computer-aided system for cancer diagnosis usually involves a pathologist rendering a descriptive report after examining the tissue glass slides obtained from the biopsy of a patient. A pathology report contains specific analysis of cells and tissues, and other histopathological indicators that are crucial for diagnosing malignancies. An average-sized laboratory may produce a large quantity of pathology reports annually (e.g., in excess of 50,000), but these reports are written in mostly unstructured text and with no direct link to the tissue sample. Furthermore, the report for each patient is a personalized document and offers very high variability in terminology due to a lack of standards; it may include misspellings and missing punctuation, clinical diagnoses interspersed with complex explanations, different terminology to label the same malignancy, and information about multiple carcinoma appearances in a single report BIBREF0 . In Canada, each Provincial and Territorial Cancer Registry (PTCR) is responsible for collecting data about cancer diseases and reporting it to Statistics Canada (StatCan). Every year, the Canadian Cancer Registry (CCR) uses the information sources of StatCan to compile an annual report on cancer and tumor diseases. Many countries have their own cancer registry programs. These programs rely on the acquisition of diagnostic, treatment, and outcome information through manual processing and interpretation from various unstructured sources (e.g., pathology reports, autopsy/laboratory reports, medical billing summaries). The manual classification of cancer pathology reports is a challenging, time-consuming task that requires extensive training BIBREF0 . With the continued growth in the number of cancer patients, and the increase in treatment complexity, cancer registries face a significant challenge in manually reviewing the large quantity of reports BIBREF1 , BIBREF0 . In this situation, Natural Language Processing (NLP) systems can offer a unique opportunity to automatically encode the unstructured reports into structured data. Since the registries already have access to a large quantity of historically labeled and encoded reports, a supervised machine learning approach of feature extraction and classification is a compelling direction for making their workflow more effective and streamlined. If successful, such a solution would enable processing reports in much less time, allowing trained personnel to focus on their research and analysis. However, developing an automated solution with high accuracy and consistency across a wide variety of reports is a challenging problem. For cancer registries, an important piece of information in a pathology report is the associated ICD-O code, which describes the patient's histological diagnosis, as described by the World Health Organization's (WHO) International Classification of Diseases for Oncology BIBREF2 . Prediction of the primary diagnosis from a pathology report provides a valuable starting point for the exploration of machine learning techniques for automated cancer surveillance. A major application for this purpose would be “auto-reporting” based on the analysis of whole slide images, the digitization of the biopsy glass slides. Structured, summarized and categorized reports can be associated with the image content when searching in large archives.
Such a system would be able to drastically increase the efficiency of diagnostic processes for the majority of cases where, despite an obvious primary diagnosis, pathologists still need to spend time and effort writing a descriptive report. The primary objective of our study is to analyze the efficacy of existing machine learning approaches for the automated classification of pathology reports into different diagnosis categories. We demonstrate that TF-IDF feature vectors combined with a linear SVM or XGBoost classifier can be an effective method for classification of the reports, achieving up to 83% accuracy. We also show that TF-IDF features are capable of identifying important keywords within a pathology report. Furthermore, we have created a new dataset consisting of 1,949 pathology reports across 37 primary diagnoses. Taken together, our exploratory experiments with a newly introduced dataset on pathology reports open many new opportunities for researchers to develop scalable and automatic information extraction from unstructured pathology reports. Background NLP approaches for information extraction within the biomedical research areas range from rule-based systems BIBREF3 , to domain-specific systems using feature-based classification BIBREF1 , to the recent deep networks for end-to-end feature extraction and classification BIBREF0 . NLP has had varied degrees of success with free-text pathology reports BIBREF4 . Various studies have acknowledged the success of NLP in interpreting pathology reports, especially for classification tasks or extracting a single attribute from a report BIBREF4 , BIBREF5 . The Cancer Text Information Extraction System (caTIES) BIBREF6 is a framework developed in a caBIG project that focuses on information extraction from pathology reports. Specifically, caTIES extracts information from surgical pathology reports (SPR) with good precision as well as recall. Another system known as Open Registry BIBREF7 is capable of filtering reports with disease codes containing cancer. In BIBREF8 , an approach called Automated Retrieval Console (ARC) is proposed which uses machine learning models to predict the degree of association of a given pathology or radiology report with cancer. The performance ranges from an F-measure of 0.75 for lung cancer to 0.94 for colorectal cancer. However, ARC uses domain-specific rules, which hinders the generalization of the approach to a variety of pathology reports. This research work is inspired by themes emerging in many of the above studies. Specifically, we evaluate the task of predicting the primary diagnosis from the pathology report. Unlike previous approaches, our system does not rely on custom rule-based knowledge, domain-specific features, or a balanced dataset with a small number of classes. Materials and Methods We assembled a dataset of 1,949 cleaned pathology reports. Each report is associated with one of 37 different primary diagnoses based on ICD-O codes. The reports are collected from four different body parts or primary sites from multiple patients. The distribution of reports across different primary diagnoses and primary sites is reported in tab:report-distribution. The dataset was developed in three steps as follows. Collecting pathology reports: A total of 11,112 pathology reports were downloaded from NCI's Genomic Data Commons (GDC) dataset in PDF format BIBREF9 . 
Out of all PDF files, 1,949 reports were selected across multiple patients from four specific primary sites—thymus, testis, lung, and kidney. The selection was primarily made based on the quality of the PDF files. Cleaning reports: The next step was to extract the text content from these reports. Due to the significant time expense of manually re-typing all the pathology reports, we developed a new strategy to prepare our dataset. We applied Optical Character Recognition (OCR) software to convert the PDF reports to text files. Then, we manually inspected all generated text files to fix any grammar/spelling issues and to remove irrelevant characters introduced as artefacts by the OCR system. Splitting into training-testing data: We split the cleaned reports into 70% and 30% for training and testing, respectively. This split resulted in 1,364 training reports and 585 testing reports. Pre-Processing of Reports We pre-processed the reports by setting their text content to lowercase and filtering out any non-alphanumeric characters. We used the NLTK library to remove stop words, e.g., `the', `an', `was', `if' and so on BIBREF10 . We then analyzed the reports to find common bigrams, such as “lung parenchyma”, “microscopic examination”, “lymph node” etc. We joined the bigrams with a hyphen, converting them into a single word. We further removed the words that occur in less than 2% of the reports in each diagnostic category, as well as the words that occur in more than 90% of the reports across all categories. We stored each pre-processed report in a separate text file. TF-IDF features TF-IDF stands for Term Frequency-Inverse Document Frequency, and it is a useful weighting scheme in information retrieval and text mining. TF-IDF signifies the importance of a term in a document within a corpus. It is important to note that a document here refers to a pathology report, a corpus refers to the collection of reports, and a term refers to a single word in a report. The TF-IDF weight for a term $t$ in a document $d$ is given by $w_{t,d} = tf_{t,d} \times \log \frac{N}{df_t}$, where $tf_{t,d}$ is the frequency of $t$ in $d$, $df_t$ is the number of reports containing $t$, and $N$ is the total number of reports in the corpus. We performed the following steps to transform a pathology report into a feature vector: Create a vocabulary containing all unique words from all the pre-processed training reports. Create a zero vector $v$ of the same length as the vocabulary. For each word $t$ in a report $d$, set the corresponding index in $v$ to the TF-IDF weight $w_{t,d}$. The resultant $v$ is a feature vector for the report $d$, and it is a highly sparse vector. Keyword extraction and topic modelling Keyword extraction involves identifying important words within a report that summarize its content. In contrast, topic modelling groups these keywords using an intelligent scheme, enabling users to further focus on certain aspects of a document. All the words in a pathology report are sorted according to their TF-IDF weights. The top $K$ sorted words constitute the top $K$ keywords for the report; $K$ is empirically set to 50 within this research. The extracted keywords are further grouped into different topics by using latent Dirichlet allocation (LDA) BIBREF11 . The keywords in a report are highlighted using a color theme based on their topics. Evaluation metrics Each model is evaluated using two standard NLP metrics—micro and macro averaged F-scores, the harmonic mean of the related metrics precision and recall. 
For each diagnostic category $c$ from the set $C$ of 37 different classes, let $TP_c$, $FP_c$, and $FN_c$ denote the number of true positives, false positives, and false negatives, respectively. The micro F-score is defined as $F_{\mathrm{micro}} = \frac{2\sum_{c \in C} TP_c}{2\sum_{c \in C} TP_c + \sum_{c \in C} FP_c + \sum_{c \in C} FN_c}$, whereas the macro F-score is given by $F_{\mathrm{macro}} = \frac{1}{|C|}\sum_{c \in C} \frac{2\,TP_c}{2\,TP_c + FP_c + FN_c}$. In summary, micro-averaged metrics have class representation roughly proportional to their test set representation (the same as accuracy for a classification problem with a single label per data point), whereas macro-averaged metrics are averaged by class without weighting by class prevalence BIBREF12 . Experimental setting In this study, we performed two different series of experiments: i) evaluating the performance of TF-IDF features and various machine learning classifiers on the task of predicting the primary diagnosis from the text content of a given report, and ii) using TF-IDF and LDA techniques to highlight the important keywords within a report. For the first experiment series, training reports are pre-processed and their TF-IDF features are extracted. The TF-IDF features and the training labels are used to train different classification models. These classification models and their hyper-parameters are reported in tab:classifier. The performance of the classifiers is measured quantitatively on the test dataset using the evaluation metrics discussed in the previous section. For the second experiment series, a random report is selected and its top 50 keywords are extracted using TF-IDF weights. These 50 keywords are highlighted using different colors based on their associated topics, which are extracted through LDA. A non-expert qualitative inspection is performed on the extracted keywords and their corresponding topics. Experiment Series 1 A classification model is trained to predict the primary diagnosis given the content of the cancer pathology report. The performance results on this task are reported in tab:results. We can observe that the XGBoost classifier outperformed all other models for both the micro F-score metric, with a score of 0.92, and the macro F-score metric, with a score of 0.31. This was an improvement of 7% for the micro F-score over the next best model, SVM-L, and a marginal improvement of 5% for the macro F-score. It is interesting to note that SVM with a linear kernel performs much better than SVM with an RBF kernel, scoring 9% more on the macro F-score and 12% more on the micro F-score. We suspect that this is because words from the primary diagnosis itself occur in some reports, enabling the linear models to outperform more complex models. Experiment Series 2 fig:keywords shows the top 50 keywords highlighted using TF-IDF and LDA. The proposed approach has performed well in highlighting the important regions; for example, the topic highlighted in red, containing “presence range tumor necrosis”, provides useful biomarker information to readers. Conclusions We proposed a simple yet efficient TF-IDF method to extract and corroborate useful keywords from cancer pathology reports. Encoding a pathology report for cancer and tumor surveillance is a laborious task, and it is sometimes subject to human error and variability in interpretation. One of the most important aspects of encoding a pathology report involves extracting the primary diagnosis. This may be very useful for content-based image retrieval to combine with visual information. We used existing classification models and TF-IDF features to predict the primary diagnosis. 
We achieved up to 92% accuracy using XGBoost classifier. The prediction accuracy empowers the adoption of machine learning methods for automated information extraction from pathology reports.
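As a rough illustration of the pipeline described in this report (TF-IDF features fed to a linear classifier and scored with micro/macro F1), a minimal scikit-learn sketch is given below. The function name and the train/test inputs are hypothetical placeholders rather than part of the original study, and LinearSVC stands in for the SVM-L configuration; the authors' best-performing model was XGBoost.

```python
# Minimal sketch of the reported pipeline: TF-IDF features + a linear classifier,
# evaluated with micro/macro F1. All input names are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

def train_and_evaluate(train_texts, train_labels, test_texts, test_labels):
    # The vocabulary is built from the pre-processed training reports only,
    # mirroring the feature-construction steps described above.
    vectorizer = TfidfVectorizer(lowercase=True)
    X_train = vectorizer.fit_transform(train_texts)   # sparse TF-IDF matrix
    X_test = vectorizer.transform(test_texts)

    # LinearSVC stands in for the linear SVM; the paper's best model was XGBoost.
    clf = LinearSVC(C=1.0)
    clf.fit(X_train, train_labels)
    preds = clf.predict(X_test)

    return {
        "micro_f1": f1_score(test_labels, preds, average="micro"),
        "macro_f1": f1_score(test_labels, preds, average="macro"),
    }
```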
What features are used?
Unanswerable
2,108
qasper
3k
Introduction In recent years, there has been a movement to leverage social media data to detect, estimate, and track changes in the prevalence of disease, for example, eating disorders in Spanish-language Twitter tweets BIBREF0 and influenza surveillance BIBREF1 . More recently, social media has been leveraged to monitor social risks such as prescription drug and smoking behaviors BIBREF2 , BIBREF3 , BIBREF4 as well as a variety of mental health disorders including suicidal ideation BIBREF5 , attention deficit hyperactivity disorder BIBREF6 and major depressive disorder BIBREF7 . In the case of major depressive disorder, recent efforts range from characterizing linguistic phenomena associated with depression BIBREF8 and its subtypes, e.g., postpartum depression BIBREF5 , to identifying specific depressive symptoms BIBREF9 , BIBREF10 , e.g., depressed mood. However, more research is needed to better understand the predictive power of supervised machine learning classifiers and the influence of feature groups and feature sets for efficiently classifying depression-related tweets to support mental health monitoring at the population level BIBREF11 . This paper builds upon related work toward classifying Twitter tweets representing symptoms of major depressive disorder by assessing the contribution of lexical features (e.g., unigrams) and emotion (e.g., strongly negative) to classification performance, and by applying methods to eliminate low-value features. METHODS Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., “Citizens fear an economic depression") or evidence of depression (e.g., “depressed over disappointment"). If a tweet is annotated as evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., “feeling down in the dumps"), disturbed sleep (e.g., “another restless night"), or fatigue or loss of energy (e.g., “the fatigue is unbearable") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class (e.g., depressed mood=1) or the negative class (e.g., not depressed mood=0). Features Furthermore, this dataset was encoded with 7 feature groups with associated feature values binarized (i.e., present=1 or absent=0) to represent potentially informative features for classifying depression-related classes. 
We describe the feature groups by type and subtype, and provide one or more examples of words representing the feature subtype from a tweet: lexical features, unigrams, e.g., “depressed”; syntactic features, parts of speech, e.g., “cried” encoded as V for verb; emotion features, emoticons, e.g., :( encoded as SAD; demographic features, age and gender, e.g., “this semester” encoded as an indicator of 19-22 years of age and “my girlfriend” encoded as an indicator of male gender, respectively; sentiment features, polarity and subjectivity terms with strengths, e.g., “terrible” encoded as strongly negative and strongly subjective; personality traits, neuroticism, e.g., “pissed off” implies neuroticism; LIWC features, indicators of an individual's thoughts, feelings, personality, and motivations, e.g., “feeling” suggests perception, feeling, insight, and cognitive mechanisms experienced by the Twitter user. A more detailed description of the leveraged features and their values, including LIWC categories, can be found in BIBREF10 . Based on our prior experiments using these feature groups BIBREF10 , we learned that support vector machines achieve the highest F1-score compared to other supervised approaches. For this study, we aim to build upon this work by conducting two experiments: 1) to assess the contribution of each feature group and 2) to determine the optimal percentile of top ranked features for classifying Twitter tweets in the depression schema hierarchy. Feature Contribution Feature ablation studies are conducted to assess the informativeness of a feature group by quantifying the change in predictive power when comparing the performance of a classifier trained with all feature groups versus the performance without a particular feature group. We conducted a feature ablation study by holding out (sans) each feature group and training and testing the support vector model using a linear kernel and 5-fold, stratified cross-validation (a minimal code sketch of this protocol is given below). We report the average F1-score from our baseline approach (all feature groups) and report the point difference (+ or -) in F1-score performance observed by ablating each feature set. By ablating each feature group from the full dataset, we observed the following feature counts: sans lexical: 185, sans syntactic: 16,935, sans emotion: 16,954, sans demographics: 16,946, sans sentiment: 16,950, sans personality: 16,946, and sans LIWC: 16,832. In Figure 1, compared to the baseline performance, significant drops in F1-scores resulted from sans lexical for depressed mood (-35 points), disturbed sleep (-43 points), and depressive symptoms (-45 points). Less extensive drops also occurred for evidence of depression (-14 points) and fatigue or loss of energy (-3 points). In contrast, a 3 point gain in F1-score was observed for no evidence of depression. We also observed notable drops in F1-scores for disturbed sleep by ablating demographics (-7 points), emotion (-5 points), and sentiment (-5 points) features. These F1-score drops were accompanied by drops in both recall and precision. We found equal or higher F1-scores by removing non-lexical feature groups for no evidence of depression (0-1 points), evidence of depression (0-1 points), and depressive symptoms (2 points). Unsurprisingly, lexical features (unigrams) were the largest contributor to feature counts in the dataset. We observed that lexical features are also critical for identifying depressive symptoms, specifically for depressed mood and for disturbed sleep. 
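The ablation protocol referenced above (hold out one feature group, retrain a linear-kernel SVM with 5-fold stratified cross-validation, and compare F1 against the all-features baseline) can be sketched as follows. The feature matrix X, the labels y, and the mapping from feature-group names to column indices are hypothetical inputs; this is an illustrative reconstruction, not the authors' code.

```python
# Sketch of the ablation protocol: drop one feature group at a time, retrain a
# linear SVM with 5-fold stratified CV, and compare F1 against the full model.
# X, y, and feature_groups (group name -> column indices) are hypothetical inputs.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

def ablation_f1(X, y, feature_groups):
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    baseline = cross_val_score(SVC(kernel="linear"), X, y, cv=cv, scoring="f1").mean()
    deltas = {}
    for group, cols in feature_groups.items():
        keep = np.setdiff1d(np.arange(X.shape[1]), cols)  # hold out ("sans") this group
        score = cross_val_score(SVC(kernel="linear"), X[:, keep], y, cv=cv, scoring="f1").mean()
        deltas[group] = score - baseline  # a negative delta marks an informative group
    return baseline, deltas
```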
For the classes higher in the hierarchy - no evidence of depression, evidence of depression, and depressive symptoms - the classifier produced consistent F1-scores, even slightly above the baseline for depressive symptoms and minor fluctuations of change in recall and precision when removing other feature groups suggesting that the contribution of non-lexical features to classification performance was limited. However, notable changes in F1-score were observed for the classes lower in the hierarchy including disturbed sleep and fatigue or loss of energy. For instance, changes in F1-scores driven by both recall and precision were observed for disturbed sleep by ablating demographics, emotion, and sentiment features, suggesting that age or gender (“mid-semester exams have me restless”), polarity and subjective terms (“lack of sleep is killing me”), and emoticons (“wide awake :(”) could be important for both identifying and correctly classifying a subset of these tweets. Feature Elimination Feature elimination strategies are often taken 1) to remove irrelevant or noisy features, 2) to improve classifier performance, and 3) to reduce training and run times. We conducted an experiment to determine whether we could maintain or improve classifier performances by applying the following three-tiered feature elimination approach: Reduction We reduced the dataset encoded for each class by eliminating features that occur less than twice in the full dataset. Selection We iteratively applied Chi-Square feature selection on the reduced dataset, selecting the top percentile of highest ranked features in increments of 5 percent to train and test the support vector model using a linear kernel and 5-fold, stratified cross-validation. Rank We cumulatively plotted the average F1-score performances of each incrementally added percentile of top ranked features. We report the percentile and count of features resulting in the first occurrence of the highest average F1-score for each class. All experiments were programmed using scikit-learn 0.18. The initial matrices of almost 17,000 features were reduced by eliminating features that only occurred once in the full dataset, resulting in 5,761 features. We applied Chi-Square feature selection and plotted the top-ranked subset of features for each percentile (at 5 percent intervals cumulatively added) and evaluated their predictive contribution using the support vector machine with linear kernel and stratified, 5-fold cross validation. In Figure 2, we observed optimal F1-score performance using the following top feature counts: no evidence of depression: F1: 87 (15th percentile, 864 features), evidence of depression: F1: 59 (30th percentile, 1,728 features), depressive symptoms: F1: 55 (15th percentile, 864 features), depressed mood: F1: 39 (55th percentile, 3,168 features), disturbed sleep: F1: 46 (10th percentile, 576 features), and fatigue or loss of energy: F1: 72 (5th percentile, 288 features) (Figure 1). We note F1-score improvements for depressed mood from F1: 13 at the 1st percentile to F1: 33 at the 20th percentile. We observed peak F1-score performances at low percentiles for fatigue or loss of energy (5th percentile), disturbed sleep (10th percentile) as well as depressive symptoms and no evidence of depression (both 15th percentile) suggesting fewer features are needed to reach optimal performance. 
In contrast, peak F1-score performances occurred at moderate percentiles for evidence of depression (30th percentile) and depressed mood (55th percentile) suggesting that more features are needed to reach optimal performance. However, one notable difference between these two classes is the dramatic F1-score improvements for depressed mood i.e., 20 point increase from the 1st percentile to the 20th percentile compared to the more gradual F1-score improvements for evidence of depression i.e., 11 point increase from the 1st percentile to the 20th percentile. This finding suggests that for identifying depressed mood a variety of features are needed before incremental gains are observed. RESULTS From our annotated dataset of Twitter tweets (n=9,300 tweets), we conducted two feature studies to better understand the predictive power of several feature groups for classifying whether or not a tweet contains no evidence of depression (n=6,829 tweets) or evidence of depression (n=2,644 tweets). If there was evidence of depression, we determined whether the tweet contained one or more depressive symptoms (n=1,656 tweets) and further classified the symptom subtype of depressed mood (n=1,010 tweets), disturbed sleep (n=98 tweets), or fatigue or loss of energy (n=427 tweets) using support vector machines. From our prior work BIBREF10 and in Figure 1, we report the performance for prediction models built by training a support vector machine using 5-fold, stratified cross-validation with all feature groups as a baseline for each class. We observed high performance for no evidence of depression and fatigue or loss of energy and moderate performance for all remaining classes. Discussion We conducted two feature study experiments: 1) a feature ablation study to assess the contribution of feature groups and 2) a feature elimination study to determine the optimal percentile of top ranked features for classifying Twitter tweets in the depression schema hierarchy. Future Work Our next step is to address the classification of rarer depressive symptoms suggestive of major depressive disorder from our dataset and hierarchy including inappropriate guilt, difficulty concentrating, psychomotor agitation or retardation, weight loss or gain, and anhedonia BIBREF15 , BIBREF16 . We are developing a population-level monitoring framework designed to estimate the prevalence of depression (and depression-related symptoms and psycho-social stressors) over millions of United States-geocoded tweets. Identifying the most discriminating feature sets and natural language processing classifiers for each depression symptom is vital for this goal. Conclusions In summary, we conducted two feature study experiments to assess the contribution of feature groups and to determine the optimal percentile of top ranked features for classifying Twitter tweets in the depression schema hierarchy. From these experiments, we conclude that simple lexical features and reduced feature sets can produce comparable results to the much larger feature dataset. Acknowledgments Research reported in this publication was supported by the National Library of Medicine of the [United States] National Institutes of Health under award numbers K99LM011393 and R00LM011393. This study was granted an exemption from review by the University of Utah Institutional Review Board (IRB 00076188). Note that in order to protect tweeter anonymity, we have not reproduced tweets verbatim. Example tweets shown were generated by the researchers as exemplars only. 
Finally, we would like to thank the anonymous reviewers of this paper for their valuable comments.
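To make the three-tiered feature elimination described in the METHODS section concrete, the following sketch reduces rare features, sweeps Chi-Square percentiles in 5-percent increments, and records the cross-validated F1 of a linear-kernel SVM at each step. The matrix X and labels y are hypothetical placeholders, the reduction step is approximated for binary presence features, and this is not the authors' original implementation.

```python
# Sketch of the three-tiered elimination: (1) drop features occurring fewer than
# twice, (2) sweep Chi-Square percentiles in 5% steps, (3) record the
# cross-validated F1 at each percentile. X and y are hypothetical inputs.
from sklearn.feature_selection import SelectPercentile, chi2
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import Pipeline

def percentile_sweep(X, y):
    # Step 1 (reduction), approximated for binary presence indicators: a column
    # sum below 2 means the feature occurs fewer than twice in the full dataset.
    keep = (X.sum(axis=0) >= 2)
    X_red = X[:, keep]

    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = {}
    for pct in range(5, 101, 5):
        model = Pipeline([
            ("select", SelectPercentile(chi2, percentile=pct)),
            ("svm", SVC(kernel="linear")),
        ])
        scores[pct] = cross_val_score(model, X_red, y, cv=cv, scoring="f1").mean()
    return scores  # report the first percentile reaching the highest average F1
```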
How is the dataset annotated?
no evidence of depression, depressed mood, disturbed sleep, fatigue or loss of energy
1,947
qasper
3k
Introduction Pretrained Language Models (PTLMs) such as BERT BIBREF1 have spearheaded advances on many NLP tasks. Usually, PTLMs are pretrained on unlabeled general-domain and/or mixed-domain text, such as Wikipedia, digital books or the Common Crawl corpus. When applying PTLMs to specific domains, it can be useful to domain-adapt them. Domain adaptation of PTLMs has typically been achieved by pretraining on target-domain text. One such model is BioBERT BIBREF2, which was initialized from general-domain BERT and then pretrained on biomedical scientific publications. The domain adaptation is shown to be helpful for target-domain tasks such as biomedical Named Entity Recognition (NER) or Question Answering (QA). On the downside, the computational cost of pretraining can be considerable: BioBERTv1.0 was adapted for ten days on eight large GPUs (see Table TABREF1), which is expensive, environmentally unfriendly, prohibitive for small research labs and students, and may delay prototyping on emerging domains. We therefore propose a fast, CPU-only domain-adaptation method for PTLMs: We train Word2Vec BIBREF3 on target-domain text and align the resulting word vectors with the wordpiece vectors of an existing general-domain PTLM. The PTLM thus gains domain-specific lexical knowledge in the form of additional word vectors, but its deeper layers remain unchanged. Since Word2Vec and the vector space alignment are efficient models, the process requires a fraction of the resources associated with pretraining the PTLM itself, and it can be done on CPU. In Section SECREF4, we use the proposed method to domain-adapt BERT on PubMed+PMC (the data used for BioBERTv1.0) and/or CORD-19 (Covid-19 Open Research Dataset). We improve over general-domain BERT on eight out of eight biomedical NER tasks, using a fraction of the compute cost associated with BioBERT. In Section SECREF5, we show how to quickly adapt an existing Question Answering model to text about the Covid-19 pandemic, without any target-domain Language Model pretraining or finetuning. Related work ::: The BERT PTLM For our purpose, a PTLM consists of three parts: A tokenizer $\mathcal {T}_\mathrm {LM} : \mathbb {L}^+ \rightarrow \mathbb {L}_\mathrm {LM}^+$, a wordpiece embedding function $\mathcal {E}_\mathrm {LM}: \mathbb {L}_\mathrm {LM} \rightarrow \mathbb {R}^{d_\mathrm {LM}}$ and an encoder function $\mathcal {F}_\mathrm {LM}$. $\mathbb {L}_\mathrm {LM}$ is a limited vocabulary of wordpieces. All words that are not in $\mathbb {L}_\mathrm {LM}$ are tokenized into sequences of shorter wordpieces, e.g., tachycardia becomes ta ##chy ##card ##ia. Given a sentence $S = [w_1, \ldots , w_T]$, tokenized as $\mathcal {T}_\mathrm {LM}(S) = [\mathcal {T}_\mathrm {LM}(w_1); \ldots ; \mathcal {T}_\mathrm {LM}(w_T)]$, $\mathcal {E}_\mathrm {LM}$ embeds every wordpiece in $\mathcal {T}_\mathrm {LM}(S)$ into a real-valued, trainable wordpiece vector. The wordpiece vectors of the entire sequence are stacked and fed into $\mathcal {F}_\mathrm {LM}$. Note that we consider position and segment embeddings to be a part of $\mathcal {F}_\mathrm {LM}$ rather than $\mathcal {E}_\mathrm {LM}$. In the case of BERT, $\mathcal {F}_\mathrm {LM}$ is a Transformer BIBREF4, followed by a final Feed-Forward Net. During pretraining, the Feed-Forward Net predicts the identity of masked wordpieces. When finetuning on a supervised task, it is usually replaced with a randomly initialized task-specific layer. 
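To make the notation above concrete, the decomposition into tokenizer $\mathcal {T}_\mathrm {LM}$, wordpiece embedding $\mathcal {E}_\mathrm {LM}$ and encoder $\mathcal {F}_\mathrm {LM}$ can be illustrated with the HuggingFace transformers API; this mapping is only illustrative and is not necessarily the interface used by the authors.

```python
# Illustration of the three PTLM components (tokenizer, wordpiece embedding,
# encoder) using the HuggingFace transformers API. The mapping to the paper's
# notation is illustrative only.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")   # T_LM
model = BertModel.from_pretrained("bert-base-cased")           # E_LM + F_LM

pieces = tokenizer.tokenize("tachycardia")
# As in the paper's example, the word is not in L_LM, so it is split into
# shorter wordpieces such as ta ##chy ##card ##ia.

ids = tokenizer.convert_tokens_to_ids(pieces)
wordpiece_vectors = model.get_input_embeddings()(torch.tensor(ids))  # E_LM output
print(wordpiece_vectors.shape)  # (number of wordpieces, 768) for bert-base
```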
Related work ::: Domain-adapted PTLMs Domain adaptation of PTLMs is typically achieved by pretraining on unlabeled target-domain text. Some examples of such models are BioBERT BIBREF2, which was pretrained on the PubMed and/or PubMed Central (PMC) corpora, SciBERT BIBREF5, which was pretrained on papers from SemanticScholar, ClinicalBERT BIBREF6, BIBREF7 and ClinicalXLNet BIBREF8, which were pretrained on clinical patient notes, and AdaptaBERT BIBREF9, which was pretrained on Early Modern English text. In most cases, a domain-adapted PTLM is initialized from a general-domain PTLM (e.g., standard BERT), though BIBREF5 report better results with a model that was pretrained from scratch with a custom wordpiece vocabulary. In this paper, we focus on BioBERT, as its domain adaptation corpora are publicly available. Related work ::: Word vectors Word vectors are distributed representations of words that are trained on unlabeled text. Contrary to PTLMs, word vectors are non-contextual, i.e., a word type is always assigned the same vector, regardless of context. In this paper, we use Word2Vec BIBREF3 to train word vectors. We will denote the Word2Vec lookup function as $\mathcal {E}_\mathrm {W2V} : \mathbb {L}_\mathrm {W2V} \rightarrow \mathbb {R}^{d_\mathrm {W2V}}$. Related work ::: Word vector space alignment Word vector space alignment has most frequently been explored in the context of cross-lingual word embeddings. For instance, BIBREF10 align English and Spanish Word2Vec spaces by a simple linear transformation. BIBREF11 use a related method to align cross-lingual word vectors and multilingual BERT wordpiece vectors. Method In the following, we assume access to a general-domain PTLM, as described in Section SECREF2, and a corpus of unlabeled target-domain text. Method ::: Creating new input vectors In a first step, we train Word2Vec on the target-domain corpus. In a second step, we take the intersection of $\mathbb {L}_\mathrm {LM}$ and $\mathbb {L}_\mathrm {W2V}$. In practice, the intersection mostly contains wordpieces from $\mathbb {L}_\mathrm {LM}$ that correspond to standalone words. It also contains single characters and other noise, however, we found that filtering them does not improve alignment quality. In a third step, we use the intersection to fit an unconstrained linear transformation $\mathbf {W} \in \mathbb {R}^{d_\mathrm {LM} \times d_\mathrm {W2V}}$ via least squares: Intuitively, $\mathbf {W}$ makes Word2Vec vectors “look like” the PTLM's native wordpiece vectors, just like cross-lingual alignment makes word vectors from one language “look like” word vectors from another language. In Table TABREF7 (top), we show examples of within-space and cross-space nearest neighbors after alignment. Method ::: Updating the wordpiece embedding layer Next, we redefine the wordpiece embedding layer of the PTLM. The most radical strategy would be to replace the entire layer with the aligned Word2Vec vectors: In initial experiments, this strategy led to a drop in performance, presumably because function words are not well represented by Word2Vec, and replacing them disrupts BERT's syntactic abilities. To prevent this problem, we leave existing wordpiece vectors intact and only add new ones: Method ::: Updating the tokenizer In a final step, we update the tokenizer to account for the added words. 
Let $\mathcal {T}_\mathrm {LM}$ be the standard BERT tokenizer, and let $\hat{\mathcal {T}}_\mathrm {LM}$ be the tokenizer that treats all words in $\mathbb {L}_\mathrm {LM} \cup \mathbb {L}_\mathrm {W2V}$ as one-wordpiece tokens, while tokenizing any other words as usual. In practice, a given word may or may not benefit from being tokenized by $\hat{\mathcal {T}}_\mathrm {LM}$ instead of $\mathcal {T}_\mathrm {LM}$. To give a concrete example, 82% of the words in the BC5CDR NER dataset that end in the suffix -ia are inside a disease entity (e.g., tachycardia). $\mathcal {T}_\mathrm {LM}$ tokenizes this word as ta ##chy ##card ##ia, thereby exposing the orthographic cue to the model. As a result, $\mathcal {T}_\mathrm {LM}$ leads to higher recall on -ia diseases. But there are many cases where wordpiece tokenization is meaningless or misleading. For instance euthymia (not a disease) is tokenized by $\mathcal {T}_\mathrm {LM}$ as e ##uth ##ym ##ia, making it likely to be classified as a disease. By contrast, $\hat{\mathcal {T}}_\mathrm {LM}$ gives euthymia a one-wordpiece representation that depends only on distributional semantics. We find that using $\hat{\mathcal {T}}_\mathrm {LM}$ improves precision on -ia diseases. To combine these complementary strengths, we use a 50/50 mixture of $\mathcal {T}_\mathrm {LM}$-tokenization and $\hat{\mathcal {T}}_\mathrm {LM}$-tokenization when finetuning the PTLM on a task. At test time, we use both tokenizers and mean-pool the outputs. Let $o(\mathcal {T}(S))$ be some output of interest (e.g., a logit), given sentence $S$ tokenized by $\mathcal {T}$. We predict: Experiment 1: Biomedical NER In this section, we use the proposed method to create GreenBioBERT, an inexpensive and environmentally friendly alternative to BioBERT. Recall that BioBERTv1.0 (biobert_v1.0_pubmed_pmc) was initialized from general-domain BERT (bert-base-cased) and pretrained on PubMed+PMC. Experiment 1: Biomedical NER ::: Domain adaptation We train Word2Vec with vector size $d_\mathrm {W2V} = d_\mathrm {LM} = 768$ on PubMed+PMC (see Appendix for details). Then, we follow the procedure described in Section SECREF3 to update the wordpiece embedding layer and tokenizer of general-domain BERT. Experiment 1: Biomedical NER ::: Finetuning We finetune GreenBioBERT on the eight publicly available NER tasks used in BIBREF2. We also do reproduction experiments with general-domain BERT and BioBERTv1.0, using the same setup as our model. We average results over eight random seeds. See Appendix for details on preprocessing, training and hyperparameters. Experiment 1: Biomedical NER ::: Results and discussion Table TABREF7 (bottom) shows entity-level precision, recall and F1. For ease of visualization, Figure FIGREF13 shows what portion of the BioBERT – BERT F1 delta is covered. We improve over general-domain BERT on all tasks with varying effect sizes. Depending on the points of reference, we cover an average 52% to 60% of the BioBERT – BERT F1 delta (54% for BioBERTv1.0, 60% for BioBERTv1.1 and 52% for our reproduction experiments). Table TABREF17 (top) shows the importance of vector space alignment: If we replace the aligned Word2Vec vectors with their non-aligned counterparts (by setting $\mathbf {W} = \mathbf {1}$) or with randomly initialized vectors, F1 drops on all tasks. Experiment 2: Covid-19 QA In this section, we use the proposed method to quickly adapt an existing general-domain QA model to an emerging target domain: Covid-19. 
Our baseline model is SQuADBERT (bert-large-uncased-whole-word-masking-finetuned-squad), a version of BERT that was finetuned on general-domain SQuAD BIBREF19. We evaluate on Deepset-AI Covid-QA, a SQuAD-style dataset with 1380 questions (see Appendix for details on data and preprocessing). We assume that there is no target-domain finetuning data, which is a realistic setup for a new domain. Experiment 2: Covid-19 QA ::: Domain adaptation We train Word2Vec with vector size $d_\mathrm {W2V} = d_\mathrm {LM} = 1024$ on CORD-19 (Covid-19 Open Research Dataset) and/or PubMed+PMC. The process takes less than an hour on CORD-19 and about one day on the combined corpus, again without the need for a GPU. Then, we update SQuADBERT's wordpiece embedding layer and tokenizer, as described in Section SECREF3. We refer to the resulting model as GreenCovidSQuADBERT. Experiment 2: Covid-19 QA ::: Results and discussion Table TABREF17 (bottom) shows that GreenCovidSQuADBERT outperforms general-domain SQuADBERT in all metrics. Most of the improvement can be achieved with just the small CORD-19 corpus, which is more specific to the target domain (compare “Cord-19 only” and “Cord-19+PubMed+PMC”). Conclusion As a reaction to the trend towards high-resource models, we have proposed an inexpensive, CPU-only method for domain-adapting Pretrained Language Models: We train Word2Vec vectors on target-domain data and align them with the wordpiece vector space of a general-domain PTLM. On eight biomedical NER tasks, we cover over 50% of the BioBERT – BERT F1 delta, at 5% of BioBERT's domain adaptation CO$_2$ footprint and 2% of its cloud compute cost. We have also shown how to rapidly adapt an existing BERT QA model to an emerging domain – the Covid-19 pandemic – without the need for target-domain Language Model pretraining or finetuning. We hope that our approach will benefit practitioners with limited time or resources, and that it will encourage environmentally friendlier NLP. Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Word2Vec training We downloaded the PubMed, PMC and CORD-19 corpora from: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/ [20 January 2020, 68GB raw text] https://ftp.ncbi.nlm.nih.gov/pubmed/baseline/ [20 January 2020, 24GB raw text] https://pages.semanticscholar.org/coronavirus-research [17 April 2020, 2GB raw text] We extract all abstracts and text bodies and apply the BERT basic tokenizer (a word tokenizer that standard BERT uses before wordpiece tokenization). Then, we train CBOW Word2Vec with negative sampling. We use default parameters except for the vector size (which we set to $d_\mathrm {W2V} = d_\mathrm {LM}$). Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 1: Biomedical NER ::: Pretrained models General-domain BERT and BioBERTv1.0 were downloaded from: https://storage.googleapis.com/bert_models/2018_10_18/cased_L-12_H-768_A-12.zip https://github.com/naver/biobert-pretrained Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 1: Biomedical NER ::: Data We downloaded the NER datasets by following instructions on https://github.com/dmis-lab/biobert#Datasets. For detailed dataset statistics, see BIBREF2. Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 1: Biomedical NER ::: Preprocessing We cut all sentences into chunks of 30 or fewer whitespace-tokenized words (without splitting inside labeled spans). 
Then, we tokenize every chunk $S$ with $\mathcal {T} = \mathcal {T}_\mathrm {LM}$ or $\mathcal {T} = \hat{\mathcal {T}}_\mathrm {LM}$ and add special tokens: Word-initial wordpieces in $\mathcal {T}(S)$ are labeled as B(egin), I(nside) or O(utside), while non-word-initial wordpieces are labeled as X(ignore). Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 1: Biomedical NER ::: Modeling, training and inference We follow BIBREF2's implementation (https://github.com/dmis-lab/biobert): We add a randomly initialized softmax classifier on top of the last BERT layer to predict the labels. We finetune the entire model to minimize negative log likelihood, with the standard Adam optimizer BIBREF20 and a linear learning rate scheduler (10% warmup). Like BIBREF2, we finetune on the concatenation of the training and development set. All finetuning runs were done on a GeForce Titan X GPU (12GB). Since we do not have the resources for an extensive hyperparameter search, we use defaults and recommendations from the BioBERT repository: Batch size of 32, peak learning rate of $1 \cdot 10^{-5}$, and 100 epochs. At inference time, we gather the output logits of word-initial wordpieces only. Since the number of word-initial wordpieces is the same for $\mathcal {T}_\mathrm {LM}(S)$ and $\hat{\mathcal {T}}_\mathrm {LM}(S)$, this makes mean-pooling the logits straightforward. Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 1: Biomedical NER ::: Note on our reproduction experiments We found it easier to reproduce or exceed BIBREF2's results for general-domain BERT, compared to their results for BioBERTv1.0 (see Figure FIGREF13, main paper). While this may be due to hyperparameters, it suggests that BioBERTv1.0 was more strongly tuned than BERT in the original BioBERT paper. This observation does not affect our conclusions, as GreenBioBERT performs better than reproduced BERT as well. Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 2: Covid-19 QA ::: Pretrained model We downloaded the SQuADBERT baseline from: https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 2: Covid-19 QA ::: Data We downloaded the Deepset-AI Covid-QA dataset from: https://github.com/deepset-ai/COVID-QA/blob/master/data/question-answering/200423_covidQA.json [24 April 2020] At the time of writing, the dataset contains 1380 questions and gold answer spans. Every question is associated with one of 98 research papers (contexts). We treat the entire dataset as a test set. Note that there are some important differences between the dataset and SQuAD, which make the task challenging: The contexts are full documents rather than single paragraphs. Thus, the correct answer may appear several times, often with slightly different wordings. Only a single one of the occurrences is annotated as correct, e.g.: What was the prevalence of Coronavirus OC43 in community samples in Ilorin, Nigeria? 13.3% (95% CI 6.9-23.6%) # from main text (13.3%, 10/75). # from abstract SQuAD gold answers are defined as the “shortest span in the paragraph that answered the question” BIBREF19, but many Covid-QA gold answers are longer and contain non-essential context, e.g.: When was the Middle East Respiratory Syndrome Coronavirus isolated first? 
(MERS-CoV) was first isolated in 2012, in a 60-year-old man who died in Jeddah, KSA due to severe acute pneumonia and multiple organ failure 2012, Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 2: Covid-19 QA ::: Preprocessing We tokenize every question-context pair $(Q, C)$ with $\mathcal {T} = \mathcal {T}_\mathrm {LM}$ or $\mathcal {T} = \hat{\mathcal {T}}_\mathrm {LM}$, which yields $(\mathcal {T}(Q), \mathcal {T}(C))$. Since $\mathcal {T}(C)$ is usually too long to be digested in a single forward pass, we define a sliding window with width and stride $N = \mathrm {floor}(\frac{509 - |\mathcal {T}(Q)|}{2})$. At step $n$, the “active” window is between $a^{(l)}_n = nN$ and $a^{(r)}_n = \mathrm {min}(|C|, nN+N)$. The input is defined as: $p^{(l)}_n$ and $p^{(r)}_n$ are chosen such that $|X^{(n)}| = 512$, and such that the active window is in the center of the input (if possible). Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 2: Covid-19 QA ::: Modeling and inference Feeding $X^{(n)}$ into the pretrained QA model yields start logits $\mathbf {h^{\prime }}^{(\mathrm {start}, n)} \in \mathbb {R}^{|X^{(n)}|}$ and end logits $\mathbf {h^{\prime }}^{(\mathrm {end},n)} \in \mathbb {R}^{|X^{(n)}|}$. We extract and concatenate the slices that correspond to the active windows of all steps: Next, we map the logits from the wordpiece level to the word level. This allows us to mean-pool the outputs of $\mathcal {T}_\mathrm {LM}$ and $\hat{\mathcal {T}}_\mathrm {LM}$ even when $|\mathcal {T}_\mathrm {LM}(C)| \ne |\hat{\mathcal {T}}_\mathrm {LM}(C)|$. Let $c_i$ be a whitespace-delimited word in $C$. Let $\mathcal {T}(C)_{j:j+|\mathcal {T}(c_i)|}$ be the corresponding wordpieces. The start and end logits of $c_i$ are derived as: Finally, we return the answer span $C_{k:k^{\prime }}$ that maximizes $o^{(\mathrm {start})}_k + o^{(\mathrm {end})}_{k^{\prime }}$, subject to the constraints that $k^{\prime }$ does not precede $k$ and the answer span is not longer than 500 characters.
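Pulling the pieces of the Method section together, the following condensed sketch trains Word2Vec on a (pre-tokenized) target-domain corpus, fits the least-squares alignment $\mathbf {W}$ on the shared vocabulary, and registers the aligned vectors as new one-wordpiece tokens. It assumes gensim >= 4 and HuggingFace transformers; the function name and inputs are hypothetical, and this is not the authors' exact implementation.

```python
# Condensed sketch of the domain-adaptation recipe: Word2Vec on target-domain
# text, least-squares alignment to BERT's wordpiece space, then new words added
# as extra input vectors. Assumes gensim >= 4 and HuggingFace transformers.
import numpy as np
import torch
from gensim.models import Word2Vec
from transformers import BertTokenizer, BertModel

def adapt(corpus_sentences, model_name="bert-base-cased", dim=768):
    tokenizer = BertTokenizer.from_pretrained(model_name)
    model = BertModel.from_pretrained(model_name)

    # 1) CBOW Word2Vec with negative sampling on the pre-tokenized target corpus.
    w2v = Word2Vec(corpus_sentences, vector_size=dim, sg=0, negative=5, min_count=5)

    # 2) Fit W on the intersection of the wordpiece and Word2Vec vocabularies.
    vocab_lm = tokenizer.get_vocab()                     # wordpiece -> id
    shared = [w for w in w2v.wv.key_to_index if w in vocab_lm]
    E_lm = model.get_input_embeddings().weight.detach().numpy()
    X = np.stack([w2v.wv[w] for w in shared])            # Word2Vec side
    Y = np.stack([E_lm[vocab_lm[w]] for w in shared])    # BERT side
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)            # X @ W approximates Y

    # 3) Add new words as one-wordpiece tokens with aligned vectors.
    new_words = [w for w in w2v.wv.key_to_index if w not in vocab_lm]
    tokenizer.add_tokens(new_words)
    model.resize_token_embeddings(len(tokenizer))
    with torch.no_grad():
        for i, w in enumerate(new_words):
            vec = torch.tensor(w2v.wv[w] @ W, dtype=torch.float32)
            model.get_input_embeddings().weight[len(vocab_lm) + i] = vec
    return tokenizer, model
```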
Which eight NER tasks did they evaluate on?
BC5CDR-disease, NCBI-disease, BC5CDR-chem, BC4CHEMD, BC2GM, JNLPBA, LINNAEUS, Species-800
2,800
qasper
3k
Introduction Understanding the emotions expressed in a text or message is of high relevance nowadays. Companies are interested in this to get an understanding of the sentiment of their current customers regarding their products and the sentiment of their potential customers to attract new ones. Moreover, changes in a product or a company may also affect the sentiment of a customer. However, the intensity of an emotion is crucial in determining the urgency and importance of that sentiment. If someone is only slightly happy about a product, is a customer willing to buy it again? Conversely, if someone is very angry about customer service, his or her complaint might be given priority over somewhat milder complaints. BIBREF0 present four tasks in which systems have to automatically determine the intensity of emotions (EI) or the intensity of the sentiment (Valence) of tweets in the languages English, Arabic, and Spanish. The goal is to either predict a continuous regression (reg) value or to do ordinal classification (oc) based on a number of predefined categories. The EI tasks have separate training sets for four different emotions: anger, fear, joy and sadness. Due to the large number of subtasks and the fact that this language does not have many resources readily available, we only focus on the Spanish subtasks. Our work makes the following contributions: Our submissions ranked second (EI-Reg), second (EI-Oc), fourth (V-Reg) and fifth (V-Oc), demonstrating that the proposed method is accurate in automatically determining the intensity of emotions and sentiment of Spanish tweets. This paper will first focus on the datasets, the data generation procedure, and the techniques and tools used. Then we present the results in detail, after which we perform a small error analysis on the largest mistakes our model made. We conclude with some possible ideas for future work. Data For each task, the training data that was made available by the organizers is used, which is a selection of tweets with for each tweet a label describing the intensity of the emotion or sentiment BIBREF1 . Links and usernames were replaced by the general tokens URL and @username, after which the tweets were tokenized by using TweetTokenizer. All text was lowercased. In a post-processing step, it was ensured that each emoji is tokenized as a single token. Word Embeddings To be able to train word embeddings, Spanish tweets were scraped between November 8, 2017 and January 12, 2018. We chose to create our own embeddings instead of using pre-trained embeddings, because this way the embeddings would resemble the provided data set: both are based on Twitter data. Added to this set was the Affect in Tweets Distant Supervision Corpus (DISC) made available by the organizers BIBREF0 and a set of 4.1 million tweets from 2015, obtained from BIBREF2 . After removing duplicate tweets and tweets with fewer than ten tokens, this resulted in a set of 58.7 million tweets, containing 1.1 billion tokens. The tweets were preprocessed using the method described in Section SECREF6 . The word embeddings were created using word2vec in the gensim library BIBREF3 , using CBOW, a window size of 40 and a minimum count of 5. The feature vectors for each tweet were then created by using the AffectiveTweets WEKA package BIBREF4 . Translating Lexicons Most lexical resources for sentiment analysis are in English. 
To still be able to benefit from these sources, the lexicons in the AffectiveTweets package were translated to Spanish, using the machine translation platform Apertium BIBREF5 . All lexicons from the AffectiveTweets package were translated, except for SentiStrength. Instead of translating this lexicon, the English version was replaced by the Spanish variant made available by BIBREF6 . For each subtask, the optimal combination of lexicons was determined. This was done by first calculating the benefits of adding each lexicon individually, after which only beneficial lexicons were added until the score did not increase anymore (e.g. after adding the best four lexicons the fifth one did not help anymore, so only four were added). The tests were performed using a default SVM model, with the set of word embeddings described in the previous section. Each subtask thus uses a different set of lexicons (see Table TABREF1 for an overview of the lexicons used in our final ensemble). For each subtask, this resulted in a (modest) increase on the development set, between 0.01 and 0.05. Translating Data The training set provided by BIBREF0 is not very large, so it was interesting to find a way to augment the training set. A possible method is to simply translate the datasets into other languages, leaving the labels intact. Since the present study focuses on Spanish tweets, all tweets from the English datasets were translated into Spanish. This new set of “Spanish” data was then added to our original training set. Again, the machine translation platform Apertium BIBREF5 was used for the translation of the datasets. Algorithms Used Three types of models were used in our system, a feed-forward neural network, an LSTM network and an SVM regressor. The neural nets were inspired by the work of Prayas BIBREF7 in the previous shared task. Different regression algorithms (e.g. AdaBoost, XGBoost) were also tried due to the success of SeerNet BIBREF8 , but our study was not able to reproduce their results for Spanish. For both the LSTM network and the feed-forward network, a parameter search was done for the number of layers, the number of nodes and dropout used. This was done for each subtask, i.e. different tasks can have a different number of layers. All models were implemented using Keras BIBREF9 . After the best parameter settings were found, the results of 10 system runs to produce our predictions were averaged (note that this is different from averaging our different type of models in Section SECREF16 ). For the SVM (implemented in scikit-learn BIBREF10 ), the RBF kernel was used and a parameter search was conducted for epsilon. Detailed parameter settings for each subtask are shown in Table TABREF12 . Each parameter search was performed using 10-fold cross validation, as to not overfit on the development set. Semi-supervised Learning One of the aims of this study was to see if using semi-supervised learning is beneficial for emotion intensity tasks. For this purpose, the DISC BIBREF0 corpus was used. This corpus was created by querying certain emotion-related words, which makes it very suitable as a semi-supervised corpus. However, the specific emotion the tweet belonged to was not made public. Therefore, a method was applied to automatically assign the tweets to an emotion by comparing our scraped tweets to this new data set. 
First, in an attempt to obtain the query-terms, we selected the 100 words which occurred most frequently in the DISC corpus, in comparison with their frequencies in our own scraped tweets corpus. Words that were clearly not indicators of emotion were removed. The rest was annotated per emotion or removed if it was unclear to which emotion the word belonged. This allowed us to create silver datasets per emotion, assigning tweets to an emotion if an annotated emotion-word occurred in the tweet. Our semi-supervised approach is quite straightforward: first a model is trained on the training set and then this model is used to predict the labels of the silver data. This silver data is then simply added to our training set, after which the model is retrained. However, an extra step is applied to ensure that the silver data is of reasonable quality. Instead of training a single model initially, ten different models were trained which predict the labels of the silver instances. If the highest and lowest prediction do not differ more than a certain threshold the silver instance is maintained, otherwise it is discarded. This results in two parameters that could be optimized: the threshold and the number of silver instances that would be added. This method can be applied to both the LSTM and feed-forward networks that were used. An overview of the characteristics of our data set with the final parameter settings is shown in Table TABREF14 . Usually, only a small subset of data was added to our training set, meaning that most of the silver data is not used in the experiments. Note that since only the emotions were annotated, this method is only applicable to the EI tasks. Ensembling To boost performance, the SVM, LSTM, and feed-forward models were combined into an ensemble. For both the LSTM and feed-forward approach, three different models were trained. The first model was trained on the training data (regular), the second model was trained on both the training and translated training data (translated) and the third one was trained on both the training data and the semi-supervised data (silver). Due to the nature of the SVM algorithm, semi-supervised learning does not help, so only the regular and translated model were trained in this case. This results in 8 different models per subtask. Note that for the valence tasks no silver training data was obtained, meaning that for those tasks the semi-supervised models could not be used. Per task, the LSTM and feed-forward model's predictions were averaged over 10 prediction runs. Subsequently, the predictions of all individual models were combined into an average. Finally, models were removed from the ensemble in a stepwise manner if the removal increased the average score. This was done based on their original scores, i.e. starting out by trying to remove the worst individual model and working our way up to the best model. We only consider it an increase in score if the difference is larger than 0.002 (i.e. the difference between 0.716 and 0.718). If at some point the score does not increase and we are therefore unable to remove a model, the process is stopped and our best ensemble of models has been found. This process uses the scores on the development set of different combinations of models. Note that this means that the ensembles for different subtasks can contain different sets of models. The final model selections can be found in Table TABREF17 . 
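The stepwise ensemble pruning described above is essentially a greedy backward elimination over the development set; a minimal sketch is given below. The dictionary of per-model dev predictions, the individual model scores, and the scoring function (e.g., Pearson correlation) are hypothetical inputs, and the 0.002 threshold mirrors the one stated above.

```python
# Sketch of the stepwise ensemble pruning: start from the average of all model
# predictions and greedily drop models (worst individual score first) whenever
# removal improves the development score by more than 0.002.
import numpy as np

def prune_ensemble(dev_preds, dev_gold, score, individual_scores, min_gain=0.002):
    kept = list(dev_preds)
    best = score(dev_gold, np.mean([dev_preds[m] for m in kept], axis=0))
    # Try candidates in order of their individual dev scores, worst first.
    for cand in sorted(kept, key=lambda m: individual_scores[m]):
        trial = [m for m in kept if m != cand]
        if not trial:
            break
        trial_score = score(dev_gold, np.mean([dev_preds[m] for m in trial], axis=0))
        if trial_score > best + min_gain:
            kept, best = trial, trial_score
        else:
            break  # stop at the first model that cannot be removed
    return kept, best
```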
Results and Discussion Table TABREF18 shows the results on the development set of all individuals models, distinguishing the three types of training: regular (r), translated (t) and semi-supervised (s). In Tables TABREF17 and TABREF18 , the letter behind each model (e.g. SVM-r, LSTM-r) corresponds to the type of training used. Comparing the regular and translated columns for the three algorithms, it shows that in 22 out of 30 cases, using translated instances as extra training data resulted in an improvement. For the semi-supervised learning approach, an improvement is found in 15 out of 16 cases. Moreover, our best individual model for each subtask (bolded scores in Table TABREF18 ) is always either a translated or semi-supervised model. Table TABREF18 also shows that, in general, our feed-forward network obtained the best results, having the highest F-score for 8 out of 10 subtasks. However, Table TABREF19 shows that these scores can still be improved by averaging or ensembling the individual models. On the dev set, averaging our 8 individual models results in a better score for 8 out of 10 subtasks, while creating an ensemble beats all of the individual models as well as the average for each subtask. On the test set, however, only a small increase in score (if any) is found for stepwise ensembling, compared to averaging. Even though the results do not get worse, we cannot conclude that stepwise ensembling is a better method than simply averaging. Our official scores (column Ens Test in Table TABREF19 ) have placed us second (EI-Reg, EI-Oc), fourth (V-Reg) and fifth (V-Oc) on the SemEval AIT-2018 leaderboard. However, it is evident that the results obtained on the test set are not always in line with those achieved on the development set. Especially on the anger subtask for both EI-Reg and EI-Oc, the scores are considerably lower on the test set in comparison with the results on the development set. Therefore, a small error analysis was performed on the instances where our final model made the largest errors. Error Analysis Due to some large differences between our results on the dev and test set of this task, we performed a small error analysis in order to see what caused these differences. For EI-Reg-anger, the gold labels were compared to our own predictions, and we manually checked 50 instances for which our system made the largest errors. Some examples that were indicative of the shortcomings of our system are shown in Table TABREF20 . First of all, our system did not take into account capitalization. The implications of this are shown in the first sentence, where capitalization intensifies the emotion used in the sentence. In the second sentence, the name Imperator Furiosa is not understood. Since our texts were lowercased, our system was unable to capture the named entity and thought the sentence was about an angry emperor instead. In the third sentence, our system fails to capture that when you are so angry that it makes you laugh, it results in a reduced intensity of the angriness. Finally, in the fourth sentence, it is the figurative language me infla la vena (it inflates my vein) that the system is not able to understand. The first two error-categories might be solved by including smart features regarding capitalization and named entity recognition. However, the last two categories are problems of natural language understanding and will be very difficult to fix. 
Conclusion To conclude, the present study described our submission for the SemEval 2018 Shared Task on Affect in Tweets. We participated in four Spanish subtasks, and our submissions ranked second, second, fourth, and fifth. Our study aimed to investigate whether the automatic generation of additional training data through translation and semi-supervised learning, as well as the creation of stepwise ensembles, increases the performance of our Spanish-language models. Strong support was found for the translation and semi-supervised learning approaches; our best models for all subtasks use one of these approaches. These results suggest that both of these additional data resources are beneficial when determining emotion intensity (for Spanish). However, the creation of a stepwise ensemble from the best models did not result in better performance than simply averaging the models. In addition, some signs of overfitting on the dev set were found. In future work, we would like to apply the methods used for Spanish (translation and semi-supervised learning) to other low-resource languages and potentially also to other tasks.
How was the training data translated?
using the machine translation platform Apertium
Introduction Propaganda aims at influencing people's mindset with the purpose of advancing a specific agenda. In the Internet era, thanks to the mechanism of sharing in social networks, propaganda campaigns have the potential of reaching very large audiences BIBREF0, BIBREF1, BIBREF2. Propagandist news articles use specific techniques to convey their message, such as whataboutism, red Herring, and name calling, among many others (cf. Section SECREF3). Whereas proving intent is not easy, we can analyse the language of a claim/article and look for the use of specific propaganda techniques. Going at this fine-grained level can yield more reliable systems and it also makes it possible to explain to the user why an article was judged as propagandist by an automatic system. With this in mind, we organised the shared task on fine-grained propaganda detection at the NLP4IF@EMNLP-IJCNLP 2019 workshop. The task is based on a corpus of news articles annotated with an inventory of 18 propagandist techniques at the fragment level. We hope that the corpus would raise interest outside of the community of researchers studying propaganda. For example, the techniques related to fallacies and the ones relying on emotions might provide a novel setting for researchers interested in Argumentation and Sentiment Analysis. Related Work Propaganda has been tackled mostly at the article level. BIBREF3 created a corpus of news articles labelled as propaganda, trusted, hoax, or satire. BIBREF4 experimented with a binarized version of that corpus: propaganda vs. the other three categories. BIBREF5 annotated a large binary corpus of propagandist vs. non-propagandist articles and proposed a feature-based system for discriminating between them. In all these cases, the labels were obtained using distant supervision, assuming that all articles from a given news outlet share the label of that outlet, which inevitably introduces noise BIBREF6. A related field is that of computational argumentation which, among others, deals with some logical fallacies related to propaganda. BIBREF7 presented a corpus of Web forum discussions with instances of ad hominem fallacy. BIBREF8, BIBREF9 introduced Argotario, a game to educate people to recognize and create fallacies, a by-product of which is a corpus with $1.3k$ arguments annotated with five fallacies such as ad hominem, red herring and irrelevant authority, which directly relate to propaganda. Unlike BIBREF8, BIBREF9, BIBREF7, our corpus uses 18 techniques annotated on the same set of news articles. Moreover, our annotations aim at identifying the minimal fragments related to a technique instead of flagging entire arguments. The most relevant related work is our own, which is published in parallel to this paper at EMNLP-IJCNLP 2019 BIBREF10 and describes a corpus that is a subset of the one used for this shared task. Propaganda Techniques Propaganda uses psychological and rhetorical techniques to achieve its objective. Such techniques include the use of logical fallacies and appeal to emotions. For the shared task, we use 18 techniques that can be found in news articles and can be judged intrinsically, without the need to retrieve supporting information from external resources. We refer the reader to BIBREF10 for more details on the propaganda techniques; below we report the list of techniques: Propaganda Techniques ::: 1. Loaded language. Using words/phrases with strong emotional implications (positive or negative) to influence an audience BIBREF11. Propaganda Techniques ::: 2. 
Name calling or labeling. Labeling the object of the propaganda as something the target audience fears, hates, finds undesirable or otherwise loves or praises BIBREF12. Propaganda Techniques ::: 3. Repetition. Repeating the same message over and over again, so that the audience will eventually accept it BIBREF13, BIBREF12. Propaganda Techniques ::: 4. Exaggeration or minimization. Either representing something in an excessive manner: making things larger, better, worse, or making something seem less important or smaller than it actually is BIBREF14, e.g., saying that an insult was just a joke. Propaganda Techniques ::: 5. Doubt. Questioning the credibility of someone or something. Propaganda Techniques ::: 6. Appeal to fear/prejudice. Seeking to build support for an idea by instilling anxiety and/or panic in the population towards an alternative, possibly based on preconceived judgments. Propaganda Techniques ::: 7. Flag-waving. Playing on strong national feeling (or with respect to a group, e.g., race, gender, political preference) to justify or promote an action or idea BIBREF15. Propaganda Techniques ::: 8. Causal oversimplification. Assuming one cause when there are multiple causes behind an issue. We include scapegoating as well: the transfer of the blame to one person or group of people without investigating the complexities of an issue. Propaganda Techniques ::: 9. Slogans. A brief and striking phrase that may include labeling and stereotyping. Slogans tend to act as emotional appeals BIBREF16. Propaganda Techniques ::: 10. Appeal to authority. Stating that a claim is true simply because a valid authority/expert on the issue supports it, without any other supporting evidence BIBREF17. We include the special case where the reference is not an authority/expert, although it is referred to as testimonial in the literature BIBREF14. Propaganda Techniques ::: 11. Black-and-white fallacy, dictatorship. Presenting two alternative options as the only possibilities, when in fact more possibilities exist BIBREF13. As an extreme case, telling the audience exactly what actions to take, eliminating any other possible choice (dictatorship). Propaganda Techniques ::: 12. Thought-terminating cliché. Words or phrases that discourage critical thought and meaningful discussion about a given topic. They are typically short and generic sentences that offer seemingly simple answers to complex questions or that distract attention away from other lines of thought BIBREF18. Propaganda Techniques ::: 13. Whataboutism. Discredit an opponent's position by charging them with hypocrisy without directly disproving their argument BIBREF19. Propaganda Techniques ::: 14. Reductio ad Hitlerum. Persuading an audience to disapprove an action or idea by suggesting that the idea is popular with groups hated in contempt by the target audience. It can refer to any person or concept with a negative connotation BIBREF20. Propaganda Techniques ::: 15. Red herring. Introducing irrelevant material to the issue being discussed, so that everyone's attention is diverted away from the points made BIBREF11. Those subjected to a red herring argument are led away from the issue that had been the focus of the discussion and urged to follow an observation or claim that may be associated with the original claim, but is not highly relevant to the issue in dispute BIBREF20. Propaganda Techniques ::: 16. Bandwagon. 
Attempting to persuade the target audience to join in and take the course of action because “everyone else is taking the same action” BIBREF15. Propaganda Techniques ::: 17. Obfuscation, intentional vagueness, confusion. Using deliberately unclear words, to let the audience have its own interpretation BIBREF21, BIBREF11. For instance, when an unclear phrase with multiple possible meanings is used within the argument and, therefore, it does not really support the conclusion. Propaganda Techniques ::: 18. Straw man. When an opponent's proposition is substituted with a similar one which is then refuted in place of the original BIBREF22. Tasks The shared task features two subtasks: Tasks ::: Fragment-Level Classification task (FLC). Given a news article, detect all spans of the text in which a propaganda technique is used. In addition, for each span the propaganda technique applied must be identified. Tasks ::: Sentence-Level Classification task (SLC). A sentence is considered propagandist if it contains at least one propagandist fragment. We then define a binary classification task in which, given a sentence, the correct label, either propaganda or non-propaganda, is to be predicted. Data The input for both tasks consists of news articles in free-text format, collected from 36 propagandist and 12 non-propagandist news outlets and then annotated by professional annotators. More details about the data collection and the annotation, as well as statistics about the corpus can be found in BIBREF10, where an earlier version of the corpus is described, which includes 450 news articles. We further annotated 47 additional articles for the purpose of the shared task using the same protocol and the same annotators. The training, the development, and the test partitions of the corpus used for the shared task consist of 350, 61, and 86 articles and of 16,965, 2,235, and 3,526 sentences, respectively. Figure FIGREF15 shows an annotated example, which contains several propaganda techniques. For example, the fragment babies on line 1 is an instance of both Name_Calling and Labeling. Note that the fragment not looking as though Trump killed his grandma on line 4 is an instance of Exaggeration_or_Minimisation and it overlaps with the fragment killed his grandma, which is an instance of Loaded_Language. Table TABREF23 reports the total number of instances per technique and the percentage with respect to the total number of annotations, for the training and for the development sets. Setup The shared task had two phases: In the development phase, the participants were provided labeled training and development datasets; in the testing phase, testing input was further provided. The participants tried to achieve the best performance on the development set. A live leaderboard kept track of the submissions. The test set was released and the participants had few days to make final predictions. In phase 2, no immediate feedback on the submissions was provided. The winner was determined based on the performance on the test set. Evaluation ::: FLC task. FLC is a composition of two subtasks: the identification of the propagandist text fragments and the identification of the techniques used (18-way classification task). While F$_1$ measure is appropriate for a multi-class classification task, we modified it to account for partial matching between the spans; see BIBREF10 for more details. 
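To make the notion of partial matching concrete, the sketch below computes one simple variant of span-level precision, recall, and F$_1$ in which a predicted span earns fractional credit proportional to its character overlap with a gold span of the same technique. This is an illustrative assumption, not necessarily the exact modification used for the task (see BIBREF10 for the official definition), and the (start, end, technique) span representation is hypothetical.

```python
from typing import List, Tuple

Span = Tuple[int, int, str]  # (start offset, end offset, technique label)

def overlap(a: Span, b: Span) -> int:
    """Characters shared by two spans, counted only when the techniques match."""
    if a[2] != b[2]:
        return 0
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def partial_match_f1(pred: List[Span], gold: List[Span]) -> float:
    """F1 in which spans receive fractional credit for partial overlaps."""
    if not pred or not gold:
        return 0.0
    precision = sum(max(overlap(p, g) for g in gold) / (p[1] - p[0])
                    for p in pred) / len(pred)
    recall = sum(max(overlap(p, g) for p in pred) / (g[1] - g[0])
                 for g in gold) / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```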
We further computed an F$_1$ value for each propaganda technique (not shown below for the sake of saving space, but available on the leaderboard). Evaluation ::: SLC task. SLC is a binary classification task with imbalanced data. Therefore, the official evaluation measure for the task is the standard F$_1$ measure. We further report Precision and Recall. Baselines The baseline system for the SLC task is a very simple logistic regression classifier with default parameters, where we represent the input instances with a single feature: the length of the sentence. The performance of this baseline on the SLC task is shown in Tables TABREF33 and TABREF34. The baseline for the FLC task generates spans and selects one of the 18 techniques randomly. The inefficacy of such a simple random baseline is illustrated in Tables TABREF36 and TABREF41. Participants and Approaches A total of 90 teams registered for the shared task, and 39 of them submitted predictions for a total of 3,065 submissions. For the FLC task, 21 teams made a total of 527 submissions, and for the SLC task, 35 teams made a total of 2,538 submissions. Below, we give an overview of the approaches as described in the participants' papers. Tables TABREF28 and TABREF29 offer a high-level summary. Participants and Approaches ::: Teams Participating in the Fragment-Level Classification Only Team newspeak BIBREF23 achieved the best results on the test set for the FLC task using 20-way word-level classification based on BERT BIBREF24: a word could belong to one of the 18 propaganda techniques, to none of them, or to an auxiliary (token-derived) class. The team fed one sentence at a time in order to reduce the workload. In addition to experimenting with an out-of-the-box BERT, they also tried unsupervised fine-tuning both on the 1M news dataset and on Wikipedia. Their best model was based on the uncased base model of BERT, with 12 Transformer layers BIBREF25, and 110 million parameters. Moreover, oversampling of the least represented classes proved to be crucial for the final performance. Finally, careful analysis has shown that the model pays special attention to adjectives and adverbs. Team Stalin BIBREF26 focused on data augmentation to address the relatively small size of the data for fine-tuning contextual embedding representations based on ELMo BIBREF27, BERT, and Grover BIBREF28. The balancing of the embedding space was carried out by means of synthetic minority class over-sampling. Then, the learned representations were fed into an LSTM. Participants and Approaches ::: Teams Participating in the Sentence-Level Classification Only Team CAUnLP BIBREF29 used two context-aware representations based on BERT. In the first representation, the target sentence is followed by the title of the article. In the second representation, the previous sentence is also added. They performed subsampling in order to deal with class imbalance, and experimented with BERT$_{BASE}$ and BERT$_{LARGE}$ Team LIACC BIBREF30 used hand-crafted features and pre-trained ELMo embeddings. They also observed a boost in performance when balancing the dataset by dropping some negative examples. Team JUSTDeep BIBREF31 used a combination of models and features, including word embeddings based on GloVe BIBREF32 concatenated with vectors representing affection and lexical features. These were combined in an ensemble of supervised models: bi-LSTM, XGBoost, and variations of BERT. Team YMJA BIBREF33 also based their approach on fine-tuned BERT. 
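The sentence-length baseline for SLC is easy to reproduce. The sketch below is an assumed scikit-learn implementation, where the choice of measuring length in characters (rather than tokens) and the variable names are our own, not taken from the task code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def slc_length_baseline(train_sents, train_labels, test_sents, test_labels):
    """Logistic regression with a single feature: the length of the sentence."""
    x_train = np.array([[len(s)] for s in train_sents])  # length in characters (assumed)
    x_test = np.array([[len(s)] for s in test_sents])
    clf = LogisticRegression()  # default parameters, as described above
    clf.fit(x_train, train_labels)
    pred = clf.predict(x_test)
    return f1_score(test_labels, pred, pos_label="propaganda")
```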
Inspired by kaggle competitions on sentiment analysis, they created an ensemble of models via cross-validation. Team jinfen BIBREF34 used a logistic regression model fed with a manifold of representations, including TF.IDF and BERT vectors, as well as vocabularies and readability measures. Team Tha3aroon BIBREF35 implemented an ensemble of three classifiers: two based on BERT and one based on a universal sentence encoder BIBREF36. Team NSIT BIBREF37 explored three of the most popular transfer learning models: various versions of ELMo, BERT, and RoBERTa BIBREF38. Team Mindcoders BIBREF39 combined BERT, Bi-LSTM and Capsule networks BIBREF40 into a single deep neural network and pre-trained the resulting network on corpora used for related tasks, e.g., emotion classification. Finally, team ltuorp BIBREF41 used an attention transformer using BERT trained on Wikipedia and BookCorpus. Participants and Approaches ::: Teams Participating in Both Tasks Team MIC-CIS BIBREF42 participated in both tasks. For the sentence-level classification, they used a voting ensemble including logistic regression, convolutional neural networks, and BERT, in all cases using FastText embeddings BIBREF43 and pre-trained BERT models. Beside these representations, multiple features of readability, sentiment and emotions were considered. For the fragment-level task, they used a multi-task neural sequence tagger, based on LSTM-CRF BIBREF44, in conjunction with linguistic features. Finally, they applied sentence- and fragment-level models jointly. Team CUNLP BIBREF45 considered two approaches for the sentence-level task. The first approach was based on fine-tuning BERT. The second approach complemented the fine-tuned BERT approach by feeding its decision into a logistic regressor, together with features from the Linguistic Inquiry and Word Count (LIWC) lexicon and punctuation-derived features. Similarly to BIBREF42, for the fragment-level problem they used a Bi-LSTM-CRF architecture, combining both character- and word-level embeddings. Team ProperGander BIBREF46 also used BERT, but they paid special attention to the imbalance of the data, as well as to the differences between training and testing. They showed that augmenting the training data by oversampling yielded improvements when testing on data that is temporally far from the training (by increasing recall). In order to deal with the imbalance, they performed cost-sensitive classification, i.e., the errors on the smaller positive class were more costly. For the fragment-level classification, inspired by named entity recognition, they used a model based on BERT using Continuous Random Field stacked on top of an LSTM. Evaluation Results The results on the test set for the SLC task are shown in Table TABREF33, while Table TABREF34 presents the results on the development set at the end of phase 1 (cf. Section SECREF6). The general decrease of the F$_1$ values between the development and the test set could indicate that systems tend to overfit on the development set. Indeed, the winning team ltuorp chose the parameters of their system both on the development set and on a subset of the training set in order to improve the robustness of their system. Tables TABREF36 and TABREF41 report the results on the test and on the development sets for the FLC task. For this task, the results tend to be more stable across the two sets. Indeed, team newspeak managed to almost keep the same difference in performance with respect to team Antiganda. 
Note that team MIC-CIS managed to reach the third position despite never having submitted a run on the development set. Conclusion and Further Work We have described the NLP4IF@EMNLP-IJCNLP 2019 shared task on fine-grained propaganda identification. We received 25 and 12 submissions on the test set for the sentence-level classification and the fragment-level classification tasks, respectively. Overall, the sentence-level task was easier and most submitted systems managed to outperform the baseline. The fragment-level task proved to be much more challenging, with lower absolute scores, but most teams still managed to outperform the baseline. We plan to make the schema and the dataset publicly available to be used beyond NLP4IF. We hope that the corpus will raise interest outside of the community of researchers studying propaganda: the techniques related to fallacies and the ones relying on emotions might provide a novel setting for researchers interested in Argumentation and Sentiment Analysis. As a kind of advertisement, Task 11 at SemEval 2020 is a follow-up of this shared task. It features two complementary tasks: Given a free-text article, identify the propagandist text spans. Given a text span already flagged as propagandist and its context, identify the specific propaganda technique it contains. This setting allows participants to focus their efforts on binary sequence labeling for Task 1 and on multi-class classification for Task 2. Acknowledgments This research is part of the Propaganda Analysis Project, which is framed within the Tanbih project. The Tanbih project aims to limit the effect of “fake news”, propaganda, and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking, which is arguably the best way to address disinformation and “fake news.” The project is developed in collaboration between the Qatar Computing Research Institute (QCRI), HBKU, and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). The corpus for the task was annotated by A Data Pro, a company that performs high-quality manual annotations.
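Since fine-tuned BERT is the common denominator of most sentence-level systems above, a generic sketch of that setup may help readers new to it. It does not correspond to any particular team's configuration; the model name, sequence length, learning rate, and toy data are placeholders, and the code uses the Hugging Face transformers API.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # propaganda vs. non-propaganda

sentences = ["An example sentence.", "Another example sentence."]  # placeholder data
labels = torch.tensor([1, 0])

batch = tokenizer(sentences, padding=True, truncation=True,
                  max_length=128, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)  # the model computes the cross-entropy loss
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```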
What was the baseline for this task?
The baseline system for the SLC task is a very simple logistic regression classifier with default parameters. The baseline for the FLC task generates spans and selects one of the 18 techniques randomly.
Introduction There exists a class of language construction known as pun in natural language texts and utterances, where a certain word or other lexical items are used to exploit two or more separate meanings. It has been shown that understanding of puns is an important research question with various real-world applications, such as human-computer interaction BIBREF0 , BIBREF1 and machine translation BIBREF2 . Recently, many researchers show their interests in studying puns, like detecting pun sentences BIBREF3 , locating puns in the text BIBREF4 , interpreting pun sentences BIBREF5 and generating sentences containing puns BIBREF6 , BIBREF7 , BIBREF8 . A pun is a wordplay in which a certain word suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect. Puns can be generally categorized into two groups, namely heterographic puns (where the pun and its latent target are phonologically similar) and homographic puns (where the two meanings of the pun reflect its two distinct senses) BIBREF9 . Consider the following two examples: The first punning joke exploits the sound similarity between the word “propane" and the latent target “profane", which can be categorized into the group of heterographic puns. Another categorization of English puns is homographic pun, exemplified by the second instance leveraging distinct senses of the word “gut". Pun detection is the task of detecting whether there is a pun residing in the given text. The goal of pun location is to find the exact word appearing in the text that implies more than one meanings. Most previous work addresses such two tasks separately and develop separate systems BIBREF10 , BIBREF5 . Typically, a system for pun detection is built to make a binary prediction on whether a sentence contains a pun or not, where all instances (with or without puns) are taken into account during training. For the task of pun location, a separate system is used to make a single prediction as to which word in the given sentence in the text that trigger more than one semantic interpretations of the text, where the training data involves only sentences that contain a pun. Therefore, if one is interested in solving both problems at the same time, a pipeline approach that performs pun detection followed by pun location can be used. Compared to the pipeline methods, joint learning has been shown effective BIBREF11 , BIBREF12 since it is able to reduce error propagation and allows information exchange between tasks which is potentially beneficial to all the tasks. In this work, we demonstrate that the detection and location of puns can be jointly addressed by a single model. The pun detection and location tasks can be combined as a sequence labeling problem, which allows us to jointly detect and locate a pun in a sentence by assigning each word a tag. Since each context contains a maximum of one pun BIBREF9 , we design a novel tagging scheme to capture this structural constraint. Statistics on the corpora also show that a pun tends to appear in the second half of a context. To capture such a structural property, we also incorporate word position knowledge into our structured prediction model. Experiments on the benchmark datasets show that detection and location tasks can reinforce each other, leading to new state-of-the-art performance on these two tasks. 
To the best of our knowledge, this is the first work that performs joint detection and location of English puns by using a sequence labeling approach. Problem Definition We first design a simple tagging scheme consisting of two tags { INLINEFORM0 }: INLINEFORM0 tag means the current word is not a pun. INLINEFORM0 tag means the current word is a pun. If the tag sequence of a sentence contains a INLINEFORM0 tag, then the text contains a pun and the word corresponding to INLINEFORM1 is the pun. The contexts have the characteristic that each context contains a maximum of one pun BIBREF9 . In other words, there exists only one pun if the given sentence is detected as the one containing a pun. Otherwise, there is no pun residing in the text. To capture this interesting property, we propose a new tagging scheme consisting of three tags, namely { INLINEFORM0 }. INLINEFORM0 tag indicates that the current word appears before the pun in the given context. INLINEFORM0 tag highlights the current word is a pun. INLINEFORM0 tag indicates that the current word appears after the pun. We empirically show that the INLINEFORM0 scheme can guarantee the context property that there exists a maximum of one pun residing in the text. Given a context from the training set, we will be able to generate its corresponding gold tag sequence using a deterministic procedure. Under the two schemes, if a sentence does not contain any puns, all words will be tagged with INLINEFORM0 or INLINEFORM1 , respectively. Exemplified by the second sentence “Some diets cause a gut reaction," the pun is given as “gut." Thus, under the INLINEFORM2 scheme, it should be tagged with INLINEFORM3 , while the words before it are assigned with the tag INLINEFORM4 and words after it are with INLINEFORM5 , as illustrated in Figure FIGREF8 . Likewise, the INLINEFORM6 scheme tags the word “gut" with INLINEFORM7 , while other words are tagged with INLINEFORM8 . Therefore, we can combine the pun detection and location tasks into one problem which can be solved by the sequence labeling approach. Model Neural models have shown their effectiveness on sequence labeling tasks BIBREF13 , BIBREF14 , BIBREF15 . In this work, we adopt the bidirectional Long Short Term Memory (BiLSTM) BIBREF16 networks on top of the Conditional Random Fields BIBREF17 (CRF) architecture to make labeling decisions, which is one of the classical models for sequence labeling. Our model architecture is illustrated in Figure FIGREF8 with a running example. Given a context/sentence INLINEFORM0 where INLINEFORM1 is the length of the context, we generate the corresponding tag sequence INLINEFORM2 based on our designed tagging schemes and the original annotations for pun detection and location provided by the corpora. Our model is then trained on pairs of INLINEFORM3 . Input. The contexts in the pun corpus hold the property that each pun contains exactly one content word, which can be either a noun, a verb, an adjective, or an adverb. To capture this characteristic, we consider lexical features at the character level. Similar to the work of BIBREF15 , the character embeddings are trained by the character-level LSTM networks on the unannotated input sequences. Nonlinear transformations are then applied to the character embeddings by highway networks BIBREF18 , which map the character-level features into different semantic spaces. We also observe that a pun tends to appear at the end of a sentence. 
Specifically, based on the statistics, we found that sentences with a pun that locate at the second half of the text account for around 88% and 92% in homographic and heterographic datasets, respectively. We thus introduce a binary feature that indicates if a word is located at the first or the second half of an input sentence to capture such positional information. A binary indicator can be mapped to a vector representation using a randomly initialized embedding table BIBREF19 , BIBREF20 . In this work, we directly adopt the value of the binary indicator as part of the input. The concatenation of the transformed character embeddings, the pre-trained word embeddings BIBREF21 , and the position indicators are taken as input of our model. Tagging. The input is then fed into a BiLSTM network, which will be able to capture contextual information. For a training instance INLINEFORM0 , we suppose the output by the word-level BiLSTM is INLINEFORM1 . The CRF layer is adopted to capture label dependencies and make final tagging decisions at each position, which has been included in many state-of-the-art sequence labeling models BIBREF14 , BIBREF15 . The conditional probability is defined as: where INLINEFORM0 is a set of all possible label sequences consisting of tags from INLINEFORM1 (or INLINEFORM2 ), INLINEFORM3 and INLINEFORM4 are weight and bias parameters corresponding to the label pair INLINEFORM5 . During training, we minimize the negative log-likelihood summed over all training instances: where INLINEFORM0 refers to the INLINEFORM1 -th instance in the training set. During testing, we aim to find the optimal label sequence for a new input INLINEFORM2 : This search process can be done efficiently using the Viterbi algorithm. Datasets and Settings We evaluate our model on two benchmark datasets BIBREF9 . The homographic dataset contains 2,250 contexts, 1,607 of which contain a pun. The heterographic dataset consists of 1,780 contexts with 1,271 containing a pun. We notice there is no standard splitting information provided for both datasets. Thus we apply 10-fold cross validation. To make direct comparisons with prior studies, following BIBREF4 , we accumulated the predictions for all ten folds and calculate the scores in the end. For each fold, we randomly select 10% of the instances from the training set for development. Word embeddings are initialized with the 100-dimensional Glove BIBREF21 . The dimension of character embeddings is 30 and they are randomly initialized, which can be fine tuned during training. The pre-trained word embeddings are not updated during training. The dimensions of hidden vectors for both char-level and word-level LSTM units are set to 300. We adopt stochastic gradient descent (SGD) BIBREF26 with a learning rate of 0.015. For the pun detection task, if the predicted tag sequence contains at least one INLINEFORM0 tag, we regard the output (i.e., the prediction of our pun detection model) for this task as true, otherwise false. For the pun location task, a predicted pun is regarded as correct if and only if it is labeled as the gold pun in the dataset. As to pun location, to make fair comparisons with prior studies, we only consider the instances that are labeled as the ones containing a pun. We report precision, recall and INLINEFORM1 score in Table TABREF11 . A list of prior works that did not employ joint learning are also shown in the first block of Table TABREF11 . 
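To make the tagging formulation and the position feature concrete, the sketch below builds the gold tag sequence and the position indicators for one training instance. Because the inline placeholders hide the original tag names, the labels used here (N for non-pun words in the two-tag scheme, and B/P/A for before/pun/after in the three-tag scheme) are assumed stand-ins.

```python
from typing import List, Optional

def tag_sequence(tokens: List[str], pun_index: Optional[int],
                 three_tag: bool = True) -> List[str]:
    """Gold tags for one sentence; pun_index is None if the sentence has no pun.

    Two-tag scheme: every word is N except the pun, which is P.
    Three-tag scheme: words before the pun are B, the pun is P, and words
    after it are A, which encodes the at-most-one-pun constraint.
    """
    tags = []
    for i in range(len(tokens)):
        if pun_index is None:
            # the tag for pun-free sentences under the three-tag scheme is
            # hidden by a placeholder above; B is an assumption
            tags.append("B" if three_tag else "N")
        elif i == pun_index:
            tags.append("P")
        elif i < pun_index:
            tags.append("B" if three_tag else "N")
        else:
            tags.append("A" if three_tag else "N")
    return tags

def position_indicators(tokens: List[str]) -> List[int]:
    """1 if a word sits in the second half of the sentence, else 0."""
    half = len(tokens) / 2
    return [int(i >= half) for i in range(len(tokens))]
```

For the running example, tag_sequence("Some diets cause a gut reaction".split(), 4) returns ['B', 'B', 'B', 'B', 'P', 'A'].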
Results We also implemented a baseline model based on conditional random fields (CRF), where features like POS tags produced by the Stanford POS tagger BIBREF27 , n-grams, label transitions, word suffixes and relative position to the end of the text are considered. We can see that our model with the INLINEFORM0 tagging scheme yields new state-of-the-art INLINEFORM1 scores on pun detection and competitive results on pun location, compared to baselines that do not adopt joint learning in the first block. For location on heterographic puns, our model's performance is slightly lower than the system of BIBREF25 , which is a rule-based locator. Compared to CRF, we can see that our model, either with the INLINEFORM2 or the INLINEFORM3 scheme, yields significantly higher recall on both detection and location tasks, while the precisions are relatively close. This demonstrates the effectiveness of BiLSTM, which learns the contextual features of given texts – such information appears to be helpful in recalling more puns. Compared to the INLINEFORM0 scheme, the INLINEFORM1 tagging scheme is able to yield better performance on these two tasks. After studying outputs from these two approaches, we found that one leading source of error for the INLINEFORM2 approach is that there exist more than one words in a single instance that are assigned with the INLINEFORM3 tag. However, according to the description of pun in BIBREF9 , each context contains a maximum of one pun. Thus, such a useful structural constraint is not well captured by the simple approach based on the INLINEFORM4 tagging scheme. On the other hand, by applying the INLINEFORM5 tagging scheme, such a constraint is properly captured in the model. As a result, the results for such a approach are significantly better than the approach based on the INLINEFORM6 tagging scheme, as we can observe from the table. Under the same experimental setup, we also attempted to exclude word position features. Results are given by INLINEFORM7 - INLINEFORM8 . It is expected that the performance of pun location drops, since such position features are able to capture the interesting property that a pun tends to appear in the second half of a sentence. While such knowledge is helpful for the location task, interestingly, a model without position knowledge yields improved performance on the pun detection task. One possible reason is that detecting whether a sentence contains a pun is not concerned with such word position information. Additionally, we conduct experiments over sentences containing a pun only, namely 1,607 and 1,271 instances from homographic and heterographic pun corpora separately. It can be regarded as a “pipeline” method where the classifier for pun detection is regarded as perfect. Following the prior work of BIBREF4 , we apply 10-fold cross validation. Since we are given that all input sentences contain a pun, we only report accumulated results on pun location, denoted as Pipeline in Table TABREF11 . Compared with our approaches, the performance of such an approach drops significantly. On the other hand, such a fact demonstrates that the two task, detection and location of puns, can reinforce each other. These figures demonstrate the effectiveness of our sequence labeling method to detect and locate English puns in a joint manner. Error Analysis We studied the outputs from our system and make some error analysis. We found the errors can be broadly categorized into several types, and we elaborate them here. 
1) Low word coverage: since the corpora are relatively small, there exist many unseen words in the test set. Learning the representations of such unseen words is challenging, which affects the model's performance. Such errors contribute around 40% of the total errors made by our system. 2) Detection errors: we found many errors are due to the model's inability to make correct pun detection. Such inability harms both pun detection and pun location. Although our approach based on the INLINEFORM0 tagging scheme yields relatively higher scores on the detection task, we still found that 40% of the incorrectly predicted instances fall into this group. 3) Short sentences: we found it was challenging for our model to make correct predictions when the given text is short. Consider the example “Superglue! Tom rejoined," here the word rejoined is the corresponding pun. However, it would be challenging to figure out the pun with such limited contextual information. Related Work Most existing systems address pun detection and location separately. BIBREF22 applied word sense knowledge to conduct pun detection. BIBREF24 trained a bidirectional RNN classifier for detecting homographic puns. Next, a knowledge-based approach is adopted to find the exact pun. Such a system is not applicable to heterographic puns. BIBREF28 applied Google n-gram and word2vec to make decisions. The phonetic distance via the CMU Pronouncing Dictionary is computed to detect heterographic puns. BIBREF10 used the hidden Markov model and a cyclic dependency network with rich features to detect and locate puns. BIBREF23 used a supervised approach to pun detection and a weakly supervised approach to pun location based on the position within the context and part of speech features. BIBREF25 proposed a rule-based system for pun location that scores candidate words according to eleven simple heuristics. Two systems are developed to conduct detection and location separately in the system known as UWAV BIBREF3 . The pun detector combines predictions from three classifiers. The pun locator considers word2vec similarity between every pair of words in the context and position to pinpoint the pun. The state-of-the-art system for homographic pun location is a neural method BIBREF4 , where the word senses are incorporated into a bidirectional LSTM model. This method only supports the pun location task on homographic puns. Another line of research efforts related to this work is sequence labeling, such as POS tagging, chunking, word segmentation and NER. The neural methods have shown their effectiveness in this task, such as BiLSTM-CNN BIBREF13 , GRNN BIBREF29 , LSTM-CRF BIBREF30 , LSTM-CNN-CRF BIBREF14 , LM-LSTM-CRF BIBREF15 . In this work, we combine pun detection and location tasks as a single sequence labeling problem. Inspired by the work of BIBREF15 , we also adopt a LSTM-CRF with character embeddings to make labeling decisions. Conclusion In this paper, we propose to perform pun detection and location tasks in a joint manner from a sequence labeling perspective. We observe that each text in our corpora contains a maximum of one pun. Hence, we design a novel tagging scheme to incorporate such a constraint. Such a scheme guarantees that there is a maximum of one word that will be tagged as a pun during the testing phase. We also found the interesting structural property such as the fact that most puns tend to appear at the second half of the sentences can be helpful for such a task, but was not explored in previous works. 
Furthermore, unlike many previous approaches, our approach, though simple, is generally applicable to both heterographic and homographic puns. Empirical results on the benchmark datasets demonstrate the effectiveness of the proposed approach, showing that the two tasks of pun detection and location can be addressed by a single model from a sequence labeling perspective. Future research includes investigating how to make use of richer semantic and linguistic information for the detection and location of puns. Research on puns in other languages such as Chinese is still under-explored, which could also be an interesting direction for our future studies. Acknowledgments We would like to thank the three anonymous reviewers for their thoughtful and constructive comments. This work is supported by Singapore Ministry of Education Academic Research Fund (AcRF) Tier 2 Project MOE2017-T2-1-156, and is partially supported by SUTD project PIE-SGP-AI-2018-01.
What baselines do they compare with?
They compare with the following models: those of Pedersen (2017), Pramanick and Das (2017), Mikhalkova and Karyakin (2017), Vadehra (2017), Indurthi and Oota (2017), Vechtomova (2017), and Cai et al. (2018), as well as a CRF baseline.
Introduction Offensive content has become pervasive in social media and a reason of concern for government organizations, online communities, and social media platforms. One of the most common strategies to tackle the problem is to train systems capable of recognizing offensive content, which then can be deleted or set aside for human moderation. In the last few years, there have been several studies published on the application of computational methods to deal with this problem. Most prior work focuses on a different aspect of offensive language such as abusive language BIBREF0 , BIBREF1 , (cyber-)aggression BIBREF2 , (cyber-)bullying BIBREF3 , BIBREF4 , toxic comments INLINEFORM0 , hate speech BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , and offensive language BIBREF11 . Prior work has focused on these aspects of offensive language in Twitter BIBREF3 , BIBREF7 , BIBREF8 , BIBREF11 , Wikipedia comments, and Facebook posts BIBREF2 . Recently, Waseem et. al. ( BIBREF12 ) acknowledged the similarities among prior work and discussed the need for a typology that differentiates between whether the (abusive) language is directed towards a specific individual or entity or towards a generalized group and whether the abusive content is explicit or implicit. Wiegand et al. ( BIBREF11 ) followed this trend as well on German tweets. In their evaluation, they have a task to detect offensive vs not offensive tweets and a second task for distinguishing between the offensive tweets as profanity, insult, or abuse. However, no prior work has explored the target of the offensive language, which is important in many scenarios, e.g., when studying hate speech with respect to a specific target. Therefore, we expand on these ideas by proposing a a hierarchical three-level annotation model that encompasses: Using this annotation model, we create a new large publicly available dataset of English tweets. The key contributions of this paper are as follows: Related Work Different abusive and offense language identification sub-tasks have been explored in the past few years including aggression identification, bullying detection, hate speech, toxic comments, and offensive language. Aggression identification: The TRAC shared task on Aggression Identification BIBREF2 provided participants with a dataset containing 15,000 annotated Facebook posts and comments in English and Hindi for training and validation. For testing, two different sets, one from Facebook and one from Twitter were provided. Systems were trained to discriminate between three classes: non-aggressive, covertly aggressive, and overtly aggressive. Bullying detection: Several studies have been published on bullying detection. One of them is the one by xu2012learning which apply sentiment analysis to detect bullying in tweets. xu2012learning use topic models to to identify relevant topics in bullying. Another related study is the one by dadvar2013improving which use user-related features such as the frequency of profanity in previous messages to improve bullying detection. Hate speech identification: It is perhaps the most widespread abusive language detection sub-task. There have been several studies published on this sub-task such as kwok2013locate and djuric2015hate who build a binary classifier to distinguish between `clean' comments and comments containing hate speech and profanity. More recently, Davidson et al. 
davidson2017automated presented the hate speech detection dataset containing over 24,000 English tweets labeled as non offensive, hate speech, and profanity. Offensive language: The GermEval BIBREF11 shared task focused on Offensive language identification in German tweets. A dataset of over 8,500 annotated tweets was provided for a course-grained binary classification task in which systems were trained to discriminate between offensive and non-offensive tweets and a second task where the organizers broke down the offensive class into three classes: profanity, insult, and abuse. Toxic comments: The Toxic Comment Classification Challenge was an open competition at Kaggle which provided participants with comments from Wikipedia labeled in six classes: toxic, severe toxic, obscene, threat, insult, identity hate. While each of these sub-tasks tackle a particular type of abuse or offense, they share similar properties and the hierarchical annotation model proposed in this paper aims to capture this. Considering that, for example, an insult targeted at an individual is commonly known as cyberbulling and that insults targeted at a group are known as hate speech, we pose that OLID's hierarchical annotation model makes it a useful resource for various offensive language identification sub-tasks. Hierarchically Modelling Offensive Content In the OLID dataset, we use a hierarchical annotation model split into three levels to distinguish between whether language is offensive or not (A), and type (B) and target (C) of the offensive language. Each level is described in more detail in the following subsections and examples are shown in Table TABREF10 . Level A: Offensive language Detection Level A discriminates between offensive (OFF) and non-offensive (NOT) tweets. Not Offensive (NOT): Posts that do not contain offense or profanity; Offensive (OFF): We label a post as offensive if it contains any form of non-acceptable language (profanity) or a targeted offense, which can be veiled or direct. This category includes insults, threats, and posts containing profane language or swear words. Level B: Categorization of Offensive Language Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats. Targeted Insult (TIN): Posts which contain an insult/threat to an individual, group, or others (see next layer); Untargeted (UNT): Posts containing non-targeted profanity and swearing. Posts with general profanity are not targeted, but they contain non-acceptable language. Level C: Offensive Language Target Identification Level C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH). Individual (IND): Posts targeting an individual. It can be a a famous person, a named individual or an unnamed participant in the conversation. Insults and threats targeted at individuals are often defined as cyberbulling. Group (GRP): The target of these offensive posts is a group of people considered as a unity due to the same ethnicity, gender or sexual orientation, political affiliation, religious belief, or other common characteristic. Many of the insults and threats targeted at a group correspond to what is commonly understood as hate speech. Other (OTH): The target of these offensive posts does not belong to any of the previous two categories (e.g. an organization, a situation, an event, or an issue). Data Collection The data included in OLID has been collected from Twitter. 
We retrieved the data using the Twitter API by searching for keywords and constructions that are often included in offensive messages, such as `she is' or `to:BreitBartNews'. We carried out a first round of trial annotation of 300 instances with six experts. The goal of the trial annotation was to 1) evaluate the proposed tagset; 2) evaluate the data retrieval method; and 3) create a gold standard with instances that could be used as test questions in the training and test setting annotation which was carried out using crowdsourcing. The breakdown of keywords and their offensive content in the trial data of 300 tweets is shown in Table TABREF14 . We included a left (@NewYorker) and far-right (@BreitBartNews) news accounts because there tends to be political offense in the comments. One of the best offensive keywords was tweets that were flagged as not being safe by the Twitter `safe' filter (the `-' indicates `not safe'). The vast majority of content on Twitter is not offensive so we tried different strategies to keep a reasonable number of tweets in the offensive class amounting to around 30% of the dataset including excluding some keywords that were not high in offensive content such as `they are` and `to:NewYorker`. Although `he is' is lower in offensive content we kept it as a keyword to avoid gender bias. In addition to the keywords in the trial set, we searched for more political keywords which tend to be higher in offensive content, and sampled our dataset such that 50% of the the tweets come from political keywords and 50% come from non-political keywords. In addition to the keywords `gun control', and `to:BreitbartNews', political keywords used to collect these tweets are `MAGA', `antifa', `conservative' and `liberal'. We computed Fliess' INLINEFORM0 on the trial set for the five annotators on 21 of the tweets. INLINEFORM1 is .83 for Layer A (OFF vs NOT) indicating high agreement. As to normalization and anonymization, no user metadata or Twitter IDs have been stored, and URLs and Twitter mentions have been substituted to placeholders. We follow prior work in related areas (burnap2015cyber,davidson2017automated) and annotate our data using crowdsourcing using the platform Figure Eight. We ensure data quality by: 1) we only received annotations from individuals who were experienced in the platform; and 2) we used test questions to discard annotations of individuals who did not reach a certain threshold. Each instance in the dataset was annotated by multiple annotators and inter-annotator agreement has been calculated. We first acquired two annotations for each instance. In case of 100% agreement, we considered these as acceptable annotations, and in case of disagreement, we requested more annotations until the agreement was above 66%. After the crowdsourcing annotation, we used expert adjudication to guarantee the quality of the annotation. The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 . Experiments and Evaluation We assess our dataset using traditional and deep learning methods. Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 . We also train a bidirectional Long Short-Term-Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead. 
It consists of (1) an input embedding layer, (2) a bidirectional LSTM layer, (3) an average pooling layer of input features. The concatenation of the LSTM's and average pool layer is passed through a dense layer and the output is passed through a softmax function. We set two input channels for the input embedding layers: pre-trained FastText embeddings BIBREF14 , as well as updatable embeddings learned by the model during training. Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM. Our models are trained on the training data, and evaluated by predicting the labels for the held-out test set. The distribution is described in Table TABREF15 . We evaluate and compare the models using the macro-averaged F1-score as the label distribution is highly imbalanced. Per-class Precision (P), Recall (R), and F1-score (F1), also with other averaged metrics are also reported. The models are compared against baselines of predicting all labels as the majority or minority classes. Offensive Language Detection The performance on discriminating between offensive (OFF) and non-offensive (NOT) posts is reported in Table TABREF18 . We can see that all systems perform significantly better than chance, with the neural models being substantially better than the SVM. The CNN outperforms the RNN model, achieving a macro-F1 score of 0.80. Categorization of Offensive Language In this experiment, the two systems were trained to discriminate between insults and threats (TIN) and untargeted (UNT) offenses, which generally refer to profanity. The results are shown in Table TABREF19 . The CNN system achieved higher performance in this experiment compared to the BiLSTM, with a macro-F1 score of 0.69. All systems performed better at identifying target and threats (TIN) than untargeted offenses (UNT). Offensive Language Target Identification The results of the offensive target identification experiment are reported in Table TABREF20 . Here the systems were trained to distinguish between three targets: a group (GRP), an individual (IND), or others (OTH). All three models achieved similar results far surpassing the random baselines, with a slight performance edge for the neural models. The performance of all systems for the OTH class is 0. This poor performances can be explained by two main factors. First, unlike the two other classes, OTH is a heterogeneous collection of targets. It includes offensive tweets targeted at organizations, situations, events, etc. making it more challenging for systems to learn discriminative properties of this class. Second, this class contains fewer training instances than the other two. There are only 395 instances in OTH, and 1,075 in GRP, and 2,407 in IND. Conclusion and Future Work This paper presents OLID, a new dataset with annotation of type and target of offensive language. OLID is the official dataset of the shared task SemEval 2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval) BIBREF16 . In OffensEval, each annotation level in OLID is an independent sub-task. The dataset contains 14,100 tweets and is released freely to the research community. To the best of our knowledge, this is the first dataset to contain annotation of type and target of offenses in social media and it opens several new avenues for research in this area. 
We present baseline experiments using SVMs and neural networks to identify the offensive tweets, discriminate between insults, threats, and profanity, and finally to identify the target of the offensive messages. The results show that this is a challenging task. A CNN-based sentence classifier achieved the best results in all three sub-tasks. In future work, we would like to make a cross-corpus comparison of OLID and datasets annotated for similar tasks such as aggression identification BIBREF2 and hate speech detection BIBREF8 . This comparison is, however, far from trivial as the annotation of OLID is different. Acknowledgments The research presented in this paper was partially supported by an ERAS fellowship awarded by the University of Wolverhampton.
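For reference, the simplest of the baselines above (a linear SVM over word unigrams, evaluated with the macro-averaged F1-score) can be sketched as follows; the scikit-learn components and names are assumptions rather than the authors' code, and the same function applies unchanged to levels B and C by swapping the label sets.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import classification_report, f1_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def unigram_svm(train_tweets, train_labels, test_tweets, test_labels):
    """Level A baseline: offensive (OFF) vs. not offensive (NOT)."""
    model = make_pipeline(CountVectorizer(ngram_range=(1, 1)), LinearSVC())
    model.fit(train_tweets, train_labels)
    pred = model.predict(test_tweets)
    print(classification_report(test_labels, pred))  # per-class P, R, F1
    return f1_score(test_labels, pred, average="macro")
```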
In what language are the tweets?
English
Introduction Grammar induction is the task of inducing hierarchical syntactic structure from data. Statistical approaches to grammar induction require specifying a probabilistic grammar (e.g. formalism, number and shape of rules), and fitting its parameters through optimization. Early work found that it was difficult to induce probabilistic context-free grammars (PCFG) from natural language data through direct methods, such as optimizing the log likelihood with the EM algorithm BIBREF0 , BIBREF1 . While the reasons for the failure are manifold and not completely understood, two major potential causes are the ill-behaved optimization landscape and the overly strict independence assumptions of PCFGs. More successful approaches to grammar induction have thus resorted to carefully-crafted auxiliary objectives BIBREF2 , priors or non-parametric models BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , and manually-engineered features BIBREF7 , BIBREF8 to encourage the desired structures to emerge. We revisit these aforementioned issues in light of advances in model parameterization and inference. First, contrary to common wisdom, we find that parameterizing a PCFG's rule probabilities with neural networks over distributed representations makes it possible to induce linguistically meaningful grammars by simply optimizing log likelihood. While the optimization problem remains non-convex, recent work suggests that there are optimization benefits afforded by over-parameterized models BIBREF9 , BIBREF10 , BIBREF11 , and we indeed find that this neural PCFG is significantly easier to optimize than the traditional PCFG. Second, this factored parameterization makes it straightforward to incorporate side information into rule probabilities through a sentence-level continuous latent vector, which effectively allows different contexts in a derivation to coordinate. In this compound PCFG—continuous mixture of PCFGs—the context-free assumptions hold conditioned on the latent vector but not unconditionally, thereby obtaining longer-range dependencies within a tree-based generative process. To utilize this approach, we need to efficiently optimize the log marginal likelihood of observed sentences. While compound PCFGs break efficient inference, if the latent vector is known the distribution over trees reduces to a standard PCFG. This property allows us to perform grammar induction using a collapsed approach where the latent trees are marginalized out exactly with dynamic programming. To handle the latent vector, we employ standard amortized inference using reparameterized samples from a variational posterior approximated from an inference network BIBREF12 , BIBREF13 . On standard benchmarks for English and Chinese, the proposed approach is found to perform favorably against recent neural network-based approaches to grammar induction BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . Probabilistic Context-Free Grammars We consider context-free grammars (CFG) consisting of a 5-tuple INLINEFORM0 where INLINEFORM1 is the distinguished start symbol, INLINEFORM2 is a finite set of nonterminals, INLINEFORM3 is a finite set of preterminals, INLINEFORM6 is a finite set of terminal symbols, and INLINEFORM7 is a finite set of rules of the form, INLINEFORM0 A probabilistic context-free grammar (PCFG) consists of a grammar INLINEFORM0 and rule probabilities INLINEFORM1 such that INLINEFORM2 is the probability of the rule INLINEFORM3 . 
Letting INLINEFORM4 be the set of all parse trees of INLINEFORM5 , a PCFG defines a probability distribution over INLINEFORM6 via INLINEFORM7 where INLINEFORM8 is the set of rules used in the derivation of INLINEFORM9 . It also defines a distribution over string of terminals INLINEFORM10 via INLINEFORM0 where INLINEFORM0 , i.e. the set of trees INLINEFORM1 such that INLINEFORM2 's leaves are INLINEFORM3 . We will slightly abuse notation and use INLINEFORM0 to denote the posterior distribution over the unobserved latent trees given the observed sentence INLINEFORM0 , where INLINEFORM1 is the indicator function. Compound PCFGs A compound probability distribution BIBREF19 is a distribution whose parameters are themselves random variables. These distributions generalize mixture models to the continuous case, for example in factor analysis which assumes the following generative process, INLINEFORM0 Compound distributions provide the ability to model rich generative processes, but marginalizing over the latent parameter can be computationally intractable unless conjugacy can be exploited. In this work, we study compound probabilistic context-free grammars whose distribution over trees arises from the following generative process: we first obtain rule probabilities via INLINEFORM0 where INLINEFORM0 is a prior with parameters INLINEFORM1 (spherical Gaussian in this paper), and INLINEFORM2 is a neural network that concatenates the input symbol embeddings with INLINEFORM3 and outputs the sentence-level rule probabilities INLINEFORM4 , INLINEFORM0 where INLINEFORM0 denotes vector concatenation. Then a tree/sentence is sampled from a PCFG with rule probabilities given by INLINEFORM1 , INLINEFORM0 This can be viewed as a continuous mixture of PCFGs, or alternatively, a Bayesian PCFG with a prior on sentence-level rule probabilities parameterized by INLINEFORM0 . Importantly, under this generative model the context-free assumptions hold conditioned on INLINEFORM3 , but they do not hold unconditionally. This is shown in Figure FIGREF3 (right) where there is a dependence path through INLINEFORM4 if it is not conditioned upon. Compound PCFGs give rise to a marginal distribution over parse trees INLINEFORM5 via INLINEFORM0 where INLINEFORM0 . The subscript in INLINEFORM1 denotes the fact that the rule probabilities depend on INLINEFORM2 . Compound PCFGs are clearly more expressive than PCFGs as each sentence has its own set of rule probabilities. However, it still assumes a tree-based generative process, making it possible to learn latent tree structures. Our motivation for the compound PCFG is based on the observation that for grammar induction, context-free assumptions are generally made not because they represent an adequate model of natural language, but because they allow for tractable training. We can in principle model richer dependencies through vertical/horizontal Markovization BIBREF21 , BIBREF22 and lexicalization BIBREF23 . However such dependencies complicate training due to the rapid increase in the number of rules. Under this view, we can interpret the compound PCFG as a restricted version of some lexicalized, higher-order PCFG where a child can depend on structural and lexical context through a shared latent vector. We hypothesize that this dependence among siblings is especially useful in grammar induction from words, where (for example) if we know that watched is used as a verb then the noun phrase is likely to be a movie. 
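A compact sketch of the compound PCFG's generative story for rule probabilities described above: draw a sentence-level latent vector from a spherical Gaussian, concatenate it with symbol embeddings, and map the result to normalized per-parent rule distributions. The single linear scoring layers stand in for the paper's deeper networks, the start-rule scores here depend only on the latent vector for brevity, and all sizes and symbol inventories are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented sizes: 2 nonterminals, 2 preterminals, 3 terminal words, z of dim 4.
NT, PT, V, DZ, DE = 2, 2, 3, 4, 8

emb_nt = rng.normal(size=(NT, DE))                       # nonterminal embeddings
emb_pt = rng.normal(size=(PT, DE))                       # preterminal embeddings
W_start = rng.normal(size=(DZ, NT))                      # scores for S -> A
W_binary = rng.normal(size=(DE + DZ, (NT + PT) ** 2))    # scores for A -> B C
W_lex = rng.normal(size=(DE + DZ, V))                    # scores for T -> w

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sentence_rule_probs():
    """One sample of sentence-level rule probabilities given a fresh z."""
    z = rng.normal(size=DZ)                               # z ~ N(0, I)
    pi_start = softmax(W_start.T @ z)                     # distribution over S -> A
    pi_binary = np.stack([softmax(np.concatenate([emb_nt[a], z]) @ W_binary)
                          for a in range(NT)])            # one distribution per parent A
    pi_lex = np.stack([softmax(np.concatenate([emb_pt[t], z]) @ W_lex)
                       for t in range(PT)])               # one distribution per preterminal T
    return pi_start, pi_binary, pi_lex

pi_start, pi_binary, pi_lex = sentence_rule_probs()
print(pi_start.sum(), pi_binary.sum(axis=1), pi_lex.sum(axis=1))  # each row sums to 1
```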
In contrast to the usual Bayesian treatment of PCFGs which places priors on global rule probabilities BIBREF3 , BIBREF4 , BIBREF6 , the compound PCFG assumes a prior on local, sentence-level rule probabilities. It is therefore closely related to the Bayesian grammars studied by BIBREF25 and BIBREF26 , who also sample local rule probabilities from a logistic normal prior for training dependency models with valence (DMV) BIBREF27 . Experimental Setup Results and Discussion Table TABREF23 shows the unlabeled INLINEFORM0 scores for our models and various baselines. All models soundly outperform right branching baselines, and we find that the neural PCFG/compound PCFG are strong models for grammar induction. In particular the compound PCFG outperforms other models by an appreciable margin on both English and Chinese. We again note that we were unable to induce meaningful grammars through a traditional PCFG with the scalar parameterization despite a thorough hyperparameter search. See lab:full for the full results (including corpus-level INLINEFORM1 ) broken down by sentence length. Table TABREF27 analyzes the learned tree structures. We compare similarity as measured by INLINEFORM0 against gold, left, right, and “self" trees (top), where self INLINEFORM1 score is calculated by averaging over all 6 pairs obtained from 4 different runs. We find that PRPN is particularly consistent across multiple runs. We also observe that different models are better at identifying different constituent labels, as measured by label recall (Table TABREF27 , bottom). While left as future work, this naturally suggests an ensemble approach wherein the empirical probabilities of constituents (obtained by averaging the predicted binary constituent labels from the different models) are used either to supervise another model or directly as potentials in a CRF constituency parser. Finally, all models seemed to have some difficulty in identifying SBAR/VP constituents which typically span more words than NP constituents. Related Work Grammar induction has a long and rich history in natural language processing. Early work on grammar induction with pure unsupervised learning was mostly negative BIBREF0 , BIBREF1 , BIBREF74 , though BIBREF75 reported some success on partially bracketed data. BIBREF76 and BIBREF2 were some of the first successful statistical approaches to grammar induction. In particular, the constituent-context model (CCM) of BIBREF2 , which explicitly models both constituents and distituents, was the basis for much subsequent work BIBREF27 , BIBREF7 , BIBREF8 . Other works have explored imposing inductive biases through Bayesian priors BIBREF4 , BIBREF5 , BIBREF6 , modified objectives BIBREF42 , and additional constraints on recursion depth BIBREF77 , BIBREF48 . While the framework of specifying the structure of a grammar and learning the parameters is common, other methods exist. BIBREF43 consider a nonparametric-style approach to unsupervised parsing by using random subsets of training subtrees to parse new sentences. BIBREF46 utilize an incremental algorithm to unsupervised parsing which makes local decisions to create constituents based on a complex set of heuristics. BIBREF47 induce parse trees through cascaded applications of finite state models. More recently, neural network-based approaches to grammar induction have shown promising results on inducing parse trees directly from words. 
BIBREF14 , BIBREF15 learn tree structures through soft gating layers within neural language models, while BIBREF16 combine recursive autoencoders with the inside-outside algorithm. BIBREF17 train unsupervised recurrent neural network grammars with a structured inference network to induce latent trees, and BIBREF78 utilize image captions to identify and ground constituents. Our work is also related to latent variable PCFGs BIBREF79 , BIBREF80 , BIBREF81 , which extend PCFGs to the latent variable setting by splitting nonterminal symbols into latent subsymbols. In particular, latent vector grammars BIBREF82 and compositional vector grammars BIBREF83 also employ continuous vectors within their grammars. However these approaches have been employed for learning supervised parsers on annotated treebanks, in contrast to the unsupervised setting of the current work. Conclusion This work explores grammar induction with compound PCFGs, which modulate rule probabilities with per-sentence continuous latent vectors. The latent vector induces marginal dependencies beyond the traditional first-order context-free assumptions within a tree-based generative process, leading to improved performance. The collapsed amortized variational inference approach is general and can be used for generative models which admit tractable inference through partial conditioning. Learning deep generative models which exhibit such conditional Markov properties is an interesting direction for future work. Acknowledgments We thank Phil Blunsom for initial discussions which seeded many of the core ideas in the present work. We also thank Yonatan Belinkov and Shay Cohen for helpful feedback, and Andrew Drozdov for providing the parsed dataset from their DIORA model. YK is supported by a Google Fellowship. AMR acknowledges the support of NSF 1704834, 1845664, AWS, and Oracle. Model Parameterization We associate an input embedding INLINEFORM0 for each symbol INLINEFORM1 on the left side of a rule (i.e. INLINEFORM2 ) and run a neural network over INLINEFORM3 to obtain the rule probabilities. Concretely, each rule type INLINEFORM4 is parameterized as follows, INLINEFORM5 where INLINEFORM0 is the product space INLINEFORM1 , and INLINEFORM2 are MLPs with two residual layers, INLINEFORM3 The bias terms for the above expressions (including for the rule probabilities) are omitted for notational brevity. In Figure FIGREF3 we use the following to refer to rule probabilities of different rule types, INLINEFORM0 where INLINEFORM0 denotes the set of rules with INLINEFORM1 on the left hand side. The compound PCFG rule probabilities INLINEFORM0 given a latent vector INLINEFORM1 , INLINEFORM2 Again the bias terms are omitted for brevity, and INLINEFORM0 are as before where the first layer's input dimensions are appropriately changed to account for concatenation with INLINEFORM1 . Corpus/Sentence F 1 F_1 by Sentence Length For completeness we show the corpus-level and sentence-level INLINEFORM0 broken down by sentence length in Table TABREF44 , averaged across 4 different runs of each model. Experiments with RNNGs For experiments on supervising RNNGs with induced trees, we use the parameterization and hyperparameters from BIBREF17 , which uses a 2-layer 650-dimensional stack LSTM (with dropout of 0.5) and a 650-dimensional tree LSTM BIBREF88 , BIBREF90 as the composition function. 
Concretely, the generative story is as follows: first, the stack representation is used to predict the next action (shift or reduce) via an affine transformation followed by a sigmoid. If shift is chosen, we obtain a distribution over the vocabulary via another affine transformation over the stack representation followed by a softmax. Then we sample the next word from this distribution and shift the generated word onto the stack using the stack LSTM. If reduce is chosen, we pop the last two elements off the stack and use the tree LSTM to obtain a new representation. This new representation is shifted onto the stack via the stack LSTM. Note that this RNNG parameterization is slightly different from the original of BIBREF53, which does not ignore constituent labels and utilizes a bidirectional LSTM as the composition function instead of a tree LSTM. As our RNNG parameterization only works with binary trees, we binarize the gold trees with right binarization for the RNNG trained on gold trees (trees from the unsupervised methods explored in this paper are already binary). The RNNG also trains a discriminative parser alongside the generative model for evaluation with importance sampling. We use a CRF parser whose span score parameterization is similar to that of recent works BIBREF89, BIBREF87, BIBREF85: position embeddings are added to word embeddings, and a bidirectional LSTM with 256 hidden dimensions is run over the input representations to obtain the forward and backward hidden states. The score INLINEFORM0 for a constituent spanning the INLINEFORM1 -th and INLINEFORM2 -th word is given by, INLINEFORM0 where the MLP has a single hidden layer with INLINEFORM0 nonlinearity followed by layer normalization BIBREF84. For experiments on fine-tuning the RNNG with the unsupervised RNNG, we take the discriminative parser (which is also pretrained alongside the RNNG on induced trees) to be the structured inference network for optimizing the evidence lower bound. We refer the reader to BIBREF17 and their open source implementation for additional details. We also observe that, as noted by BIBREF17, a URNNG trained from scratch on this version of PTB without punctuation failed to outperform a right-branching baseline. The LSTM language model baseline is the same size as the stack LSTM (i.e. 2 layers, 650 hidden units, dropout of 0.5), and is therefore equivalent to an RNNG with completely right-branching trees. The PRPN/ON baselines for perplexity/syntactic evaluation in Table TABREF30 also have 2 layers with 650 hidden units and 0.5 dropout. Therefore all models considered in Table TABREF30 have roughly the same capacity. For all models we share input/output word embeddings BIBREF86. Perplexity estimation for the RNNGs and the compound PCFG uses 1000 importance-weighted samples. For grammaticality judgment, we modify the publicly available dataset from BIBREF56 to only keep sentence pairs that did not have any unknown words with respect to our PTB vocabulary of 10K words. This results in 33K sentence pairs for evaluation. Nonterminal/Preterminal Alignments Figure FIGREF50 shows the part-of-speech alignments and Table TABREF46 shows the nonterminal label alignments for the compound PCFG/neural PCFG. Subtree Analysis Table TABREF53 lists more examples of constituents within each subtree as the top principal component is varied. Due to data sparsity, the subtree analysis is performed on the full dataset. See section UID36 for more details.
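Returning to the RNNG generative story sketched at the top of this passage, the control flow (predict shift or reduce from the stack, sample a word on shift, compose the top two stack elements on reduce) can be illustrated as below. The stack LSTM and tree LSTM are replaced by crude random-projection stand-ins, and the vocabulary, sizes, and weights are invented, so this only mirrors the decision structure, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "dog", "barked", "</s>"]   # invented toy vocabulary
D = 16                                      # invented hidden size

# Stand-ins for the stack LSTM, output layers and tree LSTM (random projections).
W_word, W_action, W_vocab, W_compose = (rng.normal(size=s) for s in
                                        [(len(VOCAB), D), (D,), (D, len(VOCAB)), (2 * D, D)])

def stack_summary(stack):
    # Crude stand-in for the stack LSTM: mean of the element representations.
    return np.mean(stack, axis=0) if stack else np.zeros(D)

def generate(max_steps=20):
    stack, words = [], []
    for _ in range(max_steps):
        h = stack_summary(stack)
        p_reduce = 1.0 / (1.0 + np.exp(-h @ W_action))            # affine + sigmoid
        if len(stack) >= 2 and rng.random() < p_reduce:
            right, left = stack.pop(), stack.pop()                 # REDUCE: pop two,
            stack.append(np.tanh(np.concatenate([left, right]) @ W_compose))  # compose, push
        else:
            logits = h @ W_vocab                                   # SHIFT: affine + softmax
            probs = np.exp(logits - logits.max()); probs /= probs.sum()
            w = rng.choice(len(VOCAB), p=probs)
            words.append(VOCAB[w])
            stack.append(W_word[w])
            if VOCAB[w] == "</s>":
                break
    return words

print(generate())
```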
which chinese datasets were used?
Answer with content missing: (Data section) Chinese with version 5.1 of the Chinese Penn Treebank (CTB)
2,545
qasper
3k
Introduction Question answering (QA) has been a blooming research field for the last decade. Selection-based QA implies a family of tasks that find answer contexts from large data given questions in natural language. Three tasks have been proposed for selection-based QA. Given a document, answer extraction BIBREF0, BIBREF1 finds answer phrases, whereas answer selection BIBREF2, BIBREF3, BIBREF4, BIBREF5 and answer triggering BIBREF6, BIBREF7 find answer sentences instead; unlike the other two tasks, answer triggering does not assume that the answer context is present in the provided document. Recently, various QA tasks that are not selection-based have been proposed BIBREF8, BIBREF9, BIBREF10, BIBREF11; however, selection-based QA remains important because of its practical value to real applications (e.g., IBM Watson, MIT Start). Several datasets have been released for selection-based QA. wang:07a created the QASent dataset consisting of 277 questions, which has been widely used for benchmarking the answer selection task. feng:15a presented InsuranceQA comprising 16K+ questions on insurance contexts. yang:15a introduced WikiQA for answer selection and triggering. jurczyk:16 created SelQA for large-scale real-world answer triggering. rajpurkar2016squad presented SQuAD for answer extraction and selection as well as for reading comprehension. Finally, morales-EtAl:2016:EMNLP2016 provided InfoboxQA for answer selection. These corpora make it possible to evaluate the robustness of statistical learning approaches to question answering. Although all of these corpora target selection-based QA, they are designed for different purposes, so it is important to understand their nature in order to make better use of them. In this paper, we make both intrinsic and extrinsic analyses of the four most recent Wikipedia-based corpora: WikiQA, SelQA, SQuAD, and InfoboxQA. We first give a thorough intrinsic analysis regarding contextual similarities, question types, and answer categories (Section SECREF2). We then map questions in all corpora to the current version of English Wikipedia and benchmark another selection-based QA task, answer retrieval (Section SECREF3). Finally, we present an extrinsic analysis through a set of experiments cross-testing these corpora using a convolutional neural network architecture (Section SECREF4). Intrinsic Analysis Four publicly available corpora are selected for our analysis. These corpora are based on Wikipedia, so they are more comparable to one another than to other datasets, and they have already been used for the evaluation of several QA systems. WikiQA BIBREF6 comprises questions selected from Bing search queries, where user click data give the questions and their corresponding Wikipedia articles. The abstracts of these articles are then extracted to create answer candidates. The assumption is made that if many queries lead to the same article, it must contain the answer context; however, this assumption fails on some occasions, which makes this dataset more challenging. Since the existence of answer contexts is not guaranteed in this task, it is called answer triggering instead of answer selection. SelQA BIBREF7 is a product of five annotation tasks through crowdsourcing. It consists of about 8K questions, where half of the questions are paraphrased from the other half, aiming to reduce contextual similarities between questions and answers.
Each question is associated with a section in Wikipedia where the answer context is guaranteed, and also with five sections selected from the entire Wikipedia, where the selection is made by the Lucene search engine. This second dataset does not assume the existence of the answer context, so it can be used for the evaluation of answer triggering. SQuAD BIBREF12 presents 107K+ crowdsourced questions on 536 Wikipedia articles, where the answer contexts are guaranteed to exist within the provided paragraph. It contains annotation of answer phrases as well as pointers to the sentences including the answer phrases; thus, it can be used for both answer extraction and selection. This corpus also provides human accuracy on those questions, setting up a reasonable upper bound for machines. To avoid overfitting, the evaluation set is not publicly available, although system outputs can be evaluated by the provided script. InfoboxQA BIBREF13 gives 15K+ questions based on the infoboxes from 150 articles in Wikipedia. Each question is crowdsourced and associated with an infobox, where each line of the infobox is considered an answer candidate. This corpus emphasizes the importance of infoboxes, which arguably summarize the most commonly asked information about those articles. Although the nature of this corpus is different from the others, it can also be used to evaluate answer selection. Analysis All corpora provide datasets/splits for answer selection, whereas only (WikiQA, SQuAD) and (WikiQA, SelQA) provide datasets for answer extraction and answer triggering, respectively. SQuAD is much larger in size, although questions in this corpus are often paraphrased multiple times. In contrast, SQuAD's average number of candidates per question ( INLINEFORM0 ) is the smallest because SQuAD extracts answer candidates from paragraphs, whereas the others extract them from sections or infoboxes that consist of bigger contexts. Although InfoboxQA is larger than WikiQA or SelQA, the number of token types ( INLINEFORM1 ) in InfoboxQA is smaller than in those two, due to the repetitive nature of infoboxes. All corpora show similar average answer candidate lengths ( INLINEFORM0 ), except for InfoboxQA, where each line in the infobox is considered a candidate. SelQA and SQuAD show similar average question lengths ( INLINEFORM1 ) because of the similarity between their annotation schemes. It is not surprising that WikiQA's average question length is the smallest, considering that its questions are taken from search queries. InfoboxQA's average question length is relatively small, due to the restricted information that can be asked from the infoboxes. InfoboxQA and WikiQA show the least question-answer word overlap over questions and answers ( INLINEFORM2 and INLINEFORM3 in Table TABREF2 ), respectively. In terms of the F1-score for overlapping words ( INLINEFORM4 ), SQuAD gives the smallest portion of overlaps between question-answer pairs, although WikiQA comes very close. Fig. FIGREF4 shows the distributions of seven question types grouped deterministically from the lexicons. Although these corpora have been independently developed, a general trend is found, where the what question type dominates, followed by how and who, followed by when and where, and so on. Fig. FIGREF6 shows the distributions of answer categories automatically classified by our Convolutional Neural Network model trained on the data distributed by li:02a.
Interestingly, each corpus focuses on different categories, Numeric for WikiQA and SelQA, Entity for SQuAD, and Person for InfoboxQA, which gives enough diversity for statistical learning to build robust models. Answer Retrieval This section describes another selection-based QA task, called answer retrieval, that finds the answer context from a larger dataset, the entire Wikipedia. SQuAD provides no mapping of the answer contexts to Wikipedia, whereas WikiQA and SelQA provide mappings; however, their data do not come from the same version of Wikipedia. We propose an automatic way of mapping the answer contexts from all corpora to the same version of Wikipedia so they can be coherently used for answer retrieval. Each paragraph in Wikipedia is first indexed by Lucene using {1,2,3}-grams, where the paragraphs are separated by WikiExtractor and segmented by NLP4J (28.7M+ paragraphs are indexed). Each answer sentence from the corpora in Table TABREF3 is then queried against Lucene, and the top-5 ranked paragraphs are retrieved. The cosine similarity between each sentence in these paragraphs and the answer sentence is measured for INLINEFORM0 -grams, say INLINEFORM1 . A weight is assigned to each INLINEFORM2 -gram score, say INLINEFORM3 , and the weighted sum is measured: INLINEFORM4 . The fixed weights of INLINEFORM5 are used for our experiments, which could be improved. If there exists a sentence whose INLINEFORM0 , the paragraph containing that sentence is considered the silver-standard answer passage. Table TABREF3 shows how robust these silver-standard passages are based on human judgement ( INLINEFORM1 ) and how many passages are collected ( INLINEFORM2 ) for INLINEFORM3 , where the human judgement is performed on 50 random samples for each case. For answer retrieval, a dataset is created by INLINEFORM4 , which gives INLINEFORM5 accuracy and INLINEFORM6 coverage, respectively. Finally, each question is queried against Lucene and the top- INLINEFORM7 paragraphs are retrieved from the entire Wikipedia. If the answer sentence exists within those retrieved paragraphs according to the silver standard, it is considered correct. Finding a paragraph that includes the answer context out of the entire Wikipedia is an extremely difficult task (one paragraph out of 28.7M+). The last row of Table TABREF3 shows results from answer retrieval. Given INLINEFORM0 , SelQA and SQuAD show about 34% and 35% accuracy, which is reasonable. However, WikiQA shows a significantly lower accuracy of 12.47%; this is because the questions in WikiQA are about half as long as the questions in the other corpora, so not enough lexical terms can be extracted from them for the Lucene search. Answer Selection Answer selection is evaluated by two metrics, mean average precision (MAP) and mean reciprocal rank (MRR). The bigram CNN introduced by yu:14a is used to generate all the results in Table TABREF11, where models are trained on either single or combined datasets. Clearly, the questions in WikiQA are the most challenging, and adding more training data from the other corpora hurts accuracy due to the uniqueness of query-based questions in this corpus. The best model is achieved by training on W+S+Q for SelQA; adding InfoboxQA hurts accuracy for SelQA, although it gives a marginal gain for SQuAD. Just like WikiQA, InfoboxQA performs the best when it is trained on only itself.
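A small sketch of the sentence-matching score used above to build the silver standard: per-n cosine similarities between n-gram count vectors, combined with fixed weights. The weight values, which are not reproduced in the text, and the example sentences are invented.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count vector of contiguous n-grams."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def cosine(c1, c2):
    dot = sum(c1[g] * c2[g] for g in c1)
    norm = math.sqrt(sum(v * v for v in c1.values())) * math.sqrt(sum(v * v for v in c2.values()))
    return dot / norm if norm else 0.0

def match_score(sent_a, sent_b, weights=(0.2, 0.3, 0.5)):
    """Weighted sum of {1,2,3}-gram cosine similarities; the weights here are invented."""
    a, b = sent_a.lower().split(), sent_b.lower().split()
    return sum(w * cosine(ngrams(a, n), ngrams(b, n)) for n, w in enumerate(weights, start=1))

candidate = "the eiffel tower was completed in 1889"
answer = "the eiffel tower was finished in 1889"
print(match_score(candidate, answer))
```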
From our analysis, we suggest using models trained on WikiQA and InfoboxQA for short query-like questions, and models trained on SelQA and SQuAD for long natural questions. Answer Triggering The results of INLINEFORM0 from the answer retrieval task in Section SECREF13 are used to create the datasets for answer triggering, where about 65% of the questions are not expected to find their answer contexts in the provided paragraphs for SelQA and SQuAD, and 87.5% are not expected to for WikiQA. Answer triggering is evaluated by the F1 scores presented in Table TABREF11, where three corpora are cross-validated. The results on WikiQA are quite low, as expected from the poor accuracy on the answer retrieval task. Training on SelQA gives the best models for both WikiQA and SelQA. Training on SQuAD gives the best model for SQuAD, although the model trained on SelQA is comparable. Since the answer triggering datasets are about 5 times larger than the answer selection datasets, it is computationally too expensive to combine all data for training. We plan to perform this experiment on a more powerful machine in the near future. Related work Lately, several deep learning approaches have been proposed for question answering. yu:14a presented a CNN model that recognizes the semantic similarity between two sentences. wang-nyberg:2015:ACL-IJCNLP presented a stacked bidirectional LSTM approach that reads words in sequence and then outputs their similarity scores. feng:15a applied a general deep learning framework to non-factoid question answering. santos:16a introduced an attentive pooling mechanism that led to further improvements in selection-based QA. Conclusion We present a comprehensive comparison study of the existing corpora for selection-based question answering. Our intrinsic analysis provides a better understanding of the uniqueness or similarity of these corpora. Our extrinsic analysis shows the strengths and weaknesses of combining these corpora for statistical learning. Additionally, we create a silver-standard dataset for answer retrieval and triggering, which will be publicly available. In the future, we will explore different ways of improving the quality of our silver-standard datasets by fine-tuning the hyper-parameters.
Do they employ their indexing-based method to create a sample of a QA Wikipedia dataset?
Yes
1,910
qasper
3k
Introduction Stance detection (also called stance identification or stance classification) is one of the considerably recent research topics in natural language processing (NLP). It is usually defined as a classification problem where for a text and target pair, the stance of the author of the text for that target is expected as a classification output from the set: {Favor, Against, Neither} BIBREF0 . Stance detection is usually considered as a subtask of sentiment analysis (opinion mining) BIBREF1 topic in NLP. Both are mostly performed on social media texts, particularly on tweets, hence both are important components of social media analysis. Nevertheless, in sentiment analysis, the sentiment of the author of a piece of text usually as Positive, Negative, and Neutral is explored while in stance detection, the stance of the author of the text for a particular target (an entity, event, etc.) either explicitly or implicitly referred to in the text is considered. Like sentiment analysis, stance detection systems can be valuable components of information retrieval and other text analysis systems BIBREF0 . Previous work on stance detection include BIBREF2 where a stance classifier based on sentiment and arguing features is proposed in addition to an arguing lexicon automatically compiled. The ultimate approach performs better than distribution-based and uni-gram-based baseline systems BIBREF2 . In BIBREF3 , the authors show that the use of dialogue structure improves stance detection in on-line debates. In BIBREF4 , Hasan and Ng carry out stance detection experiments using different machine learning algorithms, training data sets, features, and inter-post constraints in on-line debates, and draw insightful conclusions based on these experiments. For instance, they find that sequence models like HMMs perform better at stance detection when compared with non-sequence models like Naive Bayes (NB) BIBREF4 . In another related study BIBREF5 , the authors conclude that topic-independent features can be exploited for disagreement detection in on-line dialogues. The employed features include agreement, cue words, denial, hedges, duration, polarity, and punctuation BIBREF5 . Stance detection on a corpus of student essays is considered in BIBREF6 . After using linguistically-motivated feature sets together with multivalued NB and SVM as the learning models, the authors conclude that they outperform two baseline approaches BIBREF6 . In BIBREF7 , the author claims that Wikipedia can be used to determine stances about controversial topics based on their previous work regarding controversy extraction on the Web. Among more recent related work, in BIBREF8 stance detection for unseen targets is studied and bidirectional conditional encoding is employed. The authors state that their approach achieves state-of-the art performance rates BIBREF8 on SemEval 2016 Twitter Stance Detection corpus BIBREF0 . In BIBREF9 , a stance-community detection approach called SCIFNET is proposed. SCIFNET creates networks of people who are stance targets, automatically from the related document collections BIBREF9 using stance expansion and refinement techniques to arrive at stance-coherent networks. A tweet data set annotated with stance information regarding six predefined targets is proposed in BIBREF10 where this data set is annotated through crowdsourcing. 
The authors indicate that the data set is also annotated with sentiment information in addition to stance, so it can help reveal associations between stance and sentiment BIBREF10 . Lastly, in BIBREF0 , SemEval 2016's aforementioned shared task on Twitter Stance Detection is described. Also provided are the results of the evaluations of 19 systems participating in two subtasks (one with training data set provided and the other without an annotated data set) of the shared task BIBREF0 . In this paper, we present a tweet data set in Turkish annotated with stance information, where the corresponding annotations are made publicly available. The domain of the tweets comprises two popular football clubs which constitute the targets of the tweets included. We also provide the evaluation results of SVM classifiers (for each target) on this data set using unigram, bigram, and hashtag features. To the best of our knowledge, the current study is the first one to target at stance detection in Turkish tweets. Together with the provided annotated data set and the corresponding evaluations with the aforementioned SVM classifiers which can be used as baseline systems, our study will hopefully help increase social media analysis studies on Turkish content. The rest of the paper is organized as follows: In Section SECREF2 , we describe our tweet data set annotated with the target and stance information. Section SECREF3 includes the details of our SVM-based stance classifiers and their evaluation results with discussions. Section SECREF4 includes future research topics based on the current study, and finally Section SECREF5 concludes the paper with a summary. A Stance Detection Data Set We have decided to consider tweets about popular sports clubs as our domain for stance detection. Considerable amounts of tweets are being published for sports-related events at every instant. Hence we have determined our targets as Galatasaray (namely Target-1) and Fenerbahçe (namely, Target-2) which are two of the most popular football clubs in Turkey. As is the case for the sentiment analysis tools, the outputs of the stance detection systems on a stream of tweets about these clubs can facilitate the use of the opinions of the football followers by these clubs. In a previous study on the identification of public health-related tweets, two tweet data sets in Turkish (each set containing 1 million random tweets) have been compiled where these sets belong to two different periods of 20 consecutive days BIBREF11 . We have decided to use one of these sets (corresponding to the period between August 18 and September 6, 2015) and firstly filtered the tweets using the possible names used to refer to the target clubs. Then, we have annotated the stance information in the tweets for these targets as Favor or Against. Within the course of this study, we have not considered those tweets in which the target is not explicitly mentioned, as our initial filtering process reveals. For the purposes of the current study, we have not annotated any tweets with the Neither class. This stance class and even finer-grained classes can be considered in further annotation studies. We should also note that in a few tweets, the target of the stance was the management of the club while in some others a particular footballer of the club is praised or criticised. Still, we have considered the club as the target of the stance in all of the cases and carried out our annotations accordingly. 
At the end of the annotation process, we have annotated 700 tweets, where 175 tweets are in favor of and 175 tweets are against Target-1, and similarly 175 tweets are in favor of and 175 are against Target-2. Hence, our data set is a balanced one although it is currently limited in size. The corresponding stance annotations are made publicly available at http://ceng.metu.edu.tr/ INLINEFORM0 e120329/ Turkish_Stance_Detection_Tweet_Dataset.csv in Comma Separated Values (CSV) format. The file contains three columns with the corresponding headers. The first column is the tweet id of the corresponding tweet, the second column contains the name of the stance target, and the last column includes the stance of the tweet for the target as Favor or Against. To the best of our knowledge, this is the first publicly-available stance-annotated data set for Turkish. Hence, it is a significant resource as there is a scarcity of annotated data sets, linguistic resources, and NLP tools available for Turkish. Additionally, to the best of our knowledge, it is also significant for being the first stance-annotated data set including sports-related tweets, as previous stance detection data sets mostly include on-line texts on political/ethical issues. Stance Detection Experiments Using SVM Classifiers It is emphasized in the related literature that unigram-based methods are reliable for the stance detection task BIBREF2 and similarly unigram-based models have been used as baseline models in studies such as BIBREF0 . In order to be used as a baseline and reference system for further studies on stance detection in Turkish tweets, we have trained two SVM classifiers (one for each target) using unigrams as features. Before the extraction of unigrams, we have employed automated preprocessing to filter out the stopwords in our annotated data set of 700 tweets. The stopword list used is the list presented in BIBREF12 which, in turn, is the slightly extended version of the stopword list provided in BIBREF13 . We have used the SVM implementation available in the Weka data mining application BIBREF14 where this particular implementation employs the SMO algorithm BIBREF15 to train a classifier with a linear kernel. The 10-fold cross-validation results of the two classifiers are provided in Table TABREF1 using the metrics of precision, recall, and F-Measure. The evaluation results are quite favorable for both targets and particularly higher for Target-1, considering the fact that they are the initial experiments on the data set. The performance of the classifiers is better for the Favor class for both targets when compared with the performance results for the Against class. This outcome may be due to the common use of some terms when expressing positive stance towards sports clubs in Turkish tweets. The same percentage of common terms may not have been observed in tweets during the expression of negative stances towards the targets. Yet, completely the opposite pattern is observed in stance detection results of baseline systems given in BIBREF0 , i.e., better F-Measure rates have been obtained for the Against class when compared with the Favor class BIBREF0 . Some of the baseline systems reported in BIBREF0 are SVM-based systems using unigrams and ngrams as features similar to our study, but their data sets include all three stance classes of Favor, Against, and Neither, while our data set comprises only tweets classified as belonging to Favor or Against classes. 
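As a rough illustration of the unigram SVM baseline described above, the following sketch uses scikit-learn's LinearSVC in place of Weka's SMO implementation, with 10-fold cross-validation; the toy tweets, labels, and stopword list are invented stand-ins for the 700 annotated tweets and the cited Turkish stopword list.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented toy examples; the real data are the 700 annotated Turkish tweets.
tweets = ["harika mac kazandik", "cok kotu oynadilar", "bu takim efsane", "hakem rezaletti"] * 5
labels = ["Favor", "Against", "Favor", "Against"] * 5
turkish_stopwords = ["bu", "cok"]   # stand-in for the stopword list used in the paper

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 1), stop_words=turkish_stopwords),  # unigram features
    LinearSVC(),                                                        # linear-kernel SVM
)
scores = cross_val_score(model, tweets, labels, cv=10, scoring="f1_macro")
print(scores.mean())
```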
Another difference is that the data sets in BIBREF0 have been divided into training and test sets, while in our study we provide 10-fold cross-validation results on the whole data set. On the other hand, we should also note that SVM-based sentiment analysis systems (such as those given in BIBREF16) have been reported to achieve better F-Measure rates for the Positive sentiment class when compared with the results obtained for the Negative class. Therefore, our evaluation results for each stance class seem to be in line with such sentiment analysis systems. Yet, further experiments on extended versions of our data set should be conducted and the results should again be compared with the stance detection results given in the literature. We have also evaluated SVM classifiers which use only bigrams as features, as ngram-based classifiers have been reported to perform better for the stance detection problem BIBREF0. However, we have observed that using bigrams as the sole features of the SVM classifiers leads to quite poor results. This observation may be due to the relatively limited size of the tweet data set employed. Still, we can conclude that unigram-based features lead to superior results compared to the results obtained using bigrams as features, based on our experiments on our data set. Yet, ngram-based features may be employed on extended versions of the data set to verify this conclusion in the course of future work. With the intention of exploiting the contribution of hashtag use to stance detection, we have also used the existence of hashtags in tweets as an additional feature to unigrams. The corresponding evaluation results of the SVM classifiers using unigrams together with the existence of hashtags as features are provided in Table TABREF2. When the results given in Table TABREF2 are compared with the results in Table TABREF1, a slight decrease in F-Measure (0.5%) for Target-1 is observed, while the overall F-Measure value for Target-2 has increased by 1.8%. Although we could not derive sound conclusions, mainly due to the relatively small size of our data set, the increase in the performance of the SVM classifier for Target-2 is encouraging evidence for the exploitation of hashtags in a stance detection system. We leave other ways of exploiting hashtags for stance detection as future work. To sum up, our evaluation results are significant as reference results to be used for comparison purposes and provide evidence for the utility of unigram-based and hashtag-related features in SVM classifiers for the stance detection problem in Turkish tweets. Future Prospects Future work based on the current study includes the following: Conclusion Stance detection is a relatively new research area in natural language processing and is considered within the scope of the well-studied topic of sentiment analysis. It is the detection of stance within text towards a target which may or may not be explicitly specified in the text. In this study, we present a stance-annotated tweet data set in Turkish where the targets of the annotated stances are two popular sports clubs in Turkey. The corresponding annotations are made publicly available for research purposes. To the best of our knowledge, this is the first stance detection data set for the Turkish language and also the first sports-related stance-annotated data set.
Also presented in this study are SVM classifiers (one for each target) utilizing unigram and bigram features, in addition to using the existence of hashtags as another feature. The 10-fold cross-validation results of these classifiers are presented, which can be used as reference results by prospective systems. Both the annotated data set and the classifiers with their evaluations are significant, since they are the initial contributions to the stance detection problem in Turkish tweets.
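The hashtag-augmented variant evaluated above can be expressed, under the same caveats as the previous sketch, by concatenating a binary hashtag-presence feature with the unigram counts; the feature-union layout and the toy examples below are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import FeatureUnion, make_pipeline
from sklearn.svm import LinearSVC

class HasHashtag(BaseEstimator, TransformerMixin):
    """Single binary feature: does the tweet contain a hashtag?"""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return np.array([[1.0 if "#" in t else 0.0] for t in X])

model = make_pipeline(
    FeatureUnion([("unigrams", CountVectorizer()), ("hashtag", HasHashtag())]),
    LinearSVC(),
)
# Invented toy training data for the two stance classes.
model.fit(["#galatasaray harika", "kotu oynadilar"], ["Favor", "Against"])
print(model.predict(["#fenerbahce super"]))
```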
Which sports clubs are the targets?
Galatasaray, Fenerbahçe
2,234
qasper
3k
Introduction Many research attempts have proposed novel features that improve the performance of learning algorithms in particular tasks. Such features often rely on domain knowledge or manual labor. Although such features are useful and often state-of-the-art, adapting them to NLP systems across tasks can be tricky and time-consuming BIBREF0. Therefore, simple yet general and powerful methods that perform well across several datasets are valuable BIBREF1. An approach that has become extremely popular lately in NLP tasks is to train word embeddings in an unsupervised way. These embeddings are dense vectors that project words or short text spans like phrases into a vector space whose dimensions are supposed to capture text properties. Such embeddings can then be used either as features with off-the-shelf algorithms like Support Vector Machines, or to initialize deep learning systems BIBREF2. However, as shown in BIBREF3, linear architectures perform better in high-dimensional discrete spaces compared to continuous ones. The latter is probably the main reason for the high performance of the vector space model BIBREF4 in tasks like text classification with linear models like SVMs. Using linear algorithms while taking advantage of the expressiveness of text embeddings is the focus of this work. In this paper, we explore a hybrid approach that uses text embeddings as a proxy to create features. Motivated by the argument that text embeddings manage to encode the semantics of text, we explore how clustering text embeddings can impact the performance of different NLP tasks. Although such an approach has been used in different studies during feature engineering, the selection of word vectors and the number of clusters remain a trial-and-error procedure. In this work we present an empirical evaluation across diverse tasks to verify whether and when such features are useful. Word clusters have been used as features in various tasks like Part-of-Speech tagging and NER. Owoputi et al. Owoputi13 use Brown clusters BIBREF5 in a POS tagger, showing that this type of feature carries rich lexical knowledge, as it can substitute for lexical resources like gazetteers. Kiritchenko et al. KiritchenkoZM14 discuss their use in sentiment classification, while Hee et al. HeeLH16 incorporate them in the task of irony detection in Twitter. Ritter et al. Ritter2011 also inject word clusters into an NER tagger. While these works show that word clusters are beneficial, no clear guidelines on how and when to use them can be drawn from them. In this work, we empirically demonstrate that, using different types of embeddings on Twitter data, we achieve performance that is better than or close to the state of the art on four NLP tasks: (i) Named Entity Recognition (NER) segmentation, (ii) NER classification, (iii) fine-grained sentiment analysis, and (iv) fine-grained sentiment quantification. For each of the four tasks, we achieve higher performance than without the cluster features, which indicates the effectiveness of the cluster membership features. Importantly, unlike previous work BIBREF6, which focuses on old and well-studied datasets, our evaluation uses recent and challenging datasets composed of tweets. The results obtained across all the tasks allow us to reveal important aspects of the use of word clusters and therefore to provide guidelines.
Although our obtained scores are state-of-the-art, our analysis reveals that the performance in such tasks is far from perfect and, hence, shows that there is still much room for improvement and future work. Word Clusters Word embeddings associate words with dense, low-dimensional vectors. Recently, several models have been proposed in order to obtain these embeddings. Among others, the skipgram (skipgram) model with negative sampling BIBREF7, the continuous bag-of-words (cbow) model BIBREF7 and GloVe (glove) BIBREF8 have been shown to be effective. Training those models requires no annotated data and can be done using large amounts of text. Such a model can be seen as a function INLINEFORM0 that projects a word INLINEFORM1 into a INLINEFORM2 -dimensional space: INLINEFORM3 , where INLINEFORM4 is predefined. Here, we focus on applications using data from Twitter, which poses several difficulties: tweets are particularly short and use creative vocabulary, abbreviations, and slang. For all the tasks in our experimental study, we use 36 million English tweets collected between August and September 2017. A pre-processing step has been applied to replace URLs with a placeholder and to pad punctuation. The final vocabulary size was around 1.6 million words. In addition to the in-domain corpus we collected, we use GloVe vectors trained on Wikipedia articles in order to investigate the impact of out-of-domain word vectors. We cluster the embeddings with INLINEFORM0 -Means. The k-means clusters are initialized using "k-means++" as proposed in BIBREF9, while the algorithm is run for 300 iterations. We try different values for INLINEFORM1 . For each INLINEFORM2 , we repeat the clustering experiment with 10 different seed initializations and select the clustering result that minimizes the cluster inertia. Experimental Evaluation We evaluate the proposed approach for augmenting the feature space in four tasks: (i) NER segmentation, (ii) NER classification, (iii) fine-grained sentiment classification and (iv) fine-grained sentiment quantification. The next sections present the evaluation settings we used. For each of the tasks, we use the designated training sets to train the learning algorithms, and we report the scores of the evaluation measures used on the respective test parts. Named-Entity Recognition in Twitter NER concerns the classification of textual segments into a predefined set of categories, like persons, organizations, and locations. We use the data of the last competition in NER for Twitter, which was released as part of the 2nd Workshop on Noisy User-generated Text BIBREF10. More specifically, the organizers provided annotated tweets with 10 named-entity types (person, movie, sportsteam, product etc.) and the task comprised two sub-tasks: 1) the detection of entity bounds and 2) the classification of an entity into one of the 10 types. The evaluation measure for both sub-tasks is the F INLINEFORM0 measure. The following is an example of a tweet which contains two named entities. Note that named entities may span several words in the text: INLINEFORM0 tonite ... 90 's music .. oldskool night wiith INLINEFORM1 Our model for solving the task is a learning-to-search approach. More specifically, we follow BIBREF11, which was ranked 2nd among 10 participants in the aforementioned competition BIBREF10. The model uses handcrafted features like n-grams, part-of-speech tags, capitalization and membership in gazetteers.
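A condensed sketch of the clustering step described earlier in this section, using scikit-learn's KMeans with k-means++ initialization and multiple restarts keeping the lowest-inertia solution; random vectors stand in for the trained word embeddings, and the bag-of-clusters feature function at the end is one plausible way, assumed here, to turn cluster memberships into features.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
vocab = [f"word{i}" for i in range(1000)]          # stand-in vocabulary
embeddings = rng.normal(size=(len(vocab), 100))    # stand-in for trained word vectors

k = 250
km = KMeans(n_clusters=k, init="k-means++", n_init=10, max_iter=300, random_state=0)
cluster_ids = km.fit_predict(embeddings)           # n_init keeps the lowest-inertia run

word2cluster = dict(zip(vocab, cluster_ids))

def cluster_features(tokens, k=k):
    """Bag-of-clusters vector: counts of cluster memberships for the tokens of a tweet."""
    feats = np.zeros(k)
    for tok in tokens:
        if tok in word2cluster:
            feats[word2cluster[tok]] += 1
    return feats

print(cluster_features(["word3", "word42", "word3", "unknown"]).sum())
```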
The algorithm used belongs to the learning-to-search family of methods for structured prediction tasks BIBREF12. These methods decompose the problem into a search space with states, actions and policies and then learn a hypothesis controlling a policy over the state-action space. The BIO encoding is used for attributing the corresponding labels to the tokens, where B-type is used for the first token of an entity, I-type for inside tokens in the case of multi-term entities, and O for non-entity tokens. Tables TABREF6 and TABREF7 present the results for the different numbers of clusters across the three vector models used to induce the clusters. For all the experiments we keep the same parametrization for the learning algorithm and we present the performance of each run on the official test set. Regarding the segmentation task, we notice that adding word clusters as features improves the performance of the best model by up to 1.1 F-score points and boosts performance in the majority of cases. In only one case, for glove INLINEFORM0 vectors, there is a drop across all numbers of clusters used. As for the number of clusters, the best results are generally obtained between 250 and 1000 clusters for all word vector models. These dimensions seem to be sufficient for the three-class sub-task that we deal with. The different models of word vectors perform similarly, and thus no particular type of word vector can be singled out. Interestingly, the clusters learned on the Wikipedia GloVe vectors offer competitive performance with respect to the in-domain word vectors used for the other cases, showing that one can rely on out-of-domain data for constructing such representations. Concerning the classification task (Table TABREF7), we generally observe a drop in the performance of the tagger, as we deal with 10 classes. This essentially corresponds to a multi-class problem with 21 classes: one for the non-entity type and two classes for each entity type. In this setting we notice that the best results are obtained in most cases for higher numbers of clusters (1000 or 2000), possibly due to better discriminatory power in higher dimensions. Note also that in some cases the addition of word cluster features does not improve the performance; on the contrary, it may degrade it, as is evident in the case of glove INLINEFORM0 word clusters. As in the case of segmentation, we do not observe a word vector model that clearly outperforms the rest. Finally, we note the same competitive performance of the Wikipedia word clusters, notably for the glove INLINEFORM1 clusters, which obtain the best F1-score. Fine-grained Sentiment Analysis The task of fine-grained sentiment classification consists of predicting the sentiment of an input text according to a five-point scale (sentiment $\in$ {VeryNegative, Negative, Neutral, Positive, VeryPositive}). We use the setting of task 4 of SemEval2016 "Sentiment Analysis in Twitter" and the dataset released by the organizers for subtask 4 BIBREF13. In total, the training (resp. test) data consist of 9,070 (resp. 20,632) tweets. The evaluation measure selected in BIBREF13 for the task is the macro-averaged Mean Absolute Error (MAE INLINEFORM0 ). It is a measure of error, hence lower values are better. The measure's goal is to take into account the order of the classes when penalizing the decision of a classifier. For instance, misclassifying a very negative example as very positive is a bigger mistake than classifying it as negative or neutral.
Penalizing a classifier according to how far the predictions are from the true class is captured by MAE INLINEFORM1 BIBREF14. Also, the advantage of using the macro-averaged version instead of the standard version of the measure is its robustness against the class imbalance in the data. Learning algorithm To demonstrate the efficiency of cluster membership features, we rely on the system of BIBREF15, which was ranked 1st among 11 participants and uses Logistic Regression as the learning algorithm. We follow the same feature extraction steps, which consist of extracting n-gram and character n-gram features and part-of-speech counts, as well as sentiment scores using standard sentiment lexicons such as Bing Liu's lexicon BIBREF16 and the MPQA lexicon BIBREF17. For the full description, we refer the interested reader to BIBREF15. To evaluate the performance of the proposed feature augmentation technique, we present in Table TABREF10 the macro-averaged Mean Absolute Error scores for different settings on the official test set of BIBREF13. First, notice that the best score on the test data is achieved using cluster membership features, where the word embeddings are trained using the skipgram model. The achieved score improves the state of the art on the dataset, which to the best of our knowledge was held by BIBREF15. Also, note that the score on the test data improves for each type of embedding used, which means that augmenting the feature space using cluster membership features helps the sentiment classification task. Note, also, that using the clusters produced by the out-of-domain embeddings trained on Wikipedia that were released as part of BIBREF8 performs surprisingly well. One might have expected their addition to hurt the performance. However, their value probably stems from the sheer amount of data used for their training as well as the relatively simple type of words (like awesome, terrible) which are discriminative for this task. Lastly, note that in each of the settings, the best results are achieved when the number of clusters is within INLINEFORM0, as in the NER tasks. Comparing the performance across the different embeddings, one cannot claim that a particular embedding performs better. It is evident, though, that augmenting the feature space with features derived using the proposed method, preferably with in-domain data, helps the classification performance and reduces MAE INLINEFORM1. From the results of Table TABREF10 it is clear that the addition of the cluster membership features improves the sentiment classification performance. To better understand why these clusters help, we manually examined a sample of the words associated with the clusters. To improve the readability of those results, we first removed the hashtags and filtered the results using an English vocabulary. In Table TABREF11 we present sample words from two of the most characteristic clusters with respect to the task of sentiment classification. Notice how words with positive and negative meanings are put in the respective clusters. Fine-Grained Sentiment Quantification Quantification is the problem of estimating the prevalence of a class in a dataset. While classification concerns assigning a category to a single instance, like labeling a tweet with the sentiment it conveys, the goal of quantification is, given a set of instances, to estimate the relative frequency of a single class.
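For reference, a small function computing the macro-averaged MAE used above: the mean absolute distance between predicted and true ordinal labels, averaged over the true classes so that rare classes weigh as much as frequent ones. The integer encoding of the five-point scale and the toy example are invented.

```python
import numpy as np

def macro_mae(y_true, y_pred, classes=(-2, -1, 0, 1, 2)):
    """Macro-averaged mean absolute error over an ordinal label set."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = []
    for c in classes:
        mask = y_true == c
        if mask.any():
            per_class.append(np.abs(y_pred[mask] - y_true[mask]).mean())
    return float(np.mean(per_class))

# Toy example: VeryNegative..VeryPositive mapped to -2..2.
print(macro_mae([2, 2, 0, -2], [1, 2, -1, 1]))   # mean of the per-class errors
```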
Therefore, sentiment quantification tries to answer questions like "Given a set of tweets about the new iPhone, what is the fraction of VeryPositive ones?". In the rest of this section, we show the effect of the features derived from the word-embedding clusters on this problem, which was also part of the SemEval-2016 "Sentiment Analysis in Twitter" task BIBREF13. Learning Algorithm To perform the quantification task, we rely on a classify-and-count approach, which was shown to be effective in a related binary quantification problem BIBREF15. The idea is that, given a set of instances on a particular subject, one first classifies the instances and then aggregates the counts. To this end, we use the same feature representation steps and data as those used for fine-grained classification (Section 3.2). Note that the data of the task are associated with subjects (described in full detail in BIBREF13), and, hence, quantification is performed for the tweets of a subject. The output of the approach is a 5-dimensional vector with the estimated prevalence of each of the five categories. The evaluation measure for the problem is the Earth Mover's Distance (EMD) BIBREF18. EMD is a measure of error, hence lower values are better. It assumes ordered categories, which are naturally defined in our problem. Further assuming that the distance between consecutive categories (e.g., Positive and VeryPositive) is 1, the measure is calculated by $\mathrm{EMD}(p, \hat{p}) = \sum_{j=1}^{C-1} \big| \sum_{i=1}^{j} (\hat{p}_i - p_i) \big|$, where $C$ is the number of categories (five in our case) and $p$ and $\hat{p}$ are the true and predicted prevalence respectively BIBREF19. Results Table TABREF13 presents the results of augmenting the feature set with the proposed features. We use Logistic Regression as the base classifier for the classify-and-count approach. Notice the positive impact of the features on the performance in the task. Adding the features derived from clustering the embeddings consistently improves the performance. Interestingly, the best performance ( INLINEFORM0 ) is achieved using the out-of-domain vectors, as in the NER classification task. Also, notice how the approach improves over the state-of-the-art performance in the challenge ( INLINEFORM1 ) BIBREF13, held by the method of BIBREF20. The improvement over the method of BIBREF20, however, does not necessarily mean that classify-and-count performs better on the task. It implies that the feature set we used is richer, which in turn highlights the value of robust feature extraction mechanisms, the subject of this paper. Conclusion We have shown empirically the effectiveness of incorporating cluster membership features in the feature extraction pipeline of named-entity recognition, sentiment classification and quantification tasks. Our results strongly suggest that incorporating cluster membership features benefits the performance in these tasks. The fact that the performance improvements are consistent across the four tasks we investigated further highlights their usefulness, both for practitioners and researchers. Although our study does not identify a clear winner with respect to the type of word vectors (skipgram, cbow, or GloVe), our findings suggest that one should first try skip-gram embeddings of low dimensionality ( INLINEFORM0 ) and a high number of clusters (e.g., INLINEFORM1 ), as the results obtained using these settings are consistently competitive.
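With unit distance between consecutive categories, the EMD above reduces to summing the absolute differences of the cumulative true and predicted prevalences, as in this short sketch; the toy prevalence vectors are invented.

```python
import numpy as np

def earth_movers_distance(p_true, p_pred):
    """EMD between two prevalence vectors over ordered categories (unit distances)."""
    p_true, p_pred = np.asarray(p_true, float), np.asarray(p_pred, float)
    # Partial sums for j = 1..C-1 of (p_hat_i - p_i), then absolute values summed.
    return float(np.abs(np.cumsum(p_pred - p_true)[:-1]).sum())

# Toy example over {VeryNegative, Negative, Neutral, Positive, VeryPositive}.
true_prev = [0.05, 0.15, 0.40, 0.30, 0.10]
pred_prev = [0.10, 0.10, 0.35, 0.35, 0.10]
print(earth_movers_distance(true_prev, pred_prev))
```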
Our results also suggest that using out-of-domain data, like Wikipedia articles in this case, to construct the word embeddings is a good practice, as the results we obtained with these vectors are also competitive. The positive effect of out-of-domain embeddings, and of their combination with in-domain ones, remains to be studied further.
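To make the pipeline above concrete, the following Python sketch shows the cluster-membership feature augmentation and the macro-averaged MAE metric in minimal form. It assumes the word embeddings are available as a word-to-vector dictionary and that the five ordinal sentiment classes are encoded as integers (e.g., -2 to 2); the helper names and the cluster count are illustrative, not the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def build_cluster_features(tokenized_tweets, embeddings, n_clusters=500, seed=13):
    """Count, per tweet, how many of its tokens fall in each embedding cluster."""
    words = sorted(embeddings)
    labels = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(
        np.vstack([embeddings[w] for w in words]))
    word2cluster = dict(zip(words, labels))
    feats = np.zeros((len(tokenized_tweets), n_clusters))
    for i, tokens in enumerate(tokenized_tweets):
        for tok in tokens:
            if tok in word2cluster:
                feats[i, word2cluster[tok]] += 1.0
    return feats

def macro_mae(y_true, y_pred, classes=(-2, -1, 0, 1, 2)):
    """Macro-averaged MAE: per-class MAE averaged so rare classes count equally."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = [np.abs(y_pred[y_true == c] - c).mean()
                 for c in classes if (y_true == c).any()]
    return float(np.mean(per_class))

# usage sketch (base_features, tweets, emb and the index splits are placeholders):
# X = np.hstack([base_features, build_cluster_features(tweets, emb)])
# clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
# print(macro_mae(y[test_idx], clf.predict(X[test_idx])))
```

In the experiments above, these cluster-count columns would simply be concatenated with the n-gram, part-of-speech and lexicon features before training the Logistic Regression classifier.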
Which hyperparameters were varied in the experiments on the four tasks?
number of clusters, seed value in clustering, selection of word vectors, window size and dimension of embedding
2,753
qasper
3k
Introduction Understanding the emotions expressed in a text or message is of high relevance nowadays. Companies are interested in this to get an understanding of the sentiment of their current customers regarding their products and the sentiment of their potential customers to attract new ones. Moreover, changes in a product or a company may also affect the sentiment of a customer. However, the intensity of an emotion is crucial in determining the urgency and importance of that sentiment. If someone is only slightly happy about a product, is a customer willing to buy it again? Conversely, if someone is very angry about customer service, his or her complaint might be given priority over somewhat milder complaints. BIBREF0 present four tasks in which systems have to automatically determine the intensity of emotions (EI) or the intensity of the sentiment (Valence) of tweets in the languages English, Arabic, and Spanish. The goal is to either predict a continuous regression (reg) value or to do ordinal classification (oc) based on a number of predefined categories. The EI tasks have separate training sets for four different emotions: anger, fear, joy and sadness. Due to the large number of subtasks and the fact that this language does not have many resources readily available, we only focus on the Spanish subtasks. Our work makes the following contributions: Our submissions ranked second (EI-Reg), second (EI-Oc), fourth (V-Reg) and fifth (V-Oc), demonstrating that the proposed method is accurate in automatically determining the intensity of emotions and sentiment of Spanish tweets. This paper will first focus on the datasets, the data generation procedure, and the techniques and tools used. Then we present the results in detail, after which we perform a small error analysis on the largest mistakes our model made. We conclude with some possible ideas for future work. Data For each task, the training data that was made available by the organizers is used, which is a selection of tweets with for each tweet a label describing the intensity of the emotion or sentiment BIBREF1 . Links and usernames were replaced by the general tokens URL and @username, after which the tweets were tokenized by using TweetTokenizer. All text was lowercased. In a post-processing step, it was ensured that each emoji is tokenized as a single token. Word Embeddings To be able to train word embeddings, Spanish tweets were scraped between November 8, 2017 and January 12, 2018. We chose to create our own embeddings instead of using pre-trained embeddings, because this way the embeddings would resemble the provided data set: both are based on Twitter data. Added to this set was the Affect in Tweets Distant Supervision Corpus (DISC) made available by the organizers BIBREF0 and a set of 4.1 million tweets from 2015, obtained from BIBREF2 . After removing duplicate tweets and tweets with fewer than ten tokens, this resulted in a set of 58.7 million tweets, containing 1.1 billion tokens. The tweets were preprocessed using the method described in Section SECREF6 . The word embeddings were created using word2vec in the gensim library BIBREF3 , using CBOW, a window size of 40 and a minimum count of 5. The feature vectors for each tweet were then created by using the AffectiveTweets WEKA package BIBREF4 . Translating Lexicons Most lexical resources for sentiment analysis are in English. 
To still be able to benefit from these sources, the lexicons in the AffectiveTweets package were translated to Spanish, using the machine translation platform Apertium BIBREF5 . All lexicons from the AffectiveTweets package were translated, except for SentiStrength. Instead of translating this lexicon, the English version was replaced by the Spanish variant made available by BIBREF6 . For each subtask, the optimal combination of lexicons was determined. This was done by first calculating the benefits of adding each lexicon individually, after which only beneficial lexicons were added until the score did not increase anymore (e.g. after adding the best four lexicons the fifth one did not help anymore, so only four were added). The tests were performed using a default SVM model, with the set of word embeddings described in the previous section. Each subtask thus uses a different set of lexicons (see Table TABREF1 for an overview of the lexicons used in our final ensemble). For each subtask, this resulted in a (modest) increase on the development set, between 0.01 and 0.05. Translating Data The training set provided by BIBREF0 is not very large, so it was interesting to find a way to augment the training set. A possible method is to simply translate the datasets into other languages, leaving the labels intact. Since the present study focuses on Spanish tweets, all tweets from the English datasets were translated into Spanish. This new set of “Spanish” data was then added to our original training set. Again, the machine translation platform Apertium BIBREF5 was used for the translation of the datasets. Algorithms Used Three types of models were used in our system, a feed-forward neural network, an LSTM network and an SVM regressor. The neural nets were inspired by the work of Prayas BIBREF7 in the previous shared task. Different regression algorithms (e.g. AdaBoost, XGBoost) were also tried due to the success of SeerNet BIBREF8 , but our study was not able to reproduce their results for Spanish. For both the LSTM network and the feed-forward network, a parameter search was done for the number of layers, the number of nodes and dropout used. This was done for each subtask, i.e. different tasks can have a different number of layers. All models were implemented using Keras BIBREF9 . After the best parameter settings were found, the results of 10 system runs to produce our predictions were averaged (note that this is different from averaging our different type of models in Section SECREF16 ). For the SVM (implemented in scikit-learn BIBREF10 ), the RBF kernel was used and a parameter search was conducted for epsilon. Detailed parameter settings for each subtask are shown in Table TABREF12 . Each parameter search was performed using 10-fold cross validation, as to not overfit on the development set. Semi-supervised Learning One of the aims of this study was to see if using semi-supervised learning is beneficial for emotion intensity tasks. For this purpose, the DISC BIBREF0 corpus was used. This corpus was created by querying certain emotion-related words, which makes it very suitable as a semi-supervised corpus. However, the specific emotion the tweet belonged to was not made public. Therefore, a method was applied to automatically assign the tweets to an emotion by comparing our scraped tweets to this new data set. 
First, in an attempt to obtain the query-terms, we selected the 100 words which occurred most frequently in the DISC corpus, in comparison with their frequencies in our own scraped tweets corpus. Words that were clearly not indicators of emotion were removed. The rest was annotated per emotion or removed if it was unclear to which emotion the word belonged. This allowed us to create silver datasets per emotion, assigning tweets to an emotion if an annotated emotion-word occurred in the tweet. Our semi-supervised approach is quite straightforward: first a model is trained on the training set and then this model is used to predict the labels of the silver data. This silver data is then simply added to our training set, after which the model is retrained. However, an extra step is applied to ensure that the silver data is of reasonable quality. Instead of training a single model initially, ten different models were trained which predict the labels of the silver instances. If the highest and lowest prediction do not differ more than a certain threshold the silver instance is maintained, otherwise it is discarded. This results in two parameters that could be optimized: the threshold and the number of silver instances that would be added. This method can be applied to both the LSTM and feed-forward networks that were used. An overview of the characteristics of our data set with the final parameter settings is shown in Table TABREF14 . Usually, only a small subset of data was added to our training set, meaning that most of the silver data is not used in the experiments. Note that since only the emotions were annotated, this method is only applicable to the EI tasks. Ensembling To boost performance, the SVM, LSTM, and feed-forward models were combined into an ensemble. For both the LSTM and feed-forward approach, three different models were trained. The first model was trained on the training data (regular), the second model was trained on both the training and translated training data (translated) and the third one was trained on both the training data and the semi-supervised data (silver). Due to the nature of the SVM algorithm, semi-supervised learning does not help, so only the regular and translated model were trained in this case. This results in 8 different models per subtask. Note that for the valence tasks no silver training data was obtained, meaning that for those tasks the semi-supervised models could not be used. Per task, the LSTM and feed-forward model's predictions were averaged over 10 prediction runs. Subsequently, the predictions of all individual models were combined into an average. Finally, models were removed from the ensemble in a stepwise manner if the removal increased the average score. This was done based on their original scores, i.e. starting out by trying to remove the worst individual model and working our way up to the best model. We only consider it an increase in score if the difference is larger than 0.002 (i.e. the difference between 0.716 and 0.718). If at some point the score does not increase and we are therefore unable to remove a model, the process is stopped and our best ensemble of models has been found. This process uses the scores on the development set of different combinations of models. Note that this means that the ensembles for different subtasks can contain different sets of models. The final model selections can be found in Table TABREF17 . 
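Before turning to the results, the sketch below illustrates the stepwise ensemble pruning just described. It assumes a dev_score helper that returns the development-set score of an averaged ensemble and that the models are ordered from worst to best individual score; both are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def average_predictions(models, X):
    """Average the predictions of the models currently in the ensemble."""
    return np.mean([m.predict(X) for m in models], axis=0)

def stepwise_prune(models, dev_score, min_gain=0.002):
    """Drop models, worst individual score first, while the dev score clearly improves."""
    ensemble = list(models)                  # ordered worst -> best individual score
    best = dev_score(ensemble)
    for candidate in list(models):
        if len(ensemble) == 1:
            break
        trial = [m for m in ensemble if m is not candidate]
        score = dev_score(trial)
        if score - best > min_gain:          # only count gains larger than 0.002
            ensemble, best = trial, score
        else:
            break                            # the first failed removal stops the process
    return ensemble, best
```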
Results and Discussion Table TABREF18 shows the results on the development set of all individuals models, distinguishing the three types of training: regular (r), translated (t) and semi-supervised (s). In Tables TABREF17 and TABREF18 , the letter behind each model (e.g. SVM-r, LSTM-r) corresponds to the type of training used. Comparing the regular and translated columns for the three algorithms, it shows that in 22 out of 30 cases, using translated instances as extra training data resulted in an improvement. For the semi-supervised learning approach, an improvement is found in 15 out of 16 cases. Moreover, our best individual model for each subtask (bolded scores in Table TABREF18 ) is always either a translated or semi-supervised model. Table TABREF18 also shows that, in general, our feed-forward network obtained the best results, having the highest F-score for 8 out of 10 subtasks. However, Table TABREF19 shows that these scores can still be improved by averaging or ensembling the individual models. On the dev set, averaging our 8 individual models results in a better score for 8 out of 10 subtasks, while creating an ensemble beats all of the individual models as well as the average for each subtask. On the test set, however, only a small increase in score (if any) is found for stepwise ensembling, compared to averaging. Even though the results do not get worse, we cannot conclude that stepwise ensembling is a better method than simply averaging. Our official scores (column Ens Test in Table TABREF19 ) have placed us second (EI-Reg, EI-Oc), fourth (V-Reg) and fifth (V-Oc) on the SemEval AIT-2018 leaderboard. However, it is evident that the results obtained on the test set are not always in line with those achieved on the development set. Especially on the anger subtask for both EI-Reg and EI-Oc, the scores are considerably lower on the test set in comparison with the results on the development set. Therefore, a small error analysis was performed on the instances where our final model made the largest errors. Error Analysis Due to some large differences between our results on the dev and test set of this task, we performed a small error analysis in order to see what caused these differences. For EI-Reg-anger, the gold labels were compared to our own predictions, and we manually checked 50 instances for which our system made the largest errors. Some examples that were indicative of the shortcomings of our system are shown in Table TABREF20 . First of all, our system did not take into account capitalization. The implications of this are shown in the first sentence, where capitalization intensifies the emotion used in the sentence. In the second sentence, the name Imperator Furiosa is not understood. Since our texts were lowercased, our system was unable to capture the named entity and thought the sentence was about an angry emperor instead. In the third sentence, our system fails to capture that when you are so angry that it makes you laugh, it results in a reduced intensity of the angriness. Finally, in the fourth sentence, it is the figurative language me infla la vena (it inflates my vein) that the system is not able to understand. The first two error-categories might be solved by including smart features regarding capitalization and named entity recognition. However, the last two categories are problems of natural language understanding and will be very difficult to fix. 
Conclusion To conclude, the present study described our submission for the Semeval 2018 Shared Task on Affect in Tweets. We participated in four Spanish subtasks and our submissions ranked second, second, fourth and fifth place. Our study aimed to investigate whether the automatic generation of additional training data through translation and semi-supervised learning, as well as the creation of stepwise ensembles, increase the performance of our Spanish-language models. Strong support was found for the translation and semi-supervised learning approaches; our best models for all subtasks use either one of these approaches. These results suggest that both of these additional data resources are beneficial when determining emotion intensity (for Spanish). However, the creation of a stepwise ensemble from the best models did not result in better performance compared to simply averaging the models. In addition, some signs of overfitting on the dev set were found. In future work, we would like to apply the methods (translation and semi-supervised learning) used on Spanish on other low-resource languages and potentially also on other tasks.
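As a rough illustration of the agreement-based filtering used in the semi-supervised learning step above, the sketch below keeps a silver instance only when the spread of ten models' predictions stays under a threshold. The threshold, the instance cap and the use of the mean prediction as the silver label are placeholder choices; the paper tunes the first two per subtask.

```python
import numpy as np

def filter_silver(models, X_silver, threshold=0.2, max_instances=1000):
    """Keep silver instances on which ten independently trained models agree."""
    preds = np.stack([m.predict(X_silver) for m in models])   # (n_models, n_silver)
    spread = preds.max(axis=0) - preds.min(axis=0)
    keep = np.where(spread <= threshold)[0][:max_instances]
    return keep, preds.mean(axis=0)[keep]                      # indices, silver labels

# usage sketch (names are placeholders):
# idx, y_silver = filter_silver(ten_models, X_silver)
# X_aug = np.vstack([X_train, X_silver[idx]])
# y_aug = np.concatenate([y_train, y_silver])
# final_model.fit(X_aug, y_aug)   # retrain on the augmented training set
```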
What were the scores of their system?
column Ens Test in Table TABREF19
2,424
qasper
3k
Introduction The automatic processing of medical texts and documents plays an increasingly important role in the recent development of the digital health area. To enable dedicated Natural Language Processing (NLP) that is highly accurate with respect to medically relevant categories, manually annotated data from this domain is needed. One category of high interest and relevance are medical entities. Only very few annotated corpora in the medical domain exist. Many of them focus on the relation between chemicals and diseases or proteins and diseases, such as the BC5CDR corpus BIBREF0, the Comparative Toxicogenomics Database BIBREF1, the FSU PRotein GEne corpus BIBREF2 or the ADE (adverse drug effect) corpus BIBREF3. The NCBI Disease Corpus BIBREF4 contains condition mention annotations along with annotations of symptoms. Several new corpora of annotated case reports were made available recently. grouin-etal-2019-clinical presented a corpus with medical entity annotations of clinical cases written in French, copdPhenotype presented a corpus focusing on phenotypic information for chronic obstructive pulmonary disease while 10.1093/database/bay143 presented a corpus focusing on identifying main finding sentences in case reports. The corpus most comparable to ours is the French corpus of clinical case reports by grouin-etal-2019-clinical. Their annotations are based on UMLS semantic types. Even though there is an overlap in annotated entities, semantic classes are not the same. Lab results are subsumed under findings in our corpus and are not annotated as their own class. Factors extend beyond gender and age and describe any kind of risk factor that contributes to a higher probability of having a certain disease. Our corpus includes additional entity types. We annotate conditions, findings (including medical findings such as blood values), factors, and also modifiers which indicate the negation of other entities as well as case entities, i. e., entities specific to one case report. An overview is available in Table TABREF3. A Corpus of Medical Case Reports with Medical Entity Annotation ::: Annotation tasks Case reports are standardized in the CARE guidelines BIBREF5. They represent a detailed description of the symptoms, signs, diagnosis, treatment, and follow-up of an individual patient. We focus on documents freely available through PubMed Central (PMC). The presentation of the patient's case can usually be found in a dedicated section or the abstract. We perform a manual annotation of all mentions of case entities, conditions, findings, factors and modifiers. The scope of our manual annotation is limited to the presentation of a patient's signs and symptoms. In addition, we annotate the title of the case report. A Corpus of Medical Case Reports with Medical Entity Annotation ::: Annotation Guidelines We annotate the following entities: case entity marks the mention of a patient. A case report can contain more than one case description. Therefore, all the findings, factors and conditions related to one patient are linked to the respective case entity. Within the text, this entity is often represented by the first mention of the patient and overlaps with the factor annotations which can, e. g., mark sex and age (cf. Figure FIGREF12). condition marks a medical disease such as pneumothorax or dislocation of the shoulder. factor marks a feature of a patient which might influence the probability for a specific diagnosis. It can be immutable (e. 
g., sex and age), describe a specific medical history (e. g., diabetes mellitus) or a behaviour (e. g., smoking). finding marks a sign or symptom a patient shows. This can be visible (e. g., rash), described by a patient (e. g., headache) or measurable (e. g., decreased blood glucose level). negation modifier explicitly negate the presence of a certain finding usually setting the case apart from common cases. We also annotate relations between these entities, where applicable. Since we work on case descriptions, the anchor point of these relations is the case that is described. The following relations are annotated: has relations exist between a case entity and factor, finding or condition entities. modifies relations exist between negation modifiers and findings. causes relations exist between conditions and findings. Example annotations are shown in Figure FIGREF16. A Corpus of Medical Case Reports with Medical Entity Annotation ::: Annotators We asked medical doctors experienced in extracting knowledge related to medical entities from texts to annotate the entities described above. Initially, we asked four annotators to test our guidelines on two texts. Subsequently, identified issues were discussed and resolved. Following this pilot annotation phase, we asked two different annotators to annotate two case reports according to our guidelines. The same annotators annotated an overall collection of 53 case reports. Inter-annotator agreement is calculated based on two case reports. We reach a Cohen's kappa BIBREF6 of 0.68. Disagreements mainly appear for findings that are rather unspecific such as She no longer eats out with friends which can be seen as a finding referring to “avoidance behaviour”. A Corpus of Medical Case Reports with Medical Entity Annotation ::: Annotation Tools and Format The annotation was performed using WebAnno BIBREF7, a web-based tool for linguistic annotation. The annotators could choose between a pre-annotated version or a blank version of each text. The pre-annotated versions contained suggested entity spans based on string matches from lists of conditions and findings synonym lists. Their quality varied widely throughout the corpus. The blank version was preferred by the annotators. We distribute the corpus in BioC JSON format. BioC was chosen as it allows us to capture the complexities of the annotations in the biomedical domain. It represented each documents properties ranging from full text, individual passages/sentences along with captured annotations and relationships in an organized manner. BioC is based on character offsets of annotations and allows the stacking of different layers. A Corpus of Medical Case Reports with Medical Entity Annotation ::: Corpus Overview The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total. However, as mentioned above, only case presentation sections, headings and abstracts are annotated. The numbers of annotated entities are summarized in Table TABREF24. Findings are the most frequently annotated type of entity. This makes sense given that findings paint a clinical picture of the patient's condition. 
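A minimal sketch of the inter-annotator agreement computation reported above is given below. The label sequences are toy examples, and computing Cohen's kappa over aligned token-level labels is one reasonable reading of how the agreement on the two doubly-annotated case reports could be obtained.

```python
from sklearn.metrics import cohen_kappa_score

# aligned token-level labels from two annotators on the same text (toy example)
annotator_a = ["O", "B-Finding", "I-Finding", "O", "B-Condition", "O"]
annotator_b = ["O", "B-Finding", "O",         "O", "B-Condition", "O"]

print(f"Cohen's kappa: {cohen_kappa_score(annotator_a, annotator_b):.2f}")
```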
The number of tokens per entity ranges from one token for all types to 5 tokens for cases (average length 3.1), nine tokens for conditions (average length 2.0), 16 tokens for factors (average length 2.5), 25 tokens for findings (average length 2.6) and 18 tokens for modifiers (average length 1.4) (cf. Table TABREF24). Examples of rather long entities are given in Table TABREF25. Entities can appear in a discontinuous way. We model this as a relation between two spans which we call “discontinuous” (cf. Figure FIGREF26). Especially findings often appear as discontinuous entities, we found 543 discontinuous finding relations. The numbers for conditions and factors are lower with seven and two, respectively. Entities can also be nested within one another. This happens either when the span of one annotation is completely embedded in the span of another annotation (fully-nested; cf. Figure FIGREF12), or when there is a partial overlapping between the spans of two different entities (partially-nested; cf. Figure FIGREF12). There is a high number of inter-sentential relations in the corpus (cf. Table TABREF27). This can be explained by the fact that the case entity occurs early in each document; furthermore, it is related to finding and factor annotations that are distributed across different sentences. The most frequently annotated relation in our corpus is the has-relation between a case entity and the findings related to that case. This correlates with the high number of finding entities. The relations contained in our corpus are summarized in Table TABREF27. Baseline systems for Named Entity Recognition in medical case reports We evaluate the corpus using Named Entity Recognition (NER), i. e., the task of finding mentions of concepts of interest in unstructured text. We focus on detecting cases, conditions, factors, findings and modifiers in case reports (cf. Section SECREF6). We approach this as a sequence labeling problem. Four systems were developed to offer comparable robust baselines. The original documents are pre-processed (sentence splitting and tokenization with ScispaCy). We do not perform stop word removal or lower-casing of the tokens. The BIO labeling scheme is used to capture the order of tokens belonging to the same entity type and enable span-level detection of entities. Detection of nested and/or discontinuous entities is not supported. The annotated corpus is randomized and split in five folds using scikit-learn BIBREF9. Each fold has a train, test and dev split with the test split defined as .15% of the train split. This ensures comparability between the presented systems. Baseline systems for Named Entity Recognition in medical case reports ::: Conditional Random Fields Conditional Random Fields (CRF) BIBREF10 are a standard approach when dealing with sequential data in the context of sequence labeling. We use a combination of linguistic and semantic features, with a context window of size five, to describe each of the tokens and the dependencies between them. Hyper-parameter optimization is performed using randomized search and cross validation. Span-based F1 score is used as the optimization metric. Baseline systems for Named Entity Recognition in medical case reports ::: BiLSTM-CRF Prior to the emergence of deep neural language models, BiLSTM-CRF models BIBREF11 had achieved state-of-the-art results for the task of sequence labeling. We use a BiLSTM-CRF model with both word-level and character-level input. 
BioWordVec BIBREF12 pre-trained word embeddings are used in the embedding layer for the input representation. A bidirectional LSTM layer is applied to a multiplication of the two input representations. Finally, a CRF layer is applied to predict the sequence of labels. Dropout and L1/L2 regularization is used where applicable. He (uniform) initialization BIBREF13 is used to initialize the kernels of the individual layers. As the loss metric, CRF-based loss is used, while optimizing the model based on the CRF Viterbi accuracy. Additionally, span-based F1 score is used to serialize the best performing model. We train for a maximum of 100 epochs, or until an early stopping criterion is reached (no change in validation loss value grater than 0.01 for ten consecutive epochs). Furthermore, Adam BIBREF14 is used as the optimizer. The learning rate is reduced by a factor of 0.3 in case no significant increase of the optimization metric is achieved in three consecutive epochs. Baseline systems for Named Entity Recognition in medical case reports ::: Multi-Task Learning Multi-Task Learning (MTL) BIBREF15 has become popular with the progress in deep learning. This model family is characterized by simultaneous optimization of multiple loss functions and transfer of knowledge achieved this way. The knowledge is transferred through the use of one or multiple shared layers. Through finding supporting patterns in related tasks, MTL provides better generalization on unseen cases and the main tasks we are trying to solve. We rely on the model presented by bekoulis2018joint and reuse the implementation provided by the authors. The model jointly trains two objectives supported by the dataset: the main task of NER and a supporting task of Relation Extraction (RE). Two separate models are developed for each of the tasks. The NER task is solved with the help of a BiLSTM-CRF model, similar to the one presented in Section SECREF32 The RE task is solved by using a multi-head selection approach, where each token can have none or more relationships to in-sentence tokens. Additionally, this model also leverages the output of the NER branch model (the CRF prediction) to learn label embeddings. Shared layers consist of a concatenation of word and character embeddings followed by two bidirectional LSTM layers. We keep most of the parameters suggested by the authors and change (1) the number of training epochs to 100 to allow the comparison to other deep learning approaches in this work, (2) use label embeddings of size 64, (3) allow gradient clipping and (4) use $d=0.8$ as the pre-trained word embedding dropout and $d=0.5$ for all other dropouts. $\eta =1^{-3}$ is used as the learning rate with the Adam optimizer and tanh activation functions across layers. Although it is possible to use adversarial training BIBREF16, we omit from using it. We also omit the publication of results for the task of RE as we consider it to be a supporting task and no other competing approaches have been developed. Baseline systems for Named Entity Recognition in medical case reports ::: BioBERT Deep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT) outperformed previous state-of-the-art methods by a large margin on various NLP tasks BIBREF17. For our experiments, we use BioBERT, an adaptation of BERT for the biomedical domain, pre-trained on PubMed abstracts and PMC full-text articles BIBREF18. 
The BERT architecture for deriving text representations uses 12 hidden layers, consisting of 768 units each. For NER, token level BIO-tag probabilities are computed with a single output layer based on the representations from the last layer of BERT. We fine-tune the model on the entity recognition task during four training epochs with batch size $b=32$, dropout probability $d=0.1$ and learning rate $\eta =2^{-5}$. These hyper-parameters are proposed by Devlin2018 for BERT fine-tuning. Baseline systems for Named Entity Recognition in medical case reports ::: Evaluation To evaluate the performance of the four systems, we calculate the span-level precision (P), recall (R) and F1 scores, along with corresponding micro and macro scores. The reported values are shown in Table TABREF29 and are averaged over five folds, utilising the seqeval framework. With a macro avg. F1-score of 0.59, MTL achieves the best result with a significant margin compared to CRF, BiLSTM-CRF and BERT. This confirms the usefulness of jointly training multiple objectives (minimizing multiple loss functions), and enabling knowledge transfer, especially in a setting with limited data (which is usually the case in the biomedical NLP domain). This result also suggest the usefulness of BioBERT for other biomedical datasets as reported by Lee2019. Despite being a rather standard approach, CRF outperforms the more elaborated BiLSTM-CRF, presumably due to data scarcity and class imbalance. We hypothesize that an increase in training data would yield better results for BiLSTM-CRF but not outperform transfer learning approach of MTL (or even BioBERT). In contrast to other common NER corpora, like CoNLL 2003, even the best baseline system only achieves relatively low scores. This outcome is due to the inherent difficulty of the task (annotators are experienced medical doctors) and the small number of training samples. Conclusion We present a new corpus, developed to facilitate the processing of case reports. The corpus focuses on five distinct entity types: cases, conditions, factors, findings and modifiers. Where applicable, relationships between entities are also annotated. Additionally, we annotate discontinuous entities with a special relationship type (discontinuous). The corpus presented in this paper is the very first of its kind and a valuable addition to the scarce number of corpora available in the field of biomedical NLP. Its complexity, given the discontinuous nature of entities and a high number of nested and multi-label entities, poses new challenges for NLP methods applied for NER and can, hence, be a valuable source for insights into what entities “look like in the wild”. Moreover, it can serve as a playground for new modelling techniques such as the resolution of discontinuous entities as well as multi-task learning given the combination of entities and their relations. We provide an evaluation of four distinct NER systems that will serve as robust baselines for future work but which are, as of yet, unable to solve all the complex challenges this dataset holds. A functional service based on the presented corpus is currently being integrated, as a NER service, in the QURATOR platform BIBREF20. Acknowledgments The research presented in this article is funded by the German Federal Ministry of Education and Research (BMBF) through the project QURATOR (Unternehmen Region, Wachstumskern, grant no. 03WKDA1A), see http://qurator.ai. 
We want to thank our medical experts for their help annotating the data set, especially Ashlee Finckh and Sophie Klopfenstein.
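For reference, the snippet below sketches the span-level evaluation used for the baselines above, via the seqeval framework mentioned in the Evaluation section; the BIO-tagged sequences are toy examples rather than corpus data.

```python
from seqeval.metrics import classification_report, f1_score

# toy BIO-tagged sequences; each inner list is one sentence
y_true = [["O", "B-Finding", "I-Finding", "O", "B-Condition"]]
y_pred = [["O", "B-Finding", "O",         "O", "B-Condition"]]

print(f"span-level F1: {f1_score(y_true, y_pred):.2f}")
print(classification_report(y_true, y_pred))
```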
How large is the corpus?
8,275 sentences and 167,739 words in total
2,669
qasper
3k
Introduction Deep learning systems have shown a lot of promise for extractive Question Answering (QA), with performance comparable to humans when large scale data is available. However, practitioners looking to build QA systems for specific applications may not have the resources to collect tens of thousands of questions on corpora of their choice. At the same time, state-of-the-art machine reading systems do not lend well to low-resource QA settings where the number of labeled question-answer pairs are limited (c.f. Table 2 ). Semi-supervised QA methods like BIBREF0 aim to improve this performance by leveraging unlabeled data which is easier to collect. In this work, we present a semi-supervised QA system which requires the end user to specify a set of base documents and only a small set of question-answer pairs over a subset of these documents. Our proposed system consists of three stages. First, we construct cloze-style questions (predicting missing spans of text) from the unlabeled corpus; next, we use the generated clozes to pre-train a powerful neural network model for extractive QA BIBREF1 , BIBREF2 ; and finally, we fine-tune the model on the small set of provided QA pairs. Our cloze construction process builds on a typical writing phenomenon and document structure: an introduction precedes and summarizes the main body of the article. Many large corpora follow such a structure, including Wikipedia, academic papers, and news articles. We hypothesize that we can benefit from the un-annotated corpora to better answer various questions – at least ones that are lexically similar to the content in base documents and directly require factual information. We apply the proposed system on three datasets from different domains – SQuAD BIBREF3 , TriviaQA-Web BIBREF4 and the BioASQ challenge BIBREF5 . We observe significant improvements in a low-resource setting across all three datasets. For SQuAD and TriviaQA, we attain an F1 score of more than 50% by merely using 1% of the training data. Our system outperforms the approaches for semi-supervised QA presented in BIBREF0 , and a baseline which uses the same unlabeled data but with a language modeling objective for pretraining. In the BioASQ challenge, we outperform the best performing system from previous year's challenge, improving over a baseline which does transfer learning from the SQuAD dataset. Our analysis reveals that questions which ask for factual information and match to specific parts of the context documents benefit the most from pretraining on automatically constructed clozes. Related Work Semi-supervised learning augments the labeled dataset $L$ with a potentially larger unlabeled dataset $U$ . BIBREF0 presented a model, GDAN, which trained an auxiliary neural network to generate questions from passages by reinforcement learning, and augment the labeled dataset with the generated questions to train the QA model. Here we use a much simpler heuristic to generate the auxiliary questions, which also turns out to be more effective as we show superior performance compared to GDAN. Several approaches have been suggested for generating natural questions BIBREF6 , BIBREF7 , BIBREF8 , however none of them show a significant improvement of using the generated questions in a semi-supervised setting. Recent papers also use unlabeled data for QA by training large language models and extracting contextual word vectors from them to input to the QA model BIBREF9 , BIBREF10 , BIBREF11 . 
The applicability of this method in the low-resource setting is unclear, as the extra inputs increase the number of parameters in the QA model; however, our pretraining can be easily applied to these models as well. Domain adaptation (and transfer learning) leverages existing large scale datasets from a source domain (or task) to improve performance on a target domain (or task). For deep learning and QA, a common approach is to pretrain on the source dataset and then fine-tune on the target dataset BIBREF12 , BIBREF13 . BIBREF14 used SQuAD as a source for the target BioASQ dataset, and BIBREF15 used Book Test BIBREF16 as source for the target SQuAD dataset. BIBREF17 transferred model layers learned on the tasks of sequence labeling, text classification and relation classification to show small improvements on SQuAD. All these works use manually curated source datasets, which are themselves expensive to collect. Instead, we show that it is possible to automatically construct the source dataset from the same domain as the target, which turns out to be more beneficial in terms of performance as well (c.f. Section "Experiments & Results"). Several cloze datasets which use heuristics for construction have been proposed in the literature BIBREF18 , BIBREF19 , BIBREF20 . We further examine the usability of such a dataset in a semi-supervised setting. Methodology Our system comprises the following three steps: Cloze generation: Most documents typically follow a template: they begin with an introduction that provides an overview and a brief summary of what is to follow. We assume such a structure while constructing our cloze style questions. When there is no clear demarcation, we treat the first $K\%$ (a hyperparameter, in our case 20%) of the document as the introduction. While noisy, this heuristic generates a large number of clozes given any corpus, which we found to be beneficial for semi-supervised learning despite the noise. We use a standard NLP pipeline based on Stanford CoreNLP (for SQuAD, TriviaQA and PubMed) and the BANNER Named Entity Recognizer (only for PubMed articles) to identify entities and phrases. Assume that a document comprises introduction sentences $\lbrace q_1, q_2, ... q_n\rbrace $, and the remaining passages $\lbrace p_1, p_2, .. p_m\rbrace $. Additionally, let us say that each sentence $q_i$ in the introduction is composed of words $\lbrace w_1, w_2, ... w_{l_{q_i}}\rbrace $, where $l_{q_i}$ is the length of $q_i$. We consider a $\text{match}(q_i, p_j)$ if there is an exact string match of a sequence of words $\lbrace w_k, w_{k+1}, .. w_{l_{q_i}}\rbrace $ between the sentence $q_i$ and passage $p_j$. If this sequence is either a noun phrase, verb phrase, adjective phrase or a named entity in $p_j$, as recognized by CoreNLP or BANNER, we select it as an answer span $A$. Additionally, we use $p_j$ as the passage $P$ and form a cloze question $Q$ from the answer-bearing sentence $q_i$ by replacing $A$ with a placeholder. As a result, we obtain passage-question-answer ($P$, $Q$, $A$) triples (Table 1 shows an example). As a post-processing step, we prune out ($P$, $Q$, $A$) triples where the word overlap between the question (Q) and passage (P) is less than 2 words (after excluding the stop words).
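The heuristic can be summarized in a few lines of Python. The sketch below is a simplified rendering: candidate answer spans come from a trivial stand-in for the CoreNLP/BANNER chunking step, the stop-word list is a toy, and the placeholder token is arbitrary.

```python
import re
from typing import List, Tuple

STOP = {"the", "a", "an", "of", "in", "to", "and", "is", "was", "for"}   # toy stop list

def answer_candidates(passage: str) -> List[str]:
    """Stand-in for the CoreNLP/BANNER step: here, just capitalized word spans."""
    return re.findall(r"[A-Z][\w-]+(?:\s[A-Z][\w-]+)*", passage)

def make_clozes(sentences: List[str], intro_frac: float = 0.2,
                placeholder: str = "@placeholder") -> List[Tuple[str, str, str]]:
    k = max(1, int(len(sentences) * intro_frac))
    intro, body = sentences[:k], sentences[k:]
    triples = []
    for q in intro:                                    # candidate cloze sentences
        for p in body:                                 # candidate context passages
            for span in answer_candidates(p):
                if span in q:                          # exact string match q_i <-> p_j
                    cloze = q.replace(span, placeholder, 1)
                    overlap = (set(cloze.lower().split()) & set(p.lower().split())) - STOP
                    if len(overlap) >= 2:              # prune low-overlap (P, Q, A) triples
                        triples.append((p, cloze, span))
    return triples
```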
The process relies on the fact that answer candidates from the introduction are likely to be discussed in detail in the remainder of the article. In effect, the cloze question from the introduction and the matching paragraph in the body forms a question and context passage pair. We create two cloze datasets, one each from Wikipedia corpus (for SQuAD and TriviaQA) and PUBMed academic papers (for the BioASQ challenge), consisting of 2.2M and 1M clozes respectively. From analyzing the cloze data manually, we were able to answer 76% times for the Wikipedia set and 80% times for the PUBMed set using the information in the passage. In most cases the cloze paraphrased the information in the passage, which we hypothesized to be a useful signal for the downstream QA task. We also investigate the utility of forming subsets of the large cloze corpus, where we select the top passage-question-answer triples, based on the different criteria, like i) jaccard similarity of answer bearing sentence in introduction and the passage ii) the tf-idf scores of answer candidates and iii) the length of answer candidates. However, we empirically find that we were better off using the entire set rather than these subsets. Pre-training: We make use of the generated cloze dataset to pre-train an expressive neural network designed for the task of reading comprehension. We work with two publicly available neural network models – the GA Reader BIBREF2 (to enable comparison with prior work) and BiDAF + Self-Attention (SA) model from BIBREF1 (which is among the best performing models on SQuAD and TriviaQA). After pretraining, the performance of BiDAF+SA on a dev set of the (Wikipedia) cloze questions is 0.58 F1 score and 0.55 Exact Match (EM) score. This implies that the cloze corpus is neither too easy, nor too difficult to answer. Fine Tuning: We fine tune the pre-trained model, from the previous step, over a small set of labelled question-answer pairs. As we shall later see, this step is crucial, and it only requires a handful of labelled questions to achieve a significant proportion of the performance typically attained by training on tens of thousands of questions. Datasets We apply our system to three datasets from different domains. SQuAD BIBREF3 consists of questions whose answers are free form spans of text from passages in Wikipedia articles. We follow the same setting as in BIBREF0 , and split $10\%$ of training questions as the test set, and report performance when training on subsets of the remaining data ranging from $1\%$ to $90\%$ of the full set. We also report the performance on the dev set when trained on the full training set ( $1^\ast $ in Table 2 ). We use the same hyperparameter settings as in prior work. We compare and study four different settings: 1) the Supervised Learning (SL) setting, which is only trained on the supervised data, 2) the best performing GDAN model from BIBREF0 , 3) pretraining on a Language Modeling (LM) objective and fine-tuning on the supervised data, and 4) pretraining on the Cloze dataset and fine-tuning on the supervised data. The LM and Cloze methods use exactly the same data for pretraining, but differ in the loss functions used. We report F1 and EM scores on our test set using the official evaluation scripts provided by the authors of the dataset. TriviaQA BIBREF4 comprises of over 95K web question-answer-evidence triples. Like SQuAD, the answers are spans of text. Similar to the setting in SQuAD, we create multiple smaller subsets of the entire set. 
For our semi-supervised QA system, we use the BiDAF+SA model BIBREF1 – the highest performing publicly available system for TrivaQA. Here again, we compare the supervised learning (SL) settings against the pretraining on Cloze set and fine tuning on the supervised set. We report F1 and EM scores on the dev set. We also test on the BioASQ 5b dataset, which consists of question-answer pairs from PubMed abstracts. We use the publicly available system from BIBREF14 , and follow the exact same setup as theirs, focusing only on factoid and list questions. For this setting, there are only 899 questions for training. Since this is already a low-resource problem we only report results using 5-fold cross-validation on all the available data. We report Mean Reciprocal Rank (MRR) on the factoid questions, and F1 score for the list questions. Main Results Table 2 shows a comparison of the discussed settings on both SQuAD and TriviaQA. Without any fine-tuning (column 0) the performance is low, probably because the model never saw a real question, but we see significant gains with Cloze pretraining even with very little labeled data. The BiDAF+SA model, exceeds an F1 score of $50\%$ with only $1\%$ of the training data (454 questions for SQuAD, and 746 questions for TriviaQA), and approaches $90\%$ of the best performance with only $10\%$ labeled data. The gains over the SL setting, however, diminish as the size of the labeled set increases and are small when the full dataset is available. Cloze pretraining outperforms the GDAN baseline from BIBREF0 using the same SQuAD dataset splits. Additionally, we show improvements in the $90\%$ data case unlike GDAN. Our approach is also applicable in the extremely low-resource setting of $1\%$ data, which we suspect GDAN might have trouble with since it uses the labeled data to do reinforcement learning. Furthermore, we are able to use the same cloze dataset to improve performance on both SQuAD and TriviaQA datasets. When we use the same unlabeled data to pre-train with a language modeling objective, the performance is worse, showing the bias we introduce by constructing clozes is important. On the BioASQ dataset (Table 3 ) we again see a significant improvement when pretraining with the cloze questions over the supervised baseline. The improvement is smaller than what we observe with SQuAD and TriviaQA datasets – we believe this is because questions are generally more difficult in BioASQ. BIBREF14 showed that pretraining on SQuAD dataset improves the downstream performance on BioASQ. Here, we show a much larger improvement by pretraining on cloze questions constructed in an unsupervised manner from the same domain. Analysis Regression Analysis: To understand which types of questions benefit from pre-training, we pre-specified certain features (see Figure 1 right) for each of the dev set questions in SQuAD, and then performed linear regression to predict the F1 score for that question from these features. We predict the F1 scores from the cloze pretrained model ( $y^{\text{cloze}}$ ), the supervised model ( $y^{\text{sl}}$ ), and the difference of the two ( $y^{\text{cloze}}-y^{\text{sl}}$ ), when using $10\%$ of labeled data. The coefficients of the fitted model are shown in Figure 1 (left) along with their std errors. 
Positive coefficients indicate that a high value of that feature is predictive of a high F1 score, and a negative coefficient indicates that a small value of that feature is predictive of a high F1 score (or a high difference of F1 scores from the two models in the case of $y^{\text{cloze}}-y^{\text{sl}}$ ). The two strongest effects we observe are that a high lexical overlap between the question and the sentence containing the answer is indicative of high boost with pretraining, and that a high lexical overlap between the question and the whole passage is indicative of the opposite. This is hardly surprising, since our cloze construction process is biased towards questions which have a similar phrasing to the answer sentences in context. Hence, test questions with a similar property are answered correctly after pretraining, whereas those with a high overlap with the whole passage tend to have lower performance. The pretraining also favors questions with short answers because the cloze construction process produces short answer spans. Also passages and questions which consist of tokens infrequent in the SQuAD training corpus receive a large boost after pretraining, since the unlabeled data covers a larger domain. Performance on question types: Figure 2 shows the average gain in F1 score for different types of questions, when we pretrain on the clozes compared to the supervised case. This analysis is done on the $10\%$ split of the SQuAD training set. We consider two classifications of each question – one determined on the first word (usually a wh-word) of the question (Figure 2 (bottom)) and one based on the output of a separate question type classifier adapted from BIBREF21 . We use the coarse grain labels namely Abbreviation (ABBR), Entity (ENTY), Description (DESC), Human (HUM), Location (LOC), Numeric (NUM) trained on a Logistic Regression classification system . While there is an improvement across the board, we find that abbreviation questions in particular receive a large boost. Also, "why" questions show the least improvement, which is in line with our expectation, since these usually require reasoning or world knowledge which cloze questions rarely require. Conclusion In this paper, we show that pre-training QA models with automatically constructed cloze questions improves the performance of the models significantly, especially when there are few labeled examples. The performance of the model trained only on the cloze questions is poor, validating the need for fine-tuning. Through regression analysis, we find that pretraining helps with questions which ask for factual information located in a specific part of the context. For future work, we plan to explore the active learning setup for this task – specifically, which passages and / or types of questions can we select to annotate, such that there is a maximum performance gain from fine-tuning. We also want to explore how to adapt cloze style pre-training to NLP tasks other than QA. Acknowledgments Bhuwan Dhingra is supported by NSF under grants CCF-1414030 and IIS-1250956 and by grants from Google. Danish Pruthi and Dheeraj Rajagopal are supported by the DARPA Big Mechanism program under ARO contract W911NF-14-1-0436.
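For clarity, the regression analysis described above can be sketched as follows. The feature names are illustrative, and a library that reports standard errors (e.g., statsmodels) would be needed to reproduce the error bars in Figure 1.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

FEATURES = ["answer_length", "q_passage_overlap", "q_answer_sentence_overlap",
            "rare_token_fraction"]                      # illustrative feature names

def fit_gain_regression(features: np.ndarray, f1_cloze: np.ndarray, f1_sl: np.ndarray):
    """Regress the per-question F1 gain of cloze pretraining on question features."""
    gain = f1_cloze - f1_sl                             # y_cloze - y_sl per dev question
    reg = LinearRegression().fit(features, gain)
    return dict(zip(FEATURES, reg.coef_)), reg.intercept_

# a positive coefficient means the feature predicts a larger benefit from pretraining
```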
Is it possible to convert cloze-style questions into natural-looking questions?
Unanswerable
2,764
qasper
3k
Introduction Named Entity Recognition (NER) is a foremost NLP task to label each atomic elements of a sentence into specific categories like "PERSON", "LOCATION", "ORGANIZATION" and othersBIBREF0. There has been an extensive NER research on English, German, Dutch and Spanish language BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, and notable research on low resource South Asian languages like HindiBIBREF6, IndonesianBIBREF7 and other Indian languages (Kannada, Malayalam, Tamil and Telugu)BIBREF8. However, there has been no study on developing neural NER for Nepali language. In this paper, we propose a neural based Nepali NER using latest state-of-the-art architecture based on grapheme-level which doesn't require any hand-crafted features and no data pre-processing. Recent neural architecture like BIBREF1 is used to relax the need to hand-craft the features and need to use part-of-speech tag to determine the category of the entity. However, this architecture have been studied for languages like English, and German and not been applied to languages like Nepali which is a low resource language i.e limited data set to train the model. Traditional methods like Hidden Markov Model (HMM) with rule based approachesBIBREF9,BIBREF10, and Support Vector Machine (SVM) with manual feature-engineeringBIBREF11 have been applied but they perform poor compared to neural. However, there has been no research in Nepali NER using neural network. Therefore, we created the named entity annotated dataset partly with the help of Dataturk to train a neural model. The texts used for this dataset are collected from various daily news sources from Nepal around the year 2015-2016. Following are our contributions: We present a novel Named Entity Recognizer (NER) for Nepali language. To best of our knowledge we are the first to propose neural based Nepali NER. As there are not good quality dataset to train NER we release a dataset to support future research We perform empirical evaluation of our model with state-of-the-art models with relative improvement of upto 10% In this paper, we present works similar to ours in Section SECREF2. We describe our approach and dataset statistics in Section SECREF3 and SECREF4, followed by our experiments, evaluation and discussion in Section SECREF5, SECREF6, and SECREF7. We conclude with our observations in Section SECREF8. To facilitate further research our code and dataset will be made available at github.com/link-yet-to-be-updated Related Work There has been a handful of research on Nepali NER task based on approaches like Support Vector Machine and gazetteer listBIBREF11 and Hidden Markov Model and gazetteer listBIBREF9,BIBREF10. BIBREF11 uses SVM along with features like first word, word length, digit features and gazetteer (person, organization, location, middle name, verb, designation and others). It uses one vs rest classification model to classify each word into different entity classes. However, it does not the take context word into account while training the model. Similarly, BIBREF9 and BIBREF10 uses Hidden Markov Model with n-gram technique for extracting POS-tags. POS-tags with common noun, proper noun or combination of both are combined together, then uses gazetteer list as look-up table to identify the named entities. 
Researchers have shown that the neural networks like CNNBIBREF12, RNNBIBREF13, LSTMBIBREF14, GRUBIBREF15 can capture the semantic knowledge of language better with the help of pre-trained embbeddings like word2vecBIBREF16, gloveBIBREF17 or fasttextBIBREF18. Similar approaches has been applied to many South Asian languages like HindiBIBREF6, IndonesianBIBREF7, BengaliBIBREF19 and In this paper, we present the neural network architecture for NER task in Nepali language, which doesn't require any manual feature engineering nor any data pre-processing during training. First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21. Secondly, we show the comparison between models trained on general word embeddings, word embedding + character-level embedding, word embedding + part-of-speech(POS) one-hot encoding and word embedding + grapheme clustered or sub-word embeddingBIBREF22. The experiments were performed on the dataset that we created and on the dataset received from ILPRL lab. Our extensive study shows that augmenting word embedding with character or grapheme-level representation and POS one-hot encoding vector yields better results compared to using general word embedding alone. Approach In this section, we describe our approach in building our model. This model is partly inspired from multiple models BIBREF20,BIBREF1, andBIBREF2 Approach ::: Bidirectional LSTM We used Bi-directional LSTM to capture the word representation in forward as well as reverse direction of a sentence. Generally, LSTMs take inputs from left (past) of the sentence and computes the hidden state. However, it is proven beneficialBIBREF23 to use bi-directional LSTM, where, hidden states are computed based from right (future) of sentence and both of these hidden states are concatenated to produce the final output as $h_t$=[$\overrightarrow{h_t}$;$\overleftarrow{h_t}$], where $\overrightarrow{h_t}$, $\overleftarrow{h_t}$ = hidden state computed in forward and backward direction respectively. Approach ::: Features ::: Word embeddings We have used Word2Vec BIBREF16, GloVe BIBREF17 and FastText BIBREF18 word vectors of 300 dimensions. These vectors were trained on the corpus obtained from Nepali National Corpus. This pre-lemmatized corpus consists of 14 million words from books, web-texts and news papers. This corpus was mixed with the texts from the dataset before training CBOW and skip-gram version of word2vec using gensim libraryBIBREF24. This trained model consists of vectors for 72782 unique words. Light pre-processing was performed on the corpus before training it. For example, invalid characters or characters other than Devanagari were removed but punctuation and numbers were not removed. We set the window context at 10 and the rare words whose count is below 5 are dropped. These word embeddings were not frozen during the training session because fine-tuning word embedding help achieve better performance compared to frozen oneBIBREF20. We have used fasttext embeddings in particular because of its sub-word representation ability, which is very useful in highly inflectional language as shown in Table TABREF25. We have trained the word embedding in such a way that the sub-word size remains between 1 and 4. 
We particularly chose this size because in Nepali language a single letter can also be a word, for example e, t, C, r, l, n, u and a single character (grapheme) or sub-word can be formed after mixture of dependent vowel signs with consonant letters for example, C + O + = CO, here three different consonant letters form a single sub-word. The two-dimensional visualization of an example word npAl is shown in FIGREF14. Principal Component Analysis (PCA) technique was used to generate this visualization which helps use to analyze the nearest neighbor words of a given sample word. 84 and 104 nearest neighbors were observed using word2vec and fasttext embedding respectively on the same corpus. Approach ::: Features ::: Character-level embeddings BIBREF20 and BIBREF2 successfully presented that the character-level embeddings, extracted using CNN, when combined with word embeddings enhances the NER model performance significantly, as it is able to capture morphological features of a word. Figure FIGREF7 shows the grapheme-level CNN used in our model, where inputs to CNN are graphemes. Character-level CNN is also built in similar fashion, except the inputs are characters. Grapheme or Character -level embeddings are randomly initialized from [0,1] with real values with uniform distribution of dimension 30. Approach ::: Features ::: Grapheme-level embeddings Grapheme is atomic meaningful unit in writing system of any languages. Since, Nepali language is highly morphologically inflectional, we compared grapheme-level representation with character-level representation to evaluate its effect. For example, in character-level embedding, each character of a word npAl results into n + + p + A + l has its own embedding. However, in grapheme level, a word npAl is clustered into graphemes, resulting into n + pA + l. Here, each grapheme has its own embedding. This grapheme-level embedding results good scores on par with character-level embedding in highly inflectional languages like Nepali, because graphemes also capture syntactic information similar to characters. We created grapheme clusters using uniseg package which is helpful in unicode text segmentations. Approach ::: Features ::: Part-of-speech (POS) one hot encoding We created one-hot encoded vector of POS tags and then concatenated with pre-trained word embeddings before passing it to BiLSTM network. A sample of data is shown in figure FIGREF13. Dataset Statistics ::: OurNepali dataset Since, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset. This dataset contains the sentences collected from daily newspaper of the year 2015-2016. This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset, for example all punctuations and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in standard CoNLL-2003 IO formatBIBREF25. Since, this dataset is not lemmatized originally, we lemmatized only the post-positions like Ek, kO, l, mA, m, my, jF, sg, aEG which are just the few examples among 299 post positions in Nepali language. We obtained these post-positions from sanjaalcorps and added few more to match our dataset. We will be releasing this list in our github repository. We found out that lemmatizing the post-positions boosted the F1 score by almost 10%. 
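The grapheme clustering step can be illustrated with the uniseg package mentioned above (the import path is assumed from its documentation). The example word is नेपाल ("Nepal"), which appears in transliterated form in the text above.

```python
from uniseg.graphemecluster import grapheme_clusters

word = "नेपाल"                               # "Nepal"
characters = list(word)                      # ['न', 'े', 'प', 'ा', 'ल'] - 5 characters
graphemes = list(grapheme_clusters(word))    # ['ने', 'पा', 'ल']          - 3 graphemes
print(characters, graphemes)
```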
In order to label our dataset with POS tags, we first created a POS-annotated dataset of 6,946 sentences and 16,225 unique words extracted from the POS-tagged Nepali National Corpus and trained a BiLSTM model with 95.14% accuracy, which was then used to create POS tags for our dataset. The dataset released in our github repository contains one word per line with space-separated POS tags and entity tags. Sentences are separated by an empty line. A sample sentence from the dataset is presented in FIGREF13. Dataset Statistics ::: ILPRL dataset After much time, we received a dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows the standard CoNLL-2003 IOB format BIBREF25 with POS tags and was prepared by the ILPRL Lab, KU and KEIV Technologies. A few corrections, such as fixing the NER tags, had to be made to the dataset. The statistics of both datasets are presented in table TABREF23. Table TABREF24 presents the total entities (PER, LOC, ORG and MISC) in both of the datasets used in our experiments. Each dataset is divided into three parts, with 64%, 16% and 20% of the total forming the training, development and test sets respectively. Experiments In this section, we present the details of training our neural networks. The network architectures are implemented using the PyTorch framework BIBREF26. Training is performed on a single Nvidia Tesla P100 SXM2. We first run our experiments on BiLSTM, BiLSTM-CNN, BiLSTM-CRF and BiLSTM-CNN-CRF using the hyper-parameters mentioned in Table TABREF30. Training and evaluation are done at the sentence level. The RNN variants are initialized randomly from $(-\sqrt{k},\sqrt{k})$ where $k=\frac{1}{hidden\_size}$. We first loaded our dataset and built the vocabulary using the torchtext library, which eased data loading through its SequenceTaggingDataset class. We trained our models on the shuffled training set using the Adam optimizer with the hyper-parameters mentioned in table TABREF30. All our models were trained with a single LSTM layer. We found that Adam gave better performance and faster convergence than Stochastic Gradient Descent (SGD). We chose these hyper-parameters after many ablation studies. A dropout of 0.5 is applied after the LSTM layer. For the CNN, we used 30 different filters of sizes 3, 4 and 5. The embeddings of each character or grapheme in a given word were passed through a pipeline of convolution, rectified linear unit and max-pooling. The resulting vectors were concatenated, a dropout of 0.5 was applied, and the result was passed through a linear layer to obtain an embedding of size 30 for the given word. This embedding is concatenated with the word embedding, which is in turn concatenated with the one-hot POS vector. Experiments ::: Tagging Scheme For our experiments we trained our models using the IO (Inside, Outside) format for both datasets; hence the data do not contain any B-type annotation, unlike the BIO (Beginning, Inside, Outside) scheme. Experiments ::: Early Stopping We used a simple early stopping technique: if the validation loss does not decrease for 10 epochs, training is stopped; otherwise training runs for up to 100 epochs. In our experience, training usually stops after around 30-50 epochs.
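A minimal PyTorch sketch of the character/grapheme CNN pipeline described above (convolution, ReLU and max-pooling over 30-dimensional character embeddings, concatenation, dropout of 0.5 and a linear projection to a 30-dimensional word-level vector) is given below. Interpreting "30 different filters of sizes 3, 4 and 5" as 30 filters per kernel size is an assumption, as are the class and variable names; this is an illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Character/grapheme CNN producing a 30-dim word-level embedding."""

    def __init__(self, vocab_size, char_dim=30, num_filters=30,
                 kernel_sizes=(3, 4, 5), out_dim=30, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, char_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            nn.Conv1d(char_dim, num_filters, k, padding=k - 1)
            for k in kernel_sizes
        )
        self.dropout = nn.Dropout(dropout)
        self.proj = nn.Linear(num_filters * len(kernel_sizes), out_dim)

    def forward(self, char_ids):
        # char_ids: (num_words, max_chars_per_word) integer indices
        x = self.embed(char_ids).transpose(1, 2)         # (W, char_dim, L)
        pooled = [torch.relu(conv(x)).max(dim=2).values  # conv -> ReLU -> max-pool
                  for conv in self.convs]
        out = self.dropout(torch.cat(pooled, dim=1))     # concatenate filter outputs
        return self.proj(out)                            # (W, 30) per-word embedding
```

The resulting 30-dimensional vector is then concatenated with the pre-trained word embedding and the one-hot POS vector before the BiLSTM layer, as described above.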
Experiments ::: Hyper-parameters Tuning We searched for the best hyper-parameters by varying the learning rate over [0.1, 0.01, 0.001, 0.0001], the weight decay over [$10^{-1}$, $10^{-2}$, $10^{-3}$, $10^{-4}$, $10^{-5}$, $10^{-6}$, $10^{-7}$], the batch size over [1, 2, 4, 8, 16, 32, 64, 128], and the hidden size over [8, 16, 32, 64, 128, 256, 512, 1024]. Table TABREF30 shows all other hyper-parameters used in our experiments for both datasets. Experiments ::: Effect of Dropout Figure FIGREF31 shows how we ended up choosing 0.5 as the dropout rate. When no dropout layer was used, the F1 score was at its lowest. As we slowly increase the dropout rate, the F1 score gradually increases; however, beyond a dropout rate of 0.5 the F1 score starts to fall. We therefore chose 0.5 as the dropout rate for all other experiments. Evaluation In this section, we present the details of the evaluation and the comparison of our models with other baselines. Table TABREF25 presents the study of the various embeddings and their comparison on the OurNepali dataset. Here, the raw dataset is the dataset in which post-positions are not lemmatized. We can observe that pre-trained embeddings significantly improve the score compared to randomly initialized embeddings. We can also deduce that skip-gram models perform better than CBOW models for both word2vec and fastText. Here, fastText_Pretrained denotes the embedding readily available on the fastText website, while the other embeddings are trained on the Nepali National Corpus as mentioned in sub-section SECREF11. From table TABREF25 we can clearly observe that the model using fastText_Skip Gram embeddings outperforms all other models. Table TABREF35 compares the architectures of all the models we experimented with. The features used for the Stanford CRF classifier are words, letter n-grams of up to length 6, the previous word and the next word. This model is trained until the current function value is less than $1\mathrm{e}{-2}$. The hyper-parameters of the neural network experiments are set as shown in table TABREF30. Since the character-level and grapheme-level embeddings are both randomly initialized, their scores are close. All models are evaluated using the CoNLL-2003 evaluation script BIBREF25 to calculate entity-wise precision, recall and F1 score. Discussion In this paper we show that we can exploit the power of neural networks to train models for downstream NLP tasks such as Named Entity Recognition even in the Nepali language. We showed that the word vectors learned through the fastText skip-gram model perform better than other word embeddings because of their ability to represent sub-words, which is particularly important for capturing the morphological structure of words and sentences in a highly inflectional language like Nepali. This idea can also come in handy for other Devanagari languages, because their written scripts have a similar syntactic structure. We also found that stemming post-positions helps a lot in improving model performance because of the inflectional characteristics of the Nepali language: when we separate out inflections or morphemes, we minimize the variations of the same word, which gives the root word a stronger word vector representation compared to its inflected versions. We can clearly infer from tables TABREF23, TABREF24 and TABREF35 that we need more data to get better results, since the OurNepali dataset is almost ten times bigger than the ILPRL dataset in terms of entities.
Conclusion and Future work In this paper, we proposed a novel NER system for the Nepali language, achieved a relative improvement of up to 10%, and studied different factors affecting NER performance for Nepali. We also presented a neural architecture, BiLSTM+CNN (grapheme-level), which performs on par with BiLSTM+CNN (character-level) under the same configuration. We believe this will help not only the Nepali language but also other languages falling under the umbrella of Devanagari scripts. Our models BiLSTM+CNN (grapheme-level) and BiLSTM+CNN(G)+POS outperform all other models experimented with on the OurNepali and ILPRL datasets respectively. Since this is the first named entity recognition research for the Nepali language using neural networks, there is much room for improvement. We believe that initializing the grapheme-level embedding with fastText embeddings, rather than randomly, might help boost performance. In the future, we plan to apply more recent techniques such as BERT, ELMo and FLAIR to study their effect on a low-resource language like Nepali. We also plan to improve the model using cross-lingual or multi-lingual parameter-sharing techniques by jointly training with other Devanagari languages such as Hindi and Bengali. Finally, we would like to contribute our dataset to the Nepali NLP community to move forward the research in the language understanding domain. We believe there should be a special committee to create and maintain such datasets for Nepali NLP and to organize competitions that would elevate NLP research in Nepal. Some of the future works are listed below:
Proper initialization of the grapheme-level embedding from fastText embeddings.
Applying a robust POS tagger to the Nepali dataset.
Lemmatizing the OurNepali dataset with a robust and efficient lemmatizer.
Improving Nepali language scores with cross-lingual learning techniques.
Creating more datasets using the Wikipedia/Wikidata framework.
Acknowledgments The authors would like to express their sincere thanks to Bal Krishna Bal, Professor at Kathmandu University, for providing the POS-tagged Nepali NER data.
How many sentences does the dataset contain?
3606
Introduction Deep neural networks (DNNs), in particular convolutional and recurrent neural networks with huge architectures, have proven successful in a wide range of audio processing tasks, including speech-to-text [1 - 4], emotion recognition [5 - 8] and speech/non-speech classification (examples of non-speech include noise, music, etc.) [9 - 12]. Training these deep architectures requires a large amount of annotated data; as a result, they cannot be used in low-data-resource scenarios, which are common in speech-based applications [13 - 15]. Apart from collecting a large corpus, annotating the data is also very difficult and requires manual supervision and effort. In particular, annotation of speech for tasks like emotion recognition also suffers from a lack of agreement among annotators [16]. Hence, there is a need to build reliable systems that can work in low-resource scenarios. In this work, we propose a novel approach to classification in low-data-resource scenarios. Our approach involves simultaneously considering more than one sample (in this work, two samples) to train the classifier. We call this approach simultaneous two sample learning (s2sL). The proposed approach is also applicable to low-resource data suffering from class imbalance. The contributions of this paper are: Proposed approach The s2sL approach proposed to address the low-data-resource problem is explained in this section. In this work, we use an MLP (modified to handle our data representation) as the base classifier. Here, we explain the s2sL approach for a two-class classification task. Data representation Consider a two-class classification task with $\lbrace c_1, c_2 \rbrace$ denoting the set of class labels, and let $n_1$ and $n_2$ be the number of samples corresponding to $c_1$ and $c_2$, respectively. In general, to train a classifier, the samples in the training set are provided as input-output pairs as follows:
$\lbrace (x^{c_k}_i,\; y^{c_k}) \rbrace, \quad i = 1, \dots, n_k, \; k \in \lbrace 1, 2 \rbrace \qquad (1)$
where $x^{c_k}_i$ refers to the $d$-dimensional feature vector representing the $i$-th sample of class $c_k$, and $y^{c_k}$ refers to the output label of class $c_k$. In the proposed data representation format, called the simultaneous two sample (s2s) representation, we simultaneously consider two samples as follows:
$\big( [x^{c_1}_i;\; x^{c_2}_j],\; [y^{c_1},\; y^{c_2}] \big) \qquad (2)$
where $x^{c_1}_i$ and $x^{c_2}_j$ are the $d$-dimensional feature vectors representing the $i$-th sample of class $c_1$ and the $j$-th sample of class $c_2$, respectively, and $y^{c_1}$ and $y^{c_2}$ are the output labels of classes $c_1$ and $c_2$, respectively. Hence, in the s2s data representation, we have an input feature vector of length $2d$, i.e., $[x^{c_1}_i;\; x^{c_2}_j]$, and the output label is either $[y^{c_1}, y^{c_2}]$ or $[y^{c_2}, y^{c_1}]$. Each sample can be combined with all of the ($n_1 + n_2$) samples in the dataset. Therefore, by representing the data in the s2s format, the number of samples in the training set increases from $(n_1 + n_2)$ to $(n_1 + n_2)^2$. We hypothesize that the s2s format helps the network not only to learn the characteristics of the two classes separately, but also the differences and similarities between the characteristics of the two classes. Classifier training The MLP, the most commonly used feed-forward neural network, is considered as the base classifier to validate our proposed s2s framework. Generally, MLPs are trained using the data format given by eq. (1).
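For illustration, the sketch below shows how training pairs in the s2s format of eq. (2) could be constructed from a conventional dataset of the form in eq. (1). The array shapes, the pairing of every sample with every sample (including itself) and the stacked one-hot label encoding are assumptions for this sketch rather than the authors' exact implementation.

```python
import numpy as np

def make_s2s_pairs(X1, X2):
    """Build s2s training pairs from class-1 samples X1 and class-2 samples X2.

    Each returned input is the concatenation [x_a; x_b] of two d-dim samples,
    and each label stacks the one-hot labels of the two samples (4 values for
    a two-class task).
    """
    X = np.vstack([X1, X2])                               # (n1 + n2, d)
    onehot = np.vstack([np.tile([1, 0], (len(X1), 1)),    # class c1 -> [1, 0]
                        np.tile([0, 1], (len(X2), 1))])   # class c2 -> [0, 1]
    inputs, labels = [], []
    for a in range(len(X)):
        for b in range(len(X)):                           # pair with every sample
            inputs.append(np.concatenate([X[a], X[b]]))   # length 2d
            labels.append(np.concatenate([onehot[a], onehot[b]]))
    return np.array(inputs), np.array(labels)

X1 = np.random.randn(5, 13)   # toy class-1 samples (e.g., 13-dim MFCC vectors)
X2 = np.random.randn(3, 13)   # toy class-2 samples
pairs, targets = make_s2s_pairs(X1, X2)
print(pairs.shape, targets.shape)   # (64, 26) (64, 4)
```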
To train the MLP on the s2s data representation (as in eq. (2)), the following modifications are made to the MLP architecture (refer to Figure FIGREF4). We use $2d$ units (instead of $d$ units) in the input layer to accept the two samples, i.e., $x^{c_1}_i$ and $x^{c_2}_j$, simultaneously. The structure of the hidden layer in this approach is similar to that of a regular MLP. The number of hidden layers and hidden units can be varied depending upon the complexity of the problem. The number of units in the hidden layer is selected empirically by varying the number of hidden units from 2 to twice the length of the input layer (i.e., 2 to $4d$), and the setting at which the highest performance is obtained is selected. In this paper, we consider only a single hidden layer. Rectified linear units (ReLU) are used in the hidden layer. The output layer consists of a number of units equal to twice the number of classes in the classification task; i.e., the output layer has four units for a two-class task. The sigmoid activation function (not softmax) is used on the output layer units. Unlike a regular MLP, we use sigmoid activations in the output layer with binary cross-entropy as the cost function, because the output labels in the proposed s2s data representation can have more than one unit active at a time (the output is not one-hot encoded) and this condition cannot be handled by the softmax function. As can be seen from Figure FIGREF4, the output layer in our proposed method has outputs $y_1$ and $y_2$, which correspond to the outputs associated with the input feature vectors $x_1$ and $x_2$, respectively. For a two-class classification problem, there are four units in the output layer and the possible output labels are $[y^{c_1}; y^{c_1}]$, $[y^{c_1}; y^{c_2}]$, $[y^{c_2}; y^{c_1}]$ and $[y^{c_2}; y^{c_2}]$, corresponding to the class labels of the two input samples. This architecture is referred to as s2s-MLP. In s2sL, the s2s-MLP is trained using the s2s data representation format with the Adam optimizer. Classifier testing Generally, in the testing phase the feature vector of the test sample is provided as input to the trained MLP and the class label is decided based on the obtained output. In the s2sL method, however, the feature vector of the test sample must also be converted to the s2s data representation format to test the trained s2s-MLP. We propose a testing approach in which the given test sample is combined with a set of preselected reference samples, whose class labels are known a priori, to generate multiple instances of the same test sample as follows:
$[x_t;\; r_m], \quad m = 1, \dots, M \qquad (3)$
where $x_t$ and $r_m$ refer to the $d$-dimensional feature vectors of the test sample and the $m$-th reference sample, respectively, and $M$ is the number of reference samples considered. These reference samples can be selected from either of the two classes. For testing the s2s-MLP (as shown in Figure FIGREF4), each test sample $x_t$ (the same $x_t$ as in (3)) is combined with all the $M$ reference samples $r_1, \dots, r_M$ to form $M$ different instances of the same test sample $x_t$. The corresponding outputs obtained from the s2s-MLP for the $M$ generated instances of $x_t$ are combined by a voting-based decision approach to obtain the final decision. The class label that receives the maximum number of votes is taken as the predicted output label.
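A minimal PyTorch sketch of the s2s-MLP and the voting-based test procedure described above follows. The hidden size, the assumption that the first two output units correspond to the test sample's position in the pair, and all variable names are illustrative choices, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class S2sMLP(nn.Module):
    """MLP over concatenated sample pairs: 2d inputs -> 4 sigmoid outputs."""

    def __init__(self, d, hidden=64):           # hidden size is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * d, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                # two 2-unit label blocks
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))        # sigmoid, not softmax

model = S2sMLP(d=13)
criterion = nn.BCELoss()                          # binary cross-entropy cost
optimizer = torch.optim.Adam(model.parameters())

def train_step(inputs, targets):
    """One step on s2s pairs: inputs (N, 2d) floats, targets (N, 4) in {0, 1}."""
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()

def predict_with_voting(x_test, references):
    """Pair x_test with each reference sample and vote using the output block
    assumed to correspond to the test sample's position in the pair."""
    votes = torch.zeros(2)
    with torch.no_grad():
        for r in references:
            out = model(torch.cat([x_test, r]).unsqueeze(0)).squeeze(0)
            votes[int(torch.argmax(out[:2]))] += 1
    return int(torch.argmax(votes))
```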
Experiments We validate the performance of the proposed s2sL approach with preliminary results on two different tasks, namely speech/music discrimination and emotion classification. We considered the GTZAN Music-Speech dataset [17], consisting of 120 audio files (60 speech and 60 music), for the task of classifying speech and music. Each audio file (of 2 seconds duration) is represented by a 13-dimensional mel-frequency cepstral coefficient (MFCC) vector, which is the average of all the frame-level MFCC vectors (frame size of 30 msec with an overlap of 10 msec). It is to be noted that our main intention for this task is not better feature selection, but to demonstrate the effectiveness of our approach, in particular for low-data scenarios. The standard Berlin speech emotion database (EMO-DB) [18], consisting of 535 utterances covering 7 different emotions, is considered for the task of emotion classification. Each utterance is represented by a 19-dimensional feature vector obtained by applying the feature selection algorithm of the WEKA toolkit [19] to the 384-dimensional utterance-level feature vector produced by the openSMILE toolkit [20]. For two-class classification, we considered the two most confusable emotion pairs, i.e., (Neutral, Sad) and (Anger, Happy). The data for Speech/Music classification (60 speech and 60 music samples) and Neutral/Sad classification (79 neutral and 62 sad utterances) are balanced, whereas the Anger/Happy classification task is imbalanced, with anger forming the majority class (127 samples) and happy the minority class (71 samples). Therefore, we show the performance of s2sL on both balanced and imbalanced datasets. All experimental results are validated using 5-fold cross-validation (80% of the data for training and 20% for testing in each fold). Further, to analyze the effectiveness of s2sL in low-resource scenarios, different proportions of the training data within each fold are used to train the system. For this analysis, we considered 4 different proportions, i.e., INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 of the training data. For instance, INLINEFORM4 means considering only half of the original training data to train the classifier, and INLINEFORM5 means considering the complete training data. 5-fold cross-validation is used for all data proportions. Accuracy (in %) is used as the performance measure for the balanced classification tasks (i.e., Speech/Music and Neutral/Sad), whereas the more preferred INLINEFORM6 measure [21] is used for the imbalanced classification task (i.e., Anger/Happy). Table TABREF14 shows the results obtained for the proposed s2sL approach in comparison to the MLP for the Speech/Music and Neutral/Sad classification tasks, for different proportions of training data. The values in Table TABREF14 are mean accuracies (in %) obtained by 5-fold cross-validation. It can be observed from Table TABREF14 that for both tasks the s2sL method outperforms the MLP, especially in low-resource conditions. s2sL shows an absolute improvement in accuracy of INLINEFORM0 % and INLINEFORM1 % over the MLP for the Speech/Music and Neutral/Sad classification tasks, respectively, when INLINEFORM2 of the original training data is used.
Table TABREF14 show the results (in terms of INLINEFORM0 values) obtained for proposed s2sL approach in comparison to that of MLP for Anger/Happy classification (data imbalance problem). Here, state-of-the-art methods i.e., Eusboost [22] and MWMOTE [23] are also considered for comparison. It can be observed from Table TABREF14 that the s2sL method outperforms MLP, and also performs better than Eusboost and MWMOTE techniques on imbalanced data (around 3 % absolute improvement in INLINEFORM1 value for s2sL compared to MWMOTE, when INLINEFORM2 of the training data is considered). In particular, at lower amounts of training data, s2sL outperforms all the other methods, illustrating its effectiveness even for low resourced data imbalance problems. s2sL method shows an absolute improvement of 6% ( INLINEFORM3 ) in INLINEFORM4 value over the second best ( INLINEFORM5 for MWMOTE), when only INLINEFORM6 of the training data is used. Conclusions In this paper, we introduced a novel s2s framework to effectively learn the class discriminative characteristics, even from low data resources. In this framework, more than one sample (here, two samples) are simultaneously considered to train the classifier. Further, this framework allows to generate multiple instances of the same test sample, by considering preselected reference samples, to achieve a more profound decision making. We illustrated the significance of our approach by providing the experimental results for two different tasks namely, speech/music discrimination and emotion classification. Further, we showed that the s2s framework can also handle the low resourced data imbalance problem. References [1] Hinton, G., Deng, L., Yu, D., Dahl, G. E., Mohamed, A. R., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T. N. & Kingsbury, B. (2012) Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, pp. 82–97. [2] Vinyals, O., Ravuri, S. V. & Povey, D. (2012) Revisiting recurrent neural networks for robust ASR. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4085–4088. [3] Lu, L., Zhang, X., Cho K. & Renals, S. (2015) A study of the recurrent neural network encoder-decoder for large vocabulary speech recognition. In Proc. INTERSPEECH, pp. 3249–3253. [4] Zhang, Y., Pezeshki, M., Brakel, P., Zhang, S., Bengio, C.L.Y. & Courville, A. (2017) Towards end-to-end speech recognition with deep convolutional neural networks. arXiv preprint, arXiv:1701.02720. [5] Han, K., Yu, D. & Tashev, I. (2014) Speech emotion recognition using deep neural network and extreme learning machine. In Proc. Interspeech, pp. 223–227. [6] Trigeorgis, G., Ringeval, F., Brueckner, R., Marchi, E., Nicolaou, M. A., Schuller, B., & Zafeiriou, S. (2016) Adieu features? End-to-end speech emotion recognition using a deep convolutional recurrent network. In Proc. ICASSP, pp. 5200–5204). [7] Huang, Che-Wei, & Shrikanth S. Narayanan. (2016) Attention Assisted Discovery of Sub-Utterance Structure in Speech Emotion Recognition. In Proc. INTERSPEECH, pp. 1387–1391. [8] Zhang, Z., Ringeval, F., Han, J., Deng, J., Marchi, E. & Schuller, B. (2016) Facing realism in spontaneous emotion recognition from speech: Feature enhancement by autoencoder with LSTM neural networks. In Proc. INTERSPEECH, pp. 3593–3597. [9] Scheirer, E. D. & Slaney, M. (2003) Multi-feature speech/music discrimination system. U.S. Patent 6,570,991. [10] Pikrakis, A. & Theodoridis S. 
(2014) Speech-music discrimination: A deep learning perspective. In Proc. European Signal Processing Conference (EUSIPCO), pp. 616–620. [11] Choi, K., Fazekas, G. & Sandler, M. (2016) Automatic tagging using deep convolutional neural networks. arXiv preprint, arXiv:1606.00298. [12] Zazo Candil, R., Sainath, T.N., Simko, G. & Parada, C. (2016) Feature learning with raw-waveform CLDNNs for Voice Activity Detection. In Proc. Interspeech, pp. 3668–3672. [13] Tian, L., Moore, JD. & Lai, C. (2015) Emotion recognition in spontaneous and acted dialogues. In Proc. International Conference on Affective Computing and Intelligent Interaction (ACII), pp. 698–704. [14] Dumpala, S.H., Chakraborty, R. & Kopparapu, S.K. (2017) k-FFNN: A priori knowledge infused feed-forward neural networks. arXiv preprint, arXiv:1704.07055. [15] Dumpala, S.H., Chakraborty, R. & Kopparapu, S.K. (2018) Knowledge-driven feed-forward neural network for audio affective content analysis. To appear in AAAI-2018 workshop on Affective Content Analysis. [16] Deng, J., Zhang, Z., Marchi, E. & Schuller, B. (2013) Sparse autoencoder-based feature transfer learning for speech emotion recognition. In Proc. Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII), pp. 511–516. [17] George Tzanetakis, Gtzan musicspeech. Availabe on-line at http://marsyas.info/download/datasets. [18] Burkhardt, F., Paeschke, A., Rolfes, M., Sendlmeier, W.F. & Weiss, B. (2005) A database of german emotional speech. In Proc. Interspeech, pp. 1517–1520. [19] Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P. & Witten, I.H. (2009) The WEKA data mining software: an update. ACM SIGKDD explorations newsletter, 11(1), pp. 10–18. [20] Eyben, F., Weninger, F., Gross, F. & Schuller, B. (2013) Recent developments in opensmile, the munich open-source multimedia feature extractor. In Proc. ACM international conference on Multimedia, pp. 835–838. [21] Maratea, A., Petrosino, A. & Manzo, M. (2014) Adjusted f-measure and kernel scaling for imbalanced data learning. Information Sciences 257:331–341. [22] Galar, M., Fernandez, A., Barrenechea, E. & Herrera, F. (2013) Eusboost: Enhancing ensembles for highly imbalanced data-sets by evolutionary undersampling. Pattern Recognition 46(12):3460–3471. [23] Barua, S., Islam, M. M., Yao, X. & Murase, K. (2014) Mwmote–majority weighted minority oversampling technique for imbalanced data set learning. IEEE Transactions on Knowledge and Data Engineering 26(2):405–425.
Which models/frameworks do they compare to?
MLP
Introduction Deep Neural Networks (DNNs) have been widely employed in industry for solving various Natural Language Processing (NLP) tasks, such as text classification, sequence labeling, question answering, etc. However, when engineers apply DNN models to address specific NLP tasks, they often face the following challenges. These challenges often hinder the productivity of engineers and result in less optimal solutions to their given tasks. This motivates us to develop an NLP toolkit for DNN models which facilitates the development of DNN approaches. Before designing this NLP toolkit, we conducted a survey among engineers and identified a spectrum of three typical personas. To satisfy the requirements of all three personas, the NLP toolkit has to be generic enough to cover as many tasks as possible. At the same time, it also needs to be flexible enough to allow alternative network architectures as well as customized modules. Therefore, we analyzed the NLP jobs submitted to a commercial centralized GPU cluster. Table TABREF11 shows that about 87.5% of NLP-related jobs belong to a few common tasks, including sentence classification, text matching, sequence labeling, MRC, etc. It further suggests that more than 90% of the networks were composed of several common components, such as embedding, CNN/RNN, Transformer and so on. Based on the above observations, we developed NeuronBlocks, a DNN toolkit for NLP tasks. The basic idea is to provide two layers of support to the engineers. The upper layer targets common NLP tasks: for each task, the toolkit contains several end-to-end network templates which can be instantiated immediately with simple configuration. The bottom layer consists of a suite of reusable, standard components that can be adopted as building blocks to construct networks with complex architectures. By following the interface guidelines, users can also contribute their own modules to this gallery of components. The technical contributions of NeuronBlocks are summarized into the following three aspects. Related Work There are several general-purpose deep learning frameworks, such as TensorFlow, PyTorch and Keras, which have gained popularity in the NLP community. These frameworks offer huge flexibility in DNN model design and support various NLP tasks. However, building models under these frameworks requires a large overhead of mastering framework details. Therefore, a higher-level abstraction that hides the framework details is favored by many engineers. There are also several popular deep learning toolkits in NLP, including OpenNMT BIBREF0, AllenNLP BIBREF1, etc. OpenNMT is an open-source toolkit mainly targeting neural machine translation and other natural language generation tasks. AllenNLP provides several pre-built models for NLP tasks, such as semantic role labeling, machine comprehension and textual entailment. Although these toolkits reduce the development cost, they are limited to certain tasks and thus are not flexible enough to support new network architectures or new components. Design NeuronBlocks is built on PyTorch. The overall framework is illustrated in Figure FIGREF16. It consists of two layers: the Block Zoo and the Model Zoo. In the Block Zoo, the most commonly used components of deep neural networks are categorized into several groups according to their functions. Within each category, several alternative components are encapsulated into standard and reusable blocks with a consistent interface.
These blocks serve as basic, exchangeable units for constructing complex network architectures for different NLP tasks. In the Model Zoo, the most popular NLP tasks are identified. For each task, several end-to-end network templates are provided in the form of JSON configuration files. Users can simply browse these configurations and choose one or more to instantiate; the whole task can be completed without any coding effort.
Block Zoo We recognize the following major functional categories of neural network components. Each category covers as many commonly used modules as possible. The Block Zoo is an open framework, and more modules can be added in the future.
Embedding Layer: Word/character embeddings and extra handcrafted feature embeddings such as POS tags are supported.
Neural Network Layers: The Block Zoo provides common layers such as RNN, CNN, QRNN BIBREF2, Transformer BIBREF3, Highway network, and Encoder-Decoder architectures. Furthermore, since attention mechanisms are widely used in neural networks, we also support multiple attention layers, such as Linear/Bi-linear Attention, Full Attention BIBREF4 and Bidirectional Attention Flow BIBREF5. Regularization layers such as Dropout, Layer Norm and Batch Norm are also supported to improve generalization ability.
Loss Function: Besides the loss functions built into PyTorch, we offer more options such as Focal Loss BIBREF6.
Metrics: For classification tasks, AUC, Accuracy, Precision/Recall and F1 metrics are supported. For sequence labeling tasks, F1/Accuracy are supported. For knowledge distillation tasks, MSE/RMSE are supported. For MRC tasks, ExactMatch/F1 are supported.
Model Zoo In NeuronBlocks, we identify four of the most popular types of NLP tasks. For each task, we provide various end-to-end network templates.
Text Classification and Matching. Tasks such as domain/intent classification and question-answer matching are supported.
Sequence Labeling. Each token in a sequence is predicted as one of a set of predefined types. Common tasks include NER, POS tagging, slot tagging, etc.
Knowledge Distillation BIBREF7. Teacher-student knowledge distillation is a common approach for model compression. NeuronBlocks provides a knowledge distillation template to improve the inference speed of heavy DNN models like BERT/GPT.
Extractive Machine Reading Comprehension. Given a question-passage pair, predict the start and end positions of the answer span in the passage.
User Interface NeuronBlocks provides a convenient user interface for building, training, and testing DNN models. The details are described in the following.
I/O interface. This part defines model input/output, such as training data, pre-trained models/embeddings, model saving path, etc.
Model Architecture interface. This is the key part of the configuration file, which defines the whole model architecture. Figure FIGREF19 shows an example of how to specify a model architecture using the blocks in NeuronBlocks. To be more specific, it consists of a list of layers/blocks that constructs the architecture, where the blocks are supplied by the Block Zoo.
Training Parameters interface. In this part, the model optimizer as well as all other training hyper-parameters are specified.
Workflow Figure FIGREF34 shows the workflow of building DNN models in NeuronBlocks. Users only need to write a JSON configuration file. They can either instantiate an existing template from the Model Zoo or construct a new architecture based on the blocks from the Block Zoo. This configuration file is shared across training, test and prediction. For model hyper-parameter tuning or architecture modification, users just need to change the JSON configuration file. Advanced users can also contribute novel customized blocks to the Block Zoo, as long as they follow the same interface guidelines as the existing blocks. These new blocks can then be shared across all users for model architecture design. Moreover, NeuronBlocks has flexible platform support, such as GPU/CPU and GPU management platforms like PAI. Experiments To verify the performance of NeuronBlocks, we conducted extensive experiments for common NLP tasks on public data sets including CoNLL-2003 BIBREF14, the GLUE benchmark BIBREF13, and the WikiQA corpus BIBREF15. The experimental results show that the models built with NeuronBlocks achieve reliable and competitive results on various tasks, with productivity greatly improved. Sequence Labeling For the sequence labeling task, we evaluated NeuronBlocks on the CoNLL-2003 BIBREF14 English NER dataset, following most works on the same task. This dataset includes four types of named entities, namely PERSON, LOCATION, ORGANIZATION and MISC. We adopted the BIOES tagging scheme instead of IOB, as many previous works indicated a meaningful improvement with the BIOES scheme BIBREF16, BIBREF17. Table TABREF28 shows the results on the CoNLL-2003 English testb dataset, with 12 different combinations of network layers/blocks, such as word/character embedding, CNN/LSTM and CRF. The results suggest that the flexible combination of layers/blocks in NeuronBlocks can easily reproduce the performance of the original models, with comparable or slightly better performance. GLUE Benchmark The General Language Understanding Evaluation (GLUE) benchmark BIBREF13 is a collection of natural language understanding tasks. We experimented on the GLUE benchmark tasks using BiLSTM and attention-based models. As shown in Table TABREF29, the models built with NeuronBlocks achieve competitive or even better results on GLUE tasks with minimal coding effort. Knowledge Distillation We evaluated the knowledge distillation task in NeuronBlocks on a dataset collected from a commercial search engine, which we refer to as the Domain Classification Dataset. Each sample in this dataset consists of two parts, i.e., a question and a binary label indicating whether the question belongs to a specific domain. Table TABREF36 shows the results, where the Area Under the Curve (AUC) metric is used as the performance evaluation criterion and Queries per Second (QPS) is used to measure inference speed. With the knowledge distillation training approach, the student model built with NeuronBlocks achieved a 23-27 times inference speedup with only a small performance regression compared with the fine-tuned BERT-base classifier. WikiQA The WikiQA corpus BIBREF15 is a publicly available dataset for open-domain question answering. This dataset contains 3,047 questions from Bing query logs, each associated with candidate answer sentences from Wikipedia. We conducted experiments on the WikiQA dataset using CNN, BiLSTM and attention-based models. The results are shown in Table TABREF41.
The models built in NeuronBlocks achieved competitive or even better results with simple model configurations. Conclusion and Future Work In this paper, we introduce NeuronBlocks, a DNN toolkit for NLP tasks built on PyTorch. NeuronBlocks targets three types of engineers, and provides a two-layer solution to satisfy the requirements from all three types of users. To be more specific, the Model Zoo consists of various templates for the most common NLP tasks, while the Block Zoo supplies a gallery of alternative layers/modules for the networks. Such design achieves a balance between generality and flexibility. Extensive experiments have verified the effectiveness of this approach. NeuronBlocks has been widely used in a product team of a commercial search engine, and significantly improved the productivity for developing NLP DNN approaches. As an open-source toolkit, we will further extend it in various directions. The following names a few examples.
How do the authors evidence the claim that many engineers find it a big overhead to choose from multiple frameworks, models and optimization techniques?
By conducting a survey among engineers
Football Club Urartu (, translated Futbolayin Akumb Urartu), commonly known as Urartu, is an Armenian professional football team based in the capital Yerevan that currently plays in the Armenian Premier League. The club won the Armenian Cup three times, in 1992, 2007 and 2016. In 2013–2014, they won the Armenian Premier League for the first time in their history. In early 2016, the Russia-based Armenian businessman Dzhevan Cheloyants became a co-owner of the club after purchasing the major part of the club shares. The club was known as FC Banants until 1 August 2019, when it was officially renamed FC Urartu. History Kotayk Urartu FC were founded as FC Banants by Sarkis Israelyan on 21 January 1992 in the village of Kotayk, representing the Kotayk Province. He named the club after his native village of Banants (currently known as Bayan). Between 1992 and 1995, the club was commonly referred to as Banants Kotayk. During the 1992 season, the club won the first Armenian Cup. At the end of the 1995 transitional season, Banants suffered a financial crisis. The club owners decided that it was better to merge the club with FC Kotayk of Abovyan, rather than disband it. In 2001, Banants demerged from FC Kotayk, and was moved from Abovyan to the capital Yerevan. Yerevan FC Banants was relocated to Yerevan in 2001. At the beginning of 2003, Banants merged with FC Spartak Yerevan, but was able to limit the name of the new merger to FC Banants. Spartak became Banants's youth academy and later changed the name to Banants-2. Because of the merger, Banants acquired many players from Spartak Yerevan, including Samvel Melkonyan. After the merger, Banants took a more serious approach and have finished highly in the league table ever since. The club managed to lift the Armenian Cup in 2007. Experience is making way for youth for the 2008 and 2009 seasons. The departures of most of the experienced players have left the club's future to the youth. Along with two Ukrainian players, Ugandan international, Noah Kasule, has been signed. The club headquarters are located on Jivani Street 2 of the Malatia-Sebastia District, Yerevan. Domestic European Stadium The construction of the Banants Stadium was launched in 2006 in the Malatia-Sebastia District of Yerevan, with the assistance of the FIFA goal programme. It was officially opened in 2008 with a capacity of 3,600 seats. Further developments were implemented later in 2011, when the playing pitch was modernized and the capacity of the stadium was increased up to 4,860 seats (2,760 at the northern stand, 1,500 at the southern stand and 600 at the western stand). Training centre/academy Banants Training Centre is the club's academy base located in the Malatia-Sebastia District of Yerevan. In addition to the main stadium, the centre houses 3 full-size training pitches, mini football pitches as well as an indoor facility. The current technical director of the academy is the former Russian footballer Ilshat Faizulin. Fans The most active group of fans is the South West Ultras fan club, mainly composed of residents from several neighbourhoods within the Malatia-Sebastia District of Yerevan, since the club is a de facto representer of the district. Members of the fan club benefit from events organized by the club and many facilities of the Banants training centre, such as the mini football pitch, the club store and other entertainments. Achievements Armenian Premier League Winner (1): 2013–14. Runner-up (5): 2003, 2006, 2007, 2010, 2018. 
Armenian Cup Winner (3): 1992, 2007, 2016. Runner-up (6): 2003, 2004, 2008, 2009, 2010, 2021–22 Armenian Supercup Winner (1): 2014. Runner-up (5): 2004, 2007, 2009, 2010, 2016. Current squad Out on loan Personnel Technical staff Management Urartu-2 FC Banants' reserve squad play as FC Banants-2 in the Armenian First League. They play their home games at the training field with artificial turf of the Urartu Training Centre. Managerial history Varuzhan Sukiasyan (1992–94) Poghos Galstyan (July 1, 1996 – June 30, 1998) Oganes Zanazanyan (2001–05) Ashot Barseghyan (2005–06) Nikolay Kiselyov (2006–07) Jan Poštulka (2007) Nikolay Kostov (July 1, 2007 – April 8, 2008) Nedelcho Matushev (April 8, 2008 – June 30, 2008) Kim Splidsboel (2008) Armen Gyulbudaghyants (Jan 1, 2009 – Dec 1, 2009) Ashot Barseghyan (interim) (2009) Stevica Kuzmanovski (Jan 1, 2010 – Dec 31, 2010) Rafael Nazaryan (Jan 1, 2011 – Jan 15, 2012) Volodymyr Pyatenko (Jan 17, 2013 – June 30, 2013) Zsolt Hornyák (July 1, 2013 – May 30, 2015) Aram Voskanyan (July 1, 2015 – Oct 11, 2015) Tito Ramallo (Oct 12, 2015 – Oct 3, 2016) Artur Voskanyan (Oct 3, 2016 – Aug 11, 2018) Ilshat Faizulin (Aug 12, 2018 –Nov 24, 2019) Aleksandr Grigoryan (Nov 25, 2019 –Mar 10, 2021) Robert Arzumanyan (10 March 2021–24 June 2022) Dmitri Gunko (27 June 2022–) References External links Official website Banants at Weltfussball.de Urartu Urartu Urartu Urartu
What is the name of the most active fan club?
South West Ultras fan club.
Hugh Hilton Goodwin (December 21, 1900 – February 25, 1980) was a decorated officer in the United States Navy with the rank of Vice Admiral. A veteran of both World Wars, he commanded escort carrier during the Mariana Islands campaign. Goodwin then served consecutively as Chief of Staff, Carrier Strike Group 6 and as Air Officer, Philippine Sea Frontier and participated in the Philippines campaign in the later part of the War. Following the War, he remained in the Navy and rose to the flag rank and held several important commands including Vice Commander, Military Air Transport Service, Commander, Carrier Division Two and Commander, Naval Air Forces, Continental Air Defense Command. Early life and career Hugh H. Goodwin was born on December 21, 1900, in Monroe, Louisiana and attended Monroe High School there (now Neville High School). Following the United States' entry into World War I in April 1917, Goodwin left the school without receiving the diploma in order to see some combat and enlisted the United States Navy on May 7, 1917. He completed basic training and was assigned to the battleship . Goodwin participated in the training of armed guard crews and engine room personnel as the Atlantic Fleet prepared to go to war and in November 1917, he sailed with the rest of Battleship Division 9, bound for Britain to reinforce the Grand Fleet in the North Sea. Although he did not complete the last year of high school, Goodwin was able to earn an appointment to the United States Naval Academy at Annapolis, Maryland in June 1918. While at the academy, he earned a nickname "Huge" and among his classmates were several future admirals and generals including: Hyman G. Rickover, Milton E. Miles, Robert E. Blick Jr., Herbert S. Duckworth, Clayton C. Jerome, James P. Riseley, James A. Stuart, Frank Peak Akers, Sherman Clark, Raymond P. Coffman, Delbert S. Cornwell, Frederick J. Eckhoff, Ralph B. DeWitt, John Higgins, Vernon Huber, Albert K. Morehouse, Harold F. Pullen, Michael J. Malanaphy, William S. Parsons, Harold R. Stevens, John P. Whitney, Lyman G. Miller and George J. O'Shea. Goodwin graduated with Bachelor of Science degree on June 3, 1922, and was commissioned Ensign in the United States Navy. He was subsequently assigned to the battleship and took part in the voyage to Rio de Janeiro, Brazil, before he was ordered to the Naval Torpedo Station at Newport, Rhode Island for submarine instruction in June 1923. Goodwin completed the training several weeks later and was attached to the submarine . He then continued his further training aboard submarine and following his promotion to Lieutenant (junior grade) on June 3, 1925, he qualified as submariner. He then served aboard submarine off the coast of California, before he was ordered for the recruiting duty to San Francisco in September 1927. While in this capacity, Goodwin applied for naval aviation training which was ultimately approved and he was ordered to the Naval Air Station Pensacola, Florida in August 1928. Toward the end of the training, he was promoted to lieutenant on December 11, 1928, and upon the completion of the training in January 1929, he was designated Naval aviator. Goodwin was subsequently attached to the Observation Squadron aboard the aircraft carrier and participated in the Fleet exercises in the Caribbean. He was transferred to the Bureau of Aeronautics in Washington, D.C. in August 1931 and served consecutively under the architect of naval aviation William A. Moffett and future Chief of Naval Operations Ernest J. King. 
In June 1933, Goodwin was ordered to the Naval War College at Newport, Rhode Island, where he completed junior course in May of the following year. He subsequently joined the crew of aircraft carrier and served under Captain Arthur B. Cook and took part in the Fleet exercises in the Caribbean and off the East Coast of the United States. He was ordered back to the Naval Air Station Pensacola, Florida in June 1936 and was attached to the staff of the Base Commandant, then-Captain Charles A. Blakely. When Blakely was succeeded by William F. Halsey in June 1937, Goodwin remained in Halsey's staff and was promoted to Lieutenant Commander on December 1, 1937. He also completed correspondence course in International law at the Naval War College. Goodwin was appointed Commanding officer of the Observation Squadron 1 in June 1938 and attached to the battleship he took part in the patrolling of the Pacific and West Coast of the United States until September 1938, when he assumed command of the Observation Squadron 2 attached to the battleship . When his old superior from Lexington, now Rear Admiral Arthur B. Cook, was appointed Commander Aircraft, Scouting Force in June 1939, he requested Goodwin as his Aide and Flag Secretary. He became Admiral Cook's protégé and after year and half of service in the Pacific, he continued as his Aide and Flag Secretary, when Cook was appointed Commander Aircraft, Atlantic Fleet in November 1940. World War II Following the United States' entry into World War II, Goodwin was promoted to the temporary rank of Commander on January 1, 1942, and assumed duty as advisor to the Argentine Navy. His promotion was made permanent two months later and he returned to the United States in early 1943 for duty as assistant director of Planning in the Bureau of Aeronautics under Rear admiral John S. McCain. While still in Argentina, Goodwin was promoted to the temporary rank of Captain on June 21, 1942. By the end of December 1943, Goodwin was ordered to Astoria, Oregon, where he assumed command of newly commissioned escort carrier USS Gambier Bay. He was responsible for the initial training of the crew and was known as a strict disciplinarian, but the crew appreciated the skills he taught them that prepared them for combat. Goodwin insisted that everyone aboard has to do every job right every time and made us fight our ship at her best. During the first half of 1944, Gambier Bay was tasked with ferrying aircraft for repairs and qualified carrier pilots from San Diego to Pearl Harbor, Hawaii, before departed on May 1, 1944, to join Rear admiral Harold B. Sallada's Carrier Support Group 2, staging in the Marshalls for the invasion of the Marianas. The air unit, VC-10 Squadron, under Goodwin's command gave close air support to the initial landings of Marines on Saipan on June 15, 1944, destroying enemy gun emplacements, troops, tanks, and trucks. On the 17th, her combat air patrol (CAP) shot down or turned back all but a handful of 47 enemy planes headed for her task group and her gunners shot down two of the three planes that did break through to attack her. Goodwin's carrier continued in providing of close ground support operations at Tinian during the end of July 1944, then turned her attention to Guam, where she gave identical aid to invading troops until mid-August that year. For his service during the Mariana Islands campaign, Goodwin was decorated with Bronze Star Medal with Combat "V". He was succeeded by Captain Walter V. R. 
Vieweg on August 18, 1944, and appointed Chief of Staff, Carrier Division Six under Rear admiral Arthur W. Radford. The Gambier Bay was sunk in the Battle off Samar on October 25, 1944, during the Battle of Leyte Gulf after helping turn back a much larger attacking Japanese surface force. Goodwin served with Carrier Division Six during the Bonin Islands raids, the naval operations at Palau and took part in the Battle of Leyte Gulf and operations supporting Leyte landings in late 1944. He was later appointed Air Officer of the Philippine Sea Frontier under Rear admiral James L. Kauffman and remained with that command until the end of hostilities. For his service in the later part of World War II, Goodwin was decorated with Legion of Merit with Combat "V". He was also entitled to wear two Navy Presidential Unit Citations and Navy Unit Commendation. Postwar service Following the surrender of Japan, Goodwin assumed command of Light aircraft carrier on August 24, 1945. The ship was tasked with air missions over Japan became mercy flights over Allied prisoner-of-war camps, dropping food and medicine until the men could be rescued. She was also present at Tokyo Bay for the Japanese surrender on September 2, 1945. Goodwin returned with San Jacinto to the United States in mid-September 1945 and he was detached in January 1946. He subsequently served in the office of the Chief of Naval Operations until May that year, when he entered the instruction at National War College. Goodwin graduated in June 1947 and served on Secretary's committee for Research on Reorganization. Upon promotion to Rear admiral on April 1, 1949, Goodwin was appointed Chief of Staff and Aide to Commander-in-Chief, Atlantic Fleet under Admiral William H. P. Blandy. Revolt of the Admirals In April 1949, the budget's cuts and proposed reorganization of the United States Armed Forces by the Secretary of Defense Louis A. Johnson launched the wave of discontent between senior commanders in the United States Navy. Johnson proposed the merging of the Marine Corps into the Army, and reduce the Navy to a convoy-escort force. Goodwin's superior officer, Admiral Blandy was call to testify before the House Committee on Armed Services and his harsh statements for the defense of the Navy, costed him his career. Goodwin shared his views and openly criticized Secretary Johnson for having power concentrated in a single civilian executive, who is an appointee of the Government and not an elected representative of the people. He also criticized aspects of defense unification which permitted the Joint Chiefs of Staff to vote on arms policies of individual services, and thus "rob" the branches of autonomy. The outbreak of the Korean War in summer 1950 proved the proposal of Secretary Johnson as incorrect and he resigned in September that year. Also Secretary of the Navy, Francis P. Matthews resigned one month earlier. Later service Due to the Revolts of the admirals, Blandy was forced to retire in February 1950 and Goodwin was ordered to Newport, Rhode Island for temporary duty as Chief of Staff and Aide to the President of the Naval War College under Vice admiral Donald B. Beary in April 1950. Goodwin was detached from that assignment two months and appointed member of the General Board of the Navy. He was shortly thereafter appointed acting Navy Chief of Public Information, as the substitute for Rear Admiral Russell S. Berkey, who was relieved of illness, but returned to the General Board of the Navy in July that year. 
Goodwin served in that capacity until February 1951, when he relieved his Academy class, Rear admiral John P. Whitney as Vice Commander, Military Air Transport Service (MATS). While in this capacity, Goodwin served under Lieutenant general Laurence S. Kuter and was co-responsible for the logistical support of United Nations troops fighting in Korea. The MATS operated from the United States to Japan and Goodwin served in this capacity until August 1953, when he was appointed Commander Carrier Division Two. While in this assignment, he took part in the Operation Mariner, Joint Anglo-American exercise which encountered very heavy seas over a two-week period in fall 1953. Goodwin was ordered to the Philippines in May 1954 and assumed duty as Commander, U.S. Naval Forces in the Philippines with headquarters at Naval Station Sangley Point near Cavite. He held that command in the period of tensions between Taiwan and China and publicly declared shortly after his arrival, that any attack on Taiwan by the Chinese Communists on the mainland would result in US participation in the conflict. The naval fighter planes under his command also provided escort for passing commercial planes. Goodwin worked together with retired Admiral Raymond A. Spruance, then-Ambassador to the Philippines, and accompanied him during the visits to Singapore, Bangkok and Saigon in January 1955. On December 18, 1955, Goodwin's classmate Rear admiral Albert K. Morehouse, then serving as Commander, Naval Air Forces, Continental Air Defense Command (CONAD), died of heart attack and Goodwin was ordered to CONAD headquarters in Colorado Springs, Colorado to assume Morehouse's position. While in this capacity, he was subordinated to Army General Earle E. Partridge and was responsible for the Naval and Marine Forces allocated to the command designated for the defense of the Continental United States. Retirement Goodwin retired on June 1, 1957, after 40 years of active service and was advanced to the rank of Vice admiral on the retired list for having been specially commended in combat. A week later, he was invited back to his Monroe High School (now Neville High School) and handed a diploma showing that he had been graduated with the class of 1918. He then settled in Monterey, California where he taught American history at Stevenson school and was a member of the Naval Order of the United States. Vice admiral Hugh H. Goodwin died at his home on February 25, 1980, aged 79. He was survived by his wife, Eleanor with whom he had two children, a daughter Sidney and a son Hugh Jr., who graduated from the Naval Academy in June 1948, but died one year later, when the Hellcat fighter he was piloting collided with another over the Gulf of Mexico during training. Decorations Here is the ribbon bar of Vice admiral Hugh H. Goodwin: References 1900 births 1980 deaths People from Monroe, Louisiana Military personnel from Louisiana United States Naval Academy alumni Naval War College alumni United States Naval Aviators United States Navy personnel of World War I United States Navy World War II admirals United States Navy vice admirals United States submarine commanders Recipients of the Legion of Merit
What was Hugh H. Goodwin's rank in the United States Navy?
Vice Admiral.
The 1951 Ohio State Buckeyes baseball team represented the Ohio State University in the 1951 NCAA baseball season. The head coach was Marty Karow, serving his first year. The Buckeyes lost in the College World Series, defeated by the Texas A&M Aggies.

Roster

Schedule

No. | Date | Opponent | Site | Score | Overall | Big Ten

Regular Season
1 | March 16 | at | Unknown • San Antonio, Texas | W 15–3 | 1–0 | 0–0
2 | March 17 | at B. A. M. C. | Unknown • San Antonio, Texas | L 7–8 | 1–1 | 0–0
3 | March 19 | at | Clark Field • Austin, Texas | L 0–8 | 1–2 | 0–0
4 | March 20 | at Texas | Clark Field • Austin, Texas | L 3–4 | 1–3 | 0–0
5 | March 21 | at | Unknown • Houston, Texas | W 14–6 | 2–3 | 0–0
6 | March 22 | at Rice | Unknown • Houston, Texas | L 2–3 | 2–4 | 0–0
7 | March 23 | at | Unknown • Fort Worth, Texas | W 4–2 | 3–4 | 0–0
8 | March 24 | at TCU | Unknown • Fort Worth, Texas | W 7–3 | 4–4 | 0–0
9 | March 24 | at | Unknown • St. Louis, Missouri | W 10–4 | 5–4 | 0–0
10 | April 6 | | Varsity Diamond • Columbus, Ohio | W 2–0 | 6–4 | 0–0
11 | April 7 | | Varsity Diamond • Columbus, Ohio | W 15–1 | 7–4 | 0–0
12 | April 14 | | Varsity Diamond • Columbus, Ohio | L 0–1 | 7–5 | 0–0
13 | April 20 | | Varsity Diamond • Columbus, Ohio | W 10–9 | 8–5 | 1–0
14 | April 21 | Minnesota | Varsity Diamond • Columbus, Ohio | W 7–0 | 9–5 | 2–0
15 | April 24 | at | Unknown • Oxford, Ohio | L 3–4 | 9–6 | 2–0
16 | April 27 | at | Hyames Field • Kalamazoo, Michigan | L 2–3 | 9–7 | 2–0
17 | April 28 | at Western Michigan | Hyames Field • Kalamazoo, Michigan | L 5–7 | 9–8 | 2–0
18 | May 1 | at | Unknown • Athens, Ohio | W 7–6 | 10–8 | 2–0
19 | May 4 | | Varsity Diamond • Columbus, Ohio | W 12–6 | 11–8 | 3–0
20 | May 5 | Purdue | Varsity Diamond • Columbus, Ohio | W 14–4 | 12–8 | 4–0
21 | May 8 | | Varsity Diamond • Columbus, Ohio | L 6–8 | 12–9 | 4–0
22 | May 9 | at Dayton | Unknown • Dayton, Ohio | W 11–2 | 13–9 | 4–0
23 | May 12 | | Varsity Diamond • Columbus, Ohio | W 6–5 | 14–9 | 5–0
24 | May 12 | Indiana | Varsity Diamond • Columbus, Ohio | W 5–2 | 15–9 | 6–0
25 | May 15 | Ohio | Varsity Diamond • Columbus, Ohio | W 6–0 | 16–9 | 6–0
26 | May 18 | at | Northwestern Park • Evanston, Illinois | L 1–3 | 16–10 | 6–1
27 | May 19 | at Northwestern | Northwestern Park • Evanston, Illinois | W 10–3 | 17–10 | 7–1
28 | May 22 | at Cincinnati | Carson Field • Cincinnati, Ohio | W 8–4 | 18–10 | 7–1
29 | May 25 | | Varsity Diamond • Columbus, Ohio | W 4–1 | 19–10 | 8–1
30 | May 25 | Michigan | Varsity Diamond • Columbus, Ohio | L 3–6 | 19–11 | 8–2
31 | May 30 | Miami (OH) | Varsity Diamond • Columbus, Ohio | L 3–4 | 19–12 | 8–2
32 | June 1 | at | Old College Field • East Lansing, Michigan | W 8–0 | 20–12 | 9–2
33 | June 2 | at Michigan State | Old College Field • East Lansing, Michigan | W 9–8 | 21–12 | 10–2

Postseason
34 | June 8 | Western Michigan | Varsity Diamond • Columbus, Ohio | W 1–0 | 22–12 | 10–2
35 | June 8 | Western Michigan | Varsity Diamond • Columbus, Ohio | L 2–4 | 22–13 | 10–2
36 | June 9 | Western Michigan | Varsity Diamond • Columbus, Ohio | W 3–2 | 23–13 | 10–2
37 | June 13 | Oklahoma | Omaha Municipal Stadium • Omaha, Nebraska | L 8–9 | 23–14 | 10–2
38 | June 13 | Texas A&M | Omaha Municipal Stadium • Omaha, Nebraska | L 2–3 | 23–15 | 10–2

Awards and honors
Dick Hauck, First Team All-Big Ten
Stewart Hein, First Team All-Big Ten

References

Ohio State Buckeyes baseball seasons
Ohio State Buckeyes baseball
Big Ten Conference baseball champion seasons
Ohio State College World Series seasons
What was the Buckeyes' record in their first game of the season?
They won their first game 15–3, putting their record at 1–0.
Brooksley Elizabeth Born (born August 27, 1940) is an American attorney and former public official who, from August 26, 1996, to June 1, 1999, was chair of the Commodity Futures Trading Commission (CFTC), the federal agency which oversees the U.S. futures and commodity options markets. During her tenure on the CFTC, Born lobbied Congress and the President to give the CFTC oversight of off-exchange markets for derivatives, in addition to its role with respect to exchange-traded derivatives, but her warnings were ignored or dismissed, and her calls for reform were resisted by other regulators (Peter S. Goodman, "The Reckoning - Taking Hard New Look at a Greenspan Legacy", The New York Times, October 9, 2008). Born resigned as chairperson on June 1, 1999, shortly after Congress passed legislation prohibiting her agency from regulating derivatives. In 2009, Born received the John F. Kennedy Profiles in Courage Award, along with Sheila Bair of the Federal Deposit Insurance Corporation, in recognition of the "political courage she demonstrated in sounding early warnings about conditions that contributed" to the 2007–08 financial crisis.

Early life and education
Born graduated from Abraham Lincoln High School (San Francisco, California) at the age of 16. She then attended Stanford University, where she majored in English and graduated with the class of 1961. She initially wanted to become a doctor, but a guidance counsellor at Stanford advised her against medicine, so she majored in English literature instead. She then attended Stanford Law School, one of only seven women in her class. She was the first female student ever to be named president of the Stanford Law Review. She received the "Outstanding Senior" award and graduated as valedictorian of the class of 1964.

Legal career
Immediately after law school, Born was selected as a law clerk to Judge Henry Edgerton of the U.S. Court of Appeals for the District of Columbia Circuit. It was during this time that she met her first husband, Jacob C. Landau, who was a journalist covering the federal courts at the time. Following her clerkship, she became an associate at the Washington, D.C.-based international law firm of Arnold & Porter. Born was attracted to Arnold & Porter because it was one of the few major law firms to have a woman partner at that time, Carolyn Agger, who was the head of the tax practice. Born took a two-year leave of absence from Arnold & Porter to accompany her first husband to Boston, where he had received a fellowship. During that time she worked as a research assistant to law professor Alan Dershowitz. Born's early career at Arnold & Porter focused on international trade law, in which she represented a number of Swiss industries and the government of Switzerland. She developed a practice representing clients in numerous complex litigation and arbitration cases involving financial market transactions. Among her high-profile cases was the matter of the Hunt brothers' attempt to corner the market in silver in the 1970s. She made partner at Arnold & Porter after moving to a three-day schedule to help raise her second child, and eventually rose to be the head of the firm's derivatives practice. Born was among the first female attorneys to systematically address inequities regarding how the laws treated women. Born and another female lawyer, Marna Tucker, taught what is considered to have been the first "Women and the Law" course at Catholic University's Columbus School of Law.
The class exclusively concerned prejudicial treatment of women under the laws of the United States, past and present. Born and Tucker were surprised to discover that there was no textbook on the issue at the time. Born is also one of the co-founders of the National Women's Law Center. Born also helped rewrite the American Bar Association rules to make it possible for more women and minorities to sit on federal bench. During her long legal career, and into her retirement, Born did much pro bono and other types of volunteer work. She was active in the American Bar Association, the largest professional organization of lawyers in the United States. Initially Born was named a member of the governing council of the ABA's Individual Rights Section, eventually becoming chairperson. Born and Tucker founded the ABA Women's Caucus, the first organization of female lawyers in the ABA. She held several other senior positions in the ABA, including being named the first woman member of the ABA's Standing Committee on the Federal Judiciary. As a member of the Judiciary Committee, Born provided testimony and opinion on persons nominated for federal judgeships. In 1980 she was named chair of the committee. As chair of the committee, Born was invited to address the U.S. Congress regarding the nomination of Judge Sandra Day O'Connor to the U.S. Supreme Court. In 1993, Born's name was floated as a possible candidate for Attorney General of the United States, but Janet Reno was nominated. In July 2009, Nancy Pelosi appointed Brooksley Born as a commissioner to the Financial Crisis Inquiry Commission (FCIC). Born and the OTC derivatives market Born was appointed to the CFTC on April 15, 1994, by President Bill Clinton. Due to litigation against Bankers Trust Company by Procter and Gamble and other corporate clients, Born and her team at the CFTC sought comments on the regulation of over-the-counter derivatives, a first step in the process of writing CFTC regulations to supplement the existing regulations of the Federal Reserve System, the Options Clearing Corporation, and the National Association of Insurance Commissioners. Born was particularly concerned about swaps, financial instruments that are traded over the counter between banks, insurance companies or other funds or companies, and thus have no transparency except to the two counterparties and the counterparties' regulators, if any. CFTC regulation was strenuously opposed by Federal Reserve chairman Alan Greenspan, and by Treasury Secretaries Robert Rubin and Lawrence Summers. On May 7, 1998, former SEC Chairman Arthur Levitt joined Rubin and Greenspan in objecting to the issuance of the CFTC's concept release. Their response dismissed Born's analysis and focused on the hypothetical possibility that CFTC regulation of swaps and other OTC derivative instruments could create a "legal uncertainty" regarding such financial instruments, hypothetically reducing the value of the instruments. They argued that the imposition of regulatory costs would "stifle financial innovation" and encourage financial capital to transfer its transactions offshore. The disagreement between Born and the Executive Office's top economic policy advisors has been described not only as a classic Washington turf war, but also a war of ideologies, insofar as it is possible to argue that Born's actions were consistent with Keynesian and neoclassical economics while Greenspan, Rubin, Levitt, and Summers consistently espoused neoliberal, and neoconservative policies. 
In 1998, a trillion-dollar hedge fund called Long Term Capital Management (LTCM) was near collapse. Using mathematical models to calculate debt risk, LTCM used derivatives to leverage $5 billion into more than $1 trillion, doing business with fifteen of Wall Street's largest financial institutions. The derivative transactions were not regulated, nor were investors able to evaluate LTCM's exposures. Born stated, "I thought that LTCM was exactly what I had been worried about". In the last weekend of September 1998, the President's working group was told that the entire American economy hung in the balance. After intervention by the Federal Reserve, the crisis was averted. In congressional hearings into the crisis, Greenspan acknowledged that language had been introduced into an agriculture bill that would prevent the CFTC from regulating the derivatives which were at the center of the crisis that threatened the US economy. U.S. Representative Maurice Hinchey (D-NY) asked, "How many more failures do you think we'd have to have before some regulation in this area might be appropriate?" In response, Greenspan brushed aside the substance of Born's warnings with the simple assertion that "the degree of supervision of regulation of the over-the-counter derivatives market is quite adequate to maintain a degree of stability in the system". Born's warning was that there wasn't any regulation of them. Born's chief of staff, Michael Greenberger, summed up Greenspan's position this way: "Greenspan didn't believe that fraud was something that needed to be enforced, and he assumed she probably did. And of course, she did." Under heavy pressure from the financial lobby, legislation prohibiting regulation of derivatives by Born's agency was passed by Congress. Born resigned on June 1, 1999. The derivatives market continued to grow yearly throughout both terms of George W. Bush's administration. On September 15, 2008, the bankruptcy of Lehman Brothers forced a broad recognition of a financial crisis in both the US and world capital markets. As Lehman Brothers' failure temporarily reduced financial capital's confidence, a number of newspaper articles and television programs suggested that the failure's possible causes included the conflict between the CFTC and the other regulators (Anthony Faiola, Ellen Nakashima, and Jill Drew, "The Crash: Risk and Regulation - What Went Wrong", The Washington Post, October 15, 2008). Born declined to publicly comment on the unfolding 2008 crisis until March 2009, when she said: "The market grew so enormously, with so little oversight and regulation, that it made the financial crisis much deeper and more pervasive than it otherwise would have been." She also lamented the influence of Wall Street lobbyists on the process and the refusal of regulators to discuss even modest reforms. An October 2009 Frontline documentary titled "The Warning" described Born's thwarted efforts to regulate and bring transparency to the derivatives market, and the continuing opposition thereto. The program concluded with an excerpted interview with Born sounding another warning: "I think we will have continuing danger from these markets and that we will have repeats of the financial crisis -- may differ in details but there will be significant financial downturns and disasters attributed to this regulatory gap, over and over, until we learn from experience." In 2009 Born, along with Sheila Bair of the FDIC, was awarded the John F.
Kennedy Profiles in Courage Award in recognition of the "political courage she demonstrated in sounding early warnings about conditions that contributed" to the 2007–08 financial crisis. According to Caroline Kennedy, "Brooksley Born recognized that the financial security of all Americans was being put at risk by the greed, negligence and opposition of powerful and well connected interests.... The catastrophic financial events of recent months have proved them [Born and Sheila Bair] right." One member of the President's working group had a change of heart about Brooksley Born. SEC Chairman Arthur Levitt stated "I've come to know her as one of the most capable, dedicated, intelligent and committed public servants that I have ever come to know", adding that "I could have done much better. I could have made a difference" in response to her warnings. In 2010, the documentary film Inside Job further alleged that derivatives regulation was ineffective from the Clinton administration on. Along with fellow whistleblower, former IMF Chief Economist Raghuram Rajan, who was also scorned by the economic establishment, Brooksley Born was cited as one of the authorities arguing that financial derivatives increase economic risk.

Personal life
Born is married to Alexander E. Bennett (also retired from Arnold & Porter). She has five adult children: two from a previous marriage to Jacob Landau and three stepchildren. Notably, Born was named a partner at Arnold & Porter while working part-time so she could raise her two young children. When both of her children were school-age, Born returned to practice full-time.

References

External links
Attorney profile at Arnold & Porter
Brooksley Born (2009 Winner) of the Profiles in Courage Award, with acceptance speech transcript and NECN video
Profile at MarketsWiki

Speeches and statements
"Testimony Of Brooksley Born, Chairperson of the CFTC, Concerning The Over-The-Counter Derivatives Market", before the House Committee On Banking And Financial Services, July 24, 1998.
"The Lessons of Long Term Capital Management L.P.", Remarks of Brooksley Born, Chairperson of the CFTC, Chicago-Kent-IIT Commodities Law Institute, Chicago, Illinois, October 15, 1998.
Interview: Brooksley Born for "PBS Frontline: The Warning", PBS (streaming video, 1 hour), October 20, 2009.

Articles
Manuel Roig-Franzia. "Credit Crisis Cassandra: Brooksley Born's Unheeded Warning Is a Rueful Echo 10 Years On", The Washington Post, May 26, 2009.
Taibbi, Matt. "The Great American Bubble Machine", Rolling Stone, July 9–23, 2009.

1940 births; American women lawyers; Arnold & Porter people; Clinton administration personnel; Columbus School of Law faculty; Commodity Futures Trading Commission personnel; Heads of United States federal agencies; Lawyers from San Francisco; Living people; Stanford Law School alumni; 21st-century American women; Stanford University alumni
Who was Brooksley Elizabeth Born's first husband?
Jacob C. Landau.
Margaret Way (born in Brisbane; died in Cleveland, Queensland, Australia) was an Australian writer of romance novels and women's fiction. A prolific author, Way wrote more than 120 novels from 1970 onwards, many through Mills & Boon, a romance imprint of British publisher Harlequin UK Ltd., owned by Harlequin Enterprises.

Biography
Before her marriage, she was a well-known pianist, teacher, vocal coach and accompanist. She began writing when her son, Laurence Way, was born: a friend took a pile of Mills & Boon books to her, she read them all, and she decided that she could also write these types of novels. She began to write and promote her country with her stories set in Australia. She sold her first novels in 1970. Margaret Way lived with her family in her native Brisbane. In 2013, she began to self-publish, releasing her first e-book in mid-July. Margaret died on 10 August 2022 in Cleveland, Queensland.

Bibliography

Single Novels
King Country (1970) Blaze of Silk (1970) The Time of the Jacaranda (1970) Bauhinia Junction (1971) Man from Bahl Bahla (1971) Summer Magic (1971) Return to Belle Amber (1971) Ring of Jade (1972) Copper Moon (1972) Rainbow Bird (1972) Man Like Daintree (1972) Noonfire (1972) Storm Over Mandargi (1973) Wind River (1973) Love Theme (1974) McCabe's Kingdom (1974) Sweet Sundown (1974) Reeds of Honey (1975) Storm Flower (1975) Lesson in Loving (1975) Flight into Yesterday (1976) Red Cliffs of Malpara (1976) Man on Half-moon (1976) Swan's Reach (1976) Mutiny in Paradise (1977) One Way Ticket (1977) Portrait of Jaime (1977) Black Ingo (1977) Awakening Flame (1978) Wild Swan (1978) Ring of Fire (1978) Wake the Sleeping Tiger (1978) Valley of the Moon (1979) White Magnolia (1979) Winds of Heaven (1979) Blue Lotus (1979) Butterfly and the Baron (1979) Golden Puma (1980) Temple of Fire (1980) Lord of the High Valley (1980) Flamingo Park (1980) North of Capricorn (1981) Season for Change (1981) Shadow Dance (1981) McIvor Affair (1981) Home to Morning Star (1981) Broken Rhapsody (1982) The Silver Veil (1982) Spellbound (1982) Hunter's Moon (1982) Girl at Cobalt Creek (1983) No Alternative (1983) House of Memories (1983) Almost a Stranger (1984) A place called Rambulara (1984) Fallen Idol (1984) Hunt the Sun (1985) Eagle's Ridge (1985) The Tiger's Cage (1986) Innocent in Eden (1986) Diamond Valley (1986) Morning Glory (1988) Devil Moon (1988) Mowana Magic (1988) Hungry Heart (1988) Rise of an Eagle (1988) One Fateful Summer (1993) The Carradine Brand (1994) Holding on to Alex (1997) The Australian Heiress (1997) Claiming His Child (1999) The Cattleman's Bride (2000) The Cattle Baron (2001) The Husbands of the Outback (2001) Secrets of the Outback (2002) With This Ring (2003) Innocent Mistress (2004) Cattle Rancher, Convenient Wife (2007) Outback Marriages (2007) Promoted: Nanny to Wife (2007) Cattle Rancher, Secret Son (2007) Genni's Dilemma (2008) Bride At Briar Ridge (2009) Outback Heiress, Surprise Proposal (2009) Cattle Baron, Nanny Needed (2009)

Legends of the Outback Series Mail Order Marriage (1999) The Bridesmaid's Wedding (2000) The English Bride (2000) A Wife at Kimbara (2000)

Koomera Crossing Series Sarah's Baby (2003) Runaway Wife (2003) Outback Bridegroom (2003) Outback Surrender (2003) Home to Eden (2004)

McIvor Sisters Series The Outback Engagement (2005) Marriage at Murraree (2005)

Men Of The Outback Series The Cattleman (2006) The Cattle Baron's Bride (2006) Her Outback Protector (2006) The Horseman (2006)

Outback Marriages Series Outback Man Seeks Wife (2007)
Cattle Rancher, Convenient Wife (2007) Barons of the Outback Series Multi-Author Wedding At Wangaree Valley (2008) Bride At Briar's Ridge (2008) Family Ties Multi-Author Once Burned (1995) Hitched! Multi-Author A Faulkner Possession (1996) Simply the Best Multi-Author Georgia and the Tycoon (1997) The Big Event Multi-Author Beresford's Bride (1998) Guardian Angels Multi-Author Gabriel's Mission (1998) Australians Series Multi-Author 7. Her Outback Man (1998) 17. Master of Maramba (2001) 19. Outback Fire (2001) 22. Mistaken Mistress (2002) 24. Outback Angel (2002) 33. The Australian Tycoon's Proposal (2004) 35. His Heiress Wife (2004) Marrying the Boss Series Multi-Author Boardroom Proposal (1999) Contract Brides Series Multi-Author Strategy for Marriage (2002) Everlasting Love Series Multi-Author Hidden Legacy (2008) Diamond Brides Series Multi-Author The Australian's Society Bride (2008) Collections Summer Magic / Ring of Jade / Noonfire (1981) Wife at Kimbara / Bridesmaid's Wedding (2005) Omnibus in Collaboration Pretty Witch / Without Any Amazement / Storm Over Mandargi (1977) (with Lucy Gillen and Margaret Malcolm) Dear Caliban / Heart of the Eagle / Swans' Reach (1978) (with Jane Donnelly and Elizabeth Graham) The Bonds of Matrimony / Dragon Island / Reeds of Honey (1979) (with Elizabeth Hunter and Henrietta Reid) The Man Outside / Castles in Spain / McCabe's Kingdom (1979) (with Jane Donnelly and Rebecca Stratton) Winds From The Sea / Island of Darkness / Wind River (1979) (with Margaret Pargeter and Rebecca Stratton) Moorland Magic / Tree of Idleness / Sweet Sundown (1980) (with Elizabeth Ashton and Elizabeth Hunter) The Shifting Sands / Portrait of Jaime / Touched by Fire (1982) (with Jane Donnelly and Kay Thorpe) Head of Chancery / Wild Heart / One-Way Ticket (1986) (with Betty Beaty and Doris Smith) Heart of the Scorpion / The Winds of Heaven / Sweet Compulsion (1987) (with Janice Gray and Victoria Woolf) One Brief Sweet Hour / Once More With Feeling / Blue Lotus (1990) (with Jane Arbor and Natalie Sparks) Marry Me Cowboy (1995) (with Janet Dailey, Susan Fox and Anne McAllister) Husbands on Horseback (1996) (with Diana Palmer) Wedlocked (1999) (with Day Leclaire and Anne McAllister) Mistletoe Magic (1999) (with Betty Neels and Rebecca Winters) The Australians (2000) (with Helen Bianchin and Miranda Lee) Weddings Down Under (2001) (with Helen Bianchin and Jessica Hart) Outback Husbands (2002) (with Marion Lennox) The Mother's Day Collection (2002) (with Helen Dickson and Kate Hoffmann) Australian Nights (2003) (with Miranda Lee) Outback Weddings (2003) (with Barbara Hannay) Australian Playboys (2003) (with Helen Bianchin and Marion Lennox) Australian Tycoons (2004) (with Emma Darcy and Marion Lennox) A Mother's Day Gift (2004) (with Anne Ashley and Lucy Monroe) White Wedding (2004) (with Judy Christenberry and Jessica Steele) A Christmas Engagement (2004) (with Sara Craven and Jessica Matthews) A Very Special Mother's Day (2005) (with Anne Herries) All I Want for Christmas... 
(2005) (with Betty Neels and Jessica Steele) The Mills and Boon Collection (2006) (with Caroline Anderson and Penny Jordan) Outback Desire (2006) (with Emma Darcy and Carol Marinelli) To Mum, with Love (2006) (with Rebecca Winters) Australian Heroes (2007) (with Marion Lennox and Fiona McArthur) Tall, Dark and Sexy (2008) (with Caroline Anderson and Helen Bianchin) The Boss's Proposal (2008) (with Jessica Steele and Patricia Thayer) Island Heat / Outback Man Seeks Wife / Prince's Forbidden Virgin / One Night Before Marriage / Their Lost-and-found Family / Single Dad's Marriage Wish (2008) (with Robyn Donald, Marion Lennox, Carol Marinelli, Sarah Mayberry and Anne Oliver) Australian Billionaires (2009) (with Jennie Adams and Amy Andrews) Cattle Baron : Nanny Needed / Bachelor Dad on Her Doorstep (2009) (with Michelle Douglas) External links Margaret Way at Harlequin Enterprises Ltd Australian romantic fiction writers Australian women novelists Living people Year of birth missing (living people) Women romantic fiction writers
Where was Margaret Way born and where did she die?
Margaret Way was born in Brisbane and died in Cleveland, Queensland, Australia.
Paper Info
Title: Is In-hospital Meta-information Useful for Abstractive Discharge Summary Generation?
Publish Date: 10 Mar 2023
Author List: Mamoru Komachi (Tokyo Metropolitan University), Takashi Okumura (Kitami Institute of Technology), Hiromasa Horiguchi (National Hospital Organization), Yuji Matsumoto

Figure
Fig. 1. Example of part of a discharge summary, which is a dummy we created: "The sensitivity of the gram stain for bacterial meningitis is about 60%, and the sensitivity of the culture is not high either. Also, the glucose in the cerebrospinal fluid would have been slightly lower. Although no definitive diagnosis could be made, bacterial meningitis was the most suspicious disease. The causative organism was assumed to be MRSA, and vancomycin and meropenem (meningitis dose) were used to cover a wide range of enteric bacteria."
Fig. 2. Overview of our proposed method. A new feature embedding layer encoding hospital, physician, disease, and length of stay is added to the standard transformer architecture. The figure shows an example of hospital embedding.

Table
Statistics of our data for the experiment.
Results of summarization models with different meta-information. The best results are highlighted in bold. Each score is the average of three models with different seeds. BS and BR indicate BERTScore and BLEURT, respectively.
Statistics on the number of cases handled by physicians. C/P denotes Cases/Physician, which indicates how many cases an individual physician has.

Method of Grouping Physician IDs
A most naive method of mapping physician IDs to features is without any grouping process. The data contains 4,846 physicians, so |M| was set to 4,846. However, it caused our model's training to be unstable. This might be due to the many physician IDs appearing for the first time at test time.

Abstract
During the patient's hospitalization, the physician must record daily observations of the patient and summarize them into a brief document called a "discharge summary" when the patient is discharged. Automated generation of discharge summaries can greatly relieve the physicians' burden, and this task has been addressed recently in the research community. Most previous studies of discharge summary generation using the sequence-to-sequence architecture focus on only inpatient notes for input. However, electronic health records (EHR) also have rich structured metadata (e.g., hospital, physician, disease, length of stay, etc.) that might be useful. This paper investigates the effectiveness of medical meta-information for summarization tasks. We obtain four types of meta-information from the EHR systems and encode each type of meta-information into a sequence-to-sequence model. Using Japanese EHRs, meta-information encoded models increased ROUGE-1 by up to 4.45 points and BERTScore by 3.77 points over the vanilla Longformer. Also, we found that the encoded meta-information improves the precision of its related terms in the outputs. Our results showed the benefit of the use of medical meta-information.

INTRODUCTION
Clinical notes are written daily by physicians from their consults and are used for their own decision-making or coordination of treatment. They contain a large amount of important data for machine learning, such as conditions, laboratory tests, diagnoses, procedures, and treatments. While invaluable to physicians and researchers, the paperwork is burdensome for physicians. Discharge summaries, a subset of these, also play a crucial role in patient care and are used to share information between hospitals and physicians (see an example in Figure 1). A discharge summary is created by the physician as a summary of the notes written during hospitalization, at the time of the patient's discharge, and writing it is known to be very time-consuming. Researchers have begun to apply automatic summarization techniques to address this problem. Previous studies used extractive or abstractive summarization methods, but most of them focused on only progress notes for inputs.
Properly summarizing an admission of a patient is a quite complex task, and it requires various meta-information such as the patient's age, gender, vital signs, laboratory values, and background on specific diseases. Therefore, discharge summary generation needs more medical meta-information than similar but narrower tasks such as radiology report generation. However, what kind of meta-information is important for summarization has not been investigated, even though it is critical not only for future research on medical summarization but also for the policy of data collection infrastructure. In this paper, we first reveal the effects of meta-information on neural abstractive summarization of admissions. Our model is based on an encoder-decoder transformer with an additional feature embedding layer in the encoder (Figure 2). Hospital, physician, disease, and length of stay are used as meta-information, and each feature is embedded in the vector space. For experiments, we collect progress notes, discharge summaries, and coded information from the electronic health record systems, which are managed by the largest multi-hospital organization in Japan. Our main contributions are as follows:
• We found that a transformer encoding meta-information generates higher-quality summaries than the vanilla one, and clarified the benefit of using meta-information for medical summarization tasks.
• We found that a model encoding disease information can produce proper disease and symptom words following the source. In addition, we found that the model using physician and hospital information can generate symbols that are commonly written in the summary.
• We are the first to apply the abstractive summarization method to generate Japanese discharge summaries.
In studies of the summarization of medical documents, it is common to retrieve key information such as diseases, examination results, or medications from EHRs. Other research more similar to our study aimed to help physicians get the point of medical documents quickly by generating a few key sentences. Studies generating contextualized summaries can be categorized by the type of model inputs and architectures. Some studies produced a whole discharge summary using structured data for input, while others produced a whole discharge summary from free-form inpatient records. The free-form data is more challenging since it is noisier than structured data. When inputting free-form data, extractive summarization methods, which extract sentences from the source, are commonly used. On the other hand, encoder-decoder models have been used for abstractive summarization in a limited number of studies. The various issues in the abstractive generation of discharge summaries remain to be studied in the future. Studies using medical meta-information have long been conducted for many tasks. In abstractive summarization of discharge summaries, a previous study developed a model incorporating the similarity of progress notes and information about the record author.
The authors of that study presented the idea of integrating meta-information into an abstractive summarization model for medical documents, but they did not reveal how meta-information would affect the quality of the summaries. Our method is based on the encoder-decoder transformer model. The transformer model is known for its high performance and has been widely used in recent studies, and thus it is suitable for our purpose. As shown in Figure 2, the standard input to a transformer's encoder is created from a token sequence T = [t_0, t_1, ..., t_i] and a position sequence P = [p_0, p_1, ..., p_i], where i is the maximum input length. The token and position sequences are converted into token embeddings E_T and positional embeddings E_P by looking up the vocabulary tables. The sum of E_T and E_P is input into the model. In this paper, we attempt to encode meta-information into feature embeddings. We follow the segment embeddings of BERT and the language embeddings of XLM, which provide additional information to the model. It is not a new idea, but it is suitable for our validation. Our method is formulated as follows. Let M be the feature type, M ∈ {Vanilla, Hospital, Physician, Disease, Length of stay}, since we set five types of features. The feature embedding E_M is created by looking up a feature table with the feature value m_j (e.g., a physician ID or a disease code), where |M| is the maximum number of distinct values the feature can take. In our study, |M| is set to four different values depending on the feature. Specifically, they are as follows. a) Hospital: As shown in the data statistics table, the data includes records from five hospitals. They were obtained mechanically from the EHR system. b) Physician: Physicians are also managed by IDs in the EHR systems. We hashed the physician IDs into 485 groups containing 10 people each. Specifically, as a naive strategy, we shuffled and listed the cases within each hospital, and hashed them into groups in the order of appearance of the physician IDs. Thus, each group also carries information about the associated hospital. The reason for employing a grouping strategy is described in Appendix A. c) Disease: Two types of disease information exist in our EHRs: disease names and disease codes called ICD-10. We did not use any disease names in the inputs for our experiment. Instead, we encoded diseases with the first three letters of the ICD-10 code, because they represent the higher-level concept well. The initial three letters of an ICD-10 code are arranged in the order of an alphabetic letter, a digit, and a digit, so there are a total of 2,600 ways to encode a disease. In our data, some ICD-10 codes were missing, although all disease names were systematically obtained from the EHR system. For such cases, we converted the disease names into ICD-10 codes using MeCab with the J-MeDic (MANBYO 201905) dictionary. Also, diseases can be divided into primary and secondary diseases, but we only deal with the primary diseases. d) Length of stay: The length of stay can be obtained mechanically from the EHR system, and the maximum value was set to 1,000 days. We set |M| for vanilla, hospital, physician, disease, and length of stay to 1, 5, 485, 2,600, and 1,000, respectively. The vanilla embedding is prepared as the baseline in our experiment and to equalize the total number of parameters with the other models. The input to our model is the sum of E_T, E_P, and E_M. We also prepare an extra model with all features for our experiments; a brief code sketch of the feature-embedding layer follows.
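As an unofficial illustration of this feature-embedding layer, the following minimal PyTorch-style sketch builds the encoder input as the sum E_T + E_P + E_M. The class name, the embedding dimension, and the choice to broadcast the single feature embedding over every token position are our assumptions rather than details reported in the paper; the vocabulary size and maximum input length follow the values stated in the experiments.

import torch
import torch.nn as nn

class MetaEncoderInput(nn.Module):
    """Encoder input = token embeddings (E_T) + positional embeddings (E_P) + one meta-feature embedding (E_M)."""
    def __init__(self, vocab_size=25002, max_len=1024, d_model=768, num_feature_values=2600):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)          # E_T lookup table
        self.pos_emb = nn.Embedding(max_len, d_model)               # E_P lookup table
        self.feat_emb = nn.Embedding(num_feature_values, d_model)   # E_M lookup table with |M| rows (2,600 for disease)

    def forward(self, token_ids, feature_id):
        # token_ids: (batch, seq_len) WordPiece IDs; feature_id: (batch,) e.g. an ICD-10 bucket or hospital index.
        batch, seq_len = token_ids.shape
        positions = torch.arange(seq_len, device=token_ids.device).unsqueeze(0).expand(batch, seq_len)
        e_t = self.token_emb(token_ids)
        e_p = self.pos_emb(positions)
        e_m = self.feat_emb(feature_id).unsqueeze(1).expand(-1, seq_len, -1)  # broadcast E_M over all positions
        return e_t + e_p + e_m  # summed embeddings fed into the Longformer encoder layers

The resulting tensor replaces the usual token-plus-position embedding as the encoder input; for the all-features variant, four such feature embeddings would presumably be summed in the same way.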
The extra all-features model adds all four feature embeddings (hospital, physician, disease, and length of stay) to the encoder input.

Datasets and Metrics
We evaluated our proposed method on a subset of data from the National Hospital Organization (NHO), the largest multi-institutional organization in Japan. The statistics of our data, which include 24,630 cases collected from five hospitals, are shown in the data statistics table. Each case includes a discharge summary and progress notes for the days of stay. The data are randomly split into 22,630, 1,000, and 1,000 cases for train, validation, and test, respectively. Summarization performance is reported in ROUGE-1, ROUGE-2, ROUGE-L, and BERTScore in terms of F1. In addition, we also employed BLEURT, which models human judgment.

Architectures and Hyperparameters
Due to our hardware constraints, we need a model that is computationally efficient, so we employed the Longformer instead of the conventional transformer. The Longformer can process long inputs efficiently because its windowed attention scales linearly with the sequence length. In our model, the number of layers, window size, dilation, input sequence length, output sequence length, batch size, learning rate, and number of warmup steps are 8, 256, 1, 1024, 256, 4, 3e-5, and 1K, respectively. Other hyperparameters are the same as in the original Longformer, except that the maximum number of epochs is not fixed; the best epoch is selected for each training run using the validation data, based on ROUGE-1. Also, the original Longformer imports pre-trained BART parameters as initial values, but we do not use a pre-trained Japanese BART in this study. We used three GeForce RTX 2080 Ti GPUs for our experiments. Our vocabulary for preparing input to the Longformer is taken from UTH-BERT, which is pre-trained on Japanese clinical records. Since the vocabulary of UTH-BERT is trained by WordPiece, we also tokenize our data with WordPiece. However, the vocabulary does not include white space and line breaks, which therefore cannot be handled, so we add those two tokens to the vocabulary, resulting in a total size of 25,002. The vocabulary has all tokens in full-width characters, so we normalized full-width characters by converting all alphanumeric and symbolic characters to half-width for byte fallback.

From the results, we found that all the models with encoded medical meta-information perform better in ROUGE-1, ROUGE-L, and BLEURT than the vanilla Longformer. However, in BERTScore, only the hospital and disease models outperform the vanilla model. Specifically, disease information is most effective, improving ROUGE-1, ROUGE-2, ROUGE-L, BERTScore, and BLEURT by 4.45, 0.73, 3.12, 3.77, and 0.21 points over the vanilla model, respectively. This seems to be because disease information and the ICD-10 ontology efficiently cluster groups with similar representations. In contrast, in ROUGE-2 and ROUGE-L, the model with physician embedding is inferior to the vanilla model. This seems to be a negative effect of grouping physicians without any consideration of their relevance. It would be better to cluster them by department, physician attributes, similarity of progress notes, etc. Regarding the low ROUGE-2 scores of all models, a previous study using an English dataset also reported a low ROUGE-2 score of about 5%, which may indicate an inherent difficulty in discharge summary generation. In BERTScore, the models with the physician and the length-of-stay features did not reach the performance of the vanilla model, suggesting that their outputs are semantically inferior. The model with all features performed the lowest of all models in BERTScore.
The reason for the low score of the model with all features seems to be that its number of feature-embedding parameters was four times larger than that of the models with an individual feature, and the amount of training data was insufficient. In BLEURT, all models with meta-information outperform the vanilla model, which suggests that their outputs are more natural to humans. To analyze the influence of the encoded meta-information on the outputs, we evaluate the precision of the generated text. Specifically, we measure the probability that the generated words are included in the gold summary, to investigate whether proper words are generated. Some previous studies on faithfulness, which also analyze the output of summarization, have employed words or entities. In this study, we focused on words, not entities, because we wanted to visualize expressions that are not only nouns. The words were segmented by MeCab with the J-MeDic dictionary. For each segmented word, the numeral and symbol labels were assigned as parts of speech by MeCab, the morphological analyzer, while the disease and symptom labels were assigned by the J-MeDic dictionary. The results, shown in the word-precision figure, indicate that the encoded disease information leads to generating more proper disease and symptom words. This indicates that the model successfully learns disease-related expressions from the meta-information. The encoded hospital or physician information also improved the precision of symbol generation. This suggests that different hospitals and physicians have different description habits (e.g., bullet points such as "•", "*" and "-", punctuation such as "。" and "."), which can be grouped by meta-information.

Fig. The precisions of words in the generated summaries. The vertical axis shows the probability that the words exist in the gold summary.

In this paper, we conducted a discharge summary generation experiment by adding four types of information to the Longformer and verified the impact of the meta-information. The results showed that all four types of information exceeded the performance of the vanilla Longformer model, with the highest performance achieved by encoding disease information. We found that meta-information is useful for abstractive summarization of discharge summaries. Our limitations are that we used Japanese EHRs only, tested a limited number of features, and did not perform human evaluation. As for the efficacy of the meta-information, we believe that our results are applicable to non-Japanese data, but this is left as future work. Other meta-information may be worth verifying, such as the patient's gender, age, race, religion, and the EHR system used. It is hard to collect a large amount of medical information and process it into meta-information, so we may need to develop a robust and flexible research infrastructure to conduct a larger-scale cross-sectional study in the future. In the discharge summary generation task, which demands a high level of expertise, human evaluation requires a great deal of physicians' effort, which is an unrealistically high cost. This is a general issue in tasks dealing with medical documents, and this study also could not perform human evaluation. In this research, informed consent and patient privacy are ensured in the following manner. Notices about the policy and the EHR data usage are posted at the hospitals. Patients who disagree with the policies can request to opt out and are excluded from the archive. The same procedure applies to minors and their parents.
To conduct research on the archive, researchers must submit their research proposals to the institutional review board. After the proposal is approved, the data is anonymized to build a dataset for analysis. The data is accessible only in a secured room at the NHO headquarters, and only statistics are brought out of the secured room, for the protection of patients' privacy. In the present research, the analysis was conducted under the IRB approval (IRB Approval No.: Wako3 2019-22) of the Institute of Physical and Chemical Research (RIKEN), Japan, which has a collaboration agreement with the National Hospital Organization. This data is not publicly available due to privacy restrictions.

The table of cases per physician shows the detailed number of cases handled by physicians. In all hospitals, there is a large difference between the median and the maximum of cases/physician. This indicates that a few physicians handle a large number of cases and many physicians handle fewer cases. It is impossible to avoid physician IDs first seen at test time without some process that averages the number of cases a physician holds (a short illustrative sketch of such a grouping is given below).
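As an informal illustration of the naive bucket-style grouping described above (shuffle the cases within each hospital, then assign physicians to groups of 10 in order of first appearance), a minimal sketch follows; the function name, the assumed data layout, and the fixed random seed are our own choices rather than details from the paper.

from collections import defaultdict
import random

def group_physicians(cases, group_size=10, seed=0):
    """Map each physician ID to a group index, in order of first appearance within each hospital.

    cases: iterable of (hospital_id, physician_id) pairs, one per case.
    With 4,846 physicians and group_size=10 this yields 485 groups.
    """
    rng = random.Random(seed)
    by_hospital = defaultdict(list)
    for hospital_id, physician_id in cases:
        by_hospital[hospital_id].append(physician_id)

    group_of = {}
    next_slot = 0  # counts first appearances across all hospitals
    for hospital_id in sorted(by_hospital):
        hospital_cases = by_hospital[hospital_id]
        rng.shuffle(hospital_cases)              # shuffle the listed cases within the hospital
        for physician_id in hospital_cases:      # assign groups in order of appearance
            if physician_id not in group_of:
                group_of[physician_id] = next_slot // group_size
                next_slot += 1
    return group_of

Physicians that appear only at test time would still be missing from this mapping, which is exactly the failure mode discussed above.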
What is the future direction mentioned in the conclusion?
Verifying other meta-information such as patient's gender, age, race, etc.
\section{Introduction} \label{sec:introduction} Probabilistic models have proven to be very useful in a lot of applications in signal processing where signal estimation is needed \cite{rabiner1989tutorial,arulampalam2002tutorial,ji2008bayesian}. Some of their advantages are that 1) they force the designer to specify all the assumptions of the model, 2) they provide a clear separation between the model and the algorithm used to solve it, and 3) they usually provide some measure of uncertainty about the estimation. On the other hand, adaptive filtering is a standard approach in estimation problems when the input is received as a stream of data that is potentially non-stationary. This approach is widely understood and applied to several problems such as echo cancellation \cite{gilloire1992adaptive}, noise cancellation \cite{nelson1991active}, and channel equalization \cite{falconer2002frequency}. Although these two approaches share some underlying relations, there are very few connections in the literature. The first important attempt in the signal processing community to relate these two fields was the connection between a linear Gaussian state-space model (i.e. Kalman filter) and the RLS filter, by Sayed and Kailath \cite{sayed1994state} and then by Haykin \emph{et al.} \cite{haykin1997adaptive}. The RLS adaptive filtering algorithm emerges naturally when one defines a particular state-space model (SSM) and then performs exact inference in that model. This approach was later exploited in \cite{van2012kernel} to design a kernel RLS algorithm based on Gaussian processes. A first attempt to approximate the LMS filter from a probabilistic perspective was presented in \cite{park2014probabilistic}, focusing on a kernel-based implementation. The algorithm of \cite{park2014probabilistic} makes use of a Maximum a Posteriori (MAP) estimate as an approximation for the predictive step. However, this approximation does not preserve the estimate of the uncertainty in each step, therefore degrading the performance of the algorithm. In this work, we provide a similar connection between state-space models and least-mean-squares (LMS). Our approach is based on approximating the posterior distribution with an isotropic Gaussian distribution. We show how the computation of this approximated posterior leads to a linear-complexity algorithm, comparable to the standard LMS. Similar approaches have already been developed for a variety of problems such as channel equalization using recurrent RBF neural networks \cite{cid1994recurrent}, or Bayesian forecasting \cite{harrison1999bayesian}. Here, we show the usefulness of this probabilistic approach for adaptive filtering. The probabilistic perspective we adopt throughout this work presents two main advantages. Firstly, a novel LMS algorithm with adaptable step size emerges naturally with this approach, making it suitable for both stationary and non-stationary environments. The proposed algorithm has less free parameters than previous LMS algorithms with variable step size \cite{kwong1992variable,aboulnasr1997robust,shin2004variable}, and its parameters are easier to be tuned w.r.t. these algorithms and standard LMS. Secondly, the use of a probabilistic model provides us with an estimate of the error variance, which is useful in many applications. Experiments with simulated and real data show the advantages of the presented approach with respect to previous works. 
However, we remark that the main contribution of this paper is that it opens the door to introduce more Bayesian machine learning techniques, such as variational inference and Monte Carlo sampling methods \cite{barber2012bayesian}, to adaptive filtering.\\ \section{Probabilistic Model} Throughout this work, we assume the observation model to be linear-Gaussian with the following distribution, \begin{equation} p(y_k|{\bf w}_k) = \mathcal{N}(y_k;{\bf x}_k^T {\bf w}_k , \sigma_n^2), \label{eq:mess_eq} \end{equation} where $\sigma_n^2$ is the variance of the observation noise, ${\bf x}_k$ is the regression vector and ${\bf w}_k$ is the parameter vector to be sequentially estimated, both $M$-dimensional column vectors. In a non-stationary scenario, ${\bf w}_k$ follows a dynamic process. In particular, we consider a diffusion process (random-walk model) with variance $\sigma_d^2$ for this parameter vector: \begin{equation} p({\bf w}_k|{\bf w}_{k-1})= \mathcal{N}({\bf w}_k;{\bf w}_{k-1}, \sigma_d^2 {\bf I}), \label{eq:trans_eq} \end{equation} where $\bf I$ denotes the identity matrix. In order to initiate the recursion, we assume the following prior distribution on ${\bf w}_k$ \begin{equation} p({\bf w}_0)= \mathcal{N}({\bf w}_0;0, \sigma_d^2{\bf I}).\nonumber \end{equation} \section{Exact inference in this model: Revisiting the RLS filter} Given the described probabilistic SSM, we would like to infer the posterior probability distribution $p({\bf w}_k|y_{1:k})$. Since all involved distributions are Gaussian, one can perform exact inference, leveraging the probability rules in a straightforward manner. The resulting probability distribution is \begin{equation} p({\bf w}_k|y_{1:k}) = \mathcal{N}({\bf w}_k;{\bf\boldsymbol\mu}_{k}, \boldsymbol\Sigma_{k}), \nonumber \end{equation} in which the mean vector ${\bf\boldsymbol\mu}_{k}$ is given by \begin{equation} {\bf\boldsymbol\mu}_k = {\bf\boldsymbol\mu}_{k-1} + {\bf K}_k (y_k - {\bf x}_k^T {\bf\boldsymbol\mu}_{k-1}){\bf x}_k, \nonumber \end{equation} where we have introduced the auxiliary variable \begin{equation} {\bf K}_k = \frac{ \left(\boldsymbol\Sigma_{k-1} + \sigma_d^2 {\bf I}\right)}{{\bf x}_k^T \left(\boldsymbol\Sigma_{k-1} + \sigma_d^2 {\bf I}\right) {\bf x}_k + \sigma_n^2}, \nonumber \end{equation} and the covariance matrix $\boldsymbol\Sigma_k$ is obtained as \begin{equation} \boldsymbol\Sigma_k = \left( {\bf I} - {\bf K}_k{\bf x}_k {\bf x}_k^T \right) ( \boldsymbol\Sigma_{k-1} +\sigma_d^2), \nonumber \end{equation} Note that the mode of $p({\bf w}_k|y_{1:k})$, i.e. the maximum-a-posteriori estimate (MAP), coincides with the RLS adaptive rule \begin{equation} {{\bf w}}_k^{(RLS)} = {{\bf w}}_{k-1}^{(RLS)} + {\bf K}_k (y_k - {\bf x}_k^T {{\bf w}}_{k-1}^{(RLS)}){\bf x}_k . \label{eq:prob_rls} \end{equation} This rule is similar to the one introduced in \cite{haykin1997adaptive}. Finally, note that the covariance matrix $\boldsymbol\Sigma_k$ is a measure of the uncertainty of the estimate ${\bf w}_k$ conditioned on the observed data $y_{1:k}$. Nevertheless, for many applications a single scalar summarizing the variance of the estimate could prove to be sufficiently useful. In the next section, we show how such a scalar is obtained naturally when $p({\bf w}_k|y_{1:k})$ is approximated with an isotropic Gaussian distribution. We also show that this approximation leads to an LMS-like estimation. 
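For illustration purposes, the exact-inference recursion above can be written in a few lines of NumPy. The sketch below is ours (it is not code from the cited works), and the toy usage example relies on assumed values for the noise and prior parameters:
\begin{verbatim}
import numpy as np

def prob_rls_step(mu, Sigma, x, y, sigma_n2, sigma_d2):
    # Prediction: the random-walk transition inflates the covariance.
    Sigma_pred = Sigma + sigma_d2 * np.eye(len(mu))
    # Update: Kalman-style correction (K_k, mu_k and Sigma_k above).
    K = Sigma_pred / (x @ Sigma_pred @ x + sigma_n2)
    mu_new = mu + (K @ x) * (y - x @ mu)
    Sigma_new = (np.eye(len(mu)) - np.outer(K @ x, x)) @ Sigma_pred
    return mu_new, Sigma_new

# Toy usage with synthetic stationary data (sigma_d2 = 0 in the transitions).
rng = np.random.default_rng(0)
M = 4
w_true = rng.standard_normal(M)
mu, Sigma = np.zeros(M), np.eye(M)   # prior mean and covariance (assumed)
for _ in range(200):
    x = rng.standard_normal(M)
    y = x @ w_true + 0.1 * rng.standard_normal()
    mu, Sigma = prob_rls_step(mu, Sigma, x, y, sigma_n2=0.01, sigma_d2=0.0)
\end{verbatim}
The mode of the resulting posterior (the vector \texttt{mu} above) follows the RLS-like rule of \eqref{eq:prob_rls}.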
\section{Approximating the posterior distribution: LMS filter } The proposed approach consists in approximating the posterior distribution $p({\bf w}_k|y_{1:k})$, in general a multivariate Gaussian distribution with a full covariance matrix, by an isotropic spherical Gaussian distribution \begin{equation} \label{eq:aprox_post} \hat{p}({\bf w}_{k}|y_{1:k})=\mathcal{N}({\bf w}_{k};{\bf \hat{\boldsymbol\mu}}_{k}, \hat{\sigma}_{k}^2 {\bf I} ). \end{equation} In order to estimate the mean and covariance of the approximate distribution $\hat{p}({\bf w}_{k}|y_{1:k})$, we propose to select those that minimize the Kullback-Leibler divergence with respect to the original distribution, i.e., \begin{equation} \{\hat{\boldsymbol\mu}_k,\hat{\sigma}_k\}=\arg \displaystyle{ \min_{\hat{\boldsymbol\mu}_k,\hat{\sigma}_k}} \{ D_{KL}\left(p({\bf w}_{k}|y_{1:k}))\| \hat{p}({\bf w}_{k}|y_{1:k})\right) \}. \nonumber \end{equation} The derivation of the corresponding minimization problem can be found in Appendix A. In particular, the optimal mean and the covariance are found as \begin{equation} {\hat{\boldsymbol\mu}}_{k} = {\boldsymbol\mu}_{k};~~~~~~ \hat{\sigma}_{k}^2 = \frac{{\sf Tr}\{ \boldsymbol\Sigma_k\} }{M}. \label{eq:sigma_hat} \end{equation} We now show that by using \eqref{eq:aprox_post} in the recursive predictive and filtering expressions we obtain an LMS-like adaptive rule. First, let us assume that we have an approximate posterior distribution at $k-1$, $\hat{p}({\bf w}_{k-1}|y_{1:k-1}) = \mathcal{N}({\bf w}_{k-1};\hat{\bf\boldsymbol\mu}_{k-1}, \hat{\sigma}_{k-1}^2 {\bf I} )$. Since all involved distributions are Gaussian, the predictive distribution is obtained as % \begin{eqnarray} \hat{p}({\bf w}_k|y_{1:k-1}) &=& \int p({\bf w}_k|{\bf w}_{k-1}) \hat{p}({\bf w}_{k-1}|y_{1:k-1}) d{\bf w}_{k-1} \nonumber\\ &=& \mathcal{N}({\bf w}_k;{\bf\boldsymbol\mu}_{k|k-1}, \boldsymbol\Sigma_{k|k-1}), \label{eq:approx_pred} \end{eqnarray} where the mean vector and covariance matrix are given by \begin{eqnarray} \hat{\bf\boldsymbol\mu}_{k|k-1} &=& \hat{\bf\boldsymbol\mu}_{k-1} \nonumber \\ \hat{\boldsymbol\Sigma}_{k|k-1} &=& (\hat{\sigma}_{k-1}^2 + \sigma_d^2 ){\bf I}\nonumber. \end{eqnarray} From \eqref{eq:approx_pred}, the posterior distribution at time $k$ can be computed using Bayes' Theorem and standard Gaussian manipulations (see for instance \cite[Ch. 4]{murphy2012machine}). Then, we approximate the posterior $p({\bf w}_k|y_{1:k})$ with an isotropic Gaussian, \begin{equation} \hat{p}({\bf w}_k|y_{1:k}) = \mathcal{N}({\bf w}_k ; {\hat{\boldsymbol\mu}}_{k}, \hat{\sigma}_k^2 {\bf I} ),\nonumber \end{equation} where \begin{eqnarray} {\hat{\boldsymbol\mu}}_{k} &= & {\hat{\boldsymbol\mu}}_{k-1}+ \frac{ (\hat{\sigma}_{k-1}^2+ \sigma_d^2) }{(\hat{\sigma}_{k-1}^2+ \sigma_d^2) \|{\bf x}_k\|^2 + \sigma_n^2} (y_k - {\bf x}_k^T {\hat{\boldsymbol\mu}}_{k-1}){\bf x}_k \nonumber \\ &=& {\hat{\boldsymbol\mu}}_{k-1}+ \eta_k (y_k - {\bf x}_k^T {\hat{\boldsymbol\mu}}_{k-1}){\bf x}_k . \label{eq:prob_lms} \end{eqnarray} Note that, instead of a gain matrix ${\bf K}_k$ as in Eq.~\eqref{eq:prob_rls}, we now have a scalar gain $\eta_k$ that operates as a variable step size. 
Finally, to obtain the posterior variance, which is our measure of uncertainty, we apply \eqref{eq:sigma_hat} and the trick ${\sf Tr}\{{\bf x}_k{\bf x}_k^T\}= {\bf x}_k^T{\bf x}_k= \|{\bf x}_k \|^2$, \begin{eqnarray} \hat{\sigma}_k^2 &=& \frac{{\sf Tr}(\boldsymbol\Sigma_k)}{M} \\ &=& \frac{1}{M}{\sf Tr}\left\{ \left( {\bf I} - \eta_k {\bf x}_k {\bf x}_k^T \right) (\hat{\sigma}_{k-1}^2 +\sigma_d^2)\right\} \\ &=& \left(1 - \frac{\eta_k \|{\bf x}_k\|^2}{M}\right)(\hat{\sigma}_{k-1}^2 +\sigma_d^2). \label{eq:sig_k} \end{eqnarray} If MAP estimation is performed, we obtain an adaptable step-size LMS estimation \begin{equation} {\bf w}_{k}^{(LMS)} = {\bf w}_{k-1}^{(LMS)} + \eta_k (y_k - {\bf x}_k^T {\bf w}_{k-1}^{(LMS)}){\bf x}_k, \label{eq:lms} \end{equation} with \begin{equation} \eta_k = \frac{ (\hat{\sigma}_{k-1}^2+ \sigma_d^2) }{(\hat{\sigma}_{k-1}^2+ \sigma_d^2) \|{\bf x}_k\|^2 + \sigma_n^2}.\nonumber \end{equation} At this point, several interesting remarks can be made: \begin{itemize} \item The adaptive rule \eqref{eq:lms} has linear complexity since it does not require us to compute the full matrix $\boldsymbol\Sigma_k$. \item For a stationary model, we have $\sigma_d^2=0$ in \eqref{eq:prob_lms} and \eqref{eq:sig_k}. In this case, the algorithm remains valid and both the step size and the error variance, $\hat{\sigma}_{k}$, vanish over time $k$. \item Finally, the proposed adaptable step-size LMS has only two parameters, $\sigma_d^2$ and $\sigma_n^2$, (and only one, $\sigma_n^2$, in stationary scenarios) in contrast to other variable step-size algorithms \cite{kwong1992variable,aboulnasr1997robust,shin2004variable}. More interestingly, both $\sigma_d^2$ and $\sigma_n^2$ have a clear underlying physical meaning, and they can be estimated in many cases. We will comment more about this in the next section. \end{itemize} \section{Experiments} \label{sec:experiments} We evaluate the performance of the proposed algorithm in both stationary and tracking experiments. In the first experiment, we estimate a fixed vector ${\bf w}^{o}$ of dimension $M=50$. The entries of the vector are independently and uniformly chosen in the range $[-1,1]$. Then, the vector is normalized so that $\|{\bf w}^o\|=1$. Regressors $\boldsymbol{x}_{k}$ are zero-mean Gaussian vectors with identity covariance matrix. The additive noise variance is such that the SNR is $20$ dB. We compare our algorithm with standard RLS and three other LMS-based algorithms: LMS, NLMS \cite{sayed2008adaptive}, VSS-LMS \cite{shin2004variable}.\footnote{The used parameters for each algorithm are: for RLS $\lambda=1$, $\epsilon^{-1}=0.01$; for LMS $\mu=0.01$; for NLMS $\mu=0.5$; and for VSS-LMS $\mu_{max}=1$, $\alpha=0.95$, $C=1e-4$.} The probabilistic LMS algorithm in \cite{park2014probabilistic} is not simulated because it is not suitable for stationary environments. In stationary environments, the proposed algorithm has only one parameter, $\sigma^2_n$. We simulate both the scenario where we have perfectly knowledge of the amount of noise (probLMS1) and the case where the value $\sigma^2_n$ is $100$ times smaller than the actual value (probLMS2). The Mean-Square Deviation (${\sf MSD} = {\mathbb E} \| {\bf w}_0 - {\bf w}_k \|^2$), averaged out over $50$ independent simulations, is presented in Fig. \ref{fig:msd_statationary}. 
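A minimal NumPy sketch of the adaptive rule \eqref{eq:lms} together with the uncertainty update \eqref{eq:sig_k} is given below; it is an illustration with our own variable names, not the exact code used to produce the reported results:
\begin{verbatim}
import numpy as np

def prob_lms_step(mu, s2, x, y, sigma_n2, sigma_d2):
    # s2 is the scalar posterior variance (the uncertainty estimate).
    M = len(mu)
    s2_pred = s2 + sigma_d2                          # predictive variance
    eta = s2_pred / (s2_pred * (x @ x) + sigma_n2)   # adaptive step size eta_k
    mu_new = mu + eta * (y - x @ mu) * x             # LMS-like mean update
    s2_new = (1.0 - eta * (x @ x) / M) * s2_pred     # scalar variance update
    return mu_new, s2_new
\end{verbatim}
In the stationary case ($\sigma_d^2=0$), both the step size and the variance produced by this recursion vanish over time, as discussed above.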
\begin{figure}[htb] \centering \begin{minipage}[b]{\linewidth} \centering \centerline{\includegraphics[width=\textwidth]{results_stationary_MSD}} \end{minipage} \caption{Performance in terms of MSD of probabilistic LMS with both optimal (probLMS1) and suboptimal (probLMS2) compared to LMS, NLMS, VS-LMS, and RLS.} \label{fig:msd_statationary} \end{figure} The performance of probabilistic LMS is close to RLS (obviously at a much lower computational cost) and largely outperforms previous variable step-size LMS algorithms proposed in the literature. Note that, when the model is stationary, i.e. $\sigma^2_d=0$ in \eqref{eq:trans_eq}, both the uncertainty $\hat{\sigma}^2_k$, and the adaptive step size $\eta_k$, vanish over time. This implies that the error tends to zero when $k$ goes to infinity. Fig. \ref{fig:msd_statationary} also shows that the proposed approach is not very sensitive to a bad choice of its only parameter, as demonstrated by the good results of probLMS2, which uses a $\sigma^2_n$ that is $100$ times smaller than the optimal value. \begin{figure}[htb] \centering \begin{minipage}[b]{\linewidth} \centering \centerline{\includegraphics[width=\textwidth]{fig2_final}} \end{minipage} \caption{Real part of one coefficient of the measured and estimated channel in experiment two. The shaded area represents two standard deviations from the prediction {(the mean of the posterior distribution)}.} \label{fig_2} \end{figure} \begin{table}[ht] \begin{footnotesize} \setlength{\tabcolsep}{2pt} \def1.5mm{1.5mm} \begin{center} \begin{tabular}{|l@{\hspace{1.5mm}}|c@{\hspace{1.5mm}}|c@{\hspace{1.5mm}}|c@{\hspace{1.5mm}}|c@{\hspace{1.5mm}}|c@{\hspace{1.5mm}}|c@{\hspace{1.5mm}}|} \hline Method & LMS & NLMS & LMS-2013 & VSSNLMS & probLMS & RLS \\ \hline \hline MSD (dB) &-28.45 &-21.07 &-14.36 &-26.90 &-28.36 &-25.97\\ \hline \end{tabular} \end{center} \caption{Steady-state MSD of the different algorithms for the tracking of a real MISO channel.} \label{tab:table_MSD} \end{footnotesize} \end{table} \newpage In a second experiment, we test the tracking capabilities of the proposed algorithm with {real} data of a wireless MISO channel acquired in a realistic indoor scenario. More details on the setup can be found in \cite{gutierrez2011frequency}. Fig. \ref{fig_2} shows the real part of one of the channels, and the estimate of the proposed algorithm. The shaded area represents the estimated uncertainty for each prediction, i.e. $\hat{\mu}_k\pm2\hat{\sigma}_k$. Since the experimental setup does not allow us to obtain the optimal values for the parameters, we fix these parameters to their values that optimize the steady-state mean square deviation (MSD). \hbox{Table \ref{tab:table_MSD}} shows this steady-state MSD of the estimate of the MISO channel with different methods. As can be seen, the best tracking performance is obtained by standard LMS and the proposed method. \section{Conclusions and Opened Extensions} \label{sec:conclusions} {We have presented a probabilistic interpretation of the least-mean-square filter. The resulting algorithm is an adaptable step-size LMS that performs well both in stationary and tracking scenarios. Moreover, it has fewer free parameters than previous approaches and these parameters have a clear physical meaning. 
Finally, as stated in the introduction, one of the advantages of having a probabilistic model is that it is easily extensible:} \begin{itemize} \item If, instead of using an isotropic Gaussian distribution in the approximation, we used a Gaussian with diagonal covariance matrix, we would obtain a similar algorithm with different step sizes and measures of uncertainty for each component of ${\bf w}_k$. Although this model can be more descriptive, it needs more parameters to be tuned, and the parallelism with LMS vanishes. \item Similarly, if we substitute the transition model of \eqref{eq:trans_eq} by an Ornstein-Uhlenbeck process, \begin{equation} p({\bf w}_k|{\bf w}_{k-1})= \mathcal{N}({\bf w}_k;\lambda {\bf w}_{k-1}, \sigma_d^2 {\bf I}), \nonumber \label{eq:trans_eq_lambda} \end{equation} a similar algorithm is obtained but with a forgetting factor $\lambda$ multiplying ${\bf w}_{k-1}^{(LMS)}$ in \eqref{eq:lms}. This algorithm may have improved performance under this kind of autoregressive dynamics of ${\bf w}_{k}$, though, again, the connection with standard LMS becomes weaker. \item As in \cite{park2014probabilistic}, the measurement model \eqref{eq:mess_eq} can be changed to obtain similar adaptive algorithms for classification, ordinal regression, and Dirichlet regression for compositional data. \item A similar approximation technique could be applied to more complex dynamical models, e.g. switching dynamical models \cite{barber2010graphical}. The derivation of efficient adaptive algorithms that explicitly take into account a switch in the dynamics of the parameters of interest is a non-trivial and open problem, though the proposed approach could be useful. \item Finally, like standard LMS, this algorithm can be kernelized for its application in estimation under non-linear scenarios. \end{itemize} \begin{appendices} \section{KL divergence between a general Gaussian distribution and an isotropic Gaussian} \label{sec:kl} We want to approximate $p_{{\bf x}_1}({\bf x}) = \mathcal{N}({\bf x}; \boldsymbol\mu_1,\boldsymbol\Sigma_1)$ by $p_{{\bf x}_2}({\bf x}) = \mathcal{N}({\bf x}; \boldsymbol\mu_2,\sigma_2^2 {\bf I})$. In order to do so, we have to compute the parameters of $p_{{\bf x}_2}({\bf x})$, $\boldsymbol\mu_2$ and $\sigma_2^2$, that minimize the following Kullback-Leibler divergence, \begin{eqnarray} D_{KL}(p_{{\bf x}_1}\| p_{{\bf x}_2}) &=&\int_{-\infty}^{\infty} p_{{\bf x}_1}({\bf x}) \ln{\frac{p_{{\bf x}_1}({\bf x})}{p_{{\bf x}_2}({\bf x})}}d{\bf x} \nonumber \\ &= & \frac{1}{2} \{ -M + {\sf Tr}(\sigma_2^{-2} {\bf I}\cdot \boldsymbol\Sigma_1) \nonumber \\ & & + (\boldsymbol\mu_2 - \boldsymbol\mu_1 )^T \sigma^{-2}_2{\bf I} (\boldsymbol\mu_2 - \boldsymbol\mu_1 ) \nonumber \\ & & + \ln \frac{{\sigma_2^2}^M}{\det\boldsymbol\Sigma_1} \}. \label{eq:divergence} \end{eqnarray} Using symmetry arguments, we obtain \begin{equation} \boldsymbol\mu_2^{*} =\arg \displaystyle{ \min_{\boldsymbol\mu_2}} \{ D_{KL}(p_{{\bf x}_1}\| p_{{\bf x}_2}) \} = \boldsymbol\mu_1. \end{equation} Then, \eqref{eq:divergence} simplifies to \begin{eqnarray} D_{KL}(p_{{\bf x}_1}\| p_{{\bf x}_2}) = \frac{1}{2}\lbrace { -M + {\sf Tr}(\frac{\boldsymbol\Sigma_1}{\sigma_2^{2}}) + \ln \frac{\sigma_2^{2M}}{\det\boldsymbol\Sigma_1}}\rbrace.
\end{eqnarray} The variance $\sigma_2^2$ is computed in order to minimize this Kullback-Leibler divergence as \begin{eqnarray} \sigma_2^{2*} &=& \arg\min_{\sigma_2^2} D_{KL}(p_{{\bf x}_1}\| p_{{\bf x}_2}) \nonumber \\ &=& \arg\min_{\sigma_2^2}\{ \sigma_2^{-2}{\sf Tr}\{\boldsymbol\Sigma_1\} + M\ln \sigma_2^{2} \} . \end{eqnarray} Differentiating with respect to $\sigma_2^2$ and setting the derivative to zero leads to \begin{equation} \frac{\partial}{\partial \sigma_2^2} \left[ \frac{{\sf Tr}\{\boldsymbol\Sigma_1\}}{\sigma_2^{2}} + M \ln \sigma_2^{2} \right] = \left. \left( {\frac{M}{\sigma_2^{2}}-\frac{{\sf Tr}\{\boldsymbol\Sigma_1\}}{(\sigma_2^{2})^2}} \right) \right|_{\sigma_2^{2}=\sigma_2^{2*}} =0 . \nonumber \end{equation} Finally, since the divergence has a single extremum in $\mathbb{R}_+$, \begin{equation} \sigma_2^{2*} = \frac{{\sf Tr}\{\boldsymbol\Sigma_1\}}{M}. \end{equation} \end{appendices} \vfill \clearpage \bibliographystyle{IEEEbib}
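As a purely illustrative numerical check of this appendix (not part of the original derivation; the random covariance and all names below are ours), the following Python snippet evaluates the simplified divergence for a random covariance matrix and confirms that $\sigma_2^{2*}={\sf Tr}\{\boldsymbol\Sigma_1\}/M$ yields the smallest value on a grid of candidate variances:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M = 5
A = rng.normal(size=(M, M))
Sigma1 = A @ A.T + M * np.eye(M)   # random symmetric positive-definite matrix

def kl_to_isotropic(sigma2):
    # 0.5 * ( -M + Tr(Sigma1)/sigma2 + M*ln(sigma2) - ln det(Sigma1) ), with mu_2 = mu_1
    return 0.5 * (-M + np.trace(Sigma1) / sigma2
                  + M * np.log(sigma2) - np.linalg.slogdet(Sigma1)[1])

s_opt = np.trace(Sigma1) / M
grid = np.linspace(0.5 * s_opt, 2.0 * s_opt, 1001)
assert kl_to_isotropic(s_opt) <= min(kl_to_isotropic(s) for s in grid)
\end{verbatim}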
What types of data did the authors use in their experiments?
The authors used simulated data and real data from a wireless MISO channel.
\section{INTRODUCTION} The Tevatron Collider Run II started in March 2002 and is expected to continue until the end of this decade. The Tevatron and the two detectors, CDF and D\O, have been performing well in 2004, each experiment is collecting data at the rate of $\approx$10 pb$^{-1}$ per week. The total luminosity accumulated by August 2004 is $\approx$500 pb$^{-1}$ per detector. The rich physics program includes the production and precision measurement of properties of standard model (SM) objects, as well as searches for phenomena beyond standard model. In this brief review we focus on areas of most interest to the lattice community. We present new results on the top quark mass and their implication for the mass of the SM Higgs boson, on searches for the SM Higgs boson, on evidence for the $X(3872)$ state, on searches for pentaquarks, and on $b$ hadron properties. All Run II results presented here are preliminary. \section{TOP QUARK MASS} The experiments CDF and D\O\ published several direct measurements of the top quark pole mass, $\ensuremath{M_{\mathrm{top}}}$, based on Run I data (1992-1996). The ``lepton $+$ jets'' channel yields the most precise determination of $\ensuremath{M_{\mathrm{top}}}$. Recently, the D\O\ collaboration published a new measurement~\cite{Mtop1-D0-l+j-new}, based on a powerful analysis technique yielding greatly improved precision. The differential probability that the measured variables in any event correspond to the signal is calculated as a function of $\ensuremath{M_{\mathrm{top}}}$. The maximum in the product of the individual event probabilities provides the best estimate of $\ensuremath{M_{\mathrm{top}}}$. The critical differences from previous analyses in the lepton $+$ jets decay channel lie in the assignment of more weight to events that are well measured or more likely to correspond to $t \bar t$ signal, and the handling of the combinations of final-state objects (lepton, jets, and imbalance in transverse momentum) and their identification with top-quark decay products in an event. The new combined value for the top-quark mass from Run I is $\ensuremath{M_{\mathrm{top}}} = 178.0\pm4.3~\ensuremath{\mathrm{ Ge\kern -0.1em V }\kern -0.2em /c^2 }$. In Run II, both collaborations have been exploring several different techniques for $\ensuremath{M_{\mathrm{top}}}$ measurements. The best single CDF result comes from a dynamic likelihood method (DLM). The method is similar to the technique used in Ref.~\cite{Mtop1-D0-l+j-new}. The result is $\ensuremath{M_{\mathrm{top}}} = 177.8^{+4.5}_{-5.0} (stat) \pm 6.2 (syst) ~\ensuremath{\mathrm{ Ge\kern -0.1em V }\kern -0.2em /c^2 }$. The joint likelihood of the selected events is shown in Fig. ~\ref{fig:cdf_tml}. The Run II goal is a 1\% uncertainty on $\ensuremath{M_{\mathrm{top}}}$. \begin{figure}[htb] \vspace*{-5mm} \includegraphics[height=5.8cm,width=8.1cm] {data_22ev_likelihood.eps} \vspace*{-1.2cm} \caption{The joint likelihood of top candidates(CDF).} \label{fig:cdf_tml} \end{figure} \section{SEARCH FOR SM HIGGS BOSON} The constraints on the SM Higgs ($H$) boson mass from published measurements, updated to include the new D\O\ top mass measurement~\cite{Mtop1-D0-l+j-new}, are $M_H = 117 ^{+67}_{-45}~\ensuremath{\mathrm{ Ge\kern -0.1em V }\kern -0.2em /c^2 }$, $M_H < 251~\ensuremath{\mathrm{ Ge\kern -0.1em V }\kern -0.2em /c^2 }$ at 95\% C.L. The new most likely value of $M_H$ is above the experimentally excluded range, and sufficiently low for $H$ to be observed at the Tevatron. 
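Returning briefly to the top-quark mass measurements above: as a purely schematic illustration of the per-event likelihood idea (the real analyses build detailed matrix-element probabilities; all numbers and names below are invented toy values), one can maximize the product of per-event likelihoods over a grid of mass hypotheses:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m_true, resol, n_events = 178.0, 12.0, 22      # GeV/c^2, toy resolution, toy sample
m_meas = rng.normal(m_true, resol, n_events)   # toy per-event mass estimates

grid = np.linspace(150.0, 200.0, 501)
# joint log-likelihood = sum over events of the log of a per-event Gaussian probability
loglik = [np.sum(-0.5 * ((m_meas - m) / resol) ** 2) for m in grid]
print("toy best-fit mass:", grid[int(np.argmax(loglik))])
\end{verbatim}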
\begin{figure}[htb] \vspace*{-5mm} \includegraphics[height=7.5cm,width=7.8cm] {d0_wbb_fig_3_err.eps} \vspace*{-1.1cm} \caption{Distribution of the dijet invariant mass for $W+2 b$-tagged jets events, compared to the expectation (D\O). } \label{fig:d0_wbb_2tag} \end{figure} D\O\ has conducted a search for $H$ at $M_H < 140~\ensuremath{\mathrm{ Ge\kern -0.1em V }\kern -0.2em /c^2 }$ in the production channel $p \bar{p} \rightarrow WH \rightarrow e \nu b \bar{b}$. The experimental signature of $WH \rightarrow e \nu b \bar{b}$ is a final state with one high $p_T$ electron, two $b$ jets, and large missing transverse energy resulting from the undetected neutrino. The dominant backgrounds to $WH$ production are $W b \bar{b}$, $t \bar{t}$ and single-top production. The distribution of the dijet mass for events with two $b$-tagged jets is shown in Fig.~\ref{fig:d0_wbb_2tag}. Also shown is the expected contribution ($0.06$ events) from the $b \bar{b}$ decay of a SM Higgs boson with $M_H =$ 115 $\ensuremath{\mathrm{ Ge\kern -0.1em V }\kern -0.2em /c^2 }$. No events are observed in the dijet mass window of 85--135 $\ensuremath{\mathrm{ Ge\kern -0.1em V }\kern -0.2em /c^2 }$. D\O\ sets a limit on the cross section for $\sigma( p\bar{p} \rightarrow WH) \times B(H \rightarrow b \bar{b}) $ of 9.0 pb at the 95\% C.L., for a 115 $\ensuremath{\mathrm{ Ge\kern -0.1em V }\kern -0.2em /c^2 }$ Higgs boson. The results for mass points 105, 125, and 135 $\ensuremath{\mathrm{ Ge\kern -0.1em V }\kern -0.2em /c^2 }$ are 11.0, 9.1 and 12.2 pb, respectively. \begin{figure}[htb] \vspace*{-1.2cm} \includegraphics[height=0.33\textheight,width=8.0cm]{whww_aps04_bw.eps} \vspace*{-1.2cm} \caption{95\% limits on the $H$ production (CDF).} \label{fig:cdf_whww} \end{figure} CDF has done a similar search, allowing either an electron or a muon in the final state. Both groups have also searched for $H$ produced in gluon-gluon fusion, with subsequent decay to a pair of $W$ bosons. The CDF results for both channels are shown in Fig.~\ref{fig:cdf_whww}. \section{THE STATE X(3872)} \begin{figure}[htb] \includegraphics[height=8.0cm,width=7.5cm] {X3872cdfPRL1FullM.eps} \vspace*{-1cm} \caption{The $X(3872)$ signal (CDF).} \label{fig:cdf_x} \end{figure} The existence of the $X(3872)$ state discovered by the Belle Collaboration~\cite{Belle-X} has been confirmed in $p \bar{p}$ collisions by CDF~\cite{cdf-X} (see Fig.~\ref{fig:cdf_x}) and D\O~\cite{d0-X}. It is still unclear whether this particle is a $c\bar{c}$ state, or a more complex object. When the data are separated according to production and decay variables, D\O\ finds no significant differences between the $X(3872)$ and the $c \bar{c}$ state $\psi(2S)$. CDF has analysed the ``lifetime'' distribution of the $X(3872)$ events in order to quantify what fraction of this state arises from decay of $B$ hadrons, as opposed to those produced promptly. The authors find that for the selected samples 28.3$\pm$1.0$(stat)\pm$0.7$(syst)$\% of $\psi(2S)$ candidates are from $b$ decays, whereas 16.1$\pm$4.9$(stat)\pm$2.0$(syst)$\% of $X$ mesons arise from such decays. \section{SEARCH FOR PENTAQUARKS} \begin{figure}[htb] \includegraphics[height=0.27\textheight,width=7.6cm] {mpks_1stminbias.eps} \vspace*{-1.2cm} \caption{Invariant mass distribution of an identified proton and a $K^0_s$ candidate. 
(CDF) } \label{fig:pqtheta} \end{figure} \begin{figure}[htb] \vspace*{-0.9cm} \includegraphics[height=0.25\textheight,width=8.0cm] {CM_xicst_cc_1.eps} \vspace*{-1.2cm} \caption{Invariant mass distribution of the $(\Xi^-,\pi^+)$ system. (CDF) } \label{fig:pqxi} \end{figure} \begin{figure}[htb] \vspace*{-0.9cm} \includegraphics[height=0.25\textheight,width=7.6cm] {theta_note_dstp_dedx_pt.eps} \vspace*{-1.2cm} \caption{Mass of the ($D^{*+}\bar p$) system. The arrow indicates the position of the $\Theta_c$ state (CDF).} \label{fig:pqthetac} \end{figure} Following reports of evidence for exotic baryons containing five quarks (pentaquarks), CDF has analysed its data for evidence of the following pentaquarks: $\Theta^+$ ($uud\bar d \bar s$), doubly strange states $\Xi_{3/2}$, charmed states $\Theta_c$, and, most recently, a state $(udus\bar b)$, dubbed $R^+_s$, through its weak decay to $(J/\psi, p)$. With its excellent particle identification and mass resolution, CDF has a unique capability to search for pentaquark states. The signals of known states ($\phi$, $\Lambda$, $\Lambda(1520)$, $K^*$, $\Xi$) compare favorably with those provided by the authors of the pentaquark evidence. The group finds no evidence for pentaquark states, see Figs.~\ref{fig:pqtheta}, \ref{fig:pqxi}, and \ref{fig:pqthetac}. This can be interpreted as an indication that the pentaquark production in $p \bar p$ collisions is heavily suppressed compared to the conventional hadron production, or as evidence against the existence of pentaquarks. \clearpage \section{RECENT B PHYSICS RESULTS} \subsection{Spectroscopy} CDF has measured the mass of $b$ hadrons in exclusive $J/\psi$ channels. The measurements of the $B_s$ and $\Lambda_b$ (Fig. \ref{fig:masslb}) masses are currently the world's best.\\ $m(B^+)$ = 5279.10$\pm$0.41$(stat)\pm$0.36$(syst)$, $m(B^0)$ = 5279.63$\pm$0.53$(stat)\pm$0.33$(syst)$, $m(B_s)$ = 5366.01$\pm$0.73$(stat)\pm$0.33$(syst)$, $m(\Lambda_b)$ = 5619.7$\pm$1.2$(stat)\pm$1.2$(syst)$ MeV/$c^2$.\\ \begin{figure}[htb] \vspace*{-1mm} \includegraphics[height=0.30\textheight,width=7.5cm] {lambdav1c.eps} \vspace*{-1cm} \caption{The mass spectrum of $\Lambda_b$ candidates (CDF).} \label{fig:masslb} \end{figure} D\O\ reports the first observation of the excited $B$ mesons $B_1$ and $B^*_2$ as two separate states in fully reconstructed decays to $B^{(*)}\pi$. The mass of $B_1$ is measured to be 5724$\pm$4$\pm$7 MeV/c$^2$, and the mass difference $\Delta M$ between $B^*_2$ and $B_1$ is 23.6$\pm$7.7$\pm$3.9 MeV/c$^2$ (Fig. \ref{fig:d0_bexc}). D\O\ observes semileptonic $B$ decays to narrow $D^{**}$ states, the orbitally excited states of the $D$ meson seen as resonances in the $D^{*+}\pi^-$ invariant mass spectrum. The $D^*$ mesons are reconstructed through the decay sequence $D^{*+} \rightarrow D^0\pi^+$, $D^0\rightarrow K^-\pi^+$. The invariant mass of oppositely charged $(D^*,\pi)$ pairs is shown in Fig. \ref{fig:d0_dstst}. The mass peak between 2.4 and 2.5 GeV/$c^2$ can be interpreted as two merged narrow $D^{**}$ states, $D^0_1(2420)$ and $D^0_2(2460)$. The combined branching fraction is $ {\cal B}(B\rightarrow D^0_1,D^0_2)\cdot {\cal B}(D^0_1,D^0_2\rightarrow D^{*+}\pi^-)=(0.280\pm0.021(stat)\pm0.088(syst))$\%. The systematic error includes the unknown phase between the two resonances. Work is in progress on extracting the two Breit-Wigner amplitudes.
\begin{figure}[htb] \vspace*{-2mm} \hspace*{-3mm} \includegraphics[height=0.28\textheight,width=8.3cm] {B08F02.eps} \vspace*{-1cm} \caption{Mass difference $\Delta M = M(B\pi)-M(B)$ for exclusive $B$ decays. The background-subtracted signal is a sum of $B^*_1 \rightarrow B^* \pi$, $B^* \rightarrow B \gamma $ (open area) and $B^*_2 \rightarrow B^*\pi$ $B^*\rightarrow B \gamma$ (lower peak in the shaded area) and $B^*_2 \rightarrow B \pi$ (upper peak in the shaded area) (D\O).} \label{fig:d0_bexc} \end{figure} \begin{figure}[htb] \includegraphics[height=0.25\textheight,width=7.5cm] {B05F03.eps} \vspace*{-1cm} \caption{The invariant mass distribution of $(D^*,\pi)$ pairs, opposite sign (points) and same-sign (solid histogram).} \label{fig:d0_dstst} \end{figure} \subsection{Lifetimes} CDF and D\O\ have measured lifetimes of $b$ hadrons through the exclusively reconstructed decays $B^+ \rightarrow J/\psi K^+$, $B^0 \rightarrow J/\psi K^{*0}$, $B_s \rightarrow J/\psi \phi$, and $\Lambda_b \rightarrow J/\psi \Lambda$ (Fig. \ref{fig:d0_lbctau}). The latest results are: \\ $\tau(B^+)$=1.65 $\pm$ 0.08 $^{+0.096}_{-0.123}$ ps ~(D\O\ 2003), $\tau(B^+)$=1.662 $\pm$ 0.033 $\pm$ 0.008 ps ~(CDF), $\tau(B^0_d)$=1.473 $^{+0.052}_{-0.050}$ $\pm$ 0.023 ps ~(D\O). $\tau(B^0_d)$=1.539 $\pm$ 0.051 $\pm$ 0.008 ps ~(CDF), $\tau(B^0_s)$=1.444 $^{+0.098}_{-0.090}$ $\pm$ 0.020 ps ~(D\O), $\tau(B^0_s)$=1.369 $\pm$ 0.100 $\pm$ $^{+0.008}_{0.010}$ ps ~(CDF), $\tau(\Lambda_b)$=1.221 $^{+0.217}_{-0.179}$ $\pm$ 0.043 ps ~(D\O), $\tau(\Lambda_b)$=1.25 $\pm$ 0.26 $\pm$ 0.10 ps ~(CDF 2003).\\ The measured lifetimes correspond to the following lifetime ratios:\\ $\tau(B^+)/\tau(B^0_d)$ = 1.080$\pm$0.042 ~(CDF), $\tau(B^0_s)/\tau(B^0_d)$ = 0.890$\pm$0.072 ~(CDF), $\tau(B^0_s)/\tau(B^0_d)$ = 0.980$ ^{+0.075}_{-0.070} \pm$0.003 ~(D\O), $\tau(\Lambda_b)/\tau(B^0_d)$ = 0.874$ ^{+0.169}_{-0.142} \pm$0.028 ~(D\O).\\ \begin{figure}[htb] \includegraphics[height=0.3\textheight,width=8.2cm] {d0_lbctau_B11F02.eps} \vspace*{-1cm} \caption{ Fit projection on $c\tau$ for the $\Lambda_b$ candidates. (D\O)} \label{fig:d0_lbctau} \end{figure} The $B_s$ lifetime measurements listed above are results of a single-lifetime fit to data, integrated over the decay angles. Because of the presence of final states common to ${B_s^0}$\ and its charge conjugate ${\overline{B}_s^0}$, the two meson states are expected to mix in such a way that the two CP eigenstates may have a relatively large lifetime difference. It is possible to separate the two CP components of ${B_s^0 \rightarrow J/\psi \phi}$\ and thus to measure the lifetime difference by studying the time evolution of the polarization states of the vector mesons in the final state. CDF has carried out a combined analysis of $B_s$ lifetimes and polarization amplitudes. The results for the lifetimes of the low mass (CP even) and high mass (CP odd) eigenstates, and the relative width difference are:\\ $\tau_L = 1.05 ^{+0.16}_{-0.13} \pm 0.02$ ~ps, $\tau_H = 2.07 ^{+0.58}_{-0.46} \pm 0.03$ ~ps, $\Delta \Gamma /\overline \Gamma = 0.65 ^{+0.25}_{-0.33} \pm 0.01$.\\ Figure \ref{fig:cdf_dg} shows the scan of the likelihood function for $\Delta \Gamma /\overline \Gamma$. Pseudoexperiments tossed with $\Delta \Gamma /\overline \Gamma =0$ yield the betting odds for observing the above results at 1/315. For $\Delta \Gamma /\overline \Gamma = 0.12$ (SM prediction, which has recently been updated to 0.14$\pm$0.05~\cite{dg_un}) the betting odds are 1/84. 
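As a quick plausibility check of the quoted CDF lifetime ratio $\tau(B^+)/\tau(B^0_d)$ (our own back-of-the-envelope propagation, using only the statistical uncertainties and ignoring correlations and systematics):
\begin{verbatim}
import numpy as np

tau_bp, err_bp = 1.662, 0.033   # ps, CDF tau(B+), statistical uncertainty only
tau_b0, err_b0 = 1.539, 0.051   # ps, CDF tau(B0_d), statistical uncertainty only

ratio = tau_bp / tau_b0
err = ratio * np.hypot(err_bp / tau_bp, err_b0 / tau_b0)
print(f"{ratio:.3f} +/- {err:.3f}")  # ~1.080 +/- 0.042, consistent with the quoted value
\end{verbatim}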
\begin{figure}[htb] \vspace*{-1mm} \includegraphics[height=0.3\textheight,width=8.2cm] {cdf_scan-dg-un.eps} \vspace*{-1cm} \caption{Scan of the likelihood function for $\Delta \Gamma /\overline \Gamma$ (CDF). } \label{fig:cdf_dg} \end{figure} D\O\ has used a novel technique to measure the lifetime ratio of the charged and neutral $B$ mesons, exploiting the large semileptonic sample. $B$ hadrons were reconstructed in the channels $B\rightarrow \mu^+ \nu D^*(2010)^-X$, which are dominated by $B^0$ decays, and $B\rightarrow \mu^+ \nu D^0X$, which are dominated by $B^+$ decays. The lifetime ratio was obtained from the variation of the ratio of the number of events in these two processes at different decay lengths. The result is \\ $\tau(B^+)/\tau(B^0_d)$ = 1.093$\pm$0.021$\pm$0.022. ~(D\O) \subsection{Towards $B_s$ mixing} Measurement of the $B_s$ oscillation frequency via ${B_s^0}$ -${\overline{B}_s^0}$ ~mixing will provide an important constraint on the CKM matrix. The oscillation frequency is proportional to the mass difference between the mass eigenstates, $\Delta m_s$, and is related to the CKM matrix through $\Delta m_s \propto |V_{tb}V_{ts}|$. When combined with the $B_d$ mass difference, $\Delta m_d$ it helps in extraction of $|V_{td}|$, and thereby the CP violating phase. As a benchmark for future $B_s$ oscillation measurement, both groups study $B_d$ mixing, gaining an understanding of the different components of a $B$ mixing analysis (sample composition, flavor tagging, vertexing, asymmetry fitting). For a sample of partially reconstructed decays $B\rightarrow D^*(2010)^+\mu^-X$, D\O\ obtains $\Delta m_d = 0.506 \pm 0.055 (stat) \pm 0.049 (syst))$ ps$^{-1}$ and $\Delta m_d = 0.488 \pm 0.066 (stat) \pm 0.044 (syst))$ ps$^{-1}$ when employing opposite side muon tagging and the same side tagging, respectively. The CDF result for semileptonic channels is $\Delta m_d = 0.536 \pm 0.037 (stat) \pm 0.009 (s.c.) \pm 0.015 (syst)$ ps$^{-1}$. CDF also reports a result on $B$ oscillations using fully reconstructed decays: $\Delta m_d = 0.526 \pm 0.056 (stat) \pm 0.005 (syst))$ ps$^{-1}$. Reconstructing $B_s$ decays into different final states is another important step in the ${B_s^0}$ -${\overline{B}_s^0}$ ~mixing analysis. Thanks to the large muon and tracking coverage, D\O\ is accumulating a high statistics sample of semileptonic $B_s$ decays. D\O\ reconstructs the $B_s \rightarrow D^+_s \mu^- X$ decays, with $D^+_s \rightarrow \phi \pi^+ $ and $D^+_s \rightarrow K^* K^+ $, at a rate of $\approx$ 40(25) events per pb$^{-1}$, respectively. Figure \ref{fig:d0_bsdsphipi} shows the mass distribution of the $D^+_s \rightarrow \phi \pi$ candidates. \begin{figure}[htb] \vspace*{-5mm} \includegraphics[height=0.3\textheight,width=8.0cm] {blds-250.eps} \vspace*{-1.2cm} \caption{ $D^+_s \rightarrow \phi \pi^+$ signal. (D\O)} \label{fig:d0_bsdsphipi} \end{figure} \begin{figure}[htb] \vspace*{-10mm} \hspace*{-4mm} \includegraphics[height=0.35\textheight,width=7.9cm] {cdf_Bs-DsPi-PhiPi.eps} \vspace*{-1.0cm} \caption{ $B_s \rightarrow D_s \pi$, $D_s \rightarrow \phi \pi$ signal. (CDF)} \label{fig:cdf_bsdsphipi} \end{figure} CDF has clean signals for fully hadronic, flavor-specific $B_s$ decays, providing the best sensitivity to $B_s$ oscillations at high $\Delta m_s$. Figure \ref{fig:cdf_bsdsphipi} shows the signal for the best channel, $B_s \rightarrow D_s \pi$, $D_s \rightarrow \phi \pi$. 
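For orientation, the measured $\Delta m_d$ corresponds to a rather slow flavor oscillation compared to the $B^0_d$ lifetime; the snippet below uses the idealized textbook asymmetry $\cos(\Delta m_d\,t)$ and neglects $\Delta\Gamma$, resolution, tagging dilution and backgrounds (an illustration only, not the experimental fit):
\begin{verbatim}
import numpy as np

dm_d   = 0.51    # ps^-1, roughly the Delta m_d values quoted above
tau_b0 = 1.54    # ps, B0_d lifetime from the previous subsection

t = np.linspace(0.0, 5.0 * tau_b0, 7)
asymmetry = np.cos(dm_d * t)          # idealized mixing asymmetry vs proper time
print(np.round(asymmetry, 2))
print("oscillation period [ps]:", 2 * np.pi / dm_d)   # ~12 ps, i.e. ~8 lifetimes
\end{verbatim}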
\clearpage \subsection{Rare decays} The purely leptonic decays $B_{d,s}^0 \rightarrow \mu^+ \mu^-$ are flavor-changing neutral current (FCNC) processes. In the standard model, these decays are forbidden at the tree level and proceed at a very low rate through higher-order diagrams. The latest SM prediction~\cite{sm_ref3} is ${\cal B}(B^0_s \rightarrow \mu^+ \mu^-)=(3.42\pm 0.54)\times 10^{-9}$, where the error is dominated by non-perturbative uncertainties. The leptonic branching fraction of the $B_d^0$ decay is suppressed by CKM matrix elements $|V_{td}/V_{ts}|^2$ leading to a predicted SM branching fraction of $(1.00\pm0.14)\times 10^{-10}$. The best published experimental bound (Fig.~\ref{fig:cdf_bsmumu}) for the branching fraction of $B^0_s$ $(B^0_d)$ is presently ${\cal B}(B^0_s \, (B^0_d) \rightarrow \mu^+\mu^-)<7.5\times 10^{-7}\, (1.9\times 10^{-7})$ at the 95\% C.L.~\cite{cdfII}. The decay amplitude of $B^0_{d,s} \rightarrow \mu^+ \mu^-$ can be significantly enhanced in some extensions of the SM. \begin{figure}[htb] \includegraphics[height=8.3cm,width=7.9cm] {cdfbsmumu_results_prl.eps} \vspace*{-1cm} \caption{Invariant mass for the events passing all requirements. (CDF)} \label{fig:cdf_bsmumu} \end{figure} Assuming no contributions from the decay $B^0_d\rightarrow \mu^+\mu^-$ in the signal region, D\O\ finds the conservative upper limit on the branching fraction to be ${\cal B}(B^0_s \rightarrow \mu^+ \mu^-) \leq 4.6\times 10^{-7}$ at the 95\% C.L. (Fig.~\ref{fig:d0_bsmumu}). \begin{figure}[htb] \includegraphics[height=5.0cm,width=8.0cm] {B06F03.eps} \vspace*{-1cm} \caption{Invariant mass for the events passing all requirements. (D\O)} \label{fig:d0_bsmumu} \end{figure}
When did the Tevatron Collider Run II start and when is it expected to end?
The Tevatron Collider Run II started in March 2002 and is expected to continue until the end of this decade.
Weep Not, Child is a 1964 novel by Kenyan author Ngũgĩ wa Thiong'o. It was his first novel, published in 1964 under the name James Ngugi. It was among the African Writers Series. It was the first English-language novel to be published by an East African. Thiong'o's works deal with the relationship between Africans and white settlers in colonial Kenya, and are heavily critical of colonial rule. Specifically, Weep Not, Child deals with the Mau Mau Uprising, and "the bewildering dispossession of an entire people from their ancestral land." Ngũgĩ wrote the novel while he was a student at Makerere University. The book is divided into two parts and eighteen chapters. Part one deals mostly with the education of Njoroge, while part two deals with the rising Mau Mau movement. Plot summary Njoroge, a little boy, is urged to attend school by his mother. He is the first one of his family able to go to school. His family lives on the land of Jacobo, an African made rich by his dealings with white settlers, namely Mr. Howlands, the most powerful landowner in the area. Njoroge's brother Kamau works as an apprentice to a carpenter, while Boro, the eldest living son, is troubled by his experiences while in forced service during World War II, including witnessing the death of his elder brother. Ngotho, Njoroge's father and a respected man in the surrounding area, tends Mr. Howlands' crops, but is motivated by his passion to preserve his ancestral land, rather than for any compensation or loyalty. One day, black workers call for a strike to obtain higher wages. Ngotho is ambivalent about participating in the strike because he fears he will lose his job. However, he decides to go to the gathering, even though his two wives do not agree. At the demonstration, there are calls for higher wages. Suddenly, the white police inspector brings Jacobo to the gathering to pacify the native people. Jacobo tries to put an end to the strike. Ngotho attacks Jacobo, and the result is a riot where two people are killed. Jacobo survives and swears revenge. Ngotho loses his job and Njoroge's family is forced to move. Njoroge's brothers fund his education and seem to lose respect for their father. Mwihaki, Jacobo's daughter and Njoroge's best friend, enters a girls-only boarding school, leaving Njoroge relatively alone. He reflects upon her leaving, and realizes that he was embarrassed by his father's actions towards Jacobo. For this reason, Njoroge is not upset by her exit and their separation. Njoroge switches to another school. For a time, everyone's attention is focused on the upcoming trial of Jomo Kenyatta – a revered leader of the movement. Many blacks think that he is going to bring forth Kenya's independence. But Jomo loses the trial and is imprisoned. This results in further protests and greater suppression of the black population. Jacobo and a white landowner, Mr. Howlands, fight against the rising activities of the Mau Mau, an organization striving for Kenyan economic, political, and cultural independence. Jacobo accuses Ngotho of being the leader of the Mau Mau and tries to imprison the whole family. Meanwhile, the situation in the country is deteriorating. Six black men are taken out of their houses and executed in the woods. One day Njoroge meets Mwihaki again, who has returned from boarding school. Although Njoroge had planned to avoid her due to the conflict between their fathers, their friendship is unaffected. Njoroge passes an important exam that allows him to advance to High School.
His village is proud of him, and collects money to pay Njoroge's High School tuition. Several months later, Jacobo is murdered in his office by a member of the Mau Mau. Mr. Howlands has Njoroge removed from school for questioning. Both father and son are brutally beaten before release and Ngotho is left barely alive. Although there doesn't seem to be a connection between Njoroge's family and the murder, it is eventually revealed that Njoroge's brothers are behind the assassination, and that Boro is the real leader of the Mau Mau. Ngotho soon dies from his injuries and Njoroge finds out that his father was protecting his brothers. Kamau has been imprisoned for life. Only Njoroge and his two mothers remain free, and Njoroge is left as the sole provider for his two mothers. Njoroge fears that he cannot make ends meet; he gives up hope of continuing in school and loses faith in God. Njoroge asks Mwihaki for support, but she is angry because of her father's death. When he finally pledges his love to her, she refuses to leave with him, realizing her obligation to Kenya and her mother. Njoroge decides to leave town and makes an attempt at suicide; however, he fails when his mothers find him before he is able to hang himself. The novel closes with Njoroge feeling hopeless, and ashamed of his cowardice. Characters in Weep Not, Child Njoroge: the main character of the book, whose main goal throughout is to become as educated as possible. Ngotho: Njoroge's father. He works for Mr. Howlands and is respected by him until he attacks Jacobo at a workers' strike. He is fired and the family is forced to move to another section of the country. Over the course of the book his position as the central power of the family weakens, to the point where the realization that he has spent his whole life waiting for the prophecy (which proclaims that the blacks will be returned their land) to come true, rather than fighting for Kenyan independence, leads to his depression. Nyokabi and Njeri: the two wives of Ngotho. Njeri is Ngotho's first wife, and mother of Boro, Kamau, and Kori. Nyokabi is his second wife, and the mother of Njoroge and Mwangi. Njoroge has four brothers: Boro, Kamau, Kori and Mwangi (who is Njoroge's only full brother, who died in World War II). Boro: Son of Njeri who fights for the Allies in World War II. Upon returning, his anger against the colonial government is compounded by its confiscation of his land. Boro's anger and position as eldest son lead him to question and ridicule Ngotho, which eventually defeats their father's will (upon realizing his life was wasted waiting and not acting). It is eventually revealed that Boro is the leader of the Mau Mau (earlier alluded to as "entering politics") and murders Mr. Howlands. He is caught by police immediately after and is scheduled to be executed by the book's end. It is highly likely that it is also Boro who kills Jacobo. Mwihaki: Njoroge's best friend (who later develops into his love interest). Daughter of Jacobo. When it is revealed that Njoroge's family killed Jacobo (most likely Boro), Mwihaki distances herself from Njoroge, asking for time to mourn her father and care for her mother. Jacobo: Mwihaki's father and an important landowner. Chief of the village. Mr. Howlands: A white settler who emigrated to colonial Kenya and now owns a farm made up of land that originally belonged to Ngotho's ancestors.
Has three children: Peter who died in World War II before the book's beginning, a daughter who becomes a missionary, and Stephen who met Njoroge while the two were in high school. Themes and motifs Weep Not, Child integrates Gikuyu mythology and the ideology of nationalism that serves as catalyst for much of the novel's action. The novel explores the negative aspects of colonial rule over Kenya. Njoroge's aspiration to attend university is frustrated by both the violence of the Mau Mau rebels and the violent response of the colonial government. This disappointment leads to his alienation from his family and ultimately his suicide attempt. The novel also ponders the role of saviours and salvation. The author notes in his The River Between: "Salvation shall come from the hills. From the blood that flows in me, I say from the same tree, a son shall rise. And his duty shall be to lead and save the people." Jomo Kenyatta, the first prime minister of Kenya, is immortalised in Weep Not, Child. The author says, "Jomo had been his (Ngotho's) hope. Ngotho had come to think that it was Jomo who would drive away the white man. To him, Jomo stood for custom and traditions purified by grace of learning and much travel." Njoroge comes to view Jomo as a messiah who will win the struggle against the colonial government.
How many brothers does Njoroge have?
Four.
\section{Introduction}\label{S1} Multiple access interference (MAI) is the root cause of user limitation in CDMA systems \cite{R1,R3}. The parallel least mean square-partial parallel interference cancelation (PLMS-PPIC) method is a multiuser detector for code division multiple access (CDMA) receivers which reduces the effect of MAI in bit detection. In this method, similar to its former versions such as LMS-PPIC \cite{R5} (see also \cite{RR5}), a weighted value of the MAI of the other users is subtracted before making the decision for a specific user in different stages \cite{cohpaper}. In both of these methods, the normalized least mean square (NLMS) algorithm is engaged \cite{Haykin96}. The $m^{\rm th}$ element of the weight vector in each stage is the true transmitted binary value of the $m^{\rm th}$ user divided by its hard estimate from the previous stage. The magnitudes of all weight elements in all stages are therefore equal to unity. Unlike the LMS-PPIC, the PLMS-PPIC method tries to keep this property in each iteration by using a set of NLMS algorithms with different step-sizes instead of the single NLMS algorithm used in LMS-PPIC. In each iteration, the parameter estimate of the NLMS algorithm whose cancelation-weight element magnitudes best match unity is chosen. In the PLMS-PPIC implementation it is assumed that the receiver knows the phases of all user channels. In practice, however, these phases are not known and must be estimated. In this paper we improve the PLMS-PPIC procedure \cite{cohpaper} in such a way that, when only partial information about the channel phases is available, the modified version simultaneously estimates the phases and the cancelation weights. The partial information is the quarter of $(0,2\pi)$ in which each channel phase lies. The rest of the paper is organized as follows: In section \ref{S4} the modified version of PLMS-PPIC with the capability of channel phase estimation is introduced. In section \ref{S5} some simulation examples illustrate the results of the proposed method. Finally, the paper is concluded in section \ref{S6}. \section{Multistage Parallel Interference Cancelation: Modified PLMS-PPIC Method}\label{S4} We assume $M$ users synchronously send their symbols $\alpha_1,\alpha_2,\cdots,\alpha_M$ via a base-band CDMA transmission system, where $\alpha_m\in\{-1,1\}$. The $m^{\rm th}$ user has its own code $p_m(.)$ of length $N$, where $p_m(n)\in \{-1,1\}$, for all $n$. This means that for each symbol $N$ bits are transmitted by each user and the processing gain is equal to $N$. At the receiver we assume that a perfect power control scheme is applied. Without loss of generality, we also assume that the power gains of all channels are equal to unity, that users' channels do not change during each symbol transmission (they can change from one symbol transmission to the next one), and that the channel phase $\phi_m$ of the $m^{\rm th}$ user is unknown for all $m=1,2,\cdots,M$ (see \cite{cohpaper} for coherent transmission). According to the above assumptions the received signal is \begin{equation} \label{e1} r(n)=\sum\limits_{m=1}^{M}\alpha_m e^{j\phi_m}p_m(n)+v(n),~~~~n=1,2,\cdots,N, \end{equation} where $v(n)$ is additive white Gaussian noise with zero mean and variance $\sigma^2$. The multistage parallel interference cancelation method uses $\alpha^{s-1}_1,\alpha^{s-1}_2,\cdots,\alpha^{s-1}_M$, the bit estimates output by the previous stage, $s-1$, to estimate the MAI of each user.
It then subtracts this MAI estimate from the received signal $r(n)$ and makes a new decision on each user variable individually, producing a new variable set $\alpha^{s}_1,\alpha^{s}_2,\cdots,\alpha^{s}_M$ for the current stage $s$. Usually the variable set of the first stage (stage $0$) is the output of a conventional detector. The output of the last stage is considered the final estimate of the transmitted bits. In the following we explain the structure of a modified version of the PLMS-PPIC method \cite{cohpaper} with the simultaneous capability of estimating the cancelation weights and the channel phases. Assume $\alpha_m^{(s-1)}\in\{-1,1\}$ is a given estimate of $\alpha_m$ from stage $s-1$. Define \begin{equation} \label{e6} w^s_{m}=\frac{\alpha_m}{\alpha_m^{(s-1)}}e^{j\phi_m}. \end{equation} From (\ref{e1}) and (\ref{e6}) we have \begin{equation} \label{e7} r(n)=\sum\limits_{m=1}^{M}w^s_m\alpha^{(s-1)}_m p_m(n)+v(n). \end{equation} Define \begin{subequations} \begin{eqnarray} \label{e8} W^s&=&[w^s_{1},w^s_{2},\cdots,w^s_{M}]^T,\\ \label{e9} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!X^{s}(n)\!\!\!&=&\!\!\![\alpha^{(s-1)}_1p_1(n),\alpha^{(s-1)}_2p_2(n),\cdots,\alpha^{(s-1)}_Mp_M(n)]^T, \end{eqnarray} \end{subequations} where $T$ stands for transposition. From equations (\ref{e7}), (\ref{e8}) and (\ref{e9}), we have \begin{equation} \label{e10} r(n)=W^{s^T}X^{s}(n)+v(n). \end{equation} Given the observations $\{r(n),X^{s}(n)\}^{N}_{n=1}$, in the modified PLMS-PPIC, like the PLMS-PPIC \cite{cohpaper}, a set of NLMS adaptive algorithms is used to compute \begin{equation} \label{te1} W^{s}(N)=[w^{s}_1(N),w^{s}_2(N),\cdots,w^{s}_M(N)]^T, \end{equation} which is an estimate of $W^s$ after iteration $N$. To do so, from (\ref{e6}), we have \begin{equation} \label{e13} |w^s_{m}|=1, ~~~m=1,2,\cdots,M, \end{equation} which is equivalent to \begin{equation} \label{e14} \sum\limits_{m=1}^{M}||w^s_{m}|-1|=0. \end{equation} We divide $\Psi=\left(0,1-\sqrt{\frac{M-1}{M}}\right]$, a sharp range for $\mu$ (the step-size of the NLMS algorithm) given in \cite{sg2005}, into $L$ subintervals and consider $L$ individual step-sizes $\Theta=\{\mu_1,\mu_2,\cdots,\mu_L\}$, where $\mu_1=\frac{1-\sqrt{\frac{M-1}{M}}}{L}, \mu_2=2\mu_1,\cdots$, and $\mu_L=L\mu_1$. In each stage, $L$ individual NLMS algorithms are executed ($\mu_l$ is the step-size of the $l^{\rm th}$ algorithm). In stage $s$ and at iteration $n$, if $W^{s}_k(n)=[w^s_{1,k},\cdots,w^s_{M,k}]^T$, the parameter estimate of the $k^{\rm th}$ algorithm, minimizes our criterion, then it is taken as the parameter estimate at iteration $n$. In other words, if the following equation holds \begin{equation} \label{e17} W^s_k(n)=\arg\min\limits_{W^s_l(n)\in I_{W^s} }\left\{\sum\limits_{m=1}^{M}||w^s_{m,l}(n)|-1|\right\}, \end{equation} where $W^{s}_l(n)=W^{s}(n-1)+\mu_l \frac{X^s(n)}{\|X^s(n)\|^2}e(n), ~~~ l=1,2,\cdots,k,\cdots,L-1,L$ and $I_{W^s}=\{W^s_1(n),\cdots,W^s_L(n)\}$, then we have $W^s(n)=W^s_k(n)$, and therefore all other algorithms replace their weight estimates by $W^{s}_k(n)$. At time instant $n=N$, this procedure gives $W^s(N)$, the final estimate of $W^s$ for stage $s$. Now consider $R=(0,2\pi)$ and divide it into four equal parts $R_1=(0,\frac{\pi}{2})$, $R_2=(\frac{\pi}{2},\pi)$, $R_3=(\pi,\frac{3\pi}{2})$ and $R_4=(\frac{3\pi}{2},2\pi)$. The partial information on the channel phases available at the receiver indicates to which of the four quarters $R_i,~i=1,2,3,4$, each $\phi_m$ ($m=1,2,\cdots,M$) belongs.
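To make the parallel-NLMS selection rule \eqref{e17} concrete, the following Python sketch (our own illustration; the initialization, array layout and function name are assumptions, and no channel gain or near-far effects are modeled) runs the $L$ candidate updates at every iteration and keeps the one whose weight magnitudes are closest to unity:
\begin{verbatim}
import numpy as np

def plms_ppic_stage(r, X, L):
    # r: (N,) complex received samples; X: (N, M) rows are alpha^(s-1)_m * p_m(n)
    N, M = X.shape
    mu_max = 1.0 - np.sqrt((M - 1.0) / M)
    mus = mu_max * np.arange(1, L + 1) / L          # L step sizes in (0, mu_max]
    W = np.zeros(M, dtype=complex)
    for n in range(N):
        x = X[n]
        e = r[n] - W @ x                            # a priori error e(n)
        # L candidate NLMS updates, then keep the best match to |w_m| = 1
        cands = [W + mu * e * x / (x @ x) for mu in mus]
        W = min(cands, key=lambda Wc: np.sum(np.abs(np.abs(Wc) - 1.0)))
    return W
\end{verbatim}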
Assume $W^{s}(N)=[w^{s}_1(N),w^{s}_2(N),\cdots,w^{s}_M(N)]^T$ is the weight estimate of the modified PLMS-PPIC algorithm at time instant $N$ of stage $s$. From equation (\ref{e6}) we have \begin{equation} \label{tt3} \phi_m=\angle({\frac{\alpha^{(s-1)}_m}{\alpha_m}w^s_m}). \end{equation} We estimate $\phi_m$ by $\hat{\phi}^s_m$, where \begin{equation} \label{ee3} \hat{\phi}^s_m=\angle{(\frac{\alpha^{(s-1)}_m}{\alpha_m}w^s_m(N))}. \end{equation} Because $\frac{\alpha^{(s-1)}_m}{\alpha_m}=1$ or $-1$, we have \begin{eqnarray} \hat{\phi}^s_m=\left\{\begin{array}{ll} \angle{w^s_m(N)} & \mbox{if}~ \frac{\alpha^{(s-1)}_m}{\alpha_m}=1\\ \pm\pi+\angle{w^s_m(N)} & \mbox{if}~ \frac{\alpha^{(s-1)}_m}{\alpha_m}=-1\end{array}\right. \end{eqnarray} Hence $\hat{\phi}^s_m\in P^s=\{\angle{w^s_m(N)}, \angle{w^s_m(N)}+\pi, \angle{w^s_m(N)}-\pi\}$. If $w^s_m(N)$ has converged sufficiently close to its true value $w^s_m$, $\hat{\phi}^s_m$ and $\phi_m$ are expected to lie in the same quarter. In this case only one of the three members of $P^s$ lies in the same quarter as $\phi_m$. For example, if $\phi_m \in (0,\frac{\pi}{2})$, then $\hat{\phi}^s_m \in (0,\frac{\pi}{2})$ and therefore only $\angle{w^s_m(N)}$ or $\angle{w^s_m(N)}+\pi$ or $\angle{w^s_m(N)}-\pi$ belongs to $(0,\frac{\pi}{2})$. If, for example, $\angle{w^s_m(N)}+\pi$ is such a member among the three members of $P^s$, it is the best candidate for the phase estimate. In other words, \[\phi_m\approx\hat{\phi}^s_m=\angle{w^s_m(N)}+\pi.\] We take the presence of a member of $P^s$ in the quarter of $\phi_m$ as an indication that $w^s_m(N)$ has converged. What happens when none of the members of $P^s$ lies in the same quarter as $\phi_m$? This situation occurs when the absolute difference between $\angle w^s_m(N)$ and $\phi_m$ is greater than $\pi$, which means that $w^s_m(N)$ has not converged yet. In this case, where we cannot count on $w^s_m(N)$, the expected value is the optimum choice for the channel phase estimate, e.g. if $\phi_m \in (0,\frac{\pi}{2})$ then $\frac{\pi}{4}$ is the estimate of the channel phase $\phi_m$, or if $\phi_m \in (\frac{\pi}{2},\pi)$ then $\frac{3\pi}{4}$ is the estimate of the channel phase $\phi_m$. The results of the above discussion are summarized in the following equation: \begin{eqnarray} \nonumber \hat{\phi}^s_m = \left\{\begin{array}{llll} \angle {w^s_m(N)} & \mbox{if}~ \angle{w^s_m(N)}, \phi_m\in R_i,~~i=1,2,3,4\\ \angle{w^s_m(N)}+\pi & \mbox{if}~ \angle{w^s_m(N)}+\pi, \phi_m\in R_i,~~i=1,2,3,4\\ \angle{w^s_m(N)}-\pi & \mbox{if}~ \angle{w^s_m(N)}-\pi, \phi_m\in R_i,~~i=1,2,3,4\\ \frac{(i-1)\pi+i\pi}{4} & \mbox{if}~ \phi_m\in R_i,~~\angle{w^s_m(N)},\angle {w^s_m(N)}\pm\pi\notin R_i,~~i=1,2,3,4.\\ \end{array}\right. \end{eqnarray} Having an estimate of the channel phases, the rest of the proposed method consists of estimating $\alpha^{s}_m$ as follows: \begin{equation} \label{tt4} \alpha^{s}_m=\mbox{sign}\left\{\mbox{real}\left\{\sum\limits_{n=1}^{N} q^s_m(n)e^{-j\hat{\phi}^s_m}p_m(n)\right\}\right\}, \end{equation} where \begin{equation} \label{tt5} q^{s}_{m}(n)=r(n)-\sum\limits_{m^{'}=1,m^{'}\ne m}^{M}w^{s}_{m^{'}}(N)\alpha^{(s-1)}_{m^{'}} p_{m^{'}}(n). \end{equation} The inputs of the first stage $\{\alpha^{0}_m\}_{m=1}^M$ (needed for computing $X^1(n)$) are given by \begin{equation} \label{qte5} \alpha^{0}_m=\mbox{sign}\left\{\mbox{real}\left\{\sum\limits_{n=1}^{N} r(n)e^{-j\hat{\phi}^0_m}p_m(n)\right\}\right\}. \end{equation} Assuming $\phi_m\in R_i$, then \begin{equation} \label{qqpp} \hat{\phi}^0_m =\frac{(i-1)\pi+i\pi}{4}.
\end{equation} Table \ref{tab4} shows the structure of the modified PLMS-PPIC method. It should be noted that \begin{itemize} \item Equation (\ref{qte5}) shows the conventional bit detection method when the receiver only knows the quarter of $(0,2\pi)$ in which each channel phase lies. \item With $L=1$ (i.e. only one NLMS algorithm), the modified PLMS-PPIC can be thought of as a modified version of the LMS-PPIC method. \end{itemize} In the following section some examples are given to illustrate the effectiveness of the proposed method. \section{Simulations}\label{S5} In this section we consider some simulation examples. Examples \ref{ex2}-\ref{ex4} compare the conventional, the modified LMS-PPIC and the modified PLMS-PPIC methods in three cases: balanced channels, unbalanced channels and time-varying channels. In all examples, the receivers know only the quarter of each channel phase. Example \ref{ex2} is given to compare the modified LMS-PPIC and the PLMS-PPIC in the case of balanced channels. \begin{example}{\it Balanced channels}: \label{ex2} \begin{table} \caption{Channel phase estimate of the first user (example \ref{ex2})} \label{tabex5} \centerline{{ \begin{tabular}{|c|c|c|c|c|} \hline \multirow{6}{*}{\rotatebox{90}{$\phi_m=\frac{3\pi}{8},M=15~~$}} & N(Iteration) & Stage Number& NLMS & PNLMS \\ &&&&\\ \cline{2-5} & \multirow{2}{*}{64}& s = 2 & $\hat{\phi}^s_m=\frac{3.24\pi}{8}$ & $\hat{\phi}^s_m=\frac{3.18\pi}{8}$ \\ \cline{3-5} & & s = 3 & $\hat{\phi}^s_m=\frac{3.24\pi}{8}$ & $\hat{\phi}^s_m=\frac{3.18\pi}{8}$ \\ \cline{2-5} & \multirow{2}{*}{256}& s = 2 & $\hat{\phi}^s_m=\frac{2.85\pi}{8}$ & $\hat{\phi}^s_m=\frac{2.88\pi}{8}$ \\ \cline{3-5} & & s = 3 & $\hat{\phi}^s_m=\frac{2.85\pi}{8}$ & $\hat{\phi}^s_m=\frac{2.88\pi}{8}$ \\ \cline{2-5} \hline \end{tabular} }} \end{table} Consider the system model (\ref{e7}) in which $M$ users synchronously send their bits to the receiver through their channels. It is assumed that each user's information consists of codes of length $N$. It is also assumed that the signal-to-noise ratio (SNR) is 0\,dB. In this example no power unbalance or channel loss is assumed. The step-size of the NLMS algorithm in the modified LMS-PPIC method is $\mu=0.1(1-\sqrt{\frac{M-1}{M}})$ and the set of step-sizes of the parallel NLMS algorithms in the modified PLMS-PPIC method is $\Theta=\{0.01,0.05,0.1,0.2,\cdots,1\}(1-\sqrt{\frac{M-1}{M}})$, i.e. $\mu_1=0.01(1-\sqrt{\frac{M-1}{M}}),\cdots, \mu_4=0.2(1-\sqrt{\frac{M-1}{M}}),\cdots, \mu_{12}=(1-\sqrt{\frac{M-1}{M}})$. Figure~\ref{Figexp1NonCoh} illustrates the bit error rate (BER) for the case of two stages and for $N=64$ and $N=256$. Simulations also show that there is no noticeable difference between the results of the two-stage and three-stage scenarios. Table~\ref{tabex5} compares the average channel phase estimate of the first user in each stage, averaged over $10$ runs of the modified LMS-PPIC and PLMS-PPIC, when the number of users is $M=15$. \end{example} Although LMS-PPIC and PLMS-PPIC, as well as their modified versions, are structured based on the assumption of no near-far problem (examples \ref{ex3} and \ref{ex4}), these methods, and especially the second one, have remarkable performance in the cases of unbalanced and/or time-varying channels.
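The phase-refinement rule of Section \ref{S4} can likewise be summarized by a short sketch (again our own illustration with assumed names; \texttt{quarter} is the index $i$ of the known quarter $R_i$):
\begin{verbatim}
import numpy as np

def refine_phase(w_mN, quarter):
    # quarter i = 1..4 corresponds to R_i = ((i-1)*pi/2, i*pi/2)
    lo, hi = (quarter - 1) * np.pi / 2.0, quarter * np.pi / 2.0
    ang = np.angle(w_mN) % (2.0 * np.pi)
    for cand in (ang, ang + np.pi, ang - np.pi):
        if lo < cand < hi:
            return cand            # a member of P^s falls inside the known quarter
    return 0.5 * (lo + hi)         # otherwise use the quarter midpoint (2i-1)*pi/4
\end{verbatim}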
\begin{example}{\it Unbalanced channels}: \label{ex3} \begin{table} \caption{Channel phase estimate of the first user (example \ref{ex3})} \label{tabex6} \centerline{{ \begin{tabular}{|c|c|c|c|c|} \hline \multirow{6}{*}{\rotatebox{90}{$\phi_m=\frac{3\pi}{8},M=15~~$}} & N(Iteration) & Stage Number& NLMS & PNLMS \\ &&&&\\ \cline{2-5} & \multirow{2}{*}{64}& s=2 & $\hat{\phi}^s_m=\frac{2.45\pi}{8}$ & $\hat{\phi}^s_m=\frac{2.36\pi}{8}$ \\ \cline{3-5} & & s=3 & $\hat{\phi}^s_m=\frac{2.71\pi}{8}$ & $\hat{\phi}^s_m=\frac{2.80\pi}{8}$ \\ \cline{2-5} & \multirow{2}{*}{256}& s=2 & $\hat{\phi}^s_m=\frac{3.09\pi}{8}$ & $\hat{\phi}^s_m=\frac{2.86\pi}{8}$ \\ \cline{3-5} & & s=3 & $\hat{\phi}^s_m=\frac{2.93\pi}{8}$ & $\hat{\phi}^s_m=\frac{3.01\pi}{8}$ \\ \cline{2-5} \hline \end{tabular} }} \end{table} Consider example \ref{ex2} with power unbalanced and/or channel loss in transmission system, i.e. the true model at stage $s$ is \begin{equation} \label{ve7} r(n)=\sum\limits_{m=1}^{M}\beta_m w^s_m\alpha^{(s-1)}_m c_m(n)+v(n), \end{equation} where $0<\beta_m\leq 1$ for all $1\leq m \leq M$. Both the LMS-PPIC and the PLMS-PPIC methods assume the model (\ref{e7}), and their estimations are based on observations $\{r(n),X^s(n)\}$, instead of $\{r(n),\mathbf{G}X^s(n)\}$, where the channel gain matrix is $\mathbf{G}=\mbox{diag}(\beta_1,\beta_2,\cdots,\beta_m)$. In this case we repeat example \ref{ex2}. We randomly get each element of $G$ from $[0,0.3]$. Figure~\ref{Figexp2NonCoh} illustrates the BER versus the number of users. Table~\ref{tabex6} compares the channel phase estimate of the first user in each stage and over $10$ runs of modified LMS-PPIC and modified PLMS-PPIC for $M=15$. \end{example} \begin{example} \label{ex4} {\it Time varying channels}: Consider example \ref{ex2} with time varying Rayleigh fading channels. In this case we assume the maximum Doppler shift of $40$HZ, the three-tap frequency-selective channel with delay vector of $\{2\times 10^{-6},2.5\times 10^{-6},3\times 10^{-6}\}$sec and gain vector of $\{-5,-3,-10\}$dB. Figure~\ref{Figexp3NonCoh} shows the average BER over all users versus $M$ and using two stages. \end{example} \section{Conclusion}\label{S6} In this paper, parallel interference cancelation using adaptive multistage structure and employing a set of NLMS algorithms with different step-sizes is proposed, when just the quarter of the channel phase of each user is known. In fact, the algorithm has been proposed for coherent transmission with full information on channel phases in \cite{cohpaper}. This paper is a modification on the previously proposed algorithm. Simulation results show that the new method has a remarkable performance for different scenarios including Rayleigh fading channels even if the channel is unbalanced.
What algorithm is engaged in the PLMS-PPIC method?
The normalized least mean square (NLMS) algorithm.
\section{Introduction} Spectral line surveys have revealed that high-mass star-forming regions are rich reservoirs of molecules from simple diatomic species to complex and larger molecules (e.g., \citealt{schilke1997b,hatchell1998b,comito2005,bisschop2007}). However, studies have rarely been undertaken to investigate the chemical evolution during massive star formation from the earliest evolutionary stages, i.e., from High-Mass Starless Cores (HMSCs) and High-Mass Cores with embedded low- to intermediate-mass protostars destined to become massive stars, via High-Mass Protostellar Objects (HMPOs) to the final stars that are able to produce Ultracompact H{\sc ii} regions (UCH{\sc ii}s, see \citealt{beuther2006b} for a recent description of the evolutionary sequence). The first two evolutionary stages are found within so-called Infrared Dark Clouds (IRDCs). While for low-mass stars the chemical evolution from early molecular freeze-out to more evolved protostellar cores is well studied (e.g., \citealt{bergin1997,dutrey1997,pavlyuchenkov2006,joergensen2007}), it is far from clear whether similar evolutionary patterns are present during massive star formation. To better understand the chemical evolution of high-mass star-forming regions we initiated a program to investigate the chemical properties from IRDCs to UCH{\sc ii}s from an observational and theoretical perspective. We start with single-dish line surveys toward a large sample to obtain their basic characteristics, and then perform detailed studies of selected sources using interferometers on smaller scales. These observations are accompanied by theoretical modeling of the chemical processes. Long-term goals are the chemical characterization of the evolutionary sequence in massive star formation, the development of chemical clocks, and the identification of molecules as astrophysical tools to study the physical processes during different evolutionary stages. Here, we present an initial study of the reactive radical ethynyl (C$_2$H) combining single-dish and interferometer observations with chemical modeling. Although C$_2$H was previously observed in low-mass cores and Photon Dominated Regions (e.g., \citealt{millar1984,jansen1995}), so far it has not been systematically investigated in the framework of high-mass star formation. \section{Observations} \label{obs} The 21 massive star-forming regions were observed with the Atacama Pathfinder Experiment (APEX) in the 875\,$\mu$m window in fall 2006. We observed 1\,GHz from 338 to 339\,GHz and 1\,GHz in the image sideband from 349 to 350\,GHz. The spectral resolution was 0.1\,km\,s$^{-1}$, but we smoothed the data to $\sim$0.9\,km\,s$^{-1}$. The average system temperatures were around 200\,K, and each source had on-source integration times between 5 and 16 min. The data were converted to main-beam temperatures with forward and beam efficiencies of 0.97 and 0.73, respectively \citep{belloche2006}. The average $1\sigma$ rms was 0.4\,K. The main spectral features of interest are the C$_2$H lines around 349.4\,GHz with upper level excitation energies $E_u/k$ of 42\,K (line blends of C$_2$H$(4_{5,5}-3_{4,4})$ \& C$_2$H$(4_{5,4}-3_{4,3})$ at 349.338\,GHz, and C$_2$H$(4_{4,4}-3_{3,3})$ \& C$_2$H$(4_{4,3}-3_{3,2})$ at 349.399\,GHz). The beam size was $\sim 18''$. The original Submillimeter Array (SMA) C$_2$H data toward the HMPO\,18089-1732 were first presented in \citet{beuther2005c}.
There we used the compact and extended configurations, resulting in good images for all spectral lines except C$_2$H. For this project, we re-processed these data using only the compact configuration. Because the C$_2$H emission is distributed on larger scales (see \S\ref{results}), we were now able to derive a C$_2$H image. The integration range was from 32 to 35\,km\,s$^{-1}$, and the achieved $1\sigma$ rms of the C$_2$H image was 450\,mJy\,beam$^{-1}$. For more details on these observations see \citet{beuther2005c}. \section{Results} \label{results} The sources were selected to cover all evolutionary stages from IRDCs via HMPOs to UCH{\sc ii}s. We derived our target list from the samples of \citet{klein2005,fontani2005,hill2005,beltran2006}. Table \ref{sample} lists the observed sources, their coordinates, distances, luminosities and a first-order classification into the evolutionary sub-groups IRDCs, HMPOs and UCH{\sc ii}s based on the previously available data. Although this classification is only based on a limited set of data, here we are just interested in general evolutionary trends. Hence, the division into the three main classes is sufficient. Figure \ref{spectra} presents sample spectra toward one source of each evolutionary group. While we see several CH$_3$OH lines as well as SO$_2$ and H$_2$CS toward some of the HMPOs and UCH{\sc ii}s but not toward the IRDCs, the surprising result of this comparison is the presence of the C$_2$H lines around 349.4\,GHz toward all source types from young IRDCs via the HMPOs to evolved UCH{\sc ii}s. Table \ref{sample} lists the peak brightness temperatures, the integrated intensities and the FWHM line-widths of the C$_2$H line blend at 349.399\,GHz. The separation of the two lines of 1.375\,MHz already corresponds to a line-width of 1.2\,km\,s$^{-1}$. We have three C$_2$H non-detections (two IRDCs and one HMPO), however, with no clear trend with respect to the distances or the luminosities (the latter comparison is only possible for the HMPOs). While IRDCs are on average colder than more evolved sources, and have lower brightness temperatures, the non-detections are more probably due to the relatively low sensitivity of the short observations (\S\ref{obs}). Hence, the data indicate that the C$_2$H lines are detected independently of the evolutionary stage of the sources, in contrast to the situation for other molecules. When comparing the line-widths between the different sub-groups, one finds only a marginal difference between the IRDCs and the HMPOs (the average $\Delta v$ values of the two groups are 2.8 and 3.1\,km\,s$^{-1}$). However, the UCH{\sc ii}s exhibit significantly broader line-widths with an average value of 5.5\,km\,s$^{-1}$. Intrigued by this finding, we wanted to understand the C$_2$H spatial structure during the different evolutionary stages. Therefore, we went back to a dataset obtained with the Submillimeter Array toward the hypercompact H{\sc ii} region IRAS\,18089-1732 with a much higher spatial resolution of $\sim 1''$ \citep{beuther2005c}. Although this hypercompact H{\sc ii} region belongs to the class of HMPOs, it is already in a relatively evolved stage and has formed a hot core with a rich molecular spectrum. \citet{beuther2005c} showed the spectral detection of the C$_2$H lines toward this source, but they did not present any spatially resolved images. To recover large-scale structure, we restricted the data to those from the compact SMA configuration (\S\ref{obs}).
With this refinement, we were able to produce a spatially resolved C$_2$H map of the line blend at 349.338\,GHz with an angular resolution of $2.9''\times 1.4''$ (corresponding to an average linear resolution of 7700\,AU at the given distance of 3.6\,kpc). Figure \ref{18089} presents the integrated C$_2$H emission with a contour overlay of the 860\,$\mu$m continuum source outlining the position of the massive protostar. In contrast to almost all other molecular lines that peak along with the dust continuum \citep{beuther2005c}, the C$_2$H emission surrounds the continuum peak in a shell-like fashion. \section{Discussion and Conclusions} To understand the observations, we conducted a simple chemical modeling of massive star-forming regions. A 1D cloud model with a mass of 1200\,M$_\sun$, an outer radius of 0.36\,pc and a power-law density profile ($\rho\propto r^p$ with $p=-1.5$) is the initially assumed configuration. Three cases are studied: (1) a cold isothermal cloud with $T=10$\,K, (2) $T=50$\,K, and (3) a warm model with a temperature profile $T\propto r^q$ with $q=-0.4$ and a temperature at the outer radius of 44\,K. The cloud is illuminated by the interstellar UV radiation field (IRSF, \citealt{draine1978}) and by cosmic ray particles (CRP). The ISRF attenuation by single-sized $0.1\mu$m silicate grains at a given radius is calculated in a plane-parallel geometry following \citet{vandishoeck1988}. The CRP ionization rate is assumed to be $1.3\times 10^{-17}$~s$^{-1}$ \citep{spitzer1968}. The gas-grain chemical model by \citet{vasyunin2008} with the desorption energies and surface reactions from \citet{garrod2006} is used. Gas-phase reaction rates are taken from RATE\,06 \citep{woodall2007}, initial abundances, were adopted from the ``low metal'' set of \citet{lee1998}. Figure \ref{model} presents the C$_2$H abundances for the three models at two different time steps: (a) 100\,yr, and (b) in a more evolved stage after $5\times10^4$\,yr. The C$_2$H abundance is high toward the core center right from the beginning of the evolution, similar to previous models (e.g., \citealt{millar1985,herbst1986,turner1999}). During the evolution, the C$_2$H abundance stays approximately constant at the outer core edges, whereas it decreases by more than three orders of magnitude in the center, except for the cold $T=10$~K model. The C$_2$H abundance profiles for all three models show similar behavior. The chemical evolution of ethynyl is determined by relative removal rates of carbon and oxygen atoms or ions into molecules like CO, OH, H$_2$O. Light ionized hydrocarbons CH$^+_{\rm n}$ (n=2..5) are quickly formed by radiative association of C$^+$ with H$_2$ and hydrogen addition reactions: C$^+$ $\rightarrow$ CH$_2^+$ $\rightarrow$ CH$_3^+$ $\rightarrow$ CH$_5^+$. The protonated methane reacts with electrons, CO, C, OH, and more complex species at later stage and forms methane. The CH$_4$ molecules undergo reactive collisions with C$^+$, producing C$_2$H$_2^+$ and C$_2$H$_3^+$. An alternative way to produce C$_2$H$_2^+$ is the dissociative recombination of CH$_5^+$ into CH$_3$ followed by reactions with C$^+$. Finally, C$_2$H$_2^+$ and C$_2$H$_3^+$ dissociatively recombine into CH, C$_2$H, and C$_2$H$_2$. The major removal for C$_2$H is either the direct neutral-neutral reaction with O that forms CO, or the same reaction but with heavier carbon chain ions that are formed from C$_2$H by subsequent insertion of carbon. 
At later times, depletion and gas-phase reactions with more complex species may enter into this cycle. At the cloud edge the interstellar UV radiation instantaneously dissociates CO despite its self-shielding, re-enriching the gas with elemental carbon. The transformation of C$_2$H into CO and other species proceeds efficiently in dense regions, in particular in the ``warm'' model where endothermic reactions result in rich molecular complexity of the gas (see Fig.~\ref{model}). In contrast, in the ``cold'' 10\,K model gas-grain interactions and surface reactions become important. As a result, a large fraction of oxygen is locked in water ice that is hard to desorb ($E_{\rm des} \sim 5500$~K), while half of the elemental carbon goes to volatile methane ice ($E_{\rm des} \sim 1300$~K). Upon CRP heating of dust grains, this leads to a much higher gas-phase abundance of C$_2$H in the cloud core for the cold model compared to the warm model. The effect is not that strong for less dense regions at larger radii from the center.

Since the C$_2$H emission is anti-correlated with the dust continuum emission in the case of IRAS\,18089-1732 (Fig.\,\ref{18089}), we do not have the H$_2$ column densities to quantitatively compare the abundance profiles of IRAS\,18089-1732 with our model. However, data and model allow a qualitative comparison of the spatial structures. Estimating an exact evolutionary time for IRAS\,18089-1732 is hardly possible, but based on the strong molecular line emission, its high central gas temperatures and the observed outflow-disk system \citep{beuther2004a,beuther2004b,beuther2005c}, an approximate age of $5\times10^4$\,yr appears reasonable. Although dynamical and chemical times are not necessarily exactly the same, in high-mass star formation they should not differ too much: following the models by \citet{mckee2003} or \citet{krumholz2006b}, the luminosity rises strongly right from the onset of collapse, which can be considered a starting point for the chemical evolution. At the same time disks and outflows evolve, which should hence have similar time-scales. The diameter of the shell-like C$_2$H structure in IRAS\,18089-1732 is $\sim 5''$ (Fig.\,\ref{18089}), corresponding to a radius of $\sim$9000\,AU at the given distance of 3.6\,kpc. This value is well matched by the modeled region with decreased C$_2$H abundance (Fig.\,\ref{model}). Although in principle optical depths and/or excitation effects could mimic the C$_2$H morphology, we consider this unlikely because the other observed molecules with many different transitions all peak toward the central submm continuum emission in IRAS\,18089-1732 \citep{beuther2005c}. Since C$_2$H is the only exception in that rich dataset, chemical effects appear the more plausible explanation.

The fact that we see C$_2$H at the earliest and the later evolutionary stages can be explained by the reactive nature of C$_2$H: it is produced quickly early on and gets replenished at the core edges by the UV photodissociation of CO. The inner ``chemical'' hole observed toward IRAS\,18089-1732 can be explained by C$_2$H being consumed in the chemical network forming CO and more complex molecules like larger carbon-hydrogen complexes and/or depletion. The data show that C$_2$H is not suited to investigate the central gas cores in more evolved sources; however, our analysis indicates that C$_2$H may be a suitable tracer of the earliest stages of (massive) star formation, like N$_2$H$^+$ or NH$_3$ (e.g., \citealt{bergin2002,tafalla2004,beuther2005a,pillai2006}).
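For reference, the characteristic scales quoted in this letter follow directly from the observed quantities via the small-angle relation (a short worked example using only numbers given above; $\bar\theta$ denotes the mean of the two beam axes and $D=3.6$\,kpc the source distance):
\begin{align}
\Delta v &= c\,\frac{\Delta\nu}{\nu} \approx 3\times10^{5}\,\mathrm{km\,s^{-1}}\times\frac{1.375\,\mathrm{MHz}}{349399\,\mathrm{MHz}} \approx 1.2\,\mathrm{km\,s^{-1}},\nonumber\\
d_\mathrm{beam}\,[\mathrm{AU}] &\approx \bar\theta\,['']\times D\,[\mathrm{pc}] \approx 2.15\times3600 \approx 7700,\nonumber\\
r_\mathrm{shell}\,[\mathrm{AU}] &\approx 2.5\times3600 \approx 9000.\nonumber
\end{align}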
While a spatial analysis of the line emission will give insights into the kinematics of the gas and also the evolutionary stage from chemical models, multiple C$_2$H lines will even allow a temperature characterization. With its lowest $J=1-0$ transitions around 87\,GHz, C$_2$H has easily accessible spectral lines in several bands between 3\,mm and 850\,$\mu$m. Furthermore, even the 349\,GHz lines presented here still have relatively low upper-level excitation energies ($E_u/k\sim42$\,K), hence allowing the study of cold cores even at sub-millimeter wavelengths. This prediction can be further tested via high spectral and spatial resolution observations of different C$_2$H lines toward young IRDCs.

\acknowledgments{H.B. acknowledges financial support by the Emmy-Noether-Programm of the Deutsche Forschungsgemeinschaft (DFG, grant BE2578). }
What molecule was the focus of the study?
The focus of the study was on the reactive radical ethynyl (C$_2$H).
Weep Not, Child is a 1964 novel by Kenyan author Ngũgĩ wa Thiong'o. It was his first novel, published in 1964 under the name James Ngugi. It was among the African Writers Series. It was the first English-language novel to be published by an East African. Thiong'o's works deal with the relationship between Africans and white settlers in colonial Kenya, and are heavily critical of colonial rule. Specifically, Weep Not, Child deals with the Mau Mau Uprising, and "the bewildering dispossession of an entire people from their ancestral land." Ngũgĩ wrote the novel while he was a student at Makerere University. The book is divided into two parts and eighteen chapters. Part one deals mostly with the education of Njoroge, while part two deals with the rising Mau Mau movement.

Plot summary

Njoroge, a little boy, is urged to attend school by his mother. He is the first member of his family able to go to school. His family lives on the land of Jacobo, an African made rich by his dealings with white settlers, namely Mr. Howlands, the most powerful landowner in the area. Njoroge's brother Kamau works as an apprentice to a carpenter, while Boro, the eldest living son, is troubled by his experiences while in forced service during World War II, including witnessing the death of his elder brother. Ngotho, Njoroge's father and a respected man in the surrounding area, tends Mr. Howlands' crops, but is motivated by his passion to preserve his ancestral land, rather than by any compensation or loyalty.

One day, black workers call for a strike to obtain higher wages. Ngotho is ambivalent about participating in the strike because he fears he will lose his job. However, he decides to go to the gathering, even though his two wives do not agree. At the demonstration, there are calls for higher wages. Suddenly, the white police inspector brings Jacobo to the gathering to pacify the native people. Jacobo tries to put an end to the strike. Ngotho attacks Jacobo, and the result is a riot where two people are killed. Jacobo survives and swears revenge. Ngotho loses his job and Njoroge’s family is forced to move. Njoroge’s brothers fund his education and seem to lose respect for their father. Mwihaki, Jacobo's daughter and Njoroge's best friend, enters a girls-only boarding school, leaving Njoroge relatively alone. He reflects upon her leaving, and realizes that he was embarrassed by his father's actions towards Jacobo. For this reason, Njoroge is not upset by her exit and their separation.

Njoroge switches to another school. For a time, everyone's attention is focused on the upcoming trial of Jomo Kenyatta – a revered leader of the movement. Many blacks think that he is going to bring forth Kenya’s independence. But Jomo loses the trial and is imprisoned. This results in further protests and greater suppression of the black population. Jacobo and a white landowner, Mr. Howlands, fight against the rising activities of the Mau Mau, an organization striving for Kenyan economic, political, and cultural independence. Jacobo accuses Ngotho of being the leader of the Mau Mau and tries to imprison the whole family. Meanwhile, the situation in the country is deteriorating. Six black men are taken out of their houses and executed in the woods. One day Njoroge meets Mwihaki again, who has returned from boarding school. Although Njoroge had planned to avoid her due to the conflict between their fathers, their friendship is unaffected. Njoroge passes an important exam that allows him to advance to High School.
His village is proud of him, and collects money to pay Njoroge's High School tuition. Several months later, Jacobo is murdered in his office by a member of the Mau Mau. Mr. Howlands has Njoroge removed from school for questioning. Both father and son are brutally beaten before release and Ngotho is left barely alive. Although there doesn't seem to be a connection between Njoroge's family and the murder, it is eventually revealed that Njoroge's brothers are behind the assassination, and that Boro is the real leader of the Mau Mau. Ngotho soon dies from his injuries and Njoroge finds out that his father was protecting his brothers. Kamau has been imprisoned for life. Only Njoroge and his two mothers remain free, and Njoroge is left as the sole provider of his two mothers. Njoroge fears that he cannot make ends meet; he gives up hope of continuing in school and loses faith in God. Njoroge asks Mwihaki for support, but she is angry because of her father’s death. When he finally pledges his love to her, she refuses to leave with him, realizing her obligation to Kenya and her mother. Njoroge decides to leave town and makes an attempt at suicide; however, he fails when his mothers find him before he is able to hang himself. The novel closes with Njoroge feeling hopeless, and ashamed of his cowardice.

Characters in Weep Not, Child

Njoroge: the main character of the book, whose main goal throughout the book is to become as educated as possible.

Ngotho: Njoroge's father. He works for Mr. Howlands and is respected by him until he attacks Jacobo at a workers' strike. He is fired and the family is forced to move to another section of the country. Over the course of the book his position as the central power of the family weakens, to the point where his realization that he has spent his whole life waiting for the prophecy (which proclaims that the blacks will be returned their land) to come true, rather than fighting for Kenyan independence, leads to his depression.

Nyokabi and Njeri: the two wives of Ngotho. Njeri is Ngotho's first wife, and mother of Boro, Kamau, and Kori. Nyokabi is his second wife, and the mother of Njoroge and Mwangi. Njoroge has four brothers: Boro, Kamau, Kori and Mwangi (Njoroge's only full brother, who died in World War II).

Boro: Son of Njeri who fights for the Allies in World War II. Upon returning, his anger against the colonial government is compounded by its confiscation of his land. Boro's anger and position as eldest son lead him to question and ridicule Ngotho, which eventually defeats their father's will (upon his realization that his life was wasted waiting and not acting). It is eventually revealed that Boro is the leader of the Mau Mau (earlier alluded to as "entering politics") and murders Mr. Howlands. He is caught by police immediately after and is scheduled to be executed by the book's end. It is highly likely that it is also Boro who kills Jacobo.

Mwihaki: Njoroge's best friend (who later develops into his love interest). Daughter of Jacobo. When it is revealed that Njoroge's family (most likely Boro) killed Jacobo, Mwihaki distances herself from Njoroge, asking for time to mourn her father and care for her mother.

Jacobo: Mwihaki's father and an important landowner. Chief of the village.

Mr. Howlands: A white settler who emigrated to colonial Kenya and now owns a farm made up of land that originally belonged to Ngotho's ancestors.
He has three children: Peter, who died in World War II before the book's beginning; a daughter, who becomes a missionary; and Stephen, who met Njoroge while the two were in high school.

Themes and motifs

Weep Not, Child integrates Gikuyu mythology and the ideology of nationalism that serves as a catalyst for much of the novel's action. The novel explores the negative aspects of colonial rule over Kenya. Njoroge's aspiration to attend university is frustrated by both the violence of the Mau Mau rebels and the violent response of the colonial government. This disappointment leads to his alienation from his family and ultimately his suicide attempt. The novel also ponders the role of saviours and salvation. The author notes in his novel The River Between: "Salvation shall come from the hills. From the blood that flows in me, I say from the same tree, a son shall rise. And his duty shall be to lead and save the people." Jomo Kenyatta, the first prime minister of Kenya, is immortalised in Weep Not, Child. The author says, "Jomo had been his (Ngotho's) hope. Ngotho had come to think that it was Jomo who would drive away the white man. To him, Jomo stood for custom and traditions purified by grace of learning and much travel." Njoroge comes to view Jomo as a messiah who will win the struggle against the colonial government.

See also
Things Fall Apart
Death and the King's Horseman

References

External links
Official homepage of Ngũgĩ wa Thiong'o
BBC profile of Ngũgĩ wa Thiong'o
Weep Not, Child at Google Books

Categories: British Empire in fiction; Novels set in colonial Africa; Historical novels; Kenyan English-language novels; Novels by Ngũgĩ wa Thiong'o; Novels set in Kenya; 1964 novels; Heinemann (publisher) books; Postcolonial novels; African Writers Series; 1964 debut novels
When was Weep Not, Child first published?
Weep Not, Child was first published in 1964.
\section{Model equations} \label{sec:equations} In drift-fluid models the continuity equation \begin{align} \frac{\partial n}{\partial t} + \nabla\cdot\left( n \vec u_E \right) &= 0 \label{eq:generala} \end{align} describes the dynamics of the electron density $n$. Here $\vec u_E := (\hat{\vec b} \times \nabla \phi)/B$ gives the electric drift velocity in a magnetic field $\vec B := B \hat{\vec b}$ and an electric potential $\phi$. We neglect contributions of the diamagnetic drift~\cite{Kube2016}. Equation~\eqref{eq:generala} is closed by invoking quasineutrality, i.e. the divergence of the ion polarization, the electron diamagnetic and the gravitational drift currents must vanish \begin{align} \nabla\cdot\left( \frac{n}{\Omega} \left( \frac{\partial}{\partial t} + \vec u_E \cdot\nabla \right)\frac{\nabla_\perp \phi}{B} + n\vec u_d - n\vec u_g\right) &=0 . \label{eq:generalb} \end{align} Here we denote $\nabla_\perp\phi/B := - \hat{\vec b} \times \vec u_E$, the electron diamagnetic drift $\vec u_d := - T_e(\hat{\vec b} \times\nabla n ) /enB$ with the electron temperature $T_e$, the ion gravitational drift velocity $\vec u_g := m_i \hat{\vec b} \times \vec g /B$ with ion mass $m_i$, and the ion gyro-frequency $\Omega := eB/m_i$. Combining Eq.~\eqref{eq:generalb} with Eq.~\eqref{eq:generala} yields \begin{align} \frac{\partial \rho}{\partial t} + \nabla\cdot\left( \rho\vec u_E \right) + \nabla \cdot\left( n(\vec u_\psi + \vec u_d + \vec u_g) \right) &= 0\label{eq:vorticity} \end{align} with the polarization charge density $\rho = \nabla\cdot( n\nabla_\perp \phi / \Omega B)$ and $\vec u_\psi := \hat{\vec b}\times \nabla\psi /B$ with $\psi:= m_i\vec u_E^2 /2e$. We exploit this form of Eq.~\eqref{eq:generalb} in our numerical simulations. Equations~\eqref{eq:generala} and \eqref{eq:generalb} respectively \eqref{eq:vorticity} have several invariants. First, in Eq.~\eqref{eq:generala} the relative particle number $M(t) := \int \mathrm{dA}\, (n-n_0)$ is conserved over time $\d M(t)/\d t = 0$. Furthermore, we integrate $( T_e(1+\ln n) -T_e \ln B)\partial_t n$ as well as $-e\phi \partial_t\rho - (m_i\vec u_E^2/2+gm_ix - T_e\ln B)\partial_t n$ over the domain to get, disregarding boundary contributions, \begin{align} \frac{\d}{\d t}\left[T_eS(t) + H(t) \right] = 0, \label{eq:energya}\\ \frac{\d}{\d t} \left[ E(t) - G(t) - H(t)\right] = 0, \label{eq:energyb} \end{align} where we define the entropy $S(t):=\int \mathrm{dA}\, [n\ln(n/n_0) - (n-n_0)]$, the kinetic energy $E(t):=m_i \int \mathrm{dA}\, n\vec u_E^2/2$ and the potential energies $G(t) := m_i g\int \mathrm{dA}\, x(n-n_0)$ and $H(t) := T_e\int \mathrm{dA}\, (n-n_0) \ln (B^{-1})$. Note that $n\ln( n/n_0) - n + n_0 \approx (n-n_0)^2/2$ for $|(n-n_0)/n_0| \ll 1$ and $S(t)$ thus reduces to the local entropy form in Reference~\cite{Kube2016}. We now set up a gravitational field $\vec g = g\hat x$ and a constant homogeneous background magnetic field $\vec B = B_0 \hat z$ in a Cartesian coordinate system. Then the divergences of the electric and gravitational drift velocities $\nabla\cdot\vec u_E$ and $\nabla\cdot\vec u_g$ and the diamagnetic current $\nabla\cdot(n\vec u_d)$ vanish, which makes the flow incompressible. Furthermore, the magnetic potential energy vanishes $H(t) = 0$. In a second system we model the inhomogeneous magnetic field present in tokamaks as $\vec B := B_0 (1+ x/R_0)^{-1}\hat z$ and neglect the gravitational drift $\vec u_g = 0$. Then, the potential energy $G(t) = 0$. 
Note that $H(t) = m_i \ensuremath{C_\mathrm{s}}^2/R_0\int\mathrm{dA}\, x(n-n_0) +\mathcal O(R_0^{-2}) $ reduces to $G(t)$ with the effective gravity $g_\text{eff}:= \ensuremath{C_\mathrm{s}}^2/R_0$ with $\ensuremath{C_\mathrm{s}}^2 := T_e/m_i$. For the rest of this letter we treat $g$ and $g_\text{eff}$ as well as $G(t)$ and $H(t)$ on the same footing. The magnetic field inhomogeneity thus entails compressible flows, which is the only difference to the model describing dynamics in a homogeneous magnetic field introduced above. Since both $S(t)\geq 0$ and $E(t)\geq 0$ we further derive from Eq.~\eqref{eq:energya} and Eq.~\eqref{eq:energyb} that the kinetic energy is bounded by $E(t) \leq T_eS(t) + E(t) = T_e S(0)$; a feature absent from the gravitational system with incompressible flows, where $S(t) = S(0)$. We now show that the invariants Eqs.~\eqref{eq:energya} and \eqref{eq:energyb} present restrictions on the velocity and acceleration of plasma blobs. First, we define the blobs' center of mass (COM) via $X(t):= \int\mathrm{dA}\, x(n-n_0)/M$ and its COM velocity as $V(t):=\d X(t)/\d t$. The latter is proportional to the total radial particle flux~\cite{Garcia_Bian_Fundamensky_POP_2006, Held2016a}. We assume that $n>n_0$ and $(n-n_0)^2/2 \leq [ n\ln (n/n_0) - (n-n_0)]n $ to show for both systems \begin{align} (MV)^2 &= \left( \int \mathrm{dA}\, n{\phi_y}/{B} \right)^2 = \left( \int \mathrm{dA}\, (n-n_0){\phi_y}/{B} \right)^2\nonumber\\ &\leq 2 \left( \int \mathrm{dA}\, \left[n\ln (n/n_0) -(n-n_0)\right]^{1/2}\sqrt{n}{\phi_y}/{B}\right)^2\nonumber\\ &\leq 4 S(0) E(t)/m_i \label{eq:inequality} \end{align} Here we use the Cauchy-Schwartz inequality and $\phi_y:=\partial\phi/\partial y$. Note that although we derive the inequality Eq.~\eqref{eq:inequality} only for amplitudes $\triangle n >0$ we assume that the results also hold for depletions. This is justified by our numerical results later in this letter. If we initialize our density field with a seeded blob of radius $\ell$ and amplitude $\triangle n$ as \begin{align} n(\vec x, 0) &= n_0 + \triangle n \exp\left( -\frac{\vec x^2}{2\ell^2} \right), \label{eq:inita} \end{align} and $\phi(\vec x, 0 ) = 0$, we immediately have $M := M(0) = 2\pi \ell^2 \triangle n$, $E(0) = G(0) = 0$ and $S(0) = 2\pi \ell^2 f(\triangle n)$, where $f(\triangle n)$ captures the amplitude dependence of the integral for $S(0)$. The acceleration for both incompressible and compressible flows can be estimated by assuming a linear acceleration $V=A_0t$ and $X=A_0t^2/2$~\cite{Held2016a} and using $E(t) = G(t) = m_igMX(t)$ in Eq.~\eqref{eq:inequality} \begin{align} \frac{A_0}{g} = \mathcal Q\frac{2S(0)}{M} \approx \frac{\mathcal Q}{2} \frac{\triangle n }{n_0+2\triangle n/9}. \label{eq:acceleration} \end{align} Here, we use the Pad\'e approximation of order $(1/1)$ of $2S(0)/M $ and define a model parameter $\mathcal Q$ with $0<\mathcal Q\leq1$ to be determined by numerical simulations. Note that the Pad\'e approximation is a better approximation than a simple truncated Taylor expansion especially for large relative amplitudes of order unity. Eq.~\eqref{eq:acceleration} predicts that $A_0/g\sim \triangle n/n_0$ for small amplitudes $|\triangle n/n_0| < 1$ and $A_0 \sim g $ for very large amplitudes $\triangle n /n_0 \gg 1$, which confirms the predictions in~\cite{Pecseli2016} and reproduces the limits discussed in~\cite{Angus2014}. As pointed out earlier for compressible flows Eq.~\eqref{eq:inequality} can be further estimated \begin{align} (MV)^2 \leq 4 T_eS(0)^2/m_i. 
\label{} \end{align} We therefore have a restriction on the maximum COM velocity for compressible flows, which is absent for incompressible flows \begin{align} \frac{\max |V|}{\ensuremath{C_\mathrm{s}}} = {\mathcal Q}\frac{2S(0)}{M} \approx \frac{\mathcal Q}{2} \frac{|\triangle n| }{n_0+2/9 \triangle n } \approx \frac{\mathcal Q}{2} \frac{|\triangle n|}{n_0}. \label{eq:linear} \end{align} For $|\triangle n /n_0|< 1$ Eq.~\eqref{eq:linear} reduces to the linear scaling derived in~\cite{Kube2016}. Finally, a scale analysis of Eq.~\eqref{eq:vorticity} shows that~\cite{Ott1978, Garcia2005, Held2016a} \begin{align} \frac{\max |V|}{\ensuremath{C_\mathrm{s}}} = \mathcal R \left( \frac{\ell}{R_0}\frac{|\triangle n|}{n_0} \right)^{1/2}. \label{eq:sqrt} \end{align} This equation predicts a square root dependence of the center of mass velocity on amplitude and size. We now propose a simple phenomenological model that captures the essential dynamics of blobs and depletions in the previously stated systems. More specifically the model reproduces the acceleration Eq.~\eqref{eq:acceleration} with and without Boussinesq approximation, the square root scaling for the COM velocity Eq.~\eqref{eq:sqrt} for incompressible flows as well as the relation between the square root scaling Eq.~\eqref{eq:sqrt} and the linear scaling Eq.~\eqref{eq:linear} for compressible flows. The basic idea is that the COM of blobs behaves like the one of an infinitely long plasma column immersed in an ambient plasma. The dynamics of this column reduces to the one of a two-dimensional ball. This idea is similar to the analytical ``top hat'' density solution for blob dynamics recently studied in~\cite{Pecseli2016}. The ball is subject to buoyancy as well as linear and nonlinear friction \begin{align} M_{\text{i}} \frac{d V}{d t} = (M_{\text{g}} - M_\text{p}) g - c_1 V - \mathrm{sgn}(V ) \frac{1}{2}c_2 V^2. \label{eq:ball} \end{align} The gravity $g$ has a positive sign in the coordinate system; sgn$(f)$ is the sign function. The first term on the right hand side is the buoyancy, where $M_{\text{g}} := \pi \ell^2 (n_0 + \mathcal Q \triangle n/2)$ is the gravitational mass of the ball with radius $\ell$ and $M_\mathrm{p} := n_0 \pi \ell^2 $ is the mass of the displaced ambient plasma. Note that if $\triangle n<0$ the ball represents a depletion and the buoyancy term has a negative sign, i.e. the depletion will rise. We introduce an inertial mass $M_{\text{i}} := \pi\ell^2 (n_0 +2\triangle n/9)$ different from the gravitational mass $M_{\text{g}}$ in order to recover the initial acceleration in Eq.~\eqref{eq:acceleration}. We interpret the parameters $\mathcal Q$ and $2/9$ as geometrical factors that capture the difference of the actual blob form from the idealized ``top hat'' solution. Also note that the Boussinesq approximation appears in the model as a neglect of inertia, $M_{\text{i}} = \pi\ell^2n_0$. The second term is the linear friction term with coefficient $c_1(\ell)$, which depends on the size of the ball. If we disregard the nonlinear friction, $c_2=0$, Eq.~\eqref{eq:ball} directly yields a maximum velocity $c_1V^*=\pi \ell^2 n g \mathcal Q\triangle n/2$. From our previous considerations $\max V/\ensuremath{C_\mathrm{s}}=\mathcal Q \triangle n /2n_0$, we thus identify \begin{align} c_1 = \pi\ell^2 n_0 g/\ensuremath{C_\mathrm{s}}. \label{} \end{align} The linear friction coefficient thus depends on the gravity and the size of the ball. The last term in \eqref{eq:ball} is the nonlinear friction. 
The sign of the force depends on whether the ball rises or falls in the ambient plasma. If we disregard linear friction $c_1=0$, we have the maximum velocity $V^*= \sigma(\triangle n)\sqrt{\pi \ell^2|\triangle n| g\mathcal Q/c_2}$, which must equal $\max V= \sigma(\triangle n) \mathcal R \sqrt{g \ell |\triangle n/n_0|}$ and thus \begin{align} c_2 = {\mathcal Q\pi n_0\ell }/{\mathcal R^2}. \label{} \end{align} Inserting $c_1$ and $c_2$ into Eq.~\eqref{eq:ball} we can derive the maximum absolute velocity in the form \begin{align} \frac{\max |V|}{\ensuremath{C_\mathrm{s}}} = \left(\frac{\mathcal R^2}{\mathcal Q}\right) \frac{\ell}{R_0} \left( \left({1+\left( \frac{\mathcal Q}{\mathcal R} \right)^{2} \frac{|\triangle n|/n_0 }{\ell/R_0}}\right)^{1/2}-1 \right) \label{eq:vmax_theo} \end{align} and thus have a concise expression for $\max |V|$ that captures both the linear scaling \eqref{eq:linear} as well as the square root scaling \eqref{eq:sqrt}. With Eq.~\eqref{eq:acceleration} and Eq.~\eqref{eq:sqrt} (respectively Eq.~\eqref{eq:vmax_theo}) we finally arrive at an analytical expression for the time at which the maximum velocity is reached via $t_{\max V} \sim \max V/A_0$. Its inverse $\gamma:=t_{\max V}^{-1}$ gives the global interchange growth rate, for which an empirical expression was presented in Reference~\cite{Held2016a}.

We use the open-source library FELTOR to simulate Eqs.~\eqref{eq:generala} and \eqref{eq:vorticity} with and without drift compression. For numerical stability we added small diffusive terms on the right hand sides of the equations. The discontinuous Galerkin methods employ three polynomial coefficients and a minimum of $N_x=N_y=768$ grid cells. The box size is $50\ell$ in order to mitigate influences of the finite box size on the blob dynamics. Moreover, we used the invariants in Eqs. \eqref{eq:energya} and \eqref{eq:energyb} as consistency tests to verify the code, and we repeated the simulations in a gyrofluid model. No differences to the results presented here were found. Initial perturbations on the particle density field are given by Eq.~\eqref{eq:inita}, where the perturbation amplitude $\triangle n/n_0$ was chosen between $10^{-3}$ and $20$ for blobs and between $-10^0$ and $-10^{-3}$ for depletions. For computational reasons we show results only for $\triangle n/n_0\leq 20$. For compressible flows we consider two different cases, $\ell/R_0 = 10^{-2}$ and $\ell/R_0 = 10^{-3}$. For incompressible flows Eq.~\eqref{eq:generala} and \eqref{eq:vorticity} can be normalized such that the blob radius is absent from the equations~\cite{Ott1978, Kube2012}. The simulations of incompressible flows can thus be used for both sizes. The numerical code as well as input parameters and output data can be found in the supplemental dataset to this contribution~\cite{Data2017}.

\begin{figure}[htb] \includegraphics[width=\columnwidth]{com_blobs} \caption{ The maximum radial COM velocities of blobs for compressible and incompressible flows are shown. The continuous lines show Eq.~\eqref{eq:vmax_theo} while the dashed line shows the square root scaling Eq.~\eqref{eq:sqrt} with $\mathcal Q = 0.32$ and $\mathcal R=0.85$. } \label{fig:com_blobs} \end{figure}

In Fig.~\ref{fig:com_blobs} we plot the maximum COM velocity for blobs with and without drift compression. For incompressible flows blobs follow the square root scaling almost perfectly. Only at very large amplitudes are the velocities slightly below the predicted values.
For small amplitudes we observe that the compressible blobs follow a linear scaling. When the amplitudes increase there is a transition to the square root scaling at around $\triangle n/n_0 \simeq 0.5$ for $\ell/R_0=10^{-2}$ and $\triangle n/n_0 \simeq 0.05$ for $\ell/R_0=10^{-3}$, which is consistent with Eq.~\eqref{eq:vmax_theo} and Reference~\cite{Kube2016}. In the transition regions the simulated velocities are slightly larger than the predicted ones from Eq.~\eqref{eq:vmax_theo}. Beyond these amplitudes the velocities of compressible and incompressible blobs align. \begin{figure}[htb] \includegraphics[width=\columnwidth]{com_holes} \caption{ The maximum radial COM velocities of depletions for compressible and incompressible flows are shown. The continuous lines show Eq.~\eqref{eq:vmax_theo} while the dashed line shows the square root scaling Eq.~\eqref{eq:sqrt} with $\mathcal Q = 0.32$ and $\mathcal R=0.85$. Note that small amplitudes are on the right and amplitudes close to unity are on the left side. } \label{fig:com_depletions} \end{figure} In Fig.~\ref{fig:com_depletions} we show the maximum radial COM velocity for depletions instead of blobs. For relative amplitudes below $|\triangle n|/n_0 \simeq 0.5$ (right of unity in the plot) the velocities coincide with the corresponding blob velocities in Fig.~\ref{fig:com_blobs}. For amplitudes larger than $|\triangle n|/n_0\simeq 0.5$ the velocities follow the square root scaling. We observe that for plasma depletions beyond $90$ percent the velocities in both systems reach a constant value that is very well predicted by the square root scaling. \begin{figure}[htb] \includegraphics[width=\columnwidth]{acc_blobs} \caption{ Average acceleration of blobs for compressible and incompressible flows are shown. The continuous line shows the acceleration in Eq.~\eqref{eq:acceleration} with $\mathcal Q=0.32$ while the dashed line is a linear reference line, which corresponds to the Boussinesq approximation. } \label{fig:acc_blobs} \end{figure} In Fig.~\ref{fig:acc_blobs} we show the average acceleration of blobs for compressible and incompressible flows computed by dividing the maximum velocity $\max V$ by the time to reach this velocity $t_{\max V}$. We compare the simulation results to the theoretical predictions Eq.~\eqref{eq:acceleration} of our model with and without inertia. The results of the compressible and incompressible systems coincide and fit very well to our theoretical values. For amplitudes larger than unity the acceleration deviates significantly from the prediction with Boussinesq approximation. \begin{figure}[htb] \includegraphics[width=\columnwidth]{acc_holes} \caption{ Average acceleration of depletions for compressible and incompressible flows are shown. The continuous line shows the acceleration in Eq.~\eqref{eq:acceleration} with $\mathcal Q=0.32$ while the dashed line is a linear reference line, which corresponds to the Boussinesq approximation. } \label{fig:acc_depletions} \end{figure} In Fig.~\ref{fig:acc_depletions} we show the simulated acceleration of depletions in the compressible and the incompressible systems. We compare the simulation results to the theoretical predictions Eq.~\eqref{eq:acceleration} of our model with and without inertia. Deviations from our theoretical prediction Eq.~\eqref{eq:acceleration} are visible for amplitudes smaller than $\triangle n/n_0 \simeq -0.5$ (left of unity in the plot). The relative deviations are small at around $20$ percent. 
As in Fig.~\ref{fig:com_depletions} the acceleration reaches a constant value for plasma depletions of more than $90$ percent. Comparing Fig.~\ref{fig:acc_depletions} to Fig.~\ref{fig:acc_blobs} the asymmetry between blobs and depletions becomes apparent. While the acceleration of blobs is reduced for large amplitudes compared to a linear dependence, the acceleration of depletions is increased. In the language of our simple buoyancy model, the inertia of depletions is reduced, while that of blobs is increased.

In conclusion, we discuss the dynamics of seeded blobs and depletions in a compressible and an incompressible system. With only two fit parameters, our theoretical results reproduce the numerical COM velocities and accelerations over five orders of magnitude. We derive the amplitude dependence of the acceleration of blobs and depletions from the conservation laws of our systems in Eq.~\eqref{eq:acceleration}. From the same inequality a linear regime is derived in the compressible system for ratios of amplitudes to sizes smaller than a critical value. In this regime the blob and depletion velocity depends linearly on the initial amplitude and is independent of size. The regime is absent from the system with incompressible flows. Our theoretical results are verified by numerical simulations for all amplitudes that are relevant in magnetic fusion devices. Finally, we suggest a new empirical blob model that captures the detailed dynamics of more complicated models. The Boussinesq approximation is clarified as the absence of inertia and thus an altered acceleration of blobs and depletions. The maximum blob velocity is not altered by the Boussinesq approximation.

The authors were supported with financial subvention from the Research Council of Norway under grant 240510/F20. M.W. and M.H. were supported by the Austrian Science Fund (FWF) Y398. The computational results presented have been achieved in part using the Vienna Scientific Cluster (VSC). Part of this work was performed on the Abel Cluster, owned by the University of Oslo and the Norwegian metacenter for High Performance Computing (NOTUR), and operated by the Department for Research Computing at USIT, the University of Oslo IT-department. This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under grant agreement No 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission.
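As an illustrative appendix to the phenomenological model above, the following minimal sketch (Python; not part of the original FELTOR simulations) integrates Eq.~\eqref{eq:ball} with the coefficients $c_1$ and $c_2$ derived in the text and compares the resulting peak velocity with the closed-form expression Eq.~\eqref{eq:vmax_theo}. The normalization $n_0=\ensuremath{C_\mathrm{s}}=g=1$ (so that $R_0=\ensuremath{C_\mathrm{s}}^2/g=1$) and the fit values $\mathcal Q=0.32$, $\mathcal R=0.85$ quoted in the figure captions are assumptions of this sketch only.
\begin{verbatim}
import numpy as np

Q, R = 0.32, 0.85            # fit parameters quoted in the figure captions
n0, Cs, g = 1.0, 1.0, 1.0    # normalized background density, sound speed, gravity
ell = 1e-2                   # blob size; with Cs = g = 1, R0 = Cs**2/g = 1, so ell = ell/R0

def vmax_closed_form(dn):
    # Eq. (vmax_theo): maximum COM velocity in units of Cs
    return (R**2 / Q) * ell * (np.sqrt(1.0 + (Q / R)**2 * abs(dn) / (n0 * ell)) - 1.0)

def vmax_from_ode(dn, dt=1e-3, t_end=50.0):
    # Forward-Euler integration of the ball model, Eq. (ball)
    Mi = np.pi * ell**2 * (n0 + 2.0 * dn / 9.0)   # inertial mass
    Mg = np.pi * ell**2 * (n0 + Q * dn / 2.0)     # gravitational mass
    Mp = np.pi * ell**2 * n0                      # mass of the displaced ambient plasma
    c1 = np.pi * ell**2 * n0 * g / Cs             # linear friction coefficient
    c2 = Q * np.pi * n0 * ell / R**2              # nonlinear friction coefficient
    V, vmax = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        F = (Mg - Mp) * g - c1 * V - 0.5 * np.sign(V) * c2 * V**2
        V += dt * F / Mi
        vmax = max(vmax, abs(V))
    return vmax

for dn in (0.01, 0.1, 1.0, 10.0):
    print(f"dn/n0 = {dn:5.2f}   ODE: {vmax_from_ode(dn):.4e}   Eq. (vmax_theo): {vmax_closed_form(dn):.4e}")
\end{verbatim}
The two columns agree once the ball has relaxed to its terminal velocity, which illustrates that Eq.~\eqref{eq:vmax_theo} is simply the steady state of Eq.~\eqref{eq:ball}.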
What is the relationship between the maximum velocity and the amplitude of the blob or depletion?
The maximum velocity scales with the square root of the amplitude.
Hugh Hilton Goodwin (December 21, 1900 – February 25, 1980) was a decorated officer in the United States Navy with the rank of Vice Admiral. A veteran of both World Wars, he commanded an escort carrier during the Mariana Islands campaign. Goodwin then served consecutively as Chief of Staff, Carrier Strike Group 6 and as Air Officer, Philippine Sea Frontier, and participated in the Philippines campaign in the later part of the War. Following the War, he remained in the Navy, rose to flag rank and held several important commands including Vice Commander, Military Air Transport Service, Commander, Carrier Division Two and Commander, Naval Air Forces, Continental Air Defense Command.

Early life and career

Hugh H. Goodwin was born on December 21, 1900, in Monroe, Louisiana and attended Monroe High School there (now Neville High School). Following the United States' entry into World War I in April 1917, Goodwin left school without receiving a diploma in order to see combat, and enlisted in the United States Navy on May 7, 1917. He completed basic training and was assigned to the battleship . Goodwin participated in the training of armed guard crews and engine room personnel as the Atlantic Fleet prepared to go to war, and in November 1917 he sailed with the rest of Battleship Division 9, bound for Britain to reinforce the Grand Fleet in the North Sea. Although he did not complete the last year of high school, Goodwin was able to earn an appointment to the United States Naval Academy at Annapolis, Maryland in June 1918. While at the academy, he earned the nickname "Huge", and among his classmates were several future admirals and generals, including: Hyman G. Rickover, Milton E. Miles, Robert E. Blick Jr., Herbert S. Duckworth, Clayton C. Jerome, James P. Riseley, James A. Stuart, Frank Peak Akers, Sherman Clark, Raymond P. Coffman, Delbert S. Cornwell, Frederick J. Eckhoff, Ralph B. DeWitt, John Higgins, Vernon Huber, Albert K. Morehouse, Harold F. Pullen, Michael J. Malanaphy, William S. Parsons, Harold R. Stevens, John P. Whitney, Lyman G. Miller and George J. O'Shea.

Goodwin graduated with a Bachelor of Science degree on June 3, 1922, and was commissioned an Ensign in the United States Navy. He was subsequently assigned to the battleship and took part in the voyage to Rio de Janeiro, Brazil, before he was ordered to the Naval Torpedo Station at Newport, Rhode Island for submarine instruction in June 1923. Goodwin completed the training several weeks later and was attached to the submarine . He then continued his training aboard the submarine and, following his promotion to Lieutenant (junior grade) on June 3, 1925, he qualified as a submariner. He then served aboard the submarine off the coast of California, before he was ordered to recruiting duty in San Francisco in September 1927. While in this capacity, Goodwin applied for naval aviation training, which was ultimately approved, and he was ordered to Naval Air Station Pensacola, Florida, in August 1928. Toward the end of the training, he was promoted to lieutenant on December 11, 1928, and upon the completion of the training in January 1929, he was designated a Naval aviator. Goodwin was subsequently attached to the Observation Squadron aboard the aircraft carrier and participated in the Fleet exercises in the Caribbean. He was transferred to the Bureau of Aeronautics in Washington, D.C. in August 1931 and served consecutively under the architect of naval aviation William A. Moffett and future Chief of Naval Operations Ernest J. King.
In June 1933, Goodwin was ordered to the Naval War College at Newport, Rhode Island, where he completed the junior course in May of the following year. He subsequently joined the crew of the aircraft carrier and served under Captain Arthur B. Cook, and took part in the Fleet exercises in the Caribbean and off the East Coast of the United States. He was ordered back to the Naval Air Station Pensacola, Florida in June 1936 and was attached to the staff of the Base Commandant, then-Captain Charles A. Blakely. When Blakely was succeeded by William F. Halsey in June 1937, Goodwin remained on Halsey's staff and was promoted to Lieutenant Commander on December 1, 1937. He also completed a correspondence course in international law at the Naval War College.

Goodwin was appointed Commanding Officer of Observation Squadron 1 in June 1938 and, attached to the battleship , he took part in patrolling the Pacific and the West Coast of the United States until September 1938, when he assumed command of Observation Squadron 2 attached to the battleship . When his old superior from Lexington, now Rear Admiral Arthur B. Cook, was appointed Commander Aircraft, Scouting Force in June 1939, he requested Goodwin as his Aide and Flag Secretary. He became Admiral Cook's protégé and, after a year and a half of service in the Pacific, he continued as his Aide and Flag Secretary when Cook was appointed Commander Aircraft, Atlantic Fleet in November 1940.

World War II

Following the United States' entry into World War II, Goodwin was promoted to the temporary rank of Commander on January 1, 1942, and assumed duty as advisor to the Argentine Navy. While still in Argentina, Goodwin was promoted to the temporary rank of Captain on June 21, 1942. His promotion to Commander was made permanent two months after it had been conferred, and he returned to the United States in early 1943 for duty as assistant director of Planning in the Bureau of Aeronautics under Rear Admiral John S. McCain.

By the end of December 1943, Goodwin was ordered to Astoria, Oregon, where he assumed command of the newly commissioned escort carrier USS Gambier Bay. He was responsible for the initial training of the crew and was known as a strict disciplinarian, but the crew appreciated the skills he taught them that prepared them for combat. Goodwin insisted that everyone aboard had to do every job right every time and made the crew fight the ship at her best. During the first half of 1944, Gambier Bay was tasked with ferrying aircraft for repairs and qualified carrier pilots from San Diego to Pearl Harbor, Hawaii, before departing on May 1, 1944, to join Rear Admiral Harold B. Sallada's Carrier Support Group 2, staging in the Marshalls for the invasion of the Marianas.

The air unit, VC-10 Squadron, under Goodwin's command, gave close air support to the initial landings of Marines on Saipan on June 15, 1944, destroying enemy gun emplacements, troops, tanks, and trucks. On the 17th, her combat air patrol (CAP) shot down or turned back all but a handful of 47 enemy planes headed for her task group, and her gunners shot down two of the three planes that did break through to attack her. Goodwin's carrier continued providing close ground support at Tinian through the end of July 1944, then turned her attention to Guam, where she gave identical aid to invading troops until mid-August that year. For his service during the Mariana Islands campaign, Goodwin was decorated with the Bronze Star Medal with Combat "V". He was succeeded by Captain Walter V. R.
Vieweg on August 18, 1944, and was appointed Chief of Staff, Carrier Division Six under Rear Admiral Arthur W. Radford. The Gambier Bay was sunk in the Battle off Samar on October 25, 1944, during the Battle of Leyte Gulf, after helping turn back a much larger attacking Japanese surface force. Goodwin served with Carrier Division Six during the Bonin Islands raids and the naval operations at Palau, and took part in the Battle of Leyte Gulf and operations supporting the Leyte landings in late 1944. He was later appointed Air Officer of the Philippine Sea Frontier under Rear Admiral James L. Kauffman and remained with that command until the end of hostilities. For his service in the later part of World War II, Goodwin was decorated with the Legion of Merit with Combat "V". He was also entitled to wear two Navy Presidential Unit Citations and the Navy Unit Commendation.

Postwar service

Following the surrender of Japan, Goodwin assumed command of the light aircraft carrier San Jacinto on August 24, 1945. The ship's air missions over Japan became mercy flights over Allied prisoner-of-war camps, dropping food and medicine until the men could be rescued. She was also present at Tokyo Bay for the Japanese surrender on September 2, 1945. Goodwin returned with San Jacinto to the United States in mid-September 1945 and was detached in January 1946. He subsequently served in the office of the Chief of Naval Operations until May that year, when he entered instruction at the National War College. Goodwin graduated in June 1947 and served on the Secretary's committee for Research on Reorganization. Upon promotion to Rear Admiral on April 1, 1949, Goodwin was appointed Chief of Staff and Aide to Commander-in-Chief, Atlantic Fleet under Admiral William H. P. Blandy.

Revolt of the Admirals

In April 1949, budget cuts and the proposed reorganization of the United States Armed Forces by the Secretary of Defense Louis A. Johnson launched a wave of discontent among senior commanders in the United States Navy. Johnson proposed merging the Marine Corps into the Army and reducing the Navy to a convoy-escort force. Goodwin's superior officer, Admiral Blandy, was called to testify before the House Committee on Armed Services, and his harsh statements in defense of the Navy cost him his career. Goodwin shared his views and openly criticized Secretary Johnson for having power concentrated in a single civilian executive, who is an appointee of the Government and not an elected representative of the people. He also criticized aspects of defense unification which permitted the Joint Chiefs of Staff to vote on arms policies of individual services, and thus "rob" the branches of autonomy. The outbreak of the Korean War in summer 1950 proved Secretary Johnson's proposal incorrect, and he resigned in September that year. Secretary of the Navy Francis P. Matthews had resigned one month earlier.

Later service

Due to the Revolt of the Admirals, Blandy was forced to retire in February 1950, and Goodwin was ordered to Newport, Rhode Island for temporary duty as Chief of Staff and Aide to the President of the Naval War College under Vice Admiral Donald B. Beary in April 1950. Goodwin was detached from that assignment two months later and appointed a member of the General Board of the Navy. He was shortly thereafter appointed acting Navy Chief of Public Information, as the substitute for Rear Admiral Russell S. Berkey, who was relieved due to illness, but returned to the General Board of the Navy in July that year.
Goodwin served in that capacity until February 1951, when he relieved his Academy classmate, Rear Admiral John P. Whitney, as Vice Commander, Military Air Transport Service (MATS). While in this capacity, Goodwin served under Lieutenant General Laurence S. Kuter and was co-responsible for the logistical support of United Nations troops fighting in Korea. MATS operated from the United States to Japan, and Goodwin served in this capacity until August 1953, when he was appointed Commander, Carrier Division Two. While in this assignment, he took part in Operation Mariner, a joint Anglo-American exercise which encountered very heavy seas over a two-week period in fall 1953.

Goodwin was ordered to the Philippines in May 1954 and assumed duty as Commander, U.S. Naval Forces in the Philippines, with headquarters at Naval Station Sangley Point near Cavite. He held that command during the period of tensions between Taiwan and China and publicly declared shortly after his arrival that any attack on Taiwan by the Chinese Communists on the mainland would result in US participation in the conflict. The naval fighter planes under his command also provided escort for passing commercial planes. Goodwin worked together with retired Admiral Raymond A. Spruance, then Ambassador to the Philippines, and accompanied him during visits to Singapore, Bangkok and Saigon in January 1955.

On December 18, 1955, Goodwin's classmate Rear Admiral Albert K. Morehouse, then serving as Commander, Naval Air Forces, Continental Air Defense Command (CONAD), died of a heart attack, and Goodwin was ordered to CONAD headquarters in Colorado Springs, Colorado, to assume Morehouse's position. While in this capacity, he was subordinated to Army General Earle E. Partridge and was responsible for the Naval and Marine Forces allocated to the command designated for the defense of the Continental United States.

Retirement

Goodwin retired on June 1, 1957, after 40 years of active service, and was advanced to the rank of Vice Admiral on the retired list for having been specially commended in combat. A week later, he was invited back to his Monroe High School (now Neville High School) and handed a diploma showing that he had been graduated with the class of 1918. He then settled in Monterey, California, where he taught American history at Stevenson School and was a member of the Naval Order of the United States. Vice Admiral Hugh H. Goodwin died at his home on February 25, 1980, aged 79. He was survived by his wife, Eleanor, with whom he had two children: a daughter, Sidney, and a son, Hugh Jr., who graduated from the Naval Academy in June 1948 but died one year later, when the Hellcat fighter he was piloting collided with another over the Gulf of Mexico during training.

Decorations

Here is the ribbon bar of Vice Admiral Hugh H. Goodwin:

References

Categories: 1900 births; 1980 deaths; People from Monroe, Louisiana; Military personnel from Louisiana; United States Naval Academy alumni; Naval War College alumni; United States Naval Aviators; United States Navy personnel of World War I; United States Navy World War II admirals; United States Navy vice admirals; United States submarine commanders; Recipients of the Legion of Merit
When did Goodwin become a Naval aviator?
Goodwin became a Naval aviator in January 1929.
Wuxi FastSOC Microelectronics Co., Ltd. (FastSOC Microelectronics Co., Ltd.) is a national high-tech enterprise integrating chip R&D, sales and service, providing customers with high-performance, highly integrated, full-protocol fast-charging chips with an excellent user experience.

Sales contact:
Contact person: Mr. Gu
Mobile: 1800 185 3071
E-mail: [email protected]
Website: www.fastsoc.com
Address: Room E-503, China IoT International Innovation Park, 200 Linghu Avenue, Xinwu District, Wuxi
[QR codes: Mr. Gu's WeChat; FastSOC official WeChat account]

Disclaimer: The methods and solutions described in this document are provided for customer reference only, to suggest or demonstrate one or more ways of applying the chips; they do not constitute the actual solution of a final product. The functions and performance figures described here were obtained under laboratory test conditions; third-party test reports can be provided for some of them, but identical results on the customer's product are not guaranteed. The information in this document serves only as guidance for using the chips; it does not grant users a license to the intellectual property of our company or of any other company, and no liability is assumed for any loss caused by a customer's own improper application. The information in this document is for reference only; please contact us for the latest materials.

Wuxi FastSOC Microelectronics Co., Ltd. - Product Manual 2023

New products at a glance

FS312A: PD3.0 decoy (voltage trigger)
- FS312A supports PD2.0/PD3.0, maximum trigger voltage: 20V
- FS312AE supports PD2.0/PD3.0, maximum trigger voltage: 20V, with eMarker emulation
- Package: SOT23-5
[Application diagram: FS312A decoy with load circuit; pins VBUS, CC1/CC2, FUNC, VDD, GND; 4.7 kΩ resistor, 1 µF capacitor]

FS312B: PD3.1 decoy
- FS312BL supports PD2.0/PD3.0/PD3.1/third-party protocols, maximum trigger voltage: 20V
- FS312BLE supports PD2.0/PD3.0/PD3.1/third-party protocols, maximum trigger voltage: 20V, with eMarker emulation
- FS312BH supports PD2.0/PD3.0/PD3.1/third-party protocols, maximum trigger voltage: 48V
- FS312BHE supports PD2.0/PD3.0/PD3.1/third-party protocols, maximum trigger voltage: 48V, with eMarker emulation
- Package: DFN2x2-6L
[Application diagram: FS312B decoy with load circuit; pins VBUS, CC1/CC2, DM/DP, VDD, FUNC, EP, GND; 4.7 kΩ resistor, 0.47 µF capacitor]

FS8628: A+C fast-charge protocol chip
- Compatible with BC1.2, Apple 2.4A, QC2.0 Class A, QC3.0 Class A/B, FCP, SCP, AFC, low-voltage direct charging, etc.
- Compatible with Type-C PD2.0, Type-C PD3.0, Type-C PD3.0 PPS and QC4.0
- Supports two DP/DM channels
- Supports CV/CC (segmented CC)
- Supports customized PDOs
- Supports A+C dual-port operation with automatic fallback to 5V
- Supports FB/OPTO feedback
- Package: QFN3x3-20L
[Application diagram: FS8628 (QFN3x3-20L) A+C reference circuit; pins VIN, VFB/FB, FUNC1/FUNC2, PLUGIND, AGATE/CGATE, CVBUS/AVBUS, CC1/CC2, CDP/CDM, DP/DM, ISP/ISN, V3P3, EP, GND; 100 kΩ, 47 kΩ and 7.5 kΩ resistors, 5 mΩ sense resistor, Type-C and Type-A connectors]

Minimal multi-port solutions

FS8611SP*2 + CCM-8611SP-A + 7533B-T dual-C smart power-reduction solution
[Block diagram: AC-DC stage with dual transformers, 7533B-T, CCM-8611SP-A, two FS8611SP USB-C ports]
Uses two FS8611SP chips together with a CCM-8611SP-A (MCU) and a 7533B-T:
- Supports multiple protocols
- Supports I2C control
- Any single C port delivers 35W
- Power is reduced when both ports are plugged in, with three smart power configurations: 27.4W+7.4W; 17.4W+17.4W; 27.4W
- Minimal BOM, low cost

FS8611K*2 + CCM-8611K-A + 7550B-T dual-C solution
[Block diagram: AC-DC, DC-DC, 7550B-T, CCM-8611K-A, two FS8611K USB-C ports]
Uses two FS8611K chips together with a CCM-8611K-A (MCU) and a 7550B-T:
- Supports PD2.0/PD3.0/QC2.0/AFC/FCP
- Supports PDO customization
- Any single C port delivers 35W (customizable)
- 18W when both ports are plugged in (15W/20W customizable)
- Minimal BOM, low cost

FS212C + ACM-212C-A + 7550B-T dual-C solution
[Block diagram: AC-DC, DC-DC, 7550B-T, ACM-212C-A, FS212C USB-C ports]
Uses one FS212C together with an ACM-212C-A and a 7550B-T:
- Supports PD2.0/PD3.0
- Supports PDO customization
- Any single C port delivers 20W
- 7.5W with fallback to 5V when both ports are plugged in
- Minimal BOM, low cost

FS8623B A+C solution
[Block diagram: AC-DC, DC-DC, FS8623B, USB-A and USB-C ports]
Uses a single FS8623B to implement an A+C solution:
- Compatible with Apple 2.4A, low-voltage direct charging, QC2.0 Class A, QC3.0 Class A/B, FCP, SCP, etc.
- Compatible with Type-C PD2.0/PD3.0/PD3.0 PPS/QC4.0
- Supports PDO customization
- Falls back to 5V when both ports are plugged in

Product selection

Multi-port solution selection and sink-side (decoy) chip selection

FastSOC offers a variety of multi-port solutions: A+C, C+C, C+C+A, C+C+C, C+C+A+A, and so on. An A+C solution can be implemented with a single chip or with several chips. FastSOC also offers a range of sink-side decoy (trigger) chips, which customers can choose according to their application needs.

Typical applications of the sink-side decoy chips: massage guns, wireless chargers, cables, drones.

Sink-side (decoy) chip selection (columns: part number | PD2.0 | PD3.0 | PD3.1 | third-party protocols | trigger voltage (V) | control method | built-in eMarker | customization | package):
FS312A | √ | √ | 5/9/12/15/20 | resistor-selected | customizable voltage strategy | SOT23-5
FS312AE | √ | √ | 5/9/12/15/20 | resistor-selected | √ (for plug/cable-head use) | customizable voltage strategy | SOT23-5
FS312BL | √ | √ | √ | √ | 5/9/12/15/20 | resistor-selected | customizable voltage strategy | DFN2x2-6
FS312BLE | √ | √ | √ | √ | 5/9/12/15/20 | resistor-selected | √ (for plug/cable-head use) | customizable voltage strategy | DFN2x2-6
FS312BH | √ | √ | √ | √ | 5/20/28/36/48 | resistor-selected | customizable voltage strategy | DFN2x2-6
FS312BHE | √ | √ | √ | √ | 5/20/28/36/48 | resistor-selected | √ (for plug/cable-head use) | customizable voltage strategy | DFN2x2-6
FS312LC | √ | √ | √ | 5/9/12 | resistor-selected | customizable third-party protocols | SSOP10
FS312HC | √ | √ | √ | 5/9/12/15/20 | resistor-selected | customizable third-party protocols | SSOP10
FS2711Q | √ | √ | √ | freely settable | I2C | √ | QFN3x3-16
FS2711P | √ | √ | √ | freely settable | I2C | √ | QFN3x3-16
FS2711PA | √ | √ | full protocol set | freely settable | I2C | √ | SSOP10
FS2711SW | √ | √ | full protocol set | SSOP10
FS512 | √ | √ | full protocol set | freely settable | I2C | √ | SSOP10

Multi-port solution selection, A+C solutions (columns: solution type | part number(s) | single C | single A | both ports inserted):
FS8623 | 20W (PPS) (customizable) | A port: full protocols, 18W | 5V shared, 3A
FS8623B | 20W (PPS) (customizable) | A port: full protocols, 18W | 5V shared, 3A
FS8628 | 20W (PPS) (customizable) | A port: full protocols, 18W | 5V shared, 3A
FS8611RPC+FS116DB | 65W (PPS) (customizable) | A port: full protocols, 18W | A port: 5V/2.4A; C port: 45W
FS8628RC+FS116DB | 35W (customizable) | A port: full protocols, 18W | A port: 5V (BC1.2, Apple 2.4); C port: 20W

C+C solutions (columns: solution type | part number(s) | single C1 | single C2 | C1/C2 with both inserted):
FS8611RPB*2 | 30W (customizable) | 30W (customizable) | C1/C2: 5V/3A (or 5V/2.4A)
FS8611GH*2 | 35W (customizable) | 35W (customizable) | C1/C2: 18W (customizable)
FS8628P*2 | 35W (customizable) | 35W (customizable) | C1/C2: 17.4W (customizable)
FS8611KL*2 | 20W (customizable) | 20W (customizable) | C1/C2: 5V/1.5A
FS8611PC*2 | 35W | 35W | C1/C2: 18W
FS8611BH*2 | 65W (customizable) | 65W (customizable) | C1: 45W (customizable); C2: 20W (customizable)
FS8628RPC+FS8611RB | 45W (customizable) | 36W (customizable) | C1: 30W (customizable); C2: 5V/1.5A (customizable)

C+C+A solutions (columns: solution type | part number(s) | single C1 | single C2 | single A | C1+C2 | C1/C2+A | C1+C2+A):
FS8611S*2+FS116DB | 65W (customizable) | 65W (customizable) | A port: full protocols, 18W | smart power allocation: 45W+18W | C1/C2: smart power allocation; A: 18W (or 5V/1.5A)
FS8612C+FS8628P | 100W (customizable) | 35W (customizable) | 20W | C1: 65W; C2: 20W | C1+A: 65W+20W; C2+A: 7.5W+7.5W | C1: 65W; C2: 7.5W; A: 7.5W
Other solution types: available

Source-side Type-C protocol chip selection / Source-side Type-A protocol chip selection

FastSOC offers a variety of Type-C fast-charge protocol chips that support multiple protocols and customer customization, meeting all kinds of customer requirements for Type-C fast charging. FastSOC also offers a variety of Type-A fast-charge protocol chips with full protocol support and customization, meeting all customer requirements for the A port. The Type-A protocol chips cover a rich protocol set: the FS112 series comes in several variants, and the FS116D series provides a plug-in indication and can be paired with Type-C protocol chips to implement A+C, A+C+C, A+A+C+C and other multi-port solutions. The FS116A is generally used for plug-in indication only.

[Pin-out diagrams of the source-side Type-A protocol chips: FS112 (SOT23-6) with pins D+, VSS, FB, D-, VDD, FUNC; FS116D (SSOP10) with pins GATE, VIN, FUNC, FB, LED/PLUG_IN, DM, DP, CSP, CSN, VSS]

FastSOC's Type-C fast-charge protocol chips can be combined with one another to build multi-port solutions; for more details please contact our staff.
Dedicated multi-port power-reduction fast-charge protocol chips: FS8611RB, FS8611RC, FS8611RPB, FS8611RPC, FS8612CP.
Fast-charge protocol chips with I2C: FS8611S, FS8611SP.

Source-side Type-A protocol chip selection (columns: part number | BC1.2 | Apple 2.4 | QC2.0 | QC3.0 | AFC | FCP | SCP | HISCP | high-current direct charge | package):
FS112 | √ | √ | √ | √ | √ | √ | √ | SOT23-6
FS112H | √ | √ | √ | √ | √ | √ | √ | √ | √ | SOT23-6
FS113 | √ | √ | √ | √ | √ | √ | √ | √ | √ | SOT23-6
FS116DP | √ | √ | √ | √ | √ | √ | √ | √ | SSOP10
FS116DB | √ | √ | √ | √ | √ | √ | √ | √ | SSOP10
FS116E | √ | √ | √ | √ | √ | √ | √ | √ | √ | SSOP10
FS116A | √ | √ | SSOP10
Others | customizable

Source-side Type-C protocol chip selection (columns: part number | PD2.0 | PD3.0 | PD3.0 PPS | third-party protocols | feedback | MOS | CV/CC | customization | package):
FS212C | √ | √ | FB | √ | SOT23-6
FS212CM | √ | √ | FB | PMOS (optional) | √ | SOT23-6
FS212D | √ | √ | √ | FB | √ | SOT23-6
FS212DH | √ | √ | √ | FB | √ | SOT23-6
FS212DP | √ | √ | √ | FB | PMOS | √ | SOT23-6
FS212DG | √ | √ | √ | FB | PMOS | √ | SOT23-6
FS8611G | √ | √ | FB | PMOS (optional) | √ | SOP-8
FS8611K | √ | √ | QC2.0/AFC/FCP | FB | PMOS (optional) | √ | SOP8
FS8611J | √ | √ | √ | full protocol set | FB | PMOS (optional) | √ | SOP8
FS8611B | √ | √ | √ | full protocol set | FB | PMOS (optional) | √ | SSOP10
FS8611RB | √ | √ | full protocol set | FB | PMOS | √ | SSOP10
FS8611RC | √ | √ | full protocol set | FB | PMOS | √ | SSOP10
FS8611S | √ | √ | √ | full protocol set | FB | PMOS | √ | SSOP10
FS8611PP | √ | √ | √ | full protocol set | FB | PMOS | √ | SSOP10
FS8611BP | √ | √ | √ | full protocol set | FB | PMOS (optional) | √ | SSOP10
FS8611RPB | √ | √ | √ | full protocol set | FB | PMOS | √ | SSOP10
FS8611RPC | √ | √ | √ | full protocol set | FB | PMOS | √ | SSOP10
FS8611SP | √ | √ | √ | full protocol set | FB | PMOS (optional) | SSOP10
FS8612 | √ | √ | √ | full protocol set | OPTO | PMOS | √ | √ | SSOP16
FS8612B | √ | √ | √ | full protocol set | FB | PMOS | √ | √ | SSOP16
FS8612BP | √ | √ | √ | full protocol set | FB | PMOS | √ | √ | SSOP16
FS8612C | √ | √ | √ | full protocol set | FB/OPTO | PMOS | √ | √ | QFN4x4-16
FS8612CP | √ | √ | √ | full protocol set | FB/OPTO | PMOS | √ | √ | QFN4x4-16
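To make the selection tables above easier to use programmatically, the short Python sketch below encodes the FS312-family sink-side (decoy) parts as a small lookup table and filters them by required protocol and trigger voltage. The part data are transcribed from the product descriptions above; the helper function and its name are illustrative only and are not an API provided by FastSOC.

# Illustrative only: encode part of the sink-side (decoy) selection data above
# so that a required protocol / trigger voltage can be filtered in code.
PARTS = {
    "FS312A":   {"protocols": {"PD2.0", "PD3.0"},                         "max_voltage": 20, "emarker": False, "package": "SOT23-5"},
    "FS312AE":  {"protocols": {"PD2.0", "PD3.0"},                         "max_voltage": 20, "emarker": True,  "package": "SOT23-5"},
    "FS312BL":  {"protocols": {"PD2.0", "PD3.0", "PD3.1", "third-party"}, "max_voltage": 20, "emarker": False, "package": "DFN2x2-6L"},
    "FS312BLE": {"protocols": {"PD2.0", "PD3.0", "PD3.1", "third-party"}, "max_voltage": 20, "emarker": True,  "package": "DFN2x2-6L"},
    "FS312BH":  {"protocols": {"PD2.0", "PD3.0", "PD3.1", "third-party"}, "max_voltage": 48, "emarker": False, "package": "DFN2x2-6L"},
    "FS312BHE": {"protocols": {"PD2.0", "PD3.0", "PD3.1", "third-party"}, "max_voltage": 48, "emarker": True,  "package": "DFN2x2-6L"},
}

def select_parts(protocol, voltage, need_emarker=False):
    """Return part numbers that support the protocol, can trigger the requested
    voltage, and (optionally) provide eMarker emulation."""
    return [name for name, spec in PARTS.items()
            if protocol in spec["protocols"]
            and voltage <= spec["max_voltage"]
            and (spec["emarker"] or not need_emarker)]

print(select_parts("PD3.0", 48))                      # ['FS312BH', 'FS312BHE']
print(select_parts("PD3.1", 20, need_emarker=True))   # ['FS312BLE', 'FS312BHE']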
For the PD3.0 protocol, what is the maximum trigger (decoy) voltage supported by the FS312BH?
48V.
Brooksley Elizabeth Born (born August 27, 1940) is an American attorney and former public official who, from August 26, 1996, to June 1, 1999, was chair of the Commodity Futures Trading Commission (CFTC), the federal agency which oversees the U.S. futures and commodity options markets. During her tenure on the CFTC, Born lobbied Congress and the President to give the CFTC oversight of off-exchange markets for derivatives, in addition to its role with respect to exchange-traded derivatives, but her warnings were ignored or dismissed, and her calls for reform resisted by other regulators (Goodman, Peter S., "The Reckoning: Taking Hard New Look at a Greenspan Legacy", The New York Times, October 9, 2008). Born resigned as chairperson on June 1, 1999, shortly after Congress passed legislation prohibiting her agency from regulating derivatives. In 2009, Born received the John F. Kennedy Profiles in Courage Award, along with Sheila Bair of the Federal Deposit Insurance Corporation, in recognition of the "political courage she demonstrated in sounding early warnings about conditions that contributed" to the 2007-08 financial crisis.

Early life and education

Born graduated from Abraham Lincoln High School (San Francisco, California) at the age of 16. She then attended Stanford University, where she majored in English and graduated with the class of 1961. She initially wanted to become a doctor, but a guidance counselor at Stanford advised her against medicine, so she majored in English literature instead. She then attended Stanford Law School, where she was one of only seven women in her class. She was the first female student ever to be named president of the Stanford Law Review. She received the "Outstanding Senior" award and graduated as valedictorian of the class of 1964.

Legal career

Immediately after law school Born was selected as a law clerk to Judge Henry Edgerton of the U.S. Court of Appeals for the District of Columbia Circuit. It was during this time that she met her first husband, Jacob C. Landau, who was a journalist covering the federal courts. Following her clerkship, she became an associate at the Washington, D.C.-based international law firm of Arnold & Porter. Born was attracted to Arnold & Porter because it was one of the few major law firms to have a woman partner at that time, Carolyn Agger, who was the head of the tax practice. Born took a two-year leave of absence from Arnold & Porter to accompany her first husband to Boston, where he had received a fellowship. During that time she worked as a research assistant to law professor Alan Dershowitz. Born's early career at Arnold & Porter focused on international trade law, in which she represented a number of Swiss industries and the government of Switzerland. She developed a practice representing clients in numerous complex litigation and arbitration cases involving financial market transactions. Among her high-profile cases was the matter of the Hunt brothers' attempt to corner the market in silver in the 1970s. She made partner at Arnold & Porter, after moving to a three-day schedule to help raise her second child, and eventually rose to be the head of the firm's derivatives practice. Born was among the first female attorneys to systematically address inequities regarding how the laws treated women. Born and another female lawyer, Marna Tucker, taught what is considered to have been the first "Women and the Law" course at Catholic University’s Columbus School of Law.
The class exclusively concerned prejudicial treatment of women under the laws of the United States, past and present. Born and Tucker were surprised to discover that there was no textbook on the issue at the time. Born is also one of the co-founders of the National Women's Law Center. Born also helped rewrite the American Bar Association rules to make it possible for more women and minorities to sit on the federal bench. During her long legal career, and into her retirement, Born did much pro bono and other volunteer work. She was active in the American Bar Association, the largest professional organization of lawyers in the United States. Initially, Born was named a member of the governing council of the ABA's Individual Rights Section, eventually becoming its chairperson. Born and Tucker founded the ABA Women's Caucus, the first organization of female lawyers in the ABA. She held several other senior positions in the ABA, including being named the first woman member of the ABA's Standing Committee on the Federal Judiciary. As a member of the Judiciary Committee, Born provided testimony and opinion on persons nominated for federal judgeships. In 1980 she was named chair of the committee, and in that capacity she was invited to address the U.S. Congress regarding the nomination of Judge Sandra Day O'Connor to the U.S. Supreme Court. In 1993, Born's name was floated as a possible candidate for Attorney General of the United States, but Janet Reno was nominated. In July 2009, Nancy Pelosi appointed Brooksley Born as a commissioner to the Financial Crisis Inquiry Commission (FCIC).

Born and the OTC derivatives market

Born was appointed to the CFTC on April 15, 1994, by President Bill Clinton. Due to litigation against Bankers Trust Company by Procter and Gamble and other corporate clients, Born and her team at the CFTC sought comments on the regulation of over-the-counter derivatives, a first step in the process of writing CFTC regulations to supplement the existing regulations of the Federal Reserve System, the Options Clearing Corporation, and the National Association of Insurance Commissioners. Born was particularly concerned about swaps, financial instruments that are traded over the counter between banks, insurance companies, or other funds or companies, and thus have no transparency except to the two counterparties and the counterparties' regulators, if any. CFTC regulation was strenuously opposed by Federal Reserve Chairman Alan Greenspan and by Treasury Secretaries Robert Rubin and Lawrence Summers. On May 7, 1998, SEC Chairman Arthur Levitt joined Rubin and Greenspan in objecting to the issuance of the CFTC's concept release. Their response dismissed Born's analysis and focused on the hypothetical possibility that CFTC regulation of swaps and other OTC derivative instruments could create "legal uncertainty" regarding such financial instruments, hypothetically reducing their value. They argued that the imposition of regulatory costs would "stifle financial innovation" and encourage financial capital to transfer its transactions offshore. The disagreement between Born and the Executive Office's top economic policy advisors has been described not only as a classic Washington turf war but also as a war of ideologies, insofar as it is possible to argue that Born's actions were consistent with Keynesian and neoclassical economics while Greenspan, Rubin, Levitt, and Summers consistently espoused neoliberal and neoconservative policies.
In 1998, a trillion-dollar hedge fund called Long Term Capital Management (LTCM) was near collapse. Using mathematical models to calculate debt risk, LTCM used derivatives to leverage $5 billion into more than $1 trillion, doing business with fifteen of Wall Street's largest financial institutions. The derivative transactions were not regulated, nor were investors able to evaluate LTCM's exposures. Born stated, "I thought that LTCM was exactly what I had been worried about." In the last weekend of September 1998, the President's Working Group was told that the entire American economy hung in the balance. After intervention by the Federal Reserve, the crisis was averted. In congressional hearings into the crisis, Greenspan acknowledged that language had been introduced into an agriculture bill that would prevent the CFTC from regulating the derivatives which were at the center of the crisis that threatened the US economy. U.S. Representative Maurice Hinchey (D-NY) asked, "How many more failures do you think we'd have to have before some regulation in this area might be appropriate?" In response, Greenspan brushed aside the substance of Born's warnings with the simple assertion that "the degree of supervision of regulation of the over-the-counter derivatives market is quite adequate to maintain a degree of stability in the system". Born's warning had been precisely that these instruments were not regulated at all. Born's chief of staff, Michael Greenberger, summed up Greenspan's position this way: "Greenspan didn't believe that fraud was something that needed to be enforced, and he assumed she probably did. And of course, she did." Under heavy pressure from the financial lobby, legislation prohibiting regulation of derivatives by Born's agency was passed by Congress. Born resigned on June 1, 1999. The derivatives market continued to grow yearly throughout both terms of George W. Bush's administration. On September 15, 2008, the bankruptcy of Lehman Brothers forced a broad recognition of a financial crisis in both the US and world capital markets. As Lehman Brothers' failure temporarily reduced financial capital's confidence, a number of newspaper articles and television programs suggested that the failure's possible causes included the conflict between the CFTC and the other regulators (Faiola, Anthony, Nakashima, Ellen, and Drew, Jill, "The Crash: Risk and Regulation - What Went Wrong", The Washington Post, October 15, 2008). Born declined to publicly comment on the unfolding 2008 crisis until March 2009, when she said: "The market grew so enormously, with so little oversight and regulation, that it made the financial crisis much deeper and more pervasive than it otherwise would have been." She also lamented the influence of Wall Street lobbyists on the process and the refusal of regulators to discuss even modest reforms. An October 2009 Frontline documentary titled "The Warning" described Born's thwarted efforts to regulate and bring transparency to the derivatives market, and the continuing opposition thereto. The program concluded with an excerpted interview with Born sounding another warning: "I think we will have continuing danger from these markets and that we will have repeats of the financial crisis -- may differ in details but there will be significant financial downturns and disasters attributed to this regulatory gap, over and over, until we learn from experience." In 2009 Born, along with Sheila Bair of the FDIC, was awarded the John F.
Kennedy Profiles in Courage Award in recognition of the "political courage she demonstrated in sounding early warnings about conditions that contributed" to the 2007-08 financial crisis. According to Caroline Kennedy, "Brooksley Born recognized that the financial security of all Americans was being put at risk by the greed, negligence and opposition of powerful and well connected interests.... The catastrophic financial events of recent months have proved them [Born and Sheila Bair] right." One member of the President's Working Group had a change of heart about Brooksley Born. Former SEC Chairman Arthur Levitt stated, "I've come to know her as one of the most capable, dedicated, intelligent and committed public servants that I have ever come to know", adding that "I could have done much better. I could have made a difference" in response to her warnings. In 2010, the documentary film Inside Job further alleged that derivatives regulation was ineffective from the Clinton administration on. Along with fellow whistleblower and former IMF Chief Economist Raghuram Rajan, who was also scorned by the economic establishment, Brooksley Born was cited as one of the authorities arguing that financial derivatives increase economic risk.

Personal life

Born is married to Alexander E. Bennett (also retired from Arnold & Porter). She has five adult children: two from a previous marriage to Jacob Landau and three stepchildren. Notably, Born was named a partner at Arnold & Porter while working part-time so she could raise her two young children. When both of her children were school-age, Born returned to practice full-time.

External links

Attorney profile at Arnold & Porter
Brooksley Born (2009 Winner) of the Profiles in Courage Award, with acceptance speech transcript and NECN video
Profile at MarketsWiki

Speeches and statements

"Testimony of Brooksley Born, Chairperson of the CFTC, Concerning the Over-The-Counter Derivatives Market", before the House Committee on Banking and Financial Services, July 24, 1998.
"The Lessons of Long Term Capital Management L.P.", remarks of Brooksley Born, Chairperson of the CFTC, Chicago-Kent-IIT Commodities Law Institute, Chicago, Illinois, October 15, 1998.
Interview: Brooksley Born for "PBS Frontline: The Warning", PBS (streaming video, 1 hour), October 20, 2009.

Articles

Roig-Franzia, Manuel. "Credit Crisis Cassandra: Brooksley Born's Unheeded Warning Is a Rueful Echo 10 Years On", The Washington Post, May 26, 2009.
Taibbi, Matt. "The Great American Bubble Machine", Rolling Stone, July 9–23, 2009.
When did Born resign as chairperson of the CFTC?
June 1, 1999.