aid string | mid string | abstract string | related_work string | ref_abstract dict | title string | text_except_rw string | total_words int64
---|---|---|---|---|---|---|---
1811.11436
|
2903276340
|
We propose a sign language translation system based on human keypoint estimation. It is well known that many problems in computer vision require a massive amount of training data for deep neural network models. The situation is even worse for sign language translation because it is far more difficult to collect high-quality training data. In this paper, we introduce the KETI sign language dataset, which consists of 11,578 videos of high resolution and quality. Considering that each country has a different and unique sign language, the KETI dataset can be the starting line for further research on Korean sign language translation. Using the KETI sign language dataset, we develop a neural network model for translating sign videos into natural language sentences by utilizing the human keypoints extracted from the face, hands, and body. The obtained keypoint vector is normalized by the mean and standard deviation of the keypoints and used as input to our translation model based on the sequence-to-sequence architecture. As a result, we show that our approach is robust even when the size of the training data is not sufficient. Our translation model achieves 94.6% (60.6%, respectively) translation accuracy on the validation set (test set, respectively) for 105 sentences that can be used in emergency situations. We compare several types of our neural sign translation models based on different attention mechanisms in terms of classical metrics for measuring translation performance.
|
Until recently, there have been many attempts to recognize and translate sign language using deep learning (DL). @cite_33 have introduced and evaluated several CNN architectures for predicting the 3D joint locations of a hand given a depth map. @cite_0 have developed a sign language recognition system that is robust to different video backgrounds by extracting signers using boundary and prior shape information; the feature vector is then constructed from the segmented signer and used as input to an artificial neural network. An end-to-end sequence modelling approach using a CNN-BLSTM architecture, commonly used for gesture recognition, was proposed for large-vocabulary sign language recognition on RWTH-PHOENIX-Weather 2014 @cite_8 .
|
{
"abstract": [
"",
"We introduce and evaluate several architectures for Convolutional Neural Networks to predict the 3D joint locations of a hand given a depth map. We first show that a prior on the 3D pose can be easily introduced and significantly improves the accuracy and reliability of the predictions. We also show how to use context efficiently to deal with ambiguities between fingers. These two contributions allow us to significantly outperform the state-of-the-art on several challenging benchmarks, both in terms of accuracy and computation times.",
"This work presents an iterative re-alignment approach applicable to visual sequence labelling tasks such as gesture recognition, activity recognition and continuous sign language recognition. Previous methods dealing with video data usually rely on given frame labels to train their classifiers. However, looking at recent data sets, these labels often tend to be noisy which is commonly overseen. We propose an algorithm that treats the provided training labels as weak labels and refines the label-to-image alignment on-the-fly in a weakly supervised fashion. Given a series of frames and sequence-level labels, a deep recurrent CNN-BLSTM network is trained end-to-end. Embedded into an HMM the resulting deep model corrects the frame labels and continuously improves its performance in several re-alignments. We evaluate on two challenging publicly available sign recognition benchmark data sets featuring over 1000 classes. We outperform the state-of-the-art by up to 10 absolute and 30 relative."
],
"cite_N": [
"@cite_0",
"@cite_33",
"@cite_8"
],
"mid": [
"2318807843",
"1702419847",
"2755802490"
]
}
|
Neural Sign Language Translation based on Human Keypoint Estimation
|
Sign language recognition or translation is the study of interpreting a visual language, which has its own independent grammar, into a spoken language. The visual language combines various information from the hands and facial expressions according to this grammar to convey the exact meaning [12,51]. The problem is a challenging subject in computer vision and a significant topic for hearing-impaired people.
In recent years, Recurrent Neural Networks (RNNs), in particular the Long Short-Term Memory (LSTM) architecture [17] and Gated Recurrent Units (GRUs) [6], have been primarily employed as essential approaches to model a sequence and solve sequence-to-sequence problems such as machine translation and image captioning [9,31,47,53]. Convolutional neural networks (CNNs) are powerful models that have achieved excellent performance in various visual tasks such as image classification [19,20], object detection [14,42], semantic segmentation [32,54], and action recognition [10,34].
A sign language, with its unique grammar, expresses linguistic meaning through the shape and movement of the hands and, moreover, through facial expressions that convey emotion and specific intentions [51]. Understanding sign languages, which requires a high level of spatial and temporal knowledge, is difficult with the current level of neural-network-based computer vision techniques [11,12,15,25,27,28,46]. More importantly, the main difficulty comes from the lack of datasets for training neural networks. Many sign languages represent words and sentences of spoken languages with gesture sequences comprising continuous poses of the hands and facial expressions, while 'hand (finger) languages' only represent each letter of an alphabet with the shape of a single hand [7]. This implies that there are countless combinations of cases even for describing a single human intention in sign language.
Hence, we restrict ourselves to a specific domain related to various emergencies. We build a Korean sign language dataset collected from eleven Korean professional signers who are hearing-impaired people. The dataset consists of high-resolution videos recording Korean sign language corresponding to 419 words and 105 sentences related to various emergency situations. We then present our sign language translation system based on human keypoints of the hands, pose, and face. Our system is trained and tested with the sign language dataset built from our corpus, and it shows robust performance considering that the scale of the dataset is limited.
KETI Sign Language Dataset
The KETI dataset is constructed to understand the Korean sign language of hearing-impaired people in various emergencies, because they find it challenging to cope with such situations and are sometimes in severe conditions. In those cases, even when they are aware of the situation, it is very hard for them to report it and receive help from government agencies due to communication problems. Therefore, we have carefully examined relatively general conversations in emergency cases and chosen 105 useful sentences and 419 words used in such situations.
The KETI sign language dataset consists of 11,578 full high definition (HD) videos that are recorded at 30 frames per second from two camera angles: front and side. The dataset is recorded following the designed corpus and contains sentences and words performed by eleven different hearing-impaired signers to eliminate the expression errors that would arise from non-disabled signers. Moreover, the meanings of the sentences and words are delivered to the hearing-impaired signers through an expert's sign language in order to induce the correct expression. Each signer records a total of 1,048 videos for the dataset.

Figure 1. An overall architecture of our approach that translates a sign language video into a natural language sentence using a sequence-to-sequence model based on GRU cells.

For the training and validation sets, we have chosen ten of the eleven signers and assigned nine sign videos per sign to the training set. The remaining sign videos are assigned to the validation set. The test set consists of a single signer whose sign videos do not appear in the training or validation sets. Several statistics of the dataset are given in Table 3, and an example frame from the dataset is presented in Figure 3.
In particular, we have annotated each of the 105 signs that correspond to the useful sentences in emergencies mentioned above with five different natural language sentences in Korean. Moreover, we have annotated all sign videos with the corresponding sequences of glosses [29], where a gloss is a unique word that corresponds to a unit sign and is used to transcribe sign language. For instance, a sign implying 'I am burned.' can be annotated with the following sequence of glosses: ('FIRE', 'SCAR'). Similarly, a sentence 'A house is on fire.' is annotated by ('HOUSE', 'FIRE'). Glosses are more appropriate for annotating a sign because a sign can be expressed in various natural sentences or words with the same meaning. For this reason, we have annotated all signs with glosses with the help of Korean sign language experts.
For communication with hearing-impaired people in such situations, the KETI dataset is used to develop an artificial intelligence-based sign language recognizer or translator. All videos are recorded in a blue screen studio to minimize any undesired visual influence, given that the system must learn to recognize or translate signs from insufficient data.
Our Approach
We propose a sign recognition system based on human keypoints that are estimated by pre-existing libraries such as OpenPose [5,43,52]. Here we develop our system based on OpenPose, an open-source toolkit for real-time multi-person keypoint detection. OpenPose can estimate 130 keypoints in total, where 18 keypoints are from the body pose, 21 keypoints are from each hand, and 70 keypoints are from the face. The primary reason for choosing OpenPose as a feature extractor for sign language recognition is that it is robust to many types of variations.
Human Keypoint Detection by OpenPose
First, our recognition system is robust to different cluttered backgrounds, as it only detects the human body. Second, a system based on human keypoint detection works well regardless of the signer, since the variance of the extracted keypoints is negligible. Moreover, we apply a vector standardization technique to further reduce the signer-dependent variance. Third, our system can enjoy the benefits of future improvements to keypoint detection systems, which have great potential because of their versatility. For instance, a human keypoint detection system can be used for recognizing different human behaviors and actions, provided that the relevant dataset is secured. Lastly, the use of high-level features is necessary when the scale of the dataset is not large enough. Sign language datasets are more difficult to collect than other datasets, as many professional signers are needed to record high-quality sign language videos.
Feature Vector Normalization
There have been many successful attempts to employ various types of normalization methods in order to achieve stability and speed-up of the training process [1,21,48]. One of the main difficulties in sign language translation with a small dataset is the large visual variance, as the same sign can look very different depending on the signer. Even if we utilize the feature vector obtained by estimating the keypoints of the human body, the absolute positions of the keypoints or the scale of the body parts in the frame can be very different. For this reason, we apply a special normalization method called object 2D normalization that suits our purpose well.
After extracting high-level human keypoints, we normalize the feature vector using the mean and standard deviation of the vector to reduce the variance of the data. Let us denote a 2D feature vector by
$V = (v_1, v_2, \ldots, v_n) \in \mathbb{N}^{n \times 2}$, which consists of $n$ elements where each element $v_i \in \mathbb{N}^2$, $1 \le i \le n$, stands for a single keypoint of a human part. Each element $v_i = (v_i^x, v_i^y)$ consists of two integers $v_i^x$ and $v_i^y$ that denote the $x$- and $y$-coordinates of the keypoint $v_i$ in the video frame, respectively. From the given feature vector $V$, we extract the two feature vectors

$$V_x = (v_1^x, v_2^x, \ldots, v_n^x) \quad \text{and} \quad V_y = (v_1^y, v_2^y, \ldots, v_n^y).$$

Simply speaking, we collect the $x$- and $y$-coordinates of the keypoints separately while keeping the order. Then, we normalize the $x$-coordinate vector $V_x$ as follows:

$$V_x^* = \frac{V_x - \bar{V}_x}{\sigma(V_x)},$$

where $\bar{V}_x$ is the mean of $V_x$ and $\sigma(V_x)$ is the standard deviation of $V_x$. Note that $V_y^*$ is computed analogously. Finally, it remains to concatenate the two normalized vectors to form the final feature vector $V^* = [V_x^*; V_y^*] \in \mathbb{R}^{2n}$, which will be used as the input vector of our neural network.
It should be noted that we assume the keypoints of the lower body parts are not necessary for sign language recognition. Therefore, we only use 124 of the 130 keypoints detected by OpenPose, since six keypoints of the human pose correspond to lower body parts such as the feet, knees, and pelvis, as shown in Figure 2. We randomly sample 10 to 50 keyframes from each sign video. Hence, the dimension of the input feature is $248 \times |V|$, where $|V| \in \{10, 20, 30, 40, 50\}$.
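A minimal NumPy sketch of the object 2D normalization described above; the grouping of keypoints into body, hands, and face and the function names are illustrative, not taken from the paper's released code.

```python
import numpy as np

def normalize_part(part_xy):
    """Object 2D normalization for one part (face, body, or one hand).

    part_xy: array of shape (k, 2) holding the (x, y) pixel coordinates of
    the k keypoints of a single part.  Returns the concatenation of the
    separately standardized x- and y-coordinate vectors.
    """
    xs, ys = part_xy[:, 0], part_xy[:, 1]
    xs = (xs - xs.mean()) / (xs.std() + 1e-8)   # V*_x = (V_x - mean) / std
    ys = (ys - ys.mean()) / (ys.std() + 1e-8)   # V*_y computed analogously
    return np.concatenate([xs, ys])

def frame_feature(keypoints_by_part):
    """Concatenate the normalized vectors of every part into one
    248-dimensional frame feature (12 body + 2*21 hand + 70 face keypoints)."""
    return np.concatenate([normalize_part(p) for p in keypoints_by_part])

# usage: body (12, 2), left hand (21, 2), right hand (21, 2), face (70, 2)
parts = [np.random.rand(12, 2), np.random.rand(21, 2),
         np.random.rand(21, 2), np.random.rand(70, 2)]
assert frame_feature(parts).shape == (248,)
```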
Frame Skip Sampling for Data Augmentation
The main difficulty of training neural networks with small datasets is that the trained models do not generalize well to the validation and test sets. As the size of the dataset is even smaller than usual in our problem, we utilize random frame skip sampling, which is commonly used for processing video data such as in video classification [22], to augment the training data. The effectiveness of data augmentation has been proven in many tasks, including image classification [40]. Here, we randomly extract multiple representative features of a video.
Given a sign video $S = (f_1, f_2, \ldots, f_l)$ that contains $l$ frames from $f_1$ to $f_l$, we randomly select a fixed number of frames, say $n$. Then, we first compute the average length of gaps between frames as follows:

$$z = \left\lfloor \frac{l}{n-1} \right\rfloor.$$

We first extract a sequence of frames with indices from the baseline sequence

$$Y = (y,\; y + z,\; y + 2z,\; \ldots,\; y + (n-1)z) \in \mathbb{N}^{n}, \quad \text{where } y = \left\lfloor \frac{l - z(n-1)}{2} \right\rfloor.$$

Then, we generate a random integer sequence $R = (r_1, r_2, \ldots, r_n) \in [1, z]^n$ and compute the sum of the random sequence and the baseline sequence. Note that the value of the last index is clipped to the range $[1, l]$. We start from the baseline sequence, instead of choosing an arbitrary random sequence of length $n$, to avoid generating sequences of frames that possibly do not contain the 'key' moments of the signs.
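A sketch of the random frame skip sampling described above; the exact flooring and clipping conventions are assumptions made to keep the indices integral and inside the video.

```python
import numpy as np

def sample_frame_indices(l, n, rng=np.random):
    """Randomly sample n frame indices (1-based) from a video of l frames.

    A baseline sequence spreads n indices evenly over the video; each index
    is then perturbed by a random offset in [1, z], where z is the average
    gap between selected frames.
    """
    z = l // (n - 1)                          # average gap between frames
    y = (l - z * (n - 1)) // 2                # start of the baseline sequence
    baseline = y + z * np.arange(n)           # (y, y+z, ..., y+(n-1)z)
    offsets = rng.randint(1, z + 1, size=n)   # r_i drawn from [1, z]
    return np.clip(baseline + offsets, 1, l)  # keep indices inside [1, l]

# usage: draw two augmented 50-frame index sequences from a 240-frame video
print(sample_frame_indices(240, 50))
print(sample_frame_indices(240, 50))
```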
Attention-based Encoder-Decoder Network
The encoder-decoder framework based on RNN architectures such as LSTMs or GRUs has gained popularity for neural machine translation [2,33,47,49], as it has successfully replaced statistical machine translation methods.
Given an input sentence $x = (x_1, x_2, \ldots, x_{T_x})$, an encoder RNN plays its role as follows:

$$h_t = \mathrm{RNN}(x_t, h_{t-1}),$$

where $h_t \in \mathbb{R}^n$ is a hidden state at time $t$. After processing the whole input sentence, the encoder generates a fixed-size context vector that represents the sequence as follows:

$$c = q(h_1, h_2, \ldots, h_{T_x}).$$
For instance, the RNN is an LSTM cell and $q$ simply returns the last hidden state $h_{T_x}$ in the original sequence-to-sequence paper by Sutskever et al. [47]. Now suppose that $y = (y_1, y_2, \ldots, y_{T_y})$ is an output sentence that corresponds to the input sentence $x$ in the training set. Then, the decoder RNN is trained to predict the next word conditioned on all the previously predicted words and the context vector from the encoder RNN. In other words, the decoder computes the probability of the translation $y$ by decomposing the joint probability into ordered conditional probabilities as follows:
$$p(y) = \prod_{i=1}^{T_y} p(y_i \mid \{y_1, y_2, \ldots, y_{i-1}\}, c).$$

Our RNN decoder computes each conditional probability as follows:

$$p(y_i \mid y_1, y_2, \ldots, y_{i-1}, c) = \mathrm{softmax}(g(s_i)),$$

where $s_i$ is the hidden state of the decoder RNN at time $i$ and $g$ is a linear transformation that outputs a vocabulary-sized vector. Note that the hidden state $s_i$ is computed by

$$s_i = \mathrm{RNN}(y_{i-1}, s_{i-1}, c),$$

where $y_{i-1}$ is the previously predicted word, $s_{i-1}$ is the last hidden state of the decoder RNN, and $c$ is the context vector computed by the encoder RNN.
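To make the vanilla encoder-decoder concrete, a minimal PyTorch sketch of a GRU-based encoder and decoder might look as follows; the feature dimension, hidden size, vocabulary size, and start-of-sentence token are illustrative, and initializing the decoder with the final encoder state is a common simplification, not necessarily the paper's exact implementation.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, input_dim=248, hidden_dim=256):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)

    def forward(self, x):            # x: (batch, T_x, input_dim) keypoint features
        outputs, h = self.rnn(x)     # h_t = RNN(x_t, h_{t-1})
        return outputs, h            # h plays the role of the context c

class Decoder(nn.Module):
    def __init__(self, vocab_size, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)   # g: hidden state -> vocab logits

    def forward(self, y_prev, s_prev):
        # s_i = RNN(y_{i-1}, s_{i-1}); p(y_i | y_<i, c) = softmax(g(s_i))
        s, s_last = self.rnn(self.embed(y_prev), s_prev)
        return self.out(s), s_last

# usage: encode a batch of 50-frame keypoint sequences and decode one step
enc, dec = Encoder(), Decoder(vocab_size=1000)
_, context = enc(torch.randn(8, 50, 248))
logits, _ = dec(torch.tensor([[1]] * 8), context)   # token id 1 = assumed <sos>
```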
Bahdanau attention. Bahdanau et al. [2] conjectured that the fixed-length context vector c is a bottleneck in improving the performance of the translation model and proposed to compute the context vector by automatically searching for relevant parts from the hidden states of encoder. Indeed, this 'attention' mechanism has proven really useful in various tasks including but not limited to machine translation. They proposed a new model that defines each conditional probability at time i depending on a dynamically computed context vector c i as follows:
$$p(y_i \mid y_1, y_2, \ldots, y_{i-1}, x) = \mathrm{softmax}(g(s_i)),$$

where $s_i$ is the hidden state of the decoder RNN at time $i$, which is computed by

$$s_i = \mathrm{RNN}(y_{i-1}, s_{i-1}, c_i).$$

The context vector $c_i$ is computed as a weighted sum of the hidden states of the encoder:

$$c_i = \sum_{j=1}^{T_x} \alpha_{ij} h_j, \quad \text{where} \quad \alpha_{ij} = \frac{\exp(\mathrm{score}(s_{i-1}, h_j))}{\sum_{k=1}^{T_x} \exp(\mathrm{score}(s_{i-1}, h_k))}.$$
Here the function 'score' is called an alignment function; it computes how well two hidden states, one from the decoder and one from the encoder, match. For example, score$(s_i, h_j)$, where $s_i$ is the hidden state of the decoder at time $i$ and $h_j$ is the hidden state of the encoder at time $j$, reflects how well the part of the input sentence around position $j$ aligns with the part of the output sentence around position $i$.
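As a concrete illustration, the attention weights $\alpha_{ij}$ and context vector $c_i$ could be computed as in the following sketch, which uses the additive (concat-style) score of Bahdanau et al. [2]; the tensor shapes and module names are assumptions.

```python
import torch
import torch.nn as nn

class BahdanauAttention(nn.Module):
    """alpha_ij = softmax_j(score(s_{i-1}, h_j)); c_i = sum_j alpha_ij * h_j."""
    def __init__(self, hidden_dim=256):
        super().__init__()
        self.W = nn.Linear(2 * hidden_dim, hidden_dim)
        self.v = nn.Linear(hidden_dim, 1)

    def forward(self, s_prev, enc_outputs):
        # s_prev: (batch, hidden); enc_outputs: (batch, T_x, hidden)
        s_exp = s_prev.unsqueeze(1).expand(-1, enc_outputs.size(1), -1)
        scores = self.v(torch.tanh(self.W(torch.cat([s_exp, enc_outputs], dim=-1))))
        alpha = torch.softmax(scores, dim=1)           # weights over the T_x frames
        context = (alpha * enc_outputs).sum(dim=1)     # c_i: (batch, hidden)
        return context, alpha
```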
Luong attention. Later, Luong et al. [33] examined an attention mechanism that is very similar to the mechanism of Bahdanau et al. but differs in some details. First, only the hidden states of the top RNN layers in both the encoder and decoder are used, instead of the concatenation of the forward and backward hidden states of a bi-directional encoder together with the hidden states of a uni-directional non-stacking decoder. Second, the computation path is simplified by computing the attention matrix after computing the hidden state of the decoder at the current time step. They also proposed three scoring functions to compute the degree of alignment between the hidden states:
$$\mathrm{score}(h_t, h_s) = \begin{cases} h_t^\top h_s & \text{(dot)} \\ h_t^\top W h_s & \text{(general)} \\ v^\top \tanh(W[h_t; h_s]) & \text{(concat)} \end{cases}$$
where $v$ and $W$ are learned weights. Note that the third form, based on concatenation, was originally proposed by Bahdanau et al. [2].

Transformer. We also consider the Transformer [49], an encoder-decoder architecture built entirely on attention. As the Transformer uses neither recurrence nor convolution, the model requires some information about the order of the sequence. To cope with this problem, the Transformer uses positional encoding, which injects information about the relative or absolute position of the words in the sequence using sine and cosine functions.
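A short sketch of the sinusoidal positional encoding used by the Transformer; the tensor layout below is an assumption, and the encoding is added to the input embeddings before the attention layers.

```python
import torch

def positional_encoding(max_len, d_model=256):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)); PE[pos, 2i+1] = cos(...)."""
    pos = torch.arange(max_len, dtype=torch.float32).unsqueeze(1)
    i = torch.arange(0, d_model, 2, dtype=torch.float32)
    angles = pos / torch.pow(10000.0, i / d_model)
    pe = torch.zeros(max_len, d_model)
    pe[:, 0::2] = torch.sin(angles)   # even dimensions use sine
    pe[:, 1::2] = torch.cos(angles)   # odd dimensions use cosine
    return pe                          # added to the frame/token embeddings
```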
Experimental Results
We implemented our networks using PyTorch [39], an open source machine learning library for Python. The Adam optimizer [24] was used to train the network weights and biases for 50 epochs with an initial learning rate of 0.001. During training, we changed the learning rate every 20 epochs using an exponential decay scheduler with a discount factor of 0.5. We also used dropout regularization with a probability of 0.8 and gradient clipping with a threshold of 5. Note that the dropout probability is necessarily high, as the size and variation of the dataset are small compared to other datasets specialized for deep learning. For the sequence-to-sequence models, including the vanilla seq2seq model and the two attention-based models, the dimension of the hidden states is 256. For the Transformer model, we set the input and output dimension ($d_{\text{model}}$ in [49]) to 256. The other hyper-parameters used for the Transformer are the same as in the original model, including the scheduled Adam optimizer in its own setting. Moreover, the batch size is 128, the augmentation factor is 100, the number of chosen frames is 50, and object 2D normalization is used unless otherwise specified.
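The optimization setup described above could be written as follows in PyTorch; the model, data, and loss are placeholders (dropout inside the model is omitted), so this is a sketch rather than the actual training script.

```python
import torch
import torch.nn as nn

# stand-in for the seq2seq translation model (assumed)
model = nn.GRU(input_size=248, hidden_size=256, batch_first=True)

optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# halve the learning rate every 20 epochs (exponential decay, factor 0.5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

for epoch in range(50):
    for step in range(10):                        # placeholder training loop
        x = torch.randn(128, 50, 248)             # batch of 128, 50 sampled frames
        out, _ = model(x)
        loss = out.pow(2).mean()                  # placeholder loss
        optimizer.zero_grad()
        loss.backward()
        # gradient clipping with threshold 5
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
        optimizer.step()
    scheduler.step()
```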
As our dataset is annotated in Korean, which is an agglutinative language, morphological analysis of the annotated sentences should be performed, because the size of the dictionary can become arbitrarily large if we split sentences into words simply by whitespace in such languages. For this reason, we used the Kkma part-of-speech (POS) tagger in the KoNLPy package, a Python package developed for natural language processing of the Korean language, to tokenize the sentences at the POS level [37].
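Tokenizing a Korean annotation into morpheme/POS pairs with the Kkma tagger from KoNLPy could look like the following; the example sentence is illustrative.

```python
from konlpy.tag import Kkma

kkma = Kkma()
sentence = "집에 불이 났어요."                    # "A house is on fire." (illustrative)
tokens = kkma.pos(sentence)                      # list of (morpheme, POS tag) pairs
vocab_tokens = [f"{morph}/{tag}" for morph, tag in tokens]
print(vocab_tokens)
```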
In order to evaluate the performance of our translation model, we primarily calculate 'accuracy', which means the ratio of correctly translated words and sentences. Besides, we also utilize four metrics that are commonly used for measuring the performance of machine translation models: BLEU [36], ROUGE-L [30], METEOR [3], and CIDEr [50] scores.
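A sketch of how sentence/word accuracy and BLEU could be computed; the use of NLTK and the exact accuracy definition below are assumptions, not the paper's evaluation code.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def evaluate(references, hypotheses):
    """references/hypotheses: lists of token lists, one pair per sign video."""
    smooth = SmoothingFunction().method1
    # sentence accuracy: exact match of the whole token sequence
    sent_acc = sum(r == h for r, h in zip(references, hypotheses)) / len(references)
    # word accuracy: one possible definition, position-wise token matches
    word_acc = sum(sum(a == b for a, b in zip(r, h))
                   for r, h in zip(references, hypotheses)) / sum(len(r) for r in references)
    bleu = sum(sentence_bleu([r], h, smoothing_function=smooth)
               for r, h in zip(references, hypotheses)) / len(references)
    return {"sentence_acc": sent_acc, "word_acc": word_acc, "BLEU": bleu}

print(evaluate([["house", "fire"]], [["house", "fire"]]))
```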
Sentence-level vs Gloss-level training. As in [7], we conduct an experiment to compare the translation performance depending on the type of annotation. Because each sign corresponds to a unique sequence of glosses while it corresponds to multiple natural language sentences, it is easily predictable that gloss-level translation shows better performance. Indeed, we can confirm this from the summary of results provided in Table 10.
This also points to future work on translating sequences of glosses into natural language sentences. We expect that sign language translation can become a more feasible task by separating the annotation of sign videos with natural language sentences into two sub-tasks: annotating sign videos with glosses, and annotating each sequence of glosses with natural language sentences.
Effect of feature normalization methods. In order to evaluate the effect of the feature normalization method on the keypoints estimated by OpenPose, we compare the following five cases: 1) no normalization, 2) feature normalization, 3) object normalization, 4) 2-dimensional (2D) normalization, and 5) object 2D normalization. In the first case, we do not perform any normalization step on the keypoint feature generated by concatenating the coordinate values of all keypoints. In feature normalization, we create a keypoint feature as in 1) and normalize it with the mean and standard deviation of the whole feature. In object normalization, we normalize the keypoint features obtained from the two hands, body, and face separately and concatenate them to generate a feature that represents the frame. We also consider 2D normalization, in which we normalize the x- and y-coordinates separately. Lastly, object 2D normalization is the normalization method that we propose in this paper. Table 4 summarizes the results of our experiments. The table does not contain results for the case without any normalization: when we train our neural network with the keypoint feature vector obtained by simply concatenating the x- and y-coordinates of the keypoints without any normalization, the validation loss never decreases. The results show that the proposed object 2D normalization is superior to the other normalization methods we considered. While any kind of normalization seems to work positively, it is quite interesting to see that there is an additional boost in translation performance when the object-wise normalization and the 2D normalization are used together.
Effect of augmentation factor. We examine the effect of data augmentation by random frame skip sampling and summarize the experimental results in Table 5. We call the number of training samples randomly sampled from a single sign video the augmentation factor. It should be noted that we do not include the result without random frame sampling, because in that case the validation loss does not decrease at all due to overfitting. The result shows that the optimal augmentation factor is 50. Considering that the average number of frames in a sign video is larger than 200, the average length of gaps between frames is larger than 4. Then, there are about $4^{50}$ possible random sequences on average, and consequently the probability of drawing exactly the same training sample twice is very low. However, the result implies that increasing the augmentation factor stops helping at some point.
Effect of attention mechanisms. Here we compare four types of encoder-decoder architectures that are specialized in various machine translation tasks. Table 2 demonstrates the clear contrast between the attention-based model by Luong et al. [33] and the Transformer [49]. While the model of Luong et al. shows better performance than the Transformer on the validation set that contains more similar data to the training set, the Transformer generalizes much better to the test set which consists of sign videos of an independent signer.
Effect of the number of sampled frames. It is useful to know the optimal number of frames if we plan to develop a real-time sign language translation system, because we can reduce the computational cost of the inference engine by efficiently skipping unnecessary frames. Table 6 shows how the number of sampled frames affects the translation performance. As the sequence-to-sequence model works for variable-length input sequences, we do not necessarily have to fix the number of sampled frames. However, knowing the optimal number also matters because the translation performance of sequence-to-sequence models tends to decline for long input sequences due to the vanishing gradient problem [38].
Interestingly, our experimental result shows that the optimal number of frames for the best translation performance is 30 for the validation set and 50 for the test set.
Effect of batch size. Recently, it has become increasingly accepted that training with small batches often generalizes better to the test set than training with large batches [18,45]. However, our experimental results in Table 7 show the opposite phenomenon. We suspect that this is due to the small scale of the original dataset, because a large batch is known to help prevent overfitting to some extent.
Ablation study
We also study the effect of the use of keypoint information from two hands, body, and face. The experimental results summarized in Table 8 imply that the keypoint information from both hands is the most important among all the keypoint information from hands, face, and body. Interestingly, the experimental result tells us that the keypoint information from face does not help to improve the performance in general. The performance even drops when we add face keypoints in all cases. We suspect that the reason is partly due to the imbalanced number of keypoints from different parts. Recall that the number of keypoints from face is 70 and this is much larger than the number of the other keypoints.
While the keypoints from both hands are definitely the most important features for understanding signs, it is worth noting that the 12 keypoints from the body also boost the performance. Actually, we lose the information about the relative positions of the parts with respect to each other, as we normalize the coordinates of each part separately. For instance, there is no way to infer the relative positions of the two hands from the normalized feature vectors of the hands alone. However, it is possible to recover this relative position from the body keypoints, as they also include keypoints corresponding to the hands (the wrists).
Conclusions
In this work, we have introduced a new sign language dataset, which is manually annotated with Korean spoken language sentences, and proposed a neural sign language translation model based on sequence-to-sequence translation models. It is well known that the lack of large sign language datasets significantly hinders the full utilization of neural network based algorithms, which have already proven very useful in many other tasks, for sign language translation. Moreover, it is really challenging to collect a sufficient amount of sign language data, as we need help from sign language experts.
For this reason, we claim that it is necessary to extract high-level features of sufficiently low dimension from sign language videos. We were able to successfully train a novel sign language translation system based on the human keypoints estimated by OpenPose, a well-known open source project developed by Hidalgo et al.
In the future, we aim to improve our sign language translation system by exploiting various data augmentation techniques that use the spatial properties of videos. It is also important to expand the KETI sign language dataset to a sufficiently large scale by recording videos of more signers in different environments.
Supplemental Material
Keypoint information used in sign language translation. We use the estimated coordinates of 124 keypoints of a signer to understand the signer's sign language, where 12 keypoints are from the human body, 21 keypoints are from each hand, and 70 keypoints are from the face. See Figure 3 for an example. Note that the number of keypoints from the human body is 18, but we select the 12 keypoints that correspond to upper body parts. The chosen indices and the names of the parts are as follows:
• 0 (nose),
• 1 (neck),
• 2 (right shoulder),
• 3 (right elbow),
• 4 (right wrist),

In the future, we plan to plug in an additional attention module to learn which keypoints contribute more to understanding a sign video.

Figure 3. The human keypoints used for sign language recognition. Note that the figures are borrowed from the public web page of the OpenPose project [5,43,52].
Comparison with CNN-based approaches. In Table 9, we compare our approach to the classical methods based on CNN features extracted from well-known architectures such as ResNet [16] and VGGNet [44].
Since the size of the sign video frames (1,920 × 1,080) differs from the input size of the CNN models (224 × 224), we first crop the central 1,080 × 1,080 area of each frame and resize it to 224 × 224.
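The center crop and resize could be implemented with torchvision transforms as in the following sketch; the ImageNet normalization constants are an assumption, and the frame below is a placeholder.

```python
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.CenterCrop(1080),        # central 1080x1080 area of a 1920x1080 frame
    transforms.Resize((224, 224)),      # resize to the CNN input resolution
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics (assumed)
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.new("RGB", (1920, 1080))  # placeholder frame
x = preprocess(frame)                   # tensor of shape (3, 224, 224)
```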
The experimental results show that ResNet-101 exhibits the best translation performance on the validation set and the VGGNet-19 demonstrates the best performance on the test set. In general, the performance difference on the validation set is not large but it is apparent that the VGGNet models are much better in generalizing to the test set compared to the ResNet models.
As expected, the translation models using CNN-extracted features show significantly worse translation performance than the models using human keypoint features. It is still interesting to know whether the combination of CNN-based features and human keypoint features works better than relying solely on the human keypoint features. As the size of the sign language dataset grows, we expect the CNN-based models to improve and generalize much better.

Table 9. Performance comparison with translation models based on CNN feature extraction techniques. Note that the augmentation factor in this experiment is set to 50.
Attention maps. In Figure 4, we depict attention maps of the sentence-level translation model for several successful and unsuccessful cases. We can see that, in the successful case, the attention weights are better distributed over the important frames of the video when generating the natural language sentence, compared to the failure case. However, the order of the attentions is quite irregular in Figure 4, as there is no direct mapping between sign video frames and tokens of the output sentence.
We also show the attention maps of the gloss-level translation model in Figure 5. In the attention map of the successful case, we can see that the order of the attentions is more regular than in the successful case at the sentence level. This is because there is a clearer mapping between the continuous frames in the video and the sign glosses in gloss-level translation.

Sign language annotation. We annotate each sign video with five different natural language sentences in Korean. Table 10 contains ten examples out of 105 in total.
Moreover, we annotate a sign video with a unique sign gloss as presented in Table 10.

Table 10. Ten examples of our sign language annotations. We annotate each sign with five natural language sentences in Korean and a unique sign gloss. We only provide two sentences in the table due to space limitations.
| 4,918 |
1811.11436
|
2903276340
|
We propose a sign language translation system based on human keypoint estimation. It is well known that many problems in computer vision require a massive amount of training data for deep neural network models. The situation is even worse for sign language translation because it is far more difficult to collect high-quality training data. In this paper, we introduce the KETI sign language dataset, which consists of 11,578 videos of high resolution and quality. Considering that each country has a different and unique sign language, the KETI dataset can be the starting line for further research on Korean sign language translation. Using the KETI sign language dataset, we develop a neural network model for translating sign videos into natural language sentences by utilizing the human keypoints extracted from the face, hands, and body. The obtained keypoint vector is normalized by the mean and standard deviation of the keypoints and used as input to our translation model based on the sequence-to-sequence architecture. As a result, we show that our approach is robust even when the size of the training data is not sufficient. Our translation model achieves 94.6% (60.6%, respectively) translation accuracy on the validation set (test set, respectively) for 105 sentences that can be used in emergency situations. We compare several types of our neural sign translation models based on different attention mechanisms in terms of classical metrics for measuring translation performance.
|
At the same time, one of the most interesting breakthroughs in neural machine translation, or even in the entire field of DL, was introduced under the name of 'sequence-to-sequence (seq2seq)' @cite_14. The seq2seq model relies on a common framework called an encoder-decoder model with RNN cells such as LSTMs or GRUs. The seq2seq model proved its effectiveness in many sequence generation tasks by achieving almost human-level performance @cite_14. Despite its effectiveness, the seq2seq model still has some drawbacks, such as representing input sequences of varying lengths as fixed-size vectors and the vanishing gradient caused by long-term dependencies between distant parts.
|
{
"abstract": [
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier."
],
"cite_N": [
"@cite_14"
],
"mid": [
"2130942839"
]
}
|
Neural Sign Language Translation based on Human Keypoint Estimation
|
Sign language recognition or translation is a study that interprets a visual language that has its independent grammar into a spoken language. The visual language combines various information on the hands and facial expression according to this grammar to present the exact meaning [12,51]. The issue is a challenging subject in computer vision and a significant topic for hearing-impaired people.
In recent years, Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) architecture [17], and Gated Recurrent Units (GRUs) [6] in particular, have been primarily employed as essential approaches to model a sequence and solve the sequence to sequence problems such as machine translation and image captioning [9,31,47,53]. Convolutional neural networks (CNNs) are powerful models that have archived excellent performance in various visual tasks such as image classification [19,20], object detection [14,42], semantic segmentation [32,54], and action recognition [10,34].
Sign language with a unique grammar express the linguistic meaning through the shape and movement of hands, moreover, the facial expression that present emotion and specific intentions [51]. Understanding sign languages that it requires a high level of spatial and temporal knowledge is difficult with the current level of computer vision techniques based on neural networks [11,12,15,25,27,28,46]. More importantly, the main difficulty comes from the lack of dataset for training neural networks. Many sign languages represent different words and sentences of spoken languages with gestures sequences comprising continuous pose of hands and facial expressions while 'hand (finger) languages' only represent each letter in an alphabet with the shape of a single hand [7]. This implies that there are uncountably many combinations of the cases even to describe a single human intention with the sign language.
Hence, we restrict ourselves to a specific domain which is related to various emergencies. We build the Korean sign language dataset collected from eleven Korean professional signers who are hearing-impaired people. The dataset consists of high-resolution videos that recorded Korean sign languages corresponding to 419 words and 105 sentences related to various emergency situations. We, then, present our sign language translation system based on human keypoints of hands, pose, and face. Our system is trained and tested with the sign language dataset built by our corpus, and we show a robust performance considering that the scale of dataset is not large enough.
KETI Sign Language Dataset
The KETI dataset is constructed to understand the Korean sign language of hearing-impaired people in various emergencies because they are challenging to cope with the situations and sometimes are in severe conditions. In that cases, even when they are aware of that situations, it is very hard to report the situations and receive help from government agencies due to the communication problem. Therefore, we have carefully examined the relatively general conversation of emergency cases and chosen useful 105 sentences and 419 words used in such situations.
The KETI sign language dataset consists of 11,578 full high definition (HD) videos, that are recorded at 30 frames per second and from two camera angles; front and side. The dataset is recorded by the designed corpus and contains sentences and words performed by eleven different hearingimpaired signers to eliminate the expression error by signers who are non-disabled people. Moreover, the meanings of the sentences and words are delivered to hearing-impaired signers through the expert's sign languages in order to induce the correct expression. Each signer records a total of 1,048 Human Keypoint Estimation Figure 1. An overall architecture of our approach that translates a sign language video into a natural language sentence using sequence to sequence model based on GRU cells.
videos for the dataset. For the training and validation sets, we have chosen ten signers from eleven signers and chosen nine sign videos for each sign for the training set. The remaining sign videos are assigned to the validation set. The Test Set consists of a single signer whose sign video does not exist in the training set or the validation set. Several statistics of the dataset are given in Table 3 and an example frame from the dataset is presented in Figure 3.
In particular, we have annotated each of the 105 signs that correspond to the useful sentences in emergencies mentioned above with five different natural language sentences in Korean. Moreover, we have annotated all sign videos with the corresponding sequences of glosses [29], where a gloss is a unique word that corresponds to a unit sign and used to transcribe sign language. For instance, a sign implying 'I am burned.' can be annotated with the following sequence of glosses: ('FIRE', 'SCAR'). Similarly, a sentence 'A house is on fire.' is annotated by ('HOUSE', 'FIRE'). Apparently, glosses are more appropriate to annotate a sign because it is possible to be expressed in various natural sentences or words with the same meaning. For this reason, we have annotated all signs with the glosses with the help of Korean sign language experts.
For the communication with hearing-impaired people in the situations, the KETI dataset is used to develop an artificial intelligence-based sign language recognizer or translator. All videos are recorded in a blue screen studio to minimize any undesired influence and learn how to recognize or translate the signs with insufficient data.
Our Approach
We propose a sign recognition system based on the human keypoints that are estimated by pre-existing libraries such as OpenPose [5,43,52]. Here we develop our system based on OpenPose, an open source toolkit for real-time multiperson keypoint detection. OpenPose can estimate in total 130 keypoints where 18 keypoints are from body pose, 21 keypoints are from each hand, and 70 keypoints from a face. The primary reason of choosing OpenPose as a feature extractor for sign language recognition is that it is robust to many types of variations.
Human Keypoint Detection by OpenPose
First, our recognition system is robust in different cluttered backgrounds as it only detects the human body. Second, the system based on the human keypoint detection works well regardless of signer since the variance of extracted keypoints is negligible. Moreover, we apply the vector standardization technique to further reduce the variance which is dependent on signer. Third, our system can enjoy the benefits of the improvement on the keypoint detection system which has a great potential in the future because of its versatility. For instance, the human keypoint detection system can be used for recognizing different human behaviors and actions given that the relevant dataset is secured. Lastly, the use of high level features is necessary when the scale of the dataset is not large enough. In the case of sign language dataset, it is more difficult to collect than the other dataset as many professional signers should be utilized for recording sign language videos of high quality.
Feature Vector Normalization
There have been many successful attempts to employ various types of normalization methods in order to achieve the stability and speed-up of the training process [1,21,48]. One of the main difficulty in sign language translation with the small dataset is the large visual variance as the same sign can look very different depending on the signer. Even if we utilize the feature vector which is obtained by estimating the keypoints of human body, the absolute positions of the keypoints or the scale of the body parts in the frame can be very different. For this reason, we apply a special normalization method called the object 2D normalization that suits well in our purpose.
After extracting high-level human keypoints, we normalize the feature vector using the mean and standard deviation of the vector to reduce the variance of the data. Let us denote a 2D feature vector by
V = (v 1 , v 2 , . . . , v n ) ∈ N n×2 that consists of n elements where each element v i ∈ N 2 , 1 ≤ i ≤ n stands for a single keypoint of human part. Each ele- ment v i = (v x i , v y i ) consists of two integers v x i
and v y i that imply the xand the y-coordinates of the keypoint v i in the video frame, respectively. From the given feature vector V , we can extract the two feature vectors as follows:
V x = (v x 1 , v x 2 , . . . , v x n ) and V y = (v y 1 , v y 2 , . . . , v y n ).
Simply speaking, we collect the x and y-coordinates of keypoints separately while keeping the order. Then, we normal-ize the x-coordinate vector V x as follows:
V * x = V x −V x σ(V x ) , whereV x is the mean of V x and σ(V x ) is the standard devia- tion of V x .
Note that V * y is calculated analogously. Finally, it remains to concatenate the two normalized vectors to form the final feature vector V * = [V *
x ; V * y ] ∈ N 2n which will be used as the input vector of our neural network.
It should be noted that we assume that the keypoints of lower body parts are not necessary for sign language recognition. Therefore, we only use 124 keypoints from 137 keypoints detected by OpenPose since six keypoints of human pose correspond to lower body parts such as both feet, knees and pelvises as you can see in Figure 2. We randomly sample 10 to 50 keyframes from each sign video. Hence, the dimension of input feature vector is 248 × |V |, where |V | ∈ {10, 20, 30, 40, 50}.
Frame Skip Sampling for Data Augmentation
The main difficulty of training neural networks with small datasets is that the trained models do not generalize well with data from the validation and the Test Sets. As the size of dataset is even smaller than the usual cases in our problem, we utilize the random frame skip sampling that is commonly used to process video data such as video classification [22] for augmenting training data. The effectiveness if data augmentation has been proved in many tasks including image classification [40]. Here, we randomly extract multiple representative features of a video.
Given a sign video S = (f 1 , f 2 , . . . , f l ) that contains l frames from f 1 to f l , we randomly select a fixed number of frames, say n. Then, we first compute the average length of gaps between frames as follows:
z = l n − 1 .
We first extract a sequence of frames with indices from the following sequence Y = (y, y + Z, y + 2z . . . , y + (n − 1)z) ∈ N n , where y = l−z(n−1) 2 and call it a baseline sequence. Then, we generate a random integer sequence R = (r 1 , r 2 , . . . , r n ) ∈ [1, z] n and compute the sum of the random sequence and the baseline sequence. Note that the value of the last index is clipped to the value in the range of [1, l]. We start from the baseline sequence instead of choosing any random sequence of length l to avoid generating random sequences of frames that are possibly not containing 'key' moments of signs.
Attention-based Encoder-Decoder Network
The encoder-decoder framework based on RNN architectures such as LSTMs or GRUs is gaining its popularity for neural machine translation [2,33,47,49] as it successfully replaces the statistical machine translation methods.
Given an input sentence x = (x 1 , x 2 , . . . , x Tx ), an encoder RNN plays its role as follows:
h t = RNN(x t , h t−1 )
where h t ∈ R n is a hidden state at time t. After processing the whole input sentence, the encoder generates a fixed-size context vector that represents the sequence as follows:
c = q(h 1 , h 2 , . . . , h Tx ),
For instance, the RNN is an LSTM cell and q simply returns the last hidden state h Tx in one of the original sequence to sequence paper by Sutskever et al. [47]. Now suppose that y = (y 1 , y 2 , . . . , y Ty ) is an output sentence that corresponds to the input sentence x in training set. Then, the decoder RNN is trained to predict the next word conditioned on all the previously predicted words and the context vector from the encoder RNN. In other words, the decoder computes a probability of the translation y by decomposing the joint probability into the ordered conditional probabilities as follows:
p(y) = Ty i=1
p(y i |{y 1 , y 2 , . . . , y i−1 }, c). Now our RNN decoder computes each conditional probability as follows:
p(y i |y 1 , y 2 , . . . , y i−1 , c) = softmax(g(s i )), where s i is the hidden state of decoder RNN at time i and g is a linear transformation that outputs a vocabulary-sized vector. Note that the hidden state s i is computed by
s i = RNN(y i−1 , s i−1 , c),
where y i−1 is the previously predicted word, s i−1 is the last hidden state of decoder RNN, and c is the context vector computed from encoder RNN.
Bahdanau attention. Bahdanau et al. [2] conjectured that the fixed-length context vector c is a bottleneck in improving the performance of the translation model and proposed to compute the context vector by automatically searching for relevant parts from the hidden states of encoder. Indeed, this 'attention' mechanism has proven really useful in various tasks including but not limited to machine translation. They proposed a new model that defines each conditional probability at time i depending on a dynamically computed context vector c i as follows:
p(y i |y 1 , y 2 , . . . , y i−1 , x) = softmax(g(s i )), where s i is the hidden state of the decoder RNN at time i which is computed by
s i = RNN(y i−1 , s i−1 , c i ).
The context vector c i is computed as a weighted sum of the hidden states from encoder:
c i = Tx j=1 α ij h j , where α ij = exp(score(s i−1 , h j )) Tx k=1 exp(score(s i−1 , h k ))
.
Here the function 'score' is called an alignment function that computes how well the two hidden states from the encoder and the decoder, respectively, match. For example, score(s i , h j ), where s i is the hidden state of the encoder at time i and h j is the hidden state of the decoder at time j implies the probability of aligning the part of the input sentence around position i and the part of the output sentence around position j.
Luong attention. Later, Luong et al. [33] examined a novel attention mechanism which is very similar to the attention mechanism by Bahdananu et al. but different in some details. First, only the hidden states of the top RNN layers in both the encoder and decoder are used instead of using the concatenation of the forward and backward hidden states of the bi-directional encoder and the hidden states of the uni-directional non-stacking decoder. Second, the computation path is simplified by computing the attention matrix after computing the hidden state of the decoder at current time step. They also proposed the following three scoring functions to compute the degree of alignment between the hidden states as follows:
score(h t , h s ) = h t h s , (Dot) h t W h s , (General) V tanh(W [h t ; h s ]), (Concat.)
where V and W are learned weights. Note that the third one based on the concatenation is originally proposed by Bahdanau et al. [2]. Moreover, as the Transformer uses neither recurrence nor convolution, the model requires some information about the order of the sequence. To cope with this problem, the Transformer uses positional encoding which contains the information about the relative or absolute position of the words in the sequence using sine and cosine functions.
Experimental Results
We implemented our networks using PyTorch [39], which is an open source machine learning library for Python. The Adam optimizer [24] was used to train the network weights and biases for 50 epochs with an initial learning rate 0.001. During the training, we changed the learning rate every 20 epochs by the exponential decay scheduler with discount factor 0.5. We also used the dropout regularization with a probability of 0.8 and the gradient clipping with a threshold 5. Note that the dropout regularization is necessarily high as the size and the variation of the dataset is small compared to other datasets specialized for deep learning training. For the sequence-to-sequence models including the vanilla seq2seq model and two attention-based models, the dimension of hidden states is 256. For the Transformer model, we use the dimension for input and output (d model in [49]) of 256. The other hyper-parameters used for the Transformer are the same as in the original model including the scheduled Adam optimizer in their own setting. Moreover, the batch size is 128, the augmentation factor is 100, the number of chosen frames is 50, and the object 2D normalization is used unless otherwise specified.
As our dataset is annotated in Korean which is an agglutinative language, the morphological analysis on the annotated sentences should be performed because the size of dictionary can be arbitrarily large if we split sentences into words simply by white-spaces in such languages. For this reason, we used the Kkma part-of-speech (POS) tagger in the KoNLPy package which is a Python package developed for natural language processing of the Korean language to tokenize the sentences into the POS level [37].
In order to evaluate the performance of our translation model, we basically calculate 'accuracy' which means the ratio of correctly translated words and sentences. Besides, we also utilized three types of metrics that are commonly used for measuring the performance of machine translation models such as BLEU [36], ROUGE-L [30], METEOR [3], and CIDEr [50] scores.
Sentence-level vs Gloss-level training. As in [7], we conduct an experiment to compare the translation performance depending on the type of annotations. Because each sign corresponds to a unique sequence of glosses while it corresponds to multiple natural language sentences, it is easily predictable that the the gloss-level translation shows better performance. Indeed, we can confirm the anticipation from the summary of results provided in Table 10.
This also leads us to the future work for translating sequences of glosses into natural language sentences. We expect that the sign language translation can be a more feasible task by separating the task of annotating sign videos with natural language sentences by two sub-tasks where we annotate sign videos with glosses and annotate each sequence of glosses with natural language sentences.
Effect of feature normalization methods. In order to evaluate the effect of the feature normalization method on the keypoints estimated by OpenPose, we compare the following five cases: 1) no normalization, 2) feature normalization, 3) object normalization, 4) 2-dimensional (2D) normalization, and 5) object 2D normalization. In the first case, we do not perform any normalization step on the keypoint feature generated by concatenating the coordinate values of all keypoints. In the feature normalization, we create a keypoint feature as in 1) and normalize the feature with the mean and standard deviation of the whole feature. In the object normalization, we normalize the keypoint features obtained from two hands, body, and face, respectively, and concatenate them to generate a feature that represents the frame. We also consider the case of 2D normalization in which we normalize the xand y-coordinates separately. Lastly, the object 2D normalization is the normalization method that we propose in the paper. Table 4 summarizes the result of our experiments. The table does not contain the results of the case without any normalization as it turns out that the proposed object 2D normalization method is superior to the other normalization methods we considered. Especially, when we train our neural network with the keypoint feature vector which is obtained by simply concatenating the x and y coordinates of keypoints without any normalization, the validation loss never decreases. While any kind of normalization seems working positively, it is quite interesting to see that there is an additional boost in translation performance when the object-wise normalization and the 2D normalization are used together.
Effect of augmentation factor. We examine the effect of data augmentation by random frame skip sampling and summarize the experimental results in Table 5. We call the number of training samples randomly sampled from a single sign video the augmentation factor. It should be noted that we do not include the result when we do not augment data by random frame sampling because the validation loss does not decrease at all due to overfitting. The result shows that the optimal augmentation factor is indeed 50. Considering the fact that the average number of frames in a sign video is larger than 200, the average length of gaps between frames is larger than 4. Then, there are $4^{50}$ possible random sequences on average, and consequently the probability of drawing exactly the same training sample twice is very low. However, the result implies that increasing the augmentation factor stops helping at some point.
Effect of attention mechanisms. Here we compare four types of encoder-decoder architectures that are specialized in various machine translation tasks. Table 2 demonstrates the clear contrast between the attention-based model by Luong et al. [33] and the Transformer [49]. While the model of Luong et al. shows better performance than the Transformer on the validation set that contains more similar data to the training set, the Transformer generalizes much better to the test set which consists of sign videos of an independent signer.
Effect of the number of sampled frames. It is useful to know the optimal number of frames if we plan to develop a real-time sign language translation system, because we can reduce the computational cost of the inference engine by efficiently skipping unnecessary frames. Table 6 shows how the number of sampled frames affects the translation performance. As the sequence-to-sequence model works for variable-length input sequences, we do not necessarily have to fix the number of sampled frames. However, the translation performance of sequence-to-sequence models tends to decline for long inputs due to the vanishing gradient problem [38], so knowing the optimal number of frames still matters.
Interestingly, our experimental result shows that the optimal number of frames for the best translation performance is 30 for the validation set and 50 for the test set.
Effect of batch size. Recently, it has become increasingly accepted that training with small batches often generalizes better to the test set than training with large batches [18,45]. However, our experimental results provided in Table 7 show the opposite phenomenon. We suspect that this is due to the scale of the original dataset, because large batches are known to be useful for preventing overfitting to some extent.
Ablation study
We also study the effect of using keypoint information from the two hands, body, and face. The experimental results summarized in Table 8 imply that the keypoint information from both hands is the most important among the keypoint information from the hands, face, and body. Interestingly, the experimental results tell us that the keypoint information from the face does not help to improve the performance in general. The performance even drops when we add face keypoints in all cases. We suspect that the reason is partly the imbalanced number of keypoints from different parts. Recall that the number of keypoints from the face is 70, which is much larger than the number of keypoints from the other parts.
While the keypoints from both hands are definitely the most important features for understanding signs, it is worth noting that the 12 keypoints from the body also boost the performance. In fact, we lose the information about the relative positions of the parts with respect to each other when we normalize the coordinates of each part separately. For instance, there is no way to infer the relative positions of the two hands from the normalized feature vectors of both hands alone. However, it is possible to recover this relative position from the body keypoints, as the body also contains keypoints corresponding to the hands.
Conclusions
In this work, we have introduced a new sign language dataset which is manually annotated with Korean spoken language sentences and proposed a neural sign language translation model based on sequence-to-sequence translation models. It is well-known that the lack of large sign language datasets significantly hinders the full utilization of neural network based algorithms, which have already proven very useful in many other tasks, for the task of sign language translation. Moreover, it is very challenging to collect a sufficient amount of sign language data, as we need help from sign language experts.
For this reason, we claim that it is inevitable to extract high-level features of sufficiently low dimension from sign language videos. We are able to successfully train a novel sign language translation system based on the human keypoints estimated by OpenPose, a widely used open source project developed by Hidalgo et al.
In the future, we aim at improving our sign language translation system by exploiting various data augmentation techniques using the spatial properties of videos. It is also important to expand the KETI sign language dataset to sufficiently larger scale by recording videos of more signers in different environments.
Supplemental Material
Keypoint information used in sign language translation. We use the estimated coordinates of 124 keypoints of a signer to understand the sign language of the signer, where 12 keypoints are from the human body, 21 keypoints are from each hand, and 70 keypoints are from the face. See Figure 3 for an example. Note that the number of keypoints from the human body is 25, but we select the 12 keypoints that correspond to upper body parts. The chosen indices and the names of the parts are as follows:
• 0 (nose),
• 1 (neck),
• 2 (right shoulder),
• 3 (right elbow),
• 4 (right wrist),
In the future, we plan to plug in an additional attention module to learn which keypoints contribute more to understanding the sign video.
Figure 3. The human keypoints used for sign language recognition. Note that the figures are borrowed from the public web page of the OpenPose project [5,43,52].
Comparison with CNN-based approaches. In Table 9, we compare our approach to the classical methods based on CNN features extracted from well-known architectures such as ResNet [16] and VGGNet [44].
Since the size of the sign video frames (1,920 × 1,080) differs from the input size of the CNN models (224 × 224), we first crop the central 1,080 × 1,080 area of each frame and then resize it to 224 × 224.
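A minimal preprocessing sketch of this crop-and-resize step, assuming torchvision is used (the paper does not state which library performs it):

```python
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.CenterCrop(1080),    # keep the central 1,080 x 1,080 region
    transforms.Resize((224, 224)),  # match the CNN input resolution
    transforms.ToTensor(),
])

frame = Image.new("RGB", (1920, 1080))  # stand-in for a real video frame
tensor = preprocess(frame)              # shape: (3, 224, 224)
```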
The experimental results show that ResNet-101 exhibits the best translation performance on the validation set and the VGGNet-19 demonstrates the best performance on the test set. In general, the performance difference on the validation set is not large but it is apparent that the VGGNet models are much better in generalizing to the test set compared to the ResNet models.
As expected, the translation models using the CNN-extracted features show significantly worse translation performance than the models using the human keypoint features. It is still interesting to ask whether the combination of CNN-based features and human keypoint features works better than relying solely on the human keypoint features. As the size of the sign language dataset grows, we expect the CNN-based models to improve their performance and generalize much better.
Table 9. Performance comparison with translation models based on CNN-based feature extraction techniques. Note that the augmentation factor is set to 50 in all experiments here.
Attention maps. In Figure 4, we depict attention maps of the sentence-level translation model for several successful and unsuccessful cases. We can see that, in the successful case, the attention weights are better distributed over the important frames of the video when generating the natural language sentence, compared to the failure case. However, the order of the attentions is quite irregular in Figure 4, as there is no direct mapping between sign video frames and tokens of the output sentence.
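As an illustration of how such attention maps can be rendered (the plotting code is not part of the paper; `attn` below is random data standing in for real decoder attention weights):

```python
import numpy as np
import matplotlib.pyplot as plt

attn = np.random.rand(8, 50)                   # (output tokens, sampled frames)
attn = attn / attn.sum(axis=1, keepdims=True)  # rows sum to 1, like softmax weights

fig, ax = plt.subplots(figsize=(8, 3))
im = ax.imshow(attn, aspect="auto", cmap="viridis")
ax.set_xlabel("sampled video frame")
ax.set_ylabel("output token")
fig.colorbar(im, ax=ax)
plt.savefig("attention_map.png")
```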
We also show the attention maps of the gloss-level translation model in Figure 5. In the attention map of the successful case, we can see that the order of the attentions is more regular than in the successful sentence-level case. This is because there is a clearer mapping between the continuous frames in the video and the sign glosses in gloss-level translation.
Sign language annotation. We annotate each sign video with five different natural language sentences in Korean. Table 10 contains ten examples out of 105 examples in total.
Moreover, we annotate a sign video with a unique sign gloss as presented in Table 10.
Table 10. Ten examples of our sign language annotations. We annotate each sign with five natural language sentences in Korean and a unique sign gloss. We only provide two sentences in the table due to space limitations.
| 4,918 |
1811.11436
|
2903276340
|
We propose a sign language translation system based on human keypoint estimation. It is well-known that many problems in the field of computer vision require a massive amount of dataset to train deep neural network models. The situation is even worse when it comes to the sign language translation problem as it is far more difficult to collect high-quality training data. In this paper, we introduce the KETI sign language dataset which consists of 11,578 videos of high resolution and quality. Considering the fact that each country has a different and unique sign language, the KETI sign language dataset can be the starting line for further research on the Korean sign language translation. Using the KETI sign language dataset, we develop a neural network model for translating sign videos into natural language sentences by utilizing the human keypoints extracted from a face, hands, and body parts. The obtained human keypoint vector is normalized by the mean and standard deviation of the keypoints and used as input to our translation model based on the sequence-to-sequence architecture. As a result, we show that our approach is robust even when the size of the training data is not sufficient. Our translation model achieves 94.6 (60.6 , respectively) translation accuracy on the validation set (test set, respectively) for 105 sentences that can be used in emergency situations. We compare several types of our neural sign translation models based on different attention mechanisms in terms of classical metrics for measuring the translation performance.
|
@cite_25 formalized sign language translation within the pre-existing framework of Neural Machine Translation (NMT), with word and spatial embeddings for target sequences and sign videos, respectively. The non-linear frame-level features extracted from a sign video are converted into a spatial representation through a @math CNN and then tokenized. The sequence-to-sequence (seq2seq) based deep learning method learns how to translate the spatio-temporal representation of signs into the spoken or written language. Recently, researchers developed a simple sign language recognition system based on bidirectional GRUs which simply classifies a given sign language video into one of a set of predetermined classes @cite_23.
|
{
"abstract": [
"Sign Language Recognition (SLR) has been an active research field for the last two decades. However, most research to date has considered SLR as a naive gesture recognition problem. SLR seeks to recognize a sequence of continuous signs but neglects the underlying rich grammatical and linguistic structures of sign language that differ from spoken language. In contrast, we introduce the Sign Language Translation (SLT) problem. Here, the objective is to generate spoken language translations from sign language videos, taking into account the different word orders and grammar. We formalize SLT in the framework of Neural Machine Translation (NMT) for both end-to-end and pretrained settings (using expert knowledge). This allows us to jointly learn the spatial representations, the underlying language model, and the mapping between sign and spoken language. To evaluate the performance of Neural SLT, we collected the first publicly available Continuous SLT dataset, RWTH-PHOENIX-Weather 2014T1. It provides spoken language translations and gloss level annotations for German Sign Language videos of weather broadcasts. Our dataset contains over .95M frames with >67K signs from a sign vocabulary of >1K and >99K words from a German vocabulary of >2.8K. We report quantitative and qualitative results for various SLT setups to underpin future research in this newly established field. The upper bound for translation performance is calculated at 19.26 BLEU-4, while our end-to-end frame-level and gloss-level tokenization networks were able to achieve 9.58 and 18.13 respectively.",
"We study the sign language recognition problem which is to translate the meaning of signs from visual input such as videos. It is well-known that many problems in the field of computer vision require a huge amount of dataset to train deep neural network models. We introduce the KETI sign language dataset which consists of 10,480 videos of high resolution and quality. Since different sign languages are used in different countries, the KETI sign language dataset can be the starting line for further research on the Korean sign language recognition. Using the sign language dataset, we develop a sign language recognition system by utilizing the human keypoints extracted from face, hand, and body parts. The extracted human keypoint vector is standardized by the mean and standard deviation of the keypoints and used as input to recurrent neural network (RNN). We show that our sign recognition system is robust even when the size of training data is not sufficient. Our system shows 89.5 classification accuracy for 100 sentences that can be used in emergency situations."
],
"cite_N": [
"@cite_25",
"@cite_23"
],
"mid": [
"2799020610",
"2897208343"
]
}
|
Neural Sign Language Translation based on Human Keypoint Estimation
|
Sign language recognition or translation is the task of interpreting a visual language, which has its own independent grammar, into a spoken language. The visual language combines various kinds of information from the hands and facial expressions according to this grammar to convey the exact meaning [12,51]. The task is a challenging subject in computer vision and a significant topic for hearing-impaired people.
In recent years, Recurrent Neural Networks (RNNs), in particular the Long Short-Term Memory (LSTM) architecture [17] and Gated Recurrent Units (GRUs) [6], have been employed as essential approaches for modelling sequences and solving sequence-to-sequence problems such as machine translation and image captioning [9,31,47,53]. Convolutional neural networks (CNNs) are powerful models that have achieved excellent performance in various visual tasks such as image classification [19,20], object detection [14,42], semantic segmentation [32,54], and action recognition [10,34].
Sign languages have their own unique grammar and express linguistic meaning through the shape and movement of the hands, as well as through facial expressions that convey emotions and specific intentions [51]. Understanding sign languages, which requires a high level of spatial and temporal knowledge, is difficult with the current level of neural network based computer vision techniques [11,12,15,25,27,28,46]. More importantly, the main difficulty comes from the lack of datasets for training neural networks. Many sign languages represent different words and sentences of spoken languages with gesture sequences comprising continuous poses of the hands and facial expressions, while 'hand (finger) languages' only represent each letter of an alphabet with the shape of a single hand [7]. This implies that there are countless combinations of such poses even for describing a single human intention in sign language.
Hence, we restrict ourselves to a specific domain related to various emergency situations. We build a Korean sign language dataset collected from eleven professional Korean signers who are hearing-impaired. The dataset consists of high-resolution videos of Korean sign language corresponding to 419 words and 105 sentences related to various emergency situations. We then present our sign language translation system based on human keypoints of the hands, body pose, and face. Our system is trained and tested with the sign language dataset built from our corpus, and we show robust performance considering that the scale of the dataset is limited.
KETI Sign Language Dataset
The KETI dataset is constructed to understand the Korean sign language of hearing-impaired people in various emergencies, because it is challenging for them to cope with such situations and they are sometimes in severe conditions. In such cases, even when they are aware of the situation, it is very hard for them to report it and receive help from government agencies due to the communication barrier. Therefore, we have carefully examined relatively general conversations in emergency cases and chosen 105 useful sentences and 419 words used in such situations.
The KETI sign language dataset consists of 11,578 full high definition (HD) videos that are recorded at 30 frames per second and from two camera angles: front and side. The dataset is recorded according to the designed corpus and contains sentences and words performed by eleven different hearing-impaired signers, in order to eliminate the expression errors that non-disabled signers would introduce. Moreover, the meanings of the sentences and words are delivered to the hearing-impaired signers through an expert's sign language in order to induce the correct expressions. Each signer records a total of 1,048 videos for the dataset.
Figure 1. An overall architecture of our approach that translates a sign language video into a natural language sentence using a sequence-to-sequence model based on GRU cells.
For the training and validation sets, we have chosen ten of the eleven signers and chosen nine sign videos for each sign for the training set. The remaining sign videos are assigned to the validation set. The test set consists of a single signer whose sign videos do not appear in the training set or the validation set. Several statistics of the dataset are given in Table 3 and an example frame from the dataset is presented in Figure 3.
In particular, we have annotated each of the 105 signs that correspond to the useful sentences in emergencies mentioned above with five different natural language sentences in Korean. Moreover, we have annotated all sign videos with the corresponding sequences of glosses [29], where a gloss is a unique word that corresponds to a unit sign and is used to transcribe sign language. For instance, a sign implying 'I am burned.' can be annotated with the following sequence of glosses: ('FIRE', 'SCAR'). Similarly, a sentence 'A house is on fire.' is annotated by ('HOUSE', 'FIRE'). Glosses are more appropriate for annotating a sign because the same sign can be expressed by various natural language sentences or words with the same meaning. For this reason, we have annotated all signs with glosses with the help of Korean sign language experts.
To support communication with hearing-impaired people in such situations, the KETI dataset is used to develop an artificial intelligence-based sign language recognizer or translator. All videos are recorded in a blue screen studio to minimize any undesired influence and to make it possible to learn to recognize or translate the signs even with insufficient data.
Our Approach
We propose a sign recognition system based on human keypoints that are estimated by pre-existing libraries such as OpenPose [5,43,52]. Here we develop our system based on OpenPose, an open source toolkit for real-time multi-person keypoint detection. OpenPose can estimate 137 keypoints in total, where 25 keypoints are from the body pose, 21 keypoints are from each hand, and 70 keypoints are from the face. The primary reason for choosing OpenPose as a feature extractor for sign language recognition is that it is robust to many types of variations.
Human Keypoint Detection by OpenPose
First, our recognition system is robust to different cluttered backgrounds, as it only detects the human body. Second, a system based on human keypoint detection works well regardless of the signer, since the variance of the extracted keypoints is negligible. Moreover, we apply a vector standardization technique to further reduce the signer-dependent variance. Third, our system can benefit from future improvements in keypoint detection systems, which have great potential because of their versatility. For instance, a human keypoint detection system can be used for recognizing other human behaviors and actions given that the relevant dataset is available. Lastly, the use of high-level features is necessary when the scale of the dataset is not large enough. Sign language datasets are more difficult to collect than other datasets because many professional signers are needed to record high-quality sign language videos.
Feature Vector Normalization
There have been many successful attempts to employ various types of normalization methods in order to achieve stability and speed-up of the training process [1,21,48]. One of the main difficulties in sign language translation with a small dataset is the large visual variance, as the same sign can look very different depending on the signer. Even if we utilize a feature vector obtained by estimating the keypoints of the human body, the absolute positions of the keypoints and the scale of the body parts in the frame can vary considerably. For this reason, we apply a special normalization method, called object 2D normalization, that suits our purpose well.
After extracting the high-level human keypoints, we normalize the feature vector using the mean and standard deviation of the vector to reduce the variance of the data. Let us denote a 2D feature vector by $V = (v_1, v_2, \ldots, v_n) \in \mathbb{N}^{n \times 2}$ that consists of $n$ elements, where each element $v_i \in \mathbb{N}^2$, $1 \le i \le n$, stands for a single keypoint of a human part. Each element $v_i = (v_i^x, v_i^y)$ consists of two integers $v_i^x$ and $v_i^y$ that denote the x- and y-coordinates of the keypoint $v_i$ in the video frame, respectively. From the given feature vector $V$, we extract two feature vectors as follows: $V_x = (v_1^x, v_2^x, \ldots, v_n^x)$ and $V_y = (v_1^y, v_2^y, \ldots, v_n^y)$. Simply speaking, we collect the x- and y-coordinates of the keypoints separately while keeping their order. Then, we normalize the x-coordinate vector $V_x$ as follows:
$V_x^* = \frac{V_x - \bar{V}_x}{\sigma(V_x)}$, where $\bar{V}_x$ is the mean of $V_x$ and $\sigma(V_x)$ is the standard deviation of $V_x$.
Note that $V_y^*$ is computed analogously. Finally, it remains to concatenate the two normalized vectors to form the final feature vector $V^* = [V_x^*; V_y^*] \in \mathbb{R}^{2n}$, which is used as the input vector of our neural network.
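A minimal NumPy sketch of this object 2D normalization, assuming the keypoints are already grouped by part (body, each hand, face); the small epsilon is our own addition to avoid division by zero.

```python
import numpy as np

def normalize_2d(keypoints):
    """Normalize x- and y-coordinates separately (zero mean, unit std).

    keypoints: array of shape (n, 2) holding (x, y) pixel coordinates.
    Returns a flat vector [x_1*, ..., x_n*, y_1*, ..., y_n*] of length 2n.
    """
    x, y = keypoints[:, 0], keypoints[:, 1]
    x_star = (x - x.mean()) / (x.std() + 1e-8)
    y_star = (y - y.mean()) / (y.std() + 1e-8)
    return np.concatenate([x_star, y_star])

def object_2d_normalize(body, left_hand, right_hand, face):
    """Object 2D normalization: normalize each part separately, then concatenate."""
    return np.concatenate([normalize_2d(p) for p in (body, left_hand, right_hand, face)])

# Example with the keypoint counts used in the paper (12 + 21 + 21 + 70 = 124).
body, lh, rh, face = (np.random.rand(k, 2) * 1000 for k in (12, 21, 21, 70))
feature = object_2d_normalize(body, lh, rh, face)
print(feature.shape)  # (248,)
```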
It should be noted that we assume that the keypoints of the lower body parts are not necessary for sign language recognition. Therefore, we only use 124 of the 137 keypoints detected by OpenPose, since 13 keypoints of the human pose correspond to lower body parts such as the feet, knees, and pelvis, as can be seen in Figure 2. We randomly sample 10 to 50 keyframes from each sign video. Hence, the dimension of the input feature vector is 248 × |V|, where |V| ∈ {10, 20, 30, 40, 50}.
Frame Skip Sampling for Data Augmentation
The main difficulty of training neural networks with small datasets is that the trained models do not generalize well to data from the validation and test sets. As the size of our dataset is even smaller than in usual cases, we utilize random frame skip sampling, which is commonly used to process video data, e.g., for video classification [22], to augment the training data. The effectiveness of data augmentation has been proven in many tasks including image classification [40]. Here, we randomly extract multiple representative features from a video.
Given a sign video $S = (f_1, f_2, \ldots, f_l)$ that contains $l$ frames from $f_1$ to $f_l$, we randomly select a fixed number of frames, say $n$. We first compute the average length of the gaps between frames as follows:
$z = \frac{l}{n-1}$.
We then extract a sequence of frames with indices from the following sequence $Y = (y, y + z, y + 2z, \ldots, y + (n-1)z) \in \mathbb{N}^n$, where $y = \frac{l - z(n-1)}{2}$, and call it a baseline sequence. Then, we generate a random integer sequence $R = (r_1, r_2, \ldots, r_n) \in [1, z]^n$ and compute the sum of the random sequence and the baseline sequence. Note that the value of the last index is clipped to the range $[1, l]$. We start from the baseline sequence instead of choosing an arbitrary random sequence of $n$ frame indices to avoid generating random frame sequences that may miss the 'key' moments of signs.
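A possible implementation of this sampling scheme is sketched below; the integer flooring of the gap $z$ is our assumption, since the rounding is left unspecified above.

```python
import numpy as np

def random_frame_skip_sample(num_frames, n=50, rng=None):
    """Sample n (1-based) frame indices from a video with num_frames frames.

    A baseline sequence of evenly spaced indices is perturbed by random
    offsets in [1, z], then clipped to the valid range.
    """
    rng = rng or np.random.default_rng()
    z = max(1, num_frames // (n - 1))          # average gap, floored to an integer
    y = (num_frames - z * (n - 1)) // 2        # start of the baseline sequence
    baseline = y + z * np.arange(n)
    offsets = rng.integers(1, z + 1, size=n)   # random offsets in [1, z]
    return np.clip(baseline + offsets, 1, num_frames)

print(random_frame_skip_sample(200, n=50))
```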
Attention-based Encoder-Decoder Network
The encoder-decoder framework based on RNN architectures such as LSTMs or GRUs has gained popularity for neural machine translation [2,33,47,49], as it successfully replaces statistical machine translation methods.
Given an input sentence $x = (x_1, x_2, \ldots, x_{T_x})$, an encoder RNN plays its role as follows:
$h_t = \mathrm{RNN}(x_t, h_{t-1}),$
where $h_t \in \mathbb{R}^n$ is a hidden state at time $t$. After processing the whole input sentence, the encoder generates a fixed-size context vector that represents the sequence as follows:
$c = q(h_1, h_2, \ldots, h_{T_x}).$
For instance, the RNN is an LSTM cell and $q$ simply returns the last hidden state $h_{T_x}$ in one of the original sequence-to-sequence papers by Sutskever et al. [47]. Now suppose that $y = (y_1, y_2, \ldots, y_{T_y})$ is an output sentence that corresponds to the input sentence $x$ in the training set. Then, the decoder RNN is trained to predict the next word conditioned on all the previously predicted words and the context vector from the encoder RNN. In other words, the decoder computes the probability of the translation $y$ by decomposing the joint probability into the ordered conditional probabilities as follows:
$p(y) = \prod_{i=1}^{T_y} p(y_i \mid \{y_1, y_2, \ldots, y_{i-1}\}, c).$
Now our RNN decoder computes each conditional probability as follows:
$p(y_i \mid y_1, y_2, \ldots, y_{i-1}, c) = \mathrm{softmax}(g(s_i)),$
where $s_i$ is the hidden state of the decoder RNN at time $i$ and $g$ is a linear transformation that outputs a vocabulary-sized vector. Note that the hidden state $s_i$ is computed by
$s_i = \mathrm{RNN}(y_{i-1}, s_{i-1}, c),$
where $y_{i-1}$ is the previously predicted word, $s_{i-1}$ is the last hidden state of the decoder RNN, and $c$ is the context vector computed by the encoder RNN.
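A minimal, hypothetical PyTorch sketch of such a vanilla encoder-decoder (teacher forcing, with the context taken as the last encoder hidden state); module names and dimensions other than the 248-dimensional keypoint feature and 256 hidden units are placeholders rather than the paper's implementation.

```python
import torch
from torch import nn

class Seq2Seq(nn.Module):
    def __init__(self, feat_dim=248, hidden=256, vocab=2000):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab, hidden)
        self.decoder = nn.GRUCell(hidden, hidden)
        self.out = nn.Linear(hidden, vocab)      # g: hidden state -> vocabulary logits

    def forward(self, frames, targets):
        _, h = self.encoder(frames)              # context c = last encoder hidden state
        s = h.squeeze(0)                         # decoder state initialized with c
        logits = []
        for t in range(targets.size(1)):         # teacher forcing with ground-truth tokens
            s = self.decoder(self.embed(targets[:, t]), s)
            logits.append(self.out(s))
        return torch.stack(logits, dim=1)

model = Seq2Seq()
out = model(torch.randn(4, 50, 248), torch.randint(0, 2000, (4, 12)))
print(out.shape)  # (4, 12, 2000)
```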
Bahdanau attention. Bahdanau et al. [2] conjectured that the fixed-length context vector c is a bottleneck in improving the performance of the translation model and proposed to compute the context vector by automatically searching for relevant parts from the hidden states of encoder. Indeed, this 'attention' mechanism has proven really useful in various tasks including but not limited to machine translation. They proposed a new model that defines each conditional probability at time i depending on a dynamically computed context vector c i as follows:
$p(y_i \mid y_1, y_2, \ldots, y_{i-1}, x) = \mathrm{softmax}(g(s_i)),$
where $s_i$ is the hidden state of the decoder RNN at time $i$, which is computed by
$s_i = \mathrm{RNN}(y_{i-1}, s_{i-1}, c_i).$
The context vector $c_i$ is computed as a weighted sum of the hidden states from the encoder:
$c_i = \sum_{j=1}^{T_x} \alpha_{ij} h_j, \quad \text{where} \quad \alpha_{ij} = \frac{\exp(\mathrm{score}(s_{i-1}, h_j))}{\sum_{k=1}^{T_x} \exp(\mathrm{score}(s_{i-1}, h_k))}.$
Here the function 'score' is called an alignment function that computes how well the two hidden states from the encoder and the decoder, respectively, match. For example, $\mathrm{score}(s_i, h_j)$, where $s_i$ is the hidden state of the decoder at time $i$ and $h_j$ is the hidden state of the encoder at time $j$, expresses how strongly the part of the output sentence around position $i$ is aligned with the part of the input sentence around position $j$.
Luong attention. Later, Luong et al. [33] examined a novel attention mechanism which is very similar to the attention mechanism by Bahdanau et al. but differs in some details. First, only the hidden states of the top RNN layers in both the encoder and decoder are used, instead of using the concatenation of the forward and backward hidden states of the bi-directional encoder and the hidden states of the uni-directional non-stacking decoder. Second, the computation path is simplified by computing the attention matrix after computing the hidden state of the decoder at the current time step. They also proposed the following three scoring functions to compute the degree of alignment between the hidden states:
$\mathrm{score}(h_t, h_s) = \begin{cases} h_t^{\top} h_s & \text{(Dot)} \\ h_t^{\top} W h_s & \text{(General)} \\ V^{\top} \tanh(W[h_t; h_s]) & \text{(Concat.)} \end{cases}$
where $V$ and $W$ are learned weights. Note that the third form, based on concatenation, was originally proposed by Bahdanau et al. [2]. We also compare against the Transformer [49]. As the Transformer uses neither recurrence nor convolution, the model requires some information about the order of the sequence. To cope with this problem, the Transformer uses positional encoding, which encodes the relative or absolute position of the words in the sequence using sine and cosine functions.
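The three Luong scoring functions can be sketched as follows (a toy illustration, not the paper's code; `W` and `V` stand for the learned weights, and only the dot-product variant is exercised in the usage example).

```python
import torch

def luong_score(h_t, h_s, W=None, V=None, mode="dot"):
    """Alignment scores between decoder state h_t (batch, d)
    and encoder states h_s (batch, T, d)."""
    if mode == "dot":
        return torch.bmm(h_s, h_t.unsqueeze(2)).squeeze(2)            # h_t^T h_s
    if mode == "general":
        return torch.bmm(h_s, (h_t @ W).unsqueeze(2)).squeeze(2)      # h_t^T W h_s
    if mode == "concat":
        h_t_exp = h_t.unsqueeze(1).expand_as(h_s)
        return torch.tanh(torch.cat([h_t_exp, h_s], dim=2) @ W) @ V   # V^T tanh(W[h_t; h_s])
    raise ValueError(mode)

batch, T, d = 2, 7, 256
h_t, h_s = torch.randn(batch, d), torch.randn(batch, T, d)
scores = luong_score(h_t, h_s, mode="dot")
weights = torch.softmax(scores, dim=1)            # attention weights over source steps
context = torch.bmm(weights.unsqueeze(1), h_s)    # weighted sum of encoder states
print(context.shape)  # (2, 1, 256)
```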
Experimental Results
We implemented our networks using PyTorch [39], which is an open source machine learning library for Python. The Adam optimizer [24] was used to train the network weights and biases for 50 epochs with an initial learning rate of 0.001. During training, we decayed the learning rate by a factor of 0.5 every 20 epochs using an exponential decay scheduler. We also used dropout regularization with a probability of 0.8 and gradient clipping with a threshold of 5. Note that the dropout rate is necessarily high because the size and variation of the dataset are small compared to other datasets specialized for deep learning. For the sequence-to-sequence models, including the vanilla seq2seq model and the two attention-based models, the dimension of the hidden states is 256. For the Transformer model, we use a dimension for input and output (d_model in [49]) of 256. The other hyper-parameters of the Transformer are the same as in the original model, including the scheduled Adam optimizer in its own setting. Moreover, the batch size is 128, the augmentation factor is 100, the number of chosen frames is 50, and the object 2D normalization is used unless otherwise specified.
As our dataset is annotated in Korean, which is an agglutinative language, morphological analysis of the annotated sentences is necessary: the size of the dictionary can become arbitrarily large if we split sentences into words simply by white-spaces in such languages. For this reason, we used the Kkma part-of-speech (POS) tagger in the KoNLPy package, a Python package developed for natural language processing of the Korean language, to tokenize the sentences at the POS level [37].
In order to evaluate the performance of our translation model, we primarily report 'accuracy', the ratio of correctly translated words and sentences. In addition, we use four metrics that are commonly used for measuring the performance of machine translation models: BLEU [36], ROUGE-L [30], METEOR [3], and CIDEr [50] scores.
Sentence-level vs Gloss-level training. As in [7], we conduct an experiment to compare the translation performance depending on the type of annotation. Because each sign corresponds to a unique sequence of glosses while it corresponds to multiple natural language sentences, it is easy to predict that gloss-level translation shows better performance. Indeed, we can confirm this anticipation from the summary of results provided in Table 10.
This also points to future work on translating sequences of glosses into natural language sentences. We expect that sign language translation can become more feasible by separating the task of annotating sign videos with natural language sentences into two sub-tasks: annotating sign videos with glosses, and annotating each sequence of glosses with natural language sentences.
Effect of feature normalization methods. In order to evaluate the effect of the feature normalization method on the keypoints estimated by OpenPose, we compare the following five cases: 1) no normalization, 2) feature normalization, 3) object normalization, 4) 2-dimensional (2D) normalization, and 5) object 2D normalization. In the first case, we do not perform any normalization step on the keypoint feature generated by concatenating the coordinate values of all keypoints. In the feature normalization, we create a keypoint feature as in 1) and normalize the feature with the mean and standard deviation of the whole feature. In the object normalization, we normalize the keypoint features obtained from the two hands, body, and face separately, and concatenate them to generate a feature that represents the frame. We also consider 2D normalization, in which we normalize the x- and y-coordinates separately. Lastly, the object 2D normalization is the normalization method that we propose in this paper. Table 4 summarizes the results of our experiments. The table does not contain results for the case without any normalization because, when we train our neural network with the keypoint feature vector obtained by simply concatenating the x and y coordinates of the keypoints without any normalization, the validation loss never decreases. The results show that the proposed object 2D normalization method is superior to the other normalization methods we considered. While any kind of normalization seems to help, it is quite interesting to see that there is an additional boost in translation performance when the object-wise normalization and the 2D normalization are used together.
Effect of augmentation factor. We examine the effect of data augmentation by random frame skip sampling and summarize the experimental results in Table 5. We call the number of training samples randomly sampled from a single sign video the augmentation factor. It should be noted that we do not include the result when we do not augment data by random frame sampling because the validation loss does not decrease at all due to overfitting. The result shows that the optimal augmentation factor is indeed 50. Considering the fact that the average number of frames in a sign video is larger than 200, the average length of gaps between frames is larger than 4. Then, there are $4^{50}$ possible random sequences on average, and consequently the probability of drawing exactly the same training sample twice is very low. However, the result implies that increasing the augmentation factor stops helping at some point.
Effect of attention mechanisms. Here we compare four types of encoder-decoder architectures that are specialized in various machine translation tasks. Table 2 demonstrates the clear contrast between the attention-based model by Luong et al. [33] and the Transformer [49]. While the model of Luong et al. shows better performance than the Transformer on the validation set that contains more similar data to the training set, the Transformer generalizes much better to the test set which consists of sign videos of an independent signer.
Effect of the number of sampled frames. It is useful to know the optimal number of frames if we plan to develop a real-time sign language translation system, because we can reduce the computational cost of the inference engine by efficiently skipping unnecessary frames. Table 6 shows how the number of sampled frames affects the translation performance. As the sequence-to-sequence model works for variable-length input sequences, we do not necessarily have to fix the number of sampled frames. However, the translation performance of sequence-to-sequence models tends to decline for long inputs due to the vanishing gradient problem [38], so knowing the optimal number of frames still matters.
Interestingly, our experimental result shows that the optimal number of frames for the best translation performance is 30 for the validation set and 50 for the test set.
Effect of batch size. Recently, it has become increasingly accepted that training with small batches often generalizes better to the test set than training with large batches [18,45]. However, our experimental results provided in Table 7 show the opposite phenomenon. We suspect that this is due to the scale of the original dataset, because large batches are known to be useful for preventing overfitting to some extent.
Ablation study
We also study the effect of using keypoint information from the two hands, body, and face. The experimental results summarized in Table 8 imply that the keypoint information from both hands is the most important among the keypoint information from the hands, face, and body. Interestingly, the experimental results tell us that the keypoint information from the face does not help to improve the performance in general. The performance even drops when we add face keypoints in all cases. We suspect that the reason is partly the imbalanced number of keypoints from different parts. Recall that the number of keypoints from the face is 70, which is much larger than the number of keypoints from the other parts.
While the keypoints from both hands are definitely the most important features for understanding signs, it is worth noting that the 12 keypoints from the body also boost the performance. In fact, we lose the information about the relative positions of the parts with respect to each other when we normalize the coordinates of each part separately. For instance, there is no way to infer the relative positions of the two hands from the normalized feature vectors of both hands alone. However, it is possible to recover this relative position from the body keypoints, as the body also contains keypoints corresponding to the hands.
Conclusions
In this work, we have introduced a new sign language dataset which is manually annotated with Korean spoken language sentences and proposed a neural sign language translation model based on sequence-to-sequence translation models. It is well-known that the lack of large sign language datasets significantly hinders the full utilization of neural network based algorithms, which have already proven very useful in many other tasks, for the task of sign language translation. Moreover, it is very challenging to collect a sufficient amount of sign language data, as we need help from sign language experts.
For this reason, we claim that it is inevitable to extract high-level features of sufficiently low dimension from sign language videos. We are able to successfully train a novel sign language translation system based on the human keypoints estimated by OpenPose, a widely used open source project developed by Hidalgo et al.
In the future, we aim at improving our sign language translation system by exploiting various data augmentation techniques using the spatial properties of videos. It is also important to expand the KETI sign language dataset to sufficiently larger scale by recording videos of more signers in different environments.
Supplemental Material
Keypoint information used in sign language translation. We use the estimated coordinates of 124 keypoints of a signer to understand the sign language of the signer, where 12 keypoints are from the human body, 21 keypoints are from each hand, and 70 keypoints are from the face. See Figure 3 for an example. Note that the number of keypoints from the human body is 25, but we select the 12 keypoints that correspond to upper body parts. The chosen indices and the names of the parts are as follows:
• 0 (nose),
• 1 (neck),
• 2 (right shoulder),
• 3 (right elbow),
• 4 (right wrist),
In the future, we plan to plug in an additional attention module to learn which keypoints contribute more to understanding the sign video.
Figure 3. The human keypoints used for sign language recognition. Note that the figures are borrowed from the public web page of the OpenPose project [5,43,52].
Comparison with CNN-based approaches. In Table 9, we compare our approach to the classical methods based on CNN features extracted from well-known architectures such as ResNet [16] and VGGNet [44].
Since the size of the sign video frames (1,920 × 1,080) differs from the input size of the CNN models (224 × 224), we first crop the central 1,080 × 1,080 area of each frame and then resize it to 224 × 224.
The experimental results show that ResNet-101 exhibits the best translation performance on the validation set and the VGGNet-19 demonstrates the best performance on the test set. In general, the performance difference on the validation set is not large but it is apparent that the VGGNet models are much better in generalizing to the test set compared to the ResNet models.
As expected, the translation models using the CNN-extracted features show significantly worse translation performance than the models using the human keypoint features. It is still interesting to ask whether the combination of CNN-based features and human keypoint features works better than relying solely on the human keypoint features. As the size of the sign language dataset grows, we expect the CNN-based models to improve their performance and generalize much better.
Table 9. Performance comparison with translation models based on CNN-based feature extraction techniques. Note that the augmentation factor is set to 50 in all experiments here.
Attention maps. In Figure 4, we depict attention maps of the sentence-level translation model for several successful and unsuccessful cases. We can see that, in the successful case, the attention weights are better distributed over the important frames of the video when generating the natural language sentence, compared to the failure case. However, the order of the attentions is quite irregular in Figure 4, as there is no direct mapping between sign video frames and tokens of the output sentence.
We also show the attention maps of the gloss-level translation model in Figure 5. In the attention map of the successful case, we can see that the order of the attentions is more regular than in the successful sentence-level case. This is because there is a clearer mapping between the continuous frames in the video and the sign glosses in gloss-level translation.
Sign language annotation. We annotate each sign video with five different natural language sentences in Korean. Table 10 contains ten examples out of 105 examples in total.
Moreover, we annotate a sign video with a unique sign gloss as presented in Table 10.
Table 10. Ten examples of our sign language annotations. We annotate each sign with five natural language sentences in Korean and a unique sign gloss. We only provide two sentences in the table due to space limitations.
| 4,918 |
1906.11768
|
2955598033
|
In many machine learning applications, it is necessary to meaningfully aggregate, through alignment, different but related datasets. Optimal transport (OT)-based approaches pose alignment as a divergence minimization problem: the aim is to transform a source dataset to match a target dataset using the Wasserstein distance as a divergence measure. We introduce a hierarchical formulation of OT which leverages clustered structure in data to improve alignment in noisy, ambiguous, or multimodal settings. To solve this numerically, we propose a distributed ADMM algorithm that also exploits the Sinkhorn distance, thus it has an efficient computational complexity that scales quadratically with the size of the largest cluster. When the transformation between two datasets is unitary, we provide performance guarantees that describe when and how well aligned cluster correspondences can be recovered with our formulation, as well as provide worst-case dataset geometry for such a strategy. We apply this method to synthetic datasets that model data as mixtures of low-rank Gaussians and study the impact that different geometric properties of the data have on alignment. Next, we applied our approach to a neural decoding application where the goal is to predict movement directions and instantaneous velocities from populations of neurons in the macaque primary motor cortex. Our results demonstrate that when clustered structure exists in datasets, and is consistent across trials or time points, a hierarchical alignment strategy that leverages such structure can provide significant improvements in cross-domain alignment.
|
Various probability divergences have been proposed in the literature, such as Euclidean least-squares (when data ordering is known) @cite_33 @cite_24 @cite_17 , Kullback-Leibler (KL) @cite_16 , maximum mean discrepancy (MMD) @cite_40 @cite_20 @cite_14 @cite_7 , and the Wasserstein distance @cite_45 , where the trade-offs are often statistical (e.g., consistency, sample complexity) versus computational. Alignment problems are ill-posed since the space of @math is large, so structure is often necessary to constrain @math based on geometric assumptions. Compact manifolds like the Grassmann or Stiefel manifolds @cite_15 @cite_18 are primary choices when little information is present, as they preserve isometry. Non-isometric transformations, though richer, demand much more structure (e.g., manifold or graph structure) @cite_38 @cite_31 @cite_19 @cite_21 @cite_45 .
|
{
"abstract": [
"In this paper we introduce a novel approach to manifold alignment, based on Procrustes analysis. Our approach differs from \"semi-supervised alignment\" in that it results in a mapping that is defined everywhere - when used with a suitable dimensionality reduction method - rather than just on the training data points. We describe and evaluate our approach both theoretically and experimentally, providing results showing useful knowledge transfer from one domain to another. Novel applications of our method including cross-lingual information retrieval and transfer learning in Markov decision processes are presented.",
"",
"",
"",
"Labeled examples are often expensive and time-consuming to obtain. One practically important problem is: can the labeled data from other related sources help predict the target task, even if they have (a) different feature spaces (e.g., image vs. text data), (b) different data distributions, and (c) different output spaces? This paper proposes a solution and discusses the conditions where this is possible and highly likely to produce better results. It works by first using spectral embedding to unify the different feature spaces of the target and source data sets, even when they have completely different feature spaces. The principle is to cast into an optimization objective that preserves the original structure of the data, while at the same time, maximizes the similarity between the two. Second, a judicious sample selection strategy is applied to select only those related source examples. At last, a Bayesian-based approach is applied to model the relationship between different output spaces. The three steps can bridge related heterogeneous sources in order to learn the target task. Among the 12 experiment data sets, for example, the images with wavelet-transformed-based features are used to predict another set of images whose features are constructed from color-histogram space. By using these extracted examples from heterogeneous sources, the models can reduce the error rate by as much as 50 , compared with the methods using only the examples from the target task.",
"",
"",
"",
"",
"Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean miscrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.",
"",
"Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.",
"A situation where training and test samples follow different input distributions is called covariate shift. Under covariate shift, standard learning methods such as maximum likelihood estimation are no longer consistent—weighted variants according to the ratio of test and training input densities are consistent. Therefore, accurately estimating the density ratio, called the importance, is one of the key issues in covariate shift adaptation. A naive approach to this task is to first estimate training and test input densities separately and then estimate the importance by taking the ratio of the estimated densities. However, this naive approach tends to perform poorly since density estimation is a hard task particularly in high dimensional cases. In this paper, we propose a direct importance estimation method that does not involve density estimation. Our method is equipped with a natural cross validation procedure and hence tuning parameters such as the kernel width can be objectively optimized. Simulations illustrate the usefulness of our approach.",
"",
""
],
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_14",
"@cite_31",
"@cite_33",
"@cite_7",
"@cite_21",
"@cite_24",
"@cite_19",
"@cite_40",
"@cite_45",
"@cite_15",
"@cite_16",
"@cite_20",
"@cite_17"
],
"mid": [
"2123261262",
"",
"",
"",
"2124961556",
"",
"",
"",
"",
"2115403315",
"",
"2128053425",
"2103851188",
"",
""
]
}
| 0 |
||
1906.11768
|
2955598033
|
In many machine learning applications, it is necessary to meaningfully aggregate, through alignment, different but related datasets. Optimal transport (OT)-based approaches pose alignment as a divergence minimization problem: the aim is to transform a source dataset to match a target dataset using the Wasserstein distance as a divergence measure. We introduce a hierarchical formulation of OT which leverages clustered structure in data to improve alignment in noisy, ambiguous, or multimodal settings. To solve this numerically, we propose a distributed ADMM algorithm that also exploits the Sinkhorn distance, thus it has an efficient computational complexity that scales quadratically with the size of the largest cluster. When the transformation between two datasets is unitary, we provide performance guarantees that describe when and how well aligned cluster correspondences can be recovered with our formulation, as well as provide worst-case dataset geometry for such a strategy. We apply this method to synthetic datasets that model data as mixtures of low-rank Gaussians and study the impact that different geometric properties of the data have on alignment. Next, we applied our approach to a neural decoding application where the goal is to predict movement directions and instantaneous velocities from populations of neurons in the macaque primary motor cortex. Our results demonstrate that when clustered structure exists in datasets, and is consistent across trials or time points, a hierarchical alignment strategy that leverages such structure can provide significant improvements in cross-domain alignment.
|
Principal components analysis (PCA), one of the most popular methods in data science, assumes a model where the top- @math principal components of a dataset provide the optimal rank- @math approximation under a Euclidean loss. This has been extended to robust (sparse-error) settings @cite_13 , and to multi- (union of) subspace settings where the data can be partitioned into disjoint subsets, each of which is locally low-rank @cite_39 . Transfer learning methods based on subspace alignment @cite_43 @cite_10 @cite_26 work well with zero-mean unimodal datasets, but struggle on more complicated modalities (e.g., Gaussian mixtures or unions of subspaces) due to a mixing of covariances. Related to our work, @cite_5 performs multi-subspace alignment by greedily assigning correspondences between subspaces using chordal distances; this, however, neglects sign ambiguities in principal directions, since subspaces inadequately describe a distribution's shape.
|
{
"abstract": [
"Many real-world problems deal with collections of high-dimensional data, such as images, videos, text, and web documents, DNA microarray data, and more. Often, such high-dimensional data lie close to low-dimensional structures corresponding to several classes or categories to which the data belong. In this paper, we propose and study an algorithm, called sparse subspace clustering, to cluster data points that lie in a union of low-dimensional subspaces. The key idea is that, among the infinitely many possible representations of a data point in terms of other points, a sparse representation corresponds to selecting a few points from the same subspace. This motivates solving a sparse optimization program whose solution is used in a spectral clustering framework to infer the clustering of the data into subspaces. Since solving the sparse optimization program is in general NP-hard, we consider a convex relaxation and show that, under appropriate conditions on the arrangement of the subspaces and the distribution of the data, the proposed minimization program succeeds in recovering the desired sparse representations. The proposed algorithm is efficient and can handle data points near the intersections of subspaces. Another key advantage of the proposed algorithm with respect to the state of the art is that it can deal directly with data nuisances, such as noise, sparse outlying entries, and missing entries, by incorporating the model of the data into the sparse optimization program. We demonstrate the effectiveness of the proposed algorithm through experiments on synthetic data as well as the two real-world problems of motion segmentation and face clustering.",
"",
"Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which x lies in a union of subspaces. In this paper, we develop a general framework for robust and efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which x lies in a sum of k subspaces, chosen from a larger set of m possibilities. The samples are modeled as inner products with an arbitrary set of sampling functions. To derive an efficient and robust recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector whose nonzero elements appear in fixed blocks. We then propose a mixed lscr2 lscr1 program for block sparse recovery. Our main result is an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal. This result relies on the notion of block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on RIP, we also prove stability of our approach in the presence of noise and modeling errors. A special case of our framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. Adapting our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently.",
"In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyper parameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods.",
"We present a novel unsupervised domain adaptation (DA) method for cross-domain visual recognition. Though subspace methods have found success in DA, their performance is often limited due to the assumption of approximating an entire dataset using a single low-dimensional subspace. Instead, we develop a method to effectively represent the source and target datasets via a collection of low-dimensional subspaces, and subsequently align them by exploiting the natural geometry of the space of subspaces, on the Grassmann manifold. We demonstrate the effectiveness of this approach, using empirical studies on two widely used benchmarks, with state of the art domain adaptation performance",
""
],
"cite_N": [
"@cite_13",
"@cite_26",
"@cite_39",
"@cite_43",
"@cite_5",
"@cite_10"
],
"mid": [
"1993962865",
"",
"2147276092",
"2104068492",
"2949100517",
""
]
}
| 0 |
||
1811.10865
|
2963086661
|
Astronomy is well recognized as a big-data-driven science. As novel observation infrastructures are developed, sky survey cycles have been shortened from a few days to a few seconds, causing data processing pressure to shift from offline to online. However, existing scientific databases focus on offline analysis of long-term historical data, not real-time, low-latency analysis of large-scale newly arriving data. In this paper, a cloud-based method is proposed to efficiently analyze scientific events on large-scale newly arriving data. The solution is implemented as a highly efficient system, namely Aserv. A set of compact data store and index structures is proposed to describe the scientific events, and a typical analysis pattern is formulated as a set of query operations. A domain-aware filter, accuracy-aware data partitioning, a highly efficient index, and frequently used statistical data are the four key methods used to optimize the performance of Aserv. Experimental results in a typical cloud environment show that the presented optimization mechanisms can meet the low-latency demands of both large-scale data insertion and scientific event analysis. Aserv can insert 3.5 million rows of data within 3 seconds and perform the heaviest query on 6.7 billion rows of data, also within 3 seconds. Furthermore, a performance model is given to help Aserv choose the right cloud resource setup to meet a guaranteed real-time performance requirement.
|
Real-time databases have been studied since the 1980s, and their key goal is to enable as many real-time transactions as possible to meet their respective time constraints @cite_19 . Real-time databases are more concerned with timeliness than with raw system speed @cite_33 , based on the hypothesis that catastrophic consequences do not occur in the real world as long as a transaction finishes within its deadline. Hence, much of the work focuses on scheduling @cite_19 @cite_4 and transaction processing @cite_10 @cite_28 . Storing and processing all the data under a periodic time constraint, which avoids data loss and ensures temporal data consistency, may be enough for traditional real-time databases, but the transient nature of scientific events requires the online query latency to be as low as possible; only then can we exploit the value of scientific data. The periodic survey cycle and the unpredictability of scientific events pose new challenges for online big scientific data analysis.
|
{
"abstract": [
"In a real-time database system, an application may assign a value to a transaction to reflect the return it expects to receive if the transaction commits before its deadline. Most research on real-time database systems has focused on systems where all transactions are assigned the same value, the performance goal being to minimize the number of missed deadlines. When transactions are assigned different values, the goal of the system shifts to maximizing the sum of the values of those transactions that commit by their deadlines. Minimizing the number of missed deadlines becomes a secondary concern. In this article, we address the problem of establishing a priority ordering among transactions characterized by both values and deadlines that results in maximizing the realized value. Of particular interest is the tradeoff established between these values and deadlines in constructing the priority ordering. Using a detailed simulation model, we evaluate the performance of several priority mappings that make this tradeoff in different, but fixed, ways. In addition, a \"bucket\" priority mechanism that allows the relative importance of values and deadlines to be controlled is introduced and studied. The notion of associating a penalty with transactions whose deadlines are not met is also briefly considered.",
"",
"In data-intensive real-time embedded applications, it is desirable to process data service requests in a timely manner using fresh data, consuming less power. However, related work is relatively scarce. In this paper, we present an effective approach to decrease both the deadline miss ratio and power consumption by merging similar real-time transactions, while systematically adapting the data freshness. In a simulation study, our approach considerably reduces deadline misses and power consumptions compared to the state-of-the-art baselines, supporting the required data freshness.",
"Scheduling transactions with real-time requirements presents many new problems. In this paper we discuss solutions for two of these problems: what is a reasonable method for modeling real-time constraints for database transactions? Traditional hard real-time constraints (e.g., deadlines) may be too limited. May transactions have soft deadlines and a more flexible model is needed to capture these soft time constraints. The second problem we address is scheduling. Time constraints add a new dimension to concurrency control. Not only must a schedule be serializable but it also should meet the time constraints of all the transactions in the schedule.",
"A real-time database system (RTDBS) is a database system designed to handle workloads whose state is constantly changing. This system differs from traditional databases containing persistent data, mostly unaffected by time. A real-time database is a database in which transactions have deadlines or timing constraints. Real-time databases are commonly used in real-time computing applications that require timely access to data. In this article, we will discuss the most important concepts of real-time database systems. Keywords: real-time database system; transaction processing; concurrency control"
],
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_28",
"@cite_19",
"@cite_10"
],
"mid": [
"2116241626",
"49498096",
"2568868599",
"2007225136",
"1595878682"
]
}
|
Cloud based Real-Time and Low Latency Scientific Event Analysis
|
In astronomy, short astronomical phenomena often mean major scientific discoveries. Up to now, only 10 astronomical phenomena lasting less than 1 day have been found. Existing astronomy projects cannot effectively search for optical transient sources that last only a few hours, due to their long sky survey cycles. For example, the survey cycles of both SDSS [1] and LSST [2] are 3-5 days. To search for short and unknown astronomical phenomena, fast sky survey projects have become a new trend. For example, GWAC (Ground-based Wide Angle Camera) [3], which has the shortest sky survey cycle in the world, continuously observes a fixed 1/4 of the Northern Hemisphere every 15 seconds. These new instruments lead to a new kind of big scientific data and different analysis needs.
The new survey data provides scientists with a completely new way to achieve scientific discovery. Scientists often need to launch an analytical query on newly arriving data to confirm a scientific event and issue an alert as soon as possible. This requirement is fundamental because short astronomical phenomena, such as microlensing, are transient and hard to reproduce. The analysis therefore tends toward online analysis, whose methods are quite different from offline analysis. We extract three typical analysis methods: probing, listing and stretching. They formalize the analysis behavior of scientists on newly arriving data, from a general view down to a deep insight.
To support the desired analysis methods on the cloud, the data analysis system behind these instruments requires both real-time and low-latency capability, as shown in Figure 1. As the survey data is periodically collected from scientific objects, the data analysis system must perform real-time processing to guarantee temporal data consistency [4] and avoid data loss. Since the scientific events discussed here are unpredictable and transient, low-latency data analysis is necessary for scientists to identify the events as early as possible. In addition, the expense of large scientific projects often exceeds the budget, because the projects last a long time and many unpredicted difficulties arise. A suitable cloud resource setup is therefore necessary to reduce the fixed expense of the data analysis system.
To recap, under the premise of low resource overhead, the data insertion and query times must be less than the survey cycle, and the lower they are, the better the performance. For the data insertion operation, the challenges mainly involve: (1) the cost of distributed processing, (2) the data size and (3) the latency trade-off between insertion and query. Large-scale data insertion takes up considerable network and storage resources. When used to reduce the data size, compression incurs extra computational cost and sampling may lose key data, which is often unacceptable to scientists. If we simply insert the data collected in a survey cycle as a catalog file, the query latency on unstructured data will be very high. To enable low-latency queries, an index on scientific events is necessary; however, the high cost of index updates during insertion often prevents low-latency queries.
Based on the above, we propose Aserv, a lightweight system for real-time and low-latency scientific event analysis. The key idea for improving performance is to cut down unnecessary cost as much as possible. Without losing the availability of our system, three policies are developed to improve the overall performance: (1) removing irrelevant data; (2) adjusting the query accuracy to an acceptable range instead of keeping it as high as possible; and (3) eliminating overly expensive operations. Furthermore, we develop a performance model to help Aserv determine the resource setup needed to meet the performance constraints.
A set of carefully designed optimization methods is employed in the two major components of Aserv: (1) the data insertion part is a real-time processing pipeline that ingests scientific data and loads it into a key-value store, and (2) the query engine supports low-latency scientific event analysis. The insertion component includes three major modules: filter, data organization and pre-analysis. We select only highly relevant information from the original data stream to achieve a significant data reduction, which saves both computation and storage cost. The data organization module physically partitions scientific data into different sections, so Aserv can greatly reduce the network requirement and improve the insertion performance by ingesting partitioned data instead of independent data tuples. Correspondingly, the query engine in Aserv implements an accuracy-aware search strategy to improve query performance. In addition, Aserv builds a highly effective index in the data organization module and produces statistical data for scientific events in the pre-analysis module. Both avoid access to the original data and have insertion-friendly structures.
We evaluate Aserv in a real astronomical project [3]. Experimental results show that Aserv is practical and effective. In summary, the major contributions are as follows:
• We propose the real-time and low-latency analysis problem in fast sky surveys and formalize it as three typical query operations.
• We develop a cloud-based distributed system, Aserv, for real-time and low-latency scientific event analysis. Aserv employs several efficient policies to improve the system performance, including a filter strategy DAfilter, a partitioner EPgrid, the SEPI index and an approach PCAG to generate frequently used statistical data.
• We present a performance model for Aserv. It can help Aserv meet performance constraints in the cloud scenario.
The rest of the paper is organized as follows. The problem description is in Section II. The Aserv framework is in Section III. The performance model is in Section IV. Our experimental results are in Section V. The summary is in Section VII.
II. SCIENTIFIC EVENT ANALYSIS
In this paper, we focus on the fast sky survey, which has become very popular in astronomy recently. Here, the observation instruments consist of multiple observation units, which continuously observe a fixed region per survey cycle ct. We assume that the fixed region has area es in Euclidean 2-space and each unit deals with a square sub-area s in es. This hypothesis makes sense in many cases. For example, observation instruments collect data by taking images, so Euclidean distance is usually used to distinguish objects in astronomy [5]. Note that the observation units in our assumption are logical. In the extreme case, an instrument may have only one physical observation unit, but we can still partition the observed area to simulate multiple logical observation units; thus, this does not affect our assumption. Data model. During an observation (i.e., one night), we assume that the instruments can observe n objects. When there are enough objects in the observation region, the resolution limit of the observation units causes the number of collected objects to be almost constant, such as ~175,600 objects for GWAC [3]. We assume that the observed objects are well distributed in space. For each cycle, the data collected by each observation unit is organized as a catalog file of the same size. Data tuples in catalogs are of the form <oid, x, y, t, d_1, ..., d_m>, where x and y represent locations in es, oid and t are the object name and timestamp, respectively, and the rest are data items. Tuples of different objects in the same catalog have the same timestamp. Along the time dimension, the tuples of the same object in different catalogs form a large amount of time series data.
Scientific event. In addition, each observation unit contains an event detector that can recognize, from each catalog, objects that may be subject to scientific phenomena, and emits a scientific event set Eset containing the candidate object oids. We define a scientific event as <eid, oid, stime, etime>, where eid, stime and etime are the scientific event ID, event start time and event end time, respectively. This definition allows an object to exhibit multiple scientific phenomena during the observation. A minimal sketch of these records follows.
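To make the data model above concrete, the following sketch mirrors the catalog tuple and scientific event definitions; the class and field names are illustrative only and are not part of Aserv.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CatalogTuple:
    oid: str         # object name
    x: float         # location in the observed Euclidean 2-space
    y: float
    t: int           # timestamp (survey cycle) of this tuple
    d: List[float]   # the m data items d_1, ..., d_m

@dataclass
class ScientificEvent:
    eid: str    # event ID, e.g. the concatenation oid|stime
    oid: str    # object exhibiting the phenomenon
    stime: int  # event start time
    etime: int  # event end time
```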
For a given query, Aserv must handle two basic operators: region(x, y, r) and timeinterval(t_s, t_e). The space constraint operator region(x, y, r) searches scientific events within a circle, where x and y are the center location and r is the radius; it is suitable for neighborhood search. In some cases the search region may be a rectangle, but the circular region is more commonly used and more complex to implement, so we do not discuss the rectangular region. Timeinterval(t_s, t_e) is a time interval between t_s and t_e.
Fig. 3. Aserv (left) includes the insertion component, the query engine and the key-value store. The insertion component (right) follows the "master/slave" mode. The master registers and monitors the workers, and each worker node serves one observation unit. Workers ingest the catalog file and Eset from the pre-processing module and the event detector and load them into the key-value store. The query engine supports the three typical analysis methods cited above.
For real-time scientific discovery, we propose three analysis methods for scientific events. An example is shown in Figure 2, where oid_1, oid_2 and oid_3 are observed by the same observation unit from t_1 to t_10. These methods formalize the analysis behavior of scientists.
Probing analysis. It mainly returns overview information that helps scientists get a quick view of scientific events, such as aggregation results. This analytical method is useful for scientific event alerts. In this paper, we focus on the scientific event count, which answers how many scientific events occur in timeinterval(t_s, t_e), because scientists are primarily interested in the occurrence of scientific events. For example, it returns 0 in timeinterval(t_1, t_2), meaning no scientific events, and 3 in timeinterval(t_4, t_7). As the most frequent query, it must have a low latency.
Listing analysis. It returns the complete information of scientific events. When scientists receive an alert from probing analysis, they can use listing analysis to retrieve the complete time series of the scientific events in timeinterval(t_s, t_e). We use an interval query [6] to implement listing analysis. Assuming that the data items in a data set are time-evolving, the interval query returns all items that appear within the given time interval. For example, it returns the time series corresponding to eid_1, eid_3 and eid_4 in timeinterval(t_4, t_7), i.e., for scientific event eid_1 it returns the time series of oid_1 between t_3 and t_5, etc. Even when the duration of a scientific event only partially intersects the given time interval, we still return the complete time series, because scientists are always more concerned with the complete evolution of a scientific phenomenon.
Stretching analysis. It mainly returns extended information about a scientific event. Given a scientific event found by listing analysis, scientists might be interested in its surroundings, such as a larger range of time or space; it is therefore a complement to listing analysis. We employ a temporal range query to implement the time stretch, because through this query scientists can gain a deeper insight into what happens before and after a scientific event. For a scientific event, this query returns the time series range in timeinterval(stime − ∆t_1, etime + ∆t_2). As an example, for eid_4 this query returns the time series range of oid_3 between t_4 and t_7 using ∆t_1 = ∆t_2 = 1.
Probing analysis and listing analysis can also run with a space restriction. For example, scientists can perform listing analysis with both region(x, y, r) and timeinterval(t_s, t_e). A naive reference implementation of these methods is sketched below.
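The following sketch pins down the semantics of the three analysis methods with an in-memory reference implementation; Aserv answers the same questions from the key-value store, and all names here are hypothetical.

```python
import math

def intersects(ev, ts, te):
    # an event matters if its duration [stime, etime] overlaps [ts, te]
    return ev["stime"] <= te and ev["etime"] >= ts

def in_region(ev, x, y, r):
    return math.hypot(ev["x"] - x, ev["y"] - y) <= r

def probing(events, x, y, r, ts, te):
    # scientific event count inside region(x, y, r) and timeinterval(ts, te)
    return sum(1 for ev in events if in_region(ev, x, y, r) and intersects(ev, ts, te))

def listing(events, x, y, r, ts, te):
    # complete time series of every matching event
    return [ev["series"] for ev in events if in_region(ev, x, y, r) and intersects(ev, ts, te)]

def stretching(ev, load_series, dt1, dt2):
    # load_series(oid, t_from, t_to) would read the object's tuples from the store
    return load_series(ev["oid"], ev["stime"] - dt1, ev["etime"] + dt2)
```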
Aserv needs to meet the following performance requirements: (1) for each data survey, the insertion latency is required to be less than ct, and (2) the query latency over data tuples collected during the observation is required to be less than ct.
III. ASERV FRAMEWORK
Aserv includes two major components and a key-value cloud store, as shown in Figure 3. The pre-processing module mainly performs the necessary scientific processing, such as cross-match in astronomy [7]. The event detector is used to search for Eset in each catalog. The pre-processing module and event detector have been implemented successfully in existing work [8], [9], so here we focus on the unsolved parts. The major modules of Aserv are discussed as follows.
Filter. For the catalogs, the data dimension m is usually large, e.g., 25 columns per data tuple collected by GWAC. It is important to reduce the data size, especially for an in-memory store. In our online analysis scenario, scientists only focus on scientific events and several major attributes. To significantly improve the system performance, we must filter unnecessary information out of the complete data using domain knowledge [10], thereby achieving an efficient data reduction. We develop a Domain Aware Filter (or DAfilter) to support this optimization: DAfilter keeps only the major attributes of each tuple as valid data and, for each object oid ∈ Eset, additionally appends its original data tuple (i.e., <oid, x, y, t, d_1, ..., d_m>) into a key-list structure as scientific event data, because the complete information is also useful for scientists to analyze scientific events.
Data organization. In this module, we generate the metadata, partition the valid data and build the index. At the beginning, we use EPgrid (Section III-A) to partition the observed region es once, generating the partition metadata. For key-value stores, the number of keys significantly impacts the insertion performance [11], so we also partition the valid data into partition data to reduce the number of keys. We organize the partition data with the same ID into a key-list structure along the time dimension. We do not partition the scientific event data because of its small volume, but we still record the partition IDs in the scientific event data for easy querying. In addition, we use Eset to update the SEPI index (Section III-B).
Pre-analysis. Probing analysis is important for scientific event alerts, but scanning the scientific event data or the index for it performs poorly. Thus, this module receives the partition data and Eset from the data organization module and generates intermediate statistical data to speed up the frequently used probing analysis (Section III-C).
Insertion and key-value store. The data produced by the aforementioned modules is transformed into key-value pairs and ingested into the key-value cloud store. On the one hand, the linear scalability of key-value stores helps meet the performance constraints [12]. On the other hand, scientific event data can be well described by the key-list structure natively supported by key-value stores.
Query engine. Aserv loads the partition metadata from the key-value store in advance and caches it in memory to speed up the region search. When a query request arrives, Aserv approximately parses the region operator to read the right partition data for stretching analysis, the intermediate statistical data for probing analysis, or the scientific event data for listing analysis. Finally, the invoked query runs on them. We introduce these techniques as follows.
A. Accuracy Aware Data Partition
In this section, we employ a grid scheme to partition the observation region of each observation unit. When the region operator is parsed, the query engine analyzes the partition metadata in local memory and selects the partitions that are covered by the search circle, no matter how much of each partition is covered. It then returns the partition IDs used to index objects in the corresponding partitions. As shown in Figure 4, four partitions can be found. Obviously, the total area covered by our strategy is always greater than the given search circle. Although scientists can tolerate some irrelevant area being covered, it must stay within a reasonable range. Thus, it is important to understand the relation between the region search accuracy and the number of partitions.
Theorem 1: Given the acceptable minimum accuracy α, the search radius r and the observation sub-area s of an observation unit, when all grids have the same size, the minimum grid number gn satisfies
gn = 64sα² / (πr(1 − α))² ,    (1)
which ensures that the region search accuracy with radius r is not lower than α.
Proof 3.1: We first define l and w as the length and width of each grid, respectively, and let gs denote the total area of the grids covered by the approximate strategy. The area of the grids intersected by the search circle boundary is less than the area of the grids intersected by the boundary of its circumscribed square. Thus, gs satisfies
gs ≤ 4r(l + w) + πr²,
where 4r(l + w) bounds the area of the grids intersected by the circumscribed square boundary. Since gs ≥ πr², the area of the search circle, the region search accuracy acc(region) = πr²/gs satisfies
acc(region) ≥ πr / (4(l + w) + πr).
This is consistent with the intuition that, for a fixed grid, the region search accuracy increases with the search radius. Since the acceptable minimum accuracy is α, we can solve the inequality as
l + w ≤ πr(1 − α) / (4α).
To obtain the minimum grid number, we set l = w, which gives l = πr(1 − α)/(8α). Solving gn = s/l² yields Eq. (1), and the minimum grid number increases as the search radius decreases. Given the acceptable minimum radius, we can therefore solve for the final grid number. The occurrences of scientific events are usually independent of each other and can almost be seen as uniformly distributed, so the scientific events found depend only on the searched area, and acc(region) is effectively equal to the query accuracy. When Eq. (1) is met, the query accuracy is thus also no less than α.
Following Eq. (1), we implement EPgrid as an even grid scheme to partition the sub-area of each observation unit. For a given number of partitions, EPgrid tries to make each grid a square and then assigns every grid a unique partition ID. In addition, we record the lower-left and upper-right positions of each grid as partition metadata, so EPgrid can determine the partition ID of an object with a simple hash function (omitted due to space constraints). We also design a map-only job to parse region, which only filters partitions, so no data is transmitted over the network; the performance of parsing region therefore scales easily with the cluster size. A minimal sketch of such a grid partitioner is shown below.
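The sketch below illustrates EPgrid under the assumptions of Theorem 1: the grid number is chosen from Eq. (1), grids are square, and the partition ID of an object follows from simple integer arithmetic. For brevity the approximate region parsing here uses the circle's bounding box, which over-covers slightly more than the grid/circle intersection described above; all names are illustrative.

```python
import math

def min_grid_number(s, r, alpha):
    # Eq. (1): gn = 64 * s * alpha^2 / (pi * r * (1 - alpha))^2
    return math.ceil(64.0 * s * alpha ** 2 / (math.pi * r * (1.0 - alpha)) ** 2)

class EPgrid:
    def __init__(self, x0, y0, side, grid_number):
        self.x0, self.y0 = x0, y0                      # lower-left corner of the sub-area
        self.n = max(1, int(math.sqrt(grid_number)))   # grids per dimension (square grids)
        self.cell = side / self.n                      # side length of one grid

    def partition_id(self, x, y):
        col = min(int((x - self.x0) / self.cell), self.n - 1)
        row = min(int((y - self.y0) / self.cell), self.n - 1)
        return row * self.n + col

    def parse_region(self, x, y, r):
        # return the IDs of all grids touched by the bounding box of the search circle
        lo_col = max(int((x - r - self.x0) / self.cell), 0)
        hi_col = min(int((x + r - self.x0) / self.cell), self.n - 1)
        lo_row = max(int((y - r - self.y0) / self.cell), 0)
        hi_row = min(int((y + r - self.y0) / self.cell), self.n - 1)
        return [row * self.n + col
                for row in range(lo_row, hi_row + 1)
                for col in range(lo_col, hi_col + 1)]
```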
B. SEPI Index
The SEPI index supports listing analysis. Listing analysis searches the eids of scientific events using SEPI and loads the time series of the corresponding scientific events. SEPI can be inserted and updated efficiently, and the index supports efficient distributed queries. The index most similar to SEPI is EPI [13]. Compared with EPI, querying with SEPI is faster because expensive operations are eliminated, and its size is half that of EPI.
If an object oid newly appears in Eset at the current time t, a new scientific event begins. We set eid = oid|t, where "|" denotes string concatenation, and emit a key-value pair <eid, t> to the key-value store. Otherwise, we update the value of the existing eid to t. We keep updating the SEPI index until the scientific event ends. In other words, the key-value store only keeps the key-value pairs <oid|stime, etime>. Because we only store etime as the value, we call the index the Single Endpoint Index (or SEPI). SEPI is simply a set of key-value pairs, so we can insert and update items very fast.
Given a timeinterval(t_s, t_e) for listing analysis, the query using SEPI proceeds as follows. First, we execute one parallel scan on SEPI inside the key-value store: a scan() for all key-value pairs of SEPI with values in [t_s, +∞), loading them into the heap space of the query engine. Second, we execute one parallel filter on the loaded key-value pairs: a filter() for key-value pairs with stime ≤ t_e, where stime is extracted from the key string (i.e., oid|stime). After the filter, the scientific events whose time intervals intersect timeinterval(t_s, t_e) have been found. As shown in Figure 4, three scientific events can be found in timeinterval(t_4, t_7).
More specifically, even if t_s is small, causing a large part of SEPI to be scanned, the performance is not dramatically affected. First, Aserv is designed for real-time analysis of scientific events during an observation, in which the data size is large but the observation duration is not long. Second, we directly use the key-value store's scan() primitive to load data, which does not introduce extra overhead. Finally, the query engine only scans SEPI once, and the filter operation is a map-only job, which is inherently fast on cloud systems.
EPI [13] keeps two key-value pairs for each scientific event in the key-value store: one records the start endpoint and the other records the end endpoint. Given a timeinterval(t_s, t_e), two scans need to be executed. One scan finds scientific events with an endpoint in [t_s, t_e]; for scientific events whose both endpoints fall in [t_s, t_e], a distinct() operation is performed to remove duplicates. Another scan finds scientific events that span [t_s, t_e]. Finally, the union of the two result sets is returned. Compared with EPI, SEPI excludes distinct() and one extra scan, and SEPI's size is only half of EPI's, since one key-value pair is kept per scientific event. A minimal sketch of SEPI follows.
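The sketch below uses a Python dict standing in for the key-value store: keys are "oid|stime", values are the latest etime, and listing analysis is one scan plus one filter, as described above. This is illustrative code, not Aserv's implementation.

```python
class SEPI:
    def __init__(self):
        self.kv = {}    # key "oid|stime" -> etime (one pair per scientific event)
        self.open = {}  # oid -> key of the event that is still running

    def update(self, eset, t):
        # eset: oids flagged by the event detector at survey cycle t
        for oid in eset:
            if oid not in self.open:        # a new scientific event starts at t
                self.open[oid] = f"{oid}|{t}"
            self.kv[self.open[oid]] = t     # extend etime of the running event
        for oid in list(self.open):         # events whose object no longer appears have ended
            if oid not in eset:
                del self.open[oid]

    def listing(self, ts, te):
        hits = []
        for key, etime in self.kv.items():  # the single scan(): etime in [ts, +inf)
            if etime >= ts:
                oid, stime = key.rsplit("|", 1)
                if int(stime) <= te:        # the filter(): stime <= te
                    hits.append((key, int(stime), etime))
        return hits
```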
C. Partition Count Aggregation
Partition count aggregation (or PCAG) supports efficient probing analysis. The main idea is to generate the count for each partition in advance; these counts are merged to compute the final result, avoiding a scan of the original data and the index when probing analysis is launched.
At a given time T, we partition Eset using EPgrid with the same parameters. In every partition, we count the total number Total of oids in Eset and the number New of new scientific events with stime = T. For example, as shown in Figure 4, Total = 2 and New = 1 at t_3 mean that two scientific events run through t_3 and one of them is just emerging. We generate a key-value pair as the Intermediate Count Result (or ICR), where the key contains the partition ID and the value is Total|New|T. The new ICR is appended to the key-value store, and the ICRs of the same partition are organized as a key-list structure.
count(p) = Total(t_s) + Σ_{i=t_s+1}^{t_e} New(i)    (2)
The query engine first parses the region and timeinterval(t_s, t_e) operators to obtain the partition IDs and the time range, and then loads the corresponding ICRs. For a partition p, probing analysis satisfies Eq. (2). For example, as shown in Figure 4, for timeinterval(t_4, t_7) we first load the partition's ICRs between t_4 and t_7; Total(t_4) is 2 and the sum of New between t_5 and t_7 is 1, so the count is 3. The final count is the sum of the counts of all searched partitions. Our method is also easy to implement on cloud systems: for each partition, we employ one map task to process that partition's ICRs, and finally one reduce task to add up the counts of the different partitions.
Obviously, the ICRs affect not only Aserv's insertion performance but also the accuracy of probing analysis, due to the approximate search strategy of the region operator. However, the number of ICRs at time T equals the number of partitions, and an acceptable query accuracy can be obtained by adjusting the number of partitions using Eq. (1). A minimal sketch of PCAG follows.
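The following sketch shows PCAG with an in-memory dict standing in for the key-list structure of ICRs; probing merges the ICRs of the searched partitions using Eq. (2). All names are illustrative.

```python
from collections import defaultdict

class PCAG:
    def __init__(self):
        self.icr = defaultdict(dict)  # partition ID -> {T: (Total, New)}

    def append_icr(self, pid, T, total, new):
        # one ICR per partition and survey cycle; the value is Total|New|T in the paper
        self.icr[pid][T] = (total, new)

    def probing_count(self, pids, ts, te):
        count = 0
        for pid in pids:                          # partitions returned by parsing region()
            series = self.icr.get(pid, {})
            total_ts, _ = series.get(ts, (0, 0))
            count += total_ts                     # events running through ts
            for t in range(ts + 1, te + 1):       # Eq. (2): add events newly started after ts
                _, new_t = series.get(t, (0, 0))
                count += new_t
        return count
```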
IV. PERFORMANCE CONSTRAINT
Aserv meets the performance constraints in two ways. First, we design map-only jobs to make the performance predictable: all tasks in map-only jobs only process local data, so the scaling overhead is very small. For example, both parsing region and scanning SEPI are map-only jobs, and probing analysis has only a single reduce task. The insertion component can in essence also be treated as a map-only job, because the workers transmit no data to each other over the network. Second, on the cloud we can scale out the cluster to adjust Aserv's performance. We therefore consider whether the performance constraints can be met for a given cluster size K.
The insertion latency consists of two parts: the processing time and the storage time spent ingesting data into the key-value store. Aserv's insertion component can be seen as load-balanced because all catalogs have the same size. Due to the map-only feature, the processing time equals the time f_p(V_n/K) spent by one worker processing V_n/K data, where V_n is the data size of the n objects collected per cycle. By a similar argument, the storage time is nearly equal to f_s(V_s/K), due to the linear scalability of the key-value cloud store [12], where V_s is the size of the stored data. The insertion latency constraint for cluster size K is then
f_p(V_n/K) + f_s(V_s/K) ≤ ct,    (3)
where ct is the survey cycle. In addition, f_p + f_s is easy to estimate: we run the insertion component with one worker on data of size V_n/K and take the measured insertion latency as f_p + f_s. Similarly, the query latency involves the reading time f_r(V_r/K) spent loading data from the key-value store and the query execution time f_q(V_r/K) + f_o(K), giving
f_r(V_r/K) + f_q(V_r/K) + f_o(K) ≤ ct.    (4)
More specifically, f_r and f_q are parallel times, like f_p, but distributed query workloads incur a scaling overhead f_o. Query workloads in Aserv only exchange data over the cluster through a shuffle pattern, so f_o consists of the shuffle I/O overhead and some fixed overhead, such as setting up processes or time spent in serial computation. We can therefore model f_o = θ_1·K + θ_2, where θ_1 and θ_2 are constants, because the shuffle phases follow an all-to-one communication pattern [14]. To estimate θ_1 and θ_2, we pick a smaller cluster size K' (K' < K), run the query workload over data of size K'·V_r/K, and record the execution time f_a(K'·V_r/K); we then set f_o(K') = f_a(K'·V_r/K) − f_r(V_r/K) − f_q(V_r/K). By assigning K' different values we obtain training data from which θ_1 and θ_2 are solved by linear regression. We consider that Aserv can nearly meet the performance constraints when K satisfies both Eq. (3) and Eq. (4). In essence, our approach measures the parallel time and predicts the communication time, which is valuable for predicting the performance of short-running tasks. Many models, such as Ernest [14], fit the performance of long-running tasks well thanks to their clear computation patterns; the computation pattern of short-running tasks, however, is hard to fit due to strong noise, so our approach is better suited to them. The strategy for measuring Aserv's parallel time is described in Section V-C. A minimal sketch of the calibration is given below.
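The sketch below follows the calibration described above: the parallel part is measured on one worker over 1/K of the data, the scale overhead f_o(K) = θ_1·K + θ_2 is fitted by linear regression on a few small cluster sizes K', and the constraint in Eq. (4) is then checked for the target K. The measurement values here are placeholders, not numbers from the paper.

```python
import numpy as np

def fit_scale_overhead(k_primes, total_times, parallel_time):
    # f_o(K') = measured total time on K' nodes minus the measured parallel part
    f_o = np.asarray(total_times, dtype=float) - float(parallel_time)
    A = np.vstack([np.asarray(k_primes, dtype=float), np.ones(len(k_primes))]).T
    theta1, theta2 = np.linalg.lstsq(A, f_o, rcond=None)[0]
    return theta1, theta2

def meets_query_constraint(K, parallel_time, theta1, theta2, ct):
    # Eq. (4): f_r(Vr/K) + f_q(Vr/K) + f_o(K) <= ct
    return parallel_time + theta1 * K + theta2 <= ct

# placeholder measurements on K' in {3, 5, 10}, with ct = 15 s and target K = 19
theta1, theta2 = fit_scale_overhead([3, 5, 10], [2.0, 2.3, 2.9], parallel_time=1.2)
print(meets_query_constraint(19, parallel_time=1.2, theta1=theta1, theta2=theta2, ct=15.0))
```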
V. EXPERIMENTS
We evaluate Aserv from four perspectives: insertion latency, data reduction rate, query latency and query accuracy, under a typical astronomical scenario, GWAC [3], in which each observation unit collects ~175,600 objects per 15 seconds and one observation lasts 8 hours (about 1,920 time points). We simulate GWAC with a data generator.
Data generator. Our data generator follows the "master/slave" mode, where each sub-generator simulates an observation unit. A sub-generator produces a catalog file per cycle, with ~175,600 rows and 25 columns. The object locations are taken from the standard UCAC4 catalog [15]. In addition, we simulate scientific event signals by letting the Eset size follow a geometric distribution and the locations of scientific events follow a uniform distribution; the duration of each scientific event is random. A sketch of this simulation is given below.
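As an illustration, scientific event signals could be simulated per survey cycle as sketched below, with the Eset size following a geometric distribution and event locations uniform over the observed sub-area; the parameter values are placeholders, not the generator's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_eset(p_event=0.01, area_x=4096.0, area_y=4096.0, max_duration=40):
    n_events = rng.geometric(p_event)                         # Eset size ~ geometric distribution
    xs = rng.uniform(0.0, area_x, size=n_events)              # uniform event locations
    ys = rng.uniform(0.0, area_y, size=n_events)
    durations = rng.integers(1, max_duration, size=n_events)  # random durations (in cycles)
    return list(zip(xs.tolist(), ys.tolist(), durations.tolist()))
```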
Cluster setup. We run our experiments on 20 cloud instances provided by the Computer Network Information Center, Chinese Academy of Sciences. Each instance has 12 CPU cores (1.6 GHz per core) and 96 GB RAM, and the network bandwidth is 10 Gbps. For the data generator, we launch 19 sub-generators (one per instance) and use the last instance as the master. In total, our cluster generates 3.5 million rows of catalog data per 15 seconds and 6.7 billion rows of catalog data per 8 hours. We build Aserv's cluster on the same 20 instances, where each worker in the insertion component loads the catalogs produced by the sub-generator on the local machine. We implement the insertion component in C++ and use Redis cluster 3.2.11 [16] as the storage system; Spark 1.6.3 [17] is used for query processing. The experiments consist of three parts, as follows.
• We compare the insertion latency and data reduction rate under three different numbers of partitions. The insertion latency is 2.35 seconds, and Aserv achieves a 2.23x data reduction rate under the optimal number of partitions (i.e., 10,000).
• We show the performance of the three analysis methods and demonstrate the query accuracy. All of them satisfy interactive performance in Aserv. Probing analysis using PCAG is 1.57x-2.28x faster than the existing implementations, listing analysis with SEPI is 2.22x faster than with EPI, and stretching analysis responds in milliseconds. The query accuracy reaches 0.9.
• We use a few machines to predict Aserv's performance at a larger cluster size. Our performance model is effective for Aserv: the accuracy of the predicted insertion latency is 0.96, and it is 0.86 for the query latency.
A. Insertion Performance Evaluation
In general, the minimum region search accuracy α acceptable to scientists is 0.8, and the minimum search circle is about 3% of the area observed by an observation unit. Under these conditions, the number of partitions solved using Eq. (1) is about 10,000 for every observation unit. In the experiment, we show Aserv's performance not only under 10,000 partitions but also under 1,000 and 100,000 partitions as comparisons.
As shown in Figure 5, in the first two cases Aserv can finish 1,920 rounds of data collection. However, under 100,000 partitions we can only collect data 1,738 times, because too many keys (i.e., partitions) cause the Redis cluster to fail frequently as survey data continues to be ingested. Insertion latency. In all three cases, the insertion performance constraint can be met. However, as the number of partitions increases, the insertion latency becomes longer; in particular, the average latency under 100,000 partitions is 2.35x higher than that under 10,000 partitions (2.35 seconds). Too many partitions produce many key-value pairs to be ingested into the Redis cluster, and the status of each key-value pair must be returned synchronously from the Redis cluster to Aserv to confirm successful ingestion, a procedure that depends on the network response delay. Thus, our partition strategy can improve the insertion performance.
Data reduction rate. Although we use DAfilter to reduce the data size, a large number of partitions also results in poor data reduction. We use MD/OD as the data reduction rate, where OD is the size of the original data, about 1,176 GB for 1,920 rounds of data collection and 1,064 GB for 1,738 rounds, and MD is the memory size consumed by the Redis cluster. As shown in Figure 6, the number of partitions is negatively correlated with the data reduction rate, because the Redis cluster needs more overhead to keep more key-value pairs, such as the extra overhead of its data structures. With fewer partitions, Aserv achieves a satisfactory data reduction rate; for example, only 23 GB of RAM per instance is consumed when the number of partitions is 10,000, which suggests a smaller budget for cloud resources. However, when the number of partitions reaches 100,000, the consumed memory even exceeds the size of the original data, defeating DAfilter. Thus, fewer partitions in Aserv reduce the memory consumption.
B. Query Performance Evaluation
We refer to the discovery ability of the LSST telescope [18] and assume that the accuracy of the event detector is 0.5. We therefore simulate the generation of 200,000 scientific events in one night. We randomly set 10 different sets of parameters for the region and timeinterval operators so that probing analysis and listing analysis find about 50-5,000 scientific events, since the result size is what scientists typically care about in interactive analysis. We evaluate the query latency on the maximum data size (i.e., 8 hours of data for GWAC), which represents the worst-case query performance in Aserv, because the data size processed by any query issued during the observation is at most the maximum data size.
Probing analysis. As shown in Figure 8, we implement probing analysis with three different approaches and compare their performance. Probing analysis using PCAG or SEPI obtains an approximate count, while PCSE obtains the precise count. In all three partition cases, the average query latency using PCAG is about 1.65 seconds, which is 1.57x and 2.28x faster than using SEPI and PCSE, respectively. Scanning SEPI and parsing the region operator for every scientific event clearly has more overhead than counting by merging ICRs. In addition, as the number of partitions grows, the query performance drops significantly; for example, the query performance under 100,000 partitions is 22% lower than under 10,000 partitions, mainly because of the extra overhead of loading more partitions and parsing the region operator on them. However, this does not mean that the number of partitions should be as small as possible: in the extreme case where all objects fall into one partition, PCAG can hardly determine the approximate count.
Query accuracy. Although reducing the number of partitions improves the insertion and query performance, the query accuracy decreases because of the coarser approximate region search. The query accuracy using PCAG and SEPI is the same, since they parse the region operator in the same way, so we simply use count_PCSE/count_PCAG as the query accuracy. We evaluate the accuracy of the 10 probing analytical queries and report the minimum accuracy as the final result in Figure 7. As expected, the actual query accuracy under 10,000 partitions is 0.9, above the acceptable accuracy of 0.8. Compared with 100,000 partitions, we achieve a significant performance improvement at the cost of only 6.5% accuracy loss.
Listing analysis. As shown in Figure 9, we compare the performance of listing analysis implemented with the SEPI index and with the EPI index [13] in the Redis cluster. The two implementations parse the same region operator, so the necessary region search does not lead to performance differences. The average query latency over the three partition cases with SEPI is 2.72 seconds, which meets the performance constraint, and the query performance with SEPI is 2.22x higher than with EPI. We attribute the poorer performance of EPI mainly to its two scans of the key-value store and the distinct() operation: the time T_s spent on the two scans is 53.33% of the time spent parsing EPI, and distinct() takes about 7% of the parsing time under 10,000 partitions, so both have high overhead. SEPI avoids the extra scan and distinct(), and its single scan() loads fewer key-value pairs than EPI because SEPI's size is half of EPI's; as a result, the time spent on scan() for SEPI is only 36.25% of T_s.
Stretching analysis. We randomly select a scientific event that has been found by listing analysis and set timeinterval(stime − ∆t_1, etime + ∆t_2) as the temporal range, where stime and etime are the two endpoints of this scientific event and ∆t_1 = ∆t_2 = 20 minutes, based on the real demand of astronomers. The average query latency is 0.34 seconds under 1,000 partitions, 0.2 seconds under 10,000 partitions and 0.19 seconds under 100,000 partitions. The query performance degrades slightly with fewer partitions because each partition contains more objects. The outstanding performance comes from the fact that Aserv only needs to load the one partition of data containing the corresponding object.
C. Performance Constraint Evaluation
In this section, we evaluate the accuracy of the performance model for a given cluster size K. We set K = 19, excluding one master node, and use acc_p = 1 − |T_e − T_a|/T_a ∈ [0, 1] as the prediction accuracy, where T_e is the estimated execution time and T_a is the actual execution time; if the prediction accuracy is below 0, we set it to 0. We use the 10,000-partition case for these experiments, and all tests take 1,920 rounds of data collection. Insertion latency prediction. We build Aserv's cluster on two cloud instances to simulate the data generation of one observation unit: one instance for the insertion component and one for the Redis cluster. The size of the collected data is then about 1/19 of the total data size. We use the average insertion latency f_p + f_s in Eq. (3) as the estimated insertion latency T_e for 19 nodes. We find that T_e is 2.25 seconds, less than ct, which suggests that Aserv's insertion latency can meet the performance constraint when K = 19. Since T_a = 2.35 seconds (Figure 5), the prediction accuracy is 0.96; this is explained by the good linear scalability of both the insertion component and the Redis cluster.
Query latency prediction. To evaluate the parallel time f_r + f_q, we need to simulate, on one cloud instance, the data distribution that arises when K = 19. Partition IDs are designed to be continuous and naturally follow a uniform distribution, so the Redis cluster can easily place data evenly over the cluster. We therefore apply a modulus strategy to the partition IDs (i.e., modulo 19) to select the corresponding partitions and ingest them into the Redis cluster. We build Aserv's cluster on two cloud instances, one of which simulates the data generation of the 19 observation units with the modulus strategy while the other hosts the Redis cluster. Note that we do not evaluate the insertion latency this way, because 19 insertion processes on one instance would share resources and distort the insertion performance. We launch the query engine only on the instance hosting the Redis cluster and use the actual execution time as f_r + f_q.
For the scaling overhead, we build Aserv's cluster on K' instances and again simulate the data distribution of the 19 observation units with the modulus strategy to evaluate f_o(K'). We use the modulus strategy to ensure that the number of partitions on the K' instances is close to K' times the number of partitions on one instance when K = 19, so we set K' ∈ {3, 5, 10}; for example, partition IDs are taken modulo 6 when K' = 3. Then we launch the query engine on the K' instances and capture the actual query latency to obtain f_o(K') as training data. Finally, we use a Levenberg-Marquardt solver [19] to find the f_o that best fits the training data.
For each analysis method, we again run 10 queries with different parameters and list the average results in Table I. When K = 19, the estimated query latency solved by Eq. (4) is also less than 15 seconds, which suggests that the query engine can meet the performance constraint. Although only 3 points are used for training, the average prediction accuracy is 0.86, high enough to help scientists estimate Aserv's query performance and cloud resource setup. On the one hand, our queries implemented on Spark contain no complex communication pattern; on the other hand, our strategy directly captures the parallel time in Aserv, so a linear model fitted on a few training points suffices.
VII. CONCLUSION
In this paper, we propose three basic analysis methods for fast sky surveys and develop a distributed system, Aserv, for real-time and low-latency scientific event analysis. Cutting down unnecessary cost allows us to take an accuracy-aware approach to improving analysis performance: by modeling scientific events and adjusting the number of partitions, we trade off query accuracy, resource consumption (mainly memory and network), and insertion and query latency, ultimately achieving an overall balance for a large-scale scientific data analysis system. The specific optimization methods include DAfilter, EPgrid, the SEPI index, PCAG and a performance model. Aserv can be downloaded from https://github.com/yangchenwo/Aserv.git.
| 6,958 |
1811.10902
|
2903456962
|
Cellular network configuration plays a critical role in network performance. In current practice, network configuration depends heavily on the field experience of engineers and often remains static for long periods of time. This practice is far from optimal. To address this limitation, online-learning-based approaches have great potential to automate and optimize network configuration. Learning-based approaches face the challenges of learning a highly complex function for each base station and balancing the fundamental exploration-exploitation tradeoff while minimizing the exploration cost. Fortunately, in cellular networks, base stations (BSs) often have similarities even though they are not identical. To leverage such similarities, we propose a kernel-based multi-BS contextual bandit algorithm based on multi-task learning. In the algorithm, we leverage the similarity among different BSs as defined by conditional kernel embeddings. We present a theoretical analysis of the proposed algorithm in terms of regret and multi-task-learning efficiency, and we evaluate its effectiveness using a simulator built from real traces.
|
Various aspects of network parameter configuration have been studied in the literature, such as pilot power configuration, spectrum, and handoff thresholds. Traditional approaches derive analytical relationships between network configuration and performance based on communication theory, such as @cite_13 @cite_5 @cite_20 @cite_15 . Such approaches are often prohibitively complex, involve various approximations, and require a significant amount of input information (such as the number of users, the location of each user, etc.).
|
{
"abstract": [
"The paper validates the feasibility of automating the setting of common pilot power in a WCDMA radio network. The pilot automation improves operability of the network and it is implemented with a control software aiming for load and coverage balancing. The control applies measurements of base station total transmission power of neighboring cells and terminal reports of received pilot signal level to determine the pilot qualification. The pilot power of a cell is periodically updated with simple heuristic rules in order to improve the load and coverage balance. The approach was validated using a dynamic WCDMA system simulator with a deployment of macro and micro cells on a city region whose measured propagation characteristics were incorporated into the model. The results showed that the proposed control method balanced load and coverage and improved the air interface performance measured as a function of packet throughput.",
"Base station (BS) sleeping is an effective way to improve the energy-efficiency of cellular networks. However, it may bring extra user-perceived delay. We conduct a theoretical study into the impact of BS sleeping on both energy-efficiency and user-perceived delay. We consider hysteresis sleep and three typical wake-up schemes, namely single sleep, multiple sleep, and @math -limited schemes. We model the system as an @math vacation queue, which captures the setup time, the mode-changing cost, as well as the counting or detection cost during the sleep mode. Closed-form expressions for the average power and the Laplace–Stieltjes transform of delay distribution are obtained. The impacts of system parameters on these expressions are analyzed. We then formulate an optimization problem to design delay-constrained energy-optimal BS sleeping policies. We show that the optimal solutions possess a special structure, thereby allowing us to obtain them explicitly or numerically by simple bisection search. In addition, the relationship between the optimal power consumption and the mean delay constraint is analyzed, so as to answer the fundamental question: how much energy can be saved by trading off a certain amount of delay? It is shown that this optimal relationship is linear only when the delay constraint is lower than a threshold. Numerical studies are also conducted, where the impact of detection or counting cost during the sleep mode is explored, and the delay distribution under the optimal policy is obtained.",
"Pilot power management is an important issue for coverage planning in UMTS systems. We consider the problem of minimizing the pilot power subject to the constraint of full service coverage. For this planning problem, which is NP-hard in complexity, effective methods being able to deal with large-scale networks of heterogeneous cell coverage patterns are highly desirable. We propose an integer linear optimization formulation, and a decomposition method that exploits the problem structure using a Dantzig-Wolfe reformulation. We report numerical results for networks of various sizes. The proposed method efficiently finds near-optimal solutions that yield substantial savings in power consumption when compared to baseline approaches.",
"Radio coverage optimization is a key element in improving the overall performance of femtocell cellular networks. Owing to the increased size and complexity of femtocell networks, there is an imperative need to develop decentralized coverage optimization algorithms, which is a highly challenging task as these algorithms have to work without global information and coordinated central control of network nodes. In this paper, we consider a deployment scenario of a group of @math femtocells in an enterprise environment and propose a decentralized algorithm for joint femtocell coverage area optimization. The algorithm updates the femtocell's pilot Tx power only and serves to balance the user load amongst the collocated femtocells as well as minimizing the coverage holes and pilot Tx power. Compared to the fixed pilot Tx power allocation, the algorithm introduces an improvement of approximately 18 in terms of supported user traffic, in addition to a significant reduction of pilot leakage to neighboring cells."
],
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_20"
],
"mid": [
"1531572832",
"2343363056",
"2108218430",
"2165874967"
]
}
|
Kernel-based Multi-Task Contextual Bandits in Cellular Network Configuration
|
With the development of mobile Internet and the rising number of smart phones, recent years have witnessed a significant growth in mobile data traffic [1]. To satisfy the increasing traffic demand, cellular providers are facing increasing pressure to further optimize their networks. Along this line, one critical aspect is cellular base station (BS) configuration. In cellular networks, a BS is a piece of network equipment that provides service to mobile users in its geographical coverage area (similar to a WiFi access point, but much more complex), as shown in Figure 1. Each BS has a large number of parameters to configure, such as spectrum band, power configuration, antenna setting, and user hand-off threshold. These parameters have a significant impact on the overall cellular network performance, such as user throughput or delay. For instance, the transmit power of a BS determines its coverage and affects the throughput of all users it serves.
In current practice, cellular configuration requires manual adjustment and is mostly decided based on the field experience of engineers. Network configuration parameters typically remain static for a long period of time, even years, unless severe performance problems arise. This is clearly not optimal in terms of network performance: different base stations have different deployment environments (e.g., geographical areas), and the conditions of each BS (e.g., the number of users) also change over time. Therefore, as shown in Figure 1, setting appropriate parameters for each deployed BS based on its specific conditions could significantly help the industry optimize its networks. A natural way of achieving this goal is to apply online-learning-based algorithms in order to automate and optimize network configuration.
Online-learning-based cellular BS configuration faces multiple challenges. First, the mapping between network configuration and performance is highly complex. Since different BSs have different deployment environments, they have different mappings between network configuration and performance for a given BS condition. Furthermore, for a given BS, its condition also changes over time due to network dynamics, leading to different optimal configurations at different points in time. In addition, for a given BS and a given condition, the impact of network configuration on performance is too complicated to model with white-box analysis, due to the complexity and dynamics of the network environment, user diversity, traffic demand, mobility, etc. Second, to learn this mapping and to optimize the network performance over a period of time, operators face a fundamental exploitation-exploration tradeoff: exploitation means using the best known configuration, which benefits immediate performance but may overlook better configurations that are unknown; exploration means experimenting with unknown or uncertain configurations, which may yield better performance in the long run at the risk of lower immediate performance. Furthermore, running experiments in cellular networks is disruptive: users suffer poor performance under poor configurations. Thus, providers are often conservative when running experiments and prefer to reduce the number of explorations needed at each BS. Fortunately, in a cellular network, BSs usually have similarities, even though they are not identical. Therefore, it would be desirable to effectively leverage data from different BSs by exploiting such similarities.
To address these challenges, we consider multiple BSs jointly and formulate the corresponding configuration problem in a multi-task online learning framework, as shown in Figure 2. The key idea is to leverage information from multiple BSs to jointly learn a model that maps a network state and configuration to performance. The model is then customized to each BS based on its characteristics. Furthermore, the model also allows the BSs to balance the tradeoff between exploration and exploitation of the different configurations. Specifically, we propose a kernel-based multi-BS contextual bandit algorithm that can leverage similarity among BSs to automate and optimize the cellular network configuration of multiple BSs simultaneously. Our contributions are multi-fold:
• We develop a kernel-based multi-task contextual bandits algorithm to optimize cellular network configuration. The key idea is to explore similarities among BSs to make intelligent decisions about network configurations in a sequential manner.
• We propose a method to estimate the similarity among the BSs based on the conditional kernel embedding.
• We present theoretical guarantees for the proposed algorithm in terms of regret and multi-task-learning efficiency.
• We evaluate our algorithm using real traces. Our proposed algorithm outperforms bandit algorithms that do not use multi-task learning by up to 35%.
The rest of the paper is organized as follows. The related work is in Sec. II. We introduce the system model and problem formulation in Sec. III. We present a kernel-based multi-BS contextual bandit algorithm in Sec. IV. The theoretical analysis of the algorithm is in Sec. V. We demonstrate the numerical results in Sec. VI, and conclude in Sec. VII.
III. SYSTEM MODEL AND PROBLEM FORMULATION
In this section, we first describe the details of the multi-BS configuration problem. Then we formulate the problem as a multi-task contextual bandit model.
A. Multi-BS Configuration
In this work, we focus on the multi-BS network configuration problem. Specifically, we consider a set of BSs M := {1, · · · , M} in a network. The time of the system is discretized over a time horizon of T slots. At time slot t, ∀t ∈ T := {1, · · · , T}, each BS m ∈ M has a state represented by a vector s_t^{(m)}; the BS chooses a configuration (action) a_t^{(m)} from a set of candidate configurations A and observes a reward r_{a_t,t}^{(m)} ∈ R, which is a measure of network performance. In practice, the configuration parameters can include pilot power, antenna direction, handoff threshold, etc. The reward can be metrics of network performance, such as uplink throughput, downlink throughput, and quality-of-service scores. Time granularity of the system is decided by network operators. In the current practice, configurations can be updated daily during midnight maintenance hours. To further improve network performance, network operators are moving towards more frequent network configuration updates, e.g., on an hourly basis, based on network states.
The goal of the problem is to find the configurations a_t^{(m)} for all BSs and all time slots that maximize the total accumulated reward:

\max \sum_{m=1}^{M} \sum_{t=1}^{T} r^{(m)}_{a_t,t}   (1)
In this problem, for a given BS and a given state, we do not have prior knowledge of the reward of an action; we need to learn such a mapping over the time horizon. In other words, the configuration a_t^{(m)} chosen at time t and the corresponding reward also affect future actions. Therefore, there exists a fundamental exploitation-exploration tradeoff: exploitation is to use the best learned configuration that benefits the immediate reward but may overlook better configurations that are unknown; and exploration is to experiment with unknown or uncertain configurations which may have a better reward in the long run, at the risk of a potentially lower immediate reward.
Furthermore, we note that the action of one BS can be affected by the information of other BSs. Therefore, the information from multiple BSs should be leveraged jointly to optimize the problem in (1). Also, note that the BSs are similar but not identical. Therefore, the similarity of BSs needs to be explored and leveraged to optimize the network configuration.
In summary, the goal of the multi-BS configuration problem is to choose appropriate actions for all time slots and all BSs so as to maximize the objective defined in Eq. (1).
B. Multi-Task Contextual Bandit
We model the problem as a multi-task contextual bandit. We first briefly introduce the classical bandit model and the contextual bandit model.
The multi-armed bandit (MAB) [11] is a powerful tool for sequential decision making: at each time step, a learning task pulls one of the arms and observes an instantaneous reward drawn independently and identically (i.i.d.) from a fixed but unknown distribution. The task's objective is to maximize its cumulative reward by balancing the exploitation of those arms that have yielded high rewards in the past and the exploration of new arms that have not been tried. The contextual bandit model [10] is an extension of the MAB in which each arm is associated with side information, called the context. The distribution of rewards for each arm is related to the associated context. The task is to learn the arm selection strategy by leveraging the contexts to predict the expected reward of each arm. Specifically, in the contextual bandit, over a time horizon of T slots, at each time t the environment reveals a context x_{a,t} ∈ R^p for each arm a ∈ A. If the learner selects and pulls an arm a_t ∈ A, it receives a reward r_{a_t,t} from the environment. At the end of time slot t, the learner improves its arm selection strategy based on the new observation {x_{a_t,t}, r_{a_t,t}}. At time t, the best arm is defined as a_t^* = \arg\max_{a∈A} E(r_{a,t} | x_{a,t}) and the corresponding reward is r_{a_t^*,t}. The regret at time T is defined as the sum of the gaps between the optimal reward and the actual reward over the T time slots, as in Eq. (2). Maximizing the accumulated reward \sum_{t=1}^{T} r_{a_t,t} is equivalent to minimizing the regret.
R(T) = \sum_{t=1}^{T} \left( r_{a_t^*,t} - r_{a_t,t} \right)   (2)
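To make the interaction protocol and the regret in Eq. (2) concrete, the following toy sketch (ours, not the paper's; the linear reward model, the noise level, and the random placeholder policy are illustrative assumptions) simulates a contextual bandit and accumulates the regret of Eq. (2).

# Toy contextual-bandit protocol and regret accumulation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
T, n_arms, dim = 200, 5, 3
theta_true = rng.normal(size=dim)          # unknown reward parameter (toy assumption)

def reward(x):
    # noisy reward for the pulled arm
    return float(x @ theta_true) + 0.1 * rng.normal()

cum_regret = 0.0
for t in range(T):
    contexts = rng.normal(size=(n_arms, dim))      # environment reveals x_{a,t} for each arm
    a_t = rng.integers(n_arms)                     # placeholder policy: pull a random arm
    r_t = reward(contexts[a_t])                    # observed reward r_{a_t,t}
    best = max(float(contexts[a] @ theta_true) for a in range(n_arms))
    cum_regret += best - float(contexts[a_t] @ theta_true)   # one term of Eq. (2)
print("cumulative regret of the random policy:", round(cum_regret, 2))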
Based on the classical contextual bandit problem, we propose a multi-task contextual bandit model. Consider a set of tasks M := {1, · · · , M}; each task m ∈ M can be seen as a standard contextual bandit problem. More specifically, in task m, at each time t, for each arm a ∈ A there is an associated context vector x^{(m)}_{a,t}. We also define the best arm of task m at time t as a_t^* = \arg\max_{a∈A} E(r^{(m)}_{a,t} | x^{(m)}_{a,t}). At the end of each time slot, the arm selection strategy is improved based on the new observations {(x^{(m)}_{a_t,t}, r^{(m)}_{a_t,t}) | m ∈ M}. The total regret over all tasks is

R(T) = \sum_{m=1}^{M} \sum_{t=1}^{T} \left( r^{(m)}_{a_t^*,t} - r^{(m)}_{a_t,t} \right)   (3)
We can formulate the multi-BS configuration problem as a multi-task contextual bandit. We regard the configuration optimization problem for one BS as one task. Specifically, for each BS m, at time t, the context associated with arm a is the combination of the state and the action, i.e., x^{(m)}_{a,t} = (s^{(m)}_t, a^{(m)}_t). Then the goal of finding the best arms that maximize the total accumulated reward in Eq. (1) is equivalent to minimizing the regret defined in Eq. (3).
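As an illustration of this context construction, the short sketch below (ours; the 5-metric state vector and the dBm action grid anticipate the evaluation section, and the variable names are not the paper's) builds one context x_{a,t} = (s_t, a) per candidate configuration.

# Build per-arm contexts from a BS state and a grid of candidate configurations.
import numpy as np

actions = np.arange(-112, -83, 1.0)                  # candidate RSRP thresholds in dBm (29 arms)
state_t = np.array([12.0, 0.03, 0.4, 0.2, 30.0])     # example BS state s_t (5 measurement metrics)

contexts = np.array([np.concatenate([state_t, [a]]) for a in actions])
print(contexts.shape)   # (29, 6): one context per arm at time t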
IV. METHODOLOGY
Most existing work on contextual bandit problems, such as LinUCB [12] and KernelUCB [13], assumes the reward is a function of the context, i.e., r_{a_t,t} = f(x_{a_t,t}). At each time slot t, these algorithms use the estimated function \hat{f}(·) to predict the reward of each arm according to the context at time t, i.e., {x_{a,t}}_{a∈A}. Based on the value and uncertainty of the prediction, they calculate the upper confidence bound (UCB) of each arm. Then they select the arm a_t that has the maximum UCB value and obtain a reward r_{a_t,t}. Last, they update the estimated function \hat{f}(·) with the new observation (x_{a_t,t}, r_{a_t,t}).
In our multi-BS configuration problem defined in Eq. (1), if we model every BS as an independent classical contextual bandit problem and let an existing algorithm make its own decisions, we lose information across BSs, which is not efficient. Specifically, in the training process, we would learn a group of functions {f^{(m)} | m ∈ M} independently and ignore the similarity among them. In practice, the BSs that are configured simultaneously share many characteristics, such as geographical location, leading to similar reward functions. Furthermore, in the real case, since the configuration parameters have a large impact on network performance, the cost of experimentation is high, so we need an approach that uses the data effectively. Motivated by these observations, we design the kernel-based multi-BS contextual bandits that can leverage the similarity information and share experience among BSs, i.e., tasks.
In this section, we propose a framework to solve the problem in Eq. (3). We start with the regression model. Then we describe how to incorporate it into multi-task learning. Next, we propose the kernel-based multi-BS contextual bandits algorithm in Sec. IV-C. Finally, we discuss the details of task similarity for real data in Sec. IV-D.
A. Kernel Ridge Regression
For the network configuration problem, we need to learn a model from historical data that can predict the reward r_{a_t,t} from the context x_{a_t,t}. There are two challenges. First, the learned model should capture the non-linear relation between the configuration parameters, the state (context), and the network utility (reward) in complex scenarios. Second, since the learned model is used in the contextual bandit model, it needs to offer not only the mean estimate of the prediction but also a confidence interval that describes the uncertainty of the prediction. This important feature is used later to trade off exploration and exploitation in the bandit model.
To address these two challenges, we use kernel ridge regression to learn the prediction model, which can capture non-linear relations and provides an explicit form of the uncertainty of the prediction. Furthermore, the kernel function can intuitively be regarded as a measure of similarity among data points, which makes it suitable for the multi-task learning extension in Sec. IV-B. Let us briefly describe the kernel regression model.
Kernel ridge regression is a powerful tool in supervised learning for characterizing the non-linear relation between the target and the features. For a training data set {(x_i, y_i)}_{i=1}^{n}, the kernel method assumes that there exists a feature mapping φ(x) : X → H which maps data into a feature space in which a linear relationship y = φ(x)^T θ between φ(x) and y can be observed, where θ is the parameter to be trained. The kernel function is defined as the inner product of two data vectors in the feature space:
k(x, x') = φ(x)^T φ(x'), ∀x, x' ∈ X.
The feature space H is a Hilbert space of functions f : X → R with inner product ⟨·, ·⟩_k. It is called the reproducing kernel Hilbert space (RKHS) associated with k, denoted by H_k. The goal of kernel ridge regression is to find a function f in the RKHS H_k that minimizes the regularized mean squared error over the training data, as shown in Eq. (4).
\hat{f} = \arg\min_{f ∈ H_k} \sum_{i=1}^{n} (f(x_i) - y_i)^2 + λ ||f||^2_{H_k}   (4)
Applying the representer theorem, the optimal f can be represented as a linear combination of the data points in the feature space, f(·) = \sum_{i=1}^{n} α_i k(x_i, ·). Then we can obtain the solution of Eq. (4):
\hat{f}(x) = k^T_{X:x} (K + λI)^{-1} y   (5)
where y = (y_1, · · · , y_n), K is the Gram matrix, i.e., K_{ij} = k(x_i, x_j), and k_{X:x} = (k(x_1, x), · · · , k(x_n, x)) is the vector of kernel values between all historical data X and the new data point x. This provides the basis for our bandit algorithms. The uncertainty of the prediction of kernel ridge regression is discussed in Sec. IV-C.
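As an illustration, the following minimal sketch implements the fit and the prediction of Eq. (5); the RBF kernel and all parameter values are our illustrative assumptions, since the choice of k is not fixed at this point in the paper.

# Minimal kernel ridge regression following Eq. (5), with an RBF kernel as an example choice.
import numpy as np

def rbf(A, B, gamma=0.5):
    # pairwise RBF kernel between rows of A and rows of B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1.0):
    K = rbf(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # (K + lam I)^{-1} y
    return alpha

def krr_predict(X, alpha, x_new):
    return rbf(x_new[None, :], X)[0] @ alpha                # k_{X:x}^T (K + lam I)^{-1} y

# toy usage
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
alpha = krr_fit(X, y)
print(krr_predict(X, alpha, X[0]), y[0])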
B. Multi-Task Learning
We next introduce how to integrate kernel ridge regression into multi-task learning, which allows us to use similarity information among BSs.
In multi-task learning, the main question is how to efficiently use data from one task in another task. Borrowing the idea from [16], [19], we define the regression function as follows:
f : \tilde{X} → Y   (6)

where \tilde{X} = Z × X, X is the original context space, Z is the task similarity space, and Y is the reward space. For each context x^{(m)}_{a_t,t} of BS m, we can associate it with the task/BS descriptor z_m ∈ Z, and define \tilde{x}^{(m)}_{a_t,t} = (z_m, x^{(m)}_{a_t,t}) to be the augmented context. We define the following kernel function \tilde{k} in (7) to capture the relation among tasks:

\tilde{k}((z, x), (z', x')) = k_Z(z, z') k_X(x, x')   (7)
where k_X is the kernel defined on the original context, and k_Z is the kernel defined on tasks that measures the similarity among tasks/BSs. Then we define the task/BS similarity matrix K_Z as (K_Z)_{ij} = k_Z(z_i, z_j). We discuss the training of this similarity kernel and similarity matrix in Sec. IV-D.
In the multi-task contextual bandit model, at time t, we need to train an arm selection strategy based on the history data, i.e., {(x^{(m)}_{a_τ,τ}, r^{(m)}_{a_τ,τ}) | m ∈ M, τ < t}. We formulate the regression problem in Eq. (8):

\hat{f}_t = \arg\min_{f ∈ H_{\tilde{k}}} \sum_{m=1}^{M} \sum_{τ=1}^{t-1} \left( f(\tilde{x}^{(m)}_{a_τ,τ}) - r^{(m)}_{a_τ,τ} \right)^2 + λ ||f||^2_{H_{\tilde{k}}}   (8)

where \tilde{x}^{(m)}_{a_τ,τ} is the augmented context of the arm a_τ for task m, defined as the combination of the task descriptor z_m and the original context x^{(m)}_{a_τ,τ}. The solution has the same form as Eq. (5); the only difference is that we use the augmented context \tilde{x} and the new kernel \tilde{k} instead of x and k:

\hat{f}_t(\tilde{x}) = \tilde{k}^T_{t-1}(\tilde{x}) (\tilde{K}_{t-1} + λI)^{-1} y_{t-1}   (9)

where \tilde{K}_{t-1} is the Gram matrix of the augmented contexts [\tilde{x}^{(m)}_{a_τ,τ}]_{τ<t, m∈M} observed before time t, \tilde{k}_{t-1}(\tilde{x}) is the vector of kernel values between these observations and \tilde{x}, and y_{t-1} is the vector of the corresponding rewards.
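A minimal sketch of the multi-task predictor of Eqs. (7)-(9), assuming the task similarity matrix K_Z is given as input (its estimation is discussed in Sec. IV-D); the RBF context kernel and all variable names are our illustrative choices, not the authors' code.

# Multi-task kernel regression: product kernel of Eq. (7) plugged into Eq. (9).
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def augmented_kernel(tasks_a, X_a, tasks_b, X_b, K_Z, gamma=0.5):
    # k~((z,x),(z',x')) = k_Z(z,z') * k_X(x,x'), evaluated entrywise
    return K_Z[np.ix_(tasks_a, tasks_b)] * rbf(X_a, X_b, gamma)

def multi_task_predict(tasks_hist, X_hist, y_hist, task_new, x_new, K_Z, lam=1.0):
    K = augmented_kernel(tasks_hist, X_hist, tasks_hist, X_hist, K_Z)
    k_vec = augmented_kernel(np.array([task_new]), x_new[None, :], tasks_hist, X_hist, K_Z)[0]
    alpha = np.linalg.solve(K + lam * np.eye(len(y_hist)), y_hist)
    return k_vec @ alpha                                   # Eq. (9)

# toy usage with 2 tasks that are 0.8-similar
K_Z = np.array([[1.0, 0.8], [0.8, 1.0]])
rng = np.random.default_rng(2)
tasks = rng.integers(0, 2, size=40)
X = rng.normal(size=(40, 4))
y = rng.normal(size=40)
print(multi_task_predict(tasks, X, y, 0, X[0], K_Z))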
C. Kernel-based Multi-BS Contextual Bandits
Next, we introduce how to measure the uncertainty of the prediction in Eq. (9). At time t, for a specific task (i.e., BS) m ∈ M and a given augmented context \tilde{x}^{(m)}_{a,t}, assume that the historical rewards {r^{(m)}_{a_τ,τ} | m ∈ M, τ < t} are all independent random variables. Then we can use McDiarmid's inequality to get an upper confidence bound on the predicted value. Since the mathematical derivation of this step is the same as Lemma 1 in [19], we only make a minor modification to obtain Theorem 1.
Theorem 1. Under the above independence assumption, with probability at least 1 - δ/T, we have that ∀a ∈ A,

|\hat{f}_t(\tilde{x}^{(m)}_{a,t}) - f^*(\tilde{x}^{(m)}_{a,t})| ≤ (α + c\sqrt{λ}) σ^{(m)}_{a,t}   (10)

where the width is

σ^{(m)}_{a,t} = \tilde{k}(\tilde{x}^{(m)}_{a,t}, \tilde{x}^{(m)}_{a,t}) - \tilde{k}^T_{t-1}(\tilde{x}^{(m)}_{a,t}) (\tilde{K}_{t-1} + λI)^{-1} \tilde{k}_{t-1}(\tilde{x}^{(m)}_{a,t})   (11)
Based on Theorem 1, we define the upper confidence bound (UCB) for each arm of each task in Eq. (12), where \hat{f}_t is obtained from Eq. (9) and β is a hyper-parameter.
UCB^{(m)}_{a,t} = \hat{f}_t(\tilde{x}^{(m)}_{a,t}) + β σ^{(m)}_{a,t}   (12)
Then we propose Algorithm 1 to solve the multi-BS configuration problem.
In Algorithm 1, at each time t, the algorithm updates the prediction model \hat{f}_t. Then, for each task m ∈ M, it uses the model to obtain the UCB of each arm a ∈ A. Next, it selects the arm that has the maximum UCB. Algorithm 1 can trade off between exploitation and exploration in the multi-BS configuration problem. The intuition behind it is as follows: if a configuration has been tried only a few times or not at all, the width of its arm defined in Eq. (11) is larger, which makes its UCB value larger, so this configuration will be tried in subsequent time slots with high probability.
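The per-slot decision step of Algorithm 1 could look as follows; this is our reading of Eqs. (9), (11), and (12), it reuses the augmented_kernel helper sketched above, and it is not the authors' code.

# UCB-based arm selection for one task (BS), following Eqs. (9), (11), (12).
import numpy as np

def ucb_select(tasks_hist, X_hist, y_hist, task_m, arm_contexts, K_Z, lam=1.0, beta=1.0):
    n = len(y_hist)
    K = augmented_kernel(tasks_hist, X_hist, tasks_hist, X_hist, K_Z)
    K_inv = np.linalg.inv(K + lam * np.eye(n))
    alpha = K_inv @ y_hist
    best_arm, best_ucb = None, -np.inf
    for a, x in enumerate(arm_contexts):
        tasks_a = np.array([task_m])
        k_vec = augmented_kernel(tasks_a, x[None, :], tasks_hist, X_hist, K_Z)[0]
        k_self = augmented_kernel(tasks_a, x[None, :], tasks_a, x[None, :], K_Z)[0, 0]
        mean = k_vec @ alpha                                  # prediction, Eq. (9)
        width = k_self - k_vec @ K_inv @ k_vec                # width, Eq. (11)
        ucb = mean + beta * width                             # UCB, Eq. (12)
        if ucb > best_ucb:
            best_arm, best_ucb = a, ucb
    return best_arm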
Independence assumption. Note that the independence assumption of Theorem 1 does not hold in Algorithm 1, because previous rewards influence the arm selection strategy (the prediction function), which in turn influences the following rewards. To address this, we select a subset of the history so that the assumption holds, as described in Sec. V.
Computational complexity. At each time slot we need (\tilde{K}_t + λI)^{-1}; recomputing it from scratch is expensive, so we update it incrementally using the block matrix inversion formula. For a block matrix

M^{-1} = \begin{pmatrix} A & U \\ V & C \end{pmatrix}^{-1}   (13)

= \begin{pmatrix} S^{-1} & -S^{-1} U C^{-1} \\ -C^{-1} V S^{-1} & C^{-1} V S^{-1} U C^{-1} + C^{-1} \end{pmatrix}   (14)

where S = A - U C^{-1} V is the Schur complement of C.
Based on this, we can update (\tilde{K}_t + λI)^{-1} from (\tilde{K}_{t-1} + λI)^{-1}. This decreases the computational complexity to O(Mt^2).
The issue of dealing with a Gram matrix K of large dimension has been studied extensively, e.g., in Chapter 8 of [22]. Most of those techniques are designed for the supervised learning case. In our problem, given the online nature of the learning, the Schur complement method is more suitable and efficient.
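A sketch of the incremental inverse update implied by Eqs. (13)-(14): given the inverse of the old regularized Gram block, the inverse of the enlarged matrix is assembled from the Schur complement instead of being recomputed from scratch. The partitioning convention follows Eq. (14), and the toy check at the end is ours.

# Extend a matrix inverse by one block using the Schur complement (Eqs. (13)-(14)).
import numpy as np

def extend_inverse(C_inv, B, A):
    # Full matrix is [[A, B^T], [B, C]] with C the old block and C_inv = C^{-1}.
    # S = A - B^T C^{-1} B is the Schur complement of C (new block in the "A" corner).
    S = A - B.T @ C_inv @ B
    S_inv = np.linalg.inv(S)                 # small: size = number of new points
    top_left = S_inv
    top_right = -S_inv @ B.T @ C_inv
    bottom_left = top_right.T
    bottom_right = C_inv + C_inv @ B @ S_inv @ B.T @ C_inv
    return np.block([[top_left, top_right], [bottom_left, bottom_right]])

# toy check against a direct inverse
rng = np.random.default_rng(3)
M = rng.normal(size=(6, 6))
K = M @ M.T + 6 * np.eye(6)                                 # positive definite
C_inv = np.linalg.inv(K[1:, 1:])
full_inv = extend_inverse(C_inv, K[1:, :1], K[:1, :1])
print(np.allclose(full_inv, np.linalg.inv(K)))              # True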
D. Similarity
The kernel k_Z(z, z') that defines the similarities among the tasks/BSs plays a significant role in Algorithm 1. When k_Z(z_m, z_{m'}) = 1(m = m'), where 1 is the indicator function, Algorithm 1 is equivalent to running the contextual bandit independently for each BS. In this section, we discuss how to measure the similarity from real data when it is not provided.
Suppose the ground truth function for task i (i.e., BS i) is y = f_i(x); we need to define the similarity among different BSs based on the ground truth functions f_i(x). From a Bayesian view, y = f_i(x) corresponds to the conditional distribution P(Y_i | X_i). Therefore, we can use the conditional kernel embedding to map the conditional distributions to operators in a high-dimensional space, and then define the similarity based on it. Let us start with the definition of kernel embedding and conditional kernel embedding.
1) Conditional kernel embedding: Kernel embedding is a method in which a probability distribution is mapped to an element of a potentially infinite dimensional feature space, i.e., a reproducing kernel Hilbert space (RKHS) [23]. For a random variable in domain X with distribution P(X), suppose k : X × X → R is a positive definite kernel with corresponding RKHS H_X; the kernel embedding of X under k is defined as
ν_X = E_X[k(·, X)] = \int k(·, x) \, dP(x)   (15)
It is an element of H_X. For two random variables X and Y, suppose k : X × X → R and l : Y × Y → R are positive definite kernels with corresponding RKHSs H_X and H_Y, respectively. The kernel embedding of the conditional distribution P(Y | X = x) is:
ν_{Y|x} = E_Y[l(·, Y) | x] = \int l(·, y) \, dP(y|x)   (16)
It is an element of H_Y. Then, for the conditional probability P(Y|X), the kernel embedding is defined as a conditional operator C_{Y|X} : H_X → H_Y that satisfies Eq. (17):
ν_{Y|x} = C_{Y|X} k(x, ·)   (17)
If we have a data set {(x_i, y_i)}_{i=1}^{n} drawn i.i.d. from P(X, Y), the conditional kernel embedding operator can be estimated by

\hat{C}_{Y|X} = Ψ (K + λI)^{-1} Φ^T   (18)
where Ψ = (l(y_1, ·), · · · , l(y_n, ·)) and Φ = (k(x_1, ·), · · · , k(x_n, ·)) are the implicitly formed feature matrices, and K is the Gram matrix of x, i.e., (K)_{ij} = k(x_i, x_j).
The conditional kernel mean embedding thus provides a way to represent the conditional probability P(Y|X) as an operator between the spaces H_X and H_Y.
2) Similarity Calculation: In this section, we use the conditional kernel mean embedding to define the similarity space Z and the augmented context kernel k_Z in Eq. (7).
We define the task/BS similarity space as Z = P_{Y|X}, the set of all conditional probability distributions of Y given X. For task/BS i, its task descriptor is then its conditional distribution P(Y_i | X_i). We use the Gaussian-form kernel based on the conditional kernel embedding to define k_Z:
k_Z(P_{Y_i|X_i}, P_{Y_j|X_j}) = \exp\left( -||C_{Y_i|X_i} - C_{Y_j|X_j}||^2 / (2σ_Z^2) \right)   (19)
where ||·|| is the Frobenius norm, and C_{Y|X} is the conditional kernel embedding defined in Eq. (17), which can be estimated by Eq. (18). The hyper-parameter σ_Z can be heuristically set to the median of the Frobenius norms over the dataset. Eq. (18) is expressed in terms of explicit feature maps, which may be infinite dimensional. Next, we use the kernel trick to derive a form that does not involve explicit features.
For two data sets D_1 = {(x_i, y_i)}_{i=1}^{n_1} and D_2 = {(x_i, y_i)}_{i=1}^{n_2}, let k and l be two positive definite kernels with RKHSs H_X and H_Y, respectively. For data set D_m, we define Ψ_m = (l(y_1, ·), · · · , l(y_{n_m}, ·)) and Φ_m = (k(x_1, ·), · · · , k(x_{n_m}, ·)) as the implicitly formed feature matrices of y and x, and K_m = Φ_m^T Φ_m and L_m = Ψ_m^T Ψ_m as the Gram matrices of all x and y. We use U_{Y|X} and O_{Y|X} to denote the conditional kernel embeddings for D_1 and D_2, respectively. According to Eq. (18), we have
U_{Y|X} = Ψ_1 (K_1 + λI)^{-1} Φ_1^T,   O_{Y|X} = Ψ_2 (K_2 + λI)^{-1} Φ_2^T.

Then

||U_{Y|X} - O_{Y|X}||^2 = \mathrm{tr}(U_{Y|X}^T U_{Y|X}) - 2\,\mathrm{tr}(U_{Y|X}^T O_{Y|X}) + \mathrm{tr}(O_{Y|X}^T O_{Y|X})   (20)
Define matrices K_{12} and L_{12} by (K_{12})_{ij} = k(x_i, x_j) and (L_{12})_{ij} = l(y_i, y_j), where (x_i, y_i) is the i-th data point in D_1 and (x_j, y_j) is the j-th data point in D_2; K_{21} and L_{21} are defined analogously. Then, for the second term in Eq. (20),
\mathrm{tr}(U_{Y|X}^T O_{Y|X}) = \mathrm{tr}(Ψ_1 (K_1 + λI)^{-1} Φ_1^T Φ_2 (K_2 + λI)^{-1} Ψ_2^T)
= \mathrm{tr}((K_1 + λI)^{-1} Φ_1^T Φ_2 (K_2 + λI)^{-1} Ψ_2^T Ψ_1)
= \mathrm{tr}((K_1 + λI)^{-1} K_{12} (K_2 + λI)^{-1} L_{21})
After using the same trick for the other terms, Eq. (20) can be written as

||U_{Y|X} - O_{Y|X}||^2 = \mathrm{tr}((K_1 + λI)^{-1} K_1 (K_1 + λI)^{-1} L_1) - 2\,\mathrm{tr}((K_1 + λI)^{-1} K_{12} (K_2 + λI)^{-1} L_{21}) + \mathrm{tr}((K_2 + λI)^{-1} K_2 (K_2 + λI)^{-1} L_2)   (21)
Then we can use Eq. (21) in Eq. (19) to measure the similarity between tasks.
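Putting Eqs. (18)-(21) together, the task similarity of Eq. (19) can be computed purely from Gram matrices, for example as in the following sketch (RBF kernels for both x and y and all parameter values are our illustrative assumptions).

# Task similarity via the conditional-embedding distance of Eq. (21) and the kernel of Eq. (19).
import numpy as np

def rbf_gram(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def embedding_distance_sq(X1, Y1, X2, Y2, lam=1.0, gamma=0.5):
    K1, K2 = rbf_gram(X1, X1, gamma), rbf_gram(X2, X2, gamma)
    L1, L2 = rbf_gram(Y1, Y1, gamma), rbf_gram(Y2, Y2, gamma)
    K12, L21 = rbf_gram(X1, X2, gamma), rbf_gram(Y2, Y1, gamma)
    A1 = np.linalg.inv(K1 + lam * np.eye(len(X1)))
    A2 = np.linalg.inv(K2 + lam * np.eye(len(X2)))
    # Eq. (21): tr(A1 K1 A1 L1) - 2 tr(A1 K12 A2 L21) + tr(A2 K2 A2 L2)
    return (np.trace(A1 @ K1 @ A1 @ L1)
            - 2.0 * np.trace(A1 @ K12 @ A2 @ L21)
            + np.trace(A2 @ K2 @ A2 @ L2))

def task_similarity(X1, Y1, X2, Y2, sigma_Z=1.0, lam=1.0):
    d2 = embedding_distance_sq(X1, Y1, X2, Y2, lam)
    return np.exp(-d2 / (2.0 * sigma_Z ** 2))               # Eq. (19)

# toy usage: X_i are contexts, Y_i the rewards reshaped to column vectors
rng = np.random.default_rng(4)
X1, X2 = rng.normal(size=(30, 3)), rng.normal(size=(30, 3))
Y1, Y2 = np.sin(X1[:, :1]), np.sin(X2[:, :1]) + 0.1
print(task_similarity(X1, Y1, X2, Y2))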
V. THEORETICAL ANALYSIS
In this section, we provide a theoretical analysis of Algorithm 1 based on classical bandit analysis. The first part concerns the regret analysis and the second part concerns the multi-task-learning efficiency.
A. Regret Analysis
In Algorithm 1, at each time slot t, the trained model is used to make decisions for all BSs synchronously. This is not in the same form as the classical bandit model. In order to simplify the analysis, we introduce an asynchronous version in Algorithm 2: at each time t, it receives the context (state and action) of one BS together with its BS ID, denoted by V_t, which is used to identify the BS index m. Algorithm 2 then obtains the augmented context using V_t and makes a decision for that BS. In this manner, Algorithm 2 makes decisions for all BSs asynchronously. The performance of the synchronous and asynchronous methods is similar when the number of BSs is moderate and all BSs arrive in order, as in our case. The main per-slot steps of Algorithm 2 are: choose the arm a_t = \arg\max_a ucb_{a,t} for BS V_t; observe the reward r_{a_t,t}; and update y_t with r_{a_t,t}.
The regret of Algorithm 2 is defined by
R(T) = \sum_{m=1}^{M} \sum_{t=1}^{T} \left( r^{(m)}_{a_t^*,t} - r^{(m)}_{a_t,t} \right) 1(V_t = m)   (22)
In Algorithm 2, the estimated reward \hat{r}_{a_t,t} at time t can be regarded as a weighted sum of the historical rewards [r_{a_τ,τ}]_{τ<t}, which are dependent random variables. This does not satisfy the assumption in Theorem 1, so we cannot directly analyze the uncertainty of the prediction.
To address this issue, as in [14], [11], we design a base version (Algorithm 3) and a super version (Algorithm 4) of Algorithm 2 in order to meet the requirement of Theorem 1. Algorithm 4 constructs special, mutually exclusive subsets {Ψ_t^{(s)}}_{s=1}^{S} of the elapsed time to guarantee that the event {t ∈ Ψ_{t+1}^{(s)}} is independent of the rewards observed at times in Ψ_t^{(s)}. On each of these sets, it uses Algorithm 3 as a subroutine to obtain the estimated reward and the width of the upper confidence bound, in the same way as Algorithm 2.
Algorithm 3 Base asynchronous multi-BS configuration
1: Input: β, Ψ ⊂ {1, · · · , t - 1}
2: Calculate the Gram matrix \tilde{K}_Ψ and get y_Ψ = [r_{a_τ,τ}]_{τ∈Ψ}
3: Observe the BS ID V_t and the corresponding context features at time t: x_{a,t} for each a ∈ A
4: Determine the BS descriptor z_m and form the augmented context \tilde{x}_{a,t}
5: for all arms a ∈ A at time t do
6:    σ_{a,t} = \tilde{k}(\tilde{x}_{a,t}, \tilde{x}_{a,t}) - \tilde{k}^T_{a,Ψ} (\tilde{K}_Ψ + λI)^{-1} \tilde{k}_{a,Ψ}
7:    ucb_{a,t} = \hat{f}(\tilde{x}_{a,t}) + β σ_{a,t}
8: end for

Algorithm 4 Super asynchronous multi-BS configuration
1: Input: β, T ∈ N
2: Initialize S ← log T and the sets Ψ_1^{(s)}, s = 1, · · · , S
   ...
8:    if ω_{a,t} ≤ 1/\sqrt{T} for all a ∈ \hat{A}^{(s)} then
9:       Choose a_t = \arg\max_{a∈\hat{A}^{(s)}} ucb_{a,t}
10:   ...
      Φ_{t+1}^{(s)} ← Φ ...
18:   until a_t is found
19: Observe reward r_{a_t,t}
20: end for
The construction of Algorithms 3 and 4 follows a strategy similar to that in the proof of KernelUCB (see Theorem 1 in [13] or Theorem 1 in [19]). Then we can obtain the following theorem.
Theorem 3. Assume that r_{a,t} ∈ [0, 1] ∀a ∈ A, T ≥ 1, ||f^*||_{H_{\tilde{k}}} ≤ c_{\tilde{k}}, ∀\tilde{x} ∈ \tilde{X}, and that the task similarity matrix K_Z is known. With probability 1 - δ, the regret of Algorithm 4 satisfies

R(T) ≤ 2\sqrt{T} + 10\left( \sqrt{\log(2TN(\log(T) + 1)/δ)/2} + c\sqrt{λ} \right) …
B. Multi-task-learning Efficiency
In this section, we discuss the benefits of multi-task learning from a theoretical viewpoint.
In the asynchronous setting, i.e., Algorithms 2 and 4, because all BSs/tasks come in order, by time t each task has occurred n = t/M times. Let K_{X_t} be the Gram matrix of the original contexts [x^{(m)}_{a_τ,τ}]_{τ≤t, m∈M} and K_Z be the similarity matrix. Then, following Theorem 2 in [19], the following result holds.

Theorem 4. Define the rank of the matrix K_{X_{T+1}} as r_x and the rank of the matrix K_Z as r_z. Then

\log(g([T])) ≤ r_z r_x \log\left( \frac{(T + 1) c_{\tilde{k}} + λ}{λ} \right)   (23)

According to Eq. (23), if the rank of the similarity matrix is lower, which means the BSs/tasks have higher inter-task similarity, the regret bound is tighter.
We make the further assumption that all distinct tasks are similar to each other with task similarity equal to µ. Define g_µ([T]) as the corresponding value of g([T]) when all task similarities equal µ. According to Theorem 3 in [19], we have
Theorem 5. If µ_1 ≤ µ_2, then g_{µ_1}([T]) ≥ g_{µ_2}([T]).
This shows that when the BSs/tasks are more similar, the regret bound is tighter. In our case, running all tasks independently is equivalent to setting the similarity matrix to the identity matrix, i.e., µ = 0. So, based on the previous two theorems, we show the benefit of our algorithm using multi-task learning.
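As a small numeric illustration of Eq. (23) (with illustrative constants c_{\tilde{k}} = 1 and λ = 1, which are our assumptions), the bound on log(g([T])) shrinks as the rank of K_Z decreases:

# Evaluate the right-hand side of Eq. (23) for different ranks of the similarity matrix.
import math

def bound(r_z, r_x, T, c=1.0, lam=1.0):
    return r_z * r_x * math.log(((T + 1) * c + lam) / lam)

T, r_x = 1000, 10
print(bound(r_z=1, r_x=r_x, T=T))   # all BSs effectively identical: tightest bound
print(bound(r_z=3, r_x=r_x, T=T))   # three distinct BS "types": looser bound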
VI. EVALUATION
In this section, we evaluate the performance of the proposed approach in a simulator built on real network data. We start with the data collection and simulator construction procedure, and then discuss the numerical results.
A. Data Collection and Simulator Construction
Since testing a bandit algorithm requires an interactive environment, we build a network simulator based on data collected in real networks, which can provide feedback on the algorithm's actions.
The data is collected from real base station configuration experiments conducted by a service provider in a metropolitan city. Each data instance contains the following information: sample time, cell name and ID, network measurements (e.g., user number, CQI, average packet size, etc.), and configured parameter values. In the test, the configured parameter is the Reference Signal Received Power (RSRP) threshold for the Long-Term Evolution (LTE) A2 event during inter-frequency handover. Inter-frequency handover is a procedure for a BS to guarantee the user experience in a cellular network: if a BS observes that the RSRP of a user it serves is lower than the configured A2 threshold, it triggers the inter-frequency handover procedure for that user. The network utility that measures the performance is the ratio of users with throughput less than 5 Mbps, i.e., the performance of edge users. Some data samples are illustrated in Table I. In the problem, the configured parameter a_t is the RSRP threshold for the LTE A2 event during inter-frequency handover, as shown in the 7th column of Table I. The goal is to minimize the ratio of users with throughput less than 5 Mbps, as shown in column 8. To fit the maximization formulation, we define the negative value of this ratio as r_{a_t,t}. Further, based on field experience, 5 measurement metrics are carefully selected as the state s_t for each BS, including: number of downlink average active users, ratio of CQI index 0 reports (i.e., low CQI ratio), ratio of small-packet SDUs, ratio of small-packet traffic volume, and number of downlink average users, as shown in columns 2 to 6 of Table I. Then we define the context x_{a,t} = (s_t, a_t) for each BS to formulate it into a multi-task contextual bandit problem. The goal is to find the best configured parameter a that maximizes the reward r.
The input of the simulator is a query state S * and a configuration parameter value A * . The output is the corresponding reward r. The simulator estimates the rewards for different states and configurations using the following method. For the query state S * and the configuration parameter A * , we search for all samples (S i , A i , R i ) in the data set, and compute a similarity score between (S * , A * ) and (S i , A i ). The similarity score is calculated based on the Euclidean distance between (S * , A * ) and (S i , A i ). We sort the samples according to the similarity score and choose the top-k samples. The average reward of the top-k samples is used as the return of the simulator.
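A sketch of this top-k nearest-neighbor simulator; the field layout, the random toy logs, and the value of k are illustrative assumptions, not the provider's actual traces.

# Trace-driven simulator: average reward of the k most similar logged samples.
import numpy as np

def simulate_reward(query_state, query_action, S_log, A_log, R_log, k=5):
    q = np.concatenate([query_state, [query_action]])
    samples = np.column_stack([S_log, A_log])
    dists = np.linalg.norm(samples - q, axis=1)      # similarity score = negative Euclidean distance
    top_k = np.argsort(dists)[:k]                    # the k closest logged (state, action) samples
    return float(np.mean(R_log[top_k]))              # returned reward estimate

# toy usage with synthetic logs (5 state metrics + RSRP threshold action)
rng = np.random.default_rng(5)
S_log = rng.normal(size=(500, 5))
A_log = rng.uniform(-112, -84, size=500)
R_log = -rng.uniform(0, 0.3, size=500)               # negative edge-user ratio as reward
print(simulate_reward(S_log[0], -100.0, S_log, A_log, R_log))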
B. Evaluation Setup and Results
As described in the last section, the dimension of the state space is 5. The action space ranges from -112 dBm to -84 dBm with 1 dBm resolution, i.e., the number of arms in our model is 29. The reward space is R. We use 3 different BSs generated by the simulator, indexed by {0, 1, 2}, i.e., BS 0, BS 1, BS 2, to test our algorithms. Based on the definition of similarity in Sec. IV-D, we train the similarity matrix K_Z among them.
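Using the task_similarity sketch from Sec. IV-D above, the 3×3 matrix K_Z could be assembled from the logged (context, reward) pairs of the three BSs roughly as follows; the data arrays here are synthetic placeholders, not the paper's traces.

# Assemble the BS similarity matrix K_Z from pairwise conditional-embedding similarities.
import numpy as np

def build_K_Z(datasets, sigma_Z=1.0, lam=1.0):
    M = len(datasets)
    K_Z = np.eye(M)                                   # a task is fully similar to itself (Eq. (19) gives 1)
    for i in range(M):
        for j in range(i + 1, M):
            Xi, Yi = datasets[i]
            Xj, Yj = datasets[j]
            K_Z[i, j] = K_Z[j, i] = task_similarity(Xi, Yi, Xj, Yj, sigma_Z, lam)
    return K_Z

rng = np.random.default_rng(6)
datasets = [(rng.normal(size=(40, 6)), rng.normal(size=(40, 1))) for _ in range(3)]
print(build_K_Z(datasets))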
We first test the multi-task learning case. Fig. 3 shows the result for Algorithm 1 using the similarity matrix K_Z. It compares the accumulated regret of the case with multi-task learning and the case that models the BSs as independent contextual bandit problems. We use KernelUCB [13] for the independent contextual bandit problems as a baseline. Here, the accumulated regrets shown in Fig. 3 are the sums of the accumulated regrets of the three BSs. Further, each data point is the average result of 20 independent simulations. It can be seen that, when multi-task learning is used, the regret grows much more slowly than in the case where the BSs run independently. At the end of the 1000 time slots, multi-task learning decreases the regret by 35%. In order to compare the multi-task-learning efficiency of Algorithm 1 for one BS in different multi-task scenarios, we test the following cases for BS 0 with online learning data from itself and one other BS: (BS 0 and BS 1), (BS 0 and BS 2). We also test a case with a BS that is identical to BS 0, resulting in an online learning data set from (BS 0, BS 0); we can regard this as the optimal multi-task learning case. In Fig. 4, we measure the performance of Algorithm 1 only by the regret of one BS, i.e., BS 0, in the above-mentioned learning scenarios, which is different from the total regret of all BSs in Fig. 3.
In Fig. 4, we find that the regret of BS 0's default configuration quickly grows out of the bound of the figure. The default configuration is the one used in the present network; it is a fixed parameter and does not change with the state of the BS. The line 'single BS 0' is the result of KernelUCB [13] for an independent contextual bandit model for BS 0. Except for the ideal multi-task case (BS 0, BS 0), the case (BS 0, BS 1) has a better multi-task-learning efficiency, in which BS 0 has lower regret. Through experiments, we also find that BS 1 has a higher similarity with BS 0; specifically, the similarity is 0.811. Hence, we can see that the conditional kernel embedding is a reasonable similarity measure in this problem.
VII. CONCLUSION
In this work, in order to address the multi-BS network configuration problem, we propose a kernel-based multi-task contextual bandits algorithm that leverages the similarity among BSs effectively. In the algorithm, we also provide an approach to measure the similarity among tasks based on conditional kernel embedding. Furthermore, we present theoretical bounds for the proposed algorithm in terms of regret and multi-task-learning efficiency. They show that the regret bound is tighter when the learning tasks are more similar. We also evaluate the effectiveness of our algorithm on the real problem, based on a simulator built from real traces. Future work includes possible experimental evaluations in real field tests and further studies on the impact of different similarity metrics.
| 6,526 |
1811.10902
|
2903456962
|
Cellular network configuration plays a critical role in network performance. In current practice, network configuration depends heavily on field experience of engineers and often remains static for a long period of time. This practice is far from optimal. To address this limitation, online-learning-based approaches have great potential to automate and optimize network configuration. Learning-based approaches face the challenges of learning a highly complex function for each base station and balancing the fundamental exploration-exploitation tradeoff while minimizing the exploration cost. Fortunately, in cellular networks, base stations (BSs) often have similarities even though they are not identical. To leverage such similarities, we propose a kernel-based multi-BS contextual bandit algorithm based on multi-task learning. In the algorithm, we leverage the similarity among different BSs defined by conditional kernel embedding. We present theoretical analysis of the proposed algorithm in terms of regret and multi-task-learning efficiency. We evaluate the effectiveness of our algorithm based on a simulator built from real traces.
|
Recently, learning-based methods have been proposed @cite_0 @cite_19 @cite_10 @cite_4 . In @cite_0 , the authors propose a tailored form of reinforcement learning to adaptively select the optimal antenna configuration in a time-varying environment. In @cite_4 , the authors use Q-learning with a compact state representation for traffic offloading. In @cite_10 , the authors design a generalized global bandit algorithm to control the transmit power in the cellular coverage optimization problem. In all these papers, BS similarities are not considered, and thus more exploration is required. In @cite_19 , the authors study the pilot power configuration problem and design a Gibbs-sampling-based online learning algorithm to maximize the throughput of users. In comparison, they assume that all BSs are equal, while we allow different BSs to learn different mappings.
|
{
"abstract": [
"",
"Cellular network configuration is critical for network performance. Current practice is labor-intensive, error-prone, and far from optimal. To automate efficient cellular network configuration, in this work, we propose an online-learning-based joint-optimization approach that addresses a few specific challenges: limited data availability, convoluted sample data, highly complex optimization due to interactions among neighboring cells, and the need to adapt to network dynamics. In our approach, to learn an appropriate utility function for a cell, we develop a neural-network-based model that addresses the convoluted sample data issue and achieves good accuracy based on data aggregation. Based on the utility function learned, we formulate a global network configuration optimization problem. To solve this high-dimensional non-concave maximization problem, we design a Gibbs-sampling-based algorithm that converges to an optimal solution when a technical parameter is small enough. Furthermore, we design an online scheme that updates the learned utility function and solves the corresponding maximization problem efficiently to adapt to network dynamics. To illustrate the idea, we use the case study of pilot power configuration. Numerical results illustrate the effectiveness of the proposed approach.",
"Motivated by the engineering problem of cellular coverage optimization, we propose a novel multiarmed bandit model called generalized global bandit. We develop a series of greedy algorithms that have the capability to handle nonmonotonic but decomposable reward functions, multidimensional global parameters, and switching costs. The proposed algorithms are rigorously analyzed under the multiarmed bandit framework, where we show that they achieve bounded regret, and hence, they are guaranteed to converge to the optimal arm in finite time. The algorithms are then applied to the cellular coverage optimization problem to achieve the optimal tradeoff between sufficient small cell coverage and limited macroleakage without prior knowledge of the deployment environment. The performance advantage of the new algorithms over existing bandits solutions is revealed analytically and further confirmed via numerical simulations. The key element behind the performance improvement is a more efficient “trial and error” mechanism, in which any trial will help improve the knowledge of all candidate power levels.",
"This paper first provides a brief survey on existing traffic offloading techniques in wireless networks. Particularly as a case study, we put forward an online reinforcement learning framework for the problem of traffic offloading in a stochastic heterogeneous cellular network (HCN), where the time-varying traffic in the network can be offloaded to nearby small cells. Our aim is to minimize the total discounted energy consumption of the HCN while maintaining the quality-of-service (QoS) experienced by mobile users. For each cell (i.e., a macro cell or a small cell), the energy consumption is determined by its system load, which is coupled with system loads in other cells due to the sharing over a common frequency band. We model the energy-aware traffic offloading problem in such HCNs as a discrete-time Markov decision process (DTMDP). Based on the traffic observations and the traffic offloading operations, the network controller gradually optimizes the traffic offloading strategy with no prior knowledge of the DTMDP statistics. Such a model-free learning framework is important, particularly when the state space is huge. In order to solve the curse of dimensionality, we design a centralized @math -learning with compact state representation algorithm, which is named @math -learning. Moreover, a decentralized version of the @math -learning is developed based on the fact the macro base stations (BSs) can independently manage the operations of local small-cell BSs through making use of the global network state information obtained from the network controller. Simulations are conducted to show the effectiveness of the derived centralized and decentralized @math -learning algorithms in balancing the tradeoff between energy saving and QoS satisfaction."
],
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_10",
"@cite_4"
],
"mid": [
"",
"2784007778",
"2791827017",
"2125890412"
]
}
|
Kernel-based Multi-Task Contextual Bandits in Cellular Network Configuration
|
In Algorithm 1, at each time slot t, it uses the trained model to make a decision for all BSs synchronously. This is not in the same form of classical bandit model. In order to simply the analysis, we make an asynchronous version in Algorithm 2, in which at each time t, it receives the context (state and action) of one BS with its BS ID, denoted by V t , that is used to identify the BS index m. Then algorithm 2 obtains the augment context using V t and then makes a decision for the BS. In this manner, Algorithm 2 makes a decision for all BSs asynchronously. The performance of synchronous and asynchronous methods are similar when the number of BSs is moderate and all BSs come in order, as in our case. Choose arm a t = arg max ucb a,t for BS V t
9:
Observe reward r at,t 10:
Update y t by r at,t 11: end for
The regret of Algorithm 2 is defined by
R(T ) = M m=1 T t=1 (r (m) a * t ,t − r (m) at,t )1(V t = m)(22)
In Algorithm 2, the estimated rewardr at,t at time t can be regarded as the sum of variables in history [r aτ ,τ ] τ <t that are dependent random variables. It does not meet the assumption in Theorem 1, thus we are unable to analysis the uncertainty of the prediction.
To address this issue, as in [14], [11], we design the base version (Algorithm 3) and super version (Algorithm 4) of Algorithm 2 in order to meet the requirement of Theorem 1. In Algorithm 4, it constructs special, mutually exclusive subsets {Ψ(s)} S of ts the elapsed time to guarantee the event {t ∈ Ψ (s) t+1 } is independent of the rewards observed at times in Ψ (s) t . On each of these sets, it uses Algorithm 3 as subroutine to obtain the estimated reward and width of the upper confident bound which is the same as Algorithm 2.
Algorithm 3 Base asynchronous multi-BS configuration 1: Input: β, Ψ ⊂ {1, · · · , t − 1} 2: Calculate Gram matrixK Ψ and get y Ψ = [r aτ ,τ ] τ ∈Ψ 3: Observe the BS ID V t and corresponding context features at time t: x a,t for each a ∈ A 4: Determine the BS descriptor z m and get the augmented contextx a,t 5: for all arm a in A at time t do 6:
σ a,t = k (x a,t ,x a,t ) −k T a,Ψ (K Ψ + λI)k a,Ψ 7:
ucb a,t =f (x a,t ) + βσ a,t 8: end for Algorithm 4 Super asynchronous multi-BS configuration 1: Input: β, T ∈ N 2: Initialize S ← log T and Ψ if ω a,t ≤ 1 √ T for all a ∈Â (s) then 9:
Choose a t = arg max a∈Â (s) ucb a,t 10: until a t is found 19: Observe reward r at,t 20: end for
Φ (s) t+1 ← Φ
The construction of Algorithm 3 and Algorithm 4 follow similar strategy of that in the proof of KernelUCB (see Theorem 1 in [13] or Theorem 1 in [19]). Then we can get the following theorem.
Theorem 3. Assume that r a,t ∈ [0, 1], ∀a ∈ A, T ≥ 1, ||f * || Hk ≤ ck, ∀x ∈X and tasks similarity matrix K Z is known. With probability 1 − δ, the regret of Algorithm 4 satisfies,
R(T ) ≤ 2 √ T + 10( log(2T N (log(T ) + 1)/δ)) 2 + c √ λ)
B. Multi-task-learning Efficiency
In this section, we discuss the benefits of multi-task learning from the theoretical view point.
In the asynchronous setting, i.e., Algorithm 2 and Algorithm 4, because all BSs/tasks come in order, at time t, each task happens n = t M times. Let K Xt be Gram matrix of [x (m) aτ ,τ ] τ ≤t,m∈M i.e., original context, K Z be the similarity matrix. Then, following Theorem 2 in [19], the following results hold, Theorem 4. Define the rank of matrix K X T +1 as r x and the rank of matrix K Z as r z . Then log(g([T ])) ≤ r z r x log (T + 1)ck + λ λ According to Eq. (23), if the rank of similarity matrix is lower, which means all BSs/tasks have higher inter-task similarity, the regret bound is tighter.
We make the further assumption that all distinct tasks are similar to each other with task similarity equal to µ. Define g µ ([T ]) as the corresponding value of g([T ]) when all task similarity equal to µ. According to Theorem 3 in [19], we have
Theorem 5. If µ 1 ≤ µ 2 , then g µ1 ([T ]) ≥ g µ2 ([T ])
This shows that when BSs/tasks are more similar, the regret bound is tighter. In our case, running all task independently is equivalent to setting the similarity as an identify matrix, i.e., µ = 0. So, based on the previous two theorems, we show the benefits of our algorithm using the multi-task learning.
VI. EVALUATION
In this section, we evaluate the performance of the proposed approach in a simulator built on the real network data. We start with the data collection and simulator construction procedure, then discuss about the numerical results.
A. Data Collection and Simulator Construction
Since test of bandit algorithm requires an interactive environment, we build a network simulator based on data collected in real networks, which can provide feedback on the algorithm's action.
The data is collected in the real base station configuration experience conducted by a service provider in a metropolitan city. Each data instance contains the following information: sample time, cell name and ID, network measurements (e.g., user number, CQI, average packet size, etc), and configured parameter values. In the test, the configured parameter is the Reference Signal Received Quality (RSRP) threshold for Long-Term Evolution (LTE) A2 event during inter-frequency handover. Inter-frequency handover is a procedure for a BS to guarantee the user experience in cellular network. If one BS observes RSRP of a user it serves is lower than the The network utility that measures the performance is the ratio of users with throughput less than 5 Mbps, i.e., the performance of edge users. Some data samples are illustrated in Table. I. In the problem, the configured parameter a t is the RSRP threshold for LTE A2 event during inter-frequency handover, as shown in the 7th column of Table I. The goal is to minimize the ratio of users with throughput less than 5Mbps, as shown in column 8. To accord with the maximization problem, we define the negative value of this ratio as r at,t . Further, based on field experience, 5 measurement metrics are carefully selected as state s t for each BS, including: number of downlink average active users, ratio of CQI index 0 reports (i.e., low CQI ratio), ratio of small packet SDUs, ratio of small packet traffic volume, number of downlink average users as shown in column 2 to column 6 in Table. I. Then we define the context x a,t = (s t , a t ) for each BS to formulate it into a multi-task contextual bandit problem. The goal is to find the best configured parameter a that maximize the reward r.
The input of the simulator is a query state S * and a configuration parameter value A * . The output is the corresponding reward r. The simulator estimates the rewards for different states and configurations using the following method. For the query state S * and the configuration parameter A * , we search for all samples (S i , A i , R i ) in the data set, and compute a similarity score between (S * , A * ) and (S i , A i ). The similarity score is calculated based on the Euclidean distance between (S * , A * ) and (S i , A i ). We sort the samples according to the similarity score and choose the top-k samples. The average reward of the top-k samples is used as the return of the simulator.
B. Evaluation Setup and Results
As we described in last section, the dimension of the state space is 5. The action space is from -112 dBm to -84 dBm with 1 dBm resolution, i.e., the number of arms in our model is 29. The reward space is R. We use 3 different BSs generated by the simulator and indexed them by {0, 1, 2}, i.e., BS 0, BS 1, BS 2, to test our algorithms. Based on the definition of similarity in Sec. IV-D, we can train the similarity matrix K Z among them, leading to the following result,
We test the multi-task learning case. In Fig. 3, the result for Algorithm 1 using similarity matrix K Z is shown. It compares the accumulated regret of the case with multitask learning and the case that models BSs as independent contextual bandits problems. We use KernelUCB [13] for the independent contextual bandits problems as a baseline. Here, the accumulated regrets shown in Fig. 3 are the sum of the accumulated regrets of the three BSs. Further, each data point is the average result of 20 individual simulations. It can be shown that, when the multi-task learning is used, the regret increases much slower than the case where BSs run independently. At the end of the 1000 time slots, the multi-task learning decreases 35% of the regret in the final. In order to compare the multi-task-learning efficiency of Algorithm 1 for one BS in different multi-task scenarios, we test the following cases for BS 0 with online learning data from itself and other different BS: (BS 0 and BS 1), (BS 0 and BS 2). We also test a case with a BS that is identical to BS 0, resulting in an online learning data set from (BS 0, BS 0). We can regard this as the optimal multitask learning case. In Fig. 4, we measure the performance of Algorithm 1 only by the regret of one BS, i.e., BS 0 in the above mentioned learning scenarios, which is different from the total regret of all BSs in Fig. 3.
In Fig. 4, we find that, the regret of BS 0's default configuration grows out of the bound of the figure quickly. The default configuration is used in present network. It's a fixed parameter and do not change by the state of the BS. The line 'single BS 0' is the result of KernelUCB [13] for a independent contextual bandit model for BS 0. Expect the ideal multi-task case (BS 0, BS 0), the case (BS 0, BS 1) has a better multi-task-learning efficiency, in which the BS 0 has lower regret. Through experiment, we also find that BS 1 has a better similarity with BS 0. To be specific, the similarity is 0.811. So that we can see conditional kernelembedding is a reasonable similarity in this problem. VII. CONCLUSION In this work, in order to address the multi-BS network configuration problem, we propose a kernel-based multi-task contextual bandits algorithm that leverages the similarity among BSs effectively. In the algorithm, we also provided an approach to measure the similarity among tasks based on conditional kernel embedding. Furthermore, we present theoretical bounds for the proposed algorithm in terms of regret and multi-task-learning efficiency. It shows that the bound of regret is tighter if the learning tasks are more similar. We also evaluate the effectiveness of our algorithm on the real problem, based on a simulator built by real traces. Future work includes possible experimental evaluations in real field tests and further studies on the impact of different similarity metrics.
| 6,526 |
1811.10902
|
2903456962
|
Cellular network configuration plays a critical role in network performance. In current practice, network configuration depends heavily on field experience of engineers and often remains static for a long period of time. This practice is far from optimal. To address this limitation, online-learning-based approaches have great potentials to automate and optimize network configuration. Learning-based approaches face the challenges of learning a highly complex function for each base station and balancing the fundamental exploration-exploitation tradeoff while minimizing the exploration cost. Fortunately, in cellular networks, base stations (BSs) often have similarities even though they are not identical. To leverage such similarities, we propose kernel-based multi-BS contextual bandit algorithm based on multi-task learning. In the algorithm, we leverage the similarity among different BSs defined by conditional kernel embedding. We present theoretical analysis of the proposed algorithm in terms of regret and multi-task-learning efficiency. We evaluate the effectiveness of our algorithm based on a simulator built by real traces.
|
Contextual bandit @cite_16 is an extension of classic multi-armed bandit (MAB) problem @cite_8 . One type of algorithm is the UCB-type such as Lin-UCB @cite_6 , Kernel-UCB @cite_18 , in which they assume the reward is a function of the context and trade off between the exploitation and exploration based on upper confident bound of the estimation @cite_22 . The contextual bandit is also widely used in many application areas, such as news article recommendation @cite_6 , clinical trials @cite_14 .
|
{
"abstract": [
"We tackle the problem of online reward maximisation over a large finite set of actions described by their contexts. We focus on the case when the number of actions is too big to sample all of them even once. However we assume that we have access to the similarities between actions' contexts and that the expected reward is an arbitrary linear function of the contexts' images in the related reproducing kernel Hilbert space (RKHS). We propose KernelUCB, a kernelised UCB algorithm, and give a cumulative regret bound through a frequentist analysis. For contextual bandits, the related algorithm GP-UCB turns out to be a special case of our algorithm, and our finite-time analysis improves the regret bound of GP-UCB for the agnostic case, both in the terms of the kernel-dependent quantity and the RKHS norm of the reward function. Moreover, for the linear kernel, our regret bound matches the lower bound for contextual linear bandits.",
"Multi-armed bandit problems (MABPs) are a special type of optimal control problem well suited to model resource allocation under uncertainty in a wide variety of contexts. Since the first publication of the optimal solution of the classic MABP by a dynamic index rule, the bandit literature quickly diversified and emerged as an active research topic. Across this literature, the use of bandit models to optimally design clinical trials became a typical motivating application, yet little of the resulting theory has ever been used in the actual design and analysis of clinical trials. To this end, we review two MABP decision-theoretic approaches to the optimal allocation of treatments in a clinical trial: the infinite-horizon Bayesian Bernoulli MABP and the finite-horizon variant. These models possess distinct theoretical properties and lead to separate allocation rules in a clinical trial design context. We evaluate their performance compared to other allocation rules, including fixed randomization. Our results indicate that bandit approaches offer significant advantages, in terms of assigning more patients to better treatments, and severe limitations, in terms of their resulting statistical power. We propose a novel bandit-based patient allocation rule that overcomes the issue of low power, thus removing a potential barrier for their use in practice.",
"We show how a standard tool from statistics --- namely confidence bounds --- can be used to elegantly deal with situations which exhibit an exploitation-exploration trade-off. Our technique for designing and analyzing algorithms for such situations is general and can be applied when an algorithm has to make exploitation-versus-exploration decisions based on uncertain information provided by a random process. We apply our technique to two models with such an exploitation-exploration trade-off. For the adversarial bandit problem with shifting our new algorithm suffers only O((ST)1 2) regret with high probability over T trials with S shifts. Such a regret bound was previously known only in expectation. The second model we consider is associative reinforcement learning with linear value functions. For this model our technique improves the regret from O(T3 4) to O(T1 2).",
"Reinforcement learning policies face the exploration versus exploitation dilemma, i.e. the search for a balance between exploring the environment to find profitable actions while taking the empirically best action as often as possible. A popular measure of a policy's success in addressing this dilemma is the regret, that is the loss due to the fact that the globally optimal policy is not followed all the times. One of the simplest examples of the exploration exploitation dilemma is the multi-armed bandit problem. Lai and Robbins were the first ones to show that the regret for this problem has to grow at least logarithmically in the number of plays. Since then, policies which asymptotically achieve this regret have been devised by Lai and Robbins and many others. In this work we show that the optimal logarithmic regret is also achievable uniformly over time, with simple and efficient policies, and for all reward distributions with bounded support.",
"",
"We present Epoch-Greedy, an algorithm for multi-armed bandits with observable side information. Epoch-Greedy has the following properties: No knowledge of a time horizon @math is necessary. The regret incurred by Epoch-Greedy is controlled by a sample complexity bound for a hypothesis class. The regret scales as @math or better (sometimes, much better). Here @math is the complexity term in a sample complexity bound for standard supervised learning."
],
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_8",
"@cite_6",
"@cite_16"
],
"mid": [
"2950238385",
"1916369423",
"2108114251",
"2168405694",
"",
"2119850747"
]
}
|
Kernel-based Multi-Task Contextual Bandits in Cellular Network Configuration
|
With the development of mobile Internet and the rising number of smart phones, recent years have witnessed a significant growth in mobile data traffic [1]. To satisfy the increasing traffic demand, cellular providers are facing increasing pressure to further optimize their networks. Along this line, one critical aspect is cellular base station (BS) configuration. In cellular networks, a BS is a piece of network equipment that provides service to mobile users in its geographical coverage area (similar to a WiFi access point, but much more complex), as shown in Figure 1. Each BS has a large number of parameters to configure, such as spectrum band, power configuration, antenna setting, and user hand-off threshold. These parameters have a significant impact on the overall cellular network performance, such as user throughput or delay. For instance, the transmit power of a BS determines its coverage and affects the throughput of all users it serves.
In current practice, cellular configuration needs manual adjustment and is mostly decided based on the field experience of engineers . Network configuration parameters typically remain static for a long period of time, even years, unless severe performance problems arise. This is clearly not optimal in terms of network performance: different base stations have different deployment environments (e.g., geographical areas), and the conditions of each BS (e.g., the number of users) also change over time. Therefore, as shown in Figure 1, setting appropriate parameters for each deployed BS based on its specific conditions could significantly help the industry to optimize its networks. A natural way of achieving this goal is to apply onlinelearning-based algorithms in order to automate and optimize network configuration. Online-learning-based cellular BS configuration faces multiple challenges. First, the mapping between network configuration and performance is highly complex. Since different BSs have different deployment environments, they have different mappings between network configuration and performance, given a BS condition. Furthermore, for a given BS, its condition also changes over time due to network dynamics, leading to different optimal configurations at different points in time. In addition, for a given BS and given condition, the impact of network configuration on performance is too complicated to model using white-box analysis due to the complexity and dynamics of network environment, user diversity, traffic demand, mobility, etc. Second, to learn this mapping and to optimize the network performance over a period of time, operators face a fundamental exploitation-exploration tradeoff: in this case, exploitation means to use the best known configuration that benefits immediate performance but may overlook better configurations that are unknown; and exploration means to experiment with unknown or uncertain configurations which may have a better performance in the long run, at the risk of a potentially lower immediate performance. Furthermore, running experiments in cellular networks is disruptive -users suffer poor performance under poor configurations. Thus, providers are often conservative when running experiments and would prefer to reduce the number of explorations needed in each BS. Fortunately, in a cellular network, BSs usually have similarities, even though they are not identical. Therefore, it would be desirable to effectively leverage data from different BSs by exploiting such similarities. To address these challenges, we consider multiple BSs jointly and formulate the corresponding configuration problem as a multi-task on-line learning framework as shown in Figure 2. The key idea is to leverage information from multiple BSs to jointly learn a model that maps the network state and its configuration to performance. The model is then customized to each BS based on its characteristics. Furthermore, the model also allows the BSs to balance the tradeoff between the exploration and exploitation of the different configuration. Specifically, we propose a kernelbased multi-BS contextual bandits algorithm that can leverage similarity among BSs to automate and optimize cellular network configuration of multiple BSs simultaneously. Our contributions are multi-fold:
• We develop a kernel-based multi-task contextual bandits algorithm to optimize cellular network configuration. The key idea is to explore similarities among BSs to make intelligent decisions about network configurations in a sequential manner.
• We propose a method to estimate the similarity among the BSs based on conditional kernel embedding.
• We present theoretical guarantees for the proposed algorithm in terms of regret and multi-task-learning efficiency.
• We evaluate our algorithm using real traces. Our proposed algorithm outperforms bandit algorithms that do not use multi-task learning by up to 35%.
The rest of the paper is organized as follows. The related work is in Sec. II. We introduce the system model and problem formulation in Sec. III. We present the kernel-based multi-BS contextual bandit algorithm in Sec. IV. The theoretical analysis of the algorithm is in Sec. V. We demonstrate the numerical results in Sec. VI and conclude in Sec. VII.
III. SYSTEM MODEL AND PROBLEM FORMULATION
In this section, firstly, we describe the detail of the multi-BS configuration problem. Then we formulate the problem as a multi-task contextual bandits model.
A. Multi-BS Configuration
In this work, we focus on the multi-BS network configuration problem. Specifically, we consider a set of BSs $\mathcal{M} := \{1, \cdots, M\}$ in a network. The time of the system is discretized over a time horizon of $T$ slots. At time slot $t$, $\forall t \in \mathcal{T} := \{1, \cdots, T\}$, for each BS $m \in \mathcal{M}$, its state is represented by a vector $s^{(m)}_t$; the operator then selects a configuration (action) $a^{(m)}_t$ and observes a reward $r^{(m)}_{a_t,t} \in \mathbb{R}$, which is a measure of network performance. In practice, the configuration parameters can include pilot power, antenna direction, handoff threshold, etc. The reward can be a metric of network performance, such as uplink throughput, downlink throughput, or quality-of-service scores. The time granularity of the system is decided by network operators. In current practice, configurations can be updated daily during midnight maintenance hours. To further improve network performance, network operators are moving towards more frequent network configuration updates, e.g., on an hourly basis, based on network states.
The goal of the problem is to find the configuration $a^{(m)}_t$ for each BS $m$ and each time slot $t$ that maximizes the total accumulated reward over the horizon, as formalized in Eq. (1).
In this problem, for a given BS and a given state, we do not have prior knowledge of the reward of an action; we need to learn such a mapping during the time horizon. In other words, the action chosen at each time slot and the corresponding reward also affect future actions. Therefore, there exists a fundamental exploitation-exploration tradeoff: exploitation is to use the best learned configuration that benefits the immediate reward but may overlook better configurations that are unknown; and exploration is to experiment with unknown or uncertain configurations which may have a better reward in the long run, at the risk of a potentially lower immediate reward.
Furthermore, we note that the action of one BS can be affected by the information of other BSs. Therefore, the information from multiple BSs should be leveraged jointly to optimize the problem in (1). Also, note that the BSs are similar but not identical. Therefore, the similarity of BSs need to be explored and leveraged to optimize the network configuration.
In summary, the goal of the multi-BS configuration problem is to choose appropriate actions for all time slots and all BSs to maximize the objective defined in Eq. (1).
B. Multi-Task Contextual Bandit
We model the problem as multi-task contextual bandits. Now, we briefly introduce the classical bandit model and contextual bandit model.
Multi-armed bandit (MAB) [11] is a powerful tool for sequential decision making scenarios where, at each time step, a learning task pulls one of the arms and observes an instantaneous reward that is independently and identically distributed (i.i.d.) according to a fixed but unknown distribution. The task's objective is to maximize its cumulative reward by balancing the exploitation of those arms that have yielded high rewards in the past and the exploration of new arms that have not been tried. The contextual bandit model [10] is an extension of the MAB in which each arm is associated with side information, called the context. The distribution of rewards for each arm is related to the associated context. The task is to learn the arm selection strategy by leveraging the contexts to predict the expected reward of each arm. Specifically, in the contextual bandit, over a time horizon of $T$ slots, at each time $t$, the environment reveals a context $x_{a,t} \in \mathbb{R}^p$ for each arm $a \in \mathcal{A}$. If the learner selects and pulls an arm $a_t \in \mathcal{A}$, it receives a reward $r_{a_t,t}$ from the environment. At the end of time slot $t$, the learner improves its arm selection strategy based on the new observation $\{x_{a_t,t}, r_{a_t,t}\}$. At time $t$, the best arm is defined as $a^*_t = \arg\max_{a \in \mathcal{A}} E(r_{a,t} \mid x_{a,t})$ and the corresponding reward is $r_{a^*_t,t}$. The regret at time $T$ is defined as the sum of the gaps between the optimal reward and the received reward over the $T$ time slots, as given in Eq. (2). The goal of maximizing the accumulated reward $\sum_{t=1}^{T} r_{a_t,t}$ is equivalent to minimizing the regret.
$R(T) = \sum_{t=1}^{T} \big(r_{a^*_t,t} - r_{a_t,t}\big)$ (2)
Based on the classical contextual bandit problem, we propose a multi-task contextual bandit model. Consider a set of tasks $\mathcal{M} := \{1, \cdots, M\}$; each task $m \in \mathcal{M}$ can be seen as a standard contextual bandit problem. More specifically, in task $m$, at each time $t$, for each arm $a \in \mathcal{A}$, there is an associated context vector $x^{(m)}_{a,t}$. We also define the best arm for task $m$ at time $t$ as $a^*_t$. At the end of each time slot, the learner improves its arm selection based on the new observations $\{(x^{(m)}_{a_t,t}, r^{(m)}_{a_t,t}) \mid m \in \mathcal{M}\}$, and the multi-task regret is defined as
$R(T) = \sum_{m=1}^{M} \sum_{t=1}^{T} \big(r^{(m)}_{a^*_t,t} - r^{(m)}_{a_t,t}\big)$ (3)
We can formulate the multi-BS configuration problem as a multi-task contextual bandit. We regard the configuration optimization problem for one BS as one task. Specifically, for each BS $m$, at time $t$, the context associated with arm $a$ is the combination of the state and the action, i.e., $x^{(m)}_{a,t} = (s^{(m)}_t, a^{(m)}_t)$. Then the goal of finding the best arms that maximize the total accumulated reward in Eq. (1) is equivalent to minimizing the regret defined in Eq. (3).
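For concreteness, the following minimal sketch (not the authors' code; the action grid and state values are illustrative assumptions taken from the evaluation setup described later) shows how a BS state vector and a candidate configuration value are combined into the per-arm contexts $x^{(m)}_{a,t} = (s^{(m)}_t, a)$ used by the bandit.

```python
import numpy as np

# Candidate RSRP thresholds in dBm (29 arms), as in the evaluation section.
ACTIONS = np.arange(-112, -83)

def make_contexts(state: np.ndarray) -> np.ndarray:
    """Return one context per arm: shape (num_arms, state_dim + 1)."""
    return np.stack([np.concatenate([state, [a]]) for a in ACTIONS])

# Example: a hypothetical 5-dimensional BS state (active users, low-CQI ratio, ...).
state = np.array([23.0, 0.04, 0.31, 0.12, 40.0])
contexts = make_contexts(state)
print(contexts.shape)  # (29, 6)
```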
IV. METHODOLOGY
Most existing work on contextual bandit problems, such as LinUCB [12] and KernelUCB [13], assumes the reward is a function of the context, i.e., $r_{a_t,t} = f(x_{a_t,t})$. At each time slot $t$, these algorithms use the estimated function $\hat f(\cdot)$ to predict the reward of each arm according to the contexts at time $t$, i.e., $\{x_{a,t}\}_{a \in \mathcal{A}}$. Based on the value and uncertainty of the prediction, they calculate the upper confidence bound (UCB) of each arm. Then they select the arm $a_t$ that has the maximum UCB value and obtain a reward $r_{a_t,t}$. Last, they update the estimated function $\hat f(\cdot)$ with the new observation $(x_{a_t,t}, r_{a_t,t})$.
In our multi-BS configuration problem defined in Eq. (1), if we model every BS as an independent classical contextual bandit problem and use an existing algorithm for each BS's own decisions, we lose information across BSs and are therefore inefficient. Specifically, in the training process, we would learn a group of functions $\{f^{(m)} \mid m \in \mathcal{M}\}$ independently and ignore the similarity among them. In practice, the BSs that are configured simultaneously share many characteristics, such as geographical location, leading to similar reward functions. Furthermore, since the configuration parameters have a large impact on network performance, the cost of experimentation is high, so we need an approach that uses the data effectively. Motivated by this observation, we design the kernel-based multi-BS contextual bandits algorithm that can leverage the similarity information and share experience among BSs, i.e., tasks.
In this section, we propose a framework to solve the problem in Eq. (3). We start with the regression model. Then we describe how to incorporate it into multi-task learning. Next, we propose the kernel-based multi-BS contextual bandits algorithm in Sec. IV-C. Finally, we discuss the details of task similarity for real data in Sec. IV-D.
A. Kernel Ridge Regression
For the network configuration problem, we need to learn a model from historical data that can predict the reward $r_{a_t,t}$ from the context $x_{a_t,t}$. There are two challenges. First, the learned model should capture the non-linear relation between the configuration parameters, the state (context), and the network utility (reward) in complex scenarios. Second, since the learned model is used in the contextual bandit model, it needs to offer not only the mean estimate of the prediction but also a confidence interval that describes the uncertainty of the prediction. This important feature is used later to trade off exploitation and exploration in the bandit model.
To address these two challenges, we use kernel ridge regression to learn the prediction model, which can capture non-linear relations and provides an explicit form of the uncertainty of the prediction. Furthermore, intuitively, the kernel function can be regarded as a measure of similarity among data points, which makes it suitable for incorporating multi-task learning in Sec. IV-B. Let us briefly describe the kernel regression model.
Kernel ridge regression is a powerful tool in supervised learning to characterize the non-linear relation between the target and the features. For a training data set $\{(x_i, y_i)\}_{i=1}^{n}$, the kernel method assumes that there exists a feature mapping $\phi(x): \mathcal{X} \rightarrow \mathcal{H}$ which maps data into a feature space in which a linear relationship $y = \phi(x)^T \theta$ between $\phi(x)$ and $y$ can be observed, where $\theta$ is the parameter to be trained. The kernel function is defined as the inner product of two data vectors in the feature space:
$k(x, x') = \phi(x)^T \phi(x'), \quad \forall x, x' \in \mathcal{X}.$
The feature space $\mathcal{H}$ is a Hilbert space of functions $f: \mathcal{X} \rightarrow \mathbb{R}$ with inner product $\langle \cdot, \cdot \rangle$. It is called the reproducing kernel Hilbert space (RKHS) associated with $k$, denoted by $\mathcal{H}_k$. The goal of kernel ridge regression is to find a function $f$ in the RKHS $\mathcal{H}_k$ that minimizes the mean squared error over all training data, as shown in Eq. (4).
$\hat f = \arg\min_{f \in \mathcal{H}_k} \sum_{i=1}^{n} (f(x_i) - y_i)^2 + \lambda \|f\|^2_{\mathcal{H}_k}$ (4)
Applying the representer theorem, the optimal $f$ can be represented as a linear combination of the data points in the feature space, $f(\cdot) = \sum_{i=1}^{n} \alpha_i k(x_i, \cdot)$. Then we can get the solution of Eq. (4):
$\hat f(x) = k_{X:x}^{T} (K + \lambda I)^{-1} y$ (5)
where $y = (y_1, \cdots, y_n)$, $K$ is the Gram matrix, i.e., $K_{ij} = k(x_i, x_j)$, and $k_{X:x} = (k(x_1, x), \cdots, k(x_n, x))$ is the vector of kernel values between all historical data $X$ and the new data point $x$. This provides the basis for our bandit algorithms. The uncertainty of the prediction of kernel ridge regression is discussed in Sec. IV-C.
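As an illustrative sketch of Eqs. (4)-(5), the snippet below implements kernel ridge regression with a Gaussian (RBF) kernel in NumPy; the kernel choice, bandwidth, regularization value, and toy data are assumptions for illustration, not choices made in this paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """k(x, x') = exp(-gamma * ||x - x'||^2), computed for all pairs of rows."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def krr_fit_predict(X, y, X_new, lam=1.0, gamma=0.5):
    """Kernel ridge regression prediction f(x) = k_{X:x}^T (K + lam I)^{-1} y, as in Eq. (5)."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # (K + lam I)^{-1} y
    return rbf_kernel(X_new, X, gamma) @ alpha

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6)); y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
print(krr_fit_predict(X, y, X[:5], lam=0.1))
```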
B. Multi-Task Learning
We next introduce how to integrate kernel ridge regression into multi-task learning, which allows us to use similarity information among BSs.
In multi-task learning, the main question is how to efficiently use data from one task for another task. Borrowing the idea from [16], [19], we define the regression function as follows:
$f: \tilde{\mathcal{X}} \rightarrow \mathcal{Y}$ (6)
where $\tilde{\mathcal{X}} = \mathcal{Z} \times \mathcal{X}$, $\mathcal{X}$ is the original context space, $\mathcal{Z}$ is the task similarity space, and $\mathcal{Y}$ is the reward space. For each context $x^{(m)}_{a_t,t}$ of BS $m$, we can associate it with the task/BS descriptor $z_m \in \mathcal{Z}$ and define $\tilde x^{(m)}_{a_t,t} = (z_m, x^{(m)}_{a_t,t})$ to be the augmented context. We define the following kernel function $\tilde k$ in (7) to capture the relation among tasks:
$\tilde k((z, x), (z', x')) = k_Z(z, z')\, k_X(x, x')$ (7)
where $k_X$ is the kernel defined on the original context and $k_Z$ is the kernel defined on tasks that measures the similarity among tasks/BSs. Then we define the task/BS similarity matrix $K_Z$ as $(K_Z)_{ij} = k_Z(z_i, z_j)$. We discuss the training of this similarity kernel and similarity matrix in Sec. IV-D.
In the multi-task contextual bandit model, at time $t$, we need to train an arm selection strategy based on the historical data experienced so far, i.e., $\{(x^{(m)}_{a_\tau,\tau}, r^{(m)}_{a_\tau,\tau}) \mid m \in \mathcal{M}, \tau < t\}$. We formulate the regression problem in Eq. (8):
$\hat f_t = \arg\min_{f \in \mathcal{H}_{\tilde k}} \sum_{m=1}^{M} \sum_{\tau=1}^{t-1} \big(f(\tilde x^{(m)}_{a_\tau,\tau}) - r^{(m)}_{a_\tau,\tau}\big)^2 + \lambda \|f\|^2_{\mathcal{H}_{\tilde k}}$ (8)
where $\tilde x^{(m)}_{a_\tau,\tau}$ is the augmented context of the arm $a_\tau$ for task $m$, which is defined as the combination of the task descriptor $z_m$ and the original context $x^{(m)}_{a_\tau,\tau}$. The solution takes the same form as Eq. (5); the only difference is that we use the augmented context $\tilde x$ and the new kernel $\tilde k$ instead of $x$ and $k$:
$\hat f_t(\tilde x) = \tilde k_{t-1}^{T}(\tilde x)(\tilde K_{t-1} + \lambda I)^{-1} y_{t-1}$ (9)
where $\tilde K_{t-1}$ is the Gram matrix of $[\tilde x^{(m)}_{a_\tau,\tau}]_{\tau < t,\, m \in \mathcal{M}}$.
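The product kernel of Eq. (7) and the pooled estimator of Eq. (9) can be sketched as follows; the base context kernel is assumed to be Gaussian and the task similarity matrix $K_Z$ is assumed to be given (its estimation is discussed in Sec. IV-D). This is an illustrative implementation, not the authors' code.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def multitask_gram(X1, t1, X2, t2, K_Z):
    """k_tilde((z_m, x), (z_m', x')) = k_Z(z_m, z_m') * k_X(x, x'), as in Eq. (7)."""
    return K_Z[np.ix_(t1, t2)] * rbf(X1, X2)

def predict(X_hist, t_hist, y_hist, x_new, m, K_Z, lam=1.0):
    """f_hat(x_tilde) = k_tilde^T (K_tilde + lam I)^{-1} y, as in Eq. (9)."""
    K = multitask_gram(X_hist, t_hist, X_hist, t_hist, K_Z)
    k = multitask_gram(x_new[None], np.array([m]), X_hist, t_hist, K_Z)[0]
    return k @ np.linalg.solve(K + lam * np.eye(len(y_hist)), y_hist)

# Toy usage with 2 BSs whose assumed similarity is 0.8.
K_Z = np.array([[1.0, 0.8], [0.8, 1.0]])
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 6)); tasks = rng.integers(0, 2, 20); y = rng.normal(size=20)
print(predict(X, tasks, y, X[0], 0, K_Z))
```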
C. Kernel-based Multi-BS Contextual Bandits
Next, we introduce how to measure the uncertainty of the prediction in Eq. (9). At time $T$, for a specific task (i.e., BS) $m \in \mathcal{M}$ and a given augmented context $\tilde x^{(m)}_{a,T}$, assume that the historical rewards $\{r^{(m)}_{a_\tau,\tau} \mid m \in \mathcal{M}, \tau < T\}$ are all independent random variables. Then we can use McDiarmid's inequality to get an upper confidence bound on the predicted value. Since the mathematical derivation of this step is the same as Lemma 1 in [19], we only make a minor modification to obtain Theorem 1.
Theorem 1. With probability at least $1 - \frac{\delta}{T}$, we have that $\forall a \in \mathcal{A}$,
$|\hat f_t(\tilde x^{(m)}_{a,t}) - f^*(\tilde x^{(m)}_{a,t})| \le (\alpha + c\sqrt{\lambda})\,\sigma^{(m)}_{a,t}$ (10)
where the width is
$\sigma^{(m)}_{a,t} = \tilde k(\tilde x^{(m)}_{a,t}, \tilde x^{(m)}_{a,t}) - \tilde k_{t-1}^{T}(\tilde x^{(m)}_{a,t})(\tilde K_{t-1} + \lambda I)^{-1} \tilde k_{t-1}(\tilde x^{(m)}_{a,t})$ (11)
Based on Theorem 1, we define the upper confidence bound (UCB) of each arm for each task in Eq. (12), where $\hat f_t$ is obtained from Eq. (9) and $\beta$ is a hyper-parameter:
$\mathrm{UCB}^{(m)}_{a,t} = \hat f_t(\tilde x^{(m)}_{a,t}) + \beta\,\sigma^{(m)}_{a,t}$ (12)
Then we propose Algorithm 1 to solve the multi-BS configuration problem.
In Algorithm 1, at each time $t$, the prediction model $\hat f_t$ is updated. Then, for each task $m \in \mathcal{M}$, the model is used to obtain the UCB of each arm $a \in \mathcal{A}$, and the arm with the maximum UCB is selected. Algorithm 1 trades off between exploitation and exploration in the multi-BS configuration problem. The intuition behind it is as follows: if one configuration has been tried only a few times, or has not been tried yet, the width of its corresponding arm defined in Eq. (11) is larger, which makes its UCB value larger, so this configuration will be tried in subsequent time slots with high probability.
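A compact sketch of the per-step arm selection in Algorithm 1 (mean from Eq. (9), width from Eq. (11), UCB from Eq. (12)) is given below. It is an illustrative implementation that assumes a generic base kernel function and recomputes the matrix inverse from scratch rather than using the incremental update discussed next; the width follows the form of Eq. (11) stated above (no square root), and all data in the usage example are hypothetical.

```python
import numpy as np

def ucb_select(X_hist, t_hist, y_hist, arm_contexts, m, K_Z, kernel, lam=1.0, beta=1.0):
    """One step of the UCB rule in Eq. (12): pick argmax_a of mean + beta * width."""
    def gram(Xa, ta, Xb, tb):
        return K_Z[np.ix_(ta, tb)] * kernel(Xa, Xb)   # product kernel of Eq. (7)
    K_inv = np.linalg.inv(gram(X_hist, t_hist, X_hist, t_hist) + lam * np.eye(len(y_hist)))
    task_ids = np.full(len(arm_contexts), m)
    k_mat = gram(arm_contexts, task_ids, X_hist, t_hist)        # (num_arms, t-1)
    mean = k_mat @ K_inv @ y_hist                                # Eq. (9)
    k_self = np.diag(gram(arm_contexts, task_ids, arm_contexts, task_ids))
    width = k_self - np.sum((k_mat @ K_inv) * k_mat, axis=1)     # Eq. (11)
    return int(np.argmax(mean + beta * width))                   # Eq. (12)

# Example call with an RBF base kernel and synthetic history (3 BSs, 29 arms).
rbf = lambda A, B: np.exp(-0.5 * ((A[:, None, :] - B[None, :, :])**2).sum(-1))
rng = np.random.default_rng(5)
Xh = rng.normal(size=(15, 6)); th = rng.integers(0, 3, 15); yh = rng.normal(size=15)
arms = rng.normal(size=(29, 6))
K_Z = 0.5 * np.ones((3, 3)) + 0.5 * np.eye(3)
print(ucb_select(Xh, th, yh, arms, m=0, K_Z=K_Z, kernel=rbf))
```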
Independent Assumption. Note that the independence assumption of Theorem 1 does not hold in Algorithm 1, because the previous rewards influence the arm selection strategy (prediction function), which in turn influences the following rewards. To address this, we select a subset of the historical samples for which this assumption holds, as described in Sec. V.
$M^{-1} = \begin{pmatrix} A & U \\ V & C \end{pmatrix}^{-1}$ (13)
$= \begin{pmatrix} S^{-1} & -S^{-1} U C^{-1} \\ -C^{-1} V S^{-1} & C^{-1} V S^{-1} U C^{-1} + C^{-1} \end{pmatrix}$ (14)
where $S = A - U C^{-1} V$ is the Schur complement of the block $C$.
Based on this identity, we can update $(\tilde K_t + \lambda I)^{-1}$ incrementally from $(\tilde K_{t-1} + \lambda I)^{-1}$, which decreases the computational complexity to $O(Mt^2)$.
The issue of dealing with a Gram matrix $K$ of large dimension has been studied extensively, e.g., in Chapter 8 of [22]. Most of those techniques are designed for the supervised learning case. In our problem, given the online nature of the learning, the Schur complement method is more suitable and efficient.
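The incremental inverse update can be sketched with the standard block-inverse identity. The variant below reuses the inverse of the existing block (i.e., it takes the Schur complement of the newly appended block), which is an equivalent rearrangement of Eqs. (13)-(14); it is an illustrative implementation with a synthetic sanity check, not the paper's code.

```python
import numpy as np

def extend_inverse(A_inv, B, D):
    """Given A_inv = A^{-1}, return the inverse of the symmetric block matrix
    [[A, B], [B^T, D]] using the Schur complement S = D - B^T A^{-1} B."""
    BtAinv = B.T @ A_inv
    S_inv = np.linalg.inv(D - BtAinv @ B)
    top_left = A_inv + BtAinv.T @ S_inv @ BtAinv
    top_right = -BtAinv.T @ S_inv
    return np.block([[top_left, top_right], [top_right.T, S_inv]])

# Sanity check on a random positive-definite matrix.
rng = np.random.default_rng(2)
G = rng.normal(size=(8, 8)); K = G @ G.T + 8 * np.eye(8)
A_inv = np.linalg.inv(K[:6, :6])
full_inv = extend_inverse(A_inv, K[:6, 6:], K[6:, 6:])
print(np.allclose(full_inv, np.linalg.inv(K)))  # True
```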
D. Similarity
The kernel $k_Z(z, z')$ that defines the similarities among the tasks/BSs plays a significant role in Algorithm 1. When $k_Z(z, z') = \mathbb{1}(m = m')$, where $\mathbb{1}$ is the characteristic function, Algorithm 1 is equivalent to running the contextual bandit independently for each BS. In this section, we discuss how to measure the similarity in real data if it is not provided.
Suppose the ground truth function for task i (i.e., BS i) is y = f i (x) , we need to define the similarity among different BSs based on the ground truth functions f i (x). From a Bayesian view, y = f i (x) is equivalent to the conditional distribution P (Y i |X i ). Therefore, we can use the conditional kernel embedding to map the conditional distributions to operators in a high-dimensional space, and then define the similarity based on it. Let us start with the definition of kernel embedding and conditional kernel embedding.
1) Conditional kernel embedding: Kernel embedding is a method in which a probability distribution is mapped to an element of a potentially infinite-dimensional feature space, i.e., a reproducing kernel Hilbert space (RKHS) [23]. For a random variable in domain $\mathcal{X}$ with distribution $P(X)$, suppose $k: \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$ is a positive definite kernel with corresponding RKHS $\mathcal{H}_X$; the kernel embedding of the kernel $k$ for $X$ is defined as
$\nu_X = E_X[k(\cdot, x)] = \int k(\cdot, x)\, dP(x)$ (15)
It is an element of $\mathcal{H}_X$. For two random variables $X$ and $Y$, suppose $k: \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$ and $l: \mathcal{Y} \times \mathcal{Y} \rightarrow \mathbb{R}$ are respectively positive definite kernels with corresponding RKHS $\mathcal{H}_X$ and $\mathcal{H}_Y$. The kernel embedding of the conditional distribution $P(Y \mid X = x)$ is:
$\nu_{Y|x} = E_Y[l(\cdot, y) \mid x] = \int l(\cdot, y)\, dP(y \mid x)$ (16)
It is an element of $\mathcal{H}_Y$. Then for the conditional probability $P(Y \mid X)$, the kernel embedding is defined as a conditional operator $C_{Y|X}: \mathcal{H}_X \rightarrow \mathcal{H}_Y$ that satisfies Eq. (17):
$\nu_{Y|x} = C_{Y|X}\, k(x, \cdot)$ (17)
If we have a data set $\{(x_i, y_i)\}_{i=1}^{n}$ drawn i.i.d. from $P(X, Y)$, the conditional kernel embedding operator can be estimated by
$\hat C_{Y|X} = \Psi (K + \lambda I)^{-1} \Phi^{T}$ (18)
where $\Psi = (l(y_1, \cdot), \cdots, l(y_n, \cdot))$ and $\Phi = (k(x_1, \cdot), \cdots, k(x_n, \cdot))$ are the implicitly formed feature matrices, and $K$ is the Gram matrix of $x$, i.e., $(K)_{ij} = k(x_i, x_j)$.
The definition of conditional kernel mean embedding provides a way to measure probability P (Y |X) as an operator between the spaces H Y and H X .
2) Similarity Calculation: In this section, we use the conditional kernel mean embedding to define the similarity space Z and augmented context kernel k Z in Eq. (7).
We define the task/BS similarity space as $\mathcal{Z} = \mathcal{P}_{\mathcal{Y}|\mathcal{X}}$, the set of all conditional probability distributions of $Y$ given $X$. Then for task/BS $i$, its descriptor is the conditional distribution $P(Y_i \mid X_i)$ of its reward given its context. We then use a Gaussian-form kernel based on the conditional kernel embedding to define $k_Z$:
$k_Z(P_{Y_i|X_i}, P_{Y_j|X_j}) = \exp\big(-\|C_{Y_i|X_i} - C_{Y_j|X_j}\|^2 / 2\sigma_Z^2\big)$ (19)
where $\|\cdot\|$ is the Frobenius norm and $C_{Y|X}$ is the conditional kernel embedding defined in Eq. (17), which can be estimated by Eq. (18). The hyper-parameter $\sigma_Z$ can be heuristically set to the median of the Frobenius norms computed over all task pairs. Note that Eq. (18) involves implicitly formed feature matrices and can only be evaluated directly when the feature maps are explicit. Next, we use the kernel trick to derive a form that does not involve explicit features.
Consider two data sets $D_1 = \{(x_i, y_i)\}_{i=1}^{n_1}$ and $D_2 = \{(x_i, y_i)\}_{i=1}^{n_2}$, and let $k$ and $l$ be two positive definite kernels with RKHS $\mathcal{H}_X$ and $\mathcal{H}_Y$, respectively. For data set $D_m$, we define $\Psi_m = (l(y_1, \cdot), \cdots, l(y_{n_m}, \cdot))$ and $\Phi_m = (k(x_1, \cdot), \cdots, k(x_{n_m}, \cdot))$ as the implicitly formed feature matrices of $y$ and $x$, and $K_m = \Phi_m^T \Phi_m$ and $L_m = \Psi_m^T \Psi_m$ as the Gram matrices of all $x$ and $y$. We use $U_{Y|X}$ and $O_{Y|X}$ to denote the conditional kernel embeddings for $D_1$ and $D_2$, respectively. According to Eq. (18), we have
$U_{Y|X} = \Psi_1 (K_1 + \lambda I)^{-1} \Phi_1^{T}$, $\quad O_{Y|X} = \Psi_2 (K_2 + \lambda I)^{-1} \Phi_2^{T}$. Then
$\|U_{Y|X} - O_{Y|X}\|^2 = \mathrm{tr}(U_{Y|X}^T U_{Y|X}) - 2\,\mathrm{tr}(U_{Y|X}^T O_{Y|X}) + \mathrm{tr}(O_{Y|X}^T O_{Y|X})$ (20)
Define matrices $K_{12}$ and $L_{12}$ by $(K_{12})_{ij} = k(x_i, x_j)$ and $(L_{12})_{ij} = l(y_i, y_j)$, where $(x_i, y_i)$ is the $i$-th data point in $D_1$ and $(x_j, y_j)$ is the $j$-th data point in $D_2$; $K_{21}$ and $L_{21}$ are defined analogously. Then for the second term in Eq. (20),
$\mathrm{tr}(U_{Y|X}^T O_{Y|X}) = \mathrm{tr}\big(\Psi_1 (K_1 + \lambda I)^{-1} \Phi_1^T \Phi_2 (K_2 + \lambda I)^{-1} \Psi_2^T\big) = \mathrm{tr}\big((K_1 + \lambda I)^{-1} \Phi_1^T \Phi_2 (K_2 + \lambda I)^{-1} \Psi_2^T \Psi_1\big) = \mathrm{tr}\big((K_1 + \lambda I)^{-1} K_{12} (K_2 + \lambda I)^{-1} L_{21}\big)$
After using the same trick for other terms, Eq. (20) can be written as
$\|U_{Y|X} - O_{Y|X}\|^2 = \mathrm{tr}\big((K_1 + \lambda I)^{-1} K_1 (K_1 + \lambda I)^{-1} L_1\big) - 2\,\mathrm{tr}\big((K_1 + \lambda I)^{-1} K_{12} (K_2 + \lambda I)^{-1} L_{21}\big) + \mathrm{tr}\big((K_2 + \lambda I)^{-1} K_2 (K_2 + \lambda I)^{-1} L_2\big)$ (21)
Then we can use Eq. (21) in Eq. (19) to measure the similarity between tasks.
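Putting Eqs. (18)-(21) together, the task similarity between two BSs can be estimated directly from their logged (context, reward) samples. The sketch below assumes Gaussian kernels for both the context kernel $k$ and the label kernel $l$, treats X and Y as 2-D arrays (one row per sample), and uses synthetic data; it is illustrative only.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def embedding_distance_sq(X1, Y1, X2, Y2, lam=1.0, gamma=0.5):
    """||C1 - C2||_F^2 between conditional kernel embeddings via the Gram-matrix form of Eq. (21)."""
    def term(Xa, Ya, Xb, Yb):
        Ka, Kb = rbf(Xa, Xa, gamma), rbf(Xb, Xb, gamma)
        Kab, Lba = rbf(Xa, Xb, gamma), rbf(Yb, Ya, gamma)
        Ra = np.linalg.inv(Ka + lam * np.eye(len(Xa)))
        Rb = np.linalg.inv(Kb + lam * np.eye(len(Xb)))
        return np.trace(Ra @ Kab @ Rb @ Lba)
    return term(X1, Y1, X1, Y1) - 2 * term(X1, Y1, X2, Y2) + term(X2, Y2, X2, Y2)

def task_similarity(X1, Y1, X2, Y2, sigma_z=1.0):
    """Gaussian-form similarity of Eq. (19)."""
    return np.exp(-embedding_distance_sq(X1, Y1, X2, Y2) / (2 * sigma_z**2))

# Toy usage with two synthetic BS data sets.
rng = np.random.default_rng(4)
X1, Y1 = rng.normal(size=(30, 6)), rng.normal(size=(30, 1))
X2, Y2 = rng.normal(size=(30, 6)), rng.normal(size=(30, 1))
print(task_similarity(X1, Y1, X2, Y2))
```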
V. THEORETICAL ANALYSIS
In this section, we provide theoretical analysis of Algorithm 1 based on the classical bandit analysis. The first part is about regret analysis and the second part is about the multi-task-learning efficiency.
A. Regret Analysis
In Algorithm 1, at each time slot $t$, the trained model is used to make decisions for all BSs synchronously. This is not in the same form as the classical bandit model. In order to simplify the analysis, we consider an asynchronous version in Algorithm 2, in which at each time $t$ the algorithm receives the context (state and action) of one BS together with its BS ID, denoted by $V_t$, which identifies the BS index $m$. Algorithm 2 then obtains the augmented context using $V_t$ and makes a decision for that BS. In this manner, Algorithm 2 makes decisions for all BSs asynchronously. The performance of the synchronous and asynchronous methods is similar when the number of BSs is moderate and all BSs come in order, as in our case.
Choose arm $a_t = \arg\max_a \mathrm{ucb}_{a,t}$ for BS $V_t$
9: Observe reward $r_{a_t,t}$
10: Update $y_t$ with $r_{a_t,t}$
11: end for
The regret of Algorithm 2 is defined by
$R(T) = \sum_{m=1}^{M} \sum_{t=1}^{T} \big(r^{(m)}_{a^*_t,t} - r^{(m)}_{a_t,t}\big)\,\mathbb{1}(V_t = m)$ (22)
In Algorithm 2, the estimated reward $\hat r_{a_t,t}$ at time $t$ can be regarded as a sum over the historical variables $[r_{a_\tau,\tau}]_{\tau < t}$, which are dependent random variables. This does not meet the assumption in Theorem 1, thus we are unable to analyze the uncertainty of the prediction directly.
To address this issue, as in [14], [11], we design a base version (Algorithm 3) and a super version (Algorithm 4) of Algorithm 2 in order to meet the requirement of Theorem 1. Algorithm 4 constructs special, mutually exclusive subsets $\{\Psi^{(s)}_t\}_{s=1}^{S}$ of the elapsed time to guarantee that the event $\{t \in \Psi^{(s)}_{t+1}\}$ is independent of the rewards observed at times in $\Psi^{(s)}_t$. On each of these sets, it uses Algorithm 3 as a subroutine to obtain the estimated reward and the width of the upper confidence bound, in the same way as Algorithm 2.
Algorithm 3 Base asynchronous multi-BS configuration
1: Input: $\beta$, $\Psi \subset \{1, \cdots, t-1\}$
2: Calculate the Gram matrix $\tilde K_\Psi$ and get $y_\Psi = [r_{a_\tau,\tau}]_{\tau \in \Psi}$
3: Observe the BS ID $V_t$ and the corresponding context features at time $t$: $x_{a,t}$ for each $a \in \mathcal{A}$
4: Determine the BS descriptor $z_m$ and get the augmented context $\tilde x_{a,t}$
5: for all arms $a$ in $\mathcal{A}$ at time $t$ do
6: $\sigma_{a,t} = \tilde k(\tilde x_{a,t}, \tilde x_{a,t}) - \tilde k_{a,\Psi}^{T} (\tilde K_\Psi + \lambda I)^{-1} \tilde k_{a,\Psi}$
7: $\mathrm{ucb}_{a,t} = \hat f(\tilde x_{a,t}) + \beta \sigma_{a,t}$
8: end for
Algorithm 4 Super asynchronous multi-BS configuration
1: Input: $\beta$, $T \in \mathbb{N}$
2: Initialize $S \leftarrow \log T$ and $\Psi$
if $\omega_{a,t} \le \frac{1}{\sqrt{T}}$ for all $a \in \hat{\mathcal{A}}^{(s)}$ then
9: Choose $a_t = \arg\max_{a \in \hat{\mathcal{A}}^{(s)}} \mathrm{ucb}_{a,t}$
10: until $a_t$ is found
19: Observe reward $r_{a_t,t}$
20: end for
$\Phi^{(s)}_{t+1} \leftarrow \Phi$
The construction of Algorithms 3 and 4 follows a strategy similar to that used in the proof of KernelUCB (see Theorem 1 in [13] or Theorem 1 in [19]). Then we can obtain the following theorem.
Theorem 3. Assume that $r_{a,t} \in [0, 1]$, $\forall a \in \mathcal{A}$, $T \ge 1$, $\|f^*\|_{\mathcal{H}_{\tilde k}} \le c_{\tilde k}$, $\forall \tilde x \in \tilde{\mathcal{X}}$, and that the task similarity matrix $K_Z$ is known. With probability $1 - \delta$, the regret of Algorithm 4 satisfies,
R(T ) ≤ 2 √ T + 10( log(2T N (log(T ) + 1)/δ)) 2 + c √ λ)
B. Multi-task-learning Efficiency
In this section, we discuss the benefits of multi-task learning from the theoretical view point.
In the asynchronous setting, i.e., Algorithms 2 and 4, because all BSs/tasks come in order, by time $t$ each task has occurred $n = t/M$ times. Let $K_{X_t}$ be the Gram matrix of $[x^{(m)}_{a_\tau,\tau}]_{\tau \le t,\, m \in \mathcal{M}}$, i.e., of the original contexts, and let $K_Z$ be the similarity matrix. Then, following Theorem 2 in [19], the following result holds. Theorem 4. Define the rank of the matrix $K_{X_{T+1}}$ as $r_x$ and the rank of the matrix $K_Z$ as $r_z$. Then
$\log(g([T])) \le r_z r_x \log\frac{(T+1)c_k + \lambda}{\lambda}$ (23)
According to Eq. (23), if the rank of the similarity matrix is lower, which means all BSs/tasks have higher inter-task similarity, the regret bound is tighter.
We make the further assumption that all distinct tasks are similar to each other with task similarity equal to $\mu$. Define $g_\mu([T])$ as the corresponding value of $g([T])$ when all task similarities equal $\mu$. According to Theorem 3 in [19], we have
Theorem 5. If $\mu_1 \le \mu_2$, then $g_{\mu_1}([T]) \ge g_{\mu_2}([T])$.
This shows that when BSs/tasks are more similar, the regret bound is tighter. In our case, running all tasks independently is equivalent to setting the similarity matrix to the identity matrix, i.e., $\mu = 0$. So, based on the previous two theorems, we show the benefit of our algorithm's use of multi-task learning.
VI. EVALUATION
In this section, we evaluate the performance of the proposed approach in a simulator built on real network data. We start with the data collection and simulator construction procedure, and then discuss the numerical results.
A. Data Collection and Simulator Construction
Since testing a bandit algorithm requires an interactive environment, we build a network simulator based on data collected in real networks, which can provide feedback on the algorithm's actions.
The data is collected from real base station configuration experiments conducted by a service provider in a metropolitan city. Each data instance contains the following information: sample time, cell name and ID, network measurements (e.g., user number, CQI, average packet size, etc.), and configured parameter values. In the test, the configured parameter is the Reference Signal Received Power (RSRP) threshold for the Long-Term Evolution (LTE) A2 event during inter-frequency handover. Inter-frequency handover is a procedure for a BS to guarantee the user experience in a cellular network: if a BS observes that the RSRP of a user it serves is lower than the configured threshold, it triggers a handover of that user to another frequency. The network utility that measures the performance is the ratio of users with throughput less than 5 Mbps, i.e., the performance of edge users. Some data samples are illustrated in Table I. In the problem, the configured parameter $a_t$ is the RSRP threshold for the LTE A2 event during inter-frequency handover, as shown in the 7th column of Table I. The goal is to minimize the ratio of users with throughput less than 5 Mbps, as shown in column 8. To accord with the maximization formulation, we define the negative value of this ratio as $r_{a_t,t}$. Further, based on field experience, 5 measurement metrics are carefully selected as the state $s_t$ for each BS: number of downlink average active users, ratio of CQI index 0 reports (i.e., low CQI ratio), ratio of small-packet SDUs, ratio of small-packet traffic volume, and number of downlink average users, as shown in columns 2 to 6 of Table I. Then we define the context $x_{a,t} = (s_t, a_t)$ for each BS to formulate the problem as a multi-task contextual bandit. The goal is to find the best configured parameter $a$ that maximizes the reward $r$.
The input of the simulator is a query state S * and a configuration parameter value A * . The output is the corresponding reward r. The simulator estimates the rewards for different states and configurations using the following method. For the query state S * and the configuration parameter A * , we search for all samples (S i , A i , R i ) in the data set, and compute a similarity score between (S * , A * ) and (S i , A i ). The similarity score is calculated based on the Euclidean distance between (S * , A * ) and (S i , A i ). We sort the samples according to the similarity score and choose the top-k samples. The average reward of the top-k samples is used as the return of the simulator.
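The simulator's reward lookup described above amounts to a top-k nearest-neighbour average in the joint (state, action) space. A minimal sketch is given below; since the operator traces are not public, the data in the usage example are synthetic placeholders.

```python
import numpy as np

def simulate_reward(query_state, query_action, data_S, data_A, data_R, k=5):
    """Average reward of the k historical samples whose (state, action) is
    closest in Euclidean distance to the query (S*, A*)."""
    q = np.concatenate([query_state, [query_action]])
    pts = np.hstack([data_S, data_A[:, None]])
    dist = np.linalg.norm(pts - q, axis=1)
    top_k = np.argsort(dist)[:k]
    return float(np.mean(data_R[top_k]))

# Hypothetical usage with a synthetic trace of 1000 samples.
rng = np.random.default_rng(3)
data_S = rng.normal(size=(1000, 5)); data_A = rng.integers(-112, -83, 1000).astype(float)
data_R = -rng.random(1000)  # negative ratio of low-throughput users
print(simulate_reward(data_S[0], -100.0, data_S, data_A, data_R))
```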
B. Evaluation Setup and Results
As described in the last section, the dimension of the state space is 5. The action space ranges from -112 dBm to -84 dBm with 1 dBm resolution, i.e., the number of arms in our model is 29. The reward space is $\mathbb{R}$. We use 3 different BSs generated by the simulator, indexed by {0, 1, 2}, i.e., BS 0, BS 1, BS 2, to test our algorithms. Based on the definition of similarity in Sec. IV-D, we can train the similarity matrix $K_Z$ among them, leading to the following result,
We test the multi-task learning case. In Fig. 3, the result for Algorithm 1 using the similarity matrix $K_Z$ is shown. It compares the accumulated regret of the case with multi-task learning and the case that models the BSs as independent contextual bandit problems. We use KernelUCB [13] for the independent contextual bandit problems as a baseline. Here, the accumulated regrets shown in Fig. 3 are the sum of the accumulated regrets of the three BSs. Further, each data point is the average result of 20 individual simulations. It can be seen that, when multi-task learning is used, the regret increases much more slowly than in the case where the BSs run independently. At the end of the 1000 time slots, multi-task learning reduces the final regret by 35%. In order to compare the multi-task-learning efficiency of Algorithm 1 for one BS in different multi-task scenarios, we test the following cases for BS 0 with online learning data from itself and one other BS: (BS 0 and BS 1), (BS 0 and BS 2). We also test a case with a BS that is identical to BS 0, resulting in an online learning data set from (BS 0, BS 0); we can regard this as the optimal multi-task learning case. In Fig. 4, we measure the performance of Algorithm 1 only by the regret of one BS, i.e., BS 0, in the above-mentioned learning scenarios, which is different from the total regret of all BSs in Fig. 3.
In Fig. 4, we find that the regret of BS 0's default configuration quickly grows out of the range of the figure. The default configuration is the one used in the present network; it is a fixed parameter and does not change with the state of the BS. The line 'single BS 0' is the result of KernelUCB [13] for an independent contextual bandit model for BS 0. Except for the ideal multi-task case (BS 0, BS 0), the case (BS 0, BS 1) has the better multi-task-learning efficiency, in which BS 0 has lower regret. Through experiments, we also find that BS 1 has a higher similarity with BS 0; to be specific, the similarity is 0.811. This suggests that conditional kernel embedding is a reasonable similarity measure for this problem.
VII. CONCLUSION
In this work, in order to address the multi-BS network configuration problem, we propose a kernel-based multi-task contextual bandits algorithm that leverages the similarity among BSs effectively. In the algorithm, we also provide an approach to measure the similarity among tasks based on conditional kernel embedding. Furthermore, we present theoretical bounds for the proposed algorithm in terms of regret and multi-task-learning efficiency. They show that the regret bound is tighter if the learning tasks are more similar. We also evaluate the effectiveness of our algorithm on the real problem, based on a simulator built from real traces. Future work includes possible experimental evaluations in real field tests and further studies on the impact of different similarity metrics.
| 6,526 |
1906.11518
|
2954023930
|
Recently there emerge many distributed algorithms that aim at solving subgraph matching at scale. Existing algorithm-level comparisons failed to provide a systematic view to the pros and cons of each algorithm mainly due to the intertwining of strategy and optimization. In this paper, we identify four strategies and three general-purpose optimizations from representative state-of-the-art works. We implement the four strategies with the optimizations based on the common Timely dataflow system for systematic strategy-level comparison. Our implementation covers all representation algorithms. We conduct extensive experiments for both unlabelled matching and labelled matching to analyze the performance of distributed subgraph matching under various settings, which is finally summarized as a practical guide.
|
Isomorphism-based Subgraph Matching. In the labelled case, @cite_63 used the spanning tree of the query graph to filter infeasible results. @cite_6 observed the importance of the matching order. In @cite_15 , the authors proposed to utilize the symmetry properties in the data graph to compress the results. @cite_25 proposed an algorithm based on the "core-forest-leaves" matching order, and obtained performance gains by postponing the notorious Cartesian product.
|
{
"abstract": [
"Subgraph Isomorphism is a fundamental problem in graph data processing. Most existing subgraph isomorphism algorithms are based on a backtracking framework which computes the solutions by incrementally matching all query vertices to candidate data vertices. However, we observe that extensive duplicate computation exists in these algorithms, and such duplicate computation can be avoided by exploiting relationships between data vertices. Motivated by this, we propose a novel approach, BoostIso, to reduce duplicate computation. Our extensive experiments with real datasets show that, after integrating our approach, most existing subgraph isomorphism algorithms can be speeded up significantly, especially for some graphs with intensive vertex relationships, where the improvement can be up to several orders of magnitude.",
"In this paper, we study the problem of subgraph matching that extracts all subgraph isomorphic embeddings of a query graph q in a large data graph G. The existing algorithms for subgraph matching follow Ullmann's backtracking approach; that is, iteratively map query vertices to data vertices by following a matching order of query vertices. It has been shown that the matching order of query vertices is a very important aspect to the efficiency of a subgraph matching algorithm. Recently, many advanced techniques, such as enforcing connectivity and merging similar vertices in query or data graphs, have been proposed to provide an effective matching order with the aim to reduce unpromising intermediate results especially the ones caused by redundant Cartesian products. In this paper, for the first time we address the issue of unpromising results by Cartesian products from \"dissimilar\" vertices. We propose a new framework by postponing the Cartesian products based on the structure of a query to minimize the redundant Cartesian products. Our second contribution is proposing a new path-based auxiliary data structure, with the size O(|E(G)| x |V(q)|), to generate a matching order and conduct subgraph matching, which significantly reduces the exponential size O(|V(G)||V(q)|-1) of the existing path-based auxiliary data structure, where V (G) and E (G) are the vertex and edge sets of a data graph G, respectively, and V (q) is the vertex set of a query @math . Extensive empirical studies on real and synthetic graphs demonstrate that our techniques outperform the state-of-the-art algorithms by up to @math orders of magnitude.",
"Given a query graph q and a data graph g, the subgraph isomorphism search finds all occurrences of q in g and is considered one of the most fundamental query types for many real applications. While this problem belongs to NP-hard, many algorithms have been proposed to solve it in a reasonable time for real datasets. However, a recent study has shown, through an extensive benchmark with various real datasets, that all existing algorithms have serious problems in their matching order selection. Furthermore, all algorithms blindly permutate all possible mappings for query vertices, often leading to useless computations. In this paper, we present an efficient and robust subgraph search solution, called TurboISO, which is turbo-charged with two novel concepts, candidate region exploration and the combine and permute strategy (in short, Comb Perm). The candidate region exploration identifies on-the-fly candidate subgraphs (i.e, candidate regions), which contain embeddings, and computes a robust matching order for each candidate region explored. The Comb Perm strategy exploits the novel concept of the neighborhood equivalence class (NEC). Each query vertex in the same NEC has identically matching data vertices. During subgraph isomorphism search, Comb Perm generates only combinations for each NEC instead of permutating all possible enumerations. Thus, if a chosen combination is determined to not contribute to a complete solution, all possible permutations for that combination will be safely pruned. Extensive experiments with many real datasets show that TurboISO consistently and significantly outperforms all competitors by up to several orders of magnitude.",
"Graphs are widely used to model complicated data semantics in many applications. In this paper, we aim to develop efficient techniques to retrieve graphs, containing a given query graph, from a large set of graphs. Considering the problem of testing subgraph isomorphism is generally NP-hard, most of the existing techniques are based on the framework of filtering-and-verification to reduce the precise computation costs; consequently various novel feature-based indexes have been developed. While the existing techniques work well for small query graphs, the verification phase becomes a bottleneck when the query graph size increases. Motivated by this, in the paper we firstly propose a novel and efficient algorithm for testing subgraph isomorphism, QuickSI. Secondly, we develop a new feature-based index technique to accommodate QuickSI in the filtering phase. Our extensive experiments on real and synthetic data demonstrate the efficiency and scalability of the proposed techniques, which significantly improve the existing techniques."
],
"cite_N": [
"@cite_15",
"@cite_25",
"@cite_6",
"@cite_63"
],
"mid": [
"2254833717",
"2423652555",
"2035173902",
"2140840007"
]
}
|
A SURVEY AND EXPERIMENTAL ANALYSIS OF DISTRIBUTED SUBGRAPH MATCHING
|
with no need of exchanging data. As a result, it typically renders much less communication cost than the BINJOIN and WOPTJOIN algorithms. MultiwayJoin adopts the idea of SHRCUBE for subgraph matching. In order to properly partition the computation without missing results, MultiwayJoin needs to duplicate each edge in multiple workers. As a result, MultiwayJoin can carry almost the whole graph in each worker for certain queries [35,13] and thus scales out poorly.
OTHERS. Shao et al. proposed PSgL [50], which processes subgraph matching via breadth-first-style traversal. Starting from an initial query vertex, PSgL iteratively expands the partial results by merging the matches of a certain vertex's unmatched neighbors. It has been pointed out in [35] that PSgL is actually a variant of StarJoin. Very recently, Qiao et al. proposed CrystalJoin [45], which aims at resolving the "output crisis" by compressing the (intermediate) results. The idea is to first compute the matches of the vertex cover of the query graph; the remaining vertices' matches can then be compressed as intersections of the vertex cover's neighbors to avoid the costly cartesian product.
Optimizations. Apart from join strategies, existing algorithms also explored a variety of optimizations, some of which are query- or algorithm-specific, while we spotlight three general-purpose optimizations: Batching, TrIndexing and Compression. Batching aims to divide the whole computation into sub-tasks that can be evaluated independently in order to save resource (memory) allocation. TrIndexing precomputes and indexes the triangles (3-cycles) of the graph to facilitate pruning. Compression attempts to maintain the (intermediate) results in a compressed form to reduce resource allocation and communication cost.
Motivations.
In this paper, we survey seven representative algorithms for distributed subgraph matching: StarJoin [51], MultiwayJoin [12], PSgL [50], TwinTwigJoin [35], CliqueJoin [37], CrystalJoin [45] and BiGJoin [13]. While all these algorithms embody some good merits in theory, existing algorithm-level comparisons failed to provide a systematic view of the pros and cons of each algorithm for several reasons. Firstly, the prior experiments did not take into consideration the differences of languages and the cost of the systems on which each implementation is based (Table 1). Secondly, some implementations hardcode query-specific optimizations for each query, which makes it hard to judge whether the observed performance comes from the algorithmic advancement or the hardcoded optimization. Thirdly, all BINJOIN and WOPTJOIN algorithms (more precisely, their implementations) intertwined the join strategy with some optimizations of Batching, TrIndexing and Compression. We show in Table 1 how each optimization has been applied in the current implementations. For example, CliqueJoin only adopted TrIndexing and some query-specific Compression, while BiGJoin considered Batching in general, but TrIndexing only for one specific query (Compression was only discussed in the paper, but not implemented). People naturally wonder whether "maybe it is better to adopt A strategy with B optimization", but unfortunately none of the existing implementations covers that combination. Last but not least, an important benchmarking of the FULLREP strategy is missing, that is, to maintain the whole graph in each partition and parallelize embarrassingly [29]. FULLREP requires no communication, and it should be the most efficient strategy when each machine can hold the whole graph (the case for most experimental settings nowadays).
Table 1 summarizes the surveyed algorithms (algorithm: strategy; optimality guarantee; platform; optimizations adopted in the current implementation):
StarJoin [51]: BINJOIN; No; Trinity [49]; None.
MultiwayJoin [12]: SHRCUBE; N/A; Hadoop [35], Myria [20]; N/A.
PSgL [50]: OTHERS; No; Giraph [4]; None.
TwinTwigJoin [35]: BINJOIN; No; Hadoop; Compression [36].
CliqueJoin [37]: BINJOIN; Yes (Section 6); Hadoop; TrIndexing, some Compression.
CrystalJoin [45]: OTHERS; N/A; Hadoop; TrIndexing, Compression.
BiGJoin [13]: WOPTJOIN; Yes [13]; Timely Dataflow [43]; Batching, specific TrIndexing.
Our Contributions
To address the above issues, we aim at a systematic, strategy-level benchmarking of distributed subgraph matching in this paper. To achieve that goal, we implement all strategies, together with the three general-purpose optimizations for subgraph matching, based on the Timely dataflow system [43]. Note that our implementation covers all seven representative algorithms. Here, we use Timely as the base system as it incurs less cost [42] than other popular systems like Giraph [4], Spark [54] and GraphLab [38], so that the system's impact can be reduced to the minimum.
We implement the benchmarking platform with our best effort based on the papers of each algorithm and email communications with the authors. Our implementation is (1) generic, in that it handles arbitrary queries and does not include any hardcoded optimizations; (2) flexible, in that it can configure the Batching, TrIndexing and Compression optimizations in any combination for BINJOIN and WOPTJOIN algorithms; and (3) efficient, in that it is comparable to, and sometimes even faster than, the original hardcoded implementations. Note that the three general-purpose optimizations are mainly used to reduce communication cost, and are not useful to the SHRCUBE and FULLREP strategies, while we still devote a lot of effort to their implementations. Aware that their performance heavily depends on the local algorithm, we implement and compare the state-of-the-art local subgraph matching algorithms proposed in [34], [11] (for unlabelled matching), and [16] (for labelled matching), and adopt the best-possible implementation. For SHRCUBE, we refer to [20] to implement "Hypercube Optimization" for better hypercube sharing.
We make the following contributions in the paper.
(1) A benchmarking platform based on Timely dataflow system for distributed subgraph matching. We implement four distributed subgraph matching strategies (and the general optimizations) that covers seven state-of-the-art algorithms: StarJoin [51], MultiwayJoin [12], PSgL [50], TwinTwigJoin [35], CliqueJoin [37], CrystalJoin [45] and BiGJoin [13]. Our implementation is generic to handle arbitrary query, including the labelled and directed query, and thus can guide practical use.
(2) Three general-purpose optimizations -Batching, TrIndexing and Compression. We investigate the literature on the optimization strategies, and spotlight the three general-purpose optimizations. We propose heuristics to incorporate the three optimizations into BINJOIN and WOPTJOIN strategies, with no need of query-specific adjustments from human experts. The three optimizations can be flexibly configured in any combination.
(3) In-depth experimental studies. In order to extensively evaluate the performance of each strategy and the effectiveness of the optimizations, we use data graphs of different sizes and densities, including a sparse road network, a dense ego network, and a web-scale graph that is larger than each machine's configured memory. We select query graphs of various characteristics that are either from existing works or suitable for benchmarking purposes. In addition to running time, we measure the communication cost, memory usage and other metrics to help reason about the performance.
(4) A practical guide of distributed subgraph matching. Through empirical analysis covering the variances of join strategies, optimizations, join plans, we propose a practical guide for distributed subgraph matching. We also inspire interesting future work based on the experimental findings.
Organization.
The rest of the paper is organized as follows. Section 2 defines the problem of subgraph matching and introduces preliminary knowledge. Section 3 surveys the representative algorithms and our implementation details following the categories of BINJOIN, WOPTJOIN, SHRCUBE and OTHERS. Section 4 investigates the three general-purpose optimizations and devises heuristics for applying them to BINJOIN and WOPTJOIN algorithms. Section 5 demonstrates the experimental results and our in-depth analysis. Section 7 discusses the related works, and Section 8 concludes the whole paper.
Preliminaries
Problem Definition
Graph Notations. A graph g is defined as a 3-tuple, $g = (V_g, E_g, L_g)$, where $V_g$ is the vertex set, $E_g \subseteq V_g \times V_g$ is the edge set of g, and $L_g$ is a label function that maps each vertex $\mu \in V_g$ and/or each edge $e \in E_g$ to a label. Note that for an unlabelled graph, $L_g$ simply maps all vertices and edges to $\emptyset$. For a vertex $\mu \in V_g$, denote $N_g(\mu)$ as the set of neighbors, $d_g(\mu) = |N_g(\mu)|$ as the degree of $\mu$, and $d_g = \frac{2|E_g|}{|V_g|}$ and $D_g = \max_{\mu \in V_g} d_g(\mu)$ as the average and maximum degree, respectively. A subgraph $g'$ of g, denoted $g' \subseteq g$, is a graph that satisfies $V_{g'} \subseteq V_g$ and $E_{g'} \subseteq E_g$.
Given $V' \subseteq V_g$, we define the induced subgraph $g(V')$ as the subgraph induced by $V'$, that is, $g(V') = (V', E(V'), L_g)$, where $E(V') = \{e = (\mu, \mu') \mid e \in E_g, \mu \in V' \wedge \mu' \in V'\}$. We say $V' \subseteq V_g$ is a vertex cover of g if $\forall e = (\mu, \mu') \in E_g$, $\mu \in V'$ or $\mu' \in V'$. A minimum vertex cover $V^c_g$ is a vertex cover of g that contains the minimum number of vertices. A connected vertex cover is a vertex cover whose induced subgraph is connected, among which a minimum connected vertex cover, denoted $V^{cc}_g$, is the one with the minimum number of vertices.
Data and Query Graph. We denote the data graph as G, and let $N = |V_G|$, $M = |E_G|$. Denote the data vertex of id i as $u_i$, where $1 \le i \le N$. Note that the data vertices have been reordered such that if $d_G(u) < d_G(u')$, then $id(u) < id(u')$. We denote the query graph as Q, and let $n = |V_Q|$, $m = |E_Q|$, and $V_Q = \{v_1, v_2, \cdots, v_n\}$.
Subgraph Matching. Given a data graph G and a query graph Q, we define subgraph isomorphism:

Definition 2.1. (Subgraph Isomorphism.) A subgraph isomorphism is defined as an injective mapping $f: V(Q) \to V(G)$ such that (1) $\forall v \in V(Q)$, $L_Q(v) = L_G(f(v))$; (2) $\forall (v, v') \in E(Q)$, $(f(v), f(v')) \in E(G)$, and $L_Q((v, v')) = L_G((f(v), f(v')))$.

A subgraph isomorphism is called a Match in this paper. With the query vertices listed as $\{v_1, v_2, \cdots, v_n\}$, we can simply represent a match f as $\{u_{k_1}, u_{k_2}, \cdots, u_{k_n}\}$, where $f(v_i) = u_{k_i}$ for $1 \le i \le n$.
The Subgraph Matching problem aims at finding all matches of Q in G. Denote $R_G(Q)$ (or R(Q) when the context is clear) as the result set of Q in G. As in prior works [35,37,50], we apply symmetry breaking for unlabelled matching to avoid the duplicate enumeration caused by automorphism. Specifically, we first assign a partial order $O_Q$ to the query graph according to [26]. Here, $O_Q \subseteq V_Q \times V_Q$, and $(v_i, v_j) \in O_Q$ means $v_i < v_j$. In unlabelled matching, a match f must satisfy the order constraint: $\forall (v, v') \in O_Q$, it holds that $f(v) < f(v')$. Note that we do not consider the order constraint in labelled matching.

Example 2.1. In Figure 1, we present a query graph Q and a data graph G. For unlabelled matching, we give the partial order $O_Q = \{(v_1, v_3), (v_2, v_4)\}$ under the query graph. There are three matches: $\{u_1, u_2, u_6, u_5\}$, $\{u_2, u_5, u_3, u_6\}$ and $\{u_4, u_3, u_6, u_5\}$. It is easy to check that these matches satisfy the order constraint. Without the order constraint, there are actually four automorphic matches corresponding to each of the above matches [12]. For labelled matching, we use different fillings to represent the labels; there are two matches accordingly, $\{u_1, u_2, u_6, u_5\}$ and $\{u_4, u_3, u_6, u_5\}$.

By treating the query vertices as attributes and the data edges as a relational table, we can write a subgraph matching query as a multiway join of the edge relations. For example, regardless of label and order constraints, the query of Example 2.1 can be written as the following join:

$R(Q) = E(v_1, v_2) \Join E(v_2, v_3) \Join E(v_3, v_4) \Join E(v_1, v_4) \Join E(v_2, v_4)$.    (1)
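To make the order constraint concrete, the following is a minimal sketch (not taken from the paper's codebase; the function name and representation are hypothetical) of checking whether a candidate match respects a given partial order.

```rust
/// Check the symmetry-breaking order constraint of a candidate match.
/// `m[i]` is the data vertex id matched to query vertex v_{i+1};
/// `order` holds the index pairs (i, j) of O_Q, meaning f(v_{i+1}) < f(v_{j+1}) must hold.
fn satisfies_order(m: &[u32], order: &[(usize, usize)]) -> bool {
    order.iter().all(|&(i, j)| m[i] < m[j])
}
```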
The join formulation of Equation 1 motivates researchers to leverage the join operation for large-scale subgraph matching, given that joins can be easily distributed and are natively supported in many distributed data engines like Spark [54] and Flink [17].
Timely Dataflow System
Timely is a distributed data-parallel dataflow system [43]. The minimum processing unit of Timely is a worker, which can simply be seen as a process that occupies a CPU core. Typically, one physical multi-core machine can run several workers. Timely follows the shared-nothing dataflow computation model [22] that abstracts the computation as a dataflow graph. In the dataflow graph, a vertex (a.k.a. operator) defines the computing logic, and the edges between the operators represent the data streams. One operator can accept multiple input streams, feed them to its computation, and produce (typically) one output stream. After the dataflow graph for a certain computing task is defined, it is distributed to each worker in the cluster, and further translated into a physical execution plan. Based on the physical plan, each worker processes the task in parallel while accepting its corresponding input portion.
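To illustrate the programming model, below is a minimal, self-contained sketch of a Timely dataflow, assuming the standard open-source timely crate; the toy pipeline (streaming integers, exchanging them by value, and inspecting them) is purely illustrative and unrelated to the benchmarking code.

```rust
extern crate timely;
use timely::dataflow::operators::{Exchange, Inspect, ToStream};

fn main() {
    timely::execute_from_args(std::env::args(), |worker| {
        let index = worker.index();
        worker.dataflow::<u32, _, _>(|scope| {
            (0..10u64)
                .to_stream(scope)                 // each worker emits 0..10
                .exchange(|x| *x)                 // shuffle records across workers by value
                .inspect(move |x| println!("worker {} saw {}", index, x));
        });
    })
    .unwrap();
}
```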
Algorithm Survey
We survey the distributed subgraph matching algorithms following the categories of BINJOIN, WOPTJOIN, SHRCUBE, and OTHERS. We also show that CliqueJoin is a variant of GenericJoin [44], and is thus worst-case optimal.
BinJoin
The simplest BINJOIN algorithm uses the data edges as the base relation: it starts from one edge and expands by one edge in each join. For example, to solve the join of Equation 1, a simple plan is shown in Figure 2a. The join plan is straightforward, but the intermediate results, especially $R_2$ (a 3-path), can be huge. To improve the performance of BINJOIN, people devoted their efforts to (1) using more complex base relations other than edges, and (2) devising a better join plan P. The base relations $B_{[q]}$ represent the matches of a set of sub-structures $[q]$ of the query graph Q. Each $p \in [q]$ is called a join unit, and it must satisfy $V_Q = \bigcup_{p \in [q]} V_p$ and $E_Q = \bigcup_{p \in [q]} E_p$.

[Figure 2: (a) a left-deep join plan and (b) a bushy join plan for Equation 1, with intermediate relations $R_1$, $R_2$, $R_3$ and joins $J_1$-$J_4$.]
With the data graph partitioned across the cluster, [37] constrains the join unit to be a structure whose results can be independently computed within each partition (i.e. embarrassingly parallel [29]). It is not hard to see that when each vertex has full access to its neighbors in the partition, we can compute the matches of a k-star (a star with k leaves) rooted on the vertex u by enumerating all k-combinations within $N_G(u)$. Therefore, a star is a qualified and indeed widely used join unit.
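As a concrete illustration of why a star can be computed inside one partition, here is a small sketch (a hypothetical helper, not the paper's code) that enumerates k-star matches rooted at a vertex from its local adjacency list, following the k-combination argument above.

```rust
/// Enumerate the matches of a k-star rooted at `root` within one partition,
/// assuming the adjacency list `neighbors` of `root` is fully available
/// locally (as hash partitioning guarantees). Each match is the root followed
/// by one k-combination of its neighbors, matching the k-combination argument
/// in the text.
fn k_star_matches(root: u32, neighbors: &[u32], k: usize) -> Vec<Vec<u32>> {
    fn rec(neigh: &[u32], start: usize, k: usize,
           current: &mut Vec<u32>, results: &mut Vec<Vec<u32>>, root: u32) {
        if current.len() == k {
            let mut m = vec![root];
            m.extend_from_slice(current);
            results.push(m);
            return;
        }
        for i in start..neigh.len() {
            current.push(neigh[i]);
            rec(neigh, i + 1, k, current, results, root);
            current.pop();
        }
    }
    let mut results = Vec::new();
    let mut current = Vec::with_capacity(k);
    rec(neighbors, 0, k, &mut current, &mut results, root);
    results
}
```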
Given the base relations, the join plan P determines the order of processing the binary joins. A join plan is left-deep if there is at least one base relation involved in each join; otherwise it is bushy. For example, the join plan in Figure 2a is left-deep, and a bushy join plan is shown in Figure 2b. Note that the bushy plan avoids the expensive $R_2$ of the left-deep plan, and is generally better.
StarJoin. As the name suggests, StarJoin uses star as the join unit, and it follows the left-deep join order. To decompose the query graph, it first locates the vertex cover of the query graph, and each vertex in the cover and its unused neighbors naturally form a star [51]. A StarJoin plan for Equation 1 is
$(J_1)\ R(Q) = Star(v_2; \{v_1, v_3, v_4\}) \Join Star(v_4; \{v_2, v_3\})$,
where Star(r; L) denotes a Star relation (the matches of the star) with r as the root, and L as the set of leaves.
TwinTwigJoin. Enumerating a k-star on a vertex of degree d renders $O(d^k)$ cost. We refer to star explosion as the case of enumerating stars on a large-degree vertex. Lai et al. proposed TwinTwigJoin [35] to address this issue of StarJoin by forcing the join plan to use a TwinTwig (a star of at most two edges) instead of a general star as the join unit. Intuitively, this helps ameliorate the star explosion by constraining the cost of each join unit from $d^k$ for arbitrary k to at most $d^2$. TwinTwigJoin follows StarJoin in using a left-deep join order. The authors proved that TwinTwigJoin is instance optimal with respect to StarJoin, that is, given any general StarJoin plan in the left-deep join order, we can rewrite it as an alternative TwinTwigJoin plan that incurs no more cost (in the big-O sense) than the original StarJoin, where the cost is evaluated based on the Erdös-Rényi random graph (ER) model [23]. A TwinTwigJoin plan for Equation 1 is
$(J_1)\ R_1(v_1, v_2, v_3, v_4) = TwinTwig(v_1; \{v_2, v_4\}) \Join TwinTwig(v_2; \{v_3, v_4\})$; $(J_2)\ R(Q) = R_1(v_1, v_2, v_3, v_4) \Join TwinTwig(v_3; \{v_4\})$,    (2)
where TwinTwig(r; L) denotes a TwinTwig relation with r as the root, and L as the leaves.
CliqueJoin. TwinTwigJoin hampers star explosion to some extent, but still suffers from the problems of long execution ($\Omega(\frac{m}{2})$ rounds) and the suboptimal left-deep join plan. CliqueJoin resolves these issues by extending StarJoin in two aspects. Firstly, CliqueJoin applies the "triangle partition" strategy (Section 4.2), which enables CliqueJoin to use cliques, in addition to stars, as join units. The use of cliques can greatly shorten the execution especially when the query is dense, although it still degenerates to StarJoin when the query contains no clique subgraph. Secondly, CliqueJoin exploits the bushy join plan to approach optimality. A CliqueJoin plan for Equation 1 is:
$(J_1)\ R(Q) = Clique(\{v_1, v_2, v_4\}) \Join Clique(\{v_2, v_3, v_4\})$,    (3)
where Clique(V ) denotes a Clique relation of the involving vertices V .
Implementation Details. We implement the BINJOIN strategy based on the join framework proposed in [37] to cover StarJoin, TwinTwigJoin and CliqueJoin.
We use the power-law random graph (PR) model [21] to estimate the cost as in [37], and implement the dynamic programming algorithm of [37] to compute the cost-optimal join plan. Once the join plan is computed, we translate the plan into a Timely dataflow that processes each binary join using a Join operator. We implement the Join operator following Timely's official "pipeline" HashJoin example. We modify it into "batching-style": the mappers (senders) shuffle the data based on the join key, while the reducers (receivers) maintain the received key-value pairs in a hash table (until the mappers complete) for join processing. The reasons that we implement the join in "batching-style" are: (1) its performance is similar to the "pipeline" join as a whole; (2) it replays the original implementation in Hadoop; and (3) it favors the Batching optimization (Section 4.1).
WOptJoin
WOPTJOIN strategy processes subgraph matching by matching vertices in a predefined order. Given the query graph Q and $V_Q = \{v_1, v_2, \cdots, v_n\}$ as the matching order, the algorithm starts from an empty set and computes the matches of the subset $\{v_1, \cdots, v_i\}$ in the i-th round. Denote the partial results after the i-th round ($i < n$) as $R_i$, and let $p = \{u_{k_1}, u_{k_2}, \cdots, u_{k_i}\} \in R_i$ be one of the tuples. In the (i+1)-th round, the algorithm expands the results by matching $v_{i+1}$ with $u_{k_{i+1}}$ for p iff for all $1 \le j \le i$ with $(v_j, v_{i+1}) \in E_Q$, it holds that $(u_{k_j}, u_{k_{i+1}}) \in E_G$. It is immediate that the candidate matches of $v_{i+1}$, denoted $C(v_{i+1})$, can be obtained by intersecting the relevant neighbors of the matched vertices as

$C(v_{i+1}) = \bigcap_{1 \le j \le i \,\wedge\, (v_j, v_{i+1}) \in E_Q} N_G(u_{k_j})$.    (4)
BiGJoin. BiGJoin adopts the WOPTJOIN strategy in Timely dataflow system. The main challenge is to implement the intersection efficiently using Timely dataflow. For that purpose, the authors designed the following three operators:
• Count: Checking the number of neighbors of each $u_{k_j}$ in Equation 4 and recording the location (worker) of the one with the smallest neighbor set.
• Propose: Attaching the smallest neighbor set to p as $(p; C(v_{i+1}))$.
• Intersect: Sending $(p; C(v_{i+1}))$ to the worker that maintains each $u_{k_j}$ and updating $C(v_{i+1}) = C(v_{i+1}) \cap N_G(u_{k_j})$.
After the intersection, we expand p by pushing into p every vertex of $C(v_{i+1})$.
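The following single-machine sketch (hypothetical helpers, not the BiGJoin code) illustrates what Count, Propose and Intersect achieve together: pick the smallest neighbor list and intersect it with the rest to obtain the candidates of Equation (4).

```rust
/// Intersect two sorted neighbor lists (merge-style).
fn intersect_sorted(a: &[u32], b: &[u32]) -> Vec<u32> {
    let (mut i, mut j) = (0, 0);
    let mut out = Vec::new();
    while i < a.len() && j < b.len() {
        if a[i] < b[j] { i += 1; }
        else if a[i] > b[j] { j += 1; }
        else { out.push(a[i]); i += 1; j += 1; }
    }
    out
}

/// Compute C(v_{i+1}) from the sorted neighbor lists of the already-matched
/// vertices adjacent to v_{i+1}.
fn candidates(matched_neighbor_lists: &[&[u32]]) -> Vec<u32> {
    if matched_neighbor_lists.is_empty() {
        return Vec::new();
    }
    // Start from the smallest list (what Count/Propose select), then
    // successively intersect with the others (what Intersect does).
    let mut lists: Vec<&[u32]> = matched_neighbor_lists.to_vec();
    lists.sort_by_key(|l| l.len());
    let mut cand: Vec<u32> = lists[0].to_vec();
    for l in &lists[1..] {
        cand = intersect_sorted(&cand, l);
    }
    cand
}
```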
Implementation Details. We directly use the authors' implementation [5], but slightly modify the code to use the common graph data structure. We do not consider the dynamic version of BiGJoin in this paper, as the other strategies currently only support the static context. The matching order is determined using a greedy heuristic that starts with the vertex of the largest degree, and subsequently selects the next vertex with the most connections (id as tie breaker) to the already-selected vertices.
ShrCube
SHRCUBE strategy treats the join processing of the query Q as a hypercube of $n = |V_Q|$ dimensions. It attempts to divide the hypercube evenly across the workers in the cluster, so that each worker can complete its own share without data communication. However, it is normally required that each data tuple is duplicated into multiple workers. This renders a space requirement of $M/w^{1-\rho}$ for each worker, where M is the size of the input data, w is the number of workers and $0 < \rho \le 1$ is a query-dependent parameter. When $\rho$ is close to 1, the algorithm ends up maintaining the whole input data in each worker.
MultiwayJoin. MultiwayJoin applies the SHRCUBE strategy to solve subgraph matching in one single round. Consider w workers in the cluster, a query graph Q with
$V_Q = \{v_1, v_2, \ldots, v_n\}$ vertices and $E_Q = \{e_1, e_2, \ldots, e_m\}$, where $e_i = (v_{i_1}, v_{i_2})$. Regarding each query vertex $v_i$, assign a positive integer bucket number $b_i$ that satisfies $\prod_{i=1}^{n} b_i = w$.
The algorithm then divides the candidate data vertices for v i evenly into b i parts via a hash function
$h: u \to z_i$, where $u \in V_G$ and $1 \le z_i \le b_i$. This accordingly divides the whole computation into w shares, each of which can be indexed via an n-ary tuple $(z_1, z_2, \cdots, z_n)$ and is assigned to one worker. Afterwards, regarding each query edge $e_i = (v_{i_1}, v_{i_2})$, MultiwayJoin maps a data edge $(u, u')$ as $(z_1, \cdots, z_{i_1} = h(u), \cdots, z_{i_2} = h(u'), \ldots, z_n)$,
where, other than $z_{i_1}$ and $z_{i_2}$, each above $z_i$ iterates through $\{1, 2, \cdots, b_i\}$, and the edge will be routed to the workers accordingly. Take the triangle query with $E_Q = \{(v_1, v_2), (v_1, v_3), (v_2, v_3)\}$ as an example. According to [12], $b_1 = b_2 = b_3 = b = \sqrt[3]{w}$ is an optimal bucket number assignment. Each edge $(u, u')$ is then routed to the workers as: (1) $(h(u), h(u'), z)$ regarding $(v_1, v_2)$; (2) $(h(u), z, h(u'))$ regarding $(v_1, v_3)$; (3) $(z, h(u), h(u'))$ regarding $(v_2, v_3)$, where the above z iterates through $\{1, 2, \cdots, b\}$. Consequently, each data edge is duplicated roughly $3\sqrt[3]{w}$ times, and in expectation each worker will receive $3M/w^{1-1/3}$ edges. For unlabelled matching, MultiwayJoin utilizes the partial order of the query graph (Section 2.1) to reduce edge duplication; details can be found in [12].
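The following sketch illustrates the hypercube routing for the triangle query described above. It is a hypothetical helper (not the paper's code) assuming b buckets per query vertex (so $b^3 = w$ workers) and a simple modulo hash.

```rust
/// For a data edge (u, v), emit every worker share (z1, z2, z3) that must
/// receive a copy of this edge, once for each role the edge may play among
/// the three query edges of the triangle.
fn route_triangle_edge(u: u32, v: u32, b: u32) -> Vec<(u32, u32, u32)> {
    let h = |x: u32| x % b;           // hypothetical hash into {0, ..., b-1}
    let mut shares = Vec::new();
    for z in 0..b {
        shares.push((h(u), h(v), z)); // edge plays the role of (v1, v2)
        shares.push((h(u), z, h(v))); // edge plays the role of (v1, v3)
        shares.push((z, h(u), h(v))); // edge plays the role of (v2, v3)
    }
    shares.sort();
    shares.dedup();                   // the same worker may be hit more than once
    shares
}
```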
Implementation Details. There are two main impact factors on the performance of SHRCUBE. The first is the hypercube sharing, i.e. assigning a proper $b_i$ for each $v_i$. Beame et al. [15] generalized the problem of computing the optimal hypercube sharing for an arbitrary query as linear programming. However, the optimal solution may assign fractional bucket numbers, which is unwanted in practice. An easy refinement is to round down to an integer, but it will apparently result in idle workers. Chu et al. [20] addressed this issue via "Hypercube Optimization", that is, to enumerate all possible bucket sequences around the optimal solutions and choose the one that produces shares (product of bucket numbers) closest to the number of workers. We adopt this strategy in our implementation.
The second is the local algorithm. When the edges arrive at a worker, we collect them into a local graph (duplicate edges are removed), and use a local algorithm to compute the matches. For unlabelled matching, we study the state-of-the-art local algorithms from "EmptyHeaded" [11] and "DualSim" [34]. "EmptyHeaded" is inspired by Ngo's worst-case optimal algorithm [44]: it decomposes the query graph via "Hyper-Tree Decomposition", computes each decomposed part using a worst-case optimal join, and finally glues all parts together using hash joins. "DualSim" was proposed by [34] for subgraph matching in the external-memory setting. The idea is to first compute the matches of $V^{cc}_Q$; the remaining vertices $V_Q \setminus V^{cc}_Q$ can then be efficiently matched by enumerating the intersection of $V^{cc}_Q$'s neighbors. We find that "DualSim" actually produces the same query plans as "EmptyHeaded" for all our benchmarking queries (Figure 4) except $q_9$. We implement both algorithms for $q_9$, and "DualSim" performs better than "EmptyHeaded" on the GO, US, GP and LJ datasets (Table 2). As a result, we adopt "DualSim" as the local algorithm for MultiwayJoin. For labelled matching, we implement "CFLMatch" proposed in [16], which has been shown so far to have the best performance. Now we let each worker independently compute matches in its local graph. Simply doing so will result in duplicates, so we process deduplication as follows: given a match f that is computed in the worker identified by $t_w$, we can recover the tuple $t^f_e$ of the matched edge $(f(v), f(v'))$ regarding the query edge $e = (v, v')$; the match f is then retained if and only if $t_w = t^f_e$ for every $e \in E_Q$. To explain this, let us consider $b = 2$ and a match $\{u_0, u_1, u_2\}$ for a triangle query $(v_0, v_1, v_2)$, where $h(u_0) = h(u_1) = h(u_2) = 0$. It is easy to see that the match will be computed in the workers of $(0, 0, 0)$ and $(0, 0, 1)$, while the match in worker $(0, 0, 1)$ will be eliminated, as $(u_0, u_2)$, which matches the query edge $(v_0, v_2)$, cannot be hashed to $(0, 0, 1)$ regarding $(v_0, v_2)$. We could also avoid deduplication by separately maintaining each edge regarding the different query edges it stands for, and use the local algorithm proposed in [20], but this results in too many edge duplicates and drains our memory even when processing a medium-sized graph.
Others
PSgL and its implementation. PSgL iteratively processes subgraph matching via breadth-first traversal. All query vertices are assigned one of three statuses: "white" (initialized), "gray" (candidate) and "black" (matched). Denote $v_i$ as the vertex to match in the i-th round. The algorithm starts by matching the initial query vertex $v_1$ and coloring its neighbors "gray". In the i-th round, the algorithm applies the workload-aware expanding strategy at runtime, that is, it selects the $v_i$ to expand among all current "gray" vertices based on a greedy heuristic to minimize the communication cost [49]; the partial results from the previous round $R_{i-1}$ (specially, $R_0 = \emptyset$) will be distributed among the workers based on the candidate data vertices that can match $v_i$; in that worker, the algorithm computes $R_i$ by merging $R_{i-1}$ with the matches of the star formed by $v_i$ and its "white" neighbors $N^w_Q(v_i)$, namely $Star(v_i; N^w_Q(v_i))$; after $v_i$ is matched, $v_i$ is colored "black" and its "white" neighbors are colored "gray". Essentially, this process is analogous to StarJoin by processing $R_i = R_{i-1} \Join Star(v_i; N^w_Q(v_i))$. Thus, PSgL can be seen as an alternative implementation of StarJoin on Pregel [41]. In this work, we also implement PSgL using a Pregel API on Timely. Note that we introduce the Pregel API to replay the implementation of PSgL as closely as possible. In fact, it simply wraps Timely's primitive operators such as binary_notify and loop, and barely introduces extra cost to the implementation. Our experimental results demonstrate similar findings as prior work [37], namely that PSgL's performance is dominated by CliqueJoin [37]. Thus, we will not further discuss this algorithm in this paper.
CrystalJoin and its implementation. CrystalJoin aims at resolving the "output crisis" by compressing the results of subgraph matching [45]. The authors defined a structure called a crystal, denoted Q(x, y). A crystal is a subgraph of Q that contains two sets of vertices $V_x$ and $V_y$ ($|V_x| = x$ and $|V_y| = y$), where the induced subgraph $Q(V_x)$ is an x-clique, and every vertex in $V_y$ connects to all vertices of $V_x$. We call $V_x$ the clique vertices and $V_y$ the bud vertices. The algorithm first obtains the minimum vertex cover $V^c_Q$, and then applies Core-Crystal Decomposition to decompose the query graph into the core $Q(V^c_Q)$ and a set of crystals $\{Q_1(x_1, y_1), \ldots, Q_t(x_t, y_t)\}$. The crystals must satisfy that $\forall 1 \le i \le t$, $Q(V_{x_i}) \subseteq Q(V^c_Q)$, namely, the clique part of each crystal is a subgraph of the core. As an example, we plot a query graph and the corresponding core-crystal decomposition in Figure 3. Note that in the example, both crystals have an edge (i.e. a 2-clique) as the clique part. With core-crystal decomposition, the computation is accordingly split into three stages:
[Figure 3: Core-Crystal Decomposition of an example query Q over $\{v_1, \ldots, v_5\}$: the core is $Q(\{v_2, v_3, v_5\})$, and the two crystals are $Q_1(2, 1)$ with clique vertices $\{v_2, v_5\}$ and bud $\{v_1\}$, and $Q_2(2, 1)$ with clique vertices $\{v_3, v_5\}$ and bud $\{v_4\}$.]
1. Core computation. Given that $Q(V^c_Q)$ itself is a query graph, the algorithm can be recursively applied to compute $Q(V^c_Q)$ according to [45].
2. Crystal computation. A special case of a crystal is Q(x, 1), which is indeed an (x + 1)-clique. Suppose an instance of $Q(V_x)$ is $f_x = \{u_1, u_2, \ldots, u_x\}$; we can represent the matches w.r.t. $f_x$ as $(f_x, I_y)$, where $I_y = \bigcap_{i=1}^{x} N_G(u_i)$ denotes the set of vertices that can match $V_y$. This naturally extends to the case with $y > 1$, where any y-combination of the vertices of $I_y$ together with $f_x$ represents a match. This way, the matches of crystals can be largely compressed.
3. One-time assembly. This stage assembles the core instances and the compressed crystal matches to produce the final results. More precisely, this stage joins the core instances with the crystal matches.
We notice two technical obstacles to implementing CrystalJoin according to the paper. Firstly, it is worth noting that the core $Q(V^c_Q)$ may be disconnected, a case that can produce an exponential number of results. The authors applied a query-specific optimization in the original implementation to resolve this issue. Secondly, the authors proposed to precompute the cliques up to a certain k, while it is often cost-prohibitive to do so in practice. Take the UK dataset (Table 2) as an example: the triangles, 4-cliques and 5-cliques are respectively about 20, 600 and 40000 times larger than the graph itself. It is worth noting that the main purpose of this paper is not to study how well each algorithm performs for a specific query, which has its theoretical value but can barely guide practice. After communicating with the authors, we adapt CrystalJoin as follows. Firstly, we replace the core $Q(V^c_Q)$ with the induced subgraph of the minimum connected vertex cover, $Q(V^{cc}_Q)$. Secondly, instead of implementing CrystalJoin as a strategy, we use it as an alternative join plan (matching order) for WOPTJOIN. According to CrystalJoin, we first match $V^{cc}_Q$, while the matching order inside and outside $V^{cc}_Q$ still follows WOPTJOIN's greedy heuristic (Section 3.2). It is worth noting that this adaptation achieves high performance comparable to the original implementation. In fact, we also apply the CrystalJoin plan to BINJOIN, but it does not perform as well as the WOPTJOIN version, thus we do not discuss this implementation.
FullRep and its implementation. FULLREP simply maintains a full replica of the graph in each physical machine. Each worker picks one independent share of computation and solves it using existing local algorithm.
The implementation is straightforward. We let each worker pick its share of computation via a Round-Robin strategy: we settle an initial query vertex $v_1$, let the first worker match $v_1$ with $u_1$ and continue the remaining process, the second worker match $v_1$ with $u_2$, and so on. This simple strategy already works very well on balancing the load of our benchmarking queries (Figure 4). We use "DualSim" for unlabelled matching and "CFLMatch" for labelled matching, as for MultiwayJoin.
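The Round-Robin sharing amounts to the following tiny sketch (hypothetical helper, vertex ids 0-based for illustration): worker i only expands $v_1$ with the data vertices whose id is congruent to i modulo the number of workers, and runs the local algorithm on each such share.

```rust
/// Return the data vertex ids that worker `worker_id` (out of `num_workers`)
/// uses to match the initial query vertex v_1 under Round-Robin sharing.
fn my_shares(worker_id: usize, num_workers: usize, num_vertices: usize) -> Vec<usize> {
    (0..num_vertices).filter(|&j| j % num_workers == worker_id).collect()
}
```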
Worst-case Optimality.
Given a query Q and the data graph G, we denote the maximum possible result set as $\overline{R}_G(Q)$. Simply speaking, an algorithm is worst-case optimal if the aggregation of the total intermediate results is bounded by $\Theta(|\overline{R}_G(Q)|)$. Ngo et al. proposed a class of worst-case optimal join algorithms called GenericJoin [44], and we first overview this algorithm.
GenericJoin. Let the join be $R(V) = \Join_{F \in \Psi} R(F)$, where $\Psi = \{U \mid U \subseteq V\}$ and $V = \bigcup_{U \in \Psi} U$. Given a vertex subset $U \subseteq V$, let $\Psi_U = \{V' \mid V' \in \Psi \wedge V' \cap U \ne \emptyset\}$, and for a tuple $t \in R(V')$, denote $t_U$ as t's projection on U.
We then show the GenericJoin in Algorithm 1.
Algorithm 1: GenericJoin$(V, \Psi, \Join_{U \in \Psi} R(U))$
1: $R(V) \leftarrow \emptyset$;
2: if $|V| = 1$ then
3:     return $\bigcap_{U \in \Psi} R(U)$;
4: $V \to (I, J)$, where $\emptyset \ne I \subset V$ and $J = V \setminus I$;
5: $R(I) \leftarrow$ GenericJoin$(I, \Psi_I, \Join_{U \in \Psi_I} \pi_I(R(U)))$;
6: forall $t_I \in R(I)$ do
7:     $R(J)_{w.r.t.\, t_I} \leftarrow$ GenericJoin$(J, \Psi_J, \Join_{U \in \Psi_J} \pi_J(R(U) \ltimes t_I))$;
8:     $R(V) \leftarrow R(V) \cup \{t_I\} \times R(J)_{w.r.t.\, t_I}$;
9: return $R(V)$;
In Algorithm 1, the original join is recursively decomposed into two parts, R(I) and R(J), regarding the disjoint sets I and J. From line 5, it is clear that R(I) records R(V)'s projection on I, thus we have $|R(I)| \le |\overline{R}(V)|$, where $\overline{R}(V)$ is the maximum possible result of the query. Meanwhile, in line 7, the semi-join $R(U) \ltimes t_I = \{r \mid r \in R(U) \wedge r_{(U \cap I)} = t_{I,(U \cap I)}\}$ only retains those $R(J)_{w.r.t.\, t_I}$ that can end up in the join result, which implies that R(J) must also be bounded by the final results. This intuitively explains the worst-case optimality of GenericJoin, while we refer interested readers to [44] for a complete proof.
It is easy to see that BiGJoin is worst-case optimal. In Algorithm 1, we select I in line 4 by popping the edge relation $E(v_s, v_i)$ ($s < i$) in the i-th step. In line 7, the recursive call to solve the semi-join $R(U) \ltimes t_I$ actually corresponds to the intersection process.
Worst-case Optimality of CliqueJoin. Note that the two clique relations in Equation 3 interleave on a common edge $(v_2, v_4)$ of the query graph. This optimization, called "overlapping decomposition" [37], eventually contributes to CliqueJoin's worst-case optimality. Note that it is not possible to apply this optimization to StarJoin and TwinTwigJoin. We have the following theorem. Theorem 3.1. CliqueJoin is worst-case optimal while applying "overlapped decomposition".
Proof. We implement CliqueJoin using Algorithm 1 as follows. Note that Q(V) denotes the subgraph of Q induced by V. In line 2, we change the stopping condition to "Q(I) is either a clique or a star". In line 4, I is selected such that Q(I) is either a clique or a star. Note that by applying the "overlapping decomposition" in CliqueJoin, the sub-query of the J part must be the J-induced graph Q(J), and it will also include the edges of $E_{Q(I)} \cap E_{Q(J)}$, which implies that $R(Q(J)) = R(Q(J)) \ltimes R(Q(I))$, and this just reflects the semi-join in line 7. Therefore, CliqueJoin belongs to GenericJoin, and is thus worst-case optimal.
Optimizations
We introduce the three general-purpose optimizations, Batching, TrIndexing and Compression, in this section, and show how we orthogonally apply them to BINJOIN and WOPTJOIN algorithms. In the rest of the paper, we will use the strategy names BINJOIN, WOPTJOIN and SHRCUBE instead of their corresponding algorithms, as we focus on strategy-level comparison.
Batching
Let $R(V_i)$ be the partial results that match the given vertices $V_i = \{v_{s_1}, v_{s_2}, \ldots, v_{s_i}\}$ ($R_i$ for short if $V_i$ follows a given order), and let $R(V_j)$ denote the more complete results with $V_i \subset V_j$. Denote $R_j|R_i$ as the tuples in $R_j$ whose projection on $V_i$ equals $R_i$. Let us partition $R_i$ into b disjoint parts $\{R^1_i, R^2_i, \ldots, R^b_i\}$. We define Batching on $R_j|R_i$ as the technique to independently process the sub-tasks that compute $\{R_j|R^1_i, R_j|R^2_i, \ldots, R_j|R^b_i\}$. Obviously, $R_j|R_i = \bigcup_{k=1}^{b} R_j|R^k_i$.
WOptJoin. Recall from Section 3.2 that WOPTJOIN progresses according to a predefined matching order $\{v_1, v_2, \ldots, v_n\}$. In the i-th round, WOPTJOIN will Propose on each $p \in R_{i-1}$ to compute $R_i$. It is not hard to see that we can easily apply Batching to the computation of $R_i|R_{i-1}$ by randomly partitioning $R_{i-1}$. For simplicity, the authors implemented Batching on $R(Q)|R_1(v_1)$. Note that $R_1(v_1) = V_G$ in unlabelled matching, which means that we can achieve Batching simply by partitioning the data vertices. For short, we also say the strategy batches on $v_1$, and call $v_1$ the batching vertex. We follow the same idea to apply Batching to BINJOIN algorithms.
BinJoin. While it is natural for WOPTJOIN to batch on $v_1$, it is non-trivial to pick such a vertex for BINJOIN. Given a decomposition of the query graph $\{p_1, p_2, \ldots, p_s\}$, where each $p_i$ is a join unit, we have $R(Q) = R(p_1) \Join R(p_2) \Join \cdots \Join R(p_s)$. If we partition $R_1(v)$ so as to batch on $v \in V_Q$, we correspondingly split the join task, and one of the sub-tasks is $R(Q)|R^k_1(v) = R(p_1)|R^k_1(v) \Join \cdots \Join R(p_s)|R^k_1(v)$ ($R^k_1(v)$ is one partition of $R_1(v)$). Observe that if there exists a join unit p where $v \notin V_p$, we must have $R(p) = R(p)|R^k_1(v)$, which means R(p) has to be fully computed in each sub-task. Let us consider the example query in Equation 2:

$R(Q) = T_1(v_1, v_2, v_4) \Join T_2(v_2, v_3, v_4) \Join T_3(v_3, v_4)$.
Suppose we batch on $v_1$; then the above join can be divided into the following independent sub-tasks:

$R(Q)|R^1_1(v_1) = (T_1(v_1, v_2, v_4)|R^1_1(v_1)) \Join T_2(v_2, v_3, v_4) \Join T_3(v_3, v_4)$,
$R(Q)|R^2_1(v_1) = (T_1(v_1, v_2, v_4)|R^2_1(v_1)) \Join T_2(v_2, v_3, v_4) \Join T_3(v_3, v_4)$,
$\cdots$
$R(Q)|R^b_1(v_1) = (T_1(v_1, v_2, v_4)|R^b_1(v_1)) \Join T_2(v_2, v_3, v_4) \Join T_3(v_3, v_4)$.
It is not hard to see that we will have to re-compute $T_2(v_2, v_3, v_4)$ and $T_3(v_3, v_4)$ in all the above sub-tasks. Alternatively, if we batch on $v_4$, we can avoid such re-computation as $T_1$, $T_2$ and $T_3$ can all be partitioned in each sub-task. Inspired by this, for BINJOIN, we come up with the heuristic to apply Batching on the vertex that is present in as many join units as possible. Note that such a vertex can only be in the join key, as otherwise it must be absent from at least one side of the join. For a complex query, we can still have a join unit that does not contain any vertex for Batching after applying the above heuristic. In this case, we either re-compute the join unit or cache it on disk. Another problem caused by this is the potential memory burden of the join. Thus, we devise join-level Batching following the idea of external MergeSort. Specifically, we inject a Buffer-and-Batch operator for the two data streams before they arrive at the Join operator. Buffer-and-Batch functions in two parts:
• Buffer: While the operator receives data from the upstream, it buffers the data until reaching a given threshold. Then the buffer is sorted according to the join key's hash value and spilled to the disk. The buffer is reused for the next batch of data. • Batch: After the data to join is fully received, we read back the data from the disk in a batching manner, where each batch must include all join keys whose hash values are within a certain range.
While one batch of data is delivered to the Join operator, Timely allows us to supervise the progress and hold the next batch until the current batch completes. This way, the internal memory requirement is one batch of the data. Note that such join-level Batching is natively implemented in Hadoop's "Shuffle" stage, and we replay this process in Timely to improve the scalability of the algorithm.
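The essence of Buffer-and-Batch is grouping buffered tuples by the hash range of the join key so that each batch can be joined independently. The following is a simplified in-memory sketch of that grouping step (hypothetical helper; the real operator additionally sorts and spills buffers to disk):

```rust
/// Group buffered (join_key, payload) tuples into `b` batches by a hash of
/// the join key; every batch owns all tuples of the keys it is responsible
/// for, so the batches can be joined one after another.
fn into_batches(tuples: Vec<(u64, Vec<u32>)>, b: usize) -> Vec<Vec<(u64, Vec<u32>)>> {
    let mut batches = vec![Vec::new(); b];
    for (key, payload) in tuples {
        let h = (key as usize) % b;   // hypothetical hash; real code hashes the key properly
        batches[h].push((key, payload));
    }
    batches
}
```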
Triangle Indexing
As the name suggests, TrIndexing precomputes the triangles of the data graph and indexes them along with the graph data to prune infeasible results. The authors of BiGJoin [13] optimized the 4-clique query by using the triangles as base relations to join, which reduces the rounds of join and the network communication. In [45], the authors proposed to maintain not only the triangles but all k-cliques up to a given k. As we mentioned earlier, maintaining triangles already incurs huge extra cost, let alone larger cliques.
In addition to the default hash partition, Lai et al. proposed the "triangle partition" [37], which also incorporates into the partition the edges among the neighbors of the anchor vertex (each such edge forms a triangle with the anchor vertex). "Triangle partition" allows BINJOIN to use cliques as join units [37], which greatly reduces the intermediate results of certain queries and improves the performance. "Triangle partition" is de facto a variant of TrIndexing, which, instead of explicitly materializing the triangles, maintains them in the local graph structure (e.g. adjacency lists). As we will show in the experiments (Section 5), this saves a lot of space compared to explicit triangle materialization. Therefore, we adopt the "triangle partition" as the TrIndexing optimization in this work.
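The extra edges kept by a "triangle partition" can be illustrated with the following sketch (hypothetical helper; `adj` stands for a globally available adjacency map used only for illustration, and all vertices are assumed to be present in it):

```rust
use std::collections::{HashMap, HashSet};

/// Besides the adjacency list of the anchor vertex `u`, the triangle
/// partition also keeps every edge (v, w) between two neighbors of u,
/// since such an edge closes the triangle (u, v, w).
fn triangle_partition_extra_edges(u: u32, adj: &HashMap<u32, HashSet<u32>>) -> Vec<(u32, u32)> {
    let neighbors: Vec<u32> = adj[&u].iter().copied().collect();
    let mut extra_edges = Vec::new();
    for i in 0..neighbors.len() {
        for j in (i + 1)..neighbors.len() {
            let (v, w) = (neighbors[i], neighbors[j]);
            if adj[&v].contains(&w) {
                extra_edges.push((v, w)); // edge among neighbors: closes a triangle with u
            }
        }
    }
    extra_edges
}
```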
BinJoin. Obviously, BINJOIN becomes CliqueJoin with TrIndexing, and StarJoin (or TwinTwigJoin) otherwise.
With worst-case optimality guarantee (Section 3.5), BINJOIN should perform much better with TrIndexing, which is also observed in "Exp-1" of Section 5.
WOptJoin. In order to match $v_i$ in the i-th round, WOPTJOIN utilizes Count, Propose and Intersect to process the intersection of Equation 4. For ease of presentation, suppose $v_{i+1}$ connects to the first s query vertices $\{v_1, v_2, \ldots, v_s\}$; given a partial match $\{f(v_1), \ldots, f(v_s)\}$, we have $C(v_{i+1}) = \bigcap_{j=1}^{s} N_G(f(v_j))$. In the original implementation, it is required to send $(p; C(v_{i+1}))$ via the network to all machines that contain each $f(v_j)$ ($1 \le j \le s$) to process the intersection, which can render massive communication cost. In order to reduce the communication cost, we implement TrIndexing for WOPTJOIN as follows. We first group $\{v_1, \ldots, v_s\}$ such that for each group $U(v_x)$, we have $U(v_x) = \{v_x\} \cup \{v_y \mid (v_x, v_y) \in E_Q\}$. Because of TrIndexing, $N_G(f(v_y))$ ($\forall v_y \in U(v_x)$) is maintained in $f(v_x)$'s partition. Thus, we only need to send the prefix to $f(v_x)$'s machine, and the intersection within $U(v_x)$ can be done locally. We process the grouping using a greedy strategy that always constructs the largest group from the remaining vertices.
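The greedy grouping can be sketched as follows (a hypothetical helper, not the paper's code): among the remaining matched query vertices, repeatedly pick the group leader whose group (itself plus its query-graph neighbors among the remaining vertices) is largest.

```rust
/// Greedily group query vertices: each round picks the leader x whose group
/// {x} ∪ {y | (x, y) ∈ E_Q, y remaining} covers the most remaining vertices.
fn greedy_grouping<F: Fn(usize, usize) -> bool>(vertices: &[usize], adjacent: F) -> Vec<Vec<usize>> {
    let mut remaining: Vec<usize> = vertices.to_vec();
    let mut groups: Vec<Vec<usize>> = Vec::new();
    while !remaining.is_empty() {
        // Candidate group of each leader x.
        let best_group = remaining.iter()
            .map(|&x| {
                let mut g = vec![x];
                g.extend(remaining.iter().copied().filter(|&y| y != x && adjacent(x, y)));
                g
            })
            .max_by_key(|g| g.len())
            .unwrap();
        remaining.retain(|v| !best_group.contains(v));
        groups.push(best_group);
    }
    groups
}
```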
Remark 4.1. The "triangle partition" may result in maintaining a large portion of the data graph in certain partitions. Lai et al. pointed out this issue and proposed a space-efficient alternative by leveraging the vertex orderings [37]. That is, given the partitioned vertex u and two neighbors $u'$ and $u''$ that close a triangle with u, we place the edge $(u', u'')$ in the partition only when $u < u' < u''$. Although this alteration reduces storage, it may affect the effectiveness of TrIndexing for WOPTJOIN and the implementations of Batching and Compression for BINJOIN algorithms. Take WOPTJOIN as an example: after using the space-efficient "triangle partition", we should modify the above grouping as $U(v_x) = \{v_x\} \cup \{v_y \mid (v_x, v_y) \in E_Q \wedge (v_x, v_y) \in O_Q\}$. Note that the order between query vertices is for symmetry breaking (Section 2.1), and it may not be present in certain queries, which makes TrIndexing completely useless for WOPTJOIN in those cases.
Compression
Subgraph matching is a typical combinatorial problem and can easily produce results of exponential size. Compression aims to maintain the (intermediate) results in a compressed form to reduce resource allocation and communication cost. In the following, when we say "compress a query vertex", we mean maintaining its matched data vertices in the form of an array, instead of unfolding them in line with the one-one mapping of a match (Definition 2.1). Qiao et al. proposed CrystalJoin to study Compression in general for subgraph matching. As we introduced in Section 3.4, CrystalJoin first extracts the minimum vertex cover as the uncompressed part, and then it can compress the remaining query vertices as the intersection of certain uncompressed matches' neighbors. Such Compression leverages the fact that all dependencies (edges) of the compressed part that require further computation are already covered by the uncompressed part, thus it can stay compressed until the actual matches are requested. CrystalJoin inspires a heuristic for doing Compression, that is, to compress the vertices whose matches will not be used in any future computation. In the following, we apply the same heuristic to the other algorithms.
BinJoin. Obviously we cannot compress any vertex that is present in the join key. What we need to do is simply locate the vertices to compress in the join unit, namely the star and the clique. For a star, the root vertex must remain uncompressed, as the leaves' computation depends on it. For a clique, we can only compress one vertex, as otherwise the mutual connection between the compressed vertices would be lost. In a word, we compress two types of vertices for BINJOIN: (1) non-key and non-root vertices of a star join unit, and (2) one non-key vertex of a clique join unit.
WOptJoin. Based on a predefined join order $\{v_1, v_2, \ldots, v_n\}$, we can compress $v_i$ ($1 \le i \le n$) if there does not exist $v_j$ ($i < j$) such that $(v_i, v_j) \in E_Q$. In other words, $v_i$'s matches will never be involved in any future intersection (computation). Note that $v_n$ can be trivially compressed. With Compression, when $v_i$ is compressed, we maintain its matches as an array instead of unfolding them into the prefix like a normal vertex.
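The compressibility test above reduces to a one-liner, sketched here with a hypothetical helper for illustration:

```rust
/// For each position i in the matching order, v_i is compressible iff no
/// later vertex v_j (j > i) is adjacent to it in the query graph, i.e. its
/// matches never participate in a future intersection.
fn compressible<F: Fn(usize, usize) -> bool>(order: &[usize], adjacent: F) -> Vec<bool> {
    (0..order.len())
        .map(|i| !((i + 1)..order.len()).any(|j| adjacent(order[i], order[j])))
        .collect()
}
```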
Experiments
Experimental settings
Environments. We deploy two clusters for the experiments: (1) a local cluster of 10 machines connected via one 10GBps switch and one 1GBps switch, where each machine has 64GB memory, 1TB disk and one Intel Xeon CPU E3-1220 V6 3.00GHz with 4 physical cores; (2) an AWS cluster of 40 "r5-2xlarge" instances connected via a 10GBps switch, each with 64GB memory, 8 vCPUs and 500GB Amazon EBS storage. By default we use the local cluster of 10 machines with the 10GBps switch. We run 3 workers in each machine of the local cluster, and 6 workers per instance in the AWS cluster for Timely. The code is implemented based on the open-sourced Timely dataflow system [8] using Rust 1.32. We are still working towards open-sourcing the code, and the binaries together with their usage are temporarily provided to verify the results.
Metrics.
In the experiments, we measure the query time T as the slowest worker's wall-clock time, averaged over three runs. We allow 3 hours as the maximum running time for each test. We use OT and OOM to indicate that a test case runs out of the time limit and out of memory, respectively. By default we do not show the OOM results, for clearer presentation.
We divide T into two parts, the computation time $T_{comp}$ and the communication time $T_{comm}$. We measure $T_{comp}$ as the time the slowest worker spends on actual computation, by timing every computing function. We are aware that the actual communication time is hard to measure, as Timely overlaps computation and communication to improve throughput. We consider $T - T_{comp}$, which mainly records the time the worker waits for data from the network channel (a.k.a. communication time), while the other part of communication that overlaps computation is of less interest as it does not affect the query progress. As a result, we simply let $T_{comm} = T - T_{comp}$ in the experiments. We measure the maximum peak memory using Linux's "time -v" in each machine. We define the communication cost as the number of integers a worker receives during the process, and measure the maximum communication cost among the workers accordingly.
Dataset Formats. We preprocess each dataset as follows: we treat it as a simple undirected graph by removing self-loops and duplicate edges, and format it using "Compressed Sparse Row" (CSR) [3]. We relabel the vertex ids according to the degree and break ties arbitrarily.
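For readers unfamiliar with the layout, the CSR format amounts to the following minimal sketch (field and method names are illustrative, not the paper's actual data structure):

```rust
/// "Compressed Sparse Row": offsets[v]..offsets[v+1] indexes the neighbor
/// slice of vertex v inside the flat `neighbors` array.
struct Csr {
    offsets: Vec<usize>,   // length |V| + 1
    neighbors: Vec<u32>,   // length 2|E| for an undirected graph
}

impl Csr {
    fn neighbors_of(&self, v: usize) -> &[u32] {
        &self.neighbors[self.offsets[v]..self.offsets[v + 1]]
    }
    fn degree(&self, v: usize) -> usize {
        self.offsets[v + 1] - self.offsets[v]
    }
}
```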
Compared Strategies. In the experiments, we implement BINJOIN and WOPTJOIN with all Batching, TrIndexing and Compression optimizations (Section 4). SHRCUBE is implemented with "Hypercube Optimization" [20], and "DualSim" (unlabelled) [34] and "CFLMatch" (labelled) [16] as local algorithms. FULLREP is implemented with the same local algorithms as SHRCUBE.
Auxiliary Experiments. We have also conducted several auxiliary experiments in the appendix to study the strategies of BINJOIN, WOPTJOIN, SHRCUBE and FULLREP.
Unlabelled Experiments
Datasets. The datasets used in this experiment are shown in Table 2. All datasets except SY are downloaded from public sources, which are indicated by the letter in the bracket (S [9], W [10], D [1]). All statistics are measured by treating G as an undirected graph. Among the datasets, GO is a small dataset used to study cases of extremely large (intermediate) result sets; LJ, UK and FS are three popular datasets used in prior works, featuring the statistics of real social networks and web graphs; GP is the Google+ ego network, which is exceptionally dense; US and EU, on the other end, are sparse road networks. These datasets vary in number of vertices and edges, density and maximum degree, as shown in Table 2. We synthesize the SY data according to [18], which generates data with real-graph characteristics. Note that this data occupies roughly 80GB of space, and is larger than the configured memory of our machines. We synthesize the data because we did not find publicly accessible data of this size. Larger datasets like Clueweb [2] are available, but they are beyond the processing power of our current cluster.
Each dataset is hash partitioned ("hash") across the cluster. We also implement the "triangle partition" ("tri.") for the TrIndexing optimization (Section 4.2). To do so, we use BiGJoin to compute the triangles and send the triangle edges to the corresponding partitions. We record the time $T^*$ and the average number of edges $|E^*|$ of the two partition strategies. The partition statistics are recorded using the local cluster, except for SY, which is processed in the AWS cluster. From Table 2, we can see that $|E_{tri.}|$ is noticeably larger, around 1-10 times larger than $|E_{hash}|$. Note that for GP and UK, which either are dense or must contain a large dense community, the "triangle partition" can maintain a large portion of the data in each partition. Compared to complete triangle materialization, however, "triangle partition" turns out to be much cheaper. For example, the UK dataset contains around 27B triangles, which means each partition in our local cluster would on average take 0.9B triangles (three integers each); in comparison, UK's "triangle partition" only maintains an average of 0.16B edges (two integers each) according to Table 2.
We use US, GO and LJ as the default datasets in the experiments "Exp-1", "Exp-2" and "Exp-3" in order to collect useful feedback from successful queries, while we may not present certain cases when they do not give new findings.
Queries. The queries are presented in Figure 4. We also give the partial order under each query for symmetry breaking. The queries except $q_7$ and $q_8$ are selected based on all prior works [13,35,37,45,50], varying in number of vertices, density, and the vertex cover ratio $|V^{cc}_Q|/|V_Q|$, in order to better evaluate the strategies from different perspectives. The three queries $q_7$, $q_8$ and $q_9$ are relatively challenging given their result scale. For example, the smallest dataset GO contains 2,168B(illion) $q_7$, 330B $q_8$ and 1,883B $q_9$ results, respectively. For lack of space, we record the number of results of each successful query on each dataset in the appendix. Note that $q_7$ and $q_8$ are absent from existing works; we benchmark $q_7$ considering the importance of path queries in practice, and $q_8$ considering the varieties of its join plans.
Exp-1: Optimizations. We study the effectiveness of Batching, TrIndexing and Compression for both the BINJOIN and WOPTJOIN strategies, by comparing BINJOIN and WOPTJOIN with their respective variants with one optimization off, namely "without Batching", "without TrIndexing" and "without Compression". In the following, we use the suffixes "(w.o.b.)", "(w.o.t.)" and "(w.o.c.)" to represent the three variants. We use the queries $q_2$ and $q_5$, and the results on US and LJ are shown in Figure 5. By default, we use a batch size of 1,000,000 for both BINJOIN and WOPTJOIN (according to [13]) in this experiment, and we reduce the batch size when a case runs out of memory, as will be specified.
While comparing BINJOIN with BINJOIN(w.o.b.), we observe that Batching barely affects the performance of q2, but severely affects q5 on LJ (1800s vs 4000s (w.o.b.)). The reason is that we still apply join-level Batching for BINJOIN(w.o.b.). For the WOPTJOIN strategy, Batching has little impact on the performance. Surprisingly, after applying TrIndexing to WOPTJOIN, the improvement is only around 18% on average. We do another experiment in the same cluster but using a 1GBps switch, which shows WOPTJOIN is over 6 times faster than WOPTJOIN(w.o.t.) for both queries on LJ. Note that Timely uses separate threads to buffer received data from the network. Given the same computing speed, a faster network allows the data to be more fully buffered and hence causes less waiting for the following computation. Similar to BINJOIN, Compression greatly improves the performance when querying LJ, but has the opposite effect on US.
[Figure 4: the benchmark queries q1-q9; the partial order used for symmetry breaking is listed under each query.]
Exp-2 Challenging Queries. We study the challenging queries q7, q8 and q9 in this experiment, focusing on comparing BINJOIN and WOPTJOIN on the GO dataset. On the one hand, WOPTJOIN outperforms BINJOIN for q7 and q8. Their join plans for q7 are nearly the same, except that BINJOIN relies on a global shuffling on v3 to process the join, while WOPTJOIN sends the partial results to the machine that maintains the vertex to grow. It is hence reasonable to observe BINJOIN's poorer performance for q7, as shuffling is typically a more costly operation. The case of q8 is similar, so we do not discuss it further. On the other hand, even living with costly shuffling, BINJOIN still performs better for q9. Due to its vertex-growing nature, WOPTJOIN's "optimal plan" has to process the costly sub-query Q({v1, v2, v3, v4, v5}). On the US dataset, WOPTJOIN consistently outperforms BINJOIN for these queries. This is because US, unlike LJ, does not produce massive intermediate results, so BINJOIN's shuffling cost consistently dominates.
While processing complex queries like q8 and q9, we can study varieties of join plans for BINJOIN and WOPTJOIN. First of all, we want the readers to note that BINJOIN's join plan for q8 is different from the optimal plan originally given in [37]. The original "optimal" plan computes q8 by joining two tailed triangles (a triangle tailed with an edge), while this alternative plan works better by joining the upper "house-shaped" sub-query with the bottom triangle. In theory, the tailed triangle has a worst-case bound (AGM bound [44]) of O(M^2), smaller than the house's O(M^2.5), and BINJOIN's cost estimation actually favors this plan. However, we find that the number of tailed triangles is very close to that of the houses on GO, which renders the original plan's join of two tailed triangles costly. This indicates the insufficiency of both the cost estimation proposed in [37] and the worst-case optimal bound [13] when computing the join plan, which will be further discussed in Section 6.
Secondly, it is worth noting that we actually report the result of WOPTJOIN for q9 using the CrystalJoin plan, as it works better than WOPTJOIN's original "optimal" plan. For q9, CrystalJoin first computes Q(V^cc_Q), namely the 2-path {v1, v3, v5}; thereafter it can compress all remaining vertices v2, v4 and v6. In comparison, the "optimal" plan can only compress v2 and v6. In this case, CrystalJoin performs better because it configures larger compression. In [45], the authors proved that using the vertex cover as the uncompressed core renders maximum compression. However, this does not necessarily result in the best performance, considering that it can be costly to compute the core part. In our experiments, the unlabelled q4 and q8 and the labelled q8 are cases where the CrystalJoin plan performs worse than the original BiGJoin plan (with the Compression optimization), as the CrystalJoin plan does not render strictly larger compression while still having to process the costly core part. As a result, we only recommend the CrystalJoin plan when it leads to strictly larger compression.
The final observation is that the computation time dominates most of the evaluated cases, except BINJOIN's q 8 , WOPTJOIN and SHRCUBE's q 9 on US. We will further discuss this in Exp-3.
Exp-3 All-Around Comparisons. In this experiment, we run q 1 − q 6 using BINJOIN, WOPTJOIN, SHRCUBE and FULLREP across the datasets GP, LJ, UK, EU and FS. We also run WOPTJOIN with CrystalJoin plan in q 4 as it is the only query that renders different CrystalJoin plan from BiGJoin plan, and the results show that the performance with BiGJoin plan is consistently better. We report the results in Figure 7, where the communication time is plotted as gray filling. As a whole, among all 35 test cases, FULLREP achieves the best 85% completion rate, followed by WOPTJOIN and BINJOIN which complete 71.4% and 68.6% respectively, and SHRCUBE performs the worst with just 8.6% completion rate.
FULLREP typically outperforms the other strategies. Observe that WOPTJOIN's performance is often very close to FULLREP's. The reason is that WOPTJOIN's computing plans for these evaluated queries are similar to the "DualSim" plan adopted by FULLREP, and WOPTJOIN's extra communication cost has been reduced to very little when the TrIndexing optimization is adopted. While comparing WOPTJOIN with BINJOIN, BINJOIN is better for q3, a clique query (a join unit) that requires no join (a case of embarrassingly parallel computation). BINJOIN performs worse than WOPTJOIN in most other queries, which, as we mentioned before, is due to the costly shuffling. There is an exception (querying q1 on GP) where BINJOIN performs better than both FULLREP and WOPTJOIN. We explain this using our best speculation: GP is a very dense graph, where we observe nearly 100 vertices with degree around 10,000, and processing q1 on such vertices involves intersecting very large neighbor sets, which penalizes WOPTJOIN and FULLREP more than BINJOIN (a similar effect reappears in the labelled experiments).

We observe that the computation time T_comp dominates in most cases, as we mentioned in Exp-2. This is trivially true for SHRCUBE and FULLREP, but it may not be obviously so for WOPTJOIN and BINJOIN, given that they both need to transfer a massive amount of intermediate data. We investigate this and find two potential reasons. The first is Timely's highly optimized communication component, which allows the computation to overlap communication by using extra threads to receive and buffer the data from the network, so that it is mostly ready for the following computation. The second is the fast network. We re-run these queries using the 1GBps switch, and the results show the opposite trend: the communication time T_comm in turn takes over.
Exp-4 Web-Scale. We run the SY dataset in the AWS cluster of 40 instances. Note that FULLREP cannot be used, as SY is larger than each machine's memory. We use the queries q2 and q3, and present the results of BINJOIN and WOPTJOIN (SHRCUBE fails all cases due to OOM) in Table 3. The results are consistent with the prior experiments, but observe that the gap between BINJOIN and WOPTJOIN while querying q1 is larger. This is because we now deploy 40 AWS instances, and BINJOIN's shuffling cost increases.
Labelled Experiments
We use the LDBC social network benchmark (SNB) [6] for the labelled matching experiments, due to the lack of publicly available big labelled graphs. SNB provides a data generator that generates a synthetic social network of required statistics, and a document [7] that describes the benchmarking tasks, in which the complex tasks are actually subgraph matching. The join plans of BINJOIN and WOPTJOIN for the labelled experiments are generated as in the unlabelled case, but we use the label frequencies to break ties.
Datasets. We list the datasets and their statistics in Table 4. These datasets are generated using the "Facebook" mode with a duration of 3 years. A dataset's name, denoted DGx, indicates a scale factor of x. The labels are preprocessed into integers. For adaptations (1) and (2), note that our current implementation can support both cases, and we make the adaptations for consistency and simplicity. We adapt (3) and (4) because they currently do not conform to the subgraph matching problem studied in this paper. Adaptation (5) is due to our current limitation in supporting property graphs. We leave (3), (4) and (5) as interesting future work.
Exp-5 All-Around Comparisons. We now conduct the experiment using all queries on DG10 and DG60, and present the results in Figure 9. Here we compute the join plans for BINJOIN and WOPTJOIN using the unlabelled method, further using the label frequencies to break ties. The gray filling again represents communication time. FULLREP outperforms the other strategies in many cases, except that it performs slightly slower than BINJOIN for q3 and q5. This is because q3 and q5 are join units, so BINJOIN processes them locally in each machine as FULLREP does, while not building the indices that FULLREP's "CFLMatch" requires. Comparing with WOPTJOIN: among all these queries, only q8 configures a CrystalJoin plan that differs from the BiGJoin plan. The results show that the performance of WOPTJOIN drops by about 10 times when using the CrystalJoin plan. Note that the core part of q8 is a 5-path "Psn-City-Cty-City-Psn" with enormous intermediate results. As we mentioned in the unlabelled experiments, it may not always be wise to first compute the vertex-cover-induced core.
We now focus on comparing BINJOIN and WOPTJOIN. There are three cases that intrigue us. Firstly, observe that BINJOIN performs much better than WOPTJOIN while querying q 4 . The reason is high intersection cost as we discovered on GP dataset in unlabelled matching. Secondly, BINJOIN performs worse than WOPTJOIN in q 7 , which again is because of BINJOIN's costly shuffling. The third case is q 9 , the most complex query in the experiment. BINJOIN performs much better while querying q 9 . The bad performance of WOPTJOIN comes from the long execution plan together with costly intermediate results.
Both algorithms first expand the three "Psn"s and then grow via one of the "City"s to "Cty", but BINJOIN approaches this using one join (joining a triangle with a TwinTwig), while WOPTJOIN first expands to "City" and then further to "Cty"; the "City" expansion is the culprit of the slower run.

Discussions and Future Work
We discuss our findings and potential future work based on the experiments in Section 5. Eventually, we summarize the findings into a practical guide.
Strategy Selection. FULLREP is obviously the preferred choice when each machine can hold the graph data, while both WOPTJOIN and BINJOIN are good alternatives when the graph is larger than the capacity of the machine. Between BINJOIN and WOPTJOIN: on the one hand, BINJOIN may perform worse than WOPTJOIN (e.g. unlabelled q2, q4, q5) due to the expensive shuffling operation; on the other hand, BINJOIN can also outperform WOPTJOIN (e.g. unlabelled and labelled q9) by avoiding costly sub-queries thanks to query decomposition. One way to choose between BINJOIN and WOPTJOIN is to compare the cost of their respective join plans and select the one with the lower cost. For now, we can either use the cost estimation proposed in [37] or sum the worst-case bounds, but neither consistently gives the best solution, as will be discussed in "Optimal Join Plan". Alternatively, we refer to "EmptyHeaded" [11] to study a potential hybrid strategy of BINJOIN and WOPTJOIN. Note that "EmptyHeaded" is developed in the single-machine setting and does not take into consideration the impact of Compression; we hence leave such a hybrid strategy in the distributed context as interesting future work.
Optimizations. Our experimental results suggest always using Batching, using TrIndexing when each machine has sufficient memory to hold the "triangle partition", and using Compression when the data graph is not very sparse (e.g. d_G ≥ 5). Batching often does not impact performance, so we recommend always using it due to the unpredictability of the size of the (intermediate) results. TrIndexing is critical for BINJOIN, and it can greatly improve WOPTJOIN by reducing communication cost, while it requires extra storage to maintain the "triangle partition". Among the evaluated datasets, each "triangle partition" maintains an average of 30% of the data in our 10-machine cluster. Thus, we suggest a memory threshold of 60%|E_G| (half for the graph and half for running the algorithm) for TrIndexing in a cluster of the same or larger scale. Note that the threshold does not apply to extremely dense graphs. Among the three optimizations, Compression is the primary performance booster that improves the performance of BINJOIN and WOPTJOIN by 5 times on average in all but the cases on the very sparse road networks. For such very sparse data graphs, Compression can incur more cost than benefit.
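As a rough illustration of these recommendations (not part of the original systems), the following helper encodes the thresholds above; the function and parameter names are hypothetical.

```python
def pick_optimizations(avg_degree, tri_partition_edges, total_edges):
    """Heuristic configuration distilled from the experiments:
    - always enable Batching;
    - enable TrIndexing when the per-machine 'triangle partition' fits a
      budget of roughly 0.6 * |E_G| (half for the graph, half for running);
    - enable Compression unless the graph is very sparse (d_G < 5)."""
    return {
        "Batching": True,
        "TrIndexing": tri_partition_edges <= 0.6 * total_edges,
        "Compression": avg_degree >= 5,
    }
```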
Optimal Join Plan. It is challenging to systematically determine the optimal join plans for both BINJOIN and WOPTJOIN. From the experiments, we identify three impact factors: (1) the worst-case bound; (2) cost estimation based on data statistics; (3) favoring the optimizations, especially Compression. All existing works only partially consider these factors, and we have observed sub-optimal join plans in the experiments. For example, BINJOIN bases its "optimal" join plan on minimizing the estimated cost, but the resulting plan does not render the best performance for unlabelled q8; WOPTJOIN follows worst-case optimality, while it may encounter costly sub-queries for labelled and unlabelled q9; CrystalJoin focuses on maximizing the compression, while ignoring the fact that the vertex-cover-induced core part itself can be costly to compute. Additionally, there are other impact factors such as the partial orders of query vertices and the label frequencies, which have not been studied in this work due to space limits. It is another very interesting piece of future work to thoroughly study the optimal join plan while considering all the above impact factors.
Computation vs. Communication. We argue that distributed subgraph matching nowadays is a computation-intensive task. This claim holds when the cluster configures a high-speed network (e.g. ≥ 10GBps) and the data processor can efficiently overlap computation with communication. Note that the computation cost (either BINJOIN's join or WOPTJOIN's intersection) is lower-bounded by the output size, which equals the communication cost. Therefore, computation becomes the bottleneck if the network is good enough to guarantee that the data is delivered in time. Nowadays, the bandwidth of a local cluster commonly exceeds 10GBps, and the overlapping of computation and communication is widely used in distributed systems (e.g. Spark [54], Flink [17]). As a result, we tend to see distributed subgraph matching as a computation-intensive task, and we advocate that future research devote more effort into optimizing the computation while considering the following perspectives: (1) new advancements in hardware, for example co-processing on GPU in coupled CPU-GPU architectures [28] and the SIMD programming model on modern CPUs [30]; (2) general computing optimizations such as load balancing strategies and cache-aware graph data accessing [53].
A Practical Guide. Based on the experimental findings, we propose a practical guide for distributed subgraph matching in Figure 10. Note that this practical guide is based on the current progress of the literature, and future work is needed, for example to study the hybrid strategy and the impact factors of the optimal join plan, before we can arrive at a solid decision procedure for choosing between BINJOIN and WOPTJOIN.
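The top-level decision in Figure 10 can be paraphrased as a tiny rule of thumb; this is only our distillation of the experiments, not the figure itself, and the cost estimates passed in are whatever plan-cost model one trusts.

```python
def choose_strategy(graph_fits_in_one_machine, binjoin_plan_cost, woptjoin_plan_cost):
    """Prefer FullRep whenever a single machine can hold the data graph;
    otherwise pick between BinJoin and WOptJoin by comparing the estimated
    costs of their best join plans."""
    if graph_fits_in_one_machine:
        return "FullRep"
    return "BinJoin" if binjoin_plan_cost < woptjoin_plan_cost else "WOptJoin"
```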
Conclusions
In this paper, we implement four strategies and three general-purpose optimizations for distributed subgraph matching based on Timely dataflow system, aiming for a systematic, strategy-level comparisons among the state-of-the-art algorithms. Based on thorough empirical analysis, we summarize a practical guide, and we also motivate interesting future work for distributed subgraph matching.
A Auxiliary Experiments
Exp-6 Scalability of Unlabelled Matching. We vary the number of machines as 1, 2, 4, 6, 8, 10, and run the unlabelled queries q1 and q2 to see how each strategy (BINJOIN, WOPTJOIN, SHRCUBE and FULLREP) scales out. We further evaluate "Single Thread", a serial algorithm that is specially implemented for these two queries. Following [42], we define the COST of a strategy as the number of workers it needs to outperform "Single Thread", which is a comprehensive measurement of both efficiency and scalability. In this experiment, we query q1 and q2 on the popular dataset LJ, and show the results in Figure 11. Note that we only plot the communication and memory consumption for q1, as q2 follows a similar trend. We also tested the other datasets, such as the dense dataset GP, and the results are similar.
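For clarity, the COST measurement can be computed as below from a scaling curve; this is a small illustrative helper (names are ours), and it returns the first configuration that beats the baseline rather than an interpolated value.

```python
def cost_metric(single_thread_time, time_by_workers):
    """COST: the smallest number of workers at which a strategy runs faster
    than the hand-tuned single-threaded baseline."""
    for workers, elapsed in sorted(time_by_workers.items()):
        if elapsed < single_thread_time:
            return workers
    return float("inf")  # never outperforms the baseline

# e.g. cost_metric(100.0, {1: 180.0, 2: 95.0, 4: 50.0}) returns 2
```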
All strategies demonstrate reasonable scaling for both queries. In terms of COST, note that FULLREP's is slightly larger than 1, because "DualSim" is implemented generically for arbitrary queries, while "Single Thread" uses a hand-tuned implementation. We first analyze the results of q1. The COST ranking is FULLREP (1.6), WOPTJOIN (2.0), BINJOIN (3.1) and SHRCUBE (3.7). As expected, WOPTJOIN scales worse than FULLREP, and BINJOIN scales worse than WOPTJOIN because the shuffling cost increases with the number of machines. In terms of memory consumption, it is trivial that FULLREP constantly consumes memory equal to the graph size. Due to the use of Batching, both BINJOIN and WOPTJOIN consume very little memory for both queries. Observe that SHRCUBE consumes much more memory than WOPTJOIN and BINJOIN, even more than the graph data itself. This is because a certain worker may receive more edges (with duplicates) than the graph itself contains, which increases the peak memory consumption. For communication cost, both BINJOIN and WOPTJOIN demonstrate reasonable drops as the number of machines increases. SHRCUBE renders much less communication as expected, but it shows an increasing trend. This is actually a reasonable behavior of SHRCUBE, as more machines also mean more data duplicates. For q2, the COST ranking is FULLREP (2.4), WOPTJOIN (2.75), BINJOIN (3.82) and SHRCUBE (71.2). Here, SHRCUBE's is dramatically larger, with most time spent on deduplication (Section 3.3). The trends of memory consumption and communication cost for q2 are similar to those of q1 and thus are not further discussed.
Exp-7 Vary Densities for Labelled Matching. Based on DG10, we generate datasets with densities 10, 20, 40, 80 and 160 by randomly adding edges into DG10. Note that the density-10 dataset is the original DG10 in Table 4. We use the labelled queries q4 and q7 in this experiment, and show the results in Figure 12.

Exp-8 Vary Labels for Labelled Matching. We generate datasets with 0, 5, 10, 15 and 20 labels based on DG10. Note that there are 5 labels in the labelled queries q4 and q7, which we call the target labels. The 10-label dataset is the original DG10. For the one with 5 labels, we replace each label not among the target labels with one random target label. For the ones with more than 10 labels, we randomly choose some nodes and change their labels into other pre-defined labels until the dataset contains the required number of labels. The one with zero labels degenerates into unlabelled matching, and we use the unlabelled versions of q4 and q7 instead. The experiment demonstrates the transition from unlabelled matching to labelled matching, where the biggest drop happens for all algorithms. The drops continue as the number of labels increases, but less sharply once there is a sufficient number of labels (≥ 10). Observe that when there are very few labels, for example the 5-label case of q7, FULLREP actually performs worse than BINJOIN and WOPTJOIN. The "CFLMatch" algorithm [16] used by FULLREP relies heavily on label-based pruning; fewer labels render larger candidate sets and more recursive calls, resulting in a performance drop for FULLREP. Fewer labels may also enlarge the intermediate results of BINJOIN and WOPTJOIN, but these remain relatively small in the labelled case and do not create much burden for the 10GBps network.
B Auxiliary Materials
All Query Results. In Table 5, we show the number of results of every successful query on each dataset evaluated in this work. Note that DG10 and DG60 record the results of the labelled queries q1-q9.
| 13,431 |
1906.11518
|
2954023930
|
Recently, many distributed algorithms have emerged that aim at solving subgraph matching at scale. Existing algorithm-level comparisons failed to provide a systematic view of the pros and cons of each algorithm, mainly due to the intertwining of strategy and optimization. In this paper, we identify four strategies and three general-purpose optimizations from representative state-of-the-art works. We implement the four strategies with the optimizations on the common Timely dataflow system for a systematic, strategy-level comparison. Our implementation covers all representative algorithms. We conduct extensive experiments for both unlabelled matching and labelled matching to analyze the performance of distributed subgraph matching under various settings, which is finally summarized as a practical guide.
|
The unlabelled case is also known as subgraph listing (enumeration), and due to the gigantic (intermediate) results, people have been either seeking scalable parallel algorithms or devising techniques to compress the results. Other than the algorithms studied in this paper, the authors of @cite_46 proposed the external-memory-based parallel algorithm DualSim, which maintains the data graph in blocks on disk and matches the query graph by swapping blocks of data in and out to improve I/O efficiency.
|
{
"abstract": [
"Subgraph enumeration is important for many applications such as subgraph frequencies, network motif discovery, graphlet kernel computation, and studying the evolution of social networks. Most earlier work on subgraph enumeration assumes that graphs are resident in memory, which results in serious scalability problems. Recently, efforts to enumerate all subgraphs in a large-scale graph have seemed to enjoy some success by partitioning the data graph and exploiting the distributed frameworks such as MapReduce and distributed graph engines. However, we notice that all existing distributed approaches have serious performance problems for subgraph enumeration due to the explosive number of partial results. In this paper, we design and implement a disk-based, single machine parallel subgraph enumeration solution called DualSim that can handle massive graphs without maintaining exponential numbers of partial results. Specifically, we propose a novel concept of the dual approach for subgraph enumeration. The dual approach swaps the roles of the data graph and the query graph. Specifically, instead of fixing the matching order in the query and then matching data vertices, it fixes the data vertices by fixing a set of disk pages and then finds all subgraph matchings in these pages. This enables us to significantly reduce the number of disk reads. We conduct extensive experiments with various real-world graphs to systematically demonstrate the superiority of DualSim over state-of-the-art distributed subgraph enumeration methods. DualSim outperforms the state-of-the-art methods by up to orders of magnitude, while they fail for many queries due to explosive intermediate results."
],
"cite_N": [
"@cite_46"
],
"mid": [
"2423807589"
]
}
|
A SURVEY AND EXPERIMENTAL ANALYSIS OF DISTRIBUTED SUBGRAPH MATCHING
|
with no need of exchanging data. As a result, it typically renders much less communication cost than the BINJOIN and WOPTJOIN algorithms. MultiwayJoin adopts the idea of SHRCUBE for subgraph matching. In order to properly partition the computation without missing results, MultiwayJoin needs to duplicate each edge across multiple workers. As a result, MultiwayJoin may end up carrying almost the whole graph in each worker for certain queries [35,13] and thus scales out poorly.
OTHERS. Shao et al. proposed PSgL [50], which processes subgraph matching via breadth-first-style traversal. Starting from an initial query vertex, PSgL iteratively expands the partial results by merging in the matches of a certain vertex's unmatched neighbors. It has been pointed out in [35] that PSgL is actually a variant of StarJoin. Very recently, Qiao et al. proposed CrystalJoin [45], which aims at resolving the "output crisis" by compressing the (intermediate) results. The idea is to first compute the matches of the vertex cover of the query graph; the remaining vertices' matches can then be compressed as intersections of the vertex cover's neighbors to avoid costly Cartesian products.
Optimizations. Apart from join strategies, existing algorithms also explored a variety of optimizations, some of which are query- or algorithm-specific, while we spotlight three general-purpose optimizations: Batching, TrIndexing and Compression. Batching aims to divide the whole computation into sub-tasks that can be evaluated independently in order to save resource (memory) allocation. TrIndexing precomputes and indexes the triangles (3-cycles) of the graph to facilitate pruning. Compression attempts to maintain the (intermediate) results in a compressed form to reduce resource allocation and communication cost.
Motivations.
In this paper, we survey seven representative algorithms for distributed subgraph matching: StarJoin [51], MultiwayJoin [12], PSgL [50], TwinTwigJoin [35], CliqueJoin [37], CrystalJoin [45] and BiGJoin [13]. While all these algorithms embody some good merits in theory, existing algorithm-level comparisons failed to provide a systematic view of the pros and cons of each algorithm, due to several reasons. Firstly, the prior experiments did not take into consideration the differences of languages and the cost of the systems on which each implementation is based (Table 1). Secondly, some implementations hardcode query-specific optimizations for each query, which makes it hard to judge whether the observed performance comes from the algorithmic advancement or the hardcoded optimization. Thirdly, all BINJOIN and WOPTJOIN algorithms (more precisely, their implementations) intertwined the join strategy with some of the optimizations Batching, TrIndexing and Compression. We show in Table 1 how each optimization has been applied in the current implementations. For example, CliqueJoin only adopted TrIndexing and some query-specific Compression, while BiGJoin considered Batching in general, but TrIndexing only for one specific query (Compression was only discussed in the paper, but not implemented). People naturally wonder: "maybe it is better to adopt strategy A with optimization B", but unfortunately none of the existing implementations covers that combination. Last but not least, an important benchmarking of the FULLREP strategy is missing, that is, to maintain the whole graph in each partition and parallelize embarrassingly [29]. The FULLREP strategy requires no communication, and it should be the most efficient strategy when each machine can hold the whole graph (the case for most experimental settings nowadays).

| Algorithm | Strategy | Optimality | Platform | Optimizations |
|---|---|---|---|---|
| StarJoin [51] | BINJOIN | No | Trinity [49] | None |
| MultiwayJoin [12] | SHRCUBE | N/A | Hadoop [35], Myria [20] | N/A |
| PSgL [50] | OTHERS | No | Giraph [4] | None |
| TwinTwigJoin [35] | BINJOIN | No | Hadoop | Compression [36] |
| CliqueJoin [37] | BINJOIN | Yes (Section 6) | Hadoop | TrIndexing, some Compression |
| CrystalJoin [45] | OTHERS | N/A | Hadoop | TrIndexing, Compression |
| BiGJoin [13] | WOPTJOIN | Yes [13] | Timely Dataflow [43] | Batching, specific TrIndexing |

Table 1 summarizes the surveyed algorithms via the category of strategy, the optimality guarantee, and the status of the current implementations, including the base platform and how the three optimizations are adopted.
Our Contributions
To address the above issues, we aims at a systematic, strategy-level benchmarking of distributed subgraph matching in this paper. To achieve that goal, we implement all strategies, together with the three general-purpose optimizations for subgraph matching based on the Timely dataflow system [43]. Note that our implementation covers all seven representative algorithms. Here, we use Timely as the base system as it incurs less cost [42] than other popular systems like Giraph [4], Spark [54] and GraphLab [38], so that the system's impact can be reduced to the minimum.
We implement the benchmarking platform with our best effort, based on the papers of each algorithm and email communications with the authors. Our implementation is (1) generic, handling arbitrary queries without any hardcoded optimizations; (2) flexible, in that Batching, TrIndexing and Compression can be configured in any combination for the BINJOIN and WOPTJOIN algorithms; and (3) efficient, being comparable to and sometimes even faster than the original hardcoded implementations. Note that the three general-purpose optimizations are mainly used to reduce communication cost and are not useful to the SHRCUBE and FULLREP strategies, while we still devote a lot of effort to their implementations. Aware that their performance heavily depends on the local algorithm, we implement and compare the state-of-the-art local subgraph matching algorithms proposed in [34], [11] (for unlabelled matching), and [16] (for labelled matching), and adopt the best-possible implementation. For SHRCUBE, we refer to [20] to implement "Hypercube Optimization" for better hypercube sharing.
We make the following contributions in the paper.
(1) A benchmarking platform based on the Timely dataflow system for distributed subgraph matching. We implement four distributed subgraph matching strategies (and the general optimizations) that cover seven state-of-the-art algorithms: StarJoin [51], MultiwayJoin [12], PSgL [50], TwinTwigJoin [35], CliqueJoin [37], CrystalJoin [45] and BiGJoin [13]. Our implementation is generic enough to handle arbitrary queries, including labelled and directed queries, and thus can guide practical use.
(2) Three general-purpose optimizations: Batching, TrIndexing and Compression. We investigate the literature on optimization strategies and spotlight these three general-purpose optimizations. We propose heuristics to incorporate the three optimizations into the BINJOIN and WOPTJOIN strategies, without the need for query-specific adjustments from human experts. The three optimizations can be flexibly configured in any combination.
(3) In-depth experimental studies. In order to extensively evaluate the performance of each strategy and the effectiveness of the optimizations, we use data graphs of different sizes and densities, including sparse road networks, dense ego networks, and a web-scale graph that is larger than each machine's configured memory. We select query graphs of various characteristics that are either from existing works or suitable for benchmarking purposes. In addition to running time, we measure the communication cost, memory usage and other metrics to help reason about the performance.
(4) A practical guide for distributed subgraph matching. Through empirical analysis covering the variations of join strategies, optimizations and join plans, we propose a practical guide for distributed subgraph matching. We also motivate interesting future work based on the experimental findings.
Organization

The rest of the paper is organized as follows. Section 2 defines the problem of subgraph matching and introduces preliminary knowledge. Section 3 surveys the representative algorithms and our implementation details following the categories of BINJOIN, WOPTJOIN, SHRCUBE and OTHERS. Section 4 investigates the three general-purpose optimizations and devises heuristics for applying them to the BINJOIN and WOPTJOIN algorithms. Section 5 demonstrates the experimental results and our in-depth analysis. Section 6 discusses our findings and future work. Section 7 discusses the related works, and Section 8 concludes the whole paper.
Preliminaries
Problem Definition
Graph Notations. A graph g is defined as a 3-tuple g = (V_g, E_g, L_g), where V_g is the vertex set, E_g ⊆ V_g × V_g is the edge set of g, and L_g is a label function that maps each vertex µ ∈ V_g and/or each edge e ∈ E_g to a label. Note that for an unlabelled graph, L_g simply maps all vertices and edges to ∅. For a vertex µ ∈ V_g, denote N_g(µ) as its set of neighbors, d_g(µ) = |N_g(µ)| as the degree of µ, and d_g = 2|E_g|/|V_g| and D_g = max_{µ∈V_g} d_g(µ) as the average and maximum degree, respectively. A subgraph g' of g, denoted g' ⊆ g, is a graph that satisfies V_{g'} ⊆ V_g and E_{g'} ⊆ E_g.

Given V' ⊆ V_g, we define the induced subgraph g(V') as the subgraph induced by V', that is, g(V') = (V', E(V'), L_g), where E(V') = {e = (µ, µ') | e ∈ E_g, µ ∈ V' ∧ µ' ∈ V'}. We say V' ⊆ V_g is a vertex cover of g if ∀e = (µ, µ') ∈ E_g, µ ∈ V' or µ' ∈ V'. A minimum vertex cover V^c_g is a vertex cover of g that contains the minimum number of vertices. A connected vertex cover is a vertex cover whose induced subgraph is connected, among which a minimum connected vertex cover, denoted V^cc_g, is the one with the minimum number of vertices.
Data and Query Graph. We denote the data graph as G, and let N = |V_G|, M = |E_G|. Denote a data vertex of id i as u_i, where 1 ≤ i ≤ N. Note that the data vertices have been reordered such that if d_G(u) < d_G(u'), then id(u) < id(u'). We denote the query graph as Q, and let n = |V_Q|, m = |E_Q|, and V_Q = {v_1, v_2, ..., v_n}.
Subgraph Matching. Given a data graph G and a query graph Q, we define subgraph isomorphism:
Definition 2.1. (Subgraph Isomorphism.) A subgraph isomorphism is an injective mapping f : V(Q) → V(G) such that (1) ∀v ∈ V(Q), L_Q(v) = L_G(f(v)); (2) ∀(v, v') ∈ E(Q), (f(v), f(v')) ∈ E(G) and L_Q((v, v')) = L_G((f(v), f(v'))). A subgraph isomorphism is called a Match in this paper. With the query vertices listed as {v_1, v_2, ..., v_n}, we can simply represent a match f as {u_{k_1}, u_{k_2}, ..., u_{k_n}}, where f(v_i) = u_{k_i} for 1 ≤ i ≤ n.
The Subgraph Matching problem aims at finding all matches of Q in G. Denote R_G(Q) (or R(Q) when the context is clear) as the result set of Q in G. As in prior works [35,37,50], we apply symmetry breaking for unlabelled matching to avoid duplicate enumeration caused by automorphism. Specifically, we first assign a partial order O_Q to the query graph according to [26].
Here, O Q ⊆ V Q × V Q , and (v i , v j ) ∈ O Q means v i < v j .
In unlabelled matching, a match f must satisfy the order constraint: ∀(v, v') ∈ O_Q, it holds that f(v) < f(v'). Note that we do not consider the order constraint in labelled matching.

Example 2.1. In Figure 1, we present a query graph Q and a data graph G. For unlabelled matching, we give the partial order O_Q = {(v_1, v_3), (v_2, v_4)} under the query graph. There are three matches: {u_1, u_2, u_6, u_5}, {u_2, u_5, u_3, u_6} and {u_4, u_3, u_6, u_5}. It is easy to check that these matches satisfy the order constraint. Without the order constraint, there are actually four automorphic matches corresponding to each match above [12]. For labelled matching, we use different fillings to represent the labels. There are two matches accordingly: {u_1, u_2, u_6, u_5} and {u_4, u_3, u_6, u_5}.

By treating the query vertices as attributes and the data edges as relational tables, we can write a subgraph matching query as a multi-way join of the edge relations. For example, regardless of label and order constraints, the query of Example 2.1 can be written as the following join:

R(Q) = E(v_1, v_2) ⋈ E(v_2, v_3) ⋈ E(v_3, v_4) ⋈ E(v_1, v_4) ⋈ E(v_2, v_4).    (1)
This motivates researchers to leverage the join operation for large-scale subgraph matching, given that joins can be easily distributed and are natively supported in many distributed data engines like Spark [54] and Flink [17].
Timely Dataflow System
Timely is a distributed data-parallel dataflow system [43]. The minimum processing unit of Timely is a worker, which can be simply seen as a process that occupies a CPU core. Typically, one physical multi-core machine can run several workers. Timely follows the shared-nothing dataflow computation model [22] that abstracts the computation as a dataflow graph. In the dataflow graph, the vertex (a.k.a. operator) defines the computing logics and the edges in between the operators represent the data streams. One operator can accept multiple input streams, feed them to the computing, and produce (typically) one output stream. After the dataflow graph for certain computing task is defined, it is distributed to each worker in the cluster, and further translated into a physical execution plan. Based on the physical plan, each worker can accordingly process the task in parallel while accepting the corresponding input portion.
Algorithm Survey
We survey the distributed subgraph matching algorithms following the categories of BINJOIN, WOPTJOIN, SHRCUBE, and OTHERS. We also show that CliqueJoin is a variant of GenericJoin [44], and is thus worst-case optimal.
BinJoin
The simplest BINJOIN algorithm uses the data edges as base relations: it starts from one edge and expands by one edge in each join. For example, to solve the join of Equation 1, a simple plan is shown in Figure 2a. The join plan is straightforward, but the intermediate results, especially R_2 (a 3-path), can be huge. To improve the performance of BINJOIN, people devoted their efforts to (1) using more complex base relations other than edges, and (2) devising better join plans P. The base relations B_{[q]} represent the matches of a set of sub-structures [q] of the query graph Q. Each p ∈ [q] is called a join unit, and it must satisfy V_Q = ∪_{p∈[q]} V_p and E_Q = ∪_{p∈[q]} E_p.

[Figure 2: (a) a left-deep join plan and (b) a bushy join plan for the join of Equation 1.]
With the data graph partitioned across the cluster, [37] constrains the join unit to be the structure whose results can be independently computed within each partition (i.e. embarrassingly parallel [29]). It is not hard to see that when each vertex has full access to the neighbors in the partition, we can compute the matches of a k-star (a star of k leaves) rooted on the vertex u by enumerating all k-combinations within N G (u). Therefore, star is a qualified and indeed widely used join unit.
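For intuition, the following is a minimal sketch (our own illustration, assuming a local adjacency-set dictionary) of why a star is a valid join unit: its matches can be enumerated per partition from a root's neighbor list alone.

```python
from itertools import combinations

def star_matches(local_adj, k):
    """Enumerate matches of a k-star (a root with k leaves) inside one
    partition, assuming the partition holds the full neighbor set of every
    vertex it owns."""
    for root, neighbors in local_adj.items():
        for leaves in combinations(sorted(neighbors), k):
            yield (root,) + leaves
```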
Given the base relations, the join plan P determines an order of processing binary joins. A join plan is left-deep if there is at least one base relation involved in each join; otherwise it is bushy. For example, the join plan in Figure 2a is left-deep, and a bushy join plan is shown in Figure 2b. Note that the bushy plan avoids the expensive R_2 in the left-deep plan and is generally better.
StarJoin. As the name suggests, StarJoin uses star as the join unit, and it follows the left-deep join order. To decompose the query graph, it first locates the vertex cover of the query graph, and each vertex in the cover and its unused neighbors naturally form a star [51]. A StarJoin plan for Equation 1 is
(J_1) R(Q) = Star(v_2; {v_1, v_3, v_4}) ⋈ Star(v_4; {v_2, v_3}),
where Star(r; L) denotes a Star relation (the matches of the star) with r as the root, and L as the set of leaves.
TwinTwigJoin. Enumerating a k-star on a vertex of degree d renders O(d^k) cost. We refer to the case of enumerating stars on a large-degree vertex as star explosion. Lai et al. proposed TwinTwigJoin [35] to address this issue of StarJoin by forcing the join plan to use TwinTwigs (stars of at most two edges) instead of general stars as the join units. Intuitively, this helps ameliorate the star explosion by constraining the cost of each join unit from d^k for arbitrary k to at most d^2. TwinTwigJoin follows StarJoin in using the left-deep join order. The authors proved that TwinTwigJoin is instance optimal to StarJoin, that is, given any StarJoin plan in the left-deep join order, we can rewrite it as an alternative TwinTwigJoin plan that incurs no more cost (in the big-O sense) than the original StarJoin plan, where the cost is evaluated based on the Erdös-Rényi random graph (ER) model [23]. A TwinTwigJoin plan for Equation 1 is
(J_1) R_1(v_1, v_2, v_3, v_4) = TwinTwig(v_1; {v_2, v_4}) ⋈ TwinTwig(v_2; {v_3, v_4});
(J_2) R(Q) = R_1(v_1, v_2, v_3, v_4) ⋈ TwinTwig(v_3; {v_4}),    (2)
where TwinTwig(r; L) denotes a TwinTwig relation with r as the root, and L as the leaves.
CliqueJoin. TwinTwigJoin hampers star explosion to some extent, but still suffers from the problems of long execution (Ω(m/2) rounds) and the suboptimal left-deep join plan. CliqueJoin resolves these issues by extending StarJoin in two aspects. Firstly, CliqueJoin applies the "triangle partition" strategy (Section 4.2), which enables CliqueJoin to use cliques, in addition to stars, as join units. The use of cliques can greatly shorten the execution, especially when the query is dense, although CliqueJoin still degenerates to StarJoin when the query contains no clique subgraph. Secondly, CliqueJoin exploits the bushy join plan to approach optimality. A CliqueJoin plan for Equation 1 is:
(J_1) R(Q) = Clique({v_1, v_2, v_4}) ⋈ Clique({v_2, v_3, v_4}),    (3)
where Clique(V ) denotes a Clique relation of the involving vertices V .
Implementation Details. We implement the BINJOIN strategy based on the join framework proposed in [37] to cover StarJoin, TwinTwigJoin and CliqueJoin.
We use power-law random graph (PR) model [21] to estimate the cost as [37], and implement the dynamic programming algorithm [37] to compute the cost-optimal join plan. Once the join plan is computed, we translate the plan into Timely dataflow that processes each binary join using a Join operator. We implement the Join operator following Timely's official "pipeline" HashJoin example 5 . We modify it into "batching-style" -the mappers (senders) shuffle the data based on the join key, while the reducers (receivers) maintain the received key-value pairs in a hash table (until mapper completes) for join processing. The reasons that we implement the join as "batching-style" are, (1) its performance is similar to "pipeline" join as a whole; (2) it replays the original implementation in Hadoop; and (3) it favors the Batching optimization (Section 4.1).
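The batching-style hash join described above can be pictured with the following toy sketch (ours, not the Timely operator itself): one input is fully buffered into a hash table keyed by the join attributes, and the other input then probes it.

```python
from collections import defaultdict

def batching_hash_join(build_side, probe_side, build_key, probe_key):
    """Buffer `build_side` into a hash table (the 'batching' phase), then
    probe with `probe_side`; keys stand for the shared query vertices."""
    table = defaultdict(list)
    for row in build_side:
        table[build_key(row)].append(row)
    for row in probe_side:
        for match in table.get(probe_key(row), []):
            # A real implementation would merge on the shared query vertices
            # instead of simply concatenating the two tuples.
            yield match + row
```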
WOptJoin
WOPTJOIN strategy processes subgraph matching by matching vertices in a predefined order. Given the query graph Q and V_Q = {v_1, v_2, ..., v_n} as the matching order, the algorithm starts from an empty set and computes the matches of the subset {v_1, ..., v_i} in the i-th round. Denote the partial results after the i-th (i < n) round as R_i, and let p = {u_{k_1}, u_{k_2}, ..., u_{k_i}} ∈ R_i be one of the tuples. In the (i+1)-th round, the algorithm expands the results by matching v_{i+1} with u_{k_{i+1}} for p iff, for all 1 ≤ j ≤ i with (v_j, v_{i+1}) ∈ E_Q, it holds that (u_{k_j}, u_{k_{i+1}}) ∈ E_G. It is immediate that the candidate matches of v_{i+1}, denoted C(v_{i+1}), can be obtained by intersecting the relevant neighbors of the matched vertices as
C(v_{i+1}) = ⋂_{1≤j≤i ∧ (v_j, v_{i+1}) ∈ E_Q} N_G(u_{k_j}).    (4)
BiGJoin. BiGJoin adopts the WOPTJOIN strategy in Timely dataflow system. The main challenge is to implement the intersection efficiently using Timely dataflow. For that purpose, the authors designed the following three operators:
• Count: checking the number of neighbors of each u_{k_j} in Equation 4 and recording the location (worker) of the one with the smallest neighbor set.
• Propose: attaching the smallest neighbor set to p as (p; C(v_{i+1})).
• Intersect: sending (p; C(v_{i+1})) to the worker that maintains each u_{k_j} and updating C(v_{i+1}) = C(v_{i+1}) ∩ N_G(u_{k_j}).
After intersection, we will expand p by pushing into p every vertex of C(v i+1 ).
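Ignoring the distribution of the data (Count/Propose/Intersect are distributed operators in BiGJoin), the per-round extension logic amounts to the following sketch; the helper names and the dictionary-of-sets adjacency are our own assumptions.

```python
def extend_one_round(partial_matches, matched_qvs, next_qv, query_adj, data_adj):
    """Extend every partial match p by the candidates of `next_qv`,
    obtained by intersecting the neighbor sets of Equation 4."""
    deps = [j for j, vj in enumerate(matched_qvs) if next_qv in query_adj[vj]]
    for p in partial_matches:
        # Propose: start from the smallest relevant neighbor set.
        candidates = set(min((data_adj[p[j]] for j in deps), key=len))
        # Intersect: shrink with the remaining relevant neighbor sets.
        for j in deps:
            candidates &= data_adj[p[j]]
        for u in candidates:
            yield p + (u,)
```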
Implementation Details. We directly use the authors' implementation [5], but slightly modify the codes to use the common graph data structure. We do not consider the dynamic version of BiGJoin in this paper, as the other strategies currently only support static context. The matching order is determined using a greedy heuristic that starts with the vertex of the largest degree, and consequently selects the next vertex with the most connections (id as tie breaker) with already-selected vertices.
ShrCube
SHRCUBE strategy treats the join processing of the query Q as a hypercube of n = |V_Q| dimensions. It attempts to divide the hypercube evenly across the workers in the cluster, so that each worker can complete its own share without data communication. However, it is normally required that each data tuple be duplicated to multiple workers. This renders a space requirement of M/w^{1−ρ} for each worker, where M is the size of the input data, w is the number of workers and 0 < ρ ≤ 1 is a query-dependent parameter. When ρ is close to 1, the algorithm ends up maintaining the whole input data in each worker.
MultiwayJoin. MultiwayJoin applies the SHRCUBE strategy to solve subgraph matching in one single round. Consider w workers in the cluster, and a query graph Q with V_Q = {v_1, v_2, ..., v_n} and E_Q = {e_1, e_2, ..., e_m}, where e_i = (v_{i_1}, v_{i_2}). Regarding each query vertex v_i, assign a positive integer as its bucket number b_i such that ∏_{i=1}^{n} b_i = w. The algorithm then divides the candidate data vertices for v_i evenly into b_i parts via a hash function h : u → z_i, where u ∈ V_G and 1 ≤ z_i ≤ b_i. This accordingly divides the whole computation into w shares, each of which can be indexed via an n-ary tuple (z_1, z_2, ..., z_n) and is assigned to one worker. Afterwards, regarding each query edge e_i = (v_{i_1}, v_{i_2}), MultiwayJoin maps a data edge (u, u') to (z_1, ..., z_{i_1} = h(u), ..., z_{i_2} = h(u'), ..., z_n), where, other than z_{i_1} and z_{i_2}, each z_i above iterates through {1, 2, ..., b_i}, and the edge is routed to the corresponding workers. Take the triangle query with E_Q = {(v_1, v_2), (v_1, v_3), (v_2, v_3)} as an example. According to [12], b_1 = b_2 = b_3 = b = w^{1/3} is an optimal bucket-number assignment. Each edge (u, u') is then routed to the workers as: (1) (h(u), h(u'), z) regarding (v_1, v_2); (2) (h(u), z, h(u')) regarding (v_1, v_3); (3) (z, h(u), h(u')) regarding (v_2, v_3), where the above z iterates through {1, 2, ..., b}. Consequently, each data edge is duplicated roughly 3·w^{1/3} times, and in expectation each worker receives 3M/w^{2/3} edges. For unlabelled matching, MultiwayJoin utilizes the partial order of the query graph (Section 2.1) to reduce edge duplication, and details can be found in [12].
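As a concrete illustration of the routing rule for the triangle query (our own sketch; it ignores the symmetry-breaking refinement from [12] and treats both orientations of an undirected edge the same), consider:

```python
def route_edge_for_triangle(u, v, b, h):
    """Workers are addressed by coordinates (z1, z2, z3) with each zi in
    [0, b). An edge (u, v) is sent once per query edge of the triangle,
    with the free coordinate iterating over all b values."""
    targets = set()
    for z in range(b):
        targets.add((h(u), h(v), z))   # edge playing the role of (v1, v2)
        targets.add((h(u), z, h(v)))   # edge playing the role of (v1, v3)
        targets.add((z, h(u), h(v)))   # edge playing the role of (v2, v3)
    return targets

# e.g. route_edge_for_triangle(7, 12, b=2, h=lambda x: x % 2)
```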
Implementation Details. There are two main impact factors on the performance of SHRCUBE. The first is the hypercube sharing, i.e., assigning a proper b_i to each v_i. Beame et al. [15] generalized the problem of computing the optimal hypercube sharing for an arbitrary query as linear programming. However, the optimal solution may assign fractional bucket numbers, which are unwanted in practice. An easy refinement is to round down to an integer, but this apparently results in idle workers. Chu et al. [20] addressed this issue via "Hypercube Optimization", that is, to enumerate all possible bucket sequences around the optimal solution and choose the one that produces the number of shares (the product of bucket numbers) closest to the number of workers. We adopt this strategy in our implementation.
The second is the local algorithm. When the edges arrive at a worker, we collect them into a local graph (duplicate edges are removed) and use a local algorithm to compute the matches. For unlabelled matching, we study the state-of-the-art local algorithms from "EmptyHeaded" [11] and "DualSim" [34]. "EmptyHeaded" is inspired by Ngo's worst-case optimal algorithm [44]: it decomposes the query graph via "Hyper-Tree Decomposition", computes each decomposed part using a worst-case optimal join and finally glues all parts together using hash joins. "DualSim" was proposed by [34] for subgraph matching in the external-memory setting. The idea is to first compute the matches of V^cc_Q; the remaining vertices V_Q \ V^cc_Q can then be efficiently matched by enumerating the intersection of V^cc_Q's neighbors. We find that "DualSim" actually produces the same query plans as "EmptyHeaded" for all our benchmarking queries (Figure 4) except q9. We implement both algorithms for q9, and "DualSim" performs better than "EmptyHeaded" on the GO, US, GP and LJ datasets (Table 2). As a result, we adopt "DualSim" as the local algorithm for MultiwayJoin. For labelled matching, we implement "CFLMatch" proposed in [16], which has been shown so far to have the best performance. Now we let each worker independently compute matches in its local graph. Simply doing so results in duplicates, so we deduplicate as follows: given a match f computed in the worker identified by t_w, we can recover the tuple t^f_e of the matched edge (f(v), f(v')) regarding the query edge e = (v, v'); the match f is then retained if and only if t_w = t^f_e for every e ∈ E_Q. To explain this, consider b = 2 and a match {u_0, u_1, u_2} for a triangle query (v_0, v_1, v_2), where h(u_0) = h(u_1) = h(u_2) = 0. The match will be computed in the workers (0, 0, 0) and (0, 0, 1), while the match in worker (0, 0, 1) is eliminated because (u_0, u_2), which matches the query edge (v_0, v_2), cannot be hashed to (0, 0, 1) regarding (v_0, v_2). We could also avoid deduplication by separately maintaining each edge regarding the different query edges it stands for and using the local algorithm proposed in [20], but this results in too many edge duplicates, which drain our memory even when processing a medium-size graph.
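The deduplication rule can be phrased as a small check (an illustrative sketch with our own names; query vertices are indexed 0..n-1 and a worker is identified by its coordinate tuple):

```python
def keep_match(worker_coords, match, query_edges, h):
    """Retain a locally computed match only on its canonical worker: for
    every query edge (i, j), the worker's coordinates at positions i and j
    must equal the hashes of the matched data vertices."""
    for i, j in query_edges:
        if worker_coords[i] != h(match[i]) or worker_coords[j] != h(match[j]):
            return False
    return True

# e.g. keep_match((0, 0, 1), (u0, u1, u2), [(0, 1), (0, 2), (1, 2)], h) -> False
```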
Others
PSgL and its implementation. PSgL iteratively processes subgraph matching via breadth-first traversal. All query vertices are assigned one of three statuses: "white" (initialized), "gray" (candidate) and "black" (matched). Denote v_i as the vertex to match in the i-th round. The algorithm starts from matching the initial query vertex v_1 and coloring its neighbors "gray". In the i-th round, the algorithm applies the workload-aware expanding strategy at runtime, that is, it selects the v_i to expand among all current "gray" vertices based on a greedy heuristic that minimizes the communication cost [49]; the partial results from the previous round R_{i−1} (specially, R_0 = ∅) are distributed among the workers based on the candidate data vertices that can match v_i; in a certain worker, the algorithm computes R_i by merging R_{i−1} with the matches of the star formed by v_i and its "white" neighbors N^w_Q(v_i), namely Star(v_i; N^w_Q(v_i)); after v_i is matched, it is colored "black" and its "white" neighbors are colored "gray". Essentially, this process is analogous to StarJoin processing R_i = R_{i−1} ⋈ Star(v_i; N^w_Q(v_i)). Thus, PSgL can be seen as an alternative implementation of StarJoin on Pregel [41]. In this work, we also implement PSgL using a Pregel API on Timely. Note that we introduce the Pregel API to replay the implementation of PSgL as closely as possible. In fact, it simply wraps Timely's primitive operators such as binary_notify and loop, and barely introduces extra cost to the implementation. Our experimental results demonstrate similar findings as the prior work [37] that PSgL's performance is dominated by CliqueJoin [37]. Thus, we do not further discuss this algorithm in this paper.
CrystalJoin and its implementation. CrystalJoin aims at resolving the "output crisis" by compressing the results of subgraph matching [45]. The authors defined a structure called a crystal, denoted Q(x, y). A crystal is a subgraph of Q that contains two sets of vertices V_x and V_y (|V_x| = x and |V_y| = y), where the induced subgraph Q(V_x) is an x-clique and every vertex in V_y connects to all vertices of V_x. We call V_x the clique vertices and V_y the bud vertices. The algorithm first obtains the minimum vertex cover V^c_Q, and then applies Core-Crystal Decomposition to decompose the query graph into the core Q(V^c_Q) and a set of crystals {Q_1(x_1, y_1), ..., Q_t(x_t, y_t)}. The crystals must satisfy that ∀1 ≤ i ≤ t, Q(V_{x_i}) ⊆ Q(V^c_Q), namely, the clique part of each crystal is a subgraph of the core. As an example, we plot a query graph and the corresponding core-crystal decomposition in Figure 3. Note that in the example, both crystals have an edge (i.e. a 2-clique) as the clique part.

[Figure 3: core-crystal decomposition of an example query Q into the core {v_2, v_3, v_5} and two crystals Q_1(2, 1) and Q_2(2, 1).]

With core-crystal decomposition, the computation is accordingly split into three stages:

1. Core computation. Given that Q(V^c_Q) itself is a query graph, the algorithm can be recursively applied to compute Q(V^c_Q) according to [45].

2. Crystal computation. A special case of crystal is Q(x, 1), which is indeed an (x + 1)-clique. Suppose an instance of Q(V_x) is f_x = {u_1, u_2, ..., u_x}; we can represent the matches w.r.t. f_x as (f_x, I_y), where I_y = ⋂_{i=1}^{x} N_G(u_i) denotes the set of vertices that can match V_y. This naturally extends to the case with y > 1, where any y-combination of the vertices of I_y together with f_x represents a match. This way, the matches of crystals can be largely compressed.

3. One-time assembly. This stage assembles the core instances and the compressed crystal matches to produce the final results. More precisely, this stage joins the core instances with the crystal matches.
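Stage 2 can be sketched as follows (our illustration with assumed helper names): each x-clique instance keeps only the common neighborhood I_y, and the y-combinations are left to the assembly stage.

```python
def compressed_crystal_matches(clique_instances, data_adj, y):
    """Compressed matches of a crystal Q(x, y): for each instance f_x of the
    x-clique, store (f_x, I_y) with I_y the common neighborhood of f_x;
    expanding the y-combinations of I_y is deferred to the final assembly."""
    compressed = []
    for fx in clique_instances:
        iy = set.intersection(*(data_adj[u] for u in fx)) - set(fx)
        if len(iy) >= y:  # otherwise f_x cannot contribute any match
            compressed.append((fx, iy))
    return compressed
```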
We notice two technical obstacles to implementing CrystalJoin as described in the paper. Firstly, it is worth noting that the core Q(V^c_Q) may be disconnected, a case that can produce an exponential number of results. The authors applied a query-specific optimization in the original implementation to resolve this issue. Secondly, the authors proposed to precompute the cliques up to a certain size k, while it is often cost-prohibitive to do so in practice. Taking the UK dataset (Table 2) as an example, the triangles, 4-cliques and 5-cliques are respectively about 20, 600 and 40,000 times larger than the graph itself. It is worth noting that the main purpose of this paper is not to study how well each algorithm performs for a specific query, which has its theoretical value but can barely guide practice. After communicating with the authors, we adapt CrystalJoin in the following way. Firstly, we replace the core Q(V^c_Q) with the induced subgraph of the minimum connected vertex cover, Q(V^cc_Q). Secondly, instead of implementing CrystalJoin as a strategy, we use it as an alternative join plan (matching order) for WOPTJOIN. According to CrystalJoin, we first match V^cc_Q, while the matching order inside and outside V^cc_Q still follows WOPTJOIN's greedy heuristic (Section 3.2). It is worth noting that this adaptation achieves performance comparable to the original implementation. In fact, we also applied the CrystalJoin plan to BINJOIN, but it does not perform as well as the WOPTJOIN version, so we do not discuss that implementation.
FullRep and its implementation. FULLREP simply maintains a full replica of the graph in each physical machine. Each worker picks one independent share of the computation and solves it using an existing local algorithm.

The implementation is straightforward. We let each worker pick its share of computation via a Round-Robin strategy: we settle an initial query vertex v_1, let the first worker match v_1 with u_1 and continue the remaining process, the second worker match v_1 with u_2, and so on. This simple strategy already works very well at balancing the load of our benchmarking queries (Figure 4). We use "DualSim" for unlabelled matching and "CFLMatch" for labelled matching, as with MultiwayJoin.
Worst-case Optimality.
Given a query Q and the data graph G, we denote the maximum possible result set as R̄_G(Q). Simply speaking, an algorithm is worst-case optimal if the aggregation of the total intermediate results is bounded by Θ(|R̄_G(Q)|). Ngo et al. proposed a class of worst-case optimal join algorithms called GenericJoin [44], and we first overview this algorithm.
GenericJoin. Let the join be R(V) = ⋈_{U∈Ψ} R(U), where each U ∈ Ψ is a subset of V and V = ∪_{U∈Ψ} U. Given a vertex subset I ⊆ V, let Ψ_I = {U | U ∈ Ψ ∧ U ∩ I ≠ ∅}, and for a tuple t ∈ R(U) with U ∈ Ψ, denote t_I as t's projection on I.
We then show the GenericJoin in Algorithm 1.
Algorithm 1: GenericJoin(V, Ψ, ⋈_{U∈Ψ} R(U))
 1  R(V) ← ∅;
 2  if |V| = 1 then
 3      return ⋈_{U∈Ψ} R(U);
 4  V ← (I, J), where ∅ ≠ I ⊂ V and J = V \ I;
 5  R(I) ← GenericJoin(I, Ψ_I, ⋈_{U∈Ψ_I} π_I(R(U)));
 6  forall t_I ∈ R(I) do
 7      R(J)_{w.r.t. t_I} ← GenericJoin(J, Ψ_J, ⋈_{U∈Ψ_J} π_J(R(U) ⋉ t_I));
 8      R(V) ← R(V) ∪ {t_I} × R(J)_{w.r.t. t_I};
 9  return R(V);
In Algorithm 1, the original join is recursively decomposed into two parts, R(I) and R(J), regarding the disjoint sets I and J. From line 5, it is clear that R(I) records R(V)'s projection on I, thus |R(I)| ≤ |R̄(V)|, where R̄(V) denotes the maximum possible result set of the query. Meanwhile, in line 7, the semi-join R(U) ⋉ t_I = {r | r ∈ R(U) ∧ r agrees with t_I on U ∩ I} only retains those tuples of R(J) w.r.t. t_I that can end up in the join result, which implies that R(J) must also be bounded by the final results. This intuitively explains the worst-case optimality of GenericJoin; we refer interested readers to [44] for a complete proof.
It is easy to see that BiGJoin is worst-case optimal. In Algorithm 1, we select I in line 4 by popping the edge relation E(v_s, v_i) (s < i) in the i-th step; in line 7, the recursive call that solves the semi-join R(U) ⋉ t_I corresponds exactly to the intersection process.
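To make this specialization concrete, the following Rust sketch (a simplified, single-machine illustration under our own assumptions, not BiGJoin's actual code) extends partial matches one query vertex at a time by intersecting the neighbor lists of the already-matched query neighbors. Sorted adjacency lists are assumed, every later query vertex is assumed to connect to at least one earlier one, and symmetry breaking is omitted.

    // query_nbrs[i] lists the 0-based positions j < i of the earlier query
    // vertices adjacent to the query vertex at position i in the matching
    // order; query_nbrs[0] is unused since the first vertex is matched by
    // every data vertex.
    fn extend_all(adj: &[Vec<u32>], query_nbrs: &[Vec<usize>]) -> Vec<Vec<u32>> {
        let mut prefixes: Vec<Vec<u32>> =
            (0..adj.len() as u32).map(|u| vec![u]).collect();
        for i in 1..query_nbrs.len() {
            let mut next = Vec::new();
            for p in &prefixes {
                // Intersect N_G(f(v_j)) over all matched neighbors v_j.
                let mut cand = adj[p[query_nbrs[i][0]] as usize].clone();
                for &j in &query_nbrs[i][1..] {
                    cand.retain(|u| adj[p[j] as usize].binary_search(u).is_ok());
                }
                for u in cand {
                    if !p.contains(&u) { // keep the mapping injective
                        let mut q = p.clone();
                        q.push(u);
                        next.push(q);
                    }
                }
            }
            prefixes = next;
        }
        prefixes
    }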
Worst-case Optimality of CliqueJoin. Note that the two clique relations in Equation 3 interleave on one common edge (v_2, v_4) of the query graph. This optimization, called "overlapping decomposition" [37], eventually contributes to CliqueJoin's worst-case optimality. Note that it is not possible to apply this optimization to StarJoin and TwinTwigJoin. We have the following theorem. Theorem 3.1. CliqueJoin is worst-case optimal while applying "overlapped decomposition".
Proof. We implement CliqueJoin using Algorithm 1 in the following way. Note that Q(V) denotes the subgraph of Q induced by V. In line 2, we change the stopping condition to "the current sub-query is either a clique or a star". In line 4, I is selected such that Q(I) is either a clique or a star. Note that by applying the "overlapping decomposition" in CliqueJoin, the sub-query of the J part must be the J-induced subgraph Q(J), and it also includes the edges of E_{Q(I)} ∩ E_{Q(J)}, which implies that R(Q(J)) = R(Q(J)) ⋉ R(Q(I)) and exactly reflects the semi-join in line 7. Therefore, CliqueJoin belongs to GenericJoin, and is thus worst-case optimal.
Optimizations
We introduce the three general-purpose optimizations, Batching, TrIndexing and Compression, in this section, and show how we orthogonally apply them to the BINJOIN and WOPTJOIN algorithms. In the rest of the paper, we will refer to the strategies BINJOIN, WOPTJOIN and SHRCUBE instead of their corresponding algorithms, as we focus on strategy-level comparison.
Batching
Let R(V_i) be the partial results that match the given vertices V_i = {v_{s_1}, v_{s_2}, . . . , v_{s_i}} (R_i for short if V_i follows a given order), and let R(V_j) denote the more complete results with V_i ⊂ V_j. Denote R_j|R_i as the tuples in R_j whose projection on V_i falls in R_i. Let us partition R_i into b disjoint parts {R_i^1, R_i^2, . . . , R_i^b}. We define Batching on R_j|R_i as the technique of independently processing the sub-tasks that compute {R_j|R_i^1, R_j|R_i^2, . . . , R_j|R_i^b}. Obviously, R_j|R_i = ∪_{k=1}^{b} R_j|R_i^k.
WOptJoin. Recall from Section 3.2 that WOPTJOIN progresses according to a predefined matching order {v_1, v_2, . . . , v_n}. In the i-th round, WOPTJOIN will Propose on each p ∈ R_{i−1} to compute R_i. It is not hard to see that we can easily apply Batching to the computation of R_i|R_{i−1} by randomly partitioning R_{i−1}. For simplicity, the authors implemented Batching on R(Q)|R_1(v_1). Note that R_1(v_1) = V_G in unlabelled matching, which means that we can achieve Batching simply by partitioning the data vertices. For short, we also say the strategy batches on v_1, and call v_1 the batching vertex. We follow the same idea to apply Batching to BINJOIN algorithms.
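As a simple illustration (our own sketch, with illustrative names), batching on v_1 in the unlabelled case amounts to splitting the data vertices into b disjoint parts and seeding an independent sub-task with each part:

    // R_1(v_1) = V_G is split into b disjoint parts; each part is processed
    // as an independent sub-task (e.g. by the extension routine sketched
    // earlier), so that only one part's intermediate results live in memory.
    fn batches_of_v1(num_vertices: u32, b: usize) -> Vec<Vec<u32>> {
        let mut parts = vec![Vec::new(); b];
        for u in 0..num_vertices {
            parts[(u as usize) % b].push(u); // round-robin partitioning
        }
        parts
    }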
BinJoin. While it is natural for WOPTJOIN to batch on v_1, it is non-trivial to pick such a vertex for BINJOIN. Given a decomposition of the query graph {p_1, p_2, . . . , p_s}, where each p_i is a join unit, we have R(Q) = R(p_1) ⋈ R(p_2) ⋈ · · · ⋈ R(p_s). If we partition R_1(v) so as to batch on v ∈ V_Q, we correspondingly split the join task, and one of the sub-tasks is
R(Q)|R_1^k(v) = (R(p_1)|R_1^k(v)) ⋈ · · · ⋈ (R(p_s)|R_1^k(v)), where R_1^k(v) is one partition of R_1(v).
Observe that if there exists a join unit p where v ∉ V_p, we must have R(p) = R(p)|R_1^k(v), which means R(p) has to be fully computed in each sub-task. Let us consider the example query in Equation 2.
R(Q) = T_1(v_1, v_2, v_4) ⋈ T_2(v_2, v_3, v_4) ⋈ T_3(v_3, v_4).
Suppose we batch on v_1; then the above join can be divided into the following independent sub-tasks:
R(Q)|R_1^1(v_1) = (T_1(v_1, v_2, v_4)|R_1^1(v_1)) ⋈ T_2(v_2, v_3, v_4) ⋈ T_3(v_3, v_4),
R(Q)|R_1^2(v_1) = (T_1(v_1, v_2, v_4)|R_1^2(v_1)) ⋈ T_2(v_2, v_3, v_4) ⋈ T_3(v_3, v_4),
· · ·
R(Q)|R_1^b(v_1) = (T_1(v_1, v_2, v_4)|R_1^b(v_1)) ⋈ T_2(v_2, v_3, v_4) ⋈ T_3(v_3, v_4).
It is not hard to see that we will have to re-compute T_2(v_2, v_3, v_4) and T_3(v_3, v_4) in all of the above sub-tasks. Alternatively, if we batch on v_4, we can avoid such re-computation, as T_1, T_2 and T_3 can all be partitioned in each sub-task. Inspired by this, for BINJOIN, we adopt the heuristic of applying Batching on the vertex that is present in as many join units as possible. Note that such a vertex can only be in the join key, as otherwise it must be absent from at least one side of the join. For a complex query, there may still be join units that do not contain the batching vertex after applying the above heuristic. In this case, we either re-compute the join unit or cache it on disk. Another problem caused by this is the potential memory burden of the join. Thus, we devise a join-level Batching following the idea of external MergeSort. Specifically, we inject a Buffer-and-Batch operator for the two data streams before they arrive at the Join operator. Buffer-and-Batch functions in two parts:
• Buffer: While the operator receives data from the upstream, it buffers the data until reaching a given threshold. Then the buffer is sorted according to the join key's hash value and spilled to the disk. The buffer is reused for the next batch of data. • Batch: After the data to join is fully received, we read back the data from the disk in a batching manner, where each batch must include all join keys whose hash values are within a certain range.
While one batch of data is delivered to the Join operator, Timely allows us to supervise the progress and hold the next batch until the current batch completes. This way, the internal memory requirement is one batch of the data. Note that such join-level Batching is natively implemented in Hadoop's "Shuffle" stage, and we replay this process in Timely to improve the scalability of the algorithm.
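The following Rust sketch illustrates the Buffer-and-Batch idea under our own simplifying assumptions: tuples are buffered, each full buffer is sorted by the hash of the join key, and the data is then replayed one hash range at a time. For brevity the sorted runs are kept in memory, whereas the actual operator spills them to disk; all type and function names are illustrative.

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    fn key_hash<K: Hash>(key: &K, num_ranges: u64) -> u64 {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        h.finish() % num_ranges
    }

    struct BufferAndBatch<K, V> {
        runs: Vec<Vec<(u64, K, V)>>, // sorted "spilled" runs (on disk in practice)
        buffer: Vec<(u64, K, V)>,
        threshold: usize,
        num_ranges: u64,
    }

    impl<K: Hash + Clone, V: Clone> BufferAndBatch<K, V> {
        fn new(threshold: usize, num_ranges: u64) -> Self {
            Self { runs: Vec::new(), buffer: Vec::new(), threshold, num_ranges }
        }
        // Buffer: collect tuples, and spill a sorted run once the threshold is hit.
        fn push(&mut self, key: K, value: V) {
            let h = key_hash(&key, self.num_ranges);
            self.buffer.push((h, key, value));
            if self.buffer.len() >= self.threshold {
                self.spill();
            }
        }
        fn spill(&mut self) {
            self.buffer.sort_by_key(|t| t.0); // sorting enables sequential range reads on disk
            self.runs.push(std::mem::take(&mut self.buffer));
        }
        // Batch: hand the tuples of one hash range to the Join operator at a time.
        fn batch(&self, range: u64) -> Vec<(K, V)> {
            let mut out = Vec::new();
            for run in &self.runs {
                for (h, k, v) in run {
                    if *h == range {
                        out.push((k.clone(), v.clone()));
                    }
                }
            }
            out
        }
    }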
Triangle Indexing
As the name suggests, TrIndexing precomputes the triangles of the data graph and indexes them along with the graph data to prune infeasible results. The authors of BiGJoin [13] optimized the 4-clique query by using the triangles as base relations to join, which reduces the rounds of join and the network communication. In [45], the authors proposed to maintain not only triangles, but all k-cliques up to a given k. As we mentioned earlier, maintaining triangles already incurs a huge extra cost, let alone larger cliques.
In addition to the default hash partition, Lai et al. proposed the "triangle partition" [37], which also incorporates into each vertex's partition the edges among its neighbors (each such edge forms a triangle with the anchor vertex). The "triangle partition" allows BINJOIN to use cliques as join units [37], which greatly reduces the intermediate results of certain queries and improves the performance. The "triangle partition" is de facto a variant of TrIndexing, which, instead of explicitly materializing the triangles, maintains them in the local graph structure (e.g. the adjacency lists). As we will show in the experiments (Section 5), this saves a lot of space compared to explicit triangle materialization. Therefore, we adopt the "triangle partition" for the TrIndexing optimization in this work.
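For concreteness, the following Rust sketch (our own, with illustrative names) builds the extra edge set of one anchor vertex's "triangle partition": besides N_G(u), it keeps every edge between two of u's neighbors, since such an edge closes a triangle with u.

    use std::collections::HashSet;

    // adj is the full adjacency available during preprocessing; the returned
    // edges are stored together with u's hash partition. The space-efficient
    // variant discussed in Remark 4.1 below would additionally require u < a.
    fn triangle_partition_edges(adj: &[Vec<u32>], u: u32) -> Vec<(u32, u32)> {
        let nbrs: HashSet<u32> = adj[u as usize].iter().copied().collect();
        let mut extra = Vec::new();
        for &a in &adj[u as usize] {
            for &b in &adj[a as usize] {
                if a < b && nbrs.contains(&b) { // keep each such edge once
                    extra.push((a, b));
                }
            }
        }
        extra
    }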
BinJoin. Obviously, BINJOIN becomes CliqueJoin with TrIndexing, and StarJoin (or TwinTwigJoin) otherwise.
With worst-case optimality guarantee (Section 3.5), BINJOIN should perform much better with TrIndexing, which is also observed in "Exp-1" of Section 5.
WOptJoin. In order to match the next query vertex in each round, WOPTJOIN utilizes Count, Propose and Intersect to process the intersection of Equation 4. For ease of presentation, suppose v_{i+1} connects to the first s query vertices {v_1, v_2, . . . , v_s}; given a partial match {f(v_1), . . . , f(v_s)}, we have C(v_{i+1}) = ∩_{j=1}^{s} N_G(f(v_j)).
In the original implementation, it is required to send (p; C(v_{i+1})) via the network to all machines that contain some f(v_j) (1 ≤ j ≤ s) to process the intersection, which can render massive communication cost. In order to reduce the communication cost, we implement TrIndexing for WOPTJOIN as follows. We first group {v_1, . . . , v_s} such that for each group U(v_x), we have
U(v_x) = {v_x} ∪ {v_y | (v_x, v_y) ∈ E_Q}.
Because of TrIndexing, N_G(f(v_y)) (∀v_y ∈ U(v_x)) is maintained in f(v_x)'s partition. Thus, we only need to send the prefix to f(v_x)'s machine, and the intersection within U(v_x) can be done locally. We process the grouping using a greedy strategy that always constructs the largest group from the remaining vertices.
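A small Rust sketch of this greedy grouping (ours; `is_edge` is an illustrative predicate over E_Q) is given below: it repeatedly forms the largest possible group from the still-ungrouped vertices.

    use std::collections::HashSet;

    // vs holds the already-matched query vertices {v_1, ..., v_s}; each group
    // consists of one vertex v_x and its query neighbors among the remaining
    // vertices, and the largest group is always formed first.
    fn greedy_groups<F>(vs: &[usize], is_edge: F) -> Vec<Vec<usize>>
    where
        F: Fn(usize, usize) -> bool,
    {
        let mut remaining: HashSet<usize> = vs.iter().copied().collect();
        let mut groups = Vec::new();
        while !remaining.is_empty() {
            let best = *remaining
                .iter()
                .max_by_key(|&&x| {
                    remaining.iter().filter(|&&y| y == x || is_edge(x, y)).count()
                })
                .unwrap();
            let group: Vec<usize> = remaining
                .iter()
                .copied()
                .filter(|&y| y == best || is_edge(best, y))
                .collect();
            for y in &group {
                remaining.remove(y);
            }
            groups.push(group);
        }
        groups
    }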
Remark 4.1. The "triangle partition" may result in maintaining a large portion of the data graph in certain partition. Lai et al. pointed out this issue, and proposed a space-efficient alternative by leveraging the vertex orderings [37]. That is, given the partitioned vertex as u, and two neighbors u and u that close a triangle, we place the edge (u , u ) in the partition only when u < u < u . Although this alteration reduces storage, it may affect the effectiveness of TrIndexing for WOPTJOIN and the implementations of Batching and Compression for BINJOIN algorithms. Take WOPTJOIN as an example, after using the space-efficient "triangle partition", we should modify the above grouping as:
U (v x ) = {v x } ∪ {v y | (v x , v y ) ∈ E Q ∧ (v x , v y ) ∈ O Q }.
Note that the order between query vertices is for symmetry breaking (Section 2.1), and it may not be present in a certain query, in which case TrIndexing becomes completely useless for WOPTJOIN.
Compression
Subgraph matching is a typical combinatorial problem, and can easily produce results of exponential size. Compression aims to maintain the (intermediate) results in a compressed form to reduce resource allocation and communication cost. In the following, when we say "compress a query vertex", we mean maintaining its matched data vertices in the form of an array, instead of unfolding them in line with the one-to-one mapping of a match (Definition 2.1). Qiao et al. proposed CrystalJoin to study Compression in general for subgraph matching. As we introduced in Section 3.4, CrystalJoin first extracts the minimum vertex cover as the uncompressed part, and then it can compress the remaining query vertices as intersections of certain uncompressed matches' neighbors. Such Compression leverages the fact that all dependencies (edges) of the compressed part that require further computation are already covered by the uncompressed part, thus it can stay compressed until the actual matches are requested. CrystalJoin inspires a heuristic for doing Compression, that is, to compress the vertices whose matches will not be used in any future computation. In the following, we apply the same heuristic to the other algorithms.
BinJoin. Obviously we cannot compress any vertex that is present in the join key. What we need to do is simply to locate the vertices to compress in each join unit, namely star and clique. For a star, the root vertex must remain uncompressed, as the leaves' computation depends on it. For a clique, we can only compress one vertex, as otherwise the mutual connection between the compressed vertices would be lost. In a word, we compress two types of vertices for BINJOIN: (1) non-key and non-root vertices of a star join unit, and (2) one non-key vertex of a clique join unit.
WOptJoin. Based on a predefined join order {v_1, v_2, . . . , v_n}, we can compress v_i (1 ≤ i ≤ n) if there does not exist v_j (i < j) such that (v_i, v_j) ∈ E_Q. In other words, v_i's matches will never be involved in any future intersection (computation). Note that v_n can trivially be compressed. With Compression, when v_i is compressed, we maintain its matches as an array instead of unfolding them into the prefix like a normal vertex.
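The compressibility test is simple enough to state in a few lines of Rust (our sketch; positions are 0-based indices into the matching order):

    // compressible[i] is true iff the i-th query vertex of the matching order
    // has no neighbor that comes later in the order, i.e. its matches never
    // feed a future intersection and can stay compressed as an array.
    fn compressible(n: usize, query_edges: &[(usize, usize)]) -> Vec<bool> {
        (0..n)
            .map(|i| {
                !query_edges
                    .iter()
                    .any(|&(a, b)| (a == i && b > i) || (b == i && a > i))
            })
            .collect()
    }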
Experiments
Experimental settings
Environments. We deploy two clusters for the experiments: (1) a local cluster of 10 machines connected via one 10GBps switch and one 1GBps switch, each machine having 64GB memory, 1TB disk and one Intel Xeon E3-1220 V6 3.00GHz CPU with 4 physical cores; (2) an AWS cluster of 40 "r5-2xlarge" instances connected via a 10GBps switch, each with 64GB memory, 8 vCPUs and 500GB Amazon EBS storage. By default we use the local cluster of 10 machines with the 10GBps switch. We run 3 Timely workers in each machine of the local cluster, and 6 workers per instance in the AWS cluster. The code is implemented on the open-sourced Timely dataflow system [8] using Rust 1.32. We are still working towards open-sourcing the code, and the binaries together with their usage are temporarily provided to verify the results.
Metrics.
In the experiments, we measure query time T as the slowest worker's wall clock time from an average of three runs. We allow 3 hours as the maximum running time for each test. We use OT and OOM to indicate a test case runs out of the time limit and out of memory, respectively. By default we will not show the OOM results for clear presentation.
We divide T into two parts, the computation time T_comp and the communication time T_comm. We measure T_comp as the time the slowest worker spends on actual computation, by timing every computing function. We are aware that the actual communication time is hard to measure, as Timely overlaps computation and communication to improve throughput. We consider T − T_comp, which mainly records the time the worker waits for data from the network channel (a.k.a. communication time); the other part of the communication, which overlaps computation, is of less interest as it does not affect the query progress. As a result, we simply let T_comm = T − T_comp in the experiments. We measure the maximum peak memory using Linux's "time -v" in each machine. We define the communication cost as the number of integers a worker receives during the process, and measure the maximum communication cost among the workers accordingly.
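As a sketch of how T_comp can be accounted for (illustrative code, not our exact instrumentation), every computing closure of a worker can be wrapped by a small timer, and T_comm is then derived as T − T_comp:

    use std::time::{Duration, Instant};

    struct CompTimer {
        total: Duration, // accumulated computation time of this worker
    }

    impl CompTimer {
        fn new() -> Self {
            Self { total: Duration::default() }
        }
        // Runs a computing function and charges its wall time to T_comp.
        fn timed<T>(&mut self, f: impl FnOnce() -> T) -> T {
            let start = Instant::now();
            let out = f();
            self.total += start.elapsed();
            out
        }
    }

    // Usage: wrap each join/intersection call, e.g.
    //   let r = timer.timed(|| compute_partial_results(&batch));
    // and report t_comm = t_total - timer.total at the end of the run.
    // (`compute_partial_results` is a hypothetical placeholder.)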
Dataset Formats. We preprocess each dataset as follows: we treat it as a simple undirected graph by removing self-loops and duplicate edges, and format it using "Compressed Sparse Row" (CSR) [3]. We relabel the vertex ids according to the degree and break ties arbitrarily.
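A compact Rust sketch of this preprocessing (ours; names are illustrative) relabels vertices by non-decreasing degree and lays the graph out in CSR form:

    // edges: deduplicated undirected edges without self-loops (old vertex ids).
    // Returns (new_id, offsets, nbrs): new_id maps old ids to degree-ordered
    // ids, and offsets[v]..offsets[v+1] indexes v's sorted neighbors in nbrs.
    fn build_csr(num_vertices: usize, edges: &[(u32, u32)]) -> (Vec<u32>, Vec<usize>, Vec<u32>) {
        let mut degree = vec![0u32; num_vertices];
        for &(a, b) in edges {
            degree[a as usize] += 1;
            degree[b as usize] += 1;
        }
        let mut order: Vec<u32> = (0..num_vertices as u32).collect();
        order.sort_by_key(|&v| degree[v as usize]); // ties broken arbitrarily
        let mut new_id = vec![0u32; num_vertices];
        for (new, &old) in order.iter().enumerate() {
            new_id[old as usize] = new as u32;
        }
        let mut adj = vec![Vec::new(); num_vertices];
        for &(a, b) in edges {
            let (a, b) = (new_id[a as usize], new_id[b as usize]);
            adj[a as usize].push(b);
            adj[b as usize].push(a);
        }
        let mut offsets = vec![0usize];
        let mut nbrs = Vec::with_capacity(2 * edges.len());
        for list in adj.iter_mut() {
            list.sort_unstable();
            nbrs.extend(list.iter().copied());
            offsets.push(nbrs.len());
        }
        (new_id, offsets, nbrs)
    }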
Compared Strategies. In the experiments, we implement BINJOIN and WOPTJOIN with all Batching, TrIndexing and Compression optimizations (Section 4). SHRCUBE is implemented with "Hypercube Optimization" [20], and "DualSim" (unlabelled) [34] and "CFLMatch" (labelled) [16] as local algorithms. FULLREP is implemented with the same local algorithms as SHRCUBE.
Auxiliary Experiments. We have also conducted several auxiliary experiments in the appendix to study the strategies of BINJOIN, WOPTJOIN, SHRCUBE and FULLREP.
Unlabelled Experiments
Datasets. The datasets used in this experiment are shown in Table 2. All datasets except SY are downloaded from public sources, indicated by the letters in brackets (S [9], W [10], D [1]). All statistics are measured with G treated as an undirected graph. Among the datasets, GO is a small dataset used to study cases of extremely large (intermediate) result sets; LJ, UK and FS are three popular datasets used in prior works, featuring the statistics of real social networks and web graphs; GP is the Google Plus ego network, which is exceptionally dense; US and EU, on the other end, are sparse road networks. These datasets vary in number of vertices and edges, density and maximum degree, as shown in Table 2. We synthesize the SY data according to [18], which generates data with real-graph characteristics. Note that this data occupies roughly 80GB of space, and is larger than the configured memory of our machines. We synthesize the data because we did not find publicly accessible data of this size. Larger datasets like Clueweb [2] are available, but they are beyond the processing power of our current cluster.
Each dataset is hash-partitioned ("hash") across the cluster. We also implement the "triangle partition" ("tri.") for the TrIndexing optimization (Section 4.2). To do so, we use BiGJoin to compute the triangles and send the triangle edges to the corresponding partitions. We record the time T_* and the average number of edges |E_*| of the two partition strategies. The partition statistics are recorded using the local cluster, except for SY, which is processed in the AWS cluster. From Table 2, we can see that |E_tri.| is noticeably larger, around 1-10 times larger than |E_hash|. Note that in GP and UK, which either are dense or must contain a large dense community, the "triangle partition" can maintain a large portion of the data in each partition. Compared to complete triangle materialization, however, the "triangle partition" turns out to be much cheaper. For example, the UK dataset contains around 27B triangles, which means each partition in our local cluster would on average take 0.9B triangles (three integers each); in comparison, UK's "triangle partition" only maintains an average of 0.16B edges (two integers each) according to Table 2.
We use US, GO and LJ as the default datasets in the experiments "Exp-1", "Exp-2" and "Exp-3" in order to collect useful feedback from successful queries, while we may not present certain cases when they do not give new findings.
Queries. The queries are presented in Figure 4. We also give the partial order under each query for symmetry breaking. The queries except q_7 and q_8 are selected based on all prior works [13,35,37,45,50], while varying in the number of vertices, the density, and the vertex cover ratio |V_Q^cc|/|V_Q|, in order to better evaluate the strategies from different perspectives. The three queries q_7, q_8 and q_9 are relatively challenging given their result scale. For example, the smallest dataset GO contains 2,168 B(illion) matches of q_7, 330B of q_8 and 1,883B of q_9, respectively. For lack of space, we record the number of results of each successful query on each dataset in the appendix. Note that q_7 and q_8 are absent from existing works; we benchmark q_7 considering the importance of path queries in practice, and q_8 considering the varieties of its join plans.
Exp-1: Optimizations. We study the effectiveness of Batching, TrIndexing and Compression for both the BINJOIN and WOPTJOIN strategies, by comparing BINJOIN and WOPTJOIN with their respective variants with one optimization turned off, namely "without Batching", "without TrIndexing" and "without Compression". In the following, we use the suffixes "(w.o.b.)", "(w.o.t.)" and "(w.o.c.)" to represent the three variants. We use the queries q_2 and q_5, and the results on US and LJ are shown in Figure 5. By default, we use a batch size of 1,000,000 for both BINJOIN and WOPTJOIN (according to [13]) in this experiment, and we reduce the batch size when a case runs out of memory, as will be specified.
While comparing BINJOIN with BINJOIN(w.o.b.), we observe that Batching barely affects the performance of q_2, but affects q_5 on LJ severely (1800s vs 4000s (w.o.b.)). The reason is that we still apply join-level Batching for BINJOIN(w.o.b.). For the WOPTJOIN strategy, Batching has little impact on the performance. Surprisingly, after applying TrIndexing to WOPTJOIN, the improvement on average is only around 18%. We ran another experiment in the same cluster but using the 1GBps switch, which shows that WOPTJOIN is over 6 times faster than WOPTJOIN(w.o.t.) for both queries on LJ. Note that Timely uses separate threads to buffer received data from the network. Given the same computing speed, a faster network allows the data to be more fully buffered and hence less waiting for the following computation. Similar to BINJOIN, Compression greatly improves the performance while querying on LJ, but the opposite holds on US.
Figure 4: The unlabelled benchmarking queries q_1–q_9; the partial order used for symmetry breaking is listed under each query.
Exp-2 Challenging Queries. We study the challenging queries q_7, q_8 and q_9 in this experiment, and run it on the GO and US datasets. We focus on comparing BINJOIN and WOPTJOIN on the GO dataset. On the one hand, WOPTJOIN outperforms BINJOIN for q_7 and q_8. Their join plans for q_7 are nearly the same, except that BINJOIN relies on a global shuffling on v_3 to process the join, while WOPTJOIN sends the partial results to the machine that maintains the vertex to grow. It is hence reasonable to observe BINJOIN's poorer performance for q_7, as shuffling is typically a more costly operation. The case of q_8 is similar, so we do not discuss it further. On the other hand, even living with costly shuffling, BINJOIN still performs better for q_9. Due to its vertex-growing nature, WOPTJOIN's "optimal" plan has to process the costly sub-query Q({v_1, v_2, v_3, v_4, v_5}). On the US dataset, WOPTJOIN consistently outperforms BINJOIN for these queries. This is because US does not produce massive intermediate results as LJ does, thus BINJOIN's shuffling cost consistently dominates.
While processing complex queries like q_8 and q_9, we can study the varieties of join plans for BINJOIN and WOPTJOIN. First of all, we note that BINJOIN's join plan for q_8 is different from the optimal plan originally given in [37]. The original "optimal" plan computes q_8 by joining two tailed triangles (a triangle tailed with an edge), while this alternative plan works better by joining the upper "house-shaped" sub-query with the bottom triangle. In theory, the tailed triangle has a worst-case bound (AGM bound [44]) of O(M^2), smaller than the house's O(M^2.5), and BINJOIN's cost estimation actually favors the plan of joining two tailed triangles. However, we find that the number of tailed triangles is very close to that of the houses on GO, which renders the original plan's join of two tailed triangles costly. This indicates the insufficiency of both the cost estimation proposed in [37] and the worst-case optimal bound [13] for computing the join plan, which will be further discussed in Section 6.
Secondly, it is worth noting that we actually report the result of WOPTJOIN for q_9 using the CrystalJoin plan, as it works better than WOPTJOIN's original "optimal" plan. For q_9, CrystalJoin first computes Q(V_Q^cc), namely the 2-path {v_1, v_3, v_5}, after which it can compress all remaining vertices v_2, v_4 and v_6. In comparison, the "optimal" plan can only compress v_2 and v_6. In this case, CrystalJoin performs better because it configures larger compression. In [45], the authors proved that using the vertex cover as the uncompressed core renders the maximum compression. However, this may not necessarily result in the best performance, considering that it can be costly to compute the core part. In our experiments, the unlabelled q_4, q_8 and labelled q_8 are cases where the CrystalJoin plan performs worse than the original BiGJoin plan (with the Compression optimization), as the CrystalJoin plan does not render strictly larger compression while having to process the costly core part. As a result, we only recommend the CrystalJoin plan when it leads to strictly larger compression.
The final observation is that the computation time dominates most of the evaluated cases, except BINJOIN's q 8 , WOPTJOIN and SHRCUBE's q 9 on US. We will further discuss this in Exp-3.
Exp-3 All-Around Comparisons. In this experiment, we run q 1 − q 6 using BINJOIN, WOPTJOIN, SHRCUBE and FULLREP across the datasets GP, LJ, UK, EU and FS. We also run WOPTJOIN with CrystalJoin plan in q 4 as it is the only query that renders different CrystalJoin plan from BiGJoin plan, and the results show that the performance with BiGJoin plan is consistently better. We report the results in Figure 7, where the communication time is plotted as gray filling. As a whole, among all 35 test cases, FULLREP achieves the best 85% completion rate, followed by WOPTJOIN and BINJOIN which complete 71.4% and 68.6% respectively, and SHRCUBE performs the worst with just 8.6% completion rate.
FULLREP typically outperforms the other strategies. Observe that WOPTJOIN's performance is often very close to FULLREP's. The reason is that WOPTJOIN's computing plans for these evaluated queries are similar to the "DualSim" plan adopted by FULLREP, and the extra communication cost of WOPTJOIN has been reduced to very little by adopting the TrIndexing optimization. Comparing WOPTJOIN with BINJOIN, BINJOIN is better for q_3, a clique query (join unit) that requires no join (a case of embarrassing parallelism). BINJOIN performs worse than WOPTJOIN in most other queries, which, as we mentioned before, is due to the costly shuffling. There is an exception (querying q_1 on GP) where BINJOIN performs better than both FULLREP and WOPTJOIN. We explain this using our best speculation: GP is a very dense graph, where we observe nearly 100 vertices with degree around 10,000, so processing q_1 incurs very costly intersections (high intersection cost on GP is also observed in the labelled experiments).
We observe that the computation time T_comp dominates in most cases, as we mentioned in Exp-2. This is trivially true for SHRCUBE and FULLREP, but it may not clearly be so for WOPTJOIN and BINJOIN, given that they both need to transfer a massive amount of intermediate data. We investigate this and find two potential reasons. The first one is attributed to Timely's highly optimized communication component, which allows the computation to overlap communication by using extra threads to receive and buffer the data from the network, so that it can be mostly ready for the following computation. The second one is the fast network. We re-ran these queries using the 1GBps switch, and the results show the opposite trend: the communication time T_comm in turn takes over.
Exp-4 Web-Scale. We run queries on the SY dataset in the AWS cluster of 40 instances. Note that FULLREP cannot be used, as SY is larger than each machine's memory. We use the queries q_2 and q_3, and present the results of BINJOIN and WOPTJOIN (SHRCUBE fails all cases due to OOM) in Table 3. The results are consistent with the prior experiments, but observe that the gap between BINJOIN and WOPTJOIN while querying q_1 is larger. This is because we now deploy 40 AWS instances, and BINJOIN's shuffling cost increases.
Labelled Experiments
We use the LDBC social network benchmark (SNB) [6] for the labelled matching experiments, due to the lack of large labelled graphs in the public domain. SNB provides a data generator that generates a synthetic social network of required statistics, and a document [7] that describes the benchmarking tasks, in which the complex tasks are actually subgraph matching. The join plans of BINJOIN and WOPTJOIN for the labelled experiments are generated as in the unlabelled case, but we use the label frequencies to break ties.
Datasets. We list the datasets and their statistics in Table 4. These datasets are generated using the "Facebook" mode with a duration of 3 years. A dataset's name, denoted DGx, represents a scale factor of x. The labels are preprocessed into integers. For (1) and (2), note that our current implementation can support both cases, and we make the adaptations for consistency and simplicity. For (3) and (4), we adapt them because they currently do not conform to the subgraph matching problem studied in this paper. For (5), it is due to our current limitation in supporting property graphs. We leave (3), (4) and (5) as interesting future work.
Exp-5 All-Around Comparisons. We now conduct the experiment using all queries on DG10 and DG60, and present the results in Figure 9. Here we compute the join plans for BINJOIN and WOPTJOIN using the unlabelled method, but further use the label frequencies to break ties. The gray filling again represents communication time. FULLREP outperforms the other strategies in many cases, except that it performs slightly slower than BINJOIN for q_3 and q_5. This is because q_3 and q_5 are join units, so BINJOIN processes them locally in each machine as FULLREP does, while not building the indices that "CFLMatch" (used in FULLREP) requires. Regarding WOPTJOIN, among all these queries only q_8 configures a CrystalJoin plan different from the BiGJoin plan. The results show that the performance of WOPTJOIN drops by about 10 times while using the CrystalJoin plan. Note that the core part of q_8 is a 5-path "Psn-City-Cty-City-Psn" with enormous intermediate results. As we mentioned in the unlabelled experiments, it may not always be wise to first compute the vertex-cover-induced core.
We now focus on comparing BINJOIN and WOPTJOIN. There are three cases that intrigue us. Firstly, observe that BINJOIN performs much better than WOPTJOIN while querying q_4. The reason is the high intersection cost, as we discovered on the GP dataset in unlabelled matching. Secondly, BINJOIN performs worse than WOPTJOIN for q_7, which again is because of BINJOIN's costly shuffling. The third case is q_9, the most complex query in the experiment, where BINJOIN performs much better. The bad performance of WOPTJOIN comes from the long execution plan together with costly intermediate results.
Both algorithms first expand the three "Psn"s, and then grow via one of the "City"s to "Cty", but BINJOIN approaches this using one join (a triangle joined with a TwinTwig), while WOPTJOIN first expands to "City" and then further to "Cty", and the "City" expansion is the culprit of the slower run.

Discussions and Future Work
We discuss our findings and potential future work based on the experiments in Section 5. Eventually, we summarize the findings into a practical guide.
Strategy Selection. FULLREP is obviously the preferred choice when each machine can hold the graph data, while both WOPTJOIN and BINJOIN are good alternatives when the graph is larger than the capacity of the machine. Between BINJOIN and WOPTJOIN, on the one hand, BINJOIN may perform worse than WOPTJOIN (e.g. unlabelled q_2, q_4, q_5) due to the expensive shuffling operation; on the other hand, BINJOIN can also outperform WOPTJOIN (e.g. unlabelled and labelled q_9) by avoiding costly sub-queries thanks to query decomposition. One way to choose between BINJOIN and WOPTJOIN is to compare the cost of their respective join plans and select the one with less cost. For now, we can either use the cost estimation proposed in [37] or sum the worst-case bounds, but neither consistently gives the best solution, as will be discussed in "Optimal Join Plan". Alternatively, we refer to "EmptyHeaded" [11] to study a potential hybrid strategy of BINJOIN and WOPTJOIN. Note that "EmptyHeaded" is developed in a single-machine setting and does not take into consideration the impact of Compression; we hence leave such a hybrid strategy in the distributed context as interesting future work.
Optimizations. Our experimental results suggest always using Batching, using TrIndexing when each machine has sufficient memory to hold the "triangle partition", and using Compression when the data graph is not very sparse (e.g. d_G ≥ 5). Batching often does not impact performance, so we recommend always using it due to the unpredictability of the size of the (intermediate) results. TrIndexing is critical for BINJOIN, and it can greatly improve WOPTJOIN by reducing communication cost, while it requires extra storage to maintain the "triangle partition". Amongst the evaluated datasets, each "triangle partition" maintains an average of 30% of the data in our 10-machine cluster. Thus, we suggest a memory threshold of 60%|E_G| (half for the graph and half for running the algorithm) for TrIndexing in a cluster of the same or larger scale. Note that the threshold does not apply to extremely dense graphs. Among the three optimizations, Compression is the primary performance booster: it improves the performance of BINJOIN and WOPTJOIN by 5 times on average in all but the cases on the very sparse road networks. For such very sparse data graphs, Compression can incur more cost than benefit.
Optimal Join Plan. It is challenging to systematically determine the optimal join plans for both BINJOIN and WOPTJOIN. From the experiments, we identify three impact factors: (1) the worst-case bound; (2) cost estimation based on data statistics; (3) favoring the optimizations, especially Compression. All existing works only partially consider these factors, and we have observed sub-optimal join plans in the experiments. For example, BINJOIN bases its "optimal" join plan on minimizing the cost estimation, but this plan does not render the best performance for unlabelled q_8; WOPTJOIN follows worst-case optimality, while it may encounter costly sub-queries for labelled and unlabelled q_9; CrystalJoin focuses on maximizing the compression, while ignoring the fact that the vertex-cover-induced core part itself can be costly to compute. Additionally, there are other impact factors such as the partial orders of query vertices and the label frequencies, which have not been studied in this work due to lack of space. It is another very interesting future work to thoroughly study the optimal join plan while considering all the above impact factors.
Computation vs. Communication. We argue that distributed subgraph matching nowadays is a computation-intensive task. This claim holds when the cluster is configured with a high-speed network (e.g. ≥ 10GBps) and the data processor can efficiently overlap computation with communication. Note that the computation cost (either BINJOIN's join or WOPTJOIN's intersection) is lower-bounded by the output size, which is equal to the communication cost. Therefore, computation becomes the bottleneck if the network condition is good enough to guarantee that the data is delivered in time. Nowadays, the bandwidth of a local cluster commonly exceeds 10GBps, and the overlapping of computation and communication is widely used in distributed systems (e.g. Spark [54], Flink [17]). As a result, we tend to see distributed subgraph matching as a computation-intensive task, and we advocate that future research devote more effort to optimizing the computation while considering the following perspectives: (1) new advancements in hardware, for example co-processing on GPU in coupled CPU-GPU architectures [28] and the SIMD programming model on modern CPUs [30]; (2) general computing optimizations such as load balancing strategies and cache-aware graph data access [53].
A Practical Guide. Based on the experimental findings, we propose a practical guide for distributed subgraph matching in Figure 10. Note that this guide is based on the current progress of the literature, and future work is needed, for example to study the hybrid strategy and the impact factors of the optimal join plan, before we can arrive at a solid decision-making process for choosing between BINJOIN and WOPTJOIN.
Conclusions
In this paper, we implement four strategies and three general-purpose optimizations for distributed subgraph matching based on Timely dataflow system, aiming for a systematic, strategy-level comparisons among the state-of-the-art algorithms. Based on thorough empirical analysis, we summarize a practical guide, and we also motivate interesting future work for distributed subgraph matching.
A Auxiliary Experiments
Exp-6 Scalability of Unlabelled Matching. We vary the number of machines as 1, 2, 4, 6, 8, 10, and run the unlabelled queries q_1 and q_2 to see how each strategy (BINJOIN, WOPTJOIN, SHRCUBE and FULLREP) scales out. We further evaluate "Single Thread", a serial algorithm that is specially implemented for these two queries. According to [42], we define the COST of a strategy as the number of workers it needs to outperform "Single Thread", which is a comprehensive measurement of both efficiency and scalability. In this experiment, we query q_1 and q_2 on the popular dataset LJ, and show the results in Figure 11. Note that we only plot the communication and memory consumption for q_1, as q_2 follows a similar trend. We also test on other datasets, such as the dense dataset GP, and the results are similar.
All strategies demonstrate reasonable scaling on both queries. In terms of COST, note that FULLREP's is slightly larger than 1, because "DualSim" is implemented in general for arbitrary queries, while "Single Thread" uses a hand-tuned implementation. We first analyze the results of q_1. The COST ranking is FULLREP (1.6), WOPTJOIN (2.0), BINJOIN (3.1) and SHRCUBE (3.7). As expected, WOPTJOIN scales worse than FULLREP, while BINJOIN scales worse than WOPTJOIN because the shuffling cost increases with the number of machines. In terms of memory consumption, FULLREP trivially consumes memory of the graph's size constantly. Due to the use of Batching, both BINJOIN and WOPTJOIN consume very little memory for both queries. Observe that SHRCUBE consumes much more memory than WOPTJOIN and BINJOIN, even more than the graph data itself. This is because a certain worker may receive more edges (with duplicates) than the graph itself, which increases the peak memory consumption. For communication cost, both BINJOIN and WOPTJOIN demonstrate reasonable drops as the number of machines increases. SHRCUBE renders much less communication as expected, but it shows an increasing trend. This is actually reasonable behavior for SHRCUBE, as more machines also mean more data duplicates. For q_2, the COST ranking is FULLREP (2.4), WOPTJOIN (2.75), BINJOIN (3.82) and SHRCUBE (71.2). Here, SHRCUBE's COST is dramatically larger, with most time spent on deduplication (Section 3.3). The trends of memory consumption and communication cost of q_2 are similar to those of q_1, and are thus not further discussed.
Exp-7 Vary Densities for Labelled Matching. Based on DG10, we generate datasets with densities 10, 20, 40, 80 and 160 by randomly adding edges into DG10. Note that the density-10 dataset is the original DG10 in Table 4. We use the labelled queries q_4 and q_7 in this experiment, and show the results in Figure 12.
Exp-8 Vary Labels for Labelled Matching. We generate datasets with 0, 5, 10, 15 and 20 labels based on DG10. Note that there are 5 labels in the labelled queries q_4 and q_7, which we call the target labels. The 10-label dataset is the original DG10. For the one with 5 labels, we replace each label not in the target labels with one random target label. For the ones with more than 10 labels, we randomly choose some nodes and change their labels into other pre-defined labels until the dataset contains the required number of labels. The one with zero labels degenerates into unlabelled matching, and we use the unlabelled versions of q_4 and q_7 instead. The experiment demonstrates the transition from unlabelled matching to labelled matching, where the biggest drop happens for all algorithms. The drop continues with the increase of the number of labels, but less sharply when there is a sufficient number of labels (≥ 10). Observe that when there are very few labels, for example the 5-label case of q_7, FULLREP actually performs worse than BINJOIN and WOPTJOIN. The "CFLMatch" algorithm [16] used by FULLREP relies heavily on label-based pruning; fewer labels render larger candidate sets and more recursive calls, resulting in a performance drop for FULLREP. While fewer labels may enlarge the intermediate results of BINJOIN and WOPTJOIN, these are relatively small in the labelled case and do not create much burden for the 10GBps network.
B Auxiliary Materials
All Query Results. In Table 5, we show the number of results of every successful query on each dataset evaluated in this work. Note that DG10 and DG60 record the labelled queries q_1 − q_9.
| 13,431 |
1906.11518
|
2954023930
|
Recently, many distributed algorithms have emerged that aim at solving subgraph matching at scale. Existing algorithm-level comparisons failed to provide a systematic view of the pros and cons of each algorithm, mainly due to the intertwining of strategy and optimization. In this paper, we identify four strategies and three general-purpose optimizations from representative state-of-the-art works. We implement the four strategies with the optimizations based on the common Timely dataflow system for systematic strategy-level comparison. Our implementation covers all representative algorithms. We conduct extensive experiments for both unlabelled matching and labelled matching to analyze the performance of distributed subgraph matching under various settings, which is finally summarized as a practical guide.
|
Incremental Subgraph Matching. Computing subgraph matching in a continuous context has recently drawn a lot of attention. @cite_9 proposed incremental algorithms that identify the portion of the data graph affected by an update with regard to the query. The authors in @cite_38 used an edge-at-a-time join scheme: the algorithm maintains a left-deep join tree for the query, with each vertex maintaining a partial query and the corresponding partial results. One can then compute the incremental answers of each partial query in response to an update, and utilize the join tree to reconstruct the results. Graphflow @cite_4 solved incremental subgraph matching using joins, in the sense that the incremental query can be transformed into @math independent joins, where @math is the number of query edges; the worst-case-optimal join algorithm is then used to solve these joins in parallel. Most recently, TurboFlux was proposed, which maintains a data-centric index for incremental queries and achieves a good tradeoff between performance and storage.
|
{
"abstract": [
"Cyber security is one of the most significant technical challenges in current times. Detecting adversarial activities, prevention of theft of intellectual properties and customer data is a high priority for corporations and government agencies around the world. Cyber defenders need to analyze massive-scale, high-resolution network flows to identify, categorize, and mitigate attacks involving networks spanning institutional and national boundaries. Many of the cyber attacks can be described as subgraph patterns, with prominent examples being insider infiltrations (path queries), denial of service (parallel paths) and malicious spreads (tree queries). This motivates us to explore subgraph matching on streaming graphs in a continuous setting. The novelty of our work lies in using the subgraph distributional statistics collected from the streaming graph to determine the query processingstrategy. Weintroducea“Lazy Search\"algorithmwhere the search strategy is decided on a vertex-to-vertex basis depending on the likelihood of a match in the vertex neighborhood. We also propose a metric named “Relative Selectivity\" that is used to select between different query processing strategies. Our experiments performed on real online news, network traffic stream and a synthetic social network benchmark demonstrate 10-100x speedups over selectivity agnostic approaches.",
"Graph pattern matching has become a routine process in emerging applications such as social networks. In practice a data graph is typically large, and is frequently updated with small changes. It is often prohibitively expensive to recompute matches from scratch via batch algorithms when the graph is updated. With this comes the need for incremental algorithms that compute changes to the matches in response to updates, to minimize unnecessary recomputation. This paper investigates incremental algorithms for graph pattern matching defined in terms of graph simulation, bounded simulation and subgraph isomorphism. (1) For simulation, we provide incremental algorithms for unit updates and certain graph patterns. These algorithms are optimal: in linear time in the size of the changes in the input and output, which characterizes the cost that is inherent to the problem itself. For general patterns we show that the incremental matching problem is unbounded, i.e., its cost is not determined by the size of the changes alone. (2) For bounded simulation, we show that the problem is unbounded even for unit updates and path patterns. (3) For subgraph isomorphism, we show that the problem is intractable and unbounded for unit updates and path patterns. (4) For multiple updates, we develop an incremental algorithm for each of simulation, bounded simulation and subgraph isomorphism. We experimentally verify that these incremental algorithms significantly outperform their batch counterparts in response to small changes, using real-life data and synthetic data.",
""
],
"cite_N": [
"@cite_38",
"@cite_9",
"@cite_4"
],
"mid": [
"2964293433",
"2142491343",
""
]
}
|
A SURVEY AND EXPERIMENTAL ANALYSIS OF DISTRIBUTED SUBGRAPH MATCHING A PREPRINT
|
with no need of exchanging data. As a result, it typically renders much less communication cost than that of BINJOIN and WOPTJOIN algorithms. MultiwayJoin adopts the idea of SHRCUBE for subgraph matching. In order to properly partition the computation without missing results, MultiwayJoin needs to duplicate each edge in multiple workers. As a result, MultiwayJoin can almost carry the whole graph in each worker for certain queries [35,13] and thus scale out poorly.
OTHERS. Shao et al. proposed PSgL [50], which processes subgraph matching via breadth-first-style traversal. Starting from an initial query vertex, PSgL iteratively expands the partial results by merging the matches of certain vertices' unmatched neighbors. It has been pointed out in [35] that PSgL is actually a variant of StarJoin. Very recently, Qiao et al. proposed CrystalJoin [45], which aims at resolving the "output crisis" by compressing the (intermediate) results. The idea is to first compute the matches of the vertex cover of the query graph; the remaining vertices' matches can then be compressed as intersections of the vertex cover's neighbors to avoid costly Cartesian products.
Optimizations. Apart from join strategies, existing algorithms also explored a variety of optimizations, some of which are query- or algorithm-specific, while we spotlight three general-purpose optimizations: Batching, TrIndexing and Compression. Batching aims to divide the whole computation into sub-tasks that can be evaluated independently in order to save resource (memory) allocation. TrIndexing precomputes and indexes the triangles (3-cycles) of the graph to facilitate pruning. Compression attempts to maintain the (intermediate) results in a compressed form to reduce resource allocation and communication cost.
Motivations.
In this paper, we survey seven representative algorithms for distributed subgraph matching: StarJoin [51], MultiwayJoin [12], PSgL [50], TwinTwigJoin [35], CliqueJoin [37], CrystalJoin [45] and BiGJoin [13]. While all these algorithms embody some good merits in theory, existing algorithm-level comparisons failed to provide a systematic view of the pros and cons of each algorithm due to several reasons. Firstly, the prior experiments did not take into consideration the differences of languages and the cost of the systems on which each implementation is based (Table 1). Secondly, some implementations hardcode query-specific optimizations for each query, which makes it hard to judge whether the observed performance comes from the algorithmic advancement or the hardcoded optimization. Thirdly, all BINJOIN and WOPTJOIN algorithms (more precisely, their implementations) intertwined the join strategy with some of the optimizations of Batching, TrIndexing and Compression. We show in Table 1 how each optimization has been applied in the current implementations. For example, CliqueJoin only adopted TrIndexing and some query-specific Compression, while BiGJoin considered Batching in general, but TrIndexing only for one specific query (Compression was only discussed in the paper, but not implemented). People naturally wonder whether "maybe it is better to adopt strategy A with optimization B", but unfortunately none of the existing implementations covers that combination. Last but not least, there is a missing but important benchmark of the FULLREP strategy, that is, to maintain the whole graph in each partition and parallelize embarrassingly [29]. The FULLREP strategy requires no communication, and it should be the most efficient strategy when each machine can hold the whole graph (the case for most experimental settings nowadays).

Table 1: The surveyed algorithms, their strategies, worst-case optimality, base platforms, and adopted optimizations.
Algorithm | Strategy | Worst-case optimal | Platform | Optimizations
StarJoin [51] | BINJOIN | No | Trinity [49] | None
MultiwayJoin [12] | SHRCUBE | N/A | Hadoop [35], Myria [20] | N/A
PSgL [50] | OTHERS | No | Giraph [4] | None
TwinTwigJoin [35] | BINJOIN | No | Hadoop | Compression [36]
CliqueJoin [37] | BINJOIN | Yes (Section 6) | Hadoop | TrIndexing, some Compression
CrystalJoin [45] | OTHERS | N/A | Hadoop | TrIndexing, Compression
BiGJoin [13] | WOPTJOIN | Yes [13] | Timely Dataflow [43] | Batching, specific TrIndexing

Table 1 summarizes the surveyed algorithms via the category of strategy, the optimality guarantee, and the status of the current implementations, including the base platform and how the three optimizations are adopted.
Our Contributions
To address the above issues, we aim at a systematic, strategy-level benchmarking of distributed subgraph matching in this paper. To achieve that goal, we implement all strategies, together with the three general-purpose optimizations for subgraph matching, based on the Timely dataflow system [43]. Note that our implementation covers all seven representative algorithms. Here, we use Timely as the base system as it incurs less cost [42] than other popular systems like Giraph [4], Spark [54] and GraphLab [38], so that the system's impact can be reduced to the minimum.
We implement the benchmarking platform using our best effort based on the papers of each algorithm and email communications with the authors. Our implementation is (1) generic, handling arbitrary queries without any hardcoded optimizations; (2) flexible, configuring the Batching, TrIndexing and Compression optimizations in any combination for BINJOIN and WOPTJOIN algorithms; and (3) efficient, being comparable to and sometimes even faster than the original hardcoded implementations. Note that the three general-purpose optimizations are mainly used to reduce communication cost, and are not useful to the SHRCUBE and FULLREP strategies, while we still devote a lot of effort to their implementations. Aware that their performance heavily depends on the local algorithm, we implement and compare the state-of-the-art local subgraph matching algorithms proposed in [34], [11] (for unlabelled matching), and [16] (for labelled matching), and adopt the best possible implementation. For SHRCUBE, we refer to [20] to implement the "Hypercube Optimization" for better hypercube sharing.
We make the following contributions in the paper.
(1) A benchmarking platform based on the Timely dataflow system for distributed subgraph matching. We implement four distributed subgraph matching strategies (and the general optimizations) that cover seven state-of-the-art algorithms: StarJoin [51], MultiwayJoin [12], PSgL [50], TwinTwigJoin [35], CliqueJoin [37], CrystalJoin [45] and BiGJoin [13]. Our implementation is generic to handle arbitrary queries, including labelled and directed queries, and thus can guide practical use.
(2) Three general-purpose optimizations -Batching, TrIndexing and Compression. We investigate the literature on the optimization strategies, and spotlight the three general-purpose optimizations. We propose heuristics to incorporate the three optimizations into BINJOIN and WOPTJOIN strategies, with no need of query-specific adjustments from human experts. The three optimizations can be flexibly configured in any combination.
(3) In-depth experimental studies. In order to extensively evaluate the performance of each strategy and the effectiveness of the optimizations, we use data graphs of different sizes and densities, including sparse road networks, dense ego networks, and a web-scale graph that is larger than each machine's configured memory. We select query graphs of various characteristics that are either from existing works or suitable for benchmarking purposes. In addition to running time, we measure the communication cost, memory usage and other metrics to help reason about the performance.
(4) A practical guide of distributed subgraph matching. Through empirical analysis covering the variances of join strategies, optimizations, join plans, we propose a practical guide for distributed subgraph matching. We also inspire interesting future work based on the experimental findings.
Organizations. The rest of the paper is organized as follows. Section 2 defines the problem of subgraph matching and introduces preliminary knowledge. Section 3 surveys the representative algorithms and our implementation details following the categories of BINJOIN, WOPTJOIN, SHRCUBE and OTHERS. Section 4 investigates the three general-purpose optimizations and devises heuristics for applying them to BINJOIN and WOPTJOIN algorithms. Section 5 demonstrates the experimental results and our in-depth analysis. Section 7 discusses the related works, and Section 8 concludes the whole paper.
Preliminaries
Problem Definition
Graph Notations. A graph g is defined as a 3-tuple, g = (V_g, E_g, L_g), where V_g is the vertex set, E_g ⊆ V_g × V_g is the edge set of g, and L_g is a label function that maps each vertex µ ∈ V_g and/or each edge e ∈ E_g to a label. Note that for an unlabelled graph, L_g simply maps all vertices and edges to ∅. For a vertex µ ∈ V_g, denote N_g(µ) as the set of its neighbors, d_g(µ) = |N_g(µ)| as the degree of µ, and d_g = 2|E_g|/|V_g| and D_g = max_{µ∈V_g} d_g(µ) as the average and maximum degree, respectively. A subgraph g' of g, denoted g' ⊆ g, is a graph that satisfies V_{g'} ⊆ V_g and E_{g'} ⊆ E_g.
Given V' ⊆ V_g, we define the induced subgraph g(V') as the subgraph induced by V', that is, g(V') = (V', E(V'), L_g), where E(V') = {e = (µ, µ') | e ∈ E_g, µ ∈ V' ∧ µ' ∈ V'}. We say V' ⊆ V_g is a vertex cover of g if ∀e = (µ, µ') ∈ E_g, µ ∈ V' or µ' ∈ V'. A minimum vertex cover V_g^c is a vertex cover of g that contains the minimum number of vertices. A connected vertex cover is a vertex cover whose induced subgraph is connected, among which a minimum connected vertex cover, denoted V_g^cc, is the one with the minimum number of vertices.
Data and Query Graph. We denote the data graph as G, and let N = |V_G|, M = |E_G|. Denote a data vertex with id i as u_i, where 1 ≤ i ≤ N. Note that the data vertices have been reordered such that if d_G(u) < d_G(u'), then id(u) < id(u'). We denote the query graph as Q, and let n = |V_Q|, m = |E_Q|, and V_Q = {v_1, v_2, · · · , v_n}.
Subgraph Matching. Given a data graph G and a query graph Q, we define subgraph isomorphism:
Definition 2.1. (Subgraph Isomorphism.) A subgraph isomorphism is defined as a bijective mapping f : V(Q) → V(G) such that (1) ∀v ∈ V(Q), L_Q(v) = L_G(f(v)); (2) ∀(v, v') ∈ E(Q), (f(v), f(v')) ∈ E(G), and L_Q((v, v')) = L_G((f(v), f(v'))). A subgraph isomorphism is called a Match in this paper. With the query vertices listed as {v_1, v_2, · · · , v_n}, we can simply represent a match f as {u_{k_1}, u_{k_2}, · · · , u_{k_n}}, where f(v_i) = u_{k_i} for 1 ≤ i ≤ n.
The Subgraph Matching problem aims at finding all matches of Q in G. Denote R G (Q) (or R(Q) when the context is clear) as the result set of Q in G. As prior works [35,37,50], we apply symmetry breaking for unlabelled matching to avoid duplicate enumeration caused by automorphism. Specifically, we first assign partial order O Q to the query graph according to [26].
Here, O Q ⊆ V Q × V Q , and (v i , v j ) ∈ O Q means v i < v j .
In unlabelled matching, a match f must satisfy the order constraint:
∀(v, v') ∈ O_Q, it holds that f(v) < f(v').
Note that we do not consider the order constraint in labelled matching. Example 2.1. In Figure 1, we present a query graph Q and a data graph G. For unlabelled matching, we give the partial order O_Q = {(v_1, v_3), (v_2, v_4)} under the query graph. There are three matches: {u_1, u_2, u_6, u_5}, {u_2, u_5, u_3, u_6} and {u_4, u_3, u_6, u_5}. It is easy to check that these matches satisfy the order constraint. Without the order constraint, there are actually four automorphic matches corresponding to each above match [12]. For labelled matching, we use different fillings to represent the labels. There are two matches accordingly: {u_1, u_2, u_6, u_5} and {u_4, u_3, u_6, u_5}.
By treating the query vertices as attributes and the data edges as a relational table, we can write a subgraph matching query as a multiway join of the edge relations. For example, regardless of label and order constraints, the query of Example 2.1 can be written as the following join:
R(Q) = E(v_1, v_2) ⋈ E(v_2, v_3) ⋈ E(v_3, v_4) ⋈ E(v_1, v_4) ⋈ E(v_2, v_4).    (1)
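As a concrete illustration of Definition 2.1 and the order constraint, the following is a minimal Rust sketch (the toy graph and the helper name is_valid_match are our own, not taken from the paper) that verifies whether a candidate mapping is a match that also respects O_Q in unlabelled matching.

```rust
use std::collections::{HashMap, HashSet};

/// A candidate match maps query vertex v_{i+1} to data vertex `mapping[i]`.
/// `query_edges` holds index pairs of E_Q, `order` holds the pairs of O_Q,
/// and `adj` is the data graph's adjacency map.
fn is_valid_match(
    mapping: &[u64],
    query_edges: &[(usize, usize)],
    order: &[(usize, usize)],
    adj: &HashMap<u64, HashSet<u64>>,
) -> bool {
    // (1) every query edge must map onto a data edge
    query_edges.iter().all(|&(i, j)| {
        adj.get(&mapping[i]).map_or(false, |n| n.contains(&mapping[j]))
    })
    // (2) the partial order breaks automorphisms: f(v_i) < f(v_j)
    && order.iter().all(|&(i, j)| mapping[i] < mapping[j])
}

fn main() {
    // A tiny illustrative data graph: a square 1-2-3-4 with diagonal 2-4.
    let edges = [(1u64, 2u64), (2, 3), (3, 4), (1, 4), (2, 4)];
    let mut adj: HashMap<u64, HashSet<u64>> = HashMap::new();
    for &(a, b) in &edges {
        adj.entry(a).or_default().insert(b);
        adj.entry(b).or_default().insert(a);
    }
    // 4-cycle query v1-v2-v3-v4-v1 with order O_Q = {(v1, v3), (v2, v4)}.
    let q_edges = [(0, 1), (1, 2), (2, 3), (3, 0)];
    let order = [(0, 2), (1, 3)];
    println!("{}", is_valid_match(&[1, 2, 3, 4], &q_edges, &order, &adj)); // true
    println!("{}", is_valid_match(&[3, 2, 1, 4], &q_edges, &order, &adj)); // false (order)
}
```

The second call fails only because of the partial order, which is exactly how automorphic duplicates are pruned in unlabelled matching.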
This motivates researchers to leverage join operation for large-scale subgraph matching, given that join can be easily distributed, and it is natively supported in many distributed data engines like Spark [54] and Flink [17].
Timely Dataflow System
Timely is a distributed data-parallel dataflow system [43]. The minimum processing unit of Timely is a worker, which can simply be seen as a process that occupies a CPU core. Typically, one physical multi-core machine runs several workers. Timely follows the shared-nothing dataflow computation model [22] that abstracts the computation as a dataflow graph. In the dataflow graph, a vertex (a.k.a. operator) defines the computing logic, and the edges between operators represent the data streams. One operator can accept multiple input streams, feed them to its computation, and produce (typically) one output stream. After the dataflow graph for a certain computing task is defined, it is distributed to each worker in the cluster and further translated into a physical execution plan. Based on the physical plan, each worker can process the task in parallel while accepting its corresponding portion of the input.
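For a flavor of the programming model, below is a minimal Timely program. It is a sketch that assumes only the timely crate's basic ToStream, Exchange and Inspect operators (the toy input is ours); it shuffles records to workers by a key, which is the primitive that the join operators discussed later are built on.

```rust
// Assumes the `timely` crate as a dependency.
use timely::dataflow::operators::{Exchange, Inspect, ToStream};

fn main() {
    // The closure is the per-worker logic; workers may span several processes.
    timely::execute_from_args(std::env::args(), |worker| {
        let index = worker.index();
        worker.dataflow::<u32, _, _>(|scope| {
            (0u64..16)
                .to_stream(scope)                 // a toy input stream
                .exchange(|x| *x)                 // route record x by its value (mod #workers)
                .inspect(move |x| println!("worker {} received {}", index, x));
        });
    })
    .unwrap();
}
```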
Algorithm Survey
We survey the distributed subgraph matching algorithms following the categories of BINJOIN, WOPTJOIN, SHRCUBE, and OTHERS. We also show that CliqueJoin is a variant of GenericJoin [44], and is thus worst-case optimal.
BinJoin
The simplest BINJOIN algorithm uses data edges as the base relation: it starts from one edge and expands by one edge in each join. For example, to solve the join of Equation 1, a simple plan is shown in Figure 2a. The join plan is straightforward, but the intermediate results, especially R_2 (a 3-path), can be huge. To improve the performance of BINJOIN, researchers have devoted their efforts to: (1) using more complex base relations other than edges; (2) devising a better join plan P. The base relations B_{[q]} represent the matches of a set of sub-structures [q] of the query graph Q. Each p ∈ [q] is called a join unit, and it must satisfy V_Q = ⋃_{p∈[q]} V_p and E_Q = ⋃_{p∈[q]} E_p.
[Figure 2: (a) a left-deep join plan and (b) a bushy join plan for the join of Equation 1.]
With the data graph partitioned across the cluster, [37] constrains the join unit to be the structure whose results can be independently computed within each partition (i.e. embarrassingly parallel [29]). It is not hard to see that when each vertex has full access to the neighbors in the partition, we can compute the matches of a k-star (a star of k leaves) rooted on the vertex u by enumerating all k-combinations within N G (u). Therefore, star is a qualified and indeed widely used join unit.
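A minimal sketch of this local star enumeration (the helper name k_star_matches is ours): since N_G(u) is fully available in u's partition, the matches of a k-star rooted at u are exactly the k-combinations of the neighbor list.

```rust
/// Matches of a k-star rooted at u are exactly the k-combinations of N_G(u).
/// `neighbors` is assumed to be the full N_G(u) kept in u's local partition.
fn k_star_matches(neighbors: &[u64], k: usize) -> Vec<Vec<u64>> {
    fn recurse(neigh: &[u64], start: usize, k: usize,
               current: &mut Vec<u64>, out: &mut Vec<Vec<u64>>) {
        if current.len() == k {
            out.push(current.clone());
            return;
        }
        for i in start..neigh.len() {
            current.push(neigh[i]);
            recurse(neigh, i + 1, k, current, out);
            current.pop();
        }
    }
    let mut out = Vec::new();
    recurse(neighbors, 0, k, &mut Vec::with_capacity(k), &mut out);
    out
}

fn main() {
    // A 2-star (TwinTwig) on a degree-4 vertex yields C(4, 2) = 6 leaf pairs.
    let leaves = k_star_matches(&[1, 2, 3, 4], 2);
    assert_eq!(leaves.len(), 6);
    println!("{:?}", leaves);
}
```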
Given the base relations, the join plan P determines an order of processing the binary joins. A join plan is left-deep 4 if there is at least one base relation involved in each join; otherwise it is bushy. For example, the join plan in Figure 2a is left-deep, and a bushy join plan is shown in Figure 2b. Note that the bushy plan avoids the expensive R_2 of the left-deep plan, and is generally better.
StarJoin. As the name suggests, StarJoin uses star as the join unit, and it follows the left-deep join order. To decompose the query graph, it first locates the vertex cover of the query graph, and each vertex in the cover and its unused neighbors naturally form a star [51]. A StarJoin plan for Equation 1 is
(J_1)  R(Q) = Star(v_2; {v_1, v_3, v_4}) ⋈ Star(v_4; {v_2, v_3}),
where Star(r; L) denotes a Star relation (the matches of the star) with r as the root, and L as the set of leaves.
TwinTwigJoin. Enumerating a k-star on a vertex of degree d renders O(d^k) cost. We refer to the case of enumerating stars on a large-degree vertex as star explosion. Lai et al. proposed TwinTwigJoin [35] to address this issue of StarJoin by forcing the join plan to use TwinTwig (a star of at most two edges) instead of a general star as the join unit. Intuitively, this helps ameliorate the star explosion by constraining the cost of each join unit from d^k for arbitrary k to at most d^2. TwinTwigJoin follows StarJoin in using the left-deep join order. The authors proved that TwinTwigJoin is instance optimal to StarJoin, that is, given any general StarJoin plan in the left-deep join order, we can rewrite it as an alternative TwinTwigJoin plan that draws no more cost (in the big-O sense) than the original StarJoin, where the cost is evaluated based on the Erdös-Rényi random graph (ER) model [23]. A TwinTwigJoin plan for Equation 1 is
(J_1)  R_1(v_1, v_2, v_3, v_4) = TwinTwig(v_1; {v_2, v_4}) ⋈ TwinTwig(v_2; {v_3, v_4});
(J_2)  R(Q) = R_1(v_1, v_2, v_3, v_4) ⋈ TwinTwig(v_3; {v_4}),    (2)
where TwinTwig(r; L) denotes a TwinTwig relation with r as the root, and L as the leaves.
CliqueJoin. TwinTwigJoin hampers star explosion to some extent, but it still suffers from a long execution (Ω(m/2) rounds) and the suboptimal left-deep join plan. CliqueJoin resolves these issues by extending StarJoin in two aspects. Firstly, CliqueJoin applies the "triangle partition" strategy (Section 4.2), which enables CliqueJoin to use clique, in addition to star, as the join unit. The use of cliques can greatly shorten the execution, especially when the query is dense, although CliqueJoin still degenerates to StarJoin when the query contains no clique subgraph. Secondly, CliqueJoin exploits the bushy join plan to approach optimality. A CliqueJoin plan for Equation 1 is:
(J_1)  R(Q) = Clique({v_1, v_2, v_4}) ⋈ Clique({v_2, v_3, v_4}),    (3)
where Clique(V) denotes a Clique relation over the involved vertices V.
Implementation Details. We implement the BINJOIN strategy based on the join framework proposed in [37] to cover StarJoin, TwinTwigJoin and CliqueJoin.
We use the power-law random graph (PR) model [21] to estimate the cost as in [37], and implement the dynamic programming algorithm of [37] to compute the cost-optimal join plan. Once the join plan is computed, we translate it into a Timely dataflow that processes each binary join using a Join operator. We implement the Join operator following Timely's official "pipeline" HashJoin example 5. We modify it into a "batching-style" join: the mappers (senders) shuffle the data based on the join key, while the reducers (receivers) maintain the received key-value pairs in a hash table (until the mappers complete) for join processing. The reasons that we implement the join in "batching-style" are: (1) its performance is similar to the "pipeline" join as a whole; (2) it replays the original implementation in Hadoop; and (3) it favors the Batching optimization (Section 4.1).
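Stripped of the dataflow plumbing, the reducer-side logic of such a batching-style equi-join is a standard build-and-probe hash join over partial results; the following is a self-contained sketch with illustrative names (Row, hash_join are ours).

```rust
use std::collections::HashMap;

/// A partial result is a vector of matched data vertices; the key extractors
/// pick out the join-key columns shared by both relations.
type Row = Vec<u64>;

fn hash_join(
    left: &[Row],
    right: &[Row],
    key_left: impl Fn(&Row) -> Vec<u64>,
    key_right: impl Fn(&Row) -> Vec<u64>,
) -> Vec<Row> {
    // Build: index the left input by its join key.
    let mut table: HashMap<Vec<u64>, Vec<&Row>> = HashMap::new();
    for row in left {
        table.entry(key_left(row)).or_default().push(row);
    }
    // Probe: for each right row, emit its concatenation with every left match.
    let mut output = Vec::new();
    for r in right {
        if let Some(matches) = table.get(&key_right(r)) {
            for l in matches {
                let mut joined = (*l).clone();
                joined.extend(r.iter().cloned());
                output.push(joined);
            }
        }
    }
    output
}

fn main() {
    // Join edge relation E(v1, v2) with E(v2, v3) on v2 to form 2-paths.
    let e1: Vec<Row> = vec![vec![1, 2], vec![3, 2]];
    let e2: Vec<Row> = vec![vec![2, 4], vec![2, 5]];
    let paths = hash_join(&e1, &e2, |r| vec![r[1]], |r| vec![r[0]]);
    println!("{:?}", paths); // [[1,2,2,4], [3,2,2,4], [1,2,2,5], [3,2,2,5]]
}
```

In the actual operator, the build side is what the reducer has accumulated for the current batch, and the probe side streams in from the other input.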
WOptJoin
WOPTJOIN strategy processes subgraph matching by matching vertices in a predefined order. Given the query graph Q and the matching order V_Q = {v_1, v_2, · · · , v_n}, the algorithm starts from an empty set and computes the matches of the subset {v_1, · · · , v_i} in the i-th round. Denote the partial results after the i-th (i < n) round as R_i, and let p = {u_{k_1}, u_{k_2}, · · · , u_{k_i}} ∈ R_i be one of the tuples. In the (i+1)-th round, the algorithm expands the results by matching v_{i+1} with u_{k_{i+1}} for p iff ∀ 1 ≤ j ≤ i with (v_j, v_{i+1}) ∈ E_Q, (u_{k_j}, u_{k_{i+1}}) ∈ E_G. It is immediate that the candidate matches of v_{i+1}, denoted C(v_{i+1}), can be obtained by intersecting the relevant neighbors of the matched vertices as
C(v_{i+1}) = ⋂_{1 ≤ j ≤ i ∧ (v_j, v_{i+1}) ∈ E_Q} N_G(u_{k_j}).    (4)
BiGJoin. BiGJoin adopts the WOPTJOIN strategy in the Timely dataflow system. The main challenge is to implement the intersection efficiently using Timely dataflow. For this purpose, the authors designed the following three operators:
• Count: checking the number of neighbors of each u_{k_j} in Equation 4 and recording the location (worker) of the one with the smallest neighbor set.
• Propose: attaching the smallest neighbor set to p as (p; C(v_{i+1})).
• Intersect: sending (p; C(v_{i+1})) to the worker that maintains each u_{k_j} and updating C(v_{i+1}) = C(v_{i+1}) ∩ N_G(u_{k_j}).
After the intersection, we expand p by pushing into it every vertex of C(v_{i+1}).
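The core of Propose and Intersect is a multi-way intersection seeded with the smallest neighbor list; a minimal sketch over sorted adjacency slices (the function name intersect_candidates is ours):

```rust
/// Intersect several sorted neighbor lists, starting from the smallest one
/// (the "Propose" set) and shrinking it against the others ("Intersect").
fn intersect_candidates(mut lists: Vec<&[u64]>) -> Vec<u64> {
    if lists.is_empty() {
        return Vec::new();
    }
    // Propose: pick the smallest list as the initial candidate set.
    lists.sort_by_key(|l| l.len());
    let mut candidates: Vec<u64> = lists[0].to_vec();
    // Intersect: keep only candidates present in every other list.
    for other in &lists[1..] {
        candidates.retain(|v| other.binary_search(v).is_ok());
        if candidates.is_empty() {
            break;
        }
    }
    candidates
}

fn main() {
    // C(v_{i+1}) = N(u_a) ∩ N(u_b) ∩ N(u_c) for three matched vertices.
    let n_a = [2u64, 3, 5, 8, 9];
    let n_b = [1u64, 3, 5, 9];
    let n_c = [3u64, 4, 5, 7, 9, 11];
    println!("{:?}", intersect_candidates(vec![&n_a, &n_b, &n_c])); // [3, 5, 9]
}
```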
Implementation Details. We directly use the authors' implementation [5], but slightly modify the codes to use the common graph data structure. We do not consider the dynamic version of BiGJoin in this paper, as the other strategies currently only support the static context. The matching order is determined using a greedy heuristic that starts with the vertex of the largest degree, and subsequently selects the next vertex with the most connections (id as the tie breaker) to the already-selected vertices.
ShrCube
SHRCUBE strategy treats the join processing of the query Q as a hypercube of n = |V_Q| dimensions. It attempts to divide the hypercube evenly across the workers in the cluster, so that each worker can complete its own share without data communication. However, it is normally required that each data tuple is duplicated to multiple workers. This renders a space requirement of M / w^{1−ρ} for each worker, where M is the size of the input data, w is the number of workers and 0 < ρ ≤ 1 is a query-dependent parameter. When ρ is close to 1, the algorithm ends up maintaining the whole input data in each worker.
MultiwayJoin. MultiwayJoin applies the SHRCUBE strategy to solve subgraph matching in one single round. Consider w workers in the cluster and a query graph Q with vertices V_Q = {v_1, v_2, . . . , v_n} and edges E_Q = {e_1, e_2, . . . , e_m}, where e_i = (v_{i_1}, v_{i_2}). Regarding each query vertex v_i, assign a positive integer as its bucket number b_i such that ∏_{i=1}^{n} b_i = w. The algorithm then divides the candidate data vertices for v_i evenly into b_i parts via a hash function h : u → z_i, where u ∈ V_G and 1 ≤ z_i ≤ b_i.
This accordingly divides the whole computation into w shares, each of which can be indexed via an n-ary tuple (z_1, z_2, · · · , z_n) and is assigned to one worker. Afterwards, regarding each query edge e_i = (v_{i_1}, v_{i_2}), MultiwayJoin maps a data edge (u, u') as (z_1, · · · , z_{i_1} = h(u), · · · , z_{i_2} = h(u'), . . . , z_n), where, other than z_{i_1} and z_{i_2}, each above z_i iterates through {1, 2, · · · , b_i}, and the edge is routed to the corresponding workers. Take the triangle query with E_Q = {(v_1, v_2), (v_1, v_3), (v_2, v_3)} as an example. According to [12], b_1 = b_2 = b_3 = b = ∛w is an optimal bucket number assignment. Each edge (u, u') is then routed to the workers as: (1) (h(u), h(u'), z) regarding (v_1, v_2); (2) (h(u), z, h(u')) regarding (v_1, v_3); (3) (z, h(u), h(u')) regarding (v_2, v_3), where the above z iterates through {1, 2, · · · , b}. Consequently, each data edge is duplicated roughly 3∛w times, and by expectation each worker will receive 3M / w^{1−1/3} edges. For unlabelled matching, MultiwayJoin utilizes the partial order of the query graph (Section 2.1) to reduce edge duplication, and details can be found in [12].
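The routing rule for the triangle query can be sketched directly; the code below is ours (worker_id, hash_bucket and route_edge are our names), with an illustrative modulo hash and a rounded cube root of w, whereas the actual implementation relies on the "Hypercube Optimization" described next to pick integral bucket numbers.

```rust
/// Workers are identified by coordinate tuples (z1, z2, z3), 0 <= zi < b.
fn worker_id(z: (u64, u64, u64), b: u64) -> u64 {
    z.0 * b * b + z.1 * b + z.2
}

fn hash_bucket(u: u64, b: u64) -> u64 {
    u % b // an illustrative hash h : V_G -> {0, .., b-1}
}

/// Route one data edge (u, u') for the triangle query with
/// E_Q = {(v1,v2), (v1,v3), (v2,v3)}; returns the destination workers.
fn route_edge(u: u64, v: u64, b: u64) -> Vec<u64> {
    let (hu, hv) = (hash_bucket(u, b), hash_bucket(v, b));
    let mut dests = Vec::new();
    for z in 0..b {
        dests.push(worker_id((hu, hv, z), b)); // edge plays (v1, v2)
        dests.push(worker_id((hu, z, hv), b)); // edge plays (v1, v3)
        dests.push(worker_id((z, hu, hv), b)); // edge plays (v2, v3)
    }
    dests.sort_unstable();
    dests.dedup(); // a worker may receive the same edge under several roles
    dests
}

fn main() {
    let w = 27u64;                               // 27 workers
    let b = (w as f64).cbrt().round() as u64;    // b = 3 buckets per dimension
    println!("{:?}", route_edge(10, 4, b));
    // each edge reaches at most 3b = 9 of the 27 workers
}
```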
Implementation Details. There are two main impact factors on the performance of SHRCUBE. The first is the hypercube sharing, i.e. assigning a proper b_i for each v_i. Beame et al. [15] generalized the problem of computing the optimal hypercube sharing for an arbitrary query as linear programming. However, the optimal solution may assign fractional bucket numbers, which are unwanted in practice. An easy refinement is to round down to an integer, but it apparently results in idle workers. Chu et al. [20] addressed this issue via "Hypercube Optimization", that is, to enumerate all possible bucket sequences around the optimal solution and choose the one that produces shares (the product of bucket numbers) closest to the number of workers. We adopt this strategy in our implementation.
The second is the local algorithm. When the edges arrive at a worker, we collect them into a local graph (duplicate edges are removed), and use a local algorithm to compute the matches. For unlabelled matching, we study the state-of-the-art local algorithms from "EmptyHeaded" [11] and "DualSim" [34]. "EmptyHeaded" is inspired by Ngo's worst-case optimal algorithm [44]: it decomposes the query graph via "Hyper-Tree Decomposition", computes each decomposed part using a worst-case optimal join and finally glues all parts together using hash joins. "DualSim" was proposed by [34] for subgraph matching in the external-memory setting. The idea is to first compute the matches of V_Q^cc, after which the remaining vertices V_Q \ V_Q^cc can be efficiently matched by enumerating the intersections of the neighbors of V_Q^cc's matches. We find out that "DualSim" actually produces the same query plans as "EmptyHeaded" for all our benchmarking queries (Figure 4) except q_9. We implement both algorithms for q_9, and "DualSim" performs better than "EmptyHeaded" on the GO, US, GP and LJ datasets (Table 2). As a result, we adopt "DualSim" as the local algorithm for MultiwayJoin. For labelled matching, we implement "CFLMatch" proposed in [16], which has been shown so far to have the best performance.
We then let each worker independently compute matches in its local graph. Simply doing so will produce duplicates, so we process deduplication as follows: given a match f that is computed in the worker identified by the tuple t_w, we can recover the tuple t_e^f of the matched edge (f(v), f(v')) regarding the query edge e = (v, v'); the match f is then retained if and only if t_w = t_e^f for every e ∈ E_Q. To explain this, consider b = 2 and a match {u_0, u_1, u_2} for a triangle query (v_0, v_1, v_2), where h(u_0) = h(u_1) = h(u_2) = 0. It is easy to see that the match will be computed in the workers of (0, 0, 0) and (0, 0, 1), while the match in worker (0, 0, 1) will be eliminated because (u_0, u_2), which matches the query edge (v_0, v_2), cannot be hashed to (0, 0, 1) regarding (v_0, v_2). We could also avoid deduplication by separately maintaining each edge regarding the different query edges it stands for, and using the local algorithm proposed in [20], but this results in too many edge duplicates, which drains our memory even when processing a medium-size graph.
Others
PSgL and its implementation. PSgL iteratively processes subgraph matching via breadth-first traversal. All query vertices are assigned one of three statuses: "white" (initialized), "gray" (candidate) and "black" (matched). Denote v_i as the vertex to match in the i-th round. The algorithm starts by matching the initial query vertex v_1 and coloring its neighbors "gray". In the i-th round, the algorithm applies the workload-aware expanding strategy at runtime, that is, it selects the v_i to expand among all current "gray" vertices based on a greedy heuristic that minimizes the communication cost [49]; the partial results from the previous round, R_{i−1} (specially, R_0 = ∅), are distributed among the workers based on the candidate data vertices that can match v_i; in each worker, the algorithm computes R_i by merging R_{i−1} with the matches of the star formed by v_i and its "white" neighbors N_Q^w(v_i), namely Star(v_i; N_Q^w(v_i)); after v_i is matched, it is colored "black" and its "white" neighbors are colored "gray". Essentially, this process is analogous to StarJoin by processing R_i = R_{i−1} ⋈ Star(v_i; N_Q^w(v_i)). Thus, PSgL can be seen as an alternative implementation of StarJoin on Pregel [41]. In this work, we implement PSgL using a Pregel API on Timely. Note that we introduce the Pregel API to replay the implementation of PSgL as faithfully as possible. In fact, it simply wraps Timely's primitive operators such as binary_notify and loop 6, and barely introduces extra cost to the implementation. Our experimental results demonstrate similar findings as prior work [37], namely that PSgL's performance is dominated by CliqueJoin [37]. Thus, we will not further discuss this algorithm in this paper.
CrystalJoin and its implementation. CrystalJoin aims at resolving the "output crisis" by compressing the results of subgraph matching [45]. The authors defined a structure called a crystal, denoted Q(x, y). A crystal is a subgraph of Q that contains two sets of vertices V_x and V_y (|V_x| = x and |V_y| = y), where the induced subgraph Q(V_x) is an x-clique, and every vertex in V_y connects to all vertices of V_x. We call V_x the clique vertices and V_y the bud vertices. The algorithm first obtains the minimum vertex cover V_Q^c, and then applies Core-Crystal Decomposition to decompose the query graph into the core Q(V_Q^c) and a set of crystals {Q_1(x_1, y_1), . . . , Q_t(x_t, y_t)}. The crystals must satisfy that ∀ 1 ≤ i ≤ t, Q(V_{x_i}) ⊆ Q(V_Q^c), namely, the clique part of each crystal is a subgraph of the core. As an example, we plot a query graph and the corresponding core-crystal decomposition in Figure 3. Note that in the example, both crystals have an edge (i.e. a 2-clique) as the clique part. With the core-crystal decomposition, the computation is accordingly split into three stages:
[Figure 3: Core-crystal decomposition of an example query Q: the core Q({v_2, v_3, v_5}) and two crystals Q_1(2, 1) (clique part {v_2, v_5}, bud {v_1}) and Q_2(2, 1) (clique part {v_3, v_5}, bud {v_4}).]
1. Core computation. Given that Q(V_Q^c) itself is a query graph, the algorithm can be recursively applied to compute Q(V_Q^c) according to [45].
2. Crystal computation. A special case of a crystal is Q(x, 1), which is indeed an (x + 1)-clique. Suppose an instance of Q(V_x) is f_x = {u_1, u_2, . . . , u_x}; we can represent the matches w.r.t. f_x as (f_x, I_y), where I_y = ⋂_{i=1}^{x} N_G(u_i) denotes the set of vertices that can match V_y. This naturally extends to the case of y > 1, where any y-combination of the vertices of I_y together with f_x represents a match. This way, the matches of crystals can be largely compressed (see the sketch after this list).
3. One-time assembly. This stage assembles the core instances and the compressed crystal matches to produce the final results. More precisely, this stage joins the core instances with the crystal matches.
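A small sketch of the compressed crystal representation (the helper names compressed_crystal_match and match_count, as well as the toy graph, are ours): the bud candidates I_y are one intersection away from the clique instance, and each compressed match stands for C(|I_y|, y) full matches.

```rust
use std::collections::HashSet;

/// Given a clique instance f_x = {u_1, .., u_x}, the bud candidates are
/// I_y = ∩_i N_G(u_i); the compressed match is the pair (f_x, I_y).
fn compressed_crystal_match(
    clique_instance: &[u64],
    adj: impl Fn(u64) -> HashSet<u64>,
) -> (Vec<u64>, Vec<u64>) {
    let mut iter = clique_instance.iter();
    let mut candidates = iter.next().map(|&u| adj(u)).unwrap_or_default();
    for &u in iter {
        candidates = candidates.intersection(&adj(u)).cloned().collect();
    }
    // Bud vertices must not coincide with clique vertices.
    let buds: Vec<u64> = candidates
        .into_iter()
        .filter(|v| !clique_instance.contains(v))
        .collect();
    (clique_instance.to_vec(), buds)
}

/// Each compressed match (f_x, I_y) stands for C(|I_y|, y) full matches.
fn match_count(buds: u64, y: u64) -> u64 {
    if y > buds {
        return 0;
    }
    let numer: u64 = (0..y).map(|i| buds - i).product::<u64>().max(1);
    let denom: u64 = (1..=y).product::<u64>().max(1);
    numer / denom
}

fn main() {
    // Adjacency of a toy graph where vertices 1 and 2 form the clique part.
    let adj = |u: u64| -> HashSet<u64> {
        match u {
            1 => [2u64, 3, 4, 5].iter().copied().collect(),
            2 => [1u64, 3, 4, 6].iter().copied().collect(),
            _ => HashSet::new(),
        }
    };
    let (fx, iy) = compressed_crystal_match(&[1, 2], adj);
    println!("clique instance = {:?}, I_y = {:?}", fx, iy); // I_y = {3, 4}
    println!("expands to {} matches for y = 2", match_count(iy.len() as u64, 2));
}
```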
We note two technical obstacles to implementing CrystalJoin as described in the paper. Firstly, it is worth noting that the core Q(V_Q^c) may be disconnected, a case that can produce an exponential number of results. The authors applied a query-specific optimization in their original implementation to resolve this issue. Secondly, the authors proposed to precompute the cliques up to a certain k, while it is often cost-prohibitive to do so in practice. Take the UK dataset (Table 2) as an example: the triangles, 4-cliques and 5-cliques are respectively about 20, 600 and 40000 times larger than the graph itself. It is worth noting that the main purpose of this paper is not to study how well each algorithm performs for a specific query, which has its theoretical value but can barely guide practice. After communicating with the authors, we adapt CrystalJoin as follows. Firstly, we replace the core Q(V_Q^c) with the subgraph induced by the minimum connected vertex cover, Q(V_Q^cc). Secondly, instead of implementing CrystalJoin as a standalone strategy, we use it as an alternative join plan (matching order) for WOPTJOIN. Following CrystalJoin, we first match V_Q^cc, while the matching order inside and outside V_Q^cc still follows WOPTJOIN's greedy heuristic (Section 3.2). It is worth noting that this adaptation achieves performance comparable to the original implementation. In fact, we also applied the CrystalJoin plan to BINJOIN, but it does not perform as well as the WOPTJOIN version, thus we do not discuss that implementation.
FullRep and its implementation. FULLREP simply maintains a full replica of the graph in each physical machine. Each worker picks one independent share of computation and solves it using existing local algorithm.
The implementation is straightforward. We let each worker pick its share of computation via a Round-Robin strategy: we settle on an initial query vertex v_1, let the first worker match v_1 with u_1 to continue the remaining process, the second worker match v_1 with u_2, and so on. This simple strategy already works very well for balancing the load of our benchmarking queries (Figure 4). We use "DualSim" for unlabelled matching and "CFLMatch" for labelled matching, as in MultiwayJoin.
Worst-case Optimality.
Given a query Q and the data graph G, we denote the maximum possible result set as R̄_G(Q). Simply speaking, an algorithm is worst-case optimal if the aggregation of its total intermediate results is bounded by Θ(|R̄_G(Q)|). Ngo et al. proposed a class of worst-case optimal join algorithms called GenericJoin [44], which we first overview.
GenericJoin. Let the join be R(V) = ⋈_{U∈Ψ} R(U), where Ψ is a collection of vertex subsets of V (each U ∈ Ψ satisfies U ⊆ V) and V = ⋃_{U∈Ψ} U. Given a vertex subset U ⊆ V, let Ψ_U = {V' | V' ∈ Ψ ∧ V' ∩ U ≠ ∅}, and for a tuple t, denote t_U as t's projection on U. We then show GenericJoin in Algorithm 1.
Algorithm 1: GenericJoin(V, Ψ, ⋈_{U∈Ψ} R(U))
1:  R(V) ← ∅;
2:  if |V| = 1 then
3:      return ⋂_{U∈Ψ} R(U);
4:  V ← (I, J), where ∅ ≠ I ⊂ V and J = V \ I;
5:  R(I) ← GenericJoin(I, Ψ_I, ⋈_{U∈Ψ_I} π_I(R(U)));
6:  forall t_I ∈ R(I) do
7:      R(J)_{w.r.t. t_I} ← GenericJoin(J, Ψ_J, ⋈_{U∈Ψ_J} π_J(R(U) ⋉ t_I));
8:      R(V) ← R(V) ∪ ({t_I} × R(J)_{w.r.t. t_I});
9:  return R(V);
In Algorithm 1, the original join is recursively decomposed into two parts, R(I) and R(J), regarding the disjoint sets I and J. From line 5, it is clear that R(I) records R(V)'s projection on I, thus we have |R(I)| ≤ |R̄(V)|, where R̄(V) is the maximum possible result of the query. Meanwhile, in line 7, the semi-join R(U) ⋉ t_I = {r | r ∈ R(U) ∧ r_{U∩I} = (t_I)_{U∩I}} only retains those tuples of R(J) w.r.t. t_I that can end up in the join result, which implies that R(J) must also be bounded by the final results. This intuitively explains the worst-case optimality of GenericJoin, while we refer interested readers to [44] for a complete proof.
It is easy to see that BiGJoin is worst-case optimal. In Algorithm 1, we select I in line 4 by popping the edge relations E(v_s, v_i) (s < i) in the i-th step. In line 7, the recursive call to solve the semi-join R(U) ⋉ t_I actually corresponds to the intersection process.
Worst-case Optimality of CliqueJoin. Note that the two clique relations in Equation 3 interleave one common edge (v_2, v_4) in the query graph. This optimization, called "overlapping decomposition" [37], eventually contributes to CliqueJoin's worst-case optimality. Note that it is not possible to apply this optimization to StarJoin and TwinTwigJoin. We have the following theorem. Theorem 3.1. CliqueJoin is worst-case optimal while applying "overlapping decomposition".
Proof. We implement CliqueJoin using Algorithm 1 as follows. Note that Q(V') denotes the subgraph of Q induced by V'. In line 2, we change the stopping condition to "Q(I) is either a clique or a star". In line 4, I is selected such that Q(I) is either a clique or a star. Note that by applying the "overlapping decomposition" in CliqueJoin, the sub-query of the J part must be the J-induced graph Q(J), and it also includes the edges of E_{Q(I)} ∩ E_{Q(J)}, which implies that R(Q(J)) = R(Q(J)) ⋉ R(Q(I)); this just reflects the semi-join in line 7. Therefore, CliqueJoin is an instance of GenericJoin, and is thus worst-case optimal.
Optimizations
We introduce the three general-purpose optimizations, Batching, TrIndexing and Compression in this section, and how we orthogonally apply them to BINJOIN and WOPTJOIN algorithms. In the rest of the paper, we will use the strategy BINJOIN, WOPTJOIN, SHRCUBE instead of their corresponding algorithms, as we focus on strategy-level comparison.
Batching
Let R(V_i) be the partial results that match the given vertices V_i = {v_{s_1}, v_{s_2}, . . . , v_{s_i}} (R_i for short if V_i follows a given order), and let R(V_j) denote the more complete results with V_i ⊂ V_j. Denote R_j|R_i as the tuples in R_j whose projection on V_i equates R_i. Let us partition R_i into b disjoint parts {R_i^1, R_i^2, . . . , R_i^b}. We define Batching on R_j|R_i as the technique to independently process the sub-tasks that compute {R_j|R_i^1, R_j|R_i^2, . . . , R_j|R_i^b}. Obviously, R_j|R_i = ⋃_{k=1}^{b} R_j|R_i^k.
WOptJoin. Recall from Section 3.2 that WOPTJOIN progresses according to a predefined matching order {v_1, v_2, . . . , v_n}. In the i-th round, WOPTJOIN will Propose on each p ∈ R_{i−1} to compute R_i. It is not hard to see that we can easily apply Batching to the computation of R_i|R_{i−1} by randomly partitioning R_{i−1}. For simplicity, the authors implemented Batching on R(Q)|R_1(v_1). Note that R_1(v_1) = V_G in unlabelled matching, which means that we can achieve Batching simply by partitioning the data vertices 7. For short, we also say the strategy batches on v_1, and call v_1 the batching vertex. We follow the same idea to apply Batching to BINJOIN algorithms.
BinJoin. While it is natural for WOPTJOIN to batch on v_1, it is non-trivial to pick such a vertex for BINJOIN. Given a decomposition of the query graph {p_1, p_2, . . . , p_s}, where each p_i is a join unit, we have R(Q) = R(p_1) ⋈ R(p_2) ⋈ · · · ⋈ R(p_s). If we partition R_1(v) so as to batch on v ∈ V_Q, we correspondingly split the join task, and one of the sub-tasks is R(Q)|R_1^k(v) = R(p_1)|R_1^k(v) ⋈ · · · ⋈ R(p_s)|R_1^k(v), where R_1^k(v) is one partition of R_1(v). Observe that if there exists a join unit p where v ∉ V_p, we must have R(p) = R(p)|R_1^k(v), which means R(p) has to be fully computed in each sub-task. Consider the example query in Equation 2:
R(Q) = T_1(v_1, v_2, v_4) ⋈ T_2(v_2, v_3, v_4) ⋈ T_3(v_3, v_4).
Suppose we batch on v_1; the above join can be divided into the following independent sub-tasks:
R(Q)|R_1^1(v_1) = (T_1(v_1, v_2, v_4)|R_1^1(v_1)) ⋈ T_2(v_2, v_3, v_4) ⋈ T_3(v_3, v_4),
R(Q)|R_1^2(v_1) = (T_1(v_1, v_2, v_4)|R_1^2(v_1)) ⋈ T_2(v_2, v_3, v_4) ⋈ T_3(v_3, v_4),
· · ·
R(Q)|R_1^b(v_1) = (T_1(v_1, v_2, v_4)|R_1^b(v_1)) ⋈ T_2(v_2, v_3, v_4) ⋈ T_3(v_3, v_4).
It is not hard to see that we will have to re-compute T_2(v_2, v_3, v_4) and T_3(v_3, v_4) in all the above sub-tasks. Alternatively, if we batch on v_4, we can avoid such re-computation, as T_1, T_2 and T_3 can all be partitioned in each sub-task. Inspired by this, for BINJOIN, we come up with the heuristic of applying Batching on the vertex that is present in as many join units as possible. Note that such a vertex can only be in the join key, as otherwise it must be absent from at least one side of some join. For a complex query, we can still have a join unit that does not contain the batching vertex even after applying the above heuristic. In this case, we either re-compute the join unit, or cache it on disk. A further problem is the potential memory burden of the join itself. Thus, we devise a join-level Batching following the idea of external MergeSort. Specifically, we inject a Buffer-and-Batch operator for the two data streams before they arrive at the Join operator. Buffer-and-Batch functions in two parts:
• Buffer: While the operator receives data from the upstream, it buffers the data until reaching a given threshold. Then the buffer is sorted according to the join key's hash value and spilled to the disk. The buffer is reused for the next batch of data.
• Batch: After the data to join is fully received, we read back the data from the disk in a batching manner, where each batch must include all join keys whose hash values are within a certain range.
While one batch of data is delivered to the Join operator, Timely allows us to supervise the progress and hold the next batch until the current batch completes. This way, the internal memory requirement is one batch of the data. Note that such join-level Batching is natively implemented in Hadoop's "Shuffle" stage, and we replay this process in Timely to improve the scalability of the algorithm.
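Leaving out the actual disk I/O, the Buffer-and-Batch logic boils down to bucketing rows by the hash of their join key and releasing one hash range at a time; a simplified in-memory sketch (the names Row and buffer_and_batch are ours):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

type Row = Vec<u64>;

/// Assign each buffered row to one of `num_batches` buckets by join-key hash;
/// in the real operator each bucket would be spilled to disk and read back
/// one at a time, so only one batch needs to reside in memory during the join.
fn buffer_and_batch(rows: Vec<Row>, key: impl Fn(&Row) -> u64, num_batches: u64)
    -> Vec<Vec<Row>>
{
    let mut batches: Vec<Vec<Row>> = (0..num_batches).map(|_| Vec::new()).collect();
    for row in rows {
        let mut h = DefaultHasher::new();
        key(&row).hash(&mut h);
        let bucket = (h.finish() % num_batches) as usize;
        batches[bucket].push(row);
    }
    batches
}

fn main() {
    // Partial results keyed on their second column (the join key).
    let rows: Vec<Row> = (0..20).map(|i| vec![i, i % 7]).collect();
    let batches = buffer_and_batch(rows, |r| r[1], 4);
    for (i, b) in batches.iter().enumerate() {
        println!("batch {} holds {} rows", i, b.len());
    }
}
```

Because both join inputs are bucketed by the same hash, matching keys always meet within the same batch, so the join can proceed one batch at a time under a bounded memory footprint.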
Triangle Indexing
As the name suggests, TrIndexing precomputes the triangles of the data graph and indexes them along with the graph data to prune infeasible results. The authors of BiGJoin [13] optimized the 4-clique query by using the triangles as base relations to join, which reduces the rounds of joins and the network communication. In [45], the authors proposed to maintain not only triangles, but all k-cliques up to a given k. As we mentioned earlier, maintaining triangles already incurs huge extra cost, let alone larger cliques.
In addition to the default hash partition, Lai et al. proposed the "triangle partition" [37], which also incorporates into a vertex's partition the edges among its neighbors (i.e. the edges that form triangles with the anchor vertex). "Triangle partition" allows BINJOIN to use clique as the join unit [37], which greatly reduces the intermediate results of certain queries and improves the performance. "Triangle partition" is de facto a variant of TrIndexing which, instead of explicitly materializing the triangles, maintains them in the local graph structure (e.g. the adjacency lists). As we will show in the experiments (Section 5), this saves a lot of space compared to explicit triangle materialization. Therefore, we adopt the "triangle partition" for the TrIndexing optimization in this work.
BinJoin. Obviously, BINJOIN becomes CliqueJoin with TrIndexing, and StarJoin (or TwinTwigJoin) otherwise.
With worst-case optimality guarantee (Section 3.5), BINJOIN should perform much better with TrIndexing, which is also observed in "Exp-1" of Section 5.
WOptJoin. In order to match v_i in the i-th round, WOPTJOIN utilizes Count, Propose and Intersect to process the intersection of Equation 4. For ease of presentation, suppose v_{i+1} connects to the first s query vertices {v_1, v_2, . . . , v_s}; given a partial match {f(v_1), . . . , f(v_s)}, we have C(v_{i+1}) = ⋂_{j=1}^{s} N_G(f(v_j)). In the original implementation, it is required to send (p; C(v_{i+1})) via the network to all machines that contain each f(v_j) (1 ≤ j ≤ s) to process the intersection, which can render massive communication cost. In order to reduce the communication cost, we implement TrIndexing for WOPTJOIN as follows. We first group {v_1, . . . , v_s} such that for each group U(v_x), we have U(v_x) = {v_x} ∪ {v_y | (v_x, v_y) ∈ E_Q}. Because of TrIndexing, N_G(f(v_y)) (∀v_y ∈ U(v_x)) is maintained in f(v_x)'s partition. Thus, we only need to send the prefix to f(v_x)'s machine, and the intersection within U(v_x) can be done locally. We process the grouping using a greedy strategy that always constructs the largest group from the remaining vertices, as sketched below.
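A sketch of this greedy grouping (the helper name group_backward_neighbors and the toy query are ours): each round picks the pivot whose query-graph neighborhood covers the most of the still-ungrouped backward neighbors.

```rust
use std::collections::HashSet;

/// Greedily group the already-matched neighbors {v_1, .., v_s} of v_{i+1}:
/// each group is U(v_x) = {v_x} ∪ {v_y | (v_x, v_y) ∈ E_Q, v_y still ungrouped},
/// so the intersection within a group can be evaluated on f(v_x)'s
/// "triangle partition" without extra communication.
fn group_backward_neighbors(
    backward: &[usize],
    query_adj: impl Fn(usize) -> HashSet<usize>,
) -> Vec<Vec<usize>> {
    let mut remaining: HashSet<usize> = backward.iter().copied().collect();
    let mut groups = Vec::new();
    while !remaining.is_empty() {
        // Pick the pivot whose neighborhood covers the most ungrouped vertices.
        let pivot = *remaining
            .iter()
            .max_by_key(|&&x| query_adj(x).intersection(&remaining).count())
            .unwrap();
        let mut group = vec![pivot];
        remaining.remove(&pivot);
        let covered: Vec<usize> = remaining
            .iter()
            .copied()
            .filter(|&y| query_adj(pivot).contains(&y))
            .collect();
        for y in covered {
            remaining.remove(&y);
            group.push(y);
        }
        groups.push(group);
    }
    groups
}

fn main() {
    // v4 (index 3) connects back to v1, v2, v3; additionally (v1, v2) ∈ E_Q,
    // so {v1, v2} can be intersected locally on f(v1)'s machine.
    let adj = |v: usize| -> HashSet<usize> {
        match v {
            0 => [1usize, 3].iter().copied().collect(),
            1 => [0usize, 3].iter().copied().collect(),
            2 => [3usize].iter().copied().collect(),
            _ => [0usize, 1, 2].iter().copied().collect(),
        }
    };
    println!("{:?}", group_backward_neighbors(&[0, 1, 2], adj)); // e.g. [[0, 1], [2]]
}
```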
Remark 4.1. The "triangle partition" may result in maintaining a large portion of the data graph in certain partitions. Lai et al. pointed out this issue and proposed a space-efficient alternative that leverages the vertex ordering [37]. That is, given the partitioned vertex u and two neighbors u' and u'' that close a triangle, we place the edge (u', u'') in the partition only when u < u' < u''. Although this alteration reduces storage, it may affect the effectiveness of TrIndexing for WOPTJOIN and the implementations of Batching and Compression for the BINJOIN algorithms. Take WOPTJOIN as an example: after using the space-efficient "triangle partition", we should modify the above grouping as:
U (v x ) = {v x } ∪ {v y | (v x , v y ) ∈ E Q ∧ (v x , v y ) ∈ O Q }.
Note that the order between query vertices is for symmetry breaking (Section 2.1), and it may not be present in certain queries, which makes TrIndexing completely useless for WOPTJOIN in such cases.
Compression
Subgraph matching is a typical combinatorial problem, and can easily produce results of exponential size. Compression aims to maintain the (intermediate) results in a compressed form to reduce resource allocation and communication cost. In the following, when we say "compress a query vertex", we mean maintaining its matched data vertices in the form of an array, instead of unfolding them in line with the one-to-one mapping of a match (Definition 2.1). Qiao et al. proposed CrystalJoin to study Compression in general for subgraph matching. As we introduced in Section 3.4, CrystalJoin first extracts the minimum vertex cover as the uncompressed part, and then it can compress the remaining query vertices as the intersections of certain uncompressed matches' neighbors. Such Compression leverages the fact that all dependencies (edges) of the compressed part that require further computation are already covered by the uncompressed part, thus the compressed part can stay compressed until the actual matches are requested. CrystalJoin inspires a heuristic for doing Compression, that is, to compress the vertices whose matches will not be used in any future computation. In the following, we apply the same heuristic to the other algorithms.
BinJoin. Obviously we cannot compress any vertex that is present in the join key. What we need to do is simply locate the vertices to compress in the join units, namely stars and cliques. For a star, the root vertex must remain uncompressed, as the leaves' computation depends on it. For a clique, we can only compress one vertex, as otherwise the mutual connection between the compressed vertices would be lost. In a word, we compress two types of vertices for BINJOIN: (1) non-key and non-root vertices of a star join unit, and (2) one non-key vertex of a clique join unit.
WOptJoin. Based on a predefined join order {v_1, v_2, . . . , v_n}, we can compress v_i (1 ≤ i ≤ n) if there does not exist v_j (i < j) such that (v_i, v_j) ∈ E_Q. In other words, v_i's matches will never be involved in any future intersection (computation). Note that v_n can always be trivially compressed. With Compression, when v_i is compressed, we maintain its matches as an array instead of unfolding them into the prefix like a normal vertex.
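Whether a query vertex can stay compressed under this heuristic depends only on the matching order and E_Q; a small sketch (the helper name compressible_positions is ours):

```rust
/// Given the matching order (a permutation of query-vertex indices) and the
/// query edges, the vertex at position i can stay compressed iff no later
/// vertex in the order is adjacent to it, i.e. its matches never feed an
/// intersection.
fn compressible_positions(order: &[usize], query_edges: &[(usize, usize)]) -> Vec<bool> {
    let n = order.len();
    (0..n)
        .map(|i| {
            let v = order[i];
            !(i + 1..n).any(|j| {
                let u = order[j];
                query_edges.contains(&(v, u)) || query_edges.contains(&(u, v))
            })
        })
        .collect()
}

fn main() {
    // Triangle {v1, v2, v3} with two extra leaves v4, v5 hanging off v1
    // (indices 0..4), matched in the order v1, v2, v3, v4, v5.
    let edges = [(0, 1), (1, 2), (0, 2), (0, 3), (0, 4)];
    let flags = compressible_positions(&[0, 1, 2, 3, 4], &edges);
    println!("{:?}", flags); // [false, false, true, true, true]
}
```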
Experiments
Experimental settings
Environments. We deploy two clusters for the experiments: (1) a local cluster of 10 machines connected via one 10GBps switch and one 1GBps switch. Each machine has 64GB memory, 1 TB disk and 1 Intel Xeon CPU E3-1220 V6 3.00GHz with 4 physical cores; (2) an AWS cluster of 40 "r5-2xlarge" instances connected via a 10GBps switch, each with 64GB memory, 8 vCpus and 500GB Amazon EBS storage. By default we use the local cluster of 10 machines with 10GBps switch. We run 3 workers in each machine in the local cluster, and 6 workers in the AWS cluster for Timely. The codes are implemented based on the open-sourced Timely dataflow system [8] using Rust 1.32. We are still working towards open-sourcing the codes, and the bins together with their usages are temporarily provided 8 to verify the results.
Metrics.
In the experiments, we measure query time T as the slowest worker's wall clock time from an average of three runs. We allow 3 hours as the maximum running time for each test. We use OT and OOM to indicate a test case runs out of the time limit and out of memory, respectively. By default we will not show the OOM results for clear presentation.
We divide T into two parts, the computation time T_comp and the communication time T_comm. We measure T_comp as the time the slowest worker spends on actual computation, by timing every computing function. We are aware that the actual communication time is hard to measure, as Timely overlaps computation and communication to improve throughput. We consider T − T_comp, which mainly records the time a worker waits for data from the network channel (a.k.a. communication time); the other part of the communication, which overlaps with computation, is of less interest as it does not affect the query progress. As a result, we simply let T_comm = T − T_comp in the experiments. We measure the maximum peak memory using Linux's "time -v" in each machine. We define the communication cost as the number of integers a worker receives during the process, and measure the maximum communication cost among the workers accordingly.
Dataset Formats. We preprocess each dataset as follows: we treat it as a simple undirected graph by removing self-loops and duplicate edges, and format it using "Compressed Sparse Row" (CSR) [3]. We relabel the vertex ids according to the degree and break ties arbitrarily.
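For reference, a minimal CSR builder over the preprocessed edge list (our own sketch; vertices are assumed to be already relabelled to 0..N−1, and each neighbor segment is sorted to support binary-search and merge-based intersections):

```rust
/// CSR representation: offsets[v]..offsets[v+1] indexes v's neighbor segment.
struct Csr {
    offsets: Vec<usize>,
    neighbors: Vec<u32>,
}

fn build_csr(n: usize, edges: &[(u32, u32)]) -> Csr {
    let mut degree = vec![0usize; n];
    for &(a, b) in edges {
        degree[a as usize] += 1;
        degree[b as usize] += 1;
    }
    // Prefix sums give the start offset of each vertex's neighbor segment.
    let mut offsets = vec![0usize; n + 1];
    for v in 0..n {
        offsets[v + 1] = offsets[v] + degree[v];
    }
    let mut neighbors = vec![0u32; offsets[n]];
    let mut cursor = offsets.clone();
    for &(a, b) in edges {
        neighbors[cursor[a as usize]] = b;
        cursor[a as usize] += 1;
        neighbors[cursor[b as usize]] = a;
        cursor[b as usize] += 1;
    }
    // Sort each segment so intersections can use binary search or merging.
    for v in 0..n {
        neighbors[offsets[v]..offsets[v + 1]].sort_unstable();
    }
    Csr { offsets, neighbors }
}

fn main() {
    let g = build_csr(4, &[(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]);
    let v = 1usize;
    println!("N({}) = {:?}", v, &g.neighbors[g.offsets[v]..g.offsets[v + 1]]); // [0, 2, 3]
}
```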
Compared Strategies. In the experiments, we implement BINJOIN and WOPTJOIN with all Batching, TrIndexing and Compression optimizations (Section 4). SHRCUBE is implemented with "Hypercube Optimization" [20], and "DualSim" (unlabelled) [34] and "CFLMatch" (labelled) [16] as local algorithms. FULLREP is implemented with the same local algorithms as SHRCUBE.
Auxiliary Experiments. We have also conducted several auxiliary experiments in the appendix to study the strategies of BINJOIN, WOPTJOIN, SHRCUBE and FULLREP.
Unlabelled Experiments
Datasets. The datasets used in this experiment are shown in Table 2. All datasets except SY are downloaded from public sources, which are indicated by the letter in the bracket (S [9], W [10], D [1]). All statistics are measured by treating G as an undirected graph. Among the datasets, GO is a small dataset used to study cases of extremely large (intermediate) result sets; LJ, UK and FS are three popular datasets used in prior works, featuring the statistics of real social networks and web graphs; GP is the Google Plus ego network, which is exceptionally dense; US and EU, on the other end, are sparse road networks. These datasets vary in number of vertices and edges, density and maximum degree, as shown in Table 2. We synthesize the SY data according to [18], which generates data with real-graph characteristics. Note that this data occupies roughly 80GB of space, and is larger than the configured memory of our machines. We synthesize the data because we did not find publicly accessible data of this size. Larger datasets like Clueweb [2] are available, but they are beyond the processing power of our current cluster.
Each dataset is hash partitioned ("hash") across the cluster. We also implement the "triangle partition" ("tri.") for the TrIndexing optimization (Section 4.2). To do so, we use BiGJoin to compute the triangles and send the triangle edges to the corresponding partitions. We record the time T* and the average number of edges |E*| of the two partition strategies. The partition statistics are recorded using the local cluster, except for SY, which is processed in the AWS cluster. From Table 2, we can see that |E_tri.| is noticeably larger, around 1-10 times larger than |E_hash|. Note that in GP and UK, which are either dense or must contain a large dense community, the "triangle partition" can maintain a large portion of the data in each partition. Compared to complete triangle materialization, however, the "triangle partition" turns out to be much cheaper. For example, the UK dataset contains around 27B triangles, which means each partition in our local cluster should on average take 0.9B triangles (three integers each); in comparison, UK's "triangle partition" only maintains an average of 0.16B edges (two integers each) according to Table 2.
We use US, GO and LJ as the default datasets in the experiments "Exp-1", "Exp-2" and "Exp-3" in order to collect useful feedback from successful queries, while we may not present certain cases when they do not give new findings.
Queries. The queries are presented in Figure 4. We also give the partial order under each query for symmetry breaking. The queries except q_7 and q_8 are selected based on all prior works [13,35,37,45,50], while varying in number of vertices, density, and the vertex cover ratio |V_Q^cc|/|V_Q|, in order to better evaluate the strategies from different perspectives. The three queries q_7, q_8 and q_9 are relatively challenging given their result scale. For example, the smallest dataset GO contains 2,168B(illion) q_7, 330B q_8 and 1,883B q_9 matches, respectively. Due to space limitations, we record the number of results of each successful query on each dataset in the appendix. Note that q_7 and q_8 are absent from existing works; we benchmark q_7 considering the importance of path queries in practice, and q_8 considering the varieties of its join plans.
Exp-1: Optimizations. We study the effectiveness of Batching, TrIndexing and Compression for both the BINJOIN and WOPTJOIN strategies, by comparing BINJOIN and WOPTJOIN with their respective variants with one optimization off, namely "without Batching", "without TrIndexing" and "without Compression". In the following, we use the suffixes "(w.o.b.)", "(w.o.t.)" and "(w.o.c.)" to represent the three variants. We use the queries q_2 and q_5, and the results on US and LJ are shown in Figure 5. By default, we use a batch size of 1,000,000 for both BINJOIN and WOPTJOIN (according to [13]) in this experiment, and we reduce the batch size when a case runs out of memory, as will be specified.
While comparing BINJOIN with BINJOIN(w.o.b.), we observe that Batching barely affects the performance of q_2, but severely affects q_5 on LJ (1800s vs 4000s (w.o.b.)). The reason is that we still apply join-level Batching for BINJOIN(w.o.b.). For the WOPTJOIN strategy, Batching has little impact on the performance. Surprisingly, after applying TrIndexing to WOPTJOIN, the improvement on average is only around 18%. We did another experiment in the same cluster but using the 1GBps switch, which shows WOPTJOIN is over 6 times faster than WOPTJOIN(w.o.t.) for both queries on LJ. Note that Timely uses separate threads to buffer data received from the network. Given the same computing speed, a faster network allows the data to be more fully buffered and hence incurs less wait for the following computation. Similar to BINJOIN, Compression greatly improves the performance while querying on LJ, but the opposite holds on US.
[Figure 4: The benchmark queries q_1 – q_9, each annotated with its partial order for symmetry breaking.]
Exp-2 Challenging Queries. We study the challenging queries q_7, q_8 and q_9 in this experiment, and focus on comparing BINJOIN and WOPTJOIN on the GO dataset. On the one hand, WOPTJOIN outperforms BINJOIN for q_7 and q_8. Their join plans for q_7 are nearly the same, except that BINJOIN relies on a global shuffling on v_3 to process the join, while WOPTJOIN sends the partial results to the machine that maintains the vertex to grow. It is hence reasonable to observe BINJOIN's poorer performance for q_7, as shuffling is typically a more costly operation. The case of q_8 is similar, so we do not discuss it further. On the other hand, even living with the costly shuffling, BINJOIN still performs better for q_9. Due to its vertex-growing nature, WOPTJOIN's "optimal plan" has to process the costly sub-query Q({v_1, v_2, v_3, v_4, v_5}). On the US dataset, WOPTJOIN consistently outperforms BINJOIN for these queries. This is because US does not produce massive intermediate results as LJ does, thus BINJOIN's shuffling cost consistently dominates.
While processing complex queries like q_8 and q_9, we can study the varieties of join plans for BINJOIN and WOPTJOIN. First of all, we want the readers to note that BINJOIN's join plan for q_8 is different from the optimal plan originally given in [37]. The original "optimal" plan computes q_8 by joining two tailed triangles (a triangle tailed with an edge), while the alternative plan works better by joining the upper "house-shaped" sub-query with the bottom triangle. In theory, the tailed triangle has a worst-case bound (AGM bound [44]) of O(M^2), smaller than the house's O(M^{2.5}), and BINJOIN actually favors this plan based on cost estimation. However, we find out that the number of tailed triangles is very close to that of the houses on GO, which renders the join of two tailed triangles costly for the original plan. This indicates the insufficiency of both the cost estimation proposed in [37] and the worst-case optimal bound [13] for computing the join plan, which will be further discussed in Section 6.
Secondly, it is worth noting that we actually report the result of WOPTJOIN for q_9 using the CrystalJoin plan, as it works better than WOPTJOIN's original "optimal" plan. For q_9, CrystalJoin first computes Q(V_Q^cc), namely the 2-path {v_1, v_3, v_5}, after which it can compress all the remaining vertices v_2, v_4 and v_6. In comparison, the "optimal" plan can only compress v_2 and v_6. In this case, CrystalJoin performs better because it configures larger compression. In [45], the authors proved that using the vertex cover as the uncompressed core renders the maximum compression. However, this may not necessarily result in the best performance, considering that the core part itself can be costly to compute. In our experiments, the unlabelled q_4, q_8 and the labelled q_8 are cases where the CrystalJoin plan performs worse than the original BiGJoin plan (with the Compression optimization), since the CrystalJoin plan does not render strictly larger compression while still having to process the costly core part. As a result, we only recommend the CrystalJoin plan when it leads to strictly larger compression.
The final observation is that the computation time dominates most of the evaluated cases, except BINJOIN's q 8 , WOPTJOIN and SHRCUBE's q 9 on US. We will further discuss this in Exp-3.
Exp-3 All-Around Comparisons. In this experiment, we run q_1 – q_6 using BINJOIN, WOPTJOIN, SHRCUBE and FULLREP across the datasets GP, LJ, UK, EU and FS. We also run WOPTJOIN with the CrystalJoin plan for q_4, as it is the only query whose CrystalJoin plan differs from the BiGJoin plan, and the results show that the performance with the BiGJoin plan is consistently better. We report the results in Figure 7, where the communication time is plotted as gray filling. As a whole, among all 35 test cases, FULLREP achieves the best completion rate of 85%, followed by WOPTJOIN and BINJOIN, which complete 71.4% and 68.6% respectively, and SHRCUBE performs the worst with just an 8.6% completion rate.
FULLREP typically outperforms the other strategies. Observe that WOPTJOIN's performance is often very close to FULLREP's. The reason is that WOPTJOIN's computing plans for these evaluated queries are similar to "DualSim" adopted by FULLREP, and the extra communication cost of WOPTJOIN has been reduced to a very low level by the TrIndexing optimization. Comparing WOPTJOIN with BINJOIN, BINJOIN is better for q_3, a clique query (a single join unit) that requires no join (a case of embarrassing parallelism). BINJOIN performs worse than WOPTJOIN in most other queries, which, as we mentioned before, is due to the costly shuffling. There is an exception, querying q_1 on GP, where BINJOIN performs better than both FULLREP and WOPTJOIN. We explain this using our best speculation: GP is a very dense graph, where we observe nearly 100 vertices with degree around 10,000.
We observe that the computation time T_comp dominates in most cases, as we mentioned in Exp-2. This is trivially true for SHRCUBE and FULLREP, but it may not be clearly so for WOPTJOIN and BINJOIN, given that they both need to transfer a massive amount of intermediate data. We investigate this and find two potential reasons. The first one is attributed to Timely's highly optimized communication component, which allows the computation to overlap communication by using extra threads to receive and buffer the data from the network so that it is mostly ready for the following computation. The second one is the fast network. We re-run these queries using the 1GBps switch, and the results show the opposite trend, where the communication time T_comm in turn takes over.
Exp-4 Web-Scale. We run on the SY dataset in the AWS cluster of 40 instances. Note that FULLREP cannot be used, as SY is larger than the machine's memory. We use the queries q_2 and q_3, and present the results of BINJOIN and WOPTJOIN (SHRCUBE fails all cases due to OOM) in Table 3. The results are consistent with the prior experiments, but observe that the gap between BINJOIN and WOPTJOIN while querying q_1 is larger. This is because we now deploy 40 AWS instances, and BINJOIN's shuffling cost increases accordingly.
Labelled Experiments
We use the LDBC social network benchmark (SNB) [6] for the labelled matching experiments due to the lack of publicly available labelled big graphs. SNB provides a data generator that generates a synthetic social network of required statistics, and a document [7] that describes the benchmarking tasks, in which the complex tasks are actually subgraph matching. The join plans of BINJOIN and WOPTJOIN for the labelled experiments are generated as in the unlabelled case, but we use the label frequencies to break ties.
Datasets. We list the datasets and their statistics in Table 4. These datasets are generated using the "Facebook" mode with a duration of 3 years. The dataset's name, denoted DGx, represents a scale factor of x. The labels are preprocessed into integers. For (1) and (2), note that our current implementation can support both cases, and we make the adaptations for consistency and simplicity. For (3) and (4), we adapt them because they currently do not conform to the subgraph matching problem studied in this paper. For (5), it is due to our current limitation in supporting property graphs. We leave (3), (4) and (5) as interesting future work.
Exp-5 All-Around Comparisons. We now conduct the experiment using all queries on DG10 and DG60, and present the results in Figure 9. Here we compute the join plans for BINJOIN and WOPTJOIN using the unlabelled method, but further use the label frequencies to break ties. The gray filling again represents the communication time. FULLREP outperforms the other strategies in many cases, except that it performs slightly slower than BINJOIN for q_3 and q_5. This is because q_3 and q_5 are join units, so BINJOIN processes them locally in each machine as FULLREP does, while it does not build indices as the "CFLMatch" algorithm used in FULLREP does. For WOPTJOIN, among all these queries only q_8 configures a CrystalJoin plan different from the BiGJoin plan. The results show that the performance of WOPTJOIN drops by about 10 times while using the CrystalJoin plan. Note that the core part of q_8 is a 5-path "Psn-City-Cty-City-Psn" with enormous intermediate results. As we mentioned in the unlabelled experiments, it may not always be wise to first compute the vertex-cover-induced core.
We now focus on comparing BINJOIN and WOPTJOIN. There are three cases that intrigue us. Firstly, observe that BINJOIN performs much better than WOPTJOIN while querying q_4. The reason is the high intersection cost, as we discovered on the GP dataset in unlabelled matching. Secondly, BINJOIN performs worse than WOPTJOIN for q_7, which again is because of BINJOIN's costly shuffling. The third case is q_9, the most complex query in the experiment, where BINJOIN performs much better. The bad performance of WOPTJOIN comes from its long execution plan together with the costly intermediate results.
Both algorithms first expand the three "Psn"s, and then grow via one of the "City"s to "Cty", but BINJOIN approaches this using one join (a triangle ⋈ a TwinTwig), while WOPTJOIN first expands to "City" and then further to "Cty", and the "City" expansion is the culprit of the slower run.
Discussions and Future Work
We discuss our findings and potential future work based on the experiments in Section 5. Finally, we summarize the findings into a practical guide.
Strategy Selection. FULLREP is obviously the preferred choice when each machine can hold the graph data, while both WOPTJOIN and BINJOIN are good alternatives when the graph is larger than the capacity of a machine. Between BINJOIN and WOPTJOIN, on one side, BINJOIN may perform worse than WOPTJOIN (e.g. unlabelled q_2, q_4, q_5) due to the expensive shuffling operation; on the other side, BINJOIN can also outperform WOPTJOIN (e.g. unlabelled and labelled q_9) by avoiding costly sub-queries thanks to query decomposition. One way to choose between BINJOIN and WOPTJOIN is to compare the costs of their respective join plans and select the one with the smaller cost. For now, we can either use the cost estimation proposed in [37], or sum the worst-case bounds, but neither consistently gives the best solution, as will be discussed in "Optimal Join Plan". Alternatively, we refer to "EmptyHeaded" [11] to study a potential hybrid strategy of BINJOIN and WOPTJOIN. Note that "EmptyHeaded" was developed in the single-machine setting and does not take into consideration the impact of Compression; we hence leave such a hybrid strategy in the distributed context as interesting future work.
Optimizations. Our experimental results suggest always using Batching, using TrIndexing when each machine has sufficient memory to hold the "triangle partition", and using Compression when the data graph is not very sparse (e.g. d_G ≥ 5). Batching often does not impact performance, so we recommend always using it due to the unpredictability of the size of the (intermediate) results. TrIndexing is critical for BINJOIN, and it can greatly improve WOPTJOIN by reducing the communication cost, while it requires extra storage to maintain the "triangle partition". Among the evaluated datasets, each "triangle partition" maintains an average of 30% of the data in our 10-machine cluster. Thus, we suggest a memory threshold of 60%|E_G| (half for the graph and half for running the algorithm) for TrIndexing in a cluster of the same or larger scale. Note that this threshold does not apply to extremely dense graphs. Among the three optimizations, Compression is the primary performance booster that improves the performance of BINJOIN and WOPTJOIN by 5 times on average in all but the cases on the very sparse road networks. For such very sparse data graphs, Compression can render more cost than benefit.
Optimal Join Plan. It is challenging to systematically determine the optimal join plans for both BINJOIN and WOPTJOIN. From the experiments, we identify three impact factors: (1) the worst-case bound; (2) cost estimation based on data statistics; (3) favoring the optimizations, especially Compression. All existing works only partially consider these factors, and we have observed sub-optimal join plans in the experiments. For example, BINJOIN bases its "optimal" join plan on minimizing the estimated cost, but the resulting plan does not render the best performance for unlabelled q_8; WOPTJOIN follows worst-case optimality, while it may encounter costly sub-queries for labelled and unlabelled q_9; CrystalJoin focuses on maximizing the compression, while ignoring the fact that the vertex-cover-induced core part itself can be costly to compute. Additionally, there are other impact factors, such as the partial orders of query vertices and the label frequencies, which have not been studied in this work due to space limitations. It is another very interesting future work to thoroughly study the optimal join plan while considering all of the above impact factors.
Computation vs. Communication. We argue that distributed subgraph matching nowadays is a computation-intensive task. This claim holds when the cluster configures a high-speed network (e.g. ≥ 10GBps) and the data processor can efficiently overlap computation with communication. Note that the computation cost (either BINJOIN's join or WOPTJOIN's intersection) is lower-bounded by the output size, which equals the communication cost. Therefore, computation becomes the bottleneck if the network condition is good enough to guarantee that the data is delivered in time. Nowadays, the bandwidth of a local cluster commonly exceeds 10GBps, and the overlapping of computation and communication is widely used in distributed systems (e.g. Spark [54], Flink [17]). As a result, we tend to see distributed subgraph matching as a computation-intensive task, and we advocate that future research devote more effort to optimizing the computation while considering the following perspectives: (1) the new advancements of hardware, for example co-processing on GPU in coupled CPU-GPU architectures [28] and the SIMD programming model on modern CPUs [30]; (2) general computing optimizations such as load balancing strategies and cache-aware graph data accessing [53].
A Practical Guide. Based on the experimental findings, we propose a practical guide for distributed subgraph matching in Figure 10. Note that this guide is based on the current progress of the literature, and future work is needed, for example to study the hybrid strategy and the impact factors of the optimal join plan, before we can arrive at a solid decision procedure for choosing between BINJOIN and WOPTJOIN.
Conclusions
In this paper, we implement four strategies and three general-purpose optimizations for distributed subgraph matching based on the Timely dataflow system, aiming for a systematic, strategy-level comparison among the state-of-the-art algorithms. Based on thorough empirical analysis, we summarize a practical guide, and we also motivate interesting future work for distributed subgraph matching.
A Auxiliary Experiments
Exp-6 Scalability of Unlabelled Matching. We vary the number of machines as 1, 2, 4, 6, 8, 10, and run the unlabelled queries q_1 and q_2 to see how each strategy (BINJOIN, WOPTJOIN, SHRCUBE and FULLREP) scales out. We further evaluate "Single Thread", a serial algorithm that is specially implemented for these two queries. Following [42], we define the COST of a strategy as the number of workers it needs to outperform "Single Thread", which is a comprehensive measurement of both efficiency and scalability. In this experiment, we query q_1 and q_2 on the popular dataset LJ, and show the results in Figure 11. Note that we only plot the communication and memory consumption for q_1, as q_2 follows a similar trend. We also test on the other datasets, such as the dense dataset GP, and the results are similar.
All strategies demonstrate reasonable scaling for both queries. In terms of COST, note that FULLREP's COST is slightly larger than 1, because "DualSim" is implemented in general for arbitrary queries, while "Single Thread" uses a hand-tuned implementation. We first analyze the results of q_1. The COST ranking is FULLREP (1.6), WOPTJOIN (2.0), BINJOIN (3.1) and SHRCUBE (3.7). As expected, WOPTJOIN scales worse than FULLREP, while BINJOIN scales worse than WOPTJOIN because the shuffling cost increases with the number of machines. In terms of memory consumption, it is trivial that FULLREP constantly consumes memory of the graph size. Due to the use of Batching, both BINJOIN and WOPTJOIN consume very little memory for both queries. Observe that SHRCUBE consumes much more memory than WOPTJOIN and BINJOIN, even more than the graph data itself. This is because a certain worker may receive more edges (with duplicates) than the graph itself, which increases the peak memory consumption. For communication cost, both BINJOIN and WOPTJOIN demonstrate reasonable drops as the number of machines increases. SHRCUBE renders much less communication as expected, but it shows an increasing trend. This is actually reasonable behavior for SHRCUBE, as more machines also mean more data duplicates. For q_2, the COST ranking is FULLREP (2.4), WOPTJOIN (2.75), BINJOIN (3.82) and SHRCUBE (71.2). Here, SHRCUBE's COST is dramatically larger, with most time spent on deduplication (Section 3.3). The trends of memory consumption and communication cost of q_2 are similar to those of q_1, and thus are not further discussed.
Exp-7 Vary Densities for Labelled Matching. Based on DG10, we generate datasets with densities 10, 20, 40, 80 and 160 by randomly adding edges into DG10. Note that the density-10 dataset is the original DG10 in Table 4. We use the labelled queries q_4 and q_7 in this experiment, and show the results in Figure 12.

Exp-8 Vary Labels for Labelled Matching. We generate datasets with 0, 5, 10, 15 and 20 labels based on DG10. Note that there are 5 labels in the labelled queries q_4 and q_7, which are called the target labels. The 10-label dataset is the original DG10. For the one with 5 labels, we replace each label not in the target labels with one random target label. For the ones with more than 10 labels, we randomly choose some nodes and change their labels into some other pre-defined labels until they contain the required number of labels. For the one with zero labels, the task degenerates into unlabelled matching, and we use the unlabelled versions of q_4 and q_7 instead. The experiment demonstrates the transition from unlabelled matching to labelled matching, where the biggest drop happens for all algorithms. The drops continue as the number of labels increases, but less sharply once there is a sufficient number of labels (≥ 10). Observe that when there are very few labels, for example the 5-label case of q_7, FULLREP actually performs worse than BINJOIN and WOPTJOIN. The "CFLMatch" algorithm [16] used by FULLREP relies heavily on label-based pruning. Fewer labels render larger candidate sets and more recursive calls, resulting in a performance drop for FULLREP. Fewer labels may also enlarge the intermediate results of BINJOIN and WOPTJOIN, but these remain relatively small in the labelled case and do not create much burden for the 10GBps network.
B Auxiliary Materials
All Query Results. In Table 5, we show the number of results of every successful query on each dataset evaluated in this work. Note that DG10 and DG60 record the labelled queries q_1 − q_9.
| 13,431 |
1906.11518
|
2954023930
|
Recently, many distributed algorithms have emerged that aim at solving subgraph matching at scale. Existing algorithm-level comparisons have failed to provide a systematic view of the pros and cons of each algorithm, mainly due to the intertwining of strategy and optimization. In this paper, we identify four strategies and three general-purpose optimizations from representative state-of-the-art works. We implement the four strategies with the optimizations based on the common Timely dataflow system for systematic strategy-level comparison. Our implementation covers all representative algorithms. We conduct extensive experiments for both unlabelled matching and labelled matching to analyze the performance of distributed subgraph matching under various settings, which we finally summarize as a practical guide.
|
Query Languages and Systems. With the increasing demand for subgraph matching in graph analysis, people have started to investigate easy-to-use and highly expressive subgraph matching languages. Neo4j introduced @cite_32 , and now people are working on standardizing the semantics of subgraph matching based on Cypher @cite_58 . Gradoop @cite_36 is a system based on Apache Hadoop that translates a Cypher query into a MapReduce job. proposed based on relational semantics for graph processing, in which they leveraged the worst-case optimal join algorithm to solve subgraph matching. Arabesque @cite_1 was designed to solve graph mining (continuously computing frequent subgraphs) at scale, while it can be configured for a single subgraph query.
|
{
"abstract": [
"",
"We survey foundational features underlying modern graph query languages. We first discuss two popular graph data models: edge-labelled graphs, where nodes are connected by directed, labelled edges, and property graphs, where nodes and edges can further have attributes. Next we discuss the two most fundamental graph querying functionalities: graph patterns and navigational expressions. We start with graph patterns, in which a graph-structured query is matched against the data. Thereafter, we discuss navigational expressions, in which patterns can be matched recursively against the graph to navigate paths of arbitrary length; we give an overview of what kinds of expressions have been proposed and how they can be combined with graph patterns. We also discuss several semantics under which queries using the previous features can be evaluated, what effects the selection of features and semantics has on complexity, and offer examples of such features in three modern languages that are used to query graphs: SPARQL, Cypher, and Gremlin. We conclude by discussing the importance of formalisation for graph query languages; a summary of what is known about SPARQL, Cypher, and Gremlin in terms of expressivity and complexity; and an outline of possible future directions for the area.",
"Distributed data processing platforms such as MapReduce and Pregel have substantially simplified the design and deployment of certain classes of distributed graph analytics algorithms. However, these platforms do not represent a good match for distributed graph mining problems, as for example finding frequent subgraphs in a graph. Given an input graph, these problems require exploring a very large number of subgraphs and finding patterns that match some \"interestingness\" criteria desired by the user. These algorithms are very important for areas such as social networks, semantic web, and bioinformatics. In this paper, we present Arabesque, the first distributed data processing platform for implementing graph mining algorithms. Arabesque automates the process of exploring a very large number of subgraphs. It defines a high-level filter-process computational model that simplifies the development of scalable graph mining algorithms: Arabesque explores subgraphs and passes them to the application, which must simply compute outputs and decide whether the subgraph should be further extended. We use Arabesque's API to produce distributed solutions to three fundamental graph mining problems: frequent subgraph mining, counting motifs, and finding cliques. Our implementations require a handful of lines of code, scale to trillions of subgraphs, and represent in some cases the first available distributed solutions.",
"With the increasing amount of text data stored in relational databases, there is a demand for RDBMS to support keyword queries over text data. As a search result is often assembled from multiple relational tables, traditional IR-style ranking and query evaluation methods cannot be applied directly. In this paper, we study the effectiveness and the efficiency issues of answering top-k keyword query in relational database systems. We propose a new ranking formula by adapting existing IR techniques based on a natural notion of virtual document. We also propose several efficient query processing methods for the new ranking method. We have conducted extensive experiments on large-scale real databases using two popular RDBMSs. The experimental results demonstrate significant improvement to the alternative approaches in terms of retrieval effectiveness and efficiency."
],
"cite_N": [
"@cite_36",
"@cite_58",
"@cite_1",
"@cite_32"
],
"mid": [
"",
"2668736619",
"1996229963",
"1992735711"
]
}
|
A SURVEY AND EXPERIMENTAL ANALYSIS OF DISTRIBUTED SUBGRAPH MATCHING
|
with no need of exchanging data. As a result, it typically renders much less communication cost than the BINJOIN and WOPTJOIN algorithms. MultiwayJoin adopts the idea of SHRCUBE for subgraph matching. In order to properly partition the computation without missing results, MultiwayJoin needs to duplicate each edge in multiple workers. As a result, MultiwayJoin can almost carry the whole graph in each worker for certain queries [35,13], and thus scales out poorly.
OTHERS. Shao et al. proposed PSgL [50] that processes subgraph matching via breadth-first-style traversal. Starting from an initial query vertex, PSgL iteratively expands the partial results by merging in the matches of a certain vertex's unmatched neighbors. It has been pointed out in [35] that PSgL is actually a variant of StarJoin. Very recently, Qiao et al. proposed CrystalJoin [45] that aims at resolving the "output crisis" by compressing the (intermediate) results. The idea is to first compute the matches of the vertex cover of the query graph; then the remaining vertices' matches can be compressed as the intersection of the vertex cover's neighbors to avoid costly cartesian products.
Optimizations. Apart from join strategies, existing algorithms also explored a variety of optimizations, some of which are query- or algorithm-specific, while we spotlight three general-purpose optimizations: Batching, TrIndexing and Compression. Batching aims to divide the whole computation into sub-tasks that can be evaluated independently in order to save resource (memory) allocation. TrIndexing precomputes and indexes the triangles (3-cycles) of the graph to facilitate pruning. Compression attempts to maintain the (intermediate) results in a compressed form to reduce resource allocation and communication cost.
Motivations.
In this paper, we survey seven representative algorithms to solve distributed subgraph matching: StarJoin [51], MultiwayJoin [12], PSgL [50], TwinTwigJoin [35], CliqueJoin [37], CrystalJoin [45] and BiGJoin [13]. While all these algorithms embody some good merits in theory, existing algorithm-level comparisons failed to provide a systematic view of the pros and cons of each algorithm for several reasons. Firstly, the prior experiments did not take into consideration the differences of languages and the cost of the systems on which each implementation is based (Table 1). Secondly, some implementations hardcode query-specific optimizations for each query, which makes it hard to judge whether the observed performance comes from the algorithmic advancement or the hardcoded optimization. Thirdly, all BINJOIN and WOPTJOIN algorithms (more precisely, their implementations) intertwined the join strategy with some optimizations of Batching, TrIndexing and Compression. We show in Table 1 how each optimization has been applied in the current implementations. For example, CliqueJoin only adopted TrIndexing and some query-specific Compression, while BiGJoin considered Batching in general, but TrIndexing only for one specific query (Compression was only discussed in the paper, but not implemented). People naturally wonder that "maybe it is better to adopt strategy A with optimization B", but unfortunately none of the existing implementations covers that combination. Last but not least, an important benchmarking of the FULLREP strategy is missing, that is, to maintain the whole graph in each partition and parallelize embarrassingly [29]. FULLREP requires no communication, and it should be the most efficient strategy when each machine can hold the whole graph (the case for most experimental settings nowadays).

Table 1 summarizes the surveyed algorithms via the category of strategy, the optimality guarantee, and the status of the current implementations, including the base platform and how the three optimizations are adopted.

Algorithm         | Strategy | Worst-case optimal | Platform                | Optimizations
StarJoin [51]     | BINJOIN  | No                 | Trinity [49]            | None
MultiwayJoin [12] | SHRCUBE  | N/A                | Hadoop [35], Myria [20] | N/A
PSgL [50]         | OTHERS   | No                 | Giraph [4]              | None
TwinTwigJoin [35] | BINJOIN  | No                 | Hadoop                  | Compression [36]
CliqueJoin [37]   | BINJOIN  | Yes (Section 6)    | Hadoop                  | TrIndexing, some Compression
CrystalJoin [45]  | OTHERS   | N/A                | Hadoop                  | TrIndexing, Compression
BiGJoin [13]      | WOPTJOIN | Yes [13]           | Timely Dataflow [43]    | Batching, specific TrIndexing
Our Contributions
To address the above issues, we aim at a systematic, strategy-level benchmarking of distributed subgraph matching in this paper. To achieve that goal, we implement all strategies, together with the three general-purpose optimizations for subgraph matching, based on the Timely dataflow system [43]. Note that our implementation covers all seven representative algorithms. Here, we use Timely as the base system as it incurs less cost [42] than other popular systems like Giraph [4], Spark [54] and GraphLab [38], so that the system's impact can be reduced to the minimum.
We implement the benchmarking platform using our best effort based on the papers of each algorithm and email communications with the authors. Our implementation is (1) generic, handling arbitrary queries without any hardcoded optimizations; (2) flexible, able to configure the Batching, TrIndexing and Compression optimizations in any combination for BINJOIN and WOPTJOIN algorithms; and (3) efficient, being comparable to and sometimes even faster than the original hardcoded implementations. Note that the three general-purpose optimizations are mainly used to reduce communication cost, and are not useful to the SHRCUBE and FULLREP strategies, while we still devote a lot of effort to their implementations. Aware that their performance heavily depends on the local algorithm, we implement and compare the state-of-the-art local subgraph matching algorithms proposed in [34], [11] (for unlabelled matching), and [16] (for labelled matching), and adopt the best-possible implementation. For SHRCUBE, we refer to [20] to implement the "Hypercube Optimization" for better hypercube sharing.
We make the following contributions in the paper.
(1) A benchmarking platform based on Timely dataflow system for distributed subgraph matching. We implement four distributed subgraph matching strategies (and the general optimizations) that covers seven state-of-the-art algorithms: StarJoin [51], MultiwayJoin [12], PSgL [50], TwinTwigJoin [35], CliqueJoin [37], CrystalJoin [45] and BiGJoin [13]. Our implementation is generic to handle arbitrary query, including the labelled and directed query, and thus can guide practical use.
(2) Three general-purpose optimizations -Batching, TrIndexing and Compression. We investigate the literature on the optimization strategies, and spotlight the three general-purpose optimizations. We propose heuristics to incorporate the three optimizations into BINJOIN and WOPTJOIN strategies, with no need of query-specific adjustments from human experts. The three optimizations can be flexibly configured in any combination.
(3) In-depth experimental studies. In order to extensively evaluate the performance of each strategy and the effectiveness of the optimizations, we use data graphs of different sizes and densities, including sparse road networks, dense ego networks, and a web-scale graph that is larger than each machine's configured memory. We select query graphs of various characteristics that are either from existing works or suitable for benchmarking purposes. In addition to running time, we measure the communication cost, memory usage and other metrics to help reason about the performance.
(4) A practical guide of distributed subgraph matching. Through empirical analysis covering the variances of join strategies, optimizations, join plans, we propose a practical guide for distributed subgraph matching. We also inspire interesting future work based on the experimental findings.
Organizations
The rest of the paper is organized as follows. Section 2 defines the problem of subgraph matching and introduces preliminary knowledge. Section 3 surveys the representative algorithms and our implementation details, following the categories of BINJOIN, WOPTJOIN, SHRCUBE and OTHERS. Section 4 investigates the three general-purpose optimizations and devises heuristics for applying them to BINJOIN and WOPTJOIN algorithms. Section 5 demonstrates the experimental results and our in-depth analysis. Section 7 discusses the related works, and Section 8 concludes the whole paper.
Preliminaries
Problem Definition
Graph Notations. A graph g is defined as a 3-tuple, g = (V_g, E_g, L_g), where V_g is the vertex set, E_g ⊆ V_g × V_g is the edge set of g, and L_g is a label function that maps each vertex µ ∈ V_g and/or each edge e ∈ E_g to a label. Note that for an unlabelled graph, L_g simply maps all vertices and edges to ∅. For a vertex µ ∈ V_g, denote N_g(µ) as the set of neighbors of µ, d_g(µ) = |N_g(µ)| as the degree of µ, and d_g = 2|E_g|/|V_g| and D_g = max_{µ∈V_g} d_g(µ) as the average and maximum degree, respectively. A subgraph g' of g, denoted g' ⊆ g, is a graph that satisfies V_{g'} ⊆ V_g and E_{g'} ⊆ E_g.
Given V' ⊆ V_g, we define the induced subgraph g(V') as the subgraph induced by V', that is, g(V') = (V', E(V'), L_g), where E(V') = {e = (µ, µ') | e ∈ E_g, µ ∈ V' ∧ µ' ∈ V'}. We say V' ⊆ V_g is a vertex cover of g if ∀e = (µ, µ') ∈ E_g, µ ∈ V' or µ' ∈ V'. A minimum vertex cover V_g^c is a vertex cover of g that contains the minimum number of vertices. A connected vertex cover is a vertex cover whose induced subgraph is connected, among which a minimum connected vertex cover, denoted V_g^cc, is the one with the minimum number of vertices.
Data and Query Graph. We denote the data graph as G, and let N = |V_G| and M = |E_G|. Denote the data vertex of id i as u_i, where 1 ≤ i ≤ N. Note that the data vertices have been reordered such that if d_G(u) < d_G(u'), then id(u) < id(u'). We denote the query graph as Q, and let n = |V_Q|, m = |E_Q|, and V_Q = {v_1, v_2, · · · , v_n}.
Subgraph Matching. Given a data graph G and a query graph Q, we define subgraph isomorphism:
Definition 2.1. (Subgraph Isomorphism.) A subgraph isomorphism is defined as a bijective mapping f : V(Q) → V(G) such that: (1) ∀v ∈ V(Q), L_Q(v) = L_G(f(v)); (2) ∀(v, v') ∈ E(Q), (f(v), f(v')) ∈ E(G), and L_Q((v, v')) = L_G((f(v), f(v'))). A subgraph isomorphism is called a Match in this paper. With the query vertices listed as {v_1, v_2, · · · , v_n}, we can simply represent a match f as {u_{k_1}, u_{k_2}, · · · , u_{k_n}}, where f(v_i) = u_{k_i} for 1 ≤ i ≤ n.
The Subgraph Matching problem aims at finding all matches of Q in G. Denote R G (Q) (or R(Q) when the context is clear) as the result set of Q in G. As prior works [35,37,50], we apply symmetry breaking for unlabelled matching to avoid duplicate enumeration caused by automorphism. Specifically, we first assign partial order O Q to the query graph according to [26].
Here, O Q ⊆ V Q × V Q , and (v i , v j ) ∈ O Q means v i < v j .
In unlabelled matching, a match f must satisfy the order constraint: ∀(v, v') ∈ O_Q, it holds that f(v) < f(v'). Note that we do not consider the order constraint in labelled matching.

Example 2.1. In Figure 1, we present a query graph Q and a data graph G. For unlabelled matching, we give the partial order O_Q = {(v_1, v_3), (v_2, v_4)} under the query graph. There are three matches: {u_1, u_2, u_6, u_5}, {u_2, u_5, u_3, u_6} and {u_4, u_3, u_6, u_5}. It is easy to check that these matches satisfy the order constraint. Without the order constraint, there are actually four automorphic matches corresponding to each above match [12]. For labelled matching, we use different fillings to represent the labels. There are two matches accordingly: {u_1, u_2, u_6, u_5} and {u_4, u_3, u_6, u_5}.

By treating the query vertices as attributes and the data edges as a relational table, we can write a subgraph matching query as a multiway join of the edge relations. For example, regardless of label and order constraints, the query of Example 2.1 can be written as the following join:

R(Q) = E(v_1, v_2) ⋈ E(v_2, v_3) ⋈ E(v_3, v_4) ⋈ E(v_1, v_4) ⋈ E(v_2, v_4).    (1)
This motivates researchers to leverage join operation for large-scale subgraph matching, given that join can be easily distributed, and it is natively supported in many distributed data engines like Spark [54] and Flink [17].
Timely Dataflow System
Timely is a distributed data-parallel dataflow system [43]. The minimum processing unit of Timely is a worker, which can be simply seen as a process that occupies a CPU core. Typically, one physical multi-core machine can run several workers. Timely follows the shared-nothing dataflow computation model [22] that abstracts the computation as a dataflow graph. In the dataflow graph, the vertex (a.k.a. operator) defines the computing logics and the edges in between the operators represent the data streams. One operator can accept multiple input streams, feed them to the computing, and produce (typically) one output stream. After the dataflow graph for certain computing task is defined, it is distributed to each worker in the cluster, and further translated into a physical execution plan. Based on the physical plan, each worker can accordingly process the task in parallel while accepting the corresponding input portion.
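To make the dataflow model concrete, the sketch below is adapted from the timely crate's basic example (the exact API may differ across versions, so treat it as illustrative rather than as our benchmarking code): worker 0 feeds data into the dataflow, the exchange operator shuffles records across workers by key, and inspect consumes them.

```rust
extern crate timely;

use timely::dataflow::InputHandle;
use timely::dataflow::operators::{Input, Exchange, Inspect, Probe};

fn main() {
    // Each process runs one or more workers; the closure defines the per-worker logic.
    timely::execute_from_args(std::env::args(), |worker| {
        let index = worker.index();
        let mut input = InputHandle::new();

        // Define the dataflow graph: input -> exchange (shuffle) -> inspect.
        let probe = worker.dataflow(|scope| {
            scope.input_from(&mut input)
                 .exchange(|x: &u64| *x)   // route record x to worker (x mod #workers)
                 .inspect(move |x| println!("worker {} saw {}", index, x))
                 .probe()
        });

        // Worker 0 introduces the data; all workers step the dataflow until it drains.
        for round in 0u64..10 {
            if index == 0 { input.send(round); }
            input.advance_to(round + 1);
            while probe.less_than(input.time()) { worker.step(); }
        }
    }).unwrap();
}
```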
Algorithm Survey
We survey the distributed subgraph matching algorithms following the categories of BINJOIN, WOPTJOIN, SHRCUBE, and OTHERS. We also show that CliqueJoin is a variant of GenericJoin [44], and is thus worst-case optimal.
BinJoin
The simplest BINJOIN algorithm uses data edges as the base relations: it starts from one edge and expands by one edge in each join. For example, to solve the join of Equation 1, a simple plan is shown in Figure 2a. The join plan is straightforward, but the intermediate results, especially R_2 (a 3-path), can be huge. To improve the performance of BINJOIN, people devoted their efforts to: (1) using more complex base relations other than edges; (2) devising a better join plan P. The base relations B_{[q]} represent the matches of a set of sub-structures [q] of the query graph Q. Each p ∈ [q] is called a join unit, and it must satisfy V_Q = ⋃_{p∈[q]} V_p and E_Q = ⋃_{p∈[q]} E_p.

[Figure 2: (a) Left-deep join plan; (b) Bushy join plan.]
With the data graph partitioned across the cluster, [37] constrains the join unit to be the structure whose results can be independently computed within each partition (i.e. embarrassingly parallel [29]). It is not hard to see that when each vertex has full access to the neighbors in the partition, we can compute the matches of a k-star (a star of k leaves) rooted on the vertex u by enumerating all k-combinations within N G (u). Therefore, star is a qualified and indeed widely used join unit.
Given the base relations, the join plan P determines an order of processing binary joins. A join plan is left-deep 4 if there is at least a base relation involved in each join, otherwise it is bushy. For example, the join plan in Figure 2a is left-deep, and a bushy join plan is shown in Figure 2b. Note that the bushy plan avoids the expensive R 2 in the left-deep plan, and is generally better.
StarJoin. As the name suggests, StarJoin uses star as the join unit, and it follows the left-deep join order. To decompose the query graph, it first locates the vertex cover of the query graph, and each vertex in the cover and its unused neighbors naturally form a star [51]. A StarJoin plan for Equation 1 is
(J_1) R(Q) = Star(v_2; {v_1, v_3, v_4}) ⋈ Star(v_4; {v_2, v_3}),
where Star(r; L) denotes a Star relation (the matches of the star) with r as the root, and L as the set of leaves.
TwinTwigJoin. Enumerating a k-star on a vertex of degree d will render O(d^k) cost. We refer to star explosion as the case of enumerating stars on a large-degree vertex. Lai et al. proposed TwinTwigJoin [35] to address the issue of StarJoin by forcing the join plan to use TwinTwig (a star of at most two edges) instead of a general star as the join unit. Intuitively, this helps ameliorate the star explosion by constraining the cost of each join unit from d^k for arbitrary k to at most d^2. TwinTwigJoin follows StarJoin in using the left-deep join order. The authors proved that TwinTwigJoin is instance optimal to StarJoin, that is, given any general StarJoin plan in the left-deep join order, we can rewrite it as an alternative TwinTwigJoin plan that draws no more cost (in the big-O sense) than the original StarJoin, where the cost is evaluated based on the Erdös-Rényi random graph (ER) model [23]. A TwinTwigJoin plan for Equation 1 is
(J_1) R_1(v_1, v_2, v_3, v_4) = TwinTwig(v_1; {v_2, v_4}) ⋈ TwinTwig(v_2; {v_3, v_4});
(J_2) R(Q) = R_1(v_1, v_2, v_3, v_4) ⋈ TwinTwig(v_3; {v_4}),    (2)
where TwinTwig(r; L) denotes a TwinTwig relation with r as the root, and L as the leaves.
CliqueJoin. TwinTwigJoin hampers star explosion to some extent, but still suffers from the problems of long execution (Ω(m/2) rounds) and the suboptimal left-deep join plan. CliqueJoin resolves these issues by extending StarJoin in two aspects. Firstly, CliqueJoin applies the "triangle partition" strategy (Section 4.2), which enables CliqueJoin to use clique, in addition to star, as the join unit. The use of clique can greatly shorten the execution, especially when the query is dense, although it still degenerates to StarJoin when the query contains no clique subgraph. Secondly, CliqueJoin exploits the bushy join plan to approach optimality. A CliqueJoin plan for Equation 1 is:
(J_1) R(Q) = Clique({v_1, v_2, v_4}) ⋈ Clique({v_2, v_3, v_4}),    (3)
where Clique(V ) denotes a Clique relation of the involving vertices V .
Implementation Details. We implement the BINJOIN strategy based on the join framework proposed in [37] to cover StarJoin, TwinTwigJoin and CliqueJoin.
We use the power-law random graph (PR) model [21] to estimate the cost as in [37], and implement the dynamic programming algorithm of [37] to compute the cost-optimal join plan. Once the join plan is computed, we translate the plan into a Timely dataflow that processes each binary join using a Join operator. We implement the Join operator following Timely's official "pipeline" HashJoin example 5 . We modify it into a "batching-style" join: the mappers (senders) shuffle the data based on the join key, while the reducers (receivers) maintain the received key-value pairs in a hash table (until the mappers complete) for join processing. The reasons that we implement the join as "batching-style" are: (1) its performance is similar to the "pipeline" join as a whole; (2) it replays the original implementation in Hadoop; and (3) it favors the Batching optimization (Section 4.1).
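For illustration, the following is a minimal, self-contained sketch of the batching-style hash-join logic described above (a hypothetical helper, not the actual Join operator wired into our Timely dataflow): the build side is fully materialized into a hash table keyed by the join key, and the probe side is then streamed against it.

```rust
use std::collections::HashMap;
use std::hash::Hash;

/// Join two relations of partial results on their join key.
/// `build` is buffered into a hash table first (the "batching" part),
/// then `probe` is matched against it.
fn hash_join<K, L, R>(build: Vec<(K, L)>, probe: Vec<(K, R)>) -> Vec<(K, L, R)>
where
    K: Hash + Eq + Clone,
    L: Clone,
    R: Clone,
{
    let mut table: HashMap<K, Vec<L>> = HashMap::new();
    for (k, l) in build {
        table.entry(k).or_insert_with(Vec::new).push(l);
    }
    let mut output = Vec::new();
    for (k, r) in probe {
        if let Some(ls) = table.get(&k) {
            for l in ls {
                output.push((k.clone(), l.clone(), r.clone()));
            }
        }
    }
    output
}
```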
WOptJoin
WOPTJOIN strategy processes subgraph matching by matching vertices in a predefined order. Given the query graph Q and V_Q = {v_1, v_2, · · · , v_n} as the matching order, the algorithm starts from an empty set, and computes the matches of the subset {v_1, · · · , v_i} in the i-th round. Denote the partial results after the i-th round (i < n) as R_i, and let p = {u_{k_1}, u_{k_2}, · · · , u_{k_i}} ∈ R_i be one of the tuples. In the (i+1)-th round, the algorithm expands the results by matching v_{i+1} with u_{k_{i+1}} for p iff, for all 1 ≤ j ≤ i with (v_j, v_{i+1}) ∈ E_Q, it holds that (u_{k_j}, u_{k_{i+1}}) ∈ E_G. It is immediate that the candidate matches of v_{i+1}, denoted C(v_{i+1}), can be obtained by intersecting the relevant neighbors of the matched vertices as

C(v_{i+1}) = ⋂_{1≤j≤i ∧ (v_j, v_{i+1})∈E_Q} N_G(u_{k_j}).    (4)
BiGJoin. BiGJoin adopts the WOPTJOIN strategy in Timely dataflow system. The main challenge is to implement the intersection efficiently using Timely dataflow. For that purpose, the authors designed the following three operators:
• Count: checking the number of neighbors of each u_{k_j} in Equation 4 and recording the location (worker) of the one with the smallest neighbor set.
• Propose: attaching the smallest neighbor set to p as (p; C(v_{i+1})).
• Intersect: sending (p; C(v_{i+1})) to the worker that maintains each u_{k_j} and updating C(v_{i+1}) = C(v_{i+1}) ∩ N_G(u_{k_j}).
After intersection, we will expand p by pushing into p every vertex of C(v i+1 ).
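As a concrete (simplified, single-machine) illustration of these steps, the sketch below intersects the sorted neighbor lists of the already-matched vertices to obtain C(v_{i+1}); starting from the smallest list mirrors the Count/Propose steps, and the successive intersections mirror Intersect.

```rust
/// Compute C(v_{i+1}) = ∩_j N_G(f(v_j)) from sorted adjacency lists (Equation 4).
/// The smallest list is taken as the initial proposal; each remaining list
/// then prunes it via a linear merge intersection.
fn intersect_candidates(neighbor_lists: &[&[u32]]) -> Vec<u32> {
    if neighbor_lists.is_empty() {
        return Vec::new();
    }
    let mut lists: Vec<&[u32]> = neighbor_lists.to_vec();
    lists.sort_by_key(|l| l.len());                   // "Count": locate the smallest neighbor set
    let mut candidates: Vec<u32> = lists[0].to_vec(); // "Propose"
    for other in &lists[1..] {                        // "Intersect"
        let (mut i, mut j) = (0, 0);
        let mut kept = Vec::with_capacity(candidates.len());
        while i < candidates.len() && j < other.len() {
            match candidates[i].cmp(&other[j]) {
                std::cmp::Ordering::Less => i += 1,
                std::cmp::Ordering::Greater => j += 1,
                std::cmp::Ordering::Equal => { kept.push(candidates[i]); i += 1; j += 1; }
            }
        }
        candidates = kept;
    }
    candidates
}
```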
Implementation Details. We directly use the authors' implementation [5], but slightly modify the codes to use the common graph data structure. We do not consider the dynamic version of BiGJoin in this paper, as the other strategies currently only support static context. The matching order is determined using a greedy heuristic that starts with the vertex of the largest degree, and consequently selects the next vertex with the most connections (id as tie breaker) with already-selected vertices.
ShrCube
SHRCUBE strategy treats the join processing of the query Q as a hypercube of n = |V_Q| dimensions. It attempts to divide the hypercube evenly across the workers in the cluster, so that each worker can complete its own share without data communication. However, it is normally required that each data tuple is duplicated into multiple workers. This renders a space requirement of M/w^{1−ρ} for each worker, where M is the size of the input data, w is the number of workers and 0 < ρ ≤ 1 is a query-dependent parameter. When ρ is close to 1, the algorithm ends up maintaining the whole input data in each worker.
MultiwayJoin. MultiwayJoin applies the SHRCUBE strategy to solve subgraph matching in one single round. Consider w workers in the cluster and a query graph Q with vertices V_Q = {v_1, v_2, . . . , v_n} and edges E_Q = {e_1, e_2, . . . , e_m}, where e_i = (v_{i_1}, v_{i_2}). Regarding each query vertex v_i, assign a positive integer as bucket number b_i such that ∏_{i=1}^{n} b_i = w. The algorithm then divides the candidate data vertices for v_i evenly into b_i parts via a hash function h : u → z_i, where u ∈ V_G and 1 ≤ z_i ≤ b_i. This accordingly divides the whole computation into w shares, each of which can be indexed via an n-ary tuple (z_1, z_2, · · · , z_n) and is assigned to one worker. Afterwards, regarding each query edge e_i = (v_{i_1}, v_{i_2}), MultiwayJoin maps a data edge (u, u') to (z_1, · · · , z_{i_1} = h(u), · · · , z_{i_2} = h(u'), . . . , z_n), where, other than z_{i_1} and z_{i_2}, each z_i above iterates through {1, 2, · · · , b_i}, and the edge is routed to the corresponding workers. Take the triangle query with E_Q = {(v_1, v_2), (v_1, v_3), (v_2, v_3)} as an example. According to [12], b_1 = b_2 = b_3 = b = w^{1/3} is an optimal bucket number assignment. Each edge (u, u') is then routed to the workers as: (1) (h(u), h(u'), z) regarding (v_1, v_2); (2) (h(u), z, h(u')) regarding (v_1, v_3); (3) (z, h(u), h(u')) regarding (v_2, v_3), where the above z iterates through {1, 2, · · · , b}. Consequently, each data edge is duplicated roughly 3·w^{1/3} times, and by expectation each worker will receive 3M/w^{1−1/3} edges. For unlabelled matching, MultiwayJoin utilizes the partial order of the query graph (Section 2.1) to reduce edge duplication, and details can be found in [12].
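To make the edge routing concrete, here is a small sketch (with hypothetical helper names and the hash function simplified to a modulo) that lists the shares an edge (u, u') must reach for the triangle query above, with b buckets per dimension and the share (z_1, z_2, z_3) flattened to the worker id z_1·b² + z_2·b + z_3.

```rust
/// Enumerate the worker ids that must receive the data edge (u, v) for the
/// triangle query (v1, v2, v3) under SHRCUBE, with b buckets per dimension.
fn triangle_edge_targets(u: u64, v: u64, b: u64) -> Vec<u64> {
    let h = |x: u64| x % b; // illustrative hash into {0, ..., b-1}
    let share = |z1: u64, z2: u64, z3: u64| z1 * b * b + z2 * b + z3;
    let mut targets = Vec::new();
    for z in 0..b {
        targets.push(share(h(u), h(v), z)); // regarding query edge (v1, v2)
        targets.push(share(h(u), z, h(v))); // regarding query edge (v1, v3)
        targets.push(share(z, h(u), h(v))); // regarding query edge (v2, v3)
    }
    targets.sort();
    targets.dedup(); // an edge may hit the same share via different query edges
    targets
}
```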
Implementation Details. There are two main impact factors of the performance of SHRCUBE. Firstly, the hypercube sharing by assigning proper b i for v i . Beame et al. [15] generalized the problem of computing optimal hypercube sharing for arbitrary query as linear programming. However, the optimal solution may assign fractional bucket number that is unwanted in practice. An easy refinement is to round down to an integer, but it will apparently result in idle workers. Chu et al. [20] addressed this issue via "Hypercube Optimization", that is to enumerate all possible bucket sequences around the optimal solutions, and choose the one that produces shares (product of bucket numbers) closest to the number of workers. We adopt this strategy in our implementation.
Secondly, the local algorithm. When the edges arrive at a worker, we collect them into a local graph (duplicate edges are removed), and use a local algorithm to compute the matches. For unlabelled matching, we study the state-of-the-art local algorithms from "EmptyHeaded" [11] and "DualSim" [34]. "EmptyHeaded" is inspired by Ngo's worst-case optimal algorithm [44]: it decomposes the query graph via "Hyper-Tree Decomposition", computes each decomposed part using the worst-case optimal join, and finally glues all parts together using hash joins. "DualSim" was proposed by [34] for subgraph matching in the external-memory setting. The idea is to first compute the matches of V_Q^cc; then the remaining vertices V_Q \ V_Q^cc can be efficiently matched by enumerating the intersection of V_Q^cc's neighbors. We find that "DualSim" actually produces the same query plans as "EmptyHeaded" for all our benchmarking queries (Figure 4) except q_9. We implement both algorithms for q_9, and "DualSim" performs better than "EmptyHeaded" on the GO, US, GP and LJ datasets (Table 2). As a result, we adopt "DualSim" as the local algorithm for MultiwayJoin. For labelled matching, we implement "CFLMatch" proposed in [16], which has been shown so far to have the best performance. Now we let each worker independently compute matches in its local graph. Simply doing so will result in duplicates, so we process deduplication as follows: given a match f that is computed in the worker identified by t_w, we can recover the tuple t_e^f of the matched edge (f(v), f(v')) regarding the query edge e = (v, v'); then the match f is retained if and only if t_w = t_e^f for every e ∈ E_Q. To explain this, let's consider b = 2, and a match {u_0, u_1, u_2} for a triangle query
(v 0 , v 1 , v 2 ), where h(u 0 ) = h(u 1 ) = h(u 2 ) = 0.
It is easy to see that the match will be computed in the workers (0, 0, 0) and (0, 0, 1), while the match in worker (0, 0, 1) will be eliminated, as (u_0, u_2), which matches the query edge (v_0, v_2), cannot be hashed to (0, 0, 1) regarding (v_0, v_2). We could also avoid deduplication by separately maintaining each edge regarding the different query edges it stands for, and use the local algorithm proposed in [20], but this results in too many edge duplicates, which drain our memory even when processing a medium-size graph.
Others
PSgL and its implementation. PSgL iteratively processes subgraph matching via breadth-first traversal. All query vertices are assigned one of three statuses: "white" (initialized), "gray" (candidate) and "black" (matched). Denote v_i as the vertex to match in the i-th round. The algorithm starts from matching the initial query vertex v_1, and coloring its neighbors "gray". In the i-th round, the algorithm applies the workload-aware expanding strategy at runtime, that is, it selects the v_i to expand among all current "gray" vertices based on a greedy heuristic to minimize the communication cost [49]; the partial results from the previous round R_{i−1} (specially, R_0 = ∅) are distributed among the workers based on the candidate data vertices that can match v_i; in a certain worker, the algorithm computes R_i by merging R_{i−1} with the matches of the Star formed by v_i and its "white" neighbors N_Q^w(v_i), namely Star(v_i; N_Q^w(v_i)); after v_i is matched, v_i is colored "black" and its "white" neighbors are colored "gray". Essentially, this process is analogous to StarJoin by processing R_i = R_{i−1} ⋈ Star(v_i; N_Q^w(v_i)). Thus, PSgL can be seen as an alternative implementation of StarJoin on Pregel [41]. In this work, we also implement PSgL using a Pregel API on Timely. Note that we introduce the Pregel API to replay the implementation of PSgL as closely as possible. In fact, it simply wraps Timely's primitive operators such as binary_notify and loop 6 , and barely introduces extra cost to the implementation. Our experimental results demonstrate similar findings to prior work [37], namely that PSgL's performance is dominated by CliqueJoin [37]. Thus, we will not further discuss this algorithm in this paper.
CrystalJoin and its implementation. CrystalJoin aims at resolving the "output crisis" by compressing the results of subgraph matching [45]. The authors defined a structure called crystal, denoted Q(x, y). A crystal is a subgraph of Q that contains two sets of vertices V_x and V_y (|V_x| = x and |V_y| = y), where the induced subgraph Q(V_x) is an x-clique, and every vertex in V_y connects to all vertices of V_x. We call V_x the clique vertices, and V_y the bud vertices. The algorithm first obtains the minimum vertex cover V_Q^c, and then applies Core-Crystal Decomposition to decompose the query graph into the core Q(V_Q^c) and a set of crystals
{Q 1 (x 1 , y 1 ), . . . , Q t (x t , y t )}. The crystals must satisfy that ∀1 ≤ i ≤ t, Q(V xi ) ⊆ Q(V c Q )
, namely, the clique part of each crystal is a subgraph of the core. As an example, we plot a query graph and the corresponding core-crystal decomposition in Figure 3. Note that in the example, both crystals have an edge (i.e. a 2-clique) as the clique part. With core-crystal decomposition, the computation is accordingly split into three stages:
[Figure 3: Core-crystal decomposition of a query graph Q with V_Q = {v_1, . . . , v_5}: the core is induced by {v_2, v_3, v_5}, and the two crystals Q_1(2, 1) and Q_2(2, 1) have clique vertices {v_2, v_5} and {v_3, v_5} with bud vertices {v_1} and {v_4}, respectively.]
1. Core computation. Given that Q(V c Q ) itself is a query graph, the algorithm can be recursively applied to compute Q(V c Q ) according to [45].
2. Crystal computation. A special case of crystal is Q(x, 1), which is indeed an (x + 1)-clique. Suppose an instance of Q(V_x) is f_x = {u_1, u_2, . . . , u_x}; we can represent the matches w.r.t. f_x as (f_x, I_y), where I_y = ⋂_{i=1}^{x} N_G(u_i) denotes the set of vertices that can match V_y. This can naturally be extended to the case with y > 1, where any y-combination of the vertices of I_y together with f_x represents a match. This way, the matches of crystals can be largely compressed.
3. One-time assembly. This stage assembles the core instances and the compressed crystal matches to produce the final results. More precisely, this stage joins the core instances with the crystal matches.
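A minimal sketch of the compressed crystal representation in stage 2 (hypothetical types; it only illustrates why counting does not require enumeration): a clique instance f_x is stored once together with I_y, and the number of matches it stands for is the binomial coefficient C(|I_y|, y).

```rust
/// A compressed set of crystal matches: one clique instance f_x plus the common
/// neighbor set I_y; any y-combination of I_y completes a match.
struct CompressedCrystal {
    clique_instance: Vec<u64>, // f_x
    bud_candidates: Vec<u64>,  // I_y
}

impl CompressedCrystal {
    /// Number of matches represented, i.e. C(|I_y|, y), computed without unfolding.
    fn num_matches(&self, y: u64) -> u64 {
        let n = self.bud_candidates.len() as u64;
        if y > n {
            return 0;
        }
        // The running product C(n, i+1) = C(n, i) * (n - i) / (i + 1) stays integral.
        (0..y).fold(1u64, |acc, i| acc * (n - i) / (i + 1))
    }
}
```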
We notice two technical obstacles to implementing CrystalJoin according to the paper. Firstly, it is worth noting that the core Q(V_Q^c) may be disconnected, a case that can produce an exponential number of results. The authors applied a query-specific optimization in the original implementation to resolve this issue. Secondly, the authors proposed to precompute the cliques up to a certain k, while it is often cost-prohibitive to do so in practice. Take the UK dataset (Table 2) as an example: the triangles, 4-cliques and 5-cliques are respectively about 20, 600 and 40000 times larger than the graph itself. It is worth noting that the main purpose of this paper is not to study how well each algorithm performs for a specific query, which has its theoretical value but can barely guide practice. After communicating with the authors, we adapt CrystalJoin as follows. Firstly, we replace the core Q(V_Q^c) with the induced subgraph of the minimum connected vertex cover, Q(V_Q^cc). Secondly, instead of implementing CrystalJoin as a strategy, we use it as an alternative join plan (matching order) for WOPTJOIN. Following CrystalJoin, we first match V_Q^cc, while the matching order inside and outside V_Q^cc still follows WOPTJOIN's greedy heuristic (Section 3.2). It is worth noting that this adaptation achieves high performance comparable to the original implementation. In fact, we also apply the CrystalJoin plan to BINJOIN, but it does not perform as well as the WOPTJOIN version, thus we do not discuss that implementation.
FullRep and its implementation. FULLREP simply maintains a full replica of the graph in each physical machine. Each worker picks one independent share of computation and solves it using existing local algorithm.
The implementation is straightforward. We let each worker pick its share of computation via a Round-Robin strategy: we settle on an initial query vertex v_1, and let the first worker match v_1 with u_1 and continue the remaining process, the second worker match v_1 with u_2, and so on. This simple strategy already works very well for balancing the load of our benchmarking queries (Figure 4). We use "DualSim" for unlabelled matching and "CFLMatch" for labelled matching, as for MultiwayJoin.
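A sketch of the Round-Robin share assignment (an assumption of how it can be coded, not the exact implementation): worker w out of p workers only tries the data vertices u_i with i mod p = w as the match of the initial query vertex v_1, and runs the local algorithm from each of them.

```rust
/// Data vertices that worker `worker` (0-based, among `num_workers`) matches
/// against the initial query vertex v_1 under the Round-Robin strategy.
fn round_robin_share(num_vertices: usize, worker: usize, num_workers: usize)
    -> impl Iterator<Item = usize>
{
    (0..num_vertices).filter(move |u| u % num_workers == worker)
}
```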
Worst-case Optimality.
Given a query Q and the data graph G, we denote the maximum possible result set as R G (Q). Simply speaking, an algorithm is worst-case optimal if the aggregation of the total intermediate results is bounded by Θ(|R G (Q)|). Ngo et al. proposed a class of worst-case optimal join algorithm called GenericJoin [44], and we first overview this algorithm.
GenericJoin. Let the join be R(V) = ⋈_{U∈Ψ} R(U), where Ψ = {U | U ⊆ V} and V = ⋃_{U∈Ψ} U. Given a vertex subset U' ⊆ V, let Ψ_{U'} = {V' | V' ∈ Ψ ∧ V' ∩ U' ≠ ∅}, and for a tuple t ∈ R(V'), denote t_{U'} as t's projection on U'.
We then show the GenericJoin in Algorithm 1.
Algorithm 1: GenericJoin(V, Ψ, ⋈_{U∈Ψ} R(U))
1  R(V) ← ∅;
2  if |V| = 1 then
3      Return ⋂_{U∈Ψ} R(U);
4  V ← (I, J), where ∅ ≠ I ⊂ V, and J = V \ I;
5  R(I) ← GenericJoin(I, Ψ_I, ⋈_{U∈Ψ_I} π_I(R(U)));
6  forall t_I ∈ R(I) do
7      R(J)_{w.r.t. t_I} ← GenericJoin(J, Ψ_J, ⋈_{U∈Ψ_J} π_J(R(U) ⋉ t_I));
8      R(V) ← R(V) ∪ {t_I} × R(J)_{w.r.t. t_I};
9  Return R(V);
In Algorithm 1, the original join is recursively decomposed into two parts R(I) and R(J) regarding the disjoint sets I and J. From line 5, it is clear that R(I) will record R(V)'s projection on I; thus we have |R(I)| ≤ |R(V)|, where R(V) is the maximum possible result of the query. Meanwhile, in line 7, the semi-join R(U) ⋉ t_I = {r | r ∈ R(U) ∧ r_{U∩I} = t_{U∩I}} only retains those R(J)_{w.r.t. t_I} that can end up in the join result, which implies that R(J) must also be bounded by the final results. This intuitively explains the worst-case optimality of GenericJoin, and we refer interested readers to [44] for a complete proof.
It is easy to see that BiGJoin is worst-case optimal. In Algorithm 1, we select I in line 4 by popping the edge relation E(v_s, v_i) (s < i) in the i-th step. In line 7, the recursive call to solve the semi-join R(U) ⋉ t_I actually corresponds to the intersection process.
Worst-case Optimality of CliqueJoin. Note that the two clique relations in Equation 3 interleave one common edge (v_2, v_4) of the query graph. This optimization, called "overlapping decomposition" [37], eventually contributes to CliqueJoin's worst-case optimality. Note that it is not possible to apply this optimization to StarJoin and TwinTwigJoin. We have the following theorem. Theorem 3.1. CliqueJoin is worst-case optimal while applying "overlapped decomposition".
Proof. We implement CliqueJoin using Algorithm 1 in the following. Note that Q(V') denotes the subgraph of Q induced by V'. In line 2, we change the stopping condition to "Q(I) is either a clique or a star". In line 4, I is selected such that Q(I) is either a clique or a star. Note that by applying the "overlapping decomposition" in CliqueJoin, the sub-query of the J part must be the J-induced graph Q(J), and it will also include the edges of E_{Q(I)} ∩ E_{Q(J)}, which implies that R(Q(J)) = R(Q(J)) ⋉ R(Q(I)), and this just reflects the semi-join in line 7. Therefore, CliqueJoin belongs to GenericJoin, and is thus worst-case optimal.
Optimizations
We introduce the three general-purpose optimizations, Batching, TrIndexing and Compression in this section, and how we orthogonally apply them to BINJOIN and WOPTJOIN algorithms. In the rest of the paper, we will use the strategy BINJOIN, WOPTJOIN, SHRCUBE instead of their corresponding algorithms, as we focus on strategy-level comparison.
Batching
Let R(V_i) be the partial results that match the given vertices V_i = {v_{s_1}, v_{s_2}, . . . , v_{s_i}} (R_i for short if V_i follows a given order), and let R(V_j) denote the more complete results with V_i ⊂ V_j. Denote R_j|R_i as the tuples in R_j whose projection on V_i lies in R_i. Let's partition R_i into b disjoint parts {R_i^1, R_i^2, . . . , R_i^b}. We define Batching on R_j|R_i as the technique of independently processing the sub-tasks that compute {R_j|R_i^1, R_j|R_i^2, . . . , R_j|R_i^b}. Obviously, R_j|R_i = ⋃_{k=1}^{b} R_j|R_i^k.
WOptJoin. Recall from Section 3.2 that WOPTJOIN progresses according to a predefined matching order {v 1 , v 2 , . . . , v n }. In the i th round, WOPTJOIN will Propose on each p ∈ R i−1 to compute R i . It is not hard to see that we can easily apply Batching to the computation of R i |R i−1 by randomly partitioning R i−1 . For simplicity, the authors implemented Batching on R(Q)|R 1 (v 1 ). Note that R 1 (v 1 ) = V G in unlabelled matching, which means that we can achieve Batching simply by partitioning the data vertices 7 . For short, we also say the strategy batches on v 1 , and call v 1 the batching vertex. We follow the same idea to apply Batching to BINJOIN algorithms.
BinJoin. While it is natural for WOPTJOIN to batch on v_1, it is non-trivial to pick such a vertex for BINJOIN. Given a decomposition of the query graph {p_1, p_2, . . . , p_s}, where each p_i is a join unit, we have R(Q) = R(p_1) ⋈ R(p_2) ⋈ · · · ⋈ R(p_s). If we partition R_1(v) so as to batch on v ∈ V_Q, we correspondingly split the join task, and one of the sub-tasks is R(Q)|R_1^k(v) = R(p_1)|R_1^k(v) ⋈ · · · ⋈ R(p_s)|R_1^k(v), where R_1^k(v) is one partition of R_1(v). Observe that if there exists a join unit p where v ∉ V_p, we must have R(p) = R(p)|R_1^k(v), which means R(p) has to be fully computed in each sub-task. Let's consider the example query in Equation 2:

R(Q) = T_1(v_1, v_2, v_4) ⋈ T_2(v_2, v_3, v_4) ⋈ T_3(v_3, v_4).
Suppose we batch on v 1 , the above join can be divided into the following independent sub-tasks:
R(Q)|R_1^1(v_1) = (T_1(v_1, v_2, v_4)|R_1^1(v_1)) ⋈ T_2(v_2, v_3, v_4) ⋈ T_3(v_3, v_4),
R(Q)|R_1^2(v_1) = (T_1(v_1, v_2, v_4)|R_1^2(v_1)) ⋈ T_2(v_2, v_3, v_4) ⋈ T_3(v_3, v_4),
· · ·
R(Q)|R_1^b(v_1) = (T_1(v_1, v_2, v_4)|R_1^b(v_1)) ⋈ T_2(v_2, v_3, v_4) ⋈ T_3(v_3, v_4).
It is not hard to see that we will have to re-compute T_2(v_2, v_3, v_4) and T_3(v_3, v_4) in all the above sub-tasks. Alternatively, if we batch on v_4, we can avoid such re-computation, as T_1, T_2 and T_3 can all be partitioned in each sub-task. Inspired by this, for BINJOIN, we come up with the heuristic of applying Batching on the vertex that appears in as many join units as possible. Note that such a vertex can only be in the join key, as otherwise it must be absent from at least one side of the join. For a complex query, we can still have a join unit that does not contain any vertex for Batching after applying the above heuristic. In this case, we either re-compute the join unit, or cache it on disk. Another problem caused by this is the potential memory burden of the join. Thus, we devise join-level Batching following the idea of external MergeSort. Specifically, we inject a Buffer-and-Batch operator for the two data streams before they arrive at the Join operator. Buffer-and-Batch functions in two parts:
• Buffer: While the operator receives data from the upstream, it buffers the data until reaching a given threshold. Then the buffer is sorted according to the join key's hash value and spilled to the disk. The buffer is reused for the next batch of data.
• Batch: After the data to join is fully received, we read back the data from the disk in a batching manner, where each batch must include all join keys whose hash values are within a certain range.
While one batch of data is delivered to the Join operator, Timely allows us to supervise the progress and hold the next batch until the current batch completes. This way, the internal memory requirement is one batch of the data. Note that such join-level Batching is natively implemented in Hadoop's "Shuffle" stage, and we replay this process in Timely to improve the scalability of the algorithm.
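The following is a minimal in-memory mock of the Buffer-and-Batch idea (the real operator spills sorted runs to disk and is wired into the Timely dataflow; names here are hypothetical): records are grouped by the hash of their join key, and one contiguous hash range is read back at a time, so that each batch contains all records of the keys in that range.

```rust
use std::collections::BTreeMap;

/// Buffers records keyed by the hash of the join key; `take_batch` releases
/// one hash range at a time, mimicking the external-MergeSort-style batching.
struct BufferAndBatch<V> {
    buffered: BTreeMap<u64, Vec<V>>, // hash(join key) -> records
}

impl<V> BufferAndBatch<V> {
    fn new() -> Self {
        BufferAndBatch { buffered: BTreeMap::new() }
    }

    /// Buffer one record under the hash of its join key.
    fn buffer(&mut self, key_hash: u64, record: V) {
        self.buffered.entry(key_hash).or_insert_with(Vec::new).push(record);
    }

    /// Release all records whose key hash lies in [lo, hi) as one batch.
    fn take_batch(&mut self, lo: u64, hi: u64) -> Vec<V> {
        let keys: Vec<u64> = self.buffered.range(lo..hi).map(|(k, _)| *k).collect();
        keys.into_iter()
            .flat_map(|k| self.buffered.remove(&k).unwrap_or_default())
            .collect()
    }
}
```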
Triangle Indexing
As the name suggests, TrIndexing precomputes the triangles of the data graph and indices them along with the graph data to prune infeasible results. The authors of BiGJoin [13] optimized the 4-clique query by using the triangles as base relations to join, which reduces the rounds of join and network communication. In [45], the authors proposed to not only maintain triangles, but all k-cliques up to a given k. As we mentioned earlier, it incurs huge extra cost of maintaining triangles already, let alone larger cliques.
In addition to the default hash partition, Lai et al. proposed the "triangle partition" [37] by also incorporating into a vertex's partition the edges among its neighbors (which form triangles with the anchor vertex). "Triangle partition" allows BINJOIN to use clique as the join unit [37], which greatly reduces the intermediate results of certain queries and improves the performance. "Triangle partition" is de facto a variant of TrIndexing, which, instead of explicitly materializing the triangles, maintains them in the local graph structure (e.g. adjacency list). As we will show in the experiments (Section 5), this saves a lot of space compared to explicit triangle materialization. Therefore, we adopt the "triangle partition" for the TrIndexing optimization in this work.
BinJoin. Obviously, BINJOIN becomes CliqueJoin with TrIndexing, and StarJoin (or TwinTwigJoin) otherwise.
With worst-case optimality guarantee (Section 3.5), BINJOIN should perform much better with TrIndexing, which is also observed in "Exp-1" of Section 5.
WOptJoin. In order to match v i in the i th round, WOPTJOIN utilizes Count, Propose and Intersect to process the intersection of Equation 4. For ease of presentation, suppose v i+1 connects to the first s query vertices {v 1 , v 2 , . . . , v s }, and given a partial match,
{f(v_1), . . . , f(v_s)}, we have C(v_{i+1}) = ⋂_{j=1}^{s} N_G(f(v_j)).
In the original implementation, it is required to send (p; C(v i+1 )) via network to all machines that contain each f (v j )(1 ≤ j ≤ s) to process the intersection, which can render massive communication cost. In order to reduce the communication cost, we implement TrIndexing for WOPTJOIN in the following. We first group {v 1 , . . . , v s } such that for each group U (v x ), we have
U (v x ) = {v x } ∪ {v y | (v x , v y ) ∈ E Q }.
Because of TrIndexing, N_G(f(v_y)) (∀v_y ∈ U(v_x)) is maintained in f(v_x)'s partition. Thus, we only need to send the prefix to f(v_x)'s machine, and the intersection within U(v_x) can be done locally. We process the grouping using a greedy strategy that always constructs the largest group from the remaining vertices.
Remark 4.1. The "triangle partition" may result in maintaining a large portion of the data graph in a certain partition. Lai et al. pointed out this issue, and proposed a space-efficient alternative by leveraging the vertex orderings [37]. That is, given the partitioned vertex u and two neighbors u' and u'' that close a triangle, we place the edge (u', u'') in the partition only when u < u' < u''. Although this alteration reduces storage, it may affect the effectiveness of TrIndexing for WOPTJOIN and the implementations of Batching and Compression for BINJOIN algorithms. Take WOPTJOIN as an example: after using the space-efficient "triangle partition", we should modify the above grouping as:
U (v x ) = {v x } ∪ {v y | (v x , v y ) ∈ E Q ∧ (v x , v y ) ∈ O Q }.
Note that the order between query vertices is for symmetry breaking (Section 2.1), and it may not be present in certain queries, which can make TrIndexing completely useless for WOPTJOIN.
Compression
Subgraph matching is a typical combinatorial problem, and can easily produce results of exponential size. Compression aims to maintain the (intermediate) results in a compressed form to reduce resource allocation and communication cost. In the following, when we say "compress a query vertex", we mean maintaining its matched data vertices in the form of an array, instead of unfolding them in line with the one-one mapping of a match (Definition 2.1). Qiao et al. proposed CrystalJoin to study Compression in general for subgraph matching. As we introduced in Section 3.4, CrystalJoin first extracts the minimum vertex cover as the uncompressed part, and then it can compress the remaining query vertices as the intersection of certain uncompressed matches' neighbors. Such Compression leverages the fact that all dependencies (edges) of the compressed part that require further computation are already covered by the uncompressed part; thus it can stay compressed until the actual matches are requested. CrystalJoin inspires a heuristic for doing Compression, that is, to compress the vertices whose matches will not be used in any future computation. In the following, we will apply the same heuristic to the other algorithms.
BinJoin. Obviously we cannot compress any vertex that appears in the join key. What we need to do is simply to locate the vertices to compress in the join unit, namely star and clique. For a star, the root vertex must remain uncompressed, as the leaves' computation depends on it. For a clique, we can only compress one vertex, as otherwise the mutual connection between the compressed vertices would be lost. In a word, we compress two types of vertices for BINJOIN: (1) non-key and non-root vertices of a star join unit, (2) one non-key vertex of a clique join unit.
WOptJoin. Based on a predefined join order {v 1 , v 2 , . . . , v n }, we can compress
v i (1 ≤ i ≤ n), if there does not exist v j (i < j) such that (v i , v j ) ∈ E Q .
In other words, v_i's matches will never be involved in any future intersection (computation). Note that v_n can be trivially compressed. With Compression, when v_i is compressed, we maintain its matches as an array instead of unfolding them into the prefix like a normal vertex.
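A small sketch of the above heuristic (a hypothetical helper; query vertices are identified by their index): given the matching order and the query graph's adjacency lists, a vertex is compressible exactly when all of its query neighbors appear earlier in the order.

```rust
use std::collections::HashMap;

/// For each position in the matching order, decide whether that query vertex can
/// stay compressed: true iff none of its neighbors in Q appears later in the order.
fn compressible(order: &[usize], query_adj: &[Vec<usize>]) -> Vec<bool> {
    let position: HashMap<usize, usize> =
        order.iter().enumerate().map(|(pos, &v)| (v, pos)).collect();
    order
        .iter()
        .enumerate()
        .map(|(pos, &v)| query_adj[v].iter().all(|n| position[n] < pos))
        .collect()
}
```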
Experiments
Experimental settings
Environments. We deploy two clusters for the experiments: (1) a local cluster of 10 machines connected via one 10GBps switch and one 1GBps switch. Each machine has 64GB memory, 1 TB disk and 1 Intel Xeon CPU E3-1220 V6 3.00GHz with 4 physical cores; (2) an AWS cluster of 40 "r5-2xlarge" instances connected via a 10GBps switch, each with 64GB memory, 8 vCpus and 500GB Amazon EBS storage. By default we use the local cluster of 10 machines with 10GBps switch. We run 3 workers in each machine in the local cluster, and 6 workers in the AWS cluster for Timely. The codes are implemented based on the open-sourced Timely dataflow system [8] using Rust 1.32. We are still working towards open-sourcing the codes, and the bins together with their usages are temporarily provided 8 to verify the results.
Metrics.
In the experiments, we measure the query time T as the slowest worker's wall-clock time, averaged over three runs. We allow 3 hours as the maximum running time for each test. We use OT and OOM to indicate that a test case runs out of the time limit and out of memory, respectively. By default we do not show the OOM results, for clarity of presentation.
We divide T into two parts, the computation time T_comp and the communication time T_comm. We measure T_comp as the time the slowest worker spends on actual computation, by timing every computing function. We are aware that the actual communication time is hard to measure, as Timely overlaps computation and communication to improve throughput. We instead consider T − T_comp, which mainly records the time the worker waits for data from the network channel (a.k.a. communication time); the other part of communication that overlaps computation is of less interest, as it does not affect the query progress. As a result, we simply let T_comm = T − T_comp in the experiments. We measure the maximum peak memory using Linux's "time -v" in each machine. We define the communication cost as the number of integers a worker receives during the process, and measure the maximum communication cost among the workers accordingly.
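As an illustration of this measurement, here is a simplified, single-threaded sketch of our own; the real measurement wraps the computing closures of each Timely worker, which is not reproduced here.

```rust
use std::time::{Duration, Instant};

/// Accumulates the pure computation time of a worker by timing every
/// computing function; the communication time is then derived as T - T_comp.
struct CompTimer {
    comp: Duration,
}

impl CompTimer {
    fn new() -> Self {
        CompTimer { comp: Duration::from_secs(0) }
    }

    /// Run a computing closure and add its elapsed time to T_comp.
    fn timed<R>(&mut self, f: impl FnOnce() -> R) -> R {
        let start = Instant::now();
        let out = f();
        self.comp += start.elapsed();
        out
    }
}

fn main() {
    let mut timer = CompTimer::new();
    let total_start = Instant::now();

    // stand-in for the worker's actual computing functions
    let _sum: u64 = timer.timed(|| (0..1_000_000u64).sum());

    let t = total_start.elapsed();
    let t_comp = timer.comp;
    let t_comm = t.checked_sub(t_comp).unwrap_or(Duration::from_secs(0)); // T_comm = T - T_comp
    println!("T = {:?}, T_comp = {:?}, T_comm = {:?}", t, t_comp, t_comm);
}
```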
Dataset Formats. We preprocess each dataset as follows: we treat it as a simple undirected graph by removing self-loops and duplicate edges, and format it using "Compressed Sparse Row" (CSR) [3]. We relabel the vertex ids according to the degree and break ties arbitrarily.
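The following sketch (our own, not the paper's loader) shows this preprocessing on a toy graph: degree-based relabelling followed by CSR construction. The function name, the non-decreasing degree order and the tie-breaking by old id are assumptions; the paper only states that vertices are relabelled by degree with ties broken arbitrarily.

```rust
/// Build a CSR ("Compressed Sparse Row") representation of a simple
/// undirected graph after relabelling vertices by non-decreasing degree.
/// `edges` are de-duplicated, self-loop-free undirected edges over 0..n.
fn build_csr(n: usize, edges: &[(usize, usize)]) -> (Vec<usize>, Vec<usize>, Vec<usize>) {
    // 1. degree of every original vertex
    let mut degree = vec![0usize; n];
    for &(u, v) in edges {
        degree[u] += 1;
        degree[v] += 1;
    }

    // 2. relabel: sort vertex ids by degree (ties broken by old id here)
    let mut by_degree: Vec<usize> = (0..n).collect();
    by_degree.sort_by_key(|&v| degree[v]);
    let mut new_id = vec![0usize; n];
    for (new, &old) in by_degree.iter().enumerate() {
        new_id[old] = new;
    }

    // 3. CSR offsets and neighbour array under the new ids
    let mut offsets = vec![0usize; n + 1];
    for &(u, v) in edges {
        offsets[new_id[u] + 1] += 1;
        offsets[new_id[v] + 1] += 1;
    }
    for i in 0..n {
        offsets[i + 1] += offsets[i];
    }
    let mut cursor = offsets.clone();
    let mut neighbours = vec![0usize; 2 * edges.len()];
    for &(u, v) in edges {
        let (nu, nv) = (new_id[u], new_id[v]);
        neighbours[cursor[nu]] = nv;
        cursor[nu] += 1;
        neighbours[cursor[nv]] = nu;
        cursor[nv] += 1;
    }
    (offsets, neighbours, new_id)
}

fn main() {
    // a triangle plus a pendant vertex
    let (offsets, neighbours, new_id) = build_csr(4, &[(0, 1), (0, 2), (1, 2), (2, 3)]);
    println!("offsets    = {:?}", offsets);
    println!("neighbours = {:?}", neighbours);
    println!("new ids    = {:?}", new_id);
}
```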
Compared Strategies. In the experiments, we implement BINJOIN and WOPTJOIN with all of the Batching, TrIndexing and Compression optimizations (Section 4). SHRCUBE is implemented with the "Hypercube Optimization" [20], using "DualSim" [34] (unlabelled) and "CFLMatch" [16] (labelled) as the local algorithms. FULLREP is implemented with the same local algorithms as SHRCUBE.
Auxiliary Experiments. We have also conducted several auxiliary experiments in the appendix to study the strategies of BINJOIN, WOPTJOIN, SHRCUBE and FULLREP.
Unlabelled Experiments
Datasets. The datasets used in this experiment are shown in Table 2. All datasets except SY are downloaded from public sources, which are indicated by the letter in the bracket (S [9], W [10], D [1]). All statistics are measured by treating G as an undirected graph. Among the datasets, GO is a small dataset to study cases of extremely large (intermediate) result sets; LJ, UK and FS are three popular datasets used in prior works, featuring statistics of real social networks and web graphs; GP is the Google+ ego network, which is exceptionally dense; US and EU, on the other end, are sparse road networks. These datasets vary in the number of vertices and edges, density and maximum degree, as shown in Table 2. We synthesize the SY data according to [18], which generates data with real-graph characteristics. Note that the data occupies roughly 80GB of space, and is larger than the configured memory of our machines. We synthesize the data because we did not find publicly accessible data of this size. Larger datasets like Clueweb [2] are available, but they are beyond the processing power of our current cluster.
Each dataset is hash partitioned ("hash") across the cluster. We also implement the "triangle partition" ("tri.") for the TrIndexing optimization (Section 4.2). To do so, we use BiGJoin to compute the triangles and send the triangle edges to the corresponding partitions. We record the time T* and average number of edges |E*| of the two partition strategies. The partition statistics are recorded using the local cluster, except for SY, which is processed in the AWS cluster. From Table 2, we can see that |E_tri.| is noticeably larger, around 1-10 times larger than |E_hash|. Note that for GP and UK, which either are dense or must contain a large dense community, the "triangle partition" can maintain a large portion of the data in each partition. Compared to complete triangle materialization, however, the "triangle partition" turns out to be much cheaper. For example, the UK dataset contains around 27B triangles, which means each partition in our local cluster would on average take 0.9B triangles (three integers each); in comparison, UK's "triangle partition" only maintains an average of 0.16B edges (two integers each) according to Table 2.
We use US, GO and LJ as the default datasets in the experiments "Exp-1", "Exp-2" and "Exp-3" in order to collect useful feedback from successful queries, while we may not present certain cases when they do not give new findings.
Queries. The queries are presented in Figure 4. We also give the partial order under each query for symmetry breaking. The queries except q_7 and q_8 are selected based on all prior works [13,35,37,45,50], while varying in the number of vertices, density, and the vertex cover ratio |V_Q^cc|/|V_Q|, in order to better evaluate the strategies from different perspectives. The three queries q_7, q_8 and q_9 are relatively challenging given their result scale. For example, even the smallest dataset GO contains 2,168 billion matches of q_7, 330B of q_8 and 1,883B of q_9, respectively. For lack of space, we record the number of results of each successful query on each dataset in the appendix. Note that q_7 and q_8 are absent from existing works; we benchmark q_7 considering the importance of path queries in practice, and q_8 considering the varieties of its join plans.
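The partial orders are enforced as a cheap filter on candidate matches; the sketch below is our own illustration of the symmetry-breaking check (the 0-indexed order and the data vertices are made up for the example).

```rust
/// A candidate match maps query vertex i to data vertex `matching[i]`.
/// It is kept only if the matched data vertex ids respect every ordered
/// pair (a, b) of the partial order, i.e. matching[a] < matching[b].
fn satisfies_partial_order(matching: &[usize], order: &[(usize, usize)]) -> bool {
    order.iter().all(|&(a, b)| matching[a] < matching[b])
}

fn main() {
    // A 4-cycle query with the partial order {(v0, v2), (v1, v3)} (0-indexed):
    // among the automorphic images of one data 4-cycle, only one passes.
    let order = [(0, 2), (1, 3)];
    assert!(satisfies_partial_order(&[5, 9, 7, 11], &order));
    assert!(!satisfies_partial_order(&[7, 9, 5, 11], &order));
    println!("symmetry breaking keeps exactly one automorphic image");
}
```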
Exp-1: Optimizations. We study the effectiveness of Batching, TrIndexing and Compression for both the BINJOIN and WOPTJOIN strategies, by comparing BINJOIN and WOPTJOIN with their respective variants with one optimization turned off, namely "without Batching", "without TrIndexing" and "without Compression". In the following, we use the suffixes "(w.o.b.)", "(w.o.t.)" and "(w.o.c.)" to represent the three variants. We use the queries q_2 and q_5, and the results on US and LJ are shown in Figure 5. By default, we use a batch size of 1,000,000 for both BINJOIN and WOPTJOIN (according to [13]) in this experiment, and we reduce the batch size when a case runs out of memory, as will be specified.
While comparing BINJOIN with BINJOIN(w.o.b.), we observe that Batching barely affects the performance of q_2, but severely affects q_5 on LJ (1800s vs 4000s (w.o.b.)). The reason is that we still apply join-level Batching for BINJOIN(w.o.b.). For the WOPTJOIN strategy, Batching has little impact on the performance. Surprisingly, after applying TrIndexing to WOPTJOIN, the improvement is on average only around 18%. We do another experiment in the same cluster but using the 1GBps switch, which shows WOPTJOIN is over 6 times faster than WOPTJOIN(w.o.t.) for both queries on LJ. Note that Timely uses separate threads to buffer data received from the network. Given the same computing speed, a faster network allows the data to be more fully buffered and hence less waiting for the following computation. Similar to BINJOIN, Compression greatly improves the performance when querying on LJ, but has the opposite effect on US.
[Figure 4: the unlabelled queries q_1–q_9 (4 to 6 query vertices each), each annotated with its partial order for symmetry breaking, e.g. {(v_1, v_3), (v_2, v_4)} and {(v_2, v_5), (v_3, v_4)}.]
Exp-2 Challenging Queries. We study the challenging queries q_7, q_8 and q_9 in this experiment. We run this experiment on the GO and US datasets, and focus first on comparing BINJOIN and WOPTJOIN on GO. On the one hand, WOPTJOIN outperforms BINJOIN for q_7 and q_8. Their join plans for q_7 are nearly the same, except that BINJOIN relies on a global shuffling on v_3 to process the join, while WOPTJOIN sends the partial results to the machine that maintains the vertex to grow. It is hence reasonable to observe BINJOIN's poorer performance for q_7, as shuffling is typically a more costly operation. The case of q_8 is similar, so we do not discuss it further. On the other hand, even living with costly shuffling, BINJOIN still performs better for q_9. Due to the vertex-growing nature, WOPTJOIN's "optimal plan" has to process the costly sub-query Q({v_1, v_2, v_3, v_4, v_5}). On the US dataset, WOPTJOIN consistently outperforms BINJOIN for these queries. This is because US does not produce massive intermediate results as LJ does, thus BINJOIN's shuffling cost consistently dominates.
While processing complex queries like q_8 and q_9, we can study varieties of join plans for BINJOIN and WOPTJOIN. First of all, we want the readers to note that BINJOIN's join plan for q_8 is different from the optimal plan originally given in [37]. The original "optimal" plan computes q_8 by joining two tailed triangles (a triangle tailed with an edge), while this alternative plan works better by joining the upper "house-shaped" sub-query with the bottom triangle. In theory, the tailed triangle has a worst-case bound (AGM bound [44]) of O(M^2), smaller than the house's O(M^2.5), and BINJOIN actually favors this plan based on cost estimation. However, we find that the number of tailed triangles is very close to that of the houses on GO, which makes the original plan of joining two tailed triangles costly. This indicates the insufficiency of both the cost estimation proposed in [37] and the worst-case optimal bound [13] for computing the join plan, which will be further discussed in Section 6.
Secondly, it is worth noting that we actually report the result of WOPTJOIN for q_9 using the CrystalJoin plan, as it works better than WOPTJOIN's original "optimal" plan. For q_9, CrystalJoin first computes Q(V_Q^cc), namely the 2-path {v_1, v_3, v_5}, after which it can compress all remaining vertices v_2, v_4 and v_6. In comparison, the "optimal" plan can only compress v_2 and v_6. In this case, CrystalJoin performs better because it configures larger compression. In [45], the authors proved that using the vertex cover as the uncompressed core renders maximum compression. However, this may not necessarily result in the best performance, considering that the core part itself can be costly to compute. In our experiments, the unlabelled q_4 and q_8 and the labelled q_8 are cases where the CrystalJoin plan performs worse than the original BiGJoin plan (with the Compression optimization), as the CrystalJoin plan does not render strictly larger compression while still having to process the costly core part. As a result, we only recommend the CrystalJoin plan when it leads to strictly larger compression.
The final observation is that the computation time dominates in most of the evaluated cases, except for BINJOIN's q_8, and WOPTJOIN's and SHRCUBE's q_9 on US. We will further discuss this in Exp-3.
Exp-3 All-Around Comparisons. In this experiment, we run q_1 − q_6 using BINJOIN, WOPTJOIN, SHRCUBE and FULLREP across the datasets GP, LJ, UK, EU and FS. We also run WOPTJOIN with the CrystalJoin plan for q_4, as it is the only query whose CrystalJoin plan differs from the BiGJoin plan, and the results show that the performance with the BiGJoin plan is consistently better. We report the results in Figure 7, where the communication time is plotted as gray filling. As a whole, among all 35 test cases, FULLREP achieves the best completion rate of 85%, followed by WOPTJOIN and BINJOIN, which complete 71.4% and 68.6% respectively, and SHRCUBE performs the worst with just an 8.6% completion rate.
FULLREP typically outperforms the other strategies. Observe that WOPTJOIN's performance is often very close to FULLREP's. The reason is that WOPTJOIN's computing plans for these evaluated queries are similar to "DualSim" adopted by FULLREP, and the extra communication cost of WOPTJOIN has been reduced to very little by adopting the TrIndexing optimization. While comparing WOPTJOIN with BINJOIN, BINJOIN is better for q_3, a clique query (a join unit) that requires no join (a case of embarrassingly parallel processing). BINJOIN performs worse than WOPTJOIN in most other queries, which, as we mentioned before, is due to the costly shuffling. There is an exception - querying q_1 on GP - where BINJOIN performs better than both FULLREP and WOPTJOIN. We explain this using our best speculation. GP is a very dense graph, where we observe nearly 100 vertices with degree around 10,000. Processing such high-degree vertices makes the intersections of WOPTJOIN (and of "DualSim" in FULLREP) very costly, while BINJOIN's join-based processing is less affected.

We observe that the computation time T_comp dominates in most cases, as we mentioned in Exp-2. This is trivially true for SHRCUBE and FULLREP, but it may not be clearly so for WOPTJOIN and BINJOIN, given that they all need to transfer a massive amount of intermediate data. We investigate this and find two potential reasons. The first is Timely's highly optimized communication component, which allows the computation to overlap communication by using extra threads to receive and buffer the data from the network, so that it is mostly ready for the following computation. The second is the fast network. We re-run these queries using the 1GBps switch, and the results show the opposite trend: the communication time T_comm in turn takes over.
Exp-4 Web-Scale. We run the SY dataset in the AWS cluster of 40 instances. Note that FULLREP cannot be used, as SY is larger than each machine's memory. We use the queries q_2 and q_3, and present the results of BINJOIN and WOPTJOIN (SHRCUBE fails all cases due to OOM) in Table 3. The results are consistent with the prior experiments, but observe that the gap between BINJOIN and WOPTJOIN while querying q_1 is larger. This is because we now deploy 40 AWS instances, and BINJOIN's shuffling cost increases.
Labelled Experiments
We use the LDBC Social Network Benchmark (SNB) [6] for the labelled matching experiments due to the lack of publicly available big labelled graphs. SNB provides a data generator that generates a synthetic social network of required statistics, and a document [7] that describes the benchmarking tasks, in which the complex tasks are actually subgraph matching. The join plans of BINJOIN and WOPTJOIN for the labelled experiments are generated as in the unlabelled case, but we use the label frequencies to break ties.
Datasets. We list the datasets and their statistics in Table 4. These datasets are generated using the "Facebook" mode with a duration of 3 years. A dataset's name, denoted DGx, indicates a scale factor of x. The labels are preprocessed into integers. For the adaptations (1) and (2), note that our current implementation can support both cases, and we make the adaptations for consistency and simplicity. For (3) and (4), we adapt them because currently they do not conform to the subgraph matching problem studied in this paper. For (5), it is due to our current limitation in supporting property graphs. We leave (3), (4) and (5) as interesting future work.
Exp-5 All-Around Comparisons. We now conduct the experiment using all queries on DG10 and DG60, and present the results in Figure 9. Here we compute the join plans for BINJOIN and WOPTJOIN using the unlabelled method, but further use the label frequencies to break ties. The gray filling again represents communication time. FULLREP outperforms the other strategies in many cases, except that it performs slightly slower than BINJOIN for q_3 and q_5. This is because q_3 and q_5 are join units, so BINJOIN processes them locally in each machine just as FULLREP does, and it does not build the indices that "CFLMatch" (used in FULLREP) builds. Comparing with WOPTJOIN, note that among all these queries only q_8 configures a CrystalJoin plan different from the BiGJoin plan for WOPTJOIN. The results show that the performance of WOPTJOIN drops by about 10 times when using the CrystalJoin plan. Note that the core part of q_8 is a 5-path "Psn-City-Cty-City-Psn" with enormous intermediate results. As we mentioned in the unlabelled experiments, it may not always be wise to first compute the vertex-cover-induced core.
We now focus on comparing BINJOIN and WOPTJOIN. There are three cases that intrigue us. Firstly, observe that BINJOIN performs much better than WOPTJOIN while querying q_4. The reason is the high intersection cost, as we discovered on the GP dataset in the unlabelled matching. Secondly, BINJOIN performs worse than WOPTJOIN for q_7, which again is because of BINJOIN's costly shuffling. The third case is q_9, the most complex query in the experiment, where BINJOIN performs much better. The bad performance of WOPTJOIN comes from its long execution plan together with costly intermediate results.
Both algorithms first expand the three "Psn"s, and then grow via one of the "City"s to "Cty", but BINJOIN approaches this using one join (a triangle joined with a TwinTwig), while WOPTJOIN first expands to "City" and then further to "Cty", and the "City" expansion is the culprit of the slower run.
Discussions and Future Work
We discuss our findings and potential future work based on the experiments in Section 5, and eventually summarize the findings into a practical guide.
Strategy Selection. FULLREP is obviously the preferred choice when each machine can hold the graph data, while both WOPTJOIN and BINJOIN are good alternatives when the graph is larger than the capacity of a machine. Between BINJOIN and WOPTJOIN, on the one hand, BINJOIN may perform worse than WOPTJOIN (e.g. unlabelled q_2, q_4, q_5) due to the expensive shuffling operation; on the other hand, BINJOIN can also outperform WOPTJOIN (e.g. unlabelled and labelled q_9) by avoiding costly sub-queries thanks to query decomposition. One way to choose between BINJOIN and WOPTJOIN is to compare the costs of their respective join plans and select the one with the lower cost. For now, we can either use the cost estimation proposed in [37] or sum the worst-case bounds, but neither consistently gives the best solution, as will be discussed in "Optimal Join Plan". Alternatively, we refer to "EmptyHeaded" [11] to study a potential hybrid strategy of BINJOIN and WOPTJOIN. Note that "EmptyHeaded" is developed in a single-machine setting and does not take into consideration the impact of Compression; we hence leave such a hybrid strategy in the distributed context as interesting future work.
Optimizations. Our experimental results suggest always using Batching, using TrIndexing when each machine has sufficient memory to hold the "triangle partition", and using Compression when the data graph is not very sparse (e.g. d_G ≥ 5). Batching often does not impact performance, so we recommend always using it due to the unpredictability of the size of (intermediate) results. TrIndexing is critical for BINJOIN, and it can greatly improve WOPTJOIN by reducing communication cost, while it requires extra storage to maintain the "triangle partition". Amongst the evaluated datasets, each "triangle partition" maintains an average of 30% of the data in our 10-machine cluster. Thus, we suggest a memory threshold of 60%|E_G| (half for the graph and half for running the algorithm) for TrIndexing in a cluster of the same or larger scale. Note that the threshold does not apply to extremely dense graphs. Among the three optimizations, Compression is the primary performance booster; it improves the performance of BINJOIN and WOPTJOIN by 5 times on average in all but the cases on the very sparse road networks. For such very sparse data graphs, Compression can bring more cost than benefit.
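These rules of thumb can be summarized in a tiny configuration helper (our own sketch; the 60% and d_G ≥ 5 thresholds come from the discussion above, while the names and units are assumptions):

```rust
/// Which optimizations to enable, following the rules of thumb above.
#[derive(Debug)]
struct Optimizations {
    batching: bool,
    tr_indexing: bool,
    compression: bool,
}

fn pick_optimizations(avg_degree: f64, graph_edge_bytes: u64, worker_memory_bytes: u64) -> Optimizations {
    Optimizations {
        // Batching barely hurts and guards against unpredictable result sizes.
        batching: true,
        // TrIndexing needs room for the "triangle partition": roughly 60%|E_G|.
        tr_indexing: worker_memory_bytes as f64 >= 0.6 * graph_edge_bytes as f64,
        // Compression pays off unless the graph is very sparse.
        compression: avg_degree >= 5.0,
    }
}

fn main() {
    // e.g. a 10GB edge list, 64GB of memory per machine, average degree 20
    println!("{:?}", pick_optimizations(20.0, 10 << 30, 64 << 30));
}
```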
Optimal Join Plan. It is challenging to systematically determine the optimal join plans for both BINJOIN and WOPTJOIN. From the experiments, we identify three impact factors: (1) the worst-case bound; (2) cost estimation based on data statistics; (3) favoring the optimizations, especially Compression. All existing works only partially consider these factors, and we have observed sub-optimal join plans in the experiments. For example, BINJOIN bases its "optimal" join plan on minimizing the estimated cost, but that join plan does not render the best performance for unlabelled q_8; WOPTJOIN follows the worst-case optimality, while it may encounter costly sub-queries for labelled and unlabelled q_9; CrystalJoin focuses on maximizing the compression, while ignoring the fact that the vertex-cover-induced core part itself can be costly to compute. Additionally, there are other impact factors such as the partial orders of query vertices and the label frequencies, which have not been studied in this work due to space limits. It is another very interesting future work to thoroughly study the optimal join plan while considering all the above impact factors.
Computation vs. Communication. We argue that distributed subgraph matching nowadays is a computation-intensive task. This claim holds when the cluster configures a high-speed network (e.g. ≥ 10GBps) and the data processor can efficiently overlap computation with communication. Note that the computation cost (either BINJOIN's join or WOPTJOIN's intersection) is lower-bounded by the output size, which is equal to the communication cost. Therefore, computation becomes the bottleneck if the network condition is good enough to guarantee that the data is delivered in time. Nowadays, the bandwidth of a local cluster commonly exceeds 10GBps, and the overlapping of computation and communication is widely used in distributed systems (e.g. Spark [54], Flink [17]). As a result, we tend to see distributed subgraph matching as a computation-intensive task, and we advocate that future research devote more effort to optimizing the computation while considering the following perspectives: (1) new advancements in hardware, for example GPU co-processing in coupled CPU-GPU architectures [28] and the SIMD programming model on modern CPUs [30]; (2) general computing optimizations such as load balancing strategies and cache-aware graph data access [53].
A Practical Guide. Based on the experimental findings, we propose a practical guide for distributed subgraph matching in Figure 10. Note that this practical guide is based on the current progress of the literature, and future work is needed, for example to study the hybrid strategy and the impact factors of the optimal join plan, before we can arrive at a solid decision procedure for choosing between BINJOIN and WOPTJOIN.
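For illustration, here is a rough rendering of the decision flow as code; this is our own reading of Figure 10 and the discussion above, not an exact reproduction of the guide, and the cost inputs stand for whatever plan-cost estimate one trusts.

```rust
#[derive(Debug)]
enum Strategy {
    FullRep,
    BinJoin,
    WOptJoin,
}

/// Pick a strategy: replicate the graph when it fits in one machine,
/// otherwise compare the estimated costs of the best BinJoin and WOptJoin
/// plans and take the cheaper one.
fn pick_strategy(
    graph_bytes: u64,
    machine_memory_bytes: u64,
    binjoin_plan_cost: f64,
    woptjoin_plan_cost: f64,
) -> Strategy {
    if graph_bytes <= machine_memory_bytes {
        Strategy::FullRep
    } else if binjoin_plan_cost < woptjoin_plan_cost {
        Strategy::BinJoin
    } else {
        Strategy::WOptJoin
    }
}

fn main() {
    // An 80GB graph on 64GB machines: full replication is out, so the
    // plan-cost comparison decides between BinJoin and WOptJoin.
    println!("{:?}", pick_strategy(80 << 30, 64 << 30, 3.0e9, 1.2e9));
}
```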
Conclusions
In this paper, we implement four strategies and three general-purpose optimizations for distributed subgraph matching based on the Timely dataflow system, aiming for a systematic, strategy-level comparison among the state-of-the-art algorithms. Based on thorough empirical analysis, we summarize a practical guide, and we also motivate interesting future work for distributed subgraph matching.
A Auxiliary Experiments
Exp-6 Scalability of Unlabelled Matching. We vary the number of machines as 1, 2, 4, 6, 8, 10, and run the unlabelled queries q_1 and q_2 to see how each strategy (BINJOIN, WOPTJOIN, SHRCUBE and FULLREP) scales out. We further evaluate "Single Thread", a serial algorithm that is specially implemented for these two queries. Following [42], we define the COST of a strategy as the number of workers it needs to outperform "Single Thread", which is a comprehensive measurement of both efficiency and scalability. In this experiment, we query q_1 and q_2 on the popular dataset LJ, and show the results in Figure 11. Note that we only plot the communication and memory consumption for q_1, as q_2 follows a similar trend. We also test on the other datasets, such as the dense dataset GP, and the results are similar.
All strategies demonstrate reasonable scaling for both queries. In terms of COST, note that FULLREP's COST is slightly larger than 1, because "DualSim" is implemented in general for arbitrary queries, while "Single Thread" uses a hand-tuned implementation. We first analyze the results of q_1. The COST ranking is FULLREP (1.6), WOPTJOIN (2.0), BINJOIN (3.1) and SHRCUBE (3.7). As expected, WOPTJOIN scales worse than FULLREP, while BINJOIN scales worse than WOPTJOIN because the shuffling cost increases with the number of machines. In terms of memory consumption, it is trivial that FULLREP constantly consumes memory of the graph size. Due to the use of Batching, both BINJOIN and WOPTJOIN consume very little memory for both queries. Observe that SHRCUBE consumes much more memory than WOPTJOIN and BINJOIN, even more than the graph data itself. This is because a certain worker may receive more edges (with duplicates) than the graph itself, which increases the peak memory consumption. For the communication cost, both BINJOIN and WOPTJOIN demonstrate reasonable drops as the number of machines increases. SHRCUBE renders much less communication as expected, but it shows an increasing trend. This is actually reasonable behavior of SHRCUBE, as more machines also mean more duplicated data. For q_2, the COST ranking is FULLREP (2.4), WOPTJOIN (2.75), BINJOIN (3.82) and SHRCUBE (71.2). Here, SHRCUBE's COST is dramatically larger, with most time spent on deduplication (Section 3.3). The trends of memory consumption and communication cost for q_2 are similar to those of q_1, and thus not further discussed.
Exp-7 Vary Densities for Labelled Matching. Based on DG10, we generate datasets with densities 10, 20, 40, 80 and 160 by randomly adding edges into DG10. Note that the density-10 dataset is the original DG10 in Table 4. We use the labelled queries q_4 and q_7 in this experiment, and show the results in Figure 12.
Exp-8 Vary Labels for Labelled Matching. We generate datasets with 0, 5, 10, 15 and 20 labels based on DG10. Note that there are 5 labels in the labelled queries q_4 and q_7, which are called the target labels. The 10-label dataset is the original DG10. For the one with 5 labels, we replace each label not in the target labels with one random target label. For the ones with more than 10 labels, we randomly choose some nodes and change their labels into other pre-defined labels until the dataset contains the required number of labels. For the one with zero labels, the task degenerates into unlabelled matching, and we use the unlabelled versions of q_4 and q_7 instead. The experiment demonstrates the transition from unlabelled matching to labelled matching, where the biggest performance drop happens for all algorithms. The drops continue with the increasing number of labels, but less sharply when there are sufficiently many labels (≥ 10). Observe that when there are very few labels, for example the 5-label case of q_7, FULLREP actually performs worse than BINJOIN and WOPTJOIN. The "CFLMatch" algorithm [16] used by FULLREP relies heavily on label-based pruning; fewer labels render larger candidate sets and more recursive calls, resulting in a performance drop for FULLREP. Fewer labels may also enlarge the intermediate results of BINJOIN and WOPTJOIN, but these are relatively small in the labelled case and do not create much burden for the 10GBps network.
B Auxiliary Materials
All Query Results. In Table 5, we show the number of results of every successful query on each dataset evaluated in this work. Note that DG10 and DG60 record the labelled queries q_1 − q_9.
| 13,431 |
1811.10003
|
2896049206
|
Abstract Automatic reading texts in scenes has attracted increasing interest in recent years as texts often carry rich semantic information that is useful for scene understanding. In this paper, we propose a novel scene text proposal technique aiming for accurate reading texts in scenes. Inspired by the pooling layer in the deep neural network architecture, a pooling based scene text proposal technique is developed. A novel score function is designed which exploits the histogram of oriented gradients and is capable of ranking the proposals according to their probabilities of being text. An end-to-end scene text reading system has also been developed by incorporating the proposed scene text proposal technique where false alarms elimination and words recognition are performed simultaneously. Extensive experiments over several public datasets show that the proposed technique can handle multi-orientation and multi-language scene texts and obtains outstanding proposal performance. The developed end-to-end systems also achieve very competitive scene text spotting and reading performance.
|
The scene text proposal idea is mainly inspired by the success of object proposal in many object detection systems. It has the advantage of locating more possible text regions to offer higher detection recall. It is often evaluated by the recall rate as well as the number of needed proposals - typically the smaller the number of proposals at a similar recall level, the better @cite_62 . False-positive scene text proposals are usually eliminated by either a text/non-text classifier @cite_39 @cite_3 or a scene text recognition model @cite_34 @cite_11 in end-to-end scene text reading systems.
|
{
"abstract": [
"Current top performing Pascal VOC object detectors employ detection proposals to guide the search for objects thereby avoiding exhaustive sliding window search across images. Despite the popularity of detection proposals, it is unclear which trade-offs are made when using them during object detection. We provide an in depth analysis of ten object proposal methods along with four baselines regarding ground truth annotation recall (on Pascal VOC 2007 and ImageNet 2013), repeatability, and impact on DPM detector performance. Our findings show common weaknesses of existing methods, and provide insights to choose the most adequate method for different settings.",
"In this paper, we develop a new approach called DeepText for text region proposal generation and text detection in natural images via a fully convolutional neural network (CNN). First, we propose the novel inception region proposal network (Inception-RPN), which slides an inception network with multi-scale windows over the top of convolutional feature maps and associates a set of text characteristic prior bounding boxes with each sliding position to generate high recall word region proposals. Next, we present a powerful text detection network that embeds ambiguous text category (ATC) information and multi-level region-of-interest pooling (MLRP) for text and non-text classification and accurate localization refinement. Our approach achieves an F-measure of 0.83 and 0.85 on the ICDAR 2011 and 2013 robust text detection benchmarks, outperforming previous state-of-the-art results.",
"Recently, a variety of real-world applications have triggered huge demand for techniques that can extract textual information from natural scenes. Therefore, scene text detection and recognition have become active research topics in computer vision. In this work, we investigate the problem of scene text detection from an alternative perspective and propose a novel algorithm for it. Different from traditional methods, which mainly make use of the properties of single characters or strokes, the proposed algorithm exploits the symmetry property of character groups and allows for direct extraction of text lines from natural images. The experiments on the latest ICDAR benchmarks demonstrate that the proposed algorithm achieves state-of-the-art performance. Moreover, compared to conventional approaches, the proposed algorithm shows stronger adaptability to texts in challenging scenarios.",
"Abstract Motivated by the success of powerful while expensive techniques to recognize words in a holistic way (, 2013; , 2014; , 2016) object proposals techniques emerge as an alternative to the traditional text detectors. In this paper we introduce a novel object proposals method that is specifically designed for text. We rely on a similarity based region grouping algorithm that generates a hierarchy of word hypotheses. Over the nodes of this hierarchy it is possible to apply a holistic word recognition method in an efficient way. Our experiments demonstrate that the presented method is superior in its ability of producing good quality word proposals when compared with class-independent algorithms. We show impressive recall rates with a few thousand proposals in different standard benchmarks, including focused or incidental text datasets, and multi-language scenarios. Moreover, the combination of our object proposals with existing whole-word recognizers (, 2014; , 2016) shows competitive performance in end-to-end word spotting, and, in some benchmarks, outperforms previously published results. Concretely, in the challenging ICDAR2015 Incidental Text dataset, we overcome in more than 10 F -score the best-performing method in the last ICDAR Robust Reading Competition (Karatzas, 2015). Source code of the complete end-to-end system is available at https: github.com lluisgomez TextProposals .",
"Text proposal has been gaining interest in recent years due to the great success of object proposal in categoriesindependent object localization. In this paper, we present a novel text-specific proposal technique that provides superior bounding boxes for accurate text localization in scenes. The proposed technique, which we call Text Edge Box (TEB), uses a binary edge map, a gradient map and an orientation map of an image as inputs. Connected components are first found within the binary edge map, which are scored by two proposed low-cue text features that are extracted in the gradient map and the orientation map, respectively. These scores present text probability of connected components and are aggregated in a text edge image. Scene texts proposals are finally generated by grouping the connected components and estimating their likelihood of being words. The proposed TEB has been evaluated on the two public scene text datasets: the Robust Reading Competition 2013 dataset (ICDAR 2013) dataset and the Street View Text (SVT) dataset. Experiments show that the proposed TEB outperforms the state-of-the-art techniques greatly."
],
"cite_N": [
"@cite_62",
"@cite_3",
"@cite_39",
"@cite_34",
"@cite_11"
],
"mid": [
"2104446196",
"2704256938",
"1935817682",
"2962984063",
"2607175958"
]
}
|
A pooling based scene text proposal technique for scene text reading in the wild
| 0 |
|
1811.10003
|
2896049206
|
Abstract Automatic reading texts in scenes has attracted increasing interest in recent years as texts often carry rich semantic information that is useful for scene understanding. In this paper, we propose a novel scene text proposal technique aiming for accurate reading texts in scenes. Inspired by the pooling layer in the deep neural network architecture, a pooling based scene text proposal technique is developed. A novel score function is designed which exploits the histogram of oriented gradients and is capable of ranking the proposals according to their probabilities of being text. An end-to-end scene text reading system has also been developed by incorporating the proposed scene text proposal technique where false alarms elimination and words recognition are performed simultaneously. Extensive experiments over several public datasets show that the proposed technique can handle multi-orientation and multi-language scene texts and obtains outstanding proposal performance. The developed end-to-end systems also achieve very competitive scene text spotting and reading performance.
|
Different scene text proposal approaches have been explored. One widely adopted approach combines generic object proposal techniques with text-specific features for scene text proposal generation. For example, EdgeBoxes @cite_55 is combined with two text-specific features for scene text proposal generation @cite_11 . In another work @cite_49 , EdgeBoxes is combined with the Aggregate Channel Feature (ACF) and AdaBoost classifiers to search for text regions. In @cite_34 , Selective Search @cite_4 is combined with Maximally Stable Extremal Regions (MSER) to extract texture features for dendrogram grouping. A text-specific symmetry feature is explored in @cite_39 to search for text line proposals directly, where false text line proposals are removed by a trained CNN classifier. Deep features have also been used for scene text proposal in recent years due to their superior performance. For example, inception layers are built on top of the last convolution layer of VGG16 to generate text proposal candidates in @cite_3 . The Region Proposal Network (RPN) and the Faster R-CNN structure are adopted for scene text proposal generation in @cite_63 @cite_35 .
|
{
"abstract": [
"In this paper, we propose a novel method called Rotational Region CNN (R2CNN) for detecting arbitrary-oriented texts in natural scene images. The framework is based on Faster R-CNN [1] architecture. First, we use the Region Proposal Network (RPN) to generate axis-aligned bounding boxes that enclose the texts with different orientations. Second, for each axis-aligned text box proposed by RPN, we extract its pooled features with different pooled sizes and the concatenated features are used to simultaneously predict the text non-text score, axis-aligned box and inclined minimum area box. At last, we use an inclined non-maximum suppression to get the detection results. Our approach achieves competitive results on text detection benchmarks: ICDAR 2015 and ICDAR 2013.",
"This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html ).",
"The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.",
"In this paper, we develop a new approach called DeepText for text region proposal generation and text detection in natural images via a fully convolutional neural network (CNN). First, we propose the novel inception region proposal network (Inception-RPN), which slides an inception network with multi-scale windows over the top of convolutional feature maps and associates a set of text characteristic prior bounding boxes with each sliding position to generate high recall word region proposals. Next, we present a powerful text detection network that embeds ambiguous text category (ATC) information and multi-level region-of-interest pooling (MLRP) for text and non-text classification and accurate localization refinement. Our approach achieves an F-measure of 0.83 and 0.85 on the ICDAR 2011 and 2013 robust text detection benchmarks, outperforming previous state-of-the-art results.",
"Recently, a variety of real-world applications have triggered huge demand for techniques that can extract textual information from natural scenes. Therefore, scene text detection and recognition have become active research topics in computer vision. In this work, we investigate the problem of scene text detection from an alternative perspective and propose a novel algorithm for it. Different from traditional methods, which mainly make use of the properties of single characters or strokes, the proposed algorithm exploits the symmetry property of character groups and allows for direct extraction of text lines from natural images. The experiments on the latest ICDAR benchmarks demonstrate that the proposed algorithm achieves state-of-the-art performance. Moreover, compared to conventional approaches, the proposed algorithm shows stronger adaptability to texts in challenging scenarios.",
"In this work we present an end-to-end system for text spotting--localising and recognising text in natural scene images--and text based image retrieval. This system is based on a region proposal mechanism for detection and deep convolutional neural networks for recognition. Our pipeline uses a novel combination of complementary proposal generation techniques to ensure high recall, and a fast subsequent filtering stage for improving precision. For the recognition and ranking of proposals, we train very large convolutional neural networks to perform word recognition on the whole proposal region at the same time, departing from the character classifier based systems of the past. These networks are trained solely on data produced by a synthetic text generation engine, requiring no human labelled data. Analysing the stages of our pipeline, we show state-of-the-art performance throughout. We perform rigorous experiments across a number of standard end-to-end text spotting benchmarks and text-based image retrieval datasets, showing a large improvement over all previous methods. Finally, we demonstrate a real-world application of our text spotting system to allow thousands of hours of news footage to be instantly searchable via a text query.",
"We propose a novel Connectionist Text Proposal Network (CTPN) that accurately localizes text lines in natural image. The CTPN detects a text line in a sequence of fine-scale text proposals directly in convolutional feature maps. We develop a vertical anchor mechanism that jointly predicts location and text non-text score of each fixed-width proposal, considerably improving localization accuracy. The sequential proposals are naturally connected by a recurrent neural network, which is seamlessly incorporated into the convolutional network, resulting in an end-to-end trainable model. This allows the CTPN to explore rich context information of image, making it powerful to detect extremely ambiguous text. The CTPN works reliably on multi-scale and multi-language text without further post-processing, departing from previous bottom-up methods requiring multi-step post filtering. It achieves 0.88 and 0.61 F-measure on the ICDAR 2013 and 2015 benchmarks, surpassing recent results [8, 35] by a large margin. The CTPN is computationally efficient with 0.14 s image, by using the very deep VGG16 model [27]. Online demo is available: http: textdet.com .",
"Abstract Motivated by the success of powerful while expensive techniques to recognize words in a holistic way (, 2013; , 2014; , 2016) object proposals techniques emerge as an alternative to the traditional text detectors. In this paper we introduce a novel object proposals method that is specifically designed for text. We rely on a similarity based region grouping algorithm that generates a hierarchy of word hypotheses. Over the nodes of this hierarchy it is possible to apply a holistic word recognition method in an efficient way. Our experiments demonstrate that the presented method is superior in its ability of producing good quality word proposals when compared with class-independent algorithms. We show impressive recall rates with a few thousand proposals in different standard benchmarks, including focused or incidental text datasets, and multi-language scenarios. Moreover, the combination of our object proposals with existing whole-word recognizers (, 2014; , 2016) shows competitive performance in end-to-end word spotting, and, in some benchmarks, outperforms previously published results. Concretely, in the challenging ICDAR2015 Incidental Text dataset, we overcome in more than 10 F -score the best-performing method in the last ICDAR Robust Reading Competition (Karatzas, 2015). Source code of the complete end-to-end system is available at https: github.com lluisgomez TextProposals .",
"Text proposal has been gaining interest in recent years due to the great success of object proposal in categoriesindependent object localization. In this paper, we present a novel text-specific proposal technique that provides superior bounding boxes for accurate text localization in scenes. The proposed technique, which we call Text Edge Box (TEB), uses a binary edge map, a gradient map and an orientation map of an image as inputs. Connected components are first found within the binary edge map, which are scored by two proposed low-cue text features that are extracted in the gradient map and the orientation map, respectively. These scores present text probability of connected components and are aggregated in a text edge image. Scene texts proposals are finally generated by grouping the connected components and estimating their likelihood of being words. The proposed TEB has been evaluated on the two public scene text datasets: the Robust Reading Competition 2013 dataset (ICDAR 2013) dataset and the Street View Text (SVT) dataset. Experiments show that the proposed TEB outperforms the state-of-the-art techniques greatly."
],
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_55",
"@cite_3",
"@cite_39",
"@cite_49",
"@cite_63",
"@cite_34",
"@cite_11"
],
"mid": [
"2725486421",
"2088049833",
"7746136",
"2704256938",
"1935817682",
"1922126009",
"2519818067",
"2962984063",
"2607175958"
]
}
|
A pooling based scene text proposal technique for scene text reading in the wild
| 0 |
|
1811.10003
|
2896049206
|
Abstract Automatic reading texts in scenes has attracted increasing interest in recent years as texts often carry rich semantic information that is useful for scene understanding. In this paper, we propose a novel scene text proposal technique aiming for accurate reading texts in scenes. Inspired by the pooling layer in the deep neural network architecture, a pooling based scene text proposal technique is developed. A novel score function is designed which exploits the histogram of oriented gradients and is capable of ranking the proposals according to their probabilities of being text. An end-to-end scene text reading system has also been developed by incorporating the proposed scene text proposal technique where false alarms elimination and words recognition are performed simultaneously. Extensive experiments over several public datasets show that the proposed technique can handle multi-orientation and multi-language scene texts and obtains outstanding proposal performance. The developed end-to-end systems also achieve very competitive scene text spotting and reading performance.
|
Most existing scene text proposal techniques have various limitations. For example, the EdgeBoxes based technique @cite_49 is efficient but often generates a large number of false-positive proposals. The hand-crafted text-specific features rely heavily on object boundaries, which are sensitive to image noise and degradation @cite_19 . Techniques using heuristic rules and parameters @cite_11 do not adapt well across datasets. The deep learning based technique @cite_3 produces a small number of proposals, but the recall rate becomes unstable when the Intersection over Union (IoU) threshold increases. In comparison, our proposed proposal technique does not rely on heuristic parameters and obtains a high recall rate with a small number of false-positive proposals.
|
{
"abstract": [
"This paper presents a scene text extraction technique that automatically detects and segments texts from scene images. Three text-specific features are designed over image edges with which a set of candidate text boundaries is first detected. For each detected candidate text boundary, one or more candidate characters are then extracted by using a local threshold that is estimated based on the surrounding image pixels. The real characters and words are finally identified by a support vector regression model that is trained using bags-of-words representation. The proposed technique has been evaluated over the latest ICDAR-2013 Robust Reading Competition dataset. Experiments show that it obtains superior F-measures of 78.19 and 75.24 (on atom level), respectively, for the scene text detection and segmentation tasks.",
"In this paper, we develop a new approach called DeepText for text region proposal generation and text detection in natural images via a fully convolutional neural network (CNN). First, we propose the novel inception region proposal network (Inception-RPN), which slides an inception network with multi-scale windows over the top of convolutional feature maps and associates a set of text characteristic prior bounding boxes with each sliding position to generate high recall word region proposals. Next, we present a powerful text detection network that embeds ambiguous text category (ATC) information and multi-level region-of-interest pooling (MLRP) for text and non-text classification and accurate localization refinement. Our approach achieves an F-measure of 0.83 and 0.85 on the ICDAR 2011 and 2013 robust text detection benchmarks, outperforming previous state-of-the-art results.",
"In this work we present an end-to-end system for text spotting--localising and recognising text in natural scene images--and text based image retrieval. This system is based on a region proposal mechanism for detection and deep convolutional neural networks for recognition. Our pipeline uses a novel combination of complementary proposal generation techniques to ensure high recall, and a fast subsequent filtering stage for improving precision. For the recognition and ranking of proposals, we train very large convolutional neural networks to perform word recognition on the whole proposal region at the same time, departing from the character classifier based systems of the past. These networks are trained solely on data produced by a synthetic text generation engine, requiring no human labelled data. Analysing the stages of our pipeline, we show state-of-the-art performance throughout. We perform rigorous experiments across a number of standard end-to-end text spotting benchmarks and text-based image retrieval datasets, showing a large improvement over all previous methods. Finally, we demonstrate a real-world application of our text spotting system to allow thousands of hours of news footage to be instantly searchable via a text query.",
"Text proposal has been gaining interest in recent years due to the great success of object proposal in categoriesindependent object localization. In this paper, we present a novel text-specific proposal technique that provides superior bounding boxes for accurate text localization in scenes. The proposed technique, which we call Text Edge Box (TEB), uses a binary edge map, a gradient map and an orientation map of an image as inputs. Connected components are first found within the binary edge map, which are scored by two proposed low-cue text features that are extracted in the gradient map and the orientation map, respectively. These scores present text probability of connected components and are aggregated in a text edge image. Scene texts proposals are finally generated by grouping the connected components and estimating their likelihood of being words. The proposed TEB has been evaluated on the two public scene text datasets: the Robust Reading Competition 2013 dataset (ICDAR 2013) dataset and the Street View Text (SVT) dataset. Experiments show that the proposed TEB outperforms the state-of-the-art techniques greatly."
],
"cite_N": [
"@cite_19",
"@cite_3",
"@cite_49",
"@cite_11"
],
"mid": [
"1967140047",
"2704256938",
"1922126009",
"2607175958"
]
}
|
A pooling based scene text proposal technique for scene text reading in the wild
| 0 |
|
1811.10003
|
2896049206
|
Abstract Automatic reading texts in scenes has attracted increasing interest in recent years as texts often carry rich semantic information that is useful for scene understanding. In this paper, we propose a novel scene text proposal technique aiming for accurate reading texts in scenes. Inspired by the pooling layer in the deep neural network architecture, a pooling based scene text proposal technique is developed. A novel score function is designed which exploits the histogram of oriented gradients and is capable of ranking the proposals according to their probabilities of being text. An end-to-end scene text reading system has also been developed by incorporating the proposed scene text proposal technique where false alarms elimination and words recognition are performed simultaneously. Extensive experiments over several public datasets show that the proposed technique can handle multi-orientation and multi-language scene texts and obtains outstanding proposal performance. The developed end-to-end systems also achieve very competitive scene text spotting and reading performance.
|
A large number of scene text detection techniques have been reported in the literature. Sliding windows have been widely used to search for texts in scene images @cite_51 @cite_30 @cite_9 . However, this approach usually has low efficiency because it adopts an exhaustive search process using multiple windows of different sizes and aspect ratios. Region based techniques have been proposed to overcome the low efficiency constraint. For example, Maximally Stable Extremal Regions (MSER) have been widely used @cite_17 @cite_28 @cite_46 @cite_52 for scene text detection. In addition, various hand-crafted text-specific features have also been extensively investigated, such as the Stroke Width Transform (SWT) @cite_12 , the Stroke Feature Transform (SFT) @cite_14 , text edge specific features @cite_19 , Stroke End Keypoints (SEK) and Stroke Bend Keypoints (SBK) @cite_29 , and deep features based regions @cite_36 @cite_58 @cite_27 . Different post-processing schemes have also been designed to remove false positives, e.g. heuristic-rule-based classifiers @cite_22 @cite_52 @cite_23 @cite_47 , graph processing @cite_51 @cite_30 , support vector regression @cite_19 , convolutional K-means @cite_30 , distance metric learning @cite_17 , AdaBoost @cite_46 @cite_2 , random forest @cite_12 @cite_14 , convolutional neural networks @cite_9 @cite_28 , etc.
|
{
"abstract": [
"We propose a system that finds text in natural scenes using a variety of cues. Our novel data-driven method incorporates coarse-to-fine detection of character pixels using convolutional features (Text-Conv), followed by extracting connected components (CCs) from characters using edge and color features, and finally performing a graph-based segmentation of CCs into words (Word-Graph). For Text-Conv, the initial detection is based on convolutional feature maps similar to those used in Convolutional Neural Networks (CNNs), but learned using Convolutional k-means. Convolution masks defined by local and neighboring patch features are used to improve detection accuracy. The Word-Graph algorithm uses contextual information to both improve word segmentation and prune false character word detections. Different definitions for foreground (text) regions are used to train the detection stages, some based on bounding box intersection, and others on bounding box and pixel intersection. Our system obtains pixel, character, and word detection f-measures of 93.14 , 90.26 , and 86.77 respectively for the ICDAR 2015 Robust Reading Focused Scene Text dataset, out-performing state-of-the-art systems. This approach may work for other detection targets with homogenous color in natural scenes.",
"This paper presents a robust text detection approach based on color-enhanced contrasting extremal region (CER) and neural networks. Given a color natural scene image, six component-trees are built from its grayscale image, hue and saturation channel images in a perception-based illumination invariant color space, and their inverted images, respectively. From each component-tree, color-enhanced CERs are extracted as character candidates. By using a \"divide-and-conquer\" strategy, each candidate image patch is labeled reliably by rules as one of five types, namely, Long, Thin, Fill, Square-large and Square-small, and classified as text or non-text by a corresponding neural network, which is trained by an ambiguity-free learning strategy. After pruning unambiguous non-text components, repeating components in each component-tree are pruned further. Remaining components are then grouped into candidate text-lines and verified by another set of neural networks. Finally, results from six component-trees are combined, and a post-processing step is used to recover lost characters. Our proposed method achieves superior performance on both ICDAR-2011 and ICDAR-2013 \"Reading Text in Scene Images\" test sets. Several open problems in this topic are discussed and we present our solution.Color-enhanced CERs are effective to be candidate-text-connected-components.Neural networks work very well for the challenging text non-text classification.The \"ambiguity-free learning\" strategy addresses the ambiguity problem properly.The \"divide-and-conquer\" strategy solves the size normalization problem well.",
"In this paper, we present a new approach for text localization in natural images, by discriminating text and non-text regions at three levels: pixel, component and text line levels. Firstly, a powerful low-level filter called the Stroke Feature Transform (SFT) is proposed, which extends the widely-used Stroke Width Transform (SWT) by incorporating color cues of text pixels, leading to significantly enhanced performance on inter-component separation and intra-component connection. Secondly, based on the output of SFT, we apply two classifiers, a text component classifier and a text-line classifier, sequentially to extract text regions, eliminating the heuristic procedures that are commonly used in previous approaches. The two classifiers are built upon two novel Text Covariance Descriptors (TCDs) that encode both the heuristic properties and the statistical characteristics of text stokes. Finally, text regions are located by simply thresholding the text-line confident map. Our method was evaluated on two benchmark datasets: ICDAR 2005 and ICDAR 2011, and the corresponding F-measure values are 0.72 and 0.73, respectively, surpassing previous methods in accuracy by a large margin.",
"Text detection in videos is challenging due to low resolution and complex background of videos. Besides, an arbitrary orientation of scene text lines in video makes the problem more complex and challenging. This paper presents a new method that extracts text lines of any orientations based on gradient vector flow (GVF) and neighbor component grouping. The GVF of edge pixels in the Sobel edge map of the input frame is explored to identify the dominant edge pixels which represent text components. The method extracts edge components corresponding to dominant pixels in the Sobel edge map, which we call text candidates (TC) of the text lines. We propose two grouping schemes. The first finds nearest neighbors based on geometrical properties of TC to group broken segments and neighboring characters which results in word patches. The end and junction points of skeleton of the word patches are considered to eliminate false positives, which output the candidate text components (CTC). The second is based on the direction and the size of the CTC to extract neighboring CTC and to restore missing CTC, which enables arbitrarily oriented text line detection in video frame. Experimental results on different datasets, including arbitrarily oriented text data, nonhorizontal and horizontal text data, Hua's data and ICDAR-03 data (camera images), show that the proposed method outperforms existing methods in terms of recall, precision and f-measure.",
"We introduce an algorithm for text detection and localization (\"spotting\") that is computationally efficient and produces state-of-the-art results. Our system uses multi-channel MSERs to detect a large number of promising regions, then subsamples these regions using a clustering approach. Representatives of region clusters are binarized and then passed on to a deep network. A final line grouping stage forms word-level segments. On the ICDAR 2011 and 2015 benchmarks, our algorithm obtains an F-score of 82 and 83 , respectively, at a computational cost of 1.2 seconds per frame. We also introduce a version that is three times as fast, with only a slight reduction in performance.",
"Recently, scene text detection has become an active research topic in computer vision and document analysis, because of its great importance and significant challenge. However, vast majority of the existing methods detect text within local regions, typically through extracting character, word or line level candidates followed by candidate aggregation and false positive elimination, which potentially exclude the effect of wide-scope and long-range contextual cues in the scene. To take full advantage of the rich information available in the whole natural image, we propose to localize text in a holistic manner, by casting scene text detection as a semantic segmentation problem. The proposed algorithm directly runs on full images and produces global, pixel-wise prediction maps, in which detections are subsequently formed. To better make use of the properties of text, three types of information regarding text region, individual characters and their relationship are estimated, with a single Fully Convolutional Network (FCN) model. With such predictions of text properties, the proposed algorithm can simultaneously handle horizontal, multi-oriented and curved text in real-world natural images. The experiments on standard benchmarks, including ICDAR 2013, ICDAR 2015 and MSRA-TD500, demonstrate that the proposed algorithm substantially outperforms previous state-of-the-art approaches. Moreover, we report the first baseline result on the recently-released, large-scale dataset COCO-Text.",
"Full end-to-end text recognition in natural images is a challenging problem that has received much attention recently. Traditional systems in this area have relied on elaborate models incorporating carefully hand-engineered features or large amounts of prior knowledge. In this paper, we take a different route and combine the representational power of large, multilayer neural networks together with recent developments in unsupervised feature learning, which allows us to use a common framework to train highly-accurate text detector and character recognizer modules. Then, using only simple off-the-shelf methods, we integrate these two modules into a full end-to-end, lexicon-driven, scene text recognition system that achieves state-of-the-art performance on standard benchmarks, namely Street View Text and ICDAR 2003.",
"",
"The maximally stable extremal region (MSER) method has been widely used to extract character candidates, but because of its requirement for maximum stability, high text detection performance is difficult to obtain. To overcome this problem, we propose a robust character candidate extraction method that performs ER tree construction, sub-path partitioning, sub-path pruning, and character candidate selection sequentially. Then, we use the AdaBoost trained character classifier to verify the extracted character candidates. Then, we use heuristics to refine the classified character candidates and group the refined character candidates into text regions according to their geometric adjacency and color similarity. We also apply the proposed text detection method to two different color channels C r and C b and obtain the final detection result by combining the detection results on the three different channels. The proposed text detection method on ICDAR 2013 dataset achieved 8 , 1 , and 4 improvements in recall rate, precision rate and f-score, respectively, compared to the state-of-the-art methods.",
"This paper presents a scene text extraction technique that automatically detects and segments texts from scene images. Three text-specific features are designed over image edges with which a set of candidate text boundaries is first detected. For each detected candidate text boundary, one or more candidate characters are then extracted by using a local threshold that is estimated based on the surrounding image pixels. The real characters and words are finally identified by a support vector regression model that is trained using bags-of-words representation. The proposed technique has been evaluated over the latest ICDAR-2013 Robust Reading Competition dataset. Experiments show that it obtains superior F-measures of 78.19 and 75.24 (on atom level), respectively, for the scene text detection and segmentation tasks.",
"Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results.",
"Text information in natural scene images serves as important clues for many image-based applications such as scene understanding, content-based image retrieval, assistive navigation, and automatic geocoding. However, locating text from a complex background with multiple colors is a challenging task. In this paper, we explore a new framework to detect text strings with arbitrary orientations in complex natural scene images. Our proposed framework of text string detection consists of two steps: 1) image partition to find text character candidates based on local gradient features and color uniformity of character components and 2) character candidate grouping to detect text strings based on joint structural features of text characters in each text string such as character size differences, distances between neighboring characters, and character alignment. By assuming that a text string has at least three characters, we propose two algorithms of text string detection: 1) adjacent character grouping method and 2) text line grouping method. The adjacent character grouping method calculates the sibling groups of each character candidate as string segments and then merges the intersecting sibling groups into text string. The text line grouping method performs Hough transform to fit text line among the centroids of text candidates. Each fitted text line describes the orientation of a potential text string. The detected text string is presented by a rectangle region covering all characters whose centroids are cascaded in its text line. To improve efficiency and accuracy, our algorithms are carried out in multi-scales. The proposed methods outperform the state-of-the-art results on the public Robust Reading Dataset, which contains text only in horizontal orientation. Furthermore, the effectiveness of our methods to detect text strings with arbitrary orientations is evaluated on the Oriented Scene Text Dataset collected by ourselves containing text strings in nonhorizontal orientations.",
"We present a hybrid algorithm for detection and tracking of text in natural scenes that goes beyond the full-detection approaches in terms of time performance optimization. A state-of-the-art scene text detection module based on Maximally Stable Extremal Regions (MSER) is used to detect text asynchronously, while on a separate thread detected text objects are tracked by MSER propagation. The cooperation of these two modules yields real time video processing at high frame rates even on low-resource devices.",
"This paper presents a novel scene text detection algorithm, Canny Text Detector, which takes advantage of the similarity between image edge and text for effective text localization with improved recall rate. As closely related edge pixels construct the structural information of an object, we observe that cohesive characters compose a meaningful word sentence sharing similar properties such as spatial location, size, color, and stroke width regardless of language. However, prevalent scene text detection approaches have not fully utilized such similarity, but mostly rely on the characters classified with high confidence, leading to low recall rate. By exploiting the similarity, our approach can quickly and robustly localize a variety of texts. Inspired by the original Canny edge detector, our algorithm makes use of double threshold and hysteresis tracking to detect texts of low confidence. Experimental results on public datasets demonstrate that our algorithm outperforms the state-of the-art scene text detection methods in terms of detection rate.",
"In this paper, we propose a novel approach for text detec- tion in natural images. Both local and global cues are taken into account for localizing text lines in a coarse-to-fine pro- cedure. First, a Fully Convolutional Network (FCN) model is trained to predict the salient map of text regions in a holistic manner. Then, text line hypotheses are estimated by combining the salient map and character components. Fi- nally, another FCN classifier is used to predict the centroid of each character, in order to remove the false hypotheses. The framework is general for handling text in multiple ori- entations, languages and fonts. The proposed method con- sistently achieves the state-of-the-art performance on three text detection benchmarks: MSRA-TD500, ICDAR2015 and ICDAR2013.",
"The prevalent scene text detection approach follows four sequential steps comprising character candidate detection, false character candidate removal, text line extraction, and text line verification. However, errors occur and accumulate throughout each of these sequential steps which often lead to low detection performance. To address these issues, we propose a unified scene text detection system, namely Text Flow, by utilizing the minimum cost (min-cost) flow network model. With character candidates detected by cascade boosting, the min-cost flow network model integrates the last three sequential steps into a single process which solves the error accumulation problem at both character level and text line level effectively. The proposed technique has been tested on three public datasets, i.e, ICDAR2011 dataset, ICDAR2013 dataset and a multilingual dataset and it outperforms the state-of-the-art methods on all three datasets with much higher recall and F-score. The good performance on the multilingual dataset shows that the proposed technique can be used for the detection of texts in different languages.",
"We present a novel image operator that seeks to find the value of stroke width for each image pixel, and demonstrate its use on the task of text detection in natural images. The suggested operator is local and data dependent, which makes it fast and robust enough to eliminate the need for multi-scale computation or scanning windows. Extensive testing shows that the suggested scheme outperforms the latest published algorithms. Its simplicity allows the algorithm to detect texts in many fonts and languages.",
"Text detection in natural scene images is an important prerequisite for many content-based image analysis tasks. In this paper, we propose an accurate and robust method for detecting texts in natural scene images. A fast and effective pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSERs) as character candidates using the strategy of minimizing regularized variations. Character candidates are grouped into text candidates by the single-link clustering algorithm, where distance weights and clustering threshold are learned automatically by a novel self-training distance metric learning algorithm. The posterior probabilities of text candidates corresponding to non-text are estimated with a character classifier; text candidates with high non-text probabilities are eliminated and texts are identified with a text classifier. The proposed system is evaluated on the ICDAR 2011 Robust Reading Competition database; the f-measure is over 76 , much better than the state-of-the-art performance of 71 . Experiments on multilingual, street view, multi-orientation and even born-digital databases also demonstrate the effectiveness of the proposed method."
],
"cite_N": [
"@cite_30",
"@cite_47",
"@cite_14",
"@cite_22",
"@cite_28",
"@cite_36",
"@cite_9",
"@cite_29",
"@cite_52",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_46",
"@cite_58",
"@cite_51",
"@cite_12",
"@cite_17"
],
"mid": [
"2472159136",
"2020964295",
"2128854450",
"1978854150",
"2300131423",
"2464918637",
"1607307044",
"",
"2113295334",
"1967140047",
"2217433794",
"2166949156",
"1998384060",
"2468724597",
"2952365771",
"2949728256",
"2142159465",
"2148214126"
]
}
|
A pooling based scene text proposal technique for scene text reading in the wild
| 0 |
|
1811.10003
|
2896049206
|
Abstract Automatic reading texts in scenes has attracted increasing interest in recent years as texts often carry rich semantic information that is useful for scene understanding. In this paper, we propose a novel scene text proposal technique aiming for accurate reading texts in scenes. Inspired by the pooling layer in the deep neural network architecture, a pooling based scene text proposal technique is developed. A novel score function is designed which exploits the histogram of oriented gradients and is capable of ranking the proposals according to their probabilities of being text. An end-to-end scene text reading system has also been developed by incorporating the proposed scene text proposal technique where false alarms elimination and words recognition are performed simultaneously. Extensive experiments over several public datasets show that the proposed technique can handle multi-orientation and multi-language scene texts and obtains outstanding proposal performance. The developed end-to-end systems also achieve very competitive scene text spotting and reading performance.
|
With the advance of convolutional neural networks (CNNs), different CNN models have been exploited for scene text detection. For example, DeepText makes use of convolutional layers for deep feature extraction and inception layers for bounding box prediction @cite_3 . TextBoxes @cite_59 adopts the Single Shot MultiBox Detector (SSD) @cite_21 to deal with multi-scale texts in scenes. Quadrilateral anchor boxes have also been proposed for detecting tighter scene text boxes @cite_40 , and a direct regression solution has been proposed @cite_31 to remove hand-crafted anchor boxes altogether. Different CNN-based detection and learning schemes have also been explored. For example, some works adopt a bottom-up approach that first detects characters and then groups them into words or text lines @cite_63 @cite_57 @cite_41 , while other systems define a text boundary class for pixel-level scene text detection @cite_25 @cite_48 . In addition, weakly supervised and semi-supervised learning approaches @cite_37 have also been studied to address the image annotation constraint @cite_7 .
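The pixel-level formulation mentioned above can be sketched with a tiny fully convolutional text/non-text classifier; this is only a schematic PyTorch example with arbitrary layer sizes, not the architecture of any cited paper.

```python
import torch
import torch.nn as nn

class TextSegmentationFCN(nn.Module):
    """Schematic fully convolutional text/non-text pixel classifier."""
    def __init__(self, in_channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, 2, 1)   # per-pixel {background, text} logits
        self.upsample = nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False)

    def forward(self, x):
        # Predict at 1/4 resolution, then upsample back to the input size.
        return self.upsample(self.classifier(self.features(x)))

# A 512x512 RGB image yields a 2-channel text/non-text score map of the same size.
scores = TextSegmentationFCN()(torch.randn(1, 3, 512, 512))
print(scores.shape)  # torch.Size([1, 2, 512, 512])
```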
|
{
"abstract": [
"Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at https: bitbucket.org deeplab deeplab-public.",
"The requiring of large amounts of annotated training data has become a common constraint on various deep learning systems. In this paper, we propose a weakly supervised scene text detection method (WeText) that trains robust and accurate scene text detection models by learning from unannotated or weakly annotated data. With a \"light\" supervised model trained on a small fully annotated dataset, we explore semi-supervised and weakly supervised learning on a large unannotated dataset and a large weakly annotated dataset, respectively. For the unsupervised learning, the light supervised model is applied to the unannotated dataset to search for more character training samples, which are further combined with the small annotated dataset to retrain a superior character detection model. For the weakly supervised learning, the character searching is guided by high-level annotations of words text lines that are widely available and also much easier to prepare. In addition, we design an unified scene character detector by adapting regression based deep networks, which greatly relieves the error accumulation issue that widely exists in most traditional approaches. Extensive experiments across different unannotated and weakly annotated datasets show that the scene text detection performance can be clearly boosted under both scenarios, where the weakly supervised learning can achieve the state-of-the-art performance by using only 229 fully annotated scene text images.",
"Imagery texts are usually organized as a hierarchy of several visual elements, i.e. characters, words, text lines and text blocks. Among these elements, character is the most basic one for various languages such as Western, Chinese, Japanese, mathematical expression and etc. It is natural and convenient to construct a common text detection engine based on character detectors. However, training character detectors requires a vast of location annotated characters, which are expensive to obtain. Actually, the existing real text datasets are mostly annotated in word or line level. To remedy this dilemma, we propose a weakly supervised framework that can utilize word annotations, either in tight quadrangles or the more loose bounding boxes, for character detector training. When applied in scene text detection, we are thus able to train a robust character detector by exploiting word annotations in the rich large-scale real scene text datasets, e.g. ICDAR15 and COCO-text. The character detector acts as a key role in the pipeline of our text detection engine. It achieves the state-of-the-art performance on several challenging scene text detection benchmarks. We also demonstrate the flexibility of our pipeline by various scenarios, including deformed text detection and math expression recognition.",
"",
"",
"In this paper, we develop a new approach called DeepText for text region proposal generation and text detection in natural images via a fully convolutional neural network (CNN). First, we propose the novel inception region proposal network (Inception-RPN), which slides an inception network with multi-scale windows over the top of convolutional feature maps and associates a set of text characteristic prior bounding boxes with each sliding position to generate high recall word region proposals. Next, we present a powerful text detection network that embeds ambiguous text category (ATC) information and multi-level region-of-interest pooling (MLRP) for text and non-text classification and accurate localization refinement. Our approach achieves an F-measure of 0.83 and 0.85 on the ICDAR 2011 and 2013 robust text detection benchmarks, outperforming previous state-of-the-art results.",
"Most state-of-the-art text detection methods are specific to horizontal Latin text and are not fast enough for real-time applications. We introduce Segment Linking (SegLink), an oriented text detection method. The main idea is to decompose text into two locally detectable elements, namely segments and links. A segment is an oriented box covering a part of a word or text line; A link connects two adjacent segments, indicating that they belong to the same word or text line. Both elements are detected densely at multiple scales by an end-to-end trained, fully-convolutional neural network. Final detections are produced by combining segments connected by links. Compared with previous methods, SegLink improves along the dimensions of accuracy, speed, and ease of training. It achieves an f-measure of 75.0 on the standard ICDAR 2015 Incidental (Challenge 4) benchmark, outperforming the previous best by a large margin. It runs at over 20 FPS on 512x512 images. Moreover, without modification, SegLink is able to detect long lines of non-Latin text, such as Chinese.",
"Detecting incidental scene text is a challenging task because of multi-orientation, perspective distortion, and variation of text size, color and scale. Retrospective research has only focused on using rectangular bounding box or horizontal sliding window to localize text, which may result in redundant background noise, unnecessary overlap or even information loss. To address these issues, we propose a new Convolutional Neural Networks (CNNs) based method, named Deep Matching Prior Network (DMPNet), to detect text with tighter quadrangle. First, we use quadrilateral sliding windows in several specific intermediate convolutional layers to roughly recall the text with higher overlapping area and then a shared Monte-Carlo method is proposed for fast and accurate computing of the polygonal areas. After that, we designed a sequential protocol for relative regression which can exactly predict text with compact quadrangle. Moreover, a auxiliary smooth Ln loss is also proposed for further regressing the position of text, which has better overall performance than L2 loss and smooth L1 loss in terms of robustness and stability. The effectiveness of our approach is evaluated on a public word-level, multi-oriented scene text database, ICDAR 2015 Robust Reading Competition Challenge 4 Incidental scene text localization. The performance of our method is evaluated by using F-measure and found to be 70.64 , outperforming the existing state-of-the-art method with F-measure 63.76 .",
"This paper presents an end-to-end trainable fast scene text detector, named TextBoxes, which detects scene text with both high accuracy and efficiency in a single network forward pass, involving no post-process except for a standard non-maximum suppression. TextBoxes outperforms competing methods in terms of text localization accuracy and is much faster, taking only 0.09s per image in a fast implementation. Furthermore, combined with a text recognizer, TextBoxes significantly outperforms state-of-the-art approaches on word spotting and end-to-end text recognition tasks.",
"We propose a novel Connectionist Text Proposal Network (CTPN) that accurately localizes text lines in natural image. The CTPN detects a text line in a sequence of fine-scale text proposals directly in convolutional feature maps. We develop a vertical anchor mechanism that jointly predicts location and text non-text score of each fixed-width proposal, considerably improving localization accuracy. The sequential proposals are naturally connected by a recurrent neural network, which is seamlessly incorporated into the convolutional network, resulting in an end-to-end trainable model. This allows the CTPN to explore rich context information of image, making it powerful to detect extremely ambiguous text. The CTPN works reliably on multi-scale and multi-language text without further post-processing, departing from previous bottom-up methods requiring multi-step post filtering. It achieves 0.88 and 0.61 F-measure on the ICDAR 2013 and 2015 benchmarks, surpassing recent results [8, 35] by a large margin. The CTPN is computationally efficient with 0.14 s image, by using the very deep VGG16 model [27]. Online demo is available: http: textdet.com .",
"In this paper, we first provide a new perspective to divide existing high performance object detection methods into direct and indirect regressions. Direct regression performs boundary regression by predicting the offsets from a given point, while indirect regression predicts the offsets from some bounding box proposals. Then we analyze the drawbacks of the indirect regression, which the recent state-of-the-art detection structures like Faster-RCNN and SSD follows, for multi-oriented scene text detection, and point out the potential superiority of direct regression. To verify this point of view, we propose a deep direct regression based method for multi-oriented scene text detection. Our detection framework is simple and effective with a fully convolutional network and one-step post processing. The fully convolutional network is optimized in an end-to-end way and has bi-task outputs where one is pixel-wise classification between text and non-text, and the other is direct regression to determine the vertex coordinates of quadrilateral text boundaries. The proposed method is particularly beneficial for localizing incidental scene texts. On the ICDAR2015 Incidental Scene Text benchmark, our method achieves the F1-measure of 81 , which is a new state-of-the-art and significantly outperforms previous approaches. On other standard datasets with focused scene texts, our method also reaches the state-of-the-art performance.",
"In recent years, text recognition has achieved remarkable success in recognizing scanned document text. However, word recognition in natural images is still an open problem, which generally requires time consuming post-processing steps. We present a novel architecture for individual word detection in scene images based on semantic segmentation. Our contributions are twofold: the concept of WordFence, which detects border areas surrounding each individual word and a novel pixelwise weighted softmax loss function which penalizes background and emphasizes small text regions. WordFence ensures that each word is detected individually, and the new loss function provides a strong training signal to both text and word border localization. The proposed technique avoids intensive post-processing, producing an end-to-end word detection system. We achieve superior localization recall on common benchmark datasets - 92 recall on ICDAR11 and ICDAR13 and 63 recall on SVT. Furthermore, our end-to-end word recognition system achieves state-of-the-art 86 F-Score on ICDAR13."
],
"cite_N": [
"@cite_37",
"@cite_7",
"@cite_41",
"@cite_48",
"@cite_21",
"@cite_3",
"@cite_57",
"@cite_40",
"@cite_59",
"@cite_63",
"@cite_31",
"@cite_25"
],
"mid": [
"2221898772",
"2962935569",
"2749425057",
"",
"",
"2704256938",
"2950143680",
"2604243686",
"2962773189",
"2519818067",
"2952662639",
"2951183746"
]
}
|
A pooling based scene text proposal technique for scene text reading in the wild
| 0 |
|
1811.10003
|
2896049206
|
Abstract Automatic reading texts in scenes has attracted increasing interest in recent years as texts often carry rich semantic information that is useful for scene understanding. In this paper, we propose a novel scene text proposal technique aiming for accurate reading texts in scenes. Inspired by the pooling layer in the deep neural network architecture, a pooling based scene text proposal technique is developed. A novel score function is designed which exploits the histogram of oriented gradients and is capable of ranking the proposals according to their probabilities of being text. An end-to-end scene text reading system has also been developed by incorporating the proposed scene text proposal technique where false alarms elimination and words recognition are performed simultaneously. Extensive experiments over several public datasets show that the proposed technique can handle multi-orientation and multi-language scene texts and obtains outstanding proposal performance. The developed end-to-end systems also achieve very competitive scene text spotting and reading performance.
|
Quite a number of CNN-based end-to-end scene text reading systems have been reported in recent years. In @cite_9 @cite_45 , a CNN-based character recognition model is developed where word information is extracted from a text saliency map using sliding windows. The same framework is adopted in @cite_60 , where a more robust end-to-end scene text reading system is developed by training a single model for three tasks: text and non-text classification, case-insensitive character recognition, and case-sensitive character recognition. In @cite_59 , an advanced end-to-end scene text reading system is designed where the Single Shot MultiBox Detector (SSD) is employed for scene text detection and the transcription model proposed in @cite_10 is adopted for recognition. End-to-end trainable scene text reading systems have also been proposed that concurrently produce text locations and transcriptions @cite_61 .
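The transcription model of @cite_10 follows a CNN plus bidirectional LSTM design trained with CTC; the sketch below only illustrates that overall structure, with layer sizes chosen arbitrarily rather than taken from the paper.

```python
import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    """Schematic CNN + BiLSTM word transcription model, decoded with CTC."""
    def __init__(self, num_classes=37):   # e.g. 26 letters + 10 digits + CTC blank
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1), (2, 1)),
        )
        self.rnn = nn.LSTM(128 * 8, 128, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, num_classes)

    def forward(self, x):                        # x: (batch, 1, 32, width) word crops
        f = self.cnn(x)                          # (batch, 128, 8, width/2)
        f = f.permute(0, 3, 1, 2).flatten(2)     # column-wise feature sequence
        out, _ = self.rnn(f)
        return self.fc(out)                      # per-timestep class logits for CTC

logits = TinyCRNN()(torch.randn(2, 1, 32, 128))
print(logits.shape)  # torch.Size([2, 64, 37])
```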
|
{
"abstract": [
"A method for scene text localization and recognition is proposed. The novelties include: training of both text detection and recognition in a single end-to-end pass, the structure of the recognition CNN and the geometry of its input layer that preserves the aspect of the text and adapts its resolution to the data.,,The proposed method achieves state-of-the-art accuracy in the end-to-end text recognition on two standard datasets – ICDAR 2013 and ICDAR 2015, whilst being an order of magnitude faster than competing methods - the whole pipeline runs at 10 frames per second on an NVidia K80 GPU.",
"The goal of this work is text spotting in natural images. This is divided into two sequential tasks: detecting words regions in the image, and recognizing the words within these regions. We make the following contributions: first, we develop a Convolutional Neural Network (CNN) classifier that can be used for both tasks. The CNN has a novel architecture that enables efficient feature sharing (by using a number of layers in common) for text detection, character case-sensitive and insensitive classification, and bigram classification. It exceeds the state-of-the-art performance for all of these. Second, we make a number of technical changes over the traditional CNN architectures, including no downsampling for a per-pixel sliding window, and multi-mode learning with a mixture of linear models (maxout). Third, we have a method of automated data mining of Flickr, that generates word and character level annotations. Finally, these components are used together to form an end-to-end, state-of-the-art text spotting system. We evaluate the text-spotting system on two standard benchmarks, the ICDAR Robust Reading data set and the Street View Text data set, and demonstrate improvements over the state-of-the-art on multiple measures.",
"Full end-to-end text recognition in natural images is a challenging problem that has received much attention recently. Traditional systems in this area have relied on elaborate models incorporating carefully hand-engineered features or large amounts of prior knowledge. In this paper, we take a different route and combine the representational power of large, multilayer neural networks together with recent developments in unsupervised feature learning, which allows us to use a common framework to train highly-accurate text detector and character recognizer modules. Then, using only simple off-the-shelf methods, we integrate these two modules into a full end-to-end, lexicon-driven, scene text recognition system that achieves state-of-the-art performance on standard benchmarks, namely Street View Text and ICDAR 2003.",
"In this work, we tackle the problem of car license plate detection and recognition in natural scene images. Inspired by the success of deep neural networks (DNNs) in various vision applications, here we leverage DNNs to learn high-level features in a cascade framework, which lead to improved performance on both detection and recognition. Firstly, we train a @math -class convolutional neural network (CNN) to detect all characters in an image, which results in a high recall, compared with conventional approaches such as training a binary text non-text classifier. False positives are then eliminated by the second plate non-plate CNN classifier. Bounding box refinement is then carried out based on the edge information of the license plates, in order to improve the intersection-over-union (IoU) ratio. The proposed cascade framework extracts license plates effectively with both high recall and precision. Last, we propose to recognize the license characters as a sequence labelling problem. A recurrent neural network (RNN) with long short-term memory (LSTM) is trained to recognize the sequential features extracted from the whole license plate via CNNs. The main advantage of this approach is that it is segmentation free. By exploring context information and avoiding errors caused by segmentation, the RNN method performs better than a baseline method of combining segmentation and deep CNN classification; and achieves state-of-the-art recognition accuracy.",
"This paper presents an end-to-end trainable fast scene text detector, named TextBoxes, which detects scene text with both high accuracy and efficiency in a single network forward pass, involving no post-process except for a standard non-maximum suppression. TextBoxes outperforms competing methods in terms of text localization accuracy and is much faster, taking only 0.09s per image in a fast implementation. Furthermore, combined with a text recognizer, TextBoxes significantly outperforms state-of-the-art approaches on word spotting and end-to-end text recognition tasks.",
"Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it."
],
"cite_N": [
"@cite_61",
"@cite_60",
"@cite_9",
"@cite_45",
"@cite_59",
"@cite_10"
],
"mid": [
"2777652944",
"70975097",
"1607307044",
"2279655419",
"2962773189",
"2194187530"
]
}
|
A pooling based scene text proposal technique for scene text reading in the wild
| 0 |
|
1811.10003
|
2896049206
|
Abstract Automatic reading texts in scenes has attracted increasing interest in recent years as texts often carry rich semantic information that is useful for scene understanding. In this paper, we propose a novel scene text proposal technique aiming for accurate reading texts in scenes. Inspired by the pooling layer in the deep neural network architecture, a pooling based scene text proposal technique is developed. A novel score function is designed which exploits the histogram of oriented gradients and is capable of ranking the proposals according to their probabilities of being text. An end-to-end scene text reading system has also been developed by incorporating the proposed scene text proposal technique where false alarms elimination and words recognition are performed simultaneously. Extensive experiments over several public datasets show that the proposed technique can handle multi-orientation and multi-language scene texts and obtains outstanding proposal performance. The developed end-to-end systems also achieve very competitive scene text spotting and reading performance.
|
Our end-to-end scene text reading system adopts a framework similar to those presented in @cite_34 @cite_49 , which exploit text proposals together with existing scene text recognition models. One distinguishing feature is that, thanks to the proposed pooling-based proposal technique and the gradient-histogram-based proposal ranking, it requires only around one-fifth of the number of proposals used by prior proposal-based end-to-end systems.
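The abstract describes ranking proposals with a score derived from the histogram of oriented gradients; the exact score function is not reproduced here, so the sketch below uses a deliberately simple HOG statistic (the `hog_score` rule is an assumption for illustration only) to show how candidate boxes could be ranked and pruned before recognition.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def hog_score(gray, box):
    """Toy text-likeness score for one proposal: mean HOG energy inside the box."""
    x, y, w, h = box
    patch = resize(gray[y:y + h, x:x + w], (32, 64), anti_aliasing=True)
    descriptor = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2), block_norm="L2-Hys")
    return descriptor.mean()

def rank_proposals(gray, boxes, keep=200):
    """Keep only the top-`keep` proposals by descending score before recognition."""
    return sorted(boxes, key=lambda b: hog_score(gray, b), reverse=True)[:keep]

# Usage with a dummy image and random (x, y, w, h) boxes:
gray = np.random.rand(480, 640)
boxes = [(np.random.randint(0, 500), np.random.randint(0, 400), 100, 40) for _ in range(50)]
print(rank_proposals(gray, boxes, keep=5))
```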
|
{
"abstract": [
"Abstract Motivated by the success of powerful while expensive techniques to recognize words in a holistic way (, 2013; , 2014; , 2016) object proposals techniques emerge as an alternative to the traditional text detectors. In this paper we introduce a novel object proposals method that is specifically designed for text. We rely on a similarity based region grouping algorithm that generates a hierarchy of word hypotheses. Over the nodes of this hierarchy it is possible to apply a holistic word recognition method in an efficient way. Our experiments demonstrate that the presented method is superior in its ability of producing good quality word proposals when compared with class-independent algorithms. We show impressive recall rates with a few thousand proposals in different standard benchmarks, including focused or incidental text datasets, and multi-language scenarios. Moreover, the combination of our object proposals with existing whole-word recognizers (, 2014; , 2016) shows competitive performance in end-to-end word spotting, and, in some benchmarks, outperforms previously published results. Concretely, in the challenging ICDAR2015 Incidental Text dataset, we overcome in more than 10 F -score the best-performing method in the last ICDAR Robust Reading Competition (Karatzas, 2015). Source code of the complete end-to-end system is available at https: github.com lluisgomez TextProposals .",
"In this work we present an end-to-end system for text spotting--localising and recognising text in natural scene images--and text based image retrieval. This system is based on a region proposal mechanism for detection and deep convolutional neural networks for recognition. Our pipeline uses a novel combination of complementary proposal generation techniques to ensure high recall, and a fast subsequent filtering stage for improving precision. For the recognition and ranking of proposals, we train very large convolutional neural networks to perform word recognition on the whole proposal region at the same time, departing from the character classifier based systems of the past. These networks are trained solely on data produced by a synthetic text generation engine, requiring no human labelled data. Analysing the stages of our pipeline, we show state-of-the-art performance throughout. We perform rigorous experiments across a number of standard end-to-end text spotting benchmarks and text-based image retrieval datasets, showing a large improvement over all previous methods. Finally, we demonstrate a real-world application of our text spotting system to allow thousands of hours of news footage to be instantly searchable via a text query."
],
"cite_N": [
"@cite_34",
"@cite_49"
],
"mid": [
"2962984063",
"1922126009"
]
}
|
A pooling based scene text proposal technique for scene text reading in the wild
| 0 |
|
1811.09747
|
2901872144
|
We develop methods for efficient amortized approximate Bayesian inference over posterior distributions of probabilistic clustering models, such as Dirichlet process mixture models. The approach is based on mapping distributed, symmetry-invariant representations of cluster arrangements into conditional probabilities. The method parallelizes easily, yields iid samples from the approximate posterior of cluster assignments with the same computational cost of a single Gibbs sampler sweep, and can easily be applied to both conjugate and non-conjugate models, as training only requires samples from the generative model.
|
The work @cite_7 provides an overview of deterministic clustering based on neural networks, and @cite_12 proposes a biologically inspired network for online clustering. Our work differs from previous approaches in its use of neural networks to explicitly approximate fully Bayesian inference in a probabilistic generative clustering model. Similar amortized approaches to Bayesian inference have been explored for Bayesian networks @cite_9 , sequential Monte Carlo @cite_1 , probabilistic programming @cite_13 @cite_15 , and particle tracking @cite_3 . The representation of a set via a sum (or mean) of encoding vectors was also used in @cite_14 @cite_4 @cite_20 @cite_18 .
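The sum-of-encodings set representation mentioned in the last sentence is easy to make concrete; the following minimal PyTorch sketch (with arbitrary layer sizes) shows why the resulting vector is invariant to the ordering of the set elements.

```python
import torch
import torch.nn as nn

class SumSetEncoder(nn.Module):
    """Encode a variable-size set as the sum of per-element encodings."""
    def __init__(self, d_in=2, d_h=128):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(d_in, d_h), nn.ReLU(), nn.Linear(d_h, d_h))

    def forward(self, x):             # x: (set_size, d_in)
        return self.h(x).sum(dim=0)   # (d_h,) -- independent of element order

enc = SumSetEncoder()
x = torch.randn(5, 2)
perm = torch.randperm(5)
print(torch.allclose(enc(x), enc(x[perm]), atol=1e-5))  # True: order-invariant
```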
|
{
"abstract": [
"",
"",
"An efficient learner is one who reuses what they already know to tackle a new problem. For a machine learner, this means understanding the similarities amongst datasets. In order to do this, one must take seriously the idea of working with datasets, rather than datapoints, as the key objects to model. Towards this goal, we demonstrate an extension of a variational autoencoder that can learn a method for computing representations, or statistics, of datasets in an unsupervised fashion. The network is trained to produce statistics that encapsulate a generative model for each dataset. Hence the network enables efficient learning from new datasets for both unsupervised and supervised tasks. We show that we are able to learn statistics that can be used for: clustering datasets, transferring generative models to new datasets, selecting representative samples of datasets and classifying previously unseen classes.",
"Clustering is a fundamental data analysis method. It is widely used for pattern recognition, feature extraction, vector quantization (VQ), image segmentation, function approximation, and data mining. As an unsupervised classification technique, clustering identifies some inherent structures present in a set of objects based on a similarity measure. Clustering methods can be based on statistical model identification (McLachlan & Basford, 1988) or competitive learning. In this paper, we give a comprehensive overview of competitive learning based clustering methods. Importance is attached to a number of competitive learning based clustering neural networks such as the self-organizing map (SOM), the learning vector quantization (LVQ), the neural gas, and the ART model, and clustering algorithms such as the C-means, mountain subtractive clustering, and fuzzy C-means (FCM) algorithms. Associated topics such as the under-utilization problem, fuzzy clustering, robust clustering, clustering based on non-Euclidean distance measures, supervised clustering, hierarchical clustering as well as cluster validity are also described. Two examples are given to demonstrate the use of the clustering methods.",
"We describe a class of algorithms for amortized inference in Bayesian networks. In this setting, we invest computation upfront to support rapid online inference for a wide range of queries. Our approach is based on learning an inverse factorization of a model's joint distribution: a factorization that turns observations into root nodes. Our algorithms accumulate information to estimate the local conditional distributions that constitute such a factorization. These stochastic inverses can be used to invert each of the computation steps leading to an observation, sampling backwards in order to quickly find a likely explanation. We show that estimated inverses converge asymptotically in number of (prior or posterior) training samples. To make use of inverses before convergence, we describe the Inverse MCMC algorithm, which uses stochastic inverses to make block proposals for a Metropolis-Hastings sampler. We explore the efficiency of this sampler for a variety of parameter regimes and Bayes nets.",
"We introduce a new approach for amortizing inference in directed graphical models by learning heuristic approximations to stochastic inverses, designed specifically for use as proposal distributions in sequential Monte Carlo methods. We describe a procedure for constructing and learning a structured neural network which represents an inverse factorization of the graphical model, resulting in a conditional density estimator that takes as input particular values of the observed random variables, and returns an approximation to the distribution of the latent variables. This recognition model can be learned offline, independent from any particular dataset, prior to performing inference. The output of these networks can be used as automatically-learned high-quality proposal distributions to accelerate sequential Monte Carlo across a diverse range of problem settings.",
"Many important datasets in physics, chemistry, and biology consist of noisy sequences of images of multiple moving overlapping particles. In many cases, the observed particles are indistinguishable, leading to unavoidable uncertainty about nearby particles9 identities. Exact Bayesian inference is intractable in this setting, and previous approximate Bayesian methods scale poorly. Non-Bayesian approaches that output a single \"best\" estimate of the particle tracks (thus discarding important uncertainty information) are therefore dominant in practice. Here we propose a flexible and scalable amortized approach for Bayesian inference on this task. We introduce a novel neural network method to approximate the (intractable) filter-backward-sample-forward algorithm for Bayesian inference in this setting. By varying the simulated training data for the network, we can perform inference on a wide variety of data types. This approach is therefore highly flexible and improves on the state of the art in terms of accuracy; provides uncertainty estimates about the particle locations and identities; and has a test runtime that scales linearly as a function of the data length and number of particles, thus enabling Bayesian inference in arbitrarily large particle tracking datasets.",
"We introduce a method for using deep neural networks to amortize the cost of inference in models from the family induced by universal probabilistic programming languages, establishing a framework that combines the strengths of probabilistic programming and deep learning methods. We call what we do \"compilation of inference\" because our method transforms a denotational specification of an inference problem in the form of a probabilistic program written in a universal programming language into a trained neural network denoted in a neural network specification language. When at test time this neural network is fed observational data and executed, it performs approximate inference in the original model specified by the probabilistic program. Our training objective and learning procedure are designed to allow the trained neural network to be used as a proposal distribution in a sequential importance sampling inference engine. We illustrate our method on mixture models and Captcha solving and show significant speedups in the efficiency of inference.",
"",
"Probabilistic programming languages (PPLs) are a powerful modeling tool, able to represent any computable probability distribution. Unfortunately, probabilistic program inference is often intractable, and existing PPLs mostly rely on expensive, approximate sampling-based methods. To alleviate this problem, one could try to learn from past inferences, so that future inferences run faster. This strategy is known as amortized inference; it has recently been applied to Bayesian networks and deep generative models. This paper proposes a system for amortized inference in PPLs. In our system, amortization comes in the form of a parameterized guide program. Guide programs have similar structure to the original program, but can have richer data flow, including neural network components. These networks can be optimized so that the guide approximately samples from the posterior distribution defined by the original program. We present a flexible interface for defining guide programs and a stochastic gradient-based scheme for optimizing guide parameters, as well as some preliminary results on automatically deriving guide programs. We explore in detail the common machine learning pattern in which a 'local' model is specified by 'global' random values and used to generate independent observed data points; this gives rise to amortized local inference supporting global model learning.",
"A key step in insect olfaction is the transformation of a dense representation of odors in a small population of neurons - projection neurons (PNs) of the antennal lobe - into a sparse representation in a much larger population of neurons - Kenyon cells (KCs) of the mushroom body. What computational purpose does this transformation serve? We propose that the PN-KC network implements an online clustering algorithm which we derive from the k-means cost function. The vector of PN-KC synaptic weights converging onto a given KC represents the corresponding cluster centroid. KC activities represent attribution indices, i.e. the degree to which a given odor presentation is attributed to each cluster. Remarkably, such clustering view of the PN-KC circuit naturally accounts for several of its salient features. First, attribution indices are nonnegative thus rationalizing rectification in KCs. Second, the constraint on the total sum of attribution indices for each presentation is enforced by a Lagrange multiplier identified with the activity of a single inhibitory interneuron reciprocally connected with KCs. Third, the soft-clustering version of our algorithm reproduces observed sparsity and overcompleteness of the KC representation which may optimize supervised classification downstream."
],
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_15",
"@cite_20",
"@cite_13",
"@cite_12"
],
"mid": [
"",
"",
"2412589713",
"2160422165",
"2098694378",
"2417988721",
"2790015441",
"2950999850",
"",
"2537265010",
"2773928715"
]
}
|
Amortized Bayesian inference for clustering models
|
Unsupervised clustering is a key tool in many areas of statistics and machine learning, and analyses based on probabilistic generative models are crucial whenever there is irreducible uncertainty about the number of clusters and their members.
Popular posterior inference methods in these models fall into two broad classes. On the one hand, MCMC methods [1,2,3] are asymptotically accurate but time-consuming, with convergence that is difficult to assess. Models whose likelihood and prior are non-conjugate are particularly challenging, since in these cases the model parameters cannot be marginalized and must be kept as part of the state of the Markov chain. On the other hand, variational methods [4,5,6] are typically much faster but do not come with accuracy guarantees.
In this work we propose a novel approximate amortized approach, based on training neural networks to map distributed, symmetry-invariant representations of cluster arrangements into conditional probabilities. The method can be applied to both conjugate and non-conjugate models, and after training the network with samples from a particular generative model, we can obtain independent, GPUparallelizable, approximate posterior samples of cluster assignments for any new set of observations of arbitrary size, with no need for expensive MCMC steps.
The Neural Clustering Process
Probabilistic models for clustering [7] introduce random variables $c_i$ denoting the cluster to which data point $x_i$ is assigned, and assume a generating process of the form
$$c_1, \dots, c_N \sim p(c_1, \dots, c_N), \qquad (2.1)$$
$$\mu_k \sim p(\mu_k), \quad k = 1 \dots K, \qquad (2.2)$$
$$x_i \sim p(x_i \mid \mu_{c_i}), \quad i = 1 \dots N. \qquad (2.3)$$
Popular models in this setting include Mixtures of Finite Mixtures [8] and many Bayesian nonparametric models, such as Dirichlet process mixture models (DPMM) (see [9] for a recent overview).
Given $N$ data points $x = \{x_i\}$, we are interested in sampling the $c_i$'s, using the decomposition
$$p(c_{1:N} \mid x) = p(c_1 \mid x)\, p(c_2 \mid c_1, x) \cdots p(c_N \mid c_{1:N-1}, x). \qquad (2.4)$$
Note that p(c 1 x) = 1, since the first data point is always assigned to the first cluster. To motivate our approach, it is useful to consider the joint distribution of the assignments of the first n data points,
p(c_1, . . . , c_n | x).   (2.5)
We are interested in representations of x that keep the symmetries of (2.5):
• Permutations within a cluster: (2.5) is invariant under permutations of the x_i's belonging to the same cluster. If there are K clusters, each of them can be represented by
H_k = \sum_{i : c_i = k} h(x_i),   k = 1 . . . K,   (2.6)
where h : R^{d_x} → R^{d_h} is a function we will learn from data. This type of encoding has been shown in [10] to be necessary to represent functions with permutation symmetries.
• Permutations between clusters: (2.5) is invariant under permutations of the cluster labels. In terms of the within-cluster invariants H_k, this symmetry can be captured by
G = \sum_{k=1}^{K} g(H_k),   (2.7)
where g : R^{d_h} → R^{d_g}.
• Permutations of the unassigned data points: (2.5) is also invariant under permutations of the N − n unassigned data points. This can be captured by
Q = \sum_{i=n+1}^{N} h(x_i).   (2.8)
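These invariances are easy to check numerically. The following small NumPy sketch builds H_k, G and Q for a toy assignment and verifies that they are unchanged under the permutations described above; the encoder h and the map g are fixed random stand-ins, not the learned networks of the paper, and all names are illustrative.

import numpy as np

rng = np.random.default_rng(0)
d_x, d_h = 2, 8
W = rng.normal(size=(d_x, d_h))
h = lambda x: np.tanh(x @ W)        # stand-in for the learned encoder h : R^{d_x} -> R^{d_h}
g = np.tanh                         # stand-in for g (here d_g = d_h)

def encodings(x, c, n):
    """G and Q of eqs. (2.6)-(2.8) for the first n assigned and the N-n unassigned points."""
    hx = h(x)
    K = max(c[:n]) + 1
    H = [hx[:n][np.array(c[:n]) == k].sum(0) for k in range(K)]   # eq. (2.6)
    G = sum(g(Hk) for Hk in H)                                    # eq. (2.7)
    Q = hx[n:].sum(0)                                             # eq. (2.8)
    return G, Q

x = rng.normal(size=(10, d_x))
c = [0, 0, 1, 0, 1, 2]                         # assignments of the first n = 6 points
G1, Q1 = encodings(x, c, 6)

# reorder points inside their clusters and shuffle the unassigned points: G and Q are unchanged
perm = [3, 0, 2, 1, 4, 5, 9, 8, 7, 6]
G2, Q2 = encodings(x[perm], [c[i] for i in perm[:6]], 6)
print(np.allclose(G1, G2), np.allclose(Q1, Q2))   # True True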
Note that G and Q provide fixed-dimensional, symmetry-invariant representations of all the assigned and non-assigned data points, respectively, for any number N of data points and any number K of clusters. Consider now the conditional distribution that interests us,
p(c_n | c_{1:n−1}, x) = p(c_1, . . . , c_n | x) / \sum_{c'_n = 1}^{K+1} p(c_1, . . . , c'_n | x).   (2.9)
Assuming K different values in c_{1:n−1}, c_n can take K + 1 values, corresponding to x_n joining any of the K existing clusters or forming its own new cluster. Let us denote by G_k the value of (2.7) for each of these K + 1 configurations. In terms of the G_k's and Q, we propose to model (2.9) as
p_θ(c_n = k | c_{1:n−1}, x) = e^{f(G_k, Q, h_n)} / \sum_{k'=1}^{K+1} e^{f(G_{k'}, Q, h_n)}   (2.10)
for k = 1 . . . K + 1, where h_n = h(x_n) and θ denotes all the parameters in the functions h, g and f, which will be represented with neural networks. Note that this expression preserves the symmetries of the numerator and denominator in the rhs of (2.9). By storing and updating H_k and G for successive values of n, the computational cost of a full sample of c_{1:N} is O(NK), the same as that of a full Gibbs sweep. See Algorithm 1 for details; we term this approach the Neural Clustering Process (NCP).
Global permutation symmetry
There is yet another symmetry present in the lhs of (2.4) that is not evident in the rhs: a global simultaneous permutation of the c_i's. If our model learns the correct form for the conditional probabilities, this symmetry should be (approximately) satisfied. We monitor this symmetry during training.
Algorithm 1: The Neural Clustering Process (one full sample of c_{1:N})
1:  h_i ← h(x_i),  i = 1 . . . N
2:  Q ← \sum_{i=2}^{N} h_i                    ▷ Initialize unassigned set
3:  H_1 ← h_1                                 ▷ Create first cluster with x_1
4:  G ← g(H_1)
5:  K ← 1,  c_1 ← 1
6:  for n ← 2 . . . N do
7:      Q ← Q − h_n                           ▷ Remove x_n from unassigned set
8:      H_{K+1} ← 0                           ▷ We define g(0) = 0
9:      for k ← 1 . . . K + 1 do
10:         G ← G + g(H_k + h_n) − g(H_k)     ▷ Add x_n
11:         p_k ← e^{f(G, Q, h_n)}
12:         G ← G − g(H_k + h_n) + g(H_k)     ▷ Remove x_n
13:     end for
14:     p_k ← p_k / \sum_{k'=1}^{K+1} p_{k'}  ▷ Normalize probabilities
15:     c_n ∼ p_k                             ▷ Sample assignment for x_n
16:     if c_n = K + 1 then
17:         K ← K + 1
18:     end if
19:     G ← G − g(H_{c_n}) + g(H_{c_n} + h_n) ▷ Add point x_n
20:     H_{c_n} ← H_{c_n} + h_n
21: end for
22: return c_1 . . . c_N
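To make the control flow of Algorithm 1 concrete, the following NumPy sketch runs the same loop with fixed random stand-ins for the learned networks h, g and f (the real model trains MLPs for these; the matrices Wh, Wg, wf and all other names are illustrative assumptions). For readability it recomputes G for every candidate cluster instead of using the incremental G updates of Algorithm 1.

import numpy as np

rng = np.random.default_rng(1)

d_x, d_h, d_g = 2, 8, 8
Wh, Wg = rng.normal(size=(d_x, d_h)), rng.normal(size=(d_h, d_g))
wf = rng.normal(size=(d_g + 2 * d_h,))
h = lambda x: np.tanh(x @ Wh)                   # h : R^{d_x} -> R^{d_h}
g = lambda H: np.tanh(H @ Wg)                   # g : R^{d_h} -> R^{d_g}, with g(0) = 0
f = lambda G, Q, hn: float(np.concatenate([G, Q, hn]) @ wf)

def ncp_sample(x):
    """One approximate posterior sample of c_{1:N}, following Algorithm 1."""
    N = len(x)
    hx = h(x)
    Q = hx[1:].sum(0)                           # unassigned points x_2 ... x_N
    H = [hx[0].copy()]                          # the first cluster holds x_1
    c = [0]
    for n in range(1, N):
        Q = Q - hx[n]                           # x_n leaves the unassigned set
        cands = H + [np.zeros(d_h)]             # join an existing cluster or open a new one
        logits = np.array([f(sum(g(Hj) for Hj in cands) - g(Hk) + g(Hk + hx[n]), Q, hx[n])
                           for Hk in cands])
        p = np.exp(logits - logits.max())
        p /= p.sum()                            # eq. (2.10)
        k = int(rng.choice(len(cands), p=p))
        if k == len(H):                         # a new cluster was opened
            H.append(np.zeros(d_h))
        H[k] = H[k] + hx[n]
        c.append(k)
    return np.array(c)

print(ncp_sample(rng.normal(size=(12, d_x))))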
Learning
In order to learn the parameters θ, we use stochastic gradient descent to minimize the expected negative log-likelihood,
L(θ) = − E_{p(N)} E_{p(c_1,...,c_N,x)} E_{p(π)} [ \sum_{n=2}^{N} log p_θ(c_{π_n} | c_{π_{1:n−1}}, x) ],   (3.1)
where p(N) and p(π) are uniform over a range of integers and over N-permutations, and samples from p(c_1, . . . , c_N, x) are obtained from the generative model (2.1)-(2.3), irrespective of the model being conjugate. In Appendix C we show that (3.1) can be partially Rao-Blackwellized.
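To illustrate how (3.1) can be optimized in practice, the following PyTorch sketch runs a few SGD steps with teacher forcing on datasets drawn from a toy two-cluster generative model. The layer sizes, the toy data model and all names are assumptions made for the sketch; the paper's setup (Monte Carlo sample counts, DPMM generative models, architectures of Appendix A) differs.

import torch
import torch.nn as nn

d_x, d_h, d_g = 2, 32, 32
mlp = lambda i, o: nn.Sequential(nn.Linear(i, 128), nn.PReLU(), nn.Linear(128, o))
h_net, g_net, f_net = mlp(d_x, d_h), mlp(d_h, d_g), mlp(d_g + 2 * d_h, 1)
opt = torch.optim.Adam([p for m in (h_net, g_net, f_net) for p in m.parameters()], lr=1e-4)

def g(H):
    return g_net(H) - g_net(torch.zeros(d_h))   # one simple way to enforce g(0) = 0

def canonicalize(c):
    # relabel clusters in order of first appearance, so c[0] = 0 and a new cluster gets the next label
    seen, out = {}, []
    for ci in c.tolist():
        out.append(seen.setdefault(ci, len(seen)))
    return out

def neg_log_lik(x, c):
    # - sum_{n>=2} log p_theta(c_n | c_{1:n-1}, x), with the true assignments fed in (teacher forcing)
    hx = h_net(x)
    Q = hx[1:].sum(0)
    H, nll = [hx[0]], x.new_zeros(())
    for n in range(1, len(x)):
        Q = Q - hx[n]
        cands = H + [torch.zeros(d_h)]
        logits = torch.stack([
            f_net(torch.cat([sum(g(Hj) for Hj in cands) - g(Hk) + g(Hk + hx[n]), Q, hx[n]])).squeeze()
            for Hk in cands])
        k = c[n]
        nll = nll - torch.log_softmax(logits, dim=0)[k]
        if k == len(H):
            H.append(torch.zeros(d_h))
        H[k] = H[k] + hx[n]
    return nll

def sample_toy_dataset():
    # stand-in for the generative model (2.1)-(2.3): two well-separated Gaussian clusters
    n = int(torch.randint(5, 15, ()).item())
    c = torch.randint(0, 2, (n,))
    return 5.0 * c[:, None].float() + torch.randn(n, d_x), c

for step in range(3):                            # a few illustrative steps
    opt.zero_grad()
    loss = torch.zeros(())
    for _ in range(4):                           # a small mini-batch of datasets
        x, c = sample_toy_dataset()
        perm = torch.randperm(len(x))            # one sample of the permutation pi
        loss = loss + neg_log_lik(x[perm], canonicalize(c[perm]))
    loss.backward()
    opt.step()
    print(step, loss.item())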
Results
In this section we present examples of NCP clustering. The functions g and f have the same neural architecture in all cases, and for different data types we only change the encoding function h. More details are in Appendix A, where we also show that during training the variance of the joint likelihood (2.4) for different orderings of the data points drops to negligible values. Figure 1 shows results for a DPMM of 2D conjugate Gaussians. In particular, we compare the estimated assignment probabilities for the last observation of a set, c_N, against their exact values, which are computable for conjugate models, showing excellent agreement.
Figure 2 (caption fragment): ... and a uniform discrete base measure over the 10 labels. Conditioned on a label, observations are sampled uniformly from the MNIST training set. The figure shows N = 20 observations, generated similarly from the MNIST test set. The six rows below the observations show six samples of c_{1:20} from the NCP posterior of these 20 images. Most samples from the NCP yield the first row of assignments, which has very low negative log-likelihood (NLL) and is consistent with the true labels. The next five rows correspond to rarer samples from the NCP, with higher NLL, each capturing some ambiguity suggested by the form of particular digits. In this case we drew 39 samples: 34 corresponding to the first row, and one for each of the next five rows.
Outlook
We have introduced a new approach to sample from (approximate) posterior distributions of probabilistic clustering models. Our first results show reasonable agreement with Gibbs sampling, with major improvements in speed and model flexibility.
We implemented the functions g and f as six-layered MLPs with PReLU non-linearities [21], with 128 neurons in each layer, and final layers of dimensions d_g = 512 for g and 1 for f. We used stochastic gradient descent with ADAM [22], with a step-size of 10^{-4} for the first 1000 iterations, and 10^{-5} afterwards. The number of Monte Carlo samples from (3.1) in each mini-batch was: 1 for p(N), 8 for p(π), 1 for p(c_{1:N}) and 48 for p(µ_k) and p(x | µ).
A.1 Low-dimensional conjugate Gaussian models
The generative model for the examples in Figure 1 is
N ∼ Uniform[5, 100]   (A.1)
c_1, . . . , c_N ∼ DPMM(α)   (A.2)
µ_k ∼ N(0, σ_µ^2 1_2),   k = 1 . . . K   (A.3)
x_i ∼ N(µ_{c_i}, σ^2 1_2),   i = 1 . . . N   (A.4)
with α = 0.7, σ_µ = 10, σ = 1, and d_x = 2. The encoding function h(x) is a five-layered MLP with PReLU non-linearities, with 128 neurons in the inner layers and a last layer with d_h = 256 neurons.
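For concreteness, the generative model (A.1)-(A.4) can be sampled with a few lines of NumPy, using the Chinese-restaurant-process representation of the DPMM prior over assignments (a hedged sketch; the function and variable names are ours):

import numpy as np

rng = np.random.default_rng(0)

def sample_dpmm_dataset(alpha=0.7, sigma_mu=10.0, sigma=1.0, d_x=2):
    """Draw one synthetic dataset following (A.1)-(A.4)."""
    N = int(rng.integers(5, 101))                     # N ~ Uniform[5, 100]
    c, counts = [], []
    for _ in range(N):                                # Chinese restaurant process with concentration alpha
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = int(rng.choice(len(probs), p=probs))
        if k == len(counts):
            counts.append(0)
        counts[k] += 1
        c.append(k)
    c = np.array(c)
    mu = rng.normal(0.0, sigma_mu, size=(len(counts), d_x))   # mu_k ~ N(0, sigma_mu^2 1_2)
    x = mu[c] + rng.normal(0.0, sigma, size=(N, d_x))         # x_i ~ N(mu_{c_i}, sigma^2 1_2)
    return x, c

x, c = sample_dpmm_dataset()
print(x.shape, np.bincount(c))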
A.2 High-dimensional MNIST data
The generative model for the example in Figure 2 is
N ∼ Uniform[5, 100]   (A.5)
c_1, . . . , c_N ∼ DPMM(α)   (A.6)
l_k ∼ Uniform[0, 9],   k = 1 . . . K   (A.7)
x_i ∼ Uniform[MNIST digits with label l_{c_i}],   i = 1 . . . N   (A.8)
with α = 0.7 and d_x = 28 × 28. The architecture for h(x) was: two layers of [convolutional + maxpool + ReLU] followed by [fully connected(256) + ReLU + fully connected(d_h)], with d_h = 256.
A.3 Invariance under global permutations
As mentioned in Section 2.1, if the conditional probabilities (2.9) are learned correctly, invariance of the joint probability (2.4) under global permutations should hold. Figure 3 shows estimates of the variance of the joint probability under permutations as learning progresses, showing that it diminishes to negligible values.
B Importance Sampling
Samples from the NCP can be used either as approximate samples from the posterior, or as high-quality importance samples. (Alternatively, we could use samples from the NCP to seed an exact MCMC sampler; we have not yet explored this direction systematically.) In the latter case, the expectation of a function r(c) is given by
E_{p(c|x)}[r(c)] ≃ [ \sum_{s=1}^{S} (p(c_s, x) / p_θ(c_s | x)) r(c_s) ] / [ \sum_{s=1}^{S} p(c_s, x) / p_θ(c_s | x) ]   (B.1)
where each c_s is a sample from p_θ(c | x). Figure 4 shows a comparison between an expectation obtained from Gibbs samples and one obtained from NCP importance samples.
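A minimal NumPy sketch of the self-normalized estimator (B.1); the callables r, log_joint and log_q are assumed to be supplied by the caller (the test function, log p(c, x) under the generative model, and the NCP log-probability of each sample, respectively):

import numpy as np

def importance_estimate(r, samples, log_joint, log_q):
    """Self-normalized importance-sampling estimate of E_{p(c|x)}[r(c)], eq. (B.1)."""
    log_w = np.array([log_joint(c) - log_q(c) for c in samples])
    w = np.exp(log_w - log_w.max())                # subtract the max for numerical stability
    w /= w.sum()
    return sum(wi * r(c) for wi, c in zip(w, samples))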
C Rao-Blackwellization
With some more computational effort, it is possible to partially Rao-Blackwellize the expectation in (3.1) and reduce its variance.
C.1 Conjugate Models
For given N and x, a generic term in (3.1) can be written as
\sum_{c} p(c | x) log p_θ(c_n | c_{1:n−1}, x) = \sum_{c} p(c_{n:N} | c_{1:n−1}, x) p(c_{1:n−1} | x) log p_θ(c_n | c_{1:n−1}, x)
≃ \sum_{c_{n:N}} p(c_{n:N} | c_{1:n−1}, x) log p_θ(c_n | c_{1:n−1}, x)   (C.1)
= \sum_{c_n} p(c_n | c_{1:n−1}, x) log p_θ(c_n | c_{1:n−1}, x)   (C.2)
where we took π_i = i to simplify the notation. In (C.1) we replaced the expectation under p(c_{1:n−1} | x) with a sample of c_{1:n−1}, and in (C.2) we summed over c_{n+1:N}. The expectation in (C.2) has lower variance than using a sample of c_n instead. In the simple example of Figure 4, the variance of the Gibbs and NCP estimates is comparable, but the average CPU/GPU running time was 184 s for each NCP run of 20,000 samples, and 1969 s for each Gibbs run (with an additional 1000 burn-in samples). The time advantage of NCP is due to the fact that, since all samples are iid, NCP can be massively parallelized over GPUs, while in naive implementations of the Gibbs sampler the samples must be obtained sequentially.
The observation model is
p(µ | λ) = N(0, σ_µ^2 = λ^2)   (C.4)
p(x | µ, σ) = N(µ, σ_x^2 = σ^2)   (C.5)
with λ and σ fixed. In this case we get
p(x | c) = \prod_{k=1}^{K} \int dµ_k N(µ_k | 0, λ^2) \prod_{i : c_i = k} N(x_i | µ_k, σ^2)   (C.6)
= \prod_{k=1}^{K} (σ_k / λ) exp( σ_k^2 (\sum_{i_k} x_{i_k})^2 / (2σ^4) ) exp( − \sum_{i_k} x_{i_k}^2 / (2σ^2) )   (C.7)
where {i_k} = {i : c_i = k} and σ_k^{−2} = λ^{−2} + n_k σ^{−2}, with n_k = |{i_k}|, and
p(c_{1:N}) = α^{K_N} \prod_{k=1}^{K_N} (n_k − 1)! / \prod_{i=1}^{N} (i − 1 + α)   (C.8)
with α the Dirichlet process concentration parameter.
C.2 Nonconjugate Case
This case is similar, using
| 2,412 |
1906.10963
|
2954017757
|
Creating a highly parallel and flexible discrete element software requires an interdisciplinary approach, where expertise from different disciplines is combined. On the one hand domain specialists provide interaction models between particles. On the other hand high-performance computing specialists optimize the code to achieve good performance on different hardware architectures. In particular, the software must be carefully crafted to achieve good scaling on massively parallel supercomputers. Combining all this in a flexible and extensible, widely usable software is a challenging task. In this article we outline the design decisions and concepts of a newly developed particle dynamics code MESA-PD that is implemented as part of the waLBerla multi-physics framework. Extensibility, flexibility, but also performance and scalability are primary design goals for the new software framework. In particular, the new modular architecture is designed such that physical models can be modified and extended by domain scientists without understanding all details of the parallel computing functionality and the underlying distributed data structures that are needed to achieve good performance on current supercomputer architectures. This goal is achieved by combining the high performance simulation framework waLBerla with code generation techniques. All code and the code generator are released as open source under GPLv3 within the publicly available waLBerla framework (this http URL).
|
Another approach to let the user extend the software framework is shown by some molecular dynamics packages. They use high level languages like Python or their own embedded domain specific language (DSL) to describe the simulation. In order to achieve good performance on various hardware architectures they use just-in-time compilation to generate user specific executables for various architectures. However, to support MPI or CUDA additional wrapper libraries like pyMPI and PyCUDA are needed. Many packages using this technique claim that this can be done with almost no loss in performance compared to native C++ code. Packages that provide such capabilities with a varying degree of just-in-time compilation are for example @cite_12 , @cite_13 , @cite_6 and @cite_4 .
|
{
"abstract": [
"Developers of Molecular Dynamics (MD) codes face significant challenges when adapting existing simulation packages to new hardware. In a continuously diversifying hardware landscape it becomes increasingly difficult for scientists to be experts both in their own domain (physics chemistry biology) and specialists in the low level parallelisation and optimisation of their codes. To address this challenge, we describe a “Separation of Concerns” approach for the development of parallel and optimised MD codes: the science specialist writes code at a high abstraction level in a domain specific language (DSL), which is then translated into efficient computer code by a scientific programmer. In a related context, an abstraction for the solution of partial differential equations with grid based methods has recently been implemented in the (Py)OP2 library. Inspired by this approach, we develop a Python code generation system for molecular dynamics simulations on different parallel architectures, including massively parallel distributed memory systems and GPUs. We demonstrate the efficiency of the auto-generated code by studying its performance and scalability on different hardware and compare it to other state-of-the-art simulation packages. With growing data volumes the extraction of physically meaningful information from the simulation becomes increasingly challenging and requires equally efficient implementations. A particular advantage of our approach is the easy expression of such analysis algorithms. We consider two popular methods for deducing the crystalline structure of a material from the local environment of each atom, show how they can be expressed in our abstraction and implement them in the code generation framework.",
"Graphics processing units (GPUs), originally developed for rendering real-time effects in computer games, now provide unprecedented computational power for scientific applications. In this paper, we develop a general purpose molecular dynamics code that runs entirely on a single GPU. It is shown that our GPU implementation provides a performance equivalent to that of fast 30 processor core distributed memory cluster. Our results show that GPUs already provide an inexpensive alternative to such clusters and discuss implications for the future.",
"",
"OpenMM is a molecular dynamics simulation toolkit with a unique focus on extensibility. It allows users to easily add new features, including forces with novel functional forms, new integration algorithms, and new simulation protocols. Those features automatically work on all supported hardware types (including both CPUs and GPUs) and perform well on all of them. In many cases they require minimal coding, just a mathematical description of the desired function. They also require no modification to OpenMM itself and can be distributed independently of OpenMM. This makes it an ideal tool for researchers developing new simulation methods, and also allows those new methods to be immediately available to the larger community."
],
"cite_N": [
"@cite_4",
"@cite_13",
"@cite_6",
"@cite_12"
],
"mid": [
"2609184096",
"2078391824",
"",
"2949223833"
]
}
|
PROCEEDINGS OF THE 8TH INTERNATIONAL CONFERENCE ON DISCRETE ELEMENT METHODS (DEM8) A MODULAR AND EXTENSIBLE SOFTWARE ARCHITECTURE FOR PARTICLE DYNAMICS
|
be created that will fulfill as many requirements as possible. Here we will follow this approach in the design of a new simulation software for particle dynamics. Such a simulation software should be easy to use and it should be extensible by the end user. At the same time, high performance is desirable and a parallelization suitable for current supercomputers is necessary. Also, maintainability and portability are essential features since the software framework will often be used for a long time, exceeding the life span of computer systems. In this paper, we will discuss some of the requirements that we have identified for our software framework. We explain the implications and our design decisions to satisfy the requirements. In particular, we try to identify and modularize different components of particle dynamics simulations such that specialists can work on them independently. In the following, we present three technical requirements that guide the development of "Modular and Extensible Software Architecture for Particle Dynamics" (MESA-PD).
• flexible domain partitioning: Current state-of-the-art parallelization of particle dynamics software uses spatial domain partitioning. This domain partitioning is crucial for an efficient parallel execution. The way the domain is partitioned has an impact on how well the workload can be balanced between the different processes and how much communication is needed during synchronization. Since different simulation scenarios will lead to different optimal domain partitionings, a flexible approach is needed to achieve good performance in all cases. A flexible domain partitioning is also important for coupled simulations which run in parallel. Ideally, all coupled simulation components share the same partitioning to avoid communication overheads. Therefore it is a great advantage when the domain partitioning can be easily adapted to that of other simulation modules.
• extensible particle data structure: Different interaction models between the particles require the particles to have certain properties. These can be material parameters, electric charge, temperature, and many more. Also, the framework user might want to store additional information when extending the functionality. However, not all of these possible properties will be needed in every simulation. A conventional implementation would have to provide all possible properties in the particle data structure, even though in most applications only a few of them are used and many remain unused. This leads to a potentially huge amount of memory that is wasted and that slows down the simulation if it has to be copied or sent over the network. Also, the maintainability of the particle data structure becomes harder due to its size and dependencies. Here, a flexible approach is needed that allows adding and removing individual particle properties as needed for every specific simulation scenario.
• interaction model: MESA-PD is intended to be used as a scientific research tool. This makes it essential that particle interaction models can be adapted. It should be possible to implement new models easily within a short time to be able to test different approaches (rapid prototyping). In the best case, domain specialists who are not profoundly familiar with the framework should be able to modify existing or develop new interaction models. To achieve this, the interaction models must be decoupled from the rest of the framework as much as possible. Note, however, that this can easily lead to a design conflict regarding optimization. The possibility should still exist for a computer scientist to optimize the source code of the interaction model to achieve the best possible performance.
After identifying these requirements we will discuss our approach to fulfill them in Sec. 4-6.
OUR CONTRIBUTION
The plugin model to extend an existing framework is limited by the API provided by the framework. This indirect access to the simulation data may be a limiting factor if data is needed that cannot be accessed via the API. Since the API cannot be extended by the plugin itself either workarounds must be implemented or the extension is not possible. Depending on the internal data structures of the simulation framework, the interaction may be slow if data has to be converted between the framework and the plugin.
In the MESA-PD design we therefore favor a single C++ codebase with no indirections. We use the waLBerla multi-physics framework [5] as a basis for the implementation of MESA-PD. waLBerla is an open source high-performance framework written in C++ that has shown excellent performance and scalability for Lattice Boltzmann [6,7] and particle dynamics simulations [8,9]. Using this framework as a starting point gives instant access to many utility functions like storing results into SQLite databases, writing vtk output files, etc. Additionally, advanced functionality like load balancing [10] can be used. We extend this framework with the new MESA-PD module, which is the result of the requirements formulated in Sec. 1. Writing a native C++ module requires deep knowledge about the framework as well as profound programming skills. To achieve our requirements, however, we want the user to be able to make changes quickly. There are some proposals for the future C++ standard, like reflections, metaclasses, and concepts, which could make writing modules easier. Unfortunately they are not available in the current C++ standard. We therefore introduce an additional code generation step which generates parts of the module automatically. This additional step of compile-time transformations extends the possibilities the programmer has over bare C++ code. The code generation itself is performed via Jinja templates. Jinja templates allow us to introduce placeholders in the source code which will be filled later with the correct piece of code. They allow not only simple placeholders but also control structures like if and for. With these control structures one can disable replacements as well as generate a new line of source code for every item in a list. This alone, however, is not enough to make the coding much simpler. On top of the Jinja templates we introduce a Python library. The purpose of this Python library is to collect information about the simulation the user wants to conduct in a single place and then forward the information to all templates that work on that piece of information. In the end this allows the user of the framework to specify the simulation properties in a very high level representation. The distribution of the information is then handled by the Python library and the low level C++ source code is automatically generated by the Jinja template engine. This workflow is visualized in Fig. 1.
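The mechanism can be illustrated in a few lines of Python and Jinja (a hedged sketch: the property list, the template text and the emitted C++ are stand-ins, not the actual MESA-PD templates or library):

from jinja2 import Template

properties = [
    {"name": "position", "type": "Vec3",   "sync": "ALWAYS"},
    {"name": "radius",   "type": "double", "sync": "COPY"},
    {"name": "force",    "type": "Vec3",   "sync": "NEVER"},
]

# A for loop generates one line per property; the if filter selects only the
# properties that have to be packed into the MPI buffer.
pack_template = Template(
    "{% for p in properties if p.sync in ['ALWAYS', 'COPY'] %}"
    "buf << particle.get{{ p.name | capitalize }}();\n"
    "{% endfor %}"
)

print(pack_template.render(properties=properties))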
The code generation has to be run once by the user before the application gets compiled. With this approach we aim to leverage the full potential of a highly optimized framework and a unified code basis. Simultaneously, we can profit from the increased flexibility and straightforwardness gained from the code generation approach.
In the rest of this paper we will discuss the implementation satisfying the requirements presented. We will also point out where the new code generation approach greatly simplifies the programming task. A flexible domain partitioning approach using an interface is presented in Sec. 4. The handling of particle data and its extension with user-supplied data is discussed in Sec. 5. Finally we present our kernel interface in Sec. 6. This allows introducing new kernels without any deeper knowledge about the rest of the framework.
FLEXIBLE DOMAIN PARTITIONING
When running simulations in parallel with a spatial domain partitioning approach, each process is responsible for a specific subpart of the simulation domain. All particle interactions within this region can be easily detected and resolved since the process has all necessary information. However, at subdomain interfaces, the available information is insufficient. Particles from neighboring subdomains might overlap with the local subdomain but information about the particle is not locally present. Typically this problem is solved by introducing ghost particles. Ghost particles are copies of particles located at other processes which do not belong to the subdomain of the process. In particle dynamics simulations where interactions occur as soon as particles collide, ghost particles are created when they overlap the subdomain. These ghost particles are created and updated by the synchronization algorithm. State-of-the-art synchronization algorithms, as described in [11,9], are independent of the actual subdomain geometry. The information they need is the ownership of a particle, i.e., which process stores the data; this is typically the process which handles the subdomain the geometrical center of the particle lies in. In addition, they need to know whether the particle overlaps adjacent subdomains, which is commonly determined by checking its bounding volume [12,13] against the neighboring subdomains. So all tasks can be reduced to geometric functions that check points and bounding volumes against subdomains. As long as one can provide these functions, the synchronization algorithm can be implemented without specific knowledge about the domain partitioning. For this purpose MESA-PD introduces an interface to the domain partitioning that covers all functionality that is needed for an efficient parallelization of the simulation. Thus the algorithms become independent of the domain partitioning. Different implementations of this interface can take care of various peculiarities of the simulation carried out. Three exemplary domain partitionings realized with this interface can be seen in Fig. 2. Within the waLBerla multi-physics framework an implementation for the distributed forest of octrees domain partitioning [6,7] is available. This allows an easy interaction with additional waLBerla modules like the Lattice Boltzmann Method implementation within a single application [14,15]. The interface is also implemented by the HyTeG finite elements multigrid framework which uses unstructured tetrahedral meshes for the domain partitioning [16]. An artificial domain partitioning into spherical shells is also shown in Fig. 2.
Figure 2 (caption): This series of illustrations shows various domain partitionings for particle simulations. The particles are colored according to the MPI rank they belong to. The different subdomains are also colored accordingly. All domain partitionings can be used in a parallel simulation without changing the particle dynamics code. Only the domain interface has to be implemented for every partitioning.
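In code, such a domain interface might look as follows (a hedged Python sketch; the actual MESA-PD interface is written in C++ and its method names and signatures differ):

from abc import ABC, abstractmethod

class DomainPartitioning(ABC):
    @abstractmethod
    def owning_rank(self, center):
        """Rank of the process whose subdomain contains the particle center."""

    @abstractmethod
    def intersects_subdomain(self, rank, aabb):
        """True if the axis-aligned bounding box overlaps the subdomain of `rank`."""

class SliceAlongX(DomainPartitioning):
    """Toy 1D decomposition along x; synchronization code only ever sees the interface above."""
    def __init__(self, num_ranks, x_min, x_max):
        self.num_ranks, self.x_min = num_ranks, x_min
        self.width = (x_max - x_min) / num_ranks

    def owning_rank(self, center):
        return min(int((center[0] - self.x_min) / self.width), self.num_ranks - 1)

    def intersects_subdomain(self, rank, aabb):
        lo = self.x_min + rank * self.width
        hi = lo + self.width
        return aabb[0][0] < hi and aabb[1][0] > lo     # aabb = (min_corner, max_corner)

dom = SliceAlongX(num_ranks=4, x_min=0.0, x_max=8.0)
print(dom.owning_rank((3.2, 0.5, 0.0)))                          # -> 1
print(dom.intersects_subdomain(2, ((3.9, 0, 0), (4.1, 1, 1))))   # overlaps rank 2's slab -> True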
PARTICLE DATA STRUCTURE
A flexible particle data structure that can be adapted according to the needs of the current simulation scenario is important to keep the memory footprint small and performance high. However, changing the properties of the particles also involves adapting some algorithms. For example, the algorithm which updates all ghost particles needs to know what particle properties it should synchronize. To simplify the process of adding and removing particle properties and to update all algorithms accordingly we use our additional code generation library. The library offers high level functions to add properties to the particle data structure. This greatly reduces the workload for the programmer and it is also less demanding on the programming skills. A new particle property can be added in the following form:
addProperty(name, datatype, defValue, syncMode)
Here, name names the property and datatype specifies the data type of the property; the data type can be any valid C++ data type. The defValue is the value the property gets initialized with when a new particle is created. The most interesting parameter for parallel simulations is the syncMode. This parameter controls when and how the property gets synchronized between different processes. Different modes are available, namely (a usage sketch follows after the list):
NEVER This property is never synchronized. This is useful if it is used only to store intermediate results.
COPY This property is copied exactly once when a new ghost particle is created. This is typically used when the property does not change but is different for every particle. Depending on the simulation this can be something like mass, particle radius, etc.
MIGRATION Properties annotated with this syncMode are synchronized when the ownership of a particle changes (i.e., the particle leaves the current subdomain and is now in the subdomain of a different process). This can be used, for example, to synchronize the old force in a velocity Verlet integration scheme.
ALWAYS This property is synchronized in every iteration. This is used for properties which change frequently, like the position.
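As an illustration, three such declarations might look as follows (a hedged sketch: the collector class, the type names and the default values are assumptions, not the actual MESA-PD Python API):

class PropertySpec:
    """Stand-in collector so the sketch runs on its own; the real library provides this."""
    def __init__(self):
        self.properties = []
    def addProperty(self, name, datatype, defValue, syncMode):
        self.properties.append(dict(name=name, datatype=datatype,
                                    defValue=defValue, syncMode=syncMode))

ps = PropertySpec()
ps.addProperty("position", "Vec3",   "Vec3(0)",   "ALWAYS")    # synchronized in every step
ps.addProperty("radius",   "real_t", "real_t(0)", "COPY")      # copied once when a ghost is created
ps.addProperty("force",    "Vec3",   "Vec3(0)",   "NEVER")     # recomputed every step, never sent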
During the code generation, the library passes this information to the templates that need it. The templates are then translated into C++ source code. In the following we want to illustrate this process. First, the user defines that the particle data structure should contain three properties: position, radius and force. The position should be synchronized in every synchronization step, whereas the radius only needs to be synchronized when a new (ghost) particle is created. The force is recalculated in every time step and is therefore never synchronized. All this information is specified with three addProperty calls of the kind sketched above. In the corresponding Jinja template, a for loop prints the enclosed source code for every property into the C++ file, and an if statement selects only the specific properties that are needed in this context. With the information provided by the user, the template gets expanded into:
buf << particle.getPosition();
buf << particle.getRadius();
One of the major benefits of this approach is that the user only has to specify the particle properties once. According to what the user specified, multiple source files, as well as many occurrences within one source file, are adapted. This greatly reduces the burden of remembering all places in the source code which need to be adapted to work with this particular set of properties. It is also less error prone, since pairs like packing and unpacking are both generated, which eliminates possible inconsistencies. Not only are all algorithms adapted automatically, the user also gets exactly the particle data structure which perfectly suits the scenario. However, one additional step is needed: before compiling the application code the user has to run the code generation once. After that the application can be compiled as usual.
PARTICLE INTERACTION MODELS
In this section, we discuss how particle interactions can be implemented in MESA-PD. Saunders et al. [4] used the fact that most operations in their molecular dynamics code are carried out either on every molecule (apply gravity, do time integration) or on every pair of molecules (interactions). They proposed to separate these operations into individual functions which they called kernels. With this approach one can isolate the interaction models from the rest of the code. The idea that all operations concerning molecules can be written as kernels can also be applied to general particle dynamics codes. Using this concept, a domain specialist can implement an interaction model by writing a function which takes two particles as input and calculates the interaction. In our approach we go even further and also decouple the kernel code from the actual data. We use a so-called accessor interface that maps between the kernels and the actual data structure. When a kernel accesses particle properties it does so via the accessor. The accessor then locates the data and passes it to the kernel. The accessor is used for reading as well as writing particle data. This allows switching the implementation of the accessor interface without touching the kernels. With this approach one can change the particle data structure independently of the kernels; only the implementation of the accessor interface has to be adapted.
Listing 1 (caption): Example of an Euler integration kernel. All accesses to particle properties are handled by the Accessor template. This way the kernel is completely independent of the data structure and can be used with any data structure as long as an appropriate accessor implementation is available.
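The kernel-accessor separation can be conveyed with a short Python sketch (the original Listing 1 is a C++ functor templated on an Accessor type; the class and method names below are illustrative assumptions, not MESA-PD code):

class ListAccessor:
    """Accessor backed by plain Python lists; it could equally wrap any other storage layout."""
    def __init__(self, pos, vel, force, inv_mass):
        self.pos, self.vel, self.force, self.inv_mass = pos, vel, force, inv_mass
    def get_position(self, i):    return self.pos[i]
    def set_position(self, i, p): self.pos[i] = p
    def get_velocity(self, i):    return self.vel[i]
    def set_velocity(self, i, v): self.vel[i] = v
    def get_force(self, i):       return self.force[i]
    def get_inv_mass(self, i):    return self.inv_mass[i]

def explicit_euler(i, accessor, dt):
    """Per-particle kernel: one explicit Euler step, written only against the accessor interface."""
    v = accessor.get_velocity(i)
    a = tuple(f * accessor.get_inv_mass(i) for f in accessor.get_force(i))
    accessor.set_position(i, tuple(p + dt * vi for p, vi in zip(accessor.get_position(i), v)))
    accessor.set_velocity(i, tuple(vi + dt * ai for vi, ai in zip(v, a)))

ac = ListAccessor(pos=[(0.0, 0.0, 0.0)], vel=[(1.0, 0.0, 0.0)],
                  force=[(0.0, -9.81, 0.0)], inv_mass=[1.0])
explicit_euler(0, ac, dt=0.01)
print(ac.pos[0], ac.vel[0])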
This approach comes with many benefits. First of all, domain specialists who write the kernel code can do so without worrying about where data is stored and how to access it. All kernel accesses to the outside world are of the form "give me this information" and "store that information". This way the kernel code is completely independent of the rest of the framework. It is also possible to use the kernels with a different framework, as long as the data structures of that framework can be accessed via a particle accessor interface, making the kernels widely usable. This greatly increases the flexibility of the kernels and also offers more possibilities in coupling different simulation frameworks.
CONCLUSION
In this paper we presented a new approach to extend the high-performance framework waLBerla with a new particle dynamics module MESA-PD. MESA-PD employs a code generation step to simplify the task of writing modules for highly optimized frameworks. This additional step is realized by using a combination of Jinja templates and a newly designed Python library. With this approach, the user can give a high level description of the simulation using the Python library. In a second code generation step C++ source files are created by the Jinja template engine using the information the user has provided. This way the C++ source code files are tailored exactly to the description the user has given.
For the design of the module we have identified requirements that are essential for a modern particle dynamics framework. We then presented our resolution of these requirements within our newly developed module. The new module allows a more flexible domain partitioning in parallel simulations. This simplifies the task of coupling simulations as well as experimenting with more efficient domain partitionings tailored for a specific situation. We also introduced an advanced approach to create individual data structures for every simulation without manual code rewrites. Finally, we showed our approach to decouple the code for particle interactions not only from the rest of the framework but also make it independent of the data structures used to store the properties of the particles.
The new design has many benefits for the user of the framework. The code generation approach greatly reduces the lines of code the user has to write. If a single piece of information is needed at multiple places throughout the source code, the code generation takes care of adapting all files accordingly. For example, after defining a particle property with just one line in Python, the correct packing and unpacking functions for MPI buffers, debug output to the terminal, vtk output, output to databases, etc. are generated automatically. This not only saves time for the user, it also ensures that there are no inconsistencies between the functions, which could otherwise lead to hard-to-track-down errors.
The strict separation of data and kernels via the accessor interface allows to change parts of the code without interfering with other parts. With this approach specialists do not have to know the whole code base to introduce their knowledge. Additionally, due to the clear separation of all source code parts they might also be transferable to other frameworks.
| 3,102 |
1906.11030
|
2955591928
|
String data are often disseminated to support applications such as location-based service provision or DNA sequence analysis. This dissemination, however, may expose sensitive patterns that model confidential knowledge (e.g., trips to mental health clinics from a string representing a user's location history). In this paper, we consider the problem of sanitizing a string by concealing the occurrences of sensitive patterns, while maintaining data utility. First, we propose a time-optimal algorithm, TFS-ALGO, to construct the shortest string preserving the order of appearance and the frequency of all non-sensitive patterns. Such a string allows accurately performing tasks based on the sequential nature and pattern frequencies of the string. Second, we propose a time-optimal algorithm, PFS-ALGO, which preserves a partial order of appearance of non-sensitive patterns but produces a much shorter string that can be analyzed more efficiently. The strings produced by either of these algorithms may reveal the location of sensitive patterns. In response, we propose a heuristic, MCSR-ALGO, which replaces letters in these strings with carefully selected letters, so that sensitive patterns are not reinstated and occurrences of spurious patterns are prevented. We implemented our sanitization approach that applies TFS-ALGO, PFS-ALGO and then MCSR-ALGO and experimentally show that it is effective and efficient.
|
Data sanitization (also known as knowledge hiding) aims at concealing patterns modeling confidential knowledge by limiting their frequency, so that they are not easily mined from the data. Existing methods are applied to: (I) a collection of set-valued data (transactions) @cite_4 or spatiotemporal data (trajectories) @cite_16 ; (II) a collection of sequences @cite_11 @cite_0 ; or (III) a single sequence @cite_5 @cite_2 @cite_12 . Yet, none of these methods follows our CSD setting: Methods in category I are not applicable to string data, and those in categories II and III do not have guarantees on privacy-related constraints @cite_12 or on utility-related properties @cite_11 @cite_0 @cite_5 @cite_2 . Specifically, unlike our approach, @cite_12 cannot guarantee that all sensitive patterns are concealed (constraint C1 ), while @cite_11 @cite_0 @cite_5 @cite_2 do not guarantee the satisfaction of utility properties ( @math and P2 ).
|
{
"abstract": [
"",
"",
"Frequent event mining is a fundamental task to extract insight from an event sequence (long sequence of events that are associated with time points). However, it may expose sensitive events that leak confidential business knowledge or lead to intrusive inferences about groups of individuals. In this work, we aim to prevent this threat, by deleting occurrences of sensitive events, while preserving the utility of the event sequence. To quantify utility, we propose a model that captures changes, caused by deletion, to the probability distribution of events across the sequence. Based on the model, we define the problem of sanitizing an event sequence as an optimization problem. Solving the problem is important to preserve the output of many mining tasks, including frequent pattern mining and sequence segmentation. However, this is also challenging, due to the exponential number of ways to apply deletion to the sequence. To optimally solve the problem when there is one sensitive event, we develop an efficient algorithm based on dynamic programming. The algorithm also forms the basis of a simple, iterative method that optimally sanitizes an event sequence, when there are multiple sensitive events. Experiments on real and synthetic datasets show the effectiveness and efficiency of our method.",
"Fine-grained, personal data has been largely, continuously generated nowadays, such as location check-ins, web histories, physical activities, etc. Those data sequences are typically shared with untrusted parties for data analysis and promotional services. However, the individually-generated sequential data contains behavior patterns and may disclose sensitive information if not properly sanitized. Furthermore, the utility of the released sequence can be adversely affected by sanitization techniques. In this paper, we study the problem of individual sequence data sanitization with minimum utility loss, given user-specified sensitive patterns. We propose a privacy notion based on information theory and sanitize sequence data via generalization. We show the optimization problem is hard and develop two efficient heuristic solutions. Extensive experimental evaluations are conducted on real-world datasets and the results demonstrate the efficiency and effectiveness of our solutions.",
"The process of discovering relevant patterns holding in a database was first indicated as a threat to database security by O'Leary in. Since then, many different approaches for knowledge hiding have emerged over the years, mainly in the context of association rules and frequent item sets mining. Following many real-world data and application demands, in this paper, we shift the problem of knowledge hiding to contexts where both the data and the extracted knowledge have a sequential structure. We define the problem of hiding sequential patterns and show its NP-hardness. Thus, we devise heuristics and a polynomial sanitization algorithm. Starting from this framework, we specialize it to the more complex case of spatiotemporal patterns extracted from moving objects databases. Finally, we discuss a possible kind of attack to our model, which exploits the knowledge of the underlying road network, and enhance our model to protect from this kind of attack. An exhaustive experiential analysis on real-world data sets shows the effectiveness of our proposal.",
"Complex Event Processing (CEP) has emerged as a technology for monitoring event streams in search of user specified event patterns. When a CEP system is deployed in sensitive environments the user may wish to mitigate leaks of private information while ensuring that useful nonsensitive patterns are still reported. In this paper we consider how to suppress events in a stream to reduce the disclosure of sensitive patterns while maximizing the detection of nonsensitive patterns. We first formally define the problem of utility-maximizing event suppression with privacy preferences, and analyze its computational hardness. We then design a suite of real-time solutions to solve this problem. Our first solution optimally solves the problem at the event-type level. The second solution, at the event-instance level, further optimizes the event-type level solution by exploiting runtime event distributions using advanced pattern match cardinality estimation techniques. Our user study and experimental evaluation over both real-world and synthetic event streams show that our algorithms are effective in maximizing utility yet still efficient enough to offer near real-time system responsiveness.",
"Sequence datasets are encountered in a plethora of applications spanning from web usage analysis to healthcare studies and ubiquitous computing. Disseminating such datasets offers remarkable opportunities for discovering interesting knowledge patterns, but may lead to serious privacy violations if sensitive patterns, such as business secrets, are disclosed. In this work, we consider how to sanitize data to prevent the disclosure of sensitive patterns during sequential pattern mining, while ensuring that the nonsensitive patterns can still be discovered. First, we re-define the problem of sequential pattern hiding to capture the information loss incurred by sanitization in terms of both events' modification (distortion) and lost nonsensitive knowledge patterns (side-effects). Second, we model sequences as graphs and propose two algorithms to solve the problem by operating on the graphs. The first algorithm attempts to sanitize data with minimal distortion, whereas the second focuses on reducing the side-effects. Extensive experiments show that our algorithms outperform the existing solution in terms of data distortion and side-effects and are more efficient."
],
"cite_N": [
"@cite_4",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"",
"2169273428",
"2270418173",
"2156345098",
"2094206427",
"2165924229"
]
}
| 0 |
||
1906.11030
|
2955591928
|
String data are often disseminated to support applications such as location-based service provision or DNA sequence analysis. This dissemination, however, may expose sensitive patterns that model confidential knowledge (e.g., trips to mental health clinics from a string representing a user's location history). In this paper, we consider the problem of sanitizing a string by concealing the occurrences of sensitive patterns, while maintaining data utility. First, we propose a time-optimal algorithm, TFS-ALGO, to construct the shortest string preserving the order of appearance and the frequency of all non-sensitive patterns. Such a string allows accurately performing tasks based on the sequential nature and pattern frequencies of the string. Second, we propose a time-optimal algorithm, PFS-ALGO, which preserves a partial order of appearance of non-sensitive patterns but produces a much shorter string that can be analyzed more efficiently. The strings produced by either of these algorithms may reveal the location of sensitive patterns. In response, we propose a heuristic, MCSR-ALGO, which replaces letters in these strings with carefully selected letters, so that sensitive patterns are not reinstated and occurrences of spurious patterns are prevented. We implemented our sanitization approach that applies TFS-ALGO, PFS-ALGO and then MCSR-ALGO and experimentally show that it is effective and efficient.
|
Anonymization aims to prevent the disclosure of individuals' identity and/or information that individuals are not willing to be associated with @cite_22 @cite_6 . Anonymization works such as @cite_22 @cite_18 @cite_14 are thus not alternatives to our work (see the appendix).
|
{
"abstract": [
"Frequent sequential pattern mining is a central task in many fields such as biology and finance. However, release of these patterns is raising increasing concerns on individual privacy. In this paper, we study the sequential pattern mining problem under the differential privacy framework which provides formal and provable guarantees of privacy. Due to the nature of the differential privacy mechanism which perturbs the frequency results with noise, and the high dimensionality of the pattern space, this mining problem is particularly challenging. In this work, we propose a novel two-phase algorithm for mining both prefixes and substring patterns. In the first phase, our approach takes advantage of the statistical properties of the data to construct a model-based prefix tree which is used to mine prefixes and a candidate set of substring patterns. The frequency of the substring patterns is further refined in the successive phase where we employ a novel transformation of the original data to reduce the perturbation noise. Extensive experiment results using real datasets showed that our approach is effective for mining both substring and prefix patterns in comparison to the state-of-the-art solutions.",
"Sequential data is being increasingly used in a variety of applications. Publishing sequential data is of vital importance to the advancement of these applications. However, as shown by the re-identification attacks on the AOL and Netflix datasets, releasing sequential data may pose considerable threats to individual privacy. Recent research has indicated the failure of existing sanitization techniques to provide claimed privacy guarantees. It is therefore urgent to respond to this failure by developing new schemes with provable privacy guarantees. Differential privacy is one of the only models that can be used to provide such guarantees. Due to the inherent sequentiality and high-dimensionality, it is challenging to apply differential privacy to sequential data. In this paper, we address this challenge by employing a variable-length n-gram model, which extracts the essential information of a sequential database in terms of a set of variable-length n-grams. Our approach makes use of a carefully designed exploration tree structure and a set of novel techniques based on the Markov assumption in order to lower the magnitude of added noise. The published n-grams are useful for many purposes. Furthermore, we develop a solution for generating a synthetic database, which enables a wider spectrum of data analysis tasks. Extensive experiments on real-life datasets demonstrate that our approach substantially outperforms the state-of-the-art techniques.",
"In recent years, privacy preserving data mining has become an important problem because of the large amount of personal data which is tracked by many business applications. An important method for privacy preserving data mining is the method of condensation. This method is often used in the case of multi-dimensional data in which pseudo-data is generated to mask the true values of the records. However, these methods are not easily applicable to the case of string data, since they require the use of multi-dimensional statistics in order to generate the pseudo-data. String data are especially important in the privacy preserving data-mining domain because most DNA and biological data are coded as strings. In this article, we will discuss a new method for privacy preserving mining of string data with the use of simple template-based models. The template-based model turns out to be effective in practice, and preserves important statistical characteristics of the strings such as intra-record distances. We will explore the behavior in the context of a classification application, and show that the accuracy of the application is not affected significantly by the anonymization process.",
"We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σ i g(x i ), where x i denotes the ith row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive."
],
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_6"
],
"mid": [
"1966862198",
"2108215372",
"1998986980",
"2517104773"
]
}
| 0 |
||
1906.10982
|
2956057537
|
The area of parameterized approximation seeks to combine approximation and parameterized algorithms to obtain, e.g., (1+eps)-approximations in f(k,eps)n^{O(1)} time where k is some parameter of the input. We obtain the following results on parameterized approximability: 1) In the maximum independent set of rectangles problem (MISR) we are given a collection of n axis-parallel rectangles in the plane. Our goal is to select a maximum-cardinality subset of pairwise non-overlapping rectangles. This problem is NP-hard and also W[1]-hard [Marx, ESA'05]. The best-known polynomial-time approximation factor is O(log log n) [Chalermsook and Chuzhoy, SODA'09] and it admits a QPTAS [Adamaszek and Wiese, FOCS'13; Chuzhoy and Ene, FOCS'16]. Here we present a parameterized approximation scheme (PAS) for MISR, i.e. an algorithm that, for any given constant eps>0 and integer k>0, in time f(k,eps)n^{g(eps)}, either outputs a solution of size at least k/(1+eps), or declares that the optimum solution has size less than k. 2) In the (2-dimensional) geometric knapsack problem (TDK) we are given an axis-aligned square knapsack and a collection of axis-aligned rectangles in the plane (items). Our goal is to translate a maximum cardinality subset of items into the knapsack so that the selected items do not overlap. In the version of TDK with rotations (TDKR), we are allowed to rotate items by 90 degrees. Both variants are NP-hard, and the best-known polynomial-time approximation factors are 558/325+eps and 4/3+eps, resp. [, FOCS'17]. These problems admit a QPTAS for polynomially bounded item sizes [Adamaszek and Wiese, SODA'15]. We show that both variants are W[1]-hard. Furthermore, we present a PAS for TDKR. For all considered problems, getting time f(k,eps)n^{O(1)}, rather than f(k,eps)n^{g(eps)}, would give FPT time f'(k)n^{O(1)} exact algorithms using eps=1/(k+1), contradicting W[1]-hardness.
|
One of the first fruitful connections between parameterized complexity and approximability was observed independently by Bazgan @cite_23 and Cesati and Trevisan @cite_29 : They showed that EPTASs, i.e., @math -approximation algorithms with @math time, imply fixed-parameter tractability for the decision version. Thus, proofs for W[1]-hardness of the decision version became a strong tool for ruling out improvements of PTASs, with running time @math , to EPTASs. More recently, @cite_25 improved this approach by directly proving W[1]-hardness of obtaining a @math -approximation, thus bypassing the requirement of a W[1]-hard decision version (see also @cite_33 ).
|
{
"abstract": [
"A polynomial time approximation scheme (PTAS) for an optimization problem A is an algorithm that given in input an instance of A and e > 0 finds a (1 + e)-approximate solution in time that is polynomial for each fixed e. Typical running times are nO(1e) or 21eO(1) n. While algorithms of the former kind tend to be impractical, the latter ones are more interesting. In several cases, the development of algorithms of the second type required considerably new, and sometimes harder, techniques. For some interesting problems, only nO(1e) approximation schemes are known. Under likely assumptions, we prove that for some problems (including natural ones) there cannot be approximation schemes running in time f(1 ϵ) n p0(1), no matter how fast function f grows. Our result relies on a connection with Parameterized Complexity Theory, and we show that this connection is necessary.",
"In the Closest String problem one is given a family S of equal-length strings over some fixed alphabet, and the task is to find a string y that minimizes the maximum Hamming distance between y and a string from S. While polynomial-time approximation schemes (PTASes) for this problem are known for a long time [; J. ACM'02], no efficient polynomial-time approximation scheme (EPTAS) has been proposed so far. In this paper, we prove that the existence of an EPTAS for Closest String is in fact unlikely, as it would imply that FPT=W[1], a highly unexpected collapse in the hierarchy of parameterized complexity classes. Our proof also shows that the existence of a PTAS for Closest String with running time f(eps) n^o(1 eps), for any computable function f, would contradict the Exponential Time Hypothesis.",
"Given n length-L strings S = s1, …, s n over a constant size alphabet Σ together with an integer l, where l ≤ L, the objective of Consensus Patterns is to find a length-l string s, a substring t i of each s i in S such that ∑ ∀ id(t i , s) is minimized. Here d(x, y) denotes the Hamming distance between the two strings x and y. Consensus Patterns admits a PTAS [, JCSS 2002] is fixed parameter tractable when parameterized by the objective function value [Marx, SICOMP 2008], and although it is a well-studied problem, improvement of the PTAS to an EPTAS seemed elusive. We prove that Consensus Patterns does not admit an EPTAS unless FPT=W[1], answering an open problem from [, STACS 2002, Combinatorica 2006]. To the best of our knowledge, Consensus Patterns is the first problem that admits a PTAS, and is fixed parameter tractable when parameterized by the value of the objective function but does not admit an EPTAS under plausible complexity assumptions. The proof of our hardness of approximation result combines parameterized reductions and gap preserving reductions in a novel manner.",
""
],
"cite_N": [
"@cite_29",
"@cite_33",
"@cite_25",
"@cite_23"
],
"mid": [
"1972853793",
"2963006872",
"2281706401",
""
]
}
|
Parameterized Approximation Schemes for Independent Set of Rectangles and Geometric Knapsack
|
a maximum cardinality subset of items into the knapsack so that the selected items do not overlap. In the version of 2dk with rotations (2dkr), we are allowed to rotate items by 90 degrees. Both variants are NP-hard, and the best-known polynomial-time approximation factor is 2 + ε [Jansen and Zhang, SODA'04]. These problems admit a QPTAS for polynomially bounded item sizes [Adamaszek and Wiese, SODA'15]. We show that both variants are W[1]-hard. Furthermore, we present a PAS for 2dkr. For all considered problems, getting time f(k, ε)n^{O(1)}, rather than f(k, ε)n^{g(ε)}, would give FPT time f(k)n^{O(1)} exact algorithms by setting ε = 1/(k + 1), contradicting W[1]-hardness. Instead, for each fixed ε > 0, our PASs give (1 + ε)-approximate solutions in FPT time.
For both misr and 2dkr our techniques also give rise to preprocessing algorithms that take n^{g(ε)} time and return a subset of at most k^{g(ε)} rectangles/items that contains a solution of size at least k/(1 + ε) if a solution of size k exists. This is a special case of the recently introduced notion of a polynomial-size approximate kernelization scheme [Lokshtanov et al., STOC'17].
Introduction
Approximation algorithms and parameterized algorithms are two well-established ways to deal with NP-hard problems. An α-approximation for an optimization problem is a polynomial-time algorithm that computes a feasible solution whose cost is within a factor α (that might be a function of the input size n) of the optimal cost. In particular, a polynomial-time approximation scheme (PTAS) is a (1 + ε)-approximation algorithm running in time n^{g(ε)}, where ε > 0 is a given constant and g is some computable function. In parameterized algorithms we identify a parameter k of the input, that we informally assume to be much smaller than n. The goal here is to solve the problem optimally in fixed-parameter tractable (FPT) time f(k) · n^{O(1)}, where f is some computable function. Recently, researchers started to combine the two notions (see, e.g., the survey by Marx [34]). The idea is to design approximation algorithms that run in FPT (rather than polynomial) time, e.g., to get (1 + ε)-approximate solutions in time f(k, ε) · n^{O(1)}. In this paper we continue this line of research on parameterized approximation, and apply it to two fundamental rectangle packing problems.
Our results and techniques
Our focus is on parameterized approximation algorithms. Unfortunately, as observed by Marx [34], when the parameter k is the desired solution size, computing (1 + ε)-approximate solutions in time f(k, ε) · n^{O(1)} implies fixed-parameter tractability. Indeed, setting ε = 1/(k + 1) guarantees to find an optimal solution when that value equals k ∈ N, and we get time f(k, 1/(k + 1)) · n^{O(1)} = f'(k) · n^{O(1)}. Since the considered problems are W[1]-hard (in part, this is established in our work), they are unlikely to be FPT and similarly unlikely to have such nice approximation schemes. Instead, we construct algorithms (for two maximization problems) that, given ε > 0 and an integer k, take time f(k, ε) · n^{g(ε)} and either return a solution of size at least k/(1 + ε) or declare that the optimum is less than k. We call such an algorithm a parameterized approximation scheme (PAS). Note that if we run such an algorithm for each k' ≤ k then we can guarantee that we compute a solution with cardinality at least min{k, OPT}/(1 + ε) where OPT denotes the size of the optimal solution. So intuitively, for each ε > 0, we have an FPT-algorithm for getting a (1 + ε)-approximate solution.
In this paper we consider the following two geometric packing problems, and design PASs for them.
Maximum Independent Set of Rectangles.
In the maximum independent set of rectangles problem (misr) we are given a set of n axis-parallel rectangles R = {R_1, . . . , R_n} in the two-dimensional plane, where R_i is the open set of points (x_i^{(1)}, x_i^{(2)}) × (y_i^{(1)}, y_i^{(2)}). A feasible solution is a subset of rectangles R' ⊆ R such that any two distinct rectangles in R' are disjoint. Our objective is to find a feasible solution of maximum cardinality |R'|. W.l.o.g. we assume that x_i^{(1)}, y_i^{(1)}, x_i^{(2)}, y_i^{(2)} ∈ {0, . . . , 2n − 1} for each R_i ∈ R (see e.g. [1]). misr is very well-studied in the area of approximation algorithms. The problem is known to be NP-hard [24], and the current best polynomial-time approximation factor is O(log log n) for the cardinality case [11] (addressed in this paper), and O(log n/ log log n) for the natural generalization with rectangle weights [12]. The cardinality case also admits a (1 + ε)-approximation with a running time of n^{poly(log log(n/ε))} [15] and there is a (slower) QPTAS known for the weighted case [1]. The problem is also known to be W[1]-hard w.r.t. the number k of rectangles in the solution [33], and thus unlikely to be solvable in FPT time f(k) · n^{O(1)}.
In this paper we achieve the following main result:
Theorem 1. There is a PAS for misr with running time k^{O(k/ε^8)} · n^{O(1/ε^8)}.
In order to achieve the above result, we combine several ideas. Our starting point is a polynomial-time construction of a k × k grid such that each rectangle in the input contains some crossing point of this grid (or we find a solution of size k directly). By applying (in a non-trivial way) a result by Frederickson [21] on planar graphs, and losing a small factor in the approximation, we define a decomposition of our grid into a collection of disjoint groups of cells. Each such group defines an independent instance of the problem, consisting of the rectangles strictly contained in the considered group of cells. Furthermore, we guarantee that each group spans only a constant number O_ε(1) of rectangles of the optimum solution. Therefore in FPT time we can guess the correct decomposition, and solve each corresponding subproblem in n^{O_ε(1)} time. We remark that our approach deviates substantially from prior work, and might be useful for other related problems. An adaptation of our construction also leads to the following (1 + ε)-approximate kernelization.
Theorem 2. There is an algorithm for misr that, given k ∈ N, computes in time n^{O(1/ε^8)} a subset of the input rectangles of size k^{O(1/ε^8)} that contains a solution of size at least k/(1 + ε), assuming that the input instance admits a solution of size at least k.
Similarly as for a PAS, if we run the above algorithm for each k' ≤ k we obtain a set of size k^{O(1/ε^8)} that contains a solution of size at least min{k, OPT}/(1 + ε). Observe that any c-approximate solution on the obtained set of rectangles is also a feasible, and c(1 + ε)-approximate, solution for the original instance if OPT ≤ k, and otherwise has size at least k/(c(1 + ε)). Thus, our result is a special case of a polynomial-size approximate kernelization scheme (PSAKS) as defined in [32].
Two-dimensional Geometric Knapsack.
In the two-dimensional geometric knapsack problem (2dk) each input item i ∈ I is an open rectangle (0, w_i) × (0, h_i), with N ≥ w_i, h_i ∈ N. The goal is to find a feasible packing of a subset I' ⊆ I of the items of maximum cardinality |I'|. Such a packing maps each item i ∈ I' into a new translated rectangle (a_i, a_i + w_i) × (b_i, b_i + h_i) ⊆ [0, N]^2, so that the translated rectangles are pairwise disjoint. We prove that 2dk and its variant with rotations (2dkr) are W[1]-hard (Theorem 3). The result is proved by parameterized reductions from a variant of the W[1]-hard subset sum problem, where we need to determine whether a set of m positive integers contains a k-tuple of numbers with sum equal to some given value t. The difficulty for reductions to 2dk or 2dkr is of course that rectangles may be freely selected and placed (and possibly rotated) to get a feasible packing.
We complement the W[1]-hardness result by giving a PAS for the case with rotations (2dkr) and a corresponding kernelization procedure like in Theorem 2 (which also yields a PSAKS).
Theorem 4.
For 2dkr there is a PAS with running time k^{O(k/ε)} · n^{O(1/ε^3)} and an algorithm that, given k ∈ N, computes in time n^{O(1/ε^3)} a subset of the input items of size k^{O(1/ε)} that contains a solution of size at least k/(1 + ε), assuming that the input instance admits a solution of size at least k.
The above result is based on a simple combination of the following two (non-trivial) building blocks: First, we show that, by losing a fraction ε of the items of a given solution of size k, it is possible to free a vertical strip of width N/k^{O_ε(1)} (unless the problem can be solved trivially). This is achieved by first sparsifying the solution using the above mentioned result by Frederickson [21]. If this is not sufficient we construct a vertical chain of relatively wide and tall rectangles that split the instance into a left and right side. Then we design a resource augmentation algorithm, however in an FPT sense: we can compute in FPT time a packing of cardinality k if we are allowed to use a knapsack where one side is enlarged by a factor 1 + 1/k^{O_ε(1)}. Note that in typical resource augmentation results the packing constraint is relaxed by a constant factor while here this amount is controlled by our parameter.
A Parameterized Approximation Scheme for MISR
In this section we present a PAS and an approximate kernelization for misr. We start by showing that there exists an almost optimal solution for the problem with some helpful structural properties (Sections 2.1 and 2.2). The results are then put together in Section 2.3.
Definition of the grid
We try to construct a non-uniform grid with k rows and k columns such that each input rectangle overlaps a corner of this grid (see Figure 1). To this end, we want to compute k − 1 vertical and k − 1 horizontal lines such that each input rectangle intersects one line from each set. There are instances in which our routine fails to construct such a grid (and in fact such a grid might not even exist). For such instances, we directly find a feasible solution with k rectangles and we are done.
Lemma 5.
There is a polynomial time algorithm that either computes a set of at most k − 1 vertical lines L_V with x-coordinates V_1, . . . , V_{k−1} such that each input rectangle is crossed by one line in L_V, or computes a feasible solution with k rectangles. A symmetric statement holds for an algorithm computing a set of at most k − 1 horizontal lines L_H with y-coordinates H_1, . . . , H_{k−1}.
Proof. Let V_0 := 0. Assume inductively that we have defined x-coordinates V_0, V_1, . . . , V_{k'} such that V_1, . . . , V_{k'} are the x-coordinates of the first k' constructed vertical lines. We define the x-coordinate of the (k' + 1)-th vertical line by V_{k'+1} := min{x_i^{(2)} : R_i ∈ R, x_i^{(1)} ≥ V_{k'}} − 1/2. We continue with this construction until we reach an iteration k* such that {R_i ∈ R : x_i^{(1)} ≥ V_{k*−1}} = ∅.
If k* ≤ k then we constructed at most k − 1 lines such that each input rectangle is intersected by one of these lines. Otherwise, assume that k* > k. Then for each iteration k' ∈ {1, . . . , k} we can find a rectangle R_{i(k')} := argmin{x_i^{(2)} : R_i ∈ R, x_i^{(1)} ≥ V_{k'−1}}. By construction, using the fact that all coordinates are integer, for any two such rectangles R_{i(k')}, R_{i(k'')} with k' ≠ k'' we have that (x_{i(k')}^{(1)}, x_{i(k')}^{(2)}) ∩ (x_{i(k'')}^{(1)}, x_{i(k'')}^{(2)}) = ∅. Hence, R_{i(k')} and R_{i(k'')} are disjoint. Therefore, the rectangles R_{i(1)}, . . . , R_{i(k)} are pairwise disjoint and thus form a feasible solution.
The algorithm for constructing the horizontal lines works symmetrically.
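The greedy sweep in the proof above is easy to implement. The following Python sketch is an illustration only; the rectangle format (x1, x2, y1, y2) with integer open-interval coordinates is an assumption, not the paper's notation. It returns either the x-coordinates of at most k − 1 stabbing lines or k pairwise disjoint witness rectangles.

```python
def vertical_lines_or_solution(rects, k):
    """Greedy sweep of Lemma 5 (sketch).

    rects: list of (x1, x2, y1, y2) integer tuples, read as open boxes (x1, x2) x (y1, y2).
    Returns ("lines", [V_1, ...]) with at most k-1 x-coordinates stabbing every rectangle,
    or ("solution", [k pairwise disjoint rectangles]).
    """
    lines, witnesses = [], []
    v = 0.0                                      # V_0 := 0
    while len(lines) < k:
        alive = [r for r in rects if r[0] >= v]  # rectangles entirely to the right of the last line
        if not alive:
            return "lines", lines                # every rectangle is crossed by one of the lines
        w = min(alive, key=lambda r: r[1])       # argmin of the right coordinate x^(2)
        witnesses.append(w)
        v = w[1] - 0.5                           # V_{k'+1} := min x^(2) - 1/2
        lines.append(v)
    # k successful iterations: the witnesses have pairwise disjoint x-projections, hence are disjoint
    return "solution", witnesses
```

Calling the same routine on the transposed rectangles yields the horizontal lines L_H.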
We apply the algorithms due to Lemma 5. If one of them finds a set of k independent rectangles then we output them and we are done. Otherwise, we obtain the sets L_V and L_H. For convenience, we define two more vertical lines with x-coordinates V_0 := 0 and V_{|L_V|+1} := 2n − 1, resp., and similarly two more horizontal lines with y-coordinates H_0 := 0 and H_{|L_H|+1} := 2n − 1, resp. We denote by G the set of grid cells formed by these lines and the lines in L_V ∪ L_H: for any two consecutive vertical lines (i.e., defined via x-coordinates V_j, V_{j+1} with j ∈ {0, . . . , |L_V|}) and any two consecutive horizontal grid lines (defined via y-coordinates H_{j'}, H_{j'+1} with j' ∈ {0, . . . , |L_H|}) we obtain a grid cell whose corners are the intersections of these respective lines. We interpret the grid cells as closed sets (i.e., two adjacent grid cells intersect on their boundary).
Proposition 6. Each input rectangle R i contains a corner of a grid cell of G. If a rectangle R intersects a grid cell g then it must contain a corner of g.
Groups of rectangles
Let R* denote a solution to the given instance with |R*| = k. We prove that there is a special solution R' ⊆ R* of large cardinality that we can partition into s ≤ k groups R'_1 ∪ . . . ∪ R'_s such that each group has constant size O(1/ε^8) and no grid cell can be intersected by rectangles from different groups. The remainder of this section is devoted to proving the following lemma.
Lemma 7.
There is a constant c = O(1/ε^8) such that there exists a solution R' ⊆ R* with |R'| ≥ (1 − ε)|R*| and a partition R' = R'_1 ∪ . . . ∪ R'_s with s ≤ k and |R'_j| ≤ c for each j, and such that if any two rectangles in R' intersect the same grid cell g ∈ G then they are contained in the same set R'_j.
Given the solution R* we construct a planar graph G_1 = (V_1, E_1). In V_1 we have one vertex v_i for each rectangle R_i ∈ R*. We connect two vertices v_i, v_{i'} by an edge if and only if there is a grid cell g ∈ G such that R_i and R_{i'} intersect g and either R_i and R_{i'} are crossed by the same horizontal or vertical line in L_V ∪ L_H, or R_i and R_{i'} contain the top left and the bottom right corner of g, resp. Note that we do not introduce an edge if R_i and R_{i'} contain the bottom left and the top right corner of g, resp. (see Fig. 1): this way we preserve the planarity of the resulting graph, however we will have to deal with the missing connections in a later stage. Let G'_1 be the graph obtained when applying Lemma 9 to G_1 with ε' := ε/2, and let c_1 = O((1/ε')^2) be the respective value c'. Now we would like to claim that if two rectangles R_i, R_{i'} intersect the same grid cell g ∈ G then v_i, v_{i'} are in the same component of G'_1. Unfortunately, this is not true. It might be that there is a grid cell g ∈ G such that R_i and R_{i'} contain the bottom left corner and the top right corner of g, resp., and that v_i and v_{i'} are in different components of G'_1. We fix this in a second step. We define a graph G_2 = (V_2, E_2). In V_2 we have one vertex for each connected component in G'_1. We connect two vertices w_j, w_{j'} ∈ V_2 by an edge if and only if there are two rectangles R_i, R_{i'} such that their corresponding vertices v_i, v_{i'} in V_1 belong to the connected components of G'_1 represented by w_j and w_{j'}, resp., and there is a grid cell g whose bottom left and top right corner are contained in R_i and R_{i'}, resp.
Lemma 10. The graph G_2 is planar.
Similarly as above, we apply Lemma 9 to G_2 with ε' := ε/(2c_1) and let c_2 = O((1/ε')^2) = O(1/ε^6) denote the corresponding value of c'. Denote by G'_2 the resulting graph. We define a group R'_q for each connected component C'_q of G'_2. The set R'_q contains all rectangles R_i such that v_i is contained in a connected component C_j of G'_1 with w_j ∈ C'_q. We define R' := ∪_q R'_q.
Lemma 11. Let R_i, R_{i'} ∈ R' be rectangles that intersect the same grid cell g ∈ G. Then there is a set R'_q such that {R_i, R_{i'}} ⊆ R'_q.
Proof. Assume that in G_1 there is an edge connecting v_i, v_{i'}. Then the latter vertices are in the same connected component C_j of G'_1 and thus they are in the same group R'_q. Otherwise, if there is no edge connecting v_i, v_{i'} in G_1 then R_i and R_{i'} contain the bottom left and top right corners of g, resp. Assume that v_i and v_{i'} are contained in the connected components C_j and C_{j'} of G'_1, resp. Then w_j, w_{j'} ∈ V_2, {w_j, w_{j'}} ∈ E_2, and w_j, w_{j'} are in the same connected component of G'_2. Hence, R_i, R_{i'} are in the same group R'_q.
It remains to prove that each group R'_q has constant size and that |R'| ≥ (1 − ε)|R*|.
Lemma 12. There is a constant c = O(1/ε^8) such that for each group R'_q it holds that |R'_q| ≤ c.
Proof. For each group R'_q there is a connected component C'_q of G'_2 such that R'_q contains all rectangles R_i such that v_i is contained in a connected component C_j of G'_1 with w_j ∈ C'_q. Each connected component of G'_1 contains at most c_1 = O(1/ε^2) vertices of V_1 and each component of G'_2 contains at most c_2 = O(1/ε^6) vertices of V_2. Hence, |R'_q| ≤ c_1 · c_2 =: c and c = O((1/ε^2)(1/ε^6)) = O(1/ε^8).
Lemma 13. We have that |R'| ≥ (1 − ε)|R*|.
Proof. At most (ε/2) · |V_1| vertices of G_1 are deleted when we construct G'_1 from G_1. Each vertex in G'_1 belongs to one connected component C_j, represented by a vertex w_j ∈ V_2. At most ε/(2c_1) · |V_2| vertices are deleted when we construct G'_2 from G_2. These vertices represent at most c_1 · ε/(2c_1) · |V_2| ≤ (ε/2) · |V_2| ≤ (ε/2) · |V_1| vertices in G_1 (and each vertex in G_1 represents one rectangle in R*). Therefore,
|R'| ≥ |R*| − (ε/2) · |V_1| − (ε/2) · |V_1| = (1 − ε)|R*|.
This completes the proof of Lemma 7.
The algorithm
In our algorithm, we compute a solution that is at least as good as the solution R' as given by Lemma 7. For each group R'_j we denote by G_j the set of grid cells that are intersected by at least one rectangle from R'_j. Since in R' each grid cell can be intersected by rectangles of only one group, we have that G_j ∩ G_q = ∅ if j ≠ q. We want to guess the sets G_j. The next lemma shows that the number of possibilities for one of those sets is polynomially bounded in k.
Lemma 14. Each G_j belongs to a set G of cardinality at most k^{O(1/ε^8)} that can be computed in polynomial time.
Proof. The cells G_j intersected by R'_j are the union of all cells G(R) with R ∈ R'_j, where for each rectangle R the set G(R) denotes the cells intersected by R. Each set G(R) can be specified by indicating the 4 corner cells of G(R), i.e., the top-left, top-right, bottom-left, and bottom-right corner. Hence there are at most k^4 choices for each such R. The claim follows since |R'_j| = O(1/ε^8).
We hence achieve the main result of this section.
Proof of Theorem 1. Using Lemma 14, we can guess by exhaustive enumeration all the sets G_j in time k^{O(k/ε^8)}. We obtain one independent problem for each value j ∈ {1, . . . , s} which consists of all input rectangles that are contained in G_j. For this subproblem, it suffices to compute a solution with at least |R'_j| rectangles. Since |R'_j| ≤ c = O(1/ε^8) we can do this in time n^{O(1/ε^8)} by complete enumeration. Thus, we solve each of the subproblems and output the union of the computed solutions. The overall running time is as in the claim. If the computed solutions have total size less than (1 − ε)k, this implies that the optimum solution is smaller than k. Otherwise we obtain a solution of size at least (1 − ε)k ≥ k/(1 + 2ε) and the claim follows by redefining ε appropriately.
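As a rough illustration of how this algorithm is organized, the sketch below separates the n^{O(1/ε^8)} brute force inside one guessed group of cells from the outer k^{O(k/ε^8)} guessing loop. The helpers guess_partitions, rects_inside and disjoint are hypothetical placeholders (the enumeration of Lemma 14 and elementary rectangle geometry), not functions from the paper.

```python
from itertools import combinations

def solve_region(region_rects, c, disjoint):
    # n^O(c) brute force: a largest set of at most c pairwise disjoint rectangles in one region.
    for size in range(min(c, len(region_rects)), 0, -1):
        for cand in combinations(region_rects, size):
            if all(disjoint(a, b) for a, b in combinations(cand, 2)):
                return list(cand)
    return []

def pas_misr(rects, k, eps, guess_partitions, rects_inside, disjoint):
    """Schematic PAS driver: guess_partitions enumerates families of pairwise disjoint
    candidate cell sets G_1, ..., G_s (Lemma 14); each region is solved independently."""
    c = max(1, int(1.0 / eps ** 8))          # group size bound c = O(1/eps^8)
    best = []
    for regions in guess_partitions(k):
        solution = []
        for region in regions:
            solution += solve_region(rects_inside(rects, region), c, disjoint)
        if len(solution) > len(best):
            best = solution
    if len(best) >= (1 - eps) * k:
        return best                          # size at least k/(1 + 2*eps)
    return None                              # certifies that the optimum is below k
```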
Essentially the same construction as above also gives an approximate kernelization algorithm as claimed in Theorem 2, see Appendix A for details.
A Parameterized Approximation Scheme for 2DKR
In this section we present a PAS and an approximate kernelization for 2dkr. W.l.o.g., we assume that k ≥ Ω(1/ε^3), since otherwise we can optimally solve the problem in time n^{O(1/ε^3)} by exhaustive enumeration. In Section 3.1 we show that, if a solution of size k exists, there is a solution of size at least (1 − ε)k in which no item intersects the horizontal strip (0, N) × (0, (1/k)^{O(1/ε)} · N) of the knapsack.
Freeing a Horizontal Strip
In this section, we prove the following lemma that shows the existence of a near-optimal solution that leaves a sufficiently tall empty horizontal strip in the knapsack (assuming k ≥ Ω(1/ε^3)). W.l.o.g., ε ≤ 1. Since we can rotate the items by 90 degrees, we can assume w.l.o.g. that w_i ≥ h_i for each item i ∈ I.
Lemma 15. If there is a solution of size k, then there is also a solution of size at least (1 − ε)k in which no packed item intersects (0, N) × (0, (1/k)^c · N), for a proper constant c = O(1/ε).
We classify items into large and thin items. Via a shifting argument, we get the following lemma.
Lemma 16. There is an integer B ∈ {1, . . . , 8/ε} such that by losing a factor of 1 + ε in the objective we can assume that the input items are partitioned into
large items L such that h_i ≥ (1/k)^B · N (and thus also w_i ≥ (1/k)^B · N) for each item i ∈ L, and thin items T such that h_i < (1/k)^{B+2} · N for each item i ∈ T.
Let B be the integer due to Lemma 16 and we work with the resulting item classification. If |T| ≥ k then we can create a solution of size k satisfying the claim of Lemma 15 by simply stacking k thin items on top of each other: any k thin items have a total height of at most k · (1/k)^{B+2} · N ≤ (1/k)^2 · N. Thus, from now on assume that |T| < k.
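The case |T| ≥ k is already a complete algorithm: stacking any k thin items fills a column of height at most (1/k)^2 · N. A minimal sketch, assuming items are given as (w, h) pairs (an illustrative format, not the paper's):

```python
def stack_thin_items(thin, k, N, B):
    """If at least k thin items exist, stack k of them bottom-left in the knapsack.
    Their total height is at most k * (1/k)**(B+2) * N <= (1/k)**2 * N (for B >= 1),
    so the stack trivially fits and leaves the rest of the knapsack empty (sketch)."""
    if len(thin) < k:
        return None
    placement, y = [], 0.0
    for (w, h) in thin[:k]:
        placement.append((0.0, y, w, h))   # bottom-left corner (x, y) plus width and height
        y += h
    assert y <= (1.0 / k) ** 2 * N         # the height bound used in the text
    return placement
```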
Sparsifying large items. Our strategy is now to delete some of the large items and move the remaining items. This will allow us to free the area [0, N] × [0, (1/k)^{O(1/ε)} · N] of the knapsack. Denote by OPT the almost optimal solution obtained by applying Lemma 16. We remove the items in OPT_T := OPT ∩ T temporarily; we will add them back later.
We construct a directed graph G = (V, A) where we have one vertex v_i ∈ V for each item i ∈ OPT_L := OPT ∩ L. We connect two vertices v_i, v_{i'} by an arc a = (v_i, v_{i'}) if and only if we can draw a vertical line segment of length at most (1/k)^B · N that connects item i with item i' without intersecting any other item, such that i' lies above i, i.e., the bottom coordinate of i' is at least as large as the top coordinate of i; see Figure 2 for a sketch. We obtain the following proposition since for each edge we can draw a vertical line segment and these segments do not intersect each other.
Proposition 17. The graph G is planar.
Next, we apply Lemma 9 to G with ε' := ε. Let G' = (V', A') be the resulting graph. We remove from OPT_L all items i ∈ V \ V' and denote by OPT'_L the resulting solution. We push up all items in OPT'_L as much as possible. If now the strip (0, N) × (0, (1/k)^B · N) is not intersected by any item then we can place all the items in T into the remaining space. Their total height can be at most k · (1/k)^{B+2} · N ≤ (1/k)^{B+1} · N and thus we can leave a strip of height (1/k)^B · N − (1/k)^{B+1} · N ≥ (1/k)^{O(1/ε)} · N and width N empty. This completes the proof of Lemma 15 for this case.
Assume next that the strip (0, N) × (0, (1/k)^B · N) is intersected by some item: the following lemma implies that there is a set of c' = O(1/ε^2) vertices whose items intuitively connect the top and the bottom edge of the knapsack.
Lemma 18. Assume that in OPT'_L there is an item i_1 intersecting (0, N) × (0, (1/k)^B · N). Then G' contains a path v_{i_1}, v_{i_2}, . . . , v_{i_K} with K ≤ c' = O(1/ε^2), such that the distance between i_K and the top edge of the knapsack is less than (1/k)^B · N.
Proof. Let C denote all vertices v in G' such that there is a directed path from v_{i_1} to v in G'. The vertices in C are contained in the connected component C' in G' that contains v_{i_1}. Note that |C| ≤ |C'| ≤ c'.
We claim that C must contain a vertex v_j whose corresponding item j is closer than (1/k)^B · N to the top edge of the knapsack. Otherwise, we would have been able to push up all items corresponding to vertices in C by (1/k)^B · N units: first we could have pushed up all items such that their corresponding vertices have no outgoing arc, then all items such that their vertices have outgoing arcs pointing at the former set of vertices, and so on. By definition of C, there must be a path connecting v_{i_1} with v_j. This path v_{i_1}, v_{i_2}, . . . , v_{i_K} = v_j contains only vertices in C and hence its length is bounded by c'. The claim follows.
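The path of Lemma 18 can be found by a plain BFS once the arcs of G' are known. The sketch below uses a simplified arc test (overlapping x-projections and a vertical gap of at most (1/k)^B · N); the paper additionally requires that the connecting vertical segment hits no other item, so this is an over-approximation for illustration only, and the item format is an assumption.

```python
from collections import deque

def lemma18_path(items, k, N, B, start):
    """items: dict id -> (x, y, w, h) for the packed large items; start: an item
    intersecting the bottom strip. Returns a path of item ids ending at an item
    whose distance to the top edge is below (1/k)**B * N, or None."""
    thr = (1.0 / k) ** B * N
    def arc(a, b):                       # arc a -> b: b lies above a with a small gap and x-overlap
        ax, ay, aw, ah = items[a]
        bx, by, bw, bh = items[b]
        return by >= ay + ah and by - (ay + ah) <= thr and ax < bx + bw and bx < ax + aw
    parent, queue = {start: None}, deque([start])
    while queue:
        cur = queue.popleft()
        x, y, w, h = items[cur]
        if N - (y + h) < thr:            # item closer than (1/k)^B * N to the top edge
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]            # v_{i_1}, ..., v_{i_K}
        for nxt in items:
            if nxt not in parent and arc(cur, nxt):
                parent[nxt] = cur
                queue.append(nxt)
    return None
```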
Our goal is now to remove the items i_1, . . . , i_K due to Lemma 18 and O(K) = O(1/ε^2) more large items from OPT'_L. Since we can assume that k ≥ Ω(1/ε^3) this will lose only a factor of 1 + O(ε) in the objective. To this end we define K + 1 deletion rectangles, see Figure 2. We place one such rectangle R_ℓ between any two consecutive items i_ℓ, i_{ℓ+1}. The height of R_ℓ equals the vertical distance between i_ℓ and i_{ℓ+1} (at most (1/k)^B · N) and the width of R_ℓ equals (1/k)^B · N. Since v_{i_ℓ}, v_{i_{ℓ+1}} are connected by an arc in G', we can draw a vertical line segment connecting i_ℓ with i_{ℓ+1}. We place R_ℓ such that it is intersected by this line segment. Note that for the horizontal position of R_ℓ there are still several possibilities and we choose one arbitrarily. Finally, we place a special deletion rectangle between the item i_K and the top edge of the knapsack and another special deletion rectangle between the item i_1 and the bottom edge of the knapsack. The heights of these rectangles equal the distance of i_1 and i_K to the bottom and top edge of the knapsack, resp. (which is at most (1/k)^B · N), and their widths equal (1/k)^B · N. They are placed such that they touch the bottom edge of i_1 and the top edge of i_K, resp.
Lemma 19. Each deletion rectangle can intersect at most 4 large items in its interior. Hence, there can be only O(K) ≤ O(c') = O(1/ε^2) large items intersecting a deletion rectangle in their interior.
Observe that the deletion rectangles and the items in {i_1, . . . , i_K} separate the knapsack into a left and a right part with items OPT_left and OPT_right, resp. We delete all items in i_1, . . . , i_K and all items intersecting the interior of a deletion rectangle. Each deletion rectangle and each item in {i_1, . . . , i_K} has a width of at least (1/k)^B · N. Thus, we can move all items in OPT_left simultaneously by (1/k)^B · N units to the right. After this, no large item intersects the area (0, (1/k)^B · N) × (0, N). We rotate the resulting solution by 90 degrees, hence getting an empty horizontal strip (0, N) × (0, (1/k)^B · N). The total height of items in OPT_T is at most k · (1/k)^{B+2} · N ≤ (1/k)^{B+1} · N. Therefore, we can place the items of OPT_T inside this strip, stacked on top of each other, and still leave a horizontal strip of height (1/k)^B · N − (1/k)^{B+1} · N ≥ (1/k)^{O(1/ε)} · N and width N empty, which proves Lemma 15.
FPT-algorithm with resource augmentation
We now compute a packing that contains as many items as the solution due to Lemma 15. However, it might use the space of the entire knapsack. In particular, we use the free space in the knapsack in the latter solution in order to round the sizes of the items. In the following lemma the reader may think of k' = (1 − ε)k and k̂ = k^{O(1/ε)}. Note that Lemma 20 yields an FPT algorithm if we are allowed to increase the size of the knapsack by a factor 1 + O(1/k̂), where k̂ is a second parameter.
In the remainder of this section, we prove Lemma 20 and we do not differentiate between large and thin items anymore. Assume that there exists a solution OPT' of size k' that leaves the area [0, N] × [0, N/k̂] of the knapsack empty. We want to compute a solution of size k'. We use the empty space in order to round the heights of the items in the packing of OPT' to integral multiples of N/(k'k̂). Note that in OPT' an item i might be rotated. Thus, depending on this we actually want to round its height h_i or its width w_i. To this end, we define rounded heights and widths by ĥ_i := ⌈h_i/(N/(k'k̂))⌉ · N/(k'k̂) and ŵ_i := ⌈w_i/(N/(k'k̂))⌉ · N/(k'k̂) for each item i.
Lemma 21.
There exists a feasible packing for all items in OPT' even if for each rotated item i we increase its width w_i to ŵ_i and for each non-rotated item i ∈ OPT' we increase its height h_i to ĥ_i.
To visualize the packing due to Lemma 21 one might imagine a container of height ĥ_i and width w_i for each non-rotated item i and a container of height h_i and width ŵ_i for each rotated item i. Next, we group the items according to their values ĥ_i and ŵ_i: for each rounded value we obtain a group I_h^{(j)} of items with that rounded height and a group I_w^{(j)} of items with that rounded width. From each group we discard the items that are not among the k' items with smallest width and height, resp. At most 2k' · k'k̂ = O(k̂(k')^2) items remain; denote them by Ī. Then, in time (k'k̂)^{O(k')} we can solve the remaining problem by completely enumerating over all subsets of Ī with at most k' elements. For each enumerated set we check within the given time bounds whether its items can be packed into the knapsack (possibly via rotating some of them) by guessing sufficient auxiliary information. Therefore, if a solution of size k' for a knapsack of width N and height (1 − 1/k̂)N exists, then we will find a solution of size k' that fits into a knapsack of width and height N. Now the proof of Theorem 4 follows by using Lemma 15 and then applying Lemma 20 with k' = (1 − ε)k and k̂ = k^{O(1/ε)}. The set Ī is the claimed set (which intuitively forms the approximative kernel), we compute a solution of size at least (1 − ε)k ≥ k/(1 + 2ε) and we can redefine ε appropriately.
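The rounding-and-pruning step behind Lemma 20 can be sketched as follows; the grouping and the "keep the k' smallest" rule mirror the description above, while the final exhaustive packing search is omitted. The function name and the (w, h) item format are illustrative assumptions.

```python
import math
from collections import defaultdict

def round_and_prune(items, N, k_prime, k_hat):
    """Round heights (and widths, for potentially rotated items) up to multiples of
    N/(k_prime*k_hat); within each rounded-height class keep only the k_prime narrowest
    items, and within each rounded-width class the k_prime shortest ones. The surviving
    candidate set has size O(k_hat * k_prime**2) and is then searched exhaustively."""
    unit = N / (k_prime * k_hat)
    by_h, by_w = defaultdict(list), defaultdict(list)
    for idx, (w, h) in enumerate(items):
        by_h[math.ceil(h / unit)].append(idx)
        by_w[math.ceil(w / unit)].append(idx)
    keep = set()
    for group in by_h.values():              # k' items of smallest width per rounded height
        keep.update(sorted(group, key=lambda i: items[i][0])[:k_prime])
    for group in by_w.values():              # k' items of smallest height per rounded width
        keep.update(sorted(group, key=lambda i: items[i][1])[:k_prime])
    return [items[i] for i in sorted(keep)]  # the candidate set Ī of the text
```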
Hardness of Geometric Knapsack
We show that 2dk and 2dkr are both W[1]-hard for parameter k by reducing from a variant of subset sum. Recall that in subset sum we are given m positive integers x 1 , . . . , x m as well as integers t and k, and have to determine whether some k-tuple of the numbers sums to t; this is W[1]-hard with respect to k [18]. In the variant multi-subset sum it is allowed to choose numbers more than once. It is easy to verify that the proof for W[1]-hardness of subset sum due to Downey and Fellows [18] extends also to multi-subset sum. (See Lemma 23 in Section B.) In our reduction to 2dkr we prove that rotations are not required for optimal solutions, making W[1]-hardness of 2dk a free consequence.
Proof sketch for Theorem 3. We give a polynomial-time parameterized reduction from multi-subset sum to 2dkr with output parameter k' = O(k^2). This establishes W[1]-hardness of 2dkr.
Observe that, for any packing of items into the knapsack, there is an upper bound of N on the total width of items that intersect any horizontal line through the knapsack, and similarly an upper bound of N for the total height of items along any vertical line. We will let the dimensions of some items depend on numbers x i from the input instance (x 1 , . . . , x m , t, k) of multi-subset sum such that, using these upper bound inequalities, a correct packing certifies that y 1 + . . . + y k = t for some k of the numbers. The key difficulty is that there is a lot of freedom in the choice of which items to pack and where in case of a no instance.
To deal with this, the items corresponding to numbers x i from the input are all almost squares and their dimensions are incomparable. Concretely, an item corresponding to some number x i has height L + S + x i and width L + S + 2t − x i ; we call such an item a tile.
(The exact values of L and S are immaterial here, but L ≫ S ≫ t > x_i holds.) Thus, when using, e.g., a tile of smaller width (i.e., larger value of x_i) it will occupy "more height" in the packing. The knapsack is only slightly larger than a k by k grid of such tiles, implying that there is little freedom for the placement. Let us also assume for the moment, that no rotations are used.
Accordingly, we can specify k vertical lines that are guaranteed to intersect all tiles of any packing that uses k 2 tiles, by using pairwise distance L − 1 between them. Moreover, each line is intersecting exactly k private tiles. The same holds for a similar set of k horizontal lines. Together we get an upper bound of N for the sum of the widths (heights) along any horizontal (vertical) line. Since the numbers x i occur negatively in widths, we effectively get lower bounds for them from the horizontal lines. When the sizes of these tiles (and the auxiliary items below) are appropriately chosen, it follows that all upper bound equalities must be tight. This in turn, due to the exact choice of N , implies that there are k numbers y 1 , . . . , y k with sum equal to t.
Unsurprisingly, using just the tiles we cannot guarantee that a packing exists when given a yes-instance. This can be fixed by adding a small number of flat/thin items that can be inserted between the tiles (see Figure 3, but note that it does not match the size ratios from this proof); these have dimension L × S or S × L. Because one dimension of these items is large (namely L) they must be intersected by the above horizontal or vertical lines. Thus, they can be proved to enter the above inequalities in a uniform way, so that the proof idea goes through.
Finally, let us address the question of why we can assume that there are no rotations. This is achieved by letting the width of any tile be larger than the height of any tile, and adding a final auxiliary item of width N and small height, called the bar. To get the desired number of items in a solution packing, it can be ensured that the bar must be used as no more than k 2 tiles can fit into N × N and there is a limited supply of flat/thin items. W.l.o.g., the bar is not rotated. It can then be checked that using at least one tile in its rotated form will violate one of the upper bounds for the height. This completes the proof sketch.
Open Problems
This paper leaves several interesting open problems. A first obvious question is whether there exists a PAS also for 2dk (i.e., in the case without rotations). We remark that the algorithm from Lemma 20 can be easily adapted to the case without rotations. Unfortunately, Lemma 15 does not seem to generalize to the latter case. Indeed, there are instances in which we lose up to a factor of 2 if we require a strip of width Ω ε,k (1) · N to be emptied, see Figure 4. We also note that both our PASs work for the cardinality version of the problems: an extension to the weighted case is desirable. Unlike related results in the literature (where extension to the weighted case follows relatively easily from the cardinality case), this seems to pose several technical issues. We remark that all the problems considered in this paper might admit a PTAS in the standard sense, which would be a strict improvement on our PASs. Indeed, the existence of a QPTAS for these problems [1,2,15] suggests that such PTASs are likely to exist. However, finding those PTASs is a very well-known and long-standing problem in the area. We hope that our results can help to achieve this challenging goal.
References
A Omitted Proofs for Sections 2 and 3
Proof of Lemma 8. We define a planar embedding for G_1 based on the position of the rectangles in R*. Each vertex v_i ∈ V_1 is represented by a rectangle R̄_i which is defined to be the convex hull of all corners of cells of G that are contained in R_i. Let e = {v_i, v_{i'}} ∈ E_1 be an edge. Let g be a grid cell that R_i and R_{i'} both intersect. If R_i and R_{i'} intersect the same horizontal line H ∈ L_H then we represent e by a horizontal line segment connecting R̄_i and R̄_{i'} that is contained in H. We do a symmetric operation if R_i and R_{i'} intersect the same vertical line V ∈ L_V. If R_i and R_{i'} contain the top left and the bottom right corner of g, resp., then we represent e by a diagonal line segment connecting R̄_i and R̄_{i'} within g.
We do this operation with each edge e ∈ E_1. Note that in each grid cell we draw at most one diagonal line segment. By construction, no two line segments intersect and hence G_1 is planar.
Proof of Lemma 9.
A result by Frederickson [21] states that for any integer r any n-vertex planar graph can be divided into O(n/r) regions with no more than r vertices each, and O(n/√r) boundary vertices in total. We choose r := O(1/(ε')^2) and then we have at most ε' · n boundary vertices in total. We define V' to be the set of non-boundary vertices.
Proof of Lemma 10. We define a planar embedding for G_2. Let w_j ∈ V_2 and assume that w_j represents a connected component C_j of G'_1. We represent C_j by drawing the rectangle R̄_i for each vertex v_i ∈ C_j (like in the proof of Lemma 8, the rectangle R̄_i is defined to be the convex hull of all corners of cells of G that are contained in R_i) and the following set of line segments (actually almost the same as the ones defined in the proof of Lemma 8). Consider two rectangles R_i, R_{i'} ∈ C_j intersecting the same grid cell g.
If R_i, R_{i'} intersect the same horizontal line H ∈ L_H then we draw a horizontal line segment connecting R̄_i and R̄_{i'} that is a subset of H. If R_i and R_{i'} contain the top left and the bottom right corner of g, resp., then we draw a diagonal line segment connecting R̄_i and R̄_{i'} within g. This yields a connected area A_j representing C_j (and thus w_j).
Let e = {w_j, w_{j'}} ∈ E_2. We want to introduce a line segment representing e. By definition of E_2 there must be a grid cell g and two rectangles R_i, R_{i'} intersecting g whose vertices belong to different connected components of G'_1 and such that R_i and R_{i'} contain the bottom left and the top right corner of g, resp. Note that then there can be no vertex v_{i''} ∈ V'_1 whose rectangle contains the top left or the bottom right corner of g: such a rectangle would be connected by an edge with both R_i and R_{i'} in G_1 and then all three rectangles R_i, R_{i'}, R_{i''} would be in the same connected component of G'_1. We draw a diagonal line segment ℓ connecting R̄_i and R̄_{i'} within g; then ℓ does not intersect any area A_{j''} for any vertex w_{j''} ∈ V_2. Also, since we add at most one line segment per grid cell g these line segments do not intersect each other. Hence, G_2 is planar.
Proof of Theorem 2. First, we define the grid as described in Section 2.1. In case that the algorithm in Lemma 5 finds a solution of size k then we define the kernel R̄ to be this solution and we are done. Otherwise, we enumerate all possible sets G_j of the kind described in Lemma 14, at most k^{O(1/ε^8)} many. Then, for each such set G_j we consider all rectangles contained in the union of G_j and we compute a feasible solution of size c for them if such a solution exists, and otherwise we compute the optimal solution. We do this by complete enumeration in time n^{O(c)} = n^{O(1/ε^8)}. For each set G_j the obtained solution has size at most c = O(1/ε^8). We define the kernel R̄ to be the union over all k^{O(1/ε^8)} solutions obtained in this way. Hence, |R̄| ≤ k^{O(1/ε^8)}. Also, we can guarantee that the output of our algorithm is a subset of R̄ and hence R̄ contains a (1 + ε)-approximate solution.
Proof of Lemma 16. Let OPT denote the optimal solution to the given instance. For each B' ∈ {1, . . . , 8/ε} we define I(B') := {i ∈ I | h_i ∈ [(1/k)^{B'+2} · N, (1/k)^{B'} · N)}.
For any item i ∈ I there can be at most four values of B' such that i is contained in the respective set I(B'). Hence, there must be one value B ∈ {1, . . . , 8/ε} such that |I(B) ∩ OPT| ≤ (ε/2)|OPT|. Each item i ∈ I \ I(B) is then contained in L or T. Since |I(B) ∩ OPT| ≤ (ε/2)|OPT| we lose only a factor of (1 − ε/2)^{-1} ≤ 1 + ε in the approximation ratio.
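The averaging step of this proof is just a minimum over the 8/ε candidate values of B. A small sketch (opt is a list of (w, h) pairs of an assumed solution, used only to illustrate the counting):

```python
def choose_B(opt, k, N, eps):
    """Shifting argument of Lemma 16 (sketch): pick B in {1, ..., 8/eps} minimizing the
    number of solution items whose height lies in [(1/k)**(B+2)*N, (1/k)**B*N); by
    averaging, the minimum is at most an eps/2 fraction of the solution."""
    best_B, best_hit = None, None
    for B in range(1, int(8 / eps) + 1):
        lo, hi = (1.0 / k) ** (B + 2) * N, (1.0 / k) ** B * N
        hit = sum(1 for (w, h) in opt if lo <= h < hi)
        if best_hit is None or hit < best_hit:
            best_B, best_hit = B, hit
    return best_B    # items in I(B) are discarded; the remaining items are large or thin
```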
Proof of Lemma 19. Each deletion rectangle has a height of at most (1/k)^B · N and a width of exactly (1/k)^B · N. Each large item has height and width at least (1/k)^B · N. Therefore, each deletion rectangle can intersect with at most 4 large items in its interior (intuitively, at its 4 corners).
Proof of Lemma 21. For each item i ∈ OPT' we perform the following operation. Each item i' ∈ OPT' such that i' is placed underneath i (i.e., such that the y-coordinate of the top edge of i' is upper-bounded by the y-coordinate of the bottom edge of i) is moved by N/(k'k̂) units down. If i is not rotated then we increase the height of i to ĥ_i by appending a rectangle of width w_i and height ĥ_i − h_i ≤ N/(k'k̂) underneath i. If i is rotated then we increase the width of i to ŵ_i by appending a rectangle of width h_i and height ŵ_i − w_i ≤ N/(k'k̂) underneath i. Since we moved down the mentioned other items before, the new (bigger) item does not intersect any other item. We do this operation for each item i ∈ OPT'. In the process, we move each item down by at most (k' − 1)N/(k'k̂) and when we increase its height then the y-coordinate of its bottom edge decreases by at most N/(k'k̂). Initially, the y-coordinate of the bottom edge of any item was at least N/k̂. Hence, at the end the y-coordinate of the bottom edge of any item is at least N/k̂ − (k' − 1)N/(k'k̂) − N/(k'k̂) = N/k̂ − N/k̂ = 0. Hence, all rounded items are contained in the knapsack.
Proof of Lemma 22.
Consider the packing for OPT' due to Lemma 21 in which we increased the height of each non-rotated item i to ĥ_i and the width of each rotated item i to ŵ_i. Suppose that there is a group I_h^{(j)} containing an item i ∈ OPT' that was discarded from it (i.e., i is not among the kept items Ī_h^{(j)}). Since |OPT'| ≤ k' and i ∈ OPT' there must be an item i' ∈ Ī_h^{(j)} such that i' ∉ OPT'. Then we can replace i by i' since ĥ_{i'} = ĥ_i and w_{i'} ≤ w_i. We perform this operation for each set I_h^{(j)} and a symmetric operation for each set I_w^{(j)} until we obtain a solution for which the lemma holds. This solution then contains the same number of items as the initial solution OPT'.
B Proofs for Section 4
Lemma 23. multi-subset sum is W[1]-hard.
Proof. Downey and Fellows [18] give a parameterized reduction from perfect code(k) to subset sum. The created instances (x_1, . . . , x_m, t, k) have the property that all numbers have digits 0 or 1 when expressed in base k + 1. Moreover, the target value t is equal to 11 . . . 1 in base k + 1 (all digits equal to 1). Accordingly, when any k numbers x_i sum to t there can be no carries in the addition. Thus, no two selected numbers may have a 1 in the same position. Hence, allowing to select numbers multiple times does not create spurious solutions, giving us a correct reduction from perfect code(k) to multi-subset sum.
We split the proof of Theorem 3 into two separate statements for 2dkr and 2dk.
Theorem 24. 2dkr is W[1]-hard.
Proof. We give a polynomial-time parameterized reduction from multi-subset sum to 2dkr with output parameter k' = O(k^2). By Lemma 23, this establishes W[1]-hardness of 2dkr.
Construction. Let (x 1 , . . . , x m , t, k) be an instance of multi-subset sum. W.l.o.g. we may assume that 4 ≤ k ≤ m and that x i < t for all i ∈ [m]. Furthermore, as solutions may select the same integer multiple times, we may assume that all the x i are pairwise different.
Throughout, we take a knapsack to be an N by N square with coordinate (0, 0) in the bottom left corner and (N, N ) at top right. The first coordinate of any point in the knapsack measures the horizontal (left-right) distance from the point to (0, 0); the second coordinate measure the vertical (up-down) distance from (0, 0). All items in the following construction are given such that their sizes reflect their intended rotation in a solution, i.e., heights refers to vertical dimensions and widths to horizontal dimensions.
We begin by constructing an instance of 2dk. Throughout, for an item R, we will use height(R) and width(R) denote its height and width. The instance of 2dk is defined as follows:
We define constants
S := k^2 · t and L := k^2 · S = k^4 · t.
(The specific values will not be important so long as k^2 · t ≤ S and k^2 · S ≤ L. Intuitively, the identifiers are chosen to mean small and large.)
The knapsack has height and width both equal to
N := k · L + (2k − 1) · S + (2k − 1) · t.    (1)
For each i ∈ [m] we construct k^2 items R(i, 1), . . . , R(i, k^2) with
height(R(i, j)) = L + S + x_i,    (2)
width(R(i, j)) = L + S + 2t − x_i.    (3)
We call these items tiles. We say that each tile R(i, ·) corresponds to the number x i from the input that it was constructed for. Since the x i are pairwise different, the x i corresponding to any tile can be easily read off from both height and width. We point out that all tiles have height strictly between L + S and L + S + t and width strictly between L + S + t and L + S + 2t. We add p := k · (k − 1) items T (1), . . . , T (p) with height L and width S. We call these the thin items. We add p items F (1), . . . , F (p) with height S and width L. We call these the flat items.
We add a single (very flat and very wide) item of height (2k − 2) · t and width N , which we call the bar.
The created instance has a target value of k' = k^2 + 2p + 1. (The intention is to pack all thin and all flat items, the bar, and exactly k^2 tiles.) This completes the construction. Clearly, all necessary computations can be performed in polynomial time. The parameter value k' = k^2 + 2p + 1 is upper bounded by O(k^2). It remains to prove correctness.
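For concreteness, the whole construction can be generated in a few lines; the sketch below follows the constants and item sizes defined above (the function name and the output format are assumptions for illustration).

```python
def build_2dkr_instance(xs, t, k):
    """Reduction from multi-subset sum (x_1,...,x_m, t, k) to 2dkr (sketch).
    Returns the knapsack side N, the items as (width, height) pairs, and the target k'."""
    S = k * k * t
    L = k * k * S
    N = k * L + (2 * k - 1) * S + (2 * k - 1) * t          # equation (1)
    items = []
    for x in xs:                                           # k^2 tiles per input number
        items += [(L + S + 2 * t - x, L + S + x)] * (k * k)
    p = k * (k - 1)
    items += [(S, L)] * p                                  # thin items
    items += [(L, S)] * p                                  # flat items
    items += [(N, (2 * k - 2) * t)]                        # the bar
    return N, items, k * k + 2 * p + 1                     # target k' = k^2 + 2p + 1
```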
Correctness. We need to prove that the instance (x_1, . . . , x_m, t, k) is yes for multi-subset sum if and only if the constructed instance is yes for 2dkr. ⇐=: Assume that the created instance is yes for 2dkr, i.e., that it has a packing with k' = k^2 + 2p + 1 items, and fix any such packing. Observe that the packing must contain at least k^2 tiles as there are only 2p + 1 items that are not tiles. We will show that the packing uses exactly k^2 tiles, the 2p thin/flat items, and the bar. It is useful to recall that tiles have height and width both greater than L + S no matter whether they are rotated.
Consider the effect of placing k vertical lines in the knapsack at horizontal coordinates L − 1, 2 · (L − 1), . . . , k · (L − 1). We first observe that these lines must necessarily intersect all tiles of the packing because each of them has width at least L: The distance between any two consecutive lines is L − 1, same as the distance from the left border of the knapsack to the first line. The distance from the kth vertical line to the right border is also strictly less than L:
N − k · (L − 1) = k + (2k − 1) · S + (2k − 1) · t < S + (2k − 1) · S + S = (2k + 1) · S < L
Observe that no line can intersect more than k tiles: Any two tiles of the packing may not overlap and may in particular not share their intersection with any line. Since each line has length N and each intersection with a tile has length greater than L, there can be at most k tiles intersected by any line as N < (k + 1) · L:
N = k · L + (2k − 1) · S + (2k − 1) · t < k · L + 4k · S ≤ (k + 1) · L
Overall, this means that the packing contains at most k 2 tiles: There are k lines that intersect all tiles of the packing, each of them intersecting at most k. By our earlier observation, this implies that the packing contains exactly k 2 tiles in addition to all 2p flat/thin items. Moreover, each line intersects exactly k tiles and no two lines intersect the same tile.
Let us now check how the vertical lines and the flat and thin items interact. Clearly, both flat items and rotated thin items have width L and height S. Accordingly, each flat and each rotated thin item must be intersected by at least one of the k vertical lines. We already know that a total length of at least k · (L + S) of each line is occupied by the k tiles that the line intersects. This leaves at most a length of N − k · (L + S) = (k − 1) · S + (2k − 1) · t < k · S for intersecting flat and rotated thin items, and allows for intersecting at most k − 1 of them. (Again, no two items can share their intersection with the line.) Thus, there are at most p = k · (k − 1) of the flat and rotated thin items in the packing.
Before analyzing the vertical lines further, let us perform an analogous argument for k horizontal lines with vertical coordinates L − 1, 2 · (L − 1), . . . , k · (L − 1) and their intersection with tiles and flat/thin items. It can be verified that each of them similarly intersects exactly k tiles and that no tile is intersected twice. The argument for flat and thin items is analogous as well, except that we now reason about rotated flat and (non-rotated) thin items, which have height L and width S; we find that there are at most p such items and that each horizontal line intersects at most k − 1 of them. Since in total there must be 2p flat and thin items, this implies that both sets of lines (horizontal and vertical) intersect p of these items each. Since flat and thin items can be swapped freely, we may assume that none of these items are rotated, and that the vertical lines intersect the p flat items and the horizontal lines intersect the p thin items.
We know now that the packing contains exactly k^2 tiles as well as the p flat and the p thin items. Thus, to get a total of k' = k^2 + 2p + 1 items, it must also contain the bar, which has height (2k − 2) · t and width N. W.l.o.g., we may assume that the bar is not rotated, or else we could rotate the entire packing. It follows that all vertical lines intersect the bar due to its width of N, which matches the width of the knapsack.
Let us now analyze both vertical and horizontal lines further. The goal is to obtain inequalities on the values x i that go into the construction of the tiles; up to now we have only used that they are fairly large. We know that each vertical line intersects k tiles, k − 1 flat items, and the bar. Let h 1 , . . . , h k denote the heights of the tiles (ordered arbitrarily) and recall that each flat item has height S while the bar has height (2k − 2) · t. Since all intersections with the line are disjoint and the line has length N (equaling the height of the knapsack), we get that
N ≥ h_1 + . . . + h_k + (k − 1) · S + (2k − 2) · t.    (4)
At this point, in order to plug in values for the h i , it is important whether any of the tiles are rotated; we will show that having at least one rotated tile causes a violation of (4). To this end, recall that (non-rotated) tiles have heights strictly between L + S and L + S + t and widths strictly between L + S + t and L + S + 2t. Thus, if at least one tile is rotated then it has height greater than L + S + t, rather than the weaker bound of greater than L + S. Using this, the right-hand side of (4) can be lower bounded by
RHS > (k − 1) · (L + S) + (L + S + t) + (k − 1) · S + (2k − 2) · t = k · L + (2k − 1) · S + (2k − 1) · t = N,
contradicting (4). Thus, none of the tiles intersected by the vertical line can be rotated.
Since each tile is intersected by a vertical line, it follows that no tiles can be rotated and we can analyze the lines using the sizes as given in (2) and (3). Let us return to replacing the values h i in (4). Recall that the height of a tile is equal to L + S + x i where x i is the corresponding integer from the input to the initial multi-subset sum instance. Thus, if the ith intersected tile corresponds to input integer y i ∈ {x 1 , . . . , x m } then by (2) we have
h i = L + S + y i .
Plugging this into (4) yields
N ≥ ∑_{i=1}^{k} (L + S + y_i) + (k − 1) · S + (2k − 2) · t = k · L + (2k − 1) · S + (2k − 2) · t + ∑_{i=1}^{k} y_i. Using N = k · L + (2k − 1) · S + (2k − 1) · t we immediately get t ≥ ∑_{i=1}^{k} y_i.    (5)
A symmetric count along each horizontal line, using the tile widths L + S + 2t − y_i, shows that the numbers corresponding to the k tiles crossed by that line sum to at least t. Summing these inequalities over all k vertical and all k horizontal lines (which account for all k^2 tiles on each side) shows that every inequality must be tight; in particular, the k tiles crossed by any fixed vertical line correspond to numbers y_1, . . . , y_k with ∑_{i=1}^{k} y_i = t, so the multi-subset sum instance is yes.
=⇒: Conversely, assume that the input instance is yes and fix numbers y_1, . . . , y_k ∈ {x_1, . . . , x_m} (repetitions allowed) with sum t. We pack a k × k grid of tiles R_{a,b} with a, b ∈ [k]. More formally, item R_{a,b} is a tile corresponding to y_i, where i = 1 + ((a − b) mod k), and accordingly has height(R_{a,b}) = L + S + y_i and width(R_{a,b}) = L + S + 2t − y_i. This yields the required property that for each a ∈ [k] the items R_{a,1}, . . . , R_{a,k} contain tiles corresponding to all numbers y_1, . . . , y_k (and correctly contain multiple copies for numbers that appear more than once). The same holds for items R_{1,b}, . . . , R_{k,b} for all b ∈ [k].
We use height(R i,j ) and width(R i,j ) to refer to height and width of tile R i,j . We use left(R), right(R), top(R), and bottom(R) to specify the coordinates of any item in our packing, i.e., for the k 2 tiles, the 2p flat/thin items, and the bar. The coordinates for tiles are chosen as
left(R_{a,b}) = (a − 1) · S + ∑_{i=1}^{a−1} width(R_{i,b}),
right(R_{a,b}) = (a − 1) · S + ∑_{i=1}^{a} width(R_{i,b}),
bottom(R_{a,b}) = (b − 1) · S + ∑_{i=1}^{b−1} height(R_{a,i}),
top(R_{a,b}) = (b − 1) · S + ∑_{i=1}^{b} height(R_{a,i}).
Let us first check some basic properties of these coordinates:
We observe that each tile is assigned coordinates that match its size, i.e., width(R a,b ) = right(R a,b ) − left(R a,b ) and height(R a,b ) = top(R a,b ) − bottom(R a,b ).
All coordinates lie inside the knapsack. Clearly, all coordinates are non-negative and it suffices to give upper bounds for top(R a,k ) and right (R k,b ). Recall that by construction each set of tiles R a,1 , . . . , R a,k contains tiles corresponding to all numbers y 1 , . . . , y k , and same for R 1,b , . . . , R k,b . Thus we get
right(R_{k,b}) = (k − 1) · S + ∑_{i=1}^{k} width(R_{i,b}) = (k − 1) · S + ∑_{i=1}^{k} (L + S + 2t − y_i) = k · L + (2k − 1) · S + 2k · t − ∑_{i=1}^{k} y_i = k · L + (2k − 1) · S + (2k − 1) · t = N.
Similarly, we get top(R_{a,k}) = (k − 1) · S + ∑_{i=1}^{k} height(R_{a,i}) = (k − 1) · S + ∑_{i=1}^{k} (L + S + y_i) = k · L + (2k − 1) · S + ∑_{i=1}^{k} y_i = k · L + (2k − 1) · S + t = N − (2k − 2) · t.
We will later use the gap of (2k − 2) · t between N and N − (2k − 2) · t to place the bar item, as its height exactly matches the gap. For any tile R_{a,b} the possible coordinates fall into very small intervals, using that all heights and widths of tiles lie strictly between L + S and L + S + 2t. We show this explicitly for left(R_{a,b}):
left(R_{a,b}) = (a − 1) · S + ∑_{i=1}^{a−1} width(R_{i,b}),
left(R_{a,b}) > (a − 1) · S + ∑_{i=1}^{a−1} (L + S) = (a − 1) · L + (2a − 2) · S,
left(R_{a,b}) < (a − 1) · S + ∑_{i=1}^{a−1} (L + S + 2t) = (a − 1) · L + (2a − 2) · S + (2a − 2) · t < (a − 1) · L + (2a − 1) · S.
In this way, we get the following intervals for left(R_{a,b}), right(R_{a,b}), bottom(R_{a,b}), and top(R_{a,b}). (Note that we sacrifice the possibility of tighter bounds in order to get the same simple form of bound for top and right and for bottom and left.)
(a − 1) · L + (2a − 2) · S < left(R_{a,b}) < (a − 1) · L + (2a − 1) · S    (8)
a · L + (2a − 1) · S < right(R_{a,b}) < a · L + 2a · S    (9)
(b − 1) · L + (2b − 2) · S < bottom(R_{a,b}) < (b − 1) · L + (2b − 1) · S    (10)
b · L + (2b − 1) · S < top(R_{a,b}) < b · L + 2b · S    (11)
We can now easily verify that no two tiles R_{a,b} and R_{c,d} overlap if (a, b) ≠ (c, d). If a ≠ c then we may assume w.l.o.g. that a < c (and hence a ≤ c − 1). Using (9) and (8) we get right(R_{a,b}) < a · L + 2a · S ≤ (c − 1) · L + (2c − 2) · S < left(R_{c,d}).
Thus, R_{a,b} and R_{c,d} do not overlap if a ≠ c. If instead a = c then we must have b ≠ d and, w.l.o.g., b < d (and hence b ≤ d − 1). Using (11) and (10) we have top(R_{a,b}) < b · L + 2b · S ≤ (d − 1) · L + (2d − 2) · S < bottom(R_{c,d}).
Thus, no two tiles R_{a,b} and R_{c,d} with (a, b) ≠ (c, d) overlap.
We will now specify coordinates for the p flat and the p thin items. For this purpose the intervals for coordinates of the tiles (8)-(11) are highly useful. For thin items, there will always be two adjacent tiles, to the left and to the right, and we use the intervals to get top and bottom coordinates. For flat items the situation is the opposite; there are adjacent tiles on the top and bottom sides and we use the intervals to get left and right coordinates. Recall that thin items have height L and width S, whereas flat items have height S and width L.
We denote the p thin items by T a,b with a ∈ [k − 1] and b ∈ [k]; we choose coordinates as follows:
left(T_{a,b}) = right(R_{a,b}) = (a − 1) · S + ∑_{i=1}^{a} width(R_{i,b}),    (12)
right(T_{a,b}) = left(R_{a+1,b}) = a · S + ∑_{i=1}^{a} width(R_{i,b}),    (13)
bottom(T_{a,b}) = (b − 1) · L + (2b − 1) · S,    (14)
top(T_{a,b}) = b · L + (2b − 1) · S.    (15)
Clearly, the coordinates match the dimension of T a,b . We denote the p flat items by F a,b with a ∈ [k] and b ∈ [k − 1], and we use the following coordinates:
left(F_{a,b}) = (a − 1) · L + (2a − 1) · S,    (16)
right(F_{a,b}) = a · L + (2a − 1) · S,    (17)
bottom(F_{a,b}) = top(R_{a,b}) = (b − 1) · S + ∑_{i=1}^{b} height(R_{a,i}),    (18)
top(F_{a,b}) = bottom(R_{a,b+1}) = b · S + ∑_{i=1}^{b} height(R_{a,i}).    (19)
Clearly, the coordinates match the dimension of F a,b . It remains to show that there is no overlap between any of the items placed so far (all except the bar), recalling that intersections between tiles are already ruled out: It remains to consider (1) tile-flat, (2) tile-thin, (3) flat-flat, (4) flat-thin, and (5) thin-thin overlaps.
(1) There are no overlaps between any tile R_{a,b} and any flat item F_{c,d}: If a < c then a ≤ c − 1 and using (9) and (16) we get right(R_{a,b}) < a · L + 2a · S ≤ (c − 1) · L + (2c − 2) · S < (c − 1) · L + (2c − 1) · S = left(F_{c,d}).
If a > c then c ≤ a − 1 and, using (17) and (8), a symmetric computation gives right(F_{c,d}) < left(R_{a,b}); the remaining cases with a = c (b ≤ d or b > d) follow directly from (18) and (19). Thus, in all four cases there is no overlap, as claimed.
(2) There are no overlaps between any tile R a,b and any thin item T c,d :
Thus, in both cases there is no overlap, as claimed. Overall, we find that there are no overlap between any pair of items placed so far. It remains to add the bar to complete our packing. We already observed earlier that top(R a,k ) = N − (2k − 2) · t. Similarly, using (19) we get
top(F a,b ) = bottom(R a,b+1 ) ≤ bottom(R a,k ) ≤ top(R a,k ) ≤ N − (2k − 2) · t
for all a ∈ [k] and b ∈ [k − 1]. In the same way, using (15) we get top(T a,b ) = b · L + (2b − 1) · S ≤ k · L + (2k − 1) · S < N − (2k − 2) · t for all a ∈ [k − 1] and b ∈ [k], recalling that N = k · L + (2k − 1) · S + (2k − 1) · t. Thus, we can place the bar B of height (2k − 2) · t and width N at the top of the knapsack without causing overlaps; formally, its coordinates are as follows.
left(B) = 0, right(B) = N, bottom(B) = N − (2k − 2) · t, top(B) = N.
Overall, we have placed k^2 + 2p + 1 items without overlap. Thus, the constructed instance of 2dk is a yes-instance, as required. This completes the proof.
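The coordinates (12)-(19) for the tiles translate directly into code. The small sketch below computes the bottom-left corners of the k × k tile grid for chosen numbers y_1, . . . , y_k with sum t; thin/flat items and the bar are placed analogously. It is an illustration under the stated assumptions, not part of the paper.

```python
def tile_packing_coordinates(ys, t):
    """ys: the k chosen numbers (with repetitions) summing to t. Returns dicts mapping
    (a, b) with 1 <= a, b <= k to the left/bottom coordinates of tile R_{a,b}, where
    R_{a,b} corresponds to y_i with i = 1 + (a - b) mod k, as in the text."""
    k = len(ys)
    S = k * k * t
    L = k * k * S
    num = lambda a, b: ys[(a - b) % k]
    width = lambda a, b: L + S + 2 * t - num(a, b)
    height = lambda a, b: L + S + num(a, b)
    left, bottom = {}, {}
    for a in range(1, k + 1):
        for b in range(1, k + 1):
            left[(a, b)] = (a - 1) * S + sum(width(i, b) for i in range(1, a))
            bottom[(a, b)] = (b - 1) * S + sum(height(a, i) for i in range(1, b))
    return left, bottom
```

One can check numerically that right(R_{k,b}) = N and top(R_{a,k}) = N − (2k − 2) · t exactly when the chosen numbers sum to t, matching the computation above.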
Corollary 25. The 2dk problem is W[1]-hard.
Proof. We can use the same construction as in the proof of Theorem 24 to get a parameterized reduction from multi-subset sum to 2dk.
If the constructed instance is yes for 2dk then it is also yes for 2dkr, as the same packing of k' = k^2 + 2p + 1 items can be used. As shown earlier, the latter implies that the input instance is yes for multi-subset sum. Conversely, if the input instance is yes for multi-subset sum then we already showed that there is a feasible packing to show that the constructed instance is yes for 2dkr. Since the packing did not require rotation of any items, it is also a feasible solution showing that the instance is yes for 2dk.
Figure 4 (caption): Example showing that Lemma 15 cannot be generalized to 2dk (without rotations). The total height of the k/2 items on the bottom of the knapsack can be made arbitrarily small. Suppose that we wanted to free up an area of height f(k) · N and width N, or of height N and width f(k) · N (for some fixed function f). If the total height of the items on the bottom is smaller than f(k) · N then we would have to eliminate the k/2 items on the bottom or the k/2 items on top. Thus, we would lose a factor of 2 > 1 + ε in the approximation ratio.
| 12,445 |
1906.10982
|
2956057537
|
The area of parameterized approximation seeks to combine approximation and parameterized algorithms to obtain, e.g., (1+eps)-approximations in f(k,eps)·n^{O(1)} time where k is some parameter of the input. We obtain the following results on parameterized approximability: 1) In the maximum independent set of rectangles problem (MISR) we are given a collection of n axis parallel rectangles in the plane. Our goal is to select a maximum-cardinality subset of pairwise non-overlapping rectangles. This problem is NP-hard and also W[1]-hard [Marx, ESA'05]. The best-known polynomial-time approximation factor is O(loglog n) [Chalermsook and Chuzhoy, SODA'09] and it admits a QPTAS [Adamaszek and Wiese, FOCS'13; Chuzhoy and Ene, FOCS'16]. Here we present a parameterized approximation scheme (PAS) for MISR, i.e. an algorithm that, for any given constant eps>0 and integer k>0, in time f(k,eps)·n^{g(eps)}, either outputs a solution of size at least k/(1+eps), or declares that the optimum solution has size less than k. 2) In the (2-dimensional) geometric knapsack problem (TDK) we are given an axis-aligned square knapsack and a collection of axis-aligned rectangles in the plane (items). Our goal is to translate a maximum cardinality subset of items into the knapsack so that the selected items do not overlap. In the version of TDK with rotations (TDKR), we are allowed to rotate items by 90 degrees. Both variants are NP-hard, and the best-known polynomial-time approximation factors are 558/325+eps and 4/3+eps, resp. [, FOCS'17]. These problems admit a QPTAS for polynomially bounded item sizes [Adamaszek and Wiese, SODA'15]. We show that both variants are W[1]-hard. Furthermore, we present a PAS for TDKR. For all considered problems, getting time f(k,eps)·n^{O(1)}, rather than f(k,eps)·n^{g(eps)}, would give FPT time f'(k)·n^{O(1)} exact algorithms using eps=1/(k+1), contradicting W[1]-hardness.
|
The systematic study of parameterized approximation as a field was initiated independently by three separate publications @cite_10 @cite_5 @cite_7 . A very good introduction to the area including key definitions as well as a survey of earlier results that fit into the picture was given by Marx @cite_24 . In particular, Marx also defined a so-called that, given input @math will run for @math time and return (say, for a maximization problem) a solution of value at least @math if the optimum is at least @math . As mentioned earlier, Marx pointed out that a standard FPT-approximation scheme that finds a solution of value at least @math in time @math if @math is not interesting to study: By setting @math we can decide the decision problem @math ?'' in FPT time. Thus, such a scheme is not helpful if the decision problem is W[1]-hard and therefore unlikely to have an FPT-algorithm. Nevertheless, PASs can be useful in this case, as they imply standard FPT-approximation algorithms with ratio @math for each fixed @math despite W[1]-hardness.
|
{
"abstract": [
"Approximation algorithms and parameterized complexity are usually considered to be two separate ways of dealing with hard algorithmic problems. In this paper, our aim is to investigate how these two fields can be combined to achieve better algorithms than what any of the two theories could offer. We discuss the different ways parameterized complexity can be extended to approximation algorithms, survey results of this type and propose directions for future research.",
"Combining classical approximability questions with parameterized complexity, we introduce a theory of parameterized approximability. The main intention of this theory is to deal with the efficient approximation of small cost solutions for optimisation problems.",
"The notion of fixed-parameter approximation is introduced to investigate the approximability of optimization problems within the framework of fixed-parameter computation. This work partially aims at enhancing the world of fixed-parameter computation in parallel with the conventional theory of computation that includes both exact and approximate computations. In particular, it is proved that fixed-parameter approximability is closely related to the approximation of small-cost solutions in polynomial time. It is also demonstrated that many fixed-parameter intractable problems are not fixed-parameter approximable. On the other hand, fixed-parameter approximation appears to be a viable approach to solving some inapproximable yet important optimization problems. For instance, all problems in the class MAX SNP admit fixed-parameter approximation schemes in time O(2 @math p(n)) for any small e> 0.",
"Up to now, most work in the area of parameterized complexity has focussed on exact algorithms for decision problems. The goal of this paper is to apply parameterized ideas to approximation. We begin exploration of parameterized approximation problems, where the problem in question is a parameterized decision problem, and the required approximation factor is treated as a second parameter for the problem."
],
"cite_N": [
"@cite_24",
"@cite_5",
"@cite_10",
"@cite_7"
],
"mid": [
"1980809300",
"2773392011",
"2118980874",
"1516574938"
]
}
|
Parameterized Approximation Schemes for Independent Set of Rectangles and Geometric Knapsack
|
Our goal is to translate a maximum cardinality subset of items into the knapsack so that the selected items do not overlap. In the version of 2dk with rotations (2dkr), we are allowed to rotate items by 90 degrees. Both variants are NP-hard, and the best-known polynomial-time approximation factor is 2 + ε [Jansen and Zhang, SODA'04]. These problems admit a QPTAS for polynomially bounded item sizes [Adamaszek and Wiese, SODA'15]. We show that both variants are W[1]-hard. Furthermore, we present a PAS for 2dkr. For all considered problems, getting time f(k, ε)·n^O(1), rather than f(k, ε)·n^g(ε), would give FPT time f(k)·n^O(1) exact algorithms by setting ε = 1/(k + 1), contradicting W[1]-hardness. Instead, for each fixed ε > 0, our PASs give (1 + ε)-approximate solutions in FPT time.
For both misr and 2dkr our techniques also give rise to preprocessing algorithms that take n^g(ε) time and return a subset of at most k^g(ε) rectangles/items that contains a solution of size at least k/(1 + ε) if a solution of size k exists. This is a special case of the recently introduced notion of a polynomial-size approximate kernelization scheme [Lokshtanov et al., STOC'17].
Introduction
Approximation algorithms and parameterized algorithms are two well-established ways to deal with NP-hard problems. An α-approximation for an optimization problem is a polynomial-time algorithm that computes a feasible solution whose cost is within a factor α (that might be a function of the input size n) of the optimal cost. In particular, a polynomial-time approximation scheme (PTAS) is a (1 + ε)-approximation algorithm running in time n^g(ε), where ε > 0 is a given constant and g is some computable function. In parameterized algorithms we identify a parameter k of the input, that we informally assume to be much smaller than n. The goal here is to solve the problem optimally in fixed-parameter tractable (FPT) time f(k)·n^O(1), where f is some computable function. Recently, researchers started to combine the two notions (see, e.g., the survey by Marx [34]). The idea is to design approximation algorithms that run in FPT (rather than polynomial) time, e.g., to get (1 + ε)-approximate solutions in time f(k, ε)·n^O(1). In this paper we continue this line of research on parameterized approximation, and apply it to two fundamental rectangle packing problems.
Our results and techniques
Our focus is on parameterized approximation algorithms. Unfortunately, as observed by Marx [34], when the parameter k is the desired solution size, computing (1 + ε)-approximate solutions in time f(k, ε)·n^O(1) implies fixed-parameter tractability. Indeed, setting ε = 1/(k+1) guarantees to find an optimal solution when that value equals k ∈ N and we get time f(k, 1/(k + 1))·n^O(1) = f(k)·n^O(1). Since the considered problems are W[1]-hard (in part, this is established in our work), they are unlikely to be FPT and similarly unlikely to have such nice approximation schemes. Instead, we construct algorithms (for two maximization problems) that, given ε > 0 and an integer k, take time f(k, ε)·n^g(ε) and either return a solution of size at least k/(1 + ε) or declare that the optimum is less than k. We call such an algorithm a parameterized approximation scheme (PAS). Note that if we run such an algorithm for each k' ≤ k then we can guarantee that we compute a solution with cardinality at least min{k, OPT}/(1 + ε) where OPT denotes the size of the optimal solution. So intuitively, for each ε > 0, we have an FPT-algorithm for getting a (1 + ε)-approximate solution.
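To make this wrapping argument concrete, the following is a minimal Python sketch (not from the paper; the callable `pas` is a hypothetical stand-in for the scheme described above) that runs a PAS for every target value k' ≤ k and keeps the best solution found.

```python
def fpt_approximation(instance, k, eps, pas):
    """Run a PAS for every target value k' <= k and keep the largest solution.

    `pas(instance, kp, eps)` is assumed to either return a solution of size
    at least kp/(1+eps) or None, certifying that the optimum is below kp.
    """
    best = []
    for kp in range(1, k + 1):
        sol = pas(instance, kp, eps)
        if sol is None:
            break  # optimum < kp, so no larger target can succeed either
        if len(sol) > len(best):
            best = sol
    # By construction, len(best) >= min(k, OPT) / (1 + eps).
    return best
```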
In this paper we consider the following two geometric packing problems, and design PASs for them.
Maximum Independent Set of Rectangles.
In the maximum independent set of rectangles problem (misr) we are given a set of n axis-parallel rectangles R = {R_1, . . . , R_n} in the two-dimensional plane, where R_i is the open set of points (x_i^(1), x_i^(2)) × (y_i^(1), y_i^(2)). A feasible solution is a subset of rectangles R' ⊆ R such that for any two rectangles R, R' ∈ R' we have R ∩ R' = ∅. Our objective is to find a feasible solution of maximum cardinality |R'|. W.l.o.g. we assume that x_i^(1), y_i^(1), x_i^(2), y_i^(2) ∈ {0, . . . , 2n − 1} for each R_i ∈ R (see e.g. [1]). misr is very well-studied in the area of approximation algorithms. The problem is known to be NP-hard [24], and the current best polynomial-time approximation factor is O(log log n) for the cardinality case [11] (addressed in this paper), and O(log n / log log n) for the natural generalization with rectangle weights [12]. The cardinality case also admits a (1 + ε)-approximation with a running time of n^poly(log log(n/ε)) [15] and there is a (slower) QPTAS known for the weighted case [1]. The problem is also known to be W[1]-hard w.r.t. the number k of rectangles in the solution [33], and thus unlikely to be solvable in FPT time f(k)·n^O(1).
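For concreteness, a minimal sketch (illustrative names, not from the paper) of the feasibility test implied by this definition, i.e., pairwise disjointness of open axis-parallel rectangles:

```python
from itertools import combinations
from typing import NamedTuple

class Rect(NamedTuple):
    x1: int  # the rectangle is the open set (x1, x2) x (y1, y2)
    x2: int
    y1: int
    y2: int

def open_overlap(a: Rect, b: Rect) -> bool:
    # Two open axis-parallel rectangles intersect iff their x- and y-projections
    # overlap with positive length.
    return min(a.x2, b.x2) > max(a.x1, b.x1) and min(a.y2, b.y2) > max(a.y1, b.y1)

def is_independent(rects) -> bool:
    return all(not open_overlap(a, b) for a, b in combinations(rects, 2))
```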
In this paper we achieve the following main result:
Theorem 1. There is a PAS for misr with running time k^O(k/ε^8) · n^O(1/ε^8).
In order to achieve the above result, we combine several ideas. Our starting point is a polynomial-time construction of a k × k grid such that each rectangle in the input contains some crossing point of this grid (or we find a solution of size k directly). By applying (in a non-trivial way) a result by Frederickson [21] on planar graphs, and losing a small factor in the approximation, we define a decomposition of our grid into a collection of disjoint groups of cells. Each such group defines an independent instance of the problem, consisting of the rectangles strictly contained in the considered group of cells. Furthermore, we guarantee that each group spans only a constant number O_ε(1) of rectangles of the optimum solution. Therefore in FPT time we can guess the correct decomposition, and solve each corresponding subproblem in n^O_ε(1) time. We remark that our approach deviates substantially from prior work, and might be useful for other related problems. An adaptation of our construction also leads to the following (1 + ε)-approximate kernelization.
Theorem 2. There is an algorithm for misr that, given k ∈ N, computes in time n^O(1/ε^8) a subset of the input rectangles of size k^O(1/ε^8) that contains a solution of size at least k/(1 + ε), assuming that the input instance admits a solution of size at least k.
Similarly as for a PAS, if we run the above algorithm for each k' ≤ k we obtain a set of size k^O(1/ε^8) that contains a solution of size at least min{k, OPT}/(1 + ε). Observe that any c-approximate solution on the obtained set of rectangles is also a feasible, and c(1 + ε)-approximate, solution for the original instance if OPT ≤ k and otherwise has size at least k/(c(1 + ε)). Thus, our result is a special case of a polynomial-size approximate kernelization scheme (PSAKS) as defined in [32].
In the two-dimensional geometric knapsack problem (2dk) we are given a square knapsack of side length N and a collection of items I, where each item i is a rectangle (0, w_i) × (0, h_i) with N ≥ w_i, h_i ∈ N. The goal is to find a feasible packing of a subset I' ⊆ I of the items of maximum cardinality |I'|. Such a packing maps each item i ∈ I' into a translated rectangle (a_i, a_i + w_i) × (b_i, b_i + h_i) inside the knapsack [0, N]², such that the translated rectangles are pairwise disjoint. The W[1]-hardness result (Theorem 3) is proved by parameterized reductions from a variant of the W[1]-hard subset sum problem, where we need to determine whether a set of m positive integers contains a k-tuple of numbers with sum equal to some given value t. The difficulty for reductions to 2dk or 2dkr is of course that rectangles may be freely selected and placed (and possibly rotated) to get a feasible packing.
We complement the W[1]-hardness result by giving a PAS for the case with rotations (2dkr) and a corresponding kernelization procedure like in Theorem 2 (which also yields a PSAKS).
Theorem 4. For 2dkr there is a PAS with running time k^O(k/ε) · n^O(1/ε^3) and an algorithm that, given k ∈ N, computes in time n^O(1/ε^3) a subset of the input items of size k^O(1/ε) that contains a solution of size at least k/(1 + ε), assuming that the input instance admits a solution of size at least k.
The above result is based on a simple combination of the following two (non-trivial) building blocks: First, we show that, by losing a fraction ε of the items of a given solution of size k, it is possible to free a vertical strip of width N/k^O_ε(1) (unless the problem can be solved trivially). This is achieved by first sparsifying the solution using the above mentioned result by Frederickson [21]. If this is not sufficient we construct a vertical chain of relatively wide and tall rectangles that split the instance into a left and right side. Then we design a resource augmentation algorithm, however in an FPT sense: we can compute in FPT time a packing of cardinality k if we are allowed to use a knapsack where one side is enlarged by a factor 1 + 1/k^O_ε(1). Note that in typical resource augmentation results the packing constraint is relaxed by a constant factor while here this amount is controlled by our parameter.
A Parameterized Approximation Scheme for MISR
In this section we present a PAS and an approximate kernelization for misr. We start by showing that there exists an almost optimal solution for the problem with some helpful structural properties (Sections 2.1 and 2.2). The results are then put together in Section 2.3.
Definition of the grid
We try to construct a non-uniform grid with k rows and k columns such that each input rectangle overlaps a corner of this grid (see Figure 1). To this end, we want to compute k − 1 vertical and k − 1 horizontal lines such that each input rectangle intersects one line from each set. There are instances in which our routine fails to construct such a grid (and in fact such a grid might not even exist). For such instances, we directly find a feasible solution with k rectangles and we are done.
Lemma 5.
There is a polynomial time algorithm that either computes a set of at most k − 1 vertical lines L V with x-coordinates V 1 , . . . , V k−1 such that each input rectangle is crossed by one line in L V or computes a feasible solution with k rectangles. A symmetric statement holds for an algorithm computing a set of at most k − 1 horizontal lines L H with y-coordinates H 1 , . . . , H k−1 .
Proof. Let V_0 := 0. Assume inductively that we defined the x-coordinates V_0, V_1, . . . , V_k' such that V_1, . . . , V_k' are the x-coordinates of the first k' constructed vertical lines. We define the x-coordinate of the (k'+1)-th vertical line by V_{k'+1} := min{ x_i^(2) : R_i ∈ R, x_i^(1) ≥ V_k' } − 1/2. We continue with this construction until we reach an iteration k* such that {R_i ∈ R : x_i^(1) ≥ V_{k*−1}} = ∅. If k* ≤ k then we constructed at most k − 1 lines such that each input rectangle is intersected by one of these lines. Otherwise, assume that k* > k. Then for each iteration k' ∈ {1, . . . , k} we can find a rectangle R_{i(k')} := arg min{ x_i^(2) : R_i ∈ R, x_i^(1) ≥ V_{k'−1} }. By construction, using the fact that all coordinates are integer, for any two such rectangles R_{i(k')}, R_{i(k'')} with k' ≠ k'' we have that (x_{i(k')}^(1), x_{i(k')}^(2)) ∩ (x_{i(k'')}^(1), x_{i(k'')}^(2)) = ∅. Hence, R_{i(k')} and R_{i(k'')} are disjoint. Therefore, the rectangles R_{i(1)}, . . . , R_{i(k)} are pairwise disjoint and thus form a feasible solution.
The algorithm for constructing the horizontal lines works symmetrically.
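The greedy sweep from this proof can be written down directly. The following Python sketch (illustrative names; rectangles are (x1, x2, y1, y2) tuples with integer coordinates, as an assumption of this snippet) returns either at most k − 1 vertical line coordinates hitting every rectangle or k pairwise disjoint rectangles.

```python
def vertical_lines_or_solution(rects, k):
    """Either ('lines', coords) with at most k-1 vertical lines crossing every
    rectangle, or ('solution', rects) with k pairwise disjoint rectangles."""
    V = 0.0                       # V_0 := 0
    lines, witnesses = [], []
    while True:
        alive = [r for r in rects if r[0] >= V]      # rectangles with x1 >= V
        if not alive:
            return ('lines', lines)
        r = min(alive, key=lambda rect: rect[1])     # smallest right coordinate x2
        witnesses.append(r)
        V = r[1] - 0.5                               # next line V_{k'+1} := x2 - 1/2
        lines.append(V)
        if len(witnesses) == k:                      # k* > k: witnesses are disjoint
            return ('solution', witnesses)
```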
We apply the algorithms due to Lemma 5. If one of them finds a set of k independent rectangles then we output them and we are done. Otherwise, we obtain the sets L_V and L_H. For convenience, we define two more vertical lines with x-coordinates V_0 := 0 and V_{|L_V|+1} := 2n − 1, resp., and similarly two more horizontal lines with y-coordinates H_0 := 0 and H_{|L_H|+1} := 2n − 1, resp. We denote by G the set of grid cells formed by these lines and the lines in L_V ∪ L_H: for any two consecutive vertical lines (i.e., defined via x-coordinates V_j, V_{j+1} with j ∈ {0, . . . , |L_V|}) and two consecutive horizontal grid lines (defined via y-coordinates H_j, H_{j+1} with j ∈ {0, . . . , |L_H|})
we obtain a grid cell whose corners are the intersection of these respective lines. We interpret the grid cells as closed sets (i.e., two adjacent grid cells intersect on their boundary).
Proposition 6. Each input rectangle R i contains a corner of a grid cell of G. If a rectangle R intersects a grid cell g then it must contain a corner of g.
Groups of rectangles
Let R* denote a solution to the given instance with |R*| = k. We prove that there is a special solution R' ⊆ R* of large cardinality that we can partition into s ≤ k groups R'_1 ∪ . . . ∪ R'_s such that each group has constant size O(1/ε^8) and no grid cell can be intersected by rectangles from different groups. The remainder of this section is devoted to proving the following lemma.
Lemma 7.
There is a constant c = O(1/ε^8) such that there exists a solution R' ⊆ R* with |R'| ≥ (1 − ε)|R*| and a partition R' = R'_1 ∪ . . . ∪ R'_s with s ≤ k and |R'_j| ≤ c for each j and such that if any two rectangles in R' intersect the same grid cell g ∈ G then they are contained in the same set R'_j.
Given the solution R* we construct a planar graph G_1 = (V_1, E_1). In V_1 we have one vertex v_i for each rectangle R_i ∈ R*. We connect two vertices v_i, v_{i'} by an edge if and only if there is a grid cell g ∈ G such that R_i and R_{i'} intersect g and R_i and R_{i'} are crossed by the same horizontal or vertical line in L_V ∪ L_H, or if R_i and R_{i'} contain the top left and the bottom right corner of g, resp. Note that we do not introduce an edge if R_i and R_{i'} contain the bottom left and the top right corner of g, resp. (see Fig. 1): this way we preserve the planarity of the resulting graph; however, we will have to deal with the missing connections in a later stage. Let G'_1 be the graph obtained when applying Lemma 9 to G_1 with ε' := ε/2 and let c_1 = O((1/ε')²) = O(1/ε²) be the respective value c'. Now we would like to claim that if two rectangles R_i, R_{i'} intersect the same grid cell g ∈ G then v_i, v_{i'} are in the same component of G'_1. Unfortunately, this is not true. It might be that there is a grid cell g ∈ G such that R_i and R_{i'} contain the bottom left corner and the top right corner of g, resp., and that v_i and v_{i'} are in different components of G'_1. We fix this in a second step. We define a graph G_2 = (V_2, E_2). In V_2 we have one vertex for each connected component in G'_1. We connect two vertices w_j, w_{j'} ∈ V_2 by an edge if and only if there are two rectangles R_i, R_{i'} such that their corresponding vertices v_i, v_{i'} in V_1 belong to the connected components of G'_1 represented by w_j and w_{j'}, resp., and there is a grid cell g whose bottom left and top right corner are contained in R_i and R_{i'}, resp.

Lemma 10. The graph G_2 is planar.
Similarly as above, we apply Lemma 9 to G_2 with ε'' := ε/(2c_1) and let c_2 = O((1/ε'')²) = O(1/ε^6) denote the corresponding value of c'. Denote by G'_2 the resulting graph. We define a group R'_q for each connected component C'_q of G'_2. The set R'_q contains all rectangles R_i such that v_i is contained in a connected component C_j of G'_1 such that w_j ∈ C'_q. We define R' := ∪_q R'_q.

Lemma 11. Let R_i, R_{i'} ∈ R' be rectangles that intersect the same grid cell g ∈ G. Then there is a set R'_q such that {R_i, R_{i'}} ⊆ R'_q.
Proof. Assume that in G'_1 there is an edge connecting v_i, v_{i'}. Then the latter vertices are in the same connected component C_j of G'_1 and thus they are in the same group R'_q. Otherwise, if there is no edge connecting v_i, v_{i'} in G'_1 then R_i and R_{i'} contain the bottom left and top right corners of g, resp. Assume that v_i and v_{i'} are contained in the connected components C_j and C_{j'} of G'_1, resp. Then w_j, w_{j'} ∈ V_2, {w_j, w_{j'}} ∈ E_2 and w_j, w_{j'} are in the same connected component of G'_2. Hence, R_i, R_{i'} are in the same group R'_q.
It remains to prove that each group R'_q has constant size and that |R'| ≥ (1 − ε)|R*|.
Lemma 12. There is a constant c = O(1/ε^8) such that for each group R'_q it holds that |R'_q| ≤ c.

Proof. For each group R'_q there is a connected component C'_q of G'_2 such that R'_q contains all rectangles R_i such that v_i is contained in a connected component C_j of G'_1 and w_j ∈ C'_q. Each connected component of G'_1 contains at most c_1 = O(1/ε²) vertices of V_1 and each component of G'_2 contains at most c_2 = O(1/ε^6) vertices of V_2. Hence, |R'_q| ≤ c_1 · c_2 =: c and c = O((1/ε²)(1/ε^6)) = O(1/ε^8).

Lemma 13. We have that |R'| ≥ (1 − ε)|R*|.
Proof. At most (ε/2) · |V_1| vertices of G_1 are deleted when we construct G'_1 from G_1. Each vertex in G'_1 belongs to one connected component C_j, represented by a vertex w_j ∈ V_2. At most (ε/(2c_1)) · |V_2| vertices are deleted when we construct G'_2 from G_2. These vertices represent at most c_1 · (ε/(2c_1)) · |V_2| ≤ (ε/2) · |V_2| ≤ (ε/2) · |V_1| vertices in G'_1 (and each vertex in G_1 represents one rectangle in R*). Therefore,

|R'| ≥ |R*| − (ε/2) · |V_1| − (ε/2) · |V_1| = (1 − ε)|R*|.
This completes the proof of Lemma 7.
The algorithm
In our algorithm, we compute a solution that is at least as good as the solution R' as given by Lemma 7. For each group R'_j we define by G_j the set of grid cells that are intersected by at least one rectangle from R'_j. Since in R' each grid cell can be intersected by rectangles of only one group, we have that G_j ∩ G_q = ∅ if j ≠ q. We want to guess the sets G_j. The next lemma shows that the number of possibilities for one of those sets is polynomially bounded in k.
Lemma 14. Each G_j belongs to a set 𝒢 of cardinality at most k^O(1/ε^8) that can be computed in polynomial time.
Proof. The cells G_j intersected by R'_j are the union of all cells G(R) with R ∈ R'_j where for each rectangle R the set G(R) denotes the cells intersected by R. Each set G(R) can be specified by indicating the 4 corner cells of G(R), i.e., top-left, top-right, bottom-left, and bottom-right corner. Hence there are at most k^4 choices for each such R. The claim follows since |R'_j| = O(1/ε^8).
We hence achieve the main result of this section.
Proof of Theorem 1. Using Lemma 14, we can guess by exhaustive enumeration all the sets G_j in time k^O(k/ε^8). We obtain one independent problem for each value j ∈ {1, . . . , s} which consists of all input rectangles that are contained in G_j. For this subproblem, it suffices to compute a solution with at least |R'_j| rectangles. Since |R'_j| ≤ c = O(1/ε^8) we can do this in time n^O(1/ε^8) by complete enumeration. Thus, we solve each of the subproblems and output the union of the computed solutions. The overall running time is as in the claim. If all the computed solutions have size less than (1 − ε)k, this implies that the optimum solution is smaller than k. Otherwise we obtain a solution of size at least (1 − ε)k ≥ k/(1 + 2ε) and the claim follows by redefining ε appropriately.
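The per-group subproblems are small enough for brute force. A self-contained sketch of that step (illustrative names; rectangles as (x1, x2, y1, y2) tuples and c = O(1/ε^8) as in Lemma 12), not taken from the paper:

```python
from itertools import combinations

def disjoint(a, b):
    # open rectangles are disjoint iff their projections fail to overlap on some axis
    return min(a[1], b[1]) <= max(a[0], b[0]) or min(a[3], b[3]) <= max(a[2], b[2])

def best_small_solution(rects_in_group, c):
    """Largest independent subset of at most c rectangles within one group of
    grid cells, found by complete enumeration in n^{O(c)} time."""
    for size in range(min(c, len(rects_in_group)), 0, -1):
        for cand in combinations(rects_in_group, size):
            if all(disjoint(a, b) for a, b in combinations(cand, 2)):
                return list(cand)
    return []
```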
Essentially the same construction as above also gives an approximate kernelization algorithm as claimed in Theorem 2, see Appendix A for details.
A Parameterized Approximation Scheme for 2DKR
In this section we present a PAS and an approximate kernelization for 2dkr. W.l.o.g., we assume that k ≥ Ω(1/ε³), since otherwise we can optimally solve the problem in time n^O(1/ε³) by exhaustive enumeration. In Section 3.1 we show that, if a solution of size k exists, there is a solution of size at least (1 − ε)k in which no item intersects some horizontal strip (0, N) × (0, (1/k)^O(1/ε) · N) of the knapsack.
Freeing a Horizontal Strip
In this section, we prove the following lemma that shows the existence of a near-optimal solution that leaves a sufficiently tall empty horizontal strip in the knapsack (assuming k ≥ Ω(1/ε³)). W.l.o.g., ε ≤ 1. Since we can rotate the items by 90 degrees, we can assume w.l.o.g. that w_i ≥ h_i for each item i ∈ I.

Lemma 15. If there exists a solution of size k then there exists a solution of size at least (1 − ε)k in which no packed item intersects (0, N) × (0, (1/k)^c · N), for a proper constant c = O(1/ε).
We classify items into large and thin items. Via a shifting argument, we get the following lemma.
Lemma 16. There is an integer B ∈ {1, . . . , 8/ε} such that by losing a factor of 1 + ε in the objective we can assume that the input items are partitioned into
large items L such that h_i ≥ (1/k)^B · N (and thus also w_i ≥ (1/k)^B · N) for each item i ∈ L, and thin items T such that h_i < (1/k)^(B+2) · N for each item i ∈ T.
Let B be the integer due to Lemma 16 and we work with the resulting item classification. If |T| ≥ k then we can create a solution of size k satisfying the claim of Lemma 15 by simply stacking k thin items on top of each other: any k thin items have a total height of at most k · (1/k)^(B+2) N = (1/k)^(B+1) N ≤ (1/k)² N, using B ≥ 1. Thus, from now on assume that |T| < k.
Sparsifying large items. Our strategy is now to delete some of the large items and move the remaining items. This will allow us to free the area [0, N] × [0, (1/k)^O(1/ε) N] of the knapsack. Denote by OPT the almost optimal solution obtained by applying Lemma 16. We remove the items in OPT_T := OPT ∩ T temporarily; we will add them back later.
We construct a directed graph G = (V, A) where we have one vertex v_i ∈ V for each item i ∈ OPT_L := OPT ∩ L. We connect two vertices v_i, v_{i'} by an arc a = (v_i, v_{i'}) if and only if we can draw a vertical line segment of length at most (1/k)^B N that connects item i with item i' without intersecting any other item, such that i' lies above i, i.e., the bottom coordinate of i' is at least as large as the top coordinate of i; see Figure 2 for a sketch. We obtain the following proposition since for each edge we can draw a vertical line segment and these segments do not intersect each other.
Proposition 17. The graph G is planar.
Next, we apply Lemma 9 to G with ε' := ε. Let G' = (V', A') be the resulting graph. We remove from OPT_L all items i with v_i ∈ V \ V' and denote by OPT'_L the resulting solution. We push up all items in OPT'_L as much as possible. If now the strip (0, N) × (0, (1/k)^B N) is not intersected by any item then we can place all the items in T into the remaining space. Their total height can be at most k · (1/k)^(B+2) N ≤ (1/k)^(B+1) N and thus we can leave a strip of height (1/k)^B N − (1/k)^(B+1) N ≥ (1/k)^O(1/ε) N and width N empty. This completes the proof of Lemma 15 for this case.
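The step "push up all items as much as possible" can be implemented greedily, processing items from top to bottom. The following sketch uses illustrative dictionaries with keys x, y, w, h and is an assumption of this snippet, not code from the paper.

```python
def push_up(items, N):
    """Push every axis-parallel item upward as far as possible inside a knapsack
    of height N.  An item is only stopped by the ceiling or by an item above it
    whose x-range overlaps its own; processing from top to bottom ensures that
    all potential blockers are already at their final position."""
    for it in sorted(items, key=lambda r: r['y'] + r['h'], reverse=True):
        ceiling = N
        for other in items:
            if other is it:
                continue
            above = other['y'] >= it['y'] + it['h']
            x_overlap = other['x'] < it['x'] + it['w'] and it['x'] < other['x'] + other['w']
            if above and x_overlap:
                ceiling = min(ceiling, other['y'])
        it['y'] = ceiling - it['h']
    return items
```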
Assume next that the strip (0, N) × (0, (1/k)^B N) is intersected by some item: the following lemma implies that there is a set of c' = O(1/ε²) vertices whose items intuitively connect the top and the bottom edge of the knapsack.
Lemma 18. Assume that in OPT'_L there is an item i_1 intersecting (0, N) × (0, (1/k)^B N). Then G' contains a path v_{i_1}, v_{i_2}, . . . , v_{i_K} with K ≤ c' = O(1/ε²), such that the distance between i_K and the top edge of the knapsack is less than (1/k)^B N.

Proof. Let C denote all vertices v in G' such that there is a directed path from v_{i_1} to v in G'. The vertices in C are contained in the connected component C' in G' that contains v_{i_1}. Note that |C| ≤ |C'| ≤ c'.
We claim that C must contain a vertex v_j whose corresponding item j is closer than (1/k)^B N to the top edge of the knapsack. Otherwise, we would have been able to push up all items corresponding to vertices in C by (1/k)^B N units: first we could have pushed up all items such that their corresponding vertices have no outgoing arc, then all items such that their vertices have outgoing arcs pointing at the former set of vertices, and so on. By definition of C, there must be a path connecting v_{i_1} with v_j. This path v_{i_1}, v_{i_2}, . . . , v_{i_K} = v_j contains only vertices in C and hence its length is bounded by c'. The claim follows.
Our goal is now to remove the items i_1, . . . , i_K due to Lemma 18 and O(K) = O(1/ε²) more large items from OPT'_L. Since we can assume that k ≥ Ω(1/ε³) this will lose only a factor of 1 + O(ε) in the objective. To this end we define K + 1 deletion rectangles, see Figure 2. We place one such rectangle R_ℓ between any two consecutive items i_ℓ, i_{ℓ+1}. The height of R_ℓ equals the vertical distance between i_ℓ and i_{ℓ+1} (at most (1/k)^B N) and the width of R_ℓ equals (1/k)^B N. Since v_{i_ℓ}, v_{i_{ℓ+1}} are connected by an arc in G', we can draw a vertical line segment connecting i_ℓ with i_{ℓ+1}. We place R_ℓ such that it is intersected by this line segment. Note that for the horizontal position of R_ℓ there are still several possibilities and we choose one arbitrarily. Finally, we place a special deletion rectangle between the item i_K and the top edge of the knapsack and another special deletion rectangle between the item i_1 and the bottom edge of the knapsack. The heights of these rectangles equal the distance of i_1 and i_K with the bottom and top edge of the knapsack, resp. (which is at most (1/k)^B N), and their widths equal (1/k)^B N.
Lemma 19. Each deletion rectangle can intersect at most 4 large items in its interior. Hence, there can be only O(K) ≤ O(c') = O(1/ε²) large items intersecting a deletion rectangle in their interior.
Observe that the deletion rectangles and the items in {i_1, . . . , i_K} separate the knapsack into a left and a right part with items OPT_left and OPT_right, resp. We delete all items in {i_1, . . . , i_K} and all items intersecting the interior of a deletion rectangle. Each deletion rectangle and each item in {i_1, . . . , i_K} has a width of at least (1/k)^B N. Thus, we can move all items in OPT_left simultaneously by (1/k)^B N units to the right. After this, no large item intersects the area (0, (1/k)^B N) × (0, N). We rotate the resulting solution by 90 degrees, hence getting an empty horizontal strip (0, N) × (0, (1/k)^B N). The total height of items in OPT_T is at most k · (1/k)^(B+2) N ≤ (1/k)^(B+1) N. Therefore, we can place the items of OPT_T inside this strip and still leave a horizontal strip of height (1/k)^B N − (1/k)^(B+1) N ≥ (1/k)^O(1/ε) N and width N empty, which completes the proof of Lemma 15.
FPT-algorithm with resource augmentation
We now compute a packing that contains as many items as the solution due to Lemma 15. However, it might use the space of the entire knapsack. In particular, we use the free space in the knapsack in the latter solution in order to round the sizes of the items. In the following lemma the reader may think of k' = (1 − ε)k and k̂ = k^O(1/ε). Note that Lemma 20 yields an FPT algorithm if we are allowed to increase the size of the knapsack by a factor 1 + O(1/k̂) where k̂ is a second parameter.
In the remainder of this section, we prove Lemma 20 and we do not differentiate between large and thin items anymore. Assume that there exists a solution OPT of size k' that leaves the area [0, N] × [0, N/k'] of the knapsack empty. We want to compute a solution of size k'. We use the empty space in order to round the heights of the items in the packing of OPT to integral multiples of N/(k'·k̂). Note that in OPT an item i might be rotated. Thus, depending on this we actually want to round its height h_i or its width w_i. To this end, we define rounded heights and widths by ĥ_i := ⌈h_i/(N/(k'·k̂))⌉ · N/(k'·k̂) and ŵ_i := ⌈w_i/(N/(k'·k̂))⌉ · N/(k'·k̂) for each item i.
Lemma 21.
There exists a feasible packing for all items in OPT even if for each rotated item i we increase its width w_i to ŵ_i and for each non-rotated item i ∈ OPT we increase its height h_i to ĥ_i.
To visualize the packing due to Lemma 21 one might imagine a container of height ĥ_i and width w_i for each non-rotated item i and a container of height h_i and width ŵ_i for each rotated item i. Next, we group the items according to their values ĥ_i and ŵ_i, and within each group we discard all items that are not among the k' items of smallest width or height, resp. At most 2k' · k'·k̂ = O(k̂ · (k')²) items remain; denote them by Ī. Then, in time (k'·k̂)^O(k') we can solve the remaining problem by completely enumerating over all subsets of Ī with at most k' elements. For each enumerated set we check within the given time bounds whether its items can be packed into the knapsack (possibly via rotating some of them) by guessing sufficient auxiliary information. Therefore, if a solution of size k' for a knapsack of width N and height (1 − 1/k')N exists, then we will find a solution of size k' that fits into a knapsack of width and height N. Now the proof of Theorem 4 follows by using Lemma 15 and then applying Lemma 20 with k' = (1 − ε)k and k̂ = k^O(1/ε). The set Ī is the claimed set (which intuitively forms the approximative kernel); we compute a solution of size at least (1 − ε)k ≥ k/(1 + 2ε) and we can redefine ε appropriately.
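A sketch of the pruning step behind the set Ī (illustrative Python, not from the paper; items are (width, height) pairs and k_prime, k_hat play the roles of k' and k̂):

```python
import math

def approximate_kernel(items, N, k_prime, k_hat):
    """Keep, for every rounded height, only the k' items of smallest width, and
    symmetrically, for every rounded width, only the k' items of smallest height.
    The rounding unit is N/(k'*k_hat); at most O(k_hat*(k')**2) items survive."""
    unit = N / (k_prime * k_hat)
    by_h, by_w = {}, {}
    for idx, (w, h) in enumerate(items):
        by_h.setdefault(math.ceil(h / unit), []).append((w, idx))
        by_w.setdefault(math.ceil(w / unit), []).append((h, idx))
    survivors = set()
    for group in list(by_h.values()) + list(by_w.values()):
        for _, idx in sorted(group)[:k_prime]:
            survivors.add(idx)
    return sorted(survivors)   # indices of the kept items
```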
Hardness of Geometric Knapsack
We show that 2dk and 2dkr are both W[1]-hard for parameter k by reducing from a variant of subset sum. Recall that in subset sum we are given m positive integers x 1 , . . . , x m as well as integers t and k, and have to determine whether some k-tuple of the numbers sums to t; this is W[1]-hard with respect to k [18]. In the variant multi-subset sum it is allowed to choose numbers more than once. It is easy to verify that the proof for W[1]-hardness of subset sum due to Downey and Fellows [18] extends also to multi-subset sum. (See Lemma 23 in Section B.) In our reduction to 2dkr we prove that rotations are not required for optimal solutions, making W[1]-hardness of 2dk a free consequence.
Proof sketch for Theorem 3. We give a polynomial-time parameterized reduction from multi-subset sum to 2dkr with output parameter k' = O(k²). This establishes W[1]-hardness of 2dkr.
Observe that, for any packing of items into the knapsack, there is an upper bound of N on the total width of items that intersect any horizontal line through the knapsack, and similarly an upper bound of N for the total height of items along any vertical line. We will let the dimensions of some items depend on numbers x i from the input instance (x 1 , . . . , x m , t, k) of multi-subset sum such that, using these upper bound inequalities, a correct packing certifies that y 1 + . . . + y k = t for some k of the numbers. The key difficulty is that there is a lot of freedom in the choice of which items to pack and where in case of a no instance.
To deal with this, the items corresponding to numbers x i from the input are all almost squares and their dimensions are incomparable. Concretely, an item corresponding to some number x i has height L + S + x i and width L + S + 2t − x i ; we call such an item a tile.
(The exact values of L and S are immaterial here, but L ≫ S ≫ t > x_i holds.) Thus, when using, e.g., a tile of smaller width (i.e., larger value of x_i) it will occupy "more height" in the packing. The knapsack is only slightly larger than a k by k grid of such tiles, implying that there is little freedom for the placement. Let us also assume for the moment that no rotations are used.
Accordingly, we can specify k vertical lines that are guaranteed to intersect all tiles of any packing that uses k 2 tiles, by using pairwise distance L − 1 between them. Moreover, each line is intersecting exactly k private tiles. The same holds for a similar set of k horizontal lines. Together we get an upper bound of N for the sum of the widths (heights) along any horizontal (vertical) line. Since the numbers x i occur negatively in widths, we effectively get lower bounds for them from the horizontal lines. When the sizes of these tiles (and the auxiliary items below) are appropriately chosen, it follows that all upper bound equalities must be tight. This in turn, due to the exact choice of N , implies that there are k numbers y 1 , . . . , y k with sum equal to t.
Unsurprisingly, using just the tiles we cannot guarantee that a packing exists when given a yes-instance. This can be fixed by adding a small number of flat/thin items that can be inserted between the tiles (see Figure 3, but note that it does not match the size ratios from this proof); these have dimension L × S or S × L. Because one dimension of these items is large (namely L) they must be intersected by the above horizontal or vertical lines. Thus, they can be proved to enter the above inequalities in a uniform way, so that the proof idea goes through.
Finally, let us address the question of why we can assume that there are no rotations. This is achieved by letting the width of any tile be larger than the height of any tile, and adding a final auxiliary item of width N and small height, called the bar. To get the desired number of items in a solution packing, it can be ensured that the bar must be used as no more than k 2 tiles can fit into N × N and there is a limited supply of flat/thin items. W.l.o.g., the bar is not rotated. It can then be checked that using at least one tile in its rotated form will violate one of the upper bounds for the height. This completes the proof sketch.
Open Problems
This paper leaves several interesting open problems. A first obvious question is whether there exists a PAS also for 2dk (i.e., in the case without rotations). We remark that the algorithm from Lemma 20 can be easily adapted to the case without rotations. Unfortunately, Lemma 15 does not seem to generalize to the latter case. Indeed, there are instances in which we lose up to a factor of 2 if we require a strip of width Ω ε,k (1) · N to be emptied, see Figure 4. We also note that both our PASs work for the cardinality version of the problems: an extension to the weighted case is desirable. Unlike related results in the literature (where extension to the weighted case follows relatively easily from the cardinality case), this seems to pose several technical issues. We remark that all the problems considered in this paper might admit a PTAS in the standard sense, which would be a strict improvement on our PASs. Indeed, the existence of a QPTAS for these problems [1,2,15] suggests that such PTASs are likely to exist. However, finding those PTASs is a very well-known and long-standing problem in the area. We hope that our results can help to achieve this challenging goal.
A Omitted Proofs for Sections 2 and 3
Proof of Lemma 8. We define a planar embedding for G_1 based on the position of the rectangles in R*. Each vertex v_i ∈ V_1 is represented by a rectangle R̄_i which is defined to be the convex hull of all corners of cells of G that are contained in R_i. Let e = {v_i, v_{i'}} ∈ E_1 be an edge. Let g be a grid cell that R_i and R_{i'} both intersect. If R_i and R_{i'} intersect the same horizontal line H ∈ L_H then we represent e by a horizontal line segment ℓ connecting R̄_i and R̄_{i'} such that H contains ℓ. We do a symmetric operation if R_i and R_{i'} intersect the same vertical line V ∈ L_V. If R_i and R_{i'} contain the top left and the bottom right corner of g, resp., then we represent e by a diagonal line segment ℓ connecting R̄_i and R̄_{i'} within g.
We do this operation with each edge e ∈ E 1 . Note that in each grid cell we draw at most one diagonal line segment. By construction, no two line segments intersect and hence G 1 is planar.
Proof of Lemma 9.
A result by Frederickson [21] states that for any integer r any n-vertex planar graph can be divided into O(n/r) regions with no more than r vertices each, and O(n/√r) boundary vertices in total. We choose r := O(1/(ε')²) and then we have at most ε' · n boundary vertices in total. We define V' to be the set of non-boundary vertices.
Proof of Lemma 10. We define a planar embedding for G_2. Let w_j ∈ V_2 and assume that w_j represents a connected component C_j of G'_1. We represent C_j by drawing the rectangle R̄_i for each vertex v_i ∈ C_j (like in the proof of Lemma 8 the rectangle R̄_i is defined to be the convex hull of all corners of cells of G that are contained in R_i) and the following set of line segments (actually almost the same as the ones defined in the proof of Lemma 8). Consider two rectangles R_i, R_{i'} ∈ C_j intersecting the same grid cell g.

If R_i, R_{i'} intersect the same horizontal line H ∈ L_H then we draw a horizontal line segment ℓ connecting R̄_i and R̄_{i'} such that ℓ is a subset of H. If R_i and R_{i'} contain the top left and the bottom right corner of g, resp., then we draw a diagonal line segment ℓ connecting R̄_i and R̄_{i'} within g. This yields a connected area A_j representing C_j (and thus w_j).

Let e = {w_j, w_{j'}} ∈ E_2. We want to introduce a line segment representing e. By definition of E_2 there must be a grid cell g and two rectangles R_i, R_{i'} intersecting g whose vertices belong to different connected components of G'_1 and such that R_i and R_{i'} contain the bottom left and the top right corner of g, resp. Note that then there can be no vertex v_{i''} ∈ V_1 whose rectangle contains the top left or the bottom right corner of g: such a rectangle would be connected by an edge with both R_i and R_{i'} in G_1 and then all three rectangles R_i, R_{i'}, R_{i''} would be in the same connected component of G_1. We draw a diagonal line segment ℓ connecting R̄_i and R̄_{i'} within g; then ℓ does not intersect any area A_j for any vertex w_j ∈ V_2. Also, since we add at most one line segment per grid cell g these line segments do not intersect each other. Hence, G_2 is planar.
Proof of Theorem 2. First, we define the grid as described in Section 2.1. In case that the algorithm in Lemma 5 finds a solution of size k then we define the kernel R̄ to be this solution and we are done. Otherwise, we enumerate all possible sets G_j of the kind as described in Lemma 14, at most k^O(1/ε^8) many. Then, for each such set G_j we consider all rectangles contained in the union of G_j and we compute a feasible solution of size c for them if such a solution exists, and otherwise we compute the optimal solution. We do this by complete enumeration in time n^O(c) = n^O(1/ε^8). For each set G_j the obtained solution has size at most c = O(1/ε^8). We define the kernel R̄ to be the union over all k^O(1/ε^8) solutions obtained in this way. Hence, |R̄| ≤ k^O(1/ε^8). Also, we can guarantee that the output of our algorithm is a subset of R̄ and hence R̄ contains a (1 + ε)-approximate solution.
Proof of Lemma 16. Let OPT denote the optimal solution to the given instance. For each B' ∈ {1, . . . , 8/ε} we define I(B') := {i ∈ I | h_i ∈ [(1/k)^(B'+2) N, (1/k)^(B') N)}. For any item i ∈ I there can be at most four values of B' such that i is contained in the respective set I(B'). Hence, there must be one value B ∈ {1, . . . , 8/ε} such that |I(B) ∩ OPT| ≤ (ε/2)|OPT|. Each item i ∈ I \ I(B) is then contained in L or T. Since |I(B) ∩ OPT| ≤ (ε/2)|OPT| we lose only a factor of (1 − ε/2)^(−1) ≤ 1 + ε in the approximation ratio.
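The lemma is purely existential, but the counting behind it is easy to make explicit. A small sketch (illustrative names; `heights` is assumed to be the multiset of item heights of the solution under consideration):

```python
def choose_B(heights, N, k, eps):
    """Pick B in {1, ..., 8/eps} minimizing the number of heights that fall into
    the 'forbidden' range [(1/k)^(B+2) N, (1/k)^B N); by the averaging argument
    of Lemma 16 the minimizer discards at most an eps/2 fraction of the items."""
    def discarded(B):
        lo, hi = (1 / k) ** (B + 2) * N, (1 / k) ** B * N
        return sum(lo <= h < hi for h in heights)
    return min(range(1, int(8 / eps) + 1), key=discarded)
```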
Proof of Lemma 19. Each deletion rectangle has a height of at most (1/k)^B N and a width of exactly (1/k)^B N. Each large item has height and width at least (1/k)^B N. Therefore, each deletion rectangle can intersect with at most 4 large items in its interior (intuitively, at its 4 corners).
Proof of Lemma 21. For each item i ∈ OPT we perform the following operation. Each item i' ∈ OPT such that i' is placed underneath i (i.e., such that the y-coordinate of the top edge of i' is upper-bounded by the y-coordinate of the bottom edge of i) is moved by N/(k'·k̂) units down. If i is not rotated then we increase the height of i to ĥ_i by appending a rectangle of width w_i and height ĥ_i − h_i ≤ N/(k'·k̂) underneath i. If i is rotated then we increase the width of i to ŵ_i by appending a rectangle of width h_i and height ŵ_i − w_i ≤ N/(k'·k̂) underneath i. Since we moved down the mentioned other items before, the new (bigger) item does not intersect any other item. We do this operation for each item i ∈ OPT. In the process, we move each item down by at most (k' − 1)·N/(k'·k̂) and when we increase its height then the y-coordinate of its bottom edge decreases by at most N/(k'·k̂). Initially, the y-coordinate of the bottom edge of any item was at least N/k'. Hence, at the end the y-coordinate of the bottom edge of any item is at least N/k' − (k' − 1)·N/(k'·k̂) − N/(k'·k̂) ≥ N/k' − N/k̂ ≥ 0. Hence, all rounded items are contained in the knapsack.
Proof of Lemma 22. Consider the packing for OPT due to Lemma 21 in which we increased the height of each non-rotated item i to ĥ_i and the width of each rotated item i to ŵ_i. Suppose that there is an item i ∈ OPT that belongs to some group I_h^(j) (the items whose rounded height equals the j-th multiple of N/(k'·k̂)) but is not among the k' items of smallest width in this group, i.e., i is not contained in the kept set Ī_h^(j). Since |OPT| ≤ k' and i ∈ OPT there must be an item i' ∈ Ī_h^(j) such that i' ∉ OPT. Then we can replace i by i' since ĥ_{i'} = ĥ_i and w_{i'} ≤ w_i. We perform this operation for each set I_h^(j) and a symmetric operation for each set I_w^(j) until we obtain a solution for which the lemma holds. This solution then contains the same number of items as the initial solution OPT.
B Proofs for Section 4

Lemma 23. multi-subset sum is W[1]-hard.
Proof. Downey and Fellows [18] give a parameterized reduction from perfect code(k) to subset sum. The created instances (x_1, . . . , x_m, t, k) have the property that all numbers have digits 0 or 1 when expressed in base k + 1. Moreover, the target value t is equal to (1 · · · 1)_(k+1), i.e., every digit of t in base k + 1 equals 1. Accordingly, when any k numbers x_i sum to t there can be no carries in the addition. Thus, no two selected numbers may have a 1 in the same position. Hence, allowing to select numbers multiple times does not create spurious solutions, giving us a correct reduction from perfect code(k) to multi-subset sum.
We split the proof of Theorem 3 into two separate statements for 2dkr and 2dk.
Theorem 24. 2dkr is W[1]-hard.
Proof. We give a polynomial-time parameterized reduction from multi-subset sum to 2dkr with output parameter k' = O(k²). By Lemma 23, this establishes W[1]-hardness of 2dkr.
Construction. Let (x 1 , . . . , x m , t, k) be an instance of multi-subset sum. W.l.o.g. we may assume that 4 ≤ k ≤ m and that x i < t for all i ∈ [m]. Furthermore, as solutions may select the same integer multiple times, we may assume that all the x i are pairwise different.
Throughout, we take a knapsack to be an N by N square with coordinate (0, 0) in the bottom left corner and (N, N) at top right. The first coordinate of any point in the knapsack measures the horizontal (left-right) distance from the point to (0, 0); the second coordinate measures the vertical (up-down) distance from (0, 0). All items in the following construction are given such that their sizes reflect their intended rotation in a solution, i.e., heights refer to vertical dimensions and widths to horizontal dimensions.
We begin by constructing an instance of 2dk. Throughout, for an item R, we will use height(R) and width(R) to denote its height and width. The instance of 2dk is defined as follows:
We define constants S := k² · t and L := k² · S = k⁴ · t. (The specific values will not be important so long as k² · t ≤ S and k² · S ≤ L. Intuitively, the identifiers are chosen to mean small and large.)
The knapsack has height and width both equal to
N := k · L + (2k − 1) · S + (2k − 1) · t.   (1)
For each i ∈ [m] we construct k² items R(i, 1), . . . , R(i, k²) with height(R(i, j)) = L + S + x_i (2) and width(R(i, j)) = L + S + 2t − x_i (3).
We call these items tiles. We say that each tile R(i, ·) corresponds to the number x i from the input that it was constructed for. Since the x i are pairwise different, the x i corresponding to any tile can be easily read off from both height and width. We point out that all tiles have height strictly between L + S and L + S + t and width strictly between L + S + t and L + S + 2t. We add p := k · (k − 1) items T (1), . . . , T (p) with height L and width S. We call these the thin items. We add p items F (1), . . . , F (p) with height S and width L. We call these the flat items.
We add a single (very flat and very wide) item of height (2k − 2) · t and width N , which we call the bar.
The created instance has a target value of k' = k² + 2p + 1. (The intention is to pack all thin and all flat items, the bar, and exactly k² tiles.) This completes the construction. Clearly, all necessary computations can be performed in polynomial time. The parameter value k' = k² + 2p + 1 is upper bounded by O(k²). It remains to prove correctness.
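For reference, the whole construction can be generated mechanically. The following sketch (illustrative function name; items as (width, height) pairs) mirrors the definitions above.

```python
def build_instance(xs, t, k):
    """Construct the knapsack instance of Theorem 24 from a multi-subset sum
    instance (x_1, ..., x_m, t, k).  Returns the side length N, the item list,
    and the target number of items k' = k^2 + 2p + 1."""
    S = k ** 2 * t
    L = k ** 2 * S
    N = k * L + (2 * k - 1) * S + (2 * k - 1) * t
    items = []
    for x in xs:                                   # k^2 tiles per input number
        items += [(L + S + 2 * t - x, L + S + x)] * (k ** 2)
    p = k * (k - 1)
    items += [(S, L)] * p                          # thin items: width S, height L
    items += [(L, S)] * p                          # flat items: width L, height S
    items.append((N, (2 * k - 2) * t))             # the bar: width N, small height
    return N, items, k ** 2 + 2 * p + 1
```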
Correctness. We need to prove that the instance (x_1, . . . , x_m, t, k) is yes for multi-subset sum if and only if the constructed instance is yes for 2dkr. ⇐=: Assume that the created instance is yes for 2dkr, i.e., that it has a packing with k' = k² + 2p + 1 items and fix any such packing. Observe that the packing must contain at least k² tiles as there are only 2p + 1 items that are not tiles. We will show that the packing uses exactly k² tiles, the 2p thin/flat items, and the bar. It is useful to recall that tiles have height and width both greater than L + S no matter whether they are rotated.
Consider the effect of placing k vertical lines in the knapsack at horizontal coordinates L − 1, 2 · (L − 1), . . . , k · (L − 1). We first observe that these lines must necessarily intersect all tiles of the packing because each of them has width at least L: The distance between any two consecutive lines is L − 1, same as the distance from the left border of the knapsack to the first line. The distance from the kth vertical line to the right border is also strictly less than L:
N − k · (L − 1) = k + (2k − 1) · S + (2k − 1) · t < S + (2k − 1) · S + S = (2k + 1) · S < L
Observe that no line can intersect more than k tiles: Any two tiles of the packing may not overlap and may in particular not share their intersection with any line. Since each line has length N and each intersection with a tile has length greater than L, there can be at most k tiles intersected by any line as N < (k + 1) · L:
N = k · L + (2k − 1) · S + (2k − 1) · t < k · L + 4k · S ≤ (k + 1) · L
Overall, this means that the packing contains at most k 2 tiles: There are k lines that intersect all tiles of the packing, each of them intersecting at most k. By our earlier observation, this implies that the packing contains exactly k 2 tiles in addition to all 2p flat/thin items. Moreover, each line intersects exactly k tiles and no two lines intersect the same tile.
Let us now check how the vertical lines and the flat and thin items interact. Clearly, both flat items as well as rotated thin items have width L and height S. Accordingly, each flat and each rotated thin item must be intersected by at least one of the k vertical lines. We already know that a total length of at least k · (L + S) of each line is occupied by the k tiles that the line intersects. This leaves at most a length of N − k · (L + S) = (k − 1) · S + (2k − 1) · t < k · S for intersecting flat and rotated thin items, and allows for intersecting at most k − 1 of them. (Again, no two items can share their intersection with the line.) Thus, there are at most p = k · (k − 1) of the flat and rotated thin items in the packing.
Before analyzing the vertical lines further, let us perform an analogous argument for k horizontal lines with vertical coordinates L − 1, 2 · (L − 1), . . . , k · (L − 1) and their intersection with tiles and flat/thin items. It can be verified that each of them similarly intersects exactly k tiles and that no tile is intersected twice. The argument for flat and thin items is analogous as well, except that we now reason about rotated flat and (non-rotated) thin items, which have height L and width S; we find that there are at most p such items and that each horizontal line intersects at most k − 1 of them. Since in total there must be 2p flat and thin items, this implies that both sets of lines (horizontal and vertical) intersect p of these items each. Since flat and thin items can be swapped freely, we may assume that none of these items are rotated, and that the vertical lines intersect the p flat items and the horizontal lines intersect the p thin items.
We know now that the packing contains exactly k² tiles as well as the p flat and the p thin items. Thus, to get a total of k' = k² + 2p + 1 items, it must also contain the bar, which has height (2k − 2) · t and width N. W.l.o.g., we may assume that the bar is not rotated, or else we could rotate the entire packing. It follows that all vertical lines intersect the bar due to its width of N, which matches the width of the knapsack.
Let us now analyze both vertical and horizontal lines further. The goal is to obtain inequalities on the values x i that go into the construction of the tiles; up to now we have only used that they are fairly large. We know that each vertical line intersects k tiles, k − 1 flat items, and the bar. Let h 1 , . . . , h k denote the heights of the tiles (ordered arbitrarily) and recall that each flat item has height S while the bar has height (2k − 2) · t. Since all intersections with the line are disjoint and the line has length N (equaling the height of the knapsack), we get that
N ≥ h_1 + . . . + h_k + (k − 1) · S + (2k − 2) · t.   (4)
At this point, in order to plug in values for the h i , it is important whether any of the tiles are rotated; we will show that having at least one rotated tile causes a violation of (4). To this end, recall that (non-rotated) tiles have heights strictly between L + S and L + S + t and widths strictly between L + S + t and L + S + 2t. Thus, if at least one tile is rotated then it has height greater than L + S + t, rather than the weaker bound of greater than L + S. Using this, the right-hand side of (4) can be lower bounded by
RHS > (k − 1) · (L + S) + (L + S + t) + (k − 1) · S + (2k − 2) · t = k · L + (2k − 1) · S + (2k − 1) · t = N,
contradicting (4). Thus, none of the tiles intersected by the vertical line can be rotated.
Since each tile is intersected by a vertical line, it follows that no tiles can be rotated and we can analyze the lines using the sizes as given in (2) and (3). Let us return to replacing the values h i in (4). Recall that the height of a tile is equal to L + S + x i where x i is the corresponding integer from the input to the initial multi-subset sum instance. Thus, if the ith intersected tile corresponds to input integer y i ∈ {x 1 , . . . , x m } then by (2) we have
h i = L + S + y i .
Plugging this into (4) yields
N ≥ Σ_{i=1}^{k} (L + S + y_i) + (k − 1) · S + (2k − 2) · t = k · L + (2k − 1) · S + (2k − 2) · t + Σ_{i=1}^{k} y_i. Using N = k · L + (2k − 1) · S + (2k − 1) · t we immediately get t ≥ Σ_{i=1}^{k} y_i.   (5)
An analogous argument along each horizontal line, using the widths L + S + 2t − y_i of the k intersected tiles and the k − 1 thin items the line crosses, yields t ≤ Σ y_i for the corresponding numbers. Summing these bounds over all k vertical and all k horizontal lines shows that they must all be tight, so the k tiles intersected by any fixed vertical line correspond to numbers y_1, . . . , y_k with y_1 + . . . + y_k = t; hence the multi-subset sum instance is yes.

⟹: Now assume that the multi-subset sum instance is yes, i.e., there are numbers y_1, . . . , y_k ∈ {x_1, . . . , x_m} (possibly with repetitions) such that y_1 + . . . + y_k = t. We pack k² tiles in a grid-like fashion, together with all 2p flat/thin items and the bar. More formally, item R_{a,b} is a tile corresponding to y_i, where i = 1 + ((a − b) mod k), and accordingly has height(R_{a,b}) = L + S + y_i and width(R_{a,b}) = L + S + 2t − y_i. This yields the required property that for each a ∈ [k] the items R_{a,1}, . . . , R_{a,k} contain tiles corresponding to all numbers y_1, . . . , y_k (and correctly contain multiple copies for numbers that appear more than once). The same holds for items R_{1,b}, . . . , R_{k,b} for all b ∈ [k].
We use height(R_{i,j}) and width(R_{i,j}) to refer to height and width of tile R_{i,j}. We use left(R), right(R), top(R), and bottom(R) to specify the coordinates of any item in our packing, i.e., for the k² tiles, the 2p flat/thin items, and the bar. The coordinates for tiles are chosen as
left(R_{a,b}) = (a − 1) · S + Σ_{i=1}^{a−1} width(R_{i,b}),
right(R_{a,b}) = (a − 1) · S + Σ_{i=1}^{a} width(R_{i,b}),
bottom(R_{a,b}) = (b − 1) · S + Σ_{i=1}^{b−1} height(R_{a,i}),
top(R_{a,b}) = (b − 1) · S + Σ_{i=1}^{b} height(R_{a,i}).
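These formulas can be evaluated mechanically. A small sketch (illustrative names; `ys` is the list of chosen numbers y_1, . . . , y_k with sum t) that computes the tile coordinates and lets one check, e.g., that right(R_{k,b}) = N whenever sum(ys) == t:

```python
def tile_coordinates(ys, t, k):
    """Coordinates (left, right, bottom, top) of every tile R_{a,b}, following
    the definitions above; R_{a,b} corresponds to y_i with i = 1 + ((a-b) mod k)."""
    S = k ** 2 * t
    L = k ** 2 * S
    width = lambda a, b: L + S + 2 * t - ys[(a - b) % k]
    height = lambda a, b: L + S + ys[(a - b) % k]
    coords = {}
    for a in range(1, k + 1):
        for b in range(1, k + 1):
            left = (a - 1) * S + sum(width(i, b) for i in range(1, a))
            bottom = (b - 1) * S + sum(height(a, i) for i in range(1, b))
            coords[(a, b)] = (left, left + width(a, b), bottom, bottom + height(a, b))
    return coords
```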
Let us first check some basic properties of these coordinates:
We observe that each tile is assigned coordinates that match its size, i.e., width(R a,b ) = right(R a,b ) − left(R a,b ) and height(R a,b ) = top(R a,b ) − bottom(R a,b ).
All coordinates lie inside the knapsack. Clearly, all coordinates are non-negative and it suffices to give upper bounds for top(R a,k ) and right (R k,b ). Recall that by construction each set of tiles R a,1 , . . . , R a,k contains tiles corresponding to all numbers y 1 , . . . , y k , and same for R 1,b , . . . , R k,b . Thus we get
right(R_{k,b}) = (k − 1) · S + Σ_{i=1}^{k} width(R_{i,b}) = (k − 1) · S + Σ_{i=1}^{k} (L + S + 2t − y_i) = k · L + (2k − 1) · S + 2k · t − Σ_{i=1}^{k} y_i = k · L + (2k − 1) · S + (2k − 1) · t = N. Similarly, we get top(R_{a,k}) = (k − 1) · S + Σ_{i=1}^{k} height(R_{a,i}) = (k − 1) · S + Σ_{i=1}^{k} (L + S + y_i) = k · L + (2k − 1) · S + Σ_{i=1}^{k} y_i = k · L + (2k − 1) · S + t = N − (2k − 2) · t.
We will later use the gap of (2k − 2) · t between N and N − (2k − 2) · t to place the bar item, as its height exactly matches the gap. For any tile R a,b the possible coordinates fall into very small intervals, using that all heights and widths of tiles lie strictly between L + S and L + S + 2t. We show this explicitly for left(R a,b ):
left(R_{a,b}) = (a − 1) · S + Σ_{i=1}^{a−1} width(R_{i,b}),
left(R_{a,b}) > (a − 1) · S + Σ_{i=1}^{a−1} (L + S) = (a − 1) · L + (2a − 2) · S,
left(R_{a,b}) < (a − 1) · S + Σ_{i=1}^{a−1} (L + S + 2t) = (a − 1) · L + (2a − 2) · S + (2a − 2) · t < (a − 1) · L + (2a − 1) · S.
In this way, we get the following intervals for left(R_{a,b}), right(R_{a,b}), bottom(R_{a,b}), and top(R_{a,b}). (Note that we sacrifice the possibility of tighter bounds in order to get the same simple form of bound for top and right and for bottom and left.)
(a − 1) · L + (2a − 2) · S < left(R_{a,b}) < (a − 1) · L + (2a − 1) · S   (8)
a · L + (2a − 1) · S < right(R_{a,b}) < a · L + 2a · S   (9)
(b − 1) · L + (2b − 2) · S < bottom(R_{a,b}) < (b − 1) · L + (2b − 1) · S   (10)
b · L + (2b − 1) · S < top(R_{a,b}) < b · L + 2b · S   (11)
We can now easily verify that no two tiles R_{a,b} and R_{c,d} overlap if (a, b) ≠ (c, d). If a ≠ c then we may assume w.l.o.g. that a < c (and hence a ≤ c − 1). Using (9) and (8) we get right(R_{a,b}) < a · L + 2a · S ≤ (c − 1) · L + (2c − 2) · S < left(R_{c,d}). Thus, R_{a,b} and R_{c,d} do not overlap if a ≠ c. If instead a = c then we must have b ≠ d and, w.l.o.g., b < d (and hence b ≤ d − 1). Using (11) and (10) we have top(R_{a,b}) < b · L + 2b · S ≤ (d − 1) · L + (2d − 2) · S < bottom(R_{c,d}).
Thus, no two tiles R_{a,b} and R_{c,d} with (a, b) ≠ (c, d) overlap.
We will now specify coordinates for the p flat and the p thin items. For this purpose the intervals for coordinates of the tiles (8)-(11) are highly useful. For thin items, there will always be two adjacent tiles, to the left and to the right, and we use the intervals to get top and bottom coordinates. For flat items the situation is the opposite; there are adjacent tiles on the top and bottom sides and we use the intervals to get left and right coordinates. Recall that thin items have height L and width S, whereas flat items have height S and width L.
We denote the p thin items by T a,b with a ∈ [k − 1] and b ∈ [k]; we choose coordinates as follows:
left(T_{a,b}) = right(R_{a,b}) = (a − 1) · S + Σ_{i=1}^{a} width(R_{i,b})    (12)
right(T_{a,b}) = left(R_{a+1,b}) = a · S + Σ_{i=1}^{a} width(R_{i,b})    (13)
bottom(T_{a,b}) = (b − 1) · L + (2b − 1) · S    (14)
top(T_{a,b}) = b · L + (2b − 1) · S    (15)
Clearly, the coordinates match the dimension of T a,b . We denote the p flat items by F a,b with a ∈ [k] and b ∈ [k − 1], and we use the following coordinates:
left(F_{a,b}) = (a − 1) · L + (2a − 1) · S    (16)
right(F_{a,b}) = a · L + (2a − 1) · S    (17)
bottom(F_{a,b}) = top(R_{a,b}) = (b − 1) · S + Σ_{i=1}^{b} height(R_{a,i})    (18)
top(F_{a,b}) = bottom(R_{a,b+1}) = b · S + Σ_{i=1}^{b} height(R_{a,i})    (19)
Clearly, the coordinates match the dimension of F_{a,b}. It remains to show that there is no overlap between any of the items placed so far (all except the bar). Since intersections between tiles are already ruled out, it remains to consider (1) tile-flat, (2) tile-thin, (3) flat-flat, (4) flat-thin, and (5) thin-thin overlaps.
(1) There are no overlaps between any tile R_{a,b} and any flat item F_{c,d}: If a < c then a ≤ c − 1 and using (9) and (16) we get right(R_{a,b}) < a · L + 2a · S ≤ (c − 1) · L + (2c − 2) · S < left(F_{c,d}).
If a > c then c ≤ a − 1 and using (17) and (8) we get right(F_{c,d}) = c · L + (2c − 1) · S ≤ (a − 1) · L + (2a − 3) · S < (a − 1) · L + (2a − 2) · S < left(R_{a,b}). If a = c and b ≤ d then, by (18), top(R_{a,b}) ≤ top(R_{a,d}) = bottom(F_{a,d}), while if a = c and b ≥ d + 1 then, by (19), bottom(R_{a,b}) ≥ bottom(R_{a,d+1}) = top(F_{a,d}). Thus, in all four cases there is no overlap, as claimed.
(2) There are no overlaps between any tile R_{a,b} and any thin item T_{c,d}: If b ≠ d then the vertical intervals are disjoint; e.g., for b < d we get from (11) and (14) that top(R_{a,b}) < b · L + 2b · S ≤ (d − 1) · L + (2d − 2) · S < bottom(T_{c,d}), and the case b > d is symmetric via (10) and (15). If b = d then the horizontal intervals are disjoint, since for a ≤ c we have right(R_{a,b}) ≤ right(R_{c,d}) = left(T_{c,d}) and for a ≥ c + 1 we have left(R_{a,b}) ≥ left(R_{c+1,d}) = right(T_{c,d}).
Thus, in both cases there is no overlap, as claimed. Analogous interval arguments based on (8)-(19) also rule out (3) flat-flat, (4) flat-thin, and (5) thin-thin overlaps. Overall, we find that there is no overlap between any pair of items placed so far. It remains to add the bar to complete our packing. We already observed earlier that top(R_{a,k}) = N − (2k − 2) · t. Similarly, using (19) we get
top(F a,b ) = bottom(R a,b+1 ) ≤ bottom(R a,k ) ≤ top(R a,k ) ≤ N − (2k − 2) · t
for all a ∈ [k] and b ∈ [k − 1]. In the same way, using (15) we get top(T a,b ) = b · L + (2b − 1) · S ≤ k · L + (2k − 1) · S < N − (2k − 2) · t for all a ∈ [k − 1] and b ∈ [k], recalling that N = k · L + (2k − 1) · S + (2k − 1) · t. Thus, we can place the bar B of height (2k − 2) · t and width N at the top of the knapsack without causing overlaps; formally, its coordinates are as follows.
left(B) = 0 right(B) = N bottom(B) = N − (2k − 2) · t top(B) = N
Overall, we have placed k^2 + 2p + 1 items without overlap. Thus, the constructed instance of 2dkr is a yes-instance, as required. This completes the proof.
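To make the packing above concrete, here is a rough Python sketch (not from the paper) that instantiates the coordinates (12)-(19), the tile coordinates, and the bar for a hypothetical toy instance, and verifies that all k^2 + 2p + 1 items are pairwise non-overlapping open rectangles inside the N × N knapsack. The Latin-square assignment of selected numbers to tiles is our own choice of a valid row/column assignment.

# Instantiate the packing for a toy yes-instance and check it mechanically.
from itertools import combinations

k = 4
y = [1, 2, 3, 5]                      # selected numbers, sum = t
t = sum(y)
S = k * k * t
L = k * k * S
N = k * L + (2 * k - 1) * S + (2 * k - 1) * t

def num(a, b):                        # Latin-square assignment (0-based a, b)
    return y[(a + b) % k]

def width(a, b):  return L + S + 2 * t - num(a, b)
def height(a, b): return L + S + num(a, b)

items = []                            # rectangles as (left, right, bottom, top)
tile = {}
for a in range(k):
    for b in range(k):
        tile[(a, b)] = (a * S + sum(width(i, b) for i in range(a)),
                        a * S + sum(width(i, b) for i in range(a + 1)),
                        b * S + sum(height(a, i) for i in range(b)),
                        b * S + sum(height(a, i) for i in range(b + 1)))
        items.append(tile[(a, b)])

for a in range(k - 1):                # thin items T_{a,b}, width S, height L
    for b in range(k):
        items.append((tile[(a, b)][1], tile[(a + 1, b)][0],
                      b * L + (2 * b + 1) * S, (b + 1) * L + (2 * b + 1) * S))

for a in range(k):                    # flat items F_{a,b}, width L, height S
    for b in range(k - 1):
        items.append((a * L + (2 * a + 1) * S, (a + 1) * L + (2 * a + 1) * S,
                      tile[(a, b)][3], tile[(a, b + 1)][2]))

items.append((0, N, N - (2 * k - 2) * t, N))   # the bar

def overlap(p, q):                    # open rectangles: touching is allowed
    return p[0] < q[1] and q[0] < p[1] and p[2] < q[3] and q[2] < p[3]

assert all(0 <= l < r <= N and 0 <= bo < tp <= N for (l, r, bo, tp) in items)
assert not any(overlap(p, q) for p, q in combinations(items, 2))
print(len(items), "items packed without overlap")   # k^2 + 2p + 1 = 41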
Corollary 25. The 2dk problem is W[1]-hard.
Proof. We can use the same construction as in the proof of Theorem 24 to get a parameterized reduction from multi-subset sum to 2dk.
If the constructed instance is yes for 2dk then it is also yes for 2dkr, as the same packing of k' = k^2 + 2p + 1 items can be used. As shown earlier, the latter implies that the input instance is yes for multi-subset sum. Conversely, if the input instance is yes for multi-subset sum then we already showed that there is a feasible packing witnessing that the constructed instance is yes for 2dkr. Since that packing does not require rotating any item, it is also a feasible solution showing that the instance is yes for 2dk.
Figure caption: Example showing that Lemma 15 cannot be generalized to 2dk (without rotations). The total height of the k/2 items on the bottom of the knapsack can be made arbitrarily small. Suppose that we wanted to free up an area of height f(k) · N and width N, or of height N and width f(k) · N (for some fixed function f). If the total height of the items on the bottom is smaller than f(k) · N then we would have to eliminate the k/2 items on the bottom or the k/2 items on top. Thus, we would lose a factor of 2 > 1 + ε in the approximation ratio.
| 12,445 |
1906.10982
|
2956057537
|
The area of parameterized approximation seeks to combine approximation and parameterized algorithms to obtain, e.g., (1+eps)-approximations in f(k,eps)n^ O(1) time where k is some parameter of the input. We obtain the following results on parameterized approximability: 1) In the maximum independent set of rectangles problem (MISR) we are given a collection of n axis parallel rectangles in the plane. Our goal is to select a maximum-cardinality subset of pairwise non-overlapping rectangles. This problem is NP-hard and also W[1]-hard [Marx, ESA'05]. The best-known polynomial-time approximation factor is O(loglog n) [Chalermsook and Chuzhoy, SODA'09] and it admits a QPTAS [Adamaszek and Wiese, FOCS'13; Chuzhoy and Ene, FOCS'16]. Here we present a parameterized approximation scheme (PAS) for MISR, i.e. an algorithm that, for any given constant eps>0 and integer k>0, in time f(k,eps)n^ g(eps) , either outputs a solution of size at least k (1+eps), or declares that the optimum solution has size less than k. 2) In the (2-dimensional) geometric knapsack problem (TDK) we are given an axis-aligned square knapsack and a collection of axis-aligned rectangles in the plane (items). Our goal is to translate a maximum cardinality subset of items into the knapsack so that the selected items do not overlap. In the version of TDK with rotations (TDKR), we are allowed to rotate items by 90 degrees. Both variants are NP-hard, and the best-known polynomial-time approximation factors are 558 325+eps and 4 3+eps, resp. [, FOCS'17]. These problems admit a QPTAS for polynomially bounded item sizes [Adamaszek and Wiese, SODA'15]. We show that both variants are W[1]-hard. Furthermore, we present a PAS for TDKR. For all considered problems, getting time f(k,eps)n^ O(1) , rather than f(k,eps)n^ g(eps) , would give FPT time f'(k)n^ O(1) exact algorithms using eps=1 (k+1), contradicting W[1]-hardness.
|
A central goal of parameterized approximation is to settle the status of problems like Dominating Set or Clique , which are hard to approximate and also parameterized intractable. Recently, Chen and Lin @cite_28 made important progress by showing that Dominating Set admits no constant-factor approximation with running time @math unless @math . Generally, for problems without exact FPT-algorithms, the goal is to find out whether one can beat inapproximability bounds by allowing FPT-time in some parameter; see e.g. @cite_32 @cite_9 @cite_21 @cite_13 @cite_6 @cite_17 @cite_3 @cite_4 @cite_8 ).
|
{
"abstract": [
"We consider the k-Center problem and some generalizations. For k-Center a set of kcenter vertices needs to be found in a graph G with edge lengths, such that the distance from any vertex ofi¾?G to its nearest center is minimized. This problem naturally occurs in transportation networks, and therefore we model the inputs as graphs with bounded highway dimension, as proposed by [ICALP 2011]. We show both approximation and fixed-parameter hardness results, and how to overcome them using fixed-parameter approximations. In particular, we prove that for any @math computing a @math -approximation is W[2]-hard for parameter k, and NP-hard for graphs with highway dimension @math . The latter does not rule out fixed-parameter @math -approximations for the highway dimension parameteri¾?h, but implies that such an algorithm must have at least doubly exponential running time in h if it exists, unless the ETH fails. On the positive side, we show how to get below the approximation factor ofi¾?2 by combining the parameters k andi¾?h: we develop a fixed-parameter 3 2-approximation with running time @math . We also provide similar fixed-parameter approximations for the weightedk-Center and @math -Partition problems, which generalize k-Center.",
"",
"We prove that there is no fpt-algorithm that can approximate the dominating set problem with any constant ratio, unless FPT = W[1]. Our hardness reduction is built on the second author's recent W[1]-hardness proof of the biclique problem [25]. This yields, among other things, a proof without the PCP machinery that the classical dominating set problem has no polynomial time constant approximation under the exponential time hypothesis.",
"In this paper, we consider the problem of maximizing the spread of influence through a social network. Given a graph with a threshold value thr(v) attached to each vertex v, the spread of influence is modeled as follows: A vertex v becomes ''active'' (influenced) if at least thr(v) of its neighbors are active. In the corresponding optimization problem the objective is then to find a fixed number k of vertices to activate such that the number of activated vertices at the end of the propagation process is maximum. We show that this problem is strongly inapproximable in time f(k)@?n^O^(^1^), for some function f, even for very restrictive thresholds. In the case that the threshold of each vertex equals its degree, we prove that the problem is inapproximable in polynomial time and it becomes r(n)-approximable in time f(k)@?n^O^(^1^), for some function f, for any strictly increasing function r. Moreover, we show that the decision version parameterized by k is W[1]-hard but becomes fixed-parameter tractable on bounded degree graphs.",
"",
"We motivate and describe a new parameterized approximation paradigm which studies the interaction between performance ratio and running time for any parametrization of a given optimization problem. As a key tool, we introduce the concept of α-shrinking transformation, for α≥1. Applying such transformation to a parameterized problem instance decreases the parameter value, while preserving approximation ratio of α (or α-fidelity). For example, it is well-known that Vertex Cover cannot be approximated within any constant factor better than 2 [24] (under usual assumptions). Our parameterized α-approximation algorithm for k-Vertex Cover, parameterized by the solution size, has a running time of 1.273(2−α)k, where the running time of the best FPT algorithm is 1.273k [10]. Our algorithms define a continuous tradeoff between running times and approximation ratios, allowing practitioners to appropriately allocate computational resources. Moving even beyond the performance ratio, we call for a new type of approximative kernelization race. Our α-shrinking transformations can be used to obtain kernels which are smaller than the best known for a given problem. For the Vertex Cover problem we obtain a kernel size of 2(2−α)k. The smaller \"α-fidelity\" kernels allow us to solve exactly problem instances more efficiently, while obtaining an approximate solution for the original instance. We show that such transformations exist for several fundamental problems, including Vertex Cover, d-Hitting Set, Connected Vertex Cover and Steiner Tree. We note that most of our algorithms are easy to implement and are therefore practical in use.",
"Combining the techniques of approximation algorithms and parameterized complexity has long been considered a promising research area, but relatively few results are currently known. In this paper we study the parameterized approximability of a number of problems which are known to be hard to solve exactly when parameterized by treewidth or clique-width. Our main contribution is to present a natural randomized rounding technique that extends well-known ideas and can be used for both of these widths. Applying this very generic technique we obtain approximation schemes for a number of problems, evading both polynomial-time inapproximability and parameterized intractability bounds.",
"Given a graph G cellularly embedded on a surface Σ of genus g, a cut graph is a subgraph of G such that cutting Σ along G yields a topological disk. We provide a fixed parameter tractable approximation scheme for the problem of computing the shortest cut graph, that is, for any e > 0, we show how to compute a (1 + e) approximation of the shortest cut graph in time f(e, g)n3.",
"The Degree Anonymity problem arises in the context of combinatorial graph anonymization. It asks, given a graph (G ) and two integers (k ) and (s ), whether (G ) can be made k-anonymous with at most (s ) modifications. Here, a graph is k-anonymous if the graph contains for every vertex at least (k-1 ) other vertices of the same degree. Complementing recent investigations on its computational complexity, we show that this problem is very hard when studied from the viewpoints of approximation as well as parameterized approximation. In particular, for the optimization variant where one wants to minimize the number of either edge or vertex deletions there is no factor- (n^ 1- ) approximation running in polynomial time unless P = NP, for any constant (0 < 1 ). For the variant where one wants to maximize (k ) and the number (s ) of either edge or vertex deletions is given, there is no factor- (n^ 1 2 - ) approximation running in time (f(s) n^ O(1) ) unless W[1] = FPT, for any constant (0 < 1 2 ) and any function (f ). On the positive side, we classify the general decision version as fixed-parameter tractable with respect to the combined parameter solution size (s ) and maximum degree.",
"In this paper we design polynomial time approximation algorithms for several parameterized problems such as Odd Cycle Transversal, Almost 2-SAT, Above Guarantee Vertex Cover and Deletion q-Horn Backdoor Set Detection. Our algorithm proceeds by first reducing the given instance to an instance of the d-Skew-Symmetric Multicut problem, and then computing an approximate solution to this instance. Our algorithm runs in polynomial time and returns a solution whose size is bounded quadratically in the parameter, which in this case is the solution size, thus making it useful as a first step in the design of kernelization algorithms. Our algorithm relies on the properties of a combinatorial object called (L,k)-set, which builds on the notion of (L,k)-components, defined by a subset of the authors to design a linear time FPT algorithm for Odd Cycle Transversal. The main motivation behind the introduction of this object in their work was to replicate in skew-symmetric graphs, the properties of important separators introduced by Marx [2006] which has played a very significant role in several recent parameterized tractability results. Combined with the algorithm of Reed, Smith and Vetta, our algorithm also gives an alternate linear time algorithm for Odd Cycle Transversal. Furthermore, our algorithm significantly improves upon the running time of the earlier parameterized approximation algorithm for Deletion q-Horn Backdoor Set Detection which had an exponential dependence on the parameter; albeit at a small cost in the approximation ratio."
],
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_21",
"@cite_32",
"@cite_6",
"@cite_3",
"@cite_13",
"@cite_17"
],
"mid": [
"2293564405",
"",
"2963456765",
"2119218916",
"",
"1749371477",
"1566367487",
"1849364625",
"165003445",
"2270571990"
]
}
|
Parameterized Approximation Schemes for Independent Set of Rectangles and Geometric Knapsack
|
a maximum cardinality subset of items into the knapsack so that the selected items do not overlap. In the version of 2dk with rotations (2dkr), we are allowed to rotate items by 90 degrees. Both variants are NP-hard, and the best-known polynomial-time approximation factor is 2 + ε [Jansen and Zhang, SODA'04]. These problems admit a QPTAS for polynomially bounded item sizes [Adamaszek and Wiese, SODA'15]. We show that both variants are W[1]-hard. Furthermore, we present a PAS for 2dkr. For all considered problems, getting time f(k, ε) · n^{O(1)}, rather than f(k, ε) · n^{g(ε)}, would give FPT time f'(k) · n^{O(1)} exact algorithms by setting ε = 1/(k + 1), contradicting W[1]-hardness. Instead, for each fixed ε > 0, our PASs give (1 + ε)-approximate solutions in FPT time.
For both misr and 2dkr our techniques also give rise to preprocessing algorithms that take n^{g(ε)} time and return a subset of at most k^{g(ε)} rectangles/items that contains a solution of size at least k/(1 + ε) if a solution of size k exists. This is a special case of the recently introduced notion of a polynomial-size approximate kernelization scheme [Lokshtanov et al., STOC'17].
Introduction
Approximation algorithms and parameterized algorithms are two well-established ways to deal with NP-hard problems. An α-approximation for an optimization problem is a polynomial-time algorithm that computes a feasible solution whose cost is within a factor α (that might be a function of the input size n) of the optimal cost. In particular, a polynomial-time approximation scheme (PTAS) is a (1 + ε)-approximation algorithm running in time n^{g(ε)}, where ε > 0 is a given constant and g is some computable function. In parameterized algorithms we identify a parameter k of the input, that we informally assume to be much smaller than n. The goal here is to solve the problem optimally in fixed-parameter tractable (FPT) time f(k) · n^{O(1)}, where f is some computable function. Recently, researchers started to combine the two notions (see, e.g., the survey by Marx [34]). The idea is to design approximation algorithms that run in FPT (rather than polynomial) time, e.g., to get (1 + ε)-approximate solutions in time f(k, ε) · n^{O(1)}. In this paper we continue this line of research on parameterized approximation, and apply it to two fundamental rectangle packing problems.
Our results and techniques
Our focus is on parameterized approximation algorithms. Unfortunately, as observed by Marx [34], when the parameter k is the desired solution size, computing (1 + ε)-approximate solutions in time f(k, ε) · n^{O(1)} implies fixed-parameter tractability. Indeed, setting ε = 1/(k+1) guarantees to find an optimal solution whenever the optimal value equals k ∈ N, and we get time f(k, 1/(k + 1)) · n^{O(1)} = f(k) · n^{O(1)}. Since the considered problems are W[1]-hard (in part, this is established in our work), they are unlikely to be FPT and similarly unlikely to have such nice approximation schemes. Instead, we construct algorithms (for two maximization problems) that, given ε > 0 and an integer k, take time f(k, ε) · n^{g(ε)} and either return a solution of size at least k/(1 + ε) or declare that the optimum is less than k. We call such an algorithm a parameterized approximation scheme (PAS). Note that if we run such an algorithm for each k' ≤ k then we can guarantee that we compute a solution with cardinality at least min{k, OPT}/(1 + ε), where OPT denotes the size of the optimal solution. So intuitively, for each ε > 0, we have an FPT-algorithm for getting a (1 + ε)-approximate solution.
In this paper we consider the following two geometric packing problems, and design PASs for them.
Maximum Independent Set of Rectangles.
In the maximum independent set of rectangles problem (misr) we are given a set of n axis-parallel rectangles R = {R_1, . . . , R_n} in the two-dimensional plane, where R_i is the open set of points (x_i^{(1)}, x_i^{(2)}) × (y_i^{(1)}, y_i^{(2)}). A feasible solution is a subset of rectangles R' ⊆ R such that for any two rectangles R_i, R_j ∈ R' we have R_i ∩ R_j = ∅. Our objective is to find a feasible solution of maximum cardinality |R'|. W.l.o.g. we assume that x_i^{(1)}, y_i^{(1)}, x_i^{(2)}, y_i^{(2)} ∈ {0, . . . , 2n − 1} for each R_i ∈ R (see e.g. [1]). misr is very well-studied in the area of approximation algorithms. The problem is known to be NP-hard [24], and the current best polynomial-time approximation factor is O(log log n) for the cardinality case [11] (addressed in this paper), and O(log n/ log log n) for the natural generalization with rectangle weights [12]. The cardinality case also admits a (1 + ε)-approximation with a running time of n^{poly(log log(n/ε))} [15] and there is a (slower) QPTAS known for the weighted case [1]. The problem is also known to be W[1]-hard w.r.t. the number k of rectangles in the solution [33], and thus unlikely to be solvable in FPT time f(k) · n^{O(1)}.
In this paper we achieve the following main result:
Theorem 1. There is a PAS for misr with running time k^{O(k/ε^8)} · n^{O(1/ε^8)}.
In order to achieve the above result, we combine several ideas. Our starting point is a polynomial-time construction of a k × k grid such that each rectangle in the input contains some crossing point of this grid (or we find a solution of size k directly). By applying (in a non-trivial way) a result by Frederickson [21] on planar graphs, and losing a small factor in the approximation, we define a decomposition of our grid into a collection of disjoint groups of cells. Each such group defines an independent instance of the problem, consisting of the rectangles strictly contained in the considered group of cells. Furthermore, we guarantee that each group spans only a constant number O_ε(1) of rectangles of the optimum solution. Therefore in FPT time we can guess the correct decomposition, and solve each corresponding subproblem in n^{O_ε(1)} time. We remark that our approach deviates substantially from prior work, and might be useful for other related problems. An adaptation of our construction also leads to the following (1 + ε)-approximative kernelization.
Theorem 2. There is an algorithm for misr that, given k ∈ N, computes in time n^{O(1/ε^8)} a subset of the input rectangles of size k^{O(1/ε^8)} that contains a solution of size at least k/(1 + ε), assuming that the input instance admits a solution of size at least k.
Similarly as for a PAS, if we run the above algorithm for each k' ≤ k we obtain a set of size k^{O(1/ε^8)} that contains a solution of size at least min{k, OPT}/(1 + ε). Observe that any c-approximate solution on the obtained set of rectangles is also a feasible, and c(1 + ε)-approximate, solution for the original instance if OPT ≤ k, and otherwise has size at least k/(c(1 + ε)). Thus, our result is a special case of a polynomial-size approximate kernelization scheme (PSAKS) as defined in [32].
Two-Dimensional Geometric Knapsack.
In the (two-dimensional) geometric knapsack problem (2dk) we are given a square knapsack of side length N ∈ N and a collection of items I = {1, . . . , n}, where each item i ∈ I is an axis-parallel open rectangle (0, w_i) × (0, h_i) with integers 1 ≤ w_i, h_i ≤ N. The goal is to find a feasible packing of a subset I' ⊆ I of the items of maximum cardinality |I'|. Such a packing maps each item i ∈ I' into a new translated rectangle (a_i, a_i + w_i) × (b_i, b_i + h_i) inside the knapsack, such that the translated rectangles are pairwise disjoint. We show that both 2dk and 2dkr are W[1]-hard (Theorem 3). The result is proved by parameterized reductions from a variant of the W[1]-hard subset sum problem, where we need to determine whether a set of m positive integers contains a k-tuple of numbers with sum equal to some given value t. The difficulty for reductions to 2dk or 2dkr is of course that rectangles may be freely selected and placed (and possibly rotated) to get a feasible packing.
We complement the W[1]-hardness result by giving a PAS for the case with rotations (2dkr) and a corresponding kernelization procedure like in Theorem 2 (which also yields a PSAKS).
Theorem 4. For 2dkr there is a PAS with running time k^{O(k/ε)} · n^{O(1/ε^3)} and an algorithm that, given k ∈ N, computes in time n^{O(1/ε^3)} a subset of the input items of size k^{O(1/ε)} that contains a solution of size at least k/(1 + ε), assuming that the input instance admits a solution of size at least k.
The above result is based on a simple combination of the following two (non-trivial) building blocks: First, we show that, by losing a fraction ε of the items of a given solution of size k, it is possible to free a vertical strip of width N/k^{O_ε(1)} (unless the problem can be solved trivially). This is achieved by first sparsifying the solution using the above mentioned result by Frederickson [21]. If this is not sufficient we construct a vertical chain of relatively wide and tall rectangles that split the instance into a left and a right side. Then we design a resource augmentation algorithm, however in an FPT sense: we can compute in FPT time a packing of cardinality k if we are allowed to use a knapsack where one side is enlarged by a factor 1 + 1/k^{O_ε(1)}. Note that in typical resource augmentation results the packing constraint is relaxed by a constant factor, while here this amount is controlled by our parameter.
A Parameterized Approximation Scheme for MISR
In this section we present a PAS and an approximate kernelization for misr. We start by showing that there exists an almost optimal solution for the problem with some helpful structural properties (Sections 2.1 and 2.2). The results are then put together in Section 2.3.
Definition of the grid
We try to construct a non-uniform grid with k rows and k columns such that each input rectangle overlaps a corner of this grid (see Figure 1). To this end, we want to compute k − 1 vertical and k − 1 horizontal lines such that each input rectangle intersects one line from each set. There are instances in which our routine fails to construct such a grid (and in fact such a grid might not even exist). For such instances, we directly find a feasible solution with k rectangles and we are done.
Lemma 5.
There is a polynomial time algorithm that either computes a set of at most k − 1 vertical lines L V with x-coordinates V 1 , . . . , V k−1 such that each input rectangle is crossed by one line in L V or computes a feasible solution with k rectangles. A symmetric statement holds for an algorithm computing a set of at most k − 1 horizontal lines L H with y-coordinates H 1 , . . . , H k−1 .
Proof. Let V_0 := 0. Assume inductively that we have defined the x-coordinates V_0, V_1, . . . , V_{k'} such that V_1, . . . , V_{k'} are the x-coordinates of the first k' constructed vertical lines. We define the x-coordinate of the (k'+1)-th vertical line by V_{k'+1} := min{ x_i^{(2)} : R_i ∈ R, x_i^{(1)} ≥ V_{k'} } − 1/2. We continue with this construction until we reach an iteration k* such that {R_i ∈ R : x_i^{(1)} ≥ V_{k*−1}} = ∅.
If k* ≤ k then we constructed at most k − 1 lines such that each input rectangle is intersected by one of these lines. Otherwise, assume that k* > k. Then for each iteration k' ∈ {1, . . . , k} we can find a rectangle R_{i(k')} := argmin{ x_i^{(2)} : R_i ∈ R, x_i^{(1)} ≥ V_{k'−1} }. By construction, using the fact that all coordinates are integer, for any two such rectangles R_{i(k')}, R_{i(k'')} with k' ≠ k'' we have that (x_{i(k')}^{(1)}, x_{i(k')}^{(2)}) ∩ (x_{i(k'')}^{(1)}, x_{i(k'')}^{(2)}) = ∅. Hence, R_{i(k')} and R_{i(k'')} are disjoint. Therefore, the rectangles R_{i(1)}, . . . , R_{i(k)} are pairwise disjoint and thus form a feasible solution.
The algorithm for constructing the horizontal lines works symmetrically.
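The vertical-line construction from this proof is easy to implement. The following Python sketch is our own illustration (the rectangle encoding and variable names are assumptions, not the paper's); it returns either the x-coordinates of at most k − 1 covering lines or k pairwise disjoint rectangles.

# Greedy construction from Lemma 5 for the vertical lines.
# Rectangles are open boxes (x1, x2, y1, y2) with integer coordinates.
def vertical_lines_or_solution(rects, k):
    V = [0]                      # V_0 := 0
    witnesses = []               # rectangle chosen in each iteration
    while True:
        candidates = [r for r in rects if r[0] >= V[-1]]
        if not candidates:
            break
        r = min(candidates, key=lambda r: r[1])   # minimal right coordinate
        witnesses.append(r)
        V.append(r[1] - 0.5)     # new vertical line strictly inside r
        if len(witnesses) >= k:  # k pairwise disjoint rectangles found
            return ("solution", witnesses)
    return ("lines", V[1:])      # at most k - 1 lines; each rectangle is crossed

# toy usage with hypothetical rectangles
rects = [(0, 2, 0, 3), (1, 4, 2, 5), (3, 6, 0, 2), (5, 8, 1, 4)]
print(vertical_lines_or_solution(rects, k=3))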
We apply the algorithms due to Lemma 5. If one of them finds a set of k independent rectangles then we output them and we are done. Otherwise, we obtain the sets L_V and L_H. For convenience, we define two more vertical lines with x-coordinates V_0 := 0 and V_{|L_V|+1} := 2n − 1, resp., and similarly two more horizontal lines with y-coordinates H_0 := 0 and H_{|L_H|+1} := 2n − 1, resp. We denote by G the set of grid cells formed by these lines and the lines in L_V ∪ L_H: for any two consecutive vertical lines (i.e., defined via x-coordinates V_j, V_{j+1} with j ∈ {0, . . . , |L_V|}) and two consecutive horizontal grid lines (defined via y-coordinates H_{j'}, H_{j'+1} with j' ∈ {0, . . . , |L_H|}) we obtain a grid cell whose corners are the intersections of these respective lines. We interpret the grid cells as closed sets (i.e., two adjacent grid cells intersect on their boundary).
Proposition 6. Each input rectangle R i contains a corner of a grid cell of G. If a rectangle R intersects a grid cell g then it must contain a corner of g.
Groups of rectangles
Let R* denote a solution to the given instance with |R*| = k. We prove that there is a special solution R' ⊆ R* of large cardinality that we can partition into s ≤ k groups R'_1 ∪ . . . ∪ R'_s such that each group has constant size O(1/ε^8) and no grid cell can be intersected by rectangles from different groups. The remainder of this section is devoted to proving the following lemma.
Lemma 7.
There is a constant c = O(1/ε^8) such that there exists a solution R' ⊆ R* with |R'| ≥ (1 − ε)|R*| and a partition R' = R'_1 ∪ . . . ∪ R'_s with s ≤ k and |R'_j| ≤ c for each j, such that if any two rectangles in R' intersect the same grid cell g ∈ G then they are contained in the same set R'_j.
Given the solution R* we construct a planar graph G_1 = (V_1, E_1). In V_1 we have one vertex v_i for each rectangle R_i ∈ R*. We connect two vertices v_i, v_{i'} by an edge if and only if there is a grid cell g ∈ G such that R_i and R_{i'} intersect g and R_i and R_{i'} are crossed by the same horizontal or vertical line in L_V ∪ L_H, or if R_i and R_{i'} contain the top left and the bottom right corner of g, resp. Note that we do not introduce an edge if R_i and R_{i'} contain the bottom left and the top right corner of g, resp. (see Fig. 1): this way we preserve the planarity of the resulting graph (Lemma 8, proved in the appendix), however we will have to deal with the missing connections in a later stage. Let G'_1 be the graph obtained when applying Lemma 9 to G_1 with ε' := ε/2 and let c_1 = O((1/ε')^2) be the respective value c'. (Lemma 9, proved in the appendix, states that from any n-vertex planar graph we can delete at most ε' · n vertices such that each connected component of the resulting graph has at most c' = O(1/(ε')^2) vertices.) Now we would like to claim that if two rectangles R_i, R_{i'} intersect the same grid cell g ∈ G then v_i, v_{i'} are in the same component of G'_1. Unfortunately, this is not true. It might be that there is a grid cell g ∈ G such that R_i and R_{i'} contain the bottom left corner and the top right corner of g, resp., and that v_i and v_{i'} are in different components of G'_1. We fix this in a second step. We define a graph G_2 = (V_2, E_2). In V_2 we have one vertex for each connected component in G'_1. We connect two vertices w_j, w_{j'} ∈ V_2 by an edge if and only if there are two rectangles R_i, R_{i'} such that their corresponding vertices v_i, v_{i'} in V_1 belong to the connected components of G'_1 represented by w_j and w_{j'}, resp., and there is a grid cell g whose bottom left and top right corner are contained in R_i and R_{i'}, resp.
Lemma 10. The graph G_2 is planar.
Similarly as above, we apply Lemma 9 to G_2 with ε' := ε/(2c_1) and let c_2 = O((1/ε')^2) = O(1/ε^6) denote the corresponding value of c'. Denote by G'_2 the resulting graph. We define a group R'_q for each connected component C'_q of G'_2. The set R'_q contains all rectangles R_i such that v_i is contained in a connected component C_j of G'_1 with w_j ∈ C'_q. We define R' := ∪_q R'_q.
Lemma 11. Let R_i, R_{i'} ∈ R' be rectangles that intersect the same grid cell g ∈ G. Then there is a set R'_q such that {R_i, R_{i'}} ⊆ R'_q.
Proof. Assume that in G_1 there is an edge connecting v_i, v_{i'}. Then the latter vertices are in the same connected component C_j of G'_1 and thus they are in the same group R'_q. Otherwise, if there is no edge connecting v_i, v_{i'} in G_1 then R_i and R_{i'} contain the bottom left and top right corners of g, resp. Assume that v_i and v_{i'} are contained in the connected components C_j and C_{j'} of G'_1, resp. Then w_j, w_{j'} ∈ V_2, {w_j, w_{j'}} ∈ E_2 and w_j, w_{j'} are in the same connected component of G'_2. Hence, R_i, R_{i'} are in the same group R'_q.
It remains to prove that each group R'_q has constant size and that |R'| ≥ (1 − ε)|R*|.
Lemma 12. There is a constant c = O(1/ε^8) such that for each group R'_q it holds that |R'_q| ≤ c.
Proof. For each group R'_q there is a connected component C'_q of G'_2 such that R'_q contains all rectangles R_i such that v_i is contained in a connected component C_j of G'_1 with w_j ∈ C'_q. Each connected component of G'_1 contains at most c_1 = O(1/ε^2) vertices of V_1 and each component of G'_2 contains at most c_2 = O(1/ε^6) vertices of V_2. Hence, |R'_q| ≤ c_1 · c_2 =: c and c = O((1/ε^2)(1/ε^6)) = O(1/ε^8).
Lemma 13. We have that |R'| ≥ (1 − ε)|R*|.
Proof. At most ε/2 · |V_1| vertices of G_1 are deleted when we construct G'_1 from G_1. Each vertex in G'_1 belongs to one connected component C_j, represented by a vertex w_j ∈ G_2. At most ε/(2c_1) · |V_2| vertices are deleted when we construct G'_2 from G_2. These vertices represent at most c_1 · ε/(2c_1) · |V_2| ≤ ε/2 · |V_2| ≤ ε/2 · |V_1| vertices in G_1 (and each vertex in G_1 represents one rectangle in R*). Therefore,
|R'| ≥ |R*| − ε/2 · |V_1| − ε/2 · |V_1| = (1 − ε)|R*|.
This completes the proof of Lemma 7.
The algorithm
In our algorithm, we compute a solution that is at least as good as the solution R' given by Lemma 7. For each group R'_j we denote by G_j the set of grid cells that are intersected by at least one rectangle from R'_j. Since in R' each grid cell can be intersected by rectangles of only one group, we have that G_j ∩ G_q = ∅ if j ≠ q. We want to guess the sets G_j. The next lemma shows that the number of possibilities for one of those sets is polynomially bounded in k.
Lemma 14. Each G_j belongs to a set G of cardinality at most k^{O(1/ε^8)} that can be computed in polynomial time.
Proof. The cells G_j intersected by R'_j are the union of all cells G(R) with R ∈ R'_j, where for each rectangle R the set G(R) denotes the cells intersected by R. Each set G(R) can be specified by indicating the 4 corner cells of G(R), i.e., its top-left, top-right, bottom-left, and bottom-right corner. Hence there are at most k^4 choices for each such R. The claim follows since |R'_j| = O(1/ε^8).
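To illustrate the counting argument, here is a small Python sketch (with hypothetical grid lines and rectangle values of our own choosing) that computes the block of grid cells G(R) intersected by an axis-parallel rectangle; since this block is contiguous, it is determined by its extreme column and row indices, i.e., by its corner cells.

# G(R) is a contiguous block of grid cells, so it is fixed by its bottom-left
# and top-right corner cells (hence at most k^4 candidates per rectangle).
from bisect import bisect_left, bisect_right

def cell_block(rect, xs, ys):
    # rect = (x1, x2, y1, y2) open; xs, ys = sorted grid line coordinates
    x1, x2, y1, y2 = rect
    cols = range(bisect_right(xs, x1) - 1, bisect_left(xs, x2))
    rows = range(bisect_right(ys, y1) - 1, bisect_left(ys, y2))
    return [(c, r) for c in cols for r in rows]

xs = [0, 1.5, 5.5, 9]          # vertical grid lines (toy values)
ys = [0, 2.5, 6.5, 9]          # horizontal grid lines (toy values)
print(cell_block((1, 6, 0, 3), xs, ys))   # corner cells: min/max column and row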
We hence achieve the main result of this section.
Proof of Theorem 1. Using Lemma 14, we can guess by exhaustive enumeration all the sets G_j in time k^{O(k/ε^8)}. We obtain one independent problem for each value j ∈ {1, . . . , s} which consists of all input rectangles that are contained in G_j. For this subproblem, it suffices to compute a solution with at least |R'_j| rectangles. Since |R'_j| ≤ c = O(1/ε^8) we can do this in time n^{O(1/ε^8)} by complete enumeration. Thus, we solve each of the subproblems and output the union of the computed solutions. The overall running time is as in the claim. If all the computed solutions together have size less than (1 − ε)k, this implies that the optimum solution is smaller than k. Otherwise we obtain a solution of size at least (1 − ε)k ≥ k/(1 + 2ε) and the claim follows by redefining ε appropriately.
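The per-group subproblem is solved by plain enumeration. A minimal Python sketch of this step (our own encoding; rectangles as open boxes, c as the group-size bound) could look as follows; it mirrors the n^{O(1/ε^8)} complete enumeration used in the proof.

# Brute-force solver for one subproblem: find a largest set of pairwise
# disjoint rectangles of size at most c among the rectangles of one group.
from itertools import combinations

def disjoint(r, s):                  # open rectangles (x1, x2, y1, y2)
    return r[1] <= s[0] or s[1] <= r[0] or r[3] <= s[2] or s[3] <= r[2]

def best_small_independent_set(rects, c):
    for size in range(min(c, len(rects)), 0, -1):
        for subset in combinations(rects, size):
            if all(disjoint(r, s) for r, s in combinations(subset, 2)):
                return list(subset)
    return []

# toy usage with hypothetical rectangles
group = [(0, 2, 0, 2), (1, 3, 1, 3), (2, 4, 0, 2), (0, 2, 2, 4)]
print(best_small_independent_set(group, c=3))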
Essentially the same construction as above also gives an approximate kernelization algorithm as claimed in Theorem 2, see Appendix A for details.
A Parameterized Approximation Scheme for 2DKR
In this section we present a PAS and an approximate kernelization for 2dkr. W.l.o.g., we assume that k ≥ Ω(1/ε^3), since otherwise we can optimally solve the problem in time n^{O(1/ε^3)} by exhaustive enumeration. In Section 3.1 we show that, if a solution of size k exists, there is a solution of size at least (1 − ε)k in which no item intersects some horizontal strip (0, N) × (0, (1/k)^{O(1/ε)} · N).
Freeing a Horizontal Strip
In this section, we prove the following lemma that shows the existence of a near-optimal solution that leaves a sufficiently tall empty horizontal strip in the knapsack (assuming k ≥ Ω(1/ε^3)). W.l.o.g., ε ≤ 1. Since we can rotate the items by 90 degrees, we can assume w.l.o.g. that w_i ≥ h_i for each item i ∈ I.
Lemma 15. If there is a solution of size k, then there is also a solution of size at least (1 − ε)k in which no packed item intersects (0, N) × (0, (1/k)^c · N), for a proper constant c = O(1/ε).
We classify items into large and thin items. Via a shifting argument, we get the following lemma.
Lemma 16. There is an integer B ∈ {1, . . . , 8/ε} such that by losing a factor of 1 + ε in the objective we can assume that the input items are partitioned into
large items L such that h_i ≥ (1/k)^B · N (and thus also w_i ≥ (1/k)^B · N) for each item i ∈ L,
thin items T such that h_i < (1/k)^{B+2} · N for each item i ∈ T.
Let B be the integer due to Lemma 16 and we work with the resulting item classification. If |T| ≥ k then we can create a solution of size k satisfying the claim of Lemma 15 by simply stacking k thin items on top of each other: any k thin items have a total height of at most k · (1/k)^{B+2} · N ≤ (1/k)^2 · N. Thus, from now on assume that |T| < k.
Sparsifying large items. Our strategy is now to delete some of the large items and move the remaining items. This will allow us to free the area [0, N] × [0, (1/k)^{O(1/ε)} · N] of the knapsack. Denote by OPT the almost optimal solution obtained by applying Lemma 16. We remove the items in OPT_T := OPT ∩ T temporarily; we will add them back later.
We construct a directed graph G = (V, A) where we have one vertex v_i ∈ V for each item i ∈ OPT_L := OPT ∩ L. We connect two vertices v_i, v_{i'} by an arc a = (v_i, v_{i'}) if and only if we can draw a vertical line segment of length at most (1/k)^B · N that connects item i with item i' without intersecting any other item, such that i' lies above i, i.e., the bottom coordinate of i' is at least as large as the top coordinate of i, see Figure 2 for a sketch. We obtain the following proposition since for each edge we can draw a vertical line segment and these segments do not intersect each other.
Proposition 17. The graph G is planar.
Next, we apply Lemma 9 to G with ε' := ε. Let G' = (V', A') be the resulting graph. We remove from OPT_L all items i ∈ V \ V' and denote by OPT'_L the resulting solution. We push up all items in OPT'_L as much as possible. If now the strip (0, N) × (0, (1/k)^B · N) is not intersected by any item then we can place all the items in T into the remaining space. Their total height can be at most k · (1/k)^{B+2} · N ≤ (1/k)^{B+1} · N and thus we can leave a strip of height (1/k)^B · N − (1/k)^{B+1} · N ≥ (1/k)^{O(1/ε)} · N and width N empty. This completes the proof of Lemma 15 for this case.
Assume next that the strip (0, N) × (0, (1/k)^B · N) is intersected by some item: the following lemma implies that there is a set of c' = O(1/ε^2) vertices whose items intuitively connect the top and the bottom edge of the knapsack.
Lemma 18. Assume that in OPT'_L there is an item i_1 intersecting (0, N) × (0, (1/k)^B · N). Then G' contains a path v_{i_1}, v_{i_2}, . . . , v_{i_K} with K ≤ c' = O(1/ε^2), such that the distance between i_K and the top edge of the knapsack is less than (1/k)^B · N.
Proof. Let C denote the set of all vertices v in G' such that there is a directed path from v_{i_1} to v in G'. The vertices in C are contained in the connected component C' in G' that contains v_{i_1}. Note that |C| ≤ |C'| ≤ c'.
We claim that C must contain a vertex v_j whose corresponding item j is closer than (1/k)^B · N to the top edge of the knapsack. Otherwise, we would have been able to push up all items corresponding to vertices in C by (1/k)^B · N units: first we could have pushed up all items such that their corresponding vertices have no outgoing arc, then all items such that their vertices have outgoing arcs pointing at the former set of vertices, and so on. By definition of C, there must be a path connecting v_{i_1} with v_j. This path v_{i_1}, v_{i_2}, . . . , v_{i_K} = v_j contains only vertices in C and hence its length is bounded by c'. The claim follows.
Our goal is now to remove the items i_1, . . . , i_K due to Lemma 18 and O(K) = O(1/ε^2) more large items from OPT'_L. Since we can assume that k ≥ Ω(1/ε^3) this will lose only a factor of 1 + O(ε) in the objective. To this end we define K + 1 deletion rectangles, see Figure 2. We place one such rectangle R between any two consecutive items i_ℓ, i_{ℓ+1}. The height of R equals the vertical distance between i_ℓ and i_{ℓ+1} (at most (1/k)^B · N) and the width of R equals (1/k)^B · N. Since v_{i_ℓ}, v_{i_{ℓ+1}} are connected by an arc in G', we can draw a vertical line segment connecting i_ℓ with i_{ℓ+1}. We place R such that it is intersected by this line segment. Note that for the horizontal position of R there are still several possibilities and we choose one arbitrarily. Finally, we place a special deletion rectangle between the item i_K and the top edge of the knapsack and another special deletion rectangle between the item i_1 and the bottom edge of the knapsack. The heights of these rectangles equal the distance of i_1 and i_K to the bottom and top edge of the knapsack, resp. (which is at most (1/k)^B · N), and their widths equal (1/k)^B · N. They are placed such that they touch the bottom edge of i_1 and the top edge of i_K, resp.
Lemma 19. Each deletion rectangle can intersect at most 4 large items in its interior. Hence, there can be only O(K) ≤ O(c') = O(1/ε^2) large items intersecting a deletion rectangle in their interior.
Observe that the deletion rectangles and the items in {i_1, . . . , i_K} separate the knapsack into a left and a right part with items OPT_left and OPT_right, resp. We delete all items in {i_1, . . . , i_K} and all items intersecting the interior of a deletion rectangle. Each deletion rectangle and each item in {i_1, . . . , i_K} has a width of at least (1/k)^B · N. Thus, we can move all items in OPT_left simultaneously by (1/k)^B · N units to the right. After this, no large item intersects the area (0, (1/k)^B · N) × (0, N). We rotate the resulting solution by 90 degrees, hence getting an empty horizontal strip (0, N) × (0, (1/k)^B · N). The total height of items in OPT_T is at most k · (1/k)^{B+2} · N ≤ (1/k)^{B+1} · N. Therefore, we can add the items of OPT_T back into this strip, stacked on top of each other, and still leave a horizontal strip of height (1/k)^B · N − (1/k)^{B+1} · N ≥ (1/k)^{O(1/ε)} · N and width N empty. This completes the proof of Lemma 15.
FPT-algorithm with resource augmentation
We now compute a packing that contains as many items as the solution due to Lemma 15. However, it might use the space of the entire knapsack. In particular, we use the free space in the knapsack in the latter solution in order to round the sizes of the items. In the following lemma the reader may think of k' = (1 − ε)k and k̂ = k^{O(1/ε)}. Note that Lemma 20 yields an FPT algorithm if we are allowed to increase the size of the knapsack by a factor 1 + O(1/k̂), where k̂ is a second parameter.
In the remainder of this section, we prove Lemma 20 and we do not differentiate between large and thin items anymore. Assume that there exists a solution OPT' of size k' that leaves the area [0, N] × [0, N/k̂] of the knapsack empty. We want to compute a solution of size k'. We use the empty space in order to round the heights of the items in the packing of OPT' to integral multiples of N/(k'k̂). Note that in OPT' an item i might be rotated. Thus, depending on this we actually want to round its height h_i or its width w_i. To this end, we define rounded heights and widths by ĥ_i := ⌈h_i / (N/(k'k̂))⌉ · N/(k'k̂) and ŵ_i := ⌈w_i / (N/(k'k̂))⌉ · N/(k'k̂) for each item i.
Lemma 21. There exists a feasible packing for all items in OPT' even if for each rotated item i we increase its width w_i to ŵ_i and for each non-rotated item i ∈ OPT' we increase its height h_i to ĥ_i.
To visualize the packing due to Lemma 21 one might imagine a container of height ĥ_i and width w_i for each non-rotated item i and a container of height h_{i'} and width ŵ_{i'} for each rotated item i'. Next, we group the items according to their values ĥ_i and ŵ_i, obtaining groups I_h^{(j)} and I_w^{(j)}, and from each group we discard the items that are not among the k' items with smallest width and height, resp. At most 2k' · k'k̂ = O(k̂(k')^2) items remain; denote them by Ī. Then, in time (k'k̂)^{O(k')} we can solve the remaining problem by completely enumerating over all subsets of Ī with at most k' elements. For each enumerated set we check within the given time bounds whether its items can be packed into the knapsack (possibly via rotating some of them) by guessing sufficient auxiliary information. Therefore, if a solution of size k' for a knapsack of width N and height (1 − 1/k̂)N exists, then we will find a solution of size k' that fits into a knapsack of width and height N. Now the proof of Theorem 4 follows by using Lemma 15 and then applying Lemma 20 with k' = (1 − ε)k and k̂ = k^{O(1/ε)}. The set Ī is the claimed set (which intuitively forms the approximative kernel), we compute a solution of size at least (1 − ε)k ≥ k/(1 + 2ε) and we can redefine ε appropriately.
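As a concrete illustration of the rounding and pruning step, here is a Python sketch with made-up parameter values and items; the names kp and khat stand for k' and k̂, and the item encoding is our own assumption.

# Round heights/widths to multiples of N/(k'*khat) and keep, per rounded
# value, only the k' thinnest (resp. shortest) items, as in the kernel I-bar.
from math import ceil
from collections import defaultdict

def build_kernel(items, N, kp, khat):
    # items: list of (w, h) pairs
    unit = N / (kp * khat)
    round_up = lambda x: ceil(x / unit) * unit

    by_h = defaultdict(list)          # groups I_h^{(j)} by rounded height
    by_w = defaultdict(list)          # groups I_w^{(j)} by rounded width
    for idx, (w, h) in enumerate(items):
        by_h[round_up(h)].append(idx)
        by_w[round_up(w)].append(idx)

    keep = set()
    for group in by_h.values():       # k' smallest widths per rounded height
        keep.update(sorted(group, key=lambda i: items[i][0])[:kp])
    for group in by_w.values():       # k' smallest heights per rounded width
        keep.update(sorted(group, key=lambda i: items[i][1])[:kp])
    return keep                       # at most 2 * k' * (k'*khat) item indices

# toy usage with hypothetical items
items = [(30, 12), (28, 12), (31, 11), (10, 40), (9, 41)]
print(sorted(build_kernel(items, N=100, kp=2, khat=5)))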
Hardness of Geometric Knapsack
We show that 2dk and 2dkr are both W[1]-hard for parameter k by reducing from a variant of subset sum. Recall that in subset sum we are given m positive integers x 1 , . . . , x m as well as integers t and k, and have to determine whether some k-tuple of the numbers sums to t; this is W[1]-hard with respect to k [18]. In the variant multi-subset sum it is allowed to choose numbers more than once. It is easy to verify that the proof for W[1]-hardness of subset sum due to Downey and Fellows [18] extends also to multi-subset sum. (See Lemma 23 in Section B.) In our reduction to 2dkr we prove that rotations are not required for optimal solutions, making W[1]-hardness of 2dk a free consequence.
Proof sketch for Theorem 3. We give a polynomial-time parameterized reduction from multi-subset sum to 2dkr with output parameter k' = O(k^2). This establishes W[1]-hardness of 2dkr.
Observe that, for any packing of items into the knapsack, there is an upper bound of N on the total width of items that intersect any horizontal line through the knapsack, and similarly an upper bound of N for the total height of items along any vertical line. We will let the dimensions of some items depend on numbers x i from the input instance (x 1 , . . . , x m , t, k) of multi-subset sum such that, using these upper bound inequalities, a correct packing certifies that y 1 + . . . + y k = t for some k of the numbers. The key difficulty is that there is a lot of freedom in the choice of which items to pack and where in case of a no instance.
To deal with this, the items corresponding to numbers x i from the input are all almost squares and their dimensions are incomparable. Concretely, an item corresponding to some number x i has height L + S + x i and width L + S + 2t − x i ; we call such an item a tile.
(The exact values of L and S are immaterial here, but L ≫ S ≫ t > x_i holds.) Thus, when using, e.g., a tile of smaller width (i.e., smaller value of x_i) it will occupy "more height" in the packing. The knapsack is only slightly larger than a k by k grid of such tiles, implying that there is little freedom for the placement. Let us also assume for the moment, that no rotations are used.
Accordingly, we can specify k vertical lines that are guaranteed to intersect all tiles of any packing that uses k^2 tiles, by using pairwise distance L − 1 between them. Moreover, each line is intersecting exactly k private tiles. The same holds for a similar set of k horizontal lines. Together we get an upper bound of N for the sum of the widths (heights) along any horizontal (vertical) line. Since the numbers x_i occur negatively in widths, we effectively get lower bounds for them from the horizontal lines. When the sizes of these tiles (and the auxiliary items below) are appropriately chosen, it follows that all upper bound inequalities must be tight. This in turn, due to the exact choice of N, implies that there are k numbers y_1, . . . , y_k with sum equal to t.
Unsurprisingly, using just the tiles we cannot guarantee that a packing exists when given a yes-instance. This can be fixed by adding a small number of flat/thin items that can be inserted between the tiles (see Figure 3, but note that it does not match the size ratios from this proof); these have dimension L × S or S × L. Because one dimension of these items is large (namely L) they must be intersected by the above horizontal or vertical lines. Thus, they can be proved to enter the above inequalities in a uniform way, so that the proof idea goes through.
Finally, let us address the question of why we can assume that there are no rotations. This is achieved by letting the width of any tile be larger than the height of any tile, and adding a final auxiliary item of width N and small height, called the bar. To get the desired number of items in a solution packing, it can be ensured that the bar must be used as no more than k^2 tiles can fit into N × N and there is a limited supply of flat/thin items. W.l.o.g., the bar is not rotated. It can then be checked that using at least one tile in its rotated form will violate one of the upper bounds for the height. This completes the proof sketch.
Open Problems
This paper leaves several interesting open problems. A first obvious question is whether there exists a PAS also for 2dk (i.e., in the case without rotations). We remark that the algorithm from Lemma 20 can be easily adapted to the case without rotations. Unfortunately, Lemma 15 does not seem to generalize to the latter case. Indeed, there are instances in which we lose up to a factor of 2 if we require a strip of width Ω_{ε,k}(1) · N to be emptied, see Figure 4. We also note that both our PASs work for the cardinality version of the problems: an extension to the weighted case is desirable. Unlike related results in the literature (where extension to the weighted case follows relatively easily from the cardinality case), this seems to pose several technical issues. We remark that all the problems considered in this paper might admit a PTAS in the standard sense, which would be a strict improvement on our PASs. Indeed, the existence of a QPTAS for these problems [1,2,15] suggests that such PTASs are likely to exist. However, finding those PTASs is a very well-known and long-standing problem in the area. We hope that our results can help to achieve this challenging goal.
A Omitted Proofs for Sections 2 and 3
Proof of Lemma 8. We define a planar embedding for G_1 based on the position of the rectangles in R*. Each vertex v_i ∈ V_1 is represented by a rectangle R̄_i which is defined to be the convex hull of all corners of cells of G that are contained in R_i. Let e = {v_i, v_{i'}} ∈ E_1 be an edge. Let g be a grid cell that R_i and R_{i'} both intersect. If R_i and R_{i'} intersect the same horizontal line H ∈ L_H then we represent e by a horizontal line segment ℓ connecting R̄_i and R̄_{i'} such that H contains ℓ. We do a symmetric operation if R_i and R_{i'} intersect the same vertical line V ∈ L_V. If R_i and R_{i'} contain the top left and the bottom right corner of g, resp., then we represent e by a diagonal line segment connecting R̄_i and R̄_{i'} within g.
We do this operation with each edge e ∈ E_1. Note that in each grid cell we draw at most one diagonal line segment. By construction, no two line segments intersect and hence G_1 is planar.
Proof of Lemma 9. A result by Frederickson [21] states that for any integer r any n-vertex planar graph can be divided into O(n/r) regions with no more than r vertices each, and O(n/√r) boundary vertices in total. We choose r := O(1/(ε')^2) and then we have at most ε' · n boundary vertices in total. We define V' to be the set of non-boundary vertices.
Proof of Lemma 10. We define a planar embedding for G_2. Let w_j ∈ V_2 and assume that w_j represents a connected component C_j of G'_1. We represent C_j by drawing the rectangle R̄_i for each vertex v_i ∈ C_j (like in the proof of Lemma 8, the rectangle R̄_i is defined to be the convex hull of all corners of cells of G that are contained in R_i) and the following set of line segments (actually almost the same as the ones defined in the proof of Lemma 8). Consider two rectangles R_i, R_{i'} ∈ C_j intersecting the same grid cell g.
If R_i, R_{i'} intersect the same horizontal line H ∈ L_H then we draw a horizontal line segment ℓ connecting R̄_i and R̄_{i'} such that ℓ is a subset of H. If R_i and R_{i'} contain the top left and the bottom right corner of g, resp., then we draw a diagonal line segment connecting R̄_i and R̄_{i'} within g. This yields a connected area A_j representing C_j (and thus w_j).
Let e = {w_j, w_{j'}} ∈ E_2. We want to introduce a line segment ℓ representing e. By definition of E_2 there must be a grid cell g and two rectangles R_i, R_{i'} intersecting g whose vertices belong to different connected components of G'_1 and such that R_i and R_{i'} contain the bottom left and the top right corner of g, resp. Note that then there can be no vertex v_{i''} ∈ V_1 whose rectangle contains the top left or the bottom right corner of g: such a rectangle would be connected by an edge with both R_i and R_{i'} in G_1 and then all three rectangles R_i, R_{i'}, R_{i''} would be in the same connected component of G_1. We draw a diagonal line segment ℓ connecting R̄_i and R̄_{i'} within g; then ℓ does not intersect any area A_{j''} for any vertex w_{j''} ∈ V_2. Also, since we add at most one line segment per grid cell g these line segments do not intersect each other. Hence, G_2 is planar.
Proof of Theorem 2. First, we define the grid as described in Section 2.1. In case that the algorithm in Lemma 5 finds a solution of size k then we define the kernel R̄ to be this solution and we are done. Otherwise, we enumerate all possible sets G_j of the kind described in Lemma 14, at most k^{O(1/ε^8)} many. Then, for each such set G_j we consider all rectangles contained in the union of G_j and we compute a feasible solution of size c for them if such a solution exists, and otherwise we compute the optimal solution. We do this by complete enumeration in time n^{O(c)} = n^{O(1/ε^8)}. For each set G_j the obtained solution has size at most c = O(1/ε^8). We define the kernel R̄ to be the union over all k^{O(1/ε^8)} solutions obtained in this way. Hence, |R̄| ≤ k^{O(1/ε^8)}. Also, we can guarantee that the output of our algorithm is a subset of R̄ and hence R̄ contains a (1 + ε)-approximative solution.
Proof of Lemma 16. Let OPT denote the optimal solution to the given instance. For each B' ∈ {1, . . . , 8/ε} we define I(B') := {i ∈ I | h_i ∈ [(1/k)^{B'+2} · N, (1/k)^{B'} · N)}.
For any item i ∈ I there can be at most four values of B' such that i is contained in the respective set I(B'). Hence, there must be one value B ∈ {1, . . . , 8/ε} such that |I(B) ∩ OPT| ≤ ε/2 · |OPT|. Each item i ∈ I \ I(B) is then contained in L or T. Since |I(B) ∩ OPT| ≤ ε/2 · |OPT| we lose only a factor of (1 − ε/2)^{−1} ≤ 1 + ε in the approximation ratio.
Proof of Lemma 19. Each deletion rectangle has a height of at most (1/k)^B · N and a width of exactly (1/k)^B · N. Each large item has height and width at least (1/k)^B · N. Therefore, each deletion rectangle can intersect with at most 4 large items in its interior (intuitively, at its 4 corners).
Proof of Lemma 21. For each item i ∈ OPT' we perform the following operation. Each item i' ∈ OPT' such that i' is placed underneath i (i.e., such that the y-coordinate of the top edge of i' is upper-bounded by the y-coordinate of the bottom edge of i) is moved by N/(k'k̂) units down. If i is not rotated then we increase the height of i to ĥ_i by appending a rectangle of width w_i and height ĥ_i − h_i ≤ N/(k'k̂) underneath i. If i is rotated then we increase the width of i to ŵ_i by appending a rectangle of width h_i and height ŵ_i − w_i ≤ N/(k'k̂) underneath i. Since we moved down the mentioned other items before, the new (bigger) item does not intersect any other item. We do this operation for each item i ∈ OPT'. In the process, we move each item down by at most (k' − 1) · N/(k'k̂) and when we increase its height then the y-coordinate of its bottom edge decreases by at most N/(k'k̂). Initially, the y-coordinate of the bottom edge of any item was at least N/k̂. Hence, at the end the y-coordinate of the bottom edge of any item is at least N/k̂ − (k' − 1) · N/(k'k̂) − N/(k'k̂) ≥ N/k̂ − N/k̂ ≥ 0. Hence, all rounded items are contained in the knapsack.
Proof of Lemma 22.
Consider the packing for OPT' due to Lemma 21 in which we increased the height of each non-rotated item i to ĥ_i and the width of each rotated item i to ŵ_i. Suppose that there is a set I_h^{(j)} containing a non-rotated item i' ∈ OPT' with i' ∉ Ī_h^{(j)}, where Ī_h^{(j)} denotes the latter set of kept items. Since |OPT'| ≤ k' and i' ∈ OPT', there must be an item i ∈ Ī_h^{(j)} such that i ∉ OPT'. Then we can replace i' by i since ĥ_i = ĥ_{i'} and w_i ≤ w_{i'}. We perform this operation for each set I_h^{(j)} and a symmetric operation for each set I_w^{(j)} until we obtain a solution for which the lemma holds. This solution then contains the same number of items as the initial solution OPT'.
B Proofs for Section 4
Lemma 23. multi-subset sum is W[1]-hard.
Proof. Downey and Fellows [18] give a parameterized reduction from perfect code(k) to subset sum. The created instances (x_1, . . . , x_m, t, k) have the property that all numbers have digits 0 or 1 when expressed in base k + 1. Moreover, the target value t equals 11 . . . 1 in base k + 1, i.e., all of its digits are 1. Accordingly, when any k numbers x_i sum to t there can be no carries in the addition. Thus, no two selected numbers may have a 1 in the same position. Hence, allowing to select numbers multiple times does not create spurious solutions, giving us a correct reduction from perfect code(k) to multi-subset sum.
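The no-carry argument is easy to test mechanically. The following Python sketch (with a hypothetical tiny instance of our own making) checks that k chosen numbers with 0/1 digits in base k+1 sum to the all-ones target exactly when every digit position is covered exactly once.

# Check the carry-free addition argument in base k+1 for 0/1-digit numbers.
def digits(x, base, length):
    return [(x // base**j) % base for j in range(length)]

def sums_to_all_ones(chosen, k, positions):
    base = k + 1
    target = sum(base**j for j in range(positions))     # 11...1 in base k+1
    ok_sum = sum(chosen) == target
    # carry-freeness: every digit position is hit by exactly one chosen number
    per_pos = [sum(digits(x, base, positions)[j] for x in chosen)
               for j in range(positions)]
    return ok_sum and all(d == 1 for d in per_pos)

# toy instance: k = 3, numbers over 3 digit positions in base 4
k, positions = 3, 3
print(sums_to_all_ones([1, 4, 16], k, positions))   # True: disjoint positions
print(sums_to_all_ones([1, 5, 16], k, positions))   # False: position 0 reused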
We split the proof of Theorem 3 into two separate statements for 2dkr and 2dk.
Theorem 24. 2dkr is W[1]-hard.
Proof. We give a polynomial-time parameterized reduction from multi-subset sum to 2dkr with output parameter k' = O(k^2). By Lemma 23, this establishes W[1]-hardness of 2dkr.
Construction. Let (x 1 , . . . , x m , t, k) be an instance of multi-subset sum. W.l.o.g. we may assume that 4 ≤ k ≤ m and that x i < t for all i ∈ [m]. Furthermore, as solutions may select the same integer multiple times, we may assume that all the x i are pairwise different.
Throughout, we take a knapsack to be an N by N square with coordinate (0, 0) in the bottom left corner and (N, N) at top right. The first coordinate of any point in the knapsack measures the horizontal (left-right) distance from the point to (0, 0); the second coordinate measures the vertical (up-down) distance from (0, 0). All items in the following construction are given such that their sizes reflect their intended rotation in a solution, i.e., heights refer to vertical dimensions and widths to horizontal dimensions.
We begin by constructing an instance of 2dk. Throughout, for an item R, we will use height(R) and width(R) to denote its height and width. The instance of 2dk is defined as follows:
We define constants S := k 2 · t and L := k 2 · S = k 4 · t.
(The specific values will not be important so long as k 2 · t ≤ S and k 2 · S ≤ L. Intuitively, the identifiers are chosen to mean small and large.)
The knapsack has height and width both equal to
N := k · L + (2k − 1) · S + (2k − 1) · t. (1)
For each i ∈ [m] we construct k 2 items R(i, 1), . . . , R(i, k 2 ) with
height(R(i, j)) = L + S + x i , (2)
width(R(i, j)) = L + S + 2t − x i . (3)
We call these items tiles. We say that each tile R(i, ·) corresponds to the number x i from the input that it was constructed for. Since the x i are pairwise different, the x i corresponding to any tile can be easily read off from both height and width. We point out that all tiles have height strictly between L + S and L + S + t and width strictly between L + S + t and L + S + 2t. We add p := k · (k − 1) items T (1), . . . , T (p) with height L and width S. We call these the thin items. We add p items F (1), . . . , F (p) with height S and width L. We call these the flat items.
We add a single (very flat and very wide) item of height (2k − 2) · t and width N , which we call the bar.
The created instance has a target value of k′ = k 2 + 2p + 1. (The intention is to pack all thin and all flat items, the bar, and exactly k 2 tiles.) This completes the construction. Clearly, all necessary computations can be performed in polynomial time. The parameter value k′ = k 2 + 2p + 1 is upper bounded by O(k 2 ). It remains to prove correctness.
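The construction can be summarized in a few lines of code. The following sketch (a hypothetical helper build_instance, not part of the paper) produces the knapsack size N, the target k′, and all items as (height, width) pairs from a multi-subset sum instance.

def build_instance(xs, t, k):
    # Sketch of the construction above; items are (height, width) pairs.
    assert 4 <= k <= len(xs) and all(x < t for x in xs)
    S = k**2 * t
    L = k**2 * S
    N = k * L + (2 * k - 1) * S + (2 * k - 1) * t        # equation (1)
    p = k * (k - 1)
    tiles = [(L + S + x, L + S + 2 * t - x)              # equations (2) and (3)
             for x in xs for _ in range(k**2)]
    thin = [(L, S)] * p                                  # height L, width S
    flat = [(S, L)] * p                                  # height S, width L
    bar = [((2 * k - 2) * t, N)]                         # height (2k-2)t, width N
    k_prime = k**2 + 2 * p + 1
    return N, k_prime, tiles + thin + flat + bar

In a solution one would pack exactly k 2 of the tiles, all thin and flat items, and the bar.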
Correctness. We need to prove that the instance (x 1 , . . . , x m , t, k) is yes for multi-subset sum if and only if the constructed instance is yes for 2dkr. ⇐=: Assume that the created instance is yes for 2dkr, i.e., that it has a packing with k′ = k 2 + 2p + 1 items and fix any such packing. Observe that the packing must contain at least k 2 tiles as there are only 2p + 1 items that are not tiles. We will show that the packing uses exactly k 2 tiles, the 2p thin/flat items, and the bar. It is useful to recall that tiles have height and width both greater than L + S no matter whether they are rotated.
Consider the effect of placing k vertical lines in the knapsack at horizontal coordinates L − 1, 2 · (L − 1), . . . , k · (L − 1). We first observe that these lines must necessarily intersect all tiles of the packing because each of them has width at least L: The distance between any two consecutive lines is L − 1, same as the distance from the left border of the knapsack to the first line. The distance from the kth vertical line to the right border is also strictly less than L:
N − k · (L − 1) = k + (2k − 1) · S + (2k − 1) · t < S + (2k − 1) · S + S = (2k + 1) · S < L
Observe that no line can intersect more than k tiles: Any two tiles of the packing may not overlap and may in particular not share their intersection with any line. Since each line has length N and each intersection with a tile has length greater than L, there can be at most k tiles intersected by any line as N < (k + 1) · L:
N = k · L + (2k − 1) · S + (2k − 1) · t < k · L + 4k · S ≤ (k + 1) · L
Overall, this means that the packing contains at most k 2 tiles: There are k lines that intersect all tiles of the packing, each of them intersecting at most k. By our earlier observation, this implies that the packing contains exactly k 2 tiles in addition to all 2p flat/thin items. Moreover, each line intersects exactly k tiles and no two lines intersect the same tile.
Let us now check how the vertical lines and the flat and thin items interact. Clearly, both flat items and rotated thin items have width L and height S. Accordingly, each flat and each rotated thin item must be intersected by at least one of the k vertical lines. We already know that a total length of at least k · (L + S) of each line is occupied by the k tiles that the line intersects. This leaves at most a length of N − k · (L + S) = (k − 1) · S + (2k − 1) · t < k · S for intersecting flat and rotated thin items, and allows for intersecting at most k − 1 of them. (Again, no two items can share their intersection with the line.) Thus, there are at most p = k · (k − 1) of the flat and rotated thin items in the packing.
Before analyzing the vertical lines further, let us perform an analogous argument for k horizontal lines with vertical coordinates L − 1, 2 · (L − 1), . . . , k · (L − 1) and their intersection with tiles and flat/thin items. It can be verified that each of them similarly intersects exactly k tiles and that no tile is intersected twice. The argument for flat and thin items is analogous as well, except that we now reason about rotated flat and (non-rotated) thin items, which have height L and width S; we find that there are at most p such items and that each horizontal line intersects at most k − 1 of them. Since in total there must be 2p flat and thin items, this implies that both sets of lines (horizontal and vertical) intersect p of these items each. Since flat and thin items can be swapped freely, we may assume that none of these items are rotated, and that the vertical lines intersect the p flat items and the horizontal lines intersect the p thin items.
We know now that the packing contains exactly k 2 tiles as well as the p flat and the p thin items. Thus, to get a total of k′ = k 2 + 2p + 1 items, it must also contain the bar, which has height (2k − 2) · t and width N . W.l.o.g., we may assume that the bar is not rotated, or else we could rotate the entire packing. It follows that all vertical lines intersect the bar due to its width of N , which matches the width of the knapsack.
Let us now analyze both vertical and horizontal lines further. The goal is to obtain inequalities on the values x i that go into the construction of the tiles; up to now we have only used that they are fairly large. We know that each vertical line intersects k tiles, k − 1 flat items, and the bar. Let h 1 , . . . , h k denote the heights of the tiles (ordered arbitrarily) and recall that each flat item has height S while the bar has height (2k − 2) · t. Since all intersections with the line are disjoint and the line has length N (equaling the height of the knapsack), we get that
N ≥ h 1 + . . . + h k + (k − 1) · S + (2k − 2) · t. (4)
At this point, in order to plug in values for the h i , it is important whether any of the tiles are rotated; we will show that having at least one rotated tile causes a violation of (4). To this end, recall that (non-rotated) tiles have heights strictly between L + S and L + S + t and widths strictly between L + S + t and L + S + 2t. Thus, if at least one tile is rotated then it has height greater than L + S + t, rather than the weaker bound of greater than L + S. Using this, the right-hand side of (4) can be lower bounded by
RHS > (k − 1) · (L + S) + (L + S + t) + (k − 1) · S + (2k − 2) · t = k · L + (2k − 1) · S + (2k − 1) · t = N,
contradicting (4). Thus, none of the tiles intersected by the vertical line can be rotated.
Since each tile is intersected by a vertical line, it follows that no tiles can be rotated and we can analyze the lines using the sizes as given in (2) and (3). Let us return to replacing the values h i in (4). Recall that the height of a tile is equal to L + S + x i where x i is the corresponding integer from the input to the initial multi-subset sum instance. Thus, if the ith intersected tile corresponds to input integer y i ∈ {x 1 , . . . , x m } then by (2) we have
h i = L + S + y i .
Plugging this into (4) yields
N ≥ Σ_{i=1}^{k} (L + S + y i ) + (k − 1) · S + (2k − 2) · t = k · L + (2k − 1) · S + (2k − 2) · t + Σ_{i=1}^{k} y i . Using N = k · L + (2k − 1) · S + (2k − 1) · t we immediately get
t ≥ Σ_{i=1}^{k} y i . (5)
An analogous argument for any horizontal line yields the reverse inequality: such a line intersects exactly k tiles and exactly k − 1 thin items, and it cannot intersect the bar (the bar has width N , so together with the k intersected tiles of width greater than L + S each this would exceed the length N of the line). Since the width of a tile corresponding to y i is L + S + 2t − y i , we get N ≥ Σ_{i=1}^{k} (L + S + 2t − y i ) + (k − 1) · S = k · L + (2k − 1) · S + 2k · t − Σ_{i=1}^{k} y i , and hence Σ_{i=1}^{k} y i ≥ t for the k tiles intersected by this horizontal line. Every tile is intersected by exactly one vertical and exactly one horizontal line. Summing (5) over all k vertical lines and the reverse inequality over all k horizontal lines shows that the values of all k 2 tiles sum to exactly k · t; consequently, every inequality is tight and the k tiles intersected by any fixed vertical line correspond to numbers y 1 , . . . , y k with Σ_{i=1}^{k} y i = t. Thus, the input instance is yes for multi-subset sum.
=⇒: Now assume that the input instance is yes for multi-subset sum and fix numbers y 1 , . . . , y k ∈ {x 1 , . . . , x m } (repetitions allowed) with Σ_{i=1}^{k} y i = t. We construct a packing consisting of all 2p flat/thin items, the bar, and k 2 tiles R a,b with a, b ∈ [k] that are arranged in k rows and k columns such that every row and every column contains one tile for each of the numbers y 1 , . . . , y k .
More formally, item R a,b is a tile corresponding to y i , where i = 1 + ((a − b) mod k), and accordingly has height(R a,b ) = L + S + y i and width(R a,b ) = L + S + 2t − y i . This yields the required property that for each a ∈ [k] the items R a,1 , . . . , R a,k contain tiles corresponding to all numbers y 1 , . . . , y k (and correctly contain multiple copies for numbers that appear more than once). The same holds for items R 1,b , . . . , R k,b for all b ∈ [k].
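The index function i = 1 + ((a − b) mod k) is a Latin-square assignment: every row and every column of the tile grid receives each index exactly once. A tiny check of this property (illustrative only):

k = 5
index = lambda a, b: 1 + ((a - b) % k)
for a in range(1, k + 1):
    assert sorted(index(a, b) for b in range(1, k + 1)) == list(range(1, k + 1))
for b in range(1, k + 1):
    assert sorted(index(a, b) for a in range(1, k + 1)) == list(range(1, k + 1))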
We use height(R i,j ) and width(R i,j ) to refer to height and width of tile R i,j . We use left(R), right(R), top(R), and bottom(R) to specify the coordinates of any item in our packing, i.e., for the k 2 tiles, the 2p flat/thin items, and the bar. The coordinates for tiles are chosen as
left(R a,b ) = (a − 1) · S + Σ_{i=1}^{a−1} width(R i,b ),
right(R a,b ) = (a − 1) · S + Σ_{i=1}^{a} width(R i,b ),
bottom(R a,b ) = (b − 1) · S + Σ_{i=1}^{b−1} height(R a,i ),
top(R a,b ) = (b − 1) · S + Σ_{i=1}^{b} height(R a,i ).
Let us first check some basic properties of these coordinates:
We observe that each tile is assigned coordinates that match its size, i.e., width(R a,b ) = right(R a,b ) − left(R a,b ) and height(R a,b ) = top(R a,b ) − bottom(R a,b ).
All coordinates lie inside the knapsack. Clearly, all coordinates are non-negative and it suffices to give upper bounds for top(R a,k ) and right (R k,b ). Recall that by construction each set of tiles R a,1 , . . . , R a,k contains tiles corresponding to all numbers y 1 , . . . , y k , and same for R 1,b , . . . , R k,b . Thus we get
right(R k,b ) = (k − 1) · S + Σ_{i=1}^{k} width(R i,b ) = (k − 1) · S + Σ_{i=1}^{k} (L + S + 2t − y i ) = k · L + (2k − 1) · S + 2k · t − Σ_{i=1}^{k} y i = k · L + (2k − 1) · S + (2k − 1) · t = N. Similarly, we get top(R a,k ) = (k − 1) · S + Σ_{i=1}^{k} height(R a,i ) = (k − 1) · S + Σ_{i=1}^{k} (L + S + y i ) = k · L + (2k − 1) · S + Σ_{i=1}^{k} y i = k · L + (2k − 1) · S + t = N − (2k − 2) · t.
We will later use the gap of (2k − 2) · t between N and N − (2k − 2) · t to place the bar item, as its height exactly matches the gap. For any tile R a,b the possible coordinates fall into very small intervals, using that all heights and widths of tiles lie strictly between L + S and L + S + 2t. We show this explicitly for left(R a,b ):
left(R a,b ) = (a − 1) · S + Σ_{i=1}^{a−1} width(R i,b ) > (a − 1) · S + (a − 1) · (L + S) = (a − 1) · L + (2a − 2) · S, and
left(R a,b ) = (a − 1) · S + Σ_{i=1}^{a−1} width(R i,b ) < (a − 1) · S + (a − 1) · (L + S + 2t) = (a − 1) · L + (2a − 2) · S + (2a − 2) · t < (a − 1) · L + (2a − 1) · S.
In this way, we get the following intervals for left(R a,b ), right(R a,b ), bottom(R a,b ), and top(R a,b ). (Note that we sacrifice the possibility of tighter bounds in order to get the same simple form of bound for top and right and for bottom and left.)
(a − 1) · L + (2a − 2) · S < left(R a,b ) < (a − 1) · L + (2a − 1) · S (8)
a · L + (2a − 1) · S < right(R a,b ) < a · L + 2a · S (9)
(b − 1) · L + (2b − 2) · S < bottom(R a,b ) < (b − 1) · L + (2b − 1) · S (10)
b · L + (2b − 1) · S < top(R a,b ) < b · L + 2b · S (11)
We can now easily verify that no two tiles R a,b and R c,d overlap if (a, b) ≠ (c, d). If a ≠ c then we may assume w.l.o.g. that a < c (and hence a ≤ c − 1). Using (9) and (8) we get right(R a,b ) < a · L + 2a · S ≤ (c − 1) · L + (2c − 2) · S < left(R c,d ).
Thus, R a,b and R c,d do not overlap if a ≠ c. If instead a = c then we must have b ≠ d and, w.l.o.g., b < d (and hence b ≤ d − 1). Using (11) and (10) we get top(R a,b ) < b · L + 2b · S ≤ (d − 1) · L + (2d − 2) · S < bottom(R c,d ).
Thus, no two tiles R a,b and R c,d with (a, b) ≠ (c, d) overlap.
We will now specify coordinates for the p flat and the p thin items. For this purpose the intervals for coordinates of the tiles (8)-(11) are highly useful. For thin items, there will always be two adjacent tiles, to the left and to the right, and we use the intervals to get top and bottom coordinates. For flat items the situation is the opposite; there are adjacent tiles on the top and bottom sides and we use the intervals to get left and right coordinates. Recall that thin items have height L and width S, whereas flat items have height S and width L.
We denote the p thin items by T a,b with a ∈ [k − 1] and b ∈ [k]; we choose coordinates as follows:
left(T a,b ) = right(R a,b ) = (a − 1) · S + Σ_{i=1}^{a} width(R i,b ) (12)
right(T a,b ) = left(R a+1,b ) = a · S + Σ_{i=1}^{a} width(R i,b ) (13)
bottom(T a,b ) = (b − 1) · L + (2b − 1) · S (14)
top(T a,b ) = b · L + (2b − 1) · S (15)
Clearly, the coordinates match the dimension of T a,b . We denote the p flat items by F a,b with a ∈ [k] and b ∈ [k − 1], and we use the following coordinates:
left(F a,b ) = (a − 1) · L + (2a − 1) · S (16)
right(F a,b ) = a · L + (2a − 1) · S (17)
bottom(F a,b ) = top(R a,b ) = (b − 1) · S + Σ_{i=1}^{b} height(R a,i ) (18)
top(F a,b ) = bottom(R a,b+1 ) = b · S + Σ_{i=1}^{b} height(R a,i ) (19)
Clearly, the coordinates match the dimension of F a,b . It remains to show that there is no overlap between any of the items placed so far (all except the bar), recalling that intersections between tiles are already ruled out: It remains to consider (1) tile-flat, (2) tile-thin, (3) flat-flat, (4) flat-thin, and (5) thin-thin overlaps.
(1) There are no overlaps between any tile R a,b and any flat item F c,d : If a < c then a ≤ c − 1 and using (9) and (16) we get right(R a,b ) < a · L + 2a · S ≤ (c − 1) · L + (2c − 2) · S < (c − 1) · L + (2c − 1) · S = left(F c,d ).
If a > c then c ≤ a − 1 and using (17) and (8) we get right(F c,d ) = c · L + (2c − 1) · S ≤ (a − 1) · L + (2a − 3) · S < (a − 1) · L + (2a − 2) · S < left(R a,b ). If a = c and b ≤ d then top(R a,b ) ≤ top(R a,d ) = bottom(F c,d ) by (18), and if a = c and b ≥ d + 1 then bottom(R a,b ) ≥ bottom(R a,d+1 ) = top(F c,d ) by (19). Thus, in all four cases there is no overlap, as claimed.
(2) There are no overlaps between any tile R a,b and any thin item T c,d : If b = d then T c,d lies horizontally between R c,b and R c+1,b by (12) and (13); thus, if a ≤ c then right(R a,b ) ≤ right(R c,b ) = left(T c,d ), and if a ≥ c + 1 then left(R a,b ) ≥ left(R c+1,b ) = right(T c,d ). If b ≤ d − 1 then, using (11) and (14), top(R a,b ) < b · L + 2b · S ≤ (d − 1) · L + (2d − 2) · S < (d − 1) · L + (2d − 1) · S = bottom(T c,d ), and if b ≥ d + 1 then, using (10) and (15), bottom(R a,b ) > (b − 1) · L + (2b − 2) · S ≥ d · L + 2d · S > d · L + (2d − 1) · S = top(T c,d ). Thus, in all cases there is no overlap, as claimed. The remaining pairs, i.e., (3) flat-flat, (4) flat-thin, and (5) thin-thin, are handled analogously using the interval bounds (8)-(11) together with the coordinates (12)-(19). Overall, we find that there are no overlaps between any pair of items placed so far. It remains to add the bar to complete our packing. We already observed earlier that top(R a,k ) = N − (2k − 2) · t. Similarly, using (19) we get
top(F a,b ) = bottom(R a,b+1 ) ≤ bottom(R a,k ) ≤ top(R a,k ) ≤ N − (2k − 2) · t
for all a ∈ [k] and b ∈ [k − 1]. In the same way, using (15) we get top(T a,b ) = b · L + (2b − 1) · S ≤ k · L + (2k − 1) · S < N − (2k − 2) · t for all a ∈ [k − 1] and b ∈ [k], recalling that N = k · L + (2k − 1) · S + (2k − 1) · t. Thus, we can place the bar B of height (2k − 2) · t and width N at the top of the knapsack without causing overlaps; formally, its coordinates are as follows.
left(B) = 0, right(B) = N, bottom(B) = N − (2k − 2) · t, top(B) = N.
Overall, we have placed k 2 + 2p + 1 items without overlap and without rotating any item. Thus, the constructed instance is a yes-instance of 2dkr (and in fact also of 2dk), as required. This completes the proof.
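The coordinates above can also be checked mechanically. The following sketch (our own helper names, 0-based lists, boxes given as (left, bottom, right, top)) rebuilds the packing for a small yes-instance and verifies that all boxes lie inside the knapsack and are pairwise non-overlapping; touching boundaries are allowed since items are open sets.

def pack(ys, t, k):
    # ys: k numbers (repetition allowed) with sum t; returns N and all boxes.
    assert len(ys) == k and sum(ys) == t
    S, L = k**2 * t, k**4 * t
    N = k * L + (2 * k - 1) * S + (2 * k - 1) * t
    y = lambda a, b: ys[(a - b) % k]                      # number of tile R_{a,b}
    width = lambda a, b: L + S + 2 * t - y(a, b)
    height = lambda a, b: L + S + y(a, b)
    boxes = []
    for a in range(1, k + 1):                             # tiles R_{a,b}
        for b in range(1, k + 1):
            left = (a - 1) * S + sum(width(i, b) for i in range(1, a))
            bottom = (b - 1) * S + sum(height(a, i) for i in range(1, b))
            boxes.append((left, bottom, left + width(a, b), bottom + height(a, b)))
    for a in range(1, k):                                 # thin items T_{a,b}
        for b in range(1, k + 1):
            left = (a - 1) * S + sum(width(i, b) for i in range(1, a + 1))
            boxes.append((left, (b - 1) * L + (2 * b - 1) * S,
                          left + S, b * L + (2 * b - 1) * S))
    for a in range(1, k + 1):                             # flat items F_{a,b}
        for b in range(1, k):
            bottom = (b - 1) * S + sum(height(a, i) for i in range(1, b + 1))
            boxes.append(((a - 1) * L + (2 * a - 1) * S, bottom,
                          a * L + (2 * a - 1) * S, bottom + S))
    boxes.append((0, N - (2 * k - 2) * t, N, N))          # the bar
    return N, boxes

def disjoint(p, q):                                       # open rectangles may touch
    return p[2] <= q[0] or q[2] <= p[0] or p[3] <= q[1] or q[3] <= p[1]

N, boxes = pack([1, 2, 3, 4], t=10, k=4)
assert len(boxes) == 4**2 + 2 * 4 * 3 + 1
assert all(0 <= x1 and 0 <= y1 and x2 <= N and y2 <= N for x1, y1, x2, y2 in boxes)
assert all(disjoint(p, q) for i, p in enumerate(boxes) for q in boxes[i + 1:])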
Corollary 25. The 2dk problem is W[1]-hard.
Proof. We can use the same construction as in the proof of Theorem 24 to get a parameterized reduction from multi-subset sum to 2dk.
If the constructed instance is yes for 2dk then it is also yes for 2dkr, as the same packing of k′ = k 2 + 2p + 1 items can be used. As shown earlier, the latter implies that the input instance is yes for multi-subset sum. Conversely, if the input instance is yes for multi-subset sum then we already showed that there is a feasible packing to show that the constructed instance is yes for 2dkr. Since the packing did not require rotation of any items, it is also a feasible solution showing that the instance is yes for 2dk.
Figure 4: Example showing that Lemma 15 cannot be generalized to 2dk (without rotations). The total height of the k/2 items on the bottom of the knapsack can be made arbitrarily small. Suppose that we wanted to free up an area of height f (k) · N and width N or of height N and width f (k) · N (for some fixed function f ). If the total height of the items on the bottom is smaller than f (k) · N then we would have to eliminate the k/2 items on the bottom or the k/2 items on top. Thus, we would lose a factor of 2 > 1 + ε in the approximation ratio.
| 12,445 |
1906.10982
|
2956057537
|
The area of parameterized approximation seeks to combine approximation and parameterized algorithms to obtain, e.g., (1+eps)-approximations in f(k,eps)n^O(1) time where k is some parameter of the input. We obtain the following results on parameterized approximability: 1) In the maximum independent set of rectangles problem (MISR) we are given a collection of n axis parallel rectangles in the plane. Our goal is to select a maximum-cardinality subset of pairwise non-overlapping rectangles. This problem is NP-hard and also W[1]-hard [Marx, ESA'05]. The best-known polynomial-time approximation factor is O(loglog n) [Chalermsook and Chuzhoy, SODA'09] and it admits a QPTAS [Adamaszek and Wiese, FOCS'13; Chuzhoy and Ene, FOCS'16]. Here we present a parameterized approximation scheme (PAS) for MISR, i.e. an algorithm that, for any given constant eps>0 and integer k>0, in time f(k,eps)n^g(eps), either outputs a solution of size at least k/(1+eps), or declares that the optimum solution has size less than k. 2) In the (2-dimensional) geometric knapsack problem (TDK) we are given an axis-aligned square knapsack and a collection of axis-aligned rectangles in the plane (items). Our goal is to translate a maximum cardinality subset of items into the knapsack so that the selected items do not overlap. In the version of TDK with rotations (TDKR), we are allowed to rotate items by 90 degrees. Both variants are NP-hard, and the best-known polynomial-time approximation factors are 558/325+eps and 4/3+eps, resp. [, FOCS'17]. These problems admit a QPTAS for polynomially bounded item sizes [Adamaszek and Wiese, SODA'15]. We show that both variants are W[1]-hard. Furthermore, we present a PAS for TDKR. For all considered problems, getting time f(k,eps)n^O(1), rather than f(k,eps)n^g(eps), would give FPT time f'(k)n^O(1) exact algorithms using eps=1/(k+1), contradicting W[1]-hardness.
|
For the special case where all input objects are squares a PTAS is known @cite_16 but there can be no EPTAS @cite_27 . Recently, @cite_1 found polynomial-time algorithms for 2dk and 2dkr with approximation ratio smaller than @math (also for the weighted case). For the special case that all input objects are squares there is a PTAS @cite_19 and even an EPTAS @cite_15 .
|
{
"abstract": [
"We study the two-dimensional geometric knapsack problem (2DK) in which we are given a set of n axis-aligned rectangular items, each one with an associated profit, and an axis-aligned square knapsack. The goal is to find a (non-overlapping) packing of a maximum profit subset of items inside the knapsack (without rotating items). The best-known polynomial-time approximation factor for this problem (even just in the cardinality case) is 2+ε [Jansen and Zhang, SODA 2004]. In this paper we break the 2 approximation barrier, achieving a polynomialtime 17 9 + ε",
"Given a set Q of squares with positive profits, the square packing problem is to select and pack a subset of squares of maximum profit into a rectangular bin R. We present a polynomial time approximation scheme for this problem, that for any value Ɛ > 0 finds and packs a subset Q′ ⊆ Q of profit at least (1 - Ɛ)OPT, where OPT is the profit of an optimum solution. This settles the approximability of the problem and improves on the previously best approximation ratio of 5 4 +Ɛ achieved by Harren's algorithm.",
"An EPTAS (efficient PTAS) is an approximation scheme where e does not appear in the exponent of n, i.e., the running time is f(e)nc. We use parameterized complexity to investigate the possibility of improving the known approximation schemes for certain geometric problems to EPTAS. Answering an open question of Alber and Fiala [2], we show that Maximum Independent Set is W[1]-complete for the intersection graphs of unit disks and axis-parallel unit squares in the plane. A standard consequence of this result is that the @math time PTAS of [11] for Maximum Independent Set on unit disk graphs cannot be improved to an EPTAS. Similar results are obtained for the problem of covering points with squares.",
"An important question in theoretical computer science is to determine the best possible running time for solving a problem at hand. For geometric optimization problems, we often understand their complexity on a rough scale, but not very well on a finer scale. One such example is the two-dimensional knapsack problem for squares. There is a polynomial time (1 + ϵ)-approximation algorithm for it (i.e., a PTAS) but the running time of this algorithm is triple exponential in 1 ϵ, i.e., Ω(n221 ϵ). A double or triple exponential dependence on 1 ϵ is inherent in how this and several other algorithms for other geometric problems work. In this paper, we present an EPTAS for knapsack for squares, i.e., a (1+ϵ)-approximation algorithm with a running time of Oϵ(1)·nO(1). In particular, the exponent of n in the running time does not depend on ϵ at all! Since there can be no FPTAS for the problem (unless P = NP) this is the best kind of approximation scheme we can hope for. To achieve this improvement, we introduce two new key ideas: We present a fast method to guess the Ω(221 ϵ) relatively large squares of a suitable near-optimal packing instead of using brute-force enumeration. Secondly, we introduce an indirect guessing framework to define sizes of cells for the remaining squares. In the previous PTAS each of these steps needs a running time of Ω(n221 ϵ) and we improve both to Oϵ(1) · nO(1). We complete our result by giving an algorithm for two-dimensional knapsack for rectangles under (1 + ϵ)-resource augmentation. In this setting, we also improve the best known running time of Ω(n1 ϵ1 ϵ) to Oϵ(1) · nO(1) and compute even a solution with optimal profit, in contrast to the best previously known polynomial time algorithm for this setting that computes only an approximation. We believe that our new techniques have the potential to be useful for other settings as well.",
"A disk graph is the intersection graph of a set of disks with arbitrary diameters in the plane. For the case that the disk representation is given, we present polynomial-time approximation schemes (PTASs) for the maximum weight independent set problem (selecting disjoint disks of maximum total weight) and for the minimum weight vertex cover problem in disk graphs. These are the first known PTASs for @math -hard optimization problems on disk graphs. They are based on a novel recursive subdivision of the plane that allows applying a shifting strategy on different levels simultaneously, so that a dynamic programming approach becomes feasible. The PTASs for disk graphs represent a common generalization of previous results for planar graphs and unit disk graphs. They can be extended to intersection graphs of other \"disk-like\" geometric objects (such as squares or regular polygons), also in higher dimensions."
],
"cite_N": [
"@cite_1",
"@cite_19",
"@cite_27",
"@cite_15",
"@cite_16"
],
"mid": [
"2962797157",
"1517042457",
"1510609119",
"2567804199",
"2094511120"
]
}
|
Parameterized Approximation Schemes for Independent Set of Rectangles and Geometric Knapsack
|
a maximum cardinality subset of items into the knapsack so that the selected items do not overlap. In the version of 2dk with rotations (2dkr), we are allowed to rotate items by 90 degrees. Both variants are NP-hard, and the best-known polynomial-time approximation factor is 2 + ε [Jansen and Zhang, SODA'04]. These problems admit a QPTAS for polynomially bounded item sizes [Adamaszek and Wiese,SODA'15]. We show that both variants are W[1]-hard. Furthermore, we present a PAS for 2dkr. For all considered problems, getting time f (k, ε)n O(1) , rather than f (k, ε)n g (ε) , would give FPT time f (k)n O(1) exact algorithms by setting ε = 1/(k + 1), contradicting W[1]-hardness. Instead, for each fixed ε > 0, our PASs give (1 + ε)-approximate solutions in FPT time.
For both misr and 2dkr our techniques also give rise to preprocessing algorithms that take n g(ε) time and return a subset of at most k g(ε) rectangles/items that contains a solution of size at least k/(1 + ε) if a solution of size k exists. This is a special case of the recently introduced notion of a polynomial-size approximate kernelization scheme [Lokshtanov et al.,STOC'17].
Introduction
Approximation algorithms and parameterized algorithms are two well-established ways to deal with NP-hard problems. An α-approximation for an optimization problem is a polynomialtime algorithm that computes a feasible solution whose cost is within a factor α (that might be a function of the input size n) of the optimal cost. In particular, a polynomial-time approximation scheme (PTAS) is a (1 + ε)-approximation algorithm running in time n g(ε) , where ε > 0 is a given constant and g is some computable function. In parameterized algorithms we identify a parameter k of the input, that we informally assume to be much smaller than n. The goal here is to solve the problem optimally in fixed-parameter tractable (FPT) time f (k)n O(1) , where f is some computable function. Recently, researchers started to combine the two notions (see, e.g., the survey by Marx [34]). The idea is to design approximation algorithms that run in FPT (rather than polynomial) time, e.g., to get (1 + ε)-approximate solutions in time f (k, ε)n O(1) . In this paper we continue this line of research on parameterized approximation, and apply it to two fundamental rectangle packing problems.
Our results and techniques
Our focus is on parameterized approximation algorithms. Unfortunately, as observed by Marx [34], when the parameter k is the desired solution size, computing (1 + ε)-approximate solutions in time f (k, ε)n O(1) implies fixed-parameter tractability. Indeed, setting ε = 1/(k+1) guarantees to find an optimal solution when that value equals to k ∈ N and we get time f (k, 1/(k + 1))n O(1) = f (k)n O(1) . Since the considered problems are W[1]-hard (in part, this is established in our work), they are unlikely to be FPT and similarly unlikely to have such nice approximation schemes. Instead, we construct algorithms (for two maximization problems) that, given ε > 0 and an integer k, take time f (k, ε)n g (ε) and either return a solution of size at least k/(1 + ε) or declare that the optimum is less than k. We call such an algorithm a parameterized approximation scheme (PAS). Note that if we run such an algorithm for each k ≤ k then we can guarantee that we compute a solution with cardinality at least min{k, OPT}/(1 + ε) where OPT denotes the size of the optimal solution. So intuitively, for each ε > 0, we have an FPT-algorithm for getting a (1 + ε)-approximate solution.
In this paper we consider the following two geometric packing problems, and design PASs for them.
Maximum Independent Set of Rectangles.
In the maximum independent set of rectangles problem (misr) we are given a set of n axis-parallel rectangles R = {R 1 , . . . , R n } in the two-dimensional plane, where R i is the open set of points (x i^(1) , x i^(2) ) × (y i^(1) , y i^(2) ). A feasible solution is a subset of rectangles R′ ⊆ R such that for any two rectangles R, R′ ∈ R′ we have R ∩ R′ = ∅. Our objective is to find a feasible solution of maximum cardinality |R′|. W.l.o.g. we assume that x i^(1) , y i^(1) , x i^(2) , y i^(2) ∈ {0, . . . , 2n − 1} for each R i ∈ R (see e.g. [1]). misr is very well-studied in the area of approximation algorithms. The problem is known to be NP-hard [24], and the current best polynomial-time approximation factor is O(log log n) for the cardinality case [11] (addressed in this paper), and O(log n/ log log n) for the natural generalization with rectangle weights [12]. The cardinality case also admits a (1 + ε)-approximation with a running time of n^poly(log log(n/ε)) [15] and there is a (slower) QPTAS known for the weighted case [1]. The problem is also known to be W[1]-hard w.r.t. the number k of rectangles in the solution [33], and thus unlikely to be solvable in FPT time f (k)n O(1) .
In this paper we achieve the following main result:
Theorem 1. There is a PAS for misr with running time k^O(k/ε^8) · n^O(1/ε^8) .
In order to achieve the above result, we combine several ideas. Our starting point is a polynomial-time construction of a k × k grid such that each rectangle in the input contains some crossing point of this grid (or we find a solution of size k directly). By applying (in a non-trivial way) a result by Frederickson [21] on planar graphs, and losing a small factor in the approximation, we define a decomposition of our grid into a collection of disjoint groups of cells. Each such group defines an independent instance of the problem, consisting of the rectangles strictly contained in the considered group of cells. Furthermore, we guarantee that each group spans only a constant number O ε (1) of rectangles of the optimum solution. Therefore in FPT time we can guess the correct decomposition, and solve each corresponding subproblem in n Oε(1) time. We remark that our approach deviates substantially from prior work, and might be useful for other related problems. An adaptation of our construction also leads to the following (1 + )-approximative kernelization.
Theorem 2. There is an algorithm for misr that, given k ∈ N, computes in time n^O(1/ε^8) a subset of the input rectangles of size k^O(1/ε^8) that contains a solution of size at least k/(1 + ε), assuming that the input instance admits a solution of size at least k.
Similarly as for a PAS, if we run the above algorithm for each k ≤ k we obtain a set of size k O(1/ 8 ) that contains a solution of size at least min{k, OPT}/(1 + ε). Observe that any c-approximate solution on the obtained set of rectangles is also a feasible, and c(1 + ε)approximate, solution for the original instance if OPT ≤ k and otherwise has size at least k/(c(1 + ε)). Thus, our result is a special case of a polynomial-size approximate kernelization scheme (PSAKS) as defined in [32].
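As a concrete illustration of the misr feasibility notion used above (open rectangles whose interiors must be pairwise disjoint), the following brute-force sketch checks whether k pairwise non-overlapping rectangles exist; it is only a reference implementation of the definition, not the algorithm behind Theorem 1.

from itertools import combinations

def overlap(r, s):
    # r, s are open boxes (x1, x2, y1, y2); touching boundaries do not count.
    return r[0] < s[1] and s[0] < r[1] and r[2] < s[3] and s[2] < r[3]

def misr_brute_force(rects, k):
    # True iff some k pairwise non-overlapping rectangles exist.
    return any(all(not overlap(a, b) for a, b in combinations(sel, 2))
               for sel in combinations(rects, k))

rects = [(0, 4, 1, 2), (1, 2, 0, 4), (5, 6, 5, 6)]   # two crossing + one disjoint
assert misr_brute_force(rects, 2) and not misr_brute_force(rects, 3)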
Two-Dimensional Geometric Knapsack.
In the (2-dimensional) geometric knapsack problem (2dk) we are given a square knapsack [0, N ] × [0, N ] and a collection of items I, where each item i ∈ I is an open rectangle (0, w i ) × (0, h i ), N ≥ w i , h i ∈ N. The goal is to find a feasible packing of a subset I′ ⊆ I of the items of maximum cardinality |I′|. Such a packing maps each item i ∈ I′ into a new translated rectangle (a i , a i + w i ) × (b i , b i + h i ) such that the translated rectangles are pairwise disjoint and contained in the knapsack. In the variant with rotations (2dkr) we may additionally rotate items by 90 degrees.
Theorem 3. 2dk and 2dkr are W[1]-hard parameterized by the number k of packed items.
The result is proved by parameterized reductions from a variant of the W[1]-hard subset sum problem, where we need to determine whether a set of m positive integers contains a k-tuple of numbers with sum equal to some given value t. The difficulty for reductions to 2dk or 2dkr is of course that rectangles may be freely selected and placed (and possibly rotated) to get a feasible packing.
We complement the W[1]-hardness result by giving a PAS for the case with rotations (2dkr) and a corresponding kernelization procedure like in Theorem 2 (which also yields a PSAKS).
Theorem 4. For 2dkr there is a PAS with running time k^O(k/ε) · n^O(1/ε^3) and an algorithm that, given k ∈ N, computes in time n^O(1/ε^3) a subset of the input items of size k^O(1/ε) that contains a solution of size at least k/(1 + ε), assuming that the input instance admits a solution of size at least k.
The above result is based on a simple combination of the following two (non-trivial) building blocks: First, we show that, by losing a fraction ε of the items of a given solution of size k, it is possible to free a vertical strip of width N/k Oε(1) (unless the problem can be solved trivially). This is achieved by first sparsifying the solution using the above mentioned result by Frederickson [21]. If this is not sufficient we construct a vertical chain of relatively wide and tall rectangles that split the instance into a left and right side. Then we design a resource augmentation algorithm, however in an FPT sense: we can compute in FPT time a packing of cardinality k if we are allowed to use a knapsack where one side is enlarged by a factor 1 + 1/k Oε(1) . Note that in typical resource augmentation results the packing constraint is relaxed by a constant factor while here this amount is controlled by our parameter.
A Parameterized Approximation Scheme for MISR
In this section we present a PAS and an approximate kernelization for misr. We start by showing that there exists an almost optimal solution for the problem with some helpful structural properties (Sections 2.1 and 2.2). The results are then put together in Section 2.3.
Definition of the grid
We try to construct a non-uniform grid with k rows and k columns such that each input rectangle overlaps a corner of this grid (see Figure 1). To this end, we want to compute k − 1 vertical and k − 1 horizontal lines such that each input rectangle intersects one line from each set. There are instances in which our routine fails to construct such a grid (and in fact such a grid might not even exist). For such instances, we directly find a feasible solution with k rectangles and we are done.
Lemma 5.
There is a polynomial time algorithm that either computes a set of at most k − 1 vertical lines L V with x-coordinates V 1 , . . . , V k−1 such that each input rectangle is crossed by one line in L V or computes a feasible solution with k rectangles. A symmetric statement holds for an algorithm computing a set of at most k − 1 horizontal lines L H with y-coordinates H 1 , . . . , H k−1 .
Proof. Let V 0 := 0. Assume inductively that we defined the x-coordinates V 0 , V 1 , . . . , V k′ such that V 1 , . . . , V k′ are the x-coordinates of the first k′ constructed vertical lines. We define the x-coordinate of the (k′ + 1)-th vertical line by V k′+1 := min_{R i ∈R : x i^(1) ≥ V k′} x i^(2) − 1/2. We continue with this construction until we reach an iteration k* such that {R i ∈ R : x i^(1) ≥ V k*−1 } = ∅. If k* ≤ k then we constructed at most k − 1 lines such that each input rectangle is intersected by one of these lines. Otherwise, assume that k* > k. Then for each iteration k′ ∈ {1, . . . , k} we can find a rectangle R i(k′) := arg min_{R i ∈R : x i^(1) ≥ V k′−1} x i^(2) . By construction, using the fact that all coordinates are integer, for any two such rectangles R i(k′) , R i(k′′) with k′ ≠ k′′ we have that (x i(k′)^(1) , x i(k′)^(2) ) ∩ (x i(k′′)^(1) , x i(k′′)^(2) ) = ∅. Hence, R i(k′) and R i(k′′) are disjoint. Therefore, the rectangles R i(1) , . . . , R i(k) are pairwise disjoint and thus form a feasible solution.
The algorithm for constructing the horizontal lines works symmetrically.
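The sweep from the proof of Lemma 5 is easy to implement. Below is a sketch (illustrative helper names; rectangles are open boxes (x1, x2, y1, y2) with integer coordinates) that either returns the vertical lines or k pairwise disjoint rectangles.

def vertical_lines_or_solution(rects, k):
    V, lines, picked = 0, [], []
    while True:
        remaining = [r for r in rects if r[0] >= V]
        if not remaining:
            break
        r = min(remaining, key=lambda r: r[1])    # smallest right x-coordinate
        picked.append(r)
        V = r[1] - 0.5                            # place a line just left of that edge
        lines.append(V)
    if len(lines) <= k - 1:
        return "lines", lines                     # every rectangle is stabbed
    return "solution", picked[:k]                 # pairwise disjoint by construction

print(vertical_lines_or_solution([(0, 2, 0, 1), (1, 3, 2, 3), (4, 6, 0, 5)], 3))
# -> ('lines', [1.5, 5.5])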
We apply the algorithms due to Lemma 5. If one of them finds a set of k independent rectangles then we output them and we are done. Otherwise, we obtain the sets L V and L H . For convenience, we define two more vertical lines with x-coordinates V 0 := 0 and V |L V |+1 = 2n − 1, resp., and similarly two more horizontal lines with y-coordinates H 0 = 0 and H |L H |+1 = 2n − 1, resp. We denote by G the set of grid cells formed by these lines and the lines in L V ∪L H : for any two consecutive vertical lines (i.e., defined via x-coordinates V j , V j+1 with j ∈ {0, . . . , |L V |}) and two consecutive horizontal grid lines (defined via y-coordinates
H j , H j +1 with j ∈ {0, . . . , |L H |})
we obtain a grid cell whose corners are the intersection of these respective lines. We interpret the grid cells as closed sets (i.e., two adjacent grid cells intersect on their boundary).
Proposition 6. Each input rectangle R i contains a corner of a grid cell of G. If a rectangle R intersects a grid cell g then it must contain a corner of g.
Groups of rectangles
Let R * denote a solution to the given instance with |R * | = k. We prove that there is a special solution R′ ⊆ R * of large cardinality that we can partition into s ≤ k groups R′ 1 ∪ . . . ∪ R′ s such that each group has constant size O(1/ε^8) and no grid cell can be intersected by rectangles from different groups. The remainder of this section is devoted to proving the following lemma.
Lemma 7.
There is a constant c = O(1/ε^8) such that there exists a solution R′ ⊆ R * with |R′| ≥ (1 − ε)|R * | and a partition R′ = R′ 1 ∪ . . . ∪ R′ s with s ≤ k and |R′ j | ≤ c for each j and such that if any two rectangles in R′ intersect the same grid cell g ∈ G then they are contained in the same set R′ j .
Given the solution R * we construct a planar graph G 1 = (V 1 , E 1 ). In V 1 we have one vertex v i for each rectangle R i ∈ R * . We connect two vertices v i , v i by an edge if and only if there is a grid cell g ∈ G such that R i and R i intersect g and R i and R i are crossed by the same horizontal or vertical line in L V ∪ L H or if R i and R i contain the top left and the bottom right corner of g, resp. Note that we do not introduce an edge if R i and R i contain the bottom left and the top right corner of g, resp. (see Fig. 1): this way we preserve the planarity of the resulting graph, however we will have to deal with the missing connections in a later stage. Let G 1 be the graph obtained when applying Lemma 9 to G 1 with := /2 and let c 1 = O((1/ ) 2 ) be the respective value c . Now we would like to claim that if two rectangles R i , R i intersect the same grid cell g ∈ G then v i , v i are in the same component of G 1 . Unfortunately, this is not true. It might be that there is a grid cell g ∈ G such that R i and R i contain the bottom left corner and the top right corner of g, resp., and that v i and v i are in different components of G 1 . We fix this in a second step. We define a graph G 2 = (V 2 , E 2 ). In V 2 we have one vertex for each connected component in G 1 . We connect two vertices w i , w i ∈ V 2 by an edge if and only if there are two rectangles R i , R i such that their corresponding vertices v i , v i in V 1 belong to the connected components of G 1 represented by w i and w i , resp., and there is a grid cell g whose bottom left and top right corner are contained in R i and R i , resp. Lemma 10. The graph G 2 is planar.
Similarly as above, we apply Lemma 9 to G 2 with ε′ := ε/(2c 1 ) and let
c 2 = O((1/ ) 2 ) = O(1/ 6 ) denote the corresponding value of c . Denote by G 2 the resulting graph. We define a group R q for each connected component C q of V 2 . The set R q contains all rectangles R i such that v i is contained in a connected component C j of G 1 such that w j ∈ C q . We define R :=∪ q R q . Lemma 11. Let R i , R i ∈ R be rectangles that intersect the same grid cell g ∈ G. Then there is a set R q such that {R i , R i } ⊆ R q .
Proof. Assume that in G 1 there is an edge connecting v i , v i . Then the latter vertices are in the same connected component C j of G 1 and thus they are in the same group R q . Otherwise, if there is no edge connecting v i , v i in G 1 then R i and R i contain the bottom left and top right corners of g, resp. Assume that v i and v i are contained in the connected components C j and C j of G 1 , resp. Then w j , w j ∈ V 2 , {w j , w j } ∈ E 2 and w j , w j are in the same connected component of V 2 . Hence, R i , R i are in the same group R q .
It remains to prove that each group R q has constant size and that |R | ≥ (1 − )|R * |.
Lemma 12. There is a constant
c = O(1/ 8 ) such that for each group R q it holds that |R q | ≤ c. Proof. For each group R q there is a connected component C q of G 2 such that R q contains all rectangles R i such that v i is contained in a connected component C j of G 1 and w j ∈ C q . Each connected component of G 1 contains at most c 1 = O(1/ε 2 ) vertices of V 1 and each component of G 2 contains at most c 2 = O(1/ε 6 ) vertices of V 2 . Hence, |R q | ≤ c 1 · c 2 =: c and c = O((1/ 2 )(1/ 6 )) = O(1/ 8 ). Lemma 13. We have that |R | ≥ (1 − )|R * |.
Proof. At most (ε/2) · |V 1 | vertices of G 1 are deleted when we construct G′ 1 from G 1 . Each vertex in G′ 1 belongs to one connected component C j , represented by a vertex w j ∈ G 2 . At most ε/(2c 1 ) · |V 2 | vertices are deleted when we construct G′ 2 from G 2 . These vertices represent at most c 1 · ε/(2c 1 ) · |V 2 | ≤ (ε/2) · |V 2 | ≤ (ε/2) · |V 1 | vertices in G 1 (and each vertex in G 1 represents one rectangle in R * ). Therefore,
|R′| ≥ |R * | − (ε/2) · |V 1 | − (ε/2) · |V 1 | = (1 − ε)|R * |.
This completes the proof of Lemma 7.
The algorithm
In our algorithm, we compute a solution that is at least as good as the solution R as given by Lemma 7. For each group R j we define by G j the set of grid cells that are intersected by at least one rectangle from R j . Since in R each grid cell can be intersected by rectangles of only one group, we have that G j ∩ G q = ∅ if j = q. We want to guess the sets G j . The next lemma shows that the number of possibilities for one of those sets is polynomially bounded in k.
Lemma 14. Each G j belongs to a set G of cardinality at most k O(1/ε 8 ) that can be computed in polynomial time.
Proof. The cells G j intersected by R j are the union of all cells G(R) with R ∈ R j where for each rectangle R the set G(R) denotes the cells intersected by R. Each set G(R) can be specified by indicating the 4 corner cells of G(R), i.e., top-left, top-right, bottom-left, and bottom-right corner. Hence there are at most k 4 choices for each such R. The claim follows
since |R j | = O(1/ε 8 ).
We hence achieve the main result of this section.
Proof of Theorem 1. Using Lemma 14, we can guess by exhaustive enumeration all the sets G j in time k O(k/ 8 ) . We obtain one independent problem for each value j ∈ {1, . . . , s} which consists of all input rectangles that are contained in G j . For this subproblem, it suffices to compute a solution with at least |R j | rectangles. Since |R j | ≤ c = O(1/ 8 ) we can do this in time n O(1/ 8 ) by complete enumeration. Thus, we solve each of the subproblems and output the union of the computed solutions. The overall running time is as in the claim. If all the computed solutions have size less than (1 − ε)k, this implies that the optimum solution is smaller than k. Otherwise we obtain a solution of size at least (1 − ε)k ≥ k/(1 + 2ε) and the claim follows by redefining ε appropriately.
Essentially the same construction as above also gives an approximate kernelization algorithm as claimed in Theorem 2, see Appendix A for details.
A Parameterized Approximation Scheme for 2DKR
In this section we present a PAS and an approximate kernelization for 2dkr. W.l.o.g., we assume that k ≥ Ω(1/ε^3), since otherwise we can optimally solve the problem in time n^O(1/ε^3) by exhaustive enumeration. In Section 3.1 we show that, if a solution of size k exists, there is a solution of size at least (1 − ε)k in which no item intersects some horizontal strip (0, N ) × (0, (1/k)^O(1/ε) · N ).
Freeing a Horizontal Strip
In this section, we prove the following lemma that shows the existence of a near-optimal solution that leaves a sufficiently tall empty horizontal strip in the knapsack (assuming k ≥ Ω(1/ε^3)). W.l.o.g., ε ≤ 1. Since we can rotate the items by 90 degrees, we can assume w.l.o.g. that w i ≥ h i for each item i ∈ I.
Lemma 15. If there exists a solution of size k then there is also a solution of size at least (1 − ε)k in which no packed item intersects (0, N ) × (0, (1/k)^c · N ), for a proper constant c = O(1/ε).
We classify items into large and thin items. Via a shifting argument, we get the following lemma.
Lemma 16. There is an integer B ∈ {1, . . . , 8/ε} such that by losing a factor of 1 + ε in the objective we can assume that the input items are partitioned into
large items L such that h i ≥ (1/k) B N (and thus also w i ≥ (1/k) B N ) for each item i ∈ L, thin items T such that h i < (1/k) B+2 N for each item i ∈ T .
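Given the value B from Lemma 16, the classification itself is a one-liner per class; the sketch below (items as (w, h) pairs with w ≥ h, names of our choosing) only illustrates the definition, while the shifting argument that discards the middle band of heights is in the proof in Appendix A.

def classify_items(items, N, k, B):
    # Large items: h >= (1/k)^B * N; thin items: h < (1/k)^(B+2) * N.
    large = [(w, h) for (w, h) in items if h >= (1 / k) ** B * N]
    thin = [(w, h) for (w, h) in items if h < (1 / k) ** (B + 2) * N]
    return large, thin

print(classify_items([(50, 40), (30, 2), (20, 10)], N=100, k=2, B=1))
# -> ([], [(30, 2), (20, 10)]); the first item falls in the removed middle band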
Let B be the integer due to Lemma 16 and we work with the resulting item classification. If |T | ≥ k then we can create a solution of size k satisfying the claim of Lemma 15 by simply stacking k thin items on top of each other: any k thin items have a total height of at most k · (1/k) B+2 N ≤ (1/k) 2 N . Thus, from now on assume that |T | < k.
Sparsifying large items. Our strategy is now to delete some of the large items and move the remaining items. This will allow us to free the area [0, N ] × [0, (1/k) O(1/ ) N ] of the knapsack. Denote by OPT the almost optimal solution obtained by applying Lemma 16. We remove the items in OPT T := OPT ∩ T temporarily; we will add them back later.
We construct a directed graph G = (V, A) where we have one vertex v i ∈ V for each item i ∈ OPT L := OPT ∩ L. We connect two vertices v i , v i by an arc a = (v i , v i ) if and only if we can draw a vertical line segment of length at most (1/k) B N that connects item i with item i without intersecting any other item such that i lies above i, i.e., the bottom coordinate of i is at least as large as the top coordinate of i, see Figure 2 for a sketch. We obtain the following proposition since for each edge we can draw a vertical line segment and these segments do not intersect each other.
Proposition 17. The graph G is planar.
Next, we apply Lemma 9 to G with ε′ := ε. Let G′ = (V′, A′) be the resulting graph. We remove from OPT L all items i ∈ V \ V′ and denote by OPT′ L the resulting solution. We push up all items in OPT′ L as much as possible. If now the strip (0, N ) × (0, (1/k) B N ) is not intersected by any item then we can place all the items in T into the remaining space. Their total height can be at most k · (1/k) B+2 N ≤ (1/k) B+1 N and thus we can leave a strip
of height (1/k) B N − (1/k) B+1 N ≥ (1/k)^O(1/ε) N
and width N empty. This completes the proof of Lemma 15 for this case.
Assume next that the strip (0, N )×(0, (1/k) B N ) is intersected by some item: the following lemma implies that there is a set of c = O(1/ 2 ) vertices whose items intuitively connect the top and the bottom edge of the knapsack.
Lemma 18. Assume that in OPT L there is an item i 1 intersecting (0, N ) × (0, (1/k) B N ). Then G contains a path v i1 , v i2 , . . . , v i K with K ≤ c = O(1/ 2 ), such that the distance between i K and the top edge of the knapsack is less than (1/k) B N . Proof. Let C denote all vertices v in G such that there is a directed path from v i1 to v in G . The vertices in C are contained in the connected component C in G that contains v i1 . Note that |C| ≤ |C | ≤ c .
We claim that C must contain a vertex v j whose corresponding item j is closer than (1/k) B N to the top edge of the knapsack. Otherwise, we would have been able to push up all items corresponding to vertices in C by (1/k) B N units: first we could have pushed up all items such that their corresponding vertices have no outgoing arc, then all items such that their vertices have outgoing arcs pointing at the former set of vertices, and so on. By definition of C, there must be a path connecting
v i1 with v j . This path v i1 , v i2 , . . . , v i K = v j
contains only vertices in C and hence its length is bounded by c . The claim follows.
Our goal is now to remove the items i 1 , . . . , i K due to Lemma 18 and O(K) = O(1/ 2 ) more large items from OPT L . Since we can assume that k ≥ Ω(1/ 3 ) this will lose only a factor of 1 + O( ) in the objective. To this end we define K + 1 deletion rectangles, see Figure 2. We place one such rectangle R between any two consecutive items i , i +1 . The height of R equals the vertical distance between i and i +1 (at most (1/k) B N ) and the width of R equals (1/k) B N . Since v i , v i +1 are connected by an arc in G , we can draw a vertical line segment connecting i with i +1 . We place R such that it is intersected by this line segment. Note that for the horizontal position of R there are still several possibilities and we choose one arbitrarily. Finally, we place a special deletion rectangle between the item i K and the top edge of the knapsack and another special deletion rectangle between the item i 1 and the bottom edge of the knapsack. The heights of these rectangles equal the distance of i 1 and i K with the bottom and top edge of the knapsack, resp. (which is at most (1/k) B N ), and their widths equal (1/k) B N . They are placed such that they touch the bottom edge of i 1 and the top edge of i K , resp.
Lemma 19. Each deletion rectangle can intersect at most 4 large items in its interior. Hence, there can be only O(K) ≤ O(c ) = O(1/ 2 ) large items intersecting a deletion rectangle in their interior.
Observe that the deletion rectangles and the items in {i 1 , . . . , i K } separate the knapsack into a left and a right part with items OPT lef t and OPT right , resp. We delete all items in i 1 , . . . , i K and all items intersecting the interior of a deletion rectangle. Each deletion rectangle and each item in {i 1 , . . . , i K } has a width of at least (1/k) B N . Thus, we can move all items in OPT lef t simultaneously by (1/k) B N units to the right. After this, no large item intersects the area (0, (1/k) B N ) × (0, N ). We rotate the resulting solution by 90 degrees, hence getting an empty horizontal strip (0, N )
× (0, (1/k) B N ). The total height of items in OPT T is at most k · (1/k) B+2 N ≤ (1/k) B+1 N . Therefore, we can place all items of OPT T inside the freed strip and still leave a horizontal strip of width N and height (1/k) B N − (1/k) B+1 N ≥ (1/k)^O(1/ε) N empty. This completes the proof of Lemma 15.
FPT-algorithm with resource augmentation
We now compute a packing that contains as many items as the solution due to Lemma 15. However, it might use the space of the entire knapsack. In particular, we use the free space in the knapsack in the latter solution in order to round the sizes of the items. In the following lemma the reader may think of k′ = (1 − ε)k and k̄ = k^O(1/ε) . Note that Lemma 20 yields an FPT algorithm if we are allowed to increase the size of the knapsack by a factor 1 + O(1/k̄) where k̄ is a second parameter.
In the remainder of this section, we prove Lemma 20 and we do not differentiate between large and thin items anymore. Assume that there exists a solution OPT′ of size k′ that leaves the area [0, N ] × [0, N/k̄] of the knapsack empty. We want to compute a solution of size k′. We use the empty space in order to round the heights of the items in the packing of OPT′ to integral multiples of N/(k′k̄). Note that in OPT′ an item i might be rotated. Thus, depending on this we actually want to round its height h i or its width w i . To this end, we define rounded heights and widths by ĥ i := ⌈h i /(N/(k′k̄))⌉ · N/(k′k̄) and ŵ i := ⌈w i /(N/(k′k̄))⌉ · N/(k′k̄) for each item i.
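The rounding can be stated compactly; the sketch below (hypothetical helper names) rounds a vertical dimension up to the next multiple of N/(k′k̄), which is exactly how ĥ i and ŵ i are defined.

import math

def round_up(value, unit):
    # Round value up to the next integral multiple of unit.
    return math.ceil(value / unit) * unit

def rounded_sizes(items, N, k_prime, k_bar):
    # items are (h, w) pairs; returns the rounded pairs (h-hat, w-hat).
    unit = N / (k_prime * k_bar)
    return [(round_up(h, unit), round_up(w, unit)) for (h, w) in items]

print(rounded_sizes([(12, 33), (7, 49)], N=100, k_prime=2, k_bar=10))
# -> [(15.0, 35.0), (10.0, 50.0)]  (the rounding unit is 100 / 20 = 5)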
Lemma 21.
There exists a feasible packing for all items in OPT′ even if for each rotated item i we increase its width w i to ŵ i and for each non-rotated item i ∈ OPT′ we increase its height h i to ĥ i .
To visualize the packing due to Lemma 21 one might imagine a container of height ĥ i and width w i for each non-rotated item i and a container of height h i and width ŵ i for each rotated item i. Next, we group the items according to their values ĥ i and ŵ i . From each group of items with the same value ĥ i we delete the items that are not among the k′ items with smallest width, and from each group with the same value ŵ i we delete the items that are not among the k′ items with smallest height. At most 2k′ · k′k̄ = O(k̄(k′) 2 ) items remain, denote them by Ī. Then, in time (k′k̄) O(k′) we can solve the remaining problem by completely enumerating over all subsets of Ī with at most k′ elements. For each enumerated set we check within the given time bounds whether its items can be packed into the knapsack (possibly via rotating some of them) by guessing sufficient auxiliary information. Therefore, if a solution of size k′ for a knapsack of width N and height (1 − 1/k̄)N exists, then we will find a solution of size k′ that fits into a knapsack of width and height N . Now the proof of Theorem 4 follows by using Lemma 15 and then applying Lemma 20 with k′ = (1 − ε)k and k̄ = k O(1/ε) . The set Ī is the claimed set (which intuitively forms the approximative kernel), we compute a solution of size at least (1 − ε)k ≥ k/(1 + 2ε) and we can redefine ε appropriately.
Hardness of Geometric Knapsack
We show that 2dk and 2dkr are both W[1]-hard for parameter k by reducing from a variant of subset sum. Recall that in subset sum we are given m positive integers x 1 , . . . , x m as well as integers t and k, and have to determine whether some k-tuple of the numbers sums to t; this is W[1]-hard with respect to k [18]. In the variant multi-subset sum it is allowed to choose numbers more than once. It is easy to verify that the proof for W[1]-hardness of subset sum due to Downey and Fellows [18] extends also to multi-subset sum. (See Lemma 23 in Section B.) In our reduction to 2dkr we prove that rotations are not required for optimal solutions, making W[1]-hardness of 2dk a free consequence.
Proof sketch for Theorem 3. We give a polynomial-time parameterized reduction from multi-subset sum to 2dkr with output parameter k′ = O(k 2 ). This establishes W[1]-hardness of 2dkr.
Observe that, for any packing of items into the knapsack, there is an upper bound of N on the total width of items that intersect any horizontal line through the knapsack, and similarly an upper bound of N for the total height of items along any vertical line. We will let the dimensions of some items depend on numbers x i from the input instance (x 1 , . . . , x m , t, k) of multi-subset sum such that, using these upper bound inequalities, a correct packing certifies that y 1 + . . . + y k = t for some k of the numbers. The key difficulty is that there is a lot of freedom in the choice of which items to pack and where in case of a no instance.
To deal with this, the items corresponding to numbers x i from the input are all almost squares and their dimensions are incomparable. Concretely, an item corresponding to some number x i has height L + S + x i and width L + S + 2t − x i ; we call such an item a tile.
(The exact values of L and S are immaterial here, but L ≫ S ≫ t > x i holds.) Thus, when using, e.g., a tile of smaller width (i.e., larger value of x i ) it will occupy "more height" in the packing. The knapsack is only slightly larger than a k by k grid of such tiles, implying that there is little freedom for the placement. Let us also assume for the moment that no rotations are used.
Accordingly, we can specify k vertical lines that are guaranteed to intersect all tiles of any packing that uses k 2 tiles, by using pairwise distance L − 1 between them. Moreover, each line is intersecting exactly k private tiles. The same holds for a similar set of k horizontal lines. Together we get an upper bound of N for the sum of the widths (heights) along any horizontal (vertical) line. Since the numbers x i occur negatively in widths, we effectively get lower bounds for them from the horizontal lines. When the sizes of these tiles (and the auxiliary items below) are appropriately chosen, it follows that all upper bound equalities must be tight. This in turn, due to the exact choice of N , implies that there are k numbers y 1 , . . . , y k with sum equal to t.
Unsurprisingly, using just the tiles we cannot guarantee that a packing exists when given a yes-instance. This can be fixed by adding a small number of flat/thin items that can be inserted between the tiles (see Figure 3, but note that it does not match the size ratios from this proof); these have dimension L × S or S × L. Because one dimension of these items is large (namely L) they must be intersected by the above horizontal or vertical lines. Thus, they can be proved to enter the above inequalities in a uniform way, so that the proof idea goes through.
Finally, let us address the question of why we can assume that there are no rotations. This is achieved by letting the width of any tile be larger than the height of any tile, and adding a final auxiliary item of width N and small height, called the bar. To get the desired number of items in a solution packing, it can be ensured that the bar must be used as no more than k 2 tiles can fit into N × N and there is a limited supply of flat/thin items. W.l.o.g., the bar is not rotated. It can then be checked that using at least one tile in its rotated form will violate one of the upper bounds for the height. This completes the proof sketch.
Open Problems
This paper leaves several interesting open problems. A first obvious question is whether there exists a PAS also for 2dk (i.e., in the case without rotations). We remark that the algorithm from Lemma 20 can be easily adapted to the case without rotations. Unfortunately, Lemma 15 does not seem to generalize to the latter case. Indeed, there are instances in which we lose up to a factor of 2 if we require a strip of width Ω ε,k (1) · N to be emptied, see Figure 4. We also note that both our PASs work for the cardinality version of the problems: an extension to the weighted case is desirable. Unlike related results in the literature (where extension to the weighted case follows relatively easily from the cardinality case), this seems to pose several technical issues. We remark that all the problems considered in this paper might admit a PTAS in the standard sense, which would be a strict improvement on our PASs. Indeed, the existence of a QPTAS for these problems [1,2,15] suggests that such PTASs are likely to exist. However, finding those PTASs is a very well-known and long-standing problem in the area. We hope that our results can help to achieve this challenging goal.
References
A Omitted Proofs for Sections 2 and 3
Proof of Lemma 8. We define a planar embedding for G 1 based on the position of the rectangles in R * . Each vertex v i ∈ V 1 is represented by a rectangle R̄ i which is defined to be the convex hull of all corners of cells of G that are contained in R i . Let e = {v i , v i′ } ∈ E 1 be an edge. Let g be a grid cell that R i and R i′ both intersect. If R i and R i′ intersect the same horizontal line H ∈ L H then we represent e by a horizontal line segment ℓ connecting R̄ i and R̄ i′ such that H contains ℓ. We do a symmetric operation if R i and R i′ intersect the same vertical line V ∈ L V . If R i and R i′ contain the top left and the bottom right corner of g, resp., then we represent e by a diagonal line segment connecting R̄ i and R̄ i′ within g.
We do this operation with each edge e ∈ E 1 . Note that in each grid cell we draw at most one diagonal line segment. By construction, no two line segments intersect and hence G 1 is planar.
Proof of Lemma 9.
A result by Frederickson [21] states that for any integer r any n-vertex planar graph can be divided into O(n/r) regions with no more than r vertices each, and O(n/√r) boundary vertices in total. We choose r := O(1/ε²) and then we have at most ε · n boundary vertices in total. We define V' to be the set of non-boundary vertices.
Proof of Lemma 10. We define a planar embedding for G_2. Let w_j ∈ V_2 and assume that w_j represents a connected component C_j of G_1. We represent C_j by drawing the rectangle R̄_i for each vertex v_i ∈ C_j (like in the proof of Lemma 8, the rectangle R̄_i is defined to be the convex hull of all corners of cells of G that are contained in R_i) and the following set of line segments (actually almost the same as the ones defined in the proof of Lemma 8). Consider two rectangles R_i, R_i' ∈ C_j intersecting the same grid cell g.
If R_i, R_i' intersect the same horizontal line H ∈ L_H then we draw a horizontal line segment connecting R̄_i and R̄_i' such that the segment is a subset of H. If R_i and R_i' contain the top left and the bottom right corner of g, resp., then we draw a diagonal line segment connecting R̄_i and R̄_i' within g. This yields a connected area A_j representing C_j (and thus w_j).
Let e = {w_j, w_j'} ∈ E_2. We want to introduce a line segment representing e. By definition of E_2 there must be a grid cell g and two rectangles R_i, R_i' intersecting g whose vertices belong to different connected components of G_1 and such that R_i and R_i' contain the bottom left and the top right corner of g, resp. Note that then there can be no vertex v_i'' ∈ V_1 whose rectangle contains the top left or the bottom right corner of g: such a rectangle would be connected by an edge with both R_i and R_i' in G_1 and then all three rectangles R_i, R_i', R_i'' would be in the same connected component of G_1. We draw a diagonal line segment connecting R̄_i and R̄_i' within g; this segment then does not intersect any area A_j for any vertex w_j ∈ V_2. Also, since we add at most one line segment per grid cell g these line segments do not intersect each other. Hence, G_2 is planar.
Proof of Theorem 2. First, we define the grid as described in Section 2.1. In case that the algorithm in Lemma 5 finds a solution of size k then we define the kernel R̄ to be this solution and we are done. Otherwise, we enumerate all possible sets G_k of the kind as described in Lemma 14, at most k^{O(1/ε^8)} many. Then, for each such set G_j we consider all rectangles contained in the union of G_j and we compute a feasible solution of size c for them if such a solution exists, and otherwise we compute the optimal solution. We do this by complete enumeration in time n^{O(c)} = n^{O(1/ε^8)}. For each set G_j the obtained solution has size at most c = O(1/ε^8). We define the kernel R̄ to be the union over all k^{O(1/ε^8)} solutions obtained in this way. Hence, |R̄| ≤ k^{O(1/ε^8)}. Also, we can guarantee that the output of our algorithm is a subset of R̄ and hence R̄ contains a (1 + ε)-approximative solution.
Proof of Lemma 16. Let OPT denote the optimal solution to the given instance. For each B' ∈ {1, . . . , 8/ε} we define I(B') := {i ∈ I | h_i ∈ [(1/k)^{B'+2} N, (1/k)^{B'} N)}.
For any item i ∈ I there can be at most four values of B' such that i is contained in the respective set I(B'). Hence, there must be one value B ∈ {1, . . . , 8/ε} such that |I(B) ∩ OPT| ≤ (ε/2)|OPT|. Each item i ∈ I \ I(B) is then contained in L or T. Since |I(B) ∩ OPT| ≤ (ε/2)|OPT| we lose only a factor of (1 − ε/2)^{−1} ≤ 1 + ε in the approximation ratio.
Proof of Lemma 19. Each deletion rectangle has a height of at most (1/k)^B N and a width of exactly (1/k)^B N. Each large item has height and width at least (1/k)^B N. Therefore, each deletion rectangle can intersect with at most 4 large items in its interior (intuitively, at its 4 corners).
Proof of Lemma 21. For each item i ∈ OPT we perform the following operation. Each item i' ∈ OPT such that i' is placed underneath i (i.e., such that the y-coordinate of the top edge of i' is upper-bounded by the y-coordinate of the bottom edge of i) is moved by N/k² units down. If i is not rotated then we increase the height of i to ĥ_i by appending a rectangle of width w_i and height ĥ_i − h_i ≤ N/k² underneath i. If i is rotated then we increase the width of i to ŵ_i by appending a rectangle of width h_i and height ŵ_i − w_i ≤ N/k² underneath i. Since we moved down the mentioned other items before, the new (bigger) item does not intersect any other item. We do this operation for each item i ∈ OPT. In the process, we move each item down by at most (k − 1)N/k² and when we increase its height then the y-coordinate of its bottom edge decreases by at most N/k². Initially, the y-coordinate of the bottom edge of any item was at least N/k. Hence, at the end the y-coordinate of the bottom edge of any item is at least N/k − (k − 1)N/k² − N/k² ≥ N/k − N/k ≥ 0. Hence, all rounded items are contained in the knapsack.
Proof of Lemma 22.
Consider the packing for OPT due to Lemma 21 in which we increased the height of each non-rotated item i to ĥ_i and the width of each rotated item i to ŵ_i. Suppose that there is a set L^{(j)}_h and an item i ∈ OPT' ∩ L^{(j)}_h that is not contained in Ī^{(j)}_h, the latter set of items. Since |OPT'| ≤ k and i ∈ OPT' there must be an item i' ∈ Ī^{(j)}_h such that i' ∉ OPT'. Then we can replace i by i' since ĥ_{i'} = ĥ_i and w_{i'} ≤ w_i. We perform this operation for each set L^{(j)}_h and a symmetric operation for each set L^{(j)}_w until we obtain a solution for which the lemma holds. This solution then contains the same number of items as the initial solution OPT'.
B Proofs for Section 4
Lemma 23. multi-subset sum is W[1]-hard.
Proof. Downey and Fellows [18] give a parameterized reduction from perfect code(k) to subset sum. The created instances (x_1, . . . , x_m, t, k) have the property that all numbers have digits 0 or 1 when expressed in base k + 1. Moreover, the target value t is equal to 1 . . . 1 in base k + 1. Accordingly, when any k numbers x_i sum to t there can be no carries in the addition. Thus, no two selected numbers may have a 1 in the same position. Hence, allowing to select numbers multiple times does not create spurious solutions, giving us a correct reduction from perfect code(k) to multi-subset sum.
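As a small illustration (our own sketch, not from the paper), the no-carry property can be checked directly on toy numbers whose base-(k + 1) digits are all 0 or 1:

from itertools import combinations_with_replacement

def digits(x, base, width):
    # base-(k+1) digits of x, least significant first
    return [(x // base**p) % base for p in range(width)]

def no_carry_check(nums, k, width):
    base = k + 1
    target = sum(base**p for p in range(width))   # the all-ones number 1...1 in base k+1
    for combo in combinations_with_replacement(nums, k):
        if sum(combo) == target:
            # with digits in {0, 1} and only k summands, no column sum exceeds k < base,
            # so there are no carries and every digit position is hit exactly once
            cols = [sum(digits(x, base, width)[p] for x in combo) for p in range(width)]
            assert all(c == 1 for c in cols)
    return True

print(no_carry_check([1, 4, 16, 5, 20], k=3, width=3))   # digits taken in base 4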
We split the proof of Theorem 3 into two separate statements for 2dkr and 2dk.
Theorem 24. 2dkr is W[1]-hard.
Proof. We give a polynomial-time parameterized reduction from multi-subset sum to 2dkr with output parameter k' = O(k²). By Lemma 23, this establishes W[1]-hardness of 2dkr.
Construction. Let (x_1, . . . , x_m, t, k) be an instance of multi-subset sum. W.l.o.g. we may assume that 4 ≤ k ≤ m and that x_i < t for all i ∈ [m]. Furthermore, as solutions may select the same integer multiple times, we may assume that all the x_i are pairwise different.
Throughout, we take a knapsack to be an N by N square with coordinate (0, 0) in the bottom left corner and (N, N) at top right. The first coordinate of any point in the knapsack measures the horizontal (left-right) distance from the point to (0, 0); the second coordinate measures the vertical (up-down) distance from (0, 0). All items in the following construction are given such that their sizes reflect their intended rotation in a solution, i.e., heights refer to vertical dimensions and widths to horizontal dimensions.
We begin by constructing an instance of 2dk. Throughout, for an item R, we will use height(R) and width(R) to denote its height and width. The instance of 2dk is defined as follows:
We define constants
S := k² · t and L := k² · S = k⁴ · t.
(The specific values will not be important so long as k² · t ≤ S and k² · S ≤ L. Intuitively, the identifiers are chosen to mean small and large.)
The knapsack has height and width both equal to
N := k · L + (2k − 1) · S + (2k − 1) · t. (1)
For each i ∈ [m] we construct k² items R(i, 1), . . . , R(i, k²) with
height(R(i, j)) = L + S + x_i, (2)
width(R(i, j)) = L + S + 2t − x_i. (3)
We call these items tiles. We say that each tile R(i, ·) corresponds to the number x i from the input that it was constructed for. Since the x i are pairwise different, the x i corresponding to any tile can be easily read off from both height and width. We point out that all tiles have height strictly between L + S and L + S + t and width strictly between L + S + t and L + S + 2t. We add p := k · (k − 1) items T (1), . . . , T (p) with height L and width S. We call these the thin items. We add p items F (1), . . . , F (p) with height S and width L. We call these the flat items.
We add a single (very flat and very wide) item of height (2k − 2) · t and width N , which we call the bar.
The created instance has a target value of k' = k² + 2p + 1. (The intention is to pack all thin and all flat items, the bar, and exactly k² tiles.) This completes the construction. Clearly, all necessary computations can be performed in polynomial time. The parameter value k' = k² + 2p + 1 is upper bounded by O(k²). It remains to prove correctness.
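For concreteness, here is a small sketch (our own, not part of the paper) that builds the item list of this reduction from a multi-subset sum instance, using the definitions of S, L, N and the item dimensions above; heights and widths are given in the intended orientation:

def build_instance(xs, t, k):
    S = k**2 * t
    L = k**2 * S
    N = k * L + (2 * k - 1) * S + (2 * k - 1) * t    # knapsack side length
    p = k * (k - 1)

    items = []                                        # (height, width) pairs
    for x in xs:                                      # k^2 tiles per input number
        items += [(L + S + x, L + S + 2 * t - x)] * k**2
    items += [(L, S)] * p                             # thin items
    items += [(S, L)] * p                             # flat items
    items += [((2 * k - 2) * t, N)]                   # the bar
    target_num_items = k**2 + 2 * p + 1               # k' of the reduction
    return N, items, target_num_items

N, items, k_prime = build_instance([3, 5, 9, 11], t=17, k=4)
print(N, k_prime, len(items))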
Correctness. We need to prove that the instance (x_1, . . . , x_m, t, k) is yes for multi-subset sum if and only if the constructed instance is yes for 2dkr. ⇐=: Assume that the created instance is yes for 2dkr, i.e., that it has a packing with k' = k² + 2p + 1 items and fix any such packing. Observe that the packing must contain at least k² tiles as there are only 2p + 1 items that are not tiles. We will show that the packing uses exactly k² tiles, the 2p thin/flat items, and the bar. It is useful to recall that tiles have height and width both greater than L + S no matter whether they are rotated.
Consider the effect of placing k vertical lines in the knapsack at horizontal coordinates L − 1, 2 · (L − 1), . . . , k · (L − 1). We first observe that these lines must necessarily intersect all tiles of the packing because each of them has width at least L: The distance between any two consecutive lines is L − 1, same as the distance from the left border of the knapsack to the first line. The distance from the kth vertical line to the right border is also strictly less than L:
N − k · (L − 1) = k + (2k − 1) · S + (2k − 1) · t < S + (2k − 1) · S + S = (2k + 1) · S < L
Observe that no line can intersect more than k tiles: Any two tiles of the packing may not overlap and may in particular not share their intersection with any line. Since each line has length N and each intersection with a tile has length greater than L, there can be at most k tiles intersected by any line as N < (k + 1) · L:
N = k · L + (2k − 1) · S + (2k − 1) · t < k · L + 4k · S ≤ (k + 1) · L
Overall, this means that the packing contains at most k² tiles: There are k lines that intersect all tiles of the packing, each of them intersecting at most k. By our earlier observation, this implies that the packing contains exactly k² tiles in addition to all 2p flat/thin items. Moreover, each line intersects exactly k tiles and no two lines intersect the same tile.
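For concreteness, a quick numeric check (ours) that the two distance bounds used above, N − k · (L − 1) < L and N < (k + 1) · L, hold for admissible parameters (recall that 4 ≤ k):

for k in range(4, 9):
    for t in range(1, 40):
        S = k**2 * t
        L = k**2 * S
        N = k * L + (2 * k - 1) * S + (2 * k - 1) * t
        assert N - k * (L - 1) < L and N < (k + 1) * L
print("both bounds hold for the sampled parameters")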
Let us now check how the vertical lines and the flat and thin items interact. Clearly, both flat items and rotated thin items have width L and height S. Accordingly, each flat and each rotated thin item must be intersected by at least one of the k vertical lines. We already know that a total length of at least k · (L + S) of each line is occupied by the k tiles that the line intersects. This leaves at most a length of N − k · (L + S) = (k − 1) · S + (2k − 1) · t < k · S for intersecting flat and rotated thin items, and allows for intersecting at most k − 1 of them. (Again, no two items can share their intersection with the line.) Thus, there are at most p = k · (k − 1) of the flat and rotated thin items in the packing.
Before analyzing the vertical lines further, let us perform an analogous argument for k horizontal lines with vertical coordinates L − 1, 2 · (L − 1), . . . , k · (L − 1) and their intersection with tiles and flat/thin items. It can be verified that each of them similarly intersects exactly k tiles and that no tile is intersected twice. The argument for flat and thin items is analogous as well, except that we now reason about rotated flat and (non-rotated) thin items, which have height L and width S; we find that there are at most p such items and that each horizontal line intersects at most k − 1 of them. Since in total there must be 2p flat and thin items, this implies that both sets of lines (horizontal and vertical) intersect p of these items each. Since flat and thin items can be swapped freely, we may assume that none of these items are rotated, and that the vertical lines intersect the p flat items and the horizontal lines intersect the p thin items.
We know now that the packing contains exactly k² tiles as well as the p flat and the p thin items. Thus, to get a total of k' = k² + 2p + 1 items, it must also contain the bar, which has height (2k − 2) · t and width N. W.l.o.g., we may assume that the bar is not rotated, or else we could rotate the entire packing. It follows that all vertical lines intersect the bar due to its width of N, which matches the width of the knapsack.
Let us now analyze both vertical and horizontal lines further. The goal is to obtain inequalities on the values x_i that go into the construction of the tiles; up to now we have only used that they are fairly large. We know that each vertical line intersects k tiles, k − 1 flat items, and the bar. Let h_1, . . . , h_k denote the heights of the tiles (ordered arbitrarily) and recall that each flat item has height S while the bar has height (2k − 2) · t. Since all intersections with the line are disjoint and the line has length N (equaling the height of the knapsack), we get that
N ≥ h_1 + . . . + h_k + (k − 1) · S + (2k − 2) · t. (4)
At this point, in order to plug in values for the h i , it is important whether any of the tiles are rotated; we will show that having at least one rotated tile causes a violation of (4). To this end, recall that (non-rotated) tiles have heights strictly between L + S and L + S + t and widths strictly between L + S + t and L + S + 2t. Thus, if at least one tile is rotated then it has height greater than L + S + t, rather than the weaker bound of greater than L + S. Using this, the right-hand side of (4) can be lower bounded by
RHS > (k − 1) · (L + S) + (L + S + t) + (k − 1) · S + (2k − 2) · t = k · L + (2k − 1) · S + (2k − 1) · t = N,
contradicting (4). Thus, none of the tiles intersected by the vertical line can be rotated.
Since each tile is intersected by a vertical line, it follows that no tiles can be rotated and we can analyze the lines using the sizes as given in (2) and (3). Let us return to replacing the values h_i in (4). Recall that the height of a tile is equal to L + S + x_i where x_i is the corresponding integer from the input to the initial multi-subset sum instance. Thus, if the ith intersected tile corresponds to input integer y_i ∈ {x_1, . . . , x_m} then by (2) we have
h_i = L + S + y_i.
Plugging this into (4) yields
N ≥ Σ_{i=1}^{k} (L + S + y_i) + (k − 1) · S + (2k − 2) · t = k · L + (2k − 1) · S + (2k − 2) · t + Σ_{i=1}^{k} y_i.
Using N = k · L + (2k − 1) · S + (2k − 1) · t we immediately get
t ≥ Σ_{i=1}^{k} y_i. (5)
An analogous argument along each horizontal line, using the widths L + S + 2t − y_i of the k tiles it intersects and the width S of the k − 1 thin items it intersects, yields N ≥ Σ_{i=1}^{k} (L + S + 2t − y_i) + (k − 1) · S and hence Σ_{i=1}^{k} y_i ≥ t for the numbers corresponding to these k tiles. Summing the inequalities (5) over all k vertical lines and the latter inequalities over all k horizontal lines shows that the numbers corresponding to all k² tiles sum to exactly k · t; hence the k tiles intersected by any fixed vertical line correspond to numbers y_1, . . . , y_k with Σ_{i=1}^{k} y_i = t, and the multi-subset sum instance is yes.
=⇒: Assume now that (x_1, . . . , x_m, t, k) is a yes-instance of multi-subset sum and let y_1, . . . , y_k be (not necessarily distinct) input numbers with Σ_{i=1}^{k} y_i = t. We construct a packing of k' = k² + 2p + 1 items that places the k² tiles in a k × k grid-like pattern; R_{a,b} denotes the tile placed in column a ∈ [k] and row b ∈ [k].
More formally, item R_{a,b} is a tile corresponding to y_i, where i = 1 + ((a − b) mod k), and accordingly has height(R_{a,b}) = L + S + y_i and width(R_{a,b}) = L + S + 2t − y_i. This yields the required property that for each a ∈ [k] the items R_{a,1}, . . . , R_{a,k} contain tiles corresponding to all numbers y_1, . . . , y_k (and correctly contain multiple copies for numbers that appear more than once). The same holds for items R_{1,b}, . . . , R_{k,b} for all b ∈ [k].
We use height(R_{i,j}) and width(R_{i,j}) to refer to height and width of tile R_{i,j}. We use left(R), right(R), top(R), and bottom(R) to specify the coordinates of any item in our packing, i.e., for the k² tiles, the 2p flat/thin items, and the bar. The coordinates for tiles are chosen as
left(R_{a,b}) = (a − 1) · S + Σ_{i=1}^{a−1} width(R_{i,b}),
right(R_{a,b}) = (a − 1) · S + Σ_{i=1}^{a} width(R_{i,b}),
bottom(R_{a,b}) = (b − 1) · S + Σ_{i=1}^{b−1} height(R_{a,i}),
top(R_{a,b}) = (b − 1) · S + Σ_{i=1}^{b} height(R_{a,i}).
Let us first check some basic properties of these coordinates:
We observe that each tile is assigned coordinates that match its size, i.e., width(R_{a,b}) = right(R_{a,b}) − left(R_{a,b}) and height(R_{a,b}) = top(R_{a,b}) − bottom(R_{a,b}).
All coordinates lie inside the knapsack. Clearly, all coordinates are non-negative and it suffices to give upper bounds for top(R_{a,k}) and right(R_{k,b}). Recall that by construction each set of tiles R_{a,1}, . . . , R_{a,k} contains tiles corresponding to all numbers y_1, . . . , y_k, and same for R_{1,b}, . . . , R_{k,b}. Thus we get
right(R_{k,b}) = (k − 1) · S + Σ_{i=1}^{k} width(R_{i,b}) = (k − 1) · S + Σ_{i=1}^{k} (L + S + 2t − y_i) = k · L + (2k − 1) · S + 2k · t − Σ_{i=1}^{k} y_i = k · L + (2k − 1) · S + (2k − 1) · t = N.
Similarly, we get
top(R_{a,k}) = (k − 1) · S + Σ_{i=1}^{k} height(R_{a,i}) = (k − 1) · S + Σ_{i=1}^{k} (L + S + y_i) = k · L + (2k − 1) · S + Σ_{i=1}^{k} y_i = k · L + (2k − 1) · S + t = N − (2k − 2) · t.
We will later use the gap of (2k − 2) · t between N and N − (2k − 2) · t to place the bar item, as its height exactly matches the gap. For any tile R_{a,b} the possible coordinates fall into very small intervals, using that all heights and widths of tiles lie strictly between L + S and L + S + 2t. We show this explicitly for left(R_{a,b}):
left(R_{a,b}) = (a − 1) · S + Σ_{i=1}^{a−1} width(R_{i,b}),
left(R_{a,b}) > (a − 1) · S + Σ_{i=1}^{a−1} (L + S) = (a − 1) · L + (2a − 2) · S,
left(R_{a,b}) < (a − 1) · S + Σ_{i=1}^{a−1} (L + S + 2t) = (a − 1) · L + (2a − 2) · S + (2a − 2) · t < (a − 1) · L + (2a − 1) · S.
In this way, we get the following intervals for left(R_{a,b}), right(R_{a,b}), bottom(R_{a,b}), and top(R_{a,b}). (Note that we sacrifice the possibility of tighter bounds in order to get the same simple form of bound for top and right and for bottom and left.)
(a − 1) · L + (2a − 2) · S < left(R_{a,b}) < (a − 1) · L + (2a − 1) · S (8)
a · L + (2a − 1) · S < right(R_{a,b}) < a · L + 2a · S (9)
(b − 1) · L + (2b − 2) · S < bottom(R_{a,b}) < (b − 1) · L + (2b − 1) · S (10)
b · L + (2b − 1) · S < top(R_{a,b}) < b · L + 2b · S (11)
We can now easily verify that no two tiles R_{a,b} and R_{c,d} overlap if (a, b) ≠ (c, d). If a ≠ c then we may assume w.l.o.g. that a < c (and hence a ≤ c − 1). Using (9) and (8) we get right(R_{a,b}) < a · L + 2a · S ≤ (c − 1) · L + (2c − 2) · S < left(R_{c,d}).
Thus, R_{a,b} and R_{c,d} do not overlap if a ≠ c. If instead a = c then we must have b ≠ d and, w.l.o.g., b < d (and hence b ≤ d − 1). Using (11) and (10) we have top(R_{a,b}) < b · L + 2b · S ≤ (d − 1) · L + (2d − 2) · S < bottom(R_{c,d}).
Thus, no two tiles R_{a,b} and R_{c,d} with (a, b) ≠ (c, d) overlap.
We will now specify coordinates for the p flat and the p thin items. For this purpose the intervals for coordinates of the tiles (8)-(11) are highly useful. For thin items, there will always be two adjacent tiles, to the left and to the right, and we use the intervals to get top and bottom coordinates. For flat items the situation is the opposite; there are adjacent tiles on the top and bottom sides and we use the intervals to get left and right coordinates. Recall that thin items have height L and width S, whereas flat items have height S and width L.
We denote the p thin items by T_{a,b} with a ∈ [k − 1] and b ∈ [k]; we choose coordinates as follows:
left(T_{a,b}) = right(R_{a,b}) = (a − 1) · S + Σ_{i=1}^{a} width(R_{i,b}) (12)
right(T_{a,b}) = left(R_{a+1,b}) = a · S + Σ_{i=1}^{a} width(R_{i,b}) (13)
bottom(T_{a,b}) = (b − 1) · L + (2b − 1) · S (14)
top(T_{a,b}) = b · L + (2b − 1) · S (15)
Clearly, the coordinates match the dimension of T_{a,b}. We denote the p flat items by F_{a,b} with a ∈ [k] and b ∈ [k − 1], and we use the following coordinates:
left(F_{a,b}) = (a − 1) · L + (2a − 1) · S (16)
right(F_{a,b}) = a · L + (2a − 1) · S (17)
bottom(F_{a,b}) = top(R_{a,b}) = (b − 1) · S + Σ_{i=1}^{b} height(R_{a,i}) (18)
top(F_{a,b}) = bottom(R_{a,b+1}) = b · S + Σ_{i=1}^{b} height(R_{a,i}) (19)
Clearly, the coordinates match the dimension of F_{a,b}. It remains to show that there is no overlap between any of the items placed so far (all except the bar), recalling that intersections between tiles are already ruled out: It remains to consider (1) tile-flat, (2) tile-thin, (3) flat-flat, (4) flat-thin, and (5) thin-thin overlaps.
(1) There are no overlaps between any tile R_{a,b} and any flat item F_{c,d}: If a < c then a ≤ c − 1 and using (9) and (16) we get right(R_{a,b}) < a · L + 2a · S ≤ (c − 1) · L + (2c − 2) · S < (c − 1) · L + (2c − 1) · S = left(F_{c,d}).
If a > c then c ≤ a − 1 and using (17) and (8) we get right(F_{c,d}) = c · L + (2c − 1) · S ≤ (a − 1) · L + (2a − 3) · S < (a − 1) · L + (2a − 2) · S < left(R_{a,b}). If a = c and b ≤ d then top(R_{a,b}) ≤ top(R_{a,d}) = bottom(F_{c,d}) by (18), and if a = c and b > d then bottom(R_{a,b}) ≥ bottom(R_{a,d+1}) = top(F_{c,d}) by (19). Thus, in all four cases there is no overlap, as claimed.
(2) There are no overlaps between any tile R_{a,b} and any thin item T_{c,d}:
Thus, in both cases there is no overlap, as claimed. Overall, we find that there is no overlap between any pair of items placed so far. It remains to add the bar to complete our packing. We already observed earlier that top(R_{a,k}) = N − (2k − 2) · t. Similarly, using (19) we get
top(F_{a,b}) = bottom(R_{a,b+1}) ≤ bottom(R_{a,k}) ≤ top(R_{a,k}) ≤ N − (2k − 2) · t
for all a ∈ [k] and b ∈ [k − 1]. In the same way, using (15) we get top(T_{a,b}) = b · L + (2b − 1) · S ≤ k · L + (2k − 1) · S < N − (2k − 2) · t for all a ∈ [k − 1] and b ∈ [k], recalling that N = k · L + (2k − 1) · S + (2k − 1) · t. Thus, we can place the bar B of height (2k − 2) · t and width N at the top of the knapsack without causing overlaps; formally, its coordinates are as follows.
left(B) = 0, right(B) = N, bottom(B) = N − (2k − 2) · t, top(B) = N.
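To sanity-check the packing just described, the following sketch (our own; it assumes the tile indexing i = 1 + ((a − b) mod k) from above and a solution y with sum(y) = t) computes all coordinates and asserts that every item lies inside the knapsack and that no two items overlap:

def packing(y, t, k):
    S = k**2 * t
    L = k**2 * S
    N = k * L + (2 * k - 1) * S + (2 * k - 1) * t
    h = lambda a, b: L + S + y[(a - b) % k]           # height of tile R_{a,b}
    w = lambda a, b: L + S + 2 * t - y[(a - b) % k]   # width of tile R_{a,b}
    rects = []                                         # rectangles as (x0, x1, y0, y1)
    for a in range(1, k + 1):                          # tiles R_{a,b}
        for b in range(1, k + 1):
            x0 = (a - 1) * S + sum(w(i, b) for i in range(1, a))
            y0 = (b - 1) * S + sum(h(a, i) for i in range(1, b))
            rects.append((x0, x0 + w(a, b), y0, y0 + h(a, b)))
    for a in range(1, k):                              # thin items T_{a,b}
        for b in range(1, k + 1):
            x0 = (a - 1) * S + sum(w(i, b) for i in range(1, a + 1))
            rects.append((x0, x0 + S,
                          (b - 1) * L + (2 * b - 1) * S, b * L + (2 * b - 1) * S))
    for a in range(1, k + 1):                          # flat items F_{a,b}
        for b in range(1, k):
            y0 = (b - 1) * S + sum(h(a, i) for i in range(1, b + 1))
            rects.append(((a - 1) * L + (2 * a - 1) * S,
                          a * L + (2 * a - 1) * S, y0, y0 + S))
    rects.append((0, N, N - (2 * k - 2) * t, N))       # the bar
    assert all(0 <= x0 < x1 <= N and 0 <= y0 < y1 <= N for x0, x1, y0, y1 in rects)
    disjoint = lambda A, B: A[1] <= B[0] or B[1] <= A[0] or A[3] <= B[2] or B[3] <= A[2]
    assert all(disjoint(rects[i], rects[j])
               for i in range(len(rects)) for j in range(i + 1, len(rects)))
    return len(rects)                                  # should equal k**2 + 2*k*(k-1) + 1

print(packing([2, 2, 5, 8], t=17, k=4))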
Overall, we have placed k² + 2p + 1 items without overlap. Thus, the constructed instance of 2dk is a yes-instance, as required. This completes the proof.
Corollary 25. The 2dk problem is W[1]-hard.
Proof. We can use the same construction as in the proof of Theorem 24 to get a parameterized reduction from multi-subset sum to 2dk.
If the constructed instance is yes for 2dk then it is also yes for 2dkr, as the same packing of k' = k² + 2p + 1 items can be used. As shown earlier, the latter implies that the input instance is yes for multi-subset sum. Conversely, if the input instance is yes for multi-subset sum then we already showed that there is a feasible packing to show that the constructed instance is yes for 2dkr. Since the packing did not require rotation of any items, it is also a feasible solution showing that the instance is yes for 2dk.
Figure 4: Example showing that Lemma 15 cannot be generalized to 2dk (without rotations). The total height of the k/2 items on the bottom of the knapsack can be made arbitrarily small. Suppose that we wanted to free up an area of height f(k) · N and width N or of height N and width f(k) · N (for some fixed function f). If the total height of the items on the bottom is smaller than f(k) · N then we would have to eliminate the k/2 items on the bottom or the k/2 items on top. Thus, we would lose a factor of 2 > 1 + ε in the approximation ratio.
| 12,445 |
1811.09242
|
2901560048
|
Word sense induction (WSI), or the task of automatically discovering multiple senses or meanings of a word, has three main challenges: domain adaptability, novel sense detection, and sense granularity flexibility. While current latent variable models are known to solve the first two challenges, they are not flexible to different word sense granularities, which differ very much among words, from aardvark with one sense, to play with over 50 senses. Current models either require hyperparameter tuning or nonparametric induction of the number of senses, which we find both to be ineffective. Thus, we aim to eliminate these requirements and solve the sense granularity problem by proposing AutoSense, a latent variable model based on two observations: (1) senses are represented as a distribution over topics, and (2) senses generate pairings between the target word and its neighboring word. These observations alleviate the problem by (a) throwing garbage senses and (b) additionally inducing fine-grained word senses. Results show great improvements over the state-of-the-art models on popular WSI datasets. We also show that AutoSense is able to learn the appropriate sense granularity of a word. Finally, we apply AutoSense to the unsupervised author name disambiguation task where the sense granularity problem is more evident and show that AutoSense is evidently better than competing models. We share our data and code here: this https URL.
|
Previous works on WSI used context vectors and attributes @cite_8 , pretrained classification systems @cite_12 , and alignment of parallel corpus @cite_27 . In the most recent shared task on WSI @cite_7 , top models used a lexical substitution method (AI-KU) @cite_17 and a Hierarchical Dirichlet Process trained with additional instances (unimelb) @cite_20 .
|
{
"abstract": [
"Most work on word sense disambiguation has assumed that word usages are best labeled with a single sense. However, contextual ambiguity or fine-grained senses can potentially enable multiple sense interpretations of a usage. We present a new SemEval task for evaluating Word Sense Induction and Disambiguation systems in a setting where instances may be labeled with multiple senses, weighted by their applicability. Four teams submitted nine systems, which were evaluated in two settings.",
"We present MSDA (Major Senses Discovery Algorithm) --a development over the context vector approach to (noun) sense discrimination [20, 24] that uses attributes and values instead of word features to cluster contexts, and does not require for the number of senses to be fixed beforehand. The algorithm achieves a precision of 89 on a dataset including both ambiguous and non-ambiguous nouns, twice that of previous algorithms.",
"Given a parallel corpus, if two distinct words in language A, a1 and a2, are aligned to the same word b1 in language B, then this might signal that b1 is polysemous, or it might signal a1 and a2 are synonyms. Both assumptions with successful work have been put forward in the literature. We investigate these assumptions, along with other questions of word sense, by looking at sampled parallel sentences containing tokens of the same type in English, asking how often they mean the same thing when they are: 1. aligned to the same foreign type; and 2. aligned to different foreign types. Results for French-English and Chinese-English parallel corpora show similar behavior: Synonymy is only very weakly the more prevalent scenario, where both cases regularly occur.",
"This paper describes our system for Task 11 of SemEval-2013. In the task, participants are provided with a set of ambiguous search queries and the snippets returned by a search engine, and are asked to associate senses with the snippets. The snippets are then clustered using the sense assignments and systems are evaluated based on the quality of the snippet clusters. Our system adopts a preexisting Word Sense Induction (WSI) methodology based on Hierarchical Dirichlet Process (HDP), a non-parametric topic model. Our system is trained over extracts from the full text of English Wikipedia, and is shown to perform well in the shared task.",
"We develop a supersense taxonomy for adjectives, based on that of GermaNet, and apply it to English adjectives in WordNet using human annotation and supervised classification. Results show that accuracy for automatic adjective type classification is high, but synsets are considerably more difficult to classify, even for trained human annotators. We release the manually annotated data, the classifier, and the induced supersense labeling of 12,304 WordNet adjective synsets.",
""
],
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_27",
"@cite_20",
"@cite_12",
"@cite_17"
],
"mid": [
"2117805747",
"99559414",
"2128810992",
"2139748995",
"265224334",
""
]
}
|
AutoSense Model for Word Sense Induction
|
Word sense induction (WSI) is the task where, given an ambiguous target word (e.g. cold) and texts where the word is used, we automatically discover its multiple senses or meanings (e.g. (1) nose infection, (2) absence of heat, etc.). We show examples of words with multiple senses and example usage in a text in Figure 1. (All sense meanings are copied from WordNet: http://wordnetweb.princeton.edu/perl/webwn.) It is distinct from its similar supervised counterpart, word sense disambiguation (WSD) (Stevenson and Wilks 2003), because WSI models should consider the following challenges due to its unsupervised nature: (C1) adaptability to new domains, (C2) ability to detect novel senses, and (C3) flexibility to different word sense granularities (Jurgens and Klapaftis 2013).
Figure 1: Senses of play and senses of cold, with example usage in a text.
Another task similar to WSI is the unsupervised author name disambiguation (UAND) task (Song et al. 2007), which aims to automatically find different authors, instead of words, with the same name.
In this paper, we consider a latent variable modeling approach to the WSI problem as it is proven to be more effective than other approaches (Chang, Pei, and Chen 2014; Komninos and Manandhar 2016). Specifically, we look into methods based on Latent Dirichlet Allocation (LDA) (Blei, Ng, and Jordan 2003), a topic modeling method that automatically discovers the topics underlying a set of documents using Dirichlet priors to infer the multinomial distribution over words and topics. LDA naturally answers two of the three main problems mentioned above, i.e. (C1) and (C2), of the WSI task (Brody and Lapata 2009). However, it is not flexible with regard to (C3), or the sense granularity problem, as it requires the users to specify the number of senses: Current systems (Wang et al. 2015; Chang, Pei, and Chen 2014) need to set the number of senses to a small number (set to 3 or 5 in the literature) to get a good accuracy; however, many words may have a large number of senses, e.g. play in Figure 1.
Figure 2: Example induced senses when the target word is cold from LDA and AutoSense. Applying our observations to LDA introduces both garbage and fine-grained senses.
To this end, we propose a latent variable model called AutoSense that solves all the challenges of WSI, including overcoming the sense granularity problem. Consider Figure 2 on finding the senses of the target word cold. An LDA model naively considers the topics as senses and thus differentiates the usage of cold in the medical and science domains, even though the same sense is commonly used in the two domains. This results in too many senses induced by the model. We extend LDA using two observations. First, we introduce a separate latent variable for senses, which can be represented as a distribution over topics. This introduces more accurate induced senses (e.g. the cold: nose infection sense can be from a mixture of medical, science, and temperature topics), as well as garbage senses (colored red in the figure) as most topic distributions will not be assigned to any instance. Second, we enforce senses to generate target-neighbor pairs, a pair (w t , w) which consists of the target word w t and one of its neighboring word w, at once. This separates the topic distributions into fine-grained senses based on lexical semantic features easily captured by the target-neighbor pairs. For example, the cold: absence of heat and the cold: sensation from low temperature senses are both related to temperature, but have different syntactic and semantic usage.
By applying the two observations above, AutoSense removes the strict requirement on correctly setting the number of senses by throwing garbage senses and introducing fine-grained senses. Nonparametric models (Teh et al. 2004; Chang, Pei, and Chen 2014) have also been used to solve this problem by automatically inducing the number of senses; however, our experiments show that these models are less effective than parametric models and induce incorrect numbers of senses. Our proposed model is parametric, and is also able to adapt to the different number of senses of different words, even when the number of senses is set to an arbitrarily large number. Moreover, the model can also be used in other tasks such as UAND where the variance in the number of senses is large. To the best of our knowledge, we are the first to experiment extensively on the sense granularity problem of parametric latent variable models.
In our experiments, we estimate the parameters of the model using collapsed Gibbs sampling and get the sense distribution of each instance as the WSI solution. We evaluate our model using the SemEval 2010 and 2013 WSI datasets (Manandhar et al. 2010;Jurgens and Klapaftis 2013). Results show that AutoSense performs superior than previous state-of-the-art models. We also provide analyses and experiments that shows how AutoSense overcomes the issue on sense granularity. Finally, we show that our model performs the best on unsupervised author name disambiguation (UAND), where the sense granularities are extremely varied.
Proposed Model
There are two reasons why Latent Dirichlet Allocation (LDA) (Blei, Ng, and Jordan 2003) is not effective for WSI. First, LDA tries to give instance assignments to all senses even when it is unnecessary. For example, when the number of senses S is set to 10, the model tries to assign all the senses to all instances even when the original number of senses of a target word is 3. LDA extensions (Wang et al. 2015; Chang, Pei, and Chen 2014) mitigated this problem by setting S to a small number (e.g. 3 or 5). However, this is not a good solution because there are many words with more than five senses. Second, LDA and its extensions do not consider the existence of fine-grained senses. For example, the cold: absence of heat and the cold: sensation from low temperature senses are fine-grained senses because they are similarly related to temperature yet have different usage.
AutoSense Model
To solve the problems above, we propose to extend LDA in two parts. First, we introduce a new latent variable, apart from the topic latent variable, to represent word senses. Previous works also attempted to introduce a separate sense latent variable to generate all the words (Chang, Pei, and Chen 2014), or to generate only the neighboring words within a local context, decided by a strict user-specified window (Wang et al. 2015). We improve by softening the strict local context assumption by introducing a switch variable which decides whether a word not in a local context should be generated by conditioning also on the sense latent variable. Our experiments show that our sense representation provides superior improvements from previous models. Second, we force the model to generate target-neighbor pairs at once in the local context, instead of generating words one by one. A target-neighbor pair (w_t, w) consists of the target word w_t and a neighboring word w in the local context. For example, the target-neighbor pairs in "cold snowy weather", where w_t is cold, are (cold, snowy) and (cold, weather). These pairs give explicit information on the lexical semantics of the target word given the neighboring words. In our running example (Figure 2), the cold: absence of heat and the cold: sensation from low temperature senses can be easily differentiated when we are given the target-neighbor pairs (cold, weather) and (cold, climate) for the former, and (cold, water) and (cold, fresh) for the latter sense, rather than the individual words. These extensions bring us to our proposed model called AutoSense. The graphical representation of AutoSense is shown in Figure 3, while the meaning of the notations used in this paper is shown in Table 1.
Table 1: Meanings of the notations in AutoSense
D: # of documents
L: # of local context words
M: # of global context words
S: # of senses
T: # of topics
V: vocabulary size
w_t: target word
w_l, w_m: word in local/global context
s_l, s_m: sense in local/global context
t_l, t_m: topic in local/global context
x: sense/topic switch
θ_s, θ_t, θ_x: multinomial distribution over senses/topics/switches
θ_{s|t}, θ_{t|s}: multinomial distribution over senses/topics given topics/senses
θ_{st}: multinomial distribution over sense & topic pairs
φ^(s), φ^(t): multinomial distribution over words
α: Dirichlet prior over the θ's, except θ_x
β: Dirichlet prior over the φ's
γ: Dirichlet prior over θ_x
Generative process For each instance, we divide the text into two contexts: the local context L which includes the target word w t and its neighboring words w l , and the global context M which contains the other remaining words w m . Words from different contexts are generated separately.
In the global context M , words w m are generated from either a sense s or a topic t latent variable. The selection is done by a switch variable x. If x = 1, then the word generation is done by using the sense variable s. Otherwise, it is done by using the topic variable t. The probability of a global context word w m in document d is given below.
P(w_m | d) = P(x = 1) Σ_s P(w_m | s) P(s | d) + P(x = 2) Σ_t P(w_m | t) P(t | d) = θ_{x=1} Σ_s θ^{(d)}_s φ^{(s)}_{w_m} + θ_{x=2} Σ_t θ^{(d)}_t φ^{(t)}_{w_m}
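As an illustration (our own sketch, not the authors' code; array shapes are assumed), this switch-weighted mixture can be evaluated directly from the multinomial parameters:

import numpy as np

# Assumed shapes: theta_x (2,), theta_s_d (S,), theta_t_d (T,), phi_s (S, V), phi_t (T, V).
def p_global_word(w, theta_x, theta_s_d, theta_t_d, phi_s, phi_t):
    sense_part = theta_x[0] * np.dot(theta_s_d, phi_s[:, w])   # sum_s P(w|s) P(s|d)
    topic_part = theta_x[1] * np.dot(theta_t_d, phi_t[:, w])   # sum_t P(w|t) P(t|d)
    return sense_part + topic_part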
In the local context L, words w l are generated from both sense s and topic t variables. Also, the target word w t is generated along with w l as target-neighbor pairs (w t , w l ) using the sense variable s. Sense and topic variables are dependent to each other, so we generate them using the joint probability p(s, t|d). We factorize p(s, t|d) approximately using ideas from dependency networks (Heckerman et al. 2000) to avoid independency assumptions, i.e. p(a, b|c) = p(a|b, c)p(b|a, c), and deficient modeling (Brown et al. 1993) to ignore redundancies, i.e. p(a|b, c)p(b|a, c) = p(a|b)p(a|c)p(b|a)p(b|c)p(a, b). The probability of a local context word w l in document d given below.
P(w_t, w_l | d) = Σ_s Σ_t p(w_t | s) p(w_l | s, t) p(s, t | d)
≈ Σ_s Σ_t p(w_t | s) p(w_l | s, t) p(s | d, t) p(t | d, s)
≈ Σ_s Σ_t p(w_t | s) p(w_l | s) p(w_l | t) p(s | d) p(s | t) p(t | d) p(t | s) p(s, t)
= Σ_s Σ_t φ^{(s)}_{w_t} φ^{(s)}_{w_l} φ^{(t)}_{w_l} θ^{(d)}_s θ^{(d)}_t θ_{s|t} θ_{t|s} θ_{st}
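Similarly, a small sketch (ours; array shapes are assumed) of evaluating the factorized joint above:

# Assumed shapes: phi_s (S,V), phi_t (T,V), theta_s_d (S,), theta_t_d (T,),
# theta_s_given_t (T,S), theta_t_given_s (S,T), theta_st (S,T).
def p_pair(wt, wl, phi_s, phi_t, theta_s_d, theta_t_d,
           theta_s_given_t, theta_t_given_s, theta_st):
    S, T = theta_st.shape
    total = 0.0
    for s in range(S):
        for t in range(T):
            total += (phi_s[s, wt] * phi_s[s, wl] * phi_t[t, wl]
                      * theta_s_d[s] * theta_t_d[t]
                      * theta_s_given_t[t, s] * theta_t_given_s[s, t]
                      * theta_st[s, t])
    return total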
Inference We use collapsed Gibbs sampling (Griffiths and Steyvers 2004) to estimate the latent variables. At each transition step of the Markov chain, for each word w m in the global context, we draw the switch x ∼ {1, 2}, and the sense s = k or the topic t = j variables using the conditional probabilities given below. The variable C AB ab represents the number of a ∈ A and b ∈ B assignments, excluding the current word. The rest corresponds to the other remaining variables, such as the instance d, the current word w m , the θ and φ distributions, and the α, β, and γ Dirichlet priors.
P(x = 1, s = k | rest) = (C^{DX}_{d1} + γ) / (Σ_{x'=1}^{2} C^{DX}_{dx'} + 2γ) · (C^{DS}_{dk} + α) / (Σ_{k'=1}^{S} C^{DS}_{dk'} + Sα) · (C^{SW}_{k,w_m} + β) / (Σ_{w'=1}^{V} C^{SW}_{k,w'} + V_m β)
P(x = 2, t = j | rest) = (C^{DX}_{d2} + γ) / (Σ_{x'=1}^{2} C^{DX}_{dx'} + 2γ) · (C^{DT}_{dj} + α) / (Σ_{j'=1}^{T} C^{DT}_{dj'} + Tα) · (C^{TW}_{j,w_m} + β) / (Σ_{w'=1}^{V} C^{TW}_{j,w'} + V_m β)
Subsequently, for each word w_l and the target word w_t (forming the target-neighbor pair (w_t, w_l)) in the local context, we draw the sense s = k and the topic t = j variables using the conditional probability given below.
P(t_i = j, s_i = k | rest) = (C^{DT}_{dj} + α) / (Σ_{j'=1}^{T} C^{DT}_{dj'} + Tα) · (C^{DS}_{dk} + α) / (Σ_{k'=1}^{S} C^{DS}_{dk'} + Sα) · (C^{TW}_{j,w_l} + β) / (Σ_{w'=1}^{V} C^{TW}_{j,w'} + V_l β) · (C^{SW}_{k,w_l} + β) / (Σ_{w'=1}^{V} C^{SW}_{k,w'} + V_l β) · (C^{SW}_{k,w_t} + β) / (Σ_{w'=1}^{V} C^{SW}_{k,w'} + V_l β + 1) · (C^{ST}_{kj} + α) / (Σ_{j'=1}^{T} C^{ST}_{kj'} + Tα) · (C^{TS}_{jk} + α) / (Σ_{k'=1}^{S} C^{TS}_{jk'} + Sα) · (C^{ST}_{kj} + α) / (Σ_{k'=1}^{S} Σ_{j'=1}^{T} C^{ST}_{k'j'} + STα)
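A minimal sketch (ours, simplified: it covers only the global-context update, uses the full vocabulary size V in place of V_m, and assumes the current token's assignment has already been decremented from the counts) of how the first two conditionals become one collapsed Gibbs update:

import numpy as np

# Count matrices: C_DX (D,2), C_DS (D,S), C_DT (D,T), C_SW (S,V), C_TW (T,V).
def sample_global(d, w, C_DX, C_DS, C_DT, C_SW, C_TW, alpha, beta, gamma, rng):
    S, V = C_SW.shape
    T = C_TW.shape[0]
    p_sense = ((C_DX[d, 0] + gamma) / (C_DX[d].sum() + 2 * gamma)
               * (C_DS[d] + alpha) / (C_DS[d].sum() + S * alpha)
               * (C_SW[:, w] + beta) / (C_SW.sum(axis=1) + V * beta))
    p_topic = ((C_DX[d, 1] + gamma) / (C_DX[d].sum() + 2 * gamma)
               * (C_DT[d] + alpha) / (C_DT[d].sum() + T * alpha)
               * (C_TW[:, w] + beta) / (C_TW.sum(axis=1) + V * beta))
    probs = np.concatenate([p_sense, p_topic])
    z = rng.choice(S + T, p=probs / probs.sum())
    if z < S:                       # x = 1: word generated by sense z
        C_DX[d, 0] += 1; C_DS[d, z] += 1; C_SW[z, w] += 1
    else:                           # x = 2: word generated by topic z - S
        C_DX[d, 1] += 1; C_DT[d, z - S] += 1; C_TW[z - S, w] += 1
    return z

A full sampler would run this update for every global-context token of every document in each sweep, alternating with the pair update of the local context.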
Word sense induction After inference is done, the approximate probability of the sense s of the target word in a given document d is induced using the sense distribution of the document as shown in Equation (1) below, where C^{AB}_{ab} represents the number of a ∈ A and b ∈ B assignments. We also calculate the word distribution of each sense using Equation (2) below to inspect the meaning of each sense.
θ_{s|d} = C^{DS}_{ds} / Σ_{s'=1}^{S} C^{DS}_{ds'} (1)
θ_{w|s} = C^{SW}_{sw} / Σ_{w'=1}^{V} C^{SW}_{sw'} (2)
For preprocessing, we do tokenization, lemmatization, and removing of symbols to build the word lists using Stanford CoreNLP (Manning et al. 2014). We divide the word lists into two contexts: the local and global context. Following (Wang et al. 2015), we set the local context window to 10, with a maximum number of words of 21 (i.e. 10 words before and 10 words after). Other words are put into the global context. Note however that AutoSense has a less strict global/local context assumption as it treats some words in the global context as local depending on the switch variable.
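A small sketch (ours, with assumed numpy array shapes) of the induction step in Equations (1) and (2), reading the distributions off the final count matrices:

import numpy as np

# C_DS has shape (D, S), C_SW has shape (S, V); vocab maps word ids to strings.
def induce_senses(C_DS, C_SW, vocab, topn=5):
    theta_s_d = C_DS / np.maximum(C_DS.sum(axis=1, keepdims=True), 1e-12)   # theta_{s|d}
    theta_w_s = C_SW / np.maximum(C_SW.sum(axis=1, keepdims=True), 1e-12)   # theta_{w|s}
    sense_of_doc = theta_s_d.argmax(axis=1)            # hard WSI label per instance
    top_words = [[vocab[i] for i in np.argsort(-theta_w_s[s])[:topn]]
                 for s in range(C_SW.shape[0])]
    return theta_s_d, sense_of_doc, top_words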
Parameter setting We set the hyperparameters to α = 0.1, β = 0.01, γ = 0.3, following the conventional setup (Griffiths and Steyvers 2004;Chemudugunta, Smyth, and Steyvers 2006). We arbitrarily set the number of senses to S = 15, and the number of topics T = 2S = 30, following (Wang et al. 2015). We also include four other versions of our model: AutoSense −wp removes the target-neighbor pair constraint and transforms the local context to that of STM, AutoSense −sw removes the switch variable and transforms the global context to that of LDA, AutoSense s=X is a tuned and best version of the model, where the number of senses is tuned over a separate development set provided by the shared tasks and X is the tuned number of sense, different for each dataset, and AutoSense s=100 is the overestimated and worst version of the model, where we set the number of senses to an arbitrary large number, i.e. 100.
We set the number of iterations to 2000 and run the Gibbs sampler. Following the convention of previous works (Lau et al. 2012;Goyal and Hovy 2014;Wang et al. 2015), we assume convergence when the number of iterations is high. However, due to the randomized nature of Gibbs sampling, we report the average scores over 5 runs of Gibbs sampling. We then use the distribution θ s|d as shown in Equation 1 as the solution of the WSI problem.
Experiments Word sense induction
SemEval 2010 For the SemEval 2010 dataset, we compare models using two unsupervised metrics: V-measure (V-M) and paired F-score (F-S). V-M favors a high number of senses (e.g. assigning one cluster per instance), while F-S favors a small number of senses (e.g. all instances in one cluster) (Manandhar et al. 2010). In order to get a common ground for comparison, we do a geometric average AVG of both metrics, following (Wang et al. 2015). Finally, we also report the absolute difference between the actual (3.85) and induced number of senses as δ(#S).
We compare with seven other models: a) LDA on cooccurrence graphs (LDA) and b) spectral clustering on cooccurrence graphs (Spectral) as reported in (Goyal and Hovy 2014), c) Hidden Concept (HC), d) HC using Zipf's law (HC+Zipf), and e) Bayesian nonparametric version of HC (BNP-HC) as reported in (Chang, Pei, and Chen 2014), f) CRP-based sense embeddings with positive PMI vectors as pre-trained vectors (CRP-PPMI), and g) Multi-Sense Skip-gram Model (SE-WSI-fix) as reported in (Song 2016). Results are shown in Table 2a, where AutoSense outperforms other competing models on AVG. Among the AutoSense models, the AutoSense −wp and AutoSense −sw versions perform the worst, emphasizing the necessity of the target-neighbor pairs and the switch variable. The overestimated AutoSense s=100 performs better than previously proposed models, proving the robustness of our model on the different word sense granularities. On the δ(#S) metric, the untuned AutoSense and AutoSense s=5 perform the best. The V-M metric needs to be interpreted carefully, because it can easily be maximized by separating all instances into different sense clusters and thus overestimating the actual number of senses #S and decreasing the F-S metric. The model BNP-HC is an example of such: Though its V-M metric is the highest, it scores the lowest on the F-S metric and greatly overestimates #S, thus having a very high δ(#S). The goal is thus a good balance of V-M and F-S (i.e. highest AVG), and a close estimation of #S (i.e. lowest δ(#S)), which is successfully achieved by our models.
SemEval 2013 Two metrics are used for the SemEval 2013 dataset: fuzzy B-cubed (F-BC) and fuzzy normalized mutual information (F-NMI). F-BC gives preference to labelling all instances with the same sense, while F-NMI gives preference to labelling all instances with distinct senses. Therefore, computing the AVG of both metrics is also necessary in this experiment, for ease of comparison, as also suggested in (Wang et al. 2015).
We use seven baselines: a) lexical substitution method (AI-KU) and b) nonparametric HDP model (Unimelb) as reported in (Jurgens and Klapaftis 2013). Results are shown in Table 2b. Among the models, all versions of AutoSense perform better than other models on AVG. The untuned AutoSense and AutoSense s=7 especially garner a noticeable increase of 6.1% on the fuzzy B-cubed metric from MCC, the previous best model. We also notice a big 6.0% decrease on the fuzzy B-cubed of AutoSense when the target-neighbor pair context is removed. This means that introducing the target-neighbor pair is crucial to the improvement of the model. Finally, the overestimated AutoSense model performs as well as the other AutoSense models, even outperforming all previous models on AVG, which proves the effectiveness of AutoSense even when s is set to a large value.
For completeness, we also report STM with additional contexts, STM+actual and STM+ukWac (Wang et al. 2015), where they used the actual additional contexts from the original data and semantically similar contexts from ukWac, respectively, as additional global context. With the performance gain we achieved, AutoSense without additional context can perform comparably to models with additional contexts: Our model greatly outperforms these models on the F-BC metric by at least 2%. Also, considering that both AutoSense and STM are LDA-based models, the same data enhancements can straightforwardly be applied when the needs arise. We similarly apply the actual additional contexts to AutoSense and find that we achieve state-of-the-art performance on AVG.
Table 3: Six of the 15 senses of the target verb book using AutoSense with S = 15. The word lists shown are preprocessed to remove stopwords and the target word. The first three senses are senses which are assigned at least once to an instance document. The last three are garbage senses.
Sense | Word distribution | #Docs
1 | hotel tour tourist summer flight | 22
2 | month ticket available performance | 3
3 | guest office stateroom class suite | 3
* | advance overseas line popular japan | 0
* | email day buy unable tour | 0
* | sort basic tour time | 0
Sense granularity problem
The main weakness of LDA when used on the WSI task is the sense granularity problem. Recent models such as HC (Chang, Pei, and Chen 2014) and STM (Wang et al. 2015) mitigated this problem by tuning the number of senses hyperparameter S to minimize the error. However, such tuning, often empirically set to a small number such as S = 3 (Wang et al. 2015), fails to infer varying numbers of senses of words, especially for words with a higher number of senses. Nonparametric models such as HDP and BNP-HC (Teh et al. 2004; Chang, Pei, and Chen 2014) claim to automatically induce a different S for each word. However, as shown in the results in Table 2, the estimated S is far from the actual number of senses and both models are ineffective.
On the other hand, Table 2 also shows that AutoSense is effective even when S is overestimated. We explain why through an example result shown in Table 3, where the target word is the verb book, the actual number of senses is three, and S is set to 15. First, we see that there are senses which are not assigned to any instance document, signified by * , which we call garbage senses. We notice that effectively representing a new latent variable for sense as a distribution over topics forces the model to throw garbage senses. Second, while it is easy to distinguish the third sense (i.e., book: register in a booker) to the two other senses, the first and second senses both refer to planning or arranging for an event in advance. Incorporating the target-neighbor pairs helps the model differentiates both into fine-grained senses book: arrange for and reserve in advance and book: engage for a performance.
We compare the competing models quantitatively on how they correctly detect the actual number of sense clusters using cluster error, which is the mean absolute error between the detected number and the actual number of sense clusters. We compare the cluster errors of LDA (Blei, Ng, and Jordan 2003), STM (Wang et al. 2015), HC (Chang, Pei, and Chen 2014), and a nonparametric model HDP (Teh et al. 2004), with AutoSense. We report the results in Figure 4. Results show that the cluster error of LDA increases sharply as the number of senses exceeds the actual mean number of senses. HC and STM also throw garbage senses since they also introduce in some way a new sense variable, however the cluster errors of both models still increase when S is set beyond the maximum number of senses. We argue that this is because first, the sense representation is not optimal as they assume strict local/global context assumption, and second and most importantly, the models do not produce fine-grained senses. AutoSense does both garbage sense throwing and fine-grained sense induction, which helps in the detection of the actual word granularity. Finally, the cluster error of AutoSense is always better than that of HDP. This shows that AutoSense, despite being a parametric model, automatically detects the number of sense clusters without parameter tuning and is more accurate than the automatic detection of nonparametric models.
Unsupervised author name disambiguation
Unsupervised author name disambiguation (UAND) is a task very similar to the WSI task, where ambiguous author names are the target words. However, one additional challenge of UAND is that there can be as many as 100 authors with the same name, whereas words can have at most 20 different senses, at least in our datasets, as shown in the dataset statistics in Table 4. Moreover, the standard deviations of the author name disambiguation datasets are also higher, which means that there is more variation on the number of senses per target author name. Thus, in this task, the sense granularity problem is more difficult and needs to be addressed properly. Current state-of-the-art models use non-text features such as publication venue and citations (Tang et al. 2012). We argue that text features also provide informative clues to disambiguate author names. In this experiment, we make use of text features such as the title and abstract of research papers as data instance of the task. In addition, we also include in our dataset author names and the publication venue as pseudo-words. In this way, we can reformulate the UAND task as a WSI task, and exploit text features not used in current techniques.
Table 4: Statistics of the number of senses of target words/names in the datasets used in the paper.
Experimental setup We use two publicly available datasets for the UAND task: Arnet (https://aminer.org/disambiguation) and PubMed (https://github.com/Yonsei-TSMM/author_name_disambiguation). The Arnet dataset contains 100 ambiguous author names and a total of 7528 papers as data instance. Each instance includes the title, author list, and publication venue of a research paper authored by the given author name. In addition, we also manually extract the abstracts of the research papers for additional context. The PubMed dataset contains 37 author names with a total of 2875 research papers as instances. It includes the PubMed ID of the papers authored by the given author name. We extract the title, author list, publication venue, and abstract of each PubMed ID from the PubMed website.
We use LDA (Blei, Ng, and Jordan 2003), HC (Chang, Pei, and Chen 2014) and STM (Wang et al. 2015) as baselines. We do not compare with non-text feature-based models (Tang et al. 2012; Cen et al. 2013) because our goal is to compare sense topic models on a task where the sense granularities are more varied. For STM and AutoSense, the title, publication venue and the author names are used as local contexts while the abstract is used as the global context. This decision is based on conclusions from previous works (Tang et al. 2012) that the title, publication venue, and the author names are more informative than the abstract when disambiguating author names. We use the same parameters as used above, and we set S to 5, 25, 50, and 100 to compare the performances of the models as the number of senses increases.
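As an illustration of this reformulation, the following sketch (our own; the record field names title, authors, venue, and abstract are assumptions, not the datasets' actual schema) builds the local and global contexts of one pseudo-document:

def make_pseudo_document(paper, target_author):
    authors = ["AUTHOR=" + a.replace(" ", "_") for a in paper["authors"]]
    venue = ["VENUE=" + paper["venue"].replace(" ", "_")]
    # informative fields (title, authors, venue) go to the local context
    local = [target_author.replace(" ", "_")] + paper["title"].lower().split() + authors + venue
    # the abstract serves as the global context
    global_context = paper["abstract"].lower().split()
    return local, global_context

paper = {"title": "Latent variable models for text",
         "authors": ["J. Smith", "A. Lee"], "venue": "AAAI",
         "abstract": "We study topic models for documents."}
print(make_pseudo_document(paper, "J. Smith"))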
Results For evaluation, we use the pairwise F1 measure to compare the performance of competing models, following (Tang et al. 2012). Results are shown in Figure 5. AutoSense performs the best on almost all settings, except on the PubMed dataset and when S = 5, where it garners a comparable result with STM. However, in the case where S is set close to the maximum number of senses in the dataset (i.e. 28 in PubMed and 112 in Arnet), AutoSense performs the best among the models. LDA and HC perform badly on all settings and greatly decrease their performances when S becomes high. STM also shows a decrease in performance on the PubMed dataset when S = 100. This is because the PubMed dataset has a lower maximum number of senses and STM is sensitive to the setting of S, which hurts its robustness to different sense granularities.
Conclusion
We proposed a solution to answer the sense granularity problem, one of the major challenges of the WSI task. We introduced AutoSense, a latent variable model that not only throws away garbage senses, but also induces fine-grained senses. We showed that AutoSense greatly outperforms the current state-of-the-art models on both the SemEval 2010 and 2013 WSI datasets. We also showed experiments on how AutoSense is able to overcome the sense granularity problem, a well-known flaw of latent variable models. We further applied our model to the UAND task, a similar task but with a more varying number of senses, and showed that AutoSense performs the best among latent variable models, proving its robustness to different sense granularities.
| 4,664 |
1811.09242
|
2901560048
|
Word sense induction (WSI), or the task of automatically discovering multiple senses or meanings of a word, has three main challenges: domain adaptability, novel sense detection, and sense granularity flexibility. While current latent variable models are known to solve the first two challenges, they are not flexible to different word sense granularities, which differ very much among words, from aardvark with one sense, to play with over 50 senses. Current models either require hyperparameter tuning or nonparametric induction of the number of senses, which we find both to be ineffective. Thus, we aim to eliminate these requirements and solve the sense granularity problem by proposing AutoSense, a latent variable model based on two observations: (1) senses are represented as a distribution over topics, and (2) senses generate pairings between the target word and its neighboring word. These observations alleviate the problem by (a) throwing garbage senses and (b) additionally inducing fine-grained word senses. Results show great improvements over the state-of-the-art models on popular WSI datasets. We also show that AutoSense is able to learn the appropriate sense granularity of a word. Finally, we apply AutoSense to the unsupervised author name disambiguation task where the sense granularity problem is more evident and show that AutoSense is evidently better than competing models. We share our data and code here: this https URL.
|
Latent variable models such as LDA @cite_5 are used to induce the word sense of a target word after rigorous preprocessing and feature extraction @cite_1 . More recent models introduced a latent variable for the sense of a word, with the assumption that a sense has multiple concepts @cite_0 and that topics and senses should be inferred jointly @cite_16 . In this paper, we also use a separate sense latent variable, however we show a boost in performance by representing it with more versatility and by incorporating the use of target-neighbor pairs. HC was also extended to a nonparametric model @cite_21 in order to automatically set the number of senses of a word, providing flexibility to the sense granularity @cite_29 @cite_6 @cite_20 . In our experiments, we show that the sense granularity induced by nonparametric models is incorrect, making the models less effective.
|
{
"abstract": [
"",
"We propose the hierarchical Dirichlet process (HDP), a nonparametric Bayesian model for clustering problems involving multiple groups of data. Each group of data is modeled with a mixture, with the number of components being open-ended and inferred automatically by the model. Further, components can be shared across groups, allowing dependencies across groups to be modeled effectively as well as conferring generalization to new groups. Such grouped clustering problems occur often in practice, e.g. in the problem of topic discovery in document corpora. We report experimental results on three text corpora showing the effective and superior performance of the HDP over previous models.",
"Word sense induction is an unsupervised task to find and characterize different senses of polysemous words. This work investigates two unsupervised approaches that focus on using distributional word statistics to cluster the contextual information of the target words using two different algorithms involving latent dirichlet allocation and spectral clustering. Using a large corpus for achieving this task, we quantitatively analyze our clusters on the Semeval-2010 dataset and also perform a qualitative analysis of our induced senses. Our results indicate that our methods successfully characterized the senses of the target words and were also able to find unconventional senses for those words.",
"We apply topic modelling to automatically induce word senses of a target word, and demonstrate that our word sense induction method can be used to automatically detect words with emergent novel senses, as well as token occurrences of those senses. We start by exploring the utility of standard topic models for word sense induction (WSI), with a pre-determined number of topics (=senses). We next demonstrate that a non-parametric formulation that learns an appropriate number of senses per word actually performs better at the WSI task. We go on to establish state-of-the-art results over two WSI datasets, and apply the proposed model to a novel sense detection task.",
"Word Sense Induction (WSI) aims to automatically induce meanings of a polysemous word from unlabeled corpora. In this paper, we first propose a novel Bayesian parametric model to WSI. Unlike previous work, our research introduces a layer of hidden concepts and view senses as mixtures of concepts. We believe that concepts generalize the contexts, allowing the model to measure the sense similarity at a more general level. The Zipf’s law of meaning is used as a way of pre-setting the sense number for the parametric model. We further extend the parametric model to non-parametric model which not only simplifies the problem of model selection but also brings improved performance. We test our model on the benchmark datasets released by Semeval-2010 and Semeval-2007. The test results show that our model outperforms state-of-theart systems.",
"We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.",
"",
"This paper describes our system for Task 11 of SemEval-2013. In the task, participants are provided with a set of ambiguous search queries and the snippets returned by a search engine, and are asked to associate senses with the snippets. The snippets are then clustered using the sense assignments and systems are evaluated based on the quality of the snippet clusters. Our system adopts a preexisting Word Sense Induction (WSI) methodology based on Hierarchical Dirichlet Process (HDP), a non-parametric topic model. Our system is trained over extracts from the full text of English Wikipedia, and is shown to perform well in the shared task."
],
"cite_N": [
"@cite_29",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_5",
"@cite_16",
"@cite_20"
],
"mid": [
"",
"2100163972",
"2251010291",
"1897587124",
"2251188465",
"1880262756",
"",
"2139748995"
]
}
|
AutoSense Model for Word Sense Induction
|
Word sense induction (WSI) is the task where, given an ambiguous target word (e.g. cold) and texts where the word is used, we automatically discover its multiple senses or meanings (e.g. (1) nose infection, (2) absence of heat, etc.). We show examples of words with multiple senses and example usage in text in Figure 1 (panel titles: Senses of play; Senses of cold); all sense meanings are copied from WordNet: http://wordnetweb.princeton.edu/perl/webwn. WSI is distinct from its supervised counterpart, word sense disambiguation (WSD) (Stevenson and Wilks 2003), because WSI models must address the following challenges due to their unsupervised nature: (C1) adaptability to new domains, (C2) ability to detect novel senses, and (C3) flexibility to different word sense granularities (Jurgens and Klapaftis 2013). Another task similar to WSI is unsupervised author name disambiguation (UAND) (Song et al. 2007), which aims to automatically find different authors, instead of words, with the same name.
In this paper, we consider a latent variable modeling approach to the WSI problem as it has proven to be more effective than other approaches (Chang, Pei, and Chen 2014; Komninos and Manandhar 2016). Specifically, we look into methods based on Latent Dirichlet Allocation (LDA) (Blei, Ng, and Jordan 2003), a topic modeling method that automatically discovers the topics underlying a set of documents using Dirichlet priors to infer the multinomial distributions over words and topics. LDA naturally answers two of the three main challenges of the WSI task mentioned above, i.e. (C1) and (C2) (Brody and Lapata 2009). However, it is not flexible with regard to (C3), the sense granularity problem, as it requires the user to specify the number of senses: current systems (Wang et al. 2015; Chang, Pei, and Chen 2014) need to set the number of senses to a small value (3 or 5 in the literature) to obtain good accuracy, yet many words have a large number of senses, e.g. play in Figure 1.
Figure 2: Example induced senses when the target word is cold from LDA and AutoSense. Applying our observations to LDA introduces both garbage and fine-grained senses. (The figure contrasts LDA, which treats topics such as medical, temperature, science, and weather directly as senses, with AutoSense, which represents senses as mixtures of topics together with target-neighbor pairs such as (cold, common), (cold, sick), and (cold, sneeze).)
To this end, we propose a latent variable model called AutoSense that solves all the challenges of WSI, including the sense granularity problem. Consider Figure 2 on finding the senses of the target word cold. An LDA model naively considers the topics as senses and thus differentiates the usage of cold in the medical and science domains, even though the same sense is commonly used in the two domains. This results in too many senses induced by the model. We extend LDA using two observations. First, we introduce a separate latent variable for senses, which can be represented as a distribution over topics. This introduces more accurate induced senses (e.g. the cold: nose infection sense can come from a mixture of the medical, science, and temperature topics), as well as garbage senses (colored red in the figure), since most topic distributions will not be assigned to any instance. Second, we enforce senses to generate target-neighbor pairs, where a pair (w_t, w) consists of the target word w_t and one of its neighboring words w, generated at once. This separates the topic distributions into fine-grained senses based on lexical semantic features easily captured by the target-neighbor pairs. For example, the cold: absence of heat and the cold: sensation from low temperature senses are both related to temperature, but have different syntactic and semantic usage.
By applying the two observations above, AutoSense removes the strict requirement of correctly setting the number of senses by throwing away garbage senses and introducing fine-grained senses. Nonparametric models (Teh et al. 2004; Chang, Pei, and Chen 2014) have also been used to solve this problem by automatically inducing the number of senses; however, our experiments show that these models are less effective than parametric models and induce an incorrect number of senses. Our proposed model is parametric, yet it is able to adapt to the different numbers of senses of different words, even when the number of senses is set to an arbitrarily large number. Moreover, the model can also be used in other tasks such as UAND where the variance in the number of senses is large. To the best of our knowledge, we are the first to experiment extensively on the sense granularity problem of parametric latent variable models.
In our experiments, we estimate the parameters of the model using collapsed Gibbs sampling and take the sense distribution of each instance as the WSI solution. We evaluate our model on the SemEval 2010 and 2013 WSI datasets (Manandhar et al. 2010; Jurgens and Klapaftis 2013). Results show that AutoSense performs better than previous state-of-the-art models. We also provide analyses and experiments that show how AutoSense overcomes the sense granularity issue. Finally, we show that our model performs the best on unsupervised author name disambiguation (UAND), where the sense granularities are extremely varied.
Proposed Model
There are two reasons why Latent Dirichlet Allocation (LDA) (Blei, Ng, and Jordan 2003) is not effective for WSI. First, LDA tries to give instance assignments to all senses even when it is unnecessary. For example, when the number of senses S is set to 10, the model tries to assign all the senses to all instances even when the original number of senses of a target word is 3. LDA extensions (Wang et al. 2015; Chang, Pei, and Chen 2014) mitigated this problem by setting S to a small number (e.g. 3 or 5). However, this is not a good solution because there are many words with more than five senses. Second, LDA and its extensions do not consider the existence of fine-grained senses. For example, the cold: absence of heat and the cold: sensation from low temperature senses are fine-grained senses because they are similarly related to temperature yet have different usage.
AutoSense Model
To solve the problems above, we propose to extend LDA in two parts. First, we introduce a new latent variable, apart from the topic latent variable, to represent word senses. Previous works also attempted to introduce a separate sense latent variable to generate all the words (Chang, Pei, and Chen 2014), or to generate only the neighboring words within a local context decided by a strict user-specified window (Wang et al. 2015). We improve on this by softening the strict local context assumption, introducing a switch variable which decides whether a word not in the local context should be generated by conditioning also on the sense latent variable. Our experiments show that our sense representation provides substantial improvements over previous models. Second, we force the model to generate target-neighbor pairs at once in the local context, instead of generating words one by one. A target-neighbor pair (w_t, w) consists of the target word w_t and a neighboring word w in the local context. For example, the target-neighbor pairs in "cold snowy weather", where w_t is cold, are (cold, snowy) and (cold, weather). These pairs give explicit information on the lexical semantics of the target word given its neighboring words. In our running example (Figure 2), the cold: absence of heat and the cold: sensation from low temperature senses can be easily differentiated when we are given the target-neighbor pairs (cold, weather) and (cold, climate) for the former, and (cold, water) and (cold, fresh) for the latter sense, rather than the individual words. These extensions bring us to our proposed model called AutoSense. The graphical representation of AutoSense is shown in Figure 3, while the meanings of the notations used in this paper are shown in Table 1.
Table 1: Meanings of the notations in AutoSense. D: number of documents; L: number of local context words; M: number of global context words; S: number of senses; T: number of topics; V: vocabulary size; w_t: target word; w_l, w_m: word in the local/global context; s_l, s_m: sense in the local/global context; t_l, t_m: topic in the local/global context; x: sense/topic switch; θ_s, θ_t, θ_x: multinomial distributions over senses/topics/switches; θ_{s|t}, θ_{t|s}: multinomial distributions over senses/topics given topics/senses; θ_{st}: multinomial distribution over sense and topic pairs; φ^{(s)}, φ^{(t)}: multinomial distributions over words; α: Dirichlet prior over the θ distributions except θ_x; β: Dirichlet prior over the φ distributions; γ: Dirichlet prior over θ_x.
Generative process For each instance, we divide the text into two contexts: the local context L which includes the target word w t and its neighboring words w l , and the global context M which contains the other remaining words w m . Words from different contexts are generated separately.
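To make the local/global context split and the target-neighbor pairs concrete, here is a minimal Python sketch (not from the paper; the function and variable names such as split_contexts are illustrative assumptions) that, given a tokenized instance and the position of the target word, builds the local context L with a window of 10 words on each side, the global context M, and the target-neighbor pairs.

```python
def split_contexts(tokens, target_idx, window=10):
    """Split a tokenized instance into local/global contexts and build target-neighbor pairs.

    A sketch of the context handling described in the paper: the local context L holds the
    target word and up to `window` words on each side; every other word goes to the global
    context M; each local word is paired with the target word.
    """
    target = tokens[target_idx]
    lo, hi = max(0, target_idx - window), min(len(tokens), target_idx + window + 1)
    local = [w for i, w in enumerate(tokens) if lo <= i < hi and i != target_idx]
    global_ctx = [w for i, w in enumerate(tokens) if i < lo or i >= hi]
    pairs = [(target, w) for w in local]  # target-neighbor pairs (w_t, w_l)
    return local, global_ctx, pairs

# Running example from the paper:
local, global_ctx, pairs = split_contexts(["cold", "snowy", "weather"], target_idx=0)
# pairs == [("cold", "snowy"), ("cold", "weather")]
```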
In the global context M , words w m are generated from either a sense s or a topic t latent variable. The selection is done by a switch variable x. If x = 1, then the word generation is done by using the sense variable s. Otherwise, it is done by using the topic variable t. The probability of a global context word w m in document d is given below.
\[
P(w_m \mid d) = P(x{=}1 \mid d)\sum_{s} P(w_m \mid s)\,P(s \mid d) + P(x{=}2 \mid d)\sum_{t} P(w_m \mid t)\,P(t \mid d) = \theta_{x=1}\sum_{s}\theta^{(d)}_{s}\,\phi^{(s)}_{w_m} + \theta_{x=2}\sum_{t}\theta^{(d)}_{t}\,\phi^{(t)}_{w_m}
\]
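As a concrete reading of the mixture above, a minimal NumPy sketch (not from the paper; the array names theta_x, theta_s, theta_t, phi_s, phi_t are illustrative) that evaluates P(w_m | d) for a single global-context word:

```python
import numpy as np

def global_word_prob(w, theta_x, theta_s, theta_t, phi_s, phi_t):
    """P(w_m|d) for one global-context word, following the mixture above.

    theta_x : (2,)   switch distribution for the document, [P(x=1|d), P(x=2|d)]
    theta_s : (S,)   document sense distribution theta^(d)_s
    theta_t : (T,)   document topic distribution theta^(d)_t
    phi_s   : (S, V) per-sense word distributions phi^(s)
    phi_t   : (T, V) per-topic word distributions phi^(t)
    """
    sense_part = theta_x[0] * np.dot(theta_s, phi_s[:, w])
    topic_part = theta_x[1] * np.dot(theta_t, phi_t[:, w])
    return sense_part + topic_part
```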
In the local context L, words w_l are generated from both the sense s and topic t variables. Also, the target word w_t is generated along with w_l as target-neighbor pairs (w_t, w_l) using the sense variable s. The sense and topic variables are dependent on each other, so we generate them using the joint probability p(s, t|d). We factorize p(s, t|d) approximately using ideas from dependency networks (Heckerman et al. 2000) to avoid independence assumptions, i.e. p(a, b|c) = p(a|b, c) p(b|a, c), and deficient modeling (Brown et al. 1993) to ignore redundancies, i.e. p(a|b, c) p(b|a, c) = p(a|b) p(a|c) p(b|a) p(b|c) p(a, b). The probability of a local context word w_l in document d is given below.
\[
P(w_t, w_l \mid d) = \sum_{s}\sum_{t} p(w_t \mid s)\,p(w_l \mid s,t)\,p(s,t \mid d) \approx \sum_{s}\sum_{t} p(w_t \mid s)\,p(w_l \mid s,t)\,p(s \mid d,t)\,p(t \mid d,s) \approx \sum_{s}\sum_{t} p(w_t \mid s)\,p(w_l \mid s)\,p(w_l \mid t)\,p(s \mid d)\,p(s \mid t)\,p(t \mid d)\,p(t \mid s)\,p(s,t) = \sum_{s}\sum_{t} \phi^{(s)}_{w_t}\,\phi^{(s)}_{w_l}\,\phi^{(t)}_{w_l}\,\theta^{(d)}_{s}\,\theta^{(d)}_{t}\,\theta_{s|t}\,\theta_{t|s}\,\theta_{st}
\]
Inference We use collapsed Gibbs sampling (Griffiths and Steyvers 2004) to estimate the latent variables. At each transition step of the Markov chain, for each word w_m in the global context, we draw the switch x ∈ {1, 2} and the sense s = k or the topic t = j using the conditional probabilities given below. The variable C^{AB}_{ab} represents the number of co-assignments of a ∈ A and b ∈ B, excluding the current word. The conditioning term rest corresponds to the remaining variables, such as the instance d, the current word w_m, the θ and φ distributions, and the α, β, and γ Dirichlet priors.
\[
P(x{=}1, s{=}k \mid \mathrm{rest}) = \frac{C^{DX}_{d1} + \gamma}{\sum_{x'=1}^{2} C^{DX}_{dx'} + 2\gamma}\cdot\frac{C^{DS}_{dk} + \alpha}{\sum_{k'=1}^{S} C^{DS}_{dk'} + S\alpha}\cdot\frac{C^{SW}_{k w_m} + \beta}{\sum_{w'=1}^{V} C^{SW}_{k w'} + V_m\beta}
\]
\[
P(x{=}2, t{=}j \mid \mathrm{rest}) = \frac{C^{DX}_{d2} + \gamma}{\sum_{x'=1}^{2} C^{DX}_{dx'} + 2\gamma}\cdot\frac{C^{DT}_{dj} + \alpha}{\sum_{j'=1}^{T} C^{DT}_{dj'} + T\alpha}\cdot\frac{C^{TW}_{j w_m} + \beta}{\sum_{w'=1}^{V} C^{TW}_{j w'} + V_m\beta}
\]
Subsequently, for each word w_l and the target word w_t (forming the target-neighbor pair (w_t, w_l)) in the local context, we draw the sense s = k and the topic t = j variables using the conditional probability given below.
\[
P(t_i{=}j, s_i{=}k \mid \mathrm{rest}) = \frac{C^{DT}_{dj} + \alpha}{\sum_{j'=1}^{T} C^{DT}_{dj'} + T\alpha}\cdot\frac{C^{DS}_{dk} + \alpha}{\sum_{k'=1}^{S} C^{DS}_{dk'} + S\alpha}\cdot\frac{C^{TW}_{j w_l} + \beta}{\sum_{w'=1}^{V} C^{TW}_{j w'} + V_l\beta}\cdot\frac{C^{SW}_{k w_l} + \beta}{\sum_{w'=1}^{V} C^{SW}_{k w'} + V_l\beta}\cdot\frac{C^{SW}_{k w_t} + \beta}{\sum_{w'=1}^{V} C^{SW}_{k w'} + V_l\beta + 1}\cdot\frac{C^{ST}_{kj} + \alpha}{\sum_{j'=1}^{T} C^{ST}_{kj'} + T\alpha}\cdot\frac{C^{TS}_{jk} + \alpha}{\sum_{k'=1}^{S} C^{TS}_{jk'} + S\alpha}\cdot\frac{C^{ST}_{kj} + \alpha}{\sum_{k'=1}^{S}\sum_{j'=1}^{T} C^{ST}_{k'j'} + ST\alpha}
\]
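To illustrate how these conditionals are used, here is a simplified sketch (not the authors' code; the count-matrix names C_DX, C_DS, C_SW, C_DT, C_TW are assumptions) of one collapsed Gibbs draw for a single global-context word, i.e. the first pair of conditionals above; the local-context draw follows the same pattern with the additional factors. For simplicity the sketch smooths with the full vocabulary size V rather than V_m.

```python
import numpy as np

def sample_global_assignment(d, w, C_DX, C_DS, C_SW, C_DT, C_TW,
                             alpha, beta, gamma, rng):
    """One collapsed Gibbs draw for a global-context word (counts exclude the current word).

    Returns (x, index): x == 1 with a sense index k, or x == 2 with a topic index j.
    """
    S, V = C_SW.shape
    T = C_TW.shape[0]
    denom_x = C_DX[d].sum() + 2 * gamma

    # Unnormalized P(x=1, s=k | rest) for every sense k.
    p_sense = ((C_DX[d, 0] + gamma) / denom_x
               * (C_DS[d] + alpha) / (C_DS[d].sum() + S * alpha)
               * (C_SW[:, w] + beta) / (C_SW.sum(axis=1) + V * beta))
    # Unnormalized P(x=2, t=j | rest) for every topic j.
    p_topic = ((C_DX[d, 1] + gamma) / denom_x
               * (C_DT[d] + alpha) / (C_DT[d].sum() + T * alpha)
               * (C_TW[:, w] + beta) / (C_TW.sum(axis=1) + V * beta))

    probs = np.concatenate([p_sense, p_topic])
    probs /= probs.sum()
    idx = rng.choice(S + T, p=probs)
    return (1, idx) if idx < S else (2, idx - S)
```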
Word sense induction After inference is done, the approximate probability of a sense s of the target word in a given document d is induced from the sense distribution of the document, as shown in Equation (1) below, where C^{AB}_{ab} represents the number of co-assignments of a ∈ A and b ∈ B. We also calculate the word distribution of each sense, Equation (2) below, to inspect the meaning of each sense. For preprocessing, we perform tokenization, lemmatization, and symbol removal to build the word lists using Stanford CoreNLP (Manning et al. 2014). We divide the word lists into two contexts: the local and the global context. Following (Wang et al. 2015), we set the local context window to 10, with a maximum number of words of 21 (i.e. 10 words before and 10 words after the target word). Other words are put into the global context. Note, however, that AutoSense has a less strict global/local context assumption as it treats some words in the global context as local depending on the switch variable.
\[
\theta_{s|d} = \frac{C^{DS}_{ds}}{\sum_{s'=1}^{S} C^{DS}_{ds'}} \quad (1) \qquad \theta_{w|s} = \frac{C^{SW}_{sw}}{\sum_{w'=1}^{V} C^{SW}_{sw'}} \quad (2)
\]
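A small sketch (illustrative names, not from the paper) of how Equations (1) and (2) turn the final count matrices into the induced per-document sense distribution and the per-sense word distribution:

```python
import numpy as np

def induce_distributions(C_DS, C_SW, eps=1e-12):
    """Normalize the final count matrices into theta_{s|d} (Equation 1) and
    theta_{w|s} (Equation 2); eps guards against empty (garbage) senses."""
    theta_s_given_d = C_DS / np.maximum(C_DS.sum(axis=1, keepdims=True), eps)  # (D, S)
    theta_w_given_s = C_SW / np.maximum(C_SW.sum(axis=1, keepdims=True), eps)  # (S, V)
    return theta_s_given_d, theta_w_given_s

# The induced sense of instance d is argmax(theta_s_given_d[d]); the top words of
# theta_w_given_s[s] describe sense s, as in the word lists of Table 3.
```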
Parameter setting We set the hyperparameters to α = 0.1, β = 0.01, γ = 0.3, following the conventional setup (Griffiths and Steyvers 2004; Chemudugunta, Smyth, and Steyvers 2006). We arbitrarily set the number of senses to S = 15 and the number of topics to T = 2S = 30, following (Wang et al. 2015). We also include four other versions of our model: AutoSense-wp removes the target-neighbor pair constraint and transforms the local context to that of STM; AutoSense-sw removes the switch variable and transforms the global context to that of LDA; AutoSense(s=X) is the tuned (best) version of the model, where the number of senses is tuned on a separate development set provided by the shared tasks and X is the tuned number of senses, which differs per dataset; and AutoSense(s=100) is the overestimated (worst-case) version of the model, where we set the number of senses to an arbitrarily large number, i.e. 100.
We set the number of iterations to 2000 and run the Gibbs sampler. Following the convention of previous works (Lau et al. 2012;Goyal and Hovy 2014;Wang et al. 2015), we assume convergence when the number of iterations is high. However, due to the randomized nature of Gibbs sampling, we report the average scores over 5 runs of Gibbs sampling. We then use the distribution θ s|d as shown in Equation 1 as the solution of the WSI problem.
Experiments
Word sense induction
SemEval 2010 For the SemEval 2010 dataset, we compare models using two unsupervised metrics: V-measure (V-M) and paired F-score (F-S). V-M favors a high number of senses (e.g. assigning one cluster per instance), while F-S favors a small number of senses (e.g. all instances in one cluster) (Manandhar et al. 2010). In order to get a common ground for comparison, we take the geometric average (AVG) of both metrics, following (Wang et al. 2015). Finally, we also report the absolute difference between the actual (3.85) and the induced number of senses as δ(#S).
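Since V-M and F-S pull in opposite directions, the combined AVG score is simply their geometric mean; a one-line sketch (hypothetical helper name):

```python
import math

def avg_score(v_measure, paired_fscore):
    """Geometric average (AVG) of V-measure and paired F-score, both on the same scale."""
    return math.sqrt(v_measure * paired_fscore)

# e.g. avg_score(10.0, 60.0) ≈ 24.5
```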
We compare with seven other models: a) LDA on co-occurrence graphs (LDA) and b) spectral clustering on co-occurrence graphs (Spectral) as reported in (Goyal and Hovy 2014), c) Hidden Concept (HC), d) HC using Zipf's law (HC+Zipf), and e) the Bayesian nonparametric version of HC (BNP-HC) as reported in (Chang, Pei, and Chen 2014), f) CRP-based sense embeddings with positive PMI vectors as pre-trained vectors (CRP-PPMI), and g) the Multi-Sense Skip-gram Model (SE-WSI-fix) as reported in (Song 2016). Results are shown in Table 2a, where AutoSense outperforms the other competing models on AVG. Among the AutoSense models, the AutoSense-wp and AutoSense-sw versions perform the worst, emphasizing the necessity of the target-neighbor pairs and the switch variable. The overestimated AutoSense(s=100) performs better than previously proposed models, proving the robustness of our model to different word sense granularities. On the δ(#S) metric, the untuned AutoSense and AutoSense(s=5) perform the best. The V-M metric needs to be interpreted carefully, because it can easily be maximized by separating all instances into different sense clusters, thus overestimating the actual number of senses #S and decreasing the F-S metric. The model BNP-HC is an example of this: though its V-M is the highest, it scores the lowest on F-S and greatly overestimates #S, giving a very high δ(#S). The goal is thus a good balance of V-M and F-S (i.e. the highest AVG) and a close estimation of #S (i.e. the lowest δ(#S)), which is successfully achieved by our models.
SemEval 2013 Two metrics are used for the SemEval 2013 dataset: fuzzy B-cubed (F-BC) and fuzzy normalized mutual information (F-NMI). F-BC gives preference to labelling all instances with the same sense, while F-NMI gives preference to labelling all instances with distinct senses. Therefore, computing the AVG of both metrics is also necessary in this experiment, for ease of comparison, as also suggested in (Wang et al. 2015).
We use seven baselines, including a) a lexical substitution method (AI-KU) and b) a nonparametric HDP model (Unimelb), as reported in (Jurgens and Klapaftis 2013). Results are shown in Table 2b. Among the models, all versions of AutoSense perform better than the other models on AVG. The untuned AutoSense and AutoSense(s=7) especially garner a noticeable increase of 6.1% on the fuzzy B-cubed metric over MCC, the previous best model. We also notice a large 6.0% decrease in the fuzzy B-cubed of AutoSense when the target-neighbor pair context is removed, which means that introducing the target-neighbor pairs is crucial to the improvement of the model. Finally, the overestimated AutoSense model performs as well as the other AutoSense models, even outperforming all previous models on AVG, which proves the effectiveness of AutoSense even when S is set to a large value.
For completeness, we also report STM with additional contexts, STM+actual and STM+ukWac (Wang et al. 2015), which use the actual additional contexts from the original data and semantically similar contexts from ukWac, respectively, as additional global context. With the performance gain we achieved, AutoSense without additional context can perform comparably to models with additional contexts: our model greatly outperforms these models on the F-BC metric by at least 2%. Also, considering that both AutoSense and STM are LDA-based models, the same data enhancements can straightforwardly be applied when the need arises. We similarly apply the actual additional contexts to AutoSense and find that we achieve state-of-the-art performance on AVG.
Table 3: Six of the 15 senses of the target verb book using AutoSense with S = 15. The word lists shown are preprocessed to remove stopwords and the target word. The first three senses are assigned at least once to an instance document; the last three (marked *) are garbage senses.
Sense 1 (22 docs): hotel tour tourist summer flight
Sense 2 (3 docs): month ticket available performance
Sense 3 (3 docs): guest office stateroom class suite
* (0 docs): advance overseas line popular japan
* (0 docs): email day buy unable tour
* (0 docs): sort basic tour time
Sense granularity problem
The main weakness of LDA when used for the WSI task is the sense granularity problem. Recent models such as HC (Chang, Pei, and Chen 2014) and STM (Wang et al. 2015) mitigated this problem by tuning the number-of-senses hyperparameter S to minimize the error. However, such tuning, often empirically set to a small number such as S = 3 (Wang et al. 2015), fails to infer the varying number of senses of words, especially for words with a higher number of senses. Nonparametric models such as HDP (Teh et al. 2004) and BNP-HC (Chang, Pei, and Chen 2014) claim to automatically induce a different S for each word. However, as shown in the results in Table 2, the estimated S is far from the actual number of senses and both models are ineffective.
On the other hand, Table 2 also shows that AutoSense is effective even when S is overestimated. We explain why through an example result shown in Table 3, where the target word is the verb book, the actual number of senses is three, and S is set to 15. First, we see that there are senses which are not assigned to any instance document, signified by *, which we call garbage senses. We observe that representing the new sense latent variable as a distribution over topics effectively forces the model to throw away garbage senses. Second, while it is easy to distinguish the third sense (i.e., book: register in a booker) from the two other senses, the first and second senses both refer to planning or arranging for an event in advance. Incorporating the target-neighbor pairs helps the model differentiate them into the fine-grained senses book: arrange for and reserve in advance and book: engage for a performance.
We compare the competing models quantitatively on how well they detect the actual number of sense clusters using the cluster error, the mean absolute error between the detected and the actual number of sense clusters. We compare the cluster errors of LDA (Blei, Ng, and Jordan 2003), STM (Wang et al. 2015), HC (Chang, Pei, and Chen 2014), and a nonparametric model, HDP (Teh et al. 2004), with AutoSense. We report the results in Figure 4. Results show that the cluster error of LDA increases sharply as the number of senses exceeds the actual mean number of senses. HC and STM also throw away garbage senses since they also introduce, in some way, a new sense variable; however, the cluster errors of both models still increase when S is set beyond the maximum number of senses. We argue that this is because, first, their sense representation is not optimal as they impose a strict local/global context assumption, and second and most importantly, the models do not produce fine-grained senses. AutoSense does both garbage sense throwing and fine-grained sense induction, which helps in detecting the actual word sense granularity. Finally, the cluster error of AutoSense is always lower than that of HDP. This shows that AutoSense, despite being a parametric model, automatically detects the number of sense clusters without parameter tuning and is more accurate than the automatic detection of nonparametric models.
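A minimal sketch of the cluster error described above (the helper name and input format are assumptions): count how many senses actually receive at least one instance for each target word and take the mean absolute error against the gold number of senses.

```python
import numpy as np

def cluster_error(induced_assignments, gold_num_senses):
    """Mean absolute error between the number of induced (non-empty) sense clusters
    and the actual number of senses, averaged over target words.

    induced_assignments: dict word -> list of sense ids assigned to its instances
    gold_num_senses:     dict word -> gold number of senses
    """
    errors = [abs(len(set(senses)) - gold_num_senses[w])
              for w, senses in induced_assignments.items()]
    return float(np.mean(errors))
```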
Unsupervised author name disambiguation
Unsupervised author name disambiguation (UAND) is a task very similar to the WSI task, where ambiguous author names are the target words. However, one additional challenge of UAND is that there can be as many as 100 authors with the same name, whereas words have at most 20 different senses, at least in our datasets, as shown in the dataset statistics in Table 4. Moreover, the standard deviations of the author name disambiguation datasets are also higher, which means that there is more variation in the number of senses per target author name. Thus, in this task, the sense granularity problem is more difficult and needs to be addressed properly. Current state-of-the-art models use non-text features such as publication venue and citations (Tang et al. 2012). We argue that text features also provide informative clues for disambiguating author names. In this experiment, we use text features such as the title and abstract of research papers as the data instances of the task. In addition, we also include the author names and the publication venue as pseudo-words. In this way, we can reformulate the UAND task as a WSI task and exploit text features not used in current techniques.
Table 4: Statistics of the number of senses of target words/names in the datasets used in the paper.
Experimental setup We use two publicly available datasets for the UAND task: Arnet (https://aminer.org/disambiguation) and PubMed (https://github.com/Yonsei-TSMM/author_name_disambiguation). The Arnet dataset contains 100 ambiguous author names and a total of 7528 papers as data instances. Each instance includes the title, author list, and publication venue of a research paper authored by the given author name. In addition, we also manually extract the abstracts of the research papers for additional context. The PubMed dataset contains 37 author names with a total of 2875 research papers as instances. It includes the PubMed IDs of the papers authored by the given author name. We extract the title, author list, publication venue, and abstract of each PubMed ID from the PubMed website.
We use LDA (Blei, Ng, and Jordan 2003), HC (Chang, Pei, and Chen 2014), and STM (Wang et al. 2015) as baselines. We do not compare with non-text feature-based models (Tang et al. 2012; Cen et al. 2013) because our goal is to compare sense topic models on a task where the sense granularities are more varied. For STM and AutoSense, the title, publication venue, and the author names are used as the local context while the abstract is used as the global context. This decision is based on conclusions from previous work (Tang et al. 2012) that the title, publication venue, and author names are more informative than the abstract when disambiguating author names. We use the same parameters as above, and we set S to 5, 25, 50, and 100 to compare the performance of the models as the number of senses increases.
Results For evaluation, we use the pairwise F1 measure to compare the performance of competing models, following (Tang et al. 2012). Results are shown in Figure 5. AutoSense performs the best in almost all settings, except on the PubMed dataset when S = 5, where it garners a result comparable to STM. However, when S is set close to the maximum number of senses in the dataset (i.e. 28 in PubMed and 112 in Arnet), AutoSense performs the best among the models. LDA and HC perform badly in all settings and their performance decreases greatly when S becomes high. STM also shows a decrease in performance on the PubMed dataset when S = 100. This is because the PubMed dataset has a lower maximum number of senses and STM is sensitive to the setting of S, which hurts its robustness to different sense granularities.
Conclusion
We proposed a solution to the sense granularity problem, one of the major challenges of the WSI task. We introduced AutoSense, a latent variable model that not only throws away garbage senses but also induces fine-grained senses. We showed that AutoSense greatly outperforms the current state-of-the-art models on both the SemEval 2010 and 2013 WSI datasets. We also showed through experiments how AutoSense is able to overcome the sense granularity problem, a well-known flaw of latent variable models. We further applied our model to the UAND task, a similar task but with a more varied number of senses, and showed that AutoSense performs the best among latent variable models, proving its robustness to different sense granularities.
| 4,664 |
1811.09242
|
2901560048
|
Word sense induction (WSI), or the task of automatically discovering multiple senses or meanings of a word, has three main challenges: domain adaptability, novel sense detection, and sense granularity flexibility. While current latent variable models are known to solve the first two challenges, they are not flexible to different word sense granularities, which differ very much among words, from aardvark with one sense, to play with over 50 senses. Current models either require hyperparameter tuning or nonparametric induction of the number of senses, which we find both to be ineffective. Thus, we aim to eliminate these requirements and solve the sense granularity problem by proposing AutoSense, a latent variable model based on two observations: (1) senses are represented as a distribution over topics, and (2) senses generate pairings between the target word and its neighboring word. These observations alleviate the problem by (a) throwing garbage senses and (b) additionally inducing fine-grained word senses. Results show great improvements over the state-of-the-art models on popular WSI datasets. We also show that AutoSense is able to learn the appropriate sense granularity of a word. Finally, we apply AutoSense to the unsupervised author name disambiguation task where the sense granularity problem is more evident and show that AutoSense is evidently better than competing models. We share our data and code here: this https URL.
|
In the unsupervised author name disambiguation (UAND) domain, LDA-based models have also been used @cite_22 to exploit text features for the task, while non-text features such as co-authors, publication venue, year, and citations are found to be stronger features @cite_28 . In this paper, we study how to improve the performance of text features for UAND using latent variable models, which can later be combined with non-text features in future work.
|
{
"abstract": [
"Despite years of research, the name ambiguity problem remains largely unresolved. Outstanding issues include how to capture all information for name disambiguation in a unified approach, and how to determine the number of people K in the disambiguation process. In this paper, we formalize the problem in a unified probabilistic framework, which incorporates both attributes and relationships. Specifically, we define a disambiguation objective function for the problem and propose a two-step parameter estimation algorithm. We also investigate a dynamic approach for estimating the number of people K. Experiments show that our proposed framework significantly outperforms four baseline methods of using clustering algorithms and two other previous methods. Experiments also indicate that the number K automatically found by our method is close to the actual number.",
"In bibliographies like DBLP and Citeseer, there are three kinds of entity-name problems that need to be solved. First, multiple entities share one name, which is called the name sharing problem. Second, one entity has different names, which is called the name variant problem. Third, multiple entities share multiple names, which is called the name mixing problem. We aim to solve these problems based on one model in this paper. We call this task complete entity resolution. Different from previous work, our work use global information based on data with two types of information, words and author names. We propose a generative latent topic model that involves both author names and words — the LDA-dual model, by extending the LDA (Latent Dirichlet Allocation) model. We also propose a method to obtain model parameters that is global information. Based on obtained model parameters, we propose two algorithms to solve the three problems mentioned above. Experimental results demonstrate the effectiveness and great potential of the proposed model and algorithms."
],
"cite_N": [
"@cite_28",
"@cite_22"
],
"mid": [
"2145893390",
"2167055514"
]
}
|
AutoSense Model for Word Sense Induction
|
Word sense induction (WSI) is the task where, given an ambiguous target word (e.g. cold) and texts where the word is used, we automatically discover its multiple senses or meanings (e.g. (1) nose infection, (2) absence of heat, etc.). We show examples of words with multiple senses and example usage in text in Figure 1 (panel titles: Senses of play; Senses of cold); all sense meanings are copied from WordNet: http://wordnetweb.princeton.edu/perl/webwn. WSI is distinct from its supervised counterpart, word sense disambiguation (WSD) (Stevenson and Wilks 2003), because WSI models must address the following challenges due to their unsupervised nature: (C1) adaptability to new domains, (C2) ability to detect novel senses, and (C3) flexibility to different word sense granularities (Jurgens and Klapaftis 2013). Another task similar to WSI is unsupervised author name disambiguation (UAND) (Song et al. 2007), which aims to automatically find different authors, instead of words, with the same name.
In this paper, we consider a latent variable modeling approach to the WSI problem as it has proven to be more effective than other approaches (Chang, Pei, and Chen 2014; Komninos and Manandhar 2016). Specifically, we look into methods based on Latent Dirichlet Allocation (LDA) (Blei, Ng, and Jordan 2003), a topic modeling method that automatically discovers the topics underlying a set of documents using Dirichlet priors to infer the multinomial distributions over words and topics. LDA naturally answers two of the three main challenges of the WSI task mentioned above, i.e. (C1) and (C2) (Brody and Lapata 2009). However, it is not flexible with regard to (C3), the sense granularity problem, as it requires the user to specify the number of senses: current systems (Wang et al. 2015; Chang, Pei, and Chen 2014) need to set the number of senses to a small value (3 or 5 in the literature) to obtain good accuracy, yet many words have a large number of senses, e.g. play in Figure 1.
Figure 2: Example induced senses when the target word is cold from LDA and AutoSense. Applying our observations to LDA introduces both garbage and fine-grained senses. (The figure contrasts LDA, which treats topics such as medical, temperature, science, and weather directly as senses, with AutoSense, which represents senses as mixtures of topics together with target-neighbor pairs such as (cold, common), (cold, sick), and (cold, sneeze).)
To this end, we propose a latent variable model called AutoSense that solves all the challenges of WSI, including the sense granularity problem. Consider Figure 2 on finding the senses of the target word cold. An LDA model naively considers the topics as senses and thus differentiates the usage of cold in the medical and science domains, even though the same sense is commonly used in the two domains. This results in too many senses induced by the model. We extend LDA using two observations. First, we introduce a separate latent variable for senses, which can be represented as a distribution over topics. This introduces more accurate induced senses (e.g. the cold: nose infection sense can come from a mixture of the medical, science, and temperature topics), as well as garbage senses (colored red in the figure), since most topic distributions will not be assigned to any instance. Second, we enforce senses to generate target-neighbor pairs, where a pair (w_t, w) consists of the target word w_t and one of its neighboring words w, generated at once. This separates the topic distributions into fine-grained senses based on lexical semantic features easily captured by the target-neighbor pairs. For example, the cold: absence of heat and the cold: sensation from low temperature senses are both related to temperature, but have different syntactic and semantic usage.
By applying the two observations above, AutoSense removes the strict requirement of correctly setting the number of senses by throwing away garbage senses and introducing fine-grained senses. Nonparametric models (Teh et al. 2004; Chang, Pei, and Chen 2014) have also been used to solve this problem by automatically inducing the number of senses; however, our experiments show that these models are less effective than parametric models and induce an incorrect number of senses. Our proposed model is parametric, yet it is able to adapt to the different numbers of senses of different words, even when the number of senses is set to an arbitrarily large number. Moreover, the model can also be used in other tasks such as UAND where the variance in the number of senses is large. To the best of our knowledge, we are the first to experiment extensively on the sense granularity problem of parametric latent variable models.
In our experiments, we estimate the parameters of the model using collapsed Gibbs sampling and take the sense distribution of each instance as the WSI solution. We evaluate our model on the SemEval 2010 and 2013 WSI datasets (Manandhar et al. 2010; Jurgens and Klapaftis 2013). Results show that AutoSense performs better than previous state-of-the-art models. We also provide analyses and experiments that show how AutoSense overcomes the sense granularity issue. Finally, we show that our model performs the best on unsupervised author name disambiguation (UAND), where the sense granularities are extremely varied.
Proposed Model
There are two reasons why Latent Dirichlet Allocation (LDA) (Blei, Ng, and Jordan 2003) is not effective for WSI. First, LDA tries to give instance assignments to all senses even when it is unnecessary. For example, when the number of senses S is set to 10, the model tries to assign all the senses to all instances even when the original number of senses of a target word is 3. LDA extensions (Wang et al. 2015; Chang, Pei, and Chen 2014) mitigated this problem by setting S to a small number (e.g. 3 or 5). However, this is not a good solution because there are many words with more than five senses. Second, LDA and its extensions do not consider the existence of fine-grained senses. For example, the cold: absence of heat and the cold: sensation from low temperature senses are fine-grained senses because they are similarly related to temperature yet have different usage.
AutoSense Model
To solve the problems above, we propose to extend LDA in two parts. First, we introduce a new latent variable, apart from the topic latent variable, to represent word senses. Previous works also attempted to introduce a separate sense latent variable to generate all the words (Chang, Pei, and Chen 2014), or to generate only the neighboring words within a local context decided by a strict user-specified window (Wang et al. 2015). We improve on this by softening the strict local context assumption, introducing a switch variable which decides whether a word not in the local context should be generated by conditioning also on the sense latent variable. Our experiments show that our sense representation provides substantial improvements over previous models. Second, we force the model to generate target-neighbor pairs at once in the local context, instead of generating words one by one. A target-neighbor pair (w_t, w) consists of the target word w_t and a neighboring word w in the local context. For example, the target-neighbor pairs in "cold snowy weather", where w_t is cold, are (cold, snowy) and (cold, weather). These pairs give explicit information on the lexical semantics of the target word given its neighboring words. In our running example (Figure 2), the cold: absence of heat and the cold: sensation from low temperature senses can be easily differentiated when we are given the target-neighbor pairs (cold, weather) and (cold, climate) for the former, and (cold, water) and (cold, fresh) for the latter sense, rather than the individual words. These extensions bring us to our proposed model called AutoSense. The graphical representation of AutoSense is shown in Figure 3, while the meanings of the notations used in this paper are shown in Table 1.
Table 1: Meanings of the notations in AutoSense. D: number of documents; L: number of local context words; M: number of global context words; S: number of senses; T: number of topics; V: vocabulary size; w_t: target word; w_l, w_m: word in the local/global context; s_l, s_m: sense in the local/global context; t_l, t_m: topic in the local/global context; x: sense/topic switch; θ_s, θ_t, θ_x: multinomial distributions over senses/topics/switches; θ_{s|t}, θ_{t|s}: multinomial distributions over senses/topics given topics/senses; θ_{st}: multinomial distribution over sense and topic pairs; φ^{(s)}, φ^{(t)}: multinomial distributions over words; α: Dirichlet prior over the θ distributions except θ_x; β: Dirichlet prior over the φ distributions; γ: Dirichlet prior over θ_x.
Generative process For each instance, we divide the text into two contexts: the local context L which includes the target word w t and its neighboring words w l , and the global context M which contains the other remaining words w m . Words from different contexts are generated separately.
In the global context M , words w m are generated from either a sense s or a topic t latent variable. The selection is done by a switch variable x. If x = 1, then the word generation is done by using the sense variable s. Otherwise, it is done by using the topic variable t. The probability of a global context word w m in document d is given below.
\[
P(w_m \mid d) = P(x{=}1 \mid d)\sum_{s} P(w_m \mid s)\,P(s \mid d) + P(x{=}2 \mid d)\sum_{t} P(w_m \mid t)\,P(t \mid d) = \theta_{x=1}\sum_{s}\theta^{(d)}_{s}\,\phi^{(s)}_{w_m} + \theta_{x=2}\sum_{t}\theta^{(d)}_{t}\,\phi^{(t)}_{w_m}
\]
In the local context L, words w_l are generated from both the sense s and topic t variables. Also, the target word w_t is generated along with w_l as target-neighbor pairs (w_t, w_l) using the sense variable s. The sense and topic variables are dependent on each other, so we generate them using the joint probability p(s, t|d). We factorize p(s, t|d) approximately using ideas from dependency networks (Heckerman et al. 2000) to avoid independence assumptions, i.e. p(a, b|c) = p(a|b, c) p(b|a, c), and deficient modeling (Brown et al. 1993) to ignore redundancies, i.e. p(a|b, c) p(b|a, c) = p(a|b) p(a|c) p(b|a) p(b|c) p(a, b). The probability of a local context word w_l in document d is given below.
\[
P(w_t, w_l \mid d) = \sum_{s}\sum_{t} p(w_t \mid s)\,p(w_l \mid s,t)\,p(s,t \mid d) \approx \sum_{s}\sum_{t} p(w_t \mid s)\,p(w_l \mid s,t)\,p(s \mid d,t)\,p(t \mid d,s) \approx \sum_{s}\sum_{t} p(w_t \mid s)\,p(w_l \mid s)\,p(w_l \mid t)\,p(s \mid d)\,p(s \mid t)\,p(t \mid d)\,p(t \mid s)\,p(s,t) = \sum_{s}\sum_{t} \phi^{(s)}_{w_t}\,\phi^{(s)}_{w_l}\,\phi^{(t)}_{w_l}\,\theta^{(d)}_{s}\,\theta^{(d)}_{t}\,\theta_{s|t}\,\theta_{t|s}\,\theta_{st}
\]
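To make the deficient factorization concrete, a small NumPy-style sketch (illustrative array names, not from the paper) that evaluates the final product form of P(w_t, w_l | d) by summing over all sense-topic pairs:

```python
import numpy as np

def local_pair_prob(wt, wl, theta_s, theta_t, theta_s_given_t, theta_t_given_s,
                    theta_st, phi_s, phi_t):
    """P(w_t, w_l | d) under the deficient factorization above.

    theta_s, theta_t : (S,), (T,)   document-level sense/topic distributions
    theta_s_given_t  : (T, S)       P(s|t); theta_t_given_s: (S, T) is P(t|s)
    theta_st         : (S, T)       joint sense-topic distribution
    phi_s, phi_t     : (S, V), (T, V) per-sense / per-topic word distributions
    """
    S, T = theta_st.shape
    total = 0.0
    for s in range(S):
        for t in range(T):
            total += (phi_s[s, wt] * phi_s[s, wl] * phi_t[t, wl]
                      * theta_s[s] * theta_t[t]
                      * theta_s_given_t[t, s] * theta_t_given_s[s, t]
                      * theta_st[s, t])
    return total
```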
Inference We use collapsed Gibbs sampling (Griffiths and Steyvers 2004) to estimate the latent variables. At each transition step of the Markov chain, for each word w_m in the global context, we draw the switch x ∈ {1, 2} and the sense s = k or the topic t = j using the conditional probabilities given below. The variable C^{AB}_{ab} represents the number of co-assignments of a ∈ A and b ∈ B, excluding the current word. The conditioning term rest corresponds to the remaining variables, such as the instance d, the current word w_m, the θ and φ distributions, and the α, β, and γ Dirichlet priors.
\[
P(x{=}1, s{=}k \mid \mathrm{rest}) = \frac{C^{DX}_{d1} + \gamma}{\sum_{x'=1}^{2} C^{DX}_{dx'} + 2\gamma}\cdot\frac{C^{DS}_{dk} + \alpha}{\sum_{k'=1}^{S} C^{DS}_{dk'} + S\alpha}\cdot\frac{C^{SW}_{k w_m} + \beta}{\sum_{w'=1}^{V} C^{SW}_{k w'} + V_m\beta}
\]
\[
P(x{=}2, t{=}j \mid \mathrm{rest}) = \frac{C^{DX}_{d2} + \gamma}{\sum_{x'=1}^{2} C^{DX}_{dx'} + 2\gamma}\cdot\frac{C^{DT}_{dj} + \alpha}{\sum_{j'=1}^{T} C^{DT}_{dj'} + T\alpha}\cdot\frac{C^{TW}_{j w_m} + \beta}{\sum_{w'=1}^{V} C^{TW}_{j w'} + V_m\beta}
\]
Subsequently, for each word w_l and the target word w_t (forming the target-neighbor pair (w_t, w_l)) in the local context, we draw the sense s = k and the topic t = j variables using the conditional probability given below.
\[
P(t_i{=}j, s_i{=}k \mid \mathrm{rest}) = \frac{C^{DT}_{dj} + \alpha}{\sum_{j'=1}^{T} C^{DT}_{dj'} + T\alpha}\cdot\frac{C^{DS}_{dk} + \alpha}{\sum_{k'=1}^{S} C^{DS}_{dk'} + S\alpha}\cdot\frac{C^{TW}_{j w_l} + \beta}{\sum_{w'=1}^{V} C^{TW}_{j w'} + V_l\beta}\cdot\frac{C^{SW}_{k w_l} + \beta}{\sum_{w'=1}^{V} C^{SW}_{k w'} + V_l\beta}\cdot\frac{C^{SW}_{k w_t} + \beta}{\sum_{w'=1}^{V} C^{SW}_{k w'} + V_l\beta + 1}\cdot\frac{C^{ST}_{kj} + \alpha}{\sum_{j'=1}^{T} C^{ST}_{kj'} + T\alpha}\cdot\frac{C^{TS}_{jk} + \alpha}{\sum_{k'=1}^{S} C^{TS}_{jk'} + S\alpha}\cdot\frac{C^{ST}_{kj} + \alpha}{\sum_{k'=1}^{S}\sum_{j'=1}^{T} C^{ST}_{k'j'} + ST\alpha}
\]
Word sense induction After inference is done, the approximate probability of a sense s of the target word in a given document d is induced from the sense distribution of the document, as shown in Equation (1) below, where C^{AB}_{ab} represents the number of co-assignments of a ∈ A and b ∈ B. We also calculate the word distribution of each sense, Equation (2) below, to inspect the meaning of each sense. For preprocessing, we perform tokenization, lemmatization, and symbol removal to build the word lists using Stanford CoreNLP (Manning et al. 2014). We divide the word lists into two contexts: the local and the global context. Following (Wang et al. 2015), we set the local context window to 10, with a maximum number of words of 21 (i.e. 10 words before and 10 words after the target word). Other words are put into the global context. Note, however, that AutoSense has a less strict global/local context assumption as it treats some words in the global context as local depending on the switch variable.
\[
\theta_{s|d} = \frac{C^{DS}_{ds}}{\sum_{s'=1}^{S} C^{DS}_{ds'}} \quad (1) \qquad \theta_{w|s} = \frac{C^{SW}_{sw}}{\sum_{w'=1}^{V} C^{SW}_{sw'}} \quad (2)
\]
Parameter setting We set the hyperparameters to α = 0.1, β = 0.01, γ = 0.3, following the conventional setup (Griffiths and Steyvers 2004; Chemudugunta, Smyth, and Steyvers 2006). We arbitrarily set the number of senses to S = 15 and the number of topics to T = 2S = 30, following (Wang et al. 2015). We also include four other versions of our model: AutoSense-wp removes the target-neighbor pair constraint and transforms the local context to that of STM; AutoSense-sw removes the switch variable and transforms the global context to that of LDA; AutoSense(s=X) is the tuned (best) version of the model, where the number of senses is tuned on a separate development set provided by the shared tasks and X is the tuned number of senses, which differs per dataset; and AutoSense(s=100) is the overestimated (worst-case) version of the model, where we set the number of senses to an arbitrarily large number, i.e. 100.
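For reference, the hyperparameter settings described in this section can be collected into a small configuration sketch (a plain Python dict; the key names are illustrative, not from a released codebase):

```python
# Hyperparameters as reported in the paper; the key names are illustrative.
CONFIG = {
    "alpha": 0.1,        # Dirichlet prior over the theta distributions (except theta_x)
    "beta": 0.01,        # Dirichlet prior over the phi distributions
    "gamma": 0.3,        # Dirichlet prior over theta_x (the sense/topic switch)
    "num_senses": 15,    # S, set arbitrarily; AutoSense tolerates overestimation
    "num_topics": 30,    # T = 2 * S, following Wang et al. (2015)
    "iterations": 2000,  # Gibbs sampling iterations per run
    "num_runs": 5,       # reported scores are averaged over independent Gibbs runs
}
```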
We set the number of iterations to 2000 and run the Gibbs sampler. Following the convention of previous works (Lau et al. 2012;Goyal and Hovy 2014;Wang et al. 2015), we assume convergence when the number of iterations is high. However, due to the randomized nature of Gibbs sampling, we report the average scores over 5 runs of Gibbs sampling. We then use the distribution θ s|d as shown in Equation 1 as the solution of the WSI problem.
Experiments
Word sense induction
SemEval 2010 For the SemEval 2010 dataset, we compare models using two unsupervised metrics: V-measure (V-M) and paired F-score (F-S). V-M favors a high number of senses (e.g. assigning one cluster per instance), while F-S favors a small number of senses (e.g. all instances in one cluster) (Manandhar et al. 2010). In order to get a common ground for comparison, we take the geometric average (AVG) of both metrics, following (Wang et al. 2015). Finally, we also report the absolute difference between the actual (3.85) and the induced number of senses as δ(#S).
We compare with seven other models: a) LDA on co-occurrence graphs (LDA) and b) spectral clustering on co-occurrence graphs (Spectral) as reported in (Goyal and Hovy 2014), c) Hidden Concept (HC), d) HC using Zipf's law (HC+Zipf), and e) the Bayesian nonparametric version of HC (BNP-HC) as reported in (Chang, Pei, and Chen 2014), f) CRP-based sense embeddings with positive PMI vectors as pre-trained vectors (CRP-PPMI), and g) the Multi-Sense Skip-gram Model (SE-WSI-fix) as reported in (Song 2016). Results are shown in Table 2a, where AutoSense outperforms the other competing models on AVG. Among the AutoSense models, the AutoSense-wp and AutoSense-sw versions perform the worst, emphasizing the necessity of the target-neighbor pairs and the switch variable. The overestimated AutoSense(s=100) performs better than previously proposed models, proving the robustness of our model to different word sense granularities. On the δ(#S) metric, the untuned AutoSense and AutoSense(s=5) perform the best. The V-M metric needs to be interpreted carefully, because it can easily be maximized by separating all instances into different sense clusters, thus overestimating the actual number of senses #S and decreasing the F-S metric. The model BNP-HC is an example of this: though its V-M is the highest, it scores the lowest on F-S and greatly overestimates #S, giving a very high δ(#S). The goal is thus a good balance of V-M and F-S (i.e. the highest AVG) and a close estimation of #S (i.e. the lowest δ(#S)), which is successfully achieved by our models.
SemEval 2013 Two metrics are used for the SemEval 2013 dataset: fuzzy B-cubed (F-BC) and fuzzy normalized mutual information (F-NMI). F-BC gives preference to labelling all instances with the same sense, while F-NMI gives preference to labelling all instances with distinct senses. Therefore, computing the AVG of both metrics is also necessary in this experiment, for ease of comparison, as also suggested in (Wang et al. 2015).
We use seven baselines, including a) a lexical substitution method (AI-KU) and b) a nonparametric HDP model (Unimelb), as reported in (Jurgens and Klapaftis 2013). Results are shown in Table 2b. Among the models, all versions of AutoSense perform better than the other models on AVG. The untuned AutoSense and AutoSense(s=7) especially garner a noticeable increase of 6.1% on the fuzzy B-cubed metric over MCC, the previous best model. We also notice a large 6.0% decrease in the fuzzy B-cubed of AutoSense when the target-neighbor pair context is removed, which means that introducing the target-neighbor pairs is crucial to the improvement of the model. Finally, the overestimated AutoSense model performs as well as the other AutoSense models, even outperforming all previous models on AVG, which proves the effectiveness of AutoSense even when S is set to a large value.
For completeness, we also report STM with additional contexts, STM+actual and STM+ukWac (Wang et al. 2015), which use the actual additional contexts from the original data and semantically similar contexts from ukWac, respectively, as additional global context. With the performance gain we achieved, AutoSense without additional context can perform comparably to models with additional contexts: our model greatly outperforms these models on the F-BC metric by at least 2%. Also, considering that both AutoSense and STM are LDA-based models, the same data enhancements can straightforwardly be applied when the need arises. We similarly apply the actual additional contexts to AutoSense and find that we achieve state-of-the-art performance on AVG.
Table 3: Six of the 15 senses of the target verb book using AutoSense with S = 15. The word lists shown are preprocessed to remove stopwords and the target word. The first three senses are assigned at least once to an instance document; the last three (marked *) are garbage senses.
Sense 1 (22 docs): hotel tour tourist summer flight
Sense 2 (3 docs): month ticket available performance
Sense 3 (3 docs): guest office stateroom class suite
* (0 docs): advance overseas line popular japan
* (0 docs): email day buy unable tour
* (0 docs): sort basic tour time
Sense granularity problem
The main weakness of LDA when used for the WSI task is the sense granularity problem. Recent models such as HC (Chang, Pei, and Chen 2014) and STM (Wang et al. 2015) mitigated this problem by tuning the number-of-senses hyperparameter S to minimize the error. However, such tuning, often empirically set to a small number such as S = 3 (Wang et al. 2015), fails to infer the varying number of senses of words, especially for words with a higher number of senses. Nonparametric models such as HDP (Teh et al. 2004) and BNP-HC (Chang, Pei, and Chen 2014) claim to automatically induce a different S for each word. However, as shown in the results in Table 2, the estimated S is far from the actual number of senses and both models are ineffective.
On the other hand, Table 2 also shows that AutoSense is effective even when S is overestimated. We explain why through an example result shown in Table 3, where the target word is the verb book, the actual number of senses is three, and S is set to 15. First, we see that there are senses which are not assigned to any instance document, signified by *, which we call garbage senses. We observe that representing the new sense latent variable as a distribution over topics effectively forces the model to throw away garbage senses. Second, while it is easy to distinguish the third sense (i.e., book: register in a booker) from the two other senses, the first and second senses both refer to planning or arranging for an event in advance. Incorporating the target-neighbor pairs helps the model differentiate them into the fine-grained senses book: arrange for and reserve in advance and book: engage for a performance.
We compare the competing models quantitatively on how well they detect the actual number of sense clusters using the cluster error, the mean absolute error between the detected and the actual number of sense clusters. We compare the cluster errors of LDA (Blei, Ng, and Jordan 2003), STM (Wang et al. 2015), HC (Chang, Pei, and Chen 2014), and a nonparametric model, HDP (Teh et al. 2004), with AutoSense. We report the results in Figure 4. Results show that the cluster error of LDA increases sharply as the number of senses exceeds the actual mean number of senses. HC and STM also throw away garbage senses since they also introduce, in some way, a new sense variable; however, the cluster errors of both models still increase when S is set beyond the maximum number of senses. We argue that this is because, first, their sense representation is not optimal as they impose a strict local/global context assumption, and second and most importantly, the models do not produce fine-grained senses. AutoSense does both garbage sense throwing and fine-grained sense induction, which helps in detecting the actual word sense granularity. Finally, the cluster error of AutoSense is always lower than that of HDP. This shows that AutoSense, despite being a parametric model, automatically detects the number of sense clusters without parameter tuning and is more accurate than the automatic detection of nonparametric models.
Unsupervised author name disambiguation
Unsupervised author name disambiguation (UAND) is a task very similar to the WSI task, where ambiguous author names are the target words. However, one additional challenge of UAND is that there can be as many as 100 authors with the same name, whereas words have at most 20 different senses, at least in our datasets, as shown in the dataset statistics in Table 4. Moreover, the standard deviations of the author name disambiguation datasets are also higher, which means that there is more variation in the number of senses per target author name. Thus, in this task, the sense granularity problem is more difficult and needs to be addressed properly. Current state-of-the-art models use non-text features such as publication venue and citations (Tang et al. 2012). We argue that text features also provide informative clues to disambiguate author names. In this experiment, we make use of text features such as the title and abstract of research papers as the data instances of the task. In addition, we also include author names and the publication venue in our dataset as pseudo-words. In this way, we can reformulate the UAND task as a WSI task and exploit text features not used in current techniques.
Table 4: Statistics of the number of senses of target words/names in the datasets used in the paper.
Experimental setup We use two publicly available datasets for the UAND task: Arnet (https://aminer.org/disambiguation) and PubMed (https://github.com/Yonsei-TSMM/author_name_disambiguation). The Arnet dataset contains 100 ambiguous author names and a total of 7528 papers as data instances. Each instance includes the title, author list, and publication venue of a research paper authored by the given author name. In addition, we also manually extract the abstracts of the research papers for additional context. The PubMed dataset contains 37 author names with a total of 2875 research papers as instances. It includes the PubMed ID of the papers authored by the given author name. We extract the title, author list, publication venue, and abstract of each PubMed ID from the PubMed website.
We use LDA (Blei, Ng, and Jordan 2003), HC (Chang, Pei, and Chen 2014) and STM (Wang et al. 2015) as baselines. We do not compare with non-text feature-based models (Tang et al. 2012; Cen et al. 2013) because our goal is to compare sense topic models on a task where the sense granularities are more varied. For STM and AutoSense, the title, publication venue and author names are used as local contexts while the abstract is used as the global context. This decision is based on conclusions from previous work (Tang et al. 2012) that the title, publication venue, and author names are more informative than the abstract when disambiguating author names. We use the same parameters as above, and we set S to 5, 25, 50, and 100 to compare the performances of the models as the number of senses increases.
Results For evaluation, we use the pairwise F1 measure to compare the performance of competing models, following (Tang et al. 2012). Results are shown in Figure 5. AutoSense performs the best in almost all settings, except on the PubMed dataset with S = 5, where it obtains a result comparable to STM. However, when S is set close to the maximum number of senses in the dataset (i.e., 28 in PubMed and 112 in Arnet), AutoSense performs the best among the models. LDA and HC perform badly in all settings and their performance degrades greatly when S becomes high. STM also shows a decrease in performance on the PubMed dataset when S = 100. This is because the PubMed dataset has a lower maximum number of senses and STM is sensitive to the setting of S, which hurts its robustness to different sense granularities.
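As a reference for the evaluation protocol, a small sketch of the pairwise F1 measure is given below; it follows the usual definition (a pair of instances counts as a true positive when both the model and the gold labels place them in the same cluster), and the example labels are illustrative only.

```python
from itertools import combinations

def pairwise_f1(predicted, gold):
    """Pairwise F1 over all instance pairs for a clustering task."""
    tp = pred_pos = gold_pos = 0
    for i, j in combinations(range(len(gold)), 2):
        same_pred = predicted[i] == predicted[j]
        same_gold = gold[i] == gold[j]
        pred_pos += same_pred          # pairs the model groups together
        gold_pos += same_gold          # pairs the gold labels group together
        tp += same_pred and same_gold  # pairs grouped together by both
    precision = tp / pred_pos if pred_pos else 0.0
    recall = tp / gold_pos if gold_pos else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(pairwise_f1(predicted=[0, 0, 1, 1], gold=[0, 0, 0, 1]))  # 0.4
```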
Conclusion
We proposed a solution to the sense granularity problem, one of the major challenges of the WSI task. We introduced AutoSense, a latent variable model that not only throws away garbage senses but also induces fine-grained senses. We showed that AutoSense greatly outperforms the current state-of-the-art models on both the SemEval 2010 and 2013 WSI datasets. We also showed experimentally how AutoSense is able to overcome the sense granularity problem, a well-known flaw of latent variable models. We further applied our model to the UAND task, a similar task but with a more varied number of senses, and showed that AutoSense performs the best among latent variable models, proving its robustness to different sense granularities.
| 4,664 |
1811.08188
|
2901707509
|
3D object detection from monocular images has proven to be an enormously challenging task, with the performance of leading systems not yet achieving even 10% of that of LiDAR-based counterparts. One explanation for this performance gap is that existing systems are entirely at the mercy of the perspective image-based representation, in which the appearance and scale of objects varies drastically with depth and meaningful distances are difficult to infer. In this work we argue that the ability to reason about the world in 3D is an essential element of the 3D object detection task. To this end, we introduce the orthographic feature transform, which enables us to escape the image domain by mapping image-based features into an orthographic 3D space. This allows us to reason holistically about the spatial configuration of the scene in a domain where scale is consistent and distances between objects are meaningful. We apply this transformation as part of an end-to-end deep learning architecture and achieve state-of-the-art performance on the KITTI 3D object benchmark. We will release full source code and pretrained models upon acceptance of this manuscript for publication.
|
Detecting 2D bounding boxes in images is a widely studied problem and recent approaches are able to excel even on the most formidable datasets @cite_26 @cite_39 @cite_17 . Existing methods may broadly be divided into two main categories: detectors such as YOLO @cite_16 , SSD @cite_27 and RetinaNet @cite_40 which predict object bounding boxes directly and detectors such as Faster R-CNN @cite_1 and FPN @cite_23 which add an intermediate region proposal stage. To date the vast majority of 3D object detection methods have adopted the latter philosophy, in part due to the difficulty in mapping from fixed-sized regions in 3D space to variable-sized regions in the image space. We overcome this limitation via our OFT transform, allowing us to take advantage of the purported speed and accuracy benefits @cite_40 of a single-stage architecture.
|
{
"abstract": [
"The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.",
"The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https: github.com facebookresearch Detectron.",
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.",
"We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL",
""
],
"cite_N": [
"@cite_26",
"@cite_1",
"@cite_39",
"@cite_27",
"@cite_40",
"@cite_23",
"@cite_16",
"@cite_17"
],
"mid": [
"2117539524",
"2953106684",
"",
"2193145675",
"2884561390",
"2949533892",
"2796347433",
""
]
}
|
Orthographic Feature Transform for Monocular 3D Object Detection
|
The success of any autonomous agent is contingent on its ability to detect and localize the objects in its surrounding environment. Prediction, avoidance and path planning all depend on robust estimates of the 3D positions and dimensions of other entities in the scene. This has led to 3D bounding box detection emerging as an important problem in computer vision and robotics, particularly in the context of autonomous driving. To date the 3D object detection literature has been dominated by approaches which make use of rich LiDAR point clouds [37,33,15,27,5,6,22,1], while the performance of image-only methods, which lack the absolute depth information of LiDAR, lags significantly behind. Given the high cost of existing LiDAR units, the sparsity of LiDAR point clouds at long ranges, and the need for sensor redundancy, accurate 3D object detection from monocular images remains an important research objective. To this end, we present a novel 3D object detection algorithm which takes a single monocular RGB image as input and produces high quality 3D bounding boxes, achieving state-of-the-art performance among monocular methods on the challenging KITTI benchmark [8].
Figure 1. 3D bounding box detection from monocular images. The proposed system maps image-based features to an orthographic birds-eye-view and predicts confidence maps and bounding box offsets in this space. These outputs are then decoded via non-maximum suppression to yield discrete bounding box predictions.
Images are, in many senses, an extremely challenging modality. Perspective projection implies that the scale of a single object varies considerably with distance from the camera; its appearance can change drastically depending on the viewpoint; and distances in the 3D world cannot be inferred directly. These factors present enormous challenges to a monocular 3D object detection system. A far more innocuous representation is the orthographic birdseye-view map commonly employed in many LiDAR-based methods [37,33,1]. Under this representation, scale is homogeneous; appearance is largely viewpoint-independent; and distances between objects are meaningful. Our key insight therefore is that as much reasoning as possible should be performed in this orthographic space rather than directly on the pixel-based image domain. This insight proves essential to the success of our proposed system.
It is unclear, however, how such a representation could be constructed from a monocular image alone. We therefore introduce the orthographic feature transform (OFT): a differentiable transformation which maps a set of features extracted from a perspective RGB image to an orthographic birds-eye-view feature map. Crucially, we do not rely on any explicit notion of depth: rather our system builds up an internal representation which is able to determine which features from the image are relevant to each location on the birds-eye-view. We apply a deep convolutional neural network, the topdown network, in order to reason locally about the 3D configuration of the scene.
The main contributions of our work are as follows:
1. We introduce the orthographic feature transform (OFT) which maps perspective image-based features into an orthographic birds-eye-view, implemented efficiently using integral images for fast average pooling.
2. We describe a deep learning architecture for predicting 3D bounding boxes from monocular RGB images.
3. We highlight the importance of reasoning in 3D for the object detection task.
The system is evaluated on the challenging KITTI 3D object benchmark and achieves state-of-the-art results among monocular approaches.
3D Object Detection Architecture
In this section we describe our full approach for extracting 3D bounding boxes from monocular images. An overview of the system is illustrated in Figure 3. The algorithm comprises five main components:
1. A front-end ResNet [10] feature extractor which extracts multi-scale feature maps from the input image.
2. An orthographic feature transform (OFT) which maps the image-based feature maps into an orthographic birds-eye-view representation.
3. A topdown network, consisting of a series of ResNet residual units, which processes the birds-eye-view feature maps in a manner which is invariant to the perspective effects observed in the image.
4. A set of output heads which generate, for each object class and each location on the ground plane, a confidence score, position offset, dimension offset and an orientation vector.
5. A non-maximum suppression and decoding stage, which identifies peaks in the confidence maps and generates discrete bounding box predictions.
The remainder of this section will describe each of these components in detail.
Feature extraction
The first element of our architecture is a convolutional feature extractor which generates a hierarchy of multi-scale 2D feature maps from the raw input image. These features encode information about low-level structures in the image, which form the basic components used by the topdown network to construct an implicit 3D representation of the scene. The front-end network is also responsible for inferring depth information based on the size of image features since subsequent stages of the architecture aim to eliminate variance to scale.
Orthographic feature transform
In order to reason about the 3D world in the absence of perspective effects, we must first apply a mapping from feature maps extracted in the image space to orthographic feature maps in the world space, which we term the Orthographic Feature Transform (OFT).
The objective of the OFT is to populate the 3D voxel feature map g(x, y, z) ∈ R^n with relevant n-dimensional features from the image-based feature map f(u, v) ∈ R^n extracted by the front-end feature extractor. The voxel map is defined over a uniformly spaced 3D lattice G which is fixed to the ground plane a distance y_0 below the camera and has dimensions W, H, D and a voxel size of r. For a given voxel grid location (x, y, z) ∈ G, we obtain the voxel feature g(x, y, z) by accumulating features over the area of the image feature map f which corresponds to the voxel's 2D projection. In general each voxel, which is a cube of size r, will project to a hexagonal region in the image plane. We approximate this by a rectangular bounding box with top-left and bottom-right corners (u_1, v_1) and (u_2, v_2) which are given by
\[
u_1 = f\,\frac{x - 0.5r}{z + 0.5\frac{x}{|x|}r} + c_u, \qquad
u_2 = f\,\frac{x + 0.5r}{z - 0.5\frac{x}{|x|}r} + c_u,
\]
\[
v_1 = f\,\frac{y - 0.5r}{z + 0.5\frac{y}{|y|}r} + c_v, \qquad
v_2 = f\,\frac{y + 0.5r}{z - 0.5\frac{y}{|y|}r} + c_v \tag{1}
\]
where f is the camera focal length and (c_u, c_v) the principal point.
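A minimal NumPy sketch of this projection is given below; the handling of the degenerate case x = 0 (or y = 0) and the KITTI-like intrinsics in the example call are our own illustrative choices, not values from the paper.

```python
import numpy as np

def project_voxel(x, y, z, r, f, cu, cv):
    """Approximate image-plane bounding box (u1, v1, u2, v2) of a voxel of
    size r centred at (x, y, z) in camera coordinates, following Eq. (1).
    f is the focal length and (cu, cv) the principal point."""
    sx = np.sign(x) if x != 0 else 1.0   # x / |x|, with an arbitrary choice at x = 0
    sy = np.sign(y) if y != 0 else 1.0
    u1 = f * (x - 0.5 * r) / (z + 0.5 * sx * r) + cu
    u2 = f * (x + 0.5 * r) / (z - 0.5 * sx * r) + cu
    v1 = f * (y - 0.5 * r) / (z + 0.5 * sy * r) + cv
    v2 = f * (y + 0.5 * r) / (z - 0.5 * sy * r) + cv
    return u1, v1, u2, v2

# A 0.5 m voxel 10 m ahead and 2 m to the side of the camera,
# with illustrative KITTI-like intrinsics.
print(project_voxel(x=2.0, y=1.5, z=10.0, r=0.5, f=721.5, cu=609.6, cv=172.9))
```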
We can then assign a feature to the appropriate location in the voxel feature map g by average pooling over the projected voxel's bounding box in the image feature map f :
\[
g(x, y, z) = \frac{1}{(u_2 - u_1)(v_2 - v_1)} \sum_{u=u_1}^{u_2} \sum_{v=v_1}^{v_2} f(u, v) \tag{2}
\]
The resulting voxel feature map g already provides a representation of the scene which is free from the effects of perspective projection. However, deep neural networks which operate on large voxel grids are typically extremely memory intensive. Given that we are predominantly interested in applications such as autonomous driving where most objects are fixed to the 2D ground plane, we can make the problem more tractable by collapsing the 3D voxel feature map down to a third, two-dimensional representation which we term the orthographic feature map h(x, z). The orthographic feature map is obtained by summing voxel features along the vertical axis after multiplication with a set of learned weight matrices W(y) ∈ R^{n×n}:
\[
h(x, z) = \sum_{y = y_0}^{y_0 + H} W(y)\, g(x, y, z) \tag{3}
\]
Transforming to an intermediate voxel representation before collapsing to the final orthographic feature map has the advantage that the information about the vertical configuration of the scene is retained. This turns out to be essential for downstream tasks such as estimating the height and vertical position of object bounding boxes.
Fast average pooling with integral images
A typical voxel grid setting generates around 150k bounding boxes, which far exceeds the ∼2k regions of interest used by the Faster R-CNN [29] architecture, for example. To facilitate pooling over such a large number of regions, we make use of a fast average pooling operation based on integral images [32]. An integral image, or in this case integral feature map, F, is constructed from an input feature map f using the recursive relation
\[
F(u, v) = f(u, v) + F(u-1, v) + F(u, v-1) - F(u-1, v-1) \tag{4}
\]
Given the integral feature map F, the output feature g(x, y, z) corresponding to the region defined by bounding box coordinates (u_1, v_1) and (u_2, v_2) (see Equation 1) is given by
\[
g(x, y, z) = \frac{F(u_1, v_1) + F(u_2, v_2) - F(u_1, v_2) - F(u_2, v_1)}{(u_2 - u_1)(v_2 - v_1)} \tag{5}
\]
The complexity of this pooling operation is independent of the size of the individual regions, which makes it highly appropriate for our application where the size and shape of the regions varies considerably depending on whether the voxel is close to or far from the camera. It is also fully differentiable in terms of the original feature map f and so can be used as part of an end-to-end deep learning framework.
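The following sketch illustrates the integral-image pooling of Equations 4 and 5 in NumPy; it uses a zero-padded integral map and a half-open box convention, which is our own bookkeeping choice rather than the paper's exact indexing.

```python
import numpy as np

def integral_feature_map(f):
    """Integral image of an (H, W, n) feature map, cf. Eq. (4), with a zero
    row/column prepended so that box sums can be read off directly."""
    F = np.cumsum(np.cumsum(f, axis=0), axis=1)
    return np.pad(F, ((1, 0), (1, 0), (0, 0)))

def average_pool(F, u1, v1, u2, v2):
    """Mean of f over the box [u1, u2) x [v1, v2) using four lookups, cf. Eq. (5)."""
    area = (u2 - u1) * (v2 - v1)
    box_sum = F[v2, u2] - F[v1, u2] - F[v2, u1] + F[v1, u1]
    return box_sum / area

f = np.random.rand(96, 312, 8)          # toy (H, W, n) feature map
F = integral_feature_map(f)
# Pooling over an arbitrary box matches the brute-force mean.
print(np.allclose(average_pool(F, 10, 5, 40, 25), f[5:25, 10:40].mean(axis=(0, 1))))
```

The cost of each pooled region is four lookups regardless of its size, which is what makes the transform tractable for the ~150k projected voxel boxes mentioned above.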
Topdown network
A core contribution of this work is to emphasize the importance of reasoning in 3D for object recognition and detection in complex 3D scenes. In our architecture, this reasoning component is performed by a sub-network which we term the topdown network. This is a simple convolutional network with ResNet-style skip connections which operates on the 2D feature maps h generated by the previously described OFT stage. Since the filters of the topdown network are applied convolutionally, all processing is invariant to the location of the feature on the ground plane. This means that feature maps which are distant from the camera receive exactly the same treatment as those that are close, despite corresponding to a much smaller region of the image. The ambition is that the final feature representation will therefore capture information purely about the underlying 3D structure of the scene and not its 2D projection.
Confidence map prediction
Among both 2D and 3D approaches, detection is conventionally treated as a classification problem, with a cross entropy loss used to identify regions of the image which contain objects. In our application, however, we found it more effective to adopt the confidence map regression approach of Huang et al. [11]. The confidence map S(x, z) is a smooth function which indicates the probability that there exists an object with a bounding box centred on location (x, y_0, z), where y_0 is the distance of the ground plane below the camera. Given a set of N ground truth objects with bounding box centres p_i = [x_i, y_i, z_i], i = 1, …, N, we compute the ground truth confidence map as a smooth Gaussian region of width σ around the center of each object. The confidence at location (x, z) is given by
\[
S(x, z) = \max_i \exp\!\left( -\frac{(x_i - x)^2 + (z_i - z)^2}{2\sigma^2} \right) \tag{6}
\]
The confidence map prediction head of our network is trained via an ℓ1 loss to regress to the ground truth confidence for each location on the orthographic grid H. A well-documented challenge is that there are vastly fewer positive (high confidence) locations than negative ones, which leads to the negative component of the loss dominating optimization [31,18]. To overcome this we scale the loss corresponding to negative locations (which we define as those with S(x, z) < 0.05) by a constant factor of 10^{-2}.
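A small sketch of how the ground truth confidence map of Equation 6 could be constructed is shown below; the grid extents match the 80m×80m ground plane used later in the experiments, while the value of σ is an arbitrary placeholder since it is not specified in this text.

```python
import numpy as np

def confidence_map(object_centres, grid_x, grid_z, sigma=1.0):
    """Ground-truth confidence S(x, z) of Eq. (6): for every cell of the
    ground-plane grid, take the max over objects of a Gaussian centred on
    that object's (x, z) position. grid_x and grid_z are 1-D arrays of cell
    centre coordinates; object_centres is a list of (x, z) pairs."""
    X, Z = np.meshgrid(grid_x, grid_z, indexing="ij")
    S = np.zeros_like(X)
    for (xi, zi) in object_centres:
        S = np.maximum(S, np.exp(-((xi - X) ** 2 + (zi - Z) ** 2) / (2 * sigma ** 2)))
    return S

grid_x = np.arange(-40.0, 40.0, 0.5)   # 80 m wide grid at r = 0.5 m
grid_z = np.arange(0.0, 80.0, 0.5)
S = confidence_map([(2.0, 10.0), (-5.0, 35.0)], grid_x, grid_z)
print(S.shape, S.max())
```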
Localization and bounding box estimation
The confidence map S encodes a coarse approximation of the location of each object as a peak in the confidence score, which gives a position estimate accurate up to the resolution r of the feature maps. In order to localize each object more precisely, we append an additional network output head which predicts the relative offset Δ_pos from grid cell locations on the ground plane (x, y_0, z) to the center of the corresponding ground truth object p_i:
\[
\Delta_{\mathrm{pos}}(x, z) = \left[ \frac{x_i - x}{\sigma},\; \frac{y_i - y_0}{\sigma},\; \frac{z_i - z}{\sigma} \right] \tag{7}
\]
We use the same scale factor σ as described in Section 3.4 to normalize the position offsets within a sensible range. A ground truth object instance i is assigned to a grid location (x, z) if any part of the object's bounding box intersects the given grid cell. Cells which do not intersect any ground truth objects are ignored during training. In addition to localizing each object, we must also determine the size and orientation of each bounding box. We therefore introduce two further network outputs. The first, the dimension head, predicts the logarithmic scale offset Δ_dim between the assigned ground truth object i, with dimensions d_i = [w_i, h_i, l_i], and the mean dimensions d̄ = [w̄, h̄, l̄] over all objects of the given class.
\[
\Delta_{\mathrm{dim}}(x, z) = \left[ \log\frac{w_i}{\bar{w}},\; \log\frac{h_i}{\bar{h}},\; \log\frac{l_i}{\bar{l}} \right] \tag{8}
\]
The second, the orientation head, predicts the sine and cosine of the object's orientation θ_i about the y-axis:
\[
\Delta_{\mathrm{ang}}(x, z) = \left[ \sin\theta_i,\; \cos\theta_i \right] \tag{9}
\]
Note that since we are operating in the orthographic birds-eye-view space, we are able to predict the y-axis orientation θ directly, unlike other works, e.g. [23], which predict the so-called observation angle α to take into account the effects of perspective and relative viewpoint. The position offset Δ_pos, dimension offset Δ_dim and orientation vector Δ_ang are trained using an ℓ1 loss.
Non-maximum suppression
Similarly to other object detection algorithms, we apply a non-maximum suppression (NMS) stage to obtain a final discrete set of object predictions. In a conventional object detection setting this step can be expensive since it requires O(N^2) bounding box overlap computations. This is compounded by the fact that pairs of 3D boxes are not necessarily axis aligned, which makes the overlap computation more difficult compared to the 2D case. Fortunately, an additional benefit of the use of confidence maps in place of anchor box classification is that we can apply NMS in the more conventional image processing sense, i.e. searching for local maxima on the 2D confidence maps S. Here, the orthographic birds-eye-view again proves invaluable: the fact that two objects cannot occupy the same volume in the 3D world means that peaks on the confidence maps are naturally separated.
To alleviate the effects of noise in the predictions, we first smooth the confidence maps by applying a Gaussian kernel of width σ_NMS. A location (x_i, z_i) on the smoothed confidence map Ŝ is deemed to be a maximum if
\[
\hat{S}(x_i, z_i) \ge \hat{S}(x_i + m,\, z_i + n) \quad \forall\, m, n \in \{-1, 0, 1\} \tag{10}
\]
Of the produced peak locations, any with a confidence Ŝ(x_i, z_i) smaller than a given threshold t are eliminated. This results in the final set of predicted object instances, whose bounding box center p_i, dimensions d_i, and orientation θ_i are given by inverting the relationships in Equations 7, 8 and 9 respectively.
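A minimal sketch of this peak-finding step is given below using SciPy image filters; the smoothing width σ_NMS and the threshold t are placeholders, as their values are not stated here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def decode_peaks(S, sigma_nms=1.0, threshold=0.1):
    """Smooth the confidence map and keep cells that are greater than or
    equal to all 8 neighbours (Equation 10) and exceed the threshold t."""
    S_hat = gaussian_filter(S, sigma=sigma_nms)
    is_peak = S_hat >= maximum_filter(S_hat, size=3)
    return np.argwhere(is_peak & (S_hat > threshold))  # (x-index, z-index) pairs

S = np.zeros((160, 160))          # toy confidence map with two isolated peaks
S[20, 80], S[100, 40] = 1.0, 0.9
print(decode_peaks(S))            # prints the two peak locations
```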
Experiments
Experimental setup
Architecture For our front-end feature extractor we make use of a ResNet-18 network without bottleneck layers. We intentionally choose the front-end network to be relatively shallow, since we wish to put as much emphasis as possible on the 3D reasoning component of the model. We extract features immediately before the final three downsampling layers, resulting in a set of feature maps {f_s} at scales s of 1/8, 1/16 and 1/32 of the original input resolution. Convolutional layers with 1×1 kernels are used to map these feature maps to a common feature size of 256, before processing them via the orthographic feature transform to yield orthographic feature maps {h_s}. We use a voxel grid with dimensions 80m×4m×80m, which is sufficient to include all annotated instances in KITTI, and set the grid resolution r to be 0.5m. For the topdown network, we use a simple 16-layer ResNet without any downsampling or bottleneck units. The output heads each consist of a single 1×1 convolution layer. Throughout the model we replace all batch normalization [12] layers with group normalization [34], which has been found to perform better for training with small batch sizes.
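For reference, a sketch of how a voxel lattice with these dimensions could be laid out is shown below; the origin and centring of the grid relative to the camera, and the vertical sign convention, are our own assumptions since they are not fully specified here.

```python
import numpy as np

def make_voxel_grid(width=80.0, height=4.0, depth=80.0, r=0.5, y0=0.0):
    """Centres of a uniformly spaced voxel lattice of width x height x depth
    metres at resolution r, spanning [y0, y0 + height) vertically as in the
    summation limits of Eq. (3). y0 = 0 is purely illustrative."""
    xs = np.arange(-width / 2 + r / 2, width / 2, r)   # lateral, centred on the camera
    ys = np.arange(y0, y0 + height, r) + r / 2          # vertical slices
    zs = np.arange(r / 2, depth, r)                      # depth, in front of the camera
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    return np.stack([X, Y, Z], axis=-1)                  # shape (W/r, H/r, D/r, 3)

grid = make_voxel_grid()
print(grid.shape)                                         # (160, 8, 160, 3)
```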
Dataset We train and evaluate our method using the KITTI 3D object detection benchmark dataset [8]. For all experiments we follow the train-val split of Chen et al. [3] which divides the KITTI training set into 3712 training images and 3769 validation images.
Data augmentation Since our method relies on a fixed mapping from the image plane to the ground plane, we found that extensive data augmentation was essential for the network to learn robustly. We adopt three types of widely-used augmentations: random cropping, scaling and horizontal flipping, adjusting the camera calibration parameters f and (c_u, c_v) accordingly to reflect these perturbations.
Training procedure The model is trained using SGD for 600 epochs with a batch size of 8, momentum of 0.9 and a learning rate of 10^{-7}. Following [21], losses are summed rather than averaged, which avoids biasing the gradients towards examples with few object instances. The loss functions from the various output heads are combined using a simple equal weighting strategy.
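The calibration bookkeeping is not spelled out in the text, so the following is only one plausible sketch of how f and (c_u, c_v) might be updated for the three augmentations; all names and the example numbers are illustrative.

```python
import numpy as np

def adjust_intrinsics(f, cu, cv, crop_left=0, crop_top=0, scale=1.0,
                      flip=False, image_width=None):
    """One possible update of (f, cu, cv) for crop, uniform scale and
    horizontal flip. Cropping shifts the principal point, scaling rescales
    both the focal length and the principal point, and a flip mirrors the
    principal point about the augmented image width."""
    cu, cv = cu - crop_left, cv - crop_top            # crop: shift principal point
    f, cu, cv = f * scale, cu * scale, cv * scale     # scale: rescale f and (cu, cv)
    if flip:
        assert image_width is not None, "width of the augmented image is required"
        cu = image_width - 1 - cu                     # horizontal flip
    return f, cu, cv

print(adjust_intrinsics(721.5, 609.6, 172.9, crop_left=100, scale=0.8,
                        flip=True, image_width=round(0.8 * (1242 - 100))))
```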
Comparison to state-of-the-art
We evaluate our approach on two tasks from the KITTI 3D object detection benchmark. The 3D bounding box detection task requires that each predicted 3D bounding box should intersect a corresponding ground truth box by at least 70% in the case of cars and 50% for pedestrians and cyclists. The birds-eye-view detection task meanwhile is slightly more lenient, requiring the same amount of overlap between a 2D birds-eye-view projection of the predicted and ground truth bounding boxes on the ground plane. At the time of writing, the KITTI benchmark included only one published approach operating on monocular RGB images alone ( [24]), which we compare our method against in Table 1. We therefore perform additional evaluation on the KITTI validation split set out by Chen et al. (2016) [3]; the results of which are presented in Table 2. For monocular methods, performance on the pedestrian and cyclist classes is typically insufficient to obtain meaningful results and we therefore follow other works [3,4,24] and focus our evaluation on the car class only.
It can be seen from Tables 1 and 2 that our method is able to outperform all comparable (i.e. monocular only) methods by a considerable margin across both tasks and all difficulty criteria. The improvement is particularly marked on the hard evaluation category, which includes instances which are heavily occluded, truncated or far from the camera. We also show in Table 2 that our method performs competitively with the stereo approach of Chen et al. (2015) [4], achieving close to or in one case better performance than their 3DOP system. This is in spite of the fact that unlike [4], our method does not have access to any explicit knowledge of the depth of the scene.
Qualitative results
Comparison to Mono3D We provide a qualitative comparison of predictions generated by our approach and Mono3D [3] in Figure 4. A notable observation is that our system is able to reliably detect objects at a considerable distance from the camera. This is a common failure case among both 2D and 3D object detectors, and indeed many of the cases which are correctly identified by our system are overlooked by Mono3D. We argue that this ability to recognise objects at distance is a major strength of our system, and we explore this capacity further in Section 5.1. Further qualitative results are included in supplementary material.
Ground plane confidence maps A unique feature of our approach is that we operate largely in the orthographic birds-eye-view feature space. To illustrate this, Figure 5 shows examples of predicted confidence maps S(x, z) both in the topdown view and projected into the image on the ground plane. It can be seen that the predicted confidence maps are well localized around each object center.
Ablation study
A central claim of our approach is that reasoning in the orthographic birds-eye-view space significantly improves performance. To validate this claim, we perform an ablation study where we progressively remove layers from the topdown network. In the extreme case, when the depth of the topdown network is zero, the architecture is effectively reduced to RoI pooling [9] over projected bounding boxes, rendering it similar to R-CNN-based architectures. Figure 7 shows a plot of average precision against the total number of parameters for two different architectures.
The trend is clear: removing layers from the topdown network significantly reduces performance. Some of this decline in performance may be explained by the fact that reducing the size of the topdown network reduces the overall depth of the network, and therefore its representational power. However, as can be seen from Figure 7, adopting a shallow front-end (ResNet-18) with a large topdown network achieves significantly better performance than a deeper network (ResNet-34) without any topdown layers, despite the two architectures having roughly the same number of parameters. This strongly suggests that a significant part of the success of our architecture comes from its ability to reason in 3D, as afforded by the 2D convolution layers operating on the orthographic feature maps.
Figure 5. Examples of confidence maps generated by our approach, which we visualize both in birds-eye-view (right) and projected onto the ground plane in the image view (left). We use the pre-computed ground planes of [4] to obtain the road position: note that this is for visualization purposes only and the ground planes are not used elsewhere in our approach. Best viewed in color.
Discussion
Performance as a function of depth
Motivated by the qualitative results in Section 4.2, we wished to further quantify the ability of our system to detect and localize distant objects. Figure 8 plots performance of each system when evaluated only on objects which are at least the given distance away from the camera. Whilst we outperform Mono3D over all depths, it is also apparent that the performance of our system degrades much more slowly as we consider objects further from the camera. We believe that this is a key strength of our approach.
Evolution of confidence maps during training
While the confidence maps predicted by our network are not necessarily calibrated estimates of model certainty, observing their evolution over the course of training does give valuable insights into the learned representation. Figure 6 shows an example of a confidence map predicted by the network at various points during training. During the early stages of training, the network very quickly learns to identify regions of the image which contain objects, which can be seen by the fact that high confidence regions correspond to projection lines from the optical center at (0, 0) which intersect a ground truth object. However, there exists significant uncertainty about the depth of each object, leading to the predicted confidences being blurred out in the depth direction. This fits well with our intuition that for a monocular system depth estimation is significantly more challenging than recognition. As training progresses, the network is increasingly able to resolve the depth of the objects, producing sharper confidence regions clustered about the ground truth centers. It can be observed that even in the latter stages of training, there is considerably greater uncertainty in the depth of distant objects than that of nearby ones, evoking the well-known result from stereo that depth estimation error increases quadratically with distance.
Figure 8. Average BEV precision (val) as a function of the minimum distance of objects from the camera, for Mono3D [3] and OFT-Net (ours). We use an IoU threshold of 0.5 to better compare performance at large depths.
Conclusions
In this work we have presented a novel approach to monocular 3D object detection, based on the intuition that operating in the birds-eye-view domain alleviates many undesirable properties of images which make it difficult to infer the 3D configuration of the world. We have proposed a simple orthographic feature transform which transforms image-based features into this birds-eye-view representation, and described how to implement it efficiently using integral images. This was then incorporated into part of a deep learning pipeline, in which we particularly emphasized the importance of spatial reasoning in the form of a deep 2D convolutional network applied to the extracted birds-eye-view features. Finally, we experimentally validated our hypothesis that reasoning in the topdown space does achieve significantly better results, and demonstrated state-of-the-art performance on the KITTI 3D object benchmark.
| 4,139 |
1811.08188
|
2901707509
|
3D object detection from monocular images has proven to be an enormously challenging task, with the performance of leading systems not yet achieving even 10% of that of LiDAR-based counterparts. One explanation for this performance gap is that existing systems are entirely at the mercy of the perspective image-based representation, in which the appearance and scale of objects varies drastically with depth and meaningful distances are difficult to infer. In this work we argue that the ability to reason about the world in 3D is an essential element of the 3D object detection task. To this end, we introduce the orthographic feature transform, which enables us to escape the image domain by mapping image-based features into an orthographic 3D space. This allows us to reason holistically about the spatial configuration of the scene in a domain where scale is consistent and distances between objects are meaningful. We apply this transformation as part of an end-to-end deep learning architecture and achieve state-of-the-art performance on the KITTI 3D object benchmark. We will release full source code and pretrained models upon acceptance of this manuscript for publication.
|
Obtaining 3D bounding boxes from images, meanwhile, is a much more challenging problem on account of the absence of absolute depth information. Many approaches start from 2D bounding boxes extracted using standard detectors described above, upon which they either directly regress 3D pose parameters for each region @cite_34 @cite_15 @cite_11 @cite_3 or fit 3D templates to the image @cite_35 @cite_4 @cite_13 @cite_7 . Perhaps most closely related to our work is Mono3D @cite_24 which densely spans the 3D space with 3D bounding box proposals and then scores each using a variety of image-based features. Other works which explore the idea of dense 3D proposals in the world space are 3DOP @cite_14 and Pham and Jeon @cite_33 , which rely on explicit estimates of depth using stereo geometry. A major limitation of all the above works is that each region proposal or bounding box is treated independently, precluding any joint reasoning about the 3D configuration of the scene. Our method performs a similar feature aggregation step to @cite_24 , but applies a secondary convolutional network to the resulting proposals whilst retaining their spatial configuration.
|
{
"abstract": [
"",
"The goal of this paper is to generate high-quality 3D object proposals in the context of autonomous driving. Our method exploits stereo imagery to place proposals in the form of 3D bounding boxes. We formulate the problem as minimizing an energy function encoding object size priors, ground plane as well as several depth informed features that reason about free space, point cloud densities and distance to the ground. Our experiments show significant performance gains over existing RGB and RGB-D object proposal methods on the challenging KITTI benchmark. Combined with convolutional neural net (CNN) scoring, our approach outperforms all existing results on all three KITTI object classes.",
"",
"Object proposals have recently emerged as an essential cornerstone for object detection. The current state-of-the-art object detectors employ object proposals to detect objects within a modest set of candidate bounding box proposals instead of exhaustively searching across an image using the sliding window approach. However, achieving high recall and good localization with few proposals is still a challenging problem. The challenge becomes even more difficult in the context of autonomous driving, in which small objects, occlusion, shadows, and reflections usually occur. In this paper, we present a robust object proposals re-ranking algorithm that effectivity re-ranks candidates generated from a customized class-independent 3DOP (3D Object Proposals) method using a two-stream convolutional neural network (CNN). The goal is to ensure that those proposals that accurately cover the desired objects are amongst the few top-ranked candidates. The proposed algorithm, which we call DeepStereoOP, exploits not only RGB images as in the conventional CNN architecture, but also depth features including disparity map and distance to the ground. Experiments show that the proposed algorithm outperforms all existing object proposal algorithms on the challenging KITTI benchmark in terms of both recall and localization. Furthermore, the combination of DeepStereoOP and Fast R-CNN achieves one of the best detection results of all three KITTI object classes. HighlightsWe present a robust object proposals re-ranking algorithm for object detection in autonomous driving.Both RGB images and depth features are included in the proposed two-stream CNN architecture called DeepStereoOP.Initial object proposals are generated from a customized class-independent 3DOP method.Experiments show that the proposed algorithm outperforms all existing object proposals algorithms.The combination of DeepStereoOP and Fast R-CNN achieves one of the best detection results on KITTI benchmark.",
"",
"",
"The goal of this paper is to perform 3D object detection from a single monocular image in the domain of autonomous driving. Our method first aims to generate a set of candidate class-specific object proposals, which are then run through a standard CNN pipeline to obtain highquality object detections. The focus of this paper is on proposal generation. In particular, we propose an energy minimization approach that places object candidates in 3D using the fact that objects should be on the ground-plane. We then score each candidate box projected to the image plane via several intuitive potentials encoding semantic segmentation, contextual information, size and location priors and typical object shape. Our experimental evaluation demonstrates that our object proposal generation approach significantly outperforms all monocular approaches, and achieves the best detection performance on the challenging KITTI benchmark, among published monocular competitors.",
"For applications in navigation and robotics, estimating the 3D pose of objects is as important as detection. Many approaches to pose estimation rely on detecting or tracking parts or keypoints [11, 21]. In this paper we build on a recent state-of-the-art convolutional network for slidingwindow detection [10] to provide detection and rough pose estimation in a single shot, without intermediate stages of detecting parts or initial bounding boxes. While not the first system to treat pose estimation as a categorization problem, this is the first attempt to combine detection and pose estimation at the same level using a deep learning approach. The key to the architecture is a deep convolutional network where scores for the presence of an object category, the offset for its location, and the approximate pose are all estimated on a regular grid of locations in the image. The resulting system is as accurate as recent work on pose estimation (42.4 8 View mAVP on Pascal 3D+ [21] ) and significantly faster (46 frames per second (FPS) on a TITAN X GPU). This approach to detection and rough pose estimation is fast and accurate enough to be widely applied as a pre-processing step for tasks including high-accuracy pose estimation, object tracking and localization, and vSLAM.",
"We present a novel method for detecting 3D model instances and estimating their 6D poses from RGB data in a single shot. To this end, we extend the popular SSD paradigm to cover the full 6D pose space and train on synthetic model data only. Our approach competes or surpasses current state-of-the-art methods that leverage RGB-D data on multiple challenging datasets. Furthermore, our method produces these results at around 10Hz, which is many times faster than the related methods. For the sake of reproducibility, we make our trained networks and detection code publicly available.",
"",
""
],
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_4",
"@cite_33",
"@cite_7",
"@cite_3",
"@cite_24",
"@cite_15",
"@cite_34",
"@cite_13",
"@cite_11"
],
"mid": [
"",
"2184393491",
"",
"2589615404",
"",
"",
"2468368736",
"2523096747",
"2768840867",
"",
""
]
}
|
Orthographic Feature Transform for Monocular 3D Object Detection
|
The success of any autonomous agent is contingent on its ability to detect and localize the objects in its surrounding environment. Prediction, avoidance and path planning all depend on robust estimates of the 3D positions and dimensions of other entities in the scene. This has led to 3D bounding box detection emerging as an important problem in computer vision and robotics, particularly in the context of autonomous driving. To date the 3D object detection literature has been dominated by approaches which make use of rich LiDAR point clouds [37,33,15,27,5,6,22,1], while the performance of image-only methods, which lack the absolute depth information of LiDAR, lags significantly behind. Given the high cost of existing LiDAR units, the sparsity of LiDAR point clouds at long ranges, and the need for sensor redundancy, accurate 3D object detection from monocular images remains an important research objective. To this end, we present a novel 3D object detection algorithm which takes a single monocular RGB image as input and produces high quality 3D bounding boxes, achieving state-of-the-art performance among monocular methods on the challenging KITTI benchmark [8].
Figure 1. 3D bounding box detection from monocular images. The proposed system maps image-based features to an orthographic birds-eye-view and predicts confidence maps and bounding box offsets in this space. These outputs are then decoded via non-maximum suppression to yield discrete bounding box predictions.
Images are, in many senses, an extremely challenging modality. Perspective projection implies that the scale of a single object varies considerably with distance from the camera; its appearance can change drastically depending on the viewpoint; and distances in the 3D world cannot be inferred directly. These factors present enormous challenges to a monocular 3D object detection system. A far more innocuous representation is the orthographic birdseye-view map commonly employed in many LiDAR-based methods [37,33,1]. Under this representation, scale is homogeneous; appearance is largely viewpoint-independent; and distances between objects are meaningful. Our key insight therefore is that as much reasoning as possible should be performed in this orthographic space rather than directly on the pixel-based image domain. This insight proves essential to the success of our proposed system.
It is unclear, however, how such a representation could be constructed from a monocular image alone. We therefore introduce the orthographic feature transform (OFT): a differentiable transformation which maps a set of features extracted from a perspective RGB image to an orthographic birds-eye-view feature map. Crucially, we do not rely on any explicit notion of depth: rather our system builds up an internal representation which is able to determine which features from the image are relevant to each location on the birds-eye-view. We apply a deep convolutional neural network, the topdown network, in order to reason locally about the 3D configuration of the scene.
The main contributions of our work are as follows:
1. We introduce the orthographic feature transform (OFT) which maps perspective image-based features into an orthographic birds-eye-view, implemented efficiently using integral images for fast average pooling.
2. We describe a deep learning architecture for predicting 3D bounding boxes from monocular RGB images.
3. We highlight the importance of reasoning in 3D for the object detection task.
The system is evaluated on the challenging KITTI 3D object benchmark and achieves state-of-the-art results among monocular approaches.
3D Object Detection Architecture
In this section we describe our full approach for extracting 3D bounding boxes from monocular images. An overview of the system is illustrated in Figure 3. The algorithm comprises five main components:
1. A front-end ResNet [10] feature extractor which extracts multi-scale feature maps from the input image.
2. An orthographic feature transform (OFT) which maps the image-based feature maps into an orthographic birds-eye-view representation.
3. A topdown network, consisting of a series of ResNet residual units, which processes the birds-eye-view feature maps in a manner which is invariant to the perspective effects observed in the image.
4. A set of output heads which generate, for each object class and each location on the ground plane, a confidence score, position offset, dimension offset and an orientation vector.
5. A non-maximum suppression and decoding stage, which identifies peaks in the confidence maps and generates discrete bounding box predictions.
The remainder of this section will describe each of these components in detail.
Feature extraction
The first element of our architecture is a convolutional feature extractor which generates a hierarchy of multi-scale 2D feature maps from the raw input image. These features encode information about low-level structures in the image, which form the basic components used by the topdown network to construct an implicit 3D representation of the scene. The front-end network is also responsible for inferring depth information based on the size of image features since subsequent stages of the architecture aim to eliminate variance to scale.
Orthographic feature transform
In order to reason about the 3D world in the absence of perspective effects, we must first apply a mapping from feature maps extracted in the image space to orthographic feature maps in the world space, which we term the Orthographic Feature Transform (OFT).
The objective of the OFT is to populate the 3D voxel feature map g(x, y, z) ∈ R^n with relevant n-dimensional features from the image-based feature map f(u, v) ∈ R^n extracted by the front-end feature extractor. The voxel map is defined over a uniformly spaced 3D lattice G which is fixed to the ground plane a distance y_0 below the camera and has dimensions W, H, D and a voxel size of r. For a given voxel grid location (x, y, z) ∈ G, we obtain the voxel feature g(x, y, z) by accumulating features over the area of the image feature map f which corresponds to the voxel's 2D projection. In general each voxel, which is a cube of size r, will project to a hexagonal region in the image plane. We approximate this by a rectangular bounding box with top-left and bottom-right corners (u_1, v_1) and (u_2, v_2) which are given by
\[
u_1 = f\,\frac{x - 0.5r}{z + 0.5\frac{x}{|x|}r} + c_u, \qquad
u_2 = f\,\frac{x + 0.5r}{z - 0.5\frac{x}{|x|}r} + c_u,
\]
\[
v_1 = f\,\frac{y - 0.5r}{z + 0.5\frac{y}{|y|}r} + c_v, \qquad
v_2 = f\,\frac{y + 0.5r}{z - 0.5\frac{y}{|y|}r} + c_v \tag{1}
\]
where f is the camera focal length and (c_u, c_v) the principal point.
We can then assign a feature to the appropriate location in the voxel feature map g by average pooling over the projected voxel's bounding box in the image feature map f :
\[
g(x, y, z) = \frac{1}{(u_2 - u_1)(v_2 - v_1)} \sum_{u=u_1}^{u_2} \sum_{v=v_1}^{v_2} f(u, v) \tag{2}
\]
The resulting voxel feature map g already provides a representation of the scene which is free from the effects of perspective projection. However, deep neural networks which operate on large voxel grids are typically extremely memory intensive. Given that we are predominantly interested in applications such as autonomous driving where most objects are fixed to the 2D ground plane, we can make the problem more tractable by collapsing the 3D voxel feature map down to a third, two-dimensional representation which we term the orthographic feature map h(x, z). The orthographic feature map is obtained by summing voxel features along the vertical axis after multiplication with a set of learned weight matrices W(y) ∈ R^{n×n}:
\[
h(x, z) = \sum_{y = y_0}^{y_0 + H} W(y)\, g(x, y, z) \tag{3}
\]
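A compact NumPy sketch of this collapse operation (Equation 3) is shown below; the toy grid and channel sizes are much smaller than those used in the experiments, and the weight matrices are random placeholders for the learned W(y).

```python
import numpy as np

def collapse_to_orthographic(g, W_y):
    """Collapse a voxel feature map g of shape (X, Y, Z, n) to a birds-eye-view
    map h of shape (X, Z, n) by summing over the vertical axis after applying a
    learned per-height weight matrix W(y), as in Eq. (3)."""
    X, Y, Z, n = g.shape
    assert W_y.shape == (Y, n, n)
    # For every ground-plane cell (x, z): sum_y  W(y) @ g(x, y, z)
    return np.einsum("yij,xyzj->xzi", W_y, g)

g = np.random.rand(40, 8, 40, 16)     # toy voxel grid: 40 x 8 x 40 cells, 16 channels
W_y = np.random.rand(8, 16, 16)       # one n x n matrix per height slice
h = collapse_to_orthographic(g, W_y)
print(h.shape)                        # (40, 40, 16)
```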
Transforming to an intermediate voxel representation before collapsing to the final orthographic feature map has the advantage that the information about the vertical configuration of the scene is retained. This turns out to be essential for downstream tasks such as estimating the height and vertical position of object bounding boxes.
Fast average pooling with integral images
A typical voxel grid setting generates around 150k bounding boxes, which far exceeds the ∼2k regions of interest used by the Faster R-CNN [29] architecture, for example. To facilitate pooling over such a large number of regions, we make use of a fast average pooling operation based on integral images [32]. An integral image, or in this case integral feature map, F, is constructed from an input feature map f using the recursive relation
\[
F(u, v) = f(u, v) + F(u-1, v) + F(u, v-1) - F(u-1, v-1) \tag{4}
\]
Given the integral feature map F, the output feature g(x, y, z) corresponding to the region defined by bounding box coordinates (u_1, v_1) and (u_2, v_2) (see Equation 1) is given by
\[
g(x, y, z) = \frac{F(u_1, v_1) + F(u_2, v_2) - F(u_1, v_2) - F(u_2, v_1)}{(u_2 - u_1)(v_2 - v_1)} \tag{5}
\]
The complexity of this pooling operation is independent of the size of the individual regions, which makes it highly appropriate for our application where the size and shape of the regions varies considerably depending on whether the voxel is close to or far from the camera. It is also fully differentiable in terms of the original feature map f and so can be used as part of an end-to-end deep learning framework.
Topdown network
A core contribution of this work is to emphasize the importance of reasoning in 3D for object recognition and detection in complex 3D scenes. In our architecture, this reasoning component is performed by a sub-network which we term the topdown network. This is a simple convolutional network with ResNet-style skip connections which operates on the 2D feature maps h generated by the previously described OFT stage. Since the filters of the topdown network are applied convolutionally, all processing is invariant to the location of the feature on the ground plane. This means that feature maps which are distant from the camera receive exactly the same treatment as those that are close, despite corresponding to a much smaller region of the image. The ambition is that the final feature representation will therefore capture information purely about the underlying 3D structure of the scene and not its 2D projection.
Confidence map prediction
Among both 2D and 3D approaches, detection is conventionally treated as a classification problem, with a cross entropy loss used to identify regions of the image which contain objects. In our application, however, we found it more effective to adopt the confidence map regression approach of Huang et al. [11]. The confidence map S(x, z) is a smooth function which indicates the probability that there exists an object with a bounding box centred on location (x, y_0, z), where y_0 is the distance of the ground plane below the camera. Given a set of N ground truth objects with bounding box centres p_i = [x_i, y_i, z_i], i = 1, …, N, we compute the ground truth confidence map as a smooth Gaussian region of width σ around the center of each object. The confidence at location (x, z) is given by
S(x, z) = max i exp − (x i − x) 2 + (z i − z) 2 2σ 2 .(6)
The confidence map prediction head of our network is trained via an 1 loss to regress to the ground truth confidence for each location on the orthographic grid H. A welldocumented challenge is that there are vastly fewer positive (high confidence) locations than negative ones, which leads to the negative component of the loss dominating optimization [31,18]. To overcome this we scale the loss corresponding to negative locations (which we define as those with S(x, z) < 0.05) by a constant factor of 10 −2 .
Localization and bounding box estimation
The confidence map S encodes a coarse approximation of the location of each object as a peak in the confidence score, which gives a position estimate accurate up to the resolution r of the feature maps. In order to localize each object more precisely, we append an additional network output head which predicts the relative offset ∆ pos from grid cell locations on the ground plane (x, y 0 , z) to the center of the corresponding ground truth object p i :
∆ pos (x, z) = xi−x σ yi−y0 σ zi−z σ(7)
We use the same scale factor σ as described in Section 3.4 to normalize the position offsets within a sensible range. A ground truth object instance i is assigned to a grid location (x, z) if any part of the object's bounding box intersects the given grid cell. Cells which do not intersect any ground truth objects are ignored during training. In addition to localizing each object, we must also determine the size and orientation of each bounding box. We therefore introduce two further network outputs. The first, the dimension head, predicts the logarithmic scale offset ∆ dim between the assigned ground truth object i with dimensions d i = w i h i l i and the mean dimensions d = whl over all objects of the given class.
∆ dim (x, z) = log wī w log hī h log lī l (8)
The second, the orientation head, predicts the sine and cosine of the objects orientation θ i about the y-axis:
∆ ang (x, z) = sin θ i cos θ i(9)
Note that since we are operating in the orthographic birdseye-view space, we are able to predict the y-axis orientation θ directly, unlike other works e.g. [23] which predict the so-called observation angle α to take into account the effects of perspective and relative viewpoint. The position offset ∆ pos , dimension offset ∆ dim and orientation vector ∆ ang are trained using an 1 loss.
Non-maximum suppression
Similarly to other object detection algorithms, we apply a non-maximum suppression (NMS) stage to obtain a final discrete set of object predictions. In a conventional object detection setting this step can be expensive since it requires O(N 2 ) bounding box overlap computations. This is compounded by the fact that pairs of 3D boxes are not necessarily axis aligned, which makes the overlap computation more difficult compared to the 2D case. Fortunately, an additional benefit of the use of confidence maps in place of anchor box classification is that we can apply NMS in the more conventional image processing sense, i.e. searching for local maxima on the 2D confidence maps S. Here, the orthographic birds-eye-view again proves invaluable: the fact that two objects cannot occupy the same volume in the 3D world means that peaks on the confidence maps are naturally separated.
To alleviate the effects of noise in the predictions, we first smooth the confidence maps by applying a Gaussian kernel with width σ N M S . A location (x i , z i ) on the smoothed confidence mapŜ is deemed to be a maximum if
S(x i , z i ) ≥Ŝ(x i +m, z i +n) ∀m, n ∈ {−1, 0, 1}. (10)
Of the produced peak locations, any with a confidence S(x i , y i ) smaller than a given threshold t are eliminated. This results in the final set of predicted object instances, whose bounding box center p i , dimensions d i , and orientation θ i , are given by inverting the relationships in Equations 7, 8 and 9 respectively.
Experiments
Experimental setup
Architecture For our front-end feature extractor we make use of a ResNet-18 network without bottleneck layers. We intentionally choose the front-end network to be relatively shallow, since we wish to put as much emphasis as possible on the 3D reasoning component of the model. We extract features immediately before the final three downsampling layers, resulting in a set of feature maps {f s } at scales s of 1/8, 1/16 and 1/32 of the original input resolution. Convolutional layers with 1×1 kernels are used to map these feature maps to a common feature size of 256, before processing them via the orthographic feature transform to yield orthographic feature maps {h s }. We use a voxel grid with dimensions 80m×4m×80m, which is sufficient to include all annotated instances in KITTI, and set the grid resolution r to be 0.5m. For the topdown network, we use a simple 16-layer ResNet without any downsampling or bottleneck units. The output heads each consist of a single 1×1 convolution layer. Throughout the model we replace all batch normalization [12] layers with group normalization [34] which has been found to perform better for training with small batch sizes.
Dataset We train and evaluate our method using the KITTI 3D object detection benchmark dataset [8]. For all experiments we follow the train-val split of Chen et al. [3] which divides the KITTI training set into 3712 training images and 3769 validation images.
Data augmentation Since our method relies on a fixed mapping from the image plane to the ground plane, we found that extensive data augmentation was essential for the network to learn robustly. We adopt three types of widelyused augmentations: random cropping, scaling and horizontal flipping, adjusting the camera calibration parameters f and (c u , c v ) accordingly to reflect these perturbations. Training procedure The model is trained using SGD for 600 epochs with a batch size of 8, momentum of 0.9 and learning rate of 10 −7 . Following [21], losses are summed rather than averaged, which avoids biasing the gradients towards examples with few object instances. The loss functions from the various output heads are combined using a simple equal weighting strategy.
Comparison to state-of-the-art
We evaluate our approach on two tasks from the KITTI 3D object detection benchmark. The 3D bounding box detection task requires that each predicted 3D bounding box should intersect a corresponding ground truth box by at least 70% in the case of cars and 50% for pedestrians and cyclists. The birds-eye-view detection task meanwhile is slightly more lenient, requiring the same amount of overlap between a 2D birds-eye-view projection of the predicted and ground truth bounding boxes on the ground plane. At the time of writing, the KITTI benchmark included only one published approach operating on monocular RGB images alone ( [24]), which we compare our method against in Table 1. We therefore perform additional evaluation on the KITTI validation split set out by Chen et al. (2016) [3]; the results of which are presented in Table 2. For monocular methods, performance on the pedestrian and cyclist classes is typically insufficient to obtain meaningful results and we therefore follow other works [3,4,24] and focus our evaluation on the car class only.
It can be seen from Tables 1 and 2 that our method is able to outperform all comparable (i.e. monocular only) methods by a considerable margin across both tasks and all difficulty criteria. The improvement is particularly marked on the hard evaluation category, which includes instances which are heavily occluded, truncated or far from the camera. We also show in Table 2 that our method performs competitively with the stereo approach of Chen et al. (2015) [4], achieving close to or in one case better performance than their 3DOP system. This is in spite of the fact that unlike [4], our method does not have access to any explicit knowledge of the depth of the scene.
Qualitative results
Comparison to Mono3D We provide a qualitative comparison of predictions generated by our approach and Mono3D [3] in Figure 4. A notable observation is that our system is able to reliably detect objects at a considerable distance from the camera. This is a common failure case among both 2D and 3D object detectors, and indeed many of the cases which are correctly identified by our system are overlooked by Mono3D. We argue that this ability to recognise objects at distance is a major strength of our system, and we explore this capacity further in Section 5.1. Further qualitative results are included in supplementary material.
Ground plane confidence maps A unique feature of our approach is that we operate largely in the orthographic birds-eye-view feature space. To illustrate this, Figure 5 shows examples of predicted confidence maps S(x, z) both in the topdown view and projected into the image on the ground plane. It can be seen that the predicted confidence maps are well localized around each object center.
Ablation study
A central claim of our approach is that reasoning in the orthographic birds-eye-view space significantly improves performance. To validate this claim, we perform an ablation study where we progressively remove layers from the topdown network. In the extreme case, when the depth of the topdown network is zero, the architecture is effectively reduced to RoI pooling [9] over projected bounding boxes, rendering it similar to R-CNN-based architectures. Figure 7 shows a plot of average precision against the total number of parameters for two different architectures.
The trend is clear: removing layers from the topdown network significantly reduces performance. Some of this Figure 5. Examples of confidence maps generated by our approach, which we visualize both in birds-eye-view (right) and projected onto the ground plane in the image view (left). We use the pre-computed ground planes of [4] to obtain the road position: note that this is for visualization purposes only and the ground planes are not used elsewehere in our approach. Best viewed in color. decline in performance may be explained by the fact that reducing the size of the topdown network reduces the overall depth of the network, and therefore its representational power. However, as can be seen from Figure 7, adopting a shallow front-end (ResNet-18) with a large topdown network achieves significantly better performance than a deeper network (ResNet-34) without any topdown layers, despite the two architectures having roughly the same number of parameters. This strongly suggests that a significant part of the success of our architecture comes from its ability to reason in 3D, as afforded by the 2D convolution layers operating on the orthographic feature maps.
Discussion
Performance as a function of depth
Motivated by the qualitative results in Section 4.2, we wished to further quantify the ability of our system to detect and localize distant objects. Figure 8 plots performance of each system when evaluated only on objects which are at least the given distance away from the camera. Whilst we outperform Mono3D over all depths, it is also apparent that the performance of our system degrades much more slowly as we consider objects further from the camera. We believe that this is a key strength of our approach.
Evolution of confidence maps during training
While the confidence maps predicted by our network are not necessarily calibrated estimates of model certainty, observing their evolution over the course of training does give valuable insights into the learned representation. Figure 6 shows an example of a confidence map predicted by the network at various points during training. During the early Mono3D [3] OFT-Net (Ours) Figure 8. Average BEV precision (val) as a function of the minimum distance of objects from the camera. We use an IoU threshold of 0.5 to better compare performance at large depths. stages of training, the network very quickly learns to identify regions of the image which contain objects, which can be seen by the fact that high confidence regions correspond to projection lines from the optical center at (0, 0) which intersect a ground truth object. However, there exists significant uncertainty about the depth of each object, leading to the predicted confidences being blurred out in the depth direction. This fits well with our intuition that for a monocular system depth estimation is significantly more challenging than recognition. As training progresses, the network is increasingly able to resolve the depth of the objects, producing sharper confidence regions clustered about the ground truth centers. It can be observed that even in the latter stages of training, there is considerably greater uncertainty in the depth of distant objects than that of nearby ones, evoking the well-known result from stereo that depth estimation error increases quadratically with distance.
Conclusions
In this work we have presented a novel approach to monocular 3D object detection, based on the intuition that operating in the birds-eye-view domain alleviates many undesirable properties of images which make it difficult to infer the 3D configuration of the world. We have proposed a simple orthographic feature transform which transforms image-based features into this birds-eye-view representation, and described how to implement it efficiently using integral images. This was then incorporated into part of a deep learning pipeline, in which we particularly emphasized the importance of spatial reasoning in the form of a deep 2D convolutional network applied to the extracted birds-eyeview features. Finally, we experimentally validated our hypothesis that reasoning in the topdown space does achieve significantly better results, and demonstrated state-of-theart performance on the KITTI 3D object benchmark.
| 4,139 |
1811.08188
|
2901707509
|
3D object detection from monocular images has proven to be an enormously challenging task, with the performance of leading systems not yet achieving even 10% of that of LiDAR-based counterparts. One explanation for this performance gap is that existing systems are entirely at the mercy of the perspective image-based representation, in which the appearance and scale of objects varies drastically with depth and meaningful distances are difficult to infer. In this work we argue that the ability to reason about the world in 3D is an essential element of the 3D object detection task. To this end, we introduce the orthographic feature transform, which enables us to escape the image domain by mapping image-based features into an orthographic 3D space. This allows us to reason holistically about the spatial configuration of the scene in a domain where scale is consistent and distances between objects are meaningful. We apply this transformation as part of an end-to-end deep learning architecture and achieve state-of-the-art performance on the KITTI 3D object benchmark. We will release full source code and pretrained models upon acceptance of this manuscript for publication.
|
Integral images have been fundamentally associated with object detection ever since their introduction in the seminal work of Viola and Jones @cite_22 . They have formed an important component in many contemporary 3D object detection approaches including AVOD @cite_6 , MV3D @cite_2 , Mono3D @cite_24 and 3DOP @cite_14 . In all of these cases, however, integral images do not backpropagate gradients or form part of a fully end-to-end deep learning architecture. To our knowledge, the only prior work to do so is that of Kasagi et al. @cite_31 , which combines a convolutional layer and an average pooling layer to reduce computational cost.
|
{
"abstract": [
"The goal of this paper is to generate high-quality 3D object proposals in the context of autonomous driving. Our method exploits stereo imagery to place proposals in the form of 3D bounding boxes. We formulate the problem as minimizing an energy function encoding object size priors, ground plane as well as several depth informed features that reason about free space, point cloud densities and distance to the ground. Our experiments show significant performance gains over existing RGB and RGB-D object proposal methods on the challenging KITTI benchmark. Combined with convolutional neural net (CNN) scoring, our approach outperforms all existing results on all three KITTI object classes.",
"This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the \"integral image\" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a \"cascade\" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection.",
"We present AVOD, an Aggregate View Object Detection network for autonomous driving scenarios. The proposed neural network architecture uses LIDAR point clouds and RGB images to generate features that are shared by two subnetworks: a region proposal network (RPN) and a second stage detector network. The proposed RPN uses a novel architecture capable of performing multimodal feature fusion on high resolution feature maps to generate reliable 3D object proposals for multiple object classes in road scenes. Using these proposals, the second stage detection network performs accurate oriented 3D bounding box regression and category classification to predict the extents, orientation, and classification of objects in 3D space. Our proposed architecture is shown to produce state of the art results on the KITTI 3D object detection benchmark while running in real time with a low memory footprint, making it a suitable candidate for deployment on autonomous vehicles. Code is at: this https URL",
"The goal of this paper is to perform 3D object detection from a single monocular image in the domain of autonomous driving. Our method first aims to generate a set of candidate class-specific object proposals, which are then run through a standard CNN pipeline to obtain highquality object detections. The focus of this paper is on proposal generation. In particular, we propose an energy minimization approach that places object candidates in 3D using the fact that objects should be on the ground-plane. We then score each candidate box projected to the image plane via several intuitive potentials encoding semantic segmentation, contextual information, size and location priors and typical object shape. Our experimental evaluation demonstrates that our object proposal generation approach significantly outperforms all monocular approaches, and achieves the best detection performance on the challenging KITTI benchmark, among published monocular competitors.",
"This paper aims at high-accuracy 3D object detection in autonomous driving scenario. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both LIDAR point cloud and RGB images as input and predicts oriented 3D bounding boxes. We encode the sparse 3D point cloud with a compact multi-view representation. The network is composed of two subnetworks: one for 3D object proposal generation and another for multi-view feature fusion. The proposal network generates 3D candidate boxes efficiently from the bird's eye view representation of 3D point cloud. We design a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths. Experiments on the challenging KITTI benchmark show that our approach outperforms the state-of-the-art by around 25 and 30 AP on the tasks of 3D localization and 3D detection. In addition, for 2D detection, our approach obtains 10.3 higher AP than the state-of-the-art on the hard data among the LIDAR-based methods.",
""
],
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_6",
"@cite_24",
"@cite_2",
"@cite_31"
],
"mid": [
"2184393491",
"2164598857",
"2774996270",
"2468368736",
"2950952351",
""
]
}
|
Orthographic Feature Transform for Monocular 3D Object Detection
|
The success of any autonomous agent is contingent on its ability to detect and localize the objects in its surrounding environment. Prediction, avoidance and path planning all depend on robust estimates of the 3D positions and dimensions of other entities in the scene. This has led to 3D bounding box detection emerging as an important problem in computer vision and robotics, particularly in the context of autonomous driving. To date the 3D object detection literature has been dominated by approaches which make use of rich LiDAR point clouds [37,33,15,27,5,6,22,1], while the performance of image-only methods, which lack the absolute depth information of LiDAR, lags significantly behind. Given the high cost of existing LiDAR units, the sparsity of LiDAR point clouds at long ranges, and the need for sensor redundancy, accurate 3D object detection from monocular images remains an important research objective. To this end, we present a novel 3D object detection algorithm which takes a single monocular RGB image as input and produces high quality 3D bounding boxes, achieving state-of-the-art performance among monocular methods on the challenging KITTI benchmark [8].
(Figure 1 caption: 3D bounding box detection from monocular images. The proposed system maps image-based features to an orthographic birds-eye-view and predicts confidence maps and bounding box offsets in this space. These outputs are then decoded via non-maximum suppression to yield discrete bounding box predictions.)
Images are, in many senses, an extremely challenging modality. Perspective projection implies that the scale of a single object varies considerably with distance from the camera; its appearance can change drastically depending on the viewpoint; and distances in the 3D world cannot be inferred directly. These factors present enormous challenges to a monocular 3D object detection system. A far more innocuous representation is the orthographic birds-eye-view map commonly employed in many LiDAR-based methods [37,33,1]. Under this representation, scale is homogeneous; appearance is largely viewpoint-independent; and distances between objects are meaningful. Our key insight therefore is that as much reasoning as possible should be performed in this orthographic space rather than directly on the pixel-based image domain. This insight proves essential to the success of our proposed system.
It is unclear, however, how such a representation could be constructed from a monocular image alone. We therefore introduce the orthographic feature transform (OFT): a differentiable transformation which maps a set of features extracted from a perspective RGB image to an orthographic birds-eye-view feature map. Crucially, we do not rely on any explicit notion of depth: rather our system builds up an internal representation which is able to determine which features from the image are relevant to each location on the birds-eye-view. We apply a deep convolutional neural network, the topdown network, in order to reason locally about the 3D configuration of the scene.
The main contributions of our work are as follows:
1. We introduce the orthographic feature transform (OFT) which maps perspective image-based features into an orthographic birds-eye-view, implemented efficiently using integral images for fast average pooling.
2. We describe a deep learning architecture for predicting 3D bounding boxes from monocular RGB images.
3. We highlight the importance of reasoning in 3D for the object detection task.
The system is evaluated on the challenging KITTI 3D object benchmark and achieves state-of-the-art results among monocular approaches.
3D Object Detection Architecture
In this section we describe our full approach for extracting 3D bounding boxes from monocular images. An overview of the system is illustrated in Figure 3. The algorithm comprises five main components:
1. A front-end ResNet [10] feature extractor which extracts multi-scale feature maps from the input image.
2. An orthographic feature transform (OFT) which maps the perspective image-based feature maps into the orthographic birds-eye-view representation.
3. A topdown network, consisting of a series of ResNet residual units, which processes the birds-eye-view feature maps in a manner which is invariant to the perspective effects observed in the image.
4. A set of output heads which generate, for each object class and each location on the ground plane, a confidence score, position offset, dimension offset and an orientation vector.
5. A non-maximum suppression and decoding stage, which identifies peaks in the confidence maps and generates discrete bounding box predictions.
The remainder of this section will describe each of these components in detail.
Feature extraction
The first element of our architecture is a convolutional feature extractor which generates a hierarchy of multi-scale 2D feature maps from the raw input image. These features encode information about low-level structures in the image, which form the basic components used by the topdown network to construct an implicit 3D representation of the scene. The front-end network is also responsible for inferring depth information based on the size of image features since subsequent stages of the architecture aim to eliminate variance to scale.
Orthographic feature transform
In order to reason about the 3D world in the absence of perspective effects, we must first apply a mapping from feature maps extracted in the image space to orthographic feature maps in the world space, which we term the Orthographic Feature Transform (OFT).
The objective of the OFT is to populate the 3D voxel feature map g(x, y, z) ∈ R^n with relevant n-dimensional features from the image-based feature map f(u, v) ∈ R^n extracted by the front-end feature extractor. The voxel map is defined over a uniformly spaced 3D lattice G which is fixed to the ground plane a distance y_0 below the camera and has dimensions W, H, D and a voxel size of r. For a given voxel grid location (x, y, z) ∈ G, we obtain the voxel feature g(x, y, z) by accumulating features over the area of the image feature map f which corresponds to the voxel's 2D projection. In general each voxel, which is a cube of size r, will project to a hexagonal region in the image plane. We approximate this by a rectangular bounding box with top-left and bottom-right corners (u_1, v_1) and (u_2, v_2) which are given by
u_1 = f \frac{x - 0.5r}{z + 0.5\frac{x}{|x|}r} + c_u, \quad u_2 = f \frac{x + 0.5r}{z - 0.5\frac{x}{|x|}r} + c_u, \quad v_1 = f \frac{y - 0.5r}{z + 0.5\frac{y}{|y|}r} + c_v, \quad v_2 = f \frac{y + 0.5r}{z - 0.5\frac{y}{|y|}r} + c_v    (1)
where f is the camera focal length and (c_u, c_v) the principal point.
We can then assign a feature to the appropriate location in the voxel feature map g by average pooling over the projected voxel's bounding box in the image feature map f :
g(x, y, z) = \frac{1}{(u_2 - u_1)(v_2 - v_1)} \sum_{u=u_1}^{u_2} \sum_{v=v_1}^{v_2} f(u, v)    (2)
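To make Equations 1 and 2 concrete, the following is a minimal NumPy sketch of projecting one voxel to its approximate image-plane bounding box and average-pooling the front-end features over it; the function names, the sign guard at x = 0 and the clamping to the image bounds are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def voxel_to_bbox(x, y, z, r, f, cu, cv):
    """Project a voxel of size r centred at (x, y, z) to an approximate
    image-plane bounding box (u1, v1, u2, v2) following Equation (1)."""
    sx = np.sign(x) if x != 0 else 1.0   # x / |x|, guarded at x == 0
    sy = np.sign(y) if y != 0 else 1.0   # y / |y|
    u1 = f * (x - 0.5 * r) / (z + 0.5 * sx * r) + cu
    u2 = f * (x + 0.5 * r) / (z - 0.5 * sx * r) + cu
    v1 = f * (y - 0.5 * r) / (z + 0.5 * sy * r) + cv
    v2 = f * (y + 0.5 * r) / (z - 0.5 * sy * r) + cv
    return u1, v1, u2, v2

def pool_voxel_feature(feat, bbox):
    """Naive average pooling of an (H, W, C) feature map over the projected
    box, as in Equation (2); integral images make this much faster (Eq. 5)."""
    H, W, _ = feat.shape
    u1, v1, u2, v2 = bbox
    u1, u2 = sorted((int(np.clip(u1, 0, W - 1)), int(np.clip(u2, 0, W - 1))))
    v1, v2 = sorted((int(np.clip(v1, 0, H - 1)), int(np.clip(v2, 0, H - 1))))
    region = feat[v1:v2 + 1, u1:u2 + 1]
    return region.reshape(-1, feat.shape[-1]).mean(axis=0)

# Example: one grid cell 20 m in front of a camera with f = 720 px.
feat = np.random.rand(96, 312, 256).astype(np.float32)
bbox = voxel_to_bbox(x=2.0, y=1.0, z=20.0, r=0.5, f=720.0, cu=156.0, cv=48.0)
g_xyz = pool_voxel_feature(feat, bbox)   # one n-dimensional voxel feature
```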
The resulting voxel feature map g already provides a representation of the scene which is free from the effects of perspective projection. However deep neural networks which operate on large voxel grids are typically extremely memory intensive. Given that we are predominantly interested in applications such as autonomous driving where most objects are fixed to the 2D ground plane, we can make the problem more tractable by collapsing the 3D voxel feature map down to a third, two-dimensional representation which we term the orthographic feature map h(x, z). The orthographic feature map is obtained by summing voxel features along the vertical axis after multiplication with a set of learned weight matrices W (y) ∈ R n×n :
h(x, z) = \sum_{y=y_0}^{y_0 + H} W(y)\, g(x, y, z)    (3)
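A minimal PyTorch sketch of the vertical collapse in Equation 3, applying a separate learned n×n matrix at each height slice before summing over the vertical axis; the tensor layout and the reduced channel count are assumptions made purely to keep the example light.

```python
import torch
import torch.nn as nn

class VerticalCollapse(nn.Module):
    """Collapse a voxel feature map g of shape (B, n, Y, X, Z) into an
    orthographic feature map h of shape (B, n, X, Z), as in Equation (3)."""
    def __init__(self, n_channels, n_heights):
        super().__init__()
        # one learned n x n weight matrix W(y) per height slice
        self.weights = nn.Parameter(torch.randn(n_heights, n_channels, n_channels) * 0.01)

    def forward(self, g):
        # multiply the channel dimension of each height slice by W(y), then sum over y
        return torch.einsum('ymn,bnyxz->bmxz', self.weights, g)

collapse = VerticalCollapse(n_channels=64, n_heights=8)
g = torch.randn(1, 64, 8, 160, 160)   # e.g. a 4 m tall grid at r = 0.5 m -> 8 height bins
h = collapse(g)                        # (1, 64, 160, 160)
```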
Transforming to an intermediate voxel representation before collapsing to the final orthographic feature map has the advantage that the information about the vertical configuration of the scene is retained. This turns out to be essential for downstream tasks such as estimating the height and vertical position of object bounding boxes.
Fast average pooling with integral images
A typical voxel grid setting generates around 150k bounding boxes, which far exceeds the ∼2k regions of interest used by the Faster R-CNN [29] architecture, for example. To facilitate pooling over such a large number of regions, we make use of a fast average pooling operation based on integral images [32]. An integral image, or in this case integral feature map, F, is constructed from an input feature map f using the recursive relation
F(u, v) = f(u, v) + F(u-1, v) + F(u, v-1) - F(u-1, v-1).    (4)
Given the integral feature map F, the output feature g(x, y, z) corresponding to the region defined by bounding box coordinates (u_1, v_1) and (u_2, v_2) (see Equation 1), is given by
g(x, y, z) = \frac{F(u_1, v_1) + F(u_2, v_2) - F(u_1, v_2) - F(u_2, v_1)}{(u_2 - u_1)(v_2 - v_1)}    (5)
The complexity of this pooling operation is independent of the size of the individual regions, which makes it highly appropriate for our application where the size and shape of the regions varies considerably depending on whether the voxel is close to or far from the camera. It is also fully differentiable in terms of the original feature map f and so can be used as part of an end-to-end deep learning framework.
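The integral-image pooling of Equations 4 and 5 can be written with cumulative sums so that it stays fully differentiable; the sketch below, with assumed tensor shapes and a zero-padded first row and column, is one plausible implementation rather than the authors' code.

```python
import torch

def integral_image(f):
    """f: (B, C, H, W) feature map -> zero-padded integral image (B, C, H+1, W+1),
    so that F[..., v, u] is the sum of f over rows < v and columns < u (Eq. 4)."""
    F = torch.cumsum(torch.cumsum(f, dim=2), dim=3)
    return torch.nn.functional.pad(F, (1, 0, 1, 0))   # one extra row/column of zeros

def box_average(F, u1, v1, u2, v2):
    """Average of f over the box [u1, u2) x [v1, v2) using four lookups (Eq. 5)."""
    total = F[..., v2, u2] - F[..., v1, u2] - F[..., v2, u1] + F[..., v1, u1]
    return total / ((u2 - u1) * (v2 - v1))

f = torch.randn(1, 256, 96, 312, requires_grad=True)
F = integral_image(f)
g = box_average(F, u1=200, v1=60, u2=240, v2=90)   # one pooled voxel feature, (1, 256)
g.sum().backward()                                  # gradients flow back to f
```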
Topdown network
A core contribution of this work is to emphasize the importance of reasoning in 3D for object recognition and detection in complex 3D scenes. In our architecture, this reasoning component is performed by a sub-network which we term the topdown network. This is a simple convolutional network with ResNet-style skip connections which operates on the 2D feature maps h generated by the previously described OFT stage. Since the filters of the topdown network are applied convolutionally, all processing is invariant to the location of the feature on the ground plane. This means that feature maps which are distant from the camera receive exactly the same treatment as those that are close, despite corresponding to a much smaller region of the image. The ambition is that the final feature representation will therefore capture information purely about the underlying 3D structure of the scene and not its 2D projection.
Confidence map prediction
Among both 2D and 3D approaches, detection is conventionally treated as a classification problem, with a cross entropy loss used to identify regions of the image which contain objects. In our application however we found it to be more effective to adopt the confidence map regression approach of Huang et al. [11]. The confidence map S(x, z) is a smooth function which indicates the probability that there exists an object with a bounding box centred on location (x, y_0, z), where y_0 is the distance of the ground plane below the camera. Given a set of N ground truth objects with bounding box centres p_i = [x_i, y_i, z_i], i = 1, . . . , N, we compute the ground truth confidence map as a smooth Gaussian region of width σ around the center of each object. The confidence at location (x, z) is given by
S(x, z) = \max_i \exp\left( -\frac{(x_i - x)^2 + (z_i - z)^2}{2\sigma^2} \right).    (6)
The confidence map prediction head of our network is trained via an ℓ_1 loss to regress to the ground truth confidence for each location on the orthographic grid H. A well-documented challenge is that there are vastly fewer positive (high confidence) locations than negative ones, which leads to the negative component of the loss dominating optimization [31,18]. To overcome this we scale the loss corresponding to negative locations (which we define as those with S(x, z) < 0.05) by a constant factor of 10^{-2}.
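A small sketch, under assumed grid conventions, of the ground-truth confidence map of Equation 6 and the down-weighted ℓ_1 loss described above; σ, the 0.05 threshold and the 10^-2 negative weight follow the text, while the shapes and reduction are illustrative.

```python
import torch

def gt_confidence_map(xs, zs, centers, sigma=1.0):
    """Equation (6): smooth Gaussian confidence around each object centre.
    xs: (X,) and zs: (Z,) grid coordinates; centers: (N, 2) tensor of (x_i, z_i)."""
    gx = xs.view(-1, 1, 1)                        # (X, 1, 1)
    gz = zs.view(1, -1, 1)                        # (1, Z, 1)
    cx = centers[:, 0].view(1, 1, -1)             # (1, 1, N)
    cz = centers[:, 1].view(1, 1, -1)
    dist2 = (cx - gx) ** 2 + (cz - gz) ** 2
    return torch.exp(-dist2 / (2 * sigma ** 2)).max(dim=-1).values   # (X, Z)

def confidence_loss(pred, target, neg_weight=1e-2, pos_thresh=0.05):
    """L1 regression loss with negative (low-confidence) locations down-weighted."""
    l1 = (pred - target).abs()
    weight = torch.where(target < pos_thresh,
                         torch.full_like(l1, neg_weight), torch.ones_like(l1))
    return (weight * l1).sum()

xs = torch.arange(0.0, 80.0, 0.5)                 # 0.5 m grid resolution
zs = torch.arange(0.0, 80.0, 0.5)
centers = torch.tensor([[10.0, 25.0], [34.5, 60.0]])
S_gt = gt_confidence_map(xs, zs, centers)
loss = confidence_loss(torch.rand_like(S_gt), S_gt)
```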
Localization and bounding box estimation
The confidence map S encodes a coarse approximation of the location of each object as a peak in the confidence score, which gives a position estimate accurate up to the resolution r of the feature maps. In order to localize each object more precisely, we append an additional network output head which predicts the relative offset ∆ pos from grid cell locations on the ground plane (x, y 0 , z) to the center of the corresponding ground truth object p i :
\Delta_{pos}(x, z) = \left[ \frac{x_i - x}{\sigma}, \; \frac{y_i - y_0}{\sigma}, \; \frac{z_i - z}{\sigma} \right]    (7)
We use the same scale factor σ as described in Section 3.4 to normalize the position offsets within a sensible range. A ground truth object instance i is assigned to a grid location (x, z) if any part of the object's bounding box intersects the given grid cell. Cells which do not intersect any ground truth objects are ignored during training. In addition to localizing each object, we must also determine the size and orientation of each bounding box. We therefore introduce two further network outputs. The first, the dimension head, predicts the logarithmic scale offset \Delta_{dim} between the assigned ground truth object i with dimensions d_i = [w_i, h_i, l_i] and the mean dimensions \bar{d} = [\bar{w}, \bar{h}, \bar{l}] over all objects of the given class.
\Delta_{dim}(x, z) = \left[ \log\frac{w_i}{\bar{w}}, \; \log\frac{h_i}{\bar{h}}, \; \log\frac{l_i}{\bar{l}} \right]    (8)
The second, the orientation head, predicts the sine and cosine of the object's orientation θ_i about the y-axis:
\Delta_{ang}(x, z) = \left[ \sin θ_i, \; \cos θ_i \right]    (9)
Note that since we are operating in the orthographic birds-eye-view space, we are able to predict the y-axis orientation θ directly, unlike other works e.g. [23] which predict the so-called observation angle α to take into account the effects of perspective and relative viewpoint. The position offset \Delta_{pos}, dimension offset \Delta_{dim} and orientation vector \Delta_{ang} are trained using an ℓ_1 loss.
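To illustrate Equations 7–9, the sketch below encodes one ground-truth box assigned to a grid cell into position, dimension and orientation targets and decodes a prediction back; the dictionary interface and the class mean dimensions are illustrative assumptions.

```python
import math

def encode_targets(obj, cell_x, cell_z, y0, sigma, mean_dims):
    """Encode one assigned ground-truth box into the regression targets of Eqs. (7)-(9).
    obj: dict with 'center' = (x, y, z), 'dims' = (w, h, l), 'theta'."""
    x, y, z = obj['center']
    w, h, l = obj['dims']
    wm, hm, lm = mean_dims
    d_pos = [(x - cell_x) / sigma, (y - y0) / sigma, (z - cell_z) / sigma]   # Eq. (7)
    d_dim = [math.log(w / wm), math.log(h / hm), math.log(l / lm)]           # Eq. (8)
    d_ang = [math.sin(obj['theta']), math.cos(obj['theta'])]                 # Eq. (9)
    return d_pos, d_dim, d_ang

def decode_prediction(d_pos, d_dim, d_ang, cell_x, cell_z, y0, sigma, mean_dims):
    """Invert Eqs. (7)-(9) to recover a box from the network outputs at one grid cell."""
    center = (cell_x + sigma * d_pos[0], y0 + sigma * d_pos[1], cell_z + sigma * d_pos[2])
    dims = tuple(m * math.exp(d) for m, d in zip(mean_dims, d_dim))
    theta = math.atan2(d_ang[0], d_ang[1])
    return center, dims, theta

car_means = (1.6, 1.5, 3.9)   # illustrative mean (w, h, l) for the car class, in metres
obj = {'center': (10.3, 1.7, 25.2), 'dims': (1.7, 1.4, 4.2), 'theta': 0.3}
targets = encode_targets(obj, cell_x=10.0, cell_z=25.0, y0=1.65, sigma=1.0, mean_dims=car_means)
box = decode_prediction(*targets, cell_x=10.0, cell_z=25.0, y0=1.65, sigma=1.0, mean_dims=car_means)
```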
Non-maximum suppression
Similarly to other object detection algorithms, we apply a non-maximum suppression (NMS) stage to obtain a final discrete set of object predictions. In a conventional object detection setting this step can be expensive since it requires O(N^2) bounding box overlap computations. This is compounded by the fact that pairs of 3D boxes are not necessarily axis aligned, which makes the overlap computation more difficult compared to the 2D case. Fortunately, an additional benefit of the use of confidence maps in place of anchor box classification is that we can apply NMS in the more conventional image processing sense, i.e. searching for local maxima on the 2D confidence maps S. Here, the orthographic birds-eye-view again proves invaluable: the fact that two objects cannot occupy the same volume in the 3D world means that peaks on the confidence maps are naturally separated.
To alleviate the effects of noise in the predictions, we first smooth the confidence maps by applying a Gaussian kernel with width σ_{NMS}. A location (x_i, z_i) on the smoothed confidence map Ŝ is deemed to be a maximum if
Ŝ(x_i, z_i) ≥ Ŝ(x_i + m, z_i + n) \quad ∀ m, n ∈ {−1, 0, 1}.    (10)
Of the produced peak locations, any with a confidence Ŝ(x_i, z_i) smaller than a given threshold t are eliminated. This results in the final set of predicted object instances, whose bounding box center p_i, dimensions d_i, and orientation θ_i, are given by inverting the relationships in Equations 7, 8 and 9 respectively.
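A minimal sketch of the peak-based decoding described above: smooth the confidence map with a Gaussian kernel, keep locations that dominate their 3×3 neighbourhood (Equation 10), and threshold. The kernel construction and the threshold value used in the example are assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as Fnn

def gaussian_kernel(sigma, radius=2):
    ax = torch.arange(-radius, radius + 1, dtype=torch.float32)
    k = torch.exp(-(ax[None, :] ** 2 + ax[:, None] ** 2) / (2 * sigma ** 2))
    return (k / k.sum()).view(1, 1, 2 * radius + 1, 2 * radius + 1)

def peak_nms(S, sigma_nms=1.0, threshold=0.1):
    """S: (X, Z) predicted confidence map -> grid indices of accepted peaks.
    The threshold t is a hyperparameter; 0.1 here is only for the toy example."""
    S4 = S.view(1, 1, *S.shape)
    S_smooth = Fnn.conv2d(S4, gaussian_kernel(sigma_nms), padding=2)
    # a location is a peak if it is >= every value in its 3x3 neighbourhood (Eq. 10)
    local_max = Fnn.max_pool2d(S_smooth, kernel_size=3, stride=1, padding=1)
    peaks = (S_smooth >= local_max) & (S_smooth > threshold)
    return peaks.squeeze().nonzero()

S = torch.zeros(160, 160)
S[40, 100] = 1.0                  # a single synthetic detection
print(peak_nms(S))                # tensor([[ 40, 100]])
```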
Experiments
Experimental setup
Architecture For our front-end feature extractor we make use of a ResNet-18 network without bottleneck layers. We intentionally choose the front-end network to be relatively shallow, since we wish to put as much emphasis as possible on the 3D reasoning component of the model. We extract features immediately before the final three downsampling layers, resulting in a set of feature maps {f s } at scales s of 1/8, 1/16 and 1/32 of the original input resolution. Convolutional layers with 1×1 kernels are used to map these feature maps to a common feature size of 256, before processing them via the orthographic feature transform to yield orthographic feature maps {h s }. We use a voxel grid with dimensions 80m×4m×80m, which is sufficient to include all annotated instances in KITTI, and set the grid resolution r to be 0.5m. For the topdown network, we use a simple 16-layer ResNet without any downsampling or bottleneck units. The output heads each consist of a single 1×1 convolution layer. Throughout the model we replace all batch normalization [12] layers with group normalization [34] which has been found to perform better for training with small batch sizes.
Dataset We train and evaluate our method using the KITTI 3D object detection benchmark dataset [8]. For all experiments we follow the train-val split of Chen et al. [3] which divides the KITTI training set into 3712 training images and 3769 validation images.
Data augmentation Since our method relies on a fixed mapping from the image plane to the ground plane, we found that extensive data augmentation was essential for the network to learn robustly. We adopt three types of widely-used augmentations: random cropping, scaling and horizontal flipping, adjusting the camera calibration parameters f and (c_u, c_v) accordingly to reflect these perturbations.
Training procedure The model is trained using SGD for 600 epochs with a batch size of 8, momentum of 0.9 and learning rate of 10^{-7}. Following [21], losses are summed rather than averaged, which avoids biasing the gradients towards examples with few object instances. The loss functions from the various output heads are combined using a simple equal weighting strategy.
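As an illustration of the augmentation bookkeeping described in the data augmentation paragraph above, the following sketch adjusts the focal length and principal point when an image is scaled, cropped or horizontally flipped; the paper does not specify its exact parametrisation, so this is only one plausible implementation with roughly KITTI-like example numbers.

```python
def adjust_intrinsics(f, cu, cv, scale=1.0, crop_x=0, crop_y=0, flip=False, out_width=None):
    """Adjust (f, cu, cv) for an image that is first scaled by `scale`, then cropped
    with top-left corner (crop_x, crop_y), then optionally flipped horizontally.
    `out_width` is the image width after scaling and cropping (needed for the flip)."""
    f, cu, cv = f * scale, cu * scale, cv * scale   # uniform rescaling of the image
    cu, cv = cu - crop_x, cv - crop_y               # cropping shifts the principal point
    if flip:
        cu = (out_width - 1) - cu                   # mirror about the vertical image axis
    return f, cu, cv

# Example: scale a 1242x375 image by 1.2, crop 100 px from the left, then flip.
# After scaling and cropping the width is roughly 1242 * 1.2 - 100 = 1390 px.
f2, cu2, cv2 = adjust_intrinsics(721.5, 609.6, 172.9, scale=1.2,
                                 crop_x=100, crop_y=0, flip=True, out_width=1390)
print(f2, cu2, cv2)
```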
Comparison to state-of-the-art
We evaluate our approach on two tasks from the KITTI 3D object detection benchmark. The 3D bounding box detection task requires that each predicted 3D bounding box should intersect a corresponding ground truth box by at least 70% in the case of cars and 50% for pedestrians and cyclists. The birds-eye-view detection task meanwhile is slightly more lenient, requiring the same amount of overlap between a 2D birds-eye-view projection of the predicted and ground truth bounding boxes on the ground plane. At the time of writing, the KITTI benchmark included only one published approach operating on monocular RGB images alone ( [24]), which we compare our method against in Table 1. We therefore perform additional evaluation on the KITTI validation split set out by Chen et al. (2016) [3]; the results of which are presented in Table 2. For monocular methods, performance on the pedestrian and cyclist classes is typically insufficient to obtain meaningful results and we therefore follow other works [3,4,24] and focus our evaluation on the car class only.
It can be seen from Tables 1 and 2 that our method is able to outperform all comparable (i.e. monocular only) methods by a considerable margin across both tasks and all difficulty criteria. The improvement is particularly marked on the hard evaluation category, which includes instances which are heavily occluded, truncated or far from the camera. We also show in Table 2 that our method performs competitively with the stereo approach of Chen et al. (2015) [4], achieving close to or in one case better performance than their 3DOP system. This is in spite of the fact that unlike [4], our method does not have access to any explicit knowledge of the depth of the scene.
Qualitative results
Comparison to Mono3D We provide a qualitative comparison of predictions generated by our approach and Mono3D [3] in Figure 4. A notable observation is that our system is able to reliably detect objects at a considerable distance from the camera. This is a common failure case among both 2D and 3D object detectors, and indeed many of the cases which are correctly identified by our system are overlooked by Mono3D. We argue that this ability to recognise objects at distance is a major strength of our system, and we explore this capacity further in Section 5.1. Further qualitative results are included in supplementary material.
Ground plane confidence maps A unique feature of our approach is that we operate largely in the orthographic birds-eye-view feature space. To illustrate this, Figure 5 shows examples of predicted confidence maps S(x, z) both in the topdown view and projected into the image on the ground plane. It can be seen that the predicted confidence maps are well localized around each object center.
Ablation study
A central claim of our approach is that reasoning in the orthographic birds-eye-view space significantly improves performance. To validate this claim, we perform an ablation study where we progressively remove layers from the topdown network. In the extreme case, when the depth of the topdown network is zero, the architecture is effectively reduced to RoI pooling [9] over projected bounding boxes, rendering it similar to R-CNN-based architectures. Figure 7 shows a plot of average precision against the total number of parameters for two different architectures.
The trend is clear: removing layers from the topdown network significantly reduces performance. Some of this decline in performance may be explained by the fact that reducing the size of the topdown network reduces the overall depth of the network, and therefore its representational power. However, as can be seen from Figure 7, adopting a shallow front-end (ResNet-18) with a large topdown network achieves significantly better performance than a deeper network (ResNet-34) without any topdown layers, despite the two architectures having roughly the same number of parameters. This strongly suggests that a significant part of the success of our architecture comes from its ability to reason in 3D, as afforded by the 2D convolution layers operating on the orthographic feature maps.
(Figure 5 caption: Examples of confidence maps generated by our approach, which we visualize both in birds-eye-view (right) and projected onto the ground plane in the image view (left). We use the pre-computed ground planes of [4] to obtain the road position: note that this is for visualization purposes only and the ground planes are not used elsewhere in our approach. Best viewed in color.)
Discussion
Performance as a function of depth
Motivated by the qualitative results in Section 4.2, we wished to further quantify the ability of our system to detect and localize distant objects. Figure 8 plots performance of each system when evaluated only on objects which are at least the given distance away from the camera. Whilst we outperform Mono3D over all depths, it is also apparent that the performance of our system degrades much more slowly as we consider objects further from the camera. We believe that this is a key strength of our approach.
Evolution of confidence maps during training
While the confidence maps predicted by our network are not necessarily calibrated estimates of model certainty, observing their evolution over the course of training does give valuable insights into the learned representation. Figure 6 shows an example of a confidence map predicted by the network at various points during training. During the early stages of training, the network very quickly learns to identify regions of the image which contain objects, which can be seen by the fact that high confidence regions correspond to projection lines from the optical center at (0, 0) which intersect a ground truth object. However, there exists significant uncertainty about the depth of each object, leading to the predicted confidences being blurred out in the depth direction. This fits well with our intuition that for a monocular system depth estimation is significantly more challenging than recognition. As training progresses, the network is increasingly able to resolve the depth of the objects, producing sharper confidence regions clustered about the ground truth centers. It can be observed that even in the latter stages of training, there is considerably greater uncertainty in the depth of distant objects than that of nearby ones, evoking the well-known result from stereo that depth estimation error increases quadratically with distance.
(Figure 8 caption: Average BEV precision (val) as a function of the minimum distance of objects from the camera, comparing Mono3D [3] and OFT-Net (Ours). We use an IoU threshold of 0.5 to better compare performance at large depths.)
Conclusions
In this work we have presented a novel approach to monocular 3D object detection, based on the intuition that operating in the birds-eye-view domain alleviates many undesirable properties of images which make it difficult to infer the 3D configuration of the world. We have proposed a simple orthographic feature transform which transforms image-based features into this birds-eye-view representation, and described how to implement it efficiently using integral images. This was then incorporated into part of a deep learning pipeline, in which we particularly emphasized the importance of spatial reasoning in the form of a deep 2D convolutional network applied to the extracted birds-eye-view features. Finally, we experimentally validated our hypothesis that reasoning in the topdown space does achieve significantly better results, and demonstrated state-of-the-art performance on the KITTI 3D object benchmark.
| 4,139 |
1811.08541
|
2901423179
|
Although Neural Machine Translation (NMT) models have advanced state-of-the-art performance in machine translation, they face problems like the inadequate translation. We attribute this to that the standard Maximum Likelihood Estimation (MLE) cannot judge the real translation quality due to its several limitations. In this work, we propose an adequacy-oriented learning mechanism for NMT by casting translation as a stochastic policy in Reinforcement Learning (RL), where the reward is estimated by explicitly measuring translation adequacy. Benefiting from the sequence-level training of RL strategy and a more accurate reward designed specifically for translation, our model outperforms multiple strong baselines, including (1) standard and coverage-augmented attention models with MLE-based training, and (2) advanced reinforcement and adversarial training strategies with rewards based on both word-level BLEU and character-level chrF3. Quantitative and qualitative analyses on different language pairs and NMT architectures demonstrate the effectiveness and universality of the proposed approach.
|
Recent work shows that maximum likelihood training could be sub-optimal due to the different conditions between training and test modes @cite_10 @cite_4 . In order to address the exposure bias and the loss which does not operate at the sequence level, Ranzato et al. (2016) employ the REINFORCE algorithm @cite_14 to decide whether or not tokens from a sampled prediction could contribute to a high task-specific score (e.g., BLEU). Bahdanau et al. (2017) use the actor-critic method from reinforcement learning to directly optimize a task-specific score.
|
{
"abstract": [
"This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.",
"Recurrent Neural Networks can be trained to produce sequences of tokens given some input, as exemplified by recent results in machine translation and image captioning. The current approach to training them consists of maximizing the likelihood of each token in the sequence given the current (recurrent) state and the previous token. At inference, the unknown previous token is then replaced by a token generated by the model itself. This discrepancy between training and inference can yield errors that can accumulate quickly along the generated sequence. We propose a curriculum learning strategy to gently change the training process from a fully guided scheme using the true previous token, towards a less guided scheme which mostly uses the generated token instead. Experiments on several sequence prediction tasks show that this approach yields significant improvements. Moreover, it was used successfully in our winning entry to the MSCOCO image captioning challenge, 2015.",
"Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as errors may accumulate along the way. We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE. On three different tasks, our approach outperforms several strong baselines for greedy generation. The method is also competitive when these baselines employ beam search, while being several times faster."
],
"cite_N": [
"@cite_14",
"@cite_10",
"@cite_4"
],
"mid": [
"2119717200",
"2950304420",
"2176263492"
]
}
|
Neural Machine Translation with Adequacy-Oriented Learning
|
During the past several years, rapid progress has been made in the field of Neural Machine Translation (NMT) (Kalchbrenner and Blunsom 2013;Sutskever, Vinyals, and Le 2014;Bahdanau, Cho, and Bengio 2015;Gehring et al. 2017;Wu et al. 2016;Vaswani et al. 2017).
Although NMT models have advanced the community, they still face inadequate translation problems: one or multiple parts of the input sentence are not translated (Tu et al. 2016). We attribute this problem to the lack of a mechanism that guarantees the generated translation is as adequate as human translation. NMT models are generally trained in an end-to-end manner to maximize the likelihood of the output sentence. Maximum Likelihood Estimation (MLE), however, cannot judge the real quality of generated translations due to several limitations:
1. Exposure bias (Ranzato et al. 2016): The models are trained on the ground-truth data distribution, but at test time are used to generate target words based on previous model predictions, which can be erroneous;
2. Word-level loss: the training objective is defined over individual words rather than the whole output sequence, so it does not necessarily correlate with sequence-level translation quality;
3. No adequacy guarantee: maximizing likelihood does not explicitly check whether the source sentence has been fully covered, so fluent yet inadequate translations can still receive high probability.
Some recent work partially alleviates one or two of the above problems with advanced training strategies. For example, the first two problems are tackled by sequence-level training using the REINFORCE algorithm (Ranzato et al. 2016; Bahdanau et al. 2017), minimum risk training (Shen et al. 2016), beam search optimization (Wiseman and Rush 2016) or adversarial learning (Wu et al. 2017). The last problem can be alleviated by introducing an auxiliary reconstruction-based training objective to measure translation adequacy.
In this work, we aim to fully solve all the three problems in a unified framework. Specifically, we model the translation as a stochastic policy in Reinforcement Learning (RL) and directly perform gradient policy update. The RL reward is estimated on a complete sequence produced by the NMT model, which is able to correlate well with a sequencelevel task-specific metric. To explicitly measure translation adequacy, we propose a novel metric called Coverage Difference Ratio (CDR) which is calculated by counting how many source words are under-translated via directly comparing generated translation with human translation. Benefiting from the sequence-level training of RL strategy and a more accurate reward designed specifically for translation, the proposed approach is able to alleviate all the aforementioned limitations of MLE-based training.
We conduct experiments on Chinese⇒English and German⇔English translation tasks, using both the RNN-based NMT model (Bahdanau, Cho, and Bengio 2015) and the recently proposed TRANSFORMER (Vaswani et al. 2017). The consistent improvements across language pairs and NMT architectures demonstrate the effectiveness and universality of the proposed approach. The proposed adequacy-oriented learning improves translation performance not only over a standard attention model, but also over a coverage-augmented attention model (Tu et al. 2016) that alleviates the inadequate translation problem at the word level. In addition, the proposed metric, the CDR score, consistently outperforms the commonly-used word-level BLEU (Papineni et al. 2002) and character-level CHRF3 (Popović 2015) scores in both the reinforcement learning and adversarial learning frameworks, indicating the superiority and necessity of an adequacy-oriented metric in training effective NMT models.
Approach Intuition
In this work, we try to solve the three problems mentioned above in a unified framework. Our objective is three-fold: 1. We solve the exposure bias problem by modeling the translation as a stochastic policy in reinforcement learning (RL) and directly performing policy gradient update.
2. The RL reward is estimated on a complete sequence, which correlates well with either sequence-level BLEU or a more adequacy-oriented metric, as described below.
3. We design a sequence-level metric -Coverage Difference Ratio (CDR) -to explicitly measure translation adequacy which focuses on the commonly-cited weaknesses of NMT models: producing fluent yet inadequate translations. We expect that the model can benefit from linguistic insights that correlate well with human intuitions.
Coverage Difference Ratio (CDR) We measure translation adequacy by the number of under-translated words via comparing generated translation with human translation. We take an example to illustrate how to measure translation adequacy in terms of coverage difference ratio. Figure 1(a) shows one inadequate translation. Following (Luong, Pham, and Manning 2015; Tu et al. 2016), we extract only one-to-one alignments (hard alignments) by selecting the source word with the highest alignment for each target word from the word alignments produced by NMT models. A source word is considered to be translated when it is covered by the hard alignments, as shown in Figure 1(b). Comparing source words covered by generated translation with those covered by human translation, we can find that the two sets are very different for inadequate translation. Specifically, the difference generally lies in the untranslated source words that cause the inadequate translation problem, indicating that coverage difference ratio is a good way to measure the adequacy of generated translation.
(Figure 2 caption: Architecture of adequacy-oriented NMT. The newly added orientator O reads coverages of generated and human translations to generate a CDR score for each generated translation, which guides the discriminator D to differentiate good generated translations from bad ones.)
Formally, we calculate the CDR score of a given generated translation ŷ by
CDR(ŷ|y, x) = 1 − \frac{|C_{ref} \setminus C_{gen}|}{|C_{ref}|}    (5)
where C_{ref} and C_{gen} are the sets of source words covered by the human translation and the generated translation, respectively. C_{ref} \setminus C_{gen} denotes the source words covered in C_{ref} but not in C_{gen}. We use C_{ref} as the reference coverage to eliminate the effect of null-aligned source words which are not aligned to any target word. As seen, CDR(ŷ|y, x) is a number between 0 and 1, where 1 means "completely adequate translation" and 0 means "completely inadequate translation". Taking Figure 1(b) as an example, the CDR score is 1 − (7 − 4)/7 = 0.57.
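For concreteness, a small sketch of the CDR computation in Equation 5 starting from attention matrices; extracting hard alignments as the argmax over attention weights follows the description above, while the data layout and the toy example are assumptions.

```python
import numpy as np

def covered_source_words(attention):
    """attention: (tgt_len, src_len) weights from the NMT model.
    Hard alignments: each target word is aligned to its highest-scoring source word."""
    return set(int(i) for i in attention.argmax(axis=1))

def cdr(attn_gen, attn_ref):
    """Coverage Difference Ratio, Equation (5): 1 - |C_ref \\ C_gen| / |C_ref|."""
    c_gen = covered_source_words(attn_gen)
    c_ref = covered_source_words(attn_ref)
    if not c_ref:
        return 0.0
    return 1.0 - len(c_ref - c_gen) / len(c_ref)

# Toy example mirroring Figure 1(b): the reference covers 7 source words,
# the generated translation only covers 4 of them -> CDR = 1 - 3/7 ≈ 0.57.
attn_ref = np.eye(7)           # reference aligned to all 7 source words
attn_gen = np.eye(7)[:4]       # generated translation covers only the first 4
print(round(cdr(attn_gen, attn_ref), 2))   # 0.57
```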
Architecture
As shown in Figure 2, the proposed model consists of a generator, a discriminator, and an orientator.
Generator The generator G generates the translationŷ conditioned on the input sentence x. Because we need word alignments to calculate adequacy scores in terms of CDR, an attention-based NMT model is employed as the generator.
Orientator The orientator O reads the word alignments produced by NMT attention model when generating (or force decoding) the two translations and outputs an adequacy score for the generated translation in terms of the aforementioned CDR score. Then, the orientator is used to guide the discriminator to distinguish adequate translation from inadequate ones. Accordingly, adequate translations with higher CDR scores would contribute more to parameter tuning, as described in the following section.
Discriminator We employ an RNN-based discriminator to differentiate the generated translation from the human translation, given the input sentence. The discriminator reads the input sentence x and its translation (either y or ŷ), and uses two RNNs to summarize the two sentences individually. The concatenation of the two summarized representation vectors is fed into a fully-connected neural network.
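As a rough illustration of this discriminator, the following PyTorch-style sketch encodes the source sentence and the translation with two separate LSTM encoders and feeds the concatenated final states into a small fully-connected network that outputs a score in [0, 1]. Layer sizes and the module name are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class RNNDiscriminator(nn.Module):
    """Two RNN encoders summarize x and its translation; an FFN scores the pair."""
    def __init__(self, src_vocab, tgt_vocab, emb_dim=64, hidden=32):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.src_rnn = nn.LSTM(emb_dim, hidden, num_layers=2, batch_first=True)
        self.tgt_rnn = nn.LSTM(emb_dim, hidden, num_layers=2, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, src_ids, trans_ids):
        _, (h_src, _) = self.src_rnn(self.src_emb(src_ids))    # summarize source x
        _, (h_tgt, _) = self.tgt_rnn(self.tgt_emb(trans_ids))  # summarize y or y_hat
        feats = torch.cat([h_src[-1], h_tgt[-1]], dim=-1)      # concat final hidden states
        return self.ffn(feats).squeeze(-1)                     # D(x, translation) in [0, 1]
```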
Adequacy-Oriented Training
In order to train the system efficiently and effectively, we employ a periodical training strategy, which is commonly used in adversarial training (Goodfellow et al. 2014;Wu et al. 2017). Specifically, we optimize two networks with two objective functions and periodically freeze the parameters of each network during training.
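Schematically, the alternating schedule can be pictured as the loop below; `generator_step` and `discriminator_step` are hypothetical helpers sketched in the two subsections that follow, and the optimizers and learning rates are illustrative choices rather than the paper's settings.

```python
import torch

def adequacy_oriented_training(generator, discriminator, batches, num_rounds=10):
    gen_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    disc_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
    for _ in range(num_rounds):
        for x, y in batches:                  # x: source batch, y: human translation
            # Phase 1: freeze D, update G with the REINFORCE reward D(x, y_hat).
            generator_step(generator, discriminator, x, gen_opt)
            # Phase 2: freeze G, regress D(x, y_hat) towards the CDR score.
            discriminator_step(generator, discriminator, x, y, disc_opt)
```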
Train Generator and Freeze Discriminator Following Wu et al. (2017), we use the REINFORCE algorithm (Williams 1992) to back-propagate the error signals from D to G, given the discretely generatedŷ from G. The objective of the generator is to maximize the expected reward:
L = \mathbb{E}_{(x, \hat{y}) \sim G_{\theta}}\big[ D(x, \hat{y}) \big] \qquad (6)
whose gradient is
\nabla_{\theta} L = \mathbb{E}_{(x, \hat{y}) \sim G_{\theta}}\big[ D(x, \hat{y}) \, \nabla_{\theta} \log G_{\theta}(\hat{y} \mid x) \big] \qquad (7)
The gradient is approximated by a sample from G using the REINFORCE algorithm (Williams 1992):
\nabla_{\theta} L \approx \hat{\nabla}_{\theta} = D(x, \hat{y}) \, \nabla_{\theta} \log G_{\theta}(\hat{y} \mid x) \qquad (8)
where \nabla_{\theta} \log G_{\theta}(\hat{y} \mid x) is the standard NMT gradient, calculated as in maximum likelihood estimation. Therefore, the final update function for the generator is:
\theta = \theta - \eta \, \hat{\nabla}_{\theta} \qquad (9)
where η is the learning rate. According to this update rule, the larger D(x, ŷ) is (i.e., ideally, the more adequate the generated translation ŷ), the larger the reward the NMT model receives, and thus the more the parameters are updated on the adequate training instance (x, ŷ).
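A hedged PyTorch-style sketch of this generator update (Eqs. 6-9) is given below: sample a translation, score it with the frozen discriminator, and scale the log-likelihood gradient by that reward. The `generator.sample` interface (returning a sampled translation and its summed log-probability) is an assumption made for illustration.

```python
import torch

def generator_step(generator, discriminator, x, optimizer):
    y_hat, log_prob = generator.sample(x)      # y_hat ~ G_theta(.|x), log G_theta(y_hat|x)
    with torch.no_grad():                      # discriminator D is frozen in this phase
        reward = discriminator(x, y_hat)       # D(x, y_hat)
    loss = -(reward * log_prob).mean()         # REINFORCE: grad = -D(x, y_hat) * grad log G
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```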
Train Discriminator Oriented by Adequacy and Freeze Generator Ideally, a good translation ŷ should be assigned a high adequacy score D(x, ŷ) and thus contribute more to updating the generator. We therefore expect the discriminator not only to differentiate generated translations from human translations, but also to distinguish bad generated translations from good ones. Accordingly, a new objective of the discriminator is to assign each generated translation a precise score that is consistent with its adequacy:
\min_{D} \; \big| \mathrm{CDR}(\hat{y} \mid x, y) - D(x, \hat{y}) \big|^{2} \qquad (10)
where CDR(ŷ | x, y) is the coverage difference ratio of ŷ. As seen, a well-trained discriminator would assign a distinct score to each generated translation, which better measures its adequacy.
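The corresponding discriminator phase can be sketched as a simple regression towards the CDR target of Eq. (10). Here `cdr_score` is a hypothetical wrapper that force-decodes the human translation, decodes the sampled translation, and applies the CDR computation sketched earlier; it is not an API from the paper.

```python
import torch

def discriminator_step(generator, discriminator, x, y, optimizer):
    with torch.no_grad():                      # generator G is frozen in this phase
        y_hat, _ = generator.sample(x)
    target = cdr_score(x, y, y_hat)            # CDR(y_hat | x, y), a value in [0, 1]
    pred = discriminator(x, y_hat)             # D(x, y_hat)
    loss = (pred - target).pow(2).mean()       # |CDR - D|^2, Eq. (10)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```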
Related Work
This work is related to modeling translation as policy gradient and adequacy modeling. For the former, we take minimum risk training, reinforcement learning and adversarial learning as representative strategies.
Minimum Risk Training In response to the exposure bias and word-level loss problems of MLE training, Shen et al. (2016) minimize the expected loss in terms of evaluation metrics on the training data. Our simplified model is analogous to their MRT model, if we directly use CDR as the reward to update parameters:
\nabla_{\theta} = \mathrm{CDR}(\hat{y} \mid x, y) \, \nabla_{\theta} \log G_{\theta}(\hat{y} \mid x) \qquad (11)
The simplified model differs in that (1) we use an adequacy-oriented metric (i.e., CDR) while they use sequence-level BLEU, and (2) we only need to sample one candidate to calculate the reinforcement reward while they generate multiple samples to calculate the expected risk. In addition, our discriminator gives a smoother and dynamically-updated objective compared with directly using the adequacy-oriented metric, because the latter is highly sensitive to slight coverage differences (Koehn and Knowles 2017).
Reinforcement Learning Recent work shows that maximum likelihood training could be sub-optimal due to the different conditions between training and test modes (Ranzato et al. 2016). In order to address the exposure bias and the loss that does not operate at the sequence level, Ranzato et al. (2016) employ the REINFORCE algorithm (Williams 1992) to decide whether or not tokens from a sampled prediction could contribute to a high task-specific score (e.g., BLEU). Bahdanau et al. (2017) use the actor-critic method from reinforcement learning to directly optimize a task-specific score.
Adversarial Learning Recently, adversarial learning (Goodfellow et al. 2014) has been successfully applied to neural machine translation (Wu et al. 2017;Cheng et al. 2018). In the adversarial framework, NMT models generally serve as the generator which defines the policy to generate the target sentence y given the source sentence x. A discriminator tries to distinguish the translation resultŷ = G(x) from the human-generated one y, given the source sentence x.
If we remove the orientator O, our model rolls back to adversarial NMT, and the training objective of the discriminator D is rewritten as
\max_{D} \; \big\{ \log D(x, y) + \log\big(1 - D(x, \hat{y})\big) \big\} \qquad (12)
The goal of the discriminator is to push the likelihood of the human translation D(x, y) towards 1 and that of the generated translation D(x, ŷ) towards 0. As seen, the discriminator performs a binary classification, uniformly treating all generated translations as negative examples (i.e., labeled "0") and all human translations as positive examples (i.e., labeled "1"), regardless of the quality of the generated translations. However, intuitively, high-quality and low-quality translations should be treated differently by the discriminator; otherwise, inaccurate reward signals would be propagated back to the generator. In our proposed architecture, this problem is alleviated by replacing the simple binary outputs with the more informative adequacy-oriented metric CDR, which is calculated by directly comparing generated and human translations.
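For contrast, a minimal sketch of the plain adversarial objective in Eq. (12) is shown below; it pushes D towards 1 on human translations and 0 on generated ones regardless of how adequate the generated translation is, which is exactly the behaviour the CDR-oriented objective above replaces. Names are illustrative.

```python
import torch

def binary_discriminator_loss(discriminator, x, y, y_hat, eps=1e-8):
    d_human = discriminator(x, y)        # should be pushed towards 1
    d_fake = discriminator(x, y_hat)     # should be pushed towards 0
    return -(torch.log(d_human + eps) + torch.log(1.0 - d_fake + eps)).mean()
```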
Adequacy Modeling The inadequate translation problem is a commonly-cited weakness of NMT models (Tu et al. 2016). A number of recent efforts have explored ways to alleviate this problem. For example, Tu et al. (2016) and Mi et al. (2016) employ a coverage vector as a lexical-level indicator of whether a source word is translated or not. Our approach is complementary to theirs, since they model adequacy learning at the word level inside the generator (i.e., the NMT model), while we model it at the sequence level outside the generator. We take the representative coverage mechanism (Tu et al. 2016) as another, stronger baseline model for its simplicity and efficiency, and experimental results show that our model can further improve performance.
In the context of adequacy-oriented training, previous work introduces an auxiliary objective to measure the adequacy of translation candidates, which is calculated by reconstructing generated translations back to the original inputs. Benefiting from the flexible framework of reinforcement training, we are able to directly compare generated translations with human translations and define a more straightforward metric, i.e., CDR, to measure the adequacy of generated sentences.
Experiments Setup
We conduct experiments on the widely-used Chinese (Zh)⇒English (En) and German (De)⇔English (En) translation tasks. For the latter, we use the IWSLT 2014 De⇒En and WMT 2014 En⇒De datasets; the former contains 153K sentence pairs and the latter consists of 4.56M sentence pairs. The 4-gram NIST BLEU score (Papineni et al. 2002) is used as the evaluation metric and the sign-test (Collins, Koehn, and Kučerová 2005) is employed to test statistical significance.
For training all neural models, we set the vocabulary size to 30K for Zh⇒En; for IWSLT 2014 De⇒En, we follow the preprocessing procedure used in Ranzato et al. (2016); and for WMT 2014 En⇒De, we borrow the preprocessing method described in Vaswani et al. (2017). We pre-train the discriminator on translation samples produced by the pre-trained generator. After that, the discriminator and the generator are trained together, and the generator is updated by the REINFORCE algorithm mentioned above. We also follow the training tips mentioned in Shen et al. (2016) and Wu et al. (2017). The hyper-parameter α, which controls the sharpness of the generator distribution in our system, is 1e-4; it can also be regarded as a baseline to reduce the variance of the REINFORCE algorithm. We also randomly choose 50% of the minibatches to be trained with our objective function and the rest with the MLE principle. In the MRT training strategy (Shen et al. 2016), the sample size is 25, the hyper-parameter α is 5e-3, and the loss function is the negative smoothed sentence-level BLEU. We validate our models on two representative model architectures, namely RNNSEARCH and TRANSFORMER. For the RNNSEARCH model, the mini-batch size is 80, the word-embedding dimension is 620, and the hidden layer size is 1000. We use a neural coverage model for RNNSEARCH-COVERAGE and the dimensionality of the coverage vector is 100. The baseline models are trained for 15 epochs and are used as the initial generator in the proposed framework. For the TRANSFORMER model, we implement our proposed approach on top of the open-source toolkit THUMT. The configurations in Vaswani et al. (2017) are used to train the baseline models.
Table 1: Evaluation of translation performance on Zh⇒En translation. "D" denotes the discriminator and "O" the orientator. "MRT" indicates minimum risk training (Shen et al. 2016), and "D_CNN" indicates adversarial training with a CNN-based discriminator (Wu et al. 2017). "# Para." denotes the number of parameters, and "Speed" denotes the training speed (words/second). "†" and "‡" indicate statistically significant difference (p < 0.05 and p < 0.01, respectively) from the corresponding baseline.
Table 1 lists the results of various translation models on the Zh⇒En corpus. As seen, all advanced systems significantly outperform the baseline system (i.e., RNNSEARCH), although there are still considerable differences among the variants.
Chinese-English Translation Task
Architectures of Discriminator (Rows 3-4) We evaluate two architectures for the discriminator. The CNN-based discriminator is composed of two convolution layers with a 3 × 3 window, two max-pooling layers with a 2 × 2 window, and one softmax layer. The feature map size is 10 and the feed-forward hidden size is 20. The RNN-based discriminator consists of two two-layer RNN encoders with 32 LSTM units and a fully-connected neural network with 32 units. We find that the RNN discriminator achieves performance similar to its CNN counterpart (37.59 vs. 37.54) while having a faster training speed (1.2K vs. 1.0K words/second). The main reason is that the CNN-based discriminator requires high computation and memory cost to apply multiple convolution and pooling layers to a large input matrix.
Adequacy Metrics for Orientator (Rows 5-7) As aforementioned, the CDR score can be directly used as a reward to update the parameters, which is in analogy to the MRT (Shen et al. 2016) except that we use 1-best sample while they use n-best samples. For comparison, we also used the word-level BLEU score (Row 5) and character-level CHRF3 score (Popović 2015) (Row 6) as the rewards.
As seen, this strategy consistently improves translation performance without introducing any new parameters. The extra computation cost mainly comes from generating the translation sentence and force decoding the human translation with the NMT model. We find that CDR not only outperforms its 1-best counterparts "O BLEU" and "O CHRF3", but also surpasses "MRT BLEU" using 25 samples. We attribute this to the fact that CDR can better estimate the adequacy of the translation, which is the key problem of NMT models, and goes beyond the simple low-level n-gram matching measured by BLEU and CHRF3.
Combining Them Together (Row 8) By combining the advantages of both reinforcement learning and the adequacy-oriented objective, our model achieves the best performance, which is 1.66 BLEU points better than the baseline "RNNSEARCH", up to 0.98 BLEU points better than using a single component, and significantly better than the "MRT BLEU" model. One more observation can be made: "+D+O" outperforms its "+O" counterpart (e.g., Row 8 vs. Row 7), which confirms our claim that the discriminator gives a smoother and dynamically-updated score than directly using the calculated one.
Table 2: Comparison with previous work on applying reinforcement learning to NMT on the IWSLT 2014 De⇒En translation task (existing end-to-end systems include the CNN encoder with a sequence-level objective of Ranzato et al. (2016), 20.73 BLEU, and Bahdanau et al. (2017)). "†" and "‡" indicate statistically significant difference (p < 0.05 and p < 0.01, respectively) from the RNNSEARCH model.
Table 3 (WMT 2014 En⇒De baselines): GNMT + RL 26.30; ConvS2S (Gehring et al. 2017) 26.43; Transformer (Base) (Vaswani et al. 2017) 27.3; Transformer (Big) (Vaswani et al. 2017) 28.
Working with Coverage Model (Rows 11-12) Tu et al. (2016) propose a coverage model to indicate whether a source word is translated or not, which alleviates the inadequate translation problem of NMT models. We argue that our model is complementary to theirs, because we model the adequacy learning outside the generator by using an additional adequacy-oriented discriminator, while they model it inside the generator. Experimental results validate our hypothesis: the proposed approach further improves performance by 0.58 BLEU points over the coverage-augmented model RNNSEARCH-COVERAGE.
English-German Translation Tasks
To compare with previous work on applying reinforcement learning to NMT (Ranzato et al. 2016; Bahdanau et al. 2017; Wiseman and Rush 2016; Wu et al. 2017), we first conduct experiments on the IWSLT 2014 De⇒En translation task. As listed in Table 2, we reproduce the results of adversarial training reported by Wu et al. (2017) (27.24 vs. 26.98), and our models achieve further improvements over it.
Table 4: Adequacy scores on 100 randomly selected sentences on the Zh⇒En task, measured by CDR and human evaluation ("MAN").
We also evaluate our model on the recently proposed TRANSFORMER model (Vaswani et al. 2017) on the WMT 2014 En⇒De corpus. As shown in Table 3, our models significantly improve performance in all cases. Combined with the previous results, our model consistently improves translation performance across various language pairs and NMT architectures, demonstrating the effectiveness and universality of the proposed approach.
Analysis
To better understand our models, we conduct extensive analyses on the Zh⇒En translation task.
Adequacy Evaluation To better evaluate adequacy, we randomly choose 100 sentences from the test set and ask two human evaluators to judge the quality of the generated translations. A five-point scale {1, 2, 3, 4, 5} is used, where "1" means that the translation is irrelevant to the source sentence, and "5" means that the translation is semantically and syntactically equivalent to the source sentence. Table 4 lists the results of the human evaluation and the proposed CDR score. First, our models consistently improve translation adequacy under both human evaluation and the CDR score, indicating that the proposed approaches indeed alleviate the inadequate translation problem. Second, the relative improvement on CDR is consistent with that on the subjective evaluation. The Pearson correlation coefficient between CDR and the manual evaluation score is 0.64, indicating that the proposed CDR is a reasonable metric for measuring translation adequacy.
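The reported correlation can be reproduced mechanically once per-sentence CDR values and averaged human ratings are available; a small illustrative check (with made-up numbers) might look like this:

```python
from scipy.stats import pearsonr

cdr_scores = [0.57, 0.83, 0.91, 0.64, 0.75]    # hypothetical per-sentence CDR values
human_scores = [2.5, 4.0, 4.5, 3.0, 3.5]       # hypothetical averaged 1-5 adequacy ratings
r, p_value = pearsonr(cdr_scores, human_scores)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```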
Length Analysis We group sentences of similar lengths and compute both the BLEU score and CDR score for each group, as shown in Figure 3. The four length spans contain 1386, 2284, 1285, and 498 sentences, respectively. From the perspective of the BLEU score, the proposed model (i.e., "+D+O") outperforms RNNSEARCH in all length segments. In contrast, using discriminator only (i.e., "+D") outperforms RNNSEARCH in most cases, except long sentences (i.e., > 45). One possible reason is that it is difficult for the discriminator to differentiate generated translations from human translations for long source sentences, thus the generator cannot learn well about these instances due to the "mistaken" rewards from the discriminator. Accordingly, using the CDR score (i.e., "+O") alleviates this problem by providing a sequence-level score, which better estimates the adequacy of the translations. The final model combines the advantages of both a smoother and dynamically-updated objective from the discriminator ("+D"), and a more accurate objective specifically designed for the translation task from the orientator ("+O").
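The bucketed analysis itself is straightforward to reproduce: group test sentences by source length and average a per-sentence score within each bucket. The sketch below assumes per-sentence scores are already computed; the bucket edges are illustrative assumptions.

```python
from collections import defaultdict

def bucket_scores(examples, edges=(15, 30, 45)):
    """examples: iterable of (source_length, per_sentence_score) pairs."""
    buckets = defaultdict(list)
    for length, score in examples:
        label = next((f"<= {e}" for e in edges if length <= e), f"> {edges[-1]}")
        buckets[label].append(score)
    return {label: sum(scores) / len(scores) for label, scores in buckets.items()}
```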
The CDR scores for all models degrade as the length of the source sentence increases. This is mainly because the inadequate translation problem is more severe on longer sentences for NMT models (Tu et al. 2016). The adversarial model (i.e., "+D") improves CDR scores, but the improvement degrades faster as sentence length increases. In contrast, our proposed approach consistently improves CDR performance in all length segments. Koehn and Knowles (2017) point out that the attention model does not always correspond to word alignment and may considerably diverge. Accordingly, the attention-matrix-based CDR score may not always correctly reflect the adequacy of generated sentences. However, our discriminator is able to give a smoother and dynamically-updated objective, and thus could provide more accurate adequacy scores for generated sentences. From the above quantitative and qualitative results, the discriminator indeed leads to better performance (i.e., "+D+O" vs. "+O").
Effect of the Discriminator
Case Study To better understand the advantage of our proposed model, we show a translation case in Figure 4. Specifically, we provide a Zh⇒En example with two translation results, from the RNNSearch and Adequacy-NMT models respectively, as well as the corresponding CDR and BLEU scores. We highlight in bold the parts that differ and lead to different translation quality. As seen, the latter part of the source sentence is not translated by the RNNSearch model, while our proposed model corrects this mistake. Accordingly, our model improves both the CDR and BLEU scores.
| 3,791 |
1811.08541
|
2901423179
|
Although Neural Machine Translation (NMT) models have advanced state-of-the-art performance in machine translation, they face problems like inadequate translation. We attribute this to the fact that the standard Maximum Likelihood Estimation (MLE) cannot judge the real translation quality due to several of its limitations. In this work, we propose an adequacy-oriented learning mechanism for NMT by casting translation as a stochastic policy in Reinforcement Learning (RL), where the reward is estimated by explicitly measuring translation adequacy. Benefiting from the sequence-level training of the RL strategy and a more accurate reward designed specifically for translation, our model outperforms multiple strong baselines, including (1) standard and coverage-augmented attention models with MLE-based training, and (2) advanced reinforcement and adversarial training strategies with rewards based on both word-level BLEU and character-level chrF3. Quantitative and qualitative analyses on different language pairs and NMT architectures demonstrate the effectiveness and universality of the proposed approach.
|
Recently, adversarial learning @cite_7 has been successfully applied to neural machine translation @cite_15 @cite_26 @cite_11 . In the adversarial framework, NMT models generally serve as the generator which defines the policy to generate the target sentence y given the source sentence x . A discriminator tries to distinguish the translation result @math from the human-generated one @math , given the source sentence @math .
|
{
"abstract": [
"In this paper, we study a new learning paradigm for Neural Machine Translation (NMT). Instead of maximizing the likelihood of the human translation as in previous works, we minimize the distinction between human translation and the translation given by an NMT model. To achieve this goal, inspired by the recent success of generative adversarial networks (GANs), we employ an adversarial training architecture and name it as Adversarial-NMT. In Adversarial-NMT, the training of the NMT model is assisted by an adversary, which is an elaborately designed Convolutional Neural Network (CNN). The goal of the adversary is to differentiate the translation result generated by the NMT model from that by human. The goal of the NMT model is to produce high quality translations so as to cheat the adversary. A policy gradient method is leveraged to co-train the NMT model and the adversary. Experimental results on English @math French and German @math English translation tasks show that Adversarial-NMT can achieve significantly better translation quality than several strong baselines.",
"This paper proposes an approach for applying GANs to NMT. We build a conditional sequence generative adversarial net which comprises of two adversarial sub models, a generator and a discriminator. The generator aims to generate sentences which are hard to be discriminated from human-translated sentences (i.e., the golden target sentences), And the discriminator makes efforts to discriminate the machine-generated sentences from human-translated ones. The two sub models play a mini-max game and achieve the win-win situation when they reach a Nash Equilibrium. Additionally, the static sentence-level BLEU is utilized as the reinforced objective for the generator, which biases the generation towards high BLEU points. During training, both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feedback the evaluations to guide the learning of the generator. Experimental results show that the proposed model consistently outperforms the traditional RNNSearch and the newly emerged state-of-the-art Transformer on English-German and Chinese-English translation tasks.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Small perturbations in the input can severely distort intermediate representations and thus impact translation quality of neural machine translation (NMT) models. In this paper, we propose to improve the robustness of NMT models with adversarial stability training. The basic idea is to make both the encoder and decoder in NMT models robust against input perturbations by enabling them to behave similarly for the original input and its perturbed counterpart. Experimental results on Chinese-English, English-German and English-French translation tasks show that our approaches can not only achieve significant improvements over strong NMT systems but also improve the robustness of NMT models."
],
"cite_N": [
"@cite_15",
"@cite_26",
"@cite_7",
"@cite_11"
],
"mid": [
"2607987856",
"2952030765",
"2099471712",
"2896388224"
]
}
|
Neural Machine Translation with Adequacy-Oriented Learning
|
During the past several years, rapid progress has been made in the field of Neural Machine Translation (NMT) (Kalchbrenner and Blunsom 2013;Sutskever, Vinyals, and Le 2014;Bahdanau, Cho, and Bengio 2015;Gehring et al. 2017;Wu et al. 2016;Vaswani et al. 2017).
Although NMT models have advanced the community, they still face the inadequate translation problem: one or multiple parts of the input sentence are not translated (Tu et al. 2016). We attribute this problem to the lack of a mechanism to guarantee that the generated translation is as sufficient as the human translation. NMT models are generally trained in an end-to-end manner to maximize the likelihood of the output sentence. Maximum Likelihood Estimation (MLE), however, cannot judge the real quality of the generated translation due to several limitations: 1. Exposure bias (Ranzato et al. 2016): the models are trained on the ground-truth data distribution, but at test time are used to generate target words based on previous model predictions, which can be erroneous;
Some recent work partially alleviates one or two of the above problems with advanced training strategies. For example, the first two problems are tackled by sequence-level training using the REINFORCE algorithm (Ranzato et al. 2016; Bahdanau et al. 2017), minimum risk training (Shen et al. 2016), beam search optimization (Wiseman and Rush 2016), or adversarial learning (Wu et al. 2017). The last problem can be alleviated by introducing an auxiliary reconstruction-based training objective to measure translation adequacy.
In this work, we aim to fully solve all three problems in a unified framework. Specifically, we model the translation as a stochastic policy in Reinforcement Learning (RL) and directly perform policy gradient updates. The RL reward is estimated on a complete sequence produced by the NMT model, which is able to correlate well with a sequence-level task-specific metric. To explicitly measure translation adequacy, we propose a novel metric called Coverage Difference Ratio (CDR), which is calculated by counting how many source words are under-translated via directly comparing the generated translation with the human translation. Benefiting from the sequence-level training of the RL strategy and a more accurate reward designed specifically for translation, the proposed approach is able to alleviate all the aforementioned limitations of MLE-based training.
We conduct experiments on Chinese⇒English and German⇔English translation tasks, using both the RNN-based NMT model (Bahdanau, Cho, and Bengio 2015) and the recently proposed TRANSFORMER (Vaswani et al. 2017). The consistent improvements across language pairs and NMT architectures demonstrate the effectiveness and universality of the proposed approach. The proposed adequacy-oriented learning improves translation performance not only over a standard attention model, but also over a coverage-augmented attention model (Tu et al. 2016) that alleviates the inadequate translation problem at the word level. In addition, the proposed metric, the CDR score, consistently outperforms the commonly-used word-level BLEU (Papineni et al. 2002) and character-level CHRF3 (Popović 2015) scores in both the reinforcement learning and adversarial learning frameworks, indicating the superiority and necessity of an adequacy-oriented metric in training effective NMT models.
Approach Intuition
In this work, we try to solve the three problems mentioned above in a unified framework. Our objective is three-fold: 1. We solve the exposure bias problem by modeling the translation as a stochastic policy in reinforcement learning (RL) and directly performing policy gradient update.
2. The RL reward is estimated on a complete sequence, which correlates well with either sequence-level BLEU or a more adequacy-oriented metric, as described below.
3. We design a sequence-level metric -Coverage Difference Ratio (CDR) -to explicitly measure translation adequacy which focuses on the commonly-cited weaknesses of NMT models: producing fluent yet inadequate translations. We expect that the model can benefit from linguistic insights that correlate well with human intuitions.
Coverage Difference Ratio (CDR) We measure translation adequacy by the number of under-translated words, obtained by comparing the generated translation with the human translation. We take an example to illustrate how to measure translation adequacy in terms of the coverage difference ratio. Figure 1(a) shows one inadequate translation. Following (Luong, Pham, and Manning 2015; Tu et al. 2016), we extract only one-to-one alignments (hard alignments) by selecting, for each target word, the source word with the highest alignment weight from the word alignments produced by the NMT model. A source word is considered to be translated when it is covered by the hard alignments, as shown in Figure 1(b). Comparing the source words covered by the generated translation with those covered by the human translation, we find that the two sets are very different for an inadequate translation. Specifically, the difference generally lies in the untranslated source words that cause the inadequate translation problem, indicating that the coverage difference ratio is a good way to measure the adequacy of a generated translation.
Figure 2: Architecture of adequacy-oriented NMT. The newly added orientator O reads the coverages of the generated and human translations to produce a CDR score for each generated translation, which guides the discriminator D to differentiate good generated translations from bad ones.
Formally, we calculate the CDR score of a given generated translation ŷ by
\mathrm{CDR}(\hat{y} \mid y, x) = 1 - \frac{|C_{\mathrm{ref}} \setminus C_{\mathrm{gen}}|}{|C_{\mathrm{ref}}|} \qquad (5)
where C_ref and C_gen are the sets of source words covered by the human translation and the generated translation, respectively, and C_ref \ C_gen denotes the source words covered in C_ref but not in C_gen. We use C_ref as the reference coverage to eliminate the effect of null-aligned source words, which are not aligned to any target word. As seen, CDR(ŷ | y, x) is a number between 0 and 1, where 1 means "completely adequate translation" and 0 means "completely inadequate translation". Taking Figure 1(b) as an example, the CDR score is 1 − (7 − 4)/7 ≈ 0.57.
Architecture
As shown in Figure 2, the proposed model consists of a generator, a discriminator, and an orientator.
Generator The generator G generates the translationŷ conditioned on the input sentence x. Because we need word alignments to calculate adequacy scores in terms of CDR, an attention-based NMT model is employed as the generator.
Orientator The orientator O reads the word alignments produced by NMT attention model when generating (or force decoding) the two translations and outputs an adequacy score for the generated translation in terms of the aforementioned CDR score. Then, the orientator is used to guide the discriminator to distinguish adequate translation from inadequate ones. Accordingly, adequate translations with higher CDR scores would contribute more to parameter tuning, as described in the following section.
Discriminator We employ an RNN-based discriminator to differentiate the generated translation from the human translation, given the input sentence. The discriminator reads the input sentence x and its translation (either y or ŷ), and uses two RNNs to summarize the two sentences individually. The concatenation of the two summarized representation vectors is fed into a fully-connected neural network.
Adequacy-Oriented Training
In order to train the system efficiently and effectively, we employ a periodical training strategy, which is commonly used in adversarial training (Goodfellow et al. 2014;Wu et al. 2017). Specifically, we optimize two networks with two objective functions and periodically freeze the parameters of each network during training.
Train Generator and Freeze Discriminator Following Wu et al. (2017), we use the REINFORCE algorithm (Williams 1992) to back-propagate the error signals from D to G, given the discretely generatedŷ from G. The objective of the generator is to maximize the expected reward:
L = \mathbb{E}_{(x, \hat{y}) \sim G_{\theta}}\big[ D(x, \hat{y}) \big] \qquad (6)
whose gradient is
\nabla_{\theta} L = \mathbb{E}_{(x, \hat{y}) \sim G_{\theta}}\big[ D(x, \hat{y}) \, \nabla_{\theta} \log G_{\theta}(\hat{y} \mid x) \big] \qquad (7)
The gradient is approximated by a sample from G using the REINFORCE algorithm (Williams 1992):
\nabla_{\theta} L \approx \hat{\nabla}_{\theta} = D(x, \hat{y}) \, \nabla_{\theta} \log G_{\theta}(\hat{y} \mid x) \qquad (8)
where \nabla_{\theta} \log G_{\theta}(\hat{y} \mid x) is the standard NMT gradient, calculated as in maximum likelihood estimation. Therefore, the final update function for the generator is:
\theta = \theta - \eta \, \hat{\nabla}_{\theta} \qquad (9)
where η is the learning rate. According to this update rule, the larger D(x, ŷ) is (i.e., ideally, the more adequate the generated translation ŷ), the larger the reward the NMT model receives, and thus the more the parameters are updated on the adequate training instance (x, ŷ).
Train Discriminator Oriented by Adequacy and Freeze Generator Ideally, a good translation ŷ should be assigned a high adequacy score D(x, ŷ) and thus contribute more to updating the generator. We therefore expect the discriminator not only to differentiate generated translations from human translations, but also to distinguish bad generated translations from good ones. Accordingly, a new objective of the discriminator is to assign each generated translation a precise score that is consistent with its adequacy:
\min_{D} \; \big| \mathrm{CDR}(\hat{y} \mid x, y) - D(x, \hat{y}) \big|^{2} \qquad (10)
where CDR(ŷ | x, y) is the coverage difference ratio of ŷ. As seen, a well-trained discriminator would assign a distinct score to each generated translation, which better measures its adequacy.
Related Work
This work is related to modeling translation as policy gradient and adequacy modeling. For the former, we take minimum risk training, reinforcement learning and adversarial learning as representative strategies.
Minimum Risk Training In response to the exposure bias and word-level loss problems of MLE training, Shen et al. (2016) minimize the expected loss in terms of evaluation metrics on the training data. Our simplified model is analogous to their MRT model, if we directly use CDR as the reward to update parameters:
\nabla_{\theta} = \mathrm{CDR}(\hat{y} \mid x, y) \, \nabla_{\theta} \log G_{\theta}(\hat{y} \mid x) \qquad (11)
The simplified model differs in that (1) we use an adequacy-oriented metric (i.e., CDR) while they use sequence-level BLEU, and (2) we only need to sample one candidate to calculate the reinforcement reward while they generate multiple samples to calculate the expected risk. In addition, our discriminator gives a smoother and dynamically-updated objective compared with directly using the adequacy-oriented metric, because the latter is highly sensitive to slight coverage differences (Koehn and Knowles 2017).
Reinforcement Learning Recent work shows that maximum likelihood training could be sub-optimal due to the different conditions between training and test modes (Ranzato et al. 2016). In order to address the exposure bias and the loss that does not operate at the sequence level, Ranzato et al. (2016) employ the REINFORCE algorithm (Williams 1992) to decide whether or not tokens from a sampled prediction could contribute to a high task-specific score (e.g., BLEU). Bahdanau et al. (2017) use the actor-critic method from reinforcement learning to directly optimize a task-specific score.
Adversarial Learning Recently, adversarial learning (Goodfellow et al. 2014) has been successfully applied to neural machine translation (Wu et al. 2017;Cheng et al. 2018). In the adversarial framework, NMT models generally serve as the generator which defines the policy to generate the target sentence y given the source sentence x. A discriminator tries to distinguish the translation resultŷ = G(x) from the human-generated one y, given the source sentence x.
If we remove the orientator O, our model rolls back to adversarial NMT, and the training objective of the discriminator D is rewritten as
\max_{D} \; \big\{ \log D(x, y) + \log\big(1 - D(x, \hat{y})\big) \big\} \qquad (12)
The goal of the discriminator is to push the likelihood of the human translation D(x, y) towards 1 and that of the generated translation D(x, ŷ) towards 0. As seen, the discriminator performs a binary classification, uniformly treating all generated translations as negative examples (i.e., labeled "0") and all human translations as positive examples (i.e., labeled "1"), regardless of the quality of the generated translations. However, intuitively, high-quality and low-quality translations should be treated differently by the discriminator; otherwise, inaccurate reward signals would be propagated back to the generator. In our proposed architecture, this problem is alleviated by replacing the simple binary outputs with the more informative adequacy-oriented metric CDR, which is calculated by directly comparing generated and human translations.
Adequacy Modeling The inadequate translation problem is a commonly-cited weakness of NMT models (Tu et al. 2016). A number of recent efforts have explored ways to alleviate this problem. For example, Tu et al. (2016) and Mi et al. (2016) employ a coverage vector as a lexical-level indicator of whether a source word is translated or not. Our approach is complementary to theirs, since they model adequacy learning at the word level inside the generator (i.e., the NMT model), while we model it at the sequence level outside the generator. We take the representative coverage mechanism (Tu et al. 2016) as another, stronger baseline model for its simplicity and efficiency, and experimental results show that our model can further improve performance.
In the context of adequacy-oriented training, previous work introduces an auxiliary objective to measure the adequacy of translation candidates, which is calculated by reconstructing generated translations back to the original inputs. Benefiting from the flexible framework of reinforcement training, we are able to directly compare generated translations with human translations and define a more straightforward metric, i.e., CDR, to measure the adequacy of generated sentences.
Experiments Setup
We conduct experiments on the widely-used Chinese (Zh)⇒English (En) and German (De)⇔English (En) translation tasks. For the latter, we use the IWSLT 2014 De⇒En and WMT 2014 En⇒De datasets; the former contains 153K sentence pairs and the latter consists of 4.56M sentence pairs. The 4-gram NIST BLEU score (Papineni et al. 2002) is used as the evaluation metric and the sign-test (Collins, Koehn, and Kučerová 2005) is employed to test statistical significance.
For training all neural models, we set the vocabulary size to 30K for Zh⇒En; for IWSLT 2014 De⇒En, we follow the preprocessing procedure used in Ranzato et al. (2016); and for WMT 2014 En⇒De, we borrow the preprocessing method described in Vaswani et al. (2017). We pre-train the discriminator on translation samples produced by the pre-trained generator. After that, the discriminator and the generator are trained together, and the generator is updated by the REINFORCE algorithm mentioned above. We also follow the training tips mentioned in Shen et al. (2016) and Wu et al. (2017). The hyper-parameter α, which controls the sharpness of the generator distribution in our system, is 1e-4; it can also be regarded as a baseline to reduce the variance of the REINFORCE algorithm. We also randomly choose 50% of the minibatches to be trained with our objective function and the rest with the MLE principle. In the MRT training strategy (Shen et al. 2016), the sample size is 25, the hyper-parameter α is 5e-3, and the loss function is the negative smoothed sentence-level BLEU. We validate our models on two representative model architectures, namely RNNSEARCH and TRANSFORMER. For the RNNSEARCH model, the mini-batch size is 80, the word-embedding dimension is 620, and the hidden layer size is 1000. We use a neural coverage model for RNNSEARCH-COVERAGE and the dimensionality of the coverage vector is 100. The baseline models are trained for 15 epochs and are used as the initial generator in the proposed framework. For the TRANSFORMER model, we implement our proposed approach on top of the open-source toolkit THUMT. The configurations in Vaswani et al. (2017) are used to train the baseline models.
Table 1: Evaluation of translation performance on Zh⇒En translation. "D" denotes the discriminator and "O" the orientator. "MRT" indicates minimum risk training (Shen et al. 2016), and "D_CNN" indicates adversarial training with a CNN-based discriminator (Wu et al. 2017). "# Para." denotes the number of parameters, and "Speed" denotes the training speed (words/second). "†" and "‡" indicate statistically significant difference (p < 0.05 and p < 0.01, respectively) from the corresponding baseline.
Table 1 lists the results of various translation models on the Zh⇒En corpus. As seen, all advanced systems significantly outperform the baseline system (i.e., RNNSEARCH), although there are still considerable differences among the variants.
Chinese-English Translation Task
Architectures of Discriminator (Rows 3-4) We evaluate two architectures for the discriminator. The CNN-based discriminator is composed of two convolution layers with a 3 × 3 window, two max-pooling layers with a 2 × 2 window, and one softmax layer. The feature map size is 10 and the feed-forward hidden size is 20. The RNN-based discriminator consists of two two-layer RNN encoders with 32 LSTM units and a fully-connected neural network with 32 units. We find that the RNN discriminator achieves performance similar to its CNN counterpart (37.59 vs. 37.54) while having a faster training speed (1.2K vs. 1.0K words/second). The main reason is that the CNN-based discriminator requires high computation and memory cost to apply multiple convolution and pooling layers to a large input matrix.
Adequacy Metrics for Orientator (Rows 5-7) As aforementioned, the CDR score can be directly used as a reward to update the parameters, which is in analogy to the MRT (Shen et al. 2016) except that we use 1-best sample while they use n-best samples. For comparison, we also used the word-level BLEU score (Row 5) and character-level CHRF3 score (Popović 2015) (Row 6) as the rewards.
As seen, this strategy consistently improves translation performance without introducing any new parameters. The extra computation cost mainly comes from generating the translation sentence and force decoding the human translation with the NMT model. We find that CDR not only outperforms its 1-best counterparts "O BLEU" and "O CHRF3", but also surpasses "MRT BLEU" using 25 samples. We attribute this to the fact that CDR can better estimate the adequacy of the translation, which is the key problem of NMT models, and goes beyond the simple low-level n-gram matching measured by BLEU and CHRF3.
Combining Them Together (Row 8) By combining the advantages of both reinforcement learning and the adequacy-oriented objective, our model achieves the best performance, which is 1.66 BLEU points better than the baseline "RNNSEARCH", up to 0.98 BLEU points better than using a single component, and significantly better than the "MRT BLEU" model. One more observation can be made: "+D+O" outperforms its "+O" counterpart (e.g., Row 8 vs. Row 7), which confirms our claim that the discriminator gives a smoother and dynamically-updated score than directly using the calculated one.
Table 2: Comparison with previous work on applying reinforcement learning to NMT on the IWSLT 2014 De⇒En translation task (existing end-to-end systems include the CNN encoder with a sequence-level objective of Ranzato et al. (2016), 20.73 BLEU, and Bahdanau et al. (2017)). "†" and "‡" indicate statistically significant difference (p < 0.05 and p < 0.01, respectively) from the RNNSEARCH model.
Table 3 (WMT 2014 En⇒De baselines): GNMT + RL 26.30; ConvS2S (Gehring et al. 2017) 26.43; Transformer (Base) (Vaswani et al. 2017) 27.3; Transformer (Big) (Vaswani et al. 2017) 28.
Working with Coverage Model (Rows 11-12) Tu et al. (2016) propose a coverage model to indicate whether a source word is translated or not, which alleviates the inadequate translation problem of NMT models. We argue that our model is complementary to theirs, because we model the adequacy learning outside the generator by using an additional adequacy-oriented discriminator, while they model it inside the generator. Experimental results validate our hypothesis: the proposed approach further improves performance by 0.58 BLEU points over the coverage-augmented model RNNSEARCH-COVERAGE.
English-German Translation Tasks
To compare with previous work on applying reinforcement learning to NMT (Ranzato et al. 2016; Bahdanau et al. 2017; Wiseman and Rush 2016; Wu et al. 2017), we first conduct experiments on the IWSLT 2014 De⇒En translation task. As listed in Table 2, we reproduce the results of adversarial training reported by Wu et al. (2017) (27.24 vs. 26.98), and our models achieve further improvements over it.
Table 4: Adequacy scores on 100 randomly selected sentences on the Zh⇒En task, measured by CDR and human evaluation ("MAN").
We also evaluate our model on the recently proposed TRANSFORMER model (Vaswani et al. 2017) on the WMT 2014 En⇒De corpus. As shown in Table 3, our models significantly improve performance in all cases. Combined with the previous results, our model consistently improves translation performance across various language pairs and NMT architectures, demonstrating the effectiveness and universality of the proposed approach.
Analysis
To better understand our models, we conduct extensive analyses on the Zh⇒En translation task.
Adequacy Evaluation To better evaluate adequacy, we randomly choose 100 sentences from the test set and ask two human evaluators to judge the quality of the generated translations. A five-point scale {1, 2, 3, 4, 5} is used, where "1" means that the translation is irrelevant to the source sentence, and "5" means that the translation is semantically and syntactically equivalent to the source sentence. Table 4 lists the results of the human evaluation and the proposed CDR score. First, our models consistently improve translation adequacy under both human evaluation and the CDR score, indicating that the proposed approaches indeed alleviate the inadequate translation problem. Second, the relative improvement on CDR is consistent with that on the subjective evaluation. The Pearson correlation coefficient between CDR and the manual evaluation score is 0.64, indicating that the proposed CDR is a reasonable metric for measuring translation adequacy.
Length Analysis We group sentences of similar lengths and compute both the BLEU score and CDR score for each group, as shown in Figure 3. The four length spans contain 1386, 2284, 1285, and 498 sentences, respectively. From the perspective of the BLEU score, the proposed model (i.e., "+D+O") outperforms RNNSEARCH in all length segments. In contrast, using discriminator only (i.e., "+D") outperforms RNNSEARCH in most cases, except long sentences (i.e., > 45). One possible reason is that it is difficult for the discriminator to differentiate generated translations from human translations for long source sentences, thus the generator cannot learn well about these instances due to the "mistaken" rewards from the discriminator. Accordingly, using the CDR score (i.e., "+O") alleviates this problem by providing a sequence-level score, which better estimates the adequacy of the translations. The final model combines the advantages of both a smoother and dynamically-updated objective from the discriminator ("+D"), and a more accurate objective specifically designed for the translation task from the orientator ("+O").
The CDR scores for all models degrade as the length of the source sentence increases. This is mainly because the inadequate translation problem is more severe on longer sentences for NMT models (Tu et al. 2016). The adversarial model (i.e., "+D") improves CDR scores, but the improvement degrades faster as sentence length increases. In contrast, our proposed approach consistently improves CDR performance in all length segments. Koehn and Knowles (2017) point out that the attention model does not always correspond to word alignment and may considerably diverge. Accordingly, the attention-matrix-based CDR score may not always correctly reflect the adequacy of generated sentences. However, our discriminator is able to give a smoother and dynamically-updated objective, and thus could provide more accurate adequacy scores for generated sentences. From the above quantitative and qualitative results, the discriminator indeed leads to better performance (i.e., "+D+O" vs. "+O").
Effect of the Discriminator
Case Study To better understand the advantage of our proposed model, we show a translation case in Figure 4. Specifically, we provide a Zh⇒En example with two translation results, from the RNNSearch and Adequacy-NMT models respectively, as well as the corresponding CDR and BLEU scores. We highlight in bold the parts that differ and lead to different translation quality. As seen, the latter part of the source sentence is not translated by the RNNSearch model, while our proposed model corrects this mistake. Accordingly, our model improves both the CDR and BLEU scores.
| 3,791 |
1811.08541
|
2901423179
|
Although Neural Machine Translation (NMT) models have advanced state-of-the-art performance in machine translation, they face problems like inadequate translation. We attribute this to the fact that the standard Maximum Likelihood Estimation (MLE) cannot judge the real translation quality due to several of its limitations. In this work, we propose an adequacy-oriented learning mechanism for NMT by casting translation as a stochastic policy in Reinforcement Learning (RL), where the reward is estimated by explicitly measuring translation adequacy. Benefiting from the sequence-level training of the RL strategy and a more accurate reward designed specifically for translation, our model outperforms multiple strong baselines, including (1) standard and coverage-augmented attention models with MLE-based training, and (2) advanced reinforcement and adversarial training strategies with rewards based on both word-level BLEU and character-level chrF3. Quantitative and qualitative analyses on different language pairs and NMT architectures demonstrate the effectiveness and universality of the proposed approach.
|
Inadequate translation problem is a commonly-cited weakness of NMT models @cite_16 . A number of recent efforts have explored ways to alleviate this problem. For example, tu2016modeling and Mi:2016:EMNLP employ coverage vector as a lexical-level indicator to indicate whether a source word is translated or not. Zheng:2018:TACL and Meng:2018:IJCAI move one step further and directly model translated and untranslated source contents by operating on the attention context vector. He:2017:NIPS use a prediction network to estimate the future cost of translating the uncovered source words. Our approach is complementary to theirs since they model the adequacy learning at the word-level inside the generator (i.e., NMT models), while we model it at the sequence-level outside the generator. We take the representative coverage mechanism @cite_16 as another stronger baseline model for its simplicity and efficiency, and experimental results show that our model can further improve performance.
|
{
"abstract": [
"Attention mechanism has enhanced state-of-the-art Neural Machine Translation (NMT) by jointly learning to align and translate. It tends to ignore past alignment information, however, which often leads to over-translation and under-translation. To address this problem, we propose coverage-based NMT in this paper. We maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust future attention, which lets NMT system to consider more about untranslated source words. Experiments show that the proposed approach significantly improves both translation quality and alignment quality over standard attention-based NMT."
],
"cite_N": [
"@cite_16"
],
"mid": [
"2410539690"
]
}
|
Neural Machine Translation with Adequacy-Oriented Learning
|
During the past several years, rapid progress has been made in the field of Neural Machine Translation (NMT) (Kalchbrenner and Blunsom 2013;Sutskever, Vinyals, and Le 2014;Bahdanau, Cho, and Bengio 2015;Gehring et al. 2017;Wu et al. 2016;Vaswani et al. 2017).
Although NMT models have advanced the community, they still face the inadequate translation problem: one or multiple parts of the input sentence are not translated (Tu et al. 2016). We attribute this problem to the lack of a mechanism to guarantee that the generated translation is as sufficient as the human translation. NMT models are generally trained in an end-to-end manner to maximize the likelihood of the output sentence. Maximum Likelihood Estimation (MLE), however, cannot judge the real quality of the generated translation due to several limitations: 1. Exposure bias (Ranzato et al. 2016): the models are trained on the ground-truth data distribution, but at test time are used to generate target words based on previous model predictions, which can be erroneous;
Some recent work partially alleviates one or two of the above problems with advanced training strategies. For example, the first two problems are tackled by sequence-level training using the REINFORCE algorithm (Ranzato et al. 2016; Bahdanau et al. 2017), minimum risk training (Shen et al. 2016), beam search optimization (Wiseman and Rush 2016), or adversarial learning (Wu et al. 2017). The last problem can be alleviated by introducing an auxiliary reconstruction-based training objective to measure translation adequacy.
In this work, we aim to fully solve all three problems in a unified framework. Specifically, we model the translation as a stochastic policy in Reinforcement Learning (RL) and directly perform policy gradient updates. The RL reward is estimated on a complete sequence produced by the NMT model, which is able to correlate well with a sequence-level task-specific metric. To explicitly measure translation adequacy, we propose a novel metric called Coverage Difference Ratio (CDR), which is calculated by counting how many source words are under-translated via directly comparing the generated translation with the human translation. Benefiting from the sequence-level training of the RL strategy and a more accurate reward designed specifically for translation, the proposed approach is able to alleviate all the aforementioned limitations of MLE-based training.
We conduct experiments on Chinese⇒English and German⇔English translation tasks, using both the RNN-based NMT model (Bahdanau, Cho, and Bengio 2015) and the recently proposed TRANSFORMER (Vaswani et al. 2017). The consistent improvements across language pairs and NMT architectures demonstrate the effectiveness and universality of the proposed approach. The proposed adequacy-oriented learning improves translation performance not only over a standard attention model, but also over a coverage-augmented attention model (Tu et al. 2016) that alleviates the inadequate translation problem at the word level. In addition, the proposed metric, the CDR score, consistently outperforms the commonly-used word-level BLEU (Papineni et al. 2002) and character-level CHRF3 (Popović 2015) scores in both the reinforcement learning and adversarial learning frameworks, indicating the superiority and necessity of an adequacy-oriented metric in training effective NMT models.
Approach Intuition
In this work, we try to solve the three problems mentioned above in a unified framework. Our objective is three-fold: 1. We solve the exposure bias problem by modeling the translation as a stochastic policy in reinforcement learning (RL) and directly performing policy gradient update.
2. The RL reward is estimated on a complete sequence, which correlates well with either sequence-level BLEU or a more adequacy-oriented metric, as described below.
3. We design a sequence-level metric -Coverage Difference Ratio (CDR) -to explicitly measure translation adequacy which focuses on the commonly-cited weaknesses of NMT models: producing fluent yet inadequate translations. We expect that the model can benefit from linguistic insights that correlate well with human intuitions.
Coverage Difference Ratio (CDR) We measure translation adequacy by the number of under-translated words, obtained by comparing the generated translation with the human translation. We take an example to illustrate how to measure translation adequacy in terms of the coverage difference ratio. Figure 1(a) shows one inadequate translation. Following (Luong, Pham, and Manning 2015; Tu et al. 2016), we extract only one-to-one alignments (hard alignments) by selecting, for each target word, the source word with the highest alignment weight from the word alignments produced by the NMT model. A source word is considered to be translated when it is covered by the hard alignments, as shown in Figure 1(b). Comparing the source words covered by the generated translation with those covered by the human translation, we find that the two sets are very different for an inadequate translation. Specifically, the difference generally lies in the untranslated source words that cause the inadequate translation problem, indicating that the coverage difference ratio is a good way to measure the adequacy of a generated translation.
Figure 2: Architecture of adequacy-oriented NMT. The newly added orientator O reads the coverages of the generated and human translations to produce a CDR score for each generated translation, which guides the discriminator D to differentiate good generated translations from bad ones.
Formally, we calculate the CDR score of a given generated translation ŷ by
\mathrm{CDR}(\hat{y} \mid y, x) = 1 - \frac{|C_{\mathrm{ref}} \setminus C_{\mathrm{gen}}|}{|C_{\mathrm{ref}}|} \qquad (5)
where C ref and C gen is the set of source words covered by human translation and generated translation, respectively. C ref \ C gen denotes the covered source words in C ref but not in C gen . We use C ref as the reference coverage to eliminate the effect of null-aligned source words which are not aligned to any target word. As seen, CDR(ŷ|y, x) is a number between 0 and 1, where 1 means "completely adequate translation" and 0 means "completely inadequate translation". Taking Figure 1(b) as an example, the CDR score is 1 − (7 − 4)/7 = 0.57.
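Since the computation in Eq. (5) reduces to a set difference over covered source positions, it can be illustrated with a minimal Python sketch (our own illustration, not the authors' code; the function name and set-based inputs are assumptions):

```python
def cdr_score(c_ref, c_gen):
    """Coverage Difference Ratio (Eq. 5): 1 - |C_ref \\ C_gen| / |C_ref|.

    c_ref: source-word positions covered by the hard alignments of the human translation.
    c_gen: source-word positions covered by the hard alignments of the generated translation.
    """
    c_ref, c_gen = set(c_ref), set(c_gen)
    if not c_ref:                      # degenerate case: no covered source words in the reference
        return 1.0
    under_translated = c_ref - c_gen   # covered by the reference but missed by the hypothesis
    return 1.0 - len(under_translated) / len(c_ref)

# Toy check mirroring the example above: |C_ref| = 7, 4 positions also covered
# by the generated translation -> CDR = 1 - 3/7 ≈ 0.57.
print(round(cdr_score({0, 1, 2, 3, 4, 5, 6}, {0, 1, 2, 3}), 2))  # 0.57
```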
Architecture
As shown in Figure 2, the proposed model consists of a generator, a discriminator, and an orientator.
Generator The generator G generates the translation ŷ conditioned on the input sentence x. Because we need word alignments to calculate adequacy scores in terms of CDR, an attention-based NMT model is employed as the generator.
Orientator The orientator O reads the word alignments produced by the NMT attention model when generating (or force decoding) the two translations and outputs an adequacy score for the generated translation in terms of the aforementioned CDR score. Then, the orientator is used to guide the discriminator to distinguish adequate translations from inadequate ones. Accordingly, adequate translations with higher CDR scores contribute more to parameter tuning, as described in the following section.
Discriminator We employ an RNN-based discriminator to differentiate the generated translation from the human translation, given the input sentence. The discriminator reads the input sentence x and its translation (either y or ŷ), and uses two RNNs to summarize the two sentences individually. The concatenation of the two summary representation vectors is fed into a fully-connected neural network.
Adequacy-Oriented Training
In order to train the system efficiently and effectively, we employ a periodical training strategy, which is commonly used in adversarial training (Goodfellow et al. 2014;Wu et al. 2017). Specifically, we optimize two networks with two objective functions and periodically freeze the parameters of each network during training.
Train Generator and Freeze Discriminator Following Wu et al. (2017), we use the REINFORCE algorithm (Williams 1992) to back-propagate the error signals from D to G, given the discretely generated ŷ from G. The objective of the generator is to maximize the expected reward:

L = E_{(x,ŷ)∈G_θ}[D(x, ŷ)]    (6)

whose gradient is

∇_θ L = E_{(x,ŷ)∈G_θ}[D(x, ŷ) ∇_θ log G_θ(ŷ|x)]    (7)

The gradient is approximated by a sample from G using the REINFORCE algorithm (Williams 1992):

∇_θ L ≈ ∇̂_θ L = D(x, ŷ) ∇_θ log G_θ(ŷ|x)    (8)

where ∇_θ log G_θ(ŷ|x) is the standard NMT gradient calculated by maximum likelihood estimation. Therefore, the final update rule for the generator is:

θ = θ − η ∇̂_θ L    (9)

where η is the learning rate. Based on this update rule, when D(x, ŷ) is large (i.e., ideally, when the generated translation ŷ has a high adequacy score), the NMT model receives a larger reward, and the parameters are updated more towards the adequate training instance (x, ŷ).
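A minimal PyTorch-style sketch of this generator step is given below. It is our own illustration under assumed interfaces (a `generator.sample` method returning a sampled translation with its token log-probabilities, and a discriminator callable returning D(x, ŷ)), not the authors' implementation:

```python
import torch

def reinforce_generator_step(generator, discriminator, optimizer, src_batch):
    """One REINFORCE update of the generator (Eqs. 6-9), with assumed interfaces."""
    hyp_batch, log_probs = generator.sample(src_batch)   # y_hat ~ G_theta(.|x), per-token log-probs
    with torch.no_grad():                                # the reward is a constant w.r.t. theta
        reward = discriminator(src_batch, hyp_batch)     # D(x, y_hat), one score per sentence
    # Surrogate loss: reward-weighted negative log-likelihood of the sampled translation,
    # whose gradient matches D(x, y_hat) * grad_theta log G_theta(y_hat | x) up to sign.
    loss = -(reward * log_probs.sum(dim=-1)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                     # parameter update with learning rate eta
```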
Train Discriminator Oriented by Adequacy and Freeze
Generator Ideally, a good translation ŷ should be assigned a high adequacy score D(x, ŷ) and thus contribute more to updating the generator. We therefore expect the discriminator not only to differentiate generated translations from human translations but also to distinguish bad generated translations from good ones. Accordingly, the new objective of the discriminator is to assign each generated translation a precise score that is consistent with its adequacy:

min_D |CDR(ŷ|x, y) − D(x, ŷ)|²    (10)

where CDR(ŷ|x, y) is the coverage difference ratio of ŷ. As seen, a well-trained discriminator assigns a distinct score to each generated translation, which better measures its adequacy.
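As a companion to the generator step, a minimal sketch of this adequacy-oriented discriminator update is shown below (again our own illustration with assumed interfaces; the CDR targets are assumed to be precomputed by the orientator):

```python
import torch.nn.functional as F

def adequacy_discriminator_step(discriminator, optimizer, src_batch, hyp_batch, cdr_targets):
    """One discriminator update oriented by adequacy (Eq. 10): regress D(x, y_hat)
    towards the CDR score of each sampled translation instead of a binary label."""
    pred = discriminator(src_batch, hyp_batch)   # D(x, y_hat), expected to lie in [0, 1]
    loss = F.mse_loss(pred, cdr_targets)         # |CDR(y_hat | x, y) - D(x, y_hat)|^2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```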
Related Work
This work is related to modeling translation as policy gradient and adequacy modeling. For the former, we take minimum risk training, reinforcement learning and adversarial learning as representative strategies.
Minimum Risk Training In response to the exposure bias and word-level loss problems of MLE training, Shen et al. (2016) minimize the expected loss in terms of evaluation metrics on the training data. Our simplified model is analogous to their MRT model if we directly use CDR as the reward to update the parameters:

∇_θ L = CDR(ŷ|x, y) ∇_θ log G_θ(ŷ|x)    (11)

The simplified model differs in that (1) we use an adequacy-oriented metric (i.e., CDR) while they use sequence-level BLEU, and (2) we only need to sample one candidate to calculate the reinforcement reward while they generate multiple samples to calculate the expected risk. In addition, our discriminator gives a smoother and dynamically-updated objective compared with directly using the adequacy-oriented metric, because the latter is highly sensitive to slight coverage differences (Koehn and Knowles 2017).
Reinforcement Learning Recent work shows that maximum likelihood training could be sub-optimal due to the different conditions between training and test modes (Ranzato et al. 2016). In order to address the exposure bias and the loss that does not operate at the sequence level, Ranzato et al. (2016) employ the REINFORCE algorithm (Williams 1992) to decide whether or not tokens from a sampled prediction contribute to a high task-specific score (e.g., BLEU). Bahdanau et al. (2017) use the actor-critic method from reinforcement learning to directly optimize a task-specific score.
Adversarial Learning Recently, adversarial learning (Goodfellow et al. 2014) has been successfully applied to neural machine translation (Wu et al. 2017; Cheng et al. 2018). In the adversarial framework, NMT models generally serve as the generator which defines the policy to generate the target sentence y given the source sentence x. A discriminator tries to distinguish the translation result ŷ = G(x) from the human-generated one y, given the source sentence x.
If we remove the orientator O, our model rolls back to adversarial NMT, and the training objective of the discriminator D becomes

max_D {log D(x, y) + log(1 − D(x, ŷ))}    (12)

The goal of the discriminator is to push the score of the human translation D(x, y) towards 1 and that of the generated translation D(x, ŷ) towards 0. As seen, the discriminator performs a binary classification, uniformly treating all generated translations as negative examples (i.e., labeled "0") and all human translations as positive examples (i.e., labeled "1"), regardless of the quality of the generated translations. Intuitively, however, high-quality and low-quality translations should be treated differently by the discriminator; otherwise, inaccurate reward signals are propagated back to the generator. In our proposed architecture, this problem is alleviated by replacing the simple binary outputs with the more informative adequacy-oriented metric CDR, which is calculated by directly comparing generated and human translations.
Adequacy Modeling The inadequate translation problem is a commonly-cited weakness of NMT models (Tu et al. 2016). A number of recent efforts have explored ways to alleviate this problem. For example, Tu et al. (2016) and Mi et al. (2016) introduce coverage-based mechanisms into NMT models to keep track of which source words have been translated. Our approach is complementary to theirs since they model the adequacy learning at the word level inside the generator (i.e., NMT models), while we model it at the sequence level outside the generator. We take the representative coverage mechanism (Tu et al. 2016) as another, stronger baseline model for its simplicity and efficiency, and experimental results show that our model can further improve its performance.
In the context of adequacy-oriented training, prior work introduces an auxiliary objective to measure the adequacy of translation candidates, which is calculated by reconstructing generated translations back to the original inputs. Benefiting from the flexible framework of reinforcement training, we are able to directly compare generated translations with human translations and define a more straightforward metric, i.e., CDR, to measure the adequacy of generated sentences.
Experiments Setup
We conduct experiments on the widely-used Chinese (Zh)⇒English (En) and German (De)⇔English (En) translation tasks. The IWSLT 2014 De⇒En corpus contains 153K sentence pairs and the WMT 2014 En⇒De corpus consists of 4.56M sentence pairs. The 4-gram NIST BLEU score (Papineni et al. 2002) is used as the evaluation metric and the sign-test (Collins, Koehn, and Kučerová 2005) is employed to test statistical significance.
For training all neural models, we set the vocabulary size to 30K for Zh⇒En; for IWSLT 2014 De⇒En, we follow the preprocessing procedure used in Ranzato et al. (2016), and for WMT 2014 En⇒De, the preprocessing described in Vaswani et al. (2017) is borrowed. We pre-train the discriminator on translation samples produced by the pre-trained generator. After that, the discriminator and the generator are trained together, and the generator is updated by the REINFORCE algorithm mentioned above. We also follow the training tips mentioned in Shen et al. (2016) and Wu et al. (2017). The hyper-parameter α, which controls the sharpness of the generator distribution in our system, is 1e-4; it can also be regarded as a baseline to reduce the variance of the REINFORCE algorithm. We also randomly choose 50% of the minibatches to be trained with our objective function and the other 50% with the MLE principle. In the MRT training strategy (Shen et al. 2016), the sample size is 25, the hyper-parameter α is 5e-3, and the loss function is the negative smoothed sentence-level BLEU. We validate our models on two representative model architectures, namely RNNSEARCH and TRANSFORMER. For the RNNSEARCH model, the mini-batch size is 80, the word-embedding dimension is 620, and the hidden layer size is 1000. We use a neural coverage model for RNNSEARCH-COVERAGE and the dimensionality of the coverage vector is 100. The baseline models are trained for 15 epochs and used as the initial generator in the proposed framework. For the TRANSFORMER model, we implement our proposed approach on top of the open-source toolkit THUMT. The configurations in Vaswani et al. (2017) are used to train the baseline models.

Table 1: Evaluation of translation performance on Zh⇒En translation. "D" denotes discriminator and "O" denotes orientator. "MRT" indicates minimum risk training (Shen et al. 2016), and "D CNN" indicates adversarial training with a CNN-based discriminator (Wu et al. 2017). "# Para." denotes the number of parameters, and "Speed" denotes the training speed (words/second). "†" and "‡" indicate statistically significant difference (p < 0.05 and p < 0.01 respectively) from the corresponding baseline.

Chinese-English Translation Task
Table 1 lists the results of various translation models on the Zh⇒En corpus. As seen, all advanced systems significantly outperform the baseline system (i.e., RNNSEARCH), although there are still considerable differences among the different variants.
Architectures of Discriminator (Rows 3-4) We evaluate two architectures for the discriminator. The CNN-based discriminator is composed of two convolution layers with 3 × 3 windows, two max-pooling layers with 2 × 2 windows, and one softmax layer. The feature map size is 10 and the feed-forward hidden size is 20. The RNN-based discriminator consists of two two-layer RNN encoders with 32 LSTM units and a fully-connected neural network with 32 units. We find that the RNN discriminator achieves performance similar to its CNN counterpart (37.59 vs. 37.54) while having a faster training speed (1.2K vs. 1.0K words/second). The main reason is that the CNN-based discriminator requires high computation and space cost to apply multiple convolution and pooling layers to a large input matrix.
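For concreteness, a PyTorch-style sketch of such an RNN-based discriminator is shown below. The two two-layer, 32-unit LSTM encoders and the 32-unit fully-connected layer follow the description above, while the embedding size and the sigmoid output head are our own assumptions added to keep the snippet self-contained:

```python
import torch
import torch.nn as nn

class RNNDiscriminator(nn.Module):
    """Two LSTM encoders summarize the source sentence and its translation;
    the concatenated summaries are scored by a small fully-connected network."""

    def __init__(self, vocab_size, emb_dim=64, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.src_rnn = nn.LSTM(emb_dim, hidden, num_layers=2, batch_first=True)
        self.trg_rnn = nn.LSTM(emb_dim, hidden, num_layers=2, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, src_ids, trg_ids):
        _, (src_h, _) = self.src_rnn(self.embed(src_ids))    # final hidden state as summary
        _, (trg_h, _) = self.trg_rnn(self.embed(trg_ids))
        summary = torch.cat([src_h[-1], trg_h[-1]], dim=-1)  # concatenate the two summaries
        return self.ffn(summary).squeeze(-1)                 # score in [0, 1]
```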
Adequacy Metrics for Orientator (Rows 5-7) As mentioned above, the CDR score can be directly used as a reward to update the parameters, which is analogous to MRT (Shen et al. 2016) except that we use the 1-best sample while they use n-best samples. For comparison, we also used the word-level BLEU score (Row 5) and the character-level CHRF3 score (Popović 2015) (Row 6) as the rewards.
As seen, this strategy consistently improves translation performance without introducing any new parameters. The extra computation cost mainly comes from generating the translation and force decoding the human translation with the NMT model. We find that CDR not only outperforms its 1-best counterparts "O BLEU" and "O CHRF3", but also surpasses "MRT BLEU", which uses 25 samples. We attribute this to the fact that CDR better estimates the adequacy of the translation, which is the key problem of NMT models, and goes beyond the simple low-level n-gram matching measured by BLEU and CHRF3.
Combining Them Together (Row 8) By combining the advantages of both reinforcement learning and the adequacy-oriented objective, our model achieves the best performance, which is 1.66 BLEU points better than the baseline "RNNSEARCH", up to 0.98 BLEU points better than using a single component, and a significant improvement over the "MRT BLEU" model. One more observation can be made: "+D+O" outperforms its "+O" counterpart (e.g., Row 8 vs. Row 7), which confirms our claim that the discriminator gives a smoother and dynamically-updated score than directly using the calculated one.

Table 2 (caption): Comparing with previous works of applying reinforcement learning for NMT on the IWSLT 2014 De⇒En translation task; existing end-to-end NMT systems include (Ranzato et al. 2016), a CNN encoder with a sequence-level objective (20.73 BLEU), and (Bahdanau et al. 2017). "†" and "‡" indicate statistically significant difference (p < 0.05 and p < 0.01 respectively) from the RNNSEARCH model.

Table 3 (partial): Model / BLEU: GNMT + RL 26.30; ConvS2S (Gehring et al. 2017) 26.43; Transformer (Base) (Vaswani et al. 2017) 27.3; Transformer (Big) (Vaswani et al. 2017) 28.
Working with Coverage Model (Rows 11-12) Tu et al. (2016) propose a coverage model to indicate whether a source word is translated or not, which alleviates the inadequate translation problem of NMT models. We argue that our model is complementary to theirs, because we model the adequacy learning outside the generator by using an additional adequacy-oriented discriminator, while they model it inside the generator. Experimental results validate our hypothesis: the proposed approach further improves performance by 0.58 BLEU points over the coverage-augmented model RNNSEARCH-COVERAGE.
English-German Translation Tasks
To compare with previous work applying reinforcement learning for NMT (Ranzato et al. 2016; Bahdanau et al. 2017; Wiseman and Rush 2016; Wu et al. 2017), we first conduct experiments on the IWSLT 2014 De⇒En translation task. As listed in Table 2, we reproduce the results of adversarial training reported by Wu et al. (2017) (27.24 vs. 26.98), and further improvements are achieved by our models.

Table 4 (caption): Adequacy scores on 100 randomly selected sentences of the Zh⇒En task, measured by CDR and human evaluation ("MAN").
We also evaluate our model on the recently proposed TRANSFORMER model (Vaswani et al. 2017) on the WMT 2014 En⇒De corpus. As shown in Table 3, our models significantly improve performance in all cases. Combined with the previous results, our model consistently improves translation performance across various language pairs and NMT architectures, demonstrating the effectiveness and universality of the proposed approach.
Analysis
To better understand our models, we conduct extensive analyses on the Zh⇒En translation task.
Adequacy Evaluation To better evaluate adequacy, we randomly choose 100 sentences from the test set and ask two human evaluators to judge the quality of the generated translations. Five scales are set up, i.e., {1, 2, 3, 4, 5}, where "1" means that the translation is irrelevant to the source sentence, and "5" means that the translation is semantically and syntactically equivalent to the source sentence. Table 4 lists the results of the human evaluation and the proposed CDR score. First, our models consistently improve translation adequacy under both human evaluation and the CDR score, indicating that the proposed approaches indeed alleviate the inadequate translation problem. Second, the relative improvement on CDR is consistent with that on the subjective evaluation. The Pearson correlation coefficient between CDR and the manual evaluation score is 0.64, indicating that the proposed CDR is a reasonable metric to measure translation adequacy.
Length Analysis We group sentences of similar lengths and compute both the BLEU score and the CDR score for each group, as shown in Figure 3. The four length spans contain 1386, 2284, 1285, and 498 sentences, respectively. From the perspective of the BLEU score, the proposed model (i.e., "+D+O") outperforms RNNSEARCH in all length segments. In contrast, using the discriminator only (i.e., "+D") outperforms RNNSEARCH in most cases, except for long sentences (i.e., > 45). One possible reason is that it is difficult for the discriminator to differentiate generated translations from human translations for long source sentences, so the generator cannot learn well on these instances due to the "mistaken" rewards from the discriminator. Accordingly, using the CDR score (i.e., "+O") alleviates this problem by providing a sequence-level score, which better estimates the adequacy of the translations. The final model combines the advantages of both a smoother and dynamically-updated objective from the discriminator ("+D") and a more accurate objective specifically designed for the translation task from the orientator ("+O").
The CDR scores of all models degrade as the length of the source sentence increases. This is mainly because the inadequate translation problem is more serious on longer sentences for NMT models (Tu et al. 2016). The adversarial model (i.e., "+D") improves CDR scores, but the improvement degrades faster as the sentence length increases. In contrast, our proposed approach consistently improves CDR performance in all length segments. Koehn and Knowles (2017) point out that the attention model does not always correspond to word alignment and may considerably diverge. Accordingly, the attention-matrix-based CDR score may not always correctly reflect the adequacy of generated sentences. However, our discriminator is able to give a smoother and dynamically-updated objective, and thus can provide more accurate adequacy scores for generated sentences. From the above quantitative and qualitative results, the discriminator indeed leads to better performance (i.e., "+D+O" vs. "+O").
Effect of the Discriminator
Case Study To better understand the advantage of our proposed model, we show a translation case in Figure 4. Specifically, we provide a Zh⇒En example with two translation results from the RNNSearch and Adequacy-NMT models, respectively, as well as the corresponding CDR and BLEU scores. We highlight in bold the parts where the two translations differ and which lead to different translation quality. As seen, the latter part of the source sentence is not translated by the RNNSearch model, while our proposed model corrects this mistake. Accordingly, our model improves both the CDR and BLEU scores.
| 3,791 |
1811.08541
|
2901423179
|
Although Neural Machine Translation (NMT) models have advanced state-of-the-art performance in machine translation, they face problems like the inadequate translation. We attribute this to that the standard Maximum Likelihood Estimation (MLE) cannot judge the real translation quality due to its several limitations. In this work, we propose an adequacy-oriented learning mechanism for NMT by casting translation as a stochastic policy in Reinforcement Learning (RL), where the reward is estimated by explicitly measuring translation adequacy. Benefiting from the sequence-level training of RL strategy and a more accurate reward designed specifically for translation, our model outperforms multiple strong baselines, including (1) standard and coverage-augmented attention models with MLE-based training, and (2) advanced reinforcement and adversarial training strategies with rewards based on both word-level BLEU and character-level chrF3. Quantitative and qualitative analyses on different language pairs and NMT architectures demonstrate the effectiveness and universality of the proposed approach.
|
"D" denotes discriminator and "O" denotes orientator. "MRT" indicates minimum risk training @cite_22 , and "D @math " indicates adversarial training with a CNN-based discriminator @cite_15 . "# Para." denotes the number of parameters, and "Speed" denotes the training speed (words/second). "@math" and "@math" indicate statistically significant difference ( @math and @math respectively) from the corresponding baseline.
|
{
"abstract": [
"In this paper, we study a new learning paradigm for Neural Machine Translation (NMT). Instead of maximizing the likelihood of the human translation as in previous works, we minimize the distinction between human translation and the translation given by an NMT model. To achieve this goal, inspired by the recent success of generative adversarial networks (GANs), we employ an adversarial training architecture and name it as Adversarial-NMT. In Adversarial-NMT, the training of the NMT model is assisted by an adversary, which is an elaborately designed Convolutional Neural Network (CNN). The goal of the adversary is to differentiate the translation result generated by the NMT model from that by human. The goal of the NMT model is to produce high quality translations so as to cheat the adversary. A policy gradient method is leveraged to co-train the NMT model and the adversary. Experimental results on English @math French and German @math English translation tasks show that Adversarial-NMT can achieve significantly better translation quality than several strong baselines.",
"We propose minimum risk training for end-to-end neural machine translation. Unlike conventional maximum likelihood estimation, minimum risk training is capable of optimizing model parameters directly with respect to arbitrary evaluation metrics, which are not necessarily differentiable. Experiments show that our approach achieves significant improvements over maximum likelihood estimation on a state-of-the-art neural machine translation system across various languages pairs. Transparent to architectures, our approach can be applied to more neural networks and potentially benefit more NLP tasks."
],
"cite_N": [
"@cite_15",
"@cite_22"
],
"mid": [
"2607987856",
"2195405088"
]
}
|
Neural Machine Translation with Adequacy-Oriented Learning
|
During the past several years, rapid progress has been made in the field of Neural Machine Translation (NMT) (Kalchbrenner and Blunsom 2013;Sutskever, Vinyals, and Le 2014;Bahdanau, Cho, and Bengio 2015;Gehring et al. 2017;Wu et al. 2016;Vaswani et al. 2017).
Although NMT models have advanced the community, they still face the inadequate translation problem: one or multiple parts of the input sentence are not translated (Tu et al. 2016). We attribute this problem to the lack of a mechanism to guarantee that the generated translation is as sufficient as the human translation. NMT models are generally trained in an end-to-end manner to maximize the likelihood of the output sentence. Maximum Likelihood Estimation (MLE), however, cannot judge the real quality of the generated translation due to several limitations: 1. Exposure bias (Ranzato et al. 2016): the models are trained on the ground-truth data distribution, but at test time are used to generate target words based on previous model predictions, which can be erroneous; 2. Word-level loss: the training objective is defined at the word level and does not correlate well with sequence-level evaluation metrics; 3. Lack of adequacy modeling: likelihood does not explicitly measure whether every part of the input sentence is translated, which allows fluent yet inadequate translations.
Some recent work partially alleviates one or two of the above problems with advanced training strategies. For example, the first two problems are tackled by sequence-level training using the REINFORCE algorithm (Ranzato et al. 2016; Bahdanau et al. 2017), minimum risk training (Shen et al. 2016), beam search optimization (Wiseman and Rush 2016) or adversarial learning (Wu et al. 2017). The last problem can be alleviated by introducing an auxiliary reconstruction-based training objective to measure translation adequacy.
In this work, we aim to fully solve all three problems in a unified framework. Specifically, we model the translation as a stochastic policy in Reinforcement Learning (RL) and directly perform policy gradient updates. The RL reward is estimated on a complete sequence produced by the NMT model, which correlates well with a sequence-level, task-specific metric. To explicitly measure translation adequacy, we propose a novel metric called Coverage Difference Ratio (CDR), which is calculated by counting how many source words are under-translated via directly comparing the generated translation with the human translation. Benefiting from the sequence-level training of the RL strategy and a more accurate reward designed specifically for translation, the proposed approach is able to alleviate all the aforementioned limitations of MLE-based training.
We conduct experiments on Chinese⇒English and German⇔English translation tasks, using both the RNN-based NMT model (Bahdanau, Cho, and Bengio 2015) and the recently proposed TRANSFORMER (Vaswani et al. 2017). The consistent improvements across language pairs and NMT architectures demonstrate the effectiveness and universality of the proposed approach. The proposed adequacy-oriented learning improves translation performance not only over a standard attention model, but also over a coverage-augmented attention model (Tu et al. 2016) that alleviates the inadequate translation problem at the word level. In addition, the proposed metric, the CDR score, consistently outperforms the commonly-used word-level BLEU (Papineni et al. 2002) and character-level CHRF3 (Popović 2015) scores in both the reinforcement learning and adversarial learning frameworks, indicating the superiority and necessity of an adequacy-oriented metric in training effective NMT models.
Approach Intuition
In this work, we try to solve the three problems mentioned above in a unified framework. Our objective is three-fold: 1. We solve the exposure bias problem by modeling the translation as a stochastic policy in reinforcement learning (RL) and directly performing policy gradient update.
2. The RL reward is estimated on a complete sequence, which correlates well with either sequence-level BLEU or a more adequacy-oriented metric, as described below.
3. We design a sequence-level metric, Coverage Difference Ratio (CDR), to explicitly measure translation adequacy, targeting a commonly-cited weakness of NMT models: producing fluent yet inadequate translations. We expect that the model can benefit from linguistic insights that correlate well with human intuitions.
Coverage Difference Ratio (CDR) We measure translation adequacy by the number of under-translated words, obtained by comparing the generated translation with the human translation. We take an example to illustrate how to measure translation adequacy in terms of coverage difference ratio. Figure 1(a) shows one inadequate translation. Following (Luong, Pham, and Manning 2015; Tu et al. 2016), we extract only one-to-one alignments (hard alignments) by selecting the source word with the highest alignment weight for each target word from the word alignments produced by NMT models. A source word is considered to be translated when it is covered by the hard alignments, as shown in Figure 1(b). Comparing the source words covered by the generated translation with those covered by the human translation, we find that the two sets are very different for an inadequate translation. Specifically, the difference generally lies in the untranslated source words that cause the inadequate translation problem, indicating that the coverage difference ratio is a good way to measure the adequacy of a generated translation.

Figure 2: Architecture of adequacy-oriented NMT. The newly added orientator O reads coverages of generated and human translations to generate a CDR score for each generated translation, which guides the discriminator D to differentiate good generated translations from bad ones.

Formally, we calculate the CDR score of a given generated translation ŷ by

CDR(ŷ|y, x) = 1 − |C_ref \ C_gen| / |C_ref|    (5)

where C_ref and C_gen are the sets of source words covered by the human translation and the generated translation, respectively. C_ref \ C_gen denotes the source words covered in C_ref but not in C_gen. We use C_ref as the reference coverage to eliminate the effect of null-aligned source words, which are not aligned to any target word. As seen, CDR(ŷ|y, x) is a number between 0 and 1, where 1 means "completely adequate translation" and 0 means "completely inadequate translation". Taking Figure 1(b) as an example, the CDR score is 1 − (7 − 4)/7 = 0.57.
Architecture
As shown in Figure 2, the proposed model consists of a generator, a discriminator, and an orientator.
Generator The generator G generates the translation ŷ conditioned on the input sentence x. Because we need word alignments to calculate adequacy scores in terms of CDR, an attention-based NMT model is employed as the generator.
Orientator The orientator O reads the word alignments produced by the NMT attention model when generating (or force decoding) the two translations and outputs an adequacy score for the generated translation in terms of the aforementioned CDR score. Then, the orientator is used to guide the discriminator to distinguish adequate translations from inadequate ones. Accordingly, adequate translations with higher CDR scores contribute more to parameter tuning, as described in the following section.
Discriminator We employ an RNN-based discriminator to differentiate the generated translation from the human translation, given the input sentence. The discriminator reads the input sentence x and its translation (either y or ŷ), and uses two RNNs to summarize the two sentences individually. The concatenation of the two summary representation vectors is fed into a fully-connected neural network.
Adequacy-Oriented Training
In order to train the system efficiently and effectively, we employ a periodical training strategy, which is commonly used in adversarial training (Goodfellow et al. 2014;Wu et al. 2017). Specifically, we optimize two networks with two objective functions and periodically freeze the parameters of each network during training.
Train Generator and Freeze Discriminator Following Wu et al. (2017), we use the REINFORCE algorithm (Williams 1992) to back-propagate the error signals from D to G, given the discretely generated ŷ from G. The objective of the generator is to maximize the expected reward:

L = E_{(x,ŷ)∈G_θ}[D(x, ŷ)]    (6)

whose gradient is

∇_θ L = E_{(x,ŷ)∈G_θ}[D(x, ŷ) ∇_θ log G_θ(ŷ|x)]    (7)

The gradient is approximated by a sample from G using the REINFORCE algorithm (Williams 1992):

∇_θ L ≈ ∇̂_θ L = D(x, ŷ) ∇_θ log G_θ(ŷ|x)    (8)

where ∇_θ log G_θ(ŷ|x) is the standard NMT gradient calculated by maximum likelihood estimation. Therefore, the final update rule for the generator is:

θ = θ − η ∇̂_θ L    (9)

where η is the learning rate. Based on this update rule, when D(x, ŷ) is large (i.e., ideally, when the generated translation ŷ has a high adequacy score), the NMT model receives a larger reward, and the parameters are updated more towards the adequate training instance (x, ŷ).
Train Discriminator Oriented by Adequacy and Freeze
Generator Ideally, a good translation ŷ should be assigned a high adequacy score D(x, ŷ) and thus contribute more to updating the generator. We therefore expect the discriminator not only to differentiate generated translations from human translations but also to distinguish bad generated translations from good ones. Accordingly, the new objective of the discriminator is to assign each generated translation a precise score that is consistent with its adequacy:

min_D |CDR(ŷ|x, y) − D(x, ŷ)|²    (10)

where CDR(ŷ|x, y) is the coverage difference ratio of ŷ. As seen, a well-trained discriminator assigns a distinct score to each generated translation, which better measures its adequacy.
Related Work
This work is related to modeling translation as policy gradient and adequacy modeling. For the former, we take minimum risk training, reinforcement learning and adversarial learning as representative strategies.
Minimum Risk Training In response to the exposure bias and word-level loss problems of MLE training, Shen et al. (2016) minimize the expected loss in terms of evaluation metrics on the training data. Our simplified model is analogous to their MRT model if we directly use CDR as the reward to update the parameters:

∇_θ L = CDR(ŷ|x, y) ∇_θ log G_θ(ŷ|x)    (11)

The simplified model differs in that (1) we use an adequacy-oriented metric (i.e., CDR) while they use sequence-level BLEU, and (2) we only need to sample one candidate to calculate the reinforcement reward while they generate multiple samples to calculate the expected risk. In addition, our discriminator gives a smoother and dynamically-updated objective compared with directly using the adequacy-oriented metric, because the latter is highly sensitive to slight coverage differences (Koehn and Knowles 2017).
Reinforcement Learning Recent work shows that maximum likelihood training could be sub-optimal due to the different conditions between training and test modes (Ranzato et al. 2016). In order to address the exposure bias and the loss that does not operate at the sequence level, Ranzato et al. (2016) employ the REINFORCE algorithm (Williams 1992) to decide whether or not tokens from a sampled prediction contribute to a high task-specific score (e.g., BLEU). Bahdanau et al. (2017) use the actor-critic method from reinforcement learning to directly optimize a task-specific score.
Adversarial Learning Recently, adversarial learning (Goodfellow et al. 2014) has been successfully applied to neural machine translation (Wu et al. 2017; Cheng et al. 2018). In the adversarial framework, NMT models generally serve as the generator which defines the policy to generate the target sentence y given the source sentence x. A discriminator tries to distinguish the translation result ŷ = G(x) from the human-generated one y, given the source sentence x.
If we remove the orientator O, our model rolls back to adversarial NMT, and the training objective of the discriminator D becomes

max_D {log D(x, y) + log(1 − D(x, ŷ))}    (12)

The goal of the discriminator is to push the score of the human translation D(x, y) towards 1 and that of the generated translation D(x, ŷ) towards 0. As seen, the discriminator performs a binary classification, uniformly treating all generated translations as negative examples (i.e., labeled "0") and all human translations as positive examples (i.e., labeled "1"), regardless of the quality of the generated translations. Intuitively, however, high-quality and low-quality translations should be treated differently by the discriminator; otherwise, inaccurate reward signals are propagated back to the generator. In our proposed architecture, this problem is alleviated by replacing the simple binary outputs with the more informative adequacy-oriented metric CDR, which is calculated by directly comparing generated and human translations.
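For contrast with the adequacy-oriented objective, the following minimal sketch (our own illustration with an assumed discriminator that outputs probabilities) shows the standard binary discriminator loss corresponding to Eq. (12):

```python
import torch
import torch.nn.functional as F

def binary_adversarial_d_loss(discriminator, src, human_trg, generated_trg):
    """Standard adversarial discriminator loss: human translations are labeled 1 and
    generated translations 0, regardless of quality; minimizing this binary
    cross-entropy is equivalent to maximizing the objective in Eq. (12)."""
    d_human = discriminator(src, human_trg)          # should be pushed towards 1
    d_generated = discriminator(src, generated_trg)  # should be pushed towards 0
    return (F.binary_cross_entropy(d_human, torch.ones_like(d_human)) +
            F.binary_cross_entropy(d_generated, torch.zeros_like(d_generated)))
```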
Adequacy Modeling The inadequate translation problem is a commonly-cited weakness of NMT models (Tu et al. 2016). A number of recent efforts have explored ways to alleviate this problem. For example, Tu et al. (2016) and Mi et al. (2016) introduce coverage-based mechanisms into NMT models to keep track of which source words have been translated. Our approach is complementary to theirs since they model the adequacy learning at the word level inside the generator (i.e., NMT models), while we model it at the sequence level outside the generator. We take the representative coverage mechanism (Tu et al. 2016) as another, stronger baseline model for its simplicity and efficiency, and experimental results show that our model can further improve its performance.
In the context of adequacy-oriented training, prior work introduces an auxiliary objective to measure the adequacy of translation candidates, which is calculated by reconstructing generated translations back to the original inputs. Benefiting from the flexible framework of reinforcement training, we are able to directly compare generated translations with human translations and define a more straightforward metric, i.e., CDR, to measure the adequacy of generated sentences.
Experiments Setup
We conduct experiments on the widely-used Chinese (Zh)⇒English (En) and German (De)⇔English (En) translation tasks. The IWSLT 2014 De⇒En corpus contains 153K sentence pairs and the WMT 2014 En⇒De corpus consists of 4.56M sentence pairs. The 4-gram NIST BLEU score (Papineni et al. 2002) is used as the evaluation metric and the sign-test (Collins, Koehn, and Kučerová 2005) is employed to test statistical significance.
For training all neural models, we set the vocabulary size to 30K for Zh⇒En; for IWSLT 2014 De⇒En, we follow the preprocessing procedure used in Ranzato et al. (2016), and for WMT 2014 En⇒De, the preprocessing described in Vaswani et al. (2017) is borrowed. We pre-train the discriminator on translation samples produced by the pre-trained generator. After that, the discriminator and the generator are trained together, and the generator is updated by the REINFORCE algorithm mentioned above. We also follow the training tips mentioned in Shen et al. (2016) and Wu et al. (2017). The hyper-parameter α, which controls the sharpness of the generator distribution in our system, is 1e-4; it can also be regarded as a baseline to reduce the variance of the REINFORCE algorithm. We also randomly choose 50% of the minibatches to be trained with our objective function and the other 50% with the MLE principle. In the MRT training strategy (Shen et al. 2016), the sample size is 25, the hyper-parameter α is 5e-3, and the loss function is the negative smoothed sentence-level BLEU. We validate our models on two representative model architectures, namely RNNSEARCH and TRANSFORMER. For the RNNSEARCH model, the mini-batch size is 80, the word-embedding dimension is 620, and the hidden layer size is 1000. We use a neural coverage model for RNNSEARCH-COVERAGE and the dimensionality of the coverage vector is 100. The baseline models are trained for 15 epochs and used as the initial generator in the proposed framework. For the TRANSFORMER model, we implement our proposed approach on top of the open-source toolkit THUMT. The configurations in Vaswani et al. (2017) are used to train the baseline models.

Table 1: Evaluation of translation performance on Zh⇒En translation. "D" denotes discriminator and "O" denotes orientator. "MRT" indicates minimum risk training (Shen et al. 2016), and "D CNN" indicates adversarial training with a CNN-based discriminator (Wu et al. 2017). "# Para." denotes the number of parameters, and "Speed" denotes the training speed (words/second). "†" and "‡" indicate statistically significant difference (p < 0.05 and p < 0.01 respectively) from the corresponding baseline.

Chinese-English Translation Task
Table 1 lists the results of various translation models on the Zh⇒En corpus. As seen, all advanced systems significantly outperform the baseline system (i.e., RNNSEARCH), although there are still considerable differences among the different variants.
Architectures of Discriminator (Rows 3-4) We evaluate two architectures for the discriminator. The CNN-based discriminator is composed of two convolution layers with 3 × 3 windows, two max-pooling layers with 2 × 2 windows, and one softmax layer. The feature map size is 10 and the feed-forward hidden size is 20. The RNN-based discriminator consists of two two-layer RNN encoders with 32 LSTM units and a fully-connected neural network with 32 units. We find that the RNN discriminator achieves performance similar to its CNN counterpart (37.59 vs. 37.54) while having a faster training speed (1.2K vs. 1.0K words/second). The main reason is that the CNN-based discriminator requires high computation and space cost to apply multiple convolution and pooling layers to a large input matrix.
Adequacy Metrics for Orientator (Rows 5-7) As mentioned above, the CDR score can be directly used as a reward to update the parameters, which is analogous to MRT (Shen et al. 2016) except that we use the 1-best sample while they use n-best samples. For comparison, we also used the word-level BLEU score (Row 5) and the character-level CHRF3 score (Popović 2015) (Row 6) as the rewards.
As seen, this strategy consistently improves translation performance without introducing any new parameters. The extra computation cost mainly comes from generating the translation and force decoding the human translation with the NMT model. We find that CDR not only outperforms its 1-best counterparts "O BLEU" and "O CHRF3", but also surpasses "MRT BLEU", which uses 25 samples. We attribute this to the fact that CDR better estimates the adequacy of the translation, which is the key problem of NMT models, and goes beyond the simple low-level n-gram matching measured by BLEU and CHRF3.
Combining Them Together (Row 8) By combining the advantages of both reinforcement learning and the adequacy-oriented objective, our model achieves the best performance, which is 1.66 BLEU points better than the baseline "RNNSEARCH", up to 0.98 BLEU points better than using a single component, and a significant improvement over the "MRT BLEU" model. One more observation can be made: "+D+O" outperforms its "+O" counterpart (e.g., Row 8 vs. Row 7), which confirms our claim that the discriminator gives a smoother and dynamically-updated score than directly using the calculated one.

Table 2 (caption): Comparing with previous works of applying reinforcement learning for NMT on the IWSLT 2014 De⇒En translation task; existing end-to-end NMT systems include (Ranzato et al. 2016), a CNN encoder with a sequence-level objective (20.73 BLEU), and (Bahdanau et al. 2017). "†" and "‡" indicate statistically significant difference (p < 0.05 and p < 0.01 respectively) from the RNNSEARCH model.

Table 3 (partial): Model / BLEU: GNMT + RL 26.30; ConvS2S (Gehring et al. 2017) 26.43; Transformer (Base) (Vaswani et al. 2017) 27.3; Transformer (Big) (Vaswani et al. 2017) 28.
Working with Coverage Model (Rows 11-12) Tu et al. (2016) propose a coverage model to indicate whether a source word is translated or not, which alleviates the inadequate translation problem of NMT models. We argue that our model is complementary to theirs, because we model the adequacy learning outside the generator by using an additional adequacy-oriented discriminator, while they model it inside the generator. Experimental results validate our hypothesis: the proposed approach further improves performance by 0.58 BLEU points over the coverage-augmented model RNNSEARCH-COVERAGE.
English-German Translation Tasks
To compare with previous work applying reinforcement learning for NMT (Ranzato et al. 2016; Bahdanau et al. 2017; Wiseman and Rush 2016; Wu et al. 2017), we first conduct experiments on the IWSLT 2014 De⇒En translation task. As listed in Table 2, we reproduce the results of adversarial training reported by Wu et al. (2017) (27.24 vs. 26.98), and further improvements are achieved by our models.

Table 4 (caption): Adequacy scores on 100 randomly selected sentences of the Zh⇒En task, measured by CDR and human evaluation ("MAN").
We also evaluate our model on the recently proposed TRANSFORMER model (Vaswani et al. 2017) on the WMT 2014 En⇒De corpus. As shown in Table 3, our models significantly improve performance in all cases. Combined with the previous results, our model consistently improves translation performance across various language pairs and NMT architectures, demonstrating the effectiveness and universality of the proposed approach.
Analysis
To better understand our models, we conduct extensive analyses on the Zh⇒En translation task.
Adequacy Evaluation To better evaluate adequacy, we randomly choose 100 sentences from the test set and ask two human evaluators to judge the quality of the generated translations. Five scales are set up, i.e., {1, 2, 3, 4, 5}, where "1" means that the translation is irrelevant to the source sentence, and "5" means that the translation is semantically and syntactically equivalent to the source sentence. Table 4 lists the results of the human evaluation and the proposed CDR score. First, our models consistently improve translation adequacy under both human evaluation and the CDR score, indicating that the proposed approaches indeed alleviate the inadequate translation problem. Second, the relative improvement on CDR is consistent with that on the subjective evaluation. The Pearson correlation coefficient between CDR and the manual evaluation score is 0.64, indicating that the proposed CDR is a reasonable metric to measure translation adequacy.
Length Analysis We group sentences of similar lengths and compute both the BLEU score and the CDR score for each group, as shown in Figure 3. The four length spans contain 1386, 2284, 1285, and 498 sentences, respectively. From the perspective of the BLEU score, the proposed model (i.e., "+D+O") outperforms RNNSEARCH in all length segments. In contrast, using the discriminator only (i.e., "+D") outperforms RNNSEARCH in most cases, except for long sentences (i.e., > 45). One possible reason is that it is difficult for the discriminator to differentiate generated translations from human translations for long source sentences, so the generator cannot learn well on these instances due to the "mistaken" rewards from the discriminator. Accordingly, using the CDR score (i.e., "+O") alleviates this problem by providing a sequence-level score, which better estimates the adequacy of the translations. The final model combines the advantages of both a smoother and dynamically-updated objective from the discriminator ("+D") and a more accurate objective specifically designed for the translation task from the orientator ("+O").
The CDR scores of all models degrade as the length of the source sentence increases. This is mainly because the inadequate translation problem is more serious on longer sentences for NMT models (Tu et al. 2016). The adversarial model (i.e., "+D") improves CDR scores, but the improvement degrades faster as the sentence length increases. In contrast, our proposed approach consistently improves CDR performance in all length segments. Koehn and Knowles (2017) point out that the attention model does not always correspond to word alignment and may considerably diverge. Accordingly, the attention-matrix-based CDR score may not always correctly reflect the adequacy of generated sentences. However, our discriminator is able to give a smoother and dynamically-updated objective, and thus can provide more accurate adequacy scores for generated sentences. From the above quantitative and qualitative results, the discriminator indeed leads to better performance (i.e., "+D+O" vs. "+O").
Effect of the Discriminator
Case Study To better understand the advantage of our proposed model, we show a translation case in Figure 4. Specifically, we provide a Zh⇒En example with two translation results from the RNNSearch and Adequacy-NMT models, respectively, as well as the corresponding CDR and BLEU scores. We highlight in bold the parts where the two translations differ and which lead to different translation quality. As seen, the latter part of the source sentence is not translated by the RNNSearch model, while our proposed model corrects this mistake. Accordingly, our model improves both the CDR and BLEU scores.
| 3,791 |
1811.08111
|
2901254300
|
This paper presents methods of making use of text supervision to improve the performance of sequence-to-sequence (seq2seq) voice conversion. Compared with conventional frame-to-frame voice conversion approaches, the seq2seq acoustic modeling method proposed in our previous work achieved higher naturalness and similarity. In this paper, we further improve its performance by utilizing the text transcriptions of parallel training data. First, a multi-task learning structure is designed which adds auxiliary classifiers to the middle layers of the seq2seq model and predicts linguistic labels as a secondary task. Second, a data-augmentation method is proposed which utilizes text alignment to produce extra parallel sequences for model training. Experiments are conducted to evaluate our proposed method with training sets of different sizes. Experimental results show that multi-task learning with linguistic labels is effective at reducing the errors of seq2seq voice conversion. The data-augmentation method can further improve the performance of seq2seq voice conversion when only 50 or 100 training utterances are available.
|
On image processing tasks, cropping images is a common approach to data augmentation @cite_10 . In this paper, we propose to slice fragments from parallel utterances according to text alignment and use them as training samples. This technique can make use of more alignment information within the parallel utterances and is expected to reduce overfitting of the built seq2seq model.
|
{
"abstract": [
"In this paper, we explore and compare multiple solutions to the problem of data augmentation in image classification. Previous work has demonstrated the effectiveness of data augmentation through simple techniques, such as cropping, rotating, and flipping input images. We artificially constrain our access to data to a small subset of the ImageNet dataset, and compare each data augmentation technique in turn. One of the more successful data augmentations strategies is the traditional transformations mentioned above. We also experiment with GANs to generate images of different styles. Finally, we propose a method to allow a neural net to learn augmentations that best improve the classifier, which we call neural augmentation. We discuss the successes and shortcomings of this method on various datasets."
],
"cite_N": [
"@cite_10"
],
"mid": [
"2775795276"
]
}
|
IMPROVING SEQUENCE-TO-SEQUENCE VOICE CONVERSION BY ADDING TEXT-SUPERVISION
|
Voice conversion (VC) aims to convert the speech of a source speaker into that of a target speaker while keeping the linguistic content unchanged [1]. The VC technique has various applications such as identity switching in a text-to-speech (TTS) system, vocal restoration in cases of language impairment, and entertainment applications [2].
The most widely-used approach for voice conversion is adopting a statistical acoustic model to capture the relationship between the acoustic features of the source and target speakers. In the conventional method, frame-aligned training data is first prepared using the dynamic time warping (DTW) algorithm [3]. Then, an acoustic model is trained based on the paired source-target frames. During conversion, a mapping function is derived from the acoustic model, and target acoustic features are predicted from those of the source frame by frame. The acoustic model can be a joint density Gaussian mixture model (JD-GMM) [4,5], a deep neural network (DNN) [6,7] or a recurrent neural network (RNN) [8,9].
Our previous work [10] proposed a sequence-to-sequence (seq2seq) method for VC. A Seq2seq ConvErsion NeTwork (SCENT) is designed to model pairs of input and output acoustic feature sequences directly without explicit frame-toframe alignment. The SCENT followed the encoder-decoder with attention architecture [11][12][13][14][15].
This method achieved effective duration conversion, higher naturalness and similarity compared with conventional GMM and DNN-based methods [10]. However, utterances converted by the seq2seq method may have mispronunciations and other instability problems such as repeating phonemes and skipped phonemes.
In practical voice conversion tasks with parallel training data, text transcriptions of both speakers are usually available. Thus, this paper presents methods of utilizing text-supervision to improve the seq2seq VC model. First, a multi-task learning structure is designed. Auxiliary classifiers are added to the output layer of the encoder and the input layer of the decoder RNN, and are trained to predict the linguistic labels from the hidden vectors. Thus, the middle layers of the seq2seq model are regularized by the secondary task to be more linguistic-related, which is expected to reduce the issue of mispronunciations at conversion time. Second, a data-augmentation method is proposed by utilizing the text alignment information. In previous seq2seq VC method, the whole utterances are used as the sequences for model training. In order to increase the generalization ability of the trained seq2seq model, additional parallel fragments of utterances are derived using the alignment points given by text transcriptions, and are used as training samples.
The proposed method is evaluated using training sets of different sizes. Experimental results show that our method of adding text supervision to seq2seq VC can generate utterances with higher naturalness and sometimes better similarity. The multi-task learning structure is effective at reducing pronunciation errors. The proposed data-augmentation method can further improve the model performance when the training set contains only 50 or 100 utterances.
PREVIOUS WORK
Sequence-to-sequence voice conversion
In our previous work [10], we proposed SCENT, a seq2seq acoustic model for VC. Ignoring the component of auxiliary classifiers, Figure 1 shows the structure diagram of SCENT, which follows the popular encoder-decoder with attention architecture. Specifically, it is composed of an encoder, a decoder with attention and a postfiltering network (PostNet).
The input sequence of the model is the concatenation of mel-spectrograms and bottleneck features of the source utterance. The bottleneck features are linguistic-related features which are extracted from the speech signal using a speaker-independent automatic speech recognition (ASR) model. The encoder accepts the input sequence and transforms it into hidden representations which are more suitable for the decoder to deal with. At each decoder time step, the previously generated acoustic frame is fed back into a preprocessing network (PreNet), the output of which is passed through an attention RNN. The output of the attention RNN is processed by the attention module, which produces a summary of the encoder output entries by weighted combination. The weighting factors are attention probabilities. Then the concatenation of this summary and the output of the attention RNN is passed through the decoder RNN to predict the output acoustic frame. In order to enhance the quality of the prediction, a PostNet is further employed to produce the final mel-spectrograms of the target speaker. At last, a WaveNet neural vocoder [24] conditioned on mel-spectrograms is utilized for the waveform reconstruction.

Figure 2: Texts are presented as Chinese pinyin and "sil" represents silence. "a", "b" and "c" represent the three alignment points in this pair of parallel utterances. Therefore, a total of C(3, 2) (i.e., 3) potential parallel fragments can be sliced from the original utterances.
PROPOSED METHODS
Linguistic labels, such as phoneme identity, are firstly extracted from the text transcriptions and then aligned to source and target utterances respectively at the data preparation stage. The alignment can be obtained by manual annotation or automatic methods such as force alignment using a hidden Markov model (HMM). Two methods of making use of the text supervision to improve the performance of the seq2seq VC model are introduced in this section.
Multi-task learning with linguistic labels
In parallel with learning to predict the acoustic features of the target speaker, a secondary task is conducted to predict linguistic labels from the middle layers of the model. As presented in Figure 1, two auxiliary classifiers are added to the outputs of the encoder and the inputs of the decoder RNN. In each classifier, the input hidden representations are first passed through a dropout layer to increase generalization. Then, the outputs of the dropout layer are projected to the number of linguistic label categories, followed by a softmax operation. The targets of the two classifiers are the linguistic labels that the current hidden representations of the encoder and the decoder RNN correspond to, respectively. The cross-entropy losses of these two classifiers are weighted and added to the original mel-spectrogram loss for training the model.
The auxiliary classifiers are designed to improve the seq2seq VC model by using stronger supervision from the text. Intuitively, they help guide the model to generate more meaningful intermediate representations that are linguistic-related. Adding classifiers to both the encoder and the decoder is also expected to help the attention module predict correct alignments. It should be noted that the classifiers are only used at the training stage and are discarded at conversion time. Therefore, no extra input or computation is required during conversion.
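A minimal sketch of the resulting multi-task objective is given below; it is our own illustration with assumed classifier interfaces and tensor shapes, and the loss weights follow the values reported in the experiments (0.1 for phoneme identity and 0.05 for tone):

```python
import torch.nn.functional as F

def multitask_loss(mel_loss, enc_hidden, dec_hidden, enc_labels, dec_labels,
                   enc_classifier, dec_classifier, w_phone=0.1, w_tone=0.05):
    """Total loss = mel-spectrogram loss + weighted cross-entropy losses of the two
    auxiliary classifiers. Each classifier is assumed to map hidden vectors of shape
    (frames, dim) to a pair (phoneme_logits, tone_logits); labels are dicts holding
    integer targets of shape (frames,) under the keys "phoneme" and "tone"."""
    total = mel_loss
    for hidden, labels, clf in ((enc_hidden, enc_labels, enc_classifier),
                                (dec_hidden, dec_labels, dec_classifier)):
        phone_logits, tone_logits = clf(hidden)   # dropout + linear projections happen inside clf
        total = total + w_phone * F.cross_entropy(phone_logits, labels["phoneme"])
        total = total + w_tone * F.cross_entropy(tone_logits, labels["tone"])
    return total
```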
Data-augmentation by text alignment
In our previous seq2seq VC method, pairs of whole utterances are used as the input and output sequences for model training. With text alignments, intra-utterance alignments can also be utilized to produce more sequence pairs. In our method, an "alignment point" is defined as a common silence fragment in a pair of parallel utterances. Figure 2 presents an example for illustration. Parallel fragments, which contain the same linguistic contents within the two utterances, are extracted by selecting two alignment points as the starting and ending positions. The reason that alignment points are defined at silences is to make sure that the parallel fragments are less influenced by surrounding contents. For a pair of parallel utterances containing N alignment points, a total of C(N,2) parallel fragments can be extracted. When processing each pair of utterances at training time, a pair of parallel fragments is randomly selected from all C(N,2) possibilities instead of using the whole utterances.
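A small sketch of the fragment sampling described above (function and variable names are our own; the feature sequences are assumed to be indexable by frame):

```python
import random

def sample_parallel_fragment(src_feats, tgt_feats, src_points, tgt_points):
    """Randomly slice one parallel fragment from a pair of parallel utterances.

    src_points / tgt_points hold the frame indices of the common silence
    fragments ("alignment points"); the i-th entry of each list marks the same
    silence in the source and the target utterance.  Two alignment points are
    drawn as the start and end of the fragment, so each pair of utterances with
    N alignment points yields C(N, 2) possible fragments.
    """
    assert len(src_points) == len(tgt_points) and len(src_points) >= 2
    i, j = sorted(random.sample(range(len(src_points)), 2))
    return (src_feats[src_points[i]:src_points[j]],
            tgt_feats[tgt_points[i]:tgt_points[j]])
```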
EXPERIMENTS
Experimental conditions
Our dataset for experiments contained 1060 parallel Mandarin utterances of one male speaker (about 53 min) and one female speaker (about 72 min), which were separated into a training set with 1000 utterances, a validation set with 30 utterances and a test set with 30 utterances. Smaller training sets containing 50, 100, 200 and 400 utterances were also constructed by randomly selecting a subset of the 1000 utterances for training. The recordings were sampled at 16 kHz. 80-dimensional mel-scale spectrograms were extracted every 10 ms with Hann windowing of 50 ms frame length and a 1024-point Fourier transform. 512-dimensional bottleneck features were extracted using an ASR model every 40 ms and were then upsampled by repetition to match the frame rate of the mel-spectrograms. Text transcriptions were first converted into sequences of phonemes with tone using a rule-based grapheme-to-phoneme model. The phoneme-with-tone sequences were then aligned to the speech using an HMM aligner.
Details of the seq2seq model and the WaveNet vocoder were kept the same as our previous work [10]. The output layer of the decoder in SCENT was a mixture density network (MDN) layer with 2 mixture components.
We used a batch size of 4 and the Adam optimizer [25] for model training. The learning rate was 0.001 for the first 20 epochs and was then decayed exponentially with a rate of 0.95 for 50 more epochs. For WaveNet training, the µ-law companded waveforms were quantized into 10 bits. The learning rate was 10^-4. The focus of this paper was acoustic modeling, not the WaveNet vocoder. Therefore, the WaveNet vocoder of each speaker was trained using the waveforms of his or her full training set for convenience.
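For illustration, the optimizer settings above could look roughly like the following sketch; the per-epoch granularity of the 0.95 decay and the stand-in parameter list are assumptions.

```python
import torch

# Stand-in parameter list; in practice these are the parameters of the seq2seq model.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.Adam(params, lr=0.001)

# Constant learning rate for the first 20 epochs, then exponential decay with
# rate 0.95 (assumed to be applied once per epoch) for 50 more epochs.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda epoch: 1.0 if epoch < 20 else 0.95 ** (epoch - 20))

for epoch in range(70):
    # ... run one training epoch with batch size 4 ...
    scheduler.step()
```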
Three methods were compared in our experiments. The configuration of each method is described as follows:
• Seq2seq: Baseline method using the previously proposed sequence-to-sequence acoustic model [10].
• Seq2seq-MT: Improving the baseline method using the multi-task learning structure proposed in Section 3.1. Auxiliary classifiers were adopted for predicting linguistic labels at training time. Each classifier contained two separate linear projections with softmax activations for predicting phoneme identity and tone category simultaneously. The weighting factors for phoneme and tone classification were 0.1 and 0.05 respectively, which were tuned on the validation set.
• Seq2seq-MT-DA: In addition to multi-task learning, the data-augmentation method introduced in Section 3.2 was also adopted. In our full training set, the average number of alignment points in each pair of utterances was 3.15. The learning rate was fixed in first 40 epochs for better model convergence. We also tried to use larger batch size because the average length of each training sample became shorter. However, the results showed no improvement on the validation set.
Objective evaluation
F0 and mel-cepstra were extracted from the converted utterances using STRAIGHT [26]. Then, mel-cepstral distortions (MCD) and the root mean square error of F0 (F0 RMSE) on the test set were reported in Table 1. From the table, we can see that all methods obtained lower MCD and F0 RMSE given more training data. When the training data was limited, i.e. only 50 or 100 training utterances were available, the proposed method using multi-task learning outperformed the baseline seq2seq method by a large margin.
Adopting the data-augmentation method further improved the performance of the acoustic models. When more training data became available (e.g., 200 and 400 utterances), the performances of the Seq2seq-MT and Seq2seq-MT-DA methods were close and still better than the baseline method. When training with all parallel data, the proposed method obtained close MCD but higher F0 RMSE than the baseline method. In summary, the proposed method achieved lower objective error when the training set contained 50, 100, 200 and 400 utterances. Compared with Seq2seq-MT, the Seq2seq-MT-DA method further improved the prediction accuracy when the size of the training data was 50 or 100. When training with 1000 utterances, no significant objective improvement was observed after data augmentation. The reason may be that the fragments used after data augmentation neglected the influence of their contexts in the utterances. This negative effect may counteract the positive effect of reducing overfitting. Besides, the MCD and F0 RMSE of our proposed method were not better than those of the baseline method when models were trained with 1000 utterances. Subjective evaluations were conducted to further investigate the effectiveness of our proposed method.
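For reference, a commonly used definition of these objective metrics is sketched below; the frame-level alignment between converted and target features (e.g. by dynamic time warping) is assumed to have been done beforehand.

```python
import numpy as np

def mel_cepstral_distortion(target_mcep, converted_mcep):
    """Mean MCD in dB over aligned frames, excluding the 0th (energy) coefficient.

    target_mcep, converted_mcep: arrays of shape (frames, order) that are assumed
    to be time-aligned frame by frame.
    """
    diff = target_mcep[:, 1:] - converted_mcep[:, 1:]
    dist_per_frame = np.sqrt(2.0 * np.sum(diff ** 2, axis=1))
    return (10.0 / np.log(10.0)) * np.mean(dist_per_frame)

def f0_rmse(target_f0, converted_f0):
    """RMSE of F0 over frames where both contours are voiced (nonzero)."""
    voiced = (target_f0 > 0) & (converted_f0 > 0)
    return float(np.sqrt(np.mean((target_f0[voiced] - converted_f0[voiced]) ** 2)))
```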
Subjective evaluation
The first subjective evaluation was conducted to evaluate the stability of the Seq2seq and Seq2seq-MT methods. A native listener was asked to identify the mistakes occurring in the test utterances converted using these two methods, which included mispronunciations, repeated phonemes, skipped phonemes and unclear voice. The counted numbers of mistakes are presented in Table 2. The evaluation results indicate that multi-task learning with linguistic labels can alleviate the problem of instability under all sizes of training data. A closer inspection of the mistakes of the Seq2seq method found that the main problem of instability was mispronunciation when the size of the training data was relatively large, i.e. 400 or 1000 utterances. Converted utterances sometimes suffered from unnatural tone or incorrect phonemes. When the size of the training data got smaller, mistakes of skipped and repeated phonemes increased, which were usually caused by improper attention alignments. The multi-task learning could help to alleviate both kinds of problems.
Furthermore, ABX preference tests were conducted on both similarity and naturalness. Two conditions with 50 and 1000 training utterances were investigated. As we described in Section 4.2, when the size of training data was small, data augmentation method further improved the objective performance of the model. When training with 1000 utterances, no significant objective improvement was observed after data augmentation. Therefore, we compared Seq2seq with Seq2seq-MT-DA for using 50 training utterances and Seq2seq with Seq2seq-MT for using 1000 training utterances respectively. 10 native listeners were involved in the evaluation. 20 sentences in the test set were randomly selected. The conversion results were presented for listeners in random order.
The experimental results are presented in Figures 3 and 4. Figure 3 indicates that the proposed method improved the subjective performance of conversion when the training data was limited. Figure 4 shows that the multi-task learning method improved the naturalness of converted speech when 1000 training utterances were available. The similarity improvement on female-to-male conversion was insignificant since the p-value was 0.218.
CONCLUSION
This paper has presented two methods for improving seq2seq voice conversion by utilizing text supervision. First, a secondary task is introduced based on the framework of multi-task learning. Auxiliary classifiers are added for predicting corresponding linguistic labels from the middle layers of the model. Second, a data-augmentation method is proposed, in which fragments of the original utterances are randomly extracted at each training step. Experimental results validated the effectiveness of our proposed methods for improving model training. The multi-task learning alleviates the instability problems, such as mispronunciations, in the conversion results of the seq2seq model. The data-augmentation method can further improve the performance of the seq2seq VC model with limited training data. Although the proposed methods can enhance the seq2seq VC model effectively, the degradation of model performance is still significant when only small training sets are available. Future work includes further improving the seq2seq model using other techniques in the resource-limited situation, such as model adaptation.
| 2,360 |
1811.08051
|
2901678097
|
Incremental learning (IL) is an important task aimed at increasing the capability of a trained model, in terms of the number of classes recognizable by the model. The key problem in this task is the requirement of storing data (e.g. images) associated with existing classes, while teaching the classifier to learn new classes. However, this is impractical as it increases the memory requirement at every incremental step, which makes it impossible to implement IL algorithms on edge devices with limited memory. Hence, we propose a novel approach, called Learning without Memorizing (LwM)', to preserve the information about existing (base) classes, without storing any of their data, while making the classifier progressively learn the new classes. In LwM, we present an information preserving penalty: Attention Distillation Loss ( @math ), and demonstrate that penalizing the changes in classifiers' attention maps helps to retain information of the base classes, as new classes are added. We show that adding @math to the distillation loss which is an existing information preserving loss consistently outperforms the state-of-the-art performance in the iILSVRC-small and iCIFAR-100 datasets in terms of the overall accuracy of base and incrementally learned classes.
|
In object classification, incremental learning (IL) is the process of increasing the breadth of an object classifier, by training it to recognize new classes, while retaining its knowledge of the classes on which it has been trained originally. In the past couple of years, there have been considerable research efforts in this field @cite_0 @cite_3 . Moreover, there exist several subsets of this research problem which impose different constraints in terms of data storage and evaluation. We can divide existing methods based on their constraints:
|
{
"abstract": [
"Abstract The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially.",
"When building a unified vision system or gradually adding new apabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises where we add new capabilities to a Convolutional Neural Network (CNN), but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaption techniques and performs similarly to multitask learning that uses original task data we assume unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning with similar old and new task datasets for improved new task performance."
],
"cite_N": [
"@cite_0",
"@cite_3"
],
"mid": [
"2560647685",
"2473930607"
]
}
|
Learning without Memorizing
|
Most state-of-the-art solutions to recognition tasks in computer vision require using models which are specifically trained for these tasks [6,13]. For the tasks involving categories (such as object classification, segmentation), the complexity of the task (i.e. the possible number of target classes) limits the ability of these trained models. For example, a trained model aimed for object recognition can only classify object categories on which it has been trained. However, if the number of target classes increases, the model must be updated in such a way that it performs well on the original classes on which it has been trained, also known as base classes, while it incrementally learns new classes as well.
If we retrain the model only on new, previously unseen classes, it would completely forget the base classes, which is known as catastrophic forgetting [9,10], a phenomenon which is not observed in humane learning. Therefore, most existing solutions [4,14,17] explore incremental learning (IL) by allowing the model to retain a fraction of the training data of base classes, while incrementally learning new classes. Yu et al. [17] have proposed retaining trained models encoding base class information, to transfer their knowledge to the model learning new classes. However, this process is not scalable. This is because storing base class data or models encoding base class information is a memory expensive task, and hence is cumbersome when used in a lifelong learning setting. Also, in an industrial setting, when a trained object classification model is delivered to the customer, the training data is kept private for proprietary reasons. Due to this, the customer would not be able to update the trained model to incorporate new target classes in the absence of base class data. Moreover, storing base class data for incrementally learning new classes is not biologically inspired. For example, when a toddler learns to recognize new shapes/objects, it is observed that it does not completely forget the shapes or objects it already knows. It also does not always need to revisit the old information when learning new entities. Inspired by this, we aim to explore incremental learning in object classification by adding a stream of new classes without storing data belonging to classes that the classifier has already seen. While IL solutions which do not require base class data, such as [1,9] have been proposed, these methods mostly aim at incrementally learning new tasks, which means that at test time the model cannot confuse the incrementally learned tasks with tasks it has already learned, making the problem setup much easier.
We aim to explore the problem of incrementally learning object classes, without storing any data or model associated with base classes (Figure 1), while allowing the model to confuse new classes with old ones. In our problem setup, an ideal incremental learner should have the following properties:
i It should help a trained model to learn new classes obtained from a stream of data, while preserving the model's knowledge of base class information.
ii At test time, it should enable the model to consider all the classes it has learned when the model makes a prediction.
iii The size of the memory footprint should not grow at all, irrespective of the number of classes seen thus far.
An existing work targeting the same problem is LwF-MC, which is one of the baselines in [14]. For ease of explanation, we use the following terminology: at incremental step t, the previously trained model M t−1 serves as the teacher, the model being trained is the student M t , and the loss that preserves knowledge of the base classes is referred to as an information preserving penalty (IPP). Initialized using M t−1 , M t is then trained to learn new classes using a classification loss, L C . However, an IPP is also applied to M t so as to minimize the divergence between the representations of M t−1 and M t . While L C helps M t learn new classes, the IPP prevents M t from diverging too much from M t−1 . Since M t is already initialized as M t−1 , the initial value of the IPP is expected to be close to zero. However, as M t keeps learning new classes with L C , it starts diverging from M t−1 , which leads the IPP to increase. The purpose of the IPP is to prevent the divergence of M t from M t−1 . Once M t is trained for a fixed number of epochs, it is used as a teacher in the next incremental step, using which a new student model is initialized.
In LwF-MC [14], the IPP is the knowledge distillation loss. The knowledge distillation loss L D , in this context, was first introduced in [12]. It captures the divergence between the prediction vectors of M t−1 and M t . In an incremental setup, when an image belonging to a new class (I n ) is fed to M t−1 , the base classes which have some resemblance in I n are captured. L D enforces M t to capture the same base classes. Thus, L D essentially makes M t learn 'what' are the possible base classes in I n , as shown in Figure 1. The pixels which have a high influence on the models' prediction constitute the attention region of the network. However, L D does not explicitly take into account the degree of each pixel influencing the models predictions. For example, in Figure 2, in the first row, it is seen that at step n, even though the network focuses on an incorrect region while predicting 'dial telephone', the numerical value of L D (0.09) is same as that when the network focuses on the correct region in step n, in the bottom row.
We hypothesize that attention regions encode the models' representation more precisely. Hence, constraining the attention regions of M t and M t−1 using an Attention Distillation Loss (L AD , explained in Sec. 4.1), to minimize the divergence of the representations of M t from that of M t−1 , is more meaningful. This is because, instead of finding which base classes are resembled in the new data, attention maps explain 'why' hints of a base class are present (as shown in Figure 1). Using these hints, L AD , in an attempt to make the attention maps of M t−1 and M t equivalent, helps to encode some visual knowledge of the base classes in M t . The utility of L AD is seen in the example in Figure 2, where even though the model correctly predicts the image as 'dial telephone', the value of L AD in step n increases if the attention regions diverge too much from the region in Step 0.
We propose an approach where an Attention Distillation Loss (L AD ) is applied to M t to prevent its divergence from M t−1 , at incremental step t. Precisely, we propose to constrain the L 1 distance between the attention maps generated by M t−1 and M t in order to preserve the knowledge of base classes. The reasoning behind this strategy is described in Sec 4.1. This is applied in addition to the distillation loss L D and a classification loss for the student model to incrementally learn new classes.
The main contribution of this work is to provide an attention-based approach, termed 'Learning without Memorizing (LwM)', that helps a model to incrementally learn new classes by restricting the divergence between student and teacher models. LwM does not require any data of the base classes when learning new classes. Different from the contemporary approaches which explore the same problem, LwM takes into account the gradient flow information of teacher and student models by generating attention maps using these models. It then constrains this information to be equivalent for the teacher and student models, thus preventing the student model from diverging too much from the teacher model. Finally, we show that LwM consistently outperforms the state-of-the-art performance on the iILSVRC-small [14] and iCIFAR-100 [14] datasets.
Related work
In object classification, incremental learning (IL) is the process of increasing the breadth of an object classifier, by training it to recognize new classes, while retaining its knowledge of the classes on which it has been trained originally. In the past couple of years, there have been considerable research efforts in this field [9,12]. Moreover, there exist several subsets of this research problem which impose different constraints in terms of data storage and evaluation. We can divide existing methods based on their constraints:
Task incremental (TI) methods: In this problem, a model trained to perform object classification on a specific dataset is incrementally trained to classify objects in a new dataset. A key characteristic of these experiments is that during evaluation, the final model is tested on different datasets (base and incrementally learned) separately. This is known as multi-headed evaluation [4]. In such an evaluation, the classes belonging to two different tasks have no chance to confuse with one another. One of the earlier works in this category is LwF [12], where a distillation loss is used to preserve information of the base classes. Also, the data from base classes is used during training, while the classifier learns new classes. A prominent work in this area is EWC [9], where at each incremental task the weights of the student model are set to those of their corresponding teacher model, according to their importance of network weights. Aljundi et al. present MAS [1], a technique to train the agents to learn what information not to forget. All experiments in this category use multi-headed evaluation, which is different from the problem setting of this paper where we use single-headed evaluation, defined explicitly in [4]. Single-headed evaluation is another evaluation method wherein the model is evaluated on both base and incrementally learned classes jointly. Clearly, multi-headed evaluation is relatively easier than single-headed evaluation, as explained in [4].
Class incremental (CI) methods: In this problem, a model trained to perform object classification on specific classes of a dataset is incrementally trained to classify new unseen classes in the same dataset. Most of the existing work exploring this problem use single-headed evaluation. This makes the CI problem more difficult than the TI problem because the model can confuse the new class with a base class in the CI problem. iCaRL [14] belongs to this category. In iCaRL [14], Rebuffi et al. propose a technique to jointly learn feature representation and classifiers. They also introduce a strategy to select exemplars which is used in combination with the distillation loss to prevent catastrophic forgetting. In addition, a new baseline: LwF-MC is introduced in [14], which is a class incremental version of LwF [12]. LwF-MC uses the distillation loss to preserve the knowledge of base classes along with a classification loss, without storing the data of base classes and is evaluated using single-headed evaluation. Another work aiming to solve the CI problem is [4], which evaluates using both single-headed and multi-headed evaluations and highlight their difference. Chaudhry et al. [4] introduce metrics to quantify forgetting and intransigence, and also propose an algorithm: Riemannian walk to incrementally learn classes.
A key specification of most incremental learning frameworks is whether or not they allow storing the data of base classes (i.e. classes on which the classifier is originally trained). We can also divide existing methods based on this specification:
Methods which use base class data: Several experiments have been proposed to use a small percentage of the data of base classes while training the classifier to learn new classes. iCaRL [14] uses the exemplars of base classes, while incrementally learning new classes. Similarly, Chaudhry et al. [4] also use a fraction of the data of base classes. Chaudhry et al. [4] also show that this is especially useful for alleviating intransigence, which is a problem faced in single-headed evaluation. However, storing data for base classes increases memory requirement at each incremental step, which is not feasible when the memory budget is limited.
[Table 1: Categorization of prior work by constraints. Use base class data — CI methods: iCaRL [14], [4], [17]; TI methods: LwF [12]. No base class data — CI methods: LwF-MC [14], LwM; TI methods: IMM [10], EWC [9], MAS [1], [2], [8].]
Methods which do not use base class data: Several TI methods described earlier (such as [1,9]) do not use the information about base classes while training the classifier to learn new classes incrementally. To the best of our knowledge, LwF-MC [14] is the only CI method which does not use base class data but uses single-headed evaluation. Table 1 presents a categorization summary of previous works in this field. We propose a technique to solve the CI problem, without using any base class data. We can infer from the discussion above that LwF-MC [14] is the only existing work which uses single-headed evaluation, and hence we use it as our baseline. We intend to use attention maps in an incremental setup, instead of only knowledge distillation, to transfer more comprehensive knowledge of base classes from the teacher to the student model. Although in [18], enforcing equivalence of attention maps of teacher and student models has been explored previously for transferring knowledge from teacher to student models, the same approach cannot be applied to an incremental learning setting. In our incremental problem setup, due to the absence of base class data, we intend to utilize the attention region in the new data which resembles one of the base classes. But these regions are not prominent since the data does not belong to any of the base classes, thus making class-specific attention maps a necessity. Class-specificity is required to mine out base class regions in a more targeted fashion, which is why generic attention maps such as those used in [18] are not applicable, as they cannot provide a class-specific explanation about relevant patterns corresponding to the target class. Moreover, our problem setup is different from knowledge distillation because at incremental step t, we freeze M t−1 while training M t , and do not allow M t to access data from the base classes, and therefore M t−1 and M t are trained using a completely different set of classes. This makes the problem more challenging as the output of M t−1 on feeding data from unseen classes is the only source of base class data. This is further explained in Sec. 4.1.
We intend to explore the CI problem by proposing to constrain the attention maps of the teacher and student models to be equivalent (in addition to their prediction vectors), to improve the information preserving capability of LwF-MC [14]. In LwF-MC and our proposed method LwM, storing teacher models trained in previous incremental steps is not allowed since it would not be feasible to accumulate models from all the previous steps when the memory budget is limited.
Distillation loss (L D )
L D was first introduced in [12] for incremental learning. It is defined as follows:
$$L_D(y, \hat{y}) = -\sum_{i=1}^{N} y'_i \log(\hat{y}'_i) \qquad (1)$$
where y and ŷ are prediction vectors (composed of probability scores) of M t−1 and M t for base classes at incremental step t, each of length N (assuming that M t−1 is trained on N base classes). Also, $y'_i = \sigma(y_i)$ and $\hat{y}'_i = \sigma(\hat{y}_i)$, where σ(·) is the sigmoid activation. This definition of L D is consistent with that defined in LwF-MC [14]. Essentially, L D enforces the base class predictions of M t and M t−1 to be equivalent, when an image belonging to one of the incrementally added classes is fed to each of them. Moreover, we believe that there exist common visual semantics or patterns in both base and new class data. Therefore, it makes sense to encourage the feature responses of M t and M t−1 to be equivalent, when new class data is given as input. This helps to retain the old class knowledge (in terms of the common visual semantics).
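A minimal PyTorch-style sketch of Eq. (1) follows; treating the inputs as raw network scores (logits) and the small stability constant are assumptions about the interface, not details from the paper.

```python
import torch

def distillation_loss(teacher_scores, student_scores):
    """L_D of Eq. (1): cross-entropy between sigmoid-activated base-class scores.

    teacher_scores: (batch, N) raw scores of M_{t-1} for the N base classes.
    student_scores: (batch, N) the student's scores for the same N base classes
                    (the first N entries of its (N + k)-way output).
    """
    y = torch.sigmoid(teacher_scores)         # y'_i
    y_hat = torch.sigmoid(student_scores)     # y-hat'_i
    eps = 1e-7                                # numerical stability constant (assumption)
    return -(y * torch.log(y_hat + eps)).sum(dim=1).mean()
```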
Generating attention maps
We describe the technique employed to generate attention maps. In our experiments we use the Grad-CAM [15] for this task. For using the Grad-CAM, the image is first forwarded to the model, obtaining a raw score for every class. Following this, the gradient of score y c for a desired class c is computed with respect to each convolutional feature map A k . For each A k , global average pooling is performed to obtain the neuron importance α k of A k . Finally, all the A k weighted by α k are passed through a ReLU activation function to obtain a final attention map for class c.
More precisely, let $\alpha_k$ be the globally average-pooled gradient $\partial y^c / \partial A^k$, and let $\alpha = [\alpha_1, \ldots, \alpha_K]$, where K is the number of convolutional feature maps in the layer from which the attention map is generated. The attention map Q can be defined as
$$Q = \mathrm{ReLU}(\alpha^{T} A) \qquad (2)$$
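A sketch of the Grad-CAM computation of Eq. (2) is given below; the way the convolutional feature maps are captured (here assumed to be stored by a forward hook in a small dict) and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, class_idx, captured):
    """Class-specific attention map Q = ReLU(alpha^T A) for a single image.

    `captured` is assumed to be a dict filled by a forward hook with the feature
    maps of the chosen convolutional layer under the key "A" (shape (1, K, H, W)).
    """
    scores = model(image.unsqueeze(0))                       # (1, num_classes)
    A = captured["A"]                                        # convolutional feature maps
    grads = torch.autograd.grad(scores[0, class_idx], A)[0]  # dy^c / dA, (1, K, H, W)
    alpha = grads.mean(dim=(2, 3))                           # global average pooling -> (1, K)
    q = F.relu((alpha[:, :, None, None] * A).sum(dim=1))     # weighted sum over maps, then ReLU
    return q.squeeze(0)                                      # (H, W) attention map
```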
Proposed approach
We introduce an information preserving penalty (L AD ) based on attention maps. We combine L AD with the distillation loss L D and a classification loss L C to construct LwM, an approach which encourages the attention maps of teacher and student to be similar. Our LwM framework is shown in Figure 3. The loss function of our LwM approach is defined below:
$$L_{LwM} = L_C + \beta L_D + \gamma L_{AD} \qquad (3)$$
Here β, γ are the weights used for L D , L AD respectively. A representation of our LwM approach is presented in Figure 3. In comparison to LwM, LwF-MC [14] only uses a classification loss combined with distillation loss and is our baseline.
Attention distillation loss (L AD )
At incremental step t, we define student model M t , initialized using a teacher model M t−1 . We assume M t is proficient in classifying N base classes. M t is required to recognize N + k classes, where k is the number of previously unseen classes added incrementally. Hence, the sizes of the prediction vectors of M t−1 and M t are N and N + k respectively. For any given input image i, we denote the vectorized attention maps generated by M t−1 and M t , for class c as Q i,c t−1 and Q i,c t , respectively. We generate these maps using Grad-CAM [15], as explained above.
$$Q^{i,c}_{t-1} = \mathrm{vector}\big(\text{Grad-CAM}(i, M_{t-1}, c)\big) \qquad (4)$$
$$Q^{i,c}_{t} = \mathrm{vector}\big(\text{Grad-CAM}(i, M_{t}, c)\big) \qquad (5)$$
We assume that the length of each vectorized attention map is l. In [18], it has been mentioned that normalizing the attention map by dividing it by the L 2 norm of the map is an important step for student training. Hence we perform this step while computing L AD . During training of M t , an image belonging to one of the new classes to be learned (denoted as I n ) is given as input to both M t−1 and M t . Let b be the top base class predicted by M t (i.e. the base class having the highest score) for I n . For this input, L AD is defined as the sum of the element-wise L 1 differences of the normalized, vectorized attention maps:
$$L_{AD} = \sum_{j=1}^{l} \left\lVert \frac{Q^{I_n,b}_{t-1,j}}{\lVert Q^{I_n,b}_{t-1} \rVert_2} - \frac{Q^{I_n,b}_{t,j}}{\lVert Q^{I_n,b}_{t} \rVert_2} \right\rVert_1 \qquad (6)$$
From the explanation above, we know that for training M t , M t−1 is fed with the data from the classes that it has not seen before (I n ). Essentially, the attention regions generated by M t−1 for I n represent the regions in the image which resemble the base classes. If M t and M t−1 have equivalent knowledge of base classes, they should have a similar response to these regions, and therefore $Q^{I_n,b}_{t}$ should be similar to $Q^{I_n,b}_{t-1}$. This implies that the attention outputs of M t−1 are the only traces of base data, which guide M t 's knowledge of base classes. We use the L 1 distance between $Q^{I_n,b}_{t-1}$ and $Q^{I_n,b}_{t}$ as a penalty to enforce their similarity. We experimented with both L 1 and L 2 distance in this context. However, as we obtained better results with L 1 distance on held-out data, we chose L 1 over L 2 distance.
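A sketch of Eq. (6) in the same PyTorch style, assuming the two Grad-CAM maps for the top-scoring base class b have already been computed:

```python
import torch

def attention_distillation_loss(q_teacher, q_student, eps=1e-12):
    """L_AD of Eq. (6): element-wise L1 distance between the L2-normalized,
    vectorized attention maps of the teacher (M_{t-1}) and the student (M_t).
    The small eps is added for numerical stability and is an assumption.
    """
    qt = q_teacher.flatten()
    qs = q_student.flatten()
    qt = qt / (qt.norm(p=2) + eps)
    qs = qs / (qs.norm(p=2) + eps)
    return (qt - qs).abs().sum()
```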
According to Eq. 2, attention maps encode the gradient of the score of class b, y b , with respect to the convolutional feature maps A. This information is not explicitly captured by the distribution of class scores (used by L D ). Moreover, attention maps are a 2D manifestation of the prediction vectors (y, ŷ), which means that they capture more spatial information than these vectors, and hence it is more advantageous to use attention maps than using only prediction vectors.
[Table 2: The statistics of the datasets used in our experiments, in accordance with [14]. Additionally, we also perform experiments on the CUB-200-2011 [16] dataset. In the table, "acc." represents accuracy.]
Experiments
We first explain our baseline, which is LwF-MC [14]. Following that, we provide information about the datasets used in our experiments. After that, we describe the iterative protocol to perform classification at every incremental step. We also provide implementation details including architectural information.
Baseline
As our baseline is LwF-MC [14], we first implement its objective function, which is a sum of a classification loss and a distillation loss (L C + L D ). In all our experiments, we use a cross entropy loss for L C to be consistent with [14]. However, it should be highlighted that the official implementation of L D in LwF-MC by [14] is different from the definition of L D in [12]. As LwF-MC (but not LwF) is our baseline, we use iCaRL's implementation of LwF-MC in our work. LwF cannot handle CI problems where no base class training data is available (according to Table 1), which is why we choose LwF-MC as the baseline and use iCaRL's implementation of it.
Datasets
We use the two datasets used in LwF-MC [14] for our experiments. Additionally, we also perform experiments on the Caltech-101 [5] and CUBS-200-2011 [16] datasets. The details of the datasets are provided in Table 2. These datasets are constructed by randomly selecting a batch of classes at every incremental step. In both datasets, the classes belonging to different batches are disjoint. For a fair comparison, the data preparation for all the datasets and the evaluation strategy are the same as those for LwF-MC [14].
[Table 3: Experiment configurations used in this work, identified by their respective experiment IDs.]
Experimental protocol
We now describe the protocol using which we iteratively train M t , so that it preserves the knowledge of the base classes while incrementally learning new classes.
Initialization: Before the first incremental step (t = 1), we train a teacher model M 0 on 10 base classes, using a classification loss for 10 epochs. The classification loss is a cross entropy loss L C . Following this, for t = 1 to t = k we initialize student M t using M t−1 as its teacher, and feed data from a new batch of images that is to be incrementally learned, to both of these models. Here k is the number of incremental steps.
Applying IPP and classification loss to student model: Given the data from new classes as inputs, we generate the outputs of M t and M t−1 with respect to the base class having the highest score. These outputs can either be class-specific attention maps (required for computing L AD ) or class-specific scores (required for computing L D ). Using these outputs we compute an IPP which can either be L AD or L D . In addition, we apply a classification loss to M t based on its outputs with respect to the new classes which are to be learned incrementally. We jointly apply the classification loss and the IPP to M t and train it for 10 epochs. Once M t is trained, we use it as a teacher model in the next incremental step, and follow the aforementioned steps iteratively, until all the k incremental steps are completed.
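Putting the protocol together, one incremental step can be sketched as follows. The optimizer choice and the way the three losses are supplied (as callables) are our own simplifications; the learning rate 0.01 is taken from the implementation details below, and the weights β and γ follow Eq. (3).

```python
import torch

def incremental_step(teacher, student, new_class_loader, losses, beta, gamma, epochs=10):
    """Train `student` (initialized from `teacher` and widened by k new classes)
    on new-class data only, with the loss L_C + beta * L_D + gamma * L_AD of Eq. (3).

    `losses` is a dict of callables {"lc": ..., "ld": ..., "lad": ...}, e.g. built
    from the distillation and attention-distillation sketches above.
    """
    teacher.eval()
    for p in teacher.parameters():
        p.requires_grad = False                      # the teacher stays frozen

    optimizer = torch.optim.SGD(student.parameters(), lr=0.01)   # optimizer type assumed
    for _ in range(epochs):
        for images, labels in new_class_loader:
            loss = (losses["lc"](student, images, labels)
                    + beta * losses["ld"](teacher, student, images)
                    + gamma * losses["lad"](teacher, student, images))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student                                   # teacher for the next incremental step
```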
Implementation details
We use the ResNet-18 [7] architecture for training student and teacher models on the iILSVRC-small dataset, and the ResNet-34 [7] for training models on the iCIFAR-100 dataset. This is consistent with the networks and datasets used in [14]. We used a learning rate of 0.01. The feature maps of the final convolutional layer are used to generate attention maps using Grad-CAM [15]. The combinations of classification loss and IPP, along with their experiment IDs, are provided in Table 3. The experiment configurations will be referred to as their respective experiment IDs from now on. Referring to Eq. 3, we provide details regarding the weights of each loss used in LwM in the supplementary material.
[Figure 5: The performance comparison between our method, LwM, and the baselines. LwM outperforms LwF-MC [14] and "using only classification loss with fine-tuning" on the iILSVRC-small and iCIFAR-100 datasets [14]. LwM even outperforms iCaRL [14] on the iILSVRC-small dataset given that iCaRL has the unfair advantage of accessing the base-class data.]
Results
Before discussing the quantitative results and advantages of our proposed penalties, we show some qualitative results to demonstrate the advantage of using L AD . We show that we can retain attention regions of base classes for a longer time when more classes are incrementally added to the classifier by using LwM as compared to LwF-MC [14]. Before the first incremental step t = 1, we have M 0 trained on 10 base classes. Now, following the protocol in Sec. 5.3, we incrementally add 10 classes at each incremental step. At every incremental step t, we train M t with 3 configurations: C, LwF-MC [14], and LwM. We use the M t to generate the attention maps for the data from base classes (using which M 0 was trained), which it has not seen, and show the results in Figure 4. Additionally, we also generate corresponding attention maps using M 0 (i.e. the first teacher model), which can be considered 'ideal' (as target maps) as M 0 was given full access to base class data. For the M t s trained with C, it is seen that attention regions for base classes are quickly forgotten after every incremental step. This can be attributed to catastrophic forgetting [9,10]. M t trained with LwF-MC [14] have slightly better attention preserving ability but as the number of incremental steps increases, the attention regions diverge from the 'ideal' attention regions. Interestingly, the attention maps generated by M t trained with LwM configuration retain the attention regions for base classes for all incremental steps shown in Figure 4, and are most similar to the target attention maps. These examples support that LwM delays forgetting of base class knowledge. We show more results in more incremental steps in the supplementary material.
We now show the quantitative results of the following configurations: C, LwF-MC [14] and LwM. To show the efficacy of LwM across datasets, we evaluate these configurations on multiple datasets. The results on the iILSVRC-small and iCIFAR-100 datasets are presented in Figure 5. For the iILSVRC-small dataset, the performance of LwM is better than that of the baseline LwF-MC [14]. LwM outperforms the baseline by a margin of more than 30% when the number of classes is 40 or more. Especially for 100 classes, LwM achieves an improvement of more than 50% over the baseline LwF-MC [14]. In addition, LwM outperforms iCaRL [14] at every incremental step, even though iCaRL has the unfair advantage of storing the exemplars of base classes while training the student model for the iILSVRC-small dataset.
To be consistent with the LwF-MC experiments in [14], we perform experiments by constructing the iCIFAR-100 dataset using batches of 10, 20, and 50 classes at each incremental step. The results are provided in Figure 5. It can be seen that LwM outperforms LwF-MC for all three sizes of incremental batches on the iCIFAR-100 dataset. Hence, we conclude that LwM consistently outperforms LwF-MC [14] on the iILSVRC-small and iCIFAR-100 datasets. Additionally, we also perform these experiments on the Caltech-101 [5] and CUBS-200-2011 [16] datasets with a batch of 10 classes at every incremental step and compare LwM with fine-tuning. The results for these two datasets are shown in Table 4. The advantage of incrementally adding each loss on top of L C is also established in Figure 5. Here, we show that the performance with only C is very poor due to catastrophic forgetting [9,10]. We achieve some improvement when L D is added as an IPP in LwF-MC. The performance further improves with the addition of L AD in the LwM configuration. For both the iILSVRC-small and iCIFAR-100 datasets, the tabular versions of the results in Figure 5 are provided in the supplementary material.
[Table 4: Results obtained on Caltech-101 [5] and CUBS-200-2011 [16]. Here FT refers to fine-tuning. The first step refers to the training of the first teacher model using 10 classes.]
Conclusion and future work
We explored the IL problem for the task of object classification, and proposed a technique, LwM, which combines L D with L AD to utilize attention maps for transferring the knowledge of base classes from the teacher to the student model, without requiring any data of base classes during training. This technique outperforms the baseline in all the scenarios that we investigate. Regarding future applications, LwM can be used in many real-world scenarios. For instance, a face recognition network trained on specific identities can be extended in an incremental setup using our framework to recognize additional identities. Also, while we explore the IL problem for classification in this work, we believe that this can also be extended to segmentation. We believe that incremental segmentation is a challenging problem due to the absence of abundant ground truth maps. The importance of incremental segmentation has already been underscored in [3]. Also, as visual attention is more meaningful for segmentation (as shown in [11]), we intend to apply our framework to incremental segmentation in the near future. Other applications may also include incrementally learning from data belonging to the same class but different domains, to build a student model capable of adapting to various domains.
| 5,055 |
1811.08051
|
2901678097
|
Incremental learning (IL) is an important task aimed at increasing the capability of a trained model, in terms of the number of classes recognizable by the model. The key problem in this task is the requirement of storing data (e.g. images) associated with existing classes, while teaching the classifier to learn new classes. However, this is impractical as it increases the memory requirement at every incremental step, which makes it impossible to implement IL algorithms on edge devices with limited memory. Hence, we propose a novel approach, called Learning without Memorizing (LwM)', to preserve the information about existing (base) classes, without storing any of their data, while making the classifier progressively learn the new classes. In LwM, we present an information preserving penalty: Attention Distillation Loss ( @math ), and demonstrate that penalizing the changes in classifiers' attention maps helps to retain information of the base classes, as new classes are added. We show that adding @math to the distillation loss which is an existing information preserving loss consistently outperforms the state-of-the-art performance in the iILSVRC-small and iCIFAR-100 datasets in terms of the overall accuracy of base and incrementally learned classes.
|
In this problem, a model trained to perform object classification on specific classes of a dataset is incrementally trained to classify new unseen classes in the same dataset. Most of the existing work exploring this problem uses single-headed evaluation. This makes the CI problem more difficult than the TI problem because the model can confuse the new class with a base class in the CI problem. iCaRL @cite_4 belongs to this category. In iCaRL @cite_4 , the authors propose a technique to jointly learn feature representations and classifiers. They also introduce a strategy to select exemplars which is used in combination with the distillation loss to prevent catastrophic forgetting. In addition, a new baseline, LwF-MC, is introduced in @cite_4 , which is a class-incremental version of LwF @cite_3 . LwF-MC uses the distillation loss to preserve the knowledge of base classes along with a classification loss, without storing the data of base classes, and is evaluated using single-headed evaluation. Another work aiming to solve the CI problem is @cite_2 , which evaluates using both single-headed and multi-headed evaluations and highlights their difference. @cite_2 introduce metrics to quantify forgetting and intransigence, and also propose an algorithm, Riemannian walk, to incrementally learn classes.
|
{
"abstract": [
"A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail.",
"When building a unified vision system or gradually adding new apabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises where we add new capabilities to a Convolutional Neural Network (CNN), but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaption techniques and performs similarly to multitask learning that uses original task data we assume unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning with similar old and new task datasets for improved new task performance.",
"Incremental learning (il) has received a lot of attention recently, however, the literature lacks a precise problem definition, proper evaluation settings, and metrics tailored specifically for the il problem. One of the main objectives of this work is to fill these gaps so as to provide a common ground for better understanding of il. The main challenge for an il algorithm is to update the classifier whilst preserving existing knowledge. We observe that, in addition to forgetting, a known issue while preserving knowledge, il also suffers from a problem we call intransigence, its inability to update knowledge. We introduce two metrics to quantify forgetting and intransigence that allow us to understand, analyse, and gain better insights into the behaviour of il algorithms. Furthermore, we present RWalk, a generalization of ewc++ (our efficient version of ewc [6]) and Path Integral [25] with a theoretically grounded KL-divergence based perspective. We provide a thorough analysis of various il algorithms on MNIST and CIFAR-100 datasets. In these experiments, RWalk obtains superior results in terms of accuracy, and also provides a better trade-off for forgetting and intransigence."
],
"cite_N": [
"@cite_4",
"@cite_3",
"@cite_2"
],
"mid": [
"2964189064",
"2473930607",
"2786446225"
]
}
|
Learning without Memorizing
|
Most state-of-the-art solutions to recognition tasks in computer vision require using models which are specifically trained for these tasks [6,13]. For the tasks involving categories (such as object classification, segmentation), the complexity of the task (i.e. the possible number of target classes) limits the ability of these trained models. For example, a trained model aimed for object recognition can only classify object categories on which it has been trained. However, if the number of target classes increases, the model must be updated in such a way that it performs well on the original classes on which it has been trained, also known as base classes, while it incrementally learns new classes as well.
If we retrain the model only on new, previously unseen classes, it would completely forget the base classes, which is known as catastrophic forgetting [9,10], a phenomenon which is not observed in humane learning. Therefore, most existing solutions [4,14,17] explore incremental learning (IL) by allowing the model to retain a fraction of the training data of base classes, while incrementally learning new classes. Yu et al. [17] have proposed retaining trained models encoding base class information, to transfer their knowledge to the model learning new classes. However, this process is not scalable. This is because storing base class data or models encoding base class information is a memory expensive task, and hence is cumbersome when used in a lifelong learning setting. Also, in an industrial setting, when a trained object classification model is delivered to the customer, the training data is kept private for proprietary reasons. Due to this, the customer would not be able to update the trained model to incorporate new target classes in the absence of base class data. Moreover, storing base class data for incrementally learning new classes is not biologically inspired. For example, when a toddler learns to recognize new shapes/objects, it is observed that it does not completely forget the shapes or objects it already knows. It also does not always need to revisit the old information when learning new entities. Inspired by this, we aim to explore incremental learning in object classification by adding a stream of new classes without storing data belonging to classes that the classifier has already seen. While IL solutions which do not require base class data, such as [1,9] have been proposed, these methods mostly aim at incrementally learning new tasks, which means that at test time the model cannot confuse the incrementally learned tasks with tasks it has already learned, making the problem setup much easier.
We aim to explore the problem of incrementally learning object classes, without storing any data or model associated with base classes (Figure 1), while allowing the model to confuse new classes with old ones. In our problem setup, an ideal incremental learner should have the following properties:
i It should help a trained model to learn new classes obtained from a stream of data, while preserving the model's knowledge of base class information.
ii At test time, it should enable the model to consider all the classes it has learned when the model makes a prediction.
iii The size of the memory footprint should not grow at all, irrespective of the number of classes seen thus far.
An existing work targeting the same problem is LwF-MC, which is one of the baselines in [14]. For ease of explanation, we use the following terminology: at incremental step t, the previously trained model M t−1 serves as the teacher, the model being trained is the student M t , and the loss that preserves knowledge of the base classes is referred to as an information preserving penalty (IPP). Initialized using M t−1 , M t is then trained to learn new classes using a classification loss, L C . However, an IPP is also applied to M t so as to minimize the divergence between the representations of M t−1 and M t . While L C helps M t learn new classes, the IPP prevents M t from diverging too much from M t−1 . Since M t is already initialized as M t−1 , the initial value of the IPP is expected to be close to zero. However, as M t keeps learning new classes with L C , it starts diverging from M t−1 , which leads the IPP to increase. The purpose of the IPP is to prevent the divergence of M t from M t−1 . Once M t is trained for a fixed number of epochs, it is used as a teacher in the next incremental step, using which a new student model is initialized.
In LwF-MC [14], the IPP is the knowledge distillation loss. The knowledge distillation loss L D , in this context, was first introduced in [12]. It captures the divergence between the prediction vectors of M t−1 and M t . In an incremental setup, when an image belonging to a new class (I n ) is fed to M t−1 , the base classes which have some resemblance in I n are captured. L D enforces M t to capture the same base classes. Thus, L D essentially makes M t learn 'what' are the possible base classes in I n , as shown in Figure 1. The pixels which have a high influence on the models' prediction constitute the attention region of the network. However, L D does not explicitly take into account the degree of each pixel influencing the models predictions. For example, in Figure 2, in the first row, it is seen that at step n, even though the network focuses on an incorrect region while predicting 'dial telephone', the numerical value of L D (0.09) is same as that when the network focuses on the correct region in step n, in the bottom row.
We hypothesize that attention regions encode the models' representation more precisely. Hence, constraining the attention regions of M t and M t−1 using an Attention Distillation Loss (L AD , explained in Sec. 4.1), to minimize the divergence of the representations of M t from that of M t−1 , is more meaningful. This is because, instead of finding which base classes are resembled in the new data, attention maps explain 'why' hints of a base class are present (as shown in Figure 1). Using these hints, L AD , in an attempt to make the attention maps of M t−1 and M t equivalent, helps to encode some visual knowledge of the base classes in M t . The utility of L AD is seen in the example in Figure 2, where even though the model correctly predicts the image as 'dial telephone', the value of L AD in step n increases if the attention regions diverge too much from the region in Step 0.
We propose an approach where an Attention Distillation Loss (L AD ) is applied to M t to prevent its divergence from M t−1 , at incremental step t. Precisely, we propose to constrain the L 1 distance between the attention maps generated by M t−1 and M t in order to preserve the knowledge of base classes. The reasoning behind this strategy is described in Sec 4.1. This is applied in addition to the distillation loss L D and a classification loss for the student model to incrementally learn new classes.
The main contribution of this work is to provide an attention-based approach, termed 'Learning without Memorizing (LwM)', that helps a model to incrementally learn new classes by restricting the divergence between student and teacher models. LwM does not require any data of the base classes when learning new classes. Different from the contemporary approaches which explore the same problem, LwM takes into account the gradient flow information of teacher and student models by generating attention maps using these models. It then constrains this information to be equivalent for the teacher and student models, thus preventing the student model from diverging too much from the teacher model. Finally, we show that LwM consistently outperforms the state-of-the-art performance on the iILSVRC-small [14] and iCIFAR-100 [14] datasets.
Related work
In object classification, incremental learning (IL) is the process of increasing the breadth of an object classifier, by training it to recognize new classes, while retaining its knowledge of the classes on which it has been trained originally. In the past couple of years, there have been considerable research efforts in this field [9,12]. Moreover, there exist several subsets of this research problem which impose different constraints in terms of data storage and evaluation. We can divide existing methods based on their constraints:
Task incremental (TI) methods: In this problem, a model trained to perform object classification on a specific dataset is incrementally trained to classify objects in a new dataset. A key characteristic of these experiments is that during evaluation, the final model is tested on different datasets (base and incrementally learned) separately. This is known as multi-headed evaluation [4]. In such an evaluation, the classes belonging to two different tasks have no chance to confuse with one another. One of the earlier works in this category is LwF [12], where a distillation loss is used to preserve information of the base classes. Also, the data from base classes is used during training, while the classifier learns new classes. A prominent work in this area is EWC [9], where at each incremental task the weights of the student model are set to those of their corresponding teacher model, according to their importance of network weights. Aljundi et al. present MAS [1], a technique to train the agents to learn what information not to forget. All experiments in this category use multi-headed evaluation, which is different from the problem setting of this paper where we use single-headed evaluation, defined explicitly in [4]. Single-headed evaluation is another evaluation method wherein the model is evaluated on both base and incrementally learned classes jointly. Clearly, multi-headed evaluation is relatively easier than single-headed evaluation, as explained in [4].
Class incremental (CI) methods: In this problem, a model trained to perform object classification on specific classes of a dataset is incrementally trained to classify new unseen classes in the same dataset. Most of the existing work exploring this problem uses single-headed evaluation. This makes the CI problem more difficult than the TI problem because the model can confuse the new class with a base class in the CI problem. iCaRL [14] belongs to this category. In iCaRL [14], Rebuffi et al. propose a technique to jointly learn feature representation and classifiers. They also introduce a strategy to select exemplars, which is used in combination with the distillation loss to prevent catastrophic forgetting. In addition, a new baseline, LwF-MC, is introduced in [14], which is a class incremental version of LwF [12]. LwF-MC uses the distillation loss to preserve the knowledge of base classes along with a classification loss, without storing the data of base classes, and is evaluated using single-headed evaluation. Another work aiming to solve the CI problem is [4], which evaluates using both single-headed and multi-headed evaluations and highlights their difference. Chaudhry et al. [4] introduce metrics to quantify forgetting and intransigence, and also propose an algorithm, Riemannian walk, to incrementally learn classes.
A key specification of most incremental learning frameworks is whether or not they allow storing the data of base classes (i.e. classes on which the classifier is originally trained). We can also divide existing methods based on this specification:
Methods which use base class data: Several experiments have been proposed to use a small percentage of the data of base classes while training the classifier to learn new classes. iCaRL [14] uses the exemplars of base classes, while incrementally learning new classes. Similarly, Chaudhry et al. [4] also use a fraction of the data of base classes. Chaudhry et al. [4] also show that this is especially useful for alleviating intransigence, which is a problem faced in single-headed evaluation. However, storing data for base classes increases the memory requirement at each incremental step, which is not feasible when the memory budget is limited.
Table 1 (categorization of prior work by constraints):
  Use base class data: CI methods: iCaRL [14], [4], [17]; TI methods: LwF [12].
  No base class data: CI methods: LwF-MC [14], LwM; TI methods: IMM [10], EWC [9], MAS [1], [2], [8].
Methods which do not use base class data: Several TI methods described earlier (such as [1,9]) do not use the information about base classes while training the classifier to learn new classes incrementally. To the best of our knowledge, LwF-MC [14] is the only CI method which does not use base class data but uses single-headed evaluation. Table 1 presents a categorization summary of previous works in this field. We propose a technique to solve the CI problem without using any base class data. We can infer from the discussion above that LwF-MC [14] is the only existing work which uses single-headed evaluation, and hence we use it as our baseline. We intend to use attention maps in an incremental setup, instead of only knowledge distillation, to transfer more comprehensive knowledge of base classes from the teacher to the student model. Although enforcing equivalence of attention maps of teacher and student models has been explored previously in [18] for transferring knowledge from teacher to student models, the same approach cannot be applied to an incremental learning setting. In our incremental problem setup, due to the absence of base class data, we intend to utilize the attention region in the new data which resembles one of the base classes. But these regions are not prominent since the data does not belong to any of the base classes, thus making class-specific attention maps a necessity. Class-specificity is required to mine out base class regions in a more targeted fashion, which is why generic attention maps such as those used in [18] are not applicable, as they cannot provide a class-specific explanation about relevant patterns corresponding to the target class. Moreover, our problem setup is different from knowledge distillation because at incremental step t, we freeze M t−1 while training M t , and do not allow M t to access data from the base classes, and therefore M t−1 and M t are trained using a completely different set of classes. This makes the problem more challenging, as the output of M t−1 on feeding data from unseen classes is the only source of base class data. This is further explained in Sec. 4.1.
We intend to explore the CI problem by proposing to constrain the attention maps of the teacher and student models to be equivalent (in addition to their prediction vectors), to improve the information preserving capability of LwF-MC [14]. In LwF-MC and our proposed method LwM, storing teacher models trained in previous incremental steps is not allowed since it would not be feasible to accumulate models from all the previous steps when the memory budget is limited.
Distillation loss (L D )
L D was first introduced in [12] for incremental learning. It is defined as follows:
L_D(y, \hat{y}) = -\sum_{i=1}^{N} y'_i \, \log(\hat{y}'_i) \qquad (1)
where y and ŷ are the prediction vectors (composed of probability scores) of M t−1 and M t for the base classes at incremental step t, each of length N (assuming that M t−1 is trained on N base classes). Also, y′_i = σ(y_i) and ŷ′_i = σ(ŷ_i), where σ(·) is the sigmoid activation. This definition of L D is consistent with that defined in LwF-MC [14]. Essentially, L D enforces the base class predictions of M t and M t−1 to be equivalent when an image belonging to one of the incrementally added classes is fed to each of them. Moreover, we believe that there exist common visual semantics or patterns in both base and new class data. Therefore, it makes sense to encourage the feature responses of M t and M t−1 to be equivalent when new class data is given as input. This helps to retain the old class knowledge (in terms of the common visual semantics).
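As an illustration, a minimal PyTorch-style sketch of Eq. 1 is given below. The function and argument names (distillation_loss, teacher_logits, student_logits) and the averaging over a batch are our own assumptions, not part of the paper.

```python
import torch

def distillation_loss(teacher_logits: torch.Tensor, student_logits: torch.Tensor) -> torch.Tensor:
    """Eq. 1: L_D = -sum_i sigma(y_i) * log(sigma(y_hat_i)), averaged over the batch."""
    y = torch.sigmoid(teacher_logits)       # y'_i: soft targets from M_{t-1} over the N base classes
    y_hat = torch.sigmoid(student_logits)   # y_hat'_i: M_t's scores on its N base-class outputs
    # A small epsilon keeps log() finite if a student probability underflows to zero.
    per_sample = -(y * torch.log(y_hat + 1e-8)).sum(dim=1)
    return per_sample.mean()
```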
Generating attention maps
We describe the technique employed to generate attention maps. In our experiments we use the Grad-CAM [15] for this task. For using the Grad-CAM, the image is first forwarded to the model, obtaining a raw score for every class. Following this, the gradient of score y c for a desired class c is computed with respect to each convolutional feature map A k . For each A k , global average pooling is performed to obtain the neuron importance α k of A k . Finally, all the A k weighted by α k are passed through a ReLU activation function to obtain a final attention map for class c.
More precisely, let α_k be the global average pooled value of ∂y^c/∂A^k, computed for each of the K convolutional feature maps of the layer from which the attention map is to be generated, and let α collect these α_k. The attention map Q can be defined as
Q = \mathrm{ReLU}(\alpha^{T} A) \qquad (2)
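A hedged sketch of this Grad-CAM computation is shown below, assuming the feature maps of the chosen layer have been captured (e.g. with a forward hook) for a single image of shape [1, K, H, W]; the helper name and the create_graph flag (needed later when the map must stay differentiable) are our additions.

```python
import torch
import torch.nn.functional as F

def grad_cam_map(class_score: torch.Tensor, feature_maps: torch.Tensor,
                 create_graph: bool = False) -> torch.Tensor:
    """Grad-CAM for one image. `feature_maps` has shape [1, K, H, W] and must be the
    tensor from which the scalar `class_score` (y^c) was computed in the same forward pass."""
    grads = torch.autograd.grad(class_score, feature_maps,
                                retain_graph=True, create_graph=create_graph)[0]
    alpha = grads.mean(dim=(2, 3))                                    # GAP of d y^c / d A^k -> [1, K]
    q = F.relu((alpha[:, :, None, None] * feature_maps).sum(dim=1))   # Eq. 2 -> [1, H, W]
    return q.squeeze(0)
```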
Proposed approach
We introduce an information preserving penalty (L AD ) based on attention maps. We combine L AD with the distillation loss L D and a classification loss L C to construct LwM, an approach which encourages attention maps of teacher and student to be similar. Our LwM framework is shown in Figure 3. The loss function of our LwM approach is defined below:
L_{LwM} = L_C + \beta L_D + \gamma L_{AD} \qquad (3)
Here β, γ are the weights used for L D , L AD respectively. A representation of our LwM approach is presented in Figure 3. In comparison to LwM, LwF-MC [14] only uses a classification loss combined with distillation loss and is our baseline.
Attention distillation loss (L AD )
At incremental step t, we define student model M t , initialized using a teacher model M t−1 . We assume M t is proficient in classifying N base classes. M t is required to recognize N + k classes, where k is the number of previously unseen classes added incrementally. Hence, the sizes of the prediction vectors of M t−1 and M t are N and N + k respectively. For any given input image i, we denote the vectorized attention maps generated by M t−1 and M t , for class c as Q i,c t−1 and Q i,c t , respectively. We generate these maps using Grad-CAM [15], as explained above.
Q^{i,c}_{t-1} = \mathrm{vector}(\text{Grad-CAM}(i, M_{t-1}, c)) \qquad (4)
Q^{i,c}_{t} = \mathrm{vector}(\text{Grad-CAM}(i, M_{t}, c)) \qquad (5)
We assume that the length of each vectorized attention map is l. In [18], it has been mentioned that normalizing the attention map by dividing it by the L 2 norm of the map is an important step for student training. Hence we perform this step while computing L AD . During training of M t , an image belonging to one of the new classes to be learned (denoted as I n ) is given as input to both M t−1 and M t . Let b be the top base class predicted by M t (i.e. the base class having the highest score) for I n . For this input, L AD is defined as the sum of element-wise L 1 differences of the normalized, vectorized attention maps:
L_{AD} = \sum_{j=1}^{l} \left| \frac{Q^{I_n,b}_{t-1,j}}{\left\| Q^{I_n,b}_{t-1} \right\|_2} - \frac{Q^{I_n,b}_{t,j}}{\left\| Q^{I_n,b}_{t} \right\|_2} \right| \qquad (6)
From the explanation above, we know that for training M t , M t−1 is fed with the data from the classes that it has not seen before (I n ). Essentially, the attention regions generated by M t−1 for I n , represent the regions in the image which resemble the base classes. If M t and M t−1 have equivalent knowledge of base classes, they should have a similar response to these regions, and therefore Q In,b t should be similar to Q In,b t−1 . This implies that the attention outputs of M t−1 are the only traces of base data, which guides M t 's knowledge of base classes. We use the L 1 distance between Q In,b t−1 and Q In,b t as a penalty to enforce their similarity. We experimented with both L 1 and L 2 distance in this context. However, as we obtained better results with L 1 distance on held-out data, we chose L 1 over L 2 distance.
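The following sketch mirrors Eq. 6 under the assumption that q_teacher and q_student are the Grad-CAM maps of M t−1 and M t for the top base class b on the new-class image I n (for instance, produced by the grad_cam_map sketch above); the small epsilon guarding the division is our addition.

```python
import torch

def attention_distillation_loss(q_teacher: torch.Tensor, q_student: torch.Tensor) -> torch.Tensor:
    """Eq. 6: sum of element-wise L1 differences between the L2-normalized, vectorized maps."""
    qt = q_teacher.flatten()
    qs = q_student.flatten()
    qt = qt / (qt.norm(p=2) + 1e-8)   # normalize each map by its own L2 norm, as in [18]
    qs = qs / (qs.norm(p=2) + 1e-8)
    return torch.sum(torch.abs(qt - qs))
```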
According to Eq. 2, attention maps encode the gradient of the score of class b, y^b, with respect to the convolutional feature maps A. This information is not explicitly captured by the distribution of class scores (used by L D ). Attention maps are a 2D manifestation of the prediction vectors (y, ŷ), which means that they capture more spatial information than these vectors, and hence it is more advantageous to use attention maps than to use only prediction vectors.
Table 2: The statistics of the datasets used in our experiments, in accordance with [14]. Additionally, we also perform experiments on the CUB-200-2011 [16] dataset. In the table, "acc." represents accuracy.
Experiments
We first explain our baseline, which is LwF-MC [14]. Following that, we provide information about the datasets used in our experiments. After that, we describe the iterative protocol to perform classification at every incremental step. We also provide implementation details including architectural information.
Baseline
As our baseline is LwF-MC [14], we firstly implement its objective function, which is a sum of a classification loss and distillation loss (L C + L D ). In all our experiments, we use a cross entropy loss for L C to be consistent with [14]. However, it should be highlighted that the official implementation of L D in LwF-MC by [14] is different from the definition of L D in [12]. As LwF-MC (but not LwF) is our baseline, we use iCaRL's implementation of LwF-MC in our work. LwF cannot handle CI problems where no base class training data is available (according to Table 1), which is the reason why we choose LwF-MC as the baseline and iCaRL's implementation.
Datasets
We use two datasets used in LwF-MC [14] for our experiments. Additionally, we also perform experiments on the Caltech-101 [5] as well as CUBS-200-2011 [16] datasets. The details for the datasets are provided in Table 2. These datasets are constructed by randomly selecting a batch of classes at every incremental step. In both datasets, the classes belonging to different batches are disjoint. For a fair comparison, the data preparation for all the datasets and the evaluation strategy are the same as those for LwF-MC [14].
Table 3: Experiment configurations used in this work, identified by their respective experiment IDs.
Experimental protocol
We now describe the protocol using which we iteratively train M t , so that it preserves the knowledge of the base classes while incrementally learning new classes.
Initialization: Before the first incremental step (t = 1), we train a teacher model M 0 on 10 base classes, using a classification loss for 10 epochs. The classification loss is a cross entropy loss L C . Following this, for t = 1 to t = k we initialize student M t using M t−1 as its teacher, and feed data from a new batch of images that is to be incrementally learned, to both of these models. Here k is the number of incremental steps.
Applying IPP and classification loss to student model: Given the data from new classes as inputs, we generate the output of M t and M t−1 with respect to the base class having the highest score. These outputs can either be class-specific attention maps (required for computing L AD ) or class-specific scores (required for computing L D ). Using these outputs, we compute an IPP, which can either be L AD or L D . In addition, we apply a classification loss to M t based on its outputs with respect to the new classes which are to be learned incrementally. We jointly apply the classification loss and IPP to M t and train it for 10 epochs. Once M t is trained, we use it as a teacher model in the next incremental step, and follow the aforementioned steps iteratively, until all the k incremental steps are completed.
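For concreteness, the sketch below ties Eq. 3 to this protocol for a single image, reusing the helper sketches from the earlier sections. The assumption that the models return both logits and final-conv feature maps, the single-image batch, and the default weights beta and gamma are illustrative choices, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def lwm_step(student, teacher, image, label, n_base, optimizer, beta=1.0, gamma=1.0):
    """One LwM update of M_t on a single new-class image; M_{t-1} (teacher) stays frozen."""
    optimizer.zero_grad()

    s_logits, s_feats = student(image)   # [1, N + k] scores and [1, K, H, W] final-conv maps
    t_logits, t_feats = teacher(image)   # [1, N] scores of the frozen teacher

    # L_C: cross-entropy on the full output of M_t for the new-class label.
    loss_c = F.cross_entropy(s_logits, label)

    # L_D (Eq. 1): match the student's N base-class outputs to the teacher's.
    loss_d = distillation_loss(t_logits.detach(), s_logits[:, :n_base])

    # L_AD (Eq. 6): attention maps for the top base class b predicted for this image.
    b = s_logits[0, :n_base].argmax()
    q_teacher = grad_cam_map(t_logits[0, b], t_feats)                     # target map, no gradient needed
    q_student = grad_cam_map(s_logits[0, b], s_feats, create_graph=True)  # kept differentiable w.r.t. M_t
    loss_ad = attention_distillation_loss(q_teacher.detach(), q_student)

    # Eq. 3: L_LwM = L_C + beta * L_D + gamma * L_AD.
    loss = loss_c + beta * loss_d + gamma * loss_ad
    loss.backward()
    optimizer.step()
    return float(loss)
```

Passing create_graph=True for the student map is what lets gradients of L AD flow back into M t 's weights, while the teacher map is detached since M t−1 is never updated.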
Implementation details
We use the ResNet-18 [7] architecture for training student and teacher models on the iILSVRC-small dataset, and the ResNet-34 [7] for training models on the iCIFAR-100 dataset. This is consistent with the networks and datasets used in [14]. We used a learning rate of 0.01. The feature maps of the final convolutional layer are used to generate attention maps using Grad-CAM [15]. The combinations of classification loss and IPP, along with their experiment IDs, are provided in Table 3. The experiment configurations will be referred to by their respective experiment IDs from now on. Referring to Eq. 3, we provide details regarding the weights of each loss used in LwM in the supplementary material.
Figure 5: The performance comparison between our method, LwM, and the baselines. LwM outperforms LwF-MC [14] and "using only classification loss with fine-tuning" on the iILSVRC-small and iCIFAR-100 datasets [14]. LwM even outperforms iCaRL [14] on the iILSVRC-small dataset given that iCaRL has the unfair advantage of accessing the base-class data.
Results
Before discussing the quantitative results and advantages of our proposed penalties, we show some qualitative results to demonstrate the advantage of using L AD . We show that we can retain attention regions of base classes for a longer time when more classes are incrementally added to the classifier by using LwM as compared to LwF-MC [14]. Before the first incremental step t = 1, we have M 0 trained on 10 base classes. Now, following the protocol in Sec. 5.3, we incrementally add 10 classes at each incremental step. At every incremental step t, we train M t with 3 configurations: C, LwF-MC [14], and LwM. We use the M t to generate the attention maps for the data from base classes (using which M 0 was trained), which it has not seen, and show the results in Figure 4. Additionally, we also generate corresponding attention maps using M 0 (i.e. the first teacher model), which can be considered 'ideal' (as target maps) as M 0 was given full access to base class data. For the M t s trained with C, it is seen that attention regions for base classes are quickly forgotten after every incremental step. This can be attributed to catastrophic forgetting [9,10]. M t trained with LwF-MC [14] have slightly better attention preserving ability but as the number of incremental steps increases, the attention regions diverge from the 'ideal' attention regions. Interestingly, the attention maps generated by M t trained with LwM configuration retain the attention regions for base classes for all incremental steps shown in Figure 4, and are most similar to the target attention maps. These examples support that LwM delays forgetting of base class knowledge. We show more results in more incremental steps in the supplementary material.
We now show the quantitative results of the following configurations: C, LwF-MC [14], and LwM. To show the efficacy of LwM across datasets, we evaluate these configurations on multiple datasets. The results on the iILSVRC-small and iCIFAR-100 datasets are presented in Figure 5. For the iILSVRC-small dataset, the performance of LwM is better than that of the baseline LwF-MC [14]. LwM outperforms the baseline by a margin of more than 30% when the number of classes is 40 or more. Especially for 100 classes, LwM achieves an improvement of more than 50% over the baseline LwF-MC [14]. In addition, LwM outperforms iCaRL [14] at every incremental step, even though iCaRL has an unfair advantage of storing the exemplars of base classes while training the student model for the iILSVRC-small dataset.
To be consistent with the LwF-MC experiments in [14], we perform experiments by constructing the iCIFAR-100 datasets using batches of 10, 20, and 50 classes at each incremental step. The results are provided in Figure 5. It can be seen that LwM outperforms LwF-MC for all three sizes of incremental batches in the iCIFAR-100 dataset. Hence, we conclude that LwM consistently outperforms LwF-MC [14] on the iILSVRC-small and iCIFAR-100 datasets. Additionally, we also perform these experiments on the Caltech-101 [5] and CUBS-200-2011 [16] datasets, incrementally adding a batch of 10 classes at every incremental step, and compare the results with fine-tuning. The results for these two datasets are shown in Table 4. Also, the advantage of incrementally adding every loss on top of L C is established in Figure 5. Here, we show that the performance with only C is very poor due to catastrophic forgetting [9,10]. We achieve some improvement when L D is added as an IPP in LwF-MC. The performance further improves with the addition of L AD in the LwM configuration. For both the iILSVRC-small and iCIFAR-100 datasets, the tabular version of the results in Figure 5 is provided in the supplementary material.
Table 4: Results obtained on Caltech-101 [5] and CUBS-200-2011 [16]. Here FT refers to fine-tuning. The first step refers to the training of the first teacher model using 10 classes.
Conclusion and future work
We explored the IL problem for the task of object classification, and proposed a technique, LwM, which combines L D with L AD to utilize attention maps for transferring the knowledge of base classes from the teacher to the student model, without requiring any data of base classes during training. This technique outperforms the baseline in all the scenarios that we investigate. Regarding future applications, LwM can be used in many real-world scenarios. For instance, a face recognition network trained on specific identities can be extended in an incremental setup using our framework to include more identities, increasing the number of identities that can be recognized by the network. Also, while we explore the IL problem for classification in this work, we believe that this can also be extended to segmentation. We believe that incremental segmentation is a challenging problem due to the absence of abundant ground truth maps. The importance of incremental segmentation has already been underscored in [3]. Also, as visual attention is more meaningful for segmentation (as shown in [11]), we intend to apply our framework to incremental segmentation in the near future. Other applications may also include incrementally learning from data belonging to the same class but different domains, to build a student model capable of adapting to various domains.
| 5,055 |
1811.08051
|
2901678097
|
Incremental learning (IL) is an important task aimed at increasing the capability of a trained model, in terms of the number of classes recognizable by the model. The key problem in this task is the requirement of storing data (e.g. images) associated with existing classes, while teaching the classifier to learn new classes. However, this is impractical as it increases the memory requirement at every incremental step, which makes it impossible to implement IL algorithms on edge devices with limited memory. Hence, we propose a novel approach, called 'Learning without Memorizing (LwM)', to preserve the information about existing (base) classes, without storing any of their data, while making the classifier progressively learn the new classes. In LwM, we present an information preserving penalty: Attention Distillation Loss ( @math ), and demonstrate that penalizing the changes in classifiers' attention maps helps to retain information of the base classes, as new classes are added. We show that adding @math to the distillation loss, which is an existing information preserving loss, consistently outperforms the state-of-the-art performance in the iILSVRC-small and iCIFAR-100 datasets in terms of the overall accuracy of base and incrementally learned classes.
|
Several experiments have been proposed to use a small percentage of the data of base classes while training the classifier to learn new classes. iCaRL @cite_4 uses the exemplars of base classes, while incrementally learning new classes. Similarly, @cite_2 also use a fraction of the data of base classes. @cite_2 also show that this is especially useful for alleviating intransigence, which is a problem faced in single-headed evaluation. However, storing data for base classes increases memory requirement at each incremental step, which is not feasible when the memory budget is limited.
|
{
"abstract": [
"A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail.",
"Incremental learning (il) has received a lot of attention recently, however, the literature lacks a precise problem definition, proper evaluation settings, and metrics tailored specifically for the il problem. One of the main objectives of this work is to fill these gaps so as to provide a common ground for better understanding of il. The main challenge for an il algorithm is to update the classifier whilst preserving existing knowledge. We observe that, in addition to forgetting, a known issue while preserving knowledge, il also suffers from a problem we call intransigence, its inability to update knowledge. We introduce two metrics to quantify forgetting and intransigence that allow us to understand, analyse, and gain better insights into the behaviour of il algorithms. Furthermore, we present RWalk, a generalization of ewc++ (our efficient version of ewc [6]) and Path Integral [25] with a theoretically grounded KL-divergence based perspective. We provide a thorough analysis of various il algorithms on MNIST and CIFAR-100 datasets. In these experiments, RWalk obtains superior results in terms of accuracy, and also provides a better trade-off for forgetting and intransigence."
],
"cite_N": [
"@cite_4",
"@cite_2"
],
"mid": [
"2964189064",
"2786446225"
]
}
|
Learning without Memorizing
|
Most state-of-the-art solutions to recognition tasks in computer vision require using models which are specifically trained for these tasks [6,13]. For the tasks involving categories (such as object classification, segmentation), the complexity of the task (i.e. the possible number of target classes) limits the ability of these trained models. For example, a trained model aimed for object recognition can only classify object categories on which it has been trained. However, if the number of target classes increases, the model must be updated in such a way that it performs well on the original classes on which it has been trained, also known as base classes, while it incrementally learns new classes as well.
If we retrain the model only on new, previously unseen classes, it would completely forget the base classes, which is known as catastrophic forgetting [9,10], a phenomenon which is not observed in human learning. Therefore, most existing solutions [4,14,17] explore incremental learning (IL) by allowing the model to retain a fraction of the training data of base classes, while incrementally learning new classes. Yu et al. [17] have proposed retaining trained models encoding base class information, to transfer their knowledge to the model learning new classes. However, this process is not scalable. This is because storing base class data or models encoding base class information is a memory expensive task, and hence is cumbersome when used in a lifelong learning setting. Also, in an industrial setting, when a trained object classification model is delivered to the customer, the training data is kept private for proprietary reasons. Due to this, the customer would not be able to update the trained model to incorporate new target classes in the absence of base class data. Moreover, storing base class data for incrementally learning new classes is not biologically inspired. For example, when a toddler learns to recognize new shapes/objects, it is observed that it does not completely forget the shapes or objects it already knows. It also does not always need to revisit the old information when learning new entities. Inspired by this, we aim to explore incremental learning in object classification by adding a stream of new classes without storing data belonging to classes that the classifier has already seen. While IL solutions which do not require base class data, such as [1,9], have been proposed, these methods mostly aim at incrementally learning new tasks, which means that at test time the model cannot confuse the incrementally learned tasks with tasks it has already learned, making the problem setup much easier.
We aim to explore the problem of incrementally learning object classes, without storing any data or model associated with base classes (Figure 1), while allowing the model to confuse new classes with old ones. In our problem setup, an ideal incremental learner should have the following properties:
i It should help a trained model to learn new classes obtained from a stream of data, while preserving the model's knowledge of base class information.
ii At test time, it should enable the model to consider all the classes it has learned when the model makes a prediction.
iii The size of the memory footprint should not grow at all, irrespective of the number of classes seen thus far.
An existing work targeting the same problem is LwF-MC, which is one of the baselines in [14]. For the ease of explanation, we use the following terminology (introduced in [ Initialized using M t−1 , M t is then trained to learn new classes using a classification loss, L C . However, an IPP is also applied to M t so as to minimize the divergence between the representations of M t−1 and M t . While L C helps M t learn new classes, IPP prevents M t from diverging too much from M t−1 . Since M t is already initialized as M t−1 , the initial value of IPP is expected to be close to zero. However, as M t keeps learning new classes with L C , it starts diverging from M t−1 , which leads the IPP to increase. The purpose of the IPP is to prevent the divergence of M t from M t−1 . Once M t is trained for a fixed number of epochs, it is used as a teacher in the next incremental step, using which a new student model is initialized.
In LwF-MC [14], the IPP is the knowledge distillation loss. The knowledge distillation loss L D , in this context, was first introduced in [12]. It captures the divergence between the prediction vectors of M t−1 and M t . In an incremental setup, when an image belonging to a new class (I n ) is fed to M t−1 , the base classes which have some resemblance in I n are captured. L D enforces M t to capture the same base classes. Thus, L D essentially makes M t learn 'what' are the possible base classes in I n , as shown in Figure 1. The pixels which have a high influence on the models' prediction constitute the attention region of the network. However, L D does not explicitly take into account the degree of each pixel influencing the models predictions. For example, in Figure 2, in the first row, it is seen that at step n, even though the network focuses on an incorrect region while predicting 'dial telephone', the numerical value of L D (0.09) is same as that when the network focuses on the correct region in step n, in the bottom row.
We hypothesize that attention regions encode the models' representation more precisely. Hence, constraining the attention regions of M t and M t−1 using an Attention Distillation Loss (L AD , explained in Sec. 4.1), to minimize the divergence of the representations of M t from that of M t−1 , is more meaningful. This is because, instead of finding which base classes are resembled in the new data, attention maps explain 'why' hints of a base class are present (as shown in Figure 1). Using these hints, L AD , in an attempt to make the attention maps of M t−1 and M t equivalent, helps to encode some visual knowledge of the base classes in M t . The utility of L AD is seen in the example in Figure 2, where even though the model correctly predicts the image as 'dial telephone', the value of L AD in step n increases if the attention regions diverge too much from the region in Step 0.
We propose an approach where an Attention Distillation Loss (L AD ) is applied to M t to prevent its divergence from M t−1 , at incremental step t. Precisely, we propose to constrain the L 1 distance between the attention maps generated by M t−1 and M t in order to preserve the knowledge of base classes. The reasoning behind this strategy is described in Sec 4.1. This is applied in addition to the distillation loss L D and a classification loss for the student model to incrementally learn new classes.
The main contribution of this work is to provide an attention-based approach, termed 'Learning without Memorizing (LwM)', that helps a model to incrementally learn new classes by restricting the divergence between the student and teacher models. LwM does not require any data of the base classes when learning new classes. Different from the contemporary approaches which explore the same problem, LwM takes into account the gradient flow information of the teacher and student models by generating attention maps using these models. It then constrains this information to be equivalent for the teacher and student models, thus preventing the student model from diverging too much from the teacher model. Finally, we show that LwM consistently outperforms the state-of-the-art performance on the iILSVRC-small [14] and iCIFAR-100 [14] datasets.
Related work
In object classification, incremental learning (IL) is the process of increasing the breadth of an object classifier by training it to recognize new classes while retaining its knowledge of the classes on which it was originally trained. In the past couple of years, there have been considerable research efforts in this field [9,12]. Moreover, there exist several subsets of this research problem which impose different constraints in terms of data storage and evaluation. We can divide existing methods based on their constraints:
Task incremental (TI) methods: In this problem, a model trained to perform object classification on a specific dataset is incrementally trained to classify objects in a new dataset. A key characteristic of these experiments is that during evaluation, the final model is tested on different datasets (base and incrementally learned) separately. This is known as multi-headed evaluation [4]. In such an evaluation, the classes belonging to two different tasks cannot be confused with one another. One of the earlier works in this category is LwF [12], where a distillation loss is used to preserve information of the base classes. Also, the data from base classes is used during training, while the classifier learns new classes. A prominent work in this area is EWC [9], where at each incremental task the weights of the student model are set to those of their corresponding teacher model, according to the importance of the network weights. Aljundi et al. present MAS [1], a technique to train the agents to learn what information not to forget. All experiments in this category use multi-headed evaluation, which is different from the problem setting of this paper, where we use single-headed evaluation, defined explicitly in [4]. Single-headed evaluation is another evaluation method wherein the model is evaluated on both base and incrementally learned classes jointly. Clearly, multi-headed evaluation is relatively easier than single-headed evaluation, as explained in [4].
Class incremental (CI) methods: In this problem, a model trained to perform object classification on specific classes of a dataset is incrementally trained to classify new unseen classes in the same dataset. Most of the existing work exploring this problem uses single-headed evaluation. This makes the CI problem more difficult than the TI problem because the model can confuse the new class with a base class in the CI problem. iCaRL [14] belongs to this category. In iCaRL [14], Rebuffi et al. propose a technique to jointly learn feature representation and classifiers. They also introduce a strategy to select exemplars, which is used in combination with the distillation loss to prevent catastrophic forgetting. In addition, a new baseline, LwF-MC, is introduced in [14], which is a class incremental version of LwF [12]. LwF-MC uses the distillation loss to preserve the knowledge of base classes along with a classification loss, without storing the data of base classes, and is evaluated using single-headed evaluation. Another work aiming to solve the CI problem is [4], which evaluates using both single-headed and multi-headed evaluations and highlights their difference. Chaudhry et al. [4] introduce metrics to quantify forgetting and intransigence, and also propose an algorithm, Riemannian walk, to incrementally learn classes.
A key specification of most incremental learning frameworks is whether or not they allow storing the data of base classes (i.e. classes on which the classifier is originally trained). We can also divide existing methods based on this specification:
Methods which use base class data: Several experiments have been proposed to use a small percentage of the data of base classes while training the classifier to learn new classes. iCaRL [14] uses the exemplars of base classes, while incrementally learning new classes. Similarly, Chaudhry et al. [4] also use a fraction of the data of base classes. Chaudhry et al. [4] also show that this is especially useful for alleviating intransigence, which is a problem faced in single-headed evaluation. However, storing data for base classes increases the memory requirement at each incremental step, which is not feasible when the memory budget is limited.
Table 1 (categorization of prior work by constraints):
  Use base class data: CI methods: iCaRL [14], [4], [17]; TI methods: LwF [12].
  No base class data: CI methods: LwF-MC [14], LwM; TI methods: IMM [10], EWC [9], MAS [1], [2], [8].
Methods which do not use base class data: Several TI methods described earlier (such as [1,9]) do not use the information about base classes while training the classifier to learn new classes incrementally. To the best of our knowledge, LwF-MC [14] is the only CI method which does not use base class data but uses single-headed evaluation. Table 1 presents a categorization summary of previous works in this field. We propose a technique to solve the CI problem without using any base class data. We can infer from the discussion above that LwF-MC [14] is the only existing work which uses single-headed evaluation, and hence we use it as our baseline. We intend to use attention maps in an incremental setup, instead of only knowledge distillation, to transfer more comprehensive knowledge of base classes from the teacher to the student model. Although enforcing equivalence of attention maps of teacher and student models has been explored previously in [18] for transferring knowledge from teacher to student models, the same approach cannot be applied to an incremental learning setting. In our incremental problem setup, due to the absence of base class data, we intend to utilize the attention region in the new data which resembles one of the base classes. But these regions are not prominent since the data does not belong to any of the base classes, thus making class-specific attention maps a necessity. Class-specificity is required to mine out base class regions in a more targeted fashion, which is why generic attention maps such as those used in [18] are not applicable, as they cannot provide a class-specific explanation about relevant patterns corresponding to the target class. Moreover, our problem setup is different from knowledge distillation because at incremental step t, we freeze M t−1 while training M t , and do not allow M t to access data from the base classes, and therefore M t−1 and M t are trained using a completely different set of classes. This makes the problem more challenging, as the output of M t−1 on feeding data from unseen classes is the only source of base class data. This is further explained in Sec. 4.1.
We intend to explore the CI problem by proposing to constrain the attention maps of the teacher and student models to be equivalent (in addition to their prediction vectors), to improve the information preserving capability of LwF-MC [14]. In LwF-MC and our proposed method LwM, storing teacher models trained in previous incremental steps is not allowed since it would not be feasible to accumulate models from all the previous steps when the memory budget is limited.
Distillation loss (L D )
L D was first introduced in [12] for incremental learning. It is defined as follows:
L_D(y, \hat{y}) = -\sum_{i=1}^{N} y'_i \, \log(\hat{y}'_i) \qquad (1)
where y and ŷ are the prediction vectors (composed of probability scores) of M t−1 and M t for the base classes at incremental step t, each of length N (assuming that M t−1 is trained on N base classes). Also, y′_i = σ(y_i) and ŷ′_i = σ(ŷ_i), where σ(·) is the sigmoid activation. This definition of L D is consistent with that defined in LwF-MC [14]. Essentially, L D enforces the base class predictions of M t and M t−1 to be equivalent when an image belonging to one of the incrementally added classes is fed to each of them. Moreover, we believe that there exist common visual semantics or patterns in both base and new class data. Therefore, it makes sense to encourage the feature responses of M t and M t−1 to be equivalent when new class data is given as input. This helps to retain the old class knowledge (in terms of the common visual semantics).
Generating attention maps
We describe the technique employed to generate attention maps. In our experiments we use the Grad-CAM [15] for this task. For using the Grad-CAM, the image is first forwarded to the model, obtaining a raw score for every class. Following this, the gradient of score y c for a desired class c is computed with respect to each convolutional feature map A k . For each A k , global average pooling is performed to obtain the neuron importance α k of A k . Finally, all the A k weighted by α k are passed through a ReLU activation function to obtain a final attention map for class c.
More precisely, let α_k be the global average pooled value of ∂y^c/∂A^k, computed for each of the K convolutional feature maps of the layer from which the attention map is to be generated, and let α collect these α_k. The attention map Q can be defined as
Q = \mathrm{ReLU}(\alpha^{T} A) \qquad (2)
Proposed approach
We introduce an information preserving penalty (L AD ) based on attention maps. We combine L AD with the distillation loss L D and a classification loss L C to construct LwM, an approach which encourages attention maps of teacher and student to be similar. Our LwM framework is shown in Figure 3. The loss function of our LwM approach is defined below:
L_{LwM} = L_C + \beta L_D + \gamma L_{AD} \qquad (3)
Here β, γ are the weights used for L D , L AD respectively. A representation of our LwM approach is presented in Figure 3. In comparison to LwM, LwF-MC [14] only uses a classification loss combined with distillation loss and is our baseline.
Attention distillation loss (L AD )
At incremental step t, we define student model M t , initialized using a teacher model M t−1 . We assume M t is proficient in classifying N base classes. M t is required to recognize N + k classes, where k is the number of previously unseen classes added incrementally. Hence, the sizes of the prediction vectors of M t−1 and M t are N and N + k respectively. For any given input image i, we denote the vectorized attention maps generated by M t−1 and M t , for class c as Q i,c t−1 and Q i,c t , respectively. We generate these maps using Grad-CAM [15], as explained above.
Q^{i,c}_{t-1} = \mathrm{vector}(\text{Grad-CAM}(i, M_{t-1}, c)) \qquad (4)
Q^{i,c}_{t} = \mathrm{vector}(\text{Grad-CAM}(i, M_{t}, c)) \qquad (5)
We assume that the length of each vectorized attention map is l. In [18], it has been mentioned that normalizing the attention map by dividing it by the L 2 norm of the map is an important step for student training. Hence we perform this step while computing L AD . During training of M t , an image belonging to one of the new classes to be learned (denoted as I n ) is given as input to both M t−1 and M t . Let b be the top base class predicted by M t (i.e. the base class having the highest score) for I n . For this input, L AD is defined as the sum of element-wise L 1 differences of the normalized, vectorized attention maps:
L_{AD} = \sum_{j=1}^{l} \left| \frac{Q^{I_n,b}_{t-1,j}}{\left\| Q^{I_n,b}_{t-1} \right\|_2} - \frac{Q^{I_n,b}_{t,j}}{\left\| Q^{I_n,b}_{t} \right\|_2} \right| \qquad (6)
From the explanation above, we know that for training M t , M t−1 is fed with the data from the classes that it has not seen before (I n ). Essentially, the attention regions generated by M t−1 for I n , represent the regions in the image which resemble the base classes. If M t and M t−1 have equivalent knowledge of base classes, they should have a similar response to these regions, and therefore Q In,b t should be similar to Q In,b t−1 . This implies that the attention outputs of M t−1 are the only traces of base data, which guides M t 's knowledge of base classes. We use the L 1 distance between Q In,b t−1 and Q In,b t as a penalty to enforce their similarity. We experimented with both L 1 and L 2 distance in this context. However, as we obtained better results with L 1 distance on held-out data, we chose L 1 over L 2 distance.
According to Eq. 2, attention maps encode the gradient of the score of class b, y^b, with respect to the convolutional feature maps A. This information is not explicitly captured by the distribution of class scores (used by L D ). Attention maps are a 2D manifestation of the prediction vectors (y, ŷ), which means that they capture more spatial information than these vectors, and hence it is more advantageous to use attention maps than to use only prediction vectors.
Table 2: The statistics of the datasets used in our experiments, in accordance with [14]. Additionally, we also perform experiments on the CUB-200-2011 [16] dataset. In the table, "acc." represents accuracy.
Experiments
We first explain our baseline, which is LwF-MC [14]. Following that, we provide information about the datasets used in our experiments. After that, we describe the iterative protocol to perform classification at every incremental step. We also provide implementation details including architectural information.
Baseline
As our baseline is LwF-MC [14], we firstly implement its objective function, which is a sum of a classification loss and distillation loss (L C + L D ). In all our experiments, we use a cross entropy loss for L C to be consistent with [14]. However, it should be highlighted that the official implementation of L D in LwF-MC by [14] is different from the definition of L D in [12]. As LwF-MC (but not LwF) is our baseline, we use iCaRL's implementation of LwF-MC in our work. LwF cannot handle CI problems where no base class training data is available (according to Table 1), which is the reason why we choose LwF-MC as the baseline and iCaRL's implementation.
Datasets
We use two datasets used in LwF-MC [14] for our experiments. Additionally, we also perform experiments on the Caltech-101 [5] as well as CUBS-200-2011 [16] datasets. The details for the datasets are provided in Table 2. These datasets are constructed by randomly selecting a batch of classes at every incremental step. In both datasets, the classes belonging to different batches are disjoint. For a fair comparison, the data preparation for all the datasets and the evaluation strategy are the same as those for LwF-MC [14].
Table 3: Experiment configurations used in this work, identified by their respective experiment IDs.
Experimental protocol
We now describe the protocol using which we iteratively train M t , so that it preserves the knowledge of the base classes while incrementally learning new classes.
Initialization: Before the first incremental step (t = 1), we train a teacher model M 0 on 10 base classes, using a classification loss for 10 epochs. The classification loss is a cross entropy loss L C . Following this, for t = 1 to t = k we initialize student M t using M t−1 as its teacher, and feed data from a new batch of images that is to be incrementally learned, to both of these models. Here k is the number of incremental steps.
Applying IPP and classification loss to student model: Given the data from new classes as inputs, we generate the output of M t and M t−1 with respect to the base class having the highest score. These outputs can either be class-specific attention maps (required for computing L AD ) or class-specific scores (required for computing L D ). Using these outputs, we compute an IPP, which can either be L AD or L D . In addition, we apply a classification loss to M t based on its outputs with respect to the new classes which are to be learned incrementally. We jointly apply the classification loss and IPP to M t and train it for 10 epochs. Once M t is trained, we use it as a teacher model in the next incremental step, and follow the aforementioned steps iteratively, until all the k incremental steps are completed.
Implementation details
We use the ResNet-18 [7] architecture for training student and teacher models on the iILSVRC-small dataset, and the ResNet-34 [7] for training models on the iCIFAR-100 dataset. This is consistent with the networks and datasets used in [14]. We used a learning rate of 0.01. The feature maps of the final convolutional layer are used to generate attention maps using Grad-CAM [15]. The combinations of classification loss and IPP, along with their experiment IDs, are provided in Table 3. The experiment configurations will be referred to by their respective experiment IDs from now on. Referring to Eq. 3, we provide details regarding the weights of each loss used in LwM in the supplementary material.
Figure 5: The performance comparison between our method, LwM, and the baselines. LwM outperforms LwF-MC [14] and "using only classification loss with fine-tuning" on the iILSVRC-small and iCIFAR-100 datasets [14]. LwM even outperforms iCaRL [14] on the iILSVRC-small dataset given that iCaRL has the unfair advantage of accessing the base-class data.
Results
Before discussing the quantitative results and advantages of our proposed penalties, we show some qualitative results to demonstrate the advantage of using L AD . We show that we can retain attention regions of base classes for a longer time when more classes are incrementally added to the classifier by using LwM as compared to LwF-MC [14]. Before the first incremental step t = 1, we have M 0 trained on 10 base classes. Now, following the protocol in Sec. 5.3, we incrementally add 10 classes at each incremental step. At every incremental step t, we train M t with 3 configurations: C, LwF-MC [14], and LwM. We use the M t to generate the attention maps for the data from base classes (using which M 0 was trained), which it has not seen, and show the results in Figure 4. Additionally, we also generate corresponding attention maps using M 0 (i.e. the first teacher model), which can be considered 'ideal' (as target maps) as M 0 was given full access to base class data. For the M t s trained with C, it is seen that attention regions for base classes are quickly forgotten after every incremental step. This can be attributed to catastrophic forgetting [9,10]. M t trained with LwF-MC [14] have slightly better attention preserving ability but as the number of incremental steps increases, the attention regions diverge from the 'ideal' attention regions. Interestingly, the attention maps generated by M t trained with LwM configuration retain the attention regions for base classes for all incremental steps shown in Figure 4, and are most similar to the target attention maps. These examples support that LwM delays forgetting of base class knowledge. We show more results in more incremental steps in the supplementary material.
We now show the quantitative results of the following configurations: C, LwF-MC [14], and LwM. To show the efficacy of LwM across datasets, we evaluate these configurations on multiple datasets. The results on the iILSVRC-small and iCIFAR-100 datasets are presented in Figure 5. For the iILSVRC-small dataset, the performance of LwM is better than that of the baseline LwF-MC [14]. LwM outperforms the baseline by a margin of more than 30% when the number of classes is 40 or more. Especially for 100 classes, LwM achieves an improvement of more than 50% over the baseline LwF-MC [14]. In addition, LwM outperforms iCaRL [14] at every incremental step, even though iCaRL has an unfair advantage of storing the exemplars of base classes while training the student model for the iILSVRC-small dataset.
To be consistent with the LwF-MC experiments in [14], we perform experiments by constructing the iCIFAR-100 datasets using batches of 10, 20, and 50 classes at each incremental step. The results are provided in Figure 5. It can be seen that LwM outperforms LwF-MC for all three sizes of incremental batches in the iCIFAR-100 dataset. Hence, we conclude that LwM consistently outperforms LwF-MC [14] on the iILSVRC-small and iCIFAR-100 datasets. Additionally, we also perform these experiments on the Caltech-101 [5] and CUBS-200-2011 [16] datasets, incrementally adding a batch of 10 classes at every incremental step, and compare the results with fine-tuning. The results for these two datasets are shown in Table 4. Also, the advantage of incrementally adding every loss on top of L C is established in Figure 5. Here, we show that the performance with only C is very poor due to catastrophic forgetting [9,10]. We achieve some improvement when L D is added as an IPP in LwF-MC. The performance further improves with the addition of L AD in the LwM configuration. For both the iILSVRC-small and iCIFAR-100 datasets, the tabular version of the results in Figure 5 is provided in the supplementary material.
Table 4: Results obtained on Caltech-101 [5] and CUBS-200-2011 [16]. Here FT refers to fine-tuning. The first step refers to the training of the first teacher model using 10 classes.
Conclusion and future work
We explored the IL problem for the task of object classification, and proposed a technique, LwM, which combines L D with L AD to utilize attention maps for transferring the knowledge of base classes from the teacher to the student model, without requiring any data of base classes during training. This technique outperforms the baseline in all the scenarios that we investigate. Regarding future applications, LwM can be used in many real-world scenarios. For instance, a face recognition network trained on specific identities can be extended in an incremental setup using our framework to include more identities, increasing the number of identities that can be recognized by the network. Also, while we explore the IL problem for classification in this work, we believe that this can also be extended to segmentation. We believe that incremental segmentation is a challenging problem due to the absence of abundant ground truth maps. The importance of incremental segmentation has already been underscored in [3]. Also, as visual attention is more meaningful for segmentation (as shown in [11]), we intend to apply our framework to incremental segmentation in the near future. Other applications may also include incrementally learning from data belonging to the same class but different domains, to build a student model capable of adapting to various domains.
| 5,055 |
1811.08051
|
2901678097
|
Incremental learning (IL) is an important task aimed at increasing the capability of a trained model, in terms of the number of classes recognizable by the model. The key problem in this task is the requirement of storing data (e.g. images) associated with existing classes, while teaching the classifier to learn new classes. However, this is impractical as it increases the memory requirement at every incremental step, which makes it impossible to implement IL algorithms on edge devices with limited memory. Hence, we propose a novel approach, called 'Learning without Memorizing (LwM)', to preserve the information about existing (base) classes, without storing any of their data, while making the classifier progressively learn the new classes. In LwM, we present an information preserving penalty: Attention Distillation Loss ( @math ), and demonstrate that penalizing the changes in classifiers' attention maps helps to retain information of the base classes, as new classes are added. We show that adding @math to the distillation loss, which is an existing information preserving loss, consistently outperforms the state-of-the-art performance in the iILSVRC-small and iCIFAR-100 datasets in terms of the overall accuracy of base and incrementally learned classes.
|
Several TI methods described earlier (such as @cite_15 @cite_13 ) do not use the information about base classes while training the classifier to learn new classes incrementally. To the best of our knowledge, LwF-MC @cite_4 is the only CI method which does not use base class data but uses single-headed evaluation.
|
{
"abstract": [
"Humans can learn in a continuous manner. Old rarely utilized knowledge can be overwritten by new incoming information while important, frequently used knowledge is prevented from being erased. In artificial learning systems, lifelong learning so far has focused mainly on accumulating knowledge over tasks and overcoming catastrophic forgetting. In this paper, we argue that, given the limited model capacity and the unlimited new information to be learned, knowledge has to be preserved or erased selectively. Inspired by neuroplasticity, we propose a novel approach for lifelong learning, coined Memory Aware Synapses (MAS). It computes the importance of the parameters of a neural network in an unsupervised and online manner. Given a new sample which is fed to the network, MAS accumulates an importance measure for each parameter of the network, based on how sensitive the predicted output function is to a change in this parameter. When learning a new task, changes to important parameters can then be penalized, effectively preventing important knowledge related to previous tasks from being overwritten. Further, we show an interesting connection between a local version of our method and Hebb’s rule, which is a model for the learning process in the brain. We test our method on a sequence of object recognition tasks and on the challenging problem of learning an embedding for predicting triplets. We show state-of-the-art performance and, for the first time, the ability to adapt the importance of the parameters based on unlabeled data towards what the network needs (not) to forget, which may vary depending on test conditions.",
"",
"A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail."
],
"cite_N": [
"@cite_15",
"@cite_13",
"@cite_4"
],
"mid": [
"2963588172",
"",
"2964189064"
]
}
|
Learning without Memorizing
|
Most state-of-the-art solutions to recognition tasks in computer vision require using models which are specifically trained for these tasks [6,13]. For the tasks involving categories (such as object classification, segmentation), the complexity of the task (i.e. the possible number of target classes) limits the ability of these trained models. For example, a trained model aimed at object recognition can only classify object categories on which it has been trained. However, if the number of target classes increases, the model must be updated in such a way that it performs well on the original classes on which it has been trained, also known as base classes, while it incrementally learns the new classes.
If we retrain the model only on new, previously unseen classes, it would completely forget the base classes, which is known as catastrophic forgetting [9,10], a phenomenon which is not observed in human learning. Therefore, most existing solutions [4,14,17] explore incremental learning (IL) by allowing the model to retain a fraction of the training data of base classes, while incrementally learning new classes. Yu et al. [17] have proposed retaining trained models encoding base class information, to transfer their knowledge to the model learning new classes. However, this process is not scalable. This is because storing base class data or models encoding base class information is a memory expensive task, and hence is cumbersome when used in a lifelong learning setting. Also, in an industrial setting, when a trained object classification model is delivered to the customer, the training data is kept private for proprietary reasons. Due to this, the customer would not be able to update the trained model to incorporate new target classes in the absence of base class data. Moreover, storing base class data for incrementally learning new classes is not biologically inspired. For example, when a toddler learns to recognize new shapes/objects, it is observed that it does not completely forget the shapes or objects it already knows. It also does not always need to revisit the old information when learning new entities. Inspired by this, we aim to explore incremental learning in object classification by adding a stream of new classes without storing data belonging to classes that the classifier has already seen. While IL solutions which do not require base class data, such as [1,9], have been proposed, these methods mostly aim at incrementally learning new tasks, which means that at test time the model cannot confuse the incrementally learned tasks with tasks it has already learned, making the problem setup much easier.
We aim to explore the problem of incrementally learning object classes, without storing any data or model associated with base classes (Figure 1), while allowing the model to confuse new classes with old ones. In our problem setup, an ideal incremental learner should have the following properties:
i It should help a trained model to learn new classes obtained from a stream of data, while preserving the model's knowledge of base class information.
ii At test time, it should enable the model to consider all the classes it has learned when the model makes a prediction.
iii The size of the memory footprint should not grow at all, irrespective of the number of classes seen thus far.
An existing work targeting the same problem is LwF-MC, which is one of the baselines in [14]. For ease of explanation, we use the following terminology: at incremental step t, the model trained so far acts as the teacher M t−1 , the model being trained on the new classes is the student M t , and the penalty used to preserve base class knowledge is the information preserving penalty (IPP). Initialized using M t−1 , M t is then trained to learn new classes using a classification loss, L C . However, an IPP is also applied to M t so as to minimize the divergence between the representations of M t−1 and M t . While L C helps M t learn new classes, the IPP prevents M t from diverging too much from M t−1 . Since M t is already initialized as M t−1 , the initial value of the IPP is expected to be close to zero. However, as M t keeps learning new classes with L C , it starts diverging from M t−1 , which leads the IPP to increase. The purpose of the IPP is to prevent the divergence of M t from M t−1 . Once M t is trained for a fixed number of epochs, it is used as the teacher in the next incremental step, using which a new student model is initialized.
In LwF-MC [14], the IPP is the knowledge distillation loss. The knowledge distillation loss L D , in this context, was first introduced in [12]. It captures the divergence between the prediction vectors of M t−1 and M t . In an incremental setup, when an image belonging to a new class (I n ) is fed to M t−1 , the base classes to which I n bears some resemblance are captured. L D enforces M t to capture the same base classes. Thus, L D essentially makes M t learn 'what' the possible base classes in I n are, as shown in Figure 1. The pixels which have a high influence on the model's prediction constitute the attention region of the network. However, L D does not explicitly take into account the degree to which each pixel influences the model's predictions. For example, in Figure 2, in the first row, it is seen that at step n, even though the network focuses on an incorrect region while predicting 'dial telephone', the numerical value of L D (0.09) is the same as that when the network focuses on the correct region at step n, in the bottom row.
We hypothesize that attention regions encode the model's representation more precisely. Hence, constraining the attention regions of M t and M t−1 using an Attention Distillation Loss (L AD , explained in Sec. 4.1), to minimize the divergence of the representations of M t from that of M t−1 , is more meaningful. This is because, instead of finding which base classes are resembled in the new data, attention maps explain 'why' hints of a base class are present (as shown in Figure 1). Using these hints, L AD , in an attempt to make the attention maps of M t−1 and M t equivalent, helps to encode some visual knowledge of the base classes in M t . The utility of L AD is seen in the example in Figure 2, where even though the model correctly predicts the image as 'dial telephone', the value of L AD at step n increases if the attention regions diverge too much from the region in Step 0.
We propose an approach where an Attention Distillation Loss (L AD ) is applied to M t to prevent its divergence from M t−1 , at incremental step t. Precisely, we propose to constrain the L 1 distance between the attention maps generated by M t−1 and M t in order to preserve the knowledge of base classes. The reasoning behind this strategy is described in Sec 4.1. This is applied in addition to the distillation loss L D and a classification loss for the student model to incrementally learn new classes.
The main contribution of this work is to provide an attention-based approach, termed 'Learning without Memorizing (LwM)', that helps a model to incrementally learn new classes by restricting the divergence between the student and teacher models. LwM does not require any data of the base classes when learning new classes. Different from the contemporary approaches which explore the same problem, LwM takes into account the gradient flow information of the teacher and student models by generating attention maps using these models. It then constrains this information to be equivalent for the teacher and student models, thus preventing the student model from diverging too much from the teacher model. Finally, we show that LwM consistently outperforms the state-of-the-art performance on the iILSVRC-small [14] and iCIFAR-100 [14] datasets.
Related work
In object classification, incremental learning (IL) is the process of increasing the breadth of an object classifier by training it to recognize new classes, while retaining its knowledge of the classes on which it has been trained originally. In the past couple of years, there have been considerable research efforts in this field [9,12]. Moreover, there exist several subsets of this research problem which impose different constraints in terms of data storage and evaluation. We can divide existing methods based on their constraints:
Task incremental (TI) methods: In this problem, a model trained to perform object classification on a specific dataset is incrementally trained to classify objects in a new dataset. A key characteristic of these experiments is that during evaluation, the final model is tested on different datasets (base and incrementally learned) separately. This is known as multi-headed evaluation [4]. In such an evaluation, the classes belonging to two different tasks have no chance of being confused with one another. One of the earlier works in this category is LwF [12], where a distillation loss is used to preserve information of the base classes. Also, the data from base classes is used during training, while the classifier learns new classes. A prominent work in this area is EWC [9], where at each incremental task the weights of the student model are constrained to stay close to those of the corresponding teacher model, according to the importance of the network weights. Aljundi et al. present MAS [1], a technique to train agents to learn what information not to forget. All experiments in this category use multi-headed evaluation, which is different from the problem setting of this paper where we use single-headed evaluation, defined explicitly in [4]. Single-headed evaluation is another evaluation method wherein the model is evaluated on both base and incrementally learned classes jointly. Clearly, multi-headed evaluation is relatively easier than single-headed evaluation, as explained in [4].
Class incremental (CI) methods: In this problem, a model trained to perform object classification on specific classes of a dataset is incrementally trained to classify new unseen classes in the same dataset. Most of the existing work exploring this problem uses single-headed evaluation. This makes the CI problem more difficult than the TI problem because the model can confuse the new class with a base class in the CI problem. iCaRL [14] belongs to this category. In iCaRL [14], Rebuffi et al. propose a technique to jointly learn feature representation and classifiers. They also introduce a strategy to select exemplars which is used in combination with the distillation loss to prevent catastrophic forgetting. In addition, a new baseline, LwF-MC, is introduced in [14], which is a class incremental version of LwF [12]. LwF-MC uses the distillation loss to preserve the knowledge of base classes along with a classification loss, without storing the data of base classes, and is evaluated using single-headed evaluation. Another work aiming to solve the CI problem is [4], which evaluates using both single-headed and multi-headed evaluations and highlights their difference. Chaudhry et al. [4] introduce metrics to quantify forgetting and intransigence, and also propose an algorithm, Riemannian walk, to incrementally learn classes.
A key specification of most incremental learning frameworks is whether or not they allow storing the data of base classes (i.e. classes on which the classifier is originally trained). We can also divide existing methods based on this specification:
Methods which use base class data: Several experiments have been proposed to use a small percentage of the data of base classes while training the classifier to learn new classes. iCaRL [14] uses the exemplars of base classes, while incrementally learning new classes. Similarly, Chaudhry et al. [4] also use a fraction of the data of base classes. Chaudhry et al. [4] also show that this is especially useful for alleviating intransigence, which is a problem faced in single-headed evaluation. However, storing data for base classes increases the memory requirement at each incremental step, which is not feasible when the memory budget is limited.
Table 1: Categorization of prior work by constraints. CI methods using base class data: iCaRL [14], [4], [17]; CI methods with no base class data: LwF-MC [14], LwM. TI methods using base class data: LwF [12]; TI methods with no base class data: IMM [10], EWC [9], MAS [1], [2], [8].
Methods which do not use base class data: Several TI methods described earlier (such as [1,9]) do not use the information about base classes while training the classifier to learn new classes incrementally. To the best of our knowledge, LwF-MC [14] is the only CI method which does not use base class data but uses single-headed evaluation. Table 1 presents a categorization summary of previous works in this field. We propose a technique to solve the CI problem, without using any base class data. We can infer from the discussion above that LwF-MC [14] is the only existing work which addresses this setting under single-headed evaluation, and hence we use it as our baseline. We intend to use attention maps in an incremental setup, instead of only knowledge distillation, to transfer more comprehensive knowledge of base classes from the teacher to the student model. Although enforcing equivalence of attention maps of teacher and student models has been explored previously in [18] for transferring knowledge from teacher to student models, the same approach cannot be applied to an incremental learning setting. In our incremental problem setup, due to the absence of base class data, we intend to utilize the attention region in the new data which resembles one of the base classes. But these regions are not prominent since the data does not belong to any of the base classes, thus making class-specific attention maps a necessity. Class-specificity is required to mine out base class regions in a more targeted fashion, which is why generic attention maps such as those used in [18] are not applicable, as they cannot provide a class-specific explanation of the relevant patterns corresponding to the target class. Moreover, our problem setup is different from knowledge distillation because at incremental step t, we freeze M t−1 while training M t , and do not allow M t to access data from the base classes, and therefore M t−1 and M t are trained using completely different sets of classes. This makes the problem more challenging, as the output of M t−1 on feeding data from unseen classes is the only source of base class information. This is further explained in Sec. 4.1.
We intend to explore the CI problem by proposing to constrain the attention maps of the teacher and student models to be equivalent (in addition to their prediction vectors), to improve the information preserving capability of LwF-MC [14]. In LwF-MC and our proposed method LwM, storing teacher models trained in previous incremental steps is not allowed since it would not be feasible to accumulate models from all the previous steps when the memory budget is limited.
Distillation loss (L D )
L D was first introduced in [12] for incremental learning. It is defined as follows:
L_D(y, \hat{y}) = -\sum_{i=1}^{N} y'_i \log(\hat{y}'_i) \qquad (1)
where y and ŷ are the prediction vectors (composed of class scores) of M t−1 and M t for the base classes at incremental step t, each of length N (assuming that M t−1 is trained on N base classes), and y'_i = σ(y_i) and ŷ'_i = σ(ŷ_i), where σ(·) is the sigmoid activation. This definition of L D is consistent with that defined in LwF-MC [14]. Essentially, L D enforces the base class predictions of M t and M t−1 to be equivalent when an image belonging to one of the incrementally added classes is fed to each of them. Moreover, we believe that there exist common visual semantics or patterns in both base and new class data. Therefore, it makes sense to encourage the feature responses of M t and M t−1 to be equivalent when new class data is given as input. This helps to retain the old class knowledge (in terms of the common visual semantics).
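For concreteness, the following is a minimal PyTorch sketch of the distillation term in Eq. 1; the function name, the batch averaging, and the small constant added for numerical stability are our own illustrative choices rather than details specified in the paper.

```python
import torch

def distillation_loss(teacher_logits: torch.Tensor,
                      student_logits: torch.Tensor) -> torch.Tensor:
    """Eq. 1: distillation over the N base classes.

    teacher_logits: raw scores of M_{t-1} for the N base classes, shape (B, N)
    student_logits: raw scores of M_t restricted to the same N base classes, shape (B, N)
    """
    y = torch.sigmoid(teacher_logits)       # y'_i = sigma(y_i), used as soft targets
    y_hat = torch.sigmoid(student_logits)   # \hat{y}'_i = sigma(\hat{y}_i)
    # -sum_i y'_i * log(\hat{y}'_i), averaged over the batch
    return -(y * torch.log(y_hat + 1e-8)).sum(dim=1).mean()
```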
Generating attention maps
We describe the technique employed to generate attention maps. In our experiments we use Grad-CAM [15] for this task. To use Grad-CAM, the image is first forwarded to the model, obtaining a raw score for every class. Following this, the gradient of the score y^c for a desired class c is computed with respect to each convolutional feature map A^k . For each A^k , global average pooling of this gradient is performed to obtain the neuron importance α_k of A^k . Finally, the sum of the A^k weighted by α_k is passed through a ReLU activation function to obtain the final attention map for class c.
More precisely, let α_k = GAP(∂y^c / ∂A^k), where GAP(·) denotes the global average pooling described above, let K be the number of convolutional feature maps in the layer from which the attention map is generated, and let α = [α_1 , ... , α_K] and A = [A^1 , ... , A^K]. The attention map Q can be defined as
Q = \mathrm{ReLU}(\alpha^{T} A) \qquad (2)
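A minimal autograd-based sketch of this computation is given below; the hook-based capture of the feature maps, the single-image interface, and the function name grad_cam are hypothetical implementation choices, not the authors' code.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class, conv_layer):
    """Class-specific attention map Q = ReLU(alpha^T A), as in Eq. 2.

    model:        a CNN classifier returning raw class scores
    image:        input tensor of shape (1, 3, H, W)
    target_class: index c of the class whose attention map is wanted
    conv_layer:   the module whose K output feature maps A_k are used
    """
    feats = {}
    handle = conv_layer.register_forward_hook(
        lambda module, inp, out: feats.update(A=out))  # capture A, shape (1, K, h, w)

    image = image.detach().requires_grad_(True)  # build a graph even if the weights are frozen
    scores = model(image)                        # raw score for every class
    handle.remove()

    y_c = scores[0, target_class]
    grads = torch.autograd.grad(y_c, feats["A"])[0]   # d y^c / d A_k, shape (1, K, h, w)
    alpha = grads.mean(dim=(2, 3), keepdim=True)      # global average pooling -> alpha_k
    q = F.relu((alpha * feats["A"]).sum(dim=1))       # alpha-weighted sum over the K maps, then ReLU
    return q.squeeze(0)                               # attention map of shape (h, w)
```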
Proposed approach
We introduce an information preserving penalty (L AD ) based on attention maps. We combine L AD with the distillation loss L D and a classification loss L C to construct LwM, an approach which encourages the attention maps of teacher and student to be similar. Our LwM framework is shown in Figure 3. The loss function of our LwM approach is defined below:
L_{LwM} = L_C + \beta L_D + \gamma L_{AD} \qquad (3)
Here β, γ are the weights used for L D , L AD respectively. A representation of our LwM approach is presented in Figure 3. In comparison to LwM, LwF-MC [14] only uses a classification loss combined with distillation loss and is our baseline.
Attention distillation loss (L AD )
At incremental step t, we define the student model M t , initialized using a teacher model M t−1 . We assume M t−1 is proficient in classifying N base classes. M t is required to recognize N + k classes, where k is the number of previously unseen classes added incrementally. Hence, the sizes of the prediction vectors of M t−1 and M t are N and N + k respectively. For any given input image i, we denote the vectorized attention maps generated by M t−1 and M t for class c as Q^{i,c}_{t−1} and Q^{i,c}_{t} , respectively. We generate these maps using Grad-CAM [15], as explained above.
Q^{i,c}_{t-1} = \mathrm{vector}(\text{Grad-CAM}(i, M_{t-1}, c)) \qquad (4)
Q^{i,c}_{t} = \mathrm{vector}(\text{Grad-CAM}(i, M_{t}, c)) \qquad (5)
We assume that the length of each vectorized attention map is l. In [18], it has been mentioned that normalizing the attention map by dividing it by the L 2 norm of the map is an important step for student training. Hence we perform this step while computing L AD . During the training of M t , an image belonging to one of the new classes to be learned (denoted as I n ) is given as input to both M t−1 and M t . Let b be the top base class predicted by M t (i.e., the base class having the highest score) for I n . For this input, L AD is defined as the sum of the element-wise L 1 differences between the normalized, vectorized attention maps:
L_{AD} = \sum_{j=1}^{l} \left\| \frac{Q^{I_n,b}_{t-1,j}}{\| Q^{I_n,b}_{t-1} \|_2} - \frac{Q^{I_n,b}_{t,j}}{\| Q^{I_n,b}_{t} \|_2} \right\|_1 \qquad (6)
From the explanation above, we know that for training M t , M t−1 is fed with data from classes that it has not seen before (I n ). Essentially, the attention regions generated by M t−1 for I n represent the regions in the image which resemble the base classes. If M t and M t−1 have equivalent knowledge of base classes, they should have a similar response to these regions, and therefore Q^{I_n,b}_{t} should be similar to Q^{I_n,b}_{t−1} . This implies that the attention outputs of M t−1 are the only traces of base data, which guide M t 's knowledge of base classes. We use the L 1 distance between Q^{I_n,b}_{t−1} and Q^{I_n,b}_{t} as a penalty to enforce their similarity. We experimented with both the L 1 and L 2 distances in this context. However, as we obtained better results with the L 1 distance on held-out data, we chose L 1 over L 2 .
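The normalization and the element-wise L 1 penalty of Eq. 6 translate almost directly into code. Below is a small sketch for a single pair of attention maps; the function name and the epsilon guarding against division by zero are assumptions made for illustration.

```python
import torch

def attention_distillation_loss(q_teacher: torch.Tensor,
                                q_student: torch.Tensor) -> torch.Tensor:
    """Eq. 6: L1 distance between L2-normalized, vectorized attention maps."""
    q_t = q_teacher.flatten()                 # vectorized map of M_{t-1}, length l
    q_s = q_student.flatten()                 # vectorized map of M_t, length l
    q_t = q_t / (q_t.norm(p=2) + 1e-8)        # divide each map by its own L2 norm
    q_s = q_s / (q_s.norm(p=2) + 1e-8)
    return (q_t - q_s).abs().sum()            # sum of element-wise absolute differences
```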
According to Eq. 2, attention maps encode the gradient of the score of class b, y^b, with respect to the convolutional feature maps A. This information is not explicitly captured by the distribution of class scores (used by L D ). Attention maps are a 2D manifestation of the prediction vectors (y, ŷ), which means that they capture more spatial information than these vectors, and hence it is more advantageous to use attention maps than only prediction vectors.
Table 2: The statistics of the datasets used in our experiments, in accordance with [14]. Additionally, we also perform experiments on the CUB-200-2011 [16] dataset. In the table, "acc." represents accuracy.
Experiments
We first explain our baseline, which is LwF-MC [14]. Following that, we provide information about the datasets used in our experiments. After that, we describe the iterative protocol to perform classification at every incremental step. We also provide implementation details including architectural information.
Baseline
As our baseline is LwF-MC [14], we first implement its objective function, which is the sum of a classification loss and a distillation loss (L C + L D ). In all our experiments, we use a cross entropy loss for L C to be consistent with [14]. However, it should be highlighted that the official implementation of L D in LwF-MC by [14] is different from the definition of L D in [12]. As LwF-MC (but not LwF) is our baseline, we use iCaRL's implementation of LwF-MC in our work. LwF cannot handle CI problems where no base class training data is available (according to Table 1), which is the reason why we choose LwF-MC as the baseline and use iCaRL's implementation of it.
Datasets
We use the two datasets used in LwF-MC [14] for our experiments. Additionally, we also perform experiments on the Caltech-101 [5] and CUB-200-2011 [16] datasets. The details for the datasets are provided in Table 2. These datasets are constructed by randomly selecting a batch of classes at every incremental step. In each dataset, the classes belonging to different batches are disjoint. For a fair comparison, the data preparation for all the datasets and the evaluation strategy are the same as those for LwF-MC [14].
Table 3: Experiment configurations used in this work, identified by their respective experiment IDs.
Experimental protocol
We now describe the protocol using which we iteratively train M t , so that it preserves the knowledge of the base classes while incrementally learning new classes.
Initialization: Before the first incremental step (t = 1), we train a teacher model M 0 on 10 base classes, using a classification loss for 10 epochs. The classification loss is a cross entropy loss L C . Following this, for t = 1 to t = k we initialize student M t using M t−1 as its teacher, and feed data from a new batch of images that is to be incrementally learned, to both of these models. Here k is the number of incremental steps.
Applying IPP and classification loss to student model: Given the data from new classes as inputs, we generate the output of M t and M t−1 with respect to the base class having the highest score. These outputs can either be class-specific attention maps (required for computing L AD ) or class-specific scores (required for computing L D ). Using these outputs, we compute an IPP, which can be either L AD or L D . In addition, we apply a classification loss to M t based on its outputs with respect to the new classes which are to be learned incrementally. We jointly apply the classification loss and the IPP to M t and train it for 10 epochs. Once M t is trained, we use it as the teacher model in the next incremental step, and follow the aforementioned steps iteratively until all the k incremental steps are completed.
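Putting the protocol together, the sketch below outlines one incremental step, reusing the helpers sketched earlier (distillation_loss, attention_distillation_loss, grad_cam). The function interface, the choice of the top base class from the first image of each batch, and the per-image attention computation are simplifying assumptions for illustration; it also assumes the student's classifier head has already been widened to cover the N base classes plus the k new ones, and that labels are global class indices.

```python
import torch
import torch.nn.functional as F

def incremental_step(teacher, student, loader_new, teacher_layer, student_layer,
                     num_base, beta=1.0, gamma=1.0, epochs=10, lr=0.01):
    """One incremental step: M_{t-1} (teacher) is frozen, M_t (student) is trained
    with L_C + beta * L_D + gamma * L_AD (Eq. 3)."""
    teacher.eval()
    for p in teacher.parameters():
        p.requires_grad_(False)                  # M_{t-1} stays fixed throughout the step

    optimizer = torch.optim.SGD(student.parameters(), lr=lr)
    student.train()

    for _ in range(epochs):
        for images, labels in loader_new:        # only data from the new classes
            optimizer.zero_grad()
            s_logits = student(images)           # scores over the N + k classes
            with torch.no_grad():
                t_logits = teacher(images)       # scores over the N base classes

            l_c = F.cross_entropy(s_logits, labels)                       # classification loss L_C
            l_d = distillation_loss(t_logits, s_logits[:, :num_base])     # Eq. 1
            b = int(s_logits[0, :num_base].argmax())                      # top base class for one image
            q_t = grad_cam(teacher, images[:1], b, teacher_layer)
            q_s = grad_cam(student, images[:1], b, student_layer)
            l_ad = attention_distillation_loss(q_t.detach(), q_s)         # Eq. 6

            (l_c + beta * l_d + gamma * l_ad).backward()
            optimizer.step()
    return student                               # becomes the teacher for step t + 1
```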
Implementation details
We use the ResNet-18 [7] architecture for training student and teacher models on the iILSVRC-small dataset, and ResNet-34 [7] for training models on the iCIFAR-100 dataset. This is consistent with the networks and datasets used in [14]. We used a learning rate of 0.01. The feature maps of the final convolutional layer are used to generate attention maps using Grad-CAM [15]. The combinations of classification loss and IPP, along with their experiment IDs, are provided in Table 3. The experiment configurations will be referred to by their respective experiment IDs from now on. Referring to Eq. 3, we provide details regarding the weights of each loss used in LwM in the supplementary material.
Figure 5: The performance comparison between our method, LwM, and the baselines. LwM outperforms LwF-MC [14] and "using only classification loss with fine-tuning" on the iILSVRC-small and iCIFAR-100 datasets [14]. LwM even outperforms iCaRL [14] on the iILSVRC-small dataset, even though iCaRL has the unfair advantage of accessing the base-class data.
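The paper does not spell out how the classifier head is widened when k new classes arrive. One common way to do it for the ResNet-18 backbone mentioned above is sketched here; the weight-copying scheme and the function name are our own assumptions, not the authors' implementation.

```python
import torch
import torchvision

def build_student(num_base_classes, num_new_classes, teacher=None):
    """ResNet-18 with a final layer widened from N to N + k outputs.

    If a teacher is given, its backbone weights are copied and the first
    num_base_classes rows of the new head are initialized from the teacher's head.
    """
    model = torchvision.models.resnet18(num_classes=num_base_classes + num_new_classes)
    if teacher is not None:
        state = {k: v for k, v in teacher.state_dict().items()
                 if not k.startswith("fc.")}     # everything except the old head
        model.load_state_dict(state, strict=False)
        with torch.no_grad():
            model.fc.weight[:num_base_classes] = teacher.fc.weight
            model.fc.bias[:num_base_classes] = teacher.fc.bias
    return model
```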
Results
Before discussing the quantitative results and advantages of our proposed penalties, we show some qualitative results to demonstrate the advantage of using L AD . We show that we can retain attention regions of base classes for a longer time when more classes are incrementally added to the classifier by using LwM as compared to LwF-MC [14]. Before the first incremental step t = 1, we have M 0 trained on 10 base classes. Now, following the protocol in Sec. 5.3, we incrementally add 10 classes at each incremental step. At every incremental step t, we train M t with 3 configurations: C, LwF-MC [14], and LwM. We use M t to generate the attention maps for the data from the base classes (on which M 0 was trained), which it has not seen, and show the results in Figure 4. Additionally, we also generate corresponding attention maps using M 0 (i.e. the first teacher model), which can be considered 'ideal' (as target maps) as M 0 was given full access to base class data. For the M t s trained with C, it is seen that attention regions for base classes are quickly forgotten after every incremental step. This can be attributed to catastrophic forgetting [9,10]. M t trained with LwF-MC [14] has slightly better attention preserving ability, but as the number of incremental steps increases, the attention regions diverge from the 'ideal' attention regions. Interestingly, the attention maps generated by M t trained with the LwM configuration retain the attention regions for base classes for all incremental steps shown in Figure 4, and are most similar to the target attention maps. These examples show that LwM delays the forgetting of base class knowledge. We show results for more incremental steps in the supplementary material.
We now show the quantitative results of the following configurations: C, LwF-MC [14], and LwM. To show the efficacy of LwM across datasets, we evaluate these configurations on multiple datasets. The results on the iILSVRC-small and iCIFAR-100 datasets are presented in Figure 5. For the iILSVRC-small dataset, the performance of LwM is better than that of the baseline LwF-MC [14]. LwM outperforms the baseline by a margin of more than 30% when the number of classes is 40 or more. Especially for 100 classes, LwM achieves an improvement of more than 50% over the baseline LwF-MC [14]. In addition, LwM outperforms iCaRL [14] at every incremental step, even though iCaRL has the unfair advantage of storing the exemplars of base classes while training the student model on the iILSVRC-small dataset.
To be consistent with the LwF-MC experiments in [14], we perform experiments by constructing the iCIFAR-100 dataset with batches of 10, 20, and 50 classes at each incremental step. The results are provided in Figure 5. It can be seen that LwM outperforms LwF-MC for all three sizes of incremental batches in the iCIFAR-100 dataset. Hence, we conclude that LwM consistently outperforms LwF-MC [14] on the iILSVRC-small and iCIFAR-100 datasets. Additionally, we also perform these experiments on the Caltech-101 [5] and CUB-200-2011 [16] datasets, adding a batch of 10 classes at every incremental step, and compare LwM with fine-tuning. The results for these two datasets are shown in Table 4. The advantage of incrementally adding each loss on top of L C is also established in Figure 5: the performance with only C is very poor due to catastrophic forgetting [9,10]; we achieve some improvement when L D is added as an IPP in LwF-MC; and the performance further improves with the addition of L AD in the LwM configuration. For both the iILSVRC-small and iCIFAR-100 datasets, the tabular version of the results in Figure 5 is provided in the supplementary material.
Table 4: Results obtained on Caltech-101 [5] and CUB-200-2011 [16]. Here FT refers to fine-tuning. The first step refers to the training of the first teacher model using 10 classes.
Conclusion and future work
We explored the IL problem for the task of object classification and proposed a technique, LwM, which combines L D with L AD to utilize attention maps for transferring the knowledge of base classes from the teacher to the student model, without requiring any data of the base classes during training. This technique outperforms the baseline in all the scenarios that we investigate. Regarding future applications, LwM can be used in many real world scenarios. For instance, a face recognition network trained on specific identities can be extended in an incremental setup using our framework to increase the number of identities it can recognize. Also, while we explore the IL problem for classification in this work, we believe that it can also be extended to segmentation. We believe that incremental segmentation is a challenging problem due to the absence of abundant ground truth maps. The importance of incremental segmentation has already been underscored in [3]. Also, as visual attention is more meaningful for segmentation (as shown in [11]), we intend to apply our framework to incremental segmentation in the near future. Other applications may also include incrementally learning from data belonging to the same class but different domains, to build a student model capable of adapting to various domains.
| 5,055 |
1811.08051
|
2901678097
|
Incremental learning (IL) is an important task aimed at increasing the capability of a trained model, in terms of the number of classes recognizable by the model. The key problem in this task is the requirement of storing data (e.g. images) associated with existing classes, while teaching the classifier to learn new classes. However, this is impractical as it increases the memory requirement at every incremental step, which makes it impossible to implement IL algorithms on edge devices with limited memory. Hence, we propose a novel approach, called Learning without Memorizing (LwM)', to preserve the information about existing (base) classes, without storing any of their data, while making the classifier progressively learn the new classes. In LwM, we present an information preserving penalty: Attention Distillation Loss ( @math ), and demonstrate that penalizing the changes in classifiers' attention maps helps to retain information of the base classes, as new classes are added. We show that adding @math to the distillation loss which is an existing information preserving loss consistently outperforms the state-of-the-art performance in the iILSVRC-small and iCIFAR-100 datasets in terms of the overall accuracy of base and incrementally learned classes.
|
We intend to explore the CI problem by proposing to constrain the attention maps of the teacher and student models to be equivalent (in addition to their prediction vectors), to improve the information preserving capability of LwF-MC @cite_4 . In LwF-MC and our proposed method LwM, storing teacher models trained in previous incremental steps is not allowed since it would not be feasible to accumulate models from all the previous steps when the memory budget is limited.
|
{
"abstract": [
"A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail."
],
"cite_N": [
"@cite_4"
],
"mid": [
"2964189064"
]
}
|
Learning without Memorizing
|
Most state-of-the-art solutions to recognition tasks in computer vision require using models which are specifically trained for these tasks [6,13]. For the tasks involving categories (such as object classification, segmentation), the complexity of the task (i.e. the possible number of target classes) limits the ability of these trained models. For example, a trained model aimed for object recognition can only classify object categories on which it has been trained. However, if the number of target classes increases, the model must be updated in such a way that it performs well on the original classes on which it has been trained, also known as base classes, while it incrementally learns new classes as well.
If we retrain the model only on new, previously unseen classes, it would completely forget the base classes, which is known as catastrophic forgetting [9,10], a phenomenon which is not observed in humane learning. Therefore, most existing solutions [4,14,17] explore incremental learning (IL) by allowing the model to retain a fraction of the training data of base classes, while incrementally learning new classes. Yu et al. [17] have proposed retaining trained models encoding base class information, to transfer their knowledge to the model learning new classes. However, this process is not scalable. This is because storing base class data or models encoding base class information is a memory expensive task, and hence is cumbersome when used in a lifelong learning setting. Also, in an industrial setting, when a trained object classification model is delivered to the customer, the training data is kept private for proprietary reasons. Due to this, the customer would not be able to update the trained model to incorporate new target classes in the absence of base class data. Moreover, storing base class data for incrementally learning new classes is not biologically inspired. For example, when a toddler learns to recognize new shapes/objects, it is observed that it does not completely forget the shapes or objects it already knows. It also does not always need to revisit the old information when learning new entities. Inspired by this, we aim to explore incremental learning in object classification by adding a stream of new classes without storing data belonging to classes that the classifier has already seen. While IL solutions which do not require base class data, such as [1,9] have been proposed, these methods mostly aim at incrementally learning new tasks, which means that at test time the model cannot confuse the incrementally learned tasks with tasks it has already learned, making the problem setup much easier.
We aim to explore the problem of incrementally learning object classes, without storing any data or model associated with base classes (Figure 1), while allowing the model to confuse new classes with old ones. In our problem setup, an ideal incremental learner should have the following properties:
i It should help a trained model to learn new classes obtained from a stream of data, while preserving the model's knowledge of base class information.
ii At test time, it should enable the model to consider all the classes it has learned when the model makes a prediction.
iii The size of the memory footprint should not grow at all, irrespective of the number of classes seen thus far.
An existing work targeting the same problem is LwF-MC, which is one of the baselines in [14]. For ease of explanation, we use the following terminology: at incremental step t, the model trained so far acts as the teacher M t−1 , the model being trained on the new classes is the student M t , and the penalty used to preserve base class knowledge is the information preserving penalty (IPP). Initialized using M t−1 , M t is then trained to learn new classes using a classification loss, L C . However, an IPP is also applied to M t so as to minimize the divergence between the representations of M t−1 and M t . While L C helps M t learn new classes, the IPP prevents M t from diverging too much from M t−1 . Since M t is already initialized as M t−1 , the initial value of the IPP is expected to be close to zero. However, as M t keeps learning new classes with L C , it starts diverging from M t−1 , which leads the IPP to increase. The purpose of the IPP is to prevent the divergence of M t from M t−1 . Once M t is trained for a fixed number of epochs, it is used as the teacher in the next incremental step, using which a new student model is initialized.
In LwF-MC [14], the IPP is the knowledge distillation loss. The knowledge distillation loss L D , in this context, was first introduced in [12]. It captures the divergence between the prediction vectors of M t−1 and M t . In an incremental setup, when an image belonging to a new class (I n ) is fed to M t−1 , the base classes which have some resemblance in I n are captured. L D enforces M t to capture the same base classes. Thus, L D essentially makes M t learn 'what' are the possible base classes in I n , as shown in Figure 1. The pixels which have a high influence on the models' prediction constitute the attention region of the network. However, L D does not explicitly take into account the degree of each pixel influencing the models predictions. For example, in Figure 2, in the first row, it is seen that at step n, even though the network focuses on an incorrect region while predicting 'dial telephone', the numerical value of L D (0.09) is same as that when the network focuses on the correct region in step n, in the bottom row.
We hypothesize that attention regions encode the model's representation more precisely. Hence, constraining the attention regions of M t and M t−1 using an Attention Distillation Loss (L AD , explained in Sec. 4.1), to minimize the divergence of the representations of M t from that of M t−1 , is more meaningful. This is because, instead of finding which base classes are resembled in the new data, attention maps explain 'why' hints of a base class are present (as shown in Figure 1). Using these hints, L AD , in an attempt to make the attention maps of M t−1 and M t equivalent, helps to encode some visual knowledge of the base classes in M t . The utility of L AD is seen in the example in Figure 2, where even though the model correctly predicts the image as 'dial telephone', the value of L AD at step n increases if the attention regions diverge too much from the region in Step 0.
We propose an approach where an Attention Distillation Loss (L AD ) is applied to M t to prevent its divergence from M t−1 , at incremental step t. Precisely, we propose to constrain the L 1 distance between the attention maps generated by M t−1 and M t in order to preserve the knowledge of base classes. The reasoning behind this strategy is described in Sec 4.1. This is applied in addition to the distillation loss L D and a classification loss for the student model to incrementally learn new classes.
The main contribution of this work is to provide an attention-based approach, termed 'Learning without Memorizing (LwM)', that helps a model to incrementally learn new classes by restricting the divergence between the student and teacher models. LwM does not require any data of the base classes when learning new classes. Different from the contemporary approaches which explore the same problem, LwM takes into account the gradient flow information of the teacher and student models by generating attention maps using these models. It then constrains this information to be equivalent for the teacher and student models, thus preventing the student model from diverging too much from the teacher model. Finally, we show that LwM consistently outperforms the state-of-the-art performance on the iILSVRC-small [14] and iCIFAR-100 [14] datasets.
Related work
In object classification, Incremental learning (IL) is the process of increasing the breadth of an object classifier, by training it to recognize new classes, while retaining its knowledge of the classes on which it has been trained originally. In the past couple of years, there has been considerable research efforts in this field [9,12]. Moreover, there exist several subsets of this research problem which impose different constraints in terms of data storage and evaluation. We can divide existing methods based on their constraints:
Task incremental (TI) methods: In this problem, a model trained to perform object classification on a specific dataset is incrementally trained to classify objects in a new dataset. A key characteristic of these experiments is that during evaluation, the final model is tested on different datasets (base and incrementally learned) separately. This is known as multi-headed evaluation [4]. In such an evaluation, the classes belonging to two different tasks have no chance to confuse with one another. One of the earlier works in this category is LwF [12], where a distillation loss is used to preserve information of the base classes. Also, the data from base classes is used during training, while the classifier learns new classes. A prominent work in this area is EWC [9], where at each incremental task the weights of the student model are set to those of their corresponding teacher model, according to their importance of network weights. Aljundi et al. present MAS [1], a technique to train the agents to learn what information not to forget. All experiments in this category use multi-headed evaluation, which is different from the problem setting of this paper where we use single-headed evaluation, defined explicitly in [4]. Single-headed evaluation is another evaluation method wherein the model is evaluated on both base and incrementally learned classes jointly. Clearly, multi-headed evaluation is relatively easier than single-headed evaluation, as explained in [4].
Class incremental (CI) methods: In this problem, a model trained to perform object classification on specific classes of a dataset is incrementally trained to classify new unseen classes in the same dataset. Most of the existing work exploring this problem use single-headed evaluation. This makes the CI problem more difficult than the TI problem because the model can confuse the new class with a base class in the CI problem. iCaRL [14] belongs to this category. In iCaRL [14], Rebuffi et al. propose a technique to jointly learn feature representation and classifiers. They also introduce a strategy to select exemplars which is used in combination with the distillation loss to prevent catastrophic forgetting. In addition, a new baseline: LwF-MC is introduced in [14], which is a class incremental version of LwF [12]. LwF-MC uses the distillation loss to preserve the knowledge of base classes along with a classification loss, without storing the data of base classes and is evaluated using single-headed evaluation. Another work aiming to solve the CI problem is [4], which evaluates using both single-headed and multi-headed evaluations and highlight their difference. Chaudhry et al. [4] introduce metrics to quantify forgetting and intransigence, and also propose an algorithm: Riemannian walk to incrementally learn classes.
A key specification of most incremental learning frameworks is whether or not they allow storing the data of base classes (i.e. classes on which the classifier is originally trained). We can also divide existing methods based on this specification:
Methods which use base class data: Several experiments have been proposed to use a small percentage of the data of base classes while training the classifier to learn new classes. iCaRL [14] uses the exemplars of base classes, while incrementally learning new classes. Similarly, Chaudhry et al. [4] also use a fraction of the data of base classes. Chaudhry et al. [4] also show that this is especially useful for alleviating intransigence, which is a problem faced in single-headed evaluation. However, storing data for base classes increases the memory requirement at each incremental step, which is not feasible when the memory budget is limited.
Table 1: Categorization of prior work by constraints. CI methods using base class data: iCaRL [14], [4], [17]; CI methods with no base class data: LwF-MC [14], LwM. TI methods using base class data: LwF [12]; TI methods with no base class data: IMM [10], EWC [9], MAS [1], [2], [8].
Methods which do not use base class data: Several TI methods described earlier (such as [1,9] ) do not use the information about base classes while training the classifier to learn new classes incrementally. To the best of our knowledge, LwF-MC [14] is the only CI method which does not use base class data but uses single-headed evaluation. Table 1 presents a categorization summary of previous works in this field. We propose a technique to solve the CI problem, without using any base class data. We can infer from the discussion above that LwF-MC [14] is the only existing work which uses single-headed evaluation, and hence use it as our baseline. We intend to use attention maps in an incremental setup, instead of only knowledge distillation, to transfer more comprehensive knowledge of base classes from teacher to student model. Although in [18], enforcing equivalence of attention maps of teacher and student models has been explored previously for transferring knowledge from teacher to student models, the same approach cannot be applied to an incremental learning setting. In our incremental problem setup, due to the absence of base class data, we intend to utilize the attention region in the new data which resembles one of the base classes. But these regions are not prominent since the data does not belong to any of the base classes, thus making class-specific attention maps a necessity. Class-specificity is required to mine out base class regions in a more targeted fashion, which is why generic attention maps such as what is used in [18] are not applicable as they can not provide a class-specific explanation about relevant patterns corresponding to the target class. Moreover, our problem setup is different from knowledge distillation because at incremental step t, we freeze M t−1 while training M t , and do not allow M t to access data from the base classes, and therefore M t−1 and M t are trained using a completely different set of classes. This makes the problem more challenging as the output of M t on feeding data from unseen classes is the only source of base class data. This is further explained in Sec. 4.1.
We intend to explore the CI problem by proposing to constrain the attention maps of the teacher and student models to be equivalent (in addition to their prediction vectors), to improve the information preserving capability of LwF-MC [14]. In LwF-MC and our proposed method LwM, storing teacher models trained in previous incremental steps is not allowed since it would not be feasible to accumulate models from all the previous steps when the memory budget is limited.
Distillation loss (L D )
L D was first introduced in [12] for incremental learning. It is defined as follows:
L_D(y, \hat{y}) = -\sum_{i=1}^{N} y'_i \log(\hat{y}'_i) \qquad (1)
where y and ŷ are the prediction vectors (composed of class scores) of M t−1 and M t for the base classes at incremental step t, each of length N (assuming that M t−1 is trained on N base classes), and y'_i = σ(y_i) and ŷ'_i = σ(ŷ_i), where σ(·) is the sigmoid activation. This definition of L D is consistent with that defined in LwF-MC [14]. Essentially, L D enforces the base class predictions of M t and M t−1 to be equivalent when an image belonging to one of the incrementally added classes is fed to each of them. Moreover, we believe that there exist common visual semantics or patterns in both base and new class data. Therefore, it makes sense to encourage the feature responses of M t and M t−1 to be equivalent when new class data is given as input. This helps to retain the old class knowledge (in terms of the common visual semantics).
Generating attention maps
We describe the technique employed to generate attention maps. In our experiments we use the Grad-CAM [15] for this task. For using the Grad-CAM, the image is first forwarded to the model, obtaining a raw score for every class. Following this, the gradient of score y c for a desired class c is computed with respect to each convolutional feature map A k . For each A k , global average pooling is performed to obtain the neuron importance α k of A k . Finally, all the A k weighted by α k are passed through a ReLU activation function to obtain a final attention map for class c.
More precisely, let α_k = GAP(∂y^c / ∂A^k), where GAP(·) denotes the global average pooling described above, let K be the number of convolutional feature maps in the layer from which the attention map is generated, and let α = [α_1 , ... , α_K] and A = [A^1 , ... , A^K]. The attention map Q can be defined as
Q = \mathrm{ReLU}(\alpha^{T} A) \qquad (2)
Proposed approach
We introduce an information preserving penalty (L AD ) based on attention maps. We combine L AD with the distillation loss L D and a classification loss L C to construct LwM, an approach which encourages the attention maps of teacher and student to be similar. Our LwM framework is shown in Figure 3. The loss function of our LwM approach is defined below:
L_{LwM} = L_C + \beta L_D + \gamma L_{AD} \qquad (3)
Here β, γ are the weights used for L D , L AD respectively. A representation of our LwM approach is presented in Figure 3. In comparison to LwM, LwF-MC [14] only uses a classification loss combined with distillation loss and is our baseline.
Attention distillation loss (L AD )
At incremental step t, we define the student model M t , initialized using a teacher model M t−1 . We assume M t−1 is proficient in classifying N base classes. M t is required to recognize N + k classes, where k is the number of previously unseen classes added incrementally. Hence, the sizes of the prediction vectors of M t−1 and M t are N and N + k respectively. For any given input image i, we denote the vectorized attention maps generated by M t−1 and M t for class c as Q^{i,c}_{t−1} and Q^{i,c}_{t} , respectively. We generate these maps using Grad-CAM [15], as explained above.
Q^{i,c}_{t-1} = \mathrm{vector}(\text{Grad-CAM}(i, M_{t-1}, c)) \qquad (4)
Q^{i,c}_{t} = \mathrm{vector}(\text{Grad-CAM}(i, M_{t}, c)) \qquad (5)
We assume that the length of each vectorized attention map is l. In [18], it has been mentioned that normalizing the attention map by dividing it by the L 2 norm of the map is an important step for student training. Hence we perform this step while computing L AD . During the training of M t , an image belonging to one of the new classes to be learned (denoted as I n ) is given as input to both M t−1 and M t . Let b be the top base class predicted by M t (i.e., the base class having the highest score) for I n . For this input, L AD is defined as the sum of the element-wise L 1 differences between the normalized, vectorized attention maps:
L_{AD} = \sum_{j=1}^{l} \left\| \frac{Q^{I_n,b}_{t-1,j}}{\| Q^{I_n,b}_{t-1} \|_2} - \frac{Q^{I_n,b}_{t,j}}{\| Q^{I_n,b}_{t} \|_2} \right\|_1 \qquad (6)
From the explanation above, we know that for training M t , M t−1 is fed with data from classes that it has not seen before (I n ). Essentially, the attention regions generated by M t−1 for I n represent the regions in the image which resemble the base classes. If M t and M t−1 have equivalent knowledge of base classes, they should have a similar response to these regions, and therefore Q^{I_n,b}_{t} should be similar to Q^{I_n,b}_{t−1} . This implies that the attention outputs of M t−1 are the only traces of base data, which guide M t 's knowledge of base classes. We use the L 1 distance between Q^{I_n,b}_{t−1} and Q^{I_n,b}_{t} as a penalty to enforce their similarity. We experimented with both the L 1 and L 2 distances in this context. However, as we obtained better results with the L 1 distance on held-out data, we chose L 1 over L 2 .
According to Eq. 2, attention maps encode the gradient of the score of class b, y^b, with respect to the convolutional feature maps A. This information is not explicitly captured by the distribution of class scores (used by L D ). Attention maps are a 2D manifestation of the prediction vectors (y, ŷ), which means that they capture more spatial information than these vectors, and hence it is more advantageous to use attention maps than only prediction vectors.
Table 2: The statistics of the datasets used in our experiments, in accordance with [14]. Additionally, we also perform experiments on the CUB-200-2011 [16] dataset. In the table, "acc." represents accuracy.
Experiments
We first explain our baseline, which is LwF-MC [14]. Following that, we provide information about the datasets used in our experiments. After that, we describe the iterative protocol to perform classification at every incremental step. We also provide implementation details including architectural information.
Baseline
As our baseline is LwF-MC [14], we firstly implement its objective function, which is a sum of a classification loss and distillation loss (L C + L D ). In all our experiments, we use a cross entropy loss for L C to be consistent with [14]. However, it should be highlighted that the official implementation of L D in LwF-MC by [14] is different from the definition of L D in [12]. As LwF-MC (but not LwF) is our baseline, we use iCaRL's implementation of LwF-MC in our work. LwF cannot handle CI problems where no base class training data is available (according to Table 1), which is the reason why we choose LwF-MC as the baseline and iCaRL's implementation.
Datasets
We use the two datasets used in LwF-MC [14] for our experiments. Additionally, we also perform experiments on the Caltech-101 [5] and CUB-200-2011 [16] datasets. The details for the datasets are provided in Table 2. These datasets are constructed by randomly selecting a batch of classes at every incremental step. In each dataset, the classes belonging to different batches are disjoint. For a fair comparison, the data preparation for all the datasets and the evaluation strategy are the same as those for LwF-MC [14].
Table 3: Experiment configurations used in this work, identified by their respective experiment IDs.
Experimental protocol
We now describe the protocol using which we iteratively train M t , so that it preserves the knowledge of the base classes while incrementally learning new classes.
Initialization: Before the first incremental step (t = 1), we train a teacher model M 0 on 10 base classes, using a classification loss for 10 epochs. The classification loss is a cross entropy loss L C . Following this, for t = 1 to t = k we initialize student M t using M t−1 as its teacher, and feed data from a new batch of images that is to be incrementally learned, to both of these models. Here k is the number of incremental steps.
Applying IPP and classification loss to student model: Given the data from new classes as inputs, we generate the output of M t and M t−1 with respect to the base class having the highest score. These outputs can either be class-specific attention maps (required for computing L AD ) or class-specific scores (required for computing L D ). Using these outputs, we compute an IPP, which can be either L AD or L D . In addition, we apply a classification loss to M t based on its outputs with respect to the new classes which are to be learned incrementally. We jointly apply the classification loss and the IPP to M t and train it for 10 epochs. Once M t is trained, we use it as the teacher model in the next incremental step, and follow the aforementioned steps iteratively until all the k incremental steps are completed.
Implementation details
We use the ResNet-18 [7] architecture for training student and teacher models on the iILSVRC-small dataset, and ResNet-34 [7] for training models on the iCIFAR-100 dataset. This is consistent with the networks and datasets used in [14]. We used a learning rate of 0.01. The feature maps of the final convolutional layer are used to generate attention maps using Grad-CAM [15]. The combinations of classification loss and IPP, along with their experiment IDs, are provided in Table 3. The experiment configurations will be referred to by their respective experiment IDs from now on. Referring to Eq. 3, we provide details regarding the weights of each loss used in LwM in the supplementary material.
Figure 5: The performance comparison between our method, LwM, and the baselines. LwM outperforms LwF-MC [14] and "using only classification loss with fine-tuning" on the iILSVRC-small and iCIFAR-100 datasets [14]. LwM even outperforms iCaRL [14] on the iILSVRC-small dataset, even though iCaRL has the unfair advantage of accessing the base-class data.
Results
Before discussing the quantitative results and advantages of our proposed penalties, we show some qualitative results to demonstrate the advantage of using L AD . We show that we can retain attention regions of base classes for a longer time when more classes are incrementally added to the classifier by using LwM as compared to LwF-MC [14]. Before the first incremental step t = 1, we have M 0 trained on 10 base classes. Now, following the protocol in Sec. 5.3, we incrementally add 10 classes at each incremental step. At every incremental step t, we train M t with 3 configurations: C, LwF-MC [14], and LwM. We use the M t to generate the attention maps for the data from base classes (using which M 0 was trained), which it has not seen, and show the results in Figure 4. Additionally, we also generate corresponding attention maps using M 0 (i.e. the first teacher model), which can be considered 'ideal' (as target maps) as M 0 was given full access to base class data. For the M t s trained with C, it is seen that attention regions for base classes are quickly forgotten after every incremental step. This can be attributed to catastrophic forgetting [9,10]. M t trained with LwF-MC [14] have slightly better attention preserving ability but as the number of incremental steps increases, the attention regions diverge from the 'ideal' attention regions. Interestingly, the attention maps generated by M t trained with LwM configuration retain the attention regions for base classes for all incremental steps shown in Figure 4, and are most similar to the target attention maps. These examples support that LwM delays forgetting of base class knowledge. We show more results in more incremental steps in the supplementary material.
We now show the quantitative results of the following configurations: C, LwF-MC [14], and LwM. To show the efficacy of LwM across datasets, we evaluate these configurations on multiple datasets. The results on the iILSVRC-small and iCIFAR-100 datasets are presented in Figure 5. For the iILSVRC-small dataset, the performance of LwM is better than that of the baseline LwF-MC [14]. LwM outperforms the baseline by a margin of more than 30% when the number of classes is 40 or more. In particular, for 100 classes, LwM achieves an improvement of more than 50% over the baseline LwF-MC [14]. In addition, LwM outperforms iCaRL [14] at every incremental step, even though iCaRL has the unfair advantage of storing exemplars of the base classes while training the student model on the iILSVRC-small dataset.
To be consistent with the LwF-MC experiments in [14], we perform experiments by constructing the iCIFAR-100 dataset using batches of 10, 20, and 50 classes at each incremental step. The results are provided in Figure 5. It can be seen that LwM outperforms LwF-MC for all three sizes of incremental batches on the iCIFAR-100 dataset. Hence, we conclude that LwM consistently outperforms LwF-MC [14] on the iILSVRC-small and iCIFAR-100 datasets. Additionally, we also perform these experiments on the Caltech-101 [5] and CUBS-200-2011 [16] datasets with a batch of 10 classes at every incremental step and compare against fine-tuning. The results for these two datasets are shown in Table 4. The advantage of incrementally adding each loss on top of L_C is also established in Figure 5: the performance with only C is very poor due to catastrophic forgetting [9,10]; we achieve some improvement when L_D is added as an IPP in LwF-MC; and the performance further improves with the addition of L_AD in the LwM configuration. For both the iILSVRC-small and iCIFAR-100 datasets, the tabular version of the results in Figure 5 is provided in the supplementary material.
Table 4: Results obtained on Caltech-101 [5] and CUBS-200-2011 [16]. Here FT refers to fine-tuning. The first step refers to the training of the first teacher model using 10 classes.
Conclusion and future work
We explored the IL problem for the task of object classification and proposed a technique, LwM, which combines L_D with L_AD to utilize attention maps for transferring the knowledge of base classes from the teacher to the student model, without requiring any data of the base classes during training. This technique outperforms the baseline in all the scenarios we investigate. Regarding future applications, LwM can be used in many real-world scenarios. For instance, a face recognition network trained on specific identities can be extended in an incremental setup using our framework to increase the number of identities that can be recognized. Also, while we explore the IL problem for classification in this work, we believe that it can also be extended to segmentation. Incremental segmentation is a challenging problem due to the absence of abundant ground-truth maps, and its importance has already been underscored in [3]. Moreover, as visual attention is more meaningful for segmentation (as shown in [11]), we intend to apply our framework to incremental segmentation in the near future. Other applications may include incrementally learning from data belonging to the same class but different domains, to build a student model capable of adapting to various domains.
| 5,055 |
1811.07258
|
2901012461
|
Multi-object tracking (MOT) is an important and practical task related to both surveillance systems and moving camera applications, such as autonomous driving and robotic vision. However, due to unreliable detection, occlusion and fast camera motion, tracked targets can be easily lost, which makes MOT very challenging. Most recent works treat tracking as a re-identification (Re-ID) task, but how to combine appearance and temporal features is still not well addressed. In this paper, we propose an innovative and effective tracking method called TrackletNet Tracker (TNT) that combines temporal and appearance information together as a unified framework. First, we define a graph model which treats each tracklet as a vertex. The tracklets are generated by appearance similarity with CNN features and intersection-over-union (IOU) with epipolar constraints to compensate camera movement between adjacent frames. Then, for every pair of two tracklets, the similarity is measured by our designed multi-scale TrackletNet. Afterwards, the tracklets are clustered into groups which represent individual object IDs. Our proposed TNT has the ability to handle most of the challenges in MOT, and achieve promising results on MOT16 and MOT17 benchmark datasets compared with other state-of-the-art methods.
|
Besides graph models, recurrent neural network (RNN)-based tracking has also played an important role in recent years @cite_34 @cite_27 @cite_38 @cite_41 @cite_6 @cite_3 . One advantage of RNN-based tracking is the ability to predict online. However, as information propagates through the RNN blocks, the relation between two temporally distant detections becomes very weak. Without direct connections, the performance of RNN-based methods degrades over long time spans and can be easily affected by unreliable detections.
|
{
"abstract": [
"In recent deep online and near-online multi-object tracking approaches, a difficulty has been to incorporate long-term appearance models to efficiently score object tracks under severe occlusion and multiple missing detections. In this paper, we propose a novel recurrent network model, the Bilinear LSTM, in order to improve the learning of long-term appearance models via a recurrent network. Based on intuitions drawn from recursive least squares, Bilinear LSTM stores building blocks of a linear predictor in its memory, which is then coupled with the input in a multiplicative manner, instead of the additive coupling in conventional LSTM approaches. Such coupling resembles an online learned classifier regressor at each time step, which we have found to improve performances in using LSTM for appearance modeling. We also propose novel data augmentation approaches to efficiently train recurrent models that score object tracks on both appearance and motion. We train an LSTM that can score object tracks based on both appearance and motion and utilize it in a multiple hypothesis tracking framework. In experiments, we show that with our novel LSTM model, we achieved state-of-the-art performance on near-online multiple object tracking on the MOT 2016 and MOT 2017 benchmarks.",
"",
"Video object detection is a fundamental tool for many applications. Since direct application of image-based object detection cannot leverage the rich temporal information inherent in video data, we advocate to the detection of long-range video object pattern. While the Long Short-Term Memory (LSTM) has been the de facto choice for such detection, currently LSTM cannot fundamentally model object association between consecutive frames. In this paper, we propose the association LSTM to address this fundamental association problem. Association LSTM not only regresses and classifiy directly on object locations and categories but also associates features to represent each output object. By minimizing the matching error between these features, we learn how to associate objects in two consecutive frames. Additionally, our method works in an online manner, which is important for most video tasks. Compared to the traditional video object detection methods, our approach outperforms them on standard video datasets.",
"We present a novel approach to online multi-target tracking based on recurrent neural networks (RNNs). Tracking multiple objects in real-world scenes involves many challenges, including a) an a-priori unknown and time-varying number of targets, b) a continuous state estimation of all present targets, and c) a discrete combinatorial problem of data association. Most previous methods involve complex models that require tedious tuning of parameters. Here, we propose for the first time, an end-to-end learning approach for online multi-target tracking. Existing deep learning methods are not designed for the above challenges and cannot be trivially applied to the task. Our solution addresses all of the above points in a principled way. Experiments on both synthetic and real data show promising results obtained at 300 Hz on a standard CPU, and pave the way towards future research in this direction.",
"Multi-Object Tracking (MOT) is a challenging task in the complex scene such as surveillance and autonomous driving. In this paper, we propose a novel tracklet processing method to cleave and re-connect tracklets on crowd or long-term occlusion by Siamese Bi-Gated Recurrent Unit (GRU). The tracklet generation utilizes object features extracted by CNN and RNN to create the high-confidence tracklet candidates in sparse scenario. Due to mis-tracking in the generation process, the tracklets from different objects are split into several sub-tracklets by a bidirectional GRU. After that, a Siamese GRU based tracklet re-connection method is applied to link the sub-tracklets which belong to the same object to form a whole trajectory. In addition, we extract the tracklet images from existing MOT datasets and propose a novel dataset to train our networks. The proposed dataset contains more than 95160 pedestrian images. It has 793 different persons in it. On average, there are 120 images for each person with positions and sizes. Experimental results demonstrate the advantages of our model over the state-of-the-art methods on MOT16.",
"We present a multi-cue metric learning framework to tackle the popular yet unsolved Multi-Object Tracking (MOT) problem. One of the key challenges of tracking methods is to effectively compute a similarity score that models multiple cues from the past such as object appearance, motion, or even interactions. This is particularly challenging when objects get occluded or share similar appearance properties with surrounding objects. To address this challenge, we cast the problem as a metric learning task that jointly reasons on multiple cues across time. Our framework learns to encode long-term temporal dependencies across multiple cues with a hierarchical Recurrent Neural Network. We demonstrate the strength of our approach by tracking multiple objects using their appearance, motion, and interactions. Our method outperforms previous works by a large margin on multiple publicly available datasets including the challenging MOT benchmark."
],
"cite_N": [
"@cite_38",
"@cite_41",
"@cite_6",
"@cite_3",
"@cite_27",
"@cite_34"
],
"mid": [
"2895071559",
"",
"2780608998",
"2339473870",
"2796655392",
"2951063106"
]
}
|
Exploit the Connectivity: Multi-Object Tracking with TrackletNet
|
Multi-object tracking is an important topic in the computer vision and machine learning fields. This technique can be used in many tasks, such as traffic flow counting from surveillance cameras, human behavior prediction, and autonomous driving assistance. However, due to noisy detections and occlusions, tracking multiple objects over a long time range is very challenging. To address such problems, many methods follow the tracking-by-detection framework, i.e., tracking is applied as an association approach given the detection results. Built upon the tracking-by-detection framework, multiple cues can be combined together into the tracking scheme. 1) Appearance feature of each detected object [27,41,33,37]. With a well-embedded appearance, features should be similar if they come from the same object, while they can be very different if they come from distinct objects. 2) Temporal relation of locations among frames in a trajectory [22]. With slow motion and a high camera frame rate, we can assume that the trajectories of objects are smooth in the time domain. 3) Interaction cue among different target objects, which considers the relationship among neighboring targets [28]. As a result, we should take all these cues into account as an optimization problem.
Figure 1. Our TNT framework for multi-object tracking. Given the detections in different frames, detection association is computed to generate tracklets for the vertex set V. After that, every two tracklets are fed into a novel TrackletNet to measure the connectivity, which forms the similarity on the edge set E. A graph model G can be derived from V and E. Finally, the tracklets with the same ID are grouped into one cluster using the graph partition approach.
In this paper, the proposed TrackletNet Tracker (TNT) takes advantage of the above cues in a unified framework based on an undirected graph model [23]. Each vertex in our graph model represents one tracklet, and the edge between two vertices measures the connectivity of the two tracklets. Here, a tracklet is defined as a small piece of consecutive detections of an object. Due to unreliable detections and occlusions, the entire trajectory of an object may be divided into several distinct tracklets. Given the graph representation, tracking can be regarded as a clustering approach that groups the tracklets belonging to the same object into one cluster.
To generate the tracklets, i.e., the vertices of the graph, we associate detections among consecutive frames based on intersection-over-union (IOU) and the similarity of appearance features. However, the IOU criterion becomes unreliable because the position of a detection may shift a lot when the camera is moving or rotating. In such situations, epipolar geometry is adopted to compensate for camera movement and predict the position of bounding boxes in the next frame. To estimate the connectivity on the edge between two vertices, the TrackletNet is designed to measure the continuity of two input tracklets by combining both trajectory and appearance information. The flowchart of our tracking method TNT is shown in Figure 1.
Specifically, we propose the following contributions: 1) We build a graph-based model that takes tracklets, instead of detected objects, as the vertices, to better utilize the temporal information and greatly reduce the computational complexity.
2) To the best of our knowledge, this is the first work to adopt epipolar geometry in tracklet generation to compensate camera movement.
3) A CNN architecture, called multi-scale TrackletNet, is designed to measure the connectivity between two tracklets. This network combines trajectory and appearance information into a unified system. 4) Our model outperforms state-of-the-art methods in multi-object tracking on both the MOT16 and MOT17 benchmarks, and it can also be easily applied to other scenarios.
Tracklet Graph Model
We use tracklets as the vertices in our graph model. Unlike detection-based graph models, which are computationally expensive and do not utilize temporal information well, we propose a tracklet-based graph model, which treats the tracklet as the vertex and measures the similarity between tracklets. From a tracklet, we can infer the object's moving trajectory over a longer time, and we can also measure how the embedded features of the detections change over time. Moreover, the number of tracklets is much smaller than the number of detections, which makes the optimization more efficient.
In the following section, we will discuss in detail about the model parameters and optimization by tracklet clustering.
Graph Definition
We define an undirected graph G(V, E) as follows.
Vertex Set. A finite set V in which every element v ∈ V represents a tracklet of one object across multiple frames, i.e., a set of consecutive detections of the same object along time. For each detection, we define the bounding box with five parameters, i.e., the center of the bounding box (x_t, y_t), the width and height (w_t, h_t), and the frame index t. Besides the bounding box of the detection, we also extract an appearance feature [30] for each detected object at frame t. Note that, because of unreliable detections, an entire trajectory of an object may be divided into multiple pieces of tracklets. The tracklet generation is explained in detail in Section 4.1.
Edge Set. A finite set E in which every element e ∈ E represents an edge between two tracklets u, w ∈ V that are not far away in the time domain, i.e., min_{t_u∈T(u), t_w∈T(w)} |t_u − t_w| ≤ δ_t, where T(u) is the set of frame indices of the tracklet u. For tracklets that are far away, no edge is considered between them since not enough information can be utilized for measuring their relationship.
A connectivity measure p_e represents the similarity of the two tracklets connected by the edge e ∈ E. The edge cost is defined as
c = log((1 − p_e) / p_e).    (1)
Moreover, the connectivity is defined to be 0 if two tracklets have overlap in the time domain since they must belong to distinct objects. This is because an object cannot appear in two tracklets at the same time. The connectivity is measured by our designed TrackletNet, which will be introduced in Section 4.2.
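As a small illustration, Eq. (1) maps the learned connectivity to an edge cost as in the sketch below (the clamping epsilon is our own addition for numerical safety; pairs with p_e = 0, e.g. time-overlapping tracklets, effectively receive an infinite cost):

import math

def edge_cost(p_e, eps=1e-7):
    # Eq. (1): strongly negative cost for high connectivity, strongly positive for low.
    p = min(max(p_e, eps), 1.0 - eps)
    return math.log((1.0 - p) / p)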
Tracklet Clustering
After the tracklet graph is built, we acquire the object trajectories by clustering the graph into different sub-graphs. The tracklets in each sub-graph can represent the same object. We will explain some details of our tracklet clustering in the following paragraphs.
Feasible Solutions. Given a tracklet graph G(V, E), we hope to partition G into disjoint sub-graphs G[s_τ], where each sub-graph represents a distinct object. Here s_τ ⊆ V and τ denotes the object ID. Thus, every tracklet u ∈ s_τ is from the same object τ, and any two tracklets u ∈ s_τ and w ∈ s_τ' from two different sub-graphs are from different objects τ and τ'. For the graph partition problem, the global optimal solution cannot be easily guaranteed, but we can still define the feasible solutions as follows.
• Each sub-graph G[s_τ] should be a connected graph, i.e., for all τ and all u, w ∈ s_τ, there exists a path P in G[s_τ] such that u, w ∈ P.
• The cost on each edge inside a sub-graph should have a finite value, i.e., for all τ and all u, w ∈ s_τ, if there exists an edge e ∈ E between u and w, then p_e ≠ 0.
Objective Function. The objective function is defined to minimize the total clustering cost on all graph edges. We define π(u, w) ∈ {±1} as the clustering label for tracklets u and w. If u and w are partitioned into one sub-graph, π(u, w) is set to be +1; otherwise, π(u, w) is set to be −1.
The objective function is defined as follows,
O = min_{π∈{±1}} Σ_{u,w∈V, u∈N(w)} π(u, w) · c(u, w),    (2)
where N(w) represents the set of neighboring tracklets of w that share an edge with w in the graph.
Clustering. The graph partition is formulated as a clustering problem. However, the minimum-cost graph cut problem defined by Equation (2) is APX-hard [24]. Besides, the number of clusters is unknown in advance. In this work, we exploit a greedy search-based clustering method proposed by [34] to minimize the cost. Five clustering operations, i.e., assign, merge, split, switch, and break, are used. The advantage of adopting different types of clustering operations is to avoid getting stuck at local minima as much as possible during the optimization.
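For reference, the objective of Eq. (2) for a given tracklet-to-cluster assignment can be evaluated as in the following sketch; the greedy optimizer of [34] can then be thought of as proposing assign/merge/split/switch/break moves and keeping a move whenever it lowers this value (a simplified view, not the exact procedure):

def clustering_cost(edge_costs, labels):
    # edge_costs: dict {(u, w): c(u, w)} over tracklet pairs that share an edge.
    # labels: dict mapping each tracklet to its current cluster (object) id.
    total = 0.0
    for (u, w), c in edge_costs.items():
        pi = 1 if labels[u] == labels[w] else -1   # pi(u, w) in Eq. (2)
        total += pi * c
    return total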
Proposed TrackletNet Tracker
Tracklet Generation with Epipolar Constraints
As defined in Section 3, a tracklet contains consecutively detected objects with bounding box information and appearance features of dimension d_ap. To simplify the generation of tracklets, we associate detections in adjacent frames based on IOU and appearance similarity with a high association threshold, so that mis-associations are kept as few as possible [41,35].
However, the association accuracy can still be affected by fast camera motion. For example, as shown in Figure 2(a)(b), the target detection in the t-th frame has a large IOU with another detection in the (t + 1)-th frame. As a result, the detection may easily get mis-associated.
This issue can be well addressed by epipolar geometry (EG) [8], i.e., x_t^T F x_{t+1} = 0 for any matched static feature point x in the two frames, where F is the fundamental matrix. First, if we assume the target is static or moves slowly, the four corner points x_{i,t} of the target bounding box in the t-th frame induce four epipolar lines in the (t + 1)-th frame, and the predicted target bounding box in the (t + 1)-th frame should intersect these epipolar lines as much as possible, as shown in Figure 2(c). Second, we also assume the size of the bounding box does not change much between adjacent frames; then the optimal predicted bounding box can be obtained, which is shown in red in Figure 2(d).
Following the above two assumptions, we can predict the target bounding box location in the (t + 1)-th frame by formulating an optimization problem. Define the four corner points of the target bounding box in the t-th frame as x_{i,t}, where i ∈ {1, 2, 3, 4}, as in the example shown in Figure 2(a). Similarly, we define x_{i,t+1}, i ∈ {1, 2, 3, 4}, as the corner points of the bounding box in the (t + 1)-th frame. Then we can define the cost function as follows,
f(x_{i,t+1}) = Σ_{i=1}^{4} ||x_{i,t}^T F x_{i,t+1}||_2^2 + ||(x_{3,t+1} − x_{1,t+1}) − (x_{3,t} − x_{1,t})||_2^2,    (3)
where the first term encourages the predicted bounding box to intersect the four corresponding epipolar lines as much as possible, while the second term is the target size constraint. One example of a predicted bounding box, shown in Figure 2(d), is well aligned with the true target in the (t + 1)-th frame. Then, in the detection association, IOU is calculated between the predicted bounding boxes and the detection bounding boxes in the (t + 1)-th frame. The fundamental matrix F can be estimated by the RANSAC [6] algorithm with matched SURF points [1] between two consecutive frames.
The optimization of the cost function in Equation (3) can be reformulated into a Least Square problem and solved efficiently.
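A minimal sketch of that least-squares reformulation is given below. It assumes an axis-aligned box parameterised by (left, top, right, bottom), so the four epipolar constraints plus the two size-preservation constraints form a small overdetermined linear system; the corner ordering and parameterisation are simplifications of ours, not necessarily the exact formulation.

import numpy as np

def predict_box(box_t, F):
    # box_t = (left, top, right, bottom) in frame t; F maps frame t to frame t+1,
    # i.e. the epipolar line of a point x_t in frame t+1 is F^T x_t.
    l, t, r, b = box_t
    corners = np.array([[l, t, 1.0], [r, t, 1.0], [r, b, 1.0], [l, b, 1.0]])
    # Unknowns u = [left, right, top, bottom] of the predicted box; these indices
    # say which unknown supplies the x (resp. y) coordinate of each corner.
    x_idx, y_idx = [0, 1, 1, 0], [2, 2, 3, 3]

    A, rhs = [], []
    for i in range(4):
        a_i, b_i, c_i = F.T @ corners[i]           # epipolar line of corner i
        row = np.zeros(4)
        row[x_idx[i]], row[y_idx[i]] = a_i, b_i
        A.append(row)
        rhs.append(-c_i)
    # Size preservation: predicted width and height match the current box.
    A.append([-1.0, 1.0, 0.0, 0.0]); rhs.append(r - l)
    A.append([0.0, 0.0, -1.0, 1.0]); rhs.append(b - t)

    u, *_ = np.linalg.lstsq(np.array(A), np.array(rhs), rcond=None)
    left, right, top, bottom = u
    return left, top, right, bottom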
Multi-Scale TrackletNet
To measure the connectivity between two tracklets, we aggregate different types of information, including temporal and appearance features via the designed multi-scale TrackletNet. The architecture of the proposed TrackletNet is shown in Figure 3.
For each frame t, a vector consisting of the bounding box parameters, i.e., (x_t, y_t, w_t, h_t), concatenated with an embedded appearance feature extracted from FaceNet [30], is used to represent an individual detection from a tracklet. Considering two tracklets that share an edge in the graph, we concatenate the embedded feature of each detection from these two tracklets inside a time window with a fixed size T. The feature space in the time window of the two tracklets is then (4 + d_ap) × T. For frames between the two target tracklets, we use a (4 + d_ap)-dimensional interpolated vector at each missing frame t. Besides, zero-padding is used for frames after the second tracklet. To better represent the time duration of the input tracklets, two binary masks are used as individual channels, each of dimension (4 + d_ap) × T, one per input tracklet. For each frame t, if the detection exists, we set the t-th column of the binary mask to the all-ones vector; otherwise, we set it to the all-zeros vector. As a result, the size of the input tensor of the TrackletNet is B × (4 + d_ap) × T × 3, where B is the batch size and 3 indicates the number of channels, one for the embedded feature space and the other two for the binary masks.
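A sketch of how this input tensor could be assembled for one tracklet pair is shown below; it assumes tracklet A precedes tracklet B and that frame indices have already been shifted into the T-frame window (simplifying assumptions of ours):

import numpy as np

def build_pair_tensor(trk_a, trk_b, T=64, d_ap=512):
    # trk_a, trk_b: dicts {frame index: (4 + d_ap)-dim NumPy feature vector}.
    d = 4 + d_ap
    feat = np.zeros((d, T), dtype=np.float32)
    mask_a = np.zeros((d, T), dtype=np.float32)
    mask_b = np.zeros((d, T), dtype=np.float32)

    for t, v in trk_a.items():
        feat[:, t] = v; mask_a[:, t] = 1.0
    for t, v in trk_b.items():
        feat[:, t] = v; mask_b[:, t] = 1.0

    # Linearly interpolate the gap between the end of A and the start of B;
    # frames after the second tracklet simply stay zero-padded.
    end_a, start_b = max(trk_a), min(trk_b)
    for t in range(end_a + 1, start_b):
        w = (t - end_a) / float(start_b - end_a)
        feat[:, t] = (1 - w) * trk_a[end_a] + w * trk_b[start_b]

    return np.stack([feat, mask_a, mask_b], axis=0)   # shape (3, 4 + d_ap, T)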
TrackletNet contains three convolution layers Conv1, Conv2, Conv3, one average pooling layer AvgPool, and two fully connected layers FC1, FC2. For each convolution layer, four different kernel sizes are used, i.e., 1 × 3, 1 × 5, 1 × 9, 1 × 13. Note that our convolution is only in the time domain, which can measure the continuity of each dimension of the feature. Different kernel sizes look for feature changes at different scales. The large kernels have the ability to measure the continuity of two tracklets even if they are far apart in the time domain, while small kernels can focus on appearance differences if the input tracklets are in small pieces. Each convolution is followed by one max pooling layer which down-samples by 2 in the time domain. After Conv3, we take the average pooling over the appearance feature dimensions. AvgPool plays the role of a weighted majority vote on the discontinuity of all appearance dimensions. Then we concatenate all features and use two fully connected layers for the final output. The output is defined as the similarity between the two input tracklets, which ranges from zero to one.
Figure 3. Architecture of the multi-scale TrackletNet. First, we extract embedded features from two input tracklets, which include 4D location features and 512D appearance features along the time window of 64 frames. The input tensor has three channels, i.e., one for tracklet embedded features and the other two for binary masks, where white represents 1 and black represents 0. Four types of 1D convolution kernels are applied for feature extraction in three convolution layers. For each convolution layer, max pooling is adopted for down-sampling in the time domain. Average pooling is conducted on the dimensions of the appearance feature after Conv3. Then two fully connected layers are applied to get the final output.
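A simplified PyTorch-style sketch of such a multi-scale temporal network is given below. The channel widths, the head sizes, and averaging over all feature dimensions (rather than only the appearance dimensions) are illustrative assumptions, not the exact architecture of the paper.

import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    # One conv layer with four parallel temporal kernels (1x3, 1x5, 1x9, 1x13),
    # acting only along the time axis, followed by 2x max pooling in time.
    def __init__(self, in_ch, ch_per_branch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, ch_per_branch, kernel_size=(1, k), padding=(0, k // 2))
            for k in (3, 5, 9, 13)])
        self.act = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool2d(kernel_size=(1, 2))

    def forward(self, x):
        x = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.pool(self.act(x))

class TrackletNet(nn.Module):
    def __init__(self, T=64, width=8):
        super().__init__()
        self.conv1 = MultiScaleBlock(3, width)
        self.conv2 = MultiScaleBlock(4 * width, width)
        self.conv3 = MultiScaleBlock(4 * width, width)
        t_out = T // 8                               # three 2x poolings in time
        self.head = nn.Sequential(
            nn.Linear(4 * width * t_out, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x):                            # x: (B, 3, 4 + d_ap, T)
        x = self.conv3(self.conv2(self.conv1(x)))
        x = x.mean(dim=2)                            # average pool over feature dims
        return self.head(x.flatten(1))               # similarity in [0, 1]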
There are some important properties of the TrackletNet, which are listed as follows.
• TrackletNet focuses on the continuity of the embedded features along the time. Because of the independence among different feature dimensions, no convolution is conducted across the dimensions of the embedded features. In other words, the convolution kernels only capture the dependency along time.
• Binary masks of the input tensor play the role of tracklet indicators, telling the temporal locations of the tracklets. They help the network learn whether the discontinuity between two tracklets is caused by frames without detections or by abrupt changes of the tracklets.
• The network integrates object Re-ID, temporal and spatial dependency as one unified framework.
Experiments
Dataset
We use the MOT16 and MOT17 [21] datasets to train and evaluate our tracking performance. The MOT16 dataset has 7 training video sequences and 7 testing video sequences. The benchmark also provides public deformable part models (DPM) [5] detections for both training and testing data. MOT17 has the same video sequences as MOT16 but provides more accurate ground truth for evaluation. In addition to DPM, Faster-RCNN [25] and scale dependent pooling (SDP) [38] detections are also provided for evaluating the tracking performance. The number of trajectories in the training data is 546 and the total number of frames is 5,316.
Implementation Details
Our proposed multi-scale TrackletNet is trained purely on the MOT dataset. The extracted appearance feature has 512 dimensions, i.e., d_ap = 512. The time window T is set to 64 and the batch size B is set to 32. We use the Adam optimizer with an initial learning rate of 10^−3. We decrease the learning rate by a factor of 10 every 2,000 steps until it reaches 10^−5. As mentioned above, the MOT dataset is quite small for training a complex neural network. However, the framework of our proposed TNT is carefully designed to avoid over-fitting. In addition, the following augmentation approaches are used for generating the training data, i.e., tracklets.
Bounding box randomization. Instead of using the ground truth bounding boxes for training, we randomly disturb the size and location of bounding boxes by a factor α sampled from the normal distribution N(0, 0.05^2). Since the detection results can be very noisy, this randomization ensures that the training and testing data are as similar as possible. For each embedded detection fed to the TrackletNet, the four parameters, i.e., (x, y, w, h), are normalized by the size of the frame image so that the input of the TrackletNet keeps the same scale across different datasets.
Tracklet generation. Here, we randomly divide the trajectory of each object into small pieces of tracklets as follows. For each frame, we sample a random number from the uniform distribution; if it is smaller than a threshold, we set this frame as a breaking frame. We then split the entire trajectory into tracklets at the breaking frames.
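Both augmentations could be sketched as follows; the exact perturbation scheme and the breaking probability p_break are assumptions based on the description above:

import numpy as np

def randomize_box(box, sigma=0.05):
    # Perturb (x, y, w, h) with factors drawn from N(0, sigma^2).
    x, y, w, h = box
    a = np.random.normal(0.0, sigma, size=4)
    return (x + a[0] * w, y + a[1] * h, w * (1.0 + a[2]), h * (1.0 + a[3]))

def split_trajectory(frames, p_break=0.05):
    # Split a sorted list of frame indices into tracklets: every frame becomes
    # a breaking frame with probability p_break.
    tracklets, current = [], [frames[0]]
    for f in frames[1:]:
        if np.random.uniform() < p_break:
            tracklets.append(current)
            current = []
        current.append(f)
    tracklets.append(current)
    return tracklets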
In the training stage, we randomly generate tracklets with the augmentations mentioned above. For each training sample, two tracklets are randomly selected as the input if they satisfy the edge condition defined in the graph model in Section 3.1. If they are from the same object, the training label is set to 1; otherwise, 0 is assigned as the label. To avoid bias, positive and negative pairs are sampled equally.
Feature Map Visualization
To better understand the effectiveness of our proposed TrackletNet, we also plot two examples of feature maps, as shown in Figure 4. For each column (a) and (b) in Figure 4, the top figure shows the spatial locations of the two input tracklets in the 64-frame time window; blue and green colors represent the two tracklets, respectively. The bottom figure shows the corresponding feature map in the time-channel plane after the max pooling of Conv3 with kernel size 5. The horizontal axis represents the time domain, which aligns with the figures in the top row, while the vertical axis represents different channels in the feature map. For the example shown in (a), most of the higher values of the feature map are on the left side, since the connection between the two tracklets is in the left part of the time window. As for (b), higher values in the feature map are in the middle of the time window, which also matches the situation of the two input tracklets. From the feature maps, we can see that the connection part of the input tracklets produces strong activation, which is critical for the connectivity measurement.
Tracking Performance
Quantitative results on MOT16 and MOT17 datasets. We provide our quantitative results on the MOT16 and MOT17 benchmark datasets compared with other state-of-the-art methods in Table 1 and Table 2. Note that we use IDF1 [26] and MOTA as the major factors to evaluate the reliability of a tracker. As mentioned in [26], the MOTA metric has several weaknesses; in particular, it is very sensitive to the detection threshold. The IDF1 score instead matches ground-truth and computed trajectories via a bipartite graph, which reflects how long an object has been correctly tracked. We can see that our IDF1 score is much higher than that of other state-of-the-art methods. For the other metrics shown in the tables, we are also among the top rankings.
Qualitative results for different scenarios. With the trained model on the MOT dataset, we also test our proposed tracker on other scenarios without any fine-tuning. Promising results are also achieved. Figure 5 shows some qualitative tracking results using our proposed tracker on other applications, like 3D pose estimation and UAV applications.
Ablation Study
Occlusion Handling. Occlusion is one of the major challenges in MOT. Our framework can easily handle both partial and full occlusions, even over a very long time range. When a person is occluded, the detection as well as the appearance features are unreliable. During tracklet generation, when we detect a large change in appearance, we stop the detection association even if a detection result is available. After several frames, or even tens of frames, when the same person reappears from the occlusion, a new tracklet will be assigned to that person. The connectivity between these two tracklets will then be measured to determine whether they belong to the same person. Once they are confirmed to have the same ID, we can easily fill in the missing detections with linear interpolation. Figure 6 shows qualitative results for handling occlusions. The first row of Figure 6 is from the MOT17-08 sequence. At frame 566, the person with the red bounding box is fully occluded by a statue, but it is correctly tracked after it appears again at frame 604. The second row shows an example from the MOT17-01 sequence: the person with the red bounding box walks across five other pedestrians, but the IDs of all targets remain consistent.
To evaluate the effect of epipolar geometry in tracklet generation, we run detection association on MOT17-10 and MOT17-13 with the Faster-RCNN detector, because these two sequences have large camera motion. Table 3 shows the results with and without epipolar geometry. Two types of error rates are evaluated, i.e., the false discovery rate (FDR) and the false negative rate (FNR), which are defined as follows,
FDR = FP / (TP + FP),    FNR = FN / (TP + FN),    (4)
where TP, FP, and FN represent the numbers of true positives, false positives, and false negatives, respectively. From Table 3, we can see that the FDR is quite small in both cases, which means only a small portion of the associations made during tracklet generation are incorrect. This shows the effectiveness of our tracklet-based graph model. On the other hand, the FNR drops substantially when epipolar geometry is adopted, especially for the MOT17-13 sequence, which reflects the effectiveness of the proposed tracklet generation strategy.
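Eq. (4) amounts to the following small computation (shown only for completeness):

def association_error_rates(tp, fp, fn):
    # False discovery rate and false negative rate of the detection association.
    fdr = fp / float(tp + fp)
    fnr = fn / float(tp + fn)
    return fdr, fnr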
Robustness to Appearance Features. Another major advantage of our TrackletNet is its ability to avoid overfitting to appearance features. Different from [20], our TrackletNet is trained only on the MOT dataset without using additional tracking datasets, yet we still achieve very good performance. This is because the appearance feature dimensions are treated independently when training the network, with convolutions conducted only in the time domain. As a result, the complexity of the network is largely reduced, which also decreases the effect of overfitting.
To test the model's robustness to appearance features, we disturb the appearance features with Gaussian noise on the MOT17-02 sequence. The baseline method for comparison uses the Bhattacharyya distance between the appearance features of the input pair of tracklets as the edge cost in the graph, which is commonly used in person Re-ID tasks. The comparison results with Gaussian noise of different standard deviations (Std) are shown in Table 4. From the table, we can see that the baseline method degrades substantially as the noise level increases, while the tracking performance of TNT is not affected much. This is because TNT measures the temporal continuity of features as the similarity rather than using the feature distance itself, which can largely suppress unreliable detections and noise in tracking.
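As an illustration of this robustness test, the noise injection and a Bhattacharyya-distance baseline could look like the sketch below; it assumes non-negative, histogram-like appearance features for the baseline, which is a simplification:

import numpy as np

def add_feature_noise(feat, std):
    # Disturb an appearance feature vector with Gaussian noise of a given std.
    return feat + np.random.normal(0.0, std, size=feat.shape)

def bhattacharyya_distance(p, q, eps=1e-12):
    # Baseline edge cost: distance between two L1-normalised feature histograms.
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    bc = np.sum(np.sqrt(p * q))
    return -np.log(bc + eps)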
Conclusion and Future Work
In this paper, we propose a novel multi-object tracking method, TNT, based on a tracklet graph model, including tracklet vertex generation with epipolar geometry and connectivity edge measurement by a multi-scale TrackletNet. Our TNT outperforms other state-of-the-art methods on the MOT16 and MOT17 benchmarks. We also show qualitative results of TNT on different scenarios and applications. The robustness of TNT in handling occlusions is further discussed.
However, fast camera motion is still a challenge in 2D tracking. In our future work, we are going to convert 2D tracking to 3D tracking with the help of visual odometry. Once the 3D location of the object in the world coordinate can be estimated, the trajectory of the object should be much smoother than the 2D case.
| 4,120 |
1811.07258
|
2901012461
|
Multi-object tracking (MOT) is an important and practical task related to both surveillance systems and moving camera applications, such as autonomous driving and robotic vision. However, due to unreliable detection, occlusion and fast camera motion, tracked targets can be easily lost, which makes MOT very challenging. Most recent works treat tracking as a re-identification (Re-ID) task, but how to combine appearance and temporal features is still not well addressed. In this paper, we propose an innovative and effective tracking method called TrackletNet Tracker (TNT) that combines temporal and appearance information together as a unified framework. First, we define a graph model which treats each tracklet as a vertex. The tracklets are generated by appearance similarity with CNN features and intersection-over-union (IOU) with epipolar constraints to compensate camera movement between adjacent frames. Then, for every pair of two tracklets, the similarity is measured by our designed multi-scale TrackletNet. Afterwards, the tracklets are clustered into groups which represent individual object IDs. Our proposed TNT has the ability to handle most of the challenges in MOT, and achieve promising results on MOT16 and MOT17 benchmark datasets compared with other state-of-the-art methods.
|
Features are very important in the tracking-by-detection framework. Two types of features are commonly used, i.e., appearance features and temporal features. For appearance features, many works adopt CNN-based features from Re-ID tasks @cite_14 @cite_20 @cite_36 . However, histogram-based features, such as color histograms, HOG, and LBP, are still powerful if no training data is provided @cite_33 . As for temporal features, the location, size, and motion of bounding boxes are commonly used. Given the appearance features and temporal features, the tracker can fuse them together using human-defined weights @cite_20 @cite_31 @cite_33 . Although @cite_34 @cite_27 propose RNN-based networks to combine features, determining the weight of each feature remains empirical and difficult.
|
{
"abstract": [
"Multi-Target Multi-Camera Tracking (MTMCT) tracks many people through video taken from several cameras. Person Re-Identification (Re-ID) retrieves from a gallery images of people similar to a person query image. We learn good features for both MTMCT and Re-ID with a convolutional neural network. Our contributions include an adaptive weighted triplet loss for training and a new technique for hard-identity mining. Our method outperforms the state of the art both on the DukeMTMC benchmarks for tracking, and on the Market-1501 and DukeMTMC-ReID benchmarks for Re-ID. We examine the correlation between good Re-ID and good MTMCT scores, and perform ablation studies to elucidate the contributions of the main components of our system. Code is available.",
"Tracking of vehicles across multiple cameras with nonoverlapping views has been a challenging task for the intelligent transportation system (ITS). It is mainly because of high similarity among vehicle models, frequent occlusion, large variation in different viewing perspectives and low video resolution. In this work, we propose a fusion of visual and semantic features for both single-camera tracking (SCT) and inter-camera tracking (ICT). Specifically, a histogram-based adaptive appearance model is introduced to learn long-term history of visual features for each vehicle target. Besides, semantic features including trajectory smoothness, velocity change and temporal information are incorporated into a bottom-up clustering strategy for data association in each single camera view. Across different camera views, we also exploit other information, such as deep learning features, detected license plate features and detected car types, for vehicle re-identification. Additionally, evolutionary optimization is applied to camera calibration for reliable 3D speed estimation. Our algorithm achieves the top performance in both 3D speed estimation and vehicle re-identification at the NVIDIA AI City Challenge 2018.",
"",
"Multi-Object Tracking (MOT) is a challenging task in the complex scene such as surveillance and autonomous driving. In this paper, we propose a novel tracklet processing method to cleave and re-connect tracklets on crowd or long-term occlusion by Siamese Bi-Gated Recurrent Unit (GRU). The tracklet generation utilizes object features extracted by CNN and RNN to create the high-confidence tracklet candidates in sparse scenario. Due to mis-tracking in the generation process, the tracklets from different objects are split into several sub-tracklets by a bidirectional GRU. After that, a Siamese GRU based tracklet re-connection method is applied to link the sub-tracklets which belong to the same object to form a whole trajectory. In addition, we extract the tracklet images from existing MOT datasets and propose a novel dataset to train our networks. The proposed dataset contains more than 95160 pedestrian images. It has 793 different persons in it. On average, there are 120 images for each person with positions and sizes. Experimental results demonstrate the advantages of our model over the state-of-the-art methods on MOT16.",
"",
"We present a multi-cue metric learning framework to tackle the popular yet unsolved Multi-Object Tracking (MOT) problem. One of the key challenges of tracking methods is to effectively compute a similarity score that models multiple cues from the past such as object appearance, motion, or even interactions. This is particularly challenging when objects get occluded or share similar appearance properties with surrounding objects. To address this challenge, we cast the problem as a metric learning task that jointly reasons on multiple cues across time. Our framework learns to encode long-term temporal dependencies across multiple cues with a hierarchical Recurrent Neural Network. We demonstrate the strength of our approach by tracking multiple objects using their appearance, motion, and interactions. Our method outperforms previous works by a large margin on multiple publicly available datasets including the challenging MOT benchmark.",
"Although many methods perform well in single camera tracking, multi-camera tracking remains a challenging problem with less attention. DukeMTMC is a large-scale, well-annotated multi-camera tracking benchmark which makes great progress in this field. This report is dedicated to briefly introduce our method on DukeMTMC and show that simple hierarchical clustering with well-trained person re-identification features can get good results on this dataset."
],
"cite_N": [
"@cite_14",
"@cite_33",
"@cite_36",
"@cite_27",
"@cite_31",
"@cite_34",
"@cite_20"
],
"mid": [
"2794497862",
"2893929991",
"",
"2796655392",
"",
"2951063106",
"2779160359"
]
}
|
Exploit the Connectivity: Multi-Object Tracking with TrackletNet
|
Multi-object tracking is an important topic in the computer vision and machine learning fields. This technique can be used in many tasks, such as traffic flow counting from surveillance cameras, human behavior prediction, and autonomous driving assistance. However, due to noisy detections and occlusions, tracking multiple objects over a long time range is very challenging. To address such problems, many methods follow the tracking-by-detection framework, i.e., tracking is applied as an association approach given the detection results. Built upon the tracking-by-detection framework, multiple cues can be combined together into the tracking scheme. 1) Appearance feature of each detected object [27,41,33,37]. With a well-embedded appearance, features should be similar if they come from the same object, while they can be very different if they come from distinct objects. 2) Temporal relation of locations among frames in a trajectory [22]. With slow motion and a high camera frame rate, we can assume that the trajectories of objects are smooth in the time domain. 3) Interaction cue among different target objects, which considers the relationship among neighboring targets [28]. As a result, we should take all these cues into account as an optimization problem.
Figure 1. Our TNT framework for multi-object tracking. Given the detections in different frames, detection association is computed to generate tracklets for the vertex set V. After that, every two tracklets are fed into a novel TrackletNet to measure the connectivity, which forms the similarity on the edge set E. A graph model G can be derived from V and E. Finally, the tracklets with the same ID are grouped into one cluster using the graph partition approach.
In this paper, the proposed TrackletNet Tracker (TNT) takes advantage of the above cues in a unified framework based on an undirected graph model [23]. Each vertex in our graph model represents one tracklet, and the edge between two vertices measures the connectivity of the two tracklets. Here, a tracklet is defined as a small piece of consecutive detections of an object. Due to unreliable detections and occlusions, the entire trajectory of an object may be divided into several distinct tracklets. Given the graph representation, tracking can be regarded as a clustering approach that groups the tracklets belonging to the same object into one cluster.
To generate the tracklets, i.e., the vertices of the graph, we associate detections among consecutive frames based on intersection-over-union (IOU) and the similarity of appearance features. However, the IOU criterion becomes unreliable because the position of a detection may shift a lot when the camera is moving or rotating. In such situations, epipolar geometry is adopted to compensate for camera movement and predict the position of bounding boxes in the next frame. To estimate the connectivity on the edge between two vertices, the TrackletNet is designed to measure the continuity of two input tracklets by combining both trajectory and appearance information. The flowchart of our tracking method TNT is shown in Figure 1.
Specifically, we propose the following contributions: 1) We build a graph-based model that takes tracklets, instead of detected objects, as the vertices, to better utilize the temporal information and greatly reduce the computational complexity.
2) To the best of our knowledge, this is the first work to adopt epipolar geometry in tracklet generation to compensate camera movement.
3) A CNN architecture, called multi-scale TrackletNet, is designed to measure the connectivity between two tracklets. This network combines trajectory and appearance information into a unified system. 4) Our model outperforms state-of-the-art methods in multi-object tracking on both the MOT16 and MOT17 benchmarks, and it can also be easily applied to other scenarios.
Tracklet Graph Model
We use tracklets as the vertices in our graph model. Unlike detection-based graph models, which are computationally expensive and do not utilize temporal information well, we propose a tracklet-based graph model, which treats the tracklet as the vertex and measures the similarity between tracklets. From a tracklet, we can infer the object's moving trajectory over a longer time, and we can also measure how the embedded features of the detections change over time. Moreover, the number of tracklets is much smaller than the number of detections, which makes the optimization more efficient.
In the following section, we will discuss in detail about the model parameters and optimization by tracklet clustering.
Graph Definition
We define an undirected graph G(V, E) as follows.
Vertex Set. A finite set V in which every element v ∈ V represents a tracklet of one object across multiple frames, i.e., a set of consecutive detections of the same object along time. For each detection, we define the bounding box with five parameters, i.e., the center of the bounding box (x_t, y_t), the width and height (w_t, h_t), and the frame index t. Besides the bounding box of the detection, we also extract an appearance feature [30] for each detected object at frame t. Note that, because of unreliable detections, an entire trajectory of an object may be divided into multiple pieces of tracklets. The tracklet generation is explained in detail in Section 4.1.
Edge Set. A finite set E in which every element e ∈ E represents an edge between two tracklets u, w ∈ V that are not far away in the time domain, i.e., min_{t_u∈T(u), t_w∈T(w)} |t_u − t_w| ≤ δ_t, where T(u) is the set of frame indices of the tracklet u. For tracklets that are far away, no edge is considered between them since not enough information can be utilized for measuring their relationship.
A connectivity measure p_e represents the similarity of the two tracklets connected by the edge e ∈ E. The edge cost is defined as
c = log((1 − p_e) / p_e).    (1)
Moreover, the connectivity is defined to be 0 if two tracklets have overlap in the time domain since they must belong to distinct objects. This is because an object cannot appear in two tracklets at the same time. The connectivity is measured by our designed TrackletNet, which will be introduced in Section 4.2.
Tracklet Clustering
After the tracklet graph is built, we acquire the object trajectories by clustering the graph into different sub-graphs. The tracklets in each sub-graph can represent the same object. We will explain some details of our tracklet clustering in the following paragraphs.
Feasible Solutions. Given a tracklet graph G(V, E), we hope to partition G into disjoint sub-graphs G[s_τ], where each sub-graph represents a distinct object. Here s_τ ⊆ V and τ denotes the object ID. Thus, every tracklet u ∈ s_τ is from the same object τ, and any two tracklets u ∈ s_τ and w ∈ s_τ' from two different sub-graphs are from different objects τ and τ'. For the graph partition problem, the global optimal solution cannot be easily guaranteed, but we can still define the feasible solutions as follows.
• Each sub-graph G[s_τ] should be a connected graph, i.e., for all τ and all u, w ∈ s_τ, there exists a path P in G[s_τ] such that u, w ∈ P.
• The cost on each edge inside a sub-graph should have a finite value, i.e., for all τ and all u, w ∈ s_τ, if there exists an edge e ∈ E between u and w, then p_e ≠ 0.
Objective Function. The objective function is defined to minimize the total clustering cost on all graph edges. We define π(u, w) ∈ {±1} as the clustering label for tracklets u and w. If u and w are partitioned into one sub-graph, π(u, w) is set to be +1; otherwise, π(u, w) is set to be −1.
The objective function is defined as follows,
O = min_{π∈{±1}} Σ_{u,w∈V, u∈N(w)} π(u, w) · c(u, w),    (2)
where N(w) represents the set of neighboring tracklets of w that share an edge with w in the graph.
Clustering. The graph partition is formulated as a clustering problem. However, the minimum-cost graph cut problem defined by Equation (2) is APX-hard [24]. Besides, the number of clusters is unknown in advance. In this work, we exploit a greedy search-based clustering method proposed by [34] to minimize the cost. Five clustering operations, i.e., assign, merge, split, switch, and break, are used. The advantage of adopting different types of clustering operations is to avoid getting stuck at local minima as much as possible during the optimization.
Proposed TrackletNet Tracker
Tracklet Generation with Epipolar Constraints
As defined in Section 3, a tracklet contains consecutively detected objects with bounding box information and appearance features of dimension d_ap. To simplify the generation of tracklets, we associate detections in adjacent frames based on IOU and appearance similarity with a high association threshold, so that mis-associations are kept as few as possible [41,35].
However, the association accuracy can still be affected by fast camera motion. For example, as shown in Figure 2(a)(b), the target detection in the t-th frame has a large IOU with another detection in the (t + 1)-th frame. As a result, the detection may easily get mis-associated.
This issue can be well addressed by epipolar geometry (EG) [8], i.e., x_t^T F x_{t+1} = 0 for any matched static feature point x in the two frames, where F is the fundamental matrix. First, if we assume the target is static or moves slowly, the four corner points x_{i,t} of the target bounding box in the t-th frame induce four epipolar lines in the (t + 1)-th frame, and the predicted target bounding box in the (t + 1)-th frame should intersect these epipolar lines as much as possible, as shown in Figure 2(c). Second, we also assume the size of the bounding box does not change much between adjacent frames; then the optimal predicted bounding box can be obtained, which is shown in red in Figure 2(d).
Following the above two assumptions, we can predict the target bounding box location in the (t + 1)-th frame by formulating an optimization problem. Define the four corner points of the target bounding box in the t-th frame as x_{i,t}, where i ∈ {1, 2, 3, 4}, as in the example shown in Figure 2(a). Similarly, we define x_{i,t+1}, i ∈ {1, 2, 3, 4}, as the corner points of the bounding box in the (t + 1)-th frame. Then we can define the cost function as follows,
f(x_{i,t+1}) = Σ_{i=1}^{4} ||x_{i,t}^T F x_{i,t+1}||_2^2 + ||(x_{3,t+1} − x_{1,t+1}) − (x_{3,t} − x_{1,t})||_2^2,    (3)
where the first term encourages the predicted bounding box to intersect the four corresponding epipolar lines as much as possible, while the second term is the target size constraint. One example of a predicted bounding box, shown in Figure 2(d), is well aligned with the true target in the (t + 1)-th frame. Then, in the detection association, IOU is calculated between the predicted bounding boxes and the detection bounding boxes in the (t + 1)-th frame. The fundamental matrix F can be estimated by the RANSAC [6] algorithm with matched SURF points [1] between two consecutive frames.
The optimization of the cost function in Equation (3) can be reformulated into a Least Square problem and solved efficiently.
Multi-Scale TrackletNet
To measure the connectivity between two tracklets, we aggregate different types of information, including temporal and appearance features via the designed multi-scale TrackletNet. The architecture of the proposed TrackletNet is shown in Figure 3.
For each frame t, a vector consisting of the bounding box parameters, i.e., (x_t, y_t, w_t, h_t), concatenated with an embedded appearance feature extracted from FaceNet [30], is used to represent an individual detection from a tracklet. Considering two tracklets that share an edge in the graph, we concatenate the embedded feature of each detection from these two tracklets inside a time window with a fixed size T. The feature space in the time window of the two tracklets is then (4 + d_ap) × T. For frames between the two target tracklets, we use a (4 + d_ap)-dimensional interpolated vector at each missing frame t. Besides, zero-padding is used for frames after the second tracklet. To better represent the time duration of the input tracklets, two binary masks are used as individual channels, each of dimension (4 + d_ap) × T, one per input tracklet. For each frame t, if the detection exists, we set the t-th column of the binary mask to the all-ones vector; otherwise, we set it to the all-zeros vector. As a result, the size of the input tensor of the TrackletNet is B × (4 + d_ap) × T × 3, where B is the batch size and 3 indicates the number of channels, one for the embedded feature space and the other two for the binary masks.
TrackletNet contains three convolution layers Conv1, Conv2, Conv3, one average pooling layer AvgPool, and two fully connected layers FC1, FC2. For each convolution layer, four different kernel sizes are used, i.e., 1 × 3, 1 × 5, 1 × 9, 1 × 13. Note that our convolution is only in the time domain, which can measure the continuity of each dimension of the feature. Different kernel sizes look for feature changes at different scales. The large kernels have the ability to measure the continuity of two tracklets even if they are far apart in the time domain, while small kernels can focus on appearance differences if the input tracklets are in small pieces. Each convolution is followed by one max pooling layer which down-samples by 2 in the time domain. After Conv3, we take the average pooling over the appearance feature dimensions. AvgPool plays the role of a weighted majority vote on the discontinuity of all appearance dimensions. Then we concatenate all features and use two fully connected layers for the final output. The output is defined as the similarity between the two input tracklets, which ranges from zero to one.
Figure 3. Architecture of the multi-scale TrackletNet. First, we extract embedded features from two input tracklets, which include 4D location features and 512D appearance features along the time window of 64 frames. The input tensor has three channels, i.e., one for tracklet embedded features and the other two for binary masks, where white represents 1 and black represents 0. Four types of 1D convolution kernels are applied for feature extraction in three convolution layers. For each convolution layer, max pooling is adopted for down-sampling in the time domain. Average pooling is conducted on the dimensions of the appearance feature after Conv3. Then two fully connected layers are applied to get the final output.
There are some important properties of the TrackletNet, which are listed as follows.
• TrackletNet focuses on the continuity of the embedded features along the time. Because of the independence among different feature dimensions, no convolution is conducted across the dimensions of the embedded features. In other words, the convolution kernels only capture the dependency along time.
• Binary masks of the input tensor play a role as the tracklet indicator, telling the temporal locations of the tracklets. They help the network learn if the discontinuity of two tracklets is caused by frames without detection or the abrupt changes of the tracklets.
• The network integrates object Re-ID, temporal and spatial dependency as one unified framework.
Experiments
Dataset
We use MOT16 and MOT17 [21] datasets to train and evaluate our tracking performance. For MOT16 dataset, there are 7 training video sequences and 7 testing video sequences. The benchmark also provides public deformable part models (DPM) [5] detections for both training and testing data. MOT17 has the same video sequences as MOT16 but provides more accurate ground truth in the evaluation. In addition to DPM, Faster-RCNN [25] and scale dependent pooling (SDP) [38] detections are also provided for evaluating the tracking performance. The number of trajectories in the training data is 546 and the number of total frames is 5, 316.
Implementation Details
Our proposed multi-scale TrackletNet is trained purely on the MOT dataset. The extracted appearance feature has 512 dimensions, i.e., d_ap = 512. The time window T is set to 64 and the batch size B is set to 32. We use the Adam optimizer with an initial learning rate of 10^{-3}. We decrease the learning rate by a factor of 10 every 2,000 steps until it reaches 10^{-5}. As mentioned above, the MOT dataset is quite small for training a complex neural network. However, the framework of our proposed TNT is carefully designed to avoid over-fitting. In addition, augmentation approaches are used for generating the training data, i.e., tracklets, as follows.
Bounding box randomization. Instead of using the ground truth bounding boxes for training, we randomly disturb the size and location of bounding boxes by a factor α sampled from the normal distribution N(0, 0.05^2). Since the detection results could be very noisy, this randomization ensures that the training and testing data are as similar as possible. For each embedded detection before the TrackletNet, the four parameters, i.e., (x, y, w, h), are normalized by the size of the frame image to ensure the input of the TrackletNet keeps the same scale across different datasets.
Tracklet generation. Here, we randomly divide the trajectory of each object into small pieces of tracklets as follows. For each frame, we sample a random number from the uniform distribution; if it is smaller than a threshold, we set this frame as a breaking frame. Then we split the entire trajectory into tracklets at the breaking frames.
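As a concrete illustration of the bounding box randomization described above, here is a small numpy sketch; the exact way the N(0, 0.05^2) factor perturbs each box parameter is an assumption made for illustration, not the authors' released code.

```python
import numpy as np

def randomize_box(box, img_w, img_h, std=0.05, rng=np.random):
    """Disturb a ground-truth box (x, y, w, h) and normalize it by the image size.

    A factor drawn from N(0, std^2) perturbs each parameter, mimicking noisy
    detections; the particular perturbation form below is an assumption.
    """
    x, y, w, h = box
    x += rng.normal(0.0, std) * w   # shift the center relative to the box size
    y += rng.normal(0.0, std) * h
    w *= 1.0 + rng.normal(0.0, std)
    h *= 1.0 + rng.normal(0.0, std)
    return np.array([x / img_w, y / img_h, w / img_w, h / img_h], dtype=np.float32)
```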
In the training stage, we randomly generate tracklets with the augmentations mentioned above. For each training sample, two tracklets are randomly selected as the input if they satisfy the edge condition defined in the graph model in Section 3.1. If they are from the same object, the training label is set to 1; otherwise, 0 is assigned as the label. To avoid bias, positive and negative pairs are sampled equally.
Feature Map Visualization
To better understand the effectiveness of our proposed TrackletNet, we also plot two examples of feature maps as shown in Figure 4. For each column (a) and (b) in Figure 4, the top figure shows the spatial locations of the two input tracklets in the 64-frame time window. Blue and green colors represent two tracklets respectively. The bottom figure shows the corresponding feature map in the time-channel plane after the max pooling of Conv3 with kernel size 5. The horizontal axis represents the time domain which aligns with the figures in the top row, while the vertical axis represents different channels in the feature map. For the example shown in (a), most higher values of the feature map are on the left side since the connection between the two tracklets is on the left part of the time window. As for (b), higher values in the feature map are on the middle side of the time window, which also matches the situation of the two input tracklets. From the feature map, we can see that the connection part of the input tracklets has strong activation, which is critical for the connectivity measurement.
Tracking Performance
Quantitative results on MOT16 and MOT17 datasets. We also provide our quantitative results on the MOT16 and MOT17 benchmark datasets compared with other state-of-the-art methods, which are shown in Table 1 and Table 2. Note that we use IDF1 [26] and MOTA as the major factors to evaluate the reliability of a tracker. As mentioned in [26], the MOTA metric has several weaknesses; in particular, it is very sensitive to the detection threshold. Instead, the IDF1 score matches ground-truth trajectories and computed trajectories by a bipartite graph, which reflects how long an object has been correctly tracked. We can see that our IDF1 score is much higher than those of other state-of-the-art methods. For the other metrics shown in the tables, we are also among the top rankings.
Qualitative results for different scenarios. With the trained model on the MOT dataset, we also test our proposed tracker on other scenarios without any fine-tuning. Promising results are also achieved. Figure 5 shows some qualitative tracking results using our proposed tracker on other applications, like 3D pose estimation and UAV applications.
Ablation Study
Occlusion Handling. Occlusion is one of the major challenges in MOT. Our framework can easily handle both partial and full occlusions even over a very long time range. When a person is occluded, the detection as well as the appearance features are unreliable. During the generation of the tracklets, when we detect a large change in appearance, we simply stop the detection association even if a detection result is available. After several or tens of frames, when the same person reappears from the occlusion, a new tracklet will be assigned to the person. Then the connectivity between these two tracklets will be measured to decide whether they are the same person. Once they are confirmed to have the same ID, we can easily fill in the missing detections with linear interpolation. Figure 6 shows qualitative results for handling occlusions. The first row of Figure 6 is from the MOT17-08 sequence. At frame 566, the person with a red bounding box is fully occluded by a statue, but it can be correctly tracked after it appears again at frame 604. The second row is an example from the MOT17-01 sequence: the person with the red bounding box goes across five other pedestrians, but the IDs of all targets stay consistent.

Effectiveness of Epipolar Geometry. To evaluate the effect of epipolar geometry in tracklet generation, we run detection association on MOT17-10 and MOT17-13 with the Faster-RCNN detector because these two sequences have large camera motion. Table 3 shows the results with/without epipolar geometry. Two types of error rates are evaluated, i.e., false discovery rate (FDR) and false negative rate (FNR), which are defined as follows:
FDR = FP / (TP + FP),   FNR = FN / (TP + FN),   (4)
where TP, FP and FN represent true positive, false positive and false negative, respectively. From Table 3, we can see that FDR is quite small in both cases, which means only a small portion of incorrect associations is involved in the tracklet generation. It shows the effectiveness of our tracklet-based graph model. On the other hand, FNR largely drops with epipolar geometry adopted, especially for the MOT17-13 sequence, which reflects the effectiveness of the proposed tracklet generation strategy.
Robustness to Appearance Features. Another major advantage of our TrackletNet is the ability to avoid overfitting when learning appearance features. Different from [20], our TrackletNet is trained only on the MOT dataset without using additional tracking datasets, but it still achieves very good performance. This is because the appearance feature dimensions are treated independently when training the network, with convolutions conducted only in the time domain. As a result, the complexity of the network is largely reduced, which also decreases the effect of overfitting.
To test the model robustness to appearance features, we disturb the appearance features with Gaussian noise on the MOT17-02 sequence. The compared baseline method uses the Bhattacharyya distance of the appearance features between the input pair of tracklets as the edge cost in the graph, which is commonly used in person Re-ID tasks. The comparison results are shown in Table 4 with Gaussian noise using different standard deviations (Std). From the table, we can see that the baseline method degrades significantly as the noise level increases, while the tracking performance of TNT is not affected much. This is because TNT measures the temporal continuity of features as the similarity rather than using the feature distance itself, which can largely suppress unreliable detections or noise in tracking.
Conclusion and Future Work
In this paper, we propose a novel multi-object tracking method TNT based on a tracklet graph model, including tracklet vertex generation with epipolar geometry and connectivity edge measurement by a multi-scale Tracklet-Net. Our TNT outperforms other state-of-the-art methods on MOT16 and MOT17 benchmarks. We also show some qualitative results on different scenarios and applications using TNT. Robustness of TNT is further discussed with handling occlusions.
However, fast camera motion is still a challenge in 2D tracking. In our future work, we are going to convert 2D tracking to 3D tracking with the help of visual odometry. Once the 3D location of the object in the world coordinate can be estimated, the trajectory of the object should be much smoother than the 2D case.
| 4,120 |
1811.07258
|
2901012461
|
Multi-object tracking (MOT) is an important and practical task related to both surveillance systems and moving camera applications, such as autonomous driving and robotic vision. However, due to unreliable detection, occlusion and fast camera motion, tracked targets can be easily lost, which makes MOT very challenging. Most recent works treat tracking as a re-identification (Re-ID) task, but how to combine appearance and temporal features is still not well addressed. In this paper, we propose an innovative and effective tracking method called TrackletNet Tracker (TNT) that combines temporal and appearance information together as a unified framework. First, we define a graph model which treats each tracklet as a vertex. The tracklets are generated by appearance similarity with CNN features and intersection-over-union (IOU) with epipolar constraints to compensate camera movement between adjacent frames. Then, for every pair of two tracklets, the similarity is measured by our designed multi-scale TrackletNet. Afterwards, the tracklets are clustered into groups which represent individual object IDs. Our proposed TNT has the ability to handle most of the challenges in MOT, and achieve promising results on MOT16 and MOT17 benchmark datasets compared with other state-of-the-art methods.
|
Another category of tracking is based on end-to-end frameworks @cite_37 @cite_16 @cite_26 , where we input raw video sequences and output object trajectories. In other words, the detection and tracking are trained jointly in a single-stage network. One major advantage of this framework is that errors will not be accumulated from detection to tracking. The temporal information across frames can help improve the detection performance, while reliable detections can also feed back to produce reliable tracking. However, such a framework requires a lot of training data. Without enough training data, overfitting becomes a severe problem. Unlike detection-based training, tracking annotations for video sequences are usually hard to get, which becomes the major limitation of the end-to-end tracking framework.
|
{
"abstract": [
"Recent approaches for high accuracy detection and tracking of object categories in video consist of complex multistage solutions that become more cumbersome each year. In this paper we propose a ConvNet architecture that jointly performs detection and tracking, solving the task in a simple and effective way. Our contributions are threefold: (i) we set up a ConvNet architecture for simultaneous detection and tracking, using a multi-task objective for frame-based object detection and across-frame track regression; (ii) we introduce correlation features that represent object co-occurrences across time to aid the ConvNet during tracking; and (iii) we link the frame level detections based on our across-frame tracklets to produce high accuracy detections at the video level. Our ConvNet architecture for spatiotemporal object detection is evaluated on the large-scale ImageNet VID dataset where it achieves state-of-the-art results. Our approach provides better single model performance than the winning method of the last ImageNet challenge while being conceptually much simpler. Finally, we show that by increasing the temporal stride we can dramatically increase the tracker speed.",
"The state-of-the-art performance for object detection has been significantly improved over the past two years. Besides the introduction of powerful deep neural networks, such as GoogleNet and VGG, novel object detection frameworks, such as R-CNN and its successors, Fast R-CNN, and Faster R-CNN, play an essential role in improving the state of the art. Despite their effectiveness on still images, those frameworks are not specifically designed for object detection from videos. Temporal and contextual information of videos are not fully investigated and utilized. In this paper, we propose a deep learning framework that incorporates temporal and contextual information from tubelets obtained in videos, which dramatically improves the baseline performance of existing still-image detection frameworks when they are applied to videos. It is called T-CNN, i.e., tubelets with convolutional neueral networks. The proposed framework won newly introduced an object-detection-from-video task with provided data in the ImageNet Large-Scale Visual Recognition Challenge 2015. Code is publicly available at https: github.com myfavouritekk T-CNN .",
""
],
"cite_N": [
"@cite_37",
"@cite_16",
"@cite_26"
],
"mid": [
"2756784878",
"2336589871",
""
]
}
|
Exploit the Connectivity: Multi-Object Tracking with TrackletNet
|
Multi-object tracking is an important topic in the computer vision and machine learning fields. This technique can be used in many tasks, such as traffic flow counting from surveillance cameras, human behavior prediction and autonomous driving assistance. However, due to noisy detections and occlusions, tracking multiple objects over a long time range is very challenging. To address such problems, many methods follow the tracking-by-detection framework, i.e., tracking is applied as an association approach given the detection results. Built upon the tracking-by-detection framework, multiple cues can be combined together into the tracking scheme. 1) Appearance feature of each detected object [27,41,33,37]. With well-embedded appearance features, the features should be similar if they are from the same object, while they can be very different if they are from distinct objects. 2) Temporal relation for locations among frames in a trajectory [22]. With slow motion and a high camera frame rate, we can assume that the trajectories of objects are smooth in the time domain. 3) Interaction cue among different target objects, which considers the relationship among neighboring targets [28]. As a result, we should take all these cues into account as an optimization problem.

Figure 1. Our TNT framework for multi-object tracking. Given the detections in different frames, detection association is computed to generate tracklets for the vertex set V. After that, every two tracklets are put into a novel TrackletNet to measure their connectivity, which forms the similarity on the edge set E. A graph model G can be derived from V and E. Finally, the tracklets with the same ID are grouped into one cluster using the graph partition approach.
In this paper, the proposed TrackletNet Tracker (TNT) takes advantages of the above useful cues together into a unified framework based on an undirected graph model [23]. Each vertex in our graph model represents one tracklet and the edge between two vertices measures the connectivity of two tracklets. Here, the tracklet is defined as a small piece of consecutive detections of an object. Due to the unreliable detections and occlusions, the entire trajectory of an object may be divided into several distinct tracklets. Given the graph representation, tracking can be regarded as a clustering approach that groups the tracklets into one big cluster.
To generate the tracklets, i.e., vertices of the graph, we associate detections among consecutive frames based on intersection-over-union (IOU) and the similarity of appearance features. However, the IOU criterion becomes unreliable because the position of detection may shift a lot when camera is moving or revolving. In such situation, epipolar geometry is adopted to compensate camera movement and predict the position of bounding boxes in the next frame. To estimate the connectivity on the edge of the graph between two vertices, the TrackletNet is designed for measuring the continuity of two input tracklets, which combines both trajectory and appearance information. The flowchart of our tracking method TNT is shown in Figure 1.
Specifically, we propose the following contributions: 1) We build a graph-based model that takes tracklets, instead of detected objects, as the vertices, to better utilize the temporal information and greatly reduce the computational complexity.
2) To the best of our knowledge, this is the first work to adopt epipolar geometry in tracklet generation to compensate camera movement.
3) A CNN architecture, called multi-scale TrackletNet, is designed to measure the connectivity between two tracklets. This network combines trajectory and appearance information into a unified system. 4) Our model outperforms state-of-the-art methods in multi-object tracking on both the MOT16 and MOT17 benchmarks, and it can also be easily applied to other scenarios.
Tracklet Graph Model
We use tracklets as the vertices in our graph model. Unlike detection-based graph models, which are computationally expensive and do not utilize temporal information well, we propose a tracklet-based graph model, which treats the tracklet as the vertex and measures the similarity between tracklets. From a tracklet, we can infer the object's moving trajectory over a longer time, and we can also measure how the embedded features of the detections change over time. Moreover, the number of tracklets is much smaller than the number of detections, which makes the optimization more efficient.
In the following section, we will discuss in detail about the model parameters and optimization by tracklet clustering.
Graph Definition
The graph G(V, E) is defined by a vertex set V and an edge set E.

Vertex Set. A finite set V in which every element v ∈ V represents a tracklet of one object across multiple frames, i.e., a set of consecutive detections of the same object along time. For each detection, we define the bounding box with five parameters, i.e., the center of the bounding box (x_t, y_t), the width and height (w_t, h_t), and the frame index t. Besides the bounding box of the detection, we also extract an appearance feature [30] for each detected object at frame t. Note that, because of unreliable detections, an entire trajectory of an object may be divided into multiple pieces of tracklets. The tracklet generation is explained in detail in Section 4.1.
Edge Set. A finite set E in which every element e ∈ E represents an edge between two tracklets u, w ∈ V that are not far away in the time domain, i.e., min_{t_u ∈ T(u), t_w ∈ T(w)} |t_u − t_w| ≤ δ_t, where T(u) is the set of frame indices of the tracklet u. For tracklets that are far away, no edge is considered between them since not enough information can be utilized for measuring their relationship.
A connectivity measure p_e represents the similarity of the two tracklets connected by the edge e ∈ E. The edge cost is defined as

c = log((1 − p_e) / p_e).   (1)
Moreover, the connectivity is defined to be 0 if two tracklets have overlap in the time domain since they must belong to distinct objects. This is because an object cannot appear in two tracklets at the same time. The connectivity is measured by our designed TrackletNet, which will be introduced in Section 4.2.
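A minimal Python sketch of the edge cost in Equation (1), including the rule that temporally overlapping tracklets get connectivity 0; the function name and the numerical clipping constant eps are assumptions added for illustration.

```python
import numpy as np

def edge_cost(p_e, frames_u, frames_v, eps=1e-8):
    """Edge cost c = log((1 - p_e) / p_e) from Equation (1), with the overlap rule.

    frames_u, frames_v: sets of frame indices covered by the two tracklets.
    If the tracklets overlap in time, their connectivity is defined to be 0,
    which makes the cost infinite (they can never be clustered together).
    """
    if frames_u & frames_v:          # temporal overlap -> distinct objects
        return np.inf
    p_e = float(np.clip(p_e, eps, 1.0 - eps))   # eps only for numerical stability
    return float(np.log((1.0 - p_e) / p_e))
```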
Tracklet Clustering
After the tracklet graph is built, we acquire the object trajectories by clustering the graph into different sub-graphs. The tracklets in each sub-graph can represent the same object. We will explain some details of our tracklet clustering in the following paragraphs.
Feasible Solutions. Given a tracklet graph G(V, E), we hope to partition G into disjoint sub-graphs G[s_τ], where each sub-graph represents a distinct object. Here, each s_τ ⊆ V and τ represents the object ID. Thus, every tracklet u ∈ s_τ is from the same object τ, and any two tracklets u ∈ s_τ, w ∈ s_τ′ from two different sub-graphs are from different objects τ and τ′. For the graph partition problem, the global optimal solution cannot be easily guaranteed. But we can still define the feasible solutions as follows.
• Each sub-graph G[s τ ] should be a connected graph, i.e., ∀τ , ∀u, w ∈ s τ , ∃P ∈ G[s τ ], s.t., u, w ∈ P , where P is a path inside G[s τ ].
• The cost on the edges inside each sub-graph should have a finite value, i.e., ∀τ, ∀u, w ∈ s_τ, if there exists an edge e ∈ E between u and w, then p_e ≠ 0.
Objective Function. The objective function is defined to minimize the total clustering cost on all graph edges. We define π(u, w) ∈ {±1} as the clustering label for tracklets u and w. If u and w are partitioned into one sub-graph, π(u, w) is set to be +1; otherwise, π(u, w) is set to be −1.
The objective function is defined as follows,
O = min_{π ∈ {±1}} Σ_{u,w ∈ V, u ∈ N(w)} π(u, w) · c(u, w),   (2)
where N (w) represents the set of neighboring tracklets of w with edge shared in the graph.
Clustering. The graph partition is formulated as a clustering problem. However, the minimum cost of graph cut problem defined by Equation (2) is APX-hard [24]. Besides, the number of clusters is unknown in advance. In this work, we exploit a greedy search-based clustering method proposed by [34] to minimize the cost. Five clustering operations, i.e., assign, merge, split, switch, and break, are used. The advantage of adopting different types of clustering operation is to avoid being stuck at the local minimum as much as possible in the optimization.
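The sketch below shows how the objective of Equation (2) could be evaluated for one candidate partition; the data structures (an edge-cost dictionary and a cluster-label dictionary) are assumptions, and the greedy assign/merge/split/switch/break moves of [34] are only described in the comments, not implemented.

```python
def partition_cost(edges, labels):
    """Evaluate the clustering objective of Equation (2) for a candidate partition.

    edges:  dict mapping a tracklet pair (u, w) to its edge cost c(u, w).
    labels: dict mapping each tracklet id to its cluster (object) id.
    pi(u, w) is +1 if u and w share a cluster, -1 otherwise; a greedy search as in
    [34] would repeatedly try assign/merge/split/switch/break moves and keep any
    move that lowers this total cost.
    """
    total = 0.0
    for (u, w), c in edges.items():
        pi = 1.0 if labels[u] == labels[w] else -1.0
        total += pi * c
    return total
```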
Proposed TrackletNet Tracker
Tracklet Generation with Epipolar Constraints
As defined in Section 3, a tracklet contains consecutively detected objects with bounding box information and appearance features of dimension d_ap. To simplify the generation of tracklets, we associate two consecutive detections based on IOU and appearance similarity in adjacent frames with a high association threshold, to keep mis-associations as few as possible [41,35].
However, the association accuracy can still be affected by the fast motion of the camera. For example, as shown in the Figure 2(a)(b), the target detection in the t-th frame has a large IOU with another detection in the (t + 1)-th frame. As a result, the detection may easily get mis-associated.
This issue can be well solved by epipolar geometry (EG) [8], i.e., x_t^T F x_{t+1} = 0 for any matched static feature point x in two frames, where F is the fundamental matrix. First, if we assume the target is static or has slow motion, then the four corner points x_{i,t} of the target detection bounding box in the t-th frame should lie on the corresponding epipolar lines in the (t + 1)-th frame, i.e., the predicted target bounding box in the (t + 1)-th frame should intersect the four epipolar lines as much as possible, as shown in Figure 2(c). Second, we also assume the size of the bounding box does not change much between adjacent frames; then the optimal predicted bounding box can be obtained, which is shown in red in Figure 2(d).
Following the above two assumptions, we can predict the target bounding box location in the (t + 1)-th frame by formulating an optimization problem. Define the four corner points of the target bounding box in the t-th frame as x_{i,t}, where i ∈ {1, 2, 3, 4}, as in the example shown in Figure 2(a). Similarly, we define x_{i,t+1}, i ∈ {1, 2, 3, 4}, for the bounding box in the (t + 1)-th frame. Then we can define the cost function as follows,
f(x_{i,t+1}) = Σ_{i=1}^{4} ||x_{i,t}^T F x_{i,t+1}||_2^2 + ||(x_{3,t+1} − x_{1,t+1}) − (x_{3,t} − x_{1,t})||_2^2,   (3)
where the first term guarantees the predicted bounding box should intersect with four corresponding epipolar lines as much as possible, while the second term is the target size constraint. One example of predicted bounding box, as shown in Figure 2(d), is well aligned with the true target in (t + 1)-th frame. Then, in the detection association, IOU is calculated between predicted bounding boxes and detection bounding boxes in the (t + 1)-th frame. Fundamental matrix F can be estimated by the RANSAC [6] algorithm with matched SURF points [1] between two consecutive frames.
The optimization of the cost function in Equation (3) can be reformulated as a least-squares problem and solved efficiently.
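Below is a hedged numpy sketch of this least-squares solve for Equation (3); it assumes the predicted box is parameterized as an axis-aligned (x_min, y_min, x_max, y_max) rectangle and that the epipolar and size terms are equally weighted, which may differ from the exact formulation used by the authors.

```python
import numpy as np

def predict_box(box_t, F):
    """Predict an axis-aligned box in frame t+1 from the box in frame t and the
    fundamental matrix F (frame t -> frame t+1) by least squares on Equation (3).

    box_t: (x_min, y_min, x_max, y_max). The axis-aligned parameterization and the
    equal weighting of the epipolar and size terms are assumptions for this sketch;
    F would come from RANSAC on matched SURF points between the two frames.
    """
    x1, y1, x2, y2 = box_t
    corners = np.array([[x1, y1, 1.0], [x2, y1, 1.0],
                        [x1, y2, 1.0], [x2, y2, 1.0]])
    lines = corners @ F          # epipolar line (a, b, c) of each corner in frame t+1
    A, rhs = [], []
    # Each predicted corner should lie on its epipolar line a*u + b*v + c = 0.
    corner_idx = [(0, 1), (2, 1), (0, 3), (2, 3)]   # (u, v) positions in the unknowns
    for (a_c, b_c, c_c), (iu, iv) in zip(lines, corner_idx):
        row = np.zeros(4)
        row[iu], row[iv] = a_c, b_c
        A.append(row); rhs.append(-c_c)
    # The box size should stay close to that of frame t.
    A.append([-1.0, 0.0, 1.0, 0.0]); rhs.append(x2 - x1)
    A.append([0.0, -1.0, 0.0, 1.0]); rhs.append(y2 - y1)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(rhs), rcond=None)
    return p   # predicted (x_min, y_min, x_max, y_max) in frame t+1
```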
Multi-Scale TrackletNet
To measure the connectivity between two tracklets, we aggregate different types of information, including temporal and appearance features via the designed multi-scale TrackletNet. The architecture of the proposed TrackletNet is shown in Figure 3.
For each frame t, a vector consisting of the bounding box parameters, i.e., (x_t, y_t, w_t, h_t), concatenated with an embedded appearance feature extracted from FaceNet [30], is used to represent an individual detection from a tracklet. For two tracklets that share an edge in the graph, we concatenate the embedded feature of each detection from these two tracklets inside a time window of a fixed size T. The feature space in the time window of the two tracklets is then (4 + d_ap) × T. For frames between the two target tracklets, we instead use a (4 + d_ap)-dimensional interpolated vector at each missing frame t. Zero-padding is used for frames after the second tracklet. To better represent the time duration of the input tracklets, two binary masks are used as individual channels of dimension (4 + d_ap) × T, one for each input tracklet. For each frame t, if the detection exists, we set the t-th column of the binary mask to an all-ones vector; otherwise we set it to an all-zeros vector. As a result, the size of the input tensor of the TrackletNet is B × (4 + d_ap) × T × 3, where B is the batch size and 3 indicates the number of channels, one for the embedded feature space and the other two for the binary masks.
TrackletNet contains three convolution layers Conv1, Conv2, Conv3, one average pooling layer AvgPool, and two fully connected layers FC1, FC2. For each convolution layer, four different kernel sizes are used, i.e., 1 × 3, 1 × 5, 1 × 9, 1 × 13. Note that our convolution is only in the time domain, which can measure the continuity of each dimension of the feature. Kernels of different sizes look for feature changes at different scales. The large kernels have the ability to measure the continuity of two tracklets even if they are far away in the time domain, while the small kernels can focus on appearance differences if the input tracklets are in small pieces. Each convolution is followed by one max pooling layer which down-samples by 2 in the time domain. After Conv3, we take the average pooling over the appearance feature dimensions. AvgPool plays the role of a weighted majority vote on the discontinuity over all appearance dimensions. Then we concatenate all features and use two fully connected layers for the final output. The output is defined as the similarity between the two input tracklets, which ranges from zero to one.

Figure 3. Architecture of Multi-scale TrackletNet. First, we extract embedded features from two input tracklets, which include 4D location features and 512D appearance features along the time window of 64 frames. The input tensor has three channels, i.e., one for the tracklet embedded features and the other two for binary masks, where white color represents 1 and black color represents 0. Four types of 1D convolution kernels are applied for feature extraction in three convolution layers. For each convolution layer, max pooling is adopted for down-sampling in the time domain. Average pooling is conducted on the dimensions of the appearance feature after Conv3. Then two fully connected layers are conducted to get the final output.
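The following PyTorch sketch illustrates one multi-scale temporal convolution block in the spirit of Conv1-Conv3; the channel counts, padding, activation and the concatenation of the four kernel branches are assumptions for illustration, since the text only specifies the kernel widths and the time-only convolution.

```python
import torch
import torch.nn as nn

class MultiScaleTemporalConv(nn.Module):
    """One multi-scale convolution block in the spirit of Conv1-Conv3 (a sketch).

    Convolutions run only along the time axis (kernel height 1), with the four
    kernel widths 3, 5, 9, 13 mentioned in the text; how the four branches are
    merged and the per-branch channel count are assumptions.
    """
    def __init__(self, in_ch, out_ch_per_scale=16, kernel_widths=(3, 5, 9, 13)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch_per_scale, kernel_size=(1, k), padding=(0, k // 2))
            for k in kernel_widths
        ])
        self.pool = nn.MaxPool2d(kernel_size=(1, 2))   # down-sample by 2 in time

    def forward(self, x):
        # x: (B, C, 4 + d_ap, T); each branch looks at a different temporal scale.
        out = torch.cat([torch.relu(branch(x)) for branch in self.branches], dim=1)
        return self.pool(out)

# Example input: a batch of 32 tracklet-pair tensors with 3 channels,
# 4 + 512 feature rows and a 64-frame time window.
block = MultiScaleTemporalConv(in_ch=3)
y = block(torch.randn(32, 3, 4 + 512, 64))   # -> (32, 64, 516, 32)
```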
There are some important properties of the TrackletNet, which are listed as follows.
• TrackletNet focuses on the continuity of the embedded features along the time. Because of the independence among different feature dimensions, no convolution is conducted across the dimensions of the embedded features. In other words, the convolution kernels only capture the dependency along time.
• Binary masks of the input tensor play a role as the tracklet indicator, telling the temporal locations of the tracklets. They help the network learn if the discontinuity of two tracklets is caused by frames without detection or the abrupt changes of the tracklets.
• The network integrates object Re-ID, temporal and spatial dependency as one unified framework.
Experiments
Dataset
We use MOT16 and MOT17 [21] datasets to train and evaluate our tracking performance. For MOT16 dataset, there are 7 training video sequences and 7 testing video sequences. The benchmark also provides public deformable part models (DPM) [5] detections for both training and testing data. MOT17 has the same video sequences as MOT16 but provides more accurate ground truth in the evaluation. In addition to DPM, Faster-RCNN [25] and scale dependent pooling (SDP) [38] detections are also provided for evaluating the tracking performance. The number of trajectories in the training data is 546 and the number of total frames is 5, 316.
Implementation Details
Our proposed multi-scale TrackletNet is trained purely on the MOT dataset. The extracted appearance feature has 512 dimensions, i.e., d_ap = 512. The time window T is set to 64 and the batch size B is set to 32. We use the Adam optimizer with an initial learning rate of 10^{-3}. We decrease the learning rate by a factor of 10 every 2,000 steps until it reaches 10^{-5}. As mentioned above, the MOT dataset is quite small for training a complex neural network. However, the framework of our proposed TNT is carefully designed to avoid over-fitting. In addition, augmentation approaches are used for generating the training data, i.e., tracklets, as follows.
Bounding box randomization. Instead of using the ground truth bounding boxes for training, we randomly disturb the size and location of bounding boxes by a factor α sampled from the normal distribution N(0, 0.05^2). Since the detection results could be very noisy, this randomization ensures that the training and testing data are as similar as possible. For each embedded detection before the TrackletNet, the four parameters, i.e., (x, y, w, h), are normalized by the size of the frame image to ensure the input of the TrackletNet keeps the same scale across different datasets.
Tracklet generation. Here, we randomly divide the trajectory of each object into small pieces of tracklets as follows. For each frame, we sample a random number from the uniform distribution; if it is smaller than a threshold, we set this frame as a breaking frame. Then we split the entire trajectory into tracklets at the breaking frames.
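A small Python sketch of this splitting procedure; the breaking probability p_break is an assumed placeholder value, since the text does not specify the threshold.

```python
import numpy as np

def split_trajectory(frames, p_break=0.1, rng=np.random):
    """Split a sorted list of frame indices into tracklets at random breaking frames.

    Each frame becomes a breaking point with probability p_break (the threshold on
    a uniform sample); p_break = 0.1 is only an illustrative value.
    """
    tracklets, current = [], [frames[0]]
    for t in frames[1:]:
        if rng.uniform() < p_break:
            tracklets.append(current)
            current = []
        current.append(t)
    tracklets.append(current)
    return tracklets
```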
In the training stage, we randomly generate tracklets with the augmentations mentioned above. For each training sample, two tracklets are randomly selected as the input if they satisfy the edge condition defined in the graph model in Section 3.1. If they are from the same object, the training label is set to 1; otherwise, 0 is assigned as the label. To avoid bias, positive and negative pairs are sampled equally.
Feature Map Visualization
To better understand the effectiveness of our proposed TrackletNet, we also plot two examples of feature maps as shown in Figure 4. For each column (a) and (b) in Figure 4, the top figure shows the spatial locations of the two input tracklets in the 64-frame time window. Blue and green colors represent two tracklets respectively. The bottom figure shows the corresponding feature map in the time-channel plane after the max pooling of Conv3 with kernel size 5. The horizontal axis represents the time domain which aligns with the figures in the top row, while the vertical axis represents different channels in the feature map. For the example shown in (a), most higher values of the feature map are on the left side since the connection between the two tracklets is on the left part of the time window. As for (b), higher values in the feature map are on the middle side of the time window, which also matches the situation of the two input tracklets. From the feature map, we can see that the connection part of the input tracklets has strong activation, which is critical for the connectivity measurement.
Tracking Performance
Quantitative results on MOT16 and MOT17 datasets. We also provide our quantitative results on the MOT16 and MOT17 benchmark datasets compared with other state-of-the-art methods, which are shown in Table 1 and Table 2. Note that we use IDF1 [26] and MOTA as the major factors to evaluate the reliability of a tracker. As mentioned in [26], the MOTA metric has several weaknesses; in particular, it is very sensitive to the detection threshold. Instead, the IDF1 score matches ground-truth trajectories and computed trajectories by a bipartite graph, which reflects how long an object has been correctly tracked. We can see that our IDF1 score is much higher than those of other state-of-the-art methods. For the other metrics shown in the tables, we are also among the top rankings.
Qualitative results for different scenarios. With the trained model on the MOT dataset, we also test our proposed tracker on other scenarios without any fine-tuning. Promising results are also achieved. Figure 5 shows some qualitative tracking results using our proposed tracker on other applications, like 3D pose estimation and UAV applications.
Ablation Study
Occlusion Handling. Occlusion is one of the major challenges in MOT. Our framework can easily handle both partial and full occlusions even over a very long time range. When a person is occluded, the detection as well as the appearance features are unreliable. During the generation of the tracklets, when we detect a large change in appearance, we simply stop the detection association even if a detection result is available. After several or tens of frames, when the same person reappears from the occlusion, a new tracklet will be assigned to the person. Then the connectivity between these two tracklets will be measured to decide whether they are the same person. Once they are confirmed to have the same ID, we can easily fill in the missing detections with linear interpolation. Figure 6 shows qualitative results for handling occlusions. The first row of Figure 6 is from the MOT17-08 sequence. At frame 566, the person with a red bounding box is fully occluded by a statue, but it can be correctly tracked after it appears again at frame 604. The second row is an example from the MOT17-01 sequence: the person with the red bounding box goes across five other pedestrians, but the IDs of all targets stay consistent.

Effectiveness of Epipolar Geometry. To evaluate the effect of epipolar geometry in tracklet generation, we run detection association on MOT17-10 and MOT17-13 with the Faster-RCNN detector because these two sequences have large camera motion. Table 3 shows the results with/without epipolar geometry. Two types of error rates are evaluated, i.e., false discovery rate (FDR) and false negative rate (FNR), which are defined as follows:
FDR = FP / (TP + FP),   FNR = FN / (TP + FN),   (4)
where TP, FP and FN represent true positive, false positive and false negative, respectively. From Table 3, we can see that FDR is quite small in both cases, which means only a small portion of incorrect associations is involved in the tracklet generation. It shows the effectiveness of our tracklet-based graph model. On the other hand, FNR largely drops with epipolar geometry adopted, especially for the MOT17-13 sequence, which reflects the effectiveness of the proposed tracklet generation strategy.
Robustness to Appearance Features. Another major advantage of our TrackletNet is the ability to avoid overfitting when learning appearance features. Different from [20], our TrackletNet is trained only on the MOT dataset without using additional tracking datasets, but it still achieves very good performance. This is because the appearance feature dimensions are treated independently when training the network, with convolutions conducted only in the time domain. As a result, the complexity of the network is largely reduced, which also decreases the effect of overfitting.
To test the model robustness to appearance features, we disturb the appearance features with Gaussian noise on the MOT17-02 sequence. The compared baseline method uses the Bhattacharyya distance of the appearance features between the input pair of tracklets as the edge cost in the graph, which is commonly used in person Re-ID tasks. The comparison results are shown in Table 4 with Gaussian noise using different standard deviations (Std). From the table, we can see that the baseline method degrades significantly as the noise level increases, while the tracking performance of TNT is not affected much. This is because TNT measures the temporal continuity of features as the similarity rather than using the feature distance itself, which can largely suppress unreliable detections or noise in tracking.
Conclusion and Future Work
In this paper, we propose a novel multi-object tracking method TNT based on a tracklet graph model, including tracklet vertex generation with epipolar geometry and connectivity edge measurement by a multi-scale Tracklet-Net. Our TNT outperforms other state-of-the-art methods on MOT16 and MOT17 benchmarks. We also show some qualitative results on different scenarios and applications using TNT. Robustness of TNT is further discussed with handling occlusions.
However, fast camera motion is still a challenge in 2D tracking. In our future work, we are going to convert 2D tracking to 3D tracking with the help of visual odometry. Once the 3D location of the object in the world coordinate can be estimated, the trajectory of the object should be much smoother than the 2D case.
| 4,120 |
1811.07192
|
2901632551
|
Approximate inference algorithm is one of the fundamental research fields in machine learning. The two dominant theoretical inference frameworks in machine learning are variational inference (VI) and Markov chain Monte Carlo (MCMC). However, because of the fundamental limitation in the theory, it is very challenging to improve existing VI and MCMC methods on both the computational scalability and statistical efficiency. To overcome this obstacle, we propose a new theoretical inference framework called ergodic Inference based on the fundamental property of ergodic transformations. The key contribution of this work is to establish the theoretical foundation of ergodic inference for the development of practical algorithms in future work.
|
pmlr-v70-hoffman17a proposed another hybrid method based on VI and HMC without auxiliary approximation. The idea is to use a Monte Carlo estimation of the marginal likelihood by averaging over samples from HMC chains that are initialized by the variational distribution. In , a very similar framework is proposed using Metropolis-adjusted Langevin dynamics. This idea is very similar to contrastive divergence in @cite_2 . The main disadvantage of these methods is that the HMC parameters are manually pretuned. In particular, as mentioned by , the No-U-turn Sampler (NUTS), an adaptive HMC, is not applicable due to engineering difficulties. @cite_6 pointed out that HMC is very sensitive to the choice of the leapfrog step size and the number of leaps.
|
{
"abstract": [
"Hamiltonian dynamics can be used to produce distant proposals for the Metropolis algorithm, thereby avoiding the slow exploration of the state space that results from the diffusive behaviour of simple random-walk proposals. Though originating in physics, Hamiltonian dynamics can be applied to most problems with continuous state spaces by simply introducing fictitious \"momentum\" variables. A key to its usefulness is that Hamiltonian dynamics preserves volume, and its trajectories can thus be used to define complex mappings without the need to account for a hard-to-compute Jacobian factor - a property that can be exactly maintained even when the dynamics is approximated by discretizing time. In this review, I discuss theoretical and practical aspects of Hamiltonian Monte Carlo, and present some of its variations, including using windows of states for deciding on acceptance or rejection, computing trajectories using fast approximations, tempering during the course of a trajectory to handle isolated modes, and short-cut methods that prevent useless trajectories from taking much computation time.",
"It is possible to combine multiple latent-variable models of the same data by multiplying their probability distributions together and then renormalizing. This way of combining individual \"expert\" models makes it hard to generate samples from the combined model but easy to infer the values of the latent variables of each expert, because the combination rule ensures that the latent variables of different experts are conditionally independent when given the data. A product of experts (PoE) is therefore an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary. Training a PoE by maximizing the likelihood of the data is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule. Fortunately, a PoE can be trained using a different objective function called \"contrastive divergence\" whose derivatives with regard to the parameters can be approximated accurately and efficiently. Examples are presented of contrastive divergence learning using several types of expert on several types of data."
],
"cite_N": [
"@cite_6",
"@cite_2"
],
"mid": [
"1840847274",
"2116064496"
]
}
|
The Theory and Algorithm of Ergodic Inference
|
Statistical inference is the cornerstone of probabilistic modelling in machine learning. Research on inference algorithms always attracts great attention in the research community, because it is fundamentally important for the computation in Bayesian inference and deep generative models. The majority of research is focused on algorithmic development in two theoretical frameworks: variational inference (VI) and Markov chain Monte Carlo (MCMC). These two methods are significantly different. VI is an optimisation-based approach which, in particular, fits a simple distribution to a given target. In contrast, MCMC is a simulation-based approach, which sequentially generates asymptotically unbiased samples of an arbitrary target.
Unfortunately, both VI and MCMC suffer from fundamental limitations. VI methods are in general biased because the density function of the approximate distribution must be in closed form. MCMC methods are also biased in practice because the Markov property limits the sample simulation to a local sample space close to previous samples. However,
VI is in general more scalable in computation. Optimising the variational distribution and simulating samples in VI are computationally efficient and can be accelerated by parallelization on GPUs. In contrast, simulating Markov chains is computationally inefficient and, more importantly, asynchronous parallel simulation of multiple Markov chains has no effect on reducing sample correlations but multiplies the computation.
Ergodic measure preserving flow (EMPF), introduced by (Zhang et al., 2018), is a recent optimisation-based inference method that overcomes the limitations of both MCMC and VI. However, there is no theoretical proof of the validity of EMPF. In this work, we will generalize EMPF to a novel inference framework called ergodic inference. In particular, the purpose of this work is to establish the theoretical foundation of ergodic inference. We list the key contributions of this work as follows:
• The mathematical foundation of ergodic inference.
(Section 3 and 4)
• A tractable loss of ergodic inference and the proof of the validity of the loss. (Section 5)
• An ergodic inference model: deep ergodic inference networks (Section 6)
• Clarification of differences between ergodic inference, MCMC and VI (Section 6)
Distance Metric of Probability Measures
Total variation distance is fundamentally important in probability theory, because it defines the strongest convergence of probability measure. Let (Ω, F ) be a measure space,
where Ω denotes the sample space and F denotes the collection of measurable subsets of Ω. Given two probability measure P and Q defined on (Ω, F ), the TV distance between Q and P is defined as
D_TV(Q, P) = sup_{A ∈ F} |Q(A) − P(A)|.   (1)

Convergence in TV, that is, D_TV(Q, P) = 0, means Q and P cannot be distinguished on any measurable set.
The Kullback-Leibler (KL) divergence is an important measure of difference between probability measures in statistical methods. For a continuous sample space Ω, the KL divergence is defined as
D_KL(Q||P) = ∫_Ω dQ log(dQ/dP),   (2)

where dP denotes the density of the probability measure.
Approximate Monte Carlo Inference
The Monte Carlo method is the most popular simulation-based inference technique in probabilistic modelling. For example, to fit a probabilistic model π by maximum likelihood estimation, it is essential to compute the gradient of the partition function Z(θ) = ∫ π*(z) dz. Given the unnormalised density function log π*(z), computing the gradient becomes a problem of expectation estimation:

∂_θ log Z(θ) = E_{π(z)}[∂_θ log π*(z)].
Monte Carlo methods allow us to construct an unbiased estimator of the expectation as

E_{π(z)}[f(z)] = lim_{N→∞} (1/N) Σ_{i=1}^{N} f(z_i),
where z i denotes samples from π. Unfortunately, it is intractable to generate samples from complex distributions, like the posterior distributions in model parameters or latent variables. Because of this challenge, approximate Monte Carlo Inference is fundamentally important. We will review the theoretical foundation of two important inference methods: variational Inference (VI) and Markov chain Monte Carlo (MCMC) in the next two sections.
Variational Inference
The theoretical foundation of VI is Pinsker's inequality. Pinsker's inequality states that the KL divergence gives an upper bound on the TV distance:

D_TV(Q, P) ≤ √(D_KL(Q||P) / 2).   (3)
Given a parametric distribution Q and the target distribution π, minimising the KL divergence D_KL(Q||π) implies a smaller TV distance D_TV(Q, π). The key challenge of VI is how to construct the parametric family Q so that the estimation of the KL divergence is tractable and the family Q is expressive enough to approximate complex targets. This forces most VI methods to choose Q with a closed-form density function. Otherwise, the estimation of the entropy term H(Q) = − ∫ Q(dz) log q(z) becomes challenging. In practice, the approximation family Q in most VI methods is rather simple, like a Gaussian distribution, so the approximation bias due to an oversimplified Q is the key issue of VI.
However, simple approximate family gives VI methods great computational advantage in practice. First, the main loss function in VI is known as the evidence lower bound (ELBO)
L_ELBO = ∫_Ω dQ log(dπ*/dQ) ≤ log ∫_Ω dπ*.   (4)

With an analytic form of the entropy of Q, the ELBO can be efficiently computed and optimized using a standard gradient descent algorithm. Second, simulating i.i.d. samples from a simple variational family Q is straightforward and very efficient.
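As a concrete illustration of optimising Equation (4) with a simple family, the numpy sketch below estimates the ELBO for a diagonal Gaussian q using reparameterized samples and the closed-form Gaussian entropy; the toy target and all parameter values are placeholders, not part of the paper.

```python
import numpy as np

def elbo_estimate(log_pi_star, mu, log_sigma, n_samples=1000, rng=np.random):
    """Monte Carlo estimate of the ELBO in Equation (4) for a diagonal Gaussian q.

    ELBO = E_q[log pi*(z)] + H(q); the entropy of a diagonal Gaussian is available
    in closed form, which is exactly the property the text attributes to simple
    variational families.
    """
    d = mu.shape[0]
    sigma = np.exp(log_sigma)
    z = mu + sigma * rng.standard_normal((n_samples, d))   # reparameterized samples
    entropy = 0.5 * d * (1.0 + np.log(2.0 * np.pi)) + np.sum(log_sigma)
    return np.mean([log_pi_star(zi) for zi in z]) + entropy

# Toy target: an unnormalized standard Gaussian in 2D (placeholder).
log_pi_star = lambda z: -0.5 * np.sum(z ** 2)
print(elbo_estimate(log_pi_star, mu=np.zeros(2), log_sigma=np.zeros(2)))
```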
Markov Chain Monte Carlo
The theoretical foundation of Markov chain Monte Carlo (MCMC) is the ergodic theorem. The ergodic theorem states that, given an ergodic Markov chain (Z_n) with a stationary distribution π, the average across independent well-mixed chains is equivalent to the average along the states of a single chain, that is,

E_π[f] = lim_{M→∞} (1/M) Σ_{m=1}^{M} f(Z_∞^m) = lim_{N→∞} (1/N) Σ_{n=1}^{N} f(Z_n),

where Z_∞^m denotes a sample of a well-mixed Markov chain after infinitely many transitions. The ergodic theorem implies that we can generate unbiased samples from every Markov transition without waiting forever for the chains to reach the stationary state. Therefore, we can trade computational efficiency for a bias that may decrease over a long time. The key challenge of MCMC methods is to define ergodic Markov chains with any given stationary distribution π. This challenge was first solved by the Metropolis-Hastings algorithm. We will discuss it in detail in Section 4.2.
Ergodic Markov chains enjoy strong stability. Irrespective of the distribution of the initial state µ(z_0) and the parameters of the Markov kernel K(·, ·), the distribution of the state of the chain is guaranteed to move closer to the stationary distribution in total variation after every transition. Formally, this means the TV distance to stationarity is reduced for all L ≥ 0:

D_TV(Q_{L+1}, π) < D_TV(Q_L, π),

where q_L denotes the marginal distribution of the L-th state and q_{L+1}(dz′) = ∫ K(z, dz′) q_L(dz). As L increases, the distribution q_L converges to the unique stationary distribution π:

lim_{L→∞} D_TV(Q_L, π) = 0.
In spite of the theoretical convergence property, the convergence of MCMC chains is not guaranteed in practice. Because the burn-in stage cannot be infinitely long, the samples from MCMC methods are often biased. The problem is that there is no reliable measurement of such a sampling bias related to the TV distance or the KL divergence. The iterative simulation of the Markov chain is another limitation in computational efficiency. Each sample from MCMC methods requires one simulation of a Markov transition, and this can only be executed in a sequential manner due to the nature of the Markov chain. Therefore, the sampling time of MCMC grows linearly with the number of samples.
Ergodic Inference Principle
In this section, we present the mathematical foundation of ergodic inference principle.
Motivation
First, we would like to propose the following properties of an ideal inference method:
• Parallelizable: the simulation of each sample is computationally independent;
• Statistically efficient: there is zero correlation between samples;
• Asymptotic unbiased: more computational power guarantees diminishing of simulation bias. The bias can be eliminated in theory with sufficient computation.
Both MCMC and VI fail to have all the properties above. For this reason, there are existing works on hybrid methods that combine MCMC and VI, for example, accelerating the burn-in of MCMC using a variational approximation in (Hoffman, 2017) or optimising the ELBO based on the tractable density function of an MCMC kernel in (Salimans et al., 2015). To some extent, such an algorithmic hybrid approach can be useful in practice. However, the limitations in the theoretical foundations of MCMC and VI cannot be eliminated by algorithmic modification. To achieve an ideal inference method, it is necessary to have a new mathematical theoretical foundation.
The Theoretical Foundation
Different from Pinsker's inequality and the ergodic theorem, the theoretical motivation of the proposed inference is the characteristic property of ergodic Markov transitions: there is a unique invariant distribution for every ergodic Markov kernel. Formally, let K_π be an ergodic Markov transition kernel with an invariant distribution π. By construction of K_π, π is guaranteed to be the only distribution that satisfies the condition π(dz′) = ∫ K_π(z, dz′) π(dz).
Based on this property of the ergodic Markov kernel, we construct the following criterion to verify whether a distribution is equivalent to the stationary distribution of the kernel. Given a distribution q, the distribution obtained from q after one Markov transition by K_π is given by

q_1(z′) = ∫ K_π(z, z′) q(dz).   (5)
We say the distribution q is preserved by K π if
D_TV(q_1, q) = 0.   (6)
By the uniqueness of the invariant distribution of ergodic kernel K π , the preservation of q by K π as (6) implies D TV (q, π) = 0. This motivates the following loss function.
Definition 3.1. Given a Markov kernel K π (z, z ′ ) that is ergodic w.r.t. a distribution π, the ergodic loss of a distribution q is defined as
L*(q, K_π) = D_TV( ∫ K_π(z, ·) q(dz), q(·) ).
As mentioned earlier, the loss L * (q, K π ) is equal to 0 if and only if D TV (q, π) is equal to 0.
Let π be the target distribution and q be the approximate distribution in a parametric family Q. Given an ergodic Markov kernel K_π, the closest q ∈ Q to the target π can be identified by the parameter φ* optimising the ergodic loss L*(·, K_π):

φ* = argmin_φ L*(q_φ, K_π).

If the target distribution is in Q, then the optimal parameter φ* should attain the loss L*(q_{φ*}, K_π) = 0; otherwise, the L2 norm of the gradient of the loss should vanish, ||∂_{φ*} L*(q_{φ*}, K_π)||_2^2 = 0.
Technical Challenges
There are two technical challenges of ergodic inference methods in practice. First, we need a tractable estimation of a loss function equivalent to D_TV(q_1, q). The estimation of the gradient of the loss should also be tractable for the optimisation of the parameter φ. Second, we need a general parametric family Q that can approximate any target distribution up to a certain amount of error. More specifically, the error can be controlled and even eliminated by increasing the complexity of the approximation family Q, i.e., the number of parameters of Q is unlimited. The computational cost of optimisation is associated with the complexity of Q.
We will present the solution to the first challenge in Section 5 and the solution to the second challenge in Section 6.
Ergodic Transformations
The key of solving the technical challenges in ergodic inference is the reparameterization of the ergodic Markov kernel. This is important in both algorithmic development and theoretical analysis.
Ergodic Transformations and Markov Kernels
Ergodic Markov kernels are essentially conditional distributions, which can be reparameterized by deterministic transformations known as measure preserving transformations (MPTs). Given a probability measure µ, a deterministic transformation T preserves µ if for any measurable subset of sample space A, µ(T −1 (A)) = µ(A). The shear transformation T (x, y) = (x + y, x), which preserves Lebesgue measure, is a classic example of MPT (Billingsley, 1986).
The following conditions are often used in the literature MCMC theory for verification of ergodic property:
1. Irreducibility: T(A) ≠ A, ∀A ∈ F except ∅ and Ω.
2. Density preservation: π(T (z)) = π(z).
3. Lebesgue preservation: the determinant of the Jacobian of T is equal to 1.
Formally, we define the reparameterisation of ergodic Markov chains as follows. Definition 4.1. (Ergodic Reparameterisation of MCMC) Given a target distribution π(z), an MCMC kernel K(z, z′) with invariant distribution π can be reformulated as two steps:
1. Simulate an auxiliary variable r with distribution µ(r)
2. Deterministic transformation (z ′ , r ′ ) = T πµ (z, r),
where T πµ is an ergodic transformation that preserves the probability measure π(z)µ(r).
Remark. The transformation T πµ in ergodic reparameterisation is fundamentally different from volume preserving transformation V (z) in the sample space of z for two reasons.
• T πµ (z, r) does not preserve the volume/entropy in the sample space of z, but V (z) must preserves the volume/entropy in the space of z.
• T πµ (z, r) preserves the probability measure π(z), but V (z) does not preserve π(z) in general.
Ergodic transformations also allow us to write the expectation under a Markov transition as a composition of functions, which is not used in the classic MCMC literature. Formally, this is given by the following proposition.

Proposition 1. Given an ergodic transformation T_π w.r.t. π, the expectation is preserved by the transformation, which means, for any function f,

∫_Ω f(z) π(dz) = ∫_Ω f ∘ T_π(z) π(dz) = ∫_{Ω′} f(z′) T_{π*}π(dz′),

where Ω′ is the image of Ω under T_π and T_{π*}π(·) denotes the pushforward probability measure of π under T_π. Because T_π preserves π, Ω′ = Ω and D_TV(π, T_{π*}π) = 0.
In the next two sections, we will demonstrate the ergodic reparameterization with two well-known MCMC kernels.
Metropolis-Hastings Transformations
The Metropolis-Hastings (MH) algorithm is the first and most well-known MCMC method. We will show that it is straightforward to express the MH transition kernel as an ergodic transformation. Given a target distribution π(z) and a transition proposal distribution q(r|z), the MH kernel is described in most textbooks as the following two steps:
1. Sample r from q(·|z).
2. Return the new state of the chain as r with probability
p_MH = min{1, π(r) q(z|r) / (π(z) q(r|z))},   (7)
otherwise the state remains as z.
It is straightforward to verify that MH transition kernel preserves the density function as
π(z) q(r|z) min{1, π(r) q(z|r) / (π(z) q(r|z))} = min{π(z) q(r|z), π(r) q(z|r)} = π(r) q(z|r) min{1, π(z) q(r|z) / (π(r) q(z|r))},
where the product of the proposal density and the acceptance probability constitutes the MH transition kernel K_MH(·, ·). This verification of the stationary distribution is known as detailed balance. It is important because it proves the existence of the stationary distribution.
Now we consider an alternative representation of the MH kernel. In particular, we define a stationary distribution as the joint distribution of all random variables involved in the target π and the MH kernel K_MH, that is, π(z, r, u) = π(z) q(r|z) ν(u), where ν(u) denotes the uniform distribution on [0, 1]. Following the ergodic reparameterization (Definition 4.1), we can rewrite the MH algorithm as:

1. Resample r from q(·|z) and u from ν(·).
2. Return the next state (z ′ , r ′ , u ′ ) = T MH (z, r, u) defined as
T_MH(z, r, u) = (z, r, u) δ(u > p_MH) + (r, z, u) δ(u < p_MH),   (8)
where δ(·) denotes indicator function.
Notice that the transformation T MH (z, r, u) above is a deterministic function. It is obvious that resampling r and u from their conditional distribution leaves π(z, r, u) invariant. Then, it is straightforward to show the preservation of density function
π(s)δ(s ′ = T MH (s)) = π(s ′ )δ(s = T MH (s ′ )),
where s denotes the triple (z, r, u). It is also easy to verify that the determinant of the Jacobian ∂_(z,r,u) T_MH(z, r, u) is always equal to 1.
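A minimal sketch of this reparameterised MH step is given below; `log_pi`, `q_sample` and `q_logpdf` are placeholder callables for the unnormalised log target and the proposal distribution, not part of any particular library.

```python
import numpy as np

def mh_transform(z, r, u, log_pi, q_logpdf):
    """Deterministic map T_MH(z, r, u) of eq. (8): swap z and r iff u < p_MH.

    q_logpdf(x, given) is assumed to return log q(x | given).
    """
    log_ratio = (log_pi(r) + q_logpdf(z, r)) - (log_pi(z) + q_logpdf(r, z))
    p_mh = 1.0 if log_ratio >= 0 else float(np.exp(log_ratio))  # eq. (7)
    return (r, z, u) if u < p_mh else (z, r, u)

def mh_kernel(z, log_pi, q_sample, q_logpdf, rng):
    """One MH transition via the ergodic reparameterisation."""
    r = q_sample(z, rng)   # step 1: r ~ q(.|z)
    u = rng.uniform()      #         u ~ Uniform(0, 1)
    z_new, _, _ = mh_transform(z, r, u, log_pi, q_logpdf)  # step 2: apply T_MH
    return z_new
```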
Hamiltonian Measure Preserving Transformations
Hamiltonian Monte Carlo (HMC), originally known as Hybrid Monte Carlo, is an important MCMC method. Originally, HMC was considered a hybrid method because it combines deterministic and stochastic simulation. The deterministic simulation in HMC essentially refers to any dynamics that generalize the classic Hamiltonian dynamics in physics.
A Hamiltonian system in physics is a system of moving particles in an energy field whose total energy is constant over time. Given n particles, the state of the Hamiltonian system is defined by the position z ∈ R^n and the momenta r ∈ R^n. The position z is associated with the potential energy U : R^n → R and the momentum r is associated with the kinetic energy K : R^n → R. The state (z, r) evolves over time t according to Hamilton's equations:
ż(t) = ∂_r K(r),   ṙ(t) = −∂_z U(z),   (9)

where ż denotes the derivative of z w.r.t. time t. It is straightforward to verify that the total energy H = U + K does not change over time:

Ḣ(z, r) = (∂_z U(z))^T ∂_r K(r) − (∂_r K(r))^T ∂_z U(z) = 0.
Given an initial condition (z, r), the solution of the Hamiltonian dynamics is a function of time t, (z(t), r(t)) = T_H(t, z, r). Given a fixed time t, the solution T_H becomes a map T_{H,t} : R^{2n} → R^{2n} between two states (z, r) and (z', r') with the same total energy H. Intuitively, z(t) forms the trajectory of a particle traversing an n-dimensional space, and the velocity of the particle is given by ż(t) = ∂_r K(r(t)).
It is well-known in the MCMC literature that T_{H,t} is essentially a family of measure preserving transformations for any parameter t ∈ R with t ≠ 0. It is clear that T_{H,t} is irreducible if t ≠ 0 and density preserving w.r.t. exp(−H). The volume preservation property of Hamiltonian dynamics in the state space (z, r) is a well-known consequence of Liouville's Theorem. Therefore, we know that T_{H,t}(z, r) with any t ≠ 0 is an ergodic transformation w.r.t. the distribution π(z)µ(r) ∝ exp(−H(z, r)). This implies that T_{H,t} also preserves π ∝ exp(−U) by the definition of the marginal distribution.
In practice, Hamiltonian dynamics do not have closed-form solutions. Fortunately, there is a rich literature on the numerical simulation of Hamiltonian dynamics. The best-known approximate approach in HMC is the leapfrog algorithm, which is constructed as a sequence of shear transformations. The leapfrog algorithm enjoys strong stability, and its approximation error is of the order of the squared discretisation step size. See (Neal, 2010; Leimkuhler & Reich, 2004) for a more detailed analysis.
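For illustration, a minimal leapfrog integrator for a separable Hamiltonian H(z, r) = U(z) + r·r/2 might look as follows; `grad_U` is an assumed callable returning ∂_z U(z), and this sketch is not tied to any particular library.

```python
import numpy as np

def leapfrog(z, r, grad_U, eps, n_steps):
    """Approximate Hamiltonian flow for H(z, r) = U(z) + 0.5 * r.dot(r)."""
    z, r = np.array(z, dtype=float), np.array(r, dtype=float)
    r = r - 0.5 * eps * grad_U(z)      # initial half step for momentum
    for _ in range(n_steps - 1):
        z = z + eps * r                # full step for position
        r = r - eps * grad_U(z)        # full step for momentum
    z = z + eps * r                    # last full step for position
    r = r - 0.5 * eps * grad_U(z)      # final half step for momentum
    return z, r

# Each update above is a shear-like map with unit Jacobian determinant,
# so the composed leapfrog step is volume preserving in (z, r).
```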
Ergodic Loss
π-Ergodic Loss Function
By the definition of the TV distance, we know that q is the stationary distribution of K if and only if, for all functions f(·) with E_π[f(z)] < ∞,

E_{q_1}[f(z)] = E_q[f(z)].   (10)

However, it is impossible to compare the expectations of all possible functions f; given a specific function f, though, it is possible to estimate

L_{K,f}(φ) = |E_{q_1}[f(z)] − E_q[f(z)]|.   (11)
With an optimal choice of the function f and certain conditions, we can claim that L_{K,f}(φ) = 0 implies D_TV(q, π) = 0. The log density function is an intuitive choice, because we can identify a distribution by its density function. Therefore, we define the following π-ergodic loss.
Definition 5.1. (Ergodic Loss Function)
L_{K,π}(φ) = |E_{q_1}[log π(z)] − E_q[log π(z)]|.   (12)
Theorem 1. (Ergodic Loss Convergence Theorem) Given the ergodic Markov kernel K π with invariant distribution π, the loss L K,π (φ) = 0 if and only if E π [log π(z)] = E q [log π(z)].
Proof. The convergence of the loss L_{K,π}(φ) = 0 implies

E_{q_1(z)µ(r)}[log π(z)] = E_{q(z)µ(r)}[log π(z)],   (13)

where q_1(z) is given by (5). Notice that q_1 is essentially the marginal of the pushforward of q(z)µ(r) under the measure preserving transformation T_πµ. By Proposition 1, the expectations in (13) can be written as

E_{q_1(z)}[log π(z)] ≜ ∫_Ω (log π ∘ T_πµ) d(qµ) = ∫_Ω log π d(qµ),   (14)

where d(qµ) is shorthand for q(z)µ(r) dz dr.
Replacing qµ on both sides of (14) with any distribution, the equality still holds. If we replace qµ in (14) with the pushforward probability measure of q(z)µ(r) under T_πµ, denoted by T_πµ*(qµ), we have

∫_Ω (log π ∘ T_πµ) d(T_πµ*(qµ)) = ∫_Ω log π d(T_πµ*(qµ)),

which can be rewritten as

∫_Ω (log π ∘ T^1_πµ ∘ T_πµ) d(qµ_1) = ∫_Ω (log π ∘ T_πµ) d(qµ),   (15)

where T^1_πµ denotes the transformation T_πµ(·, r_1) applied with a fresh auxiliary variable r_1, and dµ_1 denotes µ(dr_1). Notice that the LHS of (15) is an expectation under the distribution of z after two ergodic Markov transitions from q, that is, E_{q_2(z)}[log π(z)]. Therefore, by (14) and (15), we have

E_{q_2(z)}[log π(z)] ≜ ∫_Ω (log π ∘ T^1_πµ ∘ T_πµ) d(qµ_1) = ∫_Ω (log π ∘ T_πµ) d(qµ) = E_{q(z)}[log π(z)].   (16)
By induction, the expectation E_{q_n}[log π] does not change after any number of measure preserving transformations T_πµ, which gives

E_{q_∞(z)}[log π(z)] = E_{q(z)}[log π(z)].   (17)

By (17), if we simulate an infinitely long ergodic Markov chain with kernel K_π, then the expectation E_{q_∞(z)}[log π(z)] is the same as the initial expectation E_{q(z)}[log π(z)]. Because an ergodic Markov chain has a unique invariant distribution, (17) implies

E_{π(z)}[log π(z)] = E_{q(z)}[log π(z)].   (18)
Recall that the convergence of the loss L_{K,π*}(φ) is not by itself sufficient for convergence in TV distance, D_TV(q, π) = 0. Fortunately, under a reasonable condition, the loss L_{K,π*}(φ) = 0 does imply convergence in TV distance. Formally, this is given by the following theorem.
Theorem 2. (Ergodic Measure Convergence Theorem) Let K π be an ergodic Markov kernel with invariant distribution π. Assume that the entropy of Q is not less than the entropy of π, that is H(Q) ≥ H(π), the loss L K,π * (φ) = 0 if and only if D TV (q, π) = 0.
Proof. By the definition of the KL divergence, we have
D KL (q||π) = E q [log q] − E q [log π].(19)
By Theorem 1, we have
D KL (q||π) = E q [log q] − E π [log π],(20)
which is equivalent to
D KL (q(z)||π) = H(π) − H(Q).
Because the KL divergence is never less than 0, we have
H(π) ≥ H(Q).
Finally, by the assumption H(π) ≤ H(Q), we know H(π) = H(Q), so we know 0 ≤ D TV (q, π) ≤ D KL (q||π) = 0, which implies D TV (q, π) = 0.
By the monotonic convergence in TV distance of an ergodic Markov chain, it is straightforward to show the following.

Proposition 2. Given a smooth ergodic transformation w.r.t. the probability measure π(z), if E_q[log π] < E_π[log π], then the loss

E_q[log π*(z)] − E_{q_1}[log π*(z)] > 0.   (21)

Assuming that E_q[log π] < E_π[log π], we have

L*_{K,π*}(φ) = E_q[log π*(z)] − E_{q_1}[log π*(z)],   (22)
Optimising π * -Ergodic Loss
Let q 01 (z, z 1 ) be the joint distribution q(z)K(z, z 1 ). Then, we can rewrite (22) as
L*_{K,π*}(φ) = E_{q_01}[log π*(z) − log π*(z_1)],   (23)

which can be estimated from samples of (z, z_1). To optimise the loss (23), we need to compute the gradient ∂_φ L*_{K,π*}(φ). Notice that z and z_1 are coupled by the kernel K, and the density function of most MCMC kernels is intractable, which makes the computation of the gradient ∂_φ L*_{K,π*}(φ) unstable. To avoid this, we reparameterize both q(·) and the ergodic Markov kernel K(z, ·) by a transformation T_φ and a measure preserving transformation T_π respectively. This allows us to transform simple random variables r and r_1, which are independent of φ, into (z, z_1) as

z = T_φ(r),   z_1 = T_π(z, r_1).   (24)

Therefore, we can compute the loss with the following reformulation:

L*_{K,π*}(φ) = E_{µ(r)µ_1(r_1)}[L_{π*,T_φ,T_π}(r, r_1)],   (25)

where L_{π*,T_φ,T_π} = log π*(z) − log π*(z_1) and (z, z_1) are given by (24).
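A possible Monte Carlo estimator of (25) under this reparameterisation is sketched below; `T_phi`, `T_pi` and `log_pi_star` are placeholder callables, the base distributions µ and µ_1 are taken to be standard normal purely for illustration, and in practice the gradient w.r.t. φ would be obtained by automatic differentiation through T_phi and T_pi.

```python
import numpy as np

def ergodic_loss_estimate(T_phi, T_pi, log_pi_star, dim, rng, n_samples=256):
    """Monte Carlo estimate of the reparameterised ergodic loss in (25)."""
    total = 0.0
    for _ in range(n_samples):
        r = rng.standard_normal(dim)    # r  ~ mu
        r1 = rng.standard_normal(dim)   # r1 ~ mu_1
        z = T_phi(r)                    # z  ~ q_phi via reparameterisation
        z1 = T_pi(z, r1)                # one ergodic Markov transition
        total += log_pi_star(z) - log_pi_star(z1)
    return total / n_samples
```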
As discussed above, the only requirement on the approximation family Q in ergodic inference is that the transformation T_φ is a known measurable function. This is an important advantage over VI, where the density function of Q must be available in closed form.
Deep Ergodic Inference Model
Ergodic transformations are not only fundamental to the ergodic loss; they are also powerful tools for constructing a flexible approximation family Q. In this section, we will present how to construct and optimise the approximation family Q by stacking multiple layers of ergodic transformations.
Definition
Let {K_1, K_2, . . . , K_N} be N ergodic transition kernels with independent parameters {φ_1, φ_2, . . . , φ_N}, and let the distribution q_0 of the initial state have parameter φ_0. By ergodic reparameterization, we reform each ergodic Markov kernel K_n(z, z′) as a transformation z_n = T_n(z_{n−1}, r), where T_n is a deterministic function that depends on the kernel parameter φ_n and r is sampled from a standard distribution µ_n. We also reparameterize the initial distribution q_0 from a simple distribution µ_0 by a transformation T_0. Then, we can generate samples of z_N by transforming samples of (r_0, r_1, . . . , r_{N−1}) from µ(·) = ∏_{i=0}^{N−1} µ_i(·) as

z_N = T_{r_{N−1}} ∘ · · · ∘ T_{r_1} ∘ T_0(r_0),   (26)
where T_{r_n}(·) denotes T_n(·, r_n). We call this multi-layer ergodic transformation T_{r_{N−1}} ∘ · · · ∘ T_{r_1} ∘ T_0(·) a deep ergodic inference network (DEIN). Expectations under q_N can be reformulated as

E_{q_N}[f(z_N)] = E_µ[f ∘ T_{r_{N−1}} ∘ · · · ∘ T_{r_1} ∘ T_0(r_0)],
which allows us to estimate the gradient of such expectations by the Monte Carlo method:

∂_φ E_{q_N}[f(z_N)] ≈ (1/M) ∑_{i=1}^{M} ∂_φ [f ∘ T_{r^i_{N−1}} ∘ · · · ∘ T_{r^i_1} ∘ T_0(r^i_0)].
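The following sketch illustrates how a sample from q_N is generated by a DEIN as in (26); `T0` and the entries of `layers` are placeholder ergodic transformations, and `sample_noise` is an assumed routine drawing the auxiliary variables r_n ~ µ_n.

```python
def dein_sample(T0, layers, sample_noise, rng):
    """Draw one sample from q_N of a deep ergodic inference network (26).

    T0:           maps base noise r0 to the initial state z0.
    layers:       list of ergodic transformations T_n(z, r_n).
    sample_noise: draws a fresh auxiliary variable r_n ~ mu_n.
    """
    z = T0(sample_noise(rng))
    for T_n in layers:
        z = T_n(z, sample_noise(rng))   # one ergodic transition per layer
    return z                            # repeated calls give i.i.d. samples
```

Averaging ∂_φ f over such samples (with gradients propagated through T0 and the layers by automatic differentiation) yields the Monte Carlo gradient estimate above.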
Optimisation and Convergence of DEINs
This is a non-parametric model because its number of parameters grows with the number of transformations. Unlike deep neural networks, a DEIN has strong stability by the nature of ergodicity. In particular, DEINs can be arbitrarily deep, and the stability and simulation quality are guaranteed to improve with depth.
First, we define a loss (12) for each transition K n as
L_n(φ_n) = E_{q_n}[log π*(z)] − E_{q_{n−1}}[log π*(z)],

where q_n denotes the marginal distribution of the state after n transitions,

q_n(z_n; φ_{0:n}) = ∫ K(z_{n−1}, z_n) q_{n−1}(z_{n−1}; φ_{0:n−1}) dz_{n−1}.   (27)
Proposition 3. Assume that E_{q_0}[log π(z)] < E_π[log π(z)]. Minimizing the ergodic loss L*_{K,π*} in (22) with q_N given by a deep ergodic inference network is equivalent to maximizing the total ergodic loss ∑_{n=1}^{N} L_n(φ_n), which telescopes to

L_N(φ) = E_{q_N}[log π*(z)] − E_{q_0}[log π*(z)],   (28)

and which is equivalent to

L_N(φ; φ_0) = E_{q_N}[log π*(z)]   (29)

when the parameter of q_0 is fixed.
The total loss (29) is consistent with the loss proposed by (Zhang et al., 2018) in ergodic measure preserving flows.
By Proposition 2, it is straightforward to show that DEINs enjoy incremental improvement as the depth grows. Theorem 3. (Incremental Convergence of DEIN) Given an N-layer DEIN defined as in (26), the optimal total ergodic loss L_N(φ*) = max_φ L_N(φ) increases monotonically as N increases.
Similar to the convergence of ergodic Markov chains, we have the following asymptotic unbiased convergence of DEINs. Theorem 4. (Asymptotic Unbiased Convergence of DEINs) For an arbitrarily small ε > 0, there always exists a DEIN with a finite number of layers N such that the optimal distribution q*_N has the ergodic loss L*_{K,π} = D_TV(∫ K(z, ·) dq*_N, q*_N(·)) ≤ ε.
Comparison with Auto-Tuning MCMC
From an algorithmic perspective, auto-tuning MCMC (AMCMC) and DEIN are very similar, because both methods simulate ergodic Markov chains and optimise the parameters of the kernel w.r.t. a loss. This may give the false impression that AMCMC and DEIN share the same theoretical foundation.
To correct this impression, we will discuss the fundamental differences between DEINs and AMCMC. First of all, AMCMC is essentially a class of MCMC methods with an auto-tuning strategy for kernel parameters. In particular, the purpose of auto-tuning is to boost the statistical power of samples from MCMC by encouraging distant jumps between states in Euclidean space, which is inspired by the work of (Pasarica & Gelman, 2010) on reducing sample correlation in MCMC. In contrast, a DEIN is a parametric family used in ergodic inference methods; the parameters of a DEIN are optimised w.r.t. the ergodic loss, which is based on the ergodic inference principle in Section 3.2.
This fundamental difference has two important effects in practice. The first effect concerns sample correlation. By the nature of the Markov property, optimising the auto-tuning loss can never eliminate the correlation of samples from MCMC. In contrast, the samples from DEINs are generated by a deterministic transformation of i.i.d. samples from the initial distribution, and are therefore still i.i.d. The second effect concerns the MH correction. In particular, MH correction is optional for DEINs for three reasons. First, a DEIN is a parametric approximate family Q rather than an unbiased simulation procedure. Second, by optimising the ergodic loss, DEINs guarantee convergence towards the target in TV distance. Finally, even with approximate ergodic transformations, the existence of a stationary distribution (not necessarily the target) is guaranteed by the measure preserving property, in particular because the depth of a DEIN is always finite. In contrast, the convergence of AMCMC chains is only guaranteed with MH correction. Without MH correction, the existence of a stationary distribution of MCMC chains becomes questionable: with an unlimited number of recurrent Markov transitions, Markov chains are not guaranteed to converge to any distribution, and the existence of a stationary distribution is a necessary condition of the ergodic theorem (Robert & Casella, 2005). Therefore, without MH correction (whose validity is implicitly established by the detailed balance condition), the bias of samples from MCMC may not be bounded. This is particularly true when the Markov kernel parameter is tuned to maximize the jumping distance between states.
Comparison with Normalising Flows
Normalizing Flow (NF), introduced by (Rezende & Mohamed, 2015), is a recent variational inference framework, where the variational parametric distribution is defined in an iterative procedure. The fundamental idea of NF is to define an expressive parametric family by a sequence of deterministic transformations with closed-form Jacobian. Let z 0 be a random variable from a simple distribution µ, like Gaussian, and f 1 . . . , f M be M deterministic functions from R n to R n . We define a sequence of random variable z 1 . . . z M as
z M = f M • · · · • f 1 (z 0 ).
By the change-of-variables rule, the density function of z_M is given by

log p(z_M) = log q(z_0) − ∑_{i=1}^{M} log |det ∂_{z_{i−1}} f_i(z_{i−1})|.
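For reference, the bookkeeping required by this change-of-variables rule can be sketched as follows; each flow layer is assumed to expose its forward map together with its log-Jacobian-determinant, which is exactly the ingredient a DEIN does not need.

```python
def flow_log_density(z0, log_q0, layers):
    """Track the log density through a normalizing flow z_M = f_M(...f_1(z0)).

    layers: list of (forward, log_abs_det_jac) pairs, one per f_i (placeholders).
    """
    z, log_p = z0, log_q0(z0)
    for forward, log_abs_det_jac in layers:
        log_p = log_p - log_abs_det_jac(z)   # change-of-variables correction
        z = forward(z)
    return z, log_p
```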
There are three important differences between DEINs and NFs. First, without manually engineering ergodic transformations, DEINs have a theoretical guarantee of better performance with more transformations (Theorem 4). In contrast, the transformations f_i in NFs are predefined based on heuristics and experimental evidence. Second, ergodic transformations T_π have no closed-form solutions, whereas the transformations f_i in NFs are limited to simple functions with tractable Jacobians. Finally, the distribution of a DEIN is very expressive and may not even have a closed form, as in (27). More importantly, there is no need to compute the density for optimising the parameters. The opposite holds for NFs. In particular, the transformations in NFs are often restricted to simple functions in order to have closed-form Jacobians, and the computation of the Jacobian is one of the computational bottlenecks in optimisation.
Comparison Overview
The key differences between ergodic inference, AMCMC and VI are highlighted in the following:

• TV-Loss: optimising the loss function leads to convergence in TV distance.
• Independent samples: computationally and statistically independent sample simulation.
• Implicit Simulation Density: no closed-form density function of simulation distribution is required in training.
Related Works
Hamiltonian variational inference (HVI), introduced by (Salimans et al., 2015), is an interesting variational framework using MCMC kernel as variational parametric distribution. The motivation of HVI is that the joint density function of all the states of HMC chains is tractable to compute. Unfortunately, the variational lower bound is still intractable to compute, because the reverse probability of HMC chain given the last state is intractable. To overcome this problem, they propose to approximate the reverse density function using neural network. Although HVI shows improvement in performance over VAEs, the additional approximation limits the potential of this method. However, optimising the HMC kernel parameters w.r.t. ELBO is still an attractive feature of HVI.
Hoffman (Hoffman, 2017) proposed another hybrid method based on VI and HMC without auxiliary approximation. The idea is to use a Monte Carlo estimation of the marginal likelihood by averaging over samples from HMC chains, that are initialized by variational distribution.
In (Han et al., 2017), a very similar framework is proposed using Metropolis-adjusted Langevin dynamics. This idea is very similar to contrastive divergence in (Hinton, 2002). The main disadvantage of this method is that the HMC parameters are manually pretuned. In particular, as mentioned by (Hoffman, 2017), the No-U-Turn Sampler (NUTS), an adaptive HMC method, is not applicable due to engineering difficulties. (Neal, 2010) pointed out that HMC is very sensitive to the choice of leapfrog step size and the number of leapfrog steps.
Stein Variational Gradient Descent (SVGD) is a recent particle-based dynamical inference method proposed by (Liu, 2017). In SVGD, the approximating distribution is a set of point masses q generated by transforming a set of points sampled from a distribution µ using a perturbation function T(x) = x + φ(x), where φ lies in a function space with bounded norm. With this setup, the optimisation of T w.r.t. the KL divergence between q and the target π is transformed into a stochastic optimisation in the kernel space of φ. The theoretical foundation of the convergence of SVGD is sound and appealing. However, this method faces two practical challenges. First, the optimisation complexity grows quadratically with the number of particles. Second, it is very difficult to approximate a high-dimensional distribution well with a limited number of point masses.
Summary
I proposed a new generic inference method based on optimization and ergodic deterministic transformations. This work provides the foundation of ergodic inference, including: the fundamental ergodic inference principle; tractable estimation of the ergodic loss and its gradient; and a generic construction of the approximation family.
| 6,202 |
1811.07120
|
2900626851
|
Knowledge transfer, zero-shot learning and semantic image retrieval are methods that aim at improving accuracy by utilizing semantic information, e.g. from WordNet. It is assumed that this information can augment or replace missing visual data in the form of labeled training images because semantic similarity somewhat aligns with visual similarity. This assumption may seem trivial, but is crucial for the application of such semantic methods. Any violation can cause mispredictions. Thus, it is important to examine the visual-semantic relationship for a certain target problem. In this paper, we use five different semantic and visual similarity measures each to thoroughly analyze the relationship without relying too much on any single definition. We postulate and verify three highly consequential hypotheses on the relationship. Our results show that it indeed exists and that WordNet semantic similarity carries more information about visual similarity than just the knowledge of "different classes look different". They suggest that classification is not the ideal application for semantic methods and that wrong semantic information is much worse than none.
|
DeVise @cite_35 uses a language model trained on Wikipedia text combined with a visual model to improve classification and enable zero-shot learning on the ImageNet dataset @cite_20 @cite_15 . The visual model is pre-trained without semantic aid and then fine-tuned to maximize a similarity measure between prediction and label in a semantic embedding, thereby improving performance.
|
{
"abstract": [
"Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18 across thousands of novel labels never seen by the visual model.",
"The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond."
],
"cite_N": [
"@cite_35",
"@cite_15",
"@cite_20"
],
"mid": [
"2123024445",
"2117539524",
"2108598243"
]
}
|
Not just a matter of semantics: the relationship between visual similarity and semantic similarity
|
There exist applications in which labeled training data can not be acquired in amounts sufficient to reach the high accuracy associated with contemporary convolutional neural networks (CNNs) with millions of parameters. These include industrial [13,17] and medical [14,27,32] as well as research in other fields like wildlife monitoring [2,3,6]. Semantic methods such as knowledge transfer and zero-shot learning consume information about the semantic relationship between classes from databases like WordNet [18] to allow high-accuracy classification even when training data is insufficient or missing entirely [24]. They can only function when the unknown visual class relationships are predictable by the semantic relationships.
(a) A deer and a forest. By taxonomy only, their semantic similarity is weak. Visual similarity however is strong.
(b) An orchid and a sunflower. Their semantic similarity is very strong due to them both being flowers. The visual similarity between them is weak. Figure 1: Examples of semantic-visual disagreement.
In this paper, we analyze and test this crucial assumption by evaluating the relationship between visual and semantic similarity in a detailed and systematic fashion.
To guide our analysis, we formulate three highly consequential, non-trivial hypotheses around the visual-semantic relationship. The exact nature of the links and the similarity terms is specified in section 4. Our first hypothesis concerns the relationship itself:
H 1 There is a link between visual similarity and semantic similarity. It seems trivial on the surface, but each individual component requires a proper, nontrivial definition to ultimately make the hypothesis verifiable (see section 4). The observed effectiveness of semantic methods suggests that knowledge about semantic relationships is somewhat applicable in the visual domain. However, counter-examples are easily found, e.g. figs. 1 and 5. Furthermore, a basic notion of semantic similarity is already contained in the expectation that "different classes look different" (see section 2.1). A similarity measure based on actual semantic knowledge should be linked more strongly to visual similarity than this simple baseline.
Semantic methods seek to optimize accuracy and in turn model confusion, but confusion and visual similarity are not trivially related. Insights about the low-level visual similarity may not be applicable to the more abstract confusion. To cover not only largely model-free, but also model-specific notions, we formulate our second and third hypotheses:
H 2
There is a link between visual similarity and model confusion. When considering low inter-class distance in a feature space to be a contributor to confusion, it could also be one in the visual domain. This link strongly depends on the selected features and classifier, but it could also be affected by violations of "different classes look different" in the dataset.
H 3 There is a link between semantic similarity and model confusion. This link should be investigated because it directly relates to the goal of semantic methods, which is to reduce confusion by adding semantic information. It "skips" the low-level visual component and as such is interesting on its own. The expectation that "different classes look different" can already explain the complete confusion matrix of a perfect classifier. We also expect it to partly explain a real classifier's confusions. So to consider H 3 verified, we require semantic similarity to show an even stronger correlation to confusion than this baseline.
Our main contribution is an extensive and insightful evaluation of this relationship across five different semantic and visual similarity measures respectively. It is based on the three aforementioned hypotheses around the relationship. We show quantitative results measuring the agreement between individual measures and across visual and semantic similarities as rank correlation. Moreover, we analyze special cases of agreement and disagreement qualitatively. The results and their various implications are discussed in section 5.5. They suggest that, while the relationship exists even beyond the "different classes look different" baseline, even more investigation is warranted into tasks different from classification because of the semantically reductive nature of class labels. Hence, semantic methods may perform better on more complex tasks.
Semantic Similarity
The term semantic similarity describes the degree to which two concepts interact semantically. A common definition requires taking into account only the taxonomical (hierarchical) relationship between the concepts [8, p. 10]. A more general notion is semantic relatedness, where any type of semantic link may be considered [8, p. 10]. Both are semantic measures, which also include distances and dissimilarities [8, p. 9]. We adhere to these definitions in this work, specifically the hierarchical restriction of semantic similarity.
Prerequisites
In certain cases, it is easier to formulate a semantic measure based on hierarchical relationships as a distance first. Such a distance d between two concepts x, y can be converted to a similarity by 1/(1 + d(x, y)) [8, p. 60]. This results in a measure bounded by (0, 1], where 1 stands for maximal similarity, i.e. the distance is zero. We will apply this rule to convert all distances to similarities in our experiments. We also apply it to dissimilarities, which are comparable to distances, but do not fulfill the triangle inequality.
Semantic Baseline When training a classifier without using semantic embeddings or hierarchical classification techniques [29], there is still prior information about semantic similarity given by the classification problem itself. Specifically, it is postulated that "classes that are different look different" (see section 4). Machine learning can not work if this assumption is violated such that different classes look identical. We encode this "knowledge" as a semantic similarity measure, defined as 1 for two identical concepts and zero otherwise. It will serve as a baseline for comparison with all other similarities.
Graph-based Similarities
We can describe a directed acyclic graph G(C, is-a) using the taxonomic relation is-a and the set of all concepts C. The following notions of semantic similarity can be expressed using properties of this graph. The graph distance d G (x, y) between two nodes x, y, which is defined as the length of the shortest path xP y, is an important example. If required, we reduce the graph G to a rooted tree T with root r by iterating through all nodes with multiple ancestors and successively removing the edges to ancestors with the lowest amount of successors. In a tree, we can then define the depth of a concept x as d T (x) = d T (r, x)
A simple approach is presented by Rada et al. in [21, p. 20], where the semantic distance between two concepts x and y is defined as the graph distance d G (x, y) between one concept and the other in G.
To make similarities comparable between different taxonomies, it may be desirable to take the overall depth of the hierarchy into account. Resnik presents such an approach for trees in [22], considering the maximal depth of T and the least common ancestor L(x, y). It is the uniquely defined node in the shortest path between two concepts x and y that is an ancestor to both [8, p. 61]. The similarity between x and y is then given as [22, p. 3]:
2 · max z∈C d T (z) − d T (x, L(x, y)) − d T (y, L(x, y)). (1)
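A possible implementation of these two graph-based measures on a WordNet-like taxonomy, here represented as a NetworkX DiGraph with edges from parent (hypernym) to child (hyponym), is sketched below; it is an illustration under these assumptions, not the exact code used in the experiments.

```python
import networkx as nx

def rada_similarity(G, x, y):
    """Rada et al. [21]: graph distance in G, converted via 1/(1 + d)."""
    d = nx.shortest_path_length(G.to_undirected(), x, y)
    return 1.0 / (1.0 + d)

def resnik_similarity(T, root, x, y):
    """Resnik [22] (eq. 1) on the tree reduction T rooted at `root`."""
    lca = nx.lowest_common_ancestor(T, x, y)
    max_depth = max(nx.shortest_path_length(T, root, z) for z in T.nodes)
    return (2 * max_depth
            - nx.shortest_path_length(T, lca, x)
            - nx.shortest_path_length(T, lca, y))
```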
Feature-based Similarities
The following approaches use a set-theoretic view of semantics. The set of features φ(x) of a concept x is usually defined as the set of ancestors A(x) of x [8]. We include the concept x itself, such that φ(x) = A(x) ∪ {x} [28].
Inspired by the Jaccard coefficient, Maedche and Staab propose a similarity measure defined as the intersection over union of the concept features of x and y respectively [16, p. 4]. This similarity is bounded by [0, 1], with identical concepts always resulting in 1.
Sanchez et al. present a dissimilarity measure that represents the ratio of distinct features to shared features of two concepts. It is defined by [28, p. 7723]:
log_2 ( 1 + (|φ(x)\φ(y)| + |φ(y)\φ(x)|) / (|φ(x)\φ(y)| + |φ(y)\φ(x)| + |φ(y) ∩ φ(x)|) ).   (2)
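Both feature-based measures operate directly on the ancestor sets φ(·); a minimal sketch is given below, with the dissimilarity afterwards converted to a similarity using the 1/(1 + d) rule from section 2.1.

```python
import math

def phi(concept, ancestors):
    """Concept features: ancestors of the concept plus the concept itself."""
    return ancestors(concept) | {concept}

def maedche_staab_similarity(phi_x, phi_y):
    """Intersection over union of the concept features (Jaccard-style)."""
    return len(phi_x & phi_y) / len(phi_x | phi_y)

def sanchez_dissimilarity(phi_x, phi_y):
    """Eq. (2): ratio of distinct to total features, log-scaled."""
    distinct = len(phi_x - phi_y) + len(phi_y - phi_x)
    shared = len(phi_x & phi_y)
    return math.log2(1.0 + distinct / (distinct + shared))

# similarity = 1 / (1 + sanchez_dissimilarity(...)) as in section 2.1
```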
Information-based Similarities
Semantic similarity is also defined using the notion of informativeness of a concept, inspired by information theory. Each concept x is assigned an Information Content (IC) I(x) [22,25]. This can be defined using only properties of the taxonomy, i.e. the graph G (intrinsic IC), or using the probability of observing the concept in corpora (extrinsic IC) [8, p. 54].
We use an intrinsic definition presented by Zhou et al. in [39], based on the descendants D(x):
I(x) = k · (1 − |D(x)| / |C|) + (1 − k) · log(d_T(x)) / log(max_{z∈C} d_T(z)).   (3)

With a definition of IC, we can apply an information-based similarity measure. Jiang and Conrath propose a semantic distance in [10] using the notion of the Most Informative Common Ancestor M(x, y) of two concepts x, y. It is defined as the element in (A(x) ∩ A(y)) ∪ (x ∩ y) with the highest IC [8, p. 65]. The distance is then defined as [10, p. 8]:
I(x) + I(y) − 2 · I(M(x, y)).(4)
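A sketch of the intrinsic IC of eq. (3) and the Jiang-Conrath distance of eq. (4) is shown below; `descendants`, `depth`, `ic` and `mica` are assumed helper functions over the taxonomy, and the root (with depth 0) would need special handling since log(0) is undefined.

```python
import math

def zhou_ic(x, descendants, depth, n_concepts, max_depth, k=0.5):
    """Intrinsic information content of eq. (3); assumes depth(x) >= 1."""
    structural = 1.0 - len(descendants(x)) / n_concepts
    depth_term = math.log(depth(x)) / math.log(max_depth)
    return k * structural + (1.0 - k) * depth_term

def jiang_conrath_distance(x, y, ic, mica):
    """Eq. (4): distance via the most informative common ancestor."""
    return ic(x) + ic(y) - 2.0 * ic(mica(x, y))
```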
Perceptual Metrics
Perceptual metrics are usually employed to quantify the distortion or information loss incurred by using compression algorithms. Such methods aim to minimize the difference between the original image and the compressed image and thereby maximize the similarity between both. However, perceptual metrics can also be used to assess the similarity of two independent images.
An image can be represented by an element of a high-dimensional vector space. In this case, the Euclidean distance is a natural candidate for a dissimilarity measure. With the rule 1/(1 + d) from section 2.1, the distance is transformed into a visual similarity measure. To normalize the measure w.r.t. image dimensions and to simplify calculations, the mean squared error (MSE) is used. Applying the MSE to estimate image similarity has shortcomings. For example, shifting an image by one pixel significantly changes the distances to other images, including its unshifted self [31]. An alternative, but related measure is the mean absolute difference (MAD), which we also consider in our experiments.
In [37], Wang et al. develop a perceptual metric called the Structural Similarity Index to address shortcomings of previous methods. Specifically, they consider properties of the human visual system such that the index better reflects human judgement of visual similarity.
We use MSE, MAD and SSIM as perceptual metrics to indicate visual similarity in our experiments. There are better performing methods when considering human judgement, e.g. [38]. However, we cannot guarantee that humans always treat visuals and semantics as separate. Therefore, we avoid further methods that are motivated by human properties [34,35] or already incorporate semantic knowledge [15,7].
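The two simple perceptual measures, already converted to similarities via the 1/(1 + d) rule, can be sketched as follows; the SSIM line is only an assumed example using scikit-image, not a requirement of the method.

```python
import numpy as np

def mse_similarity(a, b):
    """(V1) mean squared error between two equally sized images, as a similarity."""
    return 1.0 / (1.0 + np.mean((a - b) ** 2))

def mad_similarity(a, b):
    """(V2) mean absolute difference, mapped to a similarity."""
    return 1.0 / (1.0 + np.mean(np.abs(a - b)))

# (V3) SSIM, e.g. with scikit-image (assumed to be installed):
# from skimage.metrics import structural_similarity
# s = structural_similarity(a, b, channel_axis=-1, data_range=1.0)
```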
Feature-based Measures
Features are extracted to represent images at an abstract level. Thus, distances in such a feature space of images correspond to visual similarity in a possibly more robust way than the aforementioned perceptual metrics. Features have inherent or learned invariances w.r.t. certain transformations that should not affect the notion of visual similarity strongly. However, learned features may also be invariant to transformations that do affect visual similarity because they are optimized for semantic distinction. This behavior needs to be considered when selecting abstract features to determine visual similarity.
GIST [20] is an image descriptor that aims at describing a whole scene using a small number of estimations of specific perceptual properties, such that similar content is close in the resulting feature space. It is based on the notion of a spatial envelope, inspired by architecture, that can be extracted from an image and used to calculate statistics.
For reference, we observe the confusions of five ResNet-32 [9] models to represent feature-based visual similarity on the highest level of abstraction. Because confusion is not a symmetric function, we apply a transform (M + M T )/2 to obtain a symmetric representation.
Evaluating the Relationship
Visual similarity and semantic similarity are measures defined on different domains. Semantic similarities compare concepts, but visual similarities compare individual images. To analyze a correlation, a common domain over which both can be evaluated is essential. We propose to calculate similarities over all pairs of classes in an image classification dataset, which can be defined for both visual and semantic similarities. These pairwise similarities are then tested for correlation. The process is clarified in the following:
1. Dataset. We use the CIFAR-100 dataset [12] to verify our hypotheses. This dataset has a scale at which all experiments take a reasonable amount of time. Our computation times grow quadratically with the number of classes as well as images. Hence, we do not consider ImageNet [4,26] or 80 million tiny images [33] despite their larger coverage of semantic concepts.
2. Semantic similarities. We calculate semantic similarity measures over all pairs of classes in the dataset. The taxonomic relation is-a is taken from WordNet [18] by mapping all classes in CIFAR-100 to their counterpart concepts in WordNet, inducing the graph G(C, is-a). Some measures are defined as distances or dissimilarities. We use the rule presented in section 2.1 to derive similarities. The following measures are evaluated over all pairs of concepts (x, y) ∈ C × C (see section 2):
(S1) Graph distance d_G(x, y) as proposed by Rada et al. [21]. (S2) Depth-aware similarity of Resnik [22], eq. (1). (S3) Intersection over union of concept features by Maedche and Staab [16]. (S4) Feature-based dissimilarity of Sanchez et al. [28], eq. (2). (S5) Distance of Jiang and Conrath [10], eq. (4), with the intrinsic IC of Zhou et al. [39], eq. (3).

3. Visual similarities. The following measures are evaluated to obtain pairwise visual similarities between classes (see section 3): (V1) Mean squared error (MSE). (V2) Mean absolute difference (MAD). (V3) Structural Similarity Index (SSIM) [37]. (V4) Distance between GIST descriptors [20] of images in feature space. (V5) Observed symmetric confusions of five ResNet-32 [9] models trained on the CIFAR-100 training set.
4. Aggregation. For both visual and semantic similarity, there is more than one candidate method, i.e. (S1)-(S5) and (V1)-(V5). For the following steps, we need a single measure for each type of similarity, which we aggregate from (S1)-(S5) and (V1)-(V5) respectively. Since each method has its merits, selecting only one each would not be representative of the type of similarity. The output of all candidate methods is normalized individually, such that its range is in [0, 1]. We then calculate the average over each type of similarity, i.e. visual and semantic, to obtain two distinct measures (S) and (V).
Baselines.
A basic assumption of machine learning is that "the domains occupied by features of different classes are separated" [19, p. 8]. Intuitively, this should apply to the images of different classes as well. We can then expect to predict at least some of the visual similarity between classes just by knowing whether the classes are identical or not. This knowledge is encoded in the semantic baseline (SB), defined as 1 for identical concepts and zero otherwise (see also section 2.1). We propose a second baseline, the semantic noise (SN), where the aforementioned pairwise semantic similarity (S) is calculated, but the concepts are permuted randomly. This baseline serves to assess the informativeness of the taxonomic relationships.
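A sketch of the aggregation step and the two baselines is given below; reading the per-measure normalisation as min-max scaling is our assumption about the exact protocol.

```python
import numpy as np

def aggregate(measures):
    """Min-max normalise each C x C similarity matrix, then average them."""
    normed = [(m - m.min()) / (m.max() - m.min()) for m in measures]
    return np.mean(normed, axis=0)

def semantic_baseline(n_classes):
    """(SB): 1 for identical concepts, 0 otherwise."""
    return np.eye(n_classes)

def semantic_noise(S, rng):
    """(SN): the aggregated semantic similarity with randomly permuted classes."""
    perm = rng.permutation(S.shape[0])
    return S[np.ix_(perm, perm)]
```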
Correlation
The similarity measures mentioned above are useful to define an order of similarity, i.e. whether a concept x is more similar to z than concept y. However, it is not reasonable in all cases to interpret them in a linear fashion like a dot product especially since many are derived from distances or dissimilarities and all were normalized from different ranges of values and then aggregated. We therefore test the similarities for correlation w.r.t. ranking, using Spearman's rank correlation coefficient [30] instead of looking for a linear relationship.
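One way to compute this rank correlation over all class pairs is sketched below; restricting the comparison to the upper triangle (each unordered pair counted once) is an assumption about the exact evaluation protocol.

```python
import numpy as np
from scipy.stats import spearmanr

def pairwise_rank_correlation(A, B):
    """Spearman's rho between two pairwise similarity matrices."""
    iu = np.triu_indices(A.shape[0], k=1)   # each unordered class pair once
    rho, p_value = spearmanr(A[iu], B[iu])
    return rho, p_value
```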
Results
In the following, we present the results of our experiments defined in the previous section. We first examine both types of similarity individually, comparing the five candidate methods each. Afterwards, the hypotheses proposed in section 1 are tested. We then perform a qualitative analysis of extreme cases in both similarities and investigate cases of (dis-)agreement between them.
Semantic Similarities
We first analyze the pairwise semantic similarities over all classes. Figure 2a shows the average semantic similarity (S) as specified in section 4. The classes on the axes are ordered by a depth first search through the class hierarchy, yielding clearly visible artifacts of the graph structure.
Although we consider semantic similarity to be a single measure when verifying our hypotheses, studying the correlation between our candidate methods (S1)-(S5) is also important. While of course affected by our selection, it reflects upon the degree of agreement between several experts in the domain. Figure 3a visualizes the correlations. The graph-based methods (S1) and (S2) agree more strongly with each other than with the rest. The same is true of the feature-based methods (S3) and (S4), which show the strongest correlation. The inter-agreement R, calculated by taking the average of all correlations except for the main diagonal, is 0.89. This is a strong agreement and suggests that the order of similarity between concepts can be, for the most part, considered representative of a universally agreed upon definition (if one existed). At the same time, one needs to consider that all methods utilize the same WordNet hierarchy.
Baselines Our semantic baseline (SB, see section 4) encodes the basic knowledge that different classes look different. This property should also be fulfilled by the average semantic similarity (S, see section 4). We thus expect there to be at least some correlation. The rank correlation between our average semantic similarity (S) and the semantic baseline (SB) is 0.17 with p < 0.05. This is a weak correlation compared to the strong inter-agreement of 0.89, which suggests that the similarities (S1)-(S5) are vastly more complex than (SB), but at the same time have a lot in common. As a second baseline we test the semantic noise (SN, see section 4). It is not correlated with (S) at ρ = 0.01, p > 0.05, meaning that the taxonomic relationship strongly affects (S). If it did not, the labels could be permuted without changing the pairwise similarities.
Visual Similarities
The average visual similarity (V) as estimated over all classes is shown in fig. 2b. For reference, we show the symmetric confusion matrix (see section 4) in fig. 2c. Comparing (V) to (S), the graph structure is less visible. In the confusion matrix, however, the artifacts are more pronounced.
Intuitively, visual similarity is a concept that is hard to define clearly and uniquely. Because we selected very different approaches with very different ideas and motivations behind them, we expect the agreement between (V1)-(V5) to be weak. Figure 3b shows the rank correlations between the candidate methods. The agreement is strongest between the mean squared error (V1) and the GIST feature distance (V4). Both are L2 distances, but calculated in separate domains, highlighting the strong nonlinearity and complexity of image descriptors. The inter-agreement is very weak at R = 0.17. The results confirm our intuition that visual similarity is very hard to define in mathematical terms. There is also no body of knowledge that all methods use in the visual domain like WordNet provides for semantics.
Hypotheses
To give a brief overview, the rank correlations between the different components of H 1 -H 3 are shown in fig. 4. In the following, we give our results w.r.t. the individual hypotheses. They are discussed further in section 5.5.
H 1 There is a link between visual similarity and semantic similarity. Using the definitions from section 4 including the semantic baseline (SB), we can examine the respective correlations. The rank correlation between (V) and (S) is 0.23, p < 0.05, indicating a link. Before we consider the hypothesis verified, we also evaluate what fraction of (V) is already explained by the semantic baseline (SB) as per our condition given in section 4. The rank correlation between (V) and (SB) is 0.17, p < 0.05, which is a weaker link than between (V) and (S). Additionally, (V) and (SN) are not correlated, illustrating that the wrong semantic knowledge can be worse than none. Thus, we can verify H 1 .
H 2 There is a link between visual similarity and model confusion. Since model confusion as (V5) is a contributor to average visual similarity (V), we consider only (V-), comprised of (V1)-(V4) for this hypothesis. The rank correlation between (V-) and the symmetric
Special Cases
In this section, we first perform a qualitative analysis of visual similarity and semantic similarity individually by looking at extreme values. We then inspect cases of strong agreement and disagreement between both.
Visual Figures 6a and 6b show the three most similar and three least similar concept pairs in CIFAR-100. The aggregated normalized visual similarity measures are not readily interpretable. Still, the choice of plain.n.01 and sea.n.01 as the most similar pair of concepts appears reasonable. Both classes have the horizon as a central feature, with half of the image showing the sky, which is also true for the second most similar combination, cloud.n.02 and sea.n.01. At the low resolution of CIFAR-100, the third most similar pair of maple.n.02 and oak.n.02 is hard to distinguish visually, except for the slightly larger range of maple hues. The three least similar pairs in CIFAR-100 are visually different on at least three levels. Globally, the colors are almost inverted. The round shapes of orange.n.01 clash with the comparatively hard edges of dolphin.n.02, ray.n.07 and shark.n.01 and locally, the textures are very different.
Semantic We also investigate the range of semantic similarities calculated over the CIFAR-100 dataset. Figure 7a shows examples of the most semantically similar concept pairs. fox.n.01 and wolf.n.01 are not only most similar semantically, but show a strong visual likeness, too. This also applies to otter.n.02 and skunk.n.04 as well as ray.n.07 and shark.n.01, which are both visually similar to a degree. When inspecting the most dissimilar pairs, there is one pair of cattle.n.01 and forest.n.01 where there is a reasonably strong visual similarity, hinting at a disagreement.
Agreement To further analyze the correlation, we examine specific cases of very strong agreement or disagreement. Figure 5 shows these extreme cases. We determine agreement based on ranking, so the most strongly agreed upon pairs (see fig. 5a) still show different absolute similarity numbers. Interestingly, they are not cases of extreme similarities. It suggests that even weak disagreements are more likely to be found at similarities close to the boundaries. When investigating strong disagreement as shown in fig. 5b, there are naturally extreme values to be found. All three pairs involve forest.n.01, which was also a part of the second least semantically similar pair. Its partners are all animals which usually have a background visually similar to a forest, hence the strong disagreement. However, the low semantic similarity is possibly an artifact of reducing a whole image to a single concept.
Discussion
H 1 : There is a link between visual similarity and semantic similarity. The relationship is stronger than a simple baseline, but weak overall at ρ = 0.23 vs ρ = 0.17. This should be considered when employing methods where visuals and semantics interact, e.g. in knowledge transfer. Failure cases such as in fig. 5b can only be found when labels are known, which has implications for real-life applications of semantic methods. As labels are unknown or lack visual examples, such cases are not predictable beforehand. This poses problems for applications that rely on accurate classification such as safety-critical equipment or even research in other fields consuming model predictions. A real-world example is wildlife conservationists relying on statistics from automatic camera trap image classification to draw conclusions on biodiversity. That semantic similarity of randomly permuted classes is not correlated with visual similarity at all, while the baseline is, suggests that wrong semantic knowledge can be much worse than no knowledge. H 2 : There is a link between visual similarity and model confusion. Visual similarity is defined on a low level for H 2 . As such, it should not cause model confusion by itself. On the one hand, the model can fail to generalize and cause an avoidable confusion. On the other hand, there may be an issue with the dataset. The test set may be sampled from a different distribution than the training set. It may also violate the postulate that different classes look different by containing the same or similar images across classes.
H 3 : There is a link between semantic similarity and model confusion. Similar to H 1 , it suggests that semantic methods could be applied to our data, but maybe not in general because failure cases are unpredictable. However, it implies a stronger effectiveness than H 1 at ρ = 0.39 vs. the baseline at ρ = 0.21. We attribute this to the model's capability of abstraction. It aligns with the idea of taxonomy, which is based on repeated abstraction of concepts. Using a formulation that optimizes semantic similarity instead of cross-entropy (which would correspond to the semantic baseline) or even a hierarchical classifier can be recommended in our situation. It may still not generalize to other settings and any real-world application of such methods should be verified with at least a small test set.
Qualitative Some failures or disagreements may not be a result of the relationship itself, but of its application to image classification. The example from fig. 1 is valid when the whole image is reduced to a single concept. Still, the agreement between visual and semantic similarity may increase when the image is described in a more holistic fashion. While "deer" and "forest" as nouns are taxonomically only loosely related, the descriptions "A deer standing in a forest, partially occluded by a tree and tall grass" and "A forest composed of many trees and bushes, with the daytime sky visible" already appear more similar, while those descriptions are still missing some of the images' contents. This suggests that more complex tasks than image classification stand to benefit more from semantic methods.
In further research, not only nouns should be considered, but also adjectives, decompositions of objects into parts as well as working with a more general notion of semantic relatedness instead of simply semantic similarity. Datasets like Visual Genome [11] offer more complex annotations mapped to WordNet concepts that could be subjected to further study. However, tasks much more complex than hierarchical image classification on a semantic level lack a compelling real-world application to the best of our knowledge.
Conclusion
We present results of a comprehensive evaluation of semantic similarity measures and their correlation with visual similarities. We measure against the simple prior knowledge of different classes having different visuals. Then, we show that the relationship between semantic similarity, as calculated from WordNet [18] using five different methods, and visual similarity, also represented by five measures, is more meaningful than that. Furthermore, inter-agreement measures suggest that semantic similarity has a more agreed upon definition than visual similarity, although both concepts are based on human perception.
The results indicate that further research, especially into tasks different from image classification is warranted because of the semantically reductive nature of image labels. It may restrict the performance of semantic methods.
| 4,413 |
1811.07120
|
2900626851
|
Knowledge transfer, zero-shot learning and semantic image retrieval are methods that aim at improving accuracy by utilizing semantic information, e.g. from WordNet. It is assumed that this information can augment or replace missing visual data in the form of labeled training images because semantic similarity somewhat aligns with visual similarity. This assumption may seem trivial, but is crucial for the application of such semantic methods. Any violation can cause mispredictions. Thus, it is important to examine the visual-semantic relationship for a certain target problem. In this paper, we use five different semantic and visual similarity measures each to thoroughly analyze the relationship without relying too much on any single definition. We postulate and verify three highly consequential hypotheses on the relationship. Our results show that it indeed exists and that WordNet semantic similarity carries more information about visual similarity than just the knowledge of "different classes look different". They suggest that classification is not the ideal application for semantic methods and that wrong semantic information is much worse than none.
|
Rodner et al. show a method that trains Gaussian processes from few examples in @cite_32 . It uses knowledge transfer between related classes to enable few-shot learning with reasonable accuracy. The WordNet hierarchy @cite_11 supplies the relationships between concepts that are ultimately used to steer the knowledge transfer. When compared to individual GP learning of classes, knowledge transfer improves accuracy.
|
{
"abstract": [
"Knowledge transfer from related object categories is a key concept to allow learning with few training examples. We present how to use dependent Gaussian processes for transferring knowledge from a related category in a non-parametric Bayesian way. Our method is able to select this category automatically using efficient model selection techniques. We show how to optionally incorporate semantic similarities obtained from the hierarchical lexical database WordNet [1] into the selection process. The framework is applied to image categorization tasks using state-of-the-art image-based kernel functions. A large scale evaluation shows the benefits of our approach compared to independent learning and a SVM based approach.",
"Because meaningful sentences are composed of meaningful words, any system that hopes to process natural languages as people do must have information about words and their meanings. This information is traditionally provided through dictionaries, and machine-readable dictionaries are now widely available. But dictionary entries evolved for the convenience of human readers, not for machines. WordNet 1 provides a more effective combination of traditional lexicographic information and modern computing. WordNet is an online lexical database designed for use under program control. English nouns, verbs, adjectives, and adverbs are organized into sets of synonyms, each representing a lexicalized concept. Semantic relations link the synonym sets [4]."
],
"cite_N": [
"@cite_32",
"@cite_11"
],
"mid": [
"1551592259",
"2081580037"
]
}
|
Not just a matter of semantics: the relationship between visual similarity and semantic similarity
|
There exist applications in which labeled training data cannot be acquired in amounts sufficient to reach the high accuracy associated with contemporary convolutional neural networks (CNNs) with millions of parameters. These include industrial [13,17] and medical [14,27,32] applications as well as research in other fields like wildlife monitoring [2,3,6]. Semantic methods such as knowledge transfer and zero-shot learning consume information about the semantic relationship between classes from databases like WordNet [18] to allow high-accuracy classification even when training data is insufficient or missing entirely [24]. They can only function when the unknown visual class relationships are predictable by the semantic relationships.
(a) A deer and a forest. By taxonomy only, their semantic similarity is weak. Visual similarity however is strong.
(b) An orchid and a sunflower. Their semantic similarity is very strong due to them both being flowers. The visual similarity between them is weak. Figure 1: Examples of semantic-visual disagreement.
In this paper, we analyze and test this crucial assumption by evaluating the relationship between visual and semantic similarity in a detailed and systematic fashion.
To guide our analysis, we formulate three highly consequential, non-trivial hypotheses around the visual-semantic relationship. The exact nature of the links and the similarity terms is specified in section 4. Our first hypothesis concerns the relationship itself:
H 1 There is a link between visual similarity and semantic similarity. It seems trivial on the surface, but each individual component requires a proper, nontrivial definition to ultimately make the hypothesis verifiable (see section 4). The observed effectiveness of semantic methods suggests that knowledge about semantic relationships is somewhat applicable in the visual domain. However, counter-examples are easily found, e.g. figs. 1 and 5. Furthermore, a basic notion of semantic similarity is already contained in the expectation that "different classes look different" (see section 2.1). A similarity measure based on actual semantic knowledge should be linked more strongly to visual similarity than this simple baseline.
Semantic methods seek to optimize accuracy and in turn model confusion, but confusion and visual similarity are not trivially related. Insights about the low-level visual similarity may not be applicable to the more abstract confusion. To cover not only largely model-free, but also model-specific notions, we formulate our second and third hypotheses:
H 2 There is a link between visual similarity and model confusion. When considering low inter-class distance in a feature space to be a contributor to confusion, it could also be one in the visual domain. This link strongly depends on the selected features and classifier, but it could also be affected by violations of "different classes look different" in the dataset.
H 3 There is a link between semantic similarity and model confusion. This link should be investigated because it directly relates to the goal of semantic methods, which is to reduce confusion by adding semantic information. It "skips" the low-level visual component and as such is interesting on its own. The expectation that "different classes look different" can already explain the complete confusion matrix of a perfect classifier. We also expect it to partly explain a real classifier's confusions. So to consider H 3 verified, we require semantic similarity to show an even stronger correlation to confusion than this baseline.
Our main contribution is an extensive and insightful evaluation of this relationship across five different semantic and visual similarity measures respectively. It is based on the three aforementioned hypotheses around the relationship. We show quantitative results measuring the agreement between individual measures and across visual and semantic similarities as rank correlation. Moreover, we analyze special cases of agreement and disagreement qualitatively. The results and their various implications are discussed in section 5.5. They suggest that, while the relationship exists even beyond the "different classes look different" baseline, even more investigation is warranted into tasks different from classification because of the semantically reductive nature of class labels. Hence, semantic methods may perform better on more complex tasks.
Semantic Similarity
The term semantic similarity describes the degree to which two concepts interact semantically. A common definition requires taking into account only the taxonomical (hierarchical) relationship between the concepts [8, p. 10]. A more general notion is semantic relatedness, where any type of semantic link may be considered [8, p. 10]. Both are semantic measures, which also include distances and dissimilarities [8, p. 9]. We adhere to these definitions in this work, specifically the hierarchical restriction of semantic similarity.
Prerequisites
In certain cases, it is easier to formulate a semantic measure based on hierarchical relationships as a distance first. Such a distance d between two concepts x, y can be converted to a similarity by 1/(1 + d(x, y)) [8, p. 60]. This results in a measure bounded by (0, 1], where 1 stands for maximal similarity, i.e. the distance is zero. We will apply this rule to convert all distances to similarities in our experiments. We also apply it to dissimilarities, which are comparable to distances, but do not fulfill the triangle inequality.
Semantic Baseline When training a classifier without using semantic embeddings or hierarchical classification techniques [29], there is still prior information about semantic similarity given by the classification problem itself. Specifically, it is postulated that "classes that are different look different" (see section 4). Machine learning can not work if this assumption is violated such that different classes look identical. We encode this "knowledge" as a semantic similarity measure, defined as 1 for two identical concepts and zero otherwise. It will serve as a baseline for comparison with all other similarities.
Graph-based Similarities
We can describe a directed acyclic graph G(C, is-a) using the taxonomic relation is-a and the set of all concepts C. The following notions of semantic similarity can be expressed using properties of this graph. The graph distance d_G(x, y) between two nodes x, y, which is defined as the length of the shortest path between x and y, is an important example. If required, we reduce the graph G to a rooted tree T with root r by iterating through all nodes with multiple ancestors and successively removing the edges to ancestors with the lowest amount of successors. In a tree, we can then define the depth of a concept x as d_T(x) = d_T(r, x).
A simple approach is presented by Rada et al. in [21, p. 20], where the semantic distance between two concepts x and y is defined as the graph distance d G (x, y) between one concept and the other in G.
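As an illustration of this graph-based measure, the following Python sketch computes the Rada graph distance between two WordNet synsets and turns it into a similarity with the 1/(1 + d) rule from section 2.1. It assumes NLTK with the WordNet corpus installed; the example synsets are arbitrary stand-ins for mapped CIFAR-100 classes, not the paper's actual pipeline.

```python
# A minimal sketch of the Rada et al. graph-distance similarity, assuming NLTK
# and its WordNet corpus are available (nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

def rada_similarity(synset_a, synset_b):
    """Shortest-path distance in the is-a graph, converted to a similarity via 1/(1+d)."""
    d = synset_a.shortest_path_distance(synset_b)  # hop count along hypernym/hyponym edges
    if d is None:          # no connecting path (e.g. different parts of speech)
        return 0.0
    return 1.0 / (1.0 + d)

# Example class pairs in CIFAR-100 style (illustrative choices).
print(rada_similarity(wn.synset('fox.n.01'), wn.synset('wolf.n.01')))
print(rada_similarity(wn.synset('fox.n.01'), wn.synset('orange.n.01')))
```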
To make similarities comparable between different taxonomies, it may be desirable to take the overall depth of the hierarchy into account. Resnik presents such an approach for trees in [22], considering the maximal depth of T and the least common ancestor L(x, y). It is the uniquely defined node in the shortest path between two concepts x and y that is an ancestor to both [8, p. 61]. The similarity between x and y is then given as [22, p. 3]:
$$2 \cdot \max_{z \in C} d_T(z) - d_T(x, L(x, y)) - d_T(y, L(x, y)). \qquad (1)$$
Feature-based Similarities
The following approaches use a set-theoretic view of semantics. The set of features φ(x) of a concept x is usually defined as the set of ancestors A(x) of x [8]. We include the concept x itself, such that φ(x) = A(x) ∪ {x} [28].
Inspired by the Jaccard coefficient, Maedche and Staab propose a similarity measure defined as the intersection over union of the concept features of x and y respectively [16, p. 4]. This similarity is bounded by [0, 1], with identical concepts always resulting in 1.
Sanchez et al. present a dissimilarity measure that represents the ratio of distinct features to shared features of two concepts. It is defined by [28, p. 7723]:
$$\log_2\left(1 + \frac{|\phi(x) \setminus \phi(y)| + |\phi(y) \setminus \phi(x)|}{|\phi(x) \setminus \phi(y)| + |\phi(y) \setminus \phi(x)| + |\phi(y) \cap \phi(x)|}\right). \qquad (2)$$
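To make the two feature-based definitions concrete, the sketch below builds φ(x) as the hypernym closure of a synset plus the synset itself and evaluates both the intersection-over-union similarity and the dissimilarity of eq. (2). NLTK's WordNet interface and the chosen synsets are illustrative assumptions rather than the paper's exact setup.

```python
# Sketch of the two feature-based semantic measures; assumes NLTK's WordNet corpus.
import math
from nltk.corpus import wordnet as wn

def features(synset):
    """phi(x): all hypernym ancestors of x plus x itself."""
    return set(synset.closure(lambda s: s.hypernyms())) | {synset}

def maedche_staab(a, b):
    fa, fb = features(a), features(b)
    return len(fa & fb) / len(fa | fb)               # intersection over union, in [0, 1]

def sanchez_dissimilarity(a, b):
    fa, fb = features(a), features(b)
    distinct = len(fa - fb) + len(fb - fa)
    shared = len(fa & fb)
    return math.log2(1 + distinct / (distinct + shared))   # eq. (2)

x, y = wn.synset('ray.n.07'), wn.synset('shark.n.01')      # illustrative class pair
print(maedche_staab(x, y), 1.0 / (1.0 + sanchez_dissimilarity(x, y)))
```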
Information-based Similarities
Semantic similarity is also defined using the notion of informativeness of a concept, inspired by information theory. Each concept x is assigned an Information Content (IC) I(x) [22,25]. This can be defined using only properties of the taxonomy, i.e. the graph G (intrinsic IC), or using the probability of observing the concept in corpora (extrinsic IC) [8, p. 54].
We use an intrinsic definition presented by Zhou et al. in [39], based on the descendants D(x):
$$I(x) = k \cdot \left(1 - \frac{|D(x)|}{|C|}\right) + (1 - k) \cdot \frac{\log(d_T(x))}{\log(\max_{z \in C} d_T(z))}. \qquad (3)$$
With a definition of IC, we can apply an information-based similarity measure. Jiang and Conrath propose a semantic distance in [10] using the notion of Most Informative Common Ancestor M(x, y) of two concepts x, y. It is defined as the element in (A(x) ∩ A(y)) ∪ (x ∩ y) with the highest IC [8, p. 65]. The distance is then defined as [10, p. 8]:
$$I(x) + I(y) - 2 \cdot I(M(x, y)). \qquad (4)$$
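The information-based measure can be sketched in the same setting: the intrinsic IC of eq. (3) is estimated from hyponym counts and synset depths, and the Jiang-Conrath distance of eq. (4) uses the common ancestor with the highest IC. The weighting k = 0.5 and the use of min_depth() for d_T are assumptions made here for illustration; the paper does not commit to these exact choices.

```python
# Sketch of the Jiang-Conrath distance with the intrinsic IC of eq. (3).
# Assumes NLTK's WordNet corpus; k = 0.5 and min_depth() as d_T are illustrative choices.
import math
from nltk.corpus import wordnet as wn

ALL_NOUNS = list(wn.all_synsets('n'))
N = len(ALL_NOUNS)
MAX_DEPTH = max(s.min_depth() for s in ALL_NOUNS)
K = 0.5

def intrinsic_ic(synset):
    descendants = set(synset.closure(lambda s: s.hyponyms()))  # slow for broad concepts, fine for a sketch
    depth = max(synset.min_depth(), 1)                         # avoid log(0) at the root
    return (K * (1 - len(descendants) / N)
            + (1 - K) * math.log(depth) / math.log(MAX_DEPTH)) # eq. (3)

def jiang_conrath(a, b):
    ancestors = ((set(a.closure(lambda s: s.hypernyms())) | {a})
                 & (set(b.closure(lambda s: s.hypernyms())) | {b}))
    mica_ic = max((intrinsic_ic(s) for s in ancestors), default=0.0)  # M(x, y)
    return intrinsic_ic(a) + intrinsic_ic(b) - 2 * mica_ic            # eq. (4)

x, y = wn.synset('fox.n.01'), wn.synset('wolf.n.01')
print(1.0 / (1.0 + jiang_conrath(x, y)))
```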
Perceptual Metrics
Perceptual metrics are usually employed to quantify the distortion or information loss incurred by using compression algorithms. Such methods aim to minimize the difference between the original image and the compressed image and thereby maximize the similarity between both. However, perceptual metrics can also be used to assess the similarity of two independent images.
An image can be represented by an element of a highdimensional vector space. In this case, the Euclidean distance is a natural candidate for a dissimilarity measure. With the rule 1/(1 + d) from section 2.1, the distance is transformed into a visual similarity measure. To normalize the measure w.r.t. image dimensions and to simplify calculations, the mean squared error (MSE) is used. Applying the MSE to estimate image similarity has shortcomings. For example, shifting an image by one pixel significantly changes the distances to other images, including its unshifted self [31]. An alternative, but related measure is the mean absolute difference (MAD), which we also consider in our experiments.
In [37], Wang et al. develop a perceptual metric called the Structural Similarity Index to address shortcomings of previous methods. Specifically, they consider properties of the human visual system such that the index better reflects human judgement of visual similarity.
We use MSE, MAD and SSIM as perceptual metrics to indicate visual similarity in our experiments. There are better performing methods when considering human judgement, e.g. [38]. However, we cannot guarantee that humans always treat visuals and semantics as separate. Therefore, we avoid further methods that are motivated by human properties [34,35] or already incorporate semantic knowledge [15,7].
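A minimal sketch of the three perceptual measures on a single image pair, assuming scikit-image and 32 x 32 grayscale images scaled to [0, 1] (roughly the CIFAR-100 setting), could look as follows; MSE and MAD are converted to similarities with the 1/(1 + d) rule.

```python
# Sketch of the perceptual visual measures (MSE, MAD, SSIM) on one image pair.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
img_a = rng.random((32, 32))          # placeholders for two CIFAR-100 images
img_b = rng.random((32, 32))

mse = np.mean((img_a - img_b) ** 2)
mad = np.mean(np.abs(img_a - img_b))
ssim = structural_similarity(img_a, img_b, data_range=1.0)

# MSE and MAD are distances, so apply the 1/(1+d) rule; SSIM is already a similarity.
print(1 / (1 + mse), 1 / (1 + mad), ssim)
```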
Feature-based Measures
Features are extracted to represent images at an abstract level. Thus, distances in such a feature space of images correspond to visual similarity in a possibly more robust way than the aforementioned perceptual metrics. Features have inherent or learned invariances w.r.t. certain transformations that should not affect the notion of visual similarity strongly. However, learned features may also be invariant to transformations that do affect visual similarity because they are optimized for semantic distinction. This behavior needs to be considered when selecting abstract features to determine visual similarity.
GIST [20] is an image descriptor that aims at describing a whole scene using a small number of estimations of specific perceptual properties, such that similar content is close in the resulting feature space. It is based on the notion of a spatial envelope, inspired by architecture, that can be extracted from an image and used to calculate statistics.
For reference, we observe the confusions of five ResNet-32 [9] models to represent feature-based visual similarity on the highest level of abstraction. Because confusion is not a symmetric function, we apply a transform (M + M^T)/2 to obtain a symmetric representation.
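The symmetrization itself is a one-liner; the sketch below additionally averages the confusion matrices of several models, which is one plausible way to combine the five ResNet-32 runs (the exact combination scheme is not stated in the text).

```python
# Sketch: turning per-model confusion matrices into a symmetric class-pair measure.
import numpy as np

def symmetric_confusion(confusion_matrices):
    """confusion_matrices: iterable of (C x C) arrays, rows = true class, cols = prediction."""
    m = np.mean(np.stack(list(confusion_matrices)), axis=0)   # average over models (assumption)
    return (m + m.T) / 2                                       # (M + M^T)/2 as in the text

# Toy example with 3 classes and 2 "models".
m1 = np.array([[8, 1, 1], [2, 7, 1], [0, 3, 7]], dtype=float)
m2 = np.array([[9, 0, 1], [1, 8, 1], [1, 2, 7]], dtype=float)
print(symmetric_confusion([m1, m2]))
```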
Evaluating the Relationship
Visual similarity and semantic similarity are measures defined on different domains. Semantic similarities compare concepts, but visual similarities compare individual images. To analyze a correlation, a common domain over which both can be evaluated is essential. We propose to calculate similarities over all pairs of classes in an image classification dataset, which can be defined for both visual and semantic similarities. These pairwise similarities are then tested for correlation. The process is clarified in the following:
1. Dataset. We use the CIFAR-100 dataset [12] to verify our hypotheses. This dataset has a scale at which all experiments take a reasonable amount of time. Our computation times grow quadratically with the number of classes as well as images. Hence, we do not consider ImageNet [4,26] or 80 million tiny images [33] despite their larger coverage of semantic concepts.
2. Semantic similarities. We calculate semantic similarity measures over all pairs of classes in the dataset. The taxonomic relation is-a is taken from WordNet [18] by mapping all classes in CIFAR-100 to their counterpart concepts in WordNet, inducing the graph G(C, is-a). Some measures are defined as distances or dissimilarities. We use the rule presented in section 2.1 to derive similarities. The following measures are evaluated over all pairs of concepts (x, y) ∈ C × C (see section 2):
(S1) Graph distance d_G(x, y) as proposed by Rada et al. [21]. (S2) The depth-aware similarity of eq. (1) by Resnik [22]. (S3) Intersection over union of concept features by Maedche and Staab [16]. (S4) The feature-based dissimilarity of eq. (2) by Sanchez et al. [28]. (S5) The information-based distance of eq. (4) by Jiang and Conrath [10] with the intrinsic IC of eq. (3).
3. Visual similarities. We estimate visual similarity between classes from the images of the dataset using the following measures (see section 3): (V1) Mean squared error (MSE). (V2) Mean absolute difference (MAD). (V3) Structural Similarity Index (SSIM) [37]. (V4) Distance between GIST descriptors [20] of images in feature space. (V5) Observed symmetric confusions of five ResNet-32 [9] models trained on the CIFAR-100 training set.
4. Aggregation. For both visual and semantic similarity, there is more than one candidate method, i.e. (S1)-(S5) and (V1)-(V5). For the following steps, we need a single measure for each type of similarity, which we aggregate from (S1)-(S5) and (V1)-(V5) respectively. Since each method has its merits, selecting only one each would not be representative of the type of similarity. The output of all candidate methods is normalized individually, such that its range is in [0, 1]. We then calculate the average over each type of similarity, i.e. visual and semantic, to obtain two distinct measures (S) and (V).
5. Baselines. A basic assumption of machine learning is that "the domains occupied by features of different classes are separated" [19, p. 8]. Intuitively, this should apply to the images of different classes as well. We can then expect to predict at least some of the visual similarity between classes just by knowing whether the classes are identical or not. This knowledge is encoded in the semantic baseline (SB), defined as 1 for identical concepts and zero otherwise (see also section 2.1). We propose a second baseline, the semantic noise (SN), where the aforementioned pairwise semantic similarity (S) is calculated, but the concepts are permuted randomly. This baseline serves to assess the informativeness of the taxonomic relationships.
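A compact sketch of the aggregation step and of the two baselines, assuming that every candidate measure has already been evaluated into a C x C matrix of pairwise class similarities:

```python
# Sketch of the aggregation step and the (SB)/(SN) baselines over C classes.
import numpy as np

def normalize(m):
    """Min-max normalize a pairwise similarity matrix to [0, 1]."""
    return (m - m.min()) / (m.max() - m.min())

def aggregate(candidate_matrices):
    """Average the individually normalized candidates, e.g. (S1)-(S5) -> (S)."""
    return np.mean([normalize(m) for m in candidate_matrices], axis=0)

def semantic_baseline(num_classes):
    """(SB): 1 for identical concepts, 0 otherwise."""
    return np.eye(num_classes)

def semantic_noise(S, seed=0):
    """(SN): the aggregated semantic similarity with randomly permuted class labels."""
    perm = np.random.default_rng(seed).permutation(S.shape[0])
    return S[np.ix_(perm, perm)]
```

With these helpers, (S) and (V) are simply aggregate(...) over the respective five candidate matrices.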
Correlation
The similarity measures mentioned above are useful to define an order of similarity, i.e. whether a concept x is more similar to a concept z than a concept y is. However, it is not reasonable in all cases to interpret them in a linear fashion like a dot product, especially since many are derived from distances or dissimilarities and all were normalized from different ranges of values and then aggregated. We therefore test the similarities for correlation w.r.t. ranking, using Spearman's rank correlation coefficient [30] instead of looking for a linear relationship.
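In code, the evaluation reduces to flattening each pairwise matrix into a vector of class-pair values and applying Spearman's coefficient; the same helper also yields the inter-agreement R used below, i.e. the mean pairwise correlation between candidate measures. This is a sketch assuming SciPy and the matrices from the previous step.

```python
# Sketch of the rank-correlation evaluation between pairwise similarity matrices.
import numpy as np
from scipy.stats import spearmanr

def pairs(m):
    """Flatten the strict upper triangle, i.e. all distinct unordered class pairs."""
    iu = np.triu_indices(m.shape[0], k=1)
    return m[iu]

def rank_correlation(sim_a, sim_b):
    rho, p_value = spearmanr(pairs(sim_a), pairs(sim_b))
    return rho, p_value

def inter_agreement(candidate_matrices):
    """Mean pairwise Spearman correlation between candidates, excluding self-correlations."""
    rhos = [rank_correlation(a, b)[0]
            for i, a in enumerate(candidate_matrices)
            for j, b in enumerate(candidate_matrices) if i < j]
    return float(np.mean(rhos))
```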
Results
In the following, we present the results of our experiments defined in the previous section. We first examine both types of similarity individually, comparing the five candidate methods each. Afterwards, the hypotheses proposed in section 1 are tested. We then perform a qualitative analysis of extreme cases in both similarities and investigate cases of (dis-)agreement between them.
Semantic Similarities
We first analyze the pairwise semantic similarities over all classes. Figure 2a shows the average semantic similarity (S) as specified in section 4. The classes on the axes are ordered by a depth first search through the class hierarchy, yielding clearly visible artifacts of the graph structure.
Although we consider semantic similarity to be a single measure when verifying our hypotheses, studying the correlation between our candidate methods (S1)-(S5) is also important. While of course affected by our selection, it reflects upon the degree of agreement between several experts in the domain. Figure 3a visualizes the correlations. The graph-based methods (S1) and (S2) agree more strongly with each other than with the rest. The same is true of the feature-based methods (S3) and (S4), which show the strongest correlation. The inter-agreement R, calculated by taking the average of all correlations except for the main diagonal, is 0.89. This is a strong agreement and suggests that the order of similarity between concepts can be, for the most part, considered representative of a universally agreed upon definition (if one existed). At the same time, one needs to consider that all methods utilize the same WordNet hierarchy.
Baselines Our semantic baseline (SB, see section 4) encodes the basic knowledge that different classes look different. This property should also be fulfilled by the average semantic similarity (S, see section 4). We thus expect there to be at least some correlation. The rank correlation between our average semantic similarity (S) and the semantic baseline (SB) is 0.17 with p < 0.05. This is a weak correlation compared to the strong inter-agreement of 0.89, which suggests that the similarities (S1)-(S5) are vastly more complex than (SB), but at the same time have a lot in common. As a second baseline we test the semantic noise (SN, see section 4). It is not correlated with (S) at ρ = 0.01, p > 0.05, meaning that the taxonomic relationship strongly affects (S). If it did not, the labels could be permuted without changing the pairwise similarities.
Visual Similarities
The average visual similarity (V) as estimated over all classes is shown in fig. 2b. For reference, we show the symmetric confusion matrix (see section 4) in fig. 2c. Comparing (V) to (S), the graph structure is less visible. In the confusion matrix, however, the artifacts are more pronounced.
Intuitively, visual similarity is a concept that is hard to define clearly and uniquely. Because we selected very different approaches with very different ideas and motivations behind them, we expect the agreement between (V1)-(V5) to be weak. Figure 3b shows the rank correlations between each candidate method. The agreement is strongest between the mean squared error (V1) and the GIST feature distance (V4). Both are L2 distances, but calculated in separate domains, highlighting the strong nonlinearity and complexity of image descriptors. The inter-agreement is very weak at R = 0.17. The results confirm our intuition that visual similarity is very hard to define in mathematical terms. There is also no body of knowledge that all methods use in the visual domain like WordNet provides for semantics.
Hypotheses
To give a brief overview, the rank correlations between the different components of H 1 -H 3 are shown in fig. 4. In the following, we give our results w.r.t. the individual hypotheses. They are discussed further in section 5.5.
H 1 There is a link between visual similarity and semantic similarity. Using the definitions from section 4 including the semantic baseline (SB), we can examine the respective correlations. The rank correlation between (V) and (S) is 0.23, p < 0.05, indicating a link. Before we consider the hypothesis verified, we also evaluate what fraction of (V) is already explained by the semantic baseline (SB) as per our condition given in section 4. The rank correlation between (V) and (SB) is 0.17, p < 0.05, which is a weaker link than between (V) and (S). Additionally, (V) and (SN) are not correlated, illustrating that the wrong semantic knowledge can be worse than none. Thus, we can verify H 1 .
H 2 There is a link between visual similarity and model confusion. Since model confusion as (V5) is a contributor to average visual similarity (V), we consider only (V-), comprised of (V1)-(V4) for this hypothesis. The rank correlation between (V-) and the symmetric
Special Cases
In this section, we first perform a qualitative analysis of visual similarity and semantic similarity individually by looking at extreme values. We then inspect cases of strong agreement and disagreement between both.
Visual Figures 6a and 6b show the three most similar and three least similar concept pairs in CIFAR-100. The aggregated normalized visual similarity measures are not readily interpretable. Still, the choice of plain.n.01 and sea.n.01 as the most similar pair of concepts appears reasonable. Both classes have the horizon as a central feature, with half of the image showing the sky, which is also true for the second most similar combination, cloud.n.02 and sea.n.01. At the low resolution of CIFAR-100, the third most similar pair of maple.n.02 and oak.n.02 is hard to distinguish visually, except for the slightly larger range of maple hues. The three least similar pairs in CIFAR-100 are visually different on at least three levels. Globally, the colors are almost inverted. The round shapes of orange.n.01 clash with the comparatively hard edges of dolphin.n.02, ray.n.07 and shark.n.01 and locally, the textures are very different.
Semantic We also investigate the range of semantic similarities calculated over the CIFAR-100 dataset. Figure 7a shows examples of the most semantically similar concept pairs. fox.n.01 and wolf.n.01 are not only most similar semantically, but show a strong visual likeness, too. This also applies to otter.n.02 and skunk.n.04 as well as ray.n.07 and shark.n.01, which are both visually similar to a degree. When inspecting the most dissimilar pairs, there is one pair of cattle.n.01 and forest.n.01 where there is a reasonably strong visual similarity, hinting at a disagreement.
Agreement To further analyze the correlation, we examine specific cases of very strong agreement or disagreement. Figure 5 shows these extreme cases. We determine agreement based on ranking, so the most strongly agreed upon pairs (see fig. 5a) still show different absolute similarity numbers. Interestingly, they are not cases of extreme similarities. It suggests that even weak disagreements are more likely to be found at similarities close to the boundaries. When investigating strong disagreement as shown in fig. 5b, there are naturally extreme values to be found. All three pairs involve forest.n.01, which was also a part of the second least semantically similar pair. Its partners are all animals which usually have a background visually similar to a forest, hence the strong disagreement. However, the low semantic similarity is possibly an artifact of reducing a whole image to a single concept.
Discussion
H 1 : There is a link between visual similarity and semantic similarity. The relationship is stronger than a simple baseline, but weak overall at ρ = 0.23 vs ρ = 0.17. This should be considered when employing methods where visuals and semantics interact, e.g. in knowledge transfer. Failure cases such as in fig. 5b can only be found when labels are known, which has implications for real-life applications of semantic methods. As labels are unknown or lack visual examples, such cases are not predictable beforehand. This poses problems for applications that rely on accurate classification such as safety-critical equipment or even research in other fields consuming model predictions. A real-world example is wildlife conservationists relying on statistics from automatic camera trap image classification to draw conclusions on biodiversity. That semantic similarity of randomly permuted classes is not correlated with visual similarity at all, while the baseline is, suggests that wrong semantic knowledge can be much worse than no knowledge. H 2 : There is a link between visual similarity and model confusion. Visual similarity is defined on a low level for H 2 . As such, it should not cause model confusion by itself. On the one hand, the model can fail to generalize and cause an avoidable confusion. On the other hand, there may be an issue with the dataset. The test set may be sampled from a different distribution than the training set. It may also violate the postulate that different classes look different by containing the same or similar images across classes.
H 3 : There is a link between semantic similarity and model confusion. Similar to H 1 , it suggests that semantic methods could be applied to our data, but maybe not in general because failure cases are unpredictable. However, it implies a stronger effectiveness than H 1 at ρ = 0.39 vs. the baseline at ρ = 0.21. We attribute this to the model's capability of abstraction. It aligns with the idea of taxonomy, which is based on repeated abstraction of concepts. Using a formulation that optimizes semantic similarity instead of cross-entropy (which would correspond to the semantic baseline) or even a hierarchical classifier can be recommended in our situation. It may still not generalize to other settings and any real-world application of such methods should be verified with at least a small test set.
Qualitative Some failures or disagreements may not be a result of the relationship itself, but of its application to image classification. The example from fig. 1 is valid when the whole image is reduced to a single concept. Still, the agreement between visual and semantic similarity may increase when the image is described in a more holistic fashion. While "deer" and "forest" as nouns are taxonomically only loosely related, the descriptions "A deer standing in a forest, partially occluded by a tree and tall grass" and "A forest composed of many trees and bushes, with the daytime sky visible" already appear more similar, while those descriptions are still missing some of the images' contents. This suggests that more complex tasks than image classification stand to benefit more from semantic methods.
In further research, not only nouns should be considered, but also adjectives, decompositions of objects into parts as well as working with a more general notion of semantic relatedness instead of simply semantic similarity. Datasets like Visual Genome [11] offer more complex annotations mapped to WordNet concepts that could be subjected to further study. However, tasks much more complex than hierarchical image classification on a semantic level lack a compelling real-world application to the best of our knowledge.
Conclusion
We present results of a comprehensive evaluation of semantic similarity measures and their correlation with visual similarities. We measure against the simple prior knowledge of different classes having different visuals. Then, we show that the relationship between semantic similarity, as calculated from WordNet [18] using five different methods, and visual similarity, also represented by five measures, is more meaningful than that. Furthermore, inter-agreement measures suggest that semantic similarity has a more agreed upon definition than visual similarity, although both concepts are based on human perception.
The results indicate that further research, especially into tasks other than image classification, is warranted, because the semantically reductive nature of image labels may restrict the performance of semantic methods.
| 4,413 |
1811.07120
|
2900626851
|
Knowledge transfer, zero-shot learning and semantic image retrieval are methods that aim at improving accuracy by utilizing semantic information, e.g. from WordNet. It is assumed that this information can augment or replace missing visual data in the form of labeled training images because semantic similarity somewhat aligns with visual similarity. This assumption may seem trivial, but is crucial for the application of such semantic methods. Any violation can cause mispredictions. Thus, it is important to examine the visual-semantic relationship for a certain target problem. In this paper, we use five different semantic and visual similarity measures each to thoroughly analyze the relationship without relying too much on any single definition. We postulate and verify three highly consequential hypotheses on the relationship. Our results show that it indeed exists and that WordNet semantic similarity carries more information about visual similarity than just the knowledge of "different classes look different". They suggest that classification is not the ideal application for semantic methods and that wrong semantic information is much worse than none.
|
Content-based image retrieval is an area that profits significantly from semantic information, especially when such systems are judged against a human ranking baseline. Vogel and Schiele propose an image representation in @cite_14 that describes an image's semantics locally and is then used to rank images by semantic similarity to the query. An approach specifically using taxonomic information is presented by Barz et al. in @cite_8 , where an embedding space is constructed such that the dot product of label pairs matches a semantic similarity measure. They show that this label representation improves both image retrieval as well as image classification.
|
{
"abstract": [
"In this paper, we present a novel image representation that renders it possible to access natural scenes by local semantic description. Our work is motivated by the continuing effort in content-based image retrieval to extract and to model the semantic content of images. The basic idea of the semantic modeling is to classify local image regions into semantic concept classes such as water, rocks, or foliage. Images are represented through the frequency of occurrence of these local concepts. Through extensive experiments, we demonstrate that the image representation is well suited for modeling the semantic content of heterogenous scene categories, and thus for categorization and retrieval. The image representation also allows us to rank natural scenes according to their semantic similarity relative to certain scene categories. Based on human ranking data, we learn a perceptually plausible distance measure that leads to a high correlation between the human and the automatically obtained typicality ranking. This result is especially valuable for content-based image retrieval where the goal is to present retrieval results in descending semantic similarity from the query.",
"Deep neural networks trained for classification have been found to learn powerful image representations, which are also often used for other tasks such as comparing images w.r.t. their visual similarity. However, visual similarity does not imply semantic similarity. In order to learn semantically discriminative features, we propose to map images onto class embeddings whose pair-wise dot products correspond to a measure of semantic similarity between classes. Such an embedding does not only improve image retrieval results, but could also facilitate integrating semantics for other tasks, e.g., novelty detection or few-shot learning. We introduce a deterministic algorithm for computing the class centroids directly based on prior world-knowledge encoded in a hierarchy of classes such as WordNet. Experiments on CIFAR-100, NABirds, and ImageNet show that our learned semantic image embeddings improve the semantic consistency of image retrieval results by a large margin."
],
"cite_N": [
"@cite_14",
"@cite_8"
],
"mid": [
"2146022472",
"2963506586"
]
}
|
Not just a matter of semantics: the relationship between visual similarity and semantic similarity
|
There exist applications in which labeled training data cannot be acquired in amounts sufficient to reach the high accuracy associated with contemporary convolutional neural networks (CNNs) with millions of parameters. These include industrial [13,17] and medical [14,27,32] applications as well as research in other fields like wildlife monitoring [2,3,6]. Semantic methods such as knowledge transfer and zero-shot learning consume information about the semantic relationship between classes from databases like WordNet [18] to allow high-accuracy classification even when training data is insufficient or missing entirely [24]. They can only function when the unknown visual class relationships are predictable by the semantic relationships.
(a) A deer and a forest. By taxonomy only, their semantic similarity is weak. Visual similarity however is strong.
(b) An orchid and a sunflower. Their semantic similarity is very strong due to them both being flowers. The visual similarity between them is weak. Figure 1: Examples of semantic-visual disagreement.
In this paper, we analyze and test this crucial assumption by evaluating the relationship between visual and semantic similarity in a detailed and systematic fashion.
To guide our analysis, we formulate three highly consequential, non-trivial hypotheses around the visual-semantic relationship. The exact nature of the links and the similarity terms is specified in section 4. Our first hypothesis concerns the relationship itself:
H 1 There is a link between visual similarity and semantic similarity. It seems trivial on the surface, but each individual component requires a proper, nontrivial definition to ultimately make the hypothesis verifiable (see section 4). The observed effectiveness of semantic methods suggests that knowledge about semantic relationships is somewhat applicable in the visual domain. However, counter-examples are easily found, e.g. figs. 1 and 5. Furthermore, a basic notion of semantic similarity is already contained in the expectation that "different classes look different" (see section 2.1). A similarity measure based on actual semantic knowledge should be linked more strongly to visual similarity than this simple baseline.
Semantic methods seek to optimize accuracy and in turn model confusion, but confusion and visual similarity are not trivially related. Insights about the low-level visual similarity may not be applicable to the more abstract confusion. To cover not only largely model-free, but also model-specific notions, we formulate our second and third hypotheses:
H 2 There is a link between visual similarity and model confusion. When considering low inter-class distance in a feature space to be a contributor to confusion, it could also be one in the visual domain. This link strongly depends on the selected features and classifier, but it could also be affected by violations of "different classes look different" in the dataset.
H 3 There is a link between semantic similarity and model confusion. This link should be investigated because it directly relates to the goal of semantic methods, which is to reduce confusion by adding semantic information. It "skips" the low-level visual component and as such is interesting on its own. The expectation that "different classes look different" can already explain the complete confusion matrix of a perfect classifier. We also expect it to partly explain a real classifier's confusions. So to consider H 3 verified, we require semantic similarity to show an even stronger correlation to confusion than this baseline.
Our main contribution is an extensive and insightful evaluation of this relationship across five different semantic and visual similarity measures respectively. It is based on the three aforementioned hypotheses around the relationship. We show quantitative results measuring the agreement between individual measures and across visual and semantic similarities as rank correlation. Moreover, we analyze special cases of agreement and disagreement qualitatively. The results and their various implications are discussed in section 5.5. They suggest that, while the relationship exists even beyond the "different classes look different" baseline, even more investigation is warranted into tasks different from classification because of the semantically reductive nature of class labels. Hence, semantic methods may perform better on more complex tasks.
Semantic Similarity
The term semantic similarity describes the degree to which two concepts interact semantically. A common definition requires taking into account only the taxonomical (hierarchical) relationship between the concepts [8, p. 10]. A more general notion is semantic relatedness, where any type of semantic link may be considered [8, p. 10]. Both are semantic measures, which also include distances and dissimilarities [8, p. 9]. We adhere to these definitions in this work, specifically the hierarchical restriction of semantic similarity.
Prerequisites
In certain cases, it is easier to formulate a semantic measure based on hierarchical relationships as a distance first. Such a distance d between two concepts x, y can be converted to a similarity by 1/(1 + d(x, y)) [8, p. 60]. This results in a measure bounded by (0, 1], where 1 stands for maximal similarity, i.e. the distance is zero. We will apply this rule to convert all distances to similarities in our experiments. We also apply it to dissimilarities, which are comparable to distances, but do not fulfill the triangle inequality.
Semantic Baseline When training a classifier without using semantic embeddings or hierarchical classification techniques [29], there is still prior information about semantic similarity given by the classification problem itself. Specifically, it is postulated that "classes that are different look different" (see section 4). Machine learning can not work if this assumption is violated such that different classes look identical. We encode this "knowledge" as a semantic similarity measure, defined as 1 for two identical concepts and zero otherwise. It will serve as a baseline for comparison with all other similarities.
Graph-based Similarities
We can describe a directed acyclic graph G(C, is-a) using the taxonomic relation is-a and the set of all concepts C. The following notions of semantic similarity can be expressed using properties of this graph. The graph distance d_G(x, y) between two nodes x, y, which is defined as the length of the shortest path between x and y, is an important example. If required, we reduce the graph G to a rooted tree T with root r by iterating through all nodes with multiple ancestors and successively removing the edges to ancestors with the lowest amount of successors. In a tree, we can then define the depth of a concept x as d_T(x) = d_T(r, x).
A simple approach is presented by Rada et al. in [21, p. 20], where the semantic distance between two concepts x and y is defined as the graph distance d G (x, y) between one concept and the other in G.
To make similarities comparable between different taxonomies, it may be desirable to take the overall depth of the hierarchy into account. Resnik presents such an approach for trees in [22], considering the maximal depth of T and the least common ancestor L(x, y). It is the uniquely defined node in the shortest path between two concepts x and y that is an ancestor to both [8, p. 61]. The similarity between x and y is then given as [22, p. 3]:
$$2 \cdot \max_{z \in C} d_T(z) - d_T(x, L(x, y)) - d_T(y, L(x, y)). \qquad (1)$$
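The depth-aware similarity of eq. (1) can be sketched directly on WordNet, with lowest_common_hypernyms() standing in for L(x, y) and synset depths approximating d_T; restricting the maximum depth to noun synsets and using min_depth() are simplifying assumptions made only for this illustration.

```python
# Sketch of the depth-aware similarity of eq. (1), assuming NLTK's WordNet corpus.
from nltk.corpus import wordnet as wn

MAX_DEPTH = max(s.min_depth() for s in wn.all_synsets('n'))  # max_z d_T(z), nouns only

def depth_similarity(a, b):
    lca = a.lowest_common_hypernyms(b)[0]                    # L(x, y)
    d_a = a.min_depth() - lca.min_depth()                    # d_T(x, L(x, y)), approximated
    d_b = b.min_depth() - lca.min_depth()                    # d_T(y, L(x, y)), approximated
    return 2 * MAX_DEPTH - d_a - d_b                         # eq. (1)

print(depth_similarity(wn.synset('oak.n.02'), wn.synset('maple.n.02')))
```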
Feature-based Similarities
The following approaches use a set-theoretic view of semantics. The set of features φ(x) of a concept x is usually defined as the set of ancestors A(x) of x [8]. We include the concept x itself, such that φ(x) = A(x) ∪ {x} [28].
Inspired by the Jaccard coefficient, Maedche and Staab propose a similarity measure defined as the intersection over union of the concept features of x and y respectively [16, p. 4]. This similarity is bounded by [0, 1], with identical concepts always resulting in 1.
Sanchez et al. present a dissimilarity measure that represents the ratio of distinct features to shared features of two concepts. It is defined by [28, p. 7723]:
$$\log_2\left(1 + \frac{|\phi(x) \setminus \phi(y)| + |\phi(y) \setminus \phi(x)|}{|\phi(x) \setminus \phi(y)| + |\phi(y) \setminus \phi(x)| + |\phi(y) \cap \phi(x)|}\right). \qquad (2)$$
Information-based Similarities
Semantic similarity is also defined using the notion of informativeness of a concept, inspired by information theory. Each concept x is assigned an Information Content (IC) I(x) [22,25]. This can be defined using only properties of the taxonomy, i.e. the graph G (intrinsic IC), or using the probability of observing the concept in corpora (extrinsic IC) [8, p. 54].
We use an intrinsic definition presented by Zhou et al. in [39], based on the descendants D(x):
$$I(x) = k \cdot \left(1 - \frac{|D(x)|}{|C|}\right) + (1 - k) \cdot \frac{\log(d_T(x))}{\log(\max_{z \in C} d_T(z))}. \qquad (3)$$
With a definition of IC, we can apply an information-based similarity measure. Jiang and Conrath propose a semantic distance in [10] using the notion of Most Informative Common Ancestor M(x, y) of two concepts x, y. It is defined as the element in (A(x) ∩ A(y)) ∪ (x ∩ y) with the highest IC [8, p. 65]. The distance is then defined as [10, p. 8]:
$$I(x) + I(y) - 2 \cdot I(M(x, y)). \qquad (4)$$
Perceptual Metrics
Perceptual metrics are usually employed to quantify the distortion or information loss incurred by using compression algorithms. Such methods aim to minimize the difference between the original image and the compressed image and thereby maximize the similarity between both. However, perceptual metrics can also be used to assess the similarity of two independent images.
An image can be represented by an element of a highdimensional vector space. In this case, the Euclidean distance is a natural candidate for a dissimilarity measure. With the rule 1/(1 + d) from section 2.1, the distance is transformed into a visual similarity measure. To normalize the measure w.r.t. image dimensions and to simplify calculations, the mean squared error (MSE) is used. Applying the MSE to estimate image similarity has shortcomings. For example, shifting an image by one pixel significantly changes the distances to other images, including its unshifted self [31]. An alternative, but related measure is the mean absolute difference (MAD), which we also consider in our experiments.
In [37], Wang et al. develop a perceptual metric called the Structural Similarity Index to address shortcomings of previous methods. Specifically, they consider properties of the human visual system such that the index better reflects human judgement of visual similarity.
We use MSE, MAD and SSIM as perceptual metrics to indicate visual similarity in our experiments. There are better performing methods when considering human judgement, e.g. [38]. However, we cannot guarantee that humans always treat visuals and semantics as separate. Therefore, we avoid further methods that are motivated by human properties [34,35] or already incorporate semantic knowledge [15,7].
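Because the perceptual metrics compare individual images while the analysis needs one value per class pair, a plausible sketch averages an image-pair similarity (here SSIM) over a random sample of cross-class pairs; the sampling strategy is an assumption, not the paper's documented protocol.

```python
# Sketch: lifting an image-level perceptual metric (SSIM) to a class-pair similarity.
# Assumes scikit-image; images_a / images_b are arrays of grayscale images in [0, 1].
import numpy as np
from skimage.metrics import structural_similarity

def class_pair_similarity(images_a, images_b, num_samples=100, seed=0):
    rng = np.random.default_rng(seed)
    idx_a = rng.integers(0, len(images_a), num_samples)
    idx_b = rng.integers(0, len(images_b), num_samples)
    scores = [structural_similarity(images_a[i], images_b[j], data_range=1.0)
              for i, j in zip(idx_a, idx_b)]
    return float(np.mean(scores))

# Toy data standing in for two CIFAR-100 classes (500 images of 32x32 each).
rng = np.random.default_rng(1)
print(class_pair_similarity(rng.random((500, 32, 32)), rng.random((500, 32, 32))))
```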
Feature-based Measures
Features are extracted to represent images at an abstract level. Thus, distances in such a feature space of images correspond to visual similarity in a possibly more robust way than the aforementioned perceptual metrics. Features have inherent or learned invariances w.r.t. certain transformations that should not affect the notion of visual similarity strongly. However, learned features may also be invariant to transformations that do affect visual similarity because they are optimized for semantic distinction. This behavior needs to be considered when selecting abstract features to determine visual similarity.
GIST [20] is an image descriptor that aims at describing a whole scene using a small number of estimations of specific perceptual properties, such that similar content is close in the resulting feature space. It is based on the notion of a spatial envelope, inspired by architecture, that can be extracted from an image and used to calculate statistics.
For reference, we observe the confusions of five ResNet-32 [9] models to represent feature-based visual similarity on the highest level of abstraction. Because confusion is not a symmetric function, we apply a transform (M + M^T)/2 to obtain a symmetric representation.
Evaluating the Relationship
Visual similarity and semantic similarity are measures defined on different domains. Semantic similarities compare concepts, but visual similarities compare individual images. To analyze a correlation, a common domain over which both can be evaluated is essential. We propose to calculate similarities over all pairs of classes in an image classification dataset, which can be defined for both visual and semantic similarities. These pairwise similarities are then tested for correlation. The process is clarified in the following:
1. Dataset. We use the CIFAR-100 dataset [12] to verify our hypotheses. This dataset has a scale at which all experiments take a reasonable amount of time. Our computation times grow quadratically with the number of classes as well as images. Hence, we do not consider ImageNet [4,26] or 80 million tiny images [33] despite their larger coverage of semantic concepts.
2. Semantic similarities. We calculate semantic similarity measures over all pairs of classes in the dataset. The taxonomic relation is-a is taken from WordNet [18] by mapping all classes in CIFAR-100 to their counterpart concepts in WordNet, inducing the graph G(C, is-a). Some measures are defined as distances or dissimilarities. We use the rule presented in section 2.1 to derive similarities. The following measures are evaluated over all pairs of concepts (x, y) ∈ C × C (see section 2):
(S1) Graph distance d_G(x, y) as proposed by Rada et al. [21]. (S2) The depth-aware similarity of eq. (1) by Resnik [22]. (S3) Intersection over union of concept features by Maedche and Staab [16]. (S4) The feature-based dissimilarity of eq. (2) by Sanchez et al. [28]. (S5) The information-based distance of eq. (4) by Jiang and Conrath [10] with the intrinsic IC of eq. (3).
3. Visual similarities. We estimate visual similarity between classes from the images of the dataset using the following measures (see section 3): (V1) Mean squared error (MSE). (V2) Mean absolute difference (MAD). (V3) Structural Similarity Index (SSIM) [37]. (V4) Distance between GIST descriptors [20] of images in feature space. (V5) Observed symmetric confusions of five ResNet-32 [9] models trained on the CIFAR-100 training set.
4. Aggregation. For both visual and semantic similarity, there is more than one candidate method, i.e. (S1)-(S5) and (V1)-(V5). For the following steps, we need a single measure for each type of similarity, which we aggregate from (S1)-(S5) and (V1)-(V5) respectively. Since each method has its merits, selecting only one each would not be representative of the type of similarity. The output of all candidate methods is normalized individually, such that its range is in [0, 1]. We then calculate the average over each type of similarity, i.e. visual and semantic, to obtain two distinct measures (S) and (V).
5. Baselines. A basic assumption of machine learning is that "the domains occupied by features of different classes are separated" [19, p. 8]. Intuitively, this should apply to the images of different classes as well. We can then expect to predict at least some of the visual similarity between classes just by knowing whether the classes are identical or not. This knowledge is encoded in the semantic baseline (SB), defined as 1 for identical concepts and zero otherwise (see also section 2.1). We propose a second baseline, the semantic noise (SN), where the aforementioned pairwise semantic similarity (S) is calculated, but the concepts are permuted randomly. This baseline serves to assess the informativeness of the taxonomic relationships.
Correlation
The similarity measures mentioned above are useful to define an order of similarity, i.e. whether a concept x is more similar to a concept z than a concept y is. However, it is not reasonable in all cases to interpret them in a linear fashion like a dot product, especially since many are derived from distances or dissimilarities and all were normalized from different ranges of values and then aggregated. We therefore test the similarities for correlation w.r.t. ranking, using Spearman's rank correlation coefficient [30] instead of looking for a linear relationship.
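Given the aggregated matrices, the hypothesis tests reduce to comparing rank correlations over class pairs. The following sketch of the H1-style comparison uses randomly generated matrices as placeholders for the real (S), (V) and (SB).

```python
# Sketch of the H1 evaluation: does (S) correlate with (V) more strongly than (SB) does?
import numpy as np
from scipy.stats import spearmanr

def class_pairs(m):
    # Include identical class pairs (k=0) so that the (SB) baseline is not constant.
    return m[np.triu_indices(m.shape[0], k=0)]

C = 100
rng = np.random.default_rng(0)
S = (lambda a: (a + a.T) / 2)(rng.random((C, C)))   # placeholder aggregated semantic similarity
V = (lambda a: (a + a.T) / 2)(rng.random((C, C)))   # placeholder aggregated visual similarity
np.fill_diagonal(S, 1.0)
np.fill_diagonal(V, 1.0)
SB = np.eye(C)                                      # semantic baseline

rho_sv, p_sv = spearmanr(class_pairs(S), class_pairs(V))
rho_bv, p_bv = spearmanr(class_pairs(SB), class_pairs(V))
print(f"rho(S, V) = {rho_sv:.2f} (p = {p_sv:.3g}); rho(SB, V) = {rho_bv:.2f} (p = {p_bv:.3g})")
```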
Results
In the following, we present the results of our experiments defined in the previous section. We first examine both types of similarity individually, comparing the five candidate methods each. Afterwards, the hypotheses proposed in section 1 are tested. We then perform a qualitative analysis of extreme cases in both similarities and investigate cases of (dis-)agreement between them.
Semantic Similarities
We first analyze the pairwise semantic similarities over all classes. Figure 2a shows the average semantic similarity (S) as specified in section 4. The classes on the axes are ordered by a depth first search through the class hierarchy, yielding clearly visible artifacts of the graph structure.
Although we consider semantic similarity to be a single measure when verifying our hypotheses, studying the correlation between our candidate methods (S1)-(S5) is also important. While of course affected by our selection, it reflects upon the degree of agreement between several experts in the domain. Figure 3a visualizes the correlations. The graph-based methods (S1) and (S2) agree more strongly with each other than with the rest. The same is true of the feature-based methods (S3) and (S4), which show the strongest correlation. The inter-agreement R, calculated by taking the average of all correlations except for the main diagonal, is 0.89. This is a strong agreement and suggests that the order of similarity between concepts can be, for the most part, considered representative of a universally agreed upon definition (if one existed). At the same time, one needs to consider that all methods utilize the same WordNet hierarchy.
Baselines Our semantic baseline (SB, see section 4) encodes the basic knowledge that different classes look different. This property should also be fulfilled by the average semantic similarity (S, see section 4). We thus expect there to be at least some correlation. The rank correlation between our average semantic similarity (S) and the semantic baseline (SB) is 0.17 with p < 0.05. This is a weak correlation compared to the strong inter-agreement of 0.89, which suggests that the similarities (S1)-(S5) are vastly more complex than (SB), but at the same time have a lot in common. As a second baseline we test the semantic noise (SN, see section 4). It is not correlated with (S) at ρ = 0.01, p > 0.05, meaning that the taxonomic relationship strongly affects (S). If it did not, the labels could be permuted without changing the pairwise similarities.
Visual Similarities
The average visual similarity (V) as estimated over all classes is shown in fig. 2b. For reference, we show the symmetric confusion matrix (see section 4) in fig. 2c. Comparing (V) to (S), the graph structure is less visible. In the confusion matrix, however, the artifacts are more pronounced.
Intuitively, visual similarity is a concept that is hard to define clearly and uniquely. Because we selected very different approaches with very different ideas and motivations behind them, we expect the agreement between (V1)-(V5) to be weak. Figure 3b shows the rank correlations between each candidate method. The agreement is strongest between the mean squared error (V1) and the GIST feature distance (V4). Both are L2 distances, but calculated in separate domains, highlighting the strong nonlinearity and complexity of image descriptors. The inter-agreement is very weak at R = 0.17. The results confirm our intuition that visual similarity is very hard to define in mathematical terms. There is also no body of knowledge that all methods use in the visual domain like WordNet provides for semantics.
Hypotheses
To give a brief overview, the rank correlations between the different components of H 1 -H 3 are shown in fig. 4. In the following, we give our results w.r.t. the individual hypotheses. They are discussed further in section 5.5.
H 1 There is a link between visual similarity and semantic similarity. Using the definitions from section 4 including the semantic baseline (SB), we can examine the respective correlations. The rank correlation between (V) and (S) is 0.23, p < 0.05, indicating a link. Before we consider the hypothesis verified, we also evaluate what fraction of (V) is already explained by the semantic baseline (SB) as per our condition given in section 4. The rank correlation between (V) and (SB) is 0.17, p < 0.05, which is a weaker link than between (V) and (S). Additionally, (V) and (SN) are not correlated, illustrating that the wrong semantic knowledge can be worse than none. Thus, we can verify H 1 .
H 2 There is a link between visual similarity and model confusion. Since model confusion as (V5) is a contributor to average visual similarity (V), we consider only (V-), comprised of (V1)-(V4) for this hypothesis. The rank correlation between (V-) and the symmetric
Special Cases
In this section, we first perform a qualitative analysis of visual similarity and semantic similarity individually by looking at extreme values. We then inspect cases of strong agreement and disagreement between both.
Visual Figures 6a and 6b show the three most similar and three least similar concept pairs in CIFAR-100. The aggregated normalized visual similarity measures are not readily interpretable. Still, the choice of plain.n.01 and sea.n.01 as the most similar pair of concepts appears reasonable. Both classes have the horizon as a central feature, with half of the image showing the sky, which is also true for the second most similar combination, cloud.n.02 and sea.n.01. At the low resolution of CIFAR-100, the third most similar pair of maple.n.02 and oak.n.02 is hard to distinguish visually, except for the slightly larger range of maple hues. The three least similar pairs in CIFAR-100 are visually different on at least three levels. Globally, the colors are almost inverted. The round shapes of orange.n.01 clash with the comparatively hard edges of dolphin.n.02, ray.n.07 and shark.n.01 and locally, the textures are very different.
Semantic We also investigate the range of semantic similarities calculated over the CIFAR-100 dataset. Figure 7a shows examples of the most semantically similar concept pairs. fox.n.01 and wolf.n.01 are not only most similar semantically, but show a strong visual likeness, too. This also applies to otter.n.02 and skunk.n.04 as well as ray.n.07 and shark.n.01, which are both visually similar to a degree. When inspecting the most dissimilar pairs, there is one pair of cattle.n.01 and forest.n.01 where there is a reasonably strong visual similarity, hinting at a disagreement.
Agreement To further analyze the correlation, we examine specific cases of very strong agreement or disagreement. Figure 5 shows these extreme cases. We determine agreement based on ranking, so the most strongly agreed upon pairs (see fig. 5a) still show different absolute similarity numbers. Interestingly, they are not cases of extreme similarities. It suggests that even weak disagreements are more likely to be found at similarities close to the boundaries. When investigating strong disagreement as shown in fig. 5b, there are naturally extreme values to be found. All three pairs involve forest.n.01, which was also a part of the second least semantically similar pair. Its partners are all animals which usually have a background visually similar to a forest, hence the strong disagreement. However, the low semantic similarity is possibly an artifact of reducing a whole image to a single concept.
Discussion
H 1 : There is a link between visual similarity and semantic similarity. The relationship is stronger than a simple baseline, but weak overall at ρ = 0.23 vs ρ = 0.17. This should be considered when employing methods where visuals and semantics interact, e.g. in knowledge transfer. Failure cases such as in fig. 5b can only be found when labels are known, which has implications for real-life applications of semantic methods. As labels are unknown or lack visual examples, such cases are not predictable beforehand. This poses problems for applications that rely on accurate classification such as safety-critical equipment or even research in other fields consuming model predictions. A real-world example is wildlife conservationists relying on statistics from automatic camera trap image classification to draw conclusions on biodiversity. That semantic similarity of randomly permuted classes is not correlated with visual similarity at all, while the baseline is, suggests that wrong semantic knowledge can be much worse than no knowledge. H 2 : There is a link between visual similarity and model confusion. Visual similarity is defined on a low level for H 2 . As such, it should not cause model confusion by itself. On the one hand, the model can fail to generalize and cause an avoidable confusion. On the other hand, there may be an issue with the dataset. The test set may be sampled from a different distribution than the training set. It may also violate the postulate that different classes look different by containing the same or similar images across classes.
H 3 : There is a link between semantic similarity and model confusion. Similar to H 1 , it suggests that semantic methods could be applied to our data, but maybe not in general because failure cases are unpredictable. However, it implies a stronger effectiveness than H 1 at ρ = 0.39 vs. the baseline at ρ = 0.21. We attribute this to the model's capability of abstraction. It aligns with the idea of taxonomy, which is based on repeated abstraction of concepts. Using a formulation that optimizes semantic similarity instead of cross-entropy (which would correspond to the semantic baseline) or even a hierarchical classifier can be recommended in our situation. It may still not generalize to other settings and any real-world application of such methods should be verified with at least a small test set.
Qualitative Some failures or disagreements may not be a result of the relationship itself, but of its application to image classification. The example from fig. 1 is valid when the whole image is reduced to a single concept. Still, the agreement between visual and semantic similarity may increase when the image is described in a more holistic fashion. While "deer" and "forest" as nouns are taxonomically only loosely related, the descriptions "A deer standing in a forest, partially occluded by a tree and tall grass" and "A forest composed of many trees and bushes, with the daytime sky visible" already appear more similar, while those descriptions are still missing some of the images' contents. This suggests that more complex tasks than image classification stand to benefit more from semantic methods.
In further research, not only nouns should be considered, but also adjectives, decompositions of objects into parts as well as working with a more general notion of semantic relatedness instead of simply semantic similarity. Datasets like Visual Genome [11] offer more complex annotations mapped to WordNet concepts that could be subjected to further study. However, tasks much more complex than hierarchical image classification on a semantic level lack a compelling real-world application to the best of our knowledge.
Conclusion
We present results of a comprehensive evaluation of semantic similarity measures and their correlation with visual similarities. We measure against the simple prior knowledge of different classes having different visuals. Then, we show that the relationship between semantic similarity, as calculated from WordNet [18] using five different methods, and visual similarity, also represented by five measures, is more meaningful than that. Furthermore, inter-agreement measures suggest that semantic similarity has a more agreed upon definition than visual similarity, although both concepts are based on human perception.
The results indicate that further research, especially into tasks different from image classification is warranted because of the semantically reductive nature of image labels. It may restrict the performance of semantic methods.
| 4,413 |
1811.07120
|
2900626851
|
Knowledge transfer, zero-shot learning and semantic image retrieval are methods that aim at improving accuracy by utilizing semantic information, e.g. from WordNet. It is assumed that this information can augment or replace missing visual data in the form of labeled training images because semantic similarity somewhat aligns with visual similarity. This assumption may seem trivial, but is crucial for the application of such semantic methods. Any violation can cause mispredictions. Thus, it is important to examine the visual-semantic relationship for a certain target problem. In this paper, we use five different semantic and visual similarity measures each to thoroughly analyze the relationship without relying too much on any single definition. We postulate and verify three highly consequential hypotheses on the relationship. Our results show that it indeed exists and that WordNet semantic similarity carries more information about visual similarity than just the knowledge of "different classes look different". They suggest that classification is not the ideal application for semantic methods and that wrong semantic information is much worse than none.
|
The relationship between visual similarity and semantic similarity has been the subject of previous investigation. In @cite_25 , Deselaers and Ferrari consider a semantic similarity measure by Jiang and Conrath (see the section on information-based similarities and @cite_12 ) as well as category histograms, in conjunction with the ImageNet dataset. They propose a novel distance function based on semantic as well as visual similarity for use in a nearest neighbor setting that outperforms purely visual distance functions. The authors also show a positive correlation between visual and semantic similarity for their choice of similarity measures on the ImageNet dataset. Their selections of the Jiang-Conrath distance and the GIST feature descriptor are also evaluated in our work, where we add several other methods for comparison.
|
{
"abstract": [
"Many computer vision approaches take for granted positive answers to questions such as “Are semantic categories visually separable?” and “Is visual similarity correlated to semantic similarity?”. In this paper, we study experimentally whether these assumptions hold and show parallels to questions investigated in cognitive science about the human visual system. The insights gained from our analysis enable building a novel distance function between images assessing whether they are from the same basic-level category. This function goes beyond direct visual distance as it also exploits semantic similarity measured through ImageNet. We demonstrate experimentally that it outperforms purely visual distances.",
"This paper presents a new approach for measuring semantic similarity distance between words and concepts. It combines a lexical taxonomy structure with corpus statistical information so that the semantic distance between nodes in the semantic space constructed by the taxonomy can be better quantified with the computational evidence derived from a distributional analysis of corpus data. Specifically, the proposed measure is a combined approach that inherits the edge-based approach of the edge counting scheme, which is then enhanced by the node-based approach of the information content calculation. When tested on a common data set of word pair similarity ratings, the proposed approach outperforms other computational models. It gives the highest correlation value (r = 0.828) with a benchmark based on human similarity judgements, whereas an upper bound (r = 0.885) is observed when human subjects replicate the same task."
],
"cite_N": [
"@cite_25",
"@cite_12"
],
"mid": [
"2168371480",
"2953332543"
]
}
|
Not just a matter of semantics: the relationship between visual similarity and semantic similarity
|
There exist applications in which labeled training data cannot be acquired in amounts sufficient to reach the high accuracy associated with contemporary convolutional neural networks (CNNs) with millions of parameters. These include industrial [13,17] and medical [14,27,32] applications as well as research in other fields like wildlife monitoring [2,3,6]. Semantic methods such as knowledge transfer and zero-shot learning consume information about the semantic relationship between classes from databases like WordNet [18] to allow high-accuracy classification even when training data is insufficient or missing entirely [24]. They can only function when the unknown visual class relationships are predictable from the semantic relationships.
Figure 1: Examples of semantic-visual disagreement. (a) A deer and a forest: by taxonomy alone, their semantic similarity is weak, but their visual similarity is strong. (b) An orchid and a sunflower: their semantic similarity is very strong because both are flowers, but their visual similarity is weak.
In this paper, we analyze and test this crucial assumption by evaluating the relationship between visual and semantic similarity in a detailed and systematic fashion.
To guide our analysis, we formulate three highly consequential, non-trivial hypotheses around the visual-semantic relationship. The exact nature of the links and the similarity terms is specified in section 4. Our first hypothesis concerns the relationship itself:
H 1 There is a link between visual similarity and semantic similarity. It seems trivial on the surface, but each individual component requires a proper, nontrivial definition to ultimately make the hypothesis verifiable (see section 4). The observed effectiveness of semantic methods suggests that knowledge about semantic relationships is somewhat applicable in the visual domain. However, counter-examples are easily found, e.g. figs. 1 and 5. Furthermore, a basic notion of semantic similarity is already contained in the expectation that "different classes look different" (see section 2.1). A similarity measure based on actual semantic knowledge should be linked more strongly to visual similarity than this simple baseline.
Semantic methods seek to optimize accuracy and in turn model confusion, but confusion and visual similarity are not trivially related. Insights about the low-level visual similarity may not be applicable to the more abstract confusion. To cover not only largely model-free but also model-specific notions, we formulate our second and third hypotheses:
H 2 There is a link between visual similarity and model confusion. When considering low inter-class distance in a feature space to be a contributor to confusion, it could also be one in the visual domain. This link strongly depends on the selected features and classifier, but it could also be affected by violations of "different classes look different" in the dataset.
H 3 There is a link between semantic similarity and model confusion. This link should be investigated because it directly relates to the goal of semantic methods, which is to reduce confusion by adding semantic information. It "skips" the low-level visual component and as such is interesting on its own. The expectation that "different classes look different" can already explain the complete confusion matrix of a perfect classifier. We also expect it to partly explain a real classifier's confusions. So to consider H 3 verified, we require semantic similarity to show an even stronger correlation to confusion than this baseline.
Our main contribution is an extensive and insightful evaluation of this relationship across five different semantic and visual similarity measures respectively. It is based on the three aforementioned hypotheses around the relationship. We show quantitative results measuring the agreement between individual measures and across visual and semantic similarities as rank correlation. Moreover, we analyze special cases of agreement and disagreement qualitatively. The results and their various implications are discussed in section 5.5. They suggest that, while the relationship exists even beyond the "different classes look different" baseline, even more investigation is warranted into tasks different from classification because of the semantically reductive nature of class labels. Hence, semantic methods may perform better on more complex tasks.
Semantic Similarity
The term semantic similarity describes the degree to which two concepts interact semantically. A common definition requires taking into account only the taxonomical (hierarchical) relationship between the concepts [8, p. 10]. A more general notion is semantic relatedness, where any type of semantic link may be considered [8, p. 10]. Both are semantic measures, which also include distances and dissimilarities [8, p. 9]. We adhere to these definitions in this work, specifically the hierarchical restriction of semantic similarity.
Prerequisites
In certain cases, it is easier to formulate a semantic measure based on hierarchical relationships as a distance first. Such a distance d between two concepts x, y can be converted to a similarity by 1/(1 + d(x, y)) [8, p. 60]. This results in a measure bounded by (0, 1], where 1 stands for maximal similarity, i.e. the distance is zero. We will apply this rule to convert all distances to similarities in our experiments. We also apply it to dissimilarities, which are comparable to distances, but do not fulfill the triangle inequality.
Semantic Baseline When training a classifier without using semantic embeddings or hierarchical classification techniques [29], there is still prior information about semantic similarity given by the classification problem itself. Specifically, it is postulated that "classes that are different look different" (see section 4). Machine learning can not work if this assumption is violated such that different classes look identical. We encode this "knowledge" as a semantic similarity measure, defined as 1 for two identical concepts and zero otherwise. It will serve as a baseline for comparison with all other similarities.
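As a small illustration, the conversion rule and this baseline can be written as two one-line helpers. This is a minimal Python sketch; the function names are ours, not the paper's.

```python
def distance_to_similarity(d):
    """Map a distance or dissimilarity d >= 0 into (0, 1]; d = 0 yields similarity 1."""
    return 1.0 / (1.0 + d)

def semantic_baseline(x, y):
    """(SB): 1 for identical concepts, 0 otherwise ("different classes look different")."""
    return 1.0 if x == y else 0.0
```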
Graph-based Similarities
We can describe a directed acyclic graph G(C, is-a) using the taxonomic relation is-a and the set of all concepts C. The following notions of semantic similarity can be expressed using properties of this graph. The graph distance d_G(x, y) between two nodes x, y, which is defined as the length of the shortest path between them, is an important example. If required, we reduce the graph G to a rooted tree T with root r by iterating through all nodes with multiple ancestors and successively removing the edges to ancestors with the lowest number of successors. In a tree, we can then define the depth of a concept x as d_T(x) = d_T(r, x).
A simple approach is presented by Rada et al. in [21, p. 20], where the semantic distance between two concepts x and y is defined as the graph distance d_G(x, y) between one concept and the other in G.
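A hedged sketch of this measure on a toy is-a graph, using NetworkX for the shortest-path computation; the toy graph, helper name, and parent-to-child edge convention are our own illustration (the paper operates on the WordNet hierarchy instead).

```python
import networkx as nx

def rada_similarity(G, x, y):
    """Similarity from the Rada et al. graph distance: shortest-path length between
    x and y in the (undirected) is-a graph, converted via 1 / (1 + d)."""
    d = nx.shortest_path_length(G.to_undirected(), x, y)
    return 1.0 / (1.0 + d)

# Toy taxonomy with edges pointing from parent to child.
G = nx.DiGraph([("carnivore.n.01", "canine.n.02"),
                ("canine.n.02", "fox.n.01"),
                ("canine.n.02", "wolf.n.01")])
print(rada_similarity(G, "fox.n.01", "wolf.n.01"))  # d = 2, similarity 1/3
```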
To make similarities comparable between different taxonomies, it may be desirable to take the overall depth of the hierarchy into account. Resnik presents such an approach for trees in [22], considering the maximal depth of T and the least common ancestor L(x, y). It is the uniquely defined node in the shortest path between two concepts x and y that is an ancestor to both [8, p. 61]. The similarity between x and y is then given as [22, p. 3]:
2 · max_{z∈C} d_T(z) − d_T(x, L(x, y)) − d_T(y, L(x, y)).   (1)
Feature-based Similarities
The following approaches use a set-theoretic view of semantics. The set of features φ(x) of a concept x is usually defined as the set of ancestors A(x) of x [8]. We include the concept x itself, such that φ(x) = A(x) ∪ {x} [28].
Inspired by the Jaccard coefficient, Maedche and Staab propose a similarity measure defined as the intersection over union of the concept features of x and y respectively [16, p. 4]. This similarity is bounded by [0, 1], with identical concepts always resulting in 1.
Sanchez et al. present a dissimilarity measure that represents the ratio of distinct features to shared features of two concepts. It is defined by [28, p. 7723]:
log_2( 1 + (|φ(x)\φ(y)| + |φ(y)\φ(x)|) / (|φ(x)\φ(y)| + |φ(y)\φ(x)| + |φ(y) ∩ φ(x)|) ).   (2)
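The two feature-based measures follow directly from the definition φ(x) = A(x) ∪ {x}. Below is a rough Python sketch on a toy taxonomy; the graph, names, and the parent-to-child edge convention (so that taxonomic ancestors are obtained with nx.ancestors) are our own assumptions, not the authors' code.

```python
import math
import networkx as nx

def features(G, x):
    """phi(x): the concept x plus all of its taxonomic ancestors (edges parent -> child)."""
    return {x} | nx.ancestors(G, x)

def maedche_staab_similarity(G, x, y):
    """Intersection over union of the two feature sets, bounded by [0, 1]."""
    fx, fy = features(G, x), features(G, y)
    return len(fx & fy) / len(fx | fy)

def sanchez_dissimilarity(G, x, y):
    """Log-scaled ratio of distinct to total features, eq. (2)."""
    fx, fy = features(G, x), features(G, y)
    distinct = len(fx - fy) + len(fy - fx)
    return math.log2(1.0 + distinct / (distinct + len(fx & fy)))

G = nx.DiGraph([("carnivore.n.01", "canine.n.02"),
                ("canine.n.02", "fox.n.01"),
                ("canine.n.02", "wolf.n.01")])
print(maedche_staab_similarity(G, "fox.n.01", "wolf.n.01"))             # 0.5
print(1.0 / (1.0 + sanchez_dissimilarity(G, "fox.n.01", "wolf.n.01")))  # converted to a similarity
```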
Information-based Similarities
Semantic similarity is also defined using the notion of informativeness of a concept, inspired by information theory. Each concept x is assigned an Information Content (IC) I(x) [22,25]. This can be defined using only properties of the taxonomy, i.e. the graph G (intrinsic IC), or using the probability of observing the concept in corpora (extrinsic IC) [8, p. 54].
We use an intrinsic definition presented by Zhou et al. in [39], based on the descendants D(x):
I(x) = k · (1 − |D(x)| / |C|) + (1 − k) · log(d_T(x)) / log(max_{z∈C} d_T(z)).   (3)
With a definition of IC, we can apply an information-based similarity measure. Jiang and Conrath propose a semantic distance in [10] using the notion of the Most Informative Common Ancestor M(x, y) of two concepts x, y. It is defined as the element in (A(x) ∩ A(y)) ∪ (x ∩ y) with the highest IC [8, p. 65]. The distance is then defined as [10, p. 8]:
I(x) + I(y) − 2 · I(M(x, y)).   (4)
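A rough sketch of eqs. (3) and (4) on a rooted tree follows. Note that we count depths from 1 at the root so that the logarithms stay finite; this convention, like the toy tree and k = 0.5, is our assumption rather than a detail taken from the text.

```python
import math
import networkx as nx

def intrinsic_ic(T, x, root, k=0.5):
    """Intrinsic information content of eq. (3); T is a rooted tree with parent -> child edges."""
    depth = lambda z: nx.shortest_path_length(T, root, z) + 1  # root counted at depth 1
    max_depth = max(depth(z) for z in T.nodes)
    return (k * (1.0 - len(nx.descendants(T, x)) / T.number_of_nodes())
            + (1.0 - k) * math.log(depth(x)) / math.log(max_depth))

def jiang_conrath_distance(T, x, y, root, k=0.5):
    """Eq. (4): I(x) + I(y) - 2 * I(M(x, y)), with M(x, y) the common (or own) concept of highest IC."""
    common = ({x} | nx.ancestors(T, x)) & ({y} | nx.ancestors(T, y))
    mica = max(common, key=lambda z: intrinsic_ic(T, z, root, k))
    return (intrinsic_ic(T, x, root, k) + intrinsic_ic(T, y, root, k)
            - 2.0 * intrinsic_ic(T, mica, root, k))

T = nx.DiGraph([("carnivore.n.01", "canine.n.02"),
                ("canine.n.02", "fox.n.01"),
                ("canine.n.02", "wolf.n.01")])
d = jiang_conrath_distance(T, "fox.n.01", "wolf.n.01", root="carnivore.n.01")
print(1.0 / (1.0 + d))  # converted to a similarity
```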
Perceptual Metrics
Perceptual metrics are usually employed to quantify the distortion or information loss incurred by using compression algorithms. Such methods aim to minimize the difference between the original image and the compressed image and thereby maximize the similarity between both. However, perceptual metrics can also be used to assess the similarity of two independent images.
An image can be represented by an element of a high-dimensional vector space. In this case, the Euclidean distance is a natural candidate for a dissimilarity measure. With the rule 1/(1 + d) from section 2.1, the distance is transformed into a visual similarity measure. To normalize the measure w.r.t. image dimensions and to simplify calculations, the mean squared error (MSE) is used. Applying the MSE to estimate image similarity has shortcomings. For example, shifting an image by one pixel significantly changes the distances to other images, including its unshifted self [31]. An alternative but related measure is the mean absolute difference (MAD), which we also consider in our experiments.
In [37], Wang et al. develop a perceptual metric called Structural Similarity Index to address shortcomings of previous methods. Specifically, they consider properties of the human visual system such that the index better reflects human judgement of visual similarity.
We use MSE, MAD and SSIM as perceptual metrics to indicate visual similarity in our experiments. There are better performing methods when considering human judgement, e.g. [38]. However, we cannot guarantee that humans always treat visuals and semantics as separate. Therefore, we avoid further methods that are motivated by human properties [34,35] or already incorporate semantic knowledge [15,7].
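As a rough sketch, the three perceptual measures can be computed for a pair of grayscale images with values in [0, 1] as shown below; MSE and MAD are converted with the 1/(1 + d) rule, SSIM is used directly, and scikit-image's structural_similarity serves as the SSIM implementation. The random test images are placeholders of our own.

```python
import numpy as np
from skimage.metrics import structural_similarity

def mse_similarity(a, b):
    """Visual similarity from the mean squared error via 1 / (1 + MSE)."""
    return 1.0 / (1.0 + float(np.mean((a - b) ** 2)))

def mad_similarity(a, b):
    """Visual similarity from the mean absolute difference via 1 / (1 + MAD)."""
    return 1.0 / (1.0 + float(np.mean(np.abs(a - b))))

def ssim_similarity(a, b):
    """SSIM lies in [-1, 1] and is used directly as a similarity score."""
    return float(structural_similarity(a, b, data_range=1.0))

rng = np.random.default_rng(0)
a, b = rng.random((32, 32)), rng.random((32, 32))
print(mse_similarity(a, b), mad_similarity(a, b), ssim_similarity(a, b))
```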
Feature-based Measures
Features are extracted to represent images at an abstract level. Thus, distances in such a feature space of images correspond to visual similarity in a possibly more robust way than the aforementioned perceptual metrics. Features have inherent or learned invariances w.r.t. certain transformations that should not affect the notion of visual similarity strongly. However, learned features may also be invariant to transformations that do affect visual similarity because they are optimized for semantic distinction. This behavior needs to be considered when selecting abstract features to determine visual similarity.
GIST [20] is an image descriptor that aims at describing a whole scene using a small number of estimations of specific perceptual properties, such that similar content is close in the resulting feature space. It is based on the notion of a spatial envelope, inspired by architecture, that can be extracted from an image and used to calculate statistics.
For reference, we observe the confusions of five ResNet-32 [9] models to represent feature-based visual similarity on the highest level of abstraction. Because confusion is not a symmetric function, we apply a transform (M + M^T)/2 to obtain a symmetric representation.
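Symmetrizing the confusion matrix is a one-liner; the sketch below assumes a confusion matrix of raw counts indexed as M[true, predicted], which is our assumption about the layout.

```python
import numpy as np

def symmetric_confusion(M):
    """Turn a (classes x classes) confusion matrix into a symmetric class-pair score
    via (M + M^T) / 2, so it can be read as a visual similarity measure (V5)."""
    M = np.asarray(M, dtype=float)
    return (M + M.T) / 2.0

M = np.array([[90, 10],
              [4, 96]])
print(symmetric_confusion(M))  # off-diagonal entries become (10 + 4) / 2 = 7
```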
Evaluating the Relationship
Visual similarity and semantic similarity are measures defined on different domains. Semantic similarities compare concepts, but visual similarities compare individual images. To analyze a correlation, a common domain over which both can be evaluated is essential. We propose to calculate similarities over all pairs of classes in an image classification dataset, which can be defined for both visual and semantic similarities. These pairwise similarities are then tested for correlation. The process is clarified in the following:
1. Dataset. We use the CIFAR-100 dataset [12] to verify our hypotheses. This dataset has a scale at which all experiments take a reasonable amount of time. Our computation times grow quadratically with the number of classes as well as images. Hence, we do not consider ImageNet [4,26] or 80 million tiny images [33] despite their larger coverage of semantic concepts.
2. Semantic similarities. We calculate semantic similarity measures over all pairs of classes in the dataset. The taxonomic relation is-a is taken from WordNet [18] by mapping all classes in CIFAR-100 to their counterpart concepts in WordNet, inducing the graph G(C, is-a). Some measures are defined as distances or dissimilarities. We use the rule presented in section 2.1 to derive similarities. The following measures are evaluated over all pairs of concepts (x, y) ∈ C × C (see section 2):
(S1) Graph distance d_G(x, y) as proposed by Rada et al. [21]. (S2) Depth-aware tree similarity by Resnik [22], eq. (1). (S3) Intersection over union of concept features by Maedche and Staab [16]. (S4) Feature-based dissimilarity by Sanchez et al. [28], eq. (2). (S5) Information-based distance by Jiang and Conrath [10] with the intrinsic IC of eq. (3).
3. Visual similarities. The following measures are evaluated over the images of all pairs of classes (see section 3): (V1) Mean squared error (MSE). (V2) Mean absolute difference (MAD). (V3) Structural Similarity Index (SSIM) [37]. (V4) Distance between GIST descriptors [20] of images in feature space. (V5) Observed symmetric confusions of five ResNet-32 [9] models trained on the CIFAR-100 training set.
4. Aggregation. For both visual and semantic similarity, there is more than one candidate method, i.e. (S1)-(S5) and (V1)-(V5). For the following steps, we need a single measure for each type of similarity, which we aggregate from (S1)-(S5) and (V1)-(V5) respectively. Since each method has its merits, selecting only one each would not be representative of the type of similarity. The output of all candidate methods is normalized individually, such that its range is in [0, 1]. We then calculate the average over each type of similarity, i.e. visual and semantic, to obtain two distinct measures (S) and (V).
5. Baselines. A basic assumption of machine learning is that "the domains occupied by features of different classes are separated" [19, p. 8]. Intuitively, this should apply to the images of different classes as well. We can then expect to predict at least some of the visual similarity between classes just by knowing whether the classes are identical or not. This knowledge is encoded in the semantic baseline (SB), defined as 1 for identical concepts and zero otherwise (see also section 2.1). We propose a second baseline, the semantic noise (SN), where the aforementioned pairwise semantic similarity (S) is calculated, but the concepts are permuted randomly. This baseline serves to assess the informativeness of the taxonomic relationships.
6. Correlation. The similarity measures mentioned above are useful to define an order of similarity, i.e. whether a concept x is more similar to z than concept y. However, it is not reasonable in all cases to interpret them in a linear fashion like a dot product, especially since many are derived from distances or dissimilarities and all were normalized from different ranges of values and then aggregated. We therefore test the similarities for correlation w.r.t. ranking, using Spearman's rank correlation coefficient [30] instead of looking for a linear relationship.
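To make the procedure concrete, the following is a rough end-to-end sketch of steps 2 to 6 on a handful of classes. It uses NLTK's WordNet path_similarity as a stand-in for the aggregated semantic measure (S), the MSE between per-class mean images as a stand-in for (V), and SciPy's spearmanr for the rank correlation; none of these shortcuts are the paper's exact measures, the class-to-synset mapping shown is only a fragment, and the mean images are random placeholders.

```python
import numpy as np
from itertools import combinations
from nltk.corpus import wordnet as wn      # requires nltk.download('wordnet')
from scipy.stats import spearmanr

# A fragment of the CIFAR-100 class-to-WordNet mapping, taken from the examples above.
classes = ["fox.n.01", "wolf.n.01", "maple.n.02", "oak.n.02", "sea.n.01"]
pairs = list(combinations(range(len(classes)), 2))

# (S) stand-in: WordNet path similarity between the class synsets.
syns = [wn.synset(c) for c in classes]
semantic = np.array([syns[i].path_similarity(syns[j]) for i, j in pairs])

# (V) stand-in: 1 / (1 + MSE) between per-class mean images
# (random placeholders; replace with the actual CIFAR-100 class means).
rng = np.random.default_rng(0)
mean_images = rng.random((len(classes), 32, 32, 3))
visual = np.array([1.0 / (1.0 + np.mean((mean_images[i] - mean_images[j]) ** 2))
                   for i, j in pairs])

# (SN) baseline: the same semantic measure after randomly permuting the class labels.
perm = rng.permutation(len(classes))
semantic_noise = np.array([syns[perm[i]].path_similarity(syns[perm[j]]) for i, j in pairs])

# Normalize each measure to [0, 1] (as in the aggregation step), then compare rankings.
norm = lambda v: (v - v.min()) / (v.max() - v.min())
print(spearmanr(norm(semantic), norm(visual)))
print(spearmanr(norm(semantic_noise), norm(visual)))
```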
Results
In the following, we present the results of our experiments defined in the previous section. We first examine both types of similarity individually, comparing the five candidate methods for each. Afterwards, the hypotheses proposed in section 1 are tested. We then perform a qualitative analysis of extreme cases in both similarities and investigate cases of (dis-)agreement between them.
Semantic Similarities
We first analyze the pairwise semantic similarities over all classes. Figure 2a shows the average semantic similarity (S) as specified in section 4. The classes on the axes are ordered by a depth first search through the class hierarchy, yielding clearly visible artifacts of the graph structure.
Although we consider semantic similarity to be a single measure when verifying our hypotheses, studying the correlation between our candidate methods (S1)-(S5) is also important. While of course affected by our selection, it reflects upon the degree of agreement between several experts in the domain. Figure 3a visualizes the correlations. The graph-based methods (S1) and (S2) agree more strongly with each other than with the rest. The same is true of feature-based methods (S3) and (S4), which show the strongest correlation. The inter-agreement R, calculated by taking the average of all correlations except for the main diagonal, is 0.89. This is a strong agreement and suggests that the order of similarity between concepts can be, for the most part, considered representative of a universally agreed upon definition (if one existed). At the same time, one needs to consider that all methods utilize the same WordNet hierarchy.
Baselines Our semantic baseline (SB, see section 4) encodes the basic knowledge that different classes look different. This property should also be fulfilled by the average semantic similarity (S, see section 4). We thus expect there to be at least some correlation. The rank correlation between our average semantic similarity (S) and the semantic baseline (SB) is 0.17 with p < 0.05. This is a weak correlation compared to the strong inter-agreement of 0.89, which suggests that the similarities (S1)-(S5) are vastly more complex than (SB), but at the same time have a lot in common. As a second baseline we test the semantic noise (SN, see section 4). It is not correlated with (S) at ρ = 0.01, p > 0.05, meaning that the taxonomic relationship strongly affects (S). If it did not, the labels could be permuted without changing the pairwise similarities.
Visual Similarities
The average visual similarity (V) as estimated over all classes is shown in fig. 2b. For reference, we show the symmetric confusion matrix (see section 4) in fig. 2c. Comparing (V) to (S), the graph structure is less visible. In the confusion matrix however, the artifacts are more present.
Intuitively, visual similarity is a concept that is hard to define clearly and uniquely. Because we selected very different approaches with very different ideas and motivations behind them, we expect the agreement between (V1)-(V5) to be weak. Figure 3b shows the rank correlations between each candidate method. The agreement is strongest between the mean squared error (V1) and the GIST feature distance (V4). Both are L2 distances, but calculated in separate domains, highlighting the strong nonlinearity and complexity of image descriptors. The inter-agreement is very weak at R = 0.17. The results confirm our intuition that visual similarity is very hard to define in mathematical terms. There is also no body of knowledge that all methods use in the visual domain in the way that WordNet provides for semantics.
Hypotheses
To give a brief overview, the rank correlations between the different components of H 1 -H 3 are shown in fig. 4. In the following, we give our results w.r.t. the individual hypotheses. They are discussed further in section 5.5.
H 1 There is a link between visual similarity and semantic similarity. Using the definitions from section 4 including the semantic baseline (SB), we can examine the respective correlations. The rank correlation between (V) and (S) is 0.23, p < 0.05, indicating a link. Before we consider the hypothesis verified, we also evaluate what fraction of (V) is already explained by the semantic baseline (SB) as per our condition given in section 4. The rank correlation between (V) and (SB) is 0.17, p < 0.05, which is a weaker link than between (V) and (S). Additionally, (V) and (SN) are not correlated, illustrating that the wrong semantic knowledge can be worse than none. Thus, we can verify H 1 .
H 2 There is a link between visual similarity and model confusion. Since model confusion as (V5) is a contributor to average visual similarity (V), we consider only (V-), comprised of (V1)-(V4) for this hypothesis. The rank correlation between (V-) and the symmetric
Special Cases
In this section, we first perform a qualitative analysis of visual similarity and semantic similarity individually by looking at extreme values. We then inspect cases of strong agreement and disagreement between both.
Visual Figures 6a and 6b show the three most similar and three least similar concept pairs in CIFAR-100. The aggregated normalized visual similarity measures are not readily interpretable. Still, the choice of plain.n.01 and sea.n.01 as the most similar pair of concepts appears reasonable. Both classes have the horizon as a central feature, with half of the image showing the sky, which is also true for the second most similar combination, cloud.n.02 and sea.n.01. At the low resolution of CIFAR-100, the third most similar pair of maple.n.02 and oak.n.02 is hard to distinguish visually, except for the slightly larger range of maple hues. The three least similar pairs in CIFAR-100 are visually different on at least three levels. Globally, the colors are almost inverted. The round shapes of orange.n.01 clash with the comparatively hard edges of dolphin.n.02, ray.n.07 and shark.n.01 and locally, the textures are very different.
Semantic We also investigate the range of semantic similarities calculated over the CIFAR-100 dataset. Figure 7a shows examples of the most semantically similar concept pairs. fox.n.01 and wolf.n.01 are not only most similar semantically, but show a strong visual likeness, too. This also applies to otter.n.02 and skunk.n.04 as well as ray.n.07 and shark.n.01, which are both visually similar to a degree. When inspecting the most dissimilar pairs, there is one pair of cattle.n.01 and forest.n.01 where there is a reasonably strong visual similarity, hinting at a disagreement.
Agreement To further analyze the correlation, we examine specific cases of very strong agreement or disagreement. Figure 5 shows these extreme cases. We determine agreement based on ranking, so the most strongly agreed upon pairs (see fig. 5a) still show different absolute similarity numbers. Interestingly, they are not cases of extreme similarities. It suggests that even weak disagreements are more likely to be found at similarities close to the boundaries. When investigating strong disagreement as shown in fig. 5b, there are naturally extreme values to be found. All three pairs involve forest.n.01, which was also a part of the second least semantically similar pair. Its partners are all animals which usually have a background visually similar to a forest, hence the strong disagreement. However, the low semantic similarity is possibly an artifact of reducing a whole image to a single concept.
Discussion
H 1 : There is a link between visual similarity and semantic similarity. The relationship is stronger than a simple baseline, but weak overall at ρ = 0.23 vs ρ = 0.17. This should be considered when employing methods where visuals and semantics interact, e.g. in knowledge transfer. Failure cases such as in fig. 5b can only be found when labels are known, which has implications for real-life applications of semantic methods. As labels are unknown or lack visual examples, such cases are not predictable beforehand. This poses problems for applications that rely on accurate classification such as safety-critical equipment or even research in other fields consuming model predictions. A real-world example is wildlife conservationists relying on statistics from automatic camera trap image classification to draw conclusions on biodiversity. That semantic similarity of randomly permuted classes is not correlated with visual similarity at all, while the baseline is, suggests that wrong semantic knowledge can be much worse than no knowledge. H 2 : There is a link between visual similarity and model confusion. Visual similarity is defined on a low level for H 2 . As such, it should not cause model confusion by itself. On the one hand, the model can fail to generalize and cause an avoidable confusion. On the other hand, there may be an issue with the dataset. The test set may be sampled from a different distribution than the training set. It may also violate the postulate that different classes look different by containing the same or similar images across classes.
H 3 : There is a link between semantic similarity and model confusion. Similar to H 1 , it suggests that semantic methods could be applied to our data, but maybe not in general because failure cases are unpredictable. However, it implies a stronger effectiveness than H 1 at ρ = 0.39 vs. the baseline at ρ = 0.21. We attribute this to the model's capability of abstraction. It aligns with the idea of taxonomy, which is based on repeated abstraction of concepts. Using a formulation that optimizes semantic similarity instead of cross-entropy (which would correspond to the semantic baseline) or even a hierarchical classifier can be recommended in our situation. It may still not generalize to other settings and any real-world application of such methods should be verified with at least a small test set.
Qualitative Some failures or disagreements may not be a result of the relationship itself, but of its application to image classification. The example from fig. 1 is valid when the whole image is reduced to a single concept. Still, the agreement between visual and semantic similarity may increase when the image is described in a more holistic fashion. While "deer" and "forest" as nouns are taxonomically only loosely related, the descriptions "A deer standing in a forest, partially occluded by a tree and tall grass" and "A forest composed of many trees and bushes, with the daytime sky visible" already appear more similar, while those descriptions are still missing some of the images' contents. This suggests that more complex tasks than image classification stand to benefit more from semantic methods.
In further research, not only nouns should be considered, but also adjectives, decompositions of objects into parts as well as working with a more general notion of semantic relatedness instead of simply semantic similarity. Datasets like Visual Genome [11] offer more complex annotations mapped to WordNet concepts that could be subjected to further study. However, tasks much more complex than hierarchical image classification on a semantic level lack a compelling real-world application to the best of our knowledge.
Conclusion
We present results of a comprehensive evaluation of semantic similarity measures and their correlation with visual similarities. We measure against the simple prior knowledge of different classes having different visuals. Then, we show that the relationship between semantic similarity, as calculated from WordNet [18] using five different methods, and visual similarity, also represented by five measures, is more meaningful than that. Furthermore, inter-agreement measures suggest that semantic similarity has a more agreed upon definition than visual similarity, although both concepts are based on human perception.
The results indicate that further research, especially into tasks different from image classification is warranted because of the semantically reductive nature of image labels. It may restrict the performance of semantic methods.
| 4,413 |
1811.06564
|
2901710196
|
We study the emergence of communication in multiagent adversarial settings inspired by the classic Imitation game. A class of three player games is used to explore how agents based on sequence to sequence (Seq2Seq) models can learn to communicate information in adversarial settings. We propose a modeling approach, an initial set of experiments and use signaling theory to support our analysis. In addition, we describe how we operationalize the learning process of actor-critic Seq2Seq based agents in these communicational games.
|
Attention has been recently brought to the communicational aspects of multiagent systems. Recent research has explored cooperative @cite_13 @cite_15 @cite_4 and semi-cooperative scenarios such as negotiation games @cite_5 . The emergence of communication in adversarial scenarios has been explored less extensively.
|
{
"abstract": [
"Multi-agent reinforcement learning offers a way to study how communication could emerge in communities of agents needing to solve specific problems. In this paper, we study the emergence of communication in the negotiation environment, a semi-cooperative model of agent interaction. We introduce two communication protocols -- one grounded in the semantics of the game, and one which is ungrounded and is a form of cheap talk. We show that self-interested agents can use the pre-grounded communication channel to negotiate fairly, but are unable to effectively use the ungrounded channel. However, prosocial agents do learn to use cheap talk to find an optimal negotiating strategy, suggesting that cooperation is necessary for language to emerge. We also study communication behaviour in a setting where one agent interacts with agents in a community with different levels of prosociality and show how agent identifiability can aid negotiation.",
"The current mainstream approach to train natural language systems is to expose them to large amounts of text. This passive learning is problematic if we are interested in developing interactive machines, such as conversational agents. We propose a framework for language learning that relies on multi-agent communication. We study this learning in the context of referential games. In these games, a sender and a receiver see a pair of images. The sender is told one of them is the target and is allowed to send a message from a fixed, arbitrary vocabulary to the receiver. The receiver must rely on this message to identify the target. Thus, the agents develop their own language interactively out of the need to communicate. We show that two networks with simple configurations are able to learn to coordinate in the referential game. We further explore how to make changes to the game environment to cause the \"word meanings\" induced in the game to better reflect intuitive semantic properties of the images. In addition, we present a simple strategy for grounding the agents' code into natural language. Both of these are necessary steps towards developing machines that are able to communicate with humans productively.",
"We consider the problem of multiple agents sensing and acting in environments with the goal of maximising their shared utility. In these environments, agents must learn communication protocols in order to share information that is needed to solve the tasks. By embracing deep neural networks, we are able to demonstrate end-to-end learning of protocols in complex environments inspired by communication riddles and multi-agent computer vision problems with partial observability. We propose two approaches for learning in these domains: Reinforced Inter-Agent Learning (RIAL) and Differentiable Inter-Agent Learning (DIAL). The former uses deep Q-learning, while the latter exploits the fact that, during learning, agents can backpropagate error derivatives through (noisy) communication channels. Hence, this approach uses centralised learning but decentralised execution. Our experiments introduce new environments for studying the learning of communication protocols and present a set of engineering innovations that are essential for success in these domains.",
"Learning to communicate through interaction, rather than relying on explicit supervision, is often considered a prerequisite for developing a general AI. We study a setting where two agents engage in playing a referential game and, from scratch, develop a communication protocol necessary to succeed in this game. Unlike previous work, we require that messages they exchange, both at train and test time, are in the form of a language (i.e. sequences of discrete symbols). We compare a reinforcement learning approach and one using a differentiable relaxation (straight-through Gumbel-softmax estimator) and observe that the latter is much faster to converge and it results in more effective protocols. Interestingly, we also observe that the protocol we induce by optimizing the communication success exhibits a degree of compositionality and variability (i.e. the same information can be phrased in different ways), both properties characteristic of natural languages. As the ultimate goal is to ensure that communication is accomplished in natural language, we also perform experiments where we inject prior information about natural language into our model and study properties of the resulting protocol."
],
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_4"
],
"mid": [
"2786533521",
"2950472486",
"2395575420",
"2963681240"
]
}
|
Seq2Seq Mimic Games: A Signaling Perspective
|
We propose an initial step towards models to study the emergence of communication in adversarial environments. In particular, we explore Seq2Seq [18][5] [21] based agents in a class of games inspired by the Imitation game [20].
We analyze these games from the perspective of signaling games [17] [23]. Agents are required to learn how to maximize their expected reward by improving their communication policies. Experiments do not assume previous knowledge or training and all agents are a priori ungrounded, i.e. tabula rasa.
Training Seq2Seq models is known to be challenging [1][2][22][9]. In this work, we use an actor-critic architecture; however, unlike Bahdanau et al. [2], our emphasis is not on sequence prediction but on maximizing expected rewards by developing an adequate communicational strategy.
When analyzed from a signaling perspective, we show how agents in adversarial conditions may learn to communicate and transfer information when intrinsic or environmental conditions are adequate: e.g. handicap and computational advantages. Depending on the game structure, we show how agents reach separating or pooling equilibria [17].
Game class description
We study a class of three-player imperfect-information iterated games. Two agents, one of each type c ∈ {blue, red} (i.e. A_blue and A_red), and a single interrogator agent (I) exchange messages m, i.e. sequences m = s_1 s_2 ··· s_L of discrete symbols s_l ∈ Σ from a prearranged alphabet Σ. This alphabet includes a special symbol <EOS> to indicate the end of sentences. The lack of common knowledge is a key distinction with respect to the classic Imitation game definition. Agents are not aware of their own or others' limitations. Like classic reinforcement learning, they need to explore and discover their competitive advantages or disadvantages through interaction.
In every iteration t, the interrogator I starts by sending a question/primer m^Q_t to both A_c agents. Afterwards, I receives their answers as two anonymous messages m^{A_i}_t and is rewarded if it can determine the type c_i of the source of each message. Blue and red are in adversarial positions. While A_blue is rewarded when its messages are recognized, A_red is rewarded when it misleads I and c = red messages are incorrectly identified as c = blue. After each iteration, all messages, sources, rewards and I's inferred types are made available to all agents. Agents use this information to update their communication strategies and beliefs.
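A minimal skeleton of one game iteration is sketched below, with the agents stubbed out as callables. The reward scheme follows the description above (blue is rewarded when recognized, red when it is mistaken for blue), while everything else, including the names, the alphabet, and the stub policies, is our own illustration.

```python
import random

ALPHABET = list("abcd") + ["<EOS>"]

def random_policy(prompt=None, max_len=5):
    """Stub actor: emits random symbols until <EOS> or the length limit."""
    msg = []
    while len(msg) < max_len:
        s = random.choice(ALPHABET)
        msg.append(s)
        if s == "<EOS>":
            break
    return msg

def play_iteration(interrogator_ask, interrogator_classify, actors):
    """One iteration: question, anonymous answers, classification, rewards."""
    question = interrogator_ask()
    answers = {c: actors[c](question) for c in ("blue", "red")}
    rewards = {"blue": 0, "red": 0, "interrogator": 0}
    for true_type, message in answers.items():
        guess = interrogator_classify(question, message)  # "blue" or "red"
        if guess == true_type:
            rewards["interrogator"] += 1
        if true_type == "blue" and guess == "blue":
            rewards["blue"] += 1                          # blue wants to be recognized
        if true_type == "red" and guess == "blue":
            rewards["red"] += 1                           # red wants to pass as blue
    return question, answers, rewards

actors = {"blue": random_policy, "red": random_policy}
classify = lambda q, m: random.choice(["blue", "red"])
print(play_iteration(random_policy, classify, actors))
```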
Model
We use Seq2Seq actor-critic models in every agent. While actors are parametric generative models that produce sequences, critics provide a subjective estimation of the expected reward for a given partial sequence. Agents train a critic using data and later optimize their behavioral strategy (actor) using the critic's feedback. After each round, all information is made public, so critics can be trained using all available experiences including adversaries' responses. Figure 1 shows a block diagram describing the agents' model structure. All encoders and decoders use gated recurrent units (GRU) [5] and a simple attention mechanism [3]. To simplify notation, we omit subscript t unless the context is not clear.
Actors A_c agents (c ∈ {red, blue}) use a Seq2Seq model [18] as actor, i.e. a GRU encoder E^A_c followed by a GRU decoder D^A_c that terminates in a softmax. The interrogator I actor has two output branches: an encoder E^A_I followed by a decoder D^A_I or a discriminator C^A_I. Which output is used depends on the game step (questioning vs. classifying).
Critics Critics follow a similar structure: a GRU encoder E^C followed by a feed-forward network F^C that terminates in a softmax function. Given a partial input sequence m_{1:k} = s_1 ··· s_k, the critic estimates the corresponding q-value Q(m_{1:k}, s_{k+1}) for every possible s_{k+1} ∈ Σ. We use a technique similar to DQN [15] and vectorize the calculation to obtain a single vector with Q-values for each possible s_{k+1} ∈ Σ.
(Figure 1: block diagram of the agents' model structure, showing the actor and critic components of the interrogator I and of each agent A_c.)
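A hedged PyTorch sketch of one critic follows: a GRU encoder over the symbol sequence and a feed-forward head that outputs one Q-value per candidate next symbol. Sizes and layer choices here are our own (the experiments' hidden sizes are given in Table 2), and the softmax mentioned above is omitted because the actor update described later consumes raw Q-values.

```python
import torch
import torch.nn as nn

class SeqCritic(nn.Module):
    """Critic sketch: GRU encoder over a partial message, feed-forward head giving
    one Q-value estimate per candidate next symbol."""
    def __init__(self, vocab_size, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, vocab_size))

    def forward(self, tokens):            # tokens: (batch, seq_len) of symbol ids
        _, h = self.encoder(self.embed(tokens))
        return self.head(h.squeeze(0))    # (batch, vocab_size) of Q-values

critic = SeqCritic(vocab_size=5)
partial = torch.tensor([[1, 3, 2]])       # a partial message m_{1:k}
print(critic(partial).shape)              # torch.Size([1, 5])
```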
Learning
In every iteration t, we sample N end-to-end interactions (m^Q, c_{A_1}, c_{A_2}, m^{A_1}, m^{A_2}, c^I_1, c^I_2) where c^I_i indicates the inferred type with respect to message m^{A_i}. There are two training stages. We first train critics and the discriminator using sampled data. Secondly, we train actors using feedback from their respective critics.
Training with data Samples are used to train both the critics and the discriminator. All critics (A^C_c, I^C) are trained using concatenated sequences m = m^Q || m^{A_i} as input. We use the binary cross entropy H(x, y) = −[y · log x + (1 − y) · log(1 − x)] and target values η(x) = δ_{c^I, x}, where δ_{i,j} is the Kronecker delta.
The corresponding losses for each message m and respective known type c are shown in Table 1. While critics are trained using partial sequences, the discriminator only considers full messages. Losses are accumulated over all samples and optimization is done using Adam [12]. Training with critics We train actors using feedback from their respective critics. We use trigger messages to obtain a response from the actors, i.e. A^A_c receives m^Q and outputs m^{A_i}. We use m^Q messages from previous training samples. For the interrogator, we use fixed empty triggers: actor I^A receives m^T = <EOS> and outputs m^Q. As each symbol m_k is generated, we retrieve both the symbol and the corresponding multinomial distribution π_{m_{1:k−1}} used by the decoder.
From the respective critic, we obtain a vector Q(m_{1:k−1}, ·) with q-values for each possible symbol. The dot product π_{m_{1:k−1}} · Q(m_{1:k−1}, ·) results in the policy's expected reward. The sum of the scores of all samples is later optimized using Adam [12].
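A rough sketch of the actor update for a single decoding step: the decoder's categorical distribution π is dotted with the critic's Q-vector to obtain the expected reward of the current policy, and the negative of that score is minimized with Adam. Only one step with placeholder tensors is shown here; in the paper the scores of all sampled sequences are summed before the update.

```python
import torch
import torch.nn as nn

vocab_size, hidden = 5, 32
decoder_logits = nn.Linear(hidden, vocab_size)   # stand-in for the decoder output layer
optimizer = torch.optim.Adam(decoder_logits.parameters(), lr=1e-3)

decoder_state = torch.randn(1, hidden)           # hidden state after emitting m_{1:k-1}
q_values = torch.randn(1, vocab_size)            # critic output Q(m_{1:k-1}, ·), held fixed here

pi = torch.softmax(decoder_logits(decoder_state), dim=-1)
expected_reward = (pi * q_values).sum()          # pi · Q, the policy's expected reward
(-expected_reward).backward()                    # ascend the expected reward
optimizer.step()
optimizer.zero_grad()
```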
Experiments
We ran an initial set of experiments where we varied environmental properties or agent capabilities. All experiments involve continuous learning. Experiment parameters are shown in Table 2.
Within these experiments, it is important to remember that it is in the best interest of agent red not to reveal its identity. In the games we explore, two kinds of equilibria are expected: separating or pooling. In pooling equilibria, the interrogator is not able to extract enough information from messages to determine sources correctly. Instead, a separating equilibrium is reached whenever messages carry enough information for the interrogator to effectively determine the source. In all games, we impose limitations on the number of Seq2Seq hidden units h and the maximum number of symbols an agent can emit before its communication channel is terminated. Agents are not explicitly aware of these factors but can indirectly perceive them through interaction.
The Identical experiment is a game where all settings and S c agents are identical. The interrogator has no chance of differentiating them resulting as expected in a pooling equilibrium. We use this game to confirm basic behavior.
In Handicap experiments (2 and 3), we assign different length limits to agents. Experiment 2 shows how blue discovers this advantage to distinguish itself (separating equilibrium). However, when the advantage is given to red (experiment 3), it is in red's best interest to hide this fact and the game converges to a pooling equilibrium.
The second group of experiments enables questioning. We vary the number of hidden units in the encoder/decoders as described in Table 2.
Results of experiment 5 are as expected: red does not exploit its advantages and is able to mimic blue, leading to a pooling equilibrium. Experiment 4 shows an interesting outcome. The interrogator is able to separate types; we intuitively expected I to randomize questions, but instead both I and blue converge to fixed sequences. Detailed analysis shows that due to the limited number of hidden units, red is not able to mimic the constant outputs of blue. Stochastic gradient descent updates affect already learned responses. This results in unstable outputs that allow I to differentiate between agents. When the number of hidden units increases (Neurons B (6, 7)), the issue does not occur anymore. For a higher number of units, we could not detect situations where the interrogator was able to differentiate sources.
Conclusions
In this work, we presented a broad class of games inspired by the Imitation game that we relate to signaling theory. We used these games to explore how communication may arise in adversarial scenarios. We explored some factors that may enable or hinder separating equilibria. To our knowledge, this is the first piece of research that explores signaling theory in games that involve Seq2Seq models. Last but not least, we present a simple operational approach to train ungrounded Seq2Seq agents in this domain. In future work, we intend to pre-train agents in some known language such as English. This will allow us to explore a more complex range of experiments by extending our work to question-answering settings and grounded communication.
| 1,593 |
1811.06564
|
2901710196
|
We study the emergence of communication in multiagent adversarial settings inspired by the classic Imitation game. A class of three player games is used to explore how agents based on sequence to sequence (Seq2Seq) models can learn to communicate information in adversarial settings. We propose a modeling approach, an initial set of experiments and use signaling theory to support our analysis. In addition, we describe how we operationalize the learning process of actor-critic Seq2Seq based agents in these communicational games.
|
Generative adversarial networks (GANs) @cite_1 have resulted in a wide range of interesting adversarial applications. However, the extension to sequence-to-sequence models (Seq2Seq) @cite_20 has been difficult. Combining GANs with Seq2Seq models is challenging because discrete samples drawn from categorical distributions hinder backpropagation. In addition, alternatives for how to perform reward imputation on partial sequences @cite_3 have been proposed.
|
{
"abstract": [
"We connect a broad class of generative models through their shared reliance on sequential decision making. Motivated by this view, we develop extensions to an existing model, and then explore the idea further in the context of data imputation - perhaps the simplest setting in which to investigate the relation between unconditional and conditional generative modelling. We formulate data imputation as an MDP and develop models capable of representing effective policies for it. We construct the models using neural networks and train them using a form of guided policy search [9]. Our models generate predictions through an iterative process of feedback and refinement. We show that this approach can learn effective policies for imputation problems of varying difficulty and across multiple datasets.",
"",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier."
],
"cite_N": [
"@cite_3",
"@cite_1",
"@cite_20"
],
"mid": [
"590442793",
"",
"2949888546"
]
}
|
Seq2Seq Mimic Games: A Signaling Perspective
|
We propose an initial step towards models to study the emergence of communication in adversarial environments. In particular, we explore Seq2Seq [18][5] [21] based agents in a class of games inspired by the Imitation game [20].
We analyze these games from the perspective of signaling games [17] [23]. Agents are required to learn how to maximize their expected reward by improving their communication policies. Experiments do not assume previous knowledge or training and all agents are a priori ungrounded, i.e. tabula rasa.
Training Seq2Seq models is known to be challenging [1][2][22][9]. In this work, we use an actor-critic architecture; however, unlike Bahdanau et al. [2], our emphasis is not on sequence prediction but on maximizing expected rewards by developing an adequate communicational strategy.
When analyzed from a signaling perspective, we show how agents in adversarial conditions may learn to communicate and transfer information when intrinsic or environmental conditions are adequate: e.g. handicap and computational advantages. Depending on the game structure, we show how agents reach separating or pooling equilibria [17].
Game class description
We study a class of three-player imperfect-information iterated games. Two agents, one of each type c ∈ {blue, red} (i.e. A_blue and A_red), and a single interrogator agent (I) exchange messages m, i.e. sequences m = s_1 s_2 ··· s_L of discrete symbols s_l ∈ Σ from a prearranged alphabet Σ. This alphabet includes a special symbol <EOS> to indicate the end of sentences. The lack of common knowledge is a key distinction with respect to the classic Imitation game definition. Agents are not aware of their own or others' limitations. Like classic reinforcement learning, they need to explore and discover their competitive advantages or disadvantages through interaction.
In every iteration t, the interrogator I starts by sending a question/primer m^Q_t to both A_c agents. Afterwards, I receives their answers as two anonymous messages m^{A_i}_t and is rewarded if it can determine the type c_i of the source of each message. Blue and red are in adversarial positions. While A_blue is rewarded when its messages are recognized, A_red is rewarded when it misleads I and c = red messages are incorrectly identified as c = blue. After each iteration, all messages, sources, rewards and I's inferred types are made available to all agents. Agents use this information to update their communication strategies and beliefs.
Model
We use Seq2Seq actor-critic models in every agent. While actors are parametric generative models that produce sequences, critics provide a subjective estimation of the expected reward for a given partial sequence. Agents train a critic using data and later optimize their behavioral strategy (actor) using the critic's feedback. After each round, all information is made public, so critics can be trained using all available experiences including adversaries' responses. Figure 1 shows a block diagram describing the agents' model structure. All encoders and decoders use gated recurrent units (GRU) [5] and a simple attention mechanism [3]. To simplify notation, we omit subscript t unless the context is not clear.
Actors A_c agents (c ∈ {red, blue}) use a Seq2Seq model [18] as actor, i.e. a GRU encoder E^A_c followed by a GRU decoder D^A_c that terminates in a softmax. The interrogator I actor has two output branches: an encoder E^A_I followed by a decoder D^A_I or a discriminator C^A_I. Which output is used depends on the game step (questioning vs. classifying).
Critics Critics follow a similar structure: a GRU encoder E^C followed by a feed-forward network F^C that terminates in a softmax function. Given a partial input sequence m_{1:k} = s_1 ··· s_k, the critic estimates the corresponding q-value Q(m_{1:k}, s_{k+1}) for every possible s_{k+1} ∈ Σ. We use a technique similar to DQN [15] and vectorize the calculation to obtain a single vector with Q-values for each possible s_{k+1} ∈ Σ.
[Figure 1: Block diagram of the agent models. Each agent A_c has an actor A^A_c (encoder E^A_c, decoder D^A_c) that maps the question m_Q to an answer m_Ai, and a critic A^C_c (encoder E^C_c, network F^C_c) that outputs Q_Ai(m_1:k, s_k+1). The interrogator I has an actor I^A (encoder E^A_I, decoder D^A_I) that produces m_Q, a discriminator I^D (F^D_I) that outputs P(blue|m), and a critic I^C (E^C_I, F^C_I) that outputs Q_I(m_1:k, s_k+1).]
Learning
In every iteration t, we sample N end-to-end interactions (m Q , c A1 , c A2 , m A1 , m A2 , c I 1 , c I 2 ) where c I i indicates the inferred type with respect to message m Ai . There are two training stages. We first train critics and the discriminator using sampled data. Secondly, we train actors using feedback from their respective critics.
Training with data Samples are used to train both the critics and the discriminator. All critics (A C c , I C ) are trained using concatenated sequences m = m Q ||m Ai as input. We use binary cross entropy H(x, y) = − [y · log x + (1 − y) · log(1 − x)] and target values η(x) = δ c I ,x where δ i,j is the Kronecker delta.
The corresponding losses for each message m and respective known type c are shown in Table 1. While critics are trained using partial sequences, the discriminator only considers full messages. Losses are accumulated over all samples and optimization is done using Adam [12].
Training with critics We train actors using feedback from their respective critics. We use trigger messages to obtain a response from actors, i.e. A A c receives m Q and outputs m Ai . We use m Q messages from previous training samples. For the interrogator, we use fixed empty triggers: actor I A receives m T = <EOS> and outputs m Q . As each symbol m k is generated, we retrieve both the symbol and the corresponding multinomial distribution π m 1:k−1 used by the decoder.
From the respective critic, we obtain a vector Q(m 1:k−1 , * ) with q-values for each possible symbol. The dot product π m 1:k−1 · Q(m 1:k−1 , * ) results in the policy's expected reward. The sum of the scores of all samples is later optimized using Adam [12].
Experiments
We ran an initial set of experiments where we varied environmental properties or agent capabilities. All experiments involve continuous learning. Experiment parameters are shown in Table 2.
Within these experiments, it is important to remember that it is in the best interest of agent red not to reveal its identity. In the games we explore, two kinds of equilibria are expected: separating or pooling. In pooling equilibria, the interrogator is not able to extract enough information from messages to determine sources correctly. In contrast, a separating equilibrium is reached whenever messages carry enough information for the interrogator to effectively determine the source. In all games, we impose limitations on the number of Seq2Seq hidden units h and the maximum number of symbols an agent can emit before its communication channel is terminated. Agents are not explicitly aware of these factors but can indirectly perceive them through interaction.
The Identical experiment is a game where all settings and A c agents are identical. The interrogator has no chance of differentiating them, resulting, as expected, in a pooling equilibrium. We use this game to confirm basic behavior.
In the Handicap experiments (2 and 3), we assign different length limits to the agents. Experiment 2 shows how blue discovers this advantage and uses it to distinguish itself (separating equilibrium). However, when the advantage is given to red (experiment 3), it is in red's best interest to hide this fact, and the game converges to a pooling equilibrium.
The second group of experiments enables questioning. We vary the number of hidden units in the encoders/decoders as described in Table 2.
Results of experiment 5 are as expected: red does not exploit its advantages and is able to mimic blue, leading to a pooling equilibrium. Experiment 4 shows an interesting outcome. The interrogator is able to separate types: we intuitively expected I to randomize questions, but instead both I and blue converge to fixed sequences. Detailed analysis shows that, due to the limited number of hidden units, red is not able to mimic the constant outputs of blue. Stochastic gradient descent updates affect already learned responses. This results in unstable outputs that allow I to differentiate between agents. When the number of hidden units increases (Neurons B (6, 7)), the issue no longer occurs. For a higher number of units, we could not detect situations where the interrogator was able to differentiate sources.
Conclusions
In this work, we presented a broad class of games inspired by the Imitation game that we relate to signaling theory. We used these games to explore how communication may arise in adversarial scenarios. We explored some factors that may enable or hinder separating equilibria. To our knowledge, this is the first piece of research that explores signaling theory in games that involve Seq2Seq models. Last but not least, we present a simple operational approach to train ungrounded Seq2Seq agents in this domain. In future work, we intend to pre-train agents in some known language such as English. This will allow us to explore a more complex range of experiments by extending our work to question-answering settings and grounded communication.
| 1,593 |
1811.06564
|
2901710196
|
We study the emergence of communication in multiagent adversarial settings inspired by the classic Imitation game. A class of three player games is used to explore how agents based on sequence to sequence (Seq2Seq) models can learn to communicate information in adversarial settings. We propose a modeling approach, an initial set of experiments and use signaling theory to support our analysis. In addition, we describe how we operationalize the learning process of actor-critic Seq2Seq based agents in these communicational games.
|
With respect to backpropagating errors, reparameterization @cite_7 has been used multiple times to allow for backpropagation through stochastic nodes. In particular, Gumbel-Softmax @cite_16 has allowed categorical distributions @cite_8 in stochastic computational graphs. This technique has been shown to be an alternative to reinforcement learning @cite_4 within the scope of cooperative referential games. More recently, similar ideas resulted in SeqGAN @cite_11 being proposed. Further incremental improvements have been published, such as applying actor-critic models @cite_17 or combining with proximal policy optimization (PPO) @cite_14 in order to improve learning performance.
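As a toy illustration of the straight-through Gumbel-Softmax idea discussed above, the following sketch draws a differentiable sample from an invented 4-way categorical distribution and backpropagates through it; the logits, temperature, and downstream loss are illustrative assumptions, not taken from any of the cited papers.

```python
import torch
import torch.nn.functional as F

# Hypothetical logits of a 4-way categorical distribution; requires_grad so we
# can check that gradients flow back through the (relaxed) discrete sample.
logits = torch.tensor([1.0, 0.5, -0.2, 0.1], requires_grad=True)

# Straight-through Gumbel-Softmax: the forward pass yields a hard one-hot sample,
# while the backward pass uses the continuous relaxation (temperature tau = 0.5).
sample = F.gumbel_softmax(logits, tau=0.5, hard=True)

# Any downstream loss of the sample remains differentiable w.r.t. the logits.
loss = (sample * torch.arange(4.0)).sum()
loss.backward()
print(sample, logits.grad)
```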
|
{
"abstract": [
"In sequence generation task, many works use policy gradient for model optimization to tackle the intractable backpropagation issue when maximizing the non-differentiable evaluation metrics or fooling the discriminator in adversarial learning. In this paper, we replace policy gradient with proximal policy optimization (PPO), which is a proved more efficient reinforcement learning algorithm, and propose a dynamic approach for PPO (PPO-dynamic). We demonstrate the efficacy of PPO and PPO-dynamic on conditional sequence generation tasks including synthetic experiment and chit-chat chatbot. The results show that PPO and PPO-dynamic can beat policy gradient by stability and performance.",
"Learning to communicate through interaction, rather than relying on explicit supervision, is often considered a prerequisite for developing a general AI. We study a setting where two agents engage in playing a referential game and, from scratch, develop a communication protocol necessary to succeed in this game. Unlike previous work, we require that messages they exchange, both at train and test time, are in the form of a language (i.e. sequences of discrete symbols). We compare a reinforcement learning approach and one using a differentiable relaxation (straight-through Gumbel-softmax estimator) and observe that the latter is much faster to converge and it results in more effective protocols. Interestingly, we also observe that the protocol we induce by optimizing the communication success exhibits a degree of compositionality and variability (i.e. the same information can be phrased in different ways), both properties characteristic of natural languages. As the ultimate goal is to ensure that communication is accomplished in natural language, we also perform experiments where we inject prior information about natural language into our model and study properties of the resulting protocol.",
"The ability to backpropagate stochastic gradients through continuous latent distributions has been crucial to the emergence of variational autoencoders and stochastic gradient variational Bayes. The key ingredient is an unbiased and low-variance way of estimating gradients with respect to distribution parameters from gradients evaluated at distribution samples. The \"reparameterization trick\" provides a class of transforms yielding such estimators for many continuous distributions, including the Gaussian and other members of the location-scale family. However the trick does not readily extend to mixture density models, due to the difficulty of reparameterizing the discrete distribution over mixture weights. This report describes an alternative transform, applicable to any continuous multivariate distribution with a differentiable density function from which samples can be drawn, and uses it to derive an unbiased estimator for mixture density weight derivatives. Combined with the reparameterization trick applied to the individual mixture components, this estimator makes it straightforward to train variational autoencoders with mixture-distributed latent variables, or to perform stochastic variational inference with a mixture density variational posterior.",
"Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.",
"",
"Generative Adversarial Networks (GAN) have limitations when the goal is to generate sequences of discrete elements. The reason for this is that samples from a distribution on discrete objects such as the multinomial are not differentiable with respect to the distribution parameters. This problem can be avoided by using the Gumbel-softmax distribution, which is a continuous approximation to a multinomial distribution parameterized in terms of the softmax function. In this work, we evaluate the performance of GANs based on recurrent neural networks with Gumbel-softmax output distributions in the task of generating sequences of discrete elements.",
"As a new way of training generative models, Generative Adversarial Nets (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is non-trivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines."
],
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_17",
"@cite_16",
"@cite_11"
],
"mid": [
"2888742002",
"2963681240",
"2495475966",
"2950151997",
"",
"2565378226",
"2523469089"
]
}
|
Seq2Seq Mimic Games: A Signaling Perspective
|
We propose an initial step towards models to study the emergence of communication in adversarial environments. In particular, we explore Seq2Seq [18][5] [21] based agents in a class of games inspired by the Imitation game [20].
We analyze these games from the perspective of signaling games [17] [23]. Agents are required to learn how to maximize their expected reward by improving their communication policies. Experiments do not assume previous knowledge or training and all agents are a priori ungrounded, i.e. tabula rasa.
Training Seq2Seq models is known to be challenging [1] [2] [22] [9]. In this work, we use an actor-critic architecture; however, unlike Bahdanau et al. [2], our emphasis is not on sequence prediction but on maximizing expected rewards by developing an adequate communication strategy.
When analyzed from a signaling perspective, we show how agents in adversarial conditions may learn to communicate and transfer information when intrinsic or environmental conditions are adequate: e.g. handicap and computational advantages. Depending on the game structure, we show how agents reach separating or pooling equilibria [17].
Game class description
We study a class of three-player imperfect-information iterated games. Two agents, one of each type c ∈ {blue, red} (i.e. A blue and A red ), and a single interrogator agent (I) exchange messages m, i.e. sequences m = s 1 s 2 · · · s L of discrete symbols s l ∈ Σ from a prearranged alphabet Σ. This alphabet includes a special symbol <EOS> to indicate the end of sentences. The lack of common knowledge is a key distinction with respect to the classic Imitation game definition. Agents are not aware of their own or others' limitations. As in classic reinforcement learning, they need to explore and discover their competitive advantages or disadvantages through interaction.
In every iteration t, the interrogator I starts by sending a question/primer m Q t to both A c agents. Afterwards, I receives their answers as two anonymous messages m Ai t and is rewarded if it can determine the type c i of the source of each message. Blue and red are in adversarial positions. While A blue is rewarded when its messages are recognized, A red is rewarded when it misleads I and c = red messages are incorrectly identified as c = blue. After each iteration, all messages, sources, rewards and I's inferred types are made available to all agents. Agents use this information to update their communication strategies and beliefs.
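For concreteness, the sketch below simulates one iteration of this protocol with stand-in, untrained agents (random message generators and a random interrogator guess). The alphabet, length limit, and reward bookkeeping are illustrative assumptions and not the paper's exact implementation.

```python
import random

ALPHABET = ["a", "b", "c", "<EOS>"]  # prearranged alphabet, assumed small here
MAX_LEN = 5

def random_message(max_len=MAX_LEN):
    """Stand-in for a Seq2Seq actor: emit symbols until <EOS> or the length limit."""
    msg = []
    for _ in range(max_len):
        s = random.choice(ALPHABET)
        msg.append(s)
        if s == "<EOS>":
            break
    return msg

def play_iteration():
    # 1) The interrogator sends a question/primer to both agents.
    m_q = random_message()
    # 2) Each agent answers; the answers are presented anonymously (shuffled).
    answers = [("blue", random_message()), ("red", random_message())]
    random.shuffle(answers)
    # 3) The interrogator guesses the type of each anonymous message.
    guesses = [random.choice(["blue", "red"]) for _ in answers]
    # 4) Rewards: blue wants to be recognized, red wants to be taken for blue,
    #    and the interrogator is rewarded for correct identifications.
    rewards = {"blue": 0.0, "red": 0.0, "interrogator": 0.0}
    for (true_type, _msg), guess in zip(answers, guesses):
        if guess == true_type:
            rewards["interrogator"] += 1.0
        if true_type == "blue" and guess == "blue":
            rewards["blue"] += 1.0
        if true_type == "red" and guess == "blue":
            rewards["red"] += 1.0
    # 5) Everything becomes public so agents can update strategies and beliefs.
    return {"m_q": m_q, "answers": answers, "guesses": guesses, "rewards": rewards}

print(play_iteration())
```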
Model
We use Seq2Seq actor-critic models in every agent. While actors are parametric generative models that produce sequences, critics provide a subjective estimation of the expected reward for a given partial sequence. Agents train a critic using data and later optimize their behavioral strategy (actor) using the critic's feedback. After each round, all information is made public, so critics can be trained using all available experiences including adversaries' responses. Figure 1 shows a block diagram describing the agents' model structure. All encoders and decoders use gated recurrent units (GRU) [5] and a simple attention mechanism [3]. To simplify notation, we omit subscript t unless the context is not clear.
Actors A c agents (c ∈ {red, blue}) use a Seq2Seq model [18] as actor, i.e. a GRU encoder E A c followed by a GRU decoder D A c that terminates in a softmax. The interrogator I actor has two output branches: an encoder E A I followed by a decoder D A I or a discriminator C A I . Which output is used depends on the game step (questioning vs classifying).
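The sketch below shows one plausible way to realize such a GRU-based Seq2Seq actor in PyTorch. The embedding and hidden sizes, the sampling-based decoding loop, and the omission of the attention mechanism are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Seq2SeqActor(nn.Module):
    """GRU encoder followed by a GRU decoder that ends in a softmax over the alphabet."""
    def __init__(self, vocab_size, hidden=32, emb=16, eos_id=0, max_len=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRUCell(emb, hidden)
        self.out = nn.Linear(hidden, vocab_size)
        self.eos_id, self.max_len = eos_id, max_len

    def forward(self, question):            # question: (1, T) LongTensor
        _, h = self.encoder(self.embed(question))
        h = h.squeeze(0)                    # (1, hidden) initial decoder state
        token = question.new_full((1,), self.eos_id)
        symbols, dists = [], []
        for _ in range(self.max_len):
            h = self.decoder(self.embed(token), h)
            probs = torch.softmax(self.out(h), dim=-1)   # multinomial over symbols
            token = torch.multinomial(probs, 1).squeeze(1)
            symbols.append(token.item())
            dists.append(probs)
            if token.item() == self.eos_id:
                break
        return symbols, dists

actor = Seq2SeqActor(vocab_size=4)
msg, dists = actor(torch.tensor([[1, 2, 0]]))  # toy question ending in <EOS>=0
print(msg)
```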
Critics Critics follow a similar structure: a GRU encoder E C followed by a feed-forward network F C that terminates in a softmax function. Given a partial input sequence m 1:k = s 1 · · ·s k , the critic estimates the corresponding q-value Q(m 1:k , s k+1 ) for every possible s k+1 ∈ Σ. We use a technique similar to DQN [15] and vectorize the calculation to obtain a single vector with Q values for each possible s k+1 ∈ Σ.
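A matching critic could look like the following sketch: a GRU encoder over the partial message followed by a feed-forward head that outputs one value per possible next symbol, in the DQN-like vectorized form described above, with a softmax at the end as stated in the text. Layer sizes are again assumptions.

```python
import torch
import torch.nn as nn

class Seq2SeqCritic(nn.Module):
    """GRU encoder + feed-forward head producing Q(m_1:k, s) for every s in the alphabet."""
    def __init__(self, vocab_size, hidden=32, emb=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, vocab_size), nn.Softmax(dim=-1))

    def forward(self, prefix):              # prefix: (1, k) LongTensor, m_1:k
        _, h = self.encoder(self.embed(prefix))
        return self.head(h.squeeze(0))      # (1, vocab_size): one value per next symbol

critic = Seq2SeqCritic(vocab_size=4)
print(critic(torch.tensor([[1, 2]])))
```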
[Figure 1: Block diagram of the agent models. Each agent A_c has an actor A^A_c (encoder E^A_c, decoder D^A_c) that maps the question m_Q to an answer m_Ai, and a critic A^C_c (encoder E^C_c, network F^C_c) that outputs Q_Ai(m_1:k, s_k+1). The interrogator I has an actor I^A (encoder E^A_I, decoder D^A_I) that produces m_Q, a discriminator I^D (F^D_I) that outputs P(blue|m), and a critic I^C (E^C_I, F^C_I) that outputs Q_I(m_1:k, s_k+1).]
Learning
In every iteration t, we sample N end-to-end interactions (m Q , c A1 , c A2 , m A1 , m A2 , c I 1 , c I 2 ) where c I i indicates the inferred type with respect to message m Ai . There are two training stages. We first train critics and the discriminator using sampled data. Secondly, we train actors using feedback from their respective critics.
Training with data Samples are used to train both the critics and the discriminator. All critics (A C c , I C ) are trained using concatenated sequences m = m Q ||m Ai as input. We use binary cross entropy H(x, y) = − [y · log x + (1 − y) · log(1 − x)] and target values η(x) = δ c I ,x where δ i,j is the Kronecker delta.
The corresponding losses for each message m and respective known type c are shown in Table 1. While critics are trained using partial sequences, the discriminator only considers full messages. Losses are accumulated over all samples and optimization is done using Adam [12].
Training with critics We train actors using feedback from their respective critics. We use trigger messages to obtain a response from actors, i.e. A A c receives m Q and outputs m Ai . We use m Q messages from previous training samples. For the interrogator, we use fixed empty triggers: actor I A receives m T = <EOS> and outputs m Q . As each symbol m k is generated, we retrieve both the symbol and the corresponding multinomial distribution π m 1:k−1 used by the decoder.
From the respective critic, we obtain a vector Q(m 1:k−1 , * ) with q-values for each possible symbol. The dot product π m 1:k−1 · Q(m 1:k−1 , * ) results in the policy's expected reward. The sum of the scores of all samples is later optimized using Adam [12].
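The following sketch illustrates this actor update under stated assumptions: a single trigger message, the toy Seq2SeqActor and Seq2SeqCritic classes from the sketches above, and Adam. At each decoding step the multinomial distribution is dotted with the critic's Q-vector, and the summed expected reward is maximized.

```python
import torch

# Assumes the Seq2SeqActor and Seq2SeqCritic classes sketched above are defined.
actor, critic = Seq2SeqActor(vocab_size=4), Seq2SeqCritic(vocab_size=4)
opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

m_q = torch.tensor([[1, 2, 0]])             # trigger: a previously seen question
symbols, dists = actor(m_q)                 # generated answer + per-step distributions

expected_reward = 0.0
prefix = m_q.tolist()[0]                    # critic scores the concatenation m_Q || m_A
for step, probs in enumerate(dists):
    q_values = critic(torch.tensor([prefix])).detach()   # Q(m_1:k-1, s) for every s
    expected_reward = expected_reward + (probs * q_values).sum()  # pi . Q
    prefix = prefix + [symbols[step]]

loss = -expected_reward                     # maximize the policy's expected reward
opt.zero_grad()
loss.backward()
opt.step()
```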
Experiments
We ran an initial set of experiments where we varied environmental properties or agent capabilities. All experiments involve continuous learning. Experiment parameters are shown in Table 2.
Within these experiments, it is important to remember that it is in the best interest of agent red not to reveal its identity. In the games we explore, two kinds of equilibria are expected: separating or pooling. In pooling equilibria, the interrogator is not able to extract enough information from messages to determine sources correctly. In contrast, a separating equilibrium is reached whenever messages carry enough information for the interrogator to effectively determine the source. In all games, we impose limitations on the number of Seq2Seq hidden units h and the maximum number of symbols an agent can emit before its communication channel is terminated. Agents are not explicitly aware of these factors but can indirectly perceive them through interaction.
The Identical experiment is a game where all settings and A c agents are identical. The interrogator has no chance of differentiating them, resulting, as expected, in a pooling equilibrium. We use this game to confirm basic behavior.
In the Handicap experiments (2 and 3), we assign different length limits to the agents. Experiment 2 shows how blue discovers this advantage and uses it to distinguish itself (separating equilibrium). However, when the advantage is given to red (experiment 3), it is in red's best interest to hide this fact, and the game converges to a pooling equilibrium.
The second group of experiments enables questioning. We vary the number of hidden units in the encoders/decoders as described in Table 2.
Results of experiment 5 are as expected: red does not exploit its advantages and is able to mimic blue, leading to a pooling equilibrium. Experiment 4 shows an interesting outcome. The interrogator is able to separate types: we intuitively expected I to randomize questions, but instead both I and blue converge to fixed sequences. Detailed analysis shows that, due to the limited number of hidden units, red is not able to mimic the constant outputs of blue. Stochastic gradient descent updates affect already learned responses. This results in unstable outputs that allow I to differentiate between agents. When the number of hidden units increases (Neurons B (6, 7)), the issue no longer occurs. For a higher number of units, we could not detect situations where the interrogator was able to differentiate sources.
Conclusions
In this work, we presented a broad class of games inspired by the Imitation game that we relate to signaling theory. We used these games to explore how communication may arise in adversarial scenarios. We explored some factors that may enable or hinder separating equilibria. To our knowledge, this is the first piece of research that explores signaling theory in games that involve Seq2Seq models. Last but not least, we present a simple operational approach to train ungrounded Seq2Seq agents in this domain. In future work, we intend to pre-train agents in some known language such as English. This will allow us to explore a more complex range of experiments by extending our work to question-answering settings and grounded communication.
| 1,593 |
1906.10607
|
2954881165
|
In a disaster situation, first responders need to quickly acquire situational awareness and prioritize response based on the need, resources available and impact. Can they do this based on digital media such as Twitter alone, or newswire alone, or some combination of the two? We examine this question in the context of the 2015 Nepal Earthquakes. Because newswire articles are longer, effective summaries can be helpful in saving time yet giving key content. We evaluate the effectiveness of several unsupervised summarization techniques in capturing key content. We propose a method to link tweets written by the public and newswire articles, so that we can compare their key characteristics: timeliness, whether tweets appear earlier than their corresponding news articles, and content. A novel idea is to view relevant tweets as a summary of the matching news article and evaluate these summaries. Whenever possible, we present both quantitative and qualitative evaluations. One of our main findings is that tweets and newswire articles provide complementary perspectives that form a holistic view of the disaster situation.
|
Twitter for emergency applications has been studied by several researchers, e.g., @cite_35 @cite_14 @cite_7 @cite_1 @cite_28 . In @cite_35 , researchers concluded that Twitter was not yet ready for first responders; however, it was helpful for civilians. These were the early days of Twitter, as we find from @cite_14 that individuals immediately posted "specific information helpful to early recognition and characterization of emergency events" in the case of the Boston marathon bombing. In @cite_7 , researchers found that tangible, useful information was present in the early period before storm system Sandy but got buried in emotional tweets as the storm actually hit. However, we think more studies are needed on this issue, since the number of tweets collected was rather small (approximately 27,000, using just the hashtag #sandy). A bilingual analysis of tweets obtained over 84 days overlapping the Tohoku earthquake showed, among other results, the correlation between Twitter data and earthquake events @cite_1 . A survey of this literature can be found in @cite_28 .
|
{
"abstract": [
"",
"Immediately following the Boston Marathon attacks, individuals near the scene posted a deluge of data to social media sites. Previous work has shown that these data can be leveraged to provide rapid insight during natural disasters, disease outbreaks and ongoing conflicts that can assist in the public health and medical response. Here, we examine and discuss the social media messages posted immediately after and around the Boston Marathon bombings, and find that specific keywords appear frequently prior to official public safety and news media reports. Individuals immediately adjacent to the explosions posted messages within minutes via Twitter which identify the location and specifics of events, demonstrating a role for social media in the early recognition and characterization of emergency events. *Christopher Cassa and Rumi Chunara contributed equally to this work. Language: en",
"Little is known about the ways in which social media, such as Twitter, function as conduits for information related to crises and emergencies. The current study analyzed the content of over 1,500 Tweets that were sent in the days leading up to the landfall of Hurricane Sandy. Time-series analyses reveal that relevant information became less prevalent as the crisis moved from the prodromal to acute phase, and information concerning specific remedial behaviors was absent. Implications for government agencies and emergency responders are discussed.",
"Abstract The importance of timely, accurate and effective use of available information is essential to the proper management of emergency situations. In recent years, emerging technologies have provided new approaches towards the distribution and acquisition of crowdsourced information to facilitate situational awareness and management during emergencies. In this regard, internet and social networks have shown potential to be an effective tool in disseminating and obtaining up-to-date information. Among the most popular social networks, research has pointed to Twitter as a source of information that offers valuable real-time data for decision-making. The objective of this paper is to conduct a systematic literature review that provides an overview of the current state of research concerning the use of Twitter to emergencies management, as well as presents the challenges and future research directions.",
"Social media such as Facebook and Twitter have proven to be a useful resource to understand public opinion towards real world events. In this paper, we investigate over 1.5 million Twitter messages (tweets) for the period 9th March 2011 to 31st May 2011 in order to track awareness and anxiety levels in the Tokyo metropolitan district to the 2011 Tohoku Earthquake and subsequent tsunami and nuclear emergencies. These three events were tracked using both English and Japanese tweets. Preliminary results indicated: 1) close correspondence between Twitter data and earthquake events, 2) strong correlation between English and Japanese tweets on the same events, 3) tweets in the native language play an important roles in early warning, 4) tweets showed how quickly Japanese people’s anxiety returned to normal levels after the earthquake event. Several distinctions between English and Japanese tweets on earthquake events are also discussed. The results suggest that Twitter data can be used as a useful resource for tracking the public mood of populations affected by natural disasters as well as an early warning system."
],
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_7",
"@cite_28",
"@cite_1"
],
"mid": [
"",
"2141594386",
"1972649645",
"2887765139",
"1756216167"
]
}
| 0 |
||
1906.10607
|
2954881165
|
In a disaster situation, first responders need to quickly acquire situational awareness and prioritize response based on the need, resources available and impact. Can they do this based on digital media such as Twitter alone, or newswire alone, or some combination of the two? We examine this question in the context of the 2015 Nepal Earthquakes. Because newswire articles are longer, effective summaries can be helpful in saving time yet giving key content. We evaluate the effectiveness of several unsupervised summarization techniques in capturing key content. We propose a method to link tweets written by the public and newswire articles, so that we can compare their key characteristics: timeliness, whether tweets appear earlier than their corresponding news articles, and content. A novel idea is to view relevant tweets as a summary of the matching news article and evaluate these summaries. Whenever possible, we present both quantitative and qualitative evaluations. One of our main findings is that tweets and newswire articles provide complementary perspectives that form a holistic view of the disaster situation.
|
Researchers have examined the question of whether Twitter can replace newswire for breaking news @cite_20 . They studied a period of 77 days in 2011 during which 27 events occurred. The biggest disasters in this event set were an airplane crash with 43 deaths and a magnitude 5.8 earthquake in Virginia that caused infrastructural damage. None of these disasters, bad as they are, rise to the level of the Nepal Earthquake(s) of 2015, in which almost 10,000 lives were lost. They collected a large dataset of tweets and news articles, but then eliminated a large collection of tweets based on clustering. Further elimination of tweets led to only 97 linked tweet-news article pairs, which is a small dataset.
|
{
"abstract": [
"Twitter is often considered to be a useful source of real-time news, potentially replacing newswire for this purpose. But is this true? In this paper, we examine the extent to which news reporting in newswire and Twitter overlap and whether Twitter often reports news faster than traditional newswire providers. In particular, we analyse 77 days worth of tweet and newswire articles with respect to both manually identified major news events and larger volumes of automatically identified news events. Our results indicate that Twitter reports the same events as newswire providers, in addition to a long tail of minor events ignored by mainstream media. However, contrary to popular belief, neither stream leads the other when dealing with major news events, indicating that the value that Twitter can bring in a news setting comes predominantly from increased event coverage, not timeliness of reporting."
],
"cite_N": [
"@cite_20"
],
"mid": [
"1549229937"
]
}
| 0 |
||
1906.10607
|
2954881165
|
In a disaster situation, first responders need to quickly acquire situational awareness and prioritize response based on the need, resources available and impact. Can they do this based on digital media such as Twitter alone, or newswire alone, or some combination of the two? We examine this question in the context of the 2015 Nepal Earthquakes. Because newswire articles are longer, effective summaries can be helpful in saving time yet giving key content. We evaluate the effectiveness of several unsupervised summarization techniques in capturing key content. We propose a method to link tweets written by the public and newswire articles, so that we can compare their key characteristics: timeliness, whether tweets appear earlier than their corresponding news articles, and content. A novel idea is to view relevant tweets as a summary of the matching news article and evaluate these summaries. Whenever possible, we present both quantitative and qualitative evaluations. One of our main findings is that tweets and newswire articles provide complementary perspectives that form a holistic view of the disaster situation.
|
In @cite_31 , a framework for connecting news articles to Twitter conversations is proposed, using local cosine similarity, global cosine similarity, local frequency of the hashtag, and global frequency of the hashtag as the classification features extracted for each article-hashtag pair. The task of linking tweets with related news articles is studied in another paper to construct user profiles @cite_10 . In that paper, the authors proposed two sets of strategies to find news articles relevant to each tweet. In addition to URL-based strategies, which are similar to the idea used in @cite_25 , they also proposed several content-based strategies that include computing the similarity between hashtag-based, entity-based and bag-of-words-based representations of tweets and news articles to discover the relation between them. In addition to user modeling, the tweet-news linking task has been employed in document summarization @cite_11 , sentiment analysis @cite_17 and event extraction @cite_27 .
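As a rough illustration of the bag-of-words-based linking strategy mentioned here, the sketch below links each tweet to its most similar news article with TF-IDF cosine similarity (scikit-learn); the toy texts and the single-best-match rule are assumptions for illustration only, not the cited authors' implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tweets = ["rescue teams reach villages near epicenter",
          "power outage reported downtown after the quake"]
articles = ["Rescue teams have reached remote villages near the earthquake epicenter.",
            "Officials report widespread power outages in the downtown area."]

vec = TfidfVectorizer()
X = vec.fit_transform(tweets + articles)            # shared vocabulary for both collections
tweet_vecs, article_vecs = X[:len(tweets)], X[len(tweets):]

sims = cosine_similarity(tweet_vecs, article_vecs)  # shape (n_tweets, n_articles)
for i, tweet in enumerate(tweets):
    j = sims[i].argmax()
    print(f"tweet {i} -> article {j} (cosine={sims[i, j]:.2f})")
```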
|
{
"abstract": [
"",
"",
"In the era of data-driven journalism, data analytics can deliver tools to support journalists in connecting to new and developing news stories, e.g., as echoed in micro-blogs such as Twitter, the new citizen-driven media. In this paper, we propose a framework for tracking and automatically connecting news articles to Twitter conversations as captured by Twitter hashtags. For example, such a system could alert journalists about news that get a lot of Twitter reaction, so that they can investigate those conversations for new developments in the story, promote their article to a set of interested consumers, or discover general sentiment towards the story. Mapping articles to appropriate hashtags is nevertheless very challenging, due to different language styles used in articles versus tweets, the streaming aspect of news and tweets, as well as the user behavior when marking certain tweet-terms as hashtags. As a case-study, we continuously track the RSS feeds of Irish Times news articles and a focused Twitter stream over a two months period, and present a system that assigns hashtags to each article, based on its Twitter echo. We propose a machine learning approach for classifying and ranking article-hashtag pairs. Our empirical study shows that our system delivers high precision for this task.",
"As the most popular microblogging platform, the vast amount of content on Twitter is constantly growing so that the retrieval of relevant information (streams) is becoming more and more difficult every day. Representing the semantics of individual Twitter activities and modeling the interests of Twitter users would allow for personalization and therewith countervail the information overload. Given the variety and recency of topics people discuss on Twitter, semantic user profiles generated from Twitter posts moreover promise to be beneficial for other applications on the Social Web as well. However, automatically inferring the semantic meaning of Twitter posts is a non-trivial problem. In this paper we investigate semantic user modeling based on Twitter posts. We introduce and analyze methods for linking Twitter posts with related news articles in order to contextualize Twitter activities. We then propose and compare strategies that exploit the semantics extracted from both tweets and related news articles to represent individual Twitter activities in a semantically meaningful way. A large-scale evaluation validates the benefits of our approach and shows that our methods relate tweets to news articles with high precision and coverage, enrich the semantics of tweets clearly and have strong impact on the construction of semantic user profiles for the Social Web.",
"News reporting has seen a shift toward fast-paced online reporting in new sources such as social media. Web Search engines that support a news vertical have historically relied upon articles published by major newswire providers when serving news-related queries. In this paper, we investigate to what extent real-time content from newswire, blogs, Twitter and Wikipedia sources are useful to return to the user in the current fast-paced news search setting. In particular, we perform a detailed user study using the emerging medium of crowdsourcing to determine when and where integrating news-related content from these various sources can better serve the user's news need. We sampled approximately 300 news-related search queries using Google Trends and Bitly data in real-time for two time periods. For these queries, we have crowdsourced workers compare Web search rankings for each, with similar rankings integrating real-time news content from sources such as Twitter or the blogosphere. Our results show that users exhibited a preference for rankings integrating newswire articles for only half of our queries, indicating that relying solely on newswire providers for news-related content is now insufficient. Moreover, our results show that users preferred rankings that integrate tweets more often than those that integrate newswire articles, showing the potential of using social media to better serve news queries.",
"We present a framework for sentiment analysis on tweets related to news items. Given a set of tweets and news items, our framework classifies tweets as positive or negative and links them to the related news items. For the classification of tweets we use three of the most used machine learning methods, namely Naive Bayes, Complementary Naive Bayes, and Logistic Regression, and for linking tweets to news items, Natural Language Processing (NLP) techniques are used, including Zemberek NLP library for stemming and morphological analysis and then bag-of-words method for mapping. To test the framework, we collected 6000 tweets and labeled them manually to build a classifier for sentiment analysis. We considered tweets and news in Turkish language only in this work. Our results show that Naive Bayes performs well on classifying tweets in Turkish."
],
"cite_N": [
"@cite_11",
"@cite_27",
"@cite_31",
"@cite_10",
"@cite_25",
"@cite_17"
],
"mid": [
"",
"2464131477",
"1499014607",
"155754185",
"1993608327",
"2311769156"
]
}
| 0 |
||
1906.10607
|
2954881165
|
In a disaster situation, first responders need to quickly acquire situational awareness and prioritize response based on the need, resources available and impact. Can they do this based on digital media such as Twitter alone, or newswire alone, or some combination of the two? We examine this question in the context of the 2015 Nepal Earthquakes. Because newswire articles are longer, effective summaries can be helpful in saving time yet giving key content. We evaluate the effectiveness of several unsupervised summarization techniques in capturing key content. We propose a method to link tweets written by the public and newswire articles, so that we can compare their key characteristics: timeliness, whether tweets appear earlier than their corresponding news articles, and content. A novel idea is to view relevant tweets as a summary of the matching news article and evaluate these summaries. Whenever possible, we present both quantitative and qualitative evaluations. One of our main findings is that tweets and newswire articles provide complementary perspectives that form a holistic view of the disaster situation.
|
In @cite_21 , researchers proposed two methods that leverage tweets for ranking sentences in news articles for summarization: a voting method based on tweet hit counts of sentences, and a random walk on a heterogeneous graph (HGRW) consisting of tweets and news article sentences as nodes, with edge weights defined by weighted idf-modified-cosine scores. The best ROUGE-1 F-score @cite_6 is achieved by a version of HGRW that outputs both sentences from news articles and tweets in the summary, where the summary consists of the top four sentences/tweets as highlights of the article.
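A minimal sketch of the voting idea described above (not the cited authors' code): each news sentence receives one vote per linked tweet that sufficiently overlaps with it, and sentences are ranked by vote count. The token-overlap measure and threshold are assumptions.

```python
def vote_scores(sentences, tweets, min_overlap=2):
    """Score each news sentence by the number of tweets that 'hit' it."""
    scores = []
    for sent in sentences:
        sent_tokens = set(sent.lower().split())
        hits = sum(1 for t in tweets
                   if len(sent_tokens & set(t.lower().split())) >= min_overlap)
        scores.append(hits)
    return scores

sentences = ["Rescue teams reached remote villages today.",
             "The government promised additional funding."]
tweets = ["rescue teams finally reached the villages", "remote villages cut off"]
ranked = sorted(zip(vote_scores(sentences, tweets), sentences), reverse=True)
print(ranked)
```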
|
{
"abstract": [
"Single-document summarization is a challenging task. In this paper, we explore effective ways using the tweets linking to news for generating extractive summary of each document. We reveal the very basic value of tweets that can be utilized by regarding every tweet as a vote for candidate sentences. Base on such finding, we resort to unsupervised summarization models by leveraging the linking tweets to master the ranking of candidate extracts via random walk on a heterogeneous graph. The advantage is that we can use the linking tweets to opportunistically \"supervise\" the summarization with no need of reference summaries. Furthermore, we analyze the influence of the volume and latency of tweets on the quality of output summaries since tweets come after news release. Compared to truly supervised summarizer unaware of tweets, our method achieves significantly better results with reasonably small tradeoff on latency; compared to the same using tweets as auxiliary features, our method is comparable while needing less tweets and much shorter time to achieve significant outperformance.",
"ROUGE stands for Recall-Oriented Understudy for Gisting Evaluation. It includes measures to automatically determine the quality of a summary by comparing it to other (ideal) summaries created by humans. The measures count the number of overlapping units such as n-gram, word sequences, and word pairs between the computer-generated summary to be evaluated and the ideal summaries created by humans. This paper introduces four different ROUGE measures: ROUGE-N, ROUGE-L, ROUGE-W, and ROUGE-S included in the ROUGE summarization evaluation package and their evaluations. Three of them have been used in the Document Understanding Conference (DUC) 2004, a large-scale summarization evaluation sponsored by NIST."
],
"cite_N": [
"@cite_21",
"@cite_6"
],
"mid": [
"1966421434",
"2154652894"
]
}
| 0 |
||
1906.10607
|
2954881165
|
In a disaster situation, first responders need to quickly acquire situational awareness and prioritize response based on the need, resources available and impact. Can they do this based on digital media such as Twitter alone, or newswire alone, or some combination of the two? We examine this question in the context of the 2015 Nepal Earthquakes. Because newswire articles are longer, effective summaries can be helpful in saving time yet giving key content. We evaluate the effectiveness of several unsupervised summarization techniques in capturing key content. We propose a method to link tweets written by the public and newswire articles, so that we can compare their key characteristics: timeliness, whether tweets appear earlier than their corresponding news articles, and content. A novel idea is to view relevant tweets as a summary of the matching news article and evaluate these summaries. Whenever possible, we present both quantitative and qualitative evaluations. One of our main findings is that tweets and newswire articles provide complementary perspectives that form a holistic view of the disaster situation.
|
Tweet summarization has also been studied, e.g., see @cite_23 and references cited therein. Our problem is a little different: we consider tweets that are linked and found relevant (or partially relevant) to news articles from the perspective of summaries of those news articles. We then evaluate them to get an idea of how much content of the articles is captured by these tweets.
|
{
"abstract": [
"During mass convergence events such as natural disasters, microblogging platforms like Twitter are widely used by affected people to post situational awareness messages. These crisis-related messages disperse among multiple categories like infrastructure damage, information about missing, injured, and dead people etc. The challenge here is to extract important situational updates from these messages, assign them appropriate informational categories, and finally summarize big trove of information in each category. In this paper, we propose a novel framework which first assigns tweets into different situational classes and then summarize those tweets. In the summarization phase, we propose a two stage summarization framework which first extracts a set of important tweets from the whole set of information through an Integer-linear programming (ILP) based optimization technique and then follows a word graph and content word based abstractive summarization technique to produce the final summary. Our method is time and memory efficient and outperforms the baseline in terms of quality, coverage of events, , effectiveness, and utility in disaster scenarios."
],
"cite_N": [
"@cite_23"
],
"mid": [
"2460513959"
]
}
| 0 |
||
1906.10667
|
2954360742
|
Reinforcement learning agents that operate in diverse and complex environments can benefit from the structured decomposition of their behavior. Often, this is addressed in the context of hierarchical reinforcement learning, where the aim is to decompose a policy into lower-level primitives or options, and a higher-level meta-policy that triggers the appropriate behaviors for a given situation. However, the meta-policy must still produce appropriate decisions in all states. In this work, we propose a policy design that decomposes into primitives, similarly to hierarchical reinforcement learning, but without a high-level meta-policy. Instead, each primitive can decide for themselves whether they wish to act in the current state. We use an information-theoretic mechanism for enabling this decentralized decision: each primitive chooses how much information it needs about the current state to make a decision and the primitive that requests the most information about the current state acts in the world. The primitives are regularized to use as little information as possible, which leads to natural competition and specialization. We experimentally demonstrate that this policy architecture improves over both flat and hierarchical policies in terms of generalization.
|
There are a wide variety of hierarchical reinforcement learning approaches. One of the most widely applied HRL frameworks is the options framework. An option can be thought of as an action that extends over multiple timesteps, thus providing the notion of temporal abstraction or subroutines in an MDP. Each option has its own policy (which is followed if the option is selected) and a termination condition (to stop the execution of that option). Many strategies have been proposed for discovering options using task-specific hierarchies, such as pre-defined sub-goals, hand-designed features, or diversity-promoting priors. These approaches do not generalize well to new tasks. @cite_1 proposed an approach to learn options in an end-to-end manner by parameterizing the intra-option policies as well as the policy over options and the termination conditions for all the options. Eigen-options use the eigenvalues of the Laplacian (for the transition graph induced by the MDP) to derive an intrinsic reward for discovering options as well as learning an intra-option policy.
|
{
"abstract": [
"Temporal abstraction is key to scaling up learning and planning in reinforcement learning. While planning with temporally extended actions is well understood, creating such abstractions autonomously from data has remained challenging. We tackle this problem in the framework of options [Sutton, Precup & Singh, 1999; Precup, 2000]. We derive policy gradient theorems for options and propose a new option-critic architecture capable of learning both the internal policies and the termination conditions of options, in tandem with the policy over options, and without the need to provide any additional rewards or subgoals. Experimental results in both discrete and continuous environments showcase the flexibility and efficiency of the framework."
],
"cite_N": [
"@cite_1"
],
"mid": [
"2523728418"
]
}
|
Reinforcement Learning with Competitive Ensembles of Information-Constrained Primitives
|
Learning policies that generalize to new environments or tasks is a fundamental challenge in reinforcement learning. While deep reinforcement learning has enabled training powerful policies, which outperform humans on specific, well-defined tasks [24], their performance often diminishes when the properties of the environment or the task change to regimes not encountered during training. This is in stark contrast to how humans learn, plan, and act: humans can seamlessly switch between different aspects of a task, transfer knowledge to new tasks from remotely related but essentially distinct prior experience, and combine primitives (or skills) used for distinct aspects of different tasks in meaningful ways to solve new problems. A hypothesis hinting at the reasons for this discrepancy is that the world is inherently compositional, such that its features can be described by compositions of small sets of primitive mechanisms [26]. Since humans seem to benefit from learning skills and learning to combine skills, it might be a useful inductive bias for the learning models as well.
This is addressed to some extent by the hierarchical reinforcement learning (HRL) methods, which focus on learning representations at multiple spatial and temporal scales, thus enabling better exploration strategies and improved generalization performance [9,36,10,19]. However, hierarchical approaches rely on some form of learned high-level controller, which decides when to activate different components in the hierarchy. While low-level sub-policies can specialize to smaller portions of the state space, the top-level controller (or master policy) needs to know how to deal with any given state. That is, it should provide optimal behavior for the entire accessible state space. As the master policy is trained on a particular state distribution, learning it in a way that generalizes to new environments effectively can, therefore, become the bottleneck for such approaches [31,3].
[Figure 1: Illustration of our model. An intrinsic competition mechanism, based on the amount of information each primitive provides, is used to select a primitive to be active for a given input. Each primitive focuses on distinct features of the environment; in this case, one policy focuses on boxes, a second one on gates, and the third one on spheres.]
We argue, and empirically show, that in order to achieve better generalization, the interaction between the low-level primitives and the selection thereof should itself be performed without requiring a single centralized network that understands the entire state space. We, therefore, propose a fully decentralized approach as an alternative to standard HRL, where we only learn a set of low-level primitives without learning a high-level controller. We construct a factorized representation of the policy by learning simple "primitive" policies, which focus on distinct regions of the state space. Rather than being gated by a single meta-policy, the primitives directly compete with one another to determine which one should be active at any given time, based on the degree to which their state encoders "recognize" the current state input.
We frame the problem as one of information transfer between the current state and a dynamically selected primitive policy. Each policy can by itself decide to request information about the current state, and the amount of information requested is used to determine which primitive acts in the current state. Since the amount of state information that a single primitive can access is limited, each primitive is encouraged to use its resources wisely. Constraining the amount of accessible information in this way naturally leads to a decentralized competition and decision mechanism where individual primitives specialize in smaller regions of the state space. We formalize this information-driven objective based on the variational information bottleneck. The resulting set of competing primitives achieves both a meaningful factorization of the policy and an effective decision mechanism for which primitives to use. Importantly, not relying on a centralized meta-policy means that individual primitive mechanisms can be recombined in a "plug-and-play" fashion, and can be transferred seamlessly to new environments.
Contributions: In summary, the contributions of our work are as follows: (1) We propose a method for learning and operating a set of functional primitives in a fully decentralized way, without requiring a high-level meta-controller to select active primitives (see Figure 1 for illustration). (2) We introduce an information-theoretic objective, the effects of which are twofold: a) it leads to the specialization of individual primitives to distinct regions of the state space, and b) it enables a competition mechanism, which is used to select active primitives in a decentralized manner. (3) We demonstrate the superior transfer learning performance of our model, which is due to the flexibility of the proposed framework regarding the dynamic addition, removal, and recombination of primitives. Decentralized primitives can be successfully transferred to larger or previously unseen environments, and outperform models with an explicit meta-controller for primitive selection.
Preliminaries
We consider a Markov decision process (MDP) defined by the tuple (S, A, P, r, γ), where the state space S and the action space A may be discrete or continuous. The environment emits a bounded reward r : S × A → [r min , r max ] on each transition and γ ∈ [0, 1) is the discount factor. π(.|s) denotes a policy over the actions given the current state s. R(π) = E_π[ Σ_t γ^t r(s_t) ] denotes the expected total return when the policy π is followed. The standard objective in reinforcement learning is to maximize the expected total return R(π). We use the concept of the information bottleneck [39] to learn compressed representations. The information bottleneck objective is formalized as minimizing the mutual information of a bottleneck representation layer with the input while maximizing its mutual information with the corresponding output. This type of input compression has been shown to improve generalization [39,1,2]. Computing the mutual information is generally intractable, but can be approximated using variational inference [2].
Information-Theoretic Decentralized Learning of Distinct Primitives
Our goal is to learn a policy, composed of multiple primitive sub-policies, to maximize average returns over T -step interactions for a distribution of tasks. Simple primitives which focus on solving a part of the given task (and not the complete task) should generalize more effectively, as they can be applied to similar aspects of different tasks (subtasks) even if the overall objective of the tasks are drastically different. Learning primitives in this way can also be viewed as learning a factorized representation of a policy, which is composed of several independent policies.
Our proposed approach consists of three components: 1) a mechanism for restricting a particular primitive to a subset of the state space; 2) a competition mechanism between primitives to select the most effective primitive for a given state; 3) a regularization mechanism to improve the generalization performance of the policy as a whole. We consider experiments with both fixed and variable sets of primitives and show that our method allows for primitives to be added or removed during training, or recombined in new ways. Each primitive is represented by a differentiable, parameterized function approximator, such as a neural network.
Primitives with an Information Bottleneck
[Figure 2: The primitive-selection mechanism of our model. The encoders map the state s to (z_1, . . . , z_K) with information costs (L_1, . . . , L_K) given by D_KL(·||N); a softmax over these costs yields the selection weights (α_1, . . . , α_K), and the decoder of the selected primitive π_k produces the action a. The primitive with the most information acts in the environment, and gets the reward.]
To encourage each primitive to encode information from a particular part of state space, we limit the amount of information each primitive can access from the state. In particular, each primitive has an information bottleneck with respect to the input state, preventing it from using all the information from the state.
To implement an information bottleneck, we design each of the K primitives to be composed of an encoder p enc (Z k | S) and a decoder p dec (A | Z k ), together forming the primitive policy,
π^k_θ(A | S) = ∫ p_enc(z_k | S) p_dec(A | z_k) dz_k .
The encoder output Z is meant to represent the information about the current state S that an individual primitive believes is important to access in order to perform well. The decoder takes this encoded information and produces a distribution over the actions A. Following the variational information bottleneck objective [2], we penalize the KL divergence of Z and the prior,
L_k = D_KL( p_enc(Z_k | S) || N(0, 1) ) .    (1)
In other words, a primitive pays an "information cost" proportional to L_k for accessing the information about the current state. (In practice, we estimate the marginalization over z_k in the primitive policy using a single sample throughout our experiments.)
In the experiments below, we fix the prior to be a unit Gaussian. In the general case, we can learn the prior as well and include its parameters in θ. The information bottleneck encourages each primitive to limit its knowledge about the current state, but it will not prevent multiple primitives from specializing to similar parts of the state space. To mitigate this redundancy, and to make individual primitives focus on different regions of the state space, we introduce an information-based competition mechanism to encourage diversity among the primitives, as described in the next section.
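The sketch below shows one way such an information-constrained primitive could be implemented in PyTorch: a diagonal-Gaussian encoder with the reparameterization trick, a decoder over discrete actions, and the KL term of Equation 1 as the information cost. Layer sizes and the Gaussian parameterization of the encoder are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Primitive(nn.Module):
    """Encoder p_enc(z|s), decoder p_dec(a|z), and the KL information cost L_k."""
    def __init__(self, state_dim, n_actions, z_dim=8, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 2 * z_dim))   # mean and log-variance
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, state):
        mu, logvar = self.enc(state).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterized sample
        action_logits = self.dec(z)
        # KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dimensions (Eq. 1).
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=-1)
        return action_logits, kl

prim = Primitive(state_dim=4, n_actions=3)
logits, L_k = prim(torch.randn(1, 4))
print(logits.shape, L_k)
```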
Competing Information-Constrained Primitives
We can use the information measure from equation 1 to define a selection mechanism for the primitives without having to learn a centralized meta-policy. The idea is that the information content of an individual primitive encodes its effectiveness in a given state s such that the primitive with the highest value L_k should be activated in that particular state. We compute normalized weights α_k for each of the k = 1, ..., K primitives by applying the softmax operator,
α_k = exp(L_k) / Σ_j exp(L_j). (2)
The resulting weights α_k can be treated as a probability distribution that can be used in different ways: to form a mixture of primitives, to sample a primitive from the distribution, or to simply select the primitive with the maximum weight. The selected primitive is then allowed to act in the environment.
Trading Reward and Information: To make the different primitives compete for competence in the different regions of the state space, the environment reward is distributed according to their participation in the global decision, i.e., the reward r_k given to the k-th primitive is weighted by its selection coefficient, such that r_k = α_k r, with r = Σ_k r_k. A primitive thus gets a higher reward for accessing more information about the current state, but it also pays the price (equal to the information cost) for accessing that information; a primitive that does not access any state information will not receive any reward. The information bottleneck and the competition mechanism, when combined with the overall reward maximization objective, should lead to specialization of individual primitives to distinct regions in the state space.
That is, each primitive should specialize in a part of the state space that it can reliably associate rewards with. Since the entire ensemble still needs to understand all of the state space for the given task, different primitives need to encode and focus on different parts of the state space.
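A minimal sketch of this competition mechanism, assuming the Primitive sketch above and a single unbatched state; the helper name act_with_competition is hypothetical, and winner-take-all selection is only one of the options mentioned above.

import torch

def act_with_competition(primitives, state):
    # Each primitive proposes an action distribution and pays its information cost L_k.
    dists, costs = zip(*[p(state) for p in primitives])
    L = torch.stack(list(costs))            # [K] information costs
    alpha = torch.softmax(L, dim=0)         # Eq. (2): normalized competition weights
    k = int(torch.argmax(alpha))            # winner-take-all; sampling from alpha is also possible
    action = dists[k].sample()
    return action, alpha, k

# After the environment returns reward r for the executed action, each primitive k
# receives r_k = alpha[k] * r, i.e. reward is shared according to participation.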
Regularization of the Combined Representation
To encourage a diverse set of primitive configurations and to make sure that the model does not collapse to a single primitive (which remains active at all times), we introduce an additional regularization term,
L_reg = Σ_k α_k L_k. (3)
This can be rewritten (see Appendix A) as
L_reg = −H(α) + LSE(L_1, ..., L_K), (4)
where H(α) is the entropy of the α distribution, and LSE is the LogSumExp function, LSE(x) = log(Σ_j e^{x_j}). The desired behavior is achieved by minimizing L_reg. Increasing the entropy of α leads to a diverse set of primitive selections, ensuring that different combinations of the primitives are used. On the other hand, LSE approximates the maximum of its arguments, LSE(x) ≈ max_j x_j, and therefore penalizes the dominating L_k terms, thus equalizing their magnitudes.
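The equivalence of Eq. (3) and Eq. (4) is easy to check numerically; the sketch below uses randomly drawn information costs purely for illustration.

import torch

L = torch.randn(4)                                 # illustrative costs L_1, ..., L_K
alpha = torch.softmax(L, dim=0)
lhs = (alpha * L).sum()                            # Eq. (3): sum_k alpha_k L_k
rhs = (alpha * alpha.log()).sum() + torch.logsumexp(L, dim=0)  # Eq. (4): -H(alpha) + LSE(L)
assert torch.allclose(lhs, rhs, atol=1e-5)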
Objective and Algorithm Summary
Our overall objective function consists of three terms:
1. The expected return from the standard RL objective, R(π), which is distributed to the primitives according to their participation.
2. The individual bottleneck terms leading the individual primitives to focus on specific parts of the state space, L_k for k = 1, ..., K.
3. The regularization term applied to the combined model, L_reg.
The overall objective for the k-th primitive thus takes the form:
J_k(θ) ≡ E_{π_θ}[r_k] − β_ind L_k − β_reg L_reg, (5)
where E_{π_θ} denotes an expectation over the state trajectories generated by the agent's policy, r_k = α_k r is the reward given to the k-th primitive, and β_ind, β_reg are the parameters controlling the impact of the respective terms.
Implementation: In our experiments, the encoders p_enc(z_k|S) and decoders p_dec(A|z_k) are represented by neural networks, whose parameters we denote by θ. Actions are sampled through each primitive at every step. While our approach is compatible with any RL method, we maximize J(θ) computed on-policy from the sampled trajectories using a score function estimator [42,35], specifically A2C [25] (unless otherwise noted). Every experimental result reported has been averaged over 5 random seeds. Our model introduces two extra hyper-parameters, β_ind and β_reg.
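A hedged sketch of the resulting per-primitive surrogate loss (the negative of Eq. 5); it omits the value baseline used by A2C, and the β defaults are illustrative rather than the paper's settings.

def primitive_loss(log_prob_k, r_k, L_k, L_reg, beta_ind=1e-3, beta_reg=1e-4):
    # Score-function (REINFORCE-style) term plus the bottleneck and regularization penalties.
    # log_prob_k: log-probability of the executed action under primitive k (a tensor),
    # r_k: the primitive's share of the reward, L_k: its information cost, L_reg: Eq. (3)/(4).
    return -(log_prob_k * r_k) + beta_ind * L_k + beta_reg * L_reg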
Experimental Results
In this section, we briefly outline the tasks that we used to evaluate our proposed method and direct the reader to the appendix for the complete details of each task along with the hyperparameters used for the model. The code is provided with the supplementary material. We designed experiments to address the following questions: a) Learning primitives - Can an ensemble of primitives be learned over a distribution of tasks? b) Transfer learning using primitives - Can the learned primitives be transferred to unseen/unsolvable sparse environments? c) Comparison to centralized methods - How does our method compare to approaches where the primitives are trained using an explicit meta-controller, in a centralized way?
Baselines. We compare our proposed method to the following baselines:
a) Option Critic [4] - We extended the authors' implementation of the Option Critic architecture and experimented with multiple variations in terms of hyperparameters and state/goal encoding. None of these yielded reasonable performance in partially observed tasks, so we omit it from the results.
b) MLSH (Meta-Learning Shared Hierarchy) [13] - This method uses meta-learning to learn sub-policies that are shared across tasks along with learning a task-specific high-level master. It also requires a phase-wise training schedule between the master and the sub-policies to stabilize training. We use the MLSH implementation provided by the authors.
c) Transfer A2C: In this method, we first learn a single policy on one task and then transfer the policy to another task, followed by fine-tuning in the second task.
Multi-Task Training
We evaluate our model in a partially-observable 2D multi-task environment called Minigrid, similar to the one introduced in [6]. The environment is a two-dimensional grid with a single agent, impassable walls, and many objects scattered in the environment. The agent is provided with a natural language string that specifies the task that the agent needs to complete. The setup is partially observable and the agent only gets the small, egocentric view of the grid (along with the natural language task description). We consider three tasks here: the Pickup task (A), where the agent is required to pick up an object specified by the goal string, the Unlock task (B) where the agent needs to unlock the door (there could be multiple keys in the environment and the agent needs to use the key which matches the color of the door) and the UnlockPickup task (C), where the agent first needs to unlock a door that leads to another room. In this room, the agent needs to find and pick up the object specified by the goal string. Additional implementation details of the environment are provided in appendix D.
Details on the agent model can be found in appendix D.3.
We train agents with varying numbers of primitives on various tasks -concurrently, as well as in transfer settings. The different experiments are summarized in Figs. 3 and 5. An advantage of the multi-task setting is that it allows for quantitative interpretability as to when and which primitives are being used. The results indicate that a system composed of multiple primitives generalizes more easily to a new task, as compared to a single policy. We further demonstrate that several primitives can be combined dynamically and that the individual primitives respond to stimuli from new environments when trained on related environments.
Do Learned Primitives Help in Transfer Learning?
We now evaluate our approach in settings where adaptation to changes in the task is vital. The argument in favor of modularity is that it enables better knowledge transfer between related tasks. This transfer is more effective when the tasks are closely related, as the model only has to learn how to compose the already learned primitives. In general, it is difficult to determine how "closely" related two tasks are, and the inductive bias of modularity could be harmful if the two tasks are quite different. In such cases, we could add new primitives (which would have to be learned) and still obtain sample-efficient transfer, as some part of the task structure would already have been captured by the pretrained primitives. This approach can be extended by adding primitives during training, which provides a seamless way to combine knowledge about different tasks to solve more complex tasks. We investigate here the transfer properties of a primitive trained in one environment and transferred to a different one.
Continuous control for ant maze. We evaluate the transfer performance of pretrained primitives on the cross maze environment [15]. Here, a quadrupedal robot must walk to the different goals along the different paths (see Appendix G for details). The goal is randomly chosen from a set of available goals at the start of each episode. We pretrain a policy (see model details in Appendix G.1) with a motion reward in an environment which does not have any walls (similar to [15]), and then transfer the policy to the second task where the ant has to navigate to a random goal chosen from one of the 3 (or 10) available goal options. For our model, we make four copies of the pretrained policies and then finetune the model using the pretrained policies as primitives. We compare to both MLSH [13] and option-critic [4]. All these baselines have been pretrained in the same manner. As evident from Figure 5, our method outperforms the other approaches, and it is more sample efficient than other strong baselines such as [13, 4]. The fact that the initial policies successfully adapt to the transfer environment underlines the flexibility of our approach.

Zero-Shot Generalization: A set of primitives is trained on C, and zero-shot generalization to A and B is evaluated. The primitives learn a form of spatial decomposition which allows them to be active in both target tasks, A and B.
Learning Ensembles of Functional Primitives
We evaluate our approach on a number of RL environments to show that we can indeed learn sets of primitive policies focusing on different aspects of a task and collectively solving it.
Motion Imitation. To test the scalability of the proposed method, we present a series of tasks from the motion imitation domain. In these tasks, we train a simulated 2D biped character to perform a variety of highly dynamic skills by imitating motion capture clips recorded from human actors. Each mocap clip is represented by a target state trajectory τ* = {s*_0, s*_1, ..., s*_T}, where s*_t denotes the target state at timestep t. The input to the policy is augmented with a goal g_t = {s*_{t+1}, s*_{t+2}}, which specifies the target states for the next two timesteps. Both the state s_t and goal g_t are then processed by the encoder p_enc(z_t|s_t, g_t). The repertoire of skills consists of 8 clips depicting different types of walks, runs, jumps, and flips. The motion imitation approach closely follows Peng et al. [29].
Snapshots of some of the learned motions are shown in Figure 6. To analyze the specialization of the various primitives, we computed 2D embeddings of the states and goals in which each primitive is active, and of the actions proposed by the primitives. Figure 7 illustrates the embeddings computed with t-SNE [41]. The embeddings show distinct clusters for the primitives, suggesting a degree of specialization of each primitive to certain states, goals, and actions.
Summary and Discussion
We present a framework for learning an ensemble of primitive policies which can collectively solve tasks in a decentralized fashion. Rather than relying on a centralized, learned meta-controller, the selection of active primitives is implemented through an information-theoretic mechanism. The learned primitives can be flexibly recombined to solve more complex tasks. Our experiments show that, on a partially observed "Minigrid" task and a continuous control "ant maze" walking task, our method can enable better transfer than flat policies and hierarchical RL baselines, including the Meta-learning Shared Hierarchies model and the Option-Critic framework. On Minigrid, we show how primitives trained with our method can transfer much more successfully to new tasks and on the ant maze, we show that primitives initialized from a pretrained walking control can learn to walk to different goals in a stochastic, multi-modal environment with nearly double the success rate of a more conventional hierarchical RL approach, which uses the same pretraining but a centralized high-level policy.
The proposed framework could be very attractive for continual learning settings, where one could add more primitive policies over time. Thereby, the already learned primitives would keep their focus on particular aspects of the task, and newly added ones could specialize on novel aspects.
A Interpretation of the regularization term
The regularization term is given by
L_reg = Σ_k α_k L_k, where α_k = e^{L_k} / Σ_j e^{L_j},
and thus log α_k = L_k − log Σ_j e^{L_j}, or L_k = log α_k + LSE(L_1, ..., L_K), where LSE(L_1, ..., L_K) = log Σ_j e^{L_j} is independent of k.
Plugging this in, and using Σ_k α_k = 1, we get
L_reg = Σ_k α_k log α_k + LSE(L_1, ..., L_K) = −H(α) + LSE(L_1, ..., L_K).
Information-theoretic interpretation. Notably, L_reg also represents an upper bound on the KL divergence between a mixture of the currently active primitives and a prior,
L_reg ≥ D_KL(Σ_k α_k p_enc(Z_k|S) || N(0, 1)),
and thus can be regarded as a term limiting the information content of the mixture of all active primitives. This arises from the convexity properties of the KL divergence, which directly lead to
D_KL(Σ_k α_k f_k || g) ≤ Σ_k α_k D_KL(f_k || g).
B Additional Results
B.1 2D Bandits Environment
In order to test if our approach can learn distinct primitives, we used the 2D moving bandits tasks (introduced in [14]). In this task, the agent is placed in a 2D world and is shown the position of two randomly placed points. One of these points is the goal point but the agent does not know which. We use the sparse reward setup where the agent receives the reward of 1 if it is within a certain distance of the goal point and 0 at all other times. Each episode lasts for 50 steps and to get the reward, the learning agent must reach near the goal point in those 50 steps. The agent's action space consists of 5 actions -moving in one of the four cardinal directions (top, down, left, right) and staying still.
B.1.1 Results for 2D Bandits
We want to answer the following questions:
1. Can our proposed approach learn primitives which remain active throughout training? 2. Can our proposed approach learn primitives which can solve the task?
We train two primitives on the 2D Bandits task and evaluate the relative frequency of activation of the primitives throughout training. It is important that both primitives remain active: if only one primitive acts most of the time, its effect would be the same as training a flat policy. We evaluate the effectiveness of our model by comparing the success rate with a flat A2C baseline. Figure 8 shows that not only do both primitives remain active throughout training, but our approach also outperforms the baseline approach.
B.2 Four-rooms Environment
We consider the Four-rooms gridworld environment [37] where the agent has to navigate its way through a grid of four interconnected rooms to reach a goal position within the grid. The agent can perform one of the following four actions: move up, move down, move left, move right. The environment is stochastic: with 1/3 probability, the agent's chosen action is ignored and a new action (randomly selected from the remaining 3 actions) is executed, i.e., the agent's selected action is executed with a probability of only 2/3 and the agent takes any of the 3 remaining actions with a probability of 1/9 each.
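For concreteness, the stochastic dynamics described above can be sketched as follows; the function name and the integer action encoding are assumptions.

import random

def execute_action(chosen_action, n_actions=4):
    # With probability 1/3 the chosen action is ignored and replaced by one of the
    # remaining 3 actions, so each of them is executed with probability 1/9 overall.
    if random.random() < 1.0 / 3.0:
        others = [a for a in range(n_actions) if a != chosen_action]
        return random.choice(others)
    return chosen_action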
B.2.1 Task distribution for the Four-room Environment
In the Four-room environment, the agent has to navigate to a goal position which is randomly selected from a set of goal positions. We can use the size of this set of goal positions to define a curriculum of task distributions. Since the environment does not provide any information about the goal state, the larger the goal set, the harder the task, as the goal could now be any element of a larger set. The choice of the set of goal states and the choice of curriculum do not affect the environment dynamics. Specifically, we consider three tasks, Fourroom-v0, Fourroom-v1 and Fourroom-v2, with sets of 2, 4 and 8 goal positions respectively. The set of goal positions for each task is fixed but not known to the learning agent. We expect, and empirically verify, that the Fourroom-v0 environment requires the least number of samples to be learned, followed by the Fourroom-v1 and the Fourroom-v2 environments (Figure 6 in the paper).
B.2.2 Results for Four-rooms environment
We want to answer the following questions:
1. Can our proposed approach learn primitives that remain active when training the agent over a sequence of tasks?
2. Can our proposed approach be used to improve the sample efficiency of the agent over a sequence of tasks?
To answer these questions, we consider two setups. In the baseline setup, we train a flat A2C policy on Fourrooms-v0 until it achieves a 100% success rate during evaluation. Then we transfer this policy to Fourrooms-v1 and continue to train until it achieves a 100% success rate during the evaluation on Fourrooms-v1. We transfer the policy one more time to Fourrooms-v2 and continue to train the policy until it reaches a 60% success rate. In the last task (Fourrooms-v2), we do not use 100% as the threshold, as the models do not achieve a 100% success rate even after training for 10M frames; we use 60% because the baseline models generally converge around this value.
In the second setup, we repeat this exercise of training on one task and transferring to the next task with our proposed model. Note that even though our proposed model converges to a higher value than 60% in the last task (Fourrooms-v2), we compare the number of samples required to reach a 60% success rate to provide a fair comparison with the baseline.
C Implementation Details
In this section, we describe the implementation details which are common for all the models. Other task-specific details are covered in the respective task sections.
1. All the models (proposed as well as the baselines) are implemented in PyTorch 1.1 [27] unless stated otherwise.
2. For Meta-Learning Shared Hierarchies [14] and Option-Critic [4], we adapted the authors' implementations for our environments.
3. During the evaluation, we use 10 processes in parallel to run 500 episodes and compute the percentage of times the agent solves the task within the prescribed time limit. This metric is referred to as the "success rate".
4. The default time limit is 500 steps for all the tasks unless specified otherwise.
5. All the feedforward networks are initialized with the orthogonal initialization where the input tensor is filled with a (semi) orthogonal matrix.
6. For all the embedding layers, the weights are initialized using the unit-Gaussian distribution.
D MiniGrid Environment
We use the MiniGrid environment [6], which is an open-source, grid-world environment package (https://github.com/maximecb/gym-minigrid). It provides a family of customizable reinforcement learning environments that are compatible with the OpenAI Gym framework [5]. Since the environments can be easily extended and modified, it is straightforward to control the complexity of the task (e.g., controlling the size of the grid, the number of rooms or the number of objects in the grid). Such flexibility is very useful when experimenting with curriculum learning or testing for generalization.
D.1 The World
In MiniGrid, the world (environment for the learning agent) is a rectangular grid of size, say, M × N. Each tile in the grid contains either zero or one object. The possible object types are wall, floor, lava, door, key, ball, box and goal. Each object has an associated string (which denotes the object type) and an associated discrete color (which could be red, green, blue, purple, yellow or grey). By default, walls are always grey and goal squares are always green. Certain objects have special effects. For example, a key can unlock a door of the same color.
D.1.1 Reward Function
We consider the sparse reward setup where the agent gets a reward (of 1) only if it completes the task and 0 at all other time steps. We also apply a time limit of 500 steps on all the tasks, i.e., the agent must complete the task in 500 steps. A task is terminated either when the agent solves the task or when the time limit is reached, whichever happens first.
D.1.2 Action Space
The agent can perform one of the following seven actions per timestep: turn left, turn right, move forward, pick up an object, drop the object being carried, toggle, done (optional action).
The agent can use the turn left and turn right actions to rotate and face one of the 4 possible directions (north, south, east, west). The move forward action makes the agent move from its current tile onto the tile in the direction it is currently facing, provided there is nothing on that tile, or that the tile contains an open door. The toggle action enables the agent to interact with other objects in the world. For example, the agent can use the toggle action to open a door if it is right in front of the door and carries the key of matching color.
D.1.3 Observation Space
The MiniGrid environment provides partial and egocentric observations. For all our experiments, the agent sees the view of a square of 4x4 tiles in the direction it is facing. The view includes the tile on which the agent is standing. The observations are provided as a tensor of shape 4x4x3. However, note that this tensor does not represent RGB images. The first two channels denote the view size and the third channel encodes three integer values. The first integer value describes the type of the object, the second value describes the color of the object and the third value describes if the doors are open or closed. The benefit of using this encoding over the RGB encoding is that this encoding is more space-efficient and enables faster training. For human viewing, the fully observable, RGB image view of the environments is also provided and we use that view as an example in the paper.
Additionally, the environment also provides a natural language description of the goal. An example of the goal description is: "Unlock the door and pick up the red ball". The learning agent and the environment use a shared vocabulary where different words are assigned numbers and the environment provides a number-encoded goal description along with each observation. Since different instructions can be of different lengths, the environment pads the goal description with <unk> tokens to ensure that the sequence length is the same. When encoding the instruction, the agent ignores the padded sub-sequence in the instruction.
D.2 Tasks in MiniGrid Environment
We consider the following tasks in the MiniGrid environment:
1. Fetch: In the Fetch task, the agent spawns at an arbitrary position in an 8 × 8 grid (Figure 9). It is provided with a natural language goal description of the form "go fetch a yellow box". The agent has to navigate to the object being referred to in the goal description and pick it up.
2. Unlock: In the Unlock task, the agent spawns at an arbitrary position in a two-room grid environment. Each room is an 8 × 8 square (Figure 10). It is provided with a natural language goal description.

D.3 Model Architecture

D.3.1 Training Setup

Consider an agent training on any task in the MiniGrid suite of environments. At the beginning of an episode, the learning agent spawns at a random position. At each step, the environment provides observations in two modalities: a 4 × 4 × 3 tensor x_t (an egocentric view of the state of the environment) and a variable length goal description g. We describe the design of the learning agent in terms of an encoder-decoder architecture.
D.3.2 Encoder Architecture
The agent's encoder network consists of two models: a CNN+GRU based observation encoder and a GRU [7] based goal encoder.
Observation Encoder:
It is a three layer CNN with the output channel sizes set to 16, 16 and 32 respectively (with ReLU layers in between) and kernel size set to 2 × 2 for all the layers. The output of the CNN is flattened and fed to a GRU model (referred to as the observation-rnn) with 128-dimensional hidden state. The output from the observation-rnn represents the encoding of the observation.
Goal Encoder:
It comprises an embedding layer followed by a unidirectional GRU model. The dimension of the embedding layer and the hidden and the output layer of the GRU model are all set to 128.
The concatenated output of the observation encoder and the goal encoder represents the output of the encoder.
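A minimal PyTorch sketch of this encoder using the layer sizes stated above; the vocabulary size, the channels-first layout of the observation and the variable names are assumptions.

import torch
import torch.nn as nn

class MiniGridEncoder(nn.Module):
    def __init__(self, vocab_size=100, emb_dim=128, hidden=128):
        super().__init__()
        # Three-layer CNN with output channels 16, 16, 32 and 2x2 kernels.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=2), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=2),
        )
        self.obs_rnn = nn.GRU(32, hidden, batch_first=True)        # observation-rnn
        self.goal_emb = nn.Embedding(vocab_size, emb_dim)
        self.goal_rnn = nn.GRU(emb_dim, hidden, batch_first=True)  # goal encoder

    def forward(self, obs, goal_tokens, obs_hidden=None):
        # obs: [B, 3, 4, 4] (channels-first view of the 4x4x3 observation); goal_tokens: [B, T].
        feat = self.cnn(obs).flatten(1).unsqueeze(1)                # [B, 1, 32]
        obs_out, obs_hidden = self.obs_rnn(feat, obs_hidden)
        _, goal_hidden = self.goal_rnn(self.goal_emb(goal_tokens))
        encoding = torch.cat([obs_out[:, -1], goal_hidden[-1]], dim=-1)  # [B, 256]
        return encoding, obs_hidden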
D.3.3 Decoder
The decoder network comprises the action network and the critic network -both of which are implemented as feedforward networks. We now describe the design of these networks.
D.3.4 Value Network
1. Two-layer feedforward network with the tanh non-linearity.
2. Input: Concatenation of z and the current hidden state of the observation-rnn.
3. Size of the input to the first layer and the second layer of the policy network are 320 and 64 respectively.
4. Produces a scalar output (a minimal sketch of this value head follows below).
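A minimal sketch of the value head described above, assuming the 320-dimensional input and 64-dimensional hidden layer.

import torch.nn as nn

class ValueNetwork(nn.Module):
    def __init__(self, in_dim=320, hidden=64):
        super().__init__()
        # Two-layer feedforward network with tanh non-linearity, producing a scalar value.
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, z_and_hidden):
        # Input: concatenation of z and the current hidden state of the observation-rnn.
        return self.net(z_and_hidden)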
D.4 Components specific to the proposed model
The components that we described so far are used by both the baselines as well as our proposed model. We now describe the components that are specific to our proposed model. Our proposed model consists of an ensemble of primitives and the components we describe apply to each of those primitives.
D.4.1 Information Bottleneck
Given that we want to control and regularize the amount of information that the encoder encodes, we compute the KL divergence between the output of the action-feature encoder network and a diagonal unit Gaussian distribution. The larger the KL divergence, the more information is encoded relative to the Gaussian prior, and vice versa. Thus we regularize the primitives to minimize the KL divergence.

The 2D bandits task provides a 6-dimensional flat observation. The first two dimensions correspond to the (x, y) coordinates of the current position of the agent and the remaining four dimensions correspond to the (x, y) coordinates of the two randomly chosen points.
D.4.2 Hyperparameters
E.1 Model Architecture
E.1.1 Training Setup
Consider an agent training on the 2D bandits task. The learning agent spawns at a fixed position and is randomly assigned two points. At each step, the environmental observation provides the current position of the agent as well as the position of the two points. We describe the design of the learning agent in terms of an encoder-decoder architecture.
E.1.2 Encoder Architecture
The agent's encoder network consists of a GRU-based recurrent model (referred to as the observation-rnn) with a hidden state size of 128. The 6-dimensional observation from the environment is the input to the GRU model. The output from the observation-rnn represents the encoding of the observation.
E.2 Hyperparameters
The implementation details for the 2D Bandits environment are the same as those for the MiniGrid environment and are described in detail in section D.4.

F Four-rooms Environment

In the Four-rooms setup, the world (environment for the learning agent) is a square grid of, say, 11 × 11. The grid is divided into 4 rooms such that each room is connected with two other rooms via hallways. The layout of the rooms is shown in Figure 12. The agent spawns at a random position and has to navigate to a goal position within 500 steps.
F.1.1 Reward Function
We consider the sparse reward setup where the agent gets a reward (of 1) only if it completes the task (and reaches the goal position) and 0 at all other time steps. We also apply a time limit of 300 steps on all the tasks, i.e., the agent must complete the task in 300 steps. A task is terminated either when the agent solves the task or when the time limit is reached, whichever happens first.
F.1.2 Observation Space
The environment is an 11 × 11 grid divided into 4 interconnected rooms. As such, the environment has a total of 104 states (or cells) that can be occupied. These states are mapped to integer identifiers. At any time t, the environment observation is a one-hot representation of the identifier corresponding to the state (or cell) the agent currently occupies, i.e., the environment returns a vector of zeros with only one entry being 1, and the index of this entry gives the current position of the agent. The environment does not return any information about the goal state.
F.2 Model Architecture for Four-room Environment
F.2.1 Training Setup
Consider an agent training on any task in the Four-room suite of environments. At the beginning of an episode, the learning agent spawns at a random position and the environment selects a goal position for the agent. At each step, the environment provides a one-hot representation of the agent's current position (without including any information about the goal state). We describe the design of the learning agent in terms of an encoder-decoder architecture.
F.3 Encoder Architecture
The agent's encoder network consists of a GRU-based recurrent model (referred to as the observation-rnn) with a hidden state size of 128. The 104-dimensional one-hot input from the environment is fed to the GRU model. The output from the observation-rnn represents the encoding of the observation.
The implementation details for the Four-rooms environment are the same as those for the MiniGrid environment and are described in detail in section D.4.2.
G Ant Maze Environment
We use the Mujoco-based quadrupedal ant [40] to evaluate the transfer performance of our approach on the cross maze environment [15]. The training happens in two phases. In the first phase, we train the ant to walk on a surface using a motion reward and using just 1 primitive. In the second phase, we make 4 copies of this trained policy and train the agent to navigate to a goal position in a maze (Figure 13). The goal position is chosen from a set of 3 (or 10) goals. The environment is a continuous control environment and the agent can directly manipulate the movement of joints and limbs. We describe the design of the learning agent in terms of an encoder-decoder architecture.
G.1.2 Encoder Architecture
The agent's encoder network consists of a GRU-based recurrent model (referred to as the observation-rnn) with a hidden state size of 128. The real-valued state vector from the environment is fed to the GRU model. The output from the observation-rnn represents the encoding of the observation. Note that between phase 1 and phase 2, only the size of the input to the observation-rnn changes and the encoder architecture remains the same.
G.1.3 Decoder
The decoder network comprises the action network and the critic network. All these networks are implemented as feedforward networks. The design of these networks is very similar to that of the decoder model for the MiniGrid environment as described in section D.3.3 with just one difference. In this case, the action space is continuous so the action-feature decoder network produces the mean and log-standard-deviation for a diagonal Gaussian policy. This is used to sample a real-valued action to execute in the environment.
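A hedged sketch of such a diagonal Gaussian action head; the dimensions are illustrative, and the state-independent log-standard-deviation is a simplifying assumption rather than the paper's exact parameterization.

import torch
import torch.nn as nn
import torch.distributions as D

class GaussianActionHead(nn.Module):
    def __init__(self, in_dim=128, action_dim=8):
        super().__init__()
        self.mean = nn.Linear(in_dim, action_dim)              # per-state mean of the policy
        self.log_std = nn.Parameter(torch.zeros(action_dim))   # learned, state-independent log-std

    def forward(self, features):
        dist = D.Normal(self.mean(features), self.log_std.exp())
        action = dist.rsample()  # real-valued action to execute in the environment
        return action, dist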
| 6,958 |
1811.05340
|
2952464165
|
State-of-the-art object detectors and trackers are developing fast. Trackers are in general more efficient than detectors but bear the risk of drifting. A question is hence raised -- how to improve the accuracy of video object detection tracking by utilizing the existing detectors and trackers within a given time budget? A baseline is frame skipping -- detecting every N-th frame and tracking for the frames in between. This baseline, however, is suboptimal since the detection frequency should depend on the tracking quality. To this end, we propose a scheduler network, which determines to detect or track at a certain frame, as a generalization of Siamese trackers. Although being light-weight and simple in structure, the scheduler network is more effective than the frame skipping baselines and flow-based approaches, as validated on ImageNet VID dataset in video object detection tracking.
|
Video object detection tracking is a task in ILSVRC 2017 @cite_19, where the winning entries are optimized for accuracy rather than speed. @cite_18 adopts flow aggregation @cite_38 to improve the detection accuracy. @cite_17 combines flow-based @cite_4 and object tracking-based @cite_3 tubelet generation @cite_31. THU-CAS @cite_19 considers flow-based tracking @cite_23, object tracking @cite_34 and data association @cite_20.
|
{
"abstract": [
"Extending state-of-the-art object detectors from image to video is challenging. The accuracy of detection suffers from degenerated object appearances in videos, e.g., motion blur, video defocus, rare poses, etc. Existing work attempts to exploit temporal information on box level, but such methods are not trained end-to-end. We present flow-guided feature aggregation, an accurate and end-to-end learning framework for video object detection. It leverages temporal coherence on feature level instead. It improves the per-frame features by aggregation of nearby features along the motion paths, and thus improves the video recognition accuracy. Our method significantly improves upon strong single-frame baselines in ImageNet VID, especially for more challenging fast moving objects. Our framework is principled, and on par with the best engineered systems winning the ImageNet VID challenges 2016, without additional bells-and-whistles. The proposed method, together with Deep Feature Flow, powered the winning entry of ImageNet VID challenges 2017. The code is available at this https URL.",
"",
"",
"We propose a novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network (CNN). Our algorithm pretrains a CNN using a large set of videos with tracking groundtruths to obtain a generic target representation. Our network is composed of shared layers and multiple branches of domain-specific layers, where domains correspond to individual training sequences and each branch is responsible for binary classification to identify target in each domain. We train each domain in the network iteratively to obtain generic target representations in the shared layers. When tracking a target in a new sequence, we construct a new network by combining the shared layers in the pretrained CNN with a new binary classification layer, which is updated online. Online tracking is performed by evaluating the candidate windows randomly sampled around the previous target state. The proposed algorithm illustrates outstanding performance in existing tracking benchmarks.",
"",
"Deep Convolution Neural Networks (CNNs) have shown impressive performance in various vision tasks such as image classification, object detection and semantic segmentation. For object detection, particularly in still images, the performance has been significantly increased last year thanks to powerful deep networks (e.g. GoogleNet) and detection frameworks (e.g. Regions with CNN features (RCNN)). The lately introduced ImageNet [6] task on object detection from video (VID) brings the object detection task into the video domain, in which objects' locations at each frame are required to be annotated with bounding boxes. In this work, we introduce a complete framework for the VID task based on still-image object detection and general object tracking. Their relations and contributions in the VID task are thoroughly studied and evaluated. In addition, a temporal convolution network is proposed to incorporate temporal information to regularize the detection results and shows its effectiveness for the task. Code is available at https: github.com myfavouritekk vdetlib.",
"Object detection in videos has drawn increasing attention recently with the introduction of the large-scale ImageNet VID dataset. Different from object detection in static images, temporal information in videos is vital for object detection. To fully utilize temporal information, state-of-the-art methods [15, 14] are based on spatiotemporal tubelets, which are essentially sequences of associated bounding boxes across time. However, the existing methods have major limitations in generating tubelets in terms of quality and efficiency. Motion-based [14] methods are able to obtain dense tubelets efficiently, but the lengths are generally only several frames, which is not optimal for incorporating long-term temporal information. Appearance-based [15] methods, usually involving generic object tracking, could generate long tubelets, but are usually computationally expensive. In this work, we propose a framework for object detection in videos, which consists of a novel tubelet proposal network to efficiently generate spatiotemporal proposals, and a Long Short-term Memory (LSTM) network that incorporates temporal information from tubelet proposals for achieving high object detection accuracy in videos. Experiments on the large-scale ImageNet VID dataset demonstrate the effectiveness of the proposed framework for object detection in videos.",
"Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications. Our tracker uses a simple feed-forward network with no online training required. The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. We test our network on a standard tracking benchmark to demonstrate our tracker's state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set. To the best of our knowledge, our tracker is the first neural-network tracker that learns to track generic objects at 100 fps.",
"Detection and learning based appearance feature play the central role in data association based multiple object tracking (MOT), but most recent MOT works usually ignore them and only focus on the hand-crafted feature and association algorithms. In this paper, we explore the high-performance detection and deep learning based appearance feature, and show that they lead to significantly better MOT results in both online and offline setting. We make our detection and appearance feature publicly available (https: drive.google.com open?id=0B5ACiy41McAHMjczS2p0dFg3emM). In the following part, we first summarize the detection and appearance feature, and then introduce our tracker named Person of Interest (POI), which has both online and offline version (We use POI to denote our online tracker and KDNT to denote our offline tracker in submission.).",
""
],
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_4",
"@cite_3",
"@cite_19",
"@cite_23",
"@cite_31",
"@cite_34",
"@cite_20",
"@cite_17"
],
"mid": [
"2604445072",
"",
"",
"1857884451",
"",
"2335901184",
"2590174509",
"2340000481",
"2534578893",
""
]
}
|
Detect or Track: Towards Cost-Effective Video Object Detection/Tracking
|
Convolutional neural network (CNN)-based methods have achieved significant progress in computer vision tasks such as object detection (Ren et al. 2015;Dai et al. 2016;Tang et al. 2018b) and tracking (Held, Thrun, and Savarese 2016;Bertinetto et al. 2016;Nam and Han 2016;Bhat et al. 2018). Following the tracking-by-detection paradigm, most state-of-the-art trackers can be viewed as a local detector of a specified object. Consequently, trackers are generally more efficient than detectors and can obtain precise bounding boxes in subsequent frames if the specified bounding box is accurate. However, as evaluated commonly on benchmark datasets such as OTB (Wu, Lim, and Yang 2015) and VOT (Kristan et al. 2017), trackers are encouraged to track as long as possible. It is non-trivial for trackers to be stopped once they are not confident, although heuristics, such as a threshold of the maximum response value, can be applied. Therefore, trackers bear the risk of drifting.
Besides object detection and tracking, there have been recently a series of studies on video object detection Kang et al. 2017;Feichtenhofer, Pinz, and Zisserman 2017;Zhu et al. 2017b;Zhu et al. 2017a;Zhu et al. 2018;Chen et al. 2018). Beyond the baseline to detect each frame individually, state-of-the-art approaches consider the temporal consistency of the detection results via tubelet proposals Kang et al. 2017), optical flow (Zhu et al. 2017b;Zhu et al. 2017a;Zhu et al. 2018) and regression-based trackers (Feichtenhofer, Pinz, and Zisserman 2017). These approaches, however, are optimized for the detection accuracy of each individual frame. They either do not associate the presence of an object in different frames as a tracklet, or associate after performing object detection on each frame, which is time-consuming. This paper is motivated by the constraints from practical video analytics scenarios such as autonomous driving and video surveillance. We argue that algorithms applied to these scenarios should be:
• capable of associating an object appearing in different frames, such that the trajectory or velocity of the object can be further inferred.
• in realtime (e.g., over 30 fps) and as fast as possible, such that the deployment cost can be further reduced.
• with low latency, which means to produce results once a frame in a video stream has been processed.
Considering these constraints, we focus in this paper on the task of video object detection/tracking (Russakovsky et al. 2017). The task is to detect objects in each frame (similar to the goal of video object detection), with an additional goal of associating an object appearing in different frames.
In order to handle this task under the realtime and low latency constraint, we propose a detect or track (DorT) framework. In this framework, object detection/tracking of a video sequence is formulated as a sequential decision problem -a scheduler network makes a detection/tracking decision for every incoming frame, and then these frames are processed with the detector/tracker accordingly. The architecture is illustrated in Figure 1.
The scheduler network is the most unique part of our framework. It should be light-weight but be able to determine to detect or track. Rather than using heuristic rules (e.g., thresholds of tracking confidence values), we formulate the scheduler as a small CNN by assessing the tracking quality. It is shown to be a generalization of Siamese trackers and a special case of reinforcement learning (RL).
The contributions are summarized as follows:
• We propose the DorT framework, in which the object detection/tracking of a video sequence is formulated as a sequential decision problem, while being in realtime and with low latency.
• We propose a light-weight but effective scheduler network, which is shown to be a generalization of Siamese trackers and a special case of RL.
• The proposed DorT framework is more effective than the frame skipping baselines and flow-based approaches, as validated on ImageNet VID dataset (Russakovsky et al. 2015) in video object detection/tracking.

Figure 1: Detect or track (DorT) framework. The scheduler network compares the current frame t + τ with the keyframe t by evaluating the tracking quality, and determines to detect or track frame t + τ: either frame t + τ is detected by a single-frame detector, or bounding boxes are tracked to frame t + τ from the keyframe t. If detect is chosen, frame t + τ is assigned as the new keyframe, and the boxes in frame t + τ and frame t + τ − 1 are associated by the widely-used Hungarian algorithm (not shown in the figure for conciseness).
Video Object Detection/Tracking
Video object detection/tracking is a task in ILSVRC 2017 (Russakovsky et al. 2017), where the winning entries are optimized for accuracy rather than speed. One winning entry adopts flow aggregation (Zhu et al. 2017a) to improve the detection accuracy. (Wei et al. 2017) combines flow-based (Ilg et al. 2017) and object tracking-based (Nam and Han 2016) tubelet generation (Kang et al. 2017). Nevertheless, these methods combine multiple cues (e.g., flow aggregation in detection, and flow-based and object tracking-based tubelet generation) which are complementary but time-consuming. Moreover, they apply global post-processing such as seq-NMS and tubelet NMS (Tang et al. 2018a), which greatly improve the accuracy but are not suitable for a realtime and low latency scenario.
Video Object Detection
Approaches to video object detection have developed rapidly since the introduction of the ImageNet VID dataset (Russakovsky et al. 2015). (Kang et al. 2017) propose a framework that consists of per-frame proposal generation, bounding box tracking and tubelet re-scoring. (Zhu et al. 2017b) proposes to detect frames sparsely and propagates features with optical flow. (Zhu et al. 2017a) proposes to aggregate features in nearby frames along the motion path to improve the feature quality. Furthermore, (Zhu et al. 2018) proposes a high-performance approach by considering feature aggregation, partial feature updating and adaptive keyframe scheduling based on optical flow. Besides, (Feichtenhofer, Pinz, and Zisserman 2017) proposes to learn detection and tracking using a single network with a multi-task objective. (Chen et al. 2018) proposes to propagate the sparsely detected results through a space-time lattice. All the methods above focus on the accuracy of each individual frame. They either do not associate the presence of an object in different frames as a tracklet, or associate after performing object detection on each frame, which is time-consuming.
Multiple Object Tracking
Multiple object tracking (MOT) focuses on data association: finding the set of trajectories that best explains the given detections (Leal-Taixé et al. 2014). Existing approaches to MOT fall into two categories: batch and online mode. Batch mode approaches pose data association as a global optimization problem, which can be a min-cost max-flow problem (Zhang, Li, and Nevatia 2008; Pirsiavash, Ramanan, and Fowlkes 2011), a continuous energy minimization problem (Milan, Roth, and Schindler 2014) or a graph cut problem (Tang et al. 2016). Contrarily, online mode approaches are only allowed to solve the data association problem with the present and past frames. (Xiang, Alahi, and Savarese 2015) formulates data association as a Markov decision process. (Milan et al. 2017; Sadeghian, Alahi, and Savarese 2017) employ recurrent neural networks (RNNs) for feature representation and data association.
State-of-the-art MOT approaches aim to improve the data association performance given publicly-available detections since the introduction of the MOT challenge (Leal-Taixé et al. 2015). However, we focus on the sequential decision problem of detection or tracking. Although the widely-used Hungarian algorithm is adopted for simplicity and fairness in the experiments, we believe the incorporation of existing MOT approaches can further enhance the accuracy.
Keyframe Scheduler
Researchers have proposed approaches to adaptive keyframe scheduling beyond regular frame skipping in video analytics. (Zhu et al. 2018) proposes to estimate the quality of optical flow, which relies on the time-consuming flow network. (Chen et al. 2018) proposes an easiness measure to consider the size and motion of small objects, which is hand-crafted and more importantly, it is a detect-then-schedule paradigm but cannot determine to detect or track. (Li, Shi, and Lin 2018;Xu et al. 2018) learn to predict the discrepancy between the segmentation map of the current frame and the keyframe, which are only applicable to segmentation tasks.
All the methods above, however, solve an auxiliary task (e.g., flow quality, or discrepancy of segmentation maps) but do not answer the question directly in a classification perspective -is the current frame a keyframe or not? In contrast, we pose video object detection/tracking as a sequential decision problem, and learn directly whether the current frame is a keyframe by assessing the tracking quality. Our formulation is further shown as a generalization of Siamese trackers and a special case of RL.
The DorT Framework
Video object detection/tracking is formulated as follows. Given a sequence of video frames F = {f_1, f_2, ..., f_N}, the aim is to obtain bounding boxes B = {b_1, b_2, ..., b_M}, where b_i = {rect_i, fid_i, score_i, id_i}; rect_i denotes the 4-dim bounding box coordinates, and fid_i, score_i and id_i are scalars denoting respectively the frame ID, the confidence score and the object ID.
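For concreteness, one element b_i of B could be represented by the following Python structure; the coordinate convention and field names are assumptions.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class Box:
    rect: Tuple[float, float, float, float]  # 4-dim coordinates, e.g. (x1, y1, x2, y2)
    fid: int                                 # frame ID
    score: float                             # confidence score
    obj_id: int                              # object ID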
Considering the realtime and low latency constraint, we formulate video object detection/tracking as a sequential decision problem, which consists of four modules: singleframe detector, multi-box tracker, scheduler network and data association. An algorithm summary follows the introduction of the four modules.
Single-Frame Detector
We adopt R-FCN (Dai et al. 2016) as the detector following deep feature flow (DFF) (Zhu et al. 2017b). Our framework, however, is compatible with all single-frame detectors.
Efficient Multi-Box Tracker via RoI Convolution
The SiamFC tracker (Bertinetto et al. 2016) is adopted in our framework. It learns a deep feature extractor during training such that an object is similar to its deformations but different from the background. During testing, the nearby patch with the highest confidence is selected as the tracking result. The tracker is reported to run at 86 fps in the original paper.
Despite its efficiency, there are usually 30 to 50 detected boxes in a frame outputted by R-FCN. It is a natural idea to track only the high-confidence ones and ignore the rest. Such an approach, however, results in a drastic decrease in mAP, since R-FCN detection is not perfect and many true positives with low confidence scores are discarded. We therefore need to track all the detected boxes. It is time-consuming to track 50 boxes without optimization (about 3 fps). In order to speed up the tracking process, we propose to share the feature extraction network of multiple boxes and propose an RoI convolution layer in place of the original cross-correlation layer in SiamFC. Figure 2 is an illustration. Through cropping and convolving on the feature maps, the proposed tracker is over 10x faster than the time-consuming baseline while obtaining comparable accuracy.

Figure 2: RoI convolution. Given targets in keyframe t and search regions in frame t + τ, the corresponding RoIs are cropped from the feature maps and convolved to obtain the response maps. Solid boxes denote detected objects in keyframe t and dashed boxes denote the corresponding search region in frame t + τ. A star denotes the center of its corresponding bounding box. The center of a dashed box is copied from the tracking result in frame t + τ − 1.
Notably, there is no learnable parameter in the RoI convolution layer, and thus we can train the SiamFC tracker following the original settings in (Bertinetto et al. 2016).
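A hedged sketch of the RoI convolution idea using torchvision's roi_align; the crop sizes follow the SiamFC dimensions mentioned later in the paper (6 × 6 targets, 22 × 22 search regions, 17 × 17 responses), while the feature stride and box format are assumptions.

import torch
import torch.nn.functional as F
from torchvision.ops import roi_align

def roi_convolution(feat_key, feat_cur, target_boxes, search_boxes, spatial_scale=1.0 / 8):
    # feat_key/feat_cur: Siamese feature maps of the keyframe and the current frame, [B, C, H, W].
    # target_boxes/search_boxes: [K, 5] tensors in (batch_idx, x1, y1, x2, y2) image coordinates.
    targets = roi_align(feat_key, target_boxes, output_size=(6, 6), spatial_scale=spatial_scale)
    searches = roi_align(feat_cur, search_boxes, output_size=(22, 22), spatial_scale=spatial_scale)
    responses = []
    for t, s in zip(targets, searches):
        # Cross-correlate one cropped target (used as a conv kernel) with its search region.
        responses.append(F.conv2d(s.unsqueeze(0), t.unsqueeze(0)))
    return torch.cat(responses)  # [K, 1, 17, 17] response maps, one per tracked box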
Scheduler Network
The scheduler network is the core of DorT, as our task is formulated as a sequential decision problem. It takes as input the current frame f t+τ and its keyframe f t , and determines to detect or track, denoted as Scheduler(f t , f t+τ ). We will elaborate this module in the next section.
Data Association
Once the scheduler network determines to detect the current frame, there is a need to associate the previous tracked boxes and the current detected boxes. Hence, a data association algorithm is required. For simplicity and fairness in the paper, the widely-used Hungarian algorithm is adopted. Although it is possible to improve the accuracy by incorporating more advanced data association techniques (Xiang, Alahi, and Savarese 2015;Sadeghian, Alahi, and Savarese 2017), it is not the focus in the paper. The overall architecture of the DorT framework is shown in Figure 1. More details are summarized in Algorithm 1.
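As an illustration, this association step can be sketched with SciPy's Hungarian solver and an IoU-based cost; the cost definition and the matching threshold are illustrative choices, not necessarily the paper's exact settings.

import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    # Boxes as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def associate(tracked, detected, iou_thresh=0.3):
    # Hungarian matching on a (1 - IoU) cost; unmatched detections receive new IDs.
    cost = np.array([[1.0 - iou(t, d) for d in detected] for t in tracked])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_thresh]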
The Scheduler Network in DorT
Algorithm 1 The Detect or Track (DorT) Framework
Input: A sequence of video frames F = {f_1, f_2, ..., f_N}.
Output: Bounding boxes B = {b_1, b_2, ..., b_M} with ID, where b_i = {rect_i, fid_i, score_i, id_i}.
1: B ← {}
2: t ← 1 ▷ t is the index of the keyframe
3: Detect f_1 with the single-frame detector.
4: Assign new ID to the detected boxes.
5: Add the detected boxes in f_1 to B.
6: for i ← 2 to N do
7:    d ← Scheduler(f_t, f_i) ▷ decision of the scheduler
8:    if d = detect then
9:       Detect f_i with the single-frame detector.
10:      Match boxes in f_i and f_{i−1} using Hungarian.
11:      Assign new ID to unmatched boxes in f_i.
12:      Assign corresponding ID to matched boxes in f_i.

Figure 3: Scheduler network. The output feature map of the correlation layer is followed by two convolutional layers and a fully-connected layer with a 2-way softmax. As discussed later, this structure is a generalization of the SiamFC tracker.

The scheduler network in DorT aims to determine to detect or track given a new frame by estimating the quality of the tracked boxes. It should be efficient itself. Rather than training a network from scratch, we propose to reuse part of the tracking network. Firstly, the l-th layer convolutional feature maps of frame t and frame t + τ, denoted respectively as x^t_l and x^{t+τ}_l, are fed into a correlation layer which performs point-wise feature comparison
x^{t,t+τ}_corr(i, j, p, q) = ⟨x^t_l(i, j), x^{t+τ}_l(i + p, j + q)⟩, (1)
where −d ≤ p ≤ d and −d ≤ q ≤ d are offsets to compare features in a neighbourhood around the locations (i, j) in the feature map, defined by the maximum displacement d.
Hence, the output of the correlation layer is a feature map of size x_corr ∈ R^{H_l × W_l × (2d+1)^2}, where H_l and W_l denote respectively the height and width of the l-th layer feature map. The correlation feature map x_corr is then passed through two convolutional layers and a fully-connected layer with a 2-way softmax. The final output of the network is a classification score indicating the probability to detect the current frame. Figure 3 is an illustration of the scheduler network.
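A naive PyTorch sketch of the correlation layer of Eq. (1); the maximum displacement d and the tensor layout are assumptions, and an optimized implementation would avoid the explicit Python loops.

import torch
import torch.nn.functional as F

def correlation_layer(x_key, x_cur, d=4):
    # x_key, x_cur: l-th layer feature maps of the keyframe and the current frame, [B, C, H, W].
    B, C, H, W = x_key.shape
    x_pad = F.pad(x_cur, (d, d, d, d))
    maps = []
    for p in range(-d, d + 1):
        for q in range(-d, d + 1):
            shifted = x_pad[:, :, d + p:d + p + H, d + q:d + q + W]
            maps.append((x_key * shifted).sum(dim=1))  # inner product over channels, Eq. (1)
    return torch.stack(maps, dim=-1)  # [B, H, W, (2d+1)^2] correlation feature map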
Training Data Preparation
Existing groundtruth in the ImageNet VID dataset (Russakovsky et al. 2015) does not contain an indicator of the tracking quality. In this paper, we simulate the tracking process between two sampled frames and label it as detect (0) or track (1) in a strict protocol.
As we have sampled frame t and frame t+τ from the same sequence, we track all the groundtruth bounding boxes using SiamFC from frame t to frame t + τ . If all the groundtruth boxes in frame t + τ are matched with the tracked boxes (e.g., IOU over 0.8), the frame is labeled as track; otherwise, it is labeled as detect. Any emerging or disappearing objects indicates a detect. Several examples are shown in Figure 4.
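A minimal sketch of this labeling protocol, reusing the iou helper from the data-association sketch above; treating any change in the number of boxes as an emerging or disappearing object is a simplification.

def label_frame_pair(tracked_boxes, gt_boxes, iou_thresh=0.8):
    # Label a sampled frame pair for scheduler training: 'track' only if every groundtruth
    # box in frame t + tau is matched (IoU over 0.8) by a box tracked from frame t.
    if len(tracked_boxes) != len(gt_boxes):
        return "detect"  # emerging or disappearing objects
    for gt in gt_boxes:
        if max((iou(gt, tb) for tb in tracked_boxes), default=0.0) < iou_thresh:
            return "detect"
    return "track"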
We have also tried to learn a scheduler for each tracker, but found it difficult to handle high-confidence false detections and non-trivial to merge the decisions of all the trackers. In contrast, the proposed approach to learning a single scheduler is an elegant solution which directly learns the decision rather than an auxiliary target such as the fraction of pixels at which the semantic segmentation labels differ (Li, Shi, and Lin 2018), or the fraction of low-quality flow estimation (Zhu et al. 2018).
Relation to the SiamFC Tracker
The proposed scheduler network can be seen as a generalization of the original SiamFC (Bertinetto et al. 2016). In the correlation layer of SiamFC, the target feature (6 × 6 × 128) is convolved with the search region feature (22 × 22 × 128) and obtains the response map (17 × 17 × 1, which can be equivalently written as 1 × 1 × 17^2). Similarly, we can view the correlation layer of the proposed scheduler network (see Eq. 1) as convolutions between multiple target features in the keyframe and their respective nearby search regions in the current frame. The size of a target equals the receptive field of the input feature map of our scheduler. Figure 5 shows several examples of targets. Actually, however, targets include all possible patches in a sliding window manner.
In this sense, the output feature map of the correlation layer x_corr ∈ R^{H_l × W_l × (2d+1)^2} can be regarded as a set of H_l × W_l SiamFC tracking tasks, where the response map of each is 1 × 1 × (2d+1)^2. The correlation feature map is then fed into a small CNN consisting of two convolutional layers and a fully-connected layer.
In summary, the generalization of the proposed scheduler network over SiamFC is twofold:
• SiamFC correlates a target feature with its nearby search region, while our scheduler extends the number of such tasks from one to many.
• SiamFC directly picks the highest value in the correlation feature map as the result, whereas the proposed scheduler fuses the multiple response maps with a CNN.
The validity of the proposed scheduler network is hence clear: it first convolves patches in frame t (examples shown in Figure 5) with their respective nearby regions in frame t+τ, and then fuses the response maps with a CNN, in order to measure the difference between the two frames and, more importantly, to assess the tracking quality. The scheduler is also resistant to small perturbations by inheriting SiamFC's robustness to object deformation.
Relation to Reinforcement Learning
The sequential decision problem can also be formulated in an RL framework, where the action, state, state transition function and reward need to be defined.

Figure 5: Examples of targets. The size of a target equals the receptive field of the input feature map of the scheduler. A target patch might be an object, a part of an object, or entirely background; the "tracking" results of these targets are fused later. Note that the targets include all possible patches in a sliding-window manner, not just the three boxes shown.
Action. The action space A contains two types of actions: {detect, track}. If the decision is detect, the object detector is applied to the current frame; otherwise, the boxes tracked from the keyframe are taken as the results.
State. The state s_{t,τ} is defined as a tuple (x_l^t, x_l^{t+τ}), where x_l^t and x_l^{t+τ} denote the l-th layer convolutional feature maps of frame t and frame t + τ, respectively. Frame t is the keyframe on which the object detector is applied, and frame t + τ is the current frame on which the action is to be determined.
State transition function. After taking action a_{t,τ} in state s_{t,τ}, the next state is obtained according to the action:
• detect. The next state is s_{t+τ,1} = (x_l^{t+τ}, x_l^{t+τ+1}). Frame t + τ is fed to the object detector and is set as the new keyframe.
• track. The next state is s_{t,τ+1} = (x_l^t, x_l^{t+τ+1}). Bounding boxes tracked from the keyframe are taken as the results in frame t + τ. The keyframe t remains unchanged.
As shown above, no matter whether the keyframe is t or t + τ, the task in the next state is to determine the action in frame t + τ + 1.
Reward. The reward function is defined as r(s, a) since it is determined by both the state s and the action a. As illustrated in Figure 4, a labeling mechanism is proposed to obtain the groundtruth label of the tracking quality between two frames (i.e., a certain state s). We denote the groundtruth label as GT (s), which is either detect or track. Hence, the reward function can be defined as follows:
r(s, a) = { 1, if GT(s) = a;  0, otherwise }        (2)
which is based on the consistency between the groundtruth label and the action taken. After defining all the above, the RL problem can be solved via a deep Q network (DQN) (Mnih et al. 2015) with a discount factor γ, penalizing the reward from future time steps. However, training stability is always an issue in RL algorithms (Anschel, Baram, and Shimkin 2017). In this paper, we set γ = 0 such that the agent only cares about the reward from the next time step. Therefore, the DQN becomes a regression network -pushing the predicted action to be the same as the GT action, and the scheduler network is a special case of RL. We empirically observe that the training procedure becomes easier and more stable by setting γ = 0.
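Setting γ = 0 thus reduces the scheduler to a plain two-class classifier trained with the groundtruth action as the label. Below is a minimal sketch of the head in Figure 3 and its loss, assuming (2d+1)^2 input channels and average pooling before the fully-connected layer; the channel sizes and the pooling choice are assumptions, not specified in the paper.

import torch
import torch.nn as nn

class SchedulerHead(nn.Module):
    # Two convolutional layers and a fully-connected layer with a 2-way
    # softmax on top of the correlation feature map (Figure 3).
    def __init__(self, in_channels=81, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # assumed spatial reduction before the FC layer
        )
        self.fc = nn.Linear(hidden, 2)  # logits for {detect, track}

    def forward(self, x_corr):          # x_corr: [B, (2d+1)^2, H_l, W_l]
        return self.fc(self.conv(x_corr).flatten(1))

# With gamma = 0 the DQN objective degenerates to matching the groundtruth
# action, i.e. a standard cross-entropy loss over the two actions:
# loss = criterion(head(x_corr), groundtruth_actions)
criterion = nn.CrossEntropyLoss()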
Experiments
The DorT framework is evaluated on the ImageNet VID dataset (Russakovsky et al. 2015) in the task of video object detection/tracking. For completeness, we also report results in video object detection.
Experimental Setup
Dataset description. All experiments are conducted on the ImageNet VID dataset (Russakovsky et al. 2015). ImageNet VID is split into a training set of 3862 videos and a test set of 555 videos. There are per-frame bounding box annotations for each video. Furthermore, all presences of a given target across different frames in a video are assigned the same ID.
Evaluation metric. The evaluation metric for video object detection is the extensively used mean average precision (mAP), which is based on a sorted list of bounding boxes in descending order of their scores. A predicted bounding box is considered correct if its IOU with a groundtruth box is over a threshold (e.g., 0.5).
In contrast to the standard mAP which is based on bounding boxes, the mAP for video object detection/tracking is based on a sorted list of tracklets (Russakovsky et al. 2017). A tracklet is a set of bounding boxes with the same ID. Similarly, a tracklet is considered correct if its IOU with a groundtruth tracklet is over a threshold. Typical choices of IOU thresholds for tracklet matching and per-frame bounding box matching are both 0.5. The score of a tracklet is the average score of all its bounding boxes.
Implementation details. Following the settings in (Zhu et al. 2017b), R-FCN (Dai et al. 2016) is trained with a ResNet-101 backbone (He et al. 2016) on the training set.
SiamFC is trained following the original paper (Bertinetto et al. 2016). Instead of training from scratch, however, we initialize the first four convolutional layers with the pretrained parameters from AlexNet (Krizhevsky, Sutskever, and Hinton 2012) and change Conv5 from 3 × 3 to 1 × 1 with the Xavier initializer. Parameters of the first four convolutional layers are fixed during training (He et al. 2018). We only search for one scale and discard the upsampling step in the original SiamFC for efficiency. All images fed into SiamFC are resized to 300 × 500. Moreover, the confidence score of a tracked box (for evaluation) is set equal to that of its corresponding detected box in the keyframe.
The scheduler network takes as input the Conv5 feature of our trained SiamFC. The SGD optimizer is adopted with a learning rate 1e-2, momentum 0.9 and weight decay 5e-4. The batch size is set to 32. During testing, we raise the decision threshold of track to δ = 0.97 (i.e., the scheduler outputs track if the predicted confidence of track is over δ) to ensure conservativeness of the scheduler. Furthermore, since nearby frames look similar, the scheduler is applied every σ frames (where σ is a tunable parameter) to reduce unnecessary computation.
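The test-time decision rule described above can be summarized in a few lines. The sketch below assumes that, on the frames between two scheduler calls, the tracker simply keeps running; this in-between behaviour and the function names are assumptions rather than details given in the paper.

def should_detect(p_track, frame_idx, keyframe_idx, delta=0.97, sigma=10):
    # p_track: confidence of the 'track' class from the 2-way softmax.
    # The scheduler is consulted only every `sigma` frames; otherwise we keep
    # tracking (assumed). When consulted, it chooses 'track' only if the
    # predicted confidence exceeds the conservative threshold `delta`.
    if (frame_idx - keyframe_idx) % sigma != 0:
        return False
    return p_track <= delta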
All experiments are conducted on a workstation with an Intel Core i7-4790k CPU and a Titan X GPU. We empirically observe that the detection network and the tracking/scheduler network run at 8.33 fps and 100 fps, respectively. This is because the ResNet-101 backbone is much heavier than AlexNet. Moreover, the speed of the Hungarian algorithm is as high as 667 fps.
Video Object Detection/Tracking
To our knowledge, the most closely related work to ours is (Lan et al. 2016), which handles cost-effective face detection/tracking. Since faces are much easier to track and exhibit less deformation, that work achieves success with non-deep-learning-based detectors and trackers. However, we aim at general object detection/tracking in video, which is much more challenging. We demonstrate the effectiveness of the proposed DorT framework against several strong baselines.
Effectiveness of scheduler. The scheduler network is a core component of our DorT framework. Since SiamFC tracking is more efficient than R-FCN detection, the scheduler should predict track when it is safe for the trackers and be conservative enough to predict detect when there is sufficient change to avoid track drift. We compare our DorT framework with a frame skipping baseline, namely a "fixed scheduler" -R-FCN is performed every σ frames and SiamFC is adopted to track for the frames in between. As aforementioned, our scheduler can also be applied every σ frames to improve efficiency. Moreover, there could be an oracle scheduler -predicting the groundtruth label (detect or track) as shown in Figure 4 during testing. The oracle scheduler is a 100% accurate scheduler in our setting. The results are shown in Figure 6.
We can observe that the frame rate and mAP vary as σ changes. Interestingly, the curves are not monotonic -as the frame rate decreases, the accuracy in mAP is not necessarily higher. In particular, detectors are applied frequently when σ = 1 (the leftmost point of each curve). Associating boxes using the Hungarian algorithm is generally less reliable (given missed detections and false detections) than tracking boxes between two frames. It is also a benefit of the scheduler network -applying tracking only when confident, and thus most boxes are reliably associated. Hence, the curve of the scheduler network is on the upper-right side of that of the fixed scheduler as shown in Figure 6.
However, it can also be observed that there is a certain gap between the curve of the scheduler network and that of the oracle scheduler. Given that the oracle scheduler is a 100% accurate classifier, we analyze the classification accuracy of the scheduler network in Figure 7.

(Figure caption, qualitative results) Red, blue and green boxes denote groundtruth, detected boxes and tracked boxes, respectively. First row: R-FCN is applied in the keyframe. Second row: the scheduler determines to track since it is confident. Third row: the scheduler predicts track in the first image although the red panda moves; however, it determines to detect in the second image as the cat moves significantly and cannot be tracked.

Let us take the σ = 10 case as an example. Although the classification accuracy is only 32.3%, the false positive rate (i.e., misclassifying a detect case as track) is as low as 1.9%. Because we empirically find that the mAP drops drastically if the scheduler mistakenly predicts track, our scheduler network is made conservative: track only when confident and detect if unsure. Figure 8 shows some qualitative results.
Effectiveness of RoI convolution. Trackers are optimized for the crop-and-resize case (Bertinetto et al. 2016): the target and search region are cropped and resized to a fixed size before matching. This is a sensible choice since the tracking algorithm is then unaffected by the original size of the target. It is, however, slow in the multi-box case, and we propose RoI convolution as an efficient approximation. As shown in Figure 6, crop-and-resize SiamFC is even slower than detection: the overall running time is 3 fps. Notably, its mAP is 56.5%, which is roughly the same as that of our DorT framework empowered with RoI convolution. Our DorT framework, however, runs at 54 fps when σ = 10. RoI convolution thus obtains an over 10x speed boost while retaining mAP.
Comparison with existing methods. Deep feature flow (Zhu et al. 2017b) focuses on video object detection without tracking. We can, however, associate its predicted bounding boxes with per frame data association using the Hungarian algorithm. The results are shown in Figure 6. It can be observed that our framework performs significantly better than deep feature flow in video object detection/tracking.
Concurrent works that deal with video object detection/tracking are the submitted entries in ILSVRC 2017 (Wei et al. 2017; Russakovsky et al. 2017). As discussed in the Related Work section, these methods aim only to improve the mAP by adopting complicated methods and post-processing, leading to inefficient solutions without guaranteeing low latency. Their reported results on the test set range from 51% to 65% mAP. Our proposed DorT, notably, achieves 57% mAP on the validation set, which is comparable to the existing methods in magnitude, but is much more principled and efficient.
Video Object Detection
We also evaluate our DorT framework in video object detection for completeness, by removing the predicted object ID. Our DorT framework is compared against deep feature flow (Zhu et al. 2017b), D&T (Feichtenhofer, Pinz, and Zisserman 2017), high performance video object detection (VOD) (Zhu et al. 2018) and ST-Lattice (Chen et al. 2018). The results are shown in Figure 9. It can be observed that D&T and high performance VOD manage to achieve a speed-accuracy balance. They obtain higher results but cannot fit into realtime (over 30 fps) scenarios. ST-Lattice, although being fast and accurate, adopts detection results in future frames and is thus not suitable in a low latency scenario. As compared with deep feature flow, our DorT framework performs significantly faster with comparable performance (no more than 1% mAP loss). Although our aim is not the video object detection task, the results in Figure 9 demonstrate the effectiveness of our approach.
Conclusion and Future Work
We propose a DorT framework for cost-effective video object detection/tracking, which is in realtime and with low latency. Object detection/tracking of a video sequence is formulated as a sequential decision problem in the framework. Notably, a light-weight but effective scheduler network is proposed, which is shown to be a generalization of Siamese trackers and a special case of RL. The DorT framework turns out to be effective and strikes a good balance between speed and accuracy. The framework can still be improved in several aspects.
The SiamFC tracker can search for multiple scales to improve performance as in the original paper. More advanced data association methods can be applied by resorting to the state-of-the-art MOT algorithms. Furthermore, there is room to improve the training of the scheduler network to approach the oracle scheduler. These are left as future work.
| 5,204 |
1811.05340
|
2952464165
|
State-of-the-art object detectors and trackers are developing fast. Trackers are in general more efficient than detectors but bear the risk of drifting. A question is hence raised -- how to improve the accuracy of video object detection/tracking by utilizing the existing detectors and trackers within a given time budget? A baseline is frame skipping -- detecting every N-th frame and tracking for the frames in between. This baseline, however, is suboptimal since the detection frequency should depend on the tracking quality. To this end, we propose a scheduler network, which determines to detect or track at a certain frame, as a generalization of Siamese trackers. Although being light-weight and simple in structure, the scheduler network is more effective than the frame skipping baselines and flow-based approaches, as validated on the ImageNet VID dataset in video object detection/tracking.
|
Nevertheless, these methods combine multiple cues (e.g., flow aggregation in detection, and flow-based and object tracking-based tubelet generation) which are complementary but time-consuming. Moreover, they apply global post-processing such as seq-NMS @cite_41 and tubelet NMS @cite_15 which greatly improve the accuracy but are not suitable for a realtime and low latency scenario.
|
{
"abstract": [
"Video object detection is challenging because objects that are easily detected in one frame may be difficult to detect in another frame within the same clip. Recently, there have been major advances for doing object detection in a single image. These methods typically contain three phases: (i) object proposal generation (ii) object classification and (iii) post-processing. We propose a modification of the post-processing phase that uses high-scoring object detections from nearby frames to boost scores of weaker detections within the same clip. We show that our method obtains superior results to state-of-the-art single image object detection techniques. Our method placed 3rd in the video object detection (VID) task of the ImageNet Large Scale Visual Recognition Challenge 2015 (ILSVRC2015).",
"Compared with object detection in static images, object detection in videos is more challenging due to degraded image qualities. An effective way to address this problem is to exploit temporal contexts by linking the same object across video to form tubelets and aggregating classification scores in the tubelets. In this paper, we focus on obtaining high quality object linking results for better classification. Unlike previous methods that link objects by checking boxes between neighboring frames, we propose to link in the same frame. To achieve this goal, we extend prior methods in following aspects: (1) a cuboid proposal network that extracts spatio-temporal candidate cuboids which bound the movement of objects; (2) a short tubelet detection network that detects short tubelets in short video segments; (3) a short tubelet linking algorithm that links temporally-overlapping short tubelets to form long tubelets. Experiments on the ImageNet VID dataset show that our method outperforms both the static image detector and the previous state of the art. In particular, our method improves results by 8.8 over the static image detector for fast moving objects."
],
"cite_N": [
"@cite_41",
"@cite_15"
],
"mid": [
"2282391807",
"2952877559"
]
}
|
Detect or Track: Towards Cost-Effective Video Object Detection/Tracking
|
Convolutional neural network (CNN)-based methods have achieved significant progress in computer vision tasks such as object detection (Ren et al. 2015;Dai et al. 2016;Tang et al. 2018b) and tracking (Held, Thrun, and Savarese 2016;Bertinetto et al. 2016;Nam and Han 2016;Bhat et al. 2018). Following the tracking-by-detection paradigm, most state-of-the-art trackers can be viewed as a local detector of a specified object. Consequently, trackers are generally more efficient than detectors and can obtain precise bounding boxes in subsequent frames if the specified bounding box is accurate. However, as evaluated commonly on benchmark datasets such as OTB (Wu, Lim, and Yang 2015) and VOT (Kristan et al. 2017), trackers are encouraged to track as long as possible. It is non-trivial for trackers to be stopped once they are not confident, although heuristics, such as a threshold of the maximum response value, can be applied. Therefore, trackers bear the risk of drifting.
Besides object detection and tracking, there has recently been a series of studies on video object detection (Kang et al. 2017; Feichtenhofer, Pinz, and Zisserman 2017; Zhu et al. 2017b; Zhu et al. 2017a; Zhu et al. 2018; Chen et al. 2018). Beyond the baseline of detecting each frame individually, state-of-the-art approaches consider the temporal consistency of the detection results via tubelet proposals (Kang et al. 2017), optical flow (Zhu et al. 2017b; Zhu et al. 2017a; Zhu et al. 2018) and regression-based trackers (Feichtenhofer, Pinz, and Zisserman 2017). These approaches, however, are optimized for the detection accuracy of each individual frame. They either do not associate the presence of an object in different frames as a tracklet, or associate only after performing object detection on each frame, which is time-consuming. This paper is motivated by the constraints of practical video analytics scenarios such as autonomous driving and video surveillance. We argue that algorithms applied to these scenarios should be:
• capable of associating an object appearing in different frames, such that the trajectory or velocity of the object can be further inferred;
• in realtime (e.g., over 30 fps) and as fast as possible, such that the deployment cost can be further reduced;
• with low latency, which means producing results as soon as a frame in a video stream has been processed.
Considering these constraints, we focus in this paper on the task of video object detection/tracking (Russakovsky et al. 2017). The task is to detect objects in each frame (similar to the goal of video object detection), with an additional goal of associating an object appearing in different frames.
In order to handle this task under the realtime and low latency constraint, we propose a detect or track (DorT) framework. In this framework, object detection/tracking of a video sequence is formulated as a sequential decision problem -a scheduler network makes a detection/tracking decision for every incoming frame, and then these frames are processed with the detector/tracker accordingly. The architecture is illustrated in Figure 1.
The scheduler network is the most unique part of our framework. It should be light-weight but be able to determine to detect or track. Rather than using heuristic rules (e.g., thresholds of tracking confidence values), we formulate the scheduler as a small CNN by assessing the tracking quality. It is shown to be a generalization of Siamese trackers and a special case of reinforcement learning (RL).
The contributions are summarized as follows:
• We propose the DorT framework, in which object detection/tracking of a video sequence is formulated as a sequential decision problem, while being in realtime and with low latency.
• We propose a light-weight but effective scheduler network, which is shown to be a generalization of Siamese trackers and a special case of RL.
• The proposed DorT framework is more effective than the frame skipping baselines and flow-based approaches, as validated on the ImageNet VID dataset (Russakovsky et al. 2015) in video object detection/tracking.

Figure 1: Detect or track (DorT) framework. The scheduler network compares the current frame t + τ with the keyframe t by evaluating the tracking quality, and determines to detect or track frame t + τ: either frame t + τ is detected by a single-frame detector, or bounding boxes are tracked to frame t + τ from the keyframe t. If detect is chosen, frame t + τ is assigned as the new keyframe, and the boxes in frame t + τ and frame t + τ − 1 are associated by the widely-used Hungarian algorithm (not shown in the figure for conciseness).
Video Object Detection/Tracking
Video object detection/tracking is a task in ILSVRC 2017 (Russakovsky et al. 2017), where the winning entries are optimized for accuracy rather than speed. One winning entry adopts flow aggregation (Zhu et al. 2017a) to improve the detection accuracy. (Wei et al. 2017) combines flow-based (Ilg et al. 2017) and object tracking-based (Nam and Han 2016) tubelet generation (Kang et al. 2017). Nevertheless, these methods combine multiple cues (e.g., flow aggregation in detection, and flow-based and object tracking-based tubelet generation) which are complementary but time-consuming. Moreover, they apply global post-processing such as seq-NMS and tubelet NMS (Tang et al. 2018a), which greatly improves the accuracy but is not suitable for a realtime and low latency scenario.
Video Object Detection
Approaches to video object detection have developed rapidly since the introduction of the ImageNet VID dataset (Russakovsky et al. 2015). (Kang et al. 2017) propose a framework that consists of per-frame proposal generation, bounding box tracking and tubelet re-scoring. (Zhu et al. 2017b) proposes to detect frames sparsely and propagates features with optical flow. (Zhu et al. 2017a) proposes to aggregate features in nearby frames along the motion path to improve the feature quality. Furthermore, (Zhu et al. 2018) proposes a high-performance approach by considering feature aggregation, partial feature updating and adaptive keyframe scheduling based on optical flow. Besides, (Feichtenhofer, Pinz, and Zisserman 2017) proposes to learn detection and tracking using a single network with a multi-task objective. (Chen et al. 2018) proposes to propagate the sparsely detected results through a space-time lattice. All the methods above focus on the accuracy of each individual frame. They either do not associate the presence of an object in different frames as a tracklet, or associate only after performing object detection on each frame, which is time-consuming.
Multiple Object Tracking
Multiple object tracking (MOT) focuses on data association: finding the set of trajectories that best explains the given detections (Leal-Taixé et al. 2014). Existing approaches to MOT fall into two categories: batch and online. Batch-mode approaches pose data association as a global optimization problem, which can be a min-cost max-flow problem (Zhang, Li, and Nevatia 2008; Pirsiavash, Ramanan, and Fowlkes 2011), a continuous energy minimization problem (Milan, Roth, and Schindler 2014) or a graph cut problem (Tang et al. 2016). In contrast, online-mode approaches are only allowed to solve the data association problem with the present and past frames. (Xiang, Alahi, and Savarese 2015) formulates data association as a Markov decision process. (Milan et al. 2017; Sadeghian, Alahi, and Savarese 2017) employ recurrent neural networks (RNNs) for feature representation and data association.
State-of-the-art MOT approaches aim to improve the data association performance given publicly-available detections since the introduction of the MOT challenge (Leal-Taixé et al. 2015). However, we focus on the sequential decision problem of detection or tracking. Although the widely-used Hungarian algorithm is adopted for simplicity and fairness in the experiments, we believe the incorporation of existing MOT approaches can further enhance the accuracy.
Keyframe Scheduler
Researchers have proposed approaches to adaptive keyframe scheduling beyond regular frame skipping in video analytics. (Zhu et al. 2018) proposes to estimate the quality of optical flow, which relies on a time-consuming flow network. (Chen et al. 2018) proposes an easiness measure that considers the size and motion of small objects; it is hand-crafted and, more importantly, follows a detect-then-schedule paradigm that cannot decide between detection and tracking. (Li, Shi, and Lin 2018; Xu et al. 2018) learn to predict the discrepancy between the segmentation map of the current frame and the keyframe, which is only applicable to segmentation tasks.
All the methods above, however, solve an auxiliary task (e.g., flow quality, or discrepancy of segmentation maps) and do not answer the question directly from a classification perspective: is the current frame a keyframe or not? In contrast, we pose video object detection/tracking as a sequential decision problem, and learn directly whether the current frame is a keyframe by assessing the tracking quality. Our formulation is further shown to be a generalization of Siamese trackers and a special case of RL.
The DorT Framework
Video object detection/tracking is formulated as follows. Given a sequence of video frames
F = {f_1, f_2, ..., f_N}, the aim is to obtain bounding boxes B = {b_1, b_2, ..., b_M}, where b_i = {rect_i, fid_i, score_i, id_i}; rect_i denotes the 4-dim bounding box coordinates, and fid_i, score_i and id_i are scalars denoting respectively the frame ID, the confidence score and the object ID.
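For concreteness, one way to represent such an output record in code is sketched below; the field names and the corner-style box format are illustrative choices, not mandated by the paper.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class Box:
    # One output record b_i of the DorT framework.
    rect: Tuple[float, float, float, float]  # box coordinates, assumed (x1, y1, x2, y2)
    fid: int                                 # frame ID
    score: float                             # confidence score
    obj_id: int                              # object (track) ID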
Considering the realtime and low latency constraint, we formulate video object detection/tracking as a sequential decision problem, which consists of four modules: single-frame detector, multi-box tracker, scheduler network and data association. An algorithm summary follows the introduction of the four modules.
Single-Frame Detector
We adopt R-FCN (Dai et al. 2016) as the detector following deep feature flow (DFF) (Zhu et al. 2017b). Our framework, however, is compatible with all single-frame detectors.
Efficient Multi-Box Tracker via RoI Convolution
The SiamFC tracker (Bertinetto et al. 2016) is adopted in our framework. It learns a deep feature extractor during training such that an object is similar to its deformations but different from the background. During testing, the nearby patch with the highest confidence is selected as the tracking result. The tracker is reported to run at 86 fps in the original paper.
Figure 2: RoI convolution. Given targets in keyframe t and search regions in frame t + τ, the corresponding RoIs are cropped from the feature maps and convolved to obtain the response maps. Solid boxes denote detected objects in keyframe t and dashed boxes denote the corresponding search region in frame t + τ. A star denotes the center of its corresponding bounding box. The center of a dashed box is copied from the tracking result in frame t + τ − 1.

Despite its efficiency, there are usually 30 to 50 detected boxes per frame output by R-FCN. It is a natural idea to track only the high-confidence ones and ignore the rest. Such an approach, however, results in a drastic decrease in mAP, since R-FCN detection is not perfect and many true positives with low confidence scores are discarded. We therefore need to track all the detected boxes. It is time-consuming to track 50 boxes without optimization (about 3 fps). In order to speed up the tracking process, we propose to share the feature extraction network of multiple boxes and propose an RoI convolution layer in place of the original cross-correlation layer in SiamFC. Figure 2 is an illustration. Through cropping and convolving on the feature maps, the proposed tracker is over 10x faster than the time-consuming baseline while obtaining comparable accuracy.
Notably, there is no learnable parameter in the RoI convolution layer, and thus we can train the SiamFC tracker following the original settings in (Bertinetto et al. 2016).
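A rough sketch of how multi-box matching on shared feature maps could be implemented is given below. It uses RoI-Align to extract target and search-region features and a valid convolution per box; the crop sizes, the feature stride, and the use of RoI-Align (rather than the paper's exact cropping scheme) are assumptions made for illustration.

import torch
import torch.nn.functional as F
from torchvision.ops import roi_align

def roi_correlation(feat_key, feat_cur, target_rois, search_rois,
                    target_size=6, search_size=22, spatial_scale=1.0 / 8):
    # feat_key, feat_cur: [1, C, H, W] backbone features of the keyframe and
    # the current frame (shared across all boxes).
    # target_rois / search_rois: [K, 5] boxes as (batch_idx, x1, y1, x2, y2).
    targets = roi_align(feat_key, target_rois, (target_size, target_size),
                        spatial_scale=spatial_scale)    # [K, C, 6, 6]
    searches = roi_align(feat_cur, search_rois, (search_size, search_size),
                         spatial_scale=spatial_scale)   # [K, C, 22, 22]
    responses = []
    for t, s in zip(targets, searches):
        # correlate each target with its own search region (valid convolution)
        responses.append(F.conv2d(s.unsqueeze(0), t.unsqueeze(0)))  # [1, 1, 17, 17]
    return torch.cat(responses, dim=0)                  # [K, 1, 17, 17]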
Scheduler Network
The scheduler network is the core of DorT, as our task is formulated as a sequential decision problem. It takes as input the current frame f_{t+τ} and its keyframe f_t, and determines to detect or track, denoted as Scheduler(f_t, f_{t+τ}). We will elaborate this module in the next section.
Data Association
Once the scheduler network determines to detect the current frame, there is a need to associate the previous tracked boxes and the current detected boxes. Hence, a data association algorithm is required. For simplicity and fairness in the paper, the widely-used Hungarian algorithm is adopted. Although it is possible to improve the accuracy by incorporating more advanced data association techniques (Xiang, Alahi, and Savarese 2015;Sadeghian, Alahi, and Savarese 2017), it is not the focus in the paper. The overall architecture of the DorT framework is shown in Figure 1. More details are summarized in Algorithm 1.
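For reference, the box association step can be written with an off-the-shelf Hungarian solver on an IoU cost, as sketched below; the IoU cost and the matching threshold are common choices assumed here rather than specified by the paper.

import numpy as np
from scipy.optimize import linear_sum_assignment

def box_iou(a, b):
    # a, b: boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def associate(prev_boxes, cur_boxes, iou_thresh=0.5):
    # Hungarian matching between previous tracked boxes and current detections
    # (lists of box tuples). Returns matched index pairs and the indices of
    # unmatched current boxes, which would receive new IDs (Algorithm 1, 10-12).
    if not prev_boxes or not cur_boxes:
        return [], list(range(len(cur_boxes)))
    cost = np.array([[1.0 - box_iou(p, c) for c in cur_boxes] for p in prev_boxes])
    rows, cols = linear_sum_assignment(cost)
    matches = [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= 1.0 - iou_thresh]
    matched = {j for _, j in matches}
    unmatched = [j for j in range(len(cur_boxes)) if j not in matched]
    return matches, unmatched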
The Scheduler Network in DorT
Algorithm 1: The Detect or Track (DorT) framework.
Input: A sequence of video frames F = {f_1, f_2, ..., f_N}.
Output: Bounding boxes B = {b_1, b_2, ..., b_M} with IDs, where b_i = {rect_i, fid_i, score_i, id_i}.
1: B ← {}
2: t ← 1                                  (t is the index of the keyframe)
3: Detect f_1 with the single-frame detector.
4: Assign a new ID to each detected box.
5: Add the detected boxes in f_1 to B.
6: for i ← 2 to N do
7:    d ← Scheduler(f_t, f_i)             (decision of the scheduler)
8:    if d = detect then
9:       Detect f_i with the single-frame detector.
10:      Match boxes in f_i and f_{i-1} using the Hungarian algorithm.
11:      Assign new IDs to unmatched boxes in f_i.
12:      Assign the corresponding ID to each matched box in f_i.

Figure 3: Scheduler network. The output feature map of the correlation layer is followed by two convolutional layers and a fully-connected layer with a 2-way softmax. As discussed later, this structure is a generalization of the SiamFC tracker.

The scheduler network in DorT aims to determine to detect or track given a new frame by estimating the quality of the tracked boxes. It should be efficient itself. Rather than training a network from scratch, we propose to reuse part of the tracking network. Firstly, the l-th layer convolutional feature maps of frame t and frame t + τ, denoted respectively as x_l^t and x_l^{t+τ}, are fed into a correlation layer which performs point-wise feature comparison

x_corr^{t,t+τ}(i, j, p, q) = ⟨ x_l^t(i, j), x_l^{t+τ}(i + p, j + q) ⟩        (1)
where −d ≤ p ≤ d and −d ≤ q ≤ d are offsets to compare features in a neighbourhood around the locations (i, j) in the feature map, defined by the maximum displacement d.
Hence, the output of the correlation layer is a feature map x_corr ∈ R^{H_l × W_l × (2d+1)^2}, where H_l and W_l denote respectively the height and width of the l-th layer feature map. The correlation feature map x_corr is then passed through two convolutional layers and a fully-connected layer with a 2-way softmax. The final output of the network is a classification score indicating the probability to detect the current frame. Figure 3 is an illustration of the scheduler network.
Training Data Preparation
Existing groundtruth in the ImageNet VID dataset (Russakovsky et al. 2015) does not contain an indicator of the tracking quality. In this paper, we simulate the tracking process between two sampled frames and label it as detect (0) or track (1) in a strict protocol.
As we have sampled frame t and frame t+τ from the same sequence, we track all the groundtruth bounding boxes using SiamFC from frame t to frame t + τ . If all the groundtruth boxes in frame t + τ are matched with the tracked boxes (e.g., IOU over 0.8), the frame is labeled as track; otherwise, it is labeled as detect. Any emerging or disappearing objects indicates a detect. Several examples are shown in Figure 4.
We have also tried to learn a scheduler for each tracker, but found it difficult to handle high-confidence false detections and non-trivial to merge the decisions of all the trackers. In contrast, the proposed approach to learning a single scheduler is an elegant solution which directly learns the decision rather than an auxiliary target such as the fraction of pixels at which the semantic segmentation labels differ (Li, Shi, and Lin 2018), or the fraction of low-quality flow estimation (Zhu et al. 2018).
Relation to the SiamFC Tracker
The proposed scheduler network can be seen as a generalization of the original SiamFC (Bertinetto et al. 2016). In the correlation layer of SiamFC, the target feature (6×6×128) is convolved with the search region feature (22×22×128) to obtain the response map (17×17×1, which can be equivalently written as 1×1×17^2). Similarly, we can view the correlation layer of the proposed scheduler network (see Eq. 1) as convolutions between multiple target features in the keyframe and their respective nearby search regions in the current frame. The size of a target equals the receptive field of the input feature map of our scheduler. Figure 5 shows several examples of targets. In fact, the targets include all possible patches in a sliding-window manner.
In this sense, the output feature map of the correlation layer x_corr ∈ R^{H_l × W_l × (2d+1)^2} can be regarded as a set of H_l × W_l SiamFC tracking tasks, where the response map of each is 1 × 1 × (2d+1)^2. The correlation feature map is then fed into a small CNN consisting of two convolutional layers and a fully-connected layer.
In summary, the generalization of the proposed scheduler network over SiamFC is twofold:
• SiamFC correlates a target feature with its nearby search region, while our scheduler extends the number of such tasks from one to many.
• SiamFC directly picks the highest value in the correlation feature map as the result, whereas the proposed scheduler fuses the multiple response maps with a CNN.
The validity of the proposed scheduler network is hence clear: it first convolves patches in frame t (examples shown in Figure 5) with their respective nearby regions in frame t+τ, and then fuses the response maps with a CNN, in order to measure the difference between the two frames and, more importantly, to assess the tracking quality. The scheduler is also resistant to small perturbations by inheriting SiamFC's robustness to object deformation.
Relation to Reinforcement Learning
The sequential decision problem can also be formulated in an RL framework, where the action, state, state transition function and reward need to be defined.

Figure 5: Examples of targets. The size of a target equals the receptive field of the input feature map of the scheduler. A target patch might be an object, a part of an object, or entirely background; the "tracking" results of these targets are fused later. Note that the targets include all possible patches in a sliding-window manner, not just the three boxes shown.
Action. The action space A contains two types of actions: {detect, track}. If the decision is detect, the object detector is applied to the current frame; otherwise, the boxes tracked from the keyframe are taken as the results.
State. The state s_{t,τ} is defined as a tuple (x_l^t, x_l^{t+τ}), where x_l^t and x_l^{t+τ} denote the l-th layer convolutional feature maps of frame t and frame t + τ, respectively. Frame t is the keyframe on which the object detector is applied, and frame t + τ is the current frame on which the action is to be determined.
State transition function. After taking action a_{t,τ} in state s_{t,τ}, the next state is obtained according to the action:
• detect. The next state is s_{t+τ,1} = (x_l^{t+τ}, x_l^{t+τ+1}). Frame t + τ is fed to the object detector and is set as the new keyframe.
• track. The next state is s_{t,τ+1} = (x_l^t, x_l^{t+τ+1}). Bounding boxes tracked from the keyframe are taken as the results in frame t + τ. The keyframe t remains unchanged.
As shown above, no matter whether the keyframe is t or t + τ, the task in the next state is to determine the action in frame t + τ + 1.
Reward. The reward function is defined as r(s, a) since it is determined by both the state s and the action a. As illustrated in Figure 4, a labeling mechanism is proposed to obtain the groundtruth label of the tracking quality between two frames (i.e., a certain state s). We denote the groundtruth label as GT (s), which is either detect or track. Hence, the reward function can be defined as follows:
r(s, a) = { 1, if GT(s) = a;  0, otherwise }        (2)
which is based on the consistency between the groundtruth label and the action taken. After defining all the above, the RL problem can be solved via a deep Q network (DQN) (Mnih et al. 2015) with a discount factor γ, penalizing the reward from future time steps. However, training stability is always an issue in RL algorithms (Anschel, Baram, and Shimkin 2017). In this paper, we set γ = 0 such that the agent only cares about the reward from the next time step. Therefore, the DQN becomes a regression network -pushing the predicted action to be the same as the GT action, and the scheduler network is a special case of RL. We empirically observe that the training procedure becomes easier and more stable by setting γ = 0.
Experiments
The DorT framework is evaluated on the ImageNet VID dataset (Russakovsky et al. 2015) in the task of video object detection/tracking. For completeness, we also report results in video object detection.
Experimental Setup
Dataset description. All experiments are conducted on the ImageNet VID dataset (Russakovsky et al. 2015). ImageNet VID is split into a training set of 3862 videos and a test set of 555 videos. There are per-frame bounding box annotations for each video. Furthermore, all presences of a given target across different frames in a video are assigned the same ID.
Evaluation metric. The evaluation metric for video object detection is the extensively used mean average precision (mAP), which is based on a sorted list of bounding boxes in descending order of their scores. A predicted bounding box is considered correct if its IOU with a groundtruth box is over a threshold (e.g., 0.5).
In contrast to the standard mAP which is based on bounding boxes, the mAP for video object detection/tracking is based on a sorted list of tracklets (Russakovsky et al. 2017). A tracklet is a set of bounding boxes with the same ID. Similarly, a tracklet is considered correct if its IOU with a groundtruth tracklet is over a threshold. Typical choices of IOU thresholds for tracklet matching and per-frame bounding box matching are both 0.5. The score of a tracklet is the average score of all its bounding boxes.
Implementation details. Following the settings in (Zhu et al. 2017b), R-FCN (Dai et al. 2016) is trained with a ResNet-101 backbone (He et al. 2016) on the training set.
SiamFC is trained following the original paper (Bertinetto et al. 2016). Instead of training from scratch, however, we initialize the first four convolutional layers with the pretrained parameters from AlexNet (Krizhevsky, Sutskever, and Hinton 2012) and change Conv5 from 3 × 3 to 1 × 1 with the Xavier initializer. Parameters of the first four convolutional layers are fixed during training (He et al. 2018). We only search for one scale and discard the upsampling step in the original SiamFC for efficiency. All images fed into SiamFC are resized to 300 × 500. Moreover, the confidence score of a tracked box (for evaluation) is set equal to that of its corresponding detected box in the keyframe.
The scheduler network takes as input the Conv5 feature of our trained SiamFC. The SGD optimizer is adopted with a learning rate 1e-2, momentum 0.9 and weight decay 5e-4. The batch size is set to 32. During testing, we raise the decision threshold of track to δ = 0.97 (i.e., the scheduler outputs track if the predicted confidence of track is over δ) to ensure conservativeness of the scheduler. Furthermore, since nearby frames look similar, the scheduler is applied every σ frames (where σ is a tunable parameter) to reduce unnecessary computation.
All experiments are conducted on a workstation with an Intel Core i7-4790k CPU and a Titan X GPU. We empirically observe that the detection network and the tracking/scheduler network run at 8.33 fps and 100 fps, respectively. This is because the ResNet-101 backbone is much heavier than AlexNet. Moreover, the speed of the Hungarian algorithm is as high as 667 fps.
Video Object Detection/Tracking
To our knowledge, the most closely related work to ours is (Lan et al. 2016), which handles cost-effective face detection/tracking. Since faces are much easier to track and exhibit less deformation, that work achieves success with non-deep-learning-based detectors and trackers. However, we aim at general object detection/tracking in video, which is much more challenging. We demonstrate the effectiveness of the proposed DorT framework against several strong baselines.
Effectiveness of scheduler. The scheduler network is a core component of our DorT framework. Since SiamFC tracking is more efficient than R-FCN detection, the scheduler should predict track when it is safe for the trackers and be conservative enough to predict detect when there is sufficient change to avoid track drift. We compare our DorT framework with a frame skipping baseline, namely a "fixed scheduler" -R-FCN is performed every σ frames and SiamFC is adopted to track for the frames in between. As aforementioned, our scheduler can also be applied every σ frames to improve efficiency. Moreover, there could be an oracle scheduler -predicting the groundtruth label (detect or track) as shown in Figure 4 during testing. The oracle scheduler is a 100% accurate scheduler in our setting. The results are shown in Figure 6.
We can observe that the frame rate and mAP vary as σ changes. Interestingly, the curves are not monotonic -as the frame rate decreases, the accuracy in mAP is not necessarily higher. In particular, detectors are applied frequently when σ = 1 (the leftmost point of each curve). Associating boxes using the Hungarian algorithm is generally less reliable (given missed detections and false detections) than tracking boxes between two frames. It is also a benefit of the scheduler network -applying tracking only when confident, and thus most boxes are reliably associated. Hence, the curve of the scheduler network is on the upper-right side of that of the fixed scheduler as shown in Figure 6.
However, it can also be observed that there is a certain gap between the curve of the scheduler network and that of the oracle scheduler. Given that the oracle scheduler is a 100% accurate classifier, we analyze the classification accuracy of the scheduler network in Figure 7.

(Figure caption, qualitative results) Red, blue and green boxes denote groundtruth, detected boxes and tracked boxes, respectively. First row: R-FCN is applied in the keyframe. Second row: the scheduler determines to track since it is confident. Third row: the scheduler predicts track in the first image although the red panda moves; however, it determines to detect in the second image as the cat moves significantly and cannot be tracked.

Let us take the σ = 10 case as an example. Although the classification accuracy is only 32.3%, the false positive rate (i.e., misclassifying a detect case as track) is as low as 1.9%. Because we empirically find that the mAP drops drastically if the scheduler mistakenly predicts track, our scheduler network is made conservative: track only when confident and detect if unsure. Figure 8 shows some qualitative results.
Effectiveness of RoI convolution. Trackers are optimized for the crop-and-resize case (Bertinetto et al. 2016): the target and search region are cropped and resized to a fixed size before matching. This is a sensible choice since the tracking algorithm is then unaffected by the original size of the target. It is, however, slow in the multi-box case, and we propose RoI convolution as an efficient approximation. As shown in Figure 6, crop-and-resize SiamFC is even slower than detection: the overall running time is 3 fps. Notably, its mAP is 56.5%, which is roughly the same as that of our DorT framework empowered with RoI convolution. Our DorT framework, however, runs at 54 fps when σ = 10. RoI convolution thus obtains an over 10x speed boost while retaining mAP.
Comparison with existing methods. Deep feature flow (Zhu et al. 2017b) focuses on video object detection without tracking. We can, however, associate its predicted bounding boxes with per frame data association using the Hungarian algorithm. The results are shown in Figure 6. It can be observed that our framework performs significantly better than deep feature flow in video object detection/tracking.
Concurrent works that deal with video object detection/tracking are the submitted entries in ILSVRC 2017 (Wei et al. 2017; Russakovsky et al. 2017). As discussed in the Related Work section, these methods aim only to improve the mAP by adopting complicated methods and post-processing, leading to inefficient solutions without guaranteeing low latency. Their reported results on the test set range from 51% to 65% mAP. Our proposed DorT, notably, achieves 57% mAP on the validation set, which is comparable to the existing methods in magnitude, but is much more principled and efficient.
Video Object Detection
We also evaluate our DorT framework in video object detection for completeness, by removing the predicted object ID. Our DorT framework is compared against deep feature flow (Zhu et al. 2017b), D&T (Feichtenhofer, Pinz, and Zisserman 2017), high performance video object detection (VOD) (Zhu et al. 2018) and ST-Lattice (Chen et al. 2018). The results are shown in Figure 9. It can be observed that D&T and high performance VOD manage to achieve a speed-accuracy balance. They obtain higher results but cannot fit into realtime (over 30 fps) scenarios. ST-Lattice, although being fast and accurate, adopts detection results in future frames and is thus not suitable in a low latency scenario. As compared with deep feature flow, our DorT framework performs significantly faster with comparable performance (no more than 1% mAP loss). Although our aim is not the video object detection task, the results in Figure 9 demonstrate the effectiveness of our approach.
Conclusion and Future Work
We propose a DorT framework for cost-effective video object detection/tracking, which is in realtime and with low latency. Object detection/tracking of a video sequence is formulated as a sequential decision problem in the framework. Notably, a light-weight but effective scheduler network is proposed, which is shown to be a generalization of Siamese trackers and a special case of RL. The DorT framework turns out to be effective and strikes a good balance between speed and accuracy. The framework can still be improved in several aspects.
The SiamFC tracker can search for multiple scales to improve performance as in the original paper. More advanced data association methods can be applied by resorting to the state-of-the-art MOT algorithms. Furthermore, there is room to improve the training of the scheduler network to approach the oracle scheduler. These are left as future work.
| 5,204 |
1811.05340
|
2952464165
|
State-of-the-art object detectors and trackers are developing fast. Trackers are in general more efficient than detectors but bear the risk of drifting. A question is hence raised -- how to improve the accuracy of video object detection/tracking by utilizing the existing detectors and trackers within a given time budget? A baseline is frame skipping -- detecting every N-th frame and tracking for the frames in between. This baseline, however, is suboptimal since the detection frequency should depend on the tracking quality. To this end, we propose a scheduler network, which determines to detect or track at a certain frame, as a generalization of Siamese trackers. Although being light-weight and simple in structure, the scheduler network is more effective than the frame skipping baselines and flow-based approaches, as validated on the ImageNet VID dataset in video object detection/tracking.
|
Approaches to video object detection have been developed rapidly since the introduction of the ImageNet VID dataset @cite_25. @cite_23 @cite_31 propose a framework that consists of per-frame proposal generation, bounding box tracking and tubelet re-scoring. @cite_22 proposes to detect frames sparsely and propagates features with optical flow. @cite_38 proposes to aggregate features in nearby frames along the motion path to improve the feature quality. Furthermore, @cite_21 proposes a high-performance approach by considering feature aggregation, partial feature updating and adaptive keyframe scheduling based on optical flow. Besides, @cite_37 proposes to learn detection and tracking using a single network with a multi-task objective. @cite_39 proposes to propagate the sparsely detected results through a space-time lattice. All the methods above focus on the accuracy of each individual frame. They either do not associate the presence of an object in different frames as a tracklet, or associate after performing object detection on each frame, which is time-consuming.
|
{
"abstract": [
"Extending state-of-the-art object detectors from image to video is challenging. The accuracy of detection suffers from degenerated object appearances in videos, e.g., motion blur, video defocus, rare poses, etc. Existing work attempts to exploit temporal information on box level, but such methods are not trained end-to-end. We present flow-guided feature aggregation, an accurate and end-to-end learning framework for video object detection. It leverages temporal coherence on feature level instead. It improves the per-frame features by aggregation of nearby features along the motion paths, and thus improves the video recognition accuracy. Our method significantly improves upon strong single-frame baselines in ImageNet VID, especially for more challenging fast moving objects. Our framework is principled, and on par with the best engineered systems winning the ImageNet VID challenges 2016, without additional bells-and-whistles. The proposed method, together with Deep Feature Flow, powered the winning entry of ImageNet VID challenges 2017. The code is available at this https URL.",
"Recent approaches for high accuracy detection and tracking of object categories in video consist of complex multistage solutions that become more cumbersome each year. In this paper we propose a ConvNet architecture that jointly performs detection and tracking, solving the task in a simple and effective way. Our contributions are threefold: (i) we set up a ConvNet architecture for simultaneous detection and tracking, using a multi-task objective for frame-based object detection and across-frame track regression; (ii) we introduce correlation features that represent object co-occurrences across time to aid the ConvNet during tracking; and (iii) we link the frame level detections based on our across-frame tracklets to produce high accuracy detections at the video level. Our ConvNet architecture for spatiotemporal object detection is evaluated on the large-scale ImageNet VID dataset where it achieves state-of-the-art results. Our approach provides better single model performance than the winning method of the last ImageNet challenge while being conceptually much simpler. Finally, we show that by increasing the temporal stride we can dramatically increase the tracker speed.",
"",
"There has been significant progresses for image object detection in recent years. Nevertheless, video object detection has received little attention, although it is more challenging and more important in practical scenarios. Built upon the recent works, this work proposes a unified approach based on the principle of multi-frame end-to-end learning of features and cross-frame motion. Our approach extends prior works with three new techniques and steadily pushes forward the performance envelope (speed-accuracy tradeoff), towards high performance video object detection.",
"High-performance object detection relies on expensive convolutional networks to compute features, often leading to significant challenges in applications, e.g. those that require detecting objects from video streams in real time. The key to this problem is to trade accuracy for efficiency in an effective way, i.e. reducing the computing cost while maintaining competitive performance. To seek a good balance, previous efforts usually focus on optimizing the model architectures. This paper explores an alternative approach, that is, to reallocate the computation over a scale-time space. The basic idea is to perform expensive detection sparsely and propagate the results across both scales and time with substantially cheaper networks, by exploiting the strong correlations among them. Specifically, we present a unified framework that integrates detection, temporal propagation, and across-scale refinement on a Scale-Time Lattice. On this framework, one can explore various strategies to balance performance and cost. Taking advantage of this flexibility, we further develop an adaptive scheme with the detector invoked on demand and thus obtain improved tradeoff. On ImageNet VID dataset, the proposed method can achieve a competitive mAP 79.6 at 20 fps, or 79.0 at 62 fps as a performance speed tradeoff.",
"Deep Convolution Neural Networks (CNNs) have shown impressive performance in various vision tasks such as image classification, object detection and semantic segmentation. For object detection, particularly in still images, the performance has been significantly increased last year thanks to powerful deep networks (e.g. GoogleNet) and detection frameworks (e.g. Regions with CNN features (RCNN)). The lately introduced ImageNet [6] task on object detection from video (VID) brings the object detection task into the video domain, in which objects' locations at each frame are required to be annotated with bounding boxes. In this work, we introduce a complete framework for the VID task based on still-image object detection and general object tracking. Their relations and contributions in the VID task are thoroughly studied and evaluated. In addition, a temporal convolution network is proposed to incorporate temporal information to regularize the detection results and shows its effectiveness for the task. Code is available at https: github.com myfavouritekk vdetlib.",
"Object detection in videos has drawn increasing attention recently with the introduction of the large-scale ImageNet VID dataset. Different from object detection in static images, temporal information in videos is vital for object detection. To fully utilize temporal information, state-of-the-art methods [15, 14] are based on spatiotemporal tubelets, which are essentially sequences of associated bounding boxes across time. However, the existing methods have major limitations in generating tubelets in terms of quality and efficiency. Motion-based [14] methods are able to obtain dense tubelets efficiently, but the lengths are generally only several frames, which is not optimal for incorporating long-term temporal information. Appearance-based [15] methods, usually involving generic object tracking, could generate long tubelets, but are usually computationally expensive. In this work, we propose a framework for object detection in videos, which consists of a novel tubelet proposal network to efficiently generate spatiotemporal proposals, and a Long Short-term Memory (LSTM) network that incorporates temporal information from tubelet proposals for achieving high object detection accuracy in videos. Experiments on the large-scale ImageNet VID dataset demonstrate the effectiveness of the proposed framework for object detection in videos.",
"The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements."
],
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_22",
"@cite_21",
"@cite_39",
"@cite_23",
"@cite_31",
"@cite_25"
],
"mid": [
"2604445072",
"2756784878",
"",
"2772982658",
"2797831031",
"2335901184",
"2590174509",
"2117539524"
]
}
|
Detect or Track: Towards Cost-Effective Video Object Detection/Tracking
|
Convolutional neural network (CNN)-based methods have achieved significant progress in computer vision tasks such as object detection (Ren et al. 2015;Dai et al. 2016;Tang et al. 2018b) and tracking (Held, Thrun, and Savarese 2016;Bertinetto et al. 2016;Nam and Han 2016;Bhat et al. 2018). Following the tracking-by-detection paradigm, most state-of-the-art trackers can be viewed as a local detector of a specified object. Consequently, trackers are generally more efficient than detectors and can obtain precise bounding boxes in subsequent frames if the specified bounding box is accurate. However, as evaluated commonly on benchmark datasets such as OTB (Wu, Lim, and Yang 2015) and VOT (Kristan et al. 2017), trackers are encouraged to track as long as possible. It is non-trivial for trackers to be stopped once they are not confident, although heuristics, such as a threshold of the maximum response value, can be applied. Therefore, trackers bear the risk of drifting.
Besides object detection and tracking, there has recently been a series of studies on video object detection (Kang et al. 2017; Feichtenhofer, Pinz, and Zisserman 2017; Zhu et al. 2017b; Zhu et al. 2017a; Zhu et al. 2018; Chen et al. 2018). Beyond the baseline of detecting each frame individually, state-of-the-art approaches consider the temporal consistency of the detection results via tubelet proposals (Kang et al. 2017), optical flow (Zhu et al. 2017b; Zhu et al. 2017a; Zhu et al. 2018) and regression-based trackers (Feichtenhofer, Pinz, and Zisserman 2017). These approaches, however, are optimized for the detection accuracy of each individual frame. They either do not associate the presence of an object in different frames as a tracklet, or associate after performing object detection on each frame, which is time-consuming. This paper is motivated by the constraints from practical video analytics scenarios such as autonomous driving and video surveillance. We argue that algorithms applied to these scenarios should be:
• capable of associating an object appearing in different frames, such that the trajectory or velocity of the object can be further inferred. • in realtime (e.g., over 30 fps) and as fast as possible, such that the deployment cost can be further reduced. • with low latency, which means to produce results once a frame in a video stream has been processed. Considering these constraints, we focus in this paper on the task of video object detection/tracking (Russakovsky et al. 2017). The task is to detect objects in each frame (similar to the goal of video object detection), with an additional goal of associating an object appearing in different frames.
In order to handle this task under the realtime and low latency constraint, we propose a detect or track (DorT) framework. In this framework, object detection/tracking of a video sequence is formulated as a sequential decision problem -a scheduler network makes a detection/tracking decision for every incoming frame, and then these frames are processed with the detector/tracker accordingly. The architecture is illustrated in Figure 1.
The scheduler network is the most unique part of our framework. It should be light-weight but be able to determine to detect or track. Rather than using heuristic rules (e.g., thresholds of tracking confidence values), we formulate the scheduler as a small CNN by assessing the tracking quality. It is shown to be a generalization of Siamese trackers and a special case of reinforcement learning (RL).
The contributions are summarized as follows: • We propose the DorT framework, in which the object detection/tracking of a video sequence is formulated as a sequential decision problem, while being in realtime and with low latency. • We propose a light-weight but effective scheduler network, which is shown to be a generalization of Siamese trackers and a special case of RL. • The proposed DorT framework is more effective than the frame skipping baselines and flow-based approaches, as validated on the ImageNet VID dataset (Russakovsky et al. 2015) in video object detection/tracking.
Figure 1 (caption): Detect or track (DorT) framework. The scheduler network compares the current frame t + τ with the keyframe t by evaluating the tracking quality, and determines to detect or track frame t + τ: either frame t + τ is detected by a single-frame detector, or bounding boxes are tracked to frame t + τ from the keyframe t. If detect is chosen, frame t + τ is assigned as the new keyframe, and the boxes in frame t + τ and frame t + τ − 1 are associated by the widely-used Hungarian algorithm (not shown in the figure for conciseness).
Video Object Detection/Tracking
Video object detection/tracking is a task in ILSVRC 2017 (Russakovsky et al. 2017), where the winning entries are optimized for accuracy rather than speed. One winning entry adopts flow aggregation (Zhu et al. 2017a) to improve the detection accuracy. (Wei et al. 2017) combines flow-based (Ilg et al. 2017) and object tracking-based (Nam and Han 2016) tubelet generation (Kang et al. 2017). Nevertheless, these methods combine multiple cues (e.g., flow aggregation in detection, and flow-based and object tracking-based tubelet generation) which are complementary but time-consuming. Moreover, they apply global postprocessing such as seq-NMS and tubelet NMS (Tang et al. 2018a) which greatly improve the accuracy but are not suitable for a realtime and low latency scenario.
Video Object Detection
Approaches to video object detection have developed rapidly since the introduction of the ImageNet VID dataset (Russakovsky et al. 2015). (Kang et al. 2017) propose a framework that consists of per-frame proposal generation, bounding box tracking and tubelet re-scoring. (Zhu et al. 2017b) proposes to detect frames sparsely and propagate features with optical flow. (Zhu et al. 2017a) proposes to aggregate features in nearby frames along the motion path to improve the feature quality. Furthermore, (Zhu et al. 2018) proposes a high-performance approach by considering feature aggregation, partial feature updating and adaptive keyframe scheduling based on optical flow. Besides, (Feichtenhofer, Pinz, and Zisserman 2017) proposes to learn detection and tracking using a single network with a multi-task objective. (Chen et al. 2018) proposes to propagate the sparsely detected results through a space-time lattice. All the methods above focus on the accuracy of each individual frame. They either do not associate the presence of an object in different frames as a tracklet, or associate after performing object detection on each frame, which is time-consuming.
Multiple Object Tracking
Multiple object tracking (MOT) focuses on data association: finding the set of trajectories that best explains the given detections (Leal-Taixé et al. 2014). Existing approaches to MOT fall into two categories: batch and online mode. Batch mode approaches pose data association as a global optimization problem, which can be a min-cost max-flow problem (Zhang, Li, and Nevatia 2008; Pirsiavash, Ramanan, and Fowlkes 2011), a continuous energy minimization problem (Milan, Roth, and Schindler 2014) or a graph cut problem (Tang et al. 2016). Contrarily, online mode approaches are only allowed to solve the data association problem with the present and past frames. (Xiang, Alahi, and Savarese 2015) formulates data association as a Markov decision process. (Milan et al. 2017; Sadeghian, Alahi, and Savarese 2017) employ recurrent neural networks (RNNs) for feature representation and data association.
State-of-the-art MOT approaches aim to improve the data association performance given publicly-available detections since the introduction of the MOT challenge (Leal-Taixé et al. 2015). However, we focus on the sequential decision problem of detection or tracking. Although the widely-used Hungarian algorithm is adopted for simplicity and fairness in the experiments, we believe the incorporation of existing MOT approaches can further enhance the accuracy.
Keyframe Scheduler
Researchers have proposed approaches to adaptive keyframe scheduling beyond regular frame skipping in video analytics. (Zhu et al. 2018) proposes to estimate the quality of optical flow, which relies on the time-consuming flow network. (Chen et al. 2018) proposes an easiness measure to consider the size and motion of small objects, which is hand-crafted and more importantly, it is a detect-then-schedule paradigm but cannot determine to detect or track. (Li, Shi, and Lin 2018;Xu et al. 2018) learn to predict the discrepancy between the segmentation map of the current frame and the keyframe, which are only applicable to segmentation tasks.
All the methods above, however, solve an auxiliary task (e.g., flow quality, or discrepancy of segmentation maps) but do not answer the question directly in a classification perspective -is the current frame a keyframe or not? In contrast, we pose video object detection/tracking as a sequential decision problem, and learn directly whether the current frame is a keyframe by assessing the tracking quality. Our formulation is further shown as a generalization of Siamese trackers and a special case of RL.
The DorT Framework
Video object detection/tracking is formulated as follows. Given a sequence of video frames $F = \{f_1, f_2, \ldots, f_N\}$, the aim is to obtain bounding boxes $B = \{b_1, b_2, \ldots, b_M\}$, where $b_i = \{rect_i, fid_i, score_i, id_i\}$: $rect_i$ denotes the 4-dim bounding box coordinates, and $fid_i$, $score_i$ and $id_i$ are scalars denoting respectively the frame ID, the confidence score and the object ID.
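For concreteness, the per-box record $b_i$ could be represented by a small structure such as the following sketch (the container type and field types are illustrative assumptions; the paper does not prescribe an implementation):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TrackedBox:
    """One element b_i of the output set B."""
    rect: Tuple[float, float, float, float]  # 4-dim box coordinates (x1, y1, x2, y2)
    fid: int      # frame ID in which the box appears
    score: float  # detection confidence (tracked boxes inherit the keyframe score)
    id: int       # object ID shared by all boxes of the same tracklet
```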
Considering the realtime and low latency constraint, we formulate video object detection/tracking as a sequential decision problem, which consists of four modules: single-frame detector, multi-box tracker, scheduler network and data association. An algorithm summary follows the introduction of the four modules.
Single-Frame Detector
We adopt R-FCN (Dai et al. 2016) as the detector following deep feature flow (DFF) (Zhu et al. 2017b). Our framework, however, is compatible with all single-frame detectors.
Efficient Multi-Box Tracker via RoI Convolution
The SiamFC tracker (Bertinetto et al. 2016) is adopted in our framework. It learns a deep feature extractor during training such that an object is similar to its deformations but different from the background. During testing, the nearby patch with the highest confidence is selected as the tracking result. The tracker is reported to run at 86 fps in the original paper.
Despite its efficiency, there are usually 30 to 50 detected boxes per frame output by R-FCN. It is a natural idea to track only the high-confidence ones and ignore the rest. Such an approach, however, results in a drastic decrease in mAP since R-FCN detection is not perfect and many true positives with low confidence scores are discarded. We therefore need to track all the detected boxes. It is time-consuming to track 50 boxes without optimization (about 3 fps). In order to speed up the tracking process, we propose to share the feature extraction network among the boxes and introduce an RoI convolution layer in place of the original cross-correlation layer in SiamFC. Figure 2 is an illustration. Through cropping and convolving on the feature maps, the proposed tracker is over 10x faster than the time-consuming baseline while obtaining comparable accuracy.
Figure 2 (caption): RoI convolution. Given targets in keyframe t and search regions in frame t + τ, the corresponding RoIs are cropped from the feature maps and convolved to obtain the response maps. Solid boxes denote detected objects in keyframe t and dashed boxes denote the corresponding search regions in frame t + τ. A star denotes the center of its corresponding bounding box. The center of a dashed box is copied from the tracking result in frame t + τ − 1.
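The RoI convolution idea (compute backbone features once per frame, crop an RoI per detected box, and correlate each target crop over its search crop) can be sketched as follows; the tensor layout, feature-map-space crop coordinates, and the plain sliding-window correlation are illustrative assumptions rather than the authors' exact implementation:

```python
import numpy as np

def roi_correlate(feat_key, feat_cur, target_roi, search_roi):
    """Cross-correlate one target crop (from the keyframe feature map) over
    its search crop (from the current-frame feature map).

    feat_key, feat_cur: (C, H, W) backbone feature maps, computed once per frame.
    target_roi, search_roi: (x1, y1, x2, y2) in feature-map coordinates,
        with the search crop assumed larger than the target crop.
    Returns the 2-D response map of the correlation.
    """
    tx1, ty1, tx2, ty2 = target_roi
    sx1, sy1, sx2, sy2 = search_roi
    target = feat_key[:, ty1:ty2, tx1:tx2]          # (C, th, tw)
    search = feat_cur[:, sy1:sy2, sx1:sx2]          # (C, sh, sw)
    _, th, tw = target.shape
    _, sh, sw = search.shape
    resp = np.zeros((sh - th + 1, sw - tw + 1))
    for dy in range(resp.shape[0]):                 # sliding-window correlation
        for dx in range(resp.shape[1]):
            resp[dy, dx] = np.sum(search[:, dy:dy + th, dx:dx + tw] * target)
    return resp
```

The peak of the returned response map gives the displacement of the box from the keyframe to the current frame; since the backbone features are extracted only once per frame, tracking 30 to 50 boxes reduces to 30 to 50 cheap cropped correlations instead of as many full SiamFC forward passes.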
Notably, there is no learnable parameter in the RoI convolution layer, and thus we can train the SiamFC tracker following the original settings in (Bertinetto et al. 2016).
Scheduler Network
The scheduler network is the core of DorT, as our task is formulated as a sequential decision problem. It takes as input the current frame f t+τ and its keyframe f t , and determines to detect or track, denoted as Scheduler(f t , f t+τ ). We will elaborate this module in the next section.
Data Association
Once the scheduler network determines to detect the current frame, there is a need to associate the previous tracked boxes and the current detected boxes. Hence, a data association algorithm is required. For simplicity and fairness in the paper, the widely-used Hungarian algorithm is adopted. Although it is possible to improve the accuracy by incorporating more advanced data association techniques (Xiang, Alahi, and Savarese 2015;Sadeghian, Alahi, and Savarese 2017), it is not the focus in the paper. The overall architecture of the DorT framework is shown in Figure 1. More details are summarized in Algorithm 1.
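A minimal sketch of this association step, using an IoU-based cost between boxes tracked into the previous frame and boxes detected in the current frame (the IoU gating threshold and the cost definition are illustrative assumptions; the text only specifies that the Hungarian algorithm is used):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(prev_boxes, new_boxes, iou_thresh=0.5):
    """Hungarian matching between boxes tracked into the previous frame and
    boxes detected in the current frame. Returns (matches, unmatched_new)."""
    if not prev_boxes or not new_boxes:
        return [], list(range(len(new_boxes)))
    cost = np.array([[1.0 - iou(p, n) for n in new_boxes] for p in prev_boxes])
    rows, cols = linear_sum_assignment(cost)      # optimal assignment
    matches = [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_thresh]
    matched_new = {c for _, c in matches}
    unmatched_new = [c for c in range(len(new_boxes)) if c not in matched_new]
    return matches, unmatched_new
```

Matched detections inherit the object ID of their predecessors and unmatched detections start new tracklets, mirroring steps 10-12 of Algorithm 1.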
The Scheduler Network in DorT
The scheduler network in DorT aims to determine to detect or track given a new frame by estimating the quality of the tracked boxes. It should be efficient itself. Rather than training a network from scratch, we propose to reuse part of the tracking network. Firstly, the l-th layer convolutional feature maps of frame t and frame t + τ, denoted respectively as $x_l^t$ and $x_l^{t+\tau}$, are fed into a correlation layer which performs point-wise feature comparison

$$x_{corr}^{t,t+\tau}(i, j, p, q) = \left\langle x_l^t(i, j),\; x_l^{t+\tau}(i + p, j + q) \right\rangle \tag{1}$$

where $-d \le p \le d$ and $-d \le q \le d$ are offsets to compare features in a neighbourhood around the location (i, j) in the feature map, defined by the maximum displacement d. Hence, the output of the correlation layer is a feature map $x_{corr} \in \mathbb{R}^{H_l \times W_l \times (2d+1)^2}$, where $H_l$ and $W_l$ denote respectively the height and width of the l-th layer feature map. The correlation feature map $x_{corr}$ is then passed through two convolutional layers and a fully-connected layer with a 2-way softmax. The final output of the network is a classification score indicating the probability to detect the current frame. Figure 3 is an illustration of the scheduler network.

Figure 3 (caption): Scheduler network. The output feature map of the correlation layer is followed by two convolutional layers and a fully-connected layer with a 2-way softmax. As discussed later, this structure is a generalization of the SiamFC tracker.

Algorithm 1: The Detect or Track (DorT) Framework
Input: a sequence of video frames F = {f_1, f_2, ..., f_N}.
Output: bounding boxes B = {b_1, b_2, ..., b_M} with ID, where b_i = {rect_i, fid_i, score_i, id_i}.
1: B ← {}
2: t ← 1 (t is the index of the keyframe)
3: Detect f_1 with the single-frame detector.
4: Assign new IDs to the detected boxes.
5: Add the detected boxes in f_1 to B.
6: for i ← 2 to N do
7:   d ← Scheduler(f_t, f_i) (decision of the scheduler)
8:   if d = detect then
9:     Detect f_i with the single-frame detector.
10:    Match boxes in f_i and f_{i−1} using the Hungarian algorithm.
11:    Assign new IDs to unmatched boxes in f_i.
12:    Assign corresponding IDs to matched boxes in f_i.
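To make the point-wise comparison of Eq. 1 concrete, a minimal numpy sketch of the correlation layer follows; zero padding at the feature-map borders and the channel-wise inner product are assumptions about details the text leaves open:

```python
import numpy as np

def correlation_layer(x_key, x_cur, d):
    """Point-wise correlation between two (C, H, W) feature maps.

    For every location (i, j) and displacement (p, q) with |p|, |q| <= d,
    computes <x_key[:, i, j], x_cur[:, i+p, j+q]>, giving an output of
    shape (H, W, (2d+1)**2) as in Eq. 1.
    """
    C, H, W = x_key.shape
    out = np.zeros((H, W, (2 * d + 1) ** 2))
    padded = np.pad(x_cur, ((0, 0), (d, d), (d, d)))   # zero-pad the borders
    k = 0
    for p in range(-d, d + 1):
        for q in range(-d, d + 1):
            shifted = padded[:, d + p: d + p + H, d + q: d + q + W]
            out[:, :, k] = np.sum(x_key * shifted, axis=0)  # channel-wise dot product
            k += 1
    return out
```

The resulting H_l × W_l × (2d+1)^2 map is what the two convolutional layers and the 2-way softmax of Figure 3 consume.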
Training Data Preparation
Existing groundtruth in the ImageNet VID dataset (Russakovsky et al. 2015) does not contain an indicator of the tracking quality. In this paper, we simulate the tracking process between two sampled frames and label it as detect (0) or track (1) in a strict protocol.
As we have sampled frame t and frame t+τ from the same sequence, we track all the groundtruth bounding boxes using SiamFC from frame t to frame t + τ . If all the groundtruth boxes in frame t + τ are matched with the tracked boxes (e.g., IOU over 0.8), the frame is labeled as track; otherwise, it is labeled as detect. Any emerging or disappearing objects indicates a detect. Several examples are shown in Figure 4.
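The labeling protocol can be summarized by the following sketch; the IoU threshold of 0.8 comes from the text, while pairing boxes by groundtruth object ID and the track_box hook standing in for the SiamFC tracking step are assumptions made for illustration:

```python
def label_frame_pair(gt_t, gt_t_tau, track_box, iou_thresh=0.8):
    """Label a (keyframe t, frame t+tau) pair as 'track' or 'detect'.

    gt_t / gt_t_tau: dicts mapping groundtruth object ID -> box in the two frames.
    track_box: hypothetical hook that runs SiamFC on one groundtruth box of
        frame t and returns the corresponding tracked box in frame t+tau.
    """
    # Any emerging or disappearing object forces a 'detect' label.
    if set(gt_t) != set(gt_t_tau):
        return "detect"
    for obj_id, box_t in gt_t.items():
        tracked = track_box(box_t)
        # iou(): same helper as in the association sketch above.
        if iou(tracked, gt_t_tau[obj_id]) < iou_thresh:
            return "detect"
    return "track"
```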
We have also tried to learn a scheduler for each tracker, but found it difficult to handle high-confidence false detections and non-trivial to merge the decisions of all the trackers. In contrast, the proposed approach to learning a single scheduler is an elegant solution which directly learns the decision rather than an auxiliary target such as the fraction of pixels at which the semantic segmentation labels differ (Li, Shi, and Lin 2018), or the fraction of low-quality flow estimation (Zhu et al. 2018).
Relation to the SiamFC Tracker
The proposed scheduler network can be seen as a generalization of the original SiamFC (Bertinetto et al. 2016). In the correlation layer of SiamFC, the target feature (6×6×128) is convolved with the search region feature (22×22×128) to obtain the response map (17×17×1, which can be equivalently written as $1 \times 1 \times 17^2$). Similarly, we can view the correlation layer of the proposed scheduler network (see Eq. 1) as convolutions between multiple target features in the keyframe and their respective nearby search regions in the current frame. The size of a target equals the receptive field of the input feature map of our scheduler. Figure 5 shows several examples of targets. Actually, however, targets include all possible patches in a sliding window manner.
In this sense, the output feature map of the correlation layer $x_{corr} \in \mathbb{R}^{H_l \times W_l \times (2d+1)^2}$ can be regarded as a set of $H_l \times W_l$ SiamFC tracking tasks, where the response map of each is $1 \times 1 \times (2d+1)^2$. The correlation feature map is then fed into a small CNN consisting of two convolutional layers and a fully-connected layer.
In summary, the generalization of the proposed scheduler network over SiamFC is two-fold: • SiamFC correlates a target feature with its nearby search region, while our scheduler extends the number of tasks from one to many. • SiamFC directly picks the highest value in the correlation feature map as the result, whereas the proposed scheduler fuses the multiple response maps with a CNN. The validity of the proposed scheduler network is hence clear - it first convolves patches in frame t (examples shown in Figure 5) with their respective nearby regions in frame t + τ, and then fuses the response maps with a CNN, in order to measure the difference between the two frames and, more importantly, to assess the tracking quality. The scheduler is also resistant to small perturbations by inheriting SiamFC's robustness to object deformation.
Relation to Reinforcement Learning
The sequential decision problem can also be formulated in an RL framework, where the action, state, state transition function and reward need to be defined.
Figure 5 (caption): The size of a target equals the receptive field of the input feature map of the scheduler. As shown, a target patch might be an object, a part of an object, or totally background. The "tracking" results of these targets will be fused later. It should be noted that targets include all possible patches in a sliding window manner, not just the three boxes shown.
Action. The action space A contains two types of actions: {detect, track}. If the decision is detect, object detector is applied to the current frame; otherwise, boxes tracked from the keyframe are taken as the results.
State. The state $s_{t,\tau}$ is defined as a tuple $(x_l^t, x_l^{t+\tau})$, where $x_l^t$ and $x_l^{t+\tau}$ denote the l-th layer convolutional feature maps of frame t and frame t + τ, respectively. Frame t is the keyframe on which the object detector is applied, and frame t + τ is the current frame for which the action is to be determined.
State transition function. After taking action $a_{t,\tau}$ in state $s_{t,\tau}$, the next state is obtained according to the action: • detect. The next state is $s_{t+\tau,1} = (x_l^{t+\tau}, x_l^{t+\tau+1})$. Frame t + τ is fed to the object detector and is set as the new keyframe. • track. The next state is $s_{t,\tau+1} = (x_l^t, x_l^{t+\tau+1})$. Bounding boxes tracked from the keyframe are taken as the results in frame t + τ. The keyframe t remains unchanged. In either case, whether the keyframe is t or t + τ, the task in the next state is to determine the action for frame t + τ + 1.
Reward. The reward function is defined as r(s, a) since it is determined by both the state s and the action a. As illustrated in Figure 4, a labeling mechanism is proposed to obtain the groundtruth label of the tracking quality between two frames (i.e., a certain state s). We denote the groundtruth label as GT(s), which is either detect or track. Hence, the reward function can be defined as follows:

$$r(s, a) = \begin{cases} 1, & GT(s) = a \\ 0, & GT(s) \neq a \end{cases} \tag{2}$$
which is based on the consistency between the groundtruth label and the action taken. After defining all the above, the RL problem can be solved via a deep Q network (DQN) (Mnih et al. 2015) with a discount factor γ, penalizing the reward from future time steps. However, training stability is always an issue in RL algorithms (Anschel, Baram, and Shimkin 2017). In this paper, we set γ = 0 such that the agent only cares about the reward from the next time step. Therefore, the DQN becomes a regression network -pushing the predicted action to be the same as the GT action, and the scheduler network is a special case of RL. We empirically observe that the training procedure becomes easier and more stable by setting γ = 0.
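With the reward of Eq. 2 and γ = 0, the Q-learning target collapses to the immediate reward, so training the scheduler reduces to two-class classification against the labels of Figure 4; a minimal sketch of that reduction:

```python
def reward(gt_label, action):
    """Eq. 2: 1 if the action agrees with the groundtruth label, else 0."""
    return 1.0 if action == gt_label else 0.0

def q_target(gt_label, action, gamma=0.0, max_next_q=0.0):
    """Q-learning target r + gamma * max_a' Q(s', a'); with gamma = 0 it is
    just the immediate reward, i.e. a classification-style target."""
    return reward(gt_label, action) + gamma * max_next_q
```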
Experiments
The DorT framework is evaluated on the ImageNet VID dataset (Russakovsky et al. 2015) in the task of video object detection/tracking. For completeness, we also report results in video object detection.
Experimental Setup
Dataset description. All experiments are conducted on the ImageNet VID dataset (Russakovsky et al. 2015). ImageNet VID is split into a training set of 3862 videos and a test set of 555 videos. There are per-frame bounding box annotations for each video. Furthermore, all appearances of the same target across different frames of a video are assigned the same ID.
Evaluation metric. The evaluation metric for video object detection is the extensively used mean average precision (mAP), which is based on a sorted list of bounding boxes in descending order of their scores. A predicted bounding box is considered correct if its IOU with a groundtruth box is over a threshold (e.g., 0.5).
In contrast to the standard mAP which is based on bounding boxes, the mAP for video object detection/tracking is based on a sorted list of tracklets (Russakovsky et al. 2017). A tracklet is a set of bounding boxes with the same ID. Similarly, a tracklet is considered correct if its IOU with a groundtruth tracklet is over a threshold. Typical choices of IOU thresholds for tracklet matching and per-frame bounding box matching are both 0.5. The score of a tracklet is the average score of all its bounding boxes.
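A sketch of how a predicted tracklet could be scored and compared with a groundtruth tracklet under this metric; the tracklet score follows the text, while measuring tracklet IoU as the fraction of frames whose boxes overlap above the per-frame threshold is an illustrative reading, not the official evaluation code:

```python
def tracklet_score(boxes):
    """Score of a tracklet = average confidence of all its boxes (as in the text)."""
    return sum(b.score for b in boxes) / len(boxes)

def tracklet_iou(pred, gt, box_iou_thresh=0.5):
    """One plausible reading of tracklet IoU: the fraction of frames (over the
    union of frames covered by either tracklet) in which the predicted box and
    the groundtruth box overlap by more than the per-frame threshold.
    pred and gt map frame ID -> box; iou() is the helper defined earlier."""
    frames = set(pred) | set(gt)
    hits = sum(1 for f in frames
               if f in pred and f in gt and iou(pred[f], gt[f]) > box_iou_thresh)
    return hits / len(frames) if frames else 0.0
```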
Implementation details. Following the settings in (Zhu et al. 2017b), R-FCN (Dai et al. 2016) is trained with a ResNet-101 backbone (He et al. 2016) on the training set.
SiamFC is trained following the original paper (Bertinetto et al. 2016). Instead of training from scratch, however, we initialize the first four convolutional layers with the pretrained parameters from AlexNet (Krizhevsky, Sutskever, and Hinton 2012) and change Conv5 from 3 × 3 to 1 × 1 with the Xavier initializer. Parameters of the first four convolutional layers are fixed during training (He et al. 2018). We only search for one scale and discard the upsampling step in the original SiamFC for efficiency. All images being fed into SiamFC are resized to 300 × 500. Moreover, the confidence score of a tracked box (for evaluation) is equal to its corresponding detected box in the keyframe.
The scheduler network takes as input the Conv5 feature of our trained SiamFC. The SGD optimizer is adopted with a learning rate 1e-2, momentum 0.9 and weight decay 5e-4. The batch size is set to 32. During testing, we raise the decision threshold of track to δ = 0.97 (i.e., the scheduler outputs track if the predicted confidence of track is over δ) to ensure conservativeness of the scheduler. Furthermore, since nearby frames look similar, the scheduler is applied every σ frames (where σ is a tunable parameter) to reduce unnecessary computation.
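The conservative test-time policy described above (consult the scheduler only every σ frames and track only when its predicted track confidence exceeds δ = 0.97) can be sketched as follows; the behaviour between scheduler invocations is an assumption:

```python
def decide(frame_idx, keyframe_idx, track_prob, sigma=10, delta=0.97):
    """Conservative test-time policy for the scheduler.

    track_prob: callable returning the scheduler's predicted probability of
        'track' for (keyframe, current frame); evaluated only every sigma frames.
    Between scheduler invocations the boxes are assumed to be tracked (this
    in-between behaviour is an assumption, not stated explicitly in the text).
    """
    if (frame_idx - keyframe_idx) % sigma != 0:
        return "track"                     # scheduler not consulted this frame
    return "track" if track_prob() > delta else "detect"
```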
All experiments are conducted on a workstation with an Intel Core i7-4790k CPU and a Titan X GPU. We empirically observe that the detection network and the tracking/scheduler network run at 8.33 fps and 100 fps, respectively. This is because the ResNet-101 backbone is much heavier than AlexNet. Moreover, the speed of the Hungarian algorithm is as high as 667 fps.
Video Object Detection/Tracking
To our knowledge, the most closely related work to ours is (Lan et al. 2016), which handles cost-effective face detection/tracking. Since face is much easier to track and is with less deformation, the paper achieves success by utilizing non-deep learning-based detectors and trackers. However, we aim at general object detection/tracking in video, which is much more challenging. We demonstrate the effectiveness of the proposed DorT framework against several strong baselines.
Effectiveness of scheduler. The scheduler network is a core component of our DorT framework. Since SiamFC tracking is more efficient than R-FCN detection, the scheduler should predict track when it is safe for the trackers and be conservative enough to predict detect when there is sufficient change to avoid track drift. We compare our DorT framework with a frame skipping baseline, namely a "fixed scheduler" -R-FCN is performed every σ frames and SiamFC is adopted to track for the frames in between. As aforementioned, our scheduler can also be applied every σ frames to improve efficiency. Moreover, there could be an oracle scheduler -predicting the groundtruth label (detect or track) as shown in Figure 4 during testing. The oracle scheduler is a 100% accurate scheduler in our setting. The results are shown in Figure 6.
We can observe that the frame rate and mAP vary as σ changes. Interestingly, the curves are not monotonic -as the frame rate decreases, the accuracy in mAP is not necessarily higher. In particular, detectors are applied frequently when σ = 1 (the leftmost point of each curve). Associating boxes using the Hungarian algorithm is generally less reliable (given missed detections and false detections) than tracking boxes between two frames. It is also a benefit of the scheduler network -applying tracking only when confident, and thus most boxes are reliably associated. Hence, the curve of the scheduler network is on the upper-right side of that of the fixed scheduler as shown in Figure 6.
However, it can also be observed that there is a certain gap between the curve of the scheduler network and that of the oracle scheduler. Given that the oracle scheduler is a 100% accurate classifier, we analyze the classification accuracy of the scheduler network in Figure 7. Let us take the σ = 10 case as an example. Although the classification accuracy is only 32.3%, the false positive rate (i.e., misclassifying a detect case as track) is as low as 1.9%. Because we empirically find that the mAP drops drastically if the scheduler mistakenly predicts track, our scheduler network is made conservative - track only when confident, and detect if unsure. Figure 8 shows some qualitative results.
Figure 8 (caption): Red, blue and green boxes denote groundtruth, detected boxes and tracked boxes, respectively. The first row: R-FCN is applied in the keyframe. The second row: the scheduler determines to track since it is confident. The third row: the scheduler predicts to track in the first image although the red panda moves; however, the scheduler determines to detect in the second image as the cat moves significantly and is unable to be tracked.
Effectiveness of RoI convolution. Trackers are optimized for the crop-and-resize case (Bertinetto et al. 2016) -the target and search region are cropped and resized to a fixed size before matching. It is a nice choice since the tracking algorithm is not affected by the original size of the target. It is, however, slow in multi-box case and we propose RoI convolution as an efficient approximation. As shown in Figure 6, crop-and-resize SiamFC is even slower than detection -the overall running time is 3 fps. Notably, its mAP is 56.5%, which is roughly the same as that of our DorT framework empowered with RoI convolution. Our DorT framework, however, runs at 54 fps when σ = 10. RoI convolution obtains over 10x speed boost while retaining mAP.
Comparison with existing methods. Deep feature flow (Zhu et al. 2017b) focuses on video object detection without tracking. We can, however, associate its predicted bounding boxes with per frame data association using the Hungarian algorithm. The results are shown in Figure 6. It can be observed that our framework performs significantly better than deep feature flow in video object detection/tracking.
Concurrent works that deal with video object detection/tracking are the submitted entries in ILSVRC 2017 (Wei et al. 2017; Russakovsky et al. 2017). As discussed in the Related Work section, these methods aim only to improve the mAP by adopting complicated methods and post-processing, leading to inefficient solutions without guaranteeing low latency. Their reported results on the test set range from 51% to 65% mAP. Our proposed DorT, notably, achieves 57% mAP on the validation set, which is comparable to the existing methods in magnitude, but is much more principled and efficient.
Video Object Detection
We also evaluate our DorT framework in video object detection for completeness, by removing the predicted object ID. Our DorT framework is compared against deep feature flow (Zhu et al. 2017b), D&T (Feichtenhofer, Pinz, and Zisserman 2017), high performance video object detection (VOD) (Zhu et al. 2018) and ST-Lattice (Chen et al. 2018). The results are shown in Figure 9. It can be observed that D&T and high performance VOD manage to achieve a speed-accuracy balance. They obtain higher results but cannot fit into realtime (over 30 fps) scenarios. ST-Lattice, although being fast and accurate, adopts detection results in future frames and is thus not suitable in a low latency scenario. As compared with deep feature flow, our DorT framework performs significantly faster with comparable performance (no more than 1% mAP loss). Although our aim is not the video object detection task, the results in Figure 9 demonstrate the effectiveness of our approach.
Conclusion and Future Work
We propose a DorT framework for cost-effective video object detection/tracking, which is in realtime and with low latency. Object detection/tracking of a video sequence is formulated as a sequential decision problem in the framework. Notably, a light-weight but effective scheduler network is proposed, which is shown to be a generalization of Siamese trackers and a special case of RL. The DorT framework turns out to be effective and strikes a good balance between speed and accuracy. The framework can still be improved in several aspects.
The SiamFC tracker can search for multiple scales to improve performance as in the original paper. More advanced data association methods can be applied by resorting to the state-of-the-art MOT algorithms. Furthermore, there is room to improve the training of the scheduler network to approach the oracle scheduler. These are left as future work.
| 5,204 |
1811.05340
|
2952464165
|
State-of-the-art object detectors and trackers are developing fast. Trackers are in general more efficient than detectors but bear the risk of drifting. A question is hence raised -- how to improve the accuracy of video object detection/tracking by utilizing the existing detectors and trackers within a given time budget? A baseline is frame skipping -- detecting every N-th frame and tracking for the frames in between. This baseline, however, is suboptimal since the detection frequency should depend on the tracking quality. To this end, we propose a scheduler network, which determines to detect or track at a certain frame, as a generalization of Siamese trackers. Although being light-weight and simple in structure, the scheduler network is more effective than the frame skipping baselines and flow-based approaches, as validated on ImageNet VID dataset in video object detection/tracking.
|
Multiple object tracking (MOT) focuses on data association: finding the set of trajectories that best explains the given detections @cite_42 . Existing approaches to MOT fall into two categories: batch and online mode. Batch mode approaches pose data association as a global optimization problem, which can be a min-cost max-flow problem @cite_26 @cite_13 , a continuous energy minimization problem @cite_16 or a graph cut problem @cite_10 @cite_36 . Contrarily, online mode approaches are only allowed to solve the data association problem with the present and past frames. @cite_14 formulates data association as a Markov decision process. @cite_8 @cite_32 employ recurrent neural networks (RNNs) for feature representation and data association.
|
{
"abstract": [
"We analyze the computational problem of multi-object tracking in video sequences. We formulate the problem using a cost function that requires estimating the number of tracks, as well as their birth and death states. We show that the global solution can be obtained with a greedy algorithm that sequentially instantiates tracks using shortest path computations on a flow network. Greedy algorithms allow one to embed pre-processing steps, such as nonmax suppression, within the tracking algorithm. Furthermore, we give a near-optimal algorithm based on dynamic programming which runs in time linear in the number of objects and linear in the sequence length. Our algorithms are fast, simple, and scalable, allowing us to process dense input data. This results in state-of-the-art performance.",
"",
"Online Multi-Object Tracking (MOT) has wide applications in time-critical video analysis scenarios, such as robot navigation and autonomous driving. In tracking-by-detection, a major challenge of online MOT is how to robustly associate noisy object detections on a new video frame with previously tracked objects. In this work, we formulate the online MOT problem as decision making in Markov Decision Processes (MDPs), where the lifetime of an object is modeled with a MDP. Learning a similarity function for data association is equivalent to learning a policy for the MDP, and the policy learning is approached in a reinforcement learning fashion which benefits from both advantages of offline-learning and online-learning for data association. Moreover, our framework can naturally handle the birth death and appearance disappearance of targets by treating them as state transitions in the MDP while leveraging existing online single object tracking methods. We conduct experiments on the MOT Benchmark [24] to verify the effectiveness of our method.",
"We present a novel approach to online multi-target tracking based on recurrent neural networks (RNNs). Tracking multiple objects in real-world scenes involves many challenges, including a) an a-priori unknown and time-varying number of targets, b) a continuous state estimation of all present targets, and c) a discrete combinatorial problem of data association. Most previous methods involve complex models that require tedious tuning of parameters. Here, we propose for the first time, an end-to-end learning approach for online multi-target tracking. Existing deep learning methods are not designed for the above challenges and cannot be trivially applied to the task. Our solution addresses all of the above points in a principled way. Experiments on both synthetic and real data show promising results obtained at 300 Hz on a standard CPU, and pave the way towards future research in this direction.",
"",
"We present a novel method for multiple people tracking that leverages a generalized model for capturing interactions among individuals. At the core of our model lies a learned dictionary of interaction feature strings which capture relationships between the motions of targets. These feature strings, created from low-level image features, lead to a much richer representation of the physical interactions between targets compared to hand-specified social force models that previous works have introduced for tracking. One disadvantage of using social forces is that all pedestrians must be detected in order for the forces to be applied, while our method is able to encode the effect of undetected targets, making the tracker more robust to partial occlusions. The interaction feature strings are used in a Random Forest framework to track targets according to the features surrounding them. Results on six publicly available sequences show that our method outperforms state-of-the-art approaches in multiple people tracking.",
"The majority of existing solutions to the Multi-Target Tracking (MTT) problem do not combine cues over a long period of time in a coherent fashion. In this paper, we present an online method that encodes long-term temporal dependencies across multiple cues. One key challenge of tracking methods is to accurately track occluded targets or those which share similar appearance properties with surrounding objects. To address this challenge, we present a structure of Recurrent Neural Networks (RNN) that jointly reasons on multiple cues over a temporal window. Our method allows to correct data association errors and recover observations from occluded states. We demonstrate the robustness of our data-driven approach by tracking multiple targets using their appearance, motion, and even interactions. Our method outperforms previous works on multiple publicly available datasets including the challenging MOT benchmark.",
"Many recent advances in multiple target tracking aim at finding a (nearly) optimal set of trajectories within a temporal window. To handle the large space of possible trajectory hypotheses, it is typically reduced to a finite set by some form of data-driven or regular discretization. In this work, we propose an alternative formulation of multitarget tracking as minimization of a continuous energy. Contrary to recent approaches, we focus on designing an energy that corresponds to a more complete representation of the problem, rather than one that is amenable to global optimization. Besides the image evidence, the energy function takes into account physical constraints, such as target dynamics, mutual exclusion, and track persistence. In addition, partial image evidence is handled with explicit occlusion reasoning, and different targets are disambiguated with an appearance model. To nevertheless find strong local minima of the proposed nonconvex energy, we construct a suitable optimization scheme that alternates between continuous conjugate gradient descent and discrete transdimensional jump moves. These moves, which are executed such that they always reduce the energy, allow the search to escape weak minima and explore a much larger portion of the search space of varying dimensionality. We demonstrate the validity of our approach with an extensive quantitative evaluation on several public data sets.",
"In (2015), we proposed a graph-based formulation that links and clusters person hypotheses over time by solving a minimum cost subgraph multicut problem. In this paper, we modify and extend (2015) in three ways: (1) We introduce a novel local pairwise feature based on local appearance matching that is robust to partial occlusion and camera motion. (2) We perform extensive experiments to compare different pairwise potentials and to analyze the robustness of the tracking formulation. (3) We consider a plain multicut problem and remove outlying clusters from its solution. This allows us to employ an efficient primal feasible optimization algorithm that is not applicable to the subgraph multicut problem of (2015). Unlike the branch-and-cut algorithm used there, this efficient algorithm used here is applicable to long videos and many detections. Together with the novel pairwise feature, it eliminates the need for the intermediate tracklet representation of (2015). We demonstrate the effectiveness of our overall approach on the MOT16 benchmark ( 2016), achieving state-of-art performance."
],
"cite_N": [
"@cite_13",
"@cite_26",
"@cite_14",
"@cite_8",
"@cite_36",
"@cite_42",
"@cite_32",
"@cite_16",
"@cite_10"
],
"mid": [
"2016135469",
"",
"2225887246",
"2339473870",
"",
"2020209171",
"2579024533",
"2083049794",
"2508815980"
]
}
|
Detect or Track: Towards Cost-Effective Video Object Detection/Tracking
|
Convolutional neural network (CNN)-based methods have achieved significant progress in computer vision tasks such as object detection (Ren et al. 2015;Dai et al. 2016;Tang et al. 2018b) and tracking (Held, Thrun, and Savarese 2016;Bertinetto et al. 2016;Nam and Han 2016;Bhat et al. 2018). Following the tracking-by-detection paradigm, most state-of-the-art trackers can be viewed as a local detector of a specified object. Consequently, trackers are generally more efficient than detectors and can obtain precise bounding boxes in subsequent frames if the specified bounding box is accurate. However, as evaluated commonly on benchmark datasets such as OTB (Wu, Lim, and Yang 2015) and VOT (Kristan et al. 2017), trackers are encouraged to track as long as possible. It is non-trivial for trackers to be stopped once they are not confident, although heuristics, such as a threshold of the maximum response value, can be applied. Therefore, trackers bear the risk of drifting.
Besides object detection and tracking, there has recently been a series of studies on video object detection (Kang et al. 2017; Feichtenhofer, Pinz, and Zisserman 2017; Zhu et al. 2017b; Zhu et al. 2017a; Zhu et al. 2018; Chen et al. 2018). Beyond the baseline of detecting each frame individually, state-of-the-art approaches consider the temporal consistency of the detection results via tubelet proposals (Kang et al. 2017), optical flow (Zhu et al. 2017b; Zhu et al. 2017a; Zhu et al. 2018) and regression-based trackers (Feichtenhofer, Pinz, and Zisserman 2017). These approaches, however, are optimized for the detection accuracy of each individual frame. They either do not associate the presence of an object in different frames as a tracklet, or associate after performing object detection on each frame, which is time-consuming. This paper is motivated by the constraints from practical video analytics scenarios such as autonomous driving and video surveillance. We argue that algorithms applied to these scenarios should be:
• capable of associating an object appearing in different frames, such that the trajectory or velocity of the object can be further inferred. • in realtime (e.g., over 30 fps) and as fast as possible, such that the deployment cost can be further reduced. • with low latency, which means to produce results once a frame in a video stream has been processed. Considering these constraints, we focus in this paper on the task of video object detection/tracking (Russakovsky et al. 2017). The task is to detect objects in each frame (similar to the goal of video object detection), with an additional goal of associating an object appearing in different frames.
In order to handle this task under the realtime and low latency constraint, we propose a detect or track (DorT) framework. In this framework, object detection/tracking of a video sequence is formulated as a sequential decision problem -a scheduler network makes a detection/tracking decision for every incoming frame, and then these frames are processed with the detector/tracker accordingly. The architecture is illustrated in Figure 1.
The scheduler network is the most unique part of our framework. It should be light-weight but be able to determine to detect or track. Rather than using heuristic rules (e.g., thresholds of tracking confidence values), we formulate the scheduler as a small CNN by assessing the tracking quality. It is shown to be a generalization of Siamese trackers and a special case of reinforcement learning (RL).
The contributions are summarized as follows: • We propose the DorT framework, in which the object detection/tracking of a video sequence is formulated as a sequential decision problem, while being in realtime and with low latency. • We propose a light-weight but effective scheduler network, which is shown to be a generalization of Siamese trackers and a special case of RL. • The proposed DorT framework is more effective than the frame skipping baselines and flow-based approaches, as validated on the ImageNet VID dataset (Russakovsky et al. 2015) in video object detection/tracking.
Figure 1 (caption): Detect or track (DorT) framework. The scheduler network compares the current frame t + τ with the keyframe t by evaluating the tracking quality, and determines to detect or track frame t + τ: either frame t + τ is detected by a single-frame detector, or bounding boxes are tracked to frame t + τ from the keyframe t. If detect is chosen, frame t + τ is assigned as the new keyframe, and the boxes in frame t + τ and frame t + τ − 1 are associated by the widely-used Hungarian algorithm (not shown in the figure for conciseness).
Video Object Detection/Tracking
Video object detection/tracking is a task in ILSVRC 2017 (Russakovsky et al. 2017), where the winning entries are optimized for accuracy rather than speed. One winning entry adopts flow aggregation (Zhu et al. 2017a) to improve the detection accuracy. (Wei et al. 2017) combines flow-based (Ilg et al. 2017) and object tracking-based (Nam and Han 2016) tubelet generation (Kang et al. 2017). Nevertheless, these methods combine multiple cues (e.g., flow aggregation in detection, and flow-based and object tracking-based tubelet generation) which are complementary but time-consuming. Moreover, they apply global postprocessing such as seq-NMS and tubelet NMS (Tang et al. 2018a) which greatly improve the accuracy but are not suitable for a realtime and low latency scenario.
Video Object Detection
Approaches to video object detection have developed rapidly since the introduction of the ImageNet VID dataset (Russakovsky et al. 2015). (Kang et al. 2017) propose a framework that consists of per-frame proposal generation, bounding box tracking and tubelet re-scoring. (Zhu et al. 2017b) proposes to detect frames sparsely and propagate features with optical flow. (Zhu et al. 2017a) proposes to aggregate features in nearby frames along the motion path to improve the feature quality. Furthermore, (Zhu et al. 2018) proposes a high-performance approach by considering feature aggregation, partial feature updating and adaptive keyframe scheduling based on optical flow. Besides, (Feichtenhofer, Pinz, and Zisserman 2017) proposes to learn detection and tracking using a single network with a multi-task objective. (Chen et al. 2018) proposes to propagate the sparsely detected results through a space-time lattice. All the methods above focus on the accuracy of each individual frame. They either do not associate the presence of an object in different frames as a tracklet, or associate after performing object detection on each frame, which is time-consuming.
Multiple Object Tracking
Multiple object tracking (MOT) focuses on data association: finding the set of trajectories that best explains the given detections (Leal-Taixé et al. 2014). Existing approaches to MOT fall into two categories: batch and online mode. Batch mode approaches pose data association as a global optimization problem, which can be a min-cost max-flow problem (Zhang, Li, and Nevatia 2008; Pirsiavash, Ramanan, and Fowlkes 2011), a continuous energy minimization problem (Milan, Roth, and Schindler 2014) or a graph cut problem (Tang et al. 2016). Contrarily, online mode approaches are only allowed to solve the data association problem with the present and past frames. (Xiang, Alahi, and Savarese 2015) formulates data association as a Markov decision process. (Milan et al. 2017; Sadeghian, Alahi, and Savarese 2017) employ recurrent neural networks (RNNs) for feature representation and data association.
State-of-the-art MOT approaches aim to improve the data association performance given publicly-available detections since the introduction of the MOT challenge (Leal-Taixé et al. 2015). However, we focus on the sequential decision problem of detection or tracking. Although the widely-used Hungarian algorithm is adopted for simplicity and fairness in the experiments, we believe the incorporation of existing MOT approaches can further enhance the accuracy.
Keyframe Scheduler
Researchers have proposed approaches to adaptive keyframe scheduling beyond regular frame skipping in video analytics. (Zhu et al. 2018) proposes to estimate the quality of optical flow, which relies on the time-consuming flow network. (Chen et al. 2018) proposes an easiness measure to consider the size and motion of small objects, which is hand-crafted and more importantly, it is a detect-then-schedule paradigm but cannot determine to detect or track. (Li, Shi, and Lin 2018;Xu et al. 2018) learn to predict the discrepancy between the segmentation map of the current frame and the keyframe, which are only applicable to segmentation tasks.
All the methods above, however, solve an auxiliary task (e.g., flow quality, or discrepancy of segmentation maps) but do not answer the question directly in a classification perspective -is the current frame a keyframe or not? In contrast, we pose video object detection/tracking as a sequential decision problem, and learn directly whether the current frame is a keyframe by assessing the tracking quality. Our formulation is further shown as a generalization of Siamese trackers and a special case of RL.
The DorT Framework
Video object detection/tracking is formulated as follows. Given a sequence of video frames $F = \{f_1, f_2, \ldots, f_N\}$, the aim is to obtain bounding boxes $B = \{b_1, b_2, \ldots, b_M\}$, where $b_i = \{rect_i, fid_i, score_i, id_i\}$: $rect_i$ denotes the 4-dim bounding box coordinates, and $fid_i$, $score_i$ and $id_i$ are scalars denoting respectively the frame ID, the confidence score and the object ID.
Considering the realtime and low latency constraint, we formulate video object detection/tracking as a sequential decision problem, which consists of four modules: single-frame detector, multi-box tracker, scheduler network and data association. An algorithm summary follows the introduction of the four modules.
Single-Frame Detector
We adopt R-FCN (Dai et al. 2016) as the detector following deep feature flow (DFF) (Zhu et al. 2017b). Our framework, however, is compatible with all single-frame detectors.
Efficient Multi-Box Tracker via RoI Convolution
The SiamFC tracker (Bertinetto et al. 2016) is adopted in our framework. It learns a deep feature extractor during training such that an object is similar to its deformations but different from the background. During testing, the nearby patch with the highest confidence is selected as the tracking result. The tracker is reported to run at 86 fps in the original paper.
Despite its efficiency, there are usually 30 to 50 detected boxes per frame output by R-FCN. It is a natural idea to track only the high-confidence ones and ignore the rest. Such an approach, however, results in a drastic decrease in mAP since R-FCN detection is not perfect and many true positives with low confidence scores are discarded. We therefore need to track all the detected boxes. It is time-consuming to track 50 boxes without optimization (about 3 fps). In order to speed up the tracking process, we propose to share the feature extraction network among the boxes and introduce an RoI convolution layer in place of the original cross-correlation layer in SiamFC. Figure 2 is an illustration. Through cropping and convolving on the feature maps, the proposed tracker is over 10x faster than the time-consuming baseline while obtaining comparable accuracy.
Figure 2 (caption): RoI convolution. Given targets in keyframe t and search regions in frame t + τ, the corresponding RoIs are cropped from the feature maps and convolved to obtain the response maps. Solid boxes denote detected objects in keyframe t and dashed boxes denote the corresponding search regions in frame t + τ. A star denotes the center of its corresponding bounding box. The center of a dashed box is copied from the tracking result in frame t + τ − 1.
Notably, there is no learnable parameter in the RoI convolution layer, and thus we can train the SiamFC tracker following the original settings in (Bertinetto et al. 2016).
Scheduler Network
The scheduler network is the core of DorT, as our task is formulated as a sequential decision problem. It takes as input the current frame f t+τ and its keyframe f t , and determines to detect or track, denoted as Scheduler(f t , f t+τ ). We will elaborate this module in the next section.
Data Association
Once the scheduler network determines to detect the current frame, there is a need to associate the previous tracked boxes and the current detected boxes. Hence, a data association algorithm is required. For simplicity and fairness in the paper, the widely-used Hungarian algorithm is adopted. Although it is possible to improve the accuracy by incorporating more advanced data association techniques (Xiang, Alahi, and Savarese 2015;Sadeghian, Alahi, and Savarese 2017), it is not the focus in the paper. The overall architecture of the DorT framework is shown in Figure 1. More details are summarized in Algorithm 1.
The Scheduler Network in DorT
The scheduler network in DorT aims to determine to detect or track given a new frame by estimating the quality of the tracked boxes. It should be efficient itself. Rather than training a network from scratch, we propose to reuse part of the tracking network. Firstly, the l-th layer convolutional feature maps of frame t and frame t + τ, denoted respectively as $x_l^t$ and $x_l^{t+\tau}$, are fed into a correlation layer which performs point-wise feature comparison

$$x_{corr}^{t,t+\tau}(i, j, p, q) = \left\langle x_l^t(i, j),\; x_l^{t+\tau}(i + p, j + q) \right\rangle \tag{1}$$

where $-d \le p \le d$ and $-d \le q \le d$ are offsets to compare features in a neighbourhood around the location (i, j) in the feature map, defined by the maximum displacement d. Hence, the output of the correlation layer is a feature map $x_{corr} \in \mathbb{R}^{H_l \times W_l \times (2d+1)^2}$, where $H_l$ and $W_l$ denote respectively the height and width of the l-th layer feature map. The correlation feature map $x_{corr}$ is then passed through two convolutional layers and a fully-connected layer with a 2-way softmax. The final output of the network is a classification score indicating the probability to detect the current frame. Figure 3 is an illustration of the scheduler network.

Figure 3 (caption): Scheduler network. The output feature map of the correlation layer is followed by two convolutional layers and a fully-connected layer with a 2-way softmax. As discussed later, this structure is a generalization of the SiamFC tracker.

Algorithm 1: The Detect or Track (DorT) Framework
Input: a sequence of video frames F = {f_1, f_2, ..., f_N}.
Output: bounding boxes B = {b_1, b_2, ..., b_M} with ID, where b_i = {rect_i, fid_i, score_i, id_i}.
1: B ← {}
2: t ← 1 (t is the index of the keyframe)
3: Detect f_1 with the single-frame detector.
4: Assign new IDs to the detected boxes.
5: Add the detected boxes in f_1 to B.
6: for i ← 2 to N do
7:   d ← Scheduler(f_t, f_i) (decision of the scheduler)
8:   if d = detect then
9:     Detect f_i with the single-frame detector.
10:    Match boxes in f_i and f_{i−1} using the Hungarian algorithm.
11:    Assign new IDs to unmatched boxes in f_i.
12:    Assign corresponding IDs to matched boxes in f_i.
Training Data Preparation
Existing groundtruth in the ImageNet VID dataset (Russakovsky et al. 2015) does not contain an indicator of the tracking quality. In this paper, we simulate the tracking process between two sampled frames and label it as detect (0) or track (1) in a strict protocol.
As we have sampled frame t and frame t + τ from the same sequence, we track all the groundtruth bounding boxes using SiamFC from frame t to frame t + τ. If all the groundtruth boxes in frame t + τ are matched with the tracked boxes (e.g., IOU over 0.8), the frame is labeled as track; otherwise, it is labeled as detect. Any emerging or disappearing object also indicates a detect. Several examples are shown in Figure 4.
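As an illustration of this labeling protocol, here is a short Python sketch under stated assumptions: groundtruth boxes are given as dictionaries keyed by object ID, the tracker exposes a hypothetical track(frame_t, frame_t_tau, box) method, and iou() is a standard box-IoU helper such as the one sketched earlier.

```python
def label_frame_pair(gt_boxes_t, gt_boxes_t_tau, tracker, frame_t, frame_t_tau,
                     iou_thresh=0.8):
    """Label a (keyframe, current frame) pair as detect (0) or track (1).

    gt_boxes_t / gt_boxes_t_tau: dicts mapping object ID -> groundtruth box.
    tracker.track(...) is a hypothetical single-object tracking call.
    """
    # Emerging or disappearing objects force a 'detect' label.
    if set(gt_boxes_t.keys()) != set(gt_boxes_t_tau.keys()):
        return 0
    for obj_id, box_t in gt_boxes_t.items():
        tracked = tracker.track(frame_t, frame_t_tau, box_t)
        if iou(tracked, gt_boxes_t_tau[obj_id]) < iou_thresh:
            return 0          # at least one groundtruth box was not followed well
    return 1                  # all groundtruth boxes tracked well -> 'track'
```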
We have also tried to learn a scheduler for each tracker, but found it difficult to handle high-confidence false detections and non-trivial to merge the decisions of all the trackers. In contrast, the proposed approach to learning a single scheduler is an elegant solution which directly learns the decision rather than an auxiliary target such as the fraction of pixels at which the semantic segmentation labels differ (Li, Shi, and Lin 2018), or the fraction of low-quality flow estimation (Zhu et al. 2018).
Relation to the SiamFC Tracker
The proposed scheduler network can be seen as a generalization of the original SiamFC (Bertinetto et al. 2016). In the correlation layer of SiamFC, the target feature (6 × 6 × 128) is convolved with the search region feature (22 × 22 × 128) and obtains the response map (17 × 17 × 1, which can be equivalently written as 1 × 1 × $17^2$). Similarly, we can view the correlation layer of the proposed scheduler network (see Eq. 1) as convolutions between multiple target features in the keyframe and their respective nearby search regions in the current frame. The size of a target equals the receptive field of the input feature map of our scheduler. Figure 5 shows several examples of targets. Actually, however, targets include all possible patches in a sliding window manner.
In this sense, the output feature map of the correlation layer $x_{\mathrm{corr}} \in \mathbb{R}^{H_l \times W_l \times (2d+1)^2}$ can be regarded as a set of $H_l \times W_l$ SiamFC tracking tasks, where the response map of each is $1 \times 1 \times (2d+1)^2$. The correlation feature map is then fed into a small CNN consisting of two convolutional layers and a fully-connected layer.
In summary, the generalization of the proposed scheduler network over SiamFC is twofold:
• SiamFC correlates a target feature with its nearby search region, while our scheduler extends the number of tasks from one to many.
• SiamFC directly picks the highest value in the correlation feature map as the result, whereas the proposed scheduler fuses the multiple response maps with a CNN.
The validity of the proposed scheduler network is hence clear: it first convolves patches in frame t (examples shown in Figure 5) with their respective nearby regions in frame t + τ, and then fuses the response maps with a CNN, in order to measure the difference between the two frames and, more importantly, to assess the tracking quality. The scheduler is also resistant to small perturbations by inheriting SiamFC's robustness to object deformation.
Relation to Reinforcement Learning
The sequential decision problem can also be formulated in an RL framework, where the action, state, state transition function and reward need to be defined.

Figure 5: The size of a target equals the receptive field of the input feature map of the scheduler. As shown, a target patch might be an object, a part of an object, or totally background. The "tracking" results of these targets will be fused later. It should be noted that targets include all possible patches in a sliding window manner, but not just the three boxes shown above.
Action. The action space A contains two types of actions: {detect, track}. If the decision is detect, object detector is applied to the current frame; otherwise, boxes tracked from the keyframe are taken as the results.
State. The state $s_{t,\tau}$ is defined as a tuple $(x^t_l, x^{t+\tau}_l)$, where $x^t_l$ and $x^{t+\tau}_l$ denote the l-th layer convolutional feature map of frame t and frame t + τ, respectively. Frame t is the keyframe on which the object detector is applied, and frame t + τ is the current frame on which actions are to be determined.
State transition function. After the decision of action $a_{t,\tau}$ in state $s_{t,\tau}$, the next state is obtained upon the action:
• detect. The next state is $s_{t+\tau,1} = (x^{t+\tau}_l, x^{t+\tau+1}_l)$. Frame t + τ is fed to the object detector and is set as the new keyframe.
• track. The next state is $s_{t,\tau+1} = (x^t_l, x^{t+\tau+1}_l)$. Bounding boxes tracked from the keyframe are taken as the results in frame t + τ. The keyframe t remains unchanged.
As shown above, no matter whether the keyframe is t or t + τ, the task in the next state is to determine the action in frame t + τ + 1.
Reward. The reward function is defined as r(s, a) since it is determined by both the state s and the action a. As illustrated in Figure 4, a labeling mechanism is proposed to obtain the groundtruth label of the tracking quality between two frames (i.e., a certain state s). We denote the groundtruth label as GT (s), which is either detect or track. Hence, the reward function can be defined as follows:
$r(s, a) = \begin{cases} 1, & GT(s) = a \\ 0, & GT(s) \neq a \end{cases} \qquad (2)$
which is based on the consistency between the groundtruth label and the action taken. After defining all the above, the RL problem can be solved via a deep Q network (DQN) (Mnih et al. 2015) with a discount factor γ, penalizing the reward from future time steps. However, training stability is always an issue in RL algorithms (Anschel, Baram, and Shimkin 2017). In this paper, we set γ = 0 such that the agent only cares about the reward from the next time step. Therefore, the DQN becomes a regression network -pushing the predicted action to be the same as the GT action, and the scheduler network is a special case of RL. We empirically observe that the training procedure becomes easier and more stable by setting γ = 0.
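To spell out the γ = 0 case: the Q-learning target for each action then collapses to its immediate reward in Eq. 2, i.e., 1 for the groundtruth action and 0 otherwise, so training reduces to predicting the GT action directly. The snippet below sketches one such objective; the use of a cross-entropy loss (rather than, say, an L2 regression onto the two target Q-values) is our reading, not something the text states.

```python
import torch.nn.functional as F

def scheduler_loss(logits, gt_action, gamma=0.0):
    """Training objective when the discount factor is zero.

    With gamma = 0 the target for each action equals its immediate reward
    r(s, a) from Eq. 2 (1 for the GT action, 0 otherwise), so the scheduler
    is simply pushed to predict the GT action.
    logits: (B, 2) detect/track scores; gt_action: (B,) class indices.
    """
    assert gamma == 0.0, "future rewards are ignored in this sketch"
    return F.cross_entropy(logits, gt_action)
```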
Experiments
The DorT framework is evaluated on the ImageNet VID dataset (Russakovsky et al. 2015) in the task of video object detection/tracking. For completeness, we also report results in video object detection.
Experimental Setup
Dataset description. All experiments are conducted on the ImageNet VID dataset (Russakovsky et al. 2015). ImageNet VID is split into a training set of 3862 videos and a test set of 555 videos. There are per-frame bounding box annotations for each video. Furthermore, all appearances of a certain target across different frames in a video are assigned the same ID.
Evaluation metric. The evaluation metric for video object detection is the extensively used mean average precision (mAP), which is based on a sorted list of bounding boxes in descending order of their scores. A predicted bounding box is considered correct if its IOU with a groundtruth box is over a threshold (e.g., 0.5).
In contrast to the standard mAP which is based on bounding boxes, the mAP for video object detection/tracking is based on a sorted list of tracklets (Russakovsky et al. 2017). A tracklet is a set of bounding boxes with the same ID. Similarly, a tracklet is considered correct if its IOU with a groundtruth tracklet is over a threshold. Typical choices of IOU thresholds for tracklet matching and per-frame bounding box matching are both 0.5. The score of a tracklet is the average score of all its bounding boxes.
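For concreteness, the bookkeeping implied by this metric can be sketched as follows: a tracklet is a list of per-frame boxes sharing one ID, and its score is the average of its box scores. The matching against groundtruth tracklets at the 0.5 thresholds is omitted here, since its exact details are fixed by the official evaluation code rather than by this description; the dictionary-based box representation is an assumption of the sketch.

```python
def tracklet_score(tracklet):
    """Score of a tracklet = average confidence of its bounding boxes."""
    return sum(box["score"] for box in tracklet) / len(tracklet)

def rank_tracklets(tracklets):
    """Sort predicted tracklets by score (descending), as required before
    computing the tracklet-level mAP against groundtruth tracklets."""
    return sorted(tracklets, key=tracklet_score, reverse=True)
```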
Implementation details. Following the settings in (Zhu et al. 2017b), R-FCN (Dai et al. 2016) is trained with a ResNet-101 backbone (He et al. 2016) on the training set.
SiamFC is trained following the original paper (Bertinetto et al. 2016). Instead of training from scratch, however, we initialize the first four convolutional layers with the pretrained parameters from AlexNet (Krizhevsky, Sutskever, and Hinton 2012) and change Conv5 from 3 × 3 to 1 × 1 with the Xavier initializer. Parameters of the first four convolutional layers are fixed during training (He et al. 2018). We only search for one scale and discard the upsampling step in the original SiamFC for efficiency. All images fed into SiamFC are resized to 300 × 500. Moreover, the confidence score of a tracked box (for evaluation) is equal to that of its corresponding detected box in the keyframe.
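A rough PyTorch sketch of these backbone modifications is given below. How the pretrained layers are passed in and the channel sizes around Conv5 are assumptions of the sketch (128 output channels matches the 6 × 6 × 128 target feature quoted earlier); only the frozen Conv1–Conv4, the change of Conv5 from 3 × 3 to 1 × 1, and the Xavier initialization come from the text.

```python
import torch.nn as nn

def build_siamfc_backbone(pretrained_convs, in_ch, out_ch=128):
    """Sketch of the modified SiamFC feature extractor.

    pretrained_convs: the first four convolutional blocks taken from a
    pretrained AlexNet-style network (how they are obtained is left open).
    """
    conv1_4 = nn.Sequential(*pretrained_convs)
    for p in conv1_4.parameters():
        p.requires_grad = False                      # Conv1-Conv4 are fixed
    conv5 = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # changed from 3x3 to 1x1
    nn.init.xavier_uniform_(conv5.weight)            # Xavier initializer
    nn.init.zeros_(conv5.bias)
    return nn.Sequential(conv1_4, conv5)
```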
The scheduler network takes as input the Conv5 feature of our trained SiamFC. The SGD optimizer is adopted with a learning rate 1e-2, momentum 0.9 and weight decay 5e-4. The batch size is set to 32. During testing, we raise the decision threshold of track to δ = 0.97 (i.e., the scheduler outputs track if the predicted confidence of track is over δ) to ensure conservativeness of the scheduler. Furthermore, since nearby frames look similar, the scheduler is applied every σ frames (where σ is a tunable parameter) to reduce unnecessary computation.
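A hedged sketch of these settings follows. The scheduler_net call matches the scheduler sketch above, and the assumption that frames between two scheduler invocations are simply tracked is our reading of "applied every σ frames", not an explicit statement in the text.

```python
import torch

def make_scheduler_optimizer(scheduler_net):
    """Optimizer settings stated above: SGD with lr 1e-2, momentum 0.9 and
    weight decay 5e-4; the batch size of 32 is handled by the data loader."""
    return torch.optim.SGD(scheduler_net.parameters(),
                           lr=1e-2, momentum=0.9, weight_decay=5e-4)

def decide(scheduler_net, x_key, x_cur, delta=0.97):
    """Conservative test-time decision: output 'track' only if its predicted
    confidence exceeds delta, otherwise 'detect'.  In practice this is only
    evaluated every sigma frames; what happens in between (here: keep
    tracking) is an assumption of this sketch."""
    probs = torch.softmax(scheduler_net(x_key, x_cur), dim=1)
    return "track" if probs[0, 1].item() > delta else "detect"
```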
All experiments are conducted on a workstation with an Intel Core i7-4790k CPU and a Titan X GPU. We empirically observe that the detection network and the tracking/scheduler network run at 8.33 fps and 100 fps, respectively. This is because the ResNet-101 backbone is much heavier than AlexNet. Moreover, the speed of the Hungarian algorithm is as high as 667 fps.
Video Object Detection/Tracking
To our knowledge, the most closely related work to ours is (Lan et al. 2016), which handles cost-effective face detection/tracking. Since faces are much easier to track and exhibit less deformation, that work succeeds by utilizing non-deep-learning detectors and trackers. However, we aim at general object detection/tracking in video, which is much more challenging. We demonstrate the effectiveness of the proposed DorT framework against several strong baselines.
Effectiveness of scheduler. The scheduler network is a core component of our DorT framework. Since SiamFC tracking is more efficient than R-FCN detection, the scheduler should predict track when it is safe for the trackers and be conservative enough to predict detect when there is sufficient change to avoid track drift. We compare our DorT framework with a frame skipping baseline, namely a "fixed scheduler" -R-FCN is performed every σ frames and SiamFC is adopted to track for the frames in between. As aforementioned, our scheduler can also be applied every σ frames to improve efficiency. Moreover, there could be an oracle scheduler -predicting the groundtruth label (detect or track) as shown in Figure 4 during testing. The oracle scheduler is a 100% accurate scheduler in our setting. The results are shown in Figure 6.
We can observe that the frame rate and mAP vary as σ changes. Interestingly, the curves are not monotonic -as the frame rate decreases, the accuracy in mAP is not necessarily higher. In particular, detectors are applied frequently when σ = 1 (the leftmost point of each curve). Associating boxes using the Hungarian algorithm is generally less reliable (given missed detections and false detections) than tracking boxes between two frames. It is also a benefit of the scheduler network -applying tracking only when confident, and thus most boxes are reliably associated. Hence, the curve of the scheduler network is on the upper-right side of that of the fixed scheduler as shown in Figure 6.
However, it can also be observed that there is a certain distance between the curve of the scheduler network and that of the oracle scheduler. Given that the oracle scheduler is a 100% accurate classifier, we analyze the classification accuracy of the scheduler network in Figure 7. Let us take the σ = 10 case as an example. Although the classification accuracy is only 32.3%, the false positive rate (i.e., misclassifying a detect case as track) is as low as 1.9%. Because we empirically find that the mAP drops drastically if the scheduler mistakenly predicts track, our scheduler network is made conservative: track only when confident and detect if unsure. Figure 8 shows some qualitative results.

Figure 8: Red, blue and green boxes denote groundtruth, detected boxes and tracked boxes, respectively. The first row: R-FCN is applied in the keyframe. The second row: the scheduler determines to track since it is confident. The third row: the scheduler predicts to track in the first image although the red panda moves; however, the scheduler determines to detect in the second image as the cat moves significantly and is unable to be tracked.
Effectiveness of RoI convolution. Trackers are optimized for the crop-and-resize case (Bertinetto et al. 2016): the target and search region are cropped and resized to a fixed size before matching. This is a sensible choice since the tracking algorithm is not affected by the original size of the target. It is, however, slow in the multi-box case, and we propose RoI convolution as an efficient approximation. As shown in Figure 6, crop-and-resize SiamFC is even slower than detection: the overall running time is 3 fps. Notably, its mAP is 56.5%, which is roughly the same as that of our DorT framework empowered with RoI convolution. Our DorT framework, however, runs at 54 fps when σ = 10. RoI convolution obtains an over 10x speed boost while retaining mAP.
Comparison with existing methods. Deep feature flow (Zhu et al. 2017b) focuses on video object detection without tracking. We can, however, associate its predicted bounding boxes with per frame data association using the Hungarian algorithm. The results are shown in Figure 6. It can be observed that our framework performs significantly better than deep feature flow in video object detection/tracking.
Concurrent works that deal with video object detection/tracking are the submitted entries in ILSVRC 2017 (Wei et al. 2017; Russakovsky et al. 2017). As discussed in the Related Work section, these methods aim only to improve the mAP by adopting complicated methods and post-processing, leading to inefficient solutions without guaranteeing low latency. Their reported results on the test set range from 51% to 65% mAP. Our proposed DorT, notably, achieves 57% mAP on the validation set, which is comparable to the existing methods in magnitude, but is much more principled and efficient.
Video Object Detection
We also evaluate our DorT framework in video object detection for completeness, by removing the predicted object ID. Our DorT framework is compared against deep feature flow (Zhu et al. 2017b), D&T (Feichtenhofer, Pinz, and Zisserman 2017), high performance video object detection (VOD) (Zhu et al. 2018) and ST-Lattice (Chen et al. 2018). The results are shown in Figure 9. It can be observed that D&T and high performance VOD manage to achieve a speed-accuracy balance. They obtain higher results but cannot fit into realtime (over 30 fps) scenarios. ST-Lattice, although being fast and accurate, adopts detection results in future frames and is thus not suitable in a low latency scenario. As compared with deep feature flow, our DorT framework performs significantly faster with comparable performance (no more than 1% mAP loss). Although our aim is not the video object detection task, the results in Figure 9 demonstrate the effectiveness of our approach.
Conclusion and Future Work
We propose a DorT framework for cost-effective video object detection/tracking, which is in realtime and with low latency. Object detection/tracking of a video sequence is formulated as a sequential decision problem in the framework. Notably, a light-weight but effective scheduler network is proposed, which is shown to be a generalization of Siamese trackers and a special case of RL. The DorT framework turns out to be effective and strikes a good balance between speed and accuracy. The framework can still be improved in several aspects.
The SiamFC tracker can search for multiple scales to improve performance as in the original paper. More advanced data association methods can be applied by resorting to the state-of-the-art MOT algorithms. Furthermore, there is room to improve the training of the scheduler network to approach the oracle scheduler. These are left as future work.
| 5,204 |
1811.05340
|
2952464165
|
State-of-the-art object detectors and trackers are developing fast. Trackers are in general more efficient than detectors but bear the risk of drifting. A question is hence raised -- how to improve the accuracy of video object detection/tracking by utilizing the existing detectors and trackers within a given time budget? A baseline is frame skipping -- detecting every N-th frame and tracking for the frames in between. This baseline, however, is suboptimal since the detection frequency should depend on the tracking quality. To this end, we propose a scheduler network, which determines to detect or track at a certain frame, as a generalization of Siamese trackers. Although being light-weight and simple in structure, the scheduler network is more effective than the frame skipping baselines and flow-based approaches, as validated on the ImageNet VID dataset in video object detection/tracking.
|
State-of-the-art MOT approaches aim to improve the data association performance given publicly-available detections since the introduction of the MOT challenge @cite_33 . However, we focus on the sequential decision problem of detection or tracking. Although the widely-used Hungarian algorithm is adopted for simplicity and fairness in the experiments, we believe the incorporation of existing MOT approaches can further enhance the accuracy.
|
{
"abstract": [
"In the recent past, the computer vision community has developed centralized benchmarks for the performance evaluation of a variety of tasks, including generic object and pedestrian detection, 3D reconstruction, optical flow, single-object short-term tracking, and stereo estimation. Despite potential pitfalls of such benchmarks, they have proved to be extremely helpful to advance the state of the art in the respective area. Interestingly, there has been rather limited work on the standardization of quantitative benchmarks for multiple target tracking. One of the few exceptions is the well-known PETS dataset, targeted primarily at surveillance applications. Despite being widely used, it is often applied inconsistently, for example involving using different subsets of the available data, different ways of training the models, or differing evaluation scripts. This paper describes our work toward a novel multiple object tracking benchmark aimed to address such issues. We discuss the challenges of creating such a framework, collecting existing and new data, gathering state-of-the-art methods to be tested on the datasets, and finally creating a unified evaluation system. With MOTChallenge we aim to pave the way toward a unified evaluation framework for a more meaningful quantification of multi-target tracking."
],
"cite_N": [
"@cite_33"
],
"mid": [
"1521019969"
]
}
|
Detect or Track: Towards Cost-Effective Video Object Detection/Tracking
|
Convolutional neural network (CNN)-based methods have achieved significant progress in computer vision tasks such as object detection (Ren et al. 2015;Dai et al. 2016;Tang et al. 2018b) and tracking (Held, Thrun, and Savarese 2016;Bertinetto et al. 2016;Nam and Han 2016;Bhat et al. 2018). Following the tracking-by-detection paradigm, most state-of-the-art trackers can be viewed as a local detector of a specified object. Consequently, trackers are generally more efficient than detectors and can obtain precise bounding boxes in subsequent frames if the specified bounding box is accurate. However, as evaluated commonly on benchmark datasets such as OTB (Wu, Lim, and Yang 2015) and VOT (Kristan et al. 2017), trackers are encouraged to track as long as possible. It is non-trivial for trackers to be stopped once they are not confident, although heuristics, such as a threshold of the maximum response value, can be applied. Therefore, trackers bear the risk of drifting.
Besides object detection and tracking, there has recently been a series of studies on video object detection (Kang et al. 2017; Feichtenhofer, Pinz, and Zisserman 2017; Zhu et al. 2017b; Zhu et al. 2017a; Zhu et al. 2018; Chen et al. 2018). Beyond the baseline of detecting each frame individually, state-of-the-art approaches consider the temporal consistency of the detection results via tubelet proposals (Kang et al. 2017), optical flow (Zhu et al. 2017b; Zhu et al. 2017a; Zhu et al. 2018) and regression-based trackers (Feichtenhofer, Pinz, and Zisserman 2017). These approaches, however, are optimized for the detection accuracy of each individual frame. They either do not associate the presence of an object in different frames as a tracklet, or associate only after performing object detection on each frame, which is time-consuming. This paper is motivated by the constraints from practical video analytics scenarios such as autonomous driving and video surveillance. We argue that algorithms applied to these scenarios should be:
• capable of associating an object appearing in different frames, such that the trajectory or velocity of the object can be further inferred.
• in realtime (e.g., over 30 fps) and as fast as possible, such that the deployment cost can be further reduced.
• with low latency, which means to produce results once a frame in a video stream has been processed.
Considering these constraints, we focus in this paper on the task of video object detection/tracking (Russakovsky et al. 2017). The task is to detect objects in each frame (similar to the goal of video object detection), with an additional goal of associating an object appearing in different frames.
In order to handle this task under the realtime and low latency constraint, we propose a detect or track (DorT) framework. In this framework, object detection/tracking of a video sequence is formulated as a sequential decision problem -a scheduler network makes a detection/tracking decision for every incoming frame, and then these frames are processed with the detector/tracker accordingly. The architecture is illustrated in Figure 1.
The scheduler network is the most unique part of our framework. It should be light-weight but be able to determine to detect or track. Rather than using heuristic rules (e.g., thresholds of tracking confidence values), we formulate the scheduler as a small CNN by assessing the tracking quality. It is shown to be a generalization of Siamese trackers and a special case of reinforcement learning (RL).
The contributions are summarized as follows:
• We propose the DorT framework, in which the object detection/tracking of a video sequence is formulated as a sequential decision problem, while being in realtime and with low latency.
• We propose a light-weight but effective scheduler network, which is shown to be a generalization of Siamese trackers and a special case of RL.
• The proposed DorT framework is more effective than the frame skipping baselines and flow-based approaches, as validated on ImageNet VID dataset (Russakovsky et al. 2015) in video object detection/tracking.

Figure 1: Detect or track (DorT) framework. The scheduler network compares the current frame t + τ with the keyframe t by evaluating the tracking quality, and determines to detect or track frame t + τ: either frame t + τ is detected by a single-frame detector, or bounding boxes are tracked to frame t + τ from the keyframe t. If detect is chosen, frame t + τ is assigned as the new keyframe, and the boxes in frame t + τ and frame t + τ − 1 are associated by the widely-used Hungarian algorithm (not shown in the figure for conciseness).
Video Object Detection/Tracking
Video object detection/tracking is a task in ILSVRC 2017 (Russakovsky et al. 2017), where the winning entries are optimized for accuracy rather than speed. One winning entry adopts flow aggregation (Zhu et al. 2017a) to improve the detection accuracy. (Wei et al. 2017) combines flow-based (Ilg et al. 2017) and object tracking-based (Nam and Han 2016) tubelet generation (Kang et al. 2017). Nevertheless, these methods combine multiple cues (e.g., flow aggregation in detection, and flow-based and object tracking-based tubelet generation) which are complementary but time-consuming. Moreover, they apply global post-processing such as seq-NMS and tubelet NMS (Tang et al. 2018a), which greatly improve the accuracy but are not suitable for a realtime and low latency scenario.
Video Object Detection
Approaches to video object detection have been developed rapidly since the introduction of the ImageNet VID dataset (Russakovsky et al. 2015). (Kang et al. 2017) propose a framework that consists of per-frame proposal generation, bounding box tracking and tubelet re-scoring. (Zhu et al. 2017b) proposes to detect frames sparsely and propagates features with optical flow. (Zhu et al. 2017a) proposes to aggregate features in nearby frames along the motion path to improve the feature quality. Furthermore, (Zhu et al. 2018) proposes a high-performance approach by considering feature aggregation, partial feature updating and adaptive keyframe scheduling based on optical flow. Besides, (Feichtenhofer, Pinz, and Zisserman 2017) proposes to learn detection and tracking using a single network with a multi-task objective. (Chen et al. 2018) proposes to propagate the sparsely detected results through a space-time lattice. All the methods above focus on the accuracy of each individual frame. They either do not associate the presence of an object in different frames as a tracklet, or associate only after performing object detection on each frame, which is time-consuming.
Multiple Object Tracking
Multiple object tracking (MOT) focuses on data association: finding the set of trajectories that best explains the given detections (Leal-Taixé et al. 2014). Existing approaches to MOT fall into two categories: batch and online mode. Batch mode approaches pose data association as a global optimization problem, which can be a min-cost max-flow problem (Zhang, Li, and Nevatia 2008; Pirsiavash, Ramanan, and Fowlkes 2011), a continuous energy minimization problem (Milan, Roth, and Schindler 2014) or a graph cut problem (Tang et al. 2016). Contrarily, online mode approaches are only allowed to solve the data association problem with the present and past frames. (Xiang, Alahi, and Savarese 2015) formulates data association as a Markov decision process. (Milan et al. 2017; Sadeghian, Alahi, and Savarese 2017) employ recurrent neural networks (RNNs) for feature representation and data association.
State-of-the-art MOT approaches aim to improve the data association performance given publicly-available detections since the introduction of the MOT challenge (Leal-Taixé et al. 2015). However, we focus on the sequential decision problem of detection or tracking. Although the widely-used Hungarian algorithm is adopted for simplicity and fairness in the experiments, we believe the incorporation of existing MOT approaches can further enhance the accuracy.
Keyframe Scheduler
Researchers have proposed approaches to adaptive keyframe scheduling beyond regular frame skipping in video analytics. (Zhu et al. 2018) proposes to estimate the quality of optical flow, which relies on the time-consuming flow network. (Chen et al. 2018) proposes an easiness measure to consider the size and motion of small objects, which is hand-crafted and more importantly, it is a detect-then-schedule paradigm but cannot determine to detect or track. (Li, Shi, and Lin 2018;Xu et al. 2018) learn to predict the discrepancy between the segmentation map of the current frame and the keyframe, which are only applicable to segmentation tasks.
All the methods above, however, solve an auxiliary task (e.g., flow quality, or discrepancy of segmentation maps) but do not answer the question directly in a classification perspective -is the current frame a keyframe or not? In contrast, we pose video object detection/tracking as a sequential decision problem, and learn directly whether the current frame is a keyframe by assessing the tracking quality. Our formulation is further shown as a generalization of Siamese trackers and a special case of RL.
The DorT Framework
Video object detection/tracking is formulated as follows. Given a sequence of video frames $F = \{f_1, f_2, \ldots, f_N\}$, the aim is to obtain bounding boxes $B = \{b_1, b_2, \ldots, b_M\}$, where $b_i = \{rect_i, fid_i, score_i, id_i\}$, $rect_i$ denotes the 4-dim bounding box coordinates, and $fid_i$, $score_i$ and $id_i$ are scalars denoting respectively the frame ID, the confidence score and the object ID.
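As a concrete rendering of this output structure, each element $b_i$ of B can be viewed as a small record; the Python sketch below merely mirrors the four fields named above (the class name is hypothetical).

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TrackedBox:
    """One element b_i of the output set B (field names mirror the text)."""
    rect: Tuple[float, float, float, float]  # 4-dim bounding box coordinates
    fid: int                                 # frame ID
    score: float                             # confidence score
    id: int                                  # object ID (shared across frames)
```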
Considering the realtime and low latency constraint, we formulate video object detection/tracking as a sequential decision problem, which consists of four modules: singleframe detector, multi-box tracker, scheduler network and data association. An algorithm summary follows the introduction of the four modules.
Single-Frame Detector
We adopt R-FCN (Dai et al. 2016) as the detector following deep feature flow (DFF) (Zhu et al. 2017b). Our framework, however, is compatible with all single-frame detectors.
Efficient Multi-Box Tracker via RoI Convolution
The SiamFC tracker (Bertinetto et al. 2016) is adopted in our framework. It learns a deep feature extractor during training such that an object is similar to its deformations but different from the background. During testing, the nearby patch with the highest confidence is selected as the tracking result. The tracker is reported to run at 86 fps in the original paper.
Figure 2: RoI convolution. Given targets in keyframe t and search regions in frame t + τ, the corresponding RoIs are cropped from the feature maps and convolved to obtain the response maps. Solid boxes denote detected objects in keyframe t and dashed boxes denote the corresponding search region in frame t + τ. A star denotes the center of its corresponding bounding box. The center of a dashed box is copied from the tracking result in frame t + τ − 1.

Despite its efficiency, there are usually 30 to 50 detected boxes in a frame outputted by R-FCN. It is a natural idea to track only the high-confidence ones and ignore the rest. Such an approach, however, results in a drastic decrease in mAP
since R-FCN detection is not perfect and many true positives with low confidence scores are discarded. We therefore need to track all the detected boxes. It is time-consuming to track 50 boxes without optimization (about 3 fps). In order to speed up the tracking process, we propose to share the feature extraction network across multiple boxes and introduce an RoI convolution layer in place of the original cross-correlation layer in SiamFC. Figure 2 is an illustration. Through cropping and convolving on the feature maps, the proposed tracker is over 10x faster than the time-consuming baseline while obtaining comparable accuracy.
Notably, there is no learnable parameter in the RoI convolution layer, and thus we can train the SiamFC tracker following the original settings in (Bertinetto et al. 2016).
Scheduler Network
The scheduler network is the core of DorT, as our task is formulated as a sequential decision problem. It takes as input the current frame f t+τ and its keyframe f t , and determines to detect or track, denoted as Scheduler(f t , f t+τ ). We will elaborate this module in the next section.
Data Association
Once the scheduler network determines to detect the current frame, there is a need to associate the previous tracked boxes and the current detected boxes. Hence, a data association algorithm is required. For simplicity and fairness in the paper, the widely-used Hungarian algorithm is adopted. Although it is possible to improve the accuracy by incorporating more advanced data association techniques (Xiang, Alahi, and Savarese 2015;Sadeghian, Alahi, and Savarese 2017), it is not the focus in the paper. The overall architecture of the DorT framework is shown in Figure 1. More details are summarized in Algorithm 1.
The Scheduler Network in DorT
The scheduler network in DorT aims to determine to detect or track given a new frame by estimating the quality of the tracked boxes. It should be efficient itself. Rather than training a network from scratch, we propose to reuse part of the tracking network.

Algorithm 1 The Detect or Track (DorT) Framework
Input: A sequence of video frames F = {f_1, f_2, . . . , f_N}.
Output: Bounding boxes B = {b_1, b_2, . . . , b_M} with ID, where b_i = {rect_i, fid_i, score_i, id_i}.
1: B ← {}
2: t ← 1    ▷ t is the index of keyframe
3: Detect f_1 with the single-frame detector.
4: Assign new ID to the detected boxes.
5: Add the detected boxes in f_1 to B.
6: for i ← 2 to N do
7:    d ← Scheduler(f_t, f_i)    ▷ decision of scheduler
8:    if d = detect then
9:        Detect f_i with the single-frame detector.
10:       Match boxes in f_i and f_{i−1} using Hungarian.
11:       Assign new ID to unmatched boxes in f_i.
12:       Assign corresponding ID to matched boxes in f_i.

Figure 3: Scheduler network. The output feature map of the correlation layer is followed by two convolutional layers and a fully-connected layer with a 2-way softmax. As discussed later, this structure is a generalization of the SiamFC tracker.

Firstly, the l-th layer convolutional feature map of frame t and frame t + τ, denoted respectively as $x^t_l$ and $x^{t+\tau}_l$, are fed into a correlation layer which performs point-wise feature comparison
$x^{t,t+\tau}_{\mathrm{corr}}(i, j, p, q) = \left\langle x^{t}_{l}(i, j),\ x^{t+\tau}_{l}(i + p, j + q) \right\rangle \qquad (1)$
where −d ≤ p ≤ d and −d ≤ q ≤ d are offsets to compare features in a neighbourhood around the locations (i, j) in the feature map, defined by the maximum displacement d.
Hence, the output of the correlation layer is a feature map $x_{\mathrm{corr}} \in \mathbb{R}^{H_l \times W_l \times (2d+1)^2}$, where $H_l$ and $W_l$ denote respectively the height and width of the l-th layer feature map. The correlation feature map $x_{\mathrm{corr}}$ is then passed through two convolutional layers and a fully-connected layer with a 2-way softmax. The final output of the network is a classification score indicating the probability to detect the current frame. Figure 3 is an illustration of the scheduler network.
Training Data Preparation
Existing groundtruth in the ImageNet VID dataset (Russakovsky et al. 2015) does not contain an indicator of the tracking quality. In this paper, we simulate the tracking process between two sampled frames and label it as detect (0) or track (1) in a strict protocol.
As we have sampled frame t and frame t + τ from the same sequence, we track all the groundtruth bounding boxes using SiamFC from frame t to frame t + τ. If all the groundtruth boxes in frame t + τ are matched with the tracked boxes (e.g., IOU over 0.8), the frame is labeled as track; otherwise, it is labeled as detect. Any emerging or disappearing object also indicates a detect. Several examples are shown in Figure 4.
We have also tried to learn a scheduler for each tracker, but found it difficult to handle high-confidence false detections and non-trivial to merge the decisions of all the trackers. In contrast, the proposed approach to learning a single scheduler is an elegant solution which directly learns the decision rather than an auxiliary target such as the fraction of pixels at which the semantic segmentation labels differ (Li, Shi, and Lin 2018), or the fraction of low-quality flow estimation (Zhu et al. 2018).
Relation to the SiamFC Tracker
The proposed scheduler network can be seen as a generalization of the original SiamFC (Bertinetto et al. 2016). In the correlation layer of SiamFC, the target feature (6 × 6 × 128) is convolved with the search region feature (22 × 22 × 128) and obtains the response map (17 × 17 × 1, which can be equivalently written as 1 × 1 × $17^2$). Similarly, we can view the correlation layer of the proposed scheduler network (see Eq. 1) as convolutions between multiple target features in the keyframe and their respective nearby search regions in the current frame. The size of a target equals the receptive field of the input feature map of our scheduler. Figure 5 shows several examples of targets. Actually, however, targets include all possible patches in a sliding window manner.
In this sense, the output feature map of the correlation layer $x_{\mathrm{corr}} \in \mathbb{R}^{H_l \times W_l \times (2d+1)^2}$ can be regarded as a set of $H_l \times W_l$ SiamFC tracking tasks, where the response map of each is $1 \times 1 \times (2d+1)^2$. The correlation feature map is then fed into a small CNN consisting of two convolutional layers and a fully-connected layer.
In summary, the generalization of the proposed scheduler network over SiamFC is twofold:
• SiamFC correlates a target feature with its nearby search region, while our scheduler extends the number of tasks from one to many.
• SiamFC directly picks the highest value in the correlation feature map as the result, whereas the proposed scheduler fuses the multiple response maps with a CNN.
The validity of the proposed scheduler network is hence clear: it first convolves patches in frame t (examples shown in Figure 5) with their respective nearby regions in frame t + τ, and then fuses the response maps with a CNN, in order to measure the difference between the two frames and, more importantly, to assess the tracking quality. The scheduler is also resistant to small perturbations by inheriting SiamFC's robustness to object deformation.
Relation to Reinforcement Learning
The sequential decision problem can also be formulated in an RL framework, where the action, state, state transition function and reward need to be defined.

Figure 5: The size of a target equals the receptive field of the input feature map of the scheduler. As shown, a target patch might be an object, a part of an object, or totally background. The "tracking" results of these targets will be fused later. It should be noted that targets include all possible patches in a sliding window manner, but not just the three boxes shown above.
Action. The action space A contains two types of actions: {detect, track}. If the decision is detect, object detector is applied to the current frame; otherwise, boxes tracked from the keyframe are taken as the results.
State. The state $s_{t,\tau}$ is defined as a tuple $(x^t_l, x^{t+\tau}_l)$, where $x^t_l$ and $x^{t+\tau}_l$ denote the l-th layer convolutional feature map of frame t and frame t + τ, respectively. Frame t is the keyframe on which the object detector is applied, and frame t + τ is the current frame on which actions are to be determined.
State transition function. After the decision of action $a_{t,\tau}$ in state $s_{t,\tau}$, the next state is obtained upon the action:
• detect. The next state is $s_{t+\tau,1} = (x^{t+\tau}_l, x^{t+\tau+1}_l)$. Frame t + τ is fed to the object detector and is set as the new keyframe.
• track. The next state is $s_{t,\tau+1} = (x^t_l, x^{t+\tau+1}_l)$. Bounding boxes tracked from the keyframe are taken as the results in frame t + τ. The keyframe t remains unchanged.
As shown above, no matter whether the keyframe is t or t + τ, the task in the next state is to determine the action in frame t + τ + 1.
Reward. The reward function is defined as r(s, a) since it is determined by both the state s and the action a. As illustrated in Figure 4, a labeling mechanism is proposed to obtain the groundtruth label of the tracking quality between two frames (i.e., a certain state s). We denote the groundtruth label as GT (s), which is either detect or track. Hence, the reward function can be defined as follows:
$r(s, a) = \begin{cases} 1, & GT(s) = a \\ 0, & GT(s) \neq a \end{cases} \qquad (2)$
which is based on the consistency between the groundtruth label and the action taken. After defining all the above, the RL problem can be solved via a deep Q network (DQN) (Mnih et al. 2015) with a discount factor γ, penalizing the reward from future time steps. However, training stability is always an issue in RL algorithms (Anschel, Baram, and Shimkin 2017). In this paper, we set γ = 0 such that the agent only cares about the reward from the next time step. Therefore, the DQN becomes a regression network -pushing the predicted action to be the same as the GT action, and the scheduler network is a special case of RL. We empirically observe that the training procedure becomes easier and more stable by setting γ = 0.
Experiments
The DorT framework is evaluated on the ImageNet VID dataset (Russakovsky et al. 2015) in the task of video object detection/tracking. For completeness, we also report results in video object detection.
Experimental Setup
Dataset description. All experiments are conducted on the ImageNet VID dataset (Russakovsky et al. 2015). ImageNet VID is split into a training set of 3862 videos and a test set of 555 videos. There are per-frame bounding box annotations for each video. Furthermore, all appearances of a certain target across different frames in a video are assigned the same ID.
Evaluation metric. The evaluation metric for video object detection is the extensively used mean average precision (mAP), which is based on a sorted list of bounding boxes in descending order of their scores. A predicted bounding box is considered correct if its IOU with a groundtruth box is over a threshold (e.g., 0.5).
In contrast to the standard mAP which is based on bounding boxes, the mAP for video object detection/tracking is based on a sorted list of tracklets (Russakovsky et al. 2017). A tracklet is a set of bounding boxes with the same ID. Similarly, a tracklet is considered correct if its IOU with a groundtruth tracklet is over a threshold. Typical choices of IOU thresholds for tracklet matching and per-frame bounding box matching are both 0.5. The score of a tracklet is the average score of all its bounding boxes.
Implementation details. Following the settings in (Zhu et al. 2017b), R-FCN (Dai et al. 2016) is trained with a ResNet-101 backbone (He et al. 2016) on the training set.
SiamFC is trained following the original paper (Bertinetto et al. 2016). Instead of training from scratch, however, we initialize the first four convolutional layers with the pretrained parameters from AlexNet (Krizhevsky, Sutskever, and Hinton 2012) and change Conv5 from 3 × 3 to 1 × 1 with the Xavier initializer. Parameters of the first four convolutional layers are fixed during training (He et al. 2018). We only search for one scale and discard the upsampling step in the original SiamFC for efficiency. All images fed into SiamFC are resized to 300 × 500. Moreover, the confidence score of a tracked box (for evaluation) is equal to that of its corresponding detected box in the keyframe.
The scheduler network takes as input the Conv5 feature of our trained SiamFC. The SGD optimizer is adopted with a learning rate 1e-2, momentum 0.9 and weight decay 5e-4. The batch size is set to 32. During testing, we raise the decision threshold of track to δ = 0.97 (i.e., the scheduler outputs track if the predicted confidence of track is over δ) to ensure conservativeness of the scheduler. Furthermore, since nearby frames look similar, the scheduler is applied every σ frames (where σ is a tunable parameter) to reduce unnecessary computation.
All experiments are conducted on a workstation with an Intel Core i7-4790k CPU and a Titan X GPU. We empirically observe that the detection network and the tracking/scheduler network run at 8.33 fps and 100 fps, respectively. This is because the ResNet-101 backbone is much heavier than AlexNet. Moreover, the speed of the Hungarian algorithm is as high as 667 fps.
Video Object Detection/Tracking
To our knowledge, the most closely related work to ours is (Lan et al. 2016), which handles cost-effective face detection/tracking. Since faces are much easier to track and exhibit less deformation, that work succeeds by utilizing non-deep-learning detectors and trackers. However, we aim at general object detection/tracking in video, which is much more challenging. We demonstrate the effectiveness of the proposed DorT framework against several strong baselines.
Effectiveness of scheduler. The scheduler network is a core component of our DorT framework. Since SiamFC tracking is more efficient than R-FCN detection, the scheduler should predict track when it is safe for the trackers and be conservative enough to predict detect when there is sufficient change to avoid track drift. We compare our DorT framework with a frame skipping baseline, namely a "fixed scheduler" -R-FCN is performed every σ frames and SiamFC is adopted to track for the frames in between. As aforementioned, our scheduler can also be applied every σ frames to improve efficiency. Moreover, there could be an oracle scheduler -predicting the groundtruth label (detect or track) as shown in Figure 4 during testing. The oracle scheduler is a 100% accurate scheduler in our setting. The results are shown in Figure 6.
We can observe that the frame rate and mAP vary as σ changes. Interestingly, the curves are not monotonic -as the frame rate decreases, the accuracy in mAP is not necessarily higher. In particular, detectors are applied frequently when σ = 1 (the leftmost point of each curve). Associating boxes using the Hungarian algorithm is generally less reliable (given missed detections and false detections) than tracking boxes between two frames. It is also a benefit of the scheduler network -applying tracking only when confident, and thus most boxes are reliably associated. Hence, the curve of the scheduler network is on the upper-right side of that of the fixed scheduler as shown in Figure 6.
However, it can also be observed that there is a certain distance between the curve of the scheduler network and that of the oracle scheduler. Given that the oracle scheduler is a 100% accurate classifier, we analyze the classification accuracy of the scheduler network in Figure 7. Let us take the σ = 10 case as an example. Although the classification accuracy is only 32.3%, the false positive rate (i.e., misclassifying a detect case as track) is as low as 1.9%. Because we empirically find that the mAP drops drastically if the scheduler mistakenly predicts track, our scheduler network is made conservative: track only when confident and detect if unsure. Figure 8 shows some qualitative results.

Figure 8: Red, blue and green boxes denote groundtruth, detected boxes and tracked boxes, respectively. The first row: R-FCN is applied in the keyframe. The second row: the scheduler determines to track since it is confident. The third row: the scheduler predicts to track in the first image although the red panda moves; however, the scheduler determines to detect in the second image as the cat moves significantly and is unable to be tracked.
Effectiveness of RoI convolution. Trackers are optimized for the crop-and-resize case (Bertinetto et al. 2016): the target and search region are cropped and resized to a fixed size before matching. This is a sensible choice since the tracking algorithm is not affected by the original size of the target. It is, however, slow in the multi-box case, and we propose RoI convolution as an efficient approximation. As shown in Figure 6, crop-and-resize SiamFC is even slower than detection: the overall running time is 3 fps. Notably, its mAP is 56.5%, which is roughly the same as that of our DorT framework empowered with RoI convolution. Our DorT framework, however, runs at 54 fps when σ = 10. RoI convolution obtains an over 10x speed boost while retaining mAP.
Comparison with existing methods. Deep feature flow (Zhu et al. 2017b) focuses on video object detection without tracking. We can, however, associate its predicted bounding boxes with per frame data association using the Hungarian algorithm. The results are shown in Figure 6. It can be observed that our framework performs significantly better than deep feature flow in video object detection/tracking.
Concurrent works that deal with video object detection/tracking are the submitted entries in ILSVRC 2017 (Wei et al. 2017; Russakovsky et al. 2017). As discussed in the Related Work section, these methods aim only to improve the mAP by adopting complicated methods and post-processing, leading to inefficient solutions without guaranteeing low latency. Their reported results on the test set range from 51% to 65% mAP. Our proposed DorT, notably, achieves 57% mAP on the validation set, which is comparable to the existing methods in magnitude, but is much more principled and efficient.
Video Object Detection
We also evaluate our DorT framework in video object detection for completeness, by removing the predicted object ID. Our DorT framework is compared against deep feature flow (Zhu et al. 2017b), D&T (Feichtenhofer, Pinz, and Zisserman 2017), high performance video object detection (VOD) (Zhu et al. 2018) and ST-Lattice (Chen et al. 2018). The results are shown in Figure 9. It can be observed that D&T and high performance VOD manage to achieve a speed-accuracy balance. They obtain higher results but cannot fit into realtime (over 30 fps) scenarios. ST-Lattice, although being fast and accurate, adopts detection results in future frames and is thus not suitable in a low latency scenario. As compared with deep feature flow, our DorT framework performs significantly faster with comparable performance (no more than 1% mAP loss). Although our aim is not the video object detection task, the results in Figure 9 demonstrate the effectiveness of our approach.
Conclusion and Future Work
We propose a DorT framework for cost-effective video object detection/tracking, which is in realtime and with low latency. Object detection/tracking of a video sequence is formulated as a sequential decision problem in the framework. Notably, a light-weight but effective scheduler network is proposed, which is shown to be a generalization of Siamese trackers and a special case of RL. The DorT framework turns out to be effective and strikes a good balance between speed and accuracy. The framework can still be improved in several aspects.
The SiamFC tracker can search for multiple scales to improve performance as in the original paper. More advanced data association methods can be applied by resorting to the state-of-the-art MOT algorithms. Furthermore, there is room to improve the training of the scheduler network to approach the oracle scheduler. These are left as future work.
| 5,204 |
1811.05340
|
2952464165
|
State-of-the-art object detectors and trackers are developing fast. Trackers are in general more efficient than detectors but bear the risk of drifting. A question is hence raised -- how to improve the accuracy of video object detection/tracking by utilizing the existing detectors and trackers within a given time budget? A baseline is frame skipping -- detecting every N-th frame and tracking for the frames in between. This baseline, however, is suboptimal since the detection frequency should depend on the tracking quality. To this end, we propose a scheduler network, which determines to detect or track at a certain frame, as a generalization of Siamese trackers. Although being light-weight and simple in structure, the scheduler network is more effective than the frame skipping baselines and flow-based approaches, as validated on the ImageNet VID dataset in video object detection/tracking.
|
Researchers have proposed approaches to adaptive keyframe scheduling beyond regular frame skipping in video analytics. @cite_21 proposes to estimate the quality of optical flow, which relies on the time-consuming flow network. @cite_39 proposes an easiness measure to consider the size and motion of small objects, which is hand-crafted and, more importantly, follows a detect-then-schedule paradigm that cannot determine to detect or track. @cite_6 @cite_27 learn to predict the discrepancy between the segmentation map of the current frame and the keyframe, which is only applicable to segmentation tasks.
|
{
"abstract": [
"In this paper, we present a detailed design of dynamic video segmentation network (DVSNet) for fast and efficient semantic video segmentation. DVSNet consists of two convolutional neural networks: a segmentation network and a flow network. The former generates highly accurate semantic segmentations, but is deeper and slower. The latter is much faster than the former, but its output requires further processing to generate less accurate semantic segmentations. We explore the use of a decision network to adaptively assign different frame regions to different networks based on a metric called expected confidence score. Frame regions with a higher expected confidence score traverse the flow network. Frame regions with a lower expected confidence score have to pass through the segmentation network. We have extensively performed experiments on various configurations of DVSNet, and investigated a number of variants for the proposed decision network. The experimental results show that our DVSNet is able to achieve up to 70.4 mIoU at 19.8 fps on the Cityscape dataset. A high speed version of DVSNet is able to deliver an fps of 30.4 with 63.2 mIoU on the same dataset. DVSNet is also able to reduce up to 95 of the computational workloads.",
"There has been significant progresses for image object detection in recent years. Nevertheless, video object detection has received little attention, although it is more challenging and more important in practical scenarios. Built upon the recent works, this work proposes a unified approach based on the principle of multi-frame end-to-end learning of features and cross-frame motion. Our approach extends prior works with three new techniques and steadily pushes forward the performance envelope (speed-accuracy tradeoff), towards high performance video object detection.",
"Recent years have seen remarkable progress in semantic segmentation. Yet, it remains a challenging task to apply segmentation techniques to video-based applications. Specifically, the high throughput of video streams, the sheer cost of running fully convolutional networks, together with the low-latency requirements in many real-world applications, e.g. autonomous driving, present a significant challenge to the design of the video segmentation framework. To tackle this combined challenge, we develop a framework for video semantic segmentation, which incorporates two novel components: (1) a feature propagation module that adaptively fuses features over time via spatially variant convolution, thus reducing the cost of per-frame computation: and (2) an adaptive scheduler that dynamically allocate computation based on accuracy prediction. Both components work together to ensure low latency while maintaining high segmentation quality. On both Cityscapes and CamVid, the proposed framework obtained competitive performance compared to the state of the art, while substantially reducing the latency, from 360 ms to 119 ms.",
"High-performance object detection relies on expensive convolutional networks to compute features, often leading to significant challenges in applications, e.g. those that require detecting objects from video streams in real time. The key to this problem is to trade accuracy for efficiency in an effective way, i.e. reducing the computing cost while maintaining competitive performance. To seek a good balance, previous efforts usually focus on optimizing the model architectures. This paper explores an alternative approach, that is, to reallocate the computation over a scale-time space. The basic idea is to perform expensive detection sparsely and propagate the results across both scales and time with substantially cheaper networks, by exploiting the strong correlations among them. Specifically, we present a unified framework that integrates detection, temporal propagation, and across-scale refinement on a Scale-Time Lattice. On this framework, one can explore various strategies to balance performance and cost. Taking advantage of this flexibility, we further develop an adaptive scheme with the detector invoked on demand and thus obtain improved tradeoff. On ImageNet VID dataset, the proposed method can achieve a competitive mAP 79.6 at 20 fps, or 79.0 at 62 fps as a performance speed tradeoff."
],
"cite_N": [
"@cite_27",
"@cite_21",
"@cite_6",
"@cite_39"
],
"mid": [
"2796020336",
"2772982658",
"2963917006",
"2797831031"
]
}
|
Detect or Track: Towards Cost-Effective Video Object Detection/Tracking
|
Convolutional neural network (CNN)-based methods have achieved significant progress in computer vision tasks such as object detection (Ren et al. 2015;Dai et al. 2016;Tang et al. 2018b) and tracking (Held, Thrun, and Savarese 2016;Bertinetto et al. 2016;Nam and Han 2016;Bhat et al. 2018). Following the tracking-by-detection paradigm, most state-of-the-art trackers can be viewed as a local detector of a specified object. Consequently, trackers are generally more efficient than detectors and can obtain precise bounding boxes in subsequent frames if the specified bounding box is accurate. However, as evaluated commonly on benchmark datasets such as OTB (Wu, Lim, and Yang 2015) and VOT (Kristan et al. 2017), trackers are encouraged to track as long as possible. It is non-trivial for trackers to be stopped once they are not confident, although heuristics, such as a threshold of the maximum response value, can be applied. Therefore, trackers bear the risk of drifting.
Besides object detection and tracking, there has recently been a series of studies on video object detection (Kang et al. 2017; Feichtenhofer, Pinz, and Zisserman 2017; Zhu et al. 2017b; Zhu et al. 2017a; Zhu et al. 2018; Chen et al. 2018). Beyond the baseline of detecting each frame individually, state-of-the-art approaches consider the temporal consistency of the detection results via tubelet proposals (Kang et al. 2017), optical flow (Zhu et al. 2017b; Zhu et al. 2017a; Zhu et al. 2018) and regression-based trackers (Feichtenhofer, Pinz, and Zisserman 2017). These approaches, however, are optimized for the detection accuracy of each individual frame. They either do not associate the presence of an object in different frames as a tracklet, or associate only after performing object detection on each frame, which is time-consuming. This paper is motivated by the constraints from practical video analytics scenarios such as autonomous driving and video surveillance. We argue that algorithms applied to these scenarios should be:
• capable of associating an object appearing in different frames, such that the trajectory or velocity of the object can be further inferred;
• in realtime (e.g., over 30 fps) and as fast as possible, such that the deployment cost can be further reduced;
• with low latency, which means to produce results once a frame in a video stream has been processed.
Considering these constraints, we focus in this paper on the task of video object detection/tracking (Russakovsky et al. 2017). The task is to detect objects in each frame (similar to the goal of video object detection), with an additional goal of associating an object appearing in different frames.
In order to handle this task under the realtime and low latency constraint, we propose a detect or track (DorT) framework. In this framework, object detection/tracking of a video sequence is formulated as a sequential decision problem -a scheduler network makes a detection/tracking decision for every incoming frame, and then these frames are processed with the detector/tracker accordingly. The architecture is illustrated in Figure 1.
The scheduler network is the most unique part of our framework. It should be light-weight but be able to determine to detect or track. Rather than using heuristic rules (e.g., thresholds of tracking confidence values), we formulate the scheduler as a small CNN by assessing the tracking quality. It is shown to be a generalization of Siamese trackers and a special case of reinforcement learning (RL).
The contributions are summarized as follows:
• We propose the DorT framework, in which the object detection/tracking of a video sequence is formulated as a sequential decision problem, while being in realtime and with low latency.
• We propose a light-weight but effective scheduler network, which is shown to be a generalization of Siamese trackers and a special case of RL.
• The proposed DorT framework is more effective than the frame skipping baselines and flow-based approaches, as validated on the ImageNet VID dataset (Russakovsky et al. 2015) in video object detection/tracking.
Figure 1: Detect or track (DorT) framework. The scheduler network compares the current frame t+τ with the keyframe t by evaluating the tracking quality, and determines to detect or track frame t+τ: either frame t+τ is detected by a single-frame detector, or bounding boxes are tracked to frame t+τ from the keyframe t. If detect is chosen, frame t+τ is assigned as the new keyframe, and the boxes in frame t+τ and frame t+τ−1 are associated by the widely-used Hungarian algorithm (not shown in the figure for conciseness).
Video Object Detection/Tracking
Video object detection/tracking is a task in ILSVRC 2017 (Russakovsky et al. 2017), where the winning entries are optimized for accuracy rather than speed. One winning entry adopts flow aggregation (Zhu et al. 2017a) to improve the detection accuracy, while (Wei et al. 2017) combines flow-based (Ilg et al. 2017) and object tracking-based (Nam and Han 2016) tubelet generation (Kang et al. 2017). Nevertheless, these methods combine multiple cues (e.g., flow aggregation in detection, and flow-based and object tracking-based tubelet generation) which are complementary but time-consuming. Moreover, they apply global post-processing such as seq-NMS and tubelet NMS (Tang et al. 2018a), which greatly improve the accuracy but are not suitable for a realtime and low latency scenario.
Video Object Detection
Approaches to video object detection have been developed rapidly since the introduction of the ImageNet VID dataset (Russakovsky et al. 2015). (Kang et al. 2017) propose a framework that consists of per-frame proposal generation, bounding box tracking and tubelet re-scoring. (Zhu et al. 2017b) proposes to detect frames sparsely and propagates features with optical flow. (Zhu et al. 2017a) proposes to aggregate features in nearby frames along the motion path to improve the feature quality. Furthermore, (Zhu et al. 2018) proposes a high-performance approach by considering feature aggregation, partial feature updating and adaptive keyframe scheduling based on optical flow. Besides, (Feichtenhofer, Pinz, and Zisserman 2017) proposes to learn detection and tracking using a single network with a multi-task objective. (Chen et al. 2018) proposes to propagate the sparsely detected results through a space-time lattice. All the methods above focus on the accuracy of each individual frame. They either do not associate the presence of an object in different frames as a tracklet, or associate only after performing object detection on each frame, which is time-consuming.
Multiple Object Tracking
Multiple object tracking (MOT) focuses on data association: finding the set of trajectories that best explains the given detections (Leal-Taixé et al. 2014). Existing approaches to MOT fall into two categories: batch and online mode. Batch mode approaches pose data association as a global optimization problem, which can be a min-cost max-flow problem (Zhang, Li, and Nevatia 2008; Pirsiavash, Ramanan, and Fowlkes 2011), a continuous energy minimization problem (Milan, Roth, and Schindler 2014) or a graph cut problem (Tang et al. 2016). Contrarily, online mode approaches are only allowed to solve the data association problem with the present and past frames. (Xiang, Alahi, and Savarese 2015) formulates data association as a Markov decision process. (Milan et al. 2017; Sadeghian, Alahi, and Savarese 2017) employ recurrent neural networks (RNNs) for feature representation and data association.
State-of-the-art MOT approaches aim to improve the data association performance given publicly-available detections since the introduction of the MOT challenge (Leal-Taixé et al. 2015). However, we focus on the sequential decision problem of detection or tracking. Although the widely-used Hungarian algorithm is adopted for simplicity and fairness in the experiments, we believe the incorporation of existing MOT approaches can further enhance the accuracy.
Keyframe Scheduler
Researchers have proposed approaches to adaptive keyframe scheduling beyond regular frame skipping in video analytics. (Zhu et al. 2018) proposes to estimate the quality of optical flow, which relies on the time-consuming flow network. (Chen et al. 2018) proposes an easiness measure to consider the size and motion of small objects, which is hand-crafted and more importantly, it is a detect-then-schedule paradigm but cannot determine to detect or track. (Li, Shi, and Lin 2018;Xu et al. 2018) learn to predict the discrepancy between the segmentation map of the current frame and the keyframe, which are only applicable to segmentation tasks.
All the methods above, however, solve an auxiliary task (e.g., flow quality, or discrepancy of segmentation maps) but do not answer the question directly in a classification perspective -is the current frame a keyframe or not? In contrast, we pose video object detection/tracking as a sequential decision problem, and learn directly whether the current frame is a keyframe by assessing the tracking quality. Our formulation is further shown as a generalization of Siamese trackers and a special case of RL.
The DorT Framework
Video object detection/tracking is formulated as follows. Given a sequence of video frames F = {f_1, f_2, ..., f_N}, the aim is to obtain bounding boxes B = {b_1, b_2, ..., b_M}, where b_i = {rect_i, fid_i, score_i, id_i}; rect_i denotes the 4-dim bounding box coordinates, and fid_i, score_i and id_i are scalars denoting respectively the frame ID, the confidence score and the object ID.
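As a point of reference, this output representation maps naturally onto a small record type; a minimal Python sketch is given below, where the field names mirror b_i = {rect_i, fid_i, score_i, id_i} and are otherwise illustrative.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class TrackedBox:
    rect: Tuple[float, float, float, float]  # 4-dim bounding box coordinates
    fid: int                                 # frame ID
    score: float                             # confidence score
    obj_id: int                              # object ID; boxes sharing obj_id form a tracklet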
Considering the realtime and low latency constraint, we formulate video object detection/tracking as a sequential decision problem, which consists of four modules: singleframe detector, multi-box tracker, scheduler network and data association. An algorithm summary follows the introduction of the four modules.
Single-Frame Detector
We adopt R-FCN (Dai et al. 2016) as the detector following deep feature flow (DFF) (Zhu et al. 2017b). Our framework, however, is compatible with all single-frame detectors.
Efficient Multi-Box Tracker via RoI Convolution
The SiamFC tracker (Bertinetto et al. 2016) is adopted in our framework. It learns a deep feature extractor during training such that an object is similar to its deformations but different from the background. During testing, the nearby patch with the highest confidence is selected as the tracking result. The tracker is reported to run at 86 fps in the original paper.
Despite its efficiency, there are usually 30 to 50 detected boxes in a frame outputted by R-FCN. It is a natural idea to track only the high-confidence ones and ignore the rest. Such an approach, however, results in a drastic decrease in mAP since R-FCN detection is not perfect and many true positives with low confidence scores are discarded. We therefore need to track all the detected boxes. It is time-consuming to track 50 boxes without optimization (about 3 fps). In order to speed up the tracking process, we propose to share the feature extraction network of multiple boxes and propose an RoI convolution layer in place of the original cross-correlation layer in SiamFC. Figure 2 is an illustration. Through cropping and convolving on the feature maps, the proposed tracker is over 10x faster than the time-consuming baseline while obtaining comparable accuracy.
Figure 2: RoI convolution. Given targets in keyframe t and search regions in frame t+τ, the corresponding RoIs are cropped from the feature maps and convolved to obtain the response maps. Solid boxes denote detected objects in keyframe t and dashed boxes denote the corresponding search region in frame t+τ. A star denotes the center of its corresponding bounding box. The center of a dashed box is copied from the tracking result in frame t+τ−1.
Notably, there is no learnable parameter in the RoI convolution layer, and thus we can train the SiamFC tracker following the original settings in (Bertinetto et al. 2016).
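A minimal numpy sketch of the RoI-convolution idea follows: for every detected box, a target feature is cropped from the keyframe feature map and correlated with a larger search region cropped from the current-frame feature map, so all boxes share a single feature-extraction pass. The crop sizes (6 and 22, echoing SiamFC) and the helper names are illustrative, and border padding is omitted.

import numpy as np

def crop(feat, cy, cx, size):
    # Crop a size x size window centred at (cy, cx) from a C x H x W feature map.
    # Assumes the centre is far enough from the border; padding is omitted for brevity.
    h = size // 2
    return feat[:, cy - h:cy - h + size, cx - h:cx - h + size]

def roi_correlate(key_feat, cur_feat, centers, target_size=6, search_size=22):
    # For each box centre, correlate its target feature (keyframe) with a search region
    # (current frame), as in SiamFC, but without re-running the backbone per box.
    responses = []
    for cy, cx in centers:
        target = crop(key_feat, cy, cx, target_size)      # C x 6 x 6
        search = crop(cur_feat, cy, cx, search_size)      # C x 22 x 22
        out = search_size - target_size + 1               # 17
        resp = np.zeros((out, out))
        for i in range(out):
            for j in range(out):
                patch = search[:, i:i + target_size, j:j + target_size]
                resp[i, j] = np.sum(patch * target)       # cross-correlation score
        responses.append(resp)                            # the peak gives the new box centre
    return responses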
Scheduler Network
The scheduler network is the core of DorT, as our task is formulated as a sequential decision problem. It takes as input the current frame f_{t+τ} and its keyframe f_t, and determines to detect or track, denoted as Scheduler(f_t, f_{t+τ}). We will elaborate this module in the next section.
Data Association
Once the scheduler network determines to detect the current frame, there is a need to associate the previous tracked boxes and the current detected boxes. Hence, a data association algorithm is required. For simplicity and fairness in the paper, the widely-used Hungarian algorithm is adopted. Although it is possible to improve the accuracy by incorporating more advanced data association techniques (Xiang, Alahi, and Savarese 2015;Sadeghian, Alahi, and Savarese 2017), it is not the focus in the paper. The overall architecture of the DorT framework is shown in Figure 1. More details are summarized in Algorithm 1.
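A hedged sketch of this association step is shown below: previously tracked boxes and newly detected boxes are matched by the Hungarian algorithm on an IoU-based cost, and unmatched detections receive new IDs. The IoU threshold and helper names are illustrative, not taken from the paper.

import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    # IoU of two boxes given as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(prev_boxes, det_boxes, iou_thresh=0.5):
    # Returns (matched index pairs, indices of unmatched detections that need new IDs).
    if not prev_boxes or not det_boxes:
        return [], list(range(len(det_boxes)))
    cost = np.array([[1.0 - iou(p, d) for d in det_boxes] for p in prev_boxes])
    rows, cols = linear_sum_assignment(cost)              # Hungarian algorithm
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= 1.0 - iou_thresh]
    unmatched = set(range(len(det_boxes))) - {c for _, c in matches}
    return matches, sorted(unmatched)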
The Scheduler Network in DorT
Algorithm 1 The Detect or Track (DorT) Framework
Input: A sequence of video frames F = {f_1, f_2, ..., f_N}.
Output: Bounding boxes B = {b_1, b_2, ..., b_M} with ID, where b_i = {rect_i, fid_i, score_i, id_i}.
1: B ← {}
2: t ← 1   (t is the index of the keyframe)
3: Detect f_1 with the single-frame detector.
4: Assign new ID to the detected boxes.
5: Add the detected boxes in f_1 to B.
6: for i ← 2 to N do
7:   d ← Scheduler(f_t, f_i)   (decision of the scheduler)
8:   if d = detect then
9:     Detect f_i with the single-frame detector.
10:    Match boxes in f_i and f_{i-1} using the Hungarian algorithm.
11:    Assign new ID to unmatched boxes in f_i.
12:    Assign corresponding ID to matched boxes in f_i.
Figure 3: Scheduler network. The output feature map of the correlation layer is followed by two convolutional layers and a fully-connected layer with a 2-way softmax. As discussed later, this structure is a generalization of the SiamFC tracker.
The scheduler network in DorT aims to determine to detect or track given a new frame by estimating the quality of the tracked boxes. It should be efficient itself. Rather than training a network from scratch, we propose to reuse part of the tracking network. Firstly, the l-th layer convolutional feature maps of frame t and frame t+τ, denoted respectively as x_l^t and x_l^{t+τ}, are fed into a correlation layer which performs point-wise feature comparison
x_corr^{t,t+τ}(i, j, p, q) = <x_l^t(i, j), x_l^{t+τ}(i+p, j+q)>    (1)
where −d ≤ p ≤ d and −d ≤ q ≤ d are offsets to compare features in a neighbourhood around the locations (i, j) in the feature map, defined by the maximum displacement d. Hence, the output of the correlation layer is a feature map of size x_corr ∈ R^{H_l × W_l × (2d+1)^2}, where H_l and W_l denote respectively the height and width of the l-th layer feature map. The correlation feature map x_corr is then passed through two convolutional layers and a fully-connected layer with a 2-way softmax. The final output of the network is a classification score indicating the probability to detect the current frame. Figure 3 is an illustration of the scheduler network.
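To make Eq. (1) concrete, the following numpy sketch computes the correlation feature map from two C x H x W feature maps; the maximum displacement d and the zero-padding at the borders are illustrative choices rather than the paper's exact settings.

import numpy as np

def correlation_layer(x_key, x_cur, d=4):
    # x_key, x_cur: C x H x W feature maps of the keyframe and the current frame.
    # Returns an H x W x (2d+1)^2 correlation map, as in Eq. (1).
    C, H, W = x_key.shape
    padded = np.pad(x_cur, ((0, 0), (d, d), (d, d)))      # zero-pad spatial dims
    out = np.zeros((H, W, (2 * d + 1) ** 2))
    for i in range(H):
        for j in range(W):
            k = 0
            for p in range(-d, d + 1):
                for q in range(-d, d + 1):
                    # inner product <x_key(i, j), x_cur(i + p, j + q)>
                    out[i, j, k] = np.dot(x_key[:, i, j], padded[:, i + d + p, j + d + q])
                    k += 1
    return out   # then two conv layers + FC + 2-way softmax in the scheduler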
Training Data Preparation
Existing groundtruth in the ImageNet VID dataset (Russakovsky et al. 2015) does not contain an indicator of the tracking quality. In this paper, we simulate the tracking process between two sampled frames and label it as detect (0) or track (1) in a strict protocol.
As we have sampled frame t and frame t+τ from the same sequence, we track all the groundtruth bounding boxes using SiamFC from frame t to frame t+τ. If all the groundtruth boxes in frame t+τ are matched with the tracked boxes (e.g., IOU over 0.8), the frame is labeled as track; otherwise, it is labeled as detect. Any emerging or disappearing object indicates a detect. Several examples are shown in Figure 4.
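A small sketch of this labelling protocol, reusing the iou helper from the association sketch above: a frame pair is labelled track only if every groundtruth box survives simulated tracking with IoU above the threshold and no object appears or disappears. The dictionary-based interface and the track_fn placeholder are illustrative.

def label_frame_pair(gt_boxes_t, gt_boxes_t_tau, track_fn, iou_thresh=0.8):
    # gt_boxes_*: dict mapping object ID -> box; track_fn simulates tracking one box
    # from frame t to frame t+tau (e.g. with SiamFC). Returns 'track' or 'detect'.
    if set(gt_boxes_t) != set(gt_boxes_t_tau):
        return 'detect'                          # emerging or disappearing objects
    for obj_id, box_t in gt_boxes_t.items():
        tracked = track_fn(box_t)                # simulated tracking result in frame t+tau
        if iou(tracked, gt_boxes_t_tau[obj_id]) <= iou_thresh:
            return 'detect'                      # at least one groundtruth box is lost
    return 'track'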
We have also tried to learn a scheduler for each tracker, but found it difficult to handle high-confidence false detections and non-trivial to merge the decisions of all the trackers. In contrast, the proposed approach to learning a single scheduler is an elegant solution which directly learns the decision rather than an auxiliary target such as the fraction of pixels at which the semantic segmentation labels differ (Li, Shi, and Lin 2018), or the fraction of low-quality flow estimation (Zhu et al. 2018).
Relation to the SiamFC Tracker
The proposed scheduler network can be seen as a generalization of the original SiamFC (Bertinetto et al. 2016). In the correlation layer of SiamFC, the target feature (6 × 6 × 128) is convolved with the search region feature (22 × 22 × 128) and obtains the response map (17 × 17 × 1, which can be equivalently written as 1 × 1 × 17^2). Similarly, we can view the correlation layer of the proposed scheduler network (see Eq. 1) as convolutions between multiple target features in the keyframe and their respective nearby search regions in the current frame. The size of a target equals the receptive field of the input feature map of our scheduler. Figure 5 shows several examples of targets. Actually, however, targets include all possible patches in a sliding window manner.
In this sense, the output feature map of the correlation layer x_corr ∈ R^{H_l × W_l × (2d+1)^2} can be regarded as a set of H_l × W_l SiamFC tracking tasks, where the response map of each is 1 × 1 × (2d+1)^2. The correlation feature map is then fed into a small CNN consisting of two convolutional layers and a fully-connected layer.
In summary, the generalization of the proposed scheduler network over SiamFC is two-fold:
• SiamFC correlates a target feature with its nearby search region, while our scheduler extends the number of tasks from one to many.
• SiamFC directly picks the highest value in the correlation feature map as the result, whereas the proposed scheduler fuses the multiple response maps with a CNN.
The validity of the proposed scheduler network is hence clear - it first convolves patches in frame t (examples shown in Figure 5) with their respective nearby regions in frame t+τ, and then fuses the response maps with a CNN, in order to measure the difference between the two frames, and more importantly, to assess the tracking quality. The scheduler is also resistant to small perturbations by inheriting SiamFC's robustness to object deformation.
Relation to Reinforcement Learning
The sequential decision problem can also be formulated in a RL framework, where the action, state, state transition function and reward need to be defined.
Figure 5: The size of a target equals the receptive field of the input feature map of the scheduler. As shown, a target patch might be an object, a part of an object, or totally background. The "tracking" results of these targets will be fused later. It should be noted that targets include all possible patches in a sliding window manner, but not just the three boxes shown above.
Action. The action space A contains two types of actions: {detect, track}. If the decision is detect, object detector is applied to the current frame; otherwise, boxes tracked from the keyframe are taken as the results.
State. The state s_{t,τ} is defined as a tuple (x_l^t, x_l^{t+τ}), where x_l^t and x_l^{t+τ} denote the l-th layer convolutional feature maps of frame t and frame t+τ, respectively. Frame t is the keyframe on which the object detector is applied, and frame t+τ is the current frame on which actions are to be determined.
State transition function. After the decision of action a_{t,τ} in state s_{t,τ}, the next state is obtained upon the action:
• detect. The next state is s_{t+τ,1} = (x_l^{t+τ}, x_l^{t+τ+1}). Frame t+τ is fed to the object detector and is set as the new keyframe.
• track. The next state is s_{t,τ+1} = (x_l^t, x_l^{t+τ+1}). Bounding boxes tracked from the keyframe are taken as the results in frame t+τ. The keyframe t remains unchanged.
As shown above, no matter whether the keyframe is t or t+τ, the task in the next state is to determine the action in frame t+τ+1.
Reward. The reward function is defined as r(s, a) since it is determined by both the state s and the action a. As illustrated in Figure 4, a labeling mechanism is proposed to obtain the groundtruth label of the tracking quality between two frames (i.e., a certain state s). We denote the groundtruth label as GT (s), which is either detect or track. Hence, the reward function can be defined as follows:
r(s, a) = 1 if GT(s) = a, and 0 otherwise    (2)
which is based on the consistency between the groundtruth label and the action taken. After defining all the above, the RL problem can be solved via a deep Q network (DQN) (Mnih et al. 2015) with a discount factor γ, penalizing the reward from future time steps. However, training stability is always an issue in RL algorithms (Anschel, Baram, and Shimkin 2017). In this paper, we set γ = 0 such that the agent only cares about the reward from the next time step. Therefore, the DQN becomes a regression network -pushing the predicted action to be the same as the GT action, and the scheduler network is a special case of RL. We empirically observe that the training procedure becomes easier and more stable by setting γ = 0.
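The practical consequence of Eq. (2) with γ = 0 can be written in a few lines; this is a sketch of our reading of the text, not the authors' code: with a zero discount the Q-learning target reduces to agreeing with the groundtruth label of the current state, i.e. ordinary 2-way classification.

def reward(gt_label, action):
    # gt_label, action in {'detect', 'track'}; Eq. (2): 1 on agreement, 0 otherwise.
    return 1.0 if gt_label == action else 0.0

# With discount gamma = 0, the target for the scheduler on a state s is simply
# reward(GT(s), a) for each action a, so the scheduler can be trained with a
# standard cross-entropy loss on the detect/track labels produced by the protocol above.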
Experiments
The DorT framework is evaluated on the ImageNet VID dataset (Russakovsky et al. 2015) in the task of video object detection/tracking. For completeness, we also report results in video object detection.
Experimental Setup
Dataset description. All experiments are conducted on the ImageNet VID dataset (Russakovsky et al. 2015). ImageNet VID is split into a training set of 3862 videos and a test set of 555 videos. There are per-frame bounding box annotations for each video. Furthermore, the presences of a certain target across different frames in a video are assigned the same ID.
Evaluation metric. The evaluation metric for video object detection is the extensively used mean average precision (mAP), which is based on a sorted list of bounding boxes in descending order of their scores. A predicted bounding box is considered correct if its IOU with a groundtruth box is over a threshold (e.g., 0.5).
In contrast to the standard mAP which is based on bounding boxes, the mAP for video object detection/tracking is based on a sorted list of tracklets (Russakovsky et al. 2017). A tracklet is a set of bounding boxes with the same ID. Similarly, a tracklet is considered correct if its IOU with a groundtruth tracklet is over a threshold. Typical choices of IOU thresholds for tracklet matching and per-frame bounding box matching are both 0.5. The score of a tracklet is the average score of all its bounding boxes.
Implementation details. Following the settings in (Zhu et al. 2017b), R-FCN (Dai et al. 2016) is trained with a ResNet-101 backbone (He et al. 2016) on the training set.
SiamFC is trained following the original paper (Bertinetto et al. 2016). Instead of training from scratch, however, we initialize the first four convolutional layers with the pretrained parameters from AlexNet (Krizhevsky, Sutskever, and Hinton 2012) and change Conv5 from 3 × 3 to 1 × 1 with the Xavier initializer. Parameters of the first four convolutional layers are fixed during training (He et al. 2018). We only search for one scale and discard the upsampling step in the original SiamFC for efficiency. All images being fed into SiamFC are resized to 300 × 500. Moreover, the confidence score of a tracked box (for evaluation) is equal to its corresponding detected box in the keyframe.
The scheduler network takes as input the Conv5 feature of our trained SiamFC. The SGD optimizer is adopted with a learning rate 1e-2, momentum 0.9 and weight decay 5e-4. The batch size is set to 32. During testing, we raise the decision threshold of track to δ = 0.97 (i.e., the scheduler outputs track if the predicted confidence of track is over δ) to ensure conservativeness of the scheduler. Furthermore, since nearby frames look similar, the scheduler is applied every σ frames (where σ is a tunable parameter) to reduce unnecessary computation.
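Putting the pieces together, below is a hedged sketch of the test-time loop implied by Algorithm 1 and the settings above. The detector, tracker and scheduler arguments are placeholders for R-FCN, the RoI-convolution SiamFC tracker and the scheduler network, and the exact cadence at which the scheduler is queried (every σ frames since the last keyframe) is our interpretation.

def dort_loop(frames, detector, tracker, scheduler, delta=0.97, sigma=10):
    results = []
    key_idx = 0
    key_boxes = detector(frames[0])                    # the first frame is always detected
    results.append(key_boxes)
    for i in range(1, len(frames)):
        # Query the scheduler only every sigma-th frame; in between, keep tracking.
        query = (i - key_idx) % sigma == 0
        if query and scheduler(frames[key_idx], frames[i]) <= delta:
            boxes = detector(frames[i])                # 'detect': frame i becomes the keyframe
            # associate(results[-1], boxes) would assign IDs here (see the sketch above)
            key_idx, key_boxes = i, boxes
        else:
            boxes = tracker(frames[key_idx], frames[i], key_boxes)   # 'track' from the keyframe
        results.append(boxes)
    return results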
All experiments are conducted on a workstation with an Intel Core i7-4790k CPU and a Titan X GPU. We empirically observe that the detection network and the tracking/scheduler network run at 8.33 fps and 100 fps, respectively. This is because the ResNet-101 backbone is much heavier than AlexNet. Moreover, the speed of the Hungarian algorithm is as high as 667 fps.
Video Object Detection/Tracking
To our knowledge, the most closely related work to ours is (Lan et al. 2016), which handles cost-effective face detection/tracking. Since faces are much easier to track and exhibit less deformation, that paper achieves success by utilizing non-deep learning-based detectors and trackers. However, we aim at general object detection/tracking in video, which is much more challenging. We demonstrate the effectiveness of the proposed DorT framework against several strong baselines.
Effectiveness of scheduler. The scheduler network is a core component of our DorT framework. Since SiamFC tracking is more efficient than R-FCN detection, the scheduler should predict track when it is safe for the trackers and be conservative enough to predict detect when there is sufficient change to avoid track drift. We compare our DorT framework with a frame skipping baseline, namely a "fixed scheduler" -R-FCN is performed every σ frames and SiamFC is adopted to track for the frames in between. As aforementioned, our scheduler can also be applied every σ frames to improve efficiency. Moreover, there could be an oracle scheduler -predicting the groundtruth label (detect or track) as shown in Figure 4 during testing. The oracle scheduler is a 100% accurate scheduler in our setting. The results are shown in Figure 6.
We can observe that the frame rate and mAP vary as σ changes. Interestingly, the curves are not monotonic -as the frame rate decreases, the accuracy in mAP is not necessarily higher. In particular, detectors are applied frequently when σ = 1 (the leftmost point of each curve). Associating boxes using the Hungarian algorithm is generally less reliable (given missed detections and false detections) than tracking boxes between two frames. It is also a benefit of the scheduler network -applying tracking only when confident, and thus most boxes are reliably associated. Hence, the curve of the scheduler network is on the upper-right side of that of the fixed scheduler as shown in Figure 6.
However, it can be also observed that there is certain distance between the curve of the scheduler network and that of the oracle scheduler. Given that the oracle scheduler is a 100% accurate classifier, we analyze the classification accuracy of the scheduler network in Figure 7. Let us take the σ = 10 case as an example. Although the classification accuracy is only 32.3%, the false positive rate (i.e., misclassifying a detect case as track) is as low as 1.9%. Because we empirically find that the mAP drops drastically if the scheduler mistakenly predicts track, our scheduler network is made conservative - track only when confident and detect if unsure. Figure 8 shows some qualitative results.
Figure 8: Red, blue and green boxes denote groundtruth, detected boxes and tracked boxes, respectively. The first row: R-FCN is applied in the keyframe. The second row: the scheduler determines to track since it is confident. The third row: the scheduler predicts to track in the first image although the red panda moves; however, the scheduler determines to detect in the second image as the cat moves significantly and is unable to be tracked.
Effectiveness of RoI convolution. Trackers are optimized for the crop-and-resize case (Bertinetto et al. 2016) -the target and search region are cropped and resized to a fixed size before matching. It is a nice choice since the tracking algorithm is not affected by the original size of the target. It is, however, slow in multi-box case and we propose RoI convolution as an efficient approximation. As shown in Figure 6, crop-and-resize SiamFC is even slower than detection -the overall running time is 3 fps. Notably, its mAP is 56.5%, which is roughly the same as that of our DorT framework empowered with RoI convolution. Our DorT framework, however, runs at 54 fps when σ = 10. RoI convolution obtains over 10x speed boost while retaining mAP.
Comparison with existing methods. Deep feature flow (Zhu et al. 2017b) focuses on video object detection without tracking. We can, however, associate its predicted bounding boxes with per frame data association using the Hungarian algorithm. The results are shown in Figure 6. It can be observed that our framework performs significantly better than deep feature flow in video object detection/tracking.
Concurrent works that deal with video object detection/tracking are the submitted entries in ILSVRC 2017 (Wei et al. 2017; Russakovsky et al. 2017). As discussed in the Related Work section, these methods aim only to improve the mAP by adopting complicated methods and post-processing, leading to inefficient solutions without guaranteeing low latency. Their reported results on the test set range from 51% to 65% mAP. Our proposed DorT, notably, achieves 57% mAP on the validation set, which is comparable to the existing methods in magnitude, but is much more principled and efficient.
Video Object Detection
We also evaluate our DorT framework in video object detection for completeness, by removing the predicted object ID. Our DorT framework is compared against deep feature flow (Zhu et al. 2017b), D&T (Feichtenhofer, Pinz, and Zisserman 2017), high performance video object detection (VOD) (Zhu et al. 2018) and ST-Lattice (Chen et al. 2018). The results are shown in Figure 9. It can be observed that D&T and high performance VOD manage to achieve a speed-accuracy balance. They obtain higher results but cannot fit into realtime (over 30 fps) scenarios. ST-Lattice, although being fast and accurate, adopts detection results in future frames and is thus not suitable in a low latency scenario. As compared with deep feature flow, our DorT framework performs significantly faster with comparable performance (no more than 1% mAP loss). Although our aim is not the video object detection task, the results in Figure 9 demonstrate the effectiveness of our approach.
Conclusion and Future Work
We propose a DorT framework for cost-effective video object detection/tracking, which is in realtime and with low latency. Object detection/tracking of a video sequence is formulated as a sequential decision problem in the framework. Notably, a light-weight but effective scheduler network is proposed, which is shown to be a generalization of Siamese trackers and a special case of RL. The DorT framework turns out to be effective and strikes a good balance between speed and accuracy. The framework can still be improved in several aspects.
The SiamFC tracker can search for multiple scales to improve performance as in the original paper. More advanced data association methods can be applied by resorting to the state-of-the-art MOT algorithms. Furthermore, there is room to improve the training of the scheduler network to approach the oracle scheduler. These are left as future work.
| 5,204 |
1811.05416
|
2900965757
|
Action monitoring in a home environment provides important information for health monitoring and may serve as input into a smart home environment. Visual analysis using cameras can recognise actions in a complex scene, such as someone's living room. However, although there are huge potential benefits and importance, specifically for health, cameras are not widely accepted because of privacy concerns. This paper recognises human activities using a sensor that retains privacy. The sensor is not only different by being thermal, but it is also of low resolution: 8x8 pixels. The combination of the thermal imaging and the low spatial resolution ensures the privacy of individuals. We present an approach to recognise daily activities using this sensor based on a discrete cosine transform. We evaluate the proposed method on a state-of-the-art dataset and experimentally confirm that our approach outperforms the baseline method. We also introduce a new dataset, and evaluate the method on it. Here we show that the sensor is considered better at detecting the occurrence of falls and Activities of Daily Living. Our method achieves an overall accuracy of 87.50% across 7 activities with a fall detection sensitivity of 100% and specificity of 99.21%.
|
An infrared sensor array is a device composed of a small number of discrete infrared sensors. It represents the spatial distribution of temperature as a low-resolution image. Unlike colour cameras, infrared sensor arrays only capture the shape of the human body, therefore making individual identification harder. Additionally, the low spatial resolution also makes identification of individuals difficult. As this is more comfortable for users, it becomes more acceptable for installation in residential environments. Such infrared sensor arrays can be applied in many scenarios. A 4x4 sensor array has been used to recognise hand motion directions @cite_5 , although the extremely low resolution of this sensor renders it unsuitable for more complex visual tasks. An 8x8 pixel sensor array has been successfully used to detect, count and track people indoors @cite_6 . Human movements have also been inferred using the subject's location and moving trajectory with a 16x16 sensor array @cite_10 . Most recently, a multi-sensor system has been designed for human movement detection and activity recognition @cite_1 , which our method will be compared against.
|
{
"abstract": [
"",
"In this paper we present our work towards a hand gesture recognition system realised with a passive thermal infrared sensor array. In contrast with the majority of recent research activities into gesture recognition, which focus on the complex analysis of video sequences, our approach shows that the functionality of a simple pyroelectric movement sensor can be expanded to detect differing hand gestures at short range. We show that blob detection from a hand waving over a 16 element passive infrared sensor array provides sufficient information to discriminate four directions of hand stroke. This sensor system is unique and lends itself to low cost, low profile and low power applications. Keywords-touchless input device; dynamic hand gesture; infrared motion sensor; infrared sensor array; pyroelectricity;",
"We propose a human body tracking method using a far-infrared sensor array. A far-infrared sensor array captures the spatial distribution of temperature as a low-resolution image. Since it is difficult to identify a person from the low-resolution thermal image, we can avoid privacy issues. Therefore, it is expected to be applied for the analysis of human behaviors in various places. However, it is difficult to accurately track humans because of the lack of information sufficient to describe the feature of the target human body in the low-resolution thermal image. In order to solve this problem, we propose a thermo-spatial sensitive histogram suitable to represent the target in the low-resolution thermal image. Unlike the conventional histograms, in case of the thermo-spatial sensitive histogram, a voting value is weighted depending on the distance to the target’s position and the difference from the target’s temperature. This histogram allows the accurate tracking by representing the target with multiple histograms and reducing the influence of the background pixels. Based on this histogram, the proposed method tracks humans robustly to occlusions, pose variations, and background clutters. We demonstrate the effectiveness of the method through an experiment using various image sequences.",
"Low Resolution Thermal Array Sensors are widely used in several applications in indoor environments. In particular, one of these cheap, small and unobtrusive sensors provides a low-resolution thermal image of the environment and, unlike cameras; it is capable to detect human heat emission even in dark rooms. The obtained thermal data can be used to monitor older seniors while they are performing daily activities at home, to detect critical situations such as falls. Most of the studies in activity recognition using Thermal Array Sensors require human detection techniques to recognize humans passing in the sensor field of view. This paper aims to improve the accuracy of the algorithms used so far by considering the temperature environment variation. This method leverages an adaptive background estimation and a noise removal technique based on Kalman Filter. In order to properly validate the system, a novel installation of a single sensor has been implemented in a smart environment: the obtained results show an improvement in human detection accuracy with respect to the state of the art, especially in case of disturbed environments."
],
"cite_N": [
"@cite_1",
"@cite_5",
"@cite_10",
"@cite_6"
],
"mid": [
"2899374650",
"2287367556",
"957840212",
"2601917592"
]
}
|
Home Activity Monitoring using Low Resolution Infrared Sensor Array
|
The U.K., like many other countries, is facing an explosion of long term health conditions. In England alone, 15.4 million people have at least one chronic medical condition, such as, dementia, stroke, cardiovascular or musculoskeletal disease [3]. In such cases, continuous management and medical treatment may be required for many years outside of hospital. Currently this requires 70% of the total National Health Services (NHS) budget [7]. The huge costs involved could eventually make the NHS unsustainable unless a better solution can be found to reduce costs, while also giving those with long-term conditions better care and an improved quality of life.
For these reasons, developing a reliable home monitoring system has drawn much attention in recent years due to the growing demands for improved health-care and patient well-being. Current home monitoring systems often include environmental sensors, wearable inertial sensors and visual sensors. Such systems can enable various types of applications in healthcare provision, such as to help diagnose and manage health and well-being conditions [17].
Wearable sensor based techniques have emerged over recent years with a focus on coarse categorisations of activity that offer low cost, low energy consumption, and data simplicity [10]. Among these, tri-axial accelerometers are the most broadly used inertial sensors to recognise ambulation activities [13]. However, despite rapid developments in wearable sensor technology, issues surrounding missed communications, limited battery life, irregular wearing, and poor comfort remain problematic.
Contactless sensors, such as wireless passive sensing systems and visual sensors, have the potential to address several limitations of the wearable sensors. Because of the ubiquitous availability of Wi-Fi signals in indoor areas, wireless signals have been exploited to detect human movement [12] -that is, when a person engages in an activity, the body movement affects the wireless signals. This technology has shown capabilities in assisted living and residential care [11]. However, Wi-Fi sensing systems still suffer from low accuracy, single-user capability and signal source dependency problems. On the other hand, visual sensors can capture rich data and multiple events simultaneously. Recent advances in computer vision allow for a fine-grained analysis of human activity that have now opened up the possibility of integrating these devices seamlessly into home monitoring systems [16]. However, visual sensors have not been widely integrated. This is largely due to the ongoing privacy issue.
With this in mind, in this paper, we propose a home activity monitoring system using an 8 × 8 infrared sensor array. The sensor provides 64 pixels of thermal data and can be used to offer coarse-grained activity recognition while, importantly, preserving privacy for the users. This is a new form of sensor applied in the home monitoring system context, where only limited publicly available datasets are available. We evaluate our method on the Coventry-Activity dataset [8], and compare against a baseline method. We also introduce a new dataset, Infra-ADL2018, for monitoring activities of daily living and to detect the occurrence of falls. The dataset contains 7 daily activities performed by 8 subjects. The infrared sensor itself is mounted on the ceiling to give an overhead view. In summary, the major contributions of this paper are: (a) testing the ability of a low resolution thermal sensor to recognise basic human daily activities, (b) exploration of a low resolution thermal sensor in a healthcare scenario to preserve user privacy, and (c) demonstration that the proposed method is able to achieve high recognition results, especially for detecting fall events.
Proposed method
We propose a home activity monitoring pipeline to recognise basic daily activities using thermal images captured by an infrared sensor array. The pipeline of the proposed method is shown in Figure 1. Given a set of raw thermal images, background subtraction is applied first to reduce the effect of noise in background pixels, and then the sequences are re-sampled to the same length. Discrete Cosine Transform (DCT) based temporal and spatial features are extracted from each sequence. Using these features, the proposed method is able to classify activities and detect falls robustly and accurately.
Sequence pre-processing
The raw data collected by the infrared sensor array is noisy, especially in the background pixels. A sequence of background scene images with F frames is taken without human subjects in it. The background image B_i is formed by averaging the i-th pixel along the sequence,
B_i = (1/F) Σ_{f=1}^{F} B_i^f.
The processed background-subtracted image is obtained by subtracting the corresponding pixel of the background image, P_i = P̂_i − B_i, where P_i and P̂_i are the processed and raw images, respectively. Figure 2 shows examples of the background subtraction with and without human subjects present in the image. Subtracting the average background accentuates the spatial energy of an individual when the individual is present and inhibits energy in the image when an individual is absent from the scene.
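A minimal numpy sketch of this pre-processing step is given below; the array shapes (F x 8 x 8) follow the sensor described in the paper, while the function names are illustrative.

import numpy as np

def background_image(empty_frames):
    # empty_frames: F x 8 x 8 thermal frames recorded with no person in the scene.
    # Per-pixel temporal mean, i.e. B_i = (1/F) * sum_f B_i^f.
    return empty_frames.mean(axis=0)

def subtract_background(raw_frames, background):
    # Processed frames P = P_hat - B for a raw sequence P_hat (broadcast over frames).
    return raw_frames - background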
The length of the sequences are different. This is because the duration of some actions are longer than others, e.g. sitting down is often a faster action to complete than walking across a room. It may have been possible to design the data collection to reduce this problem for analysis; perhaps to have a fixed time to record all actions, or to process the data differently i.e. crop all actions to be of the same length as the fastest action. This however, although important, is outside the scope of this paper; instead we are presenting a home monitoring approach using a low resolution sensor. The data has been pre-processed so that each action has an equal number of frames. This was achieved by sampling at equal intervals.
Temporal and spatial feature extraction
First, a one-dimensional DCT is performed on the time signals of a series of images, and the temporal feature vector is created by the DCT coefficients. The advantage of using the DCT is the ability to compactly represent an activity sequence using a fixed number of coefficients. Then, a two-dimensional DCT is performed for each image.
A temporal domain feature is used to represent the time series P_i = {P_i^1, P_i^2, ..., P_i^F} of the i-th pixel in the frequency domain by the 1-D DCT, where the feature takes the absolute value of the k lowest-frequency components of the frequency coefficients,
F_1D(P_i) = |X_{1:k} P_i|    (1)
where X is the discrete cosine transformation matrix. The temporal feature for all N pixels can be written as {F_1D(P_1), F_1D(P_2), ..., F_1D(P_N)}. For our experiments, we use the first 5 lowest-frequency components of the frequency coefficients.
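A sketch of the temporal feature follows: for each of the 64 pixels, the 1-D DCT of its time series is taken and the magnitudes of the k lowest-frequency coefficients are kept (k = 5 above). It uses scipy's DCT; the particular DCT type and normalisation are assumptions, as the paper does not specify them.

import numpy as np
from scipy.fftpack import dct

def temporal_dct_features(frames, k=5):
    # frames: F x 8 x 8 background-subtracted sequence -> feature vector of length 64 * k.
    F = frames.shape[0]
    series = frames.reshape(F, -1)                        # F x 64, one column per pixel
    coeffs = dct(series, type=2, norm='ortho', axis=0)    # 1-D DCT along time
    return np.abs(coeffs[:k, :]).T.reshape(-1)            # |X_{1:k} P_i| for each pixel i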
A two-dimensional DCT for each image is calculated to form spatial domain features. This results in a matrix of 8 × 8 coefficients. A subset of these values is taken to construct the feature vector, where the low-frequency components within the processed image are chosen. Similar to a 1-D DCT, the spatial DCT feature for each frame is,
F_2D(P^f) = |Y_{1:k,1:k} P^f|    (2)
where Y is the 2-D discrete cosine transformation matrix. The spatial feature for all F frames can be written as {F_2D(P^1), F_2D(P^2), ..., F_2D(P^F)}. In our experiment, we use a set of 3 × 3 coefficients located in the upper-left corner of the coefficient matrix. The activity is then inferred via a multi-class linear Support Vector Machine using the concatenation of the temporal and spatial features.
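The spatial feature and the final classification can be sketched in the same spirit, reusing temporal_dct_features from above: a 2-D DCT per frame keeps the 3 x 3 low-frequency corner, the two feature types are concatenated, and a linear SVM is trained on them. LinearSVC stands in for the paper's multi-class linear SVM; hyperparameters are left at their defaults as an assumption.

import numpy as np
from scipy.fftpack import dct
from sklearn.svm import LinearSVC

def spatial_dct_features(frames, k=3):
    # frames: F x 8 x 8 -> feature vector of length F * k * k.
    feats = []
    for frame in frames:
        c = dct(dct(frame, type=2, norm='ortho', axis=0), type=2, norm='ortho', axis=1)
        feats.append(np.abs(c[:k, :k]).reshape(-1))       # |Y_{1:k,1:k} P^f|
    return np.concatenate(feats)

def sequence_feature(frames):
    return np.concatenate([temporal_dct_features(frames), spatial_dct_features(frames)])

# Example usage (train_seqs: list of F x 8 x 8 sequences, train_labels: activity labels):
# clf = LinearSVC().fit([sequence_feature(s) for s in train_seqs], train_labels)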
Experimental results
In our experiments, all the data are collected from the Grid-EYE 8 × 8 infrared sensor array developed by Panasonic [1].
Datasets
The Coventry-Activity dataset [8] is designed for evaluation of activity recognition under a multi-sensor setting. The three sensors are placed 1.5 meters away from the subject as follows: in front, to the left and to the right. 8 activities are collected with one subject in the scene. The dataset contains 3 subjects performing each activity 10 times. We evaluate our activity recognition method on this dataset and compare against the baseline method [8]. The Infra-ADL2018 dataset is introduced in this paper for monitoring home activities and detecting the occurrence of falls. The dataset is generated over 24 sessions by 8 subjects, containing 7 activity categories per session: fall, sit still, stand still, sit to stand, stand to sit, walking from left to right, and walking from right to left.
Quantitative evaluation
We first compare the proposed method against the baseline method presented in [8] on the Coventry-Activity dataset. To follow the same setting, we perform 10-fold cross validation on the dataset, and test using each of the three sensors alone and also when fusing them together. Sensor 1 (S1), Sensor 2 (S2) and Sensor 3 (S3) are placed on the right side, in front and on the left side of the subject, respectively. The results are shown in Figure 3, where it can be seen that the proposed method significantly outperforms the baseline when S2 and S3 are used alone, and also when the three sensors are used together. Table 1 shows the average recognition accuracy for each activity when the three sensors are used together, with the best results for each activity highlighted. We note that the proposed method achieves high recognition accuracy across all activities, unlike the baseline method, which produces very low accuracy for some activities, e.g., 50% for move left to right and 57% for move forward and backward.
We then test the proposed method on our new dataset Infra-ADL2018. We perform a leave-one-subject-out cross validation where the final recognition results reported are averaged over all subjects to remove any bias. The confusion matrix of the activity recognition is shown in Figure 4. In general, the overall recognition rate of the proposed method is 87.50%. More precisely, it reaches 100% sensitivity for detecting the occurrence of a fall, and 99.21% specificity, indicating very few false fall alarms. The only action that is misclassified as a fall is stand to sit. This is likely to be because some subjects perform the action very fast, in which case the action looks similar to a fall. The most confused actions are sit still and stand still due to the sensor position of the system. The sensor is mounted on the ceiling, so that sit still and stand still look very similar in the thermal images, where the same few pixels show higher temperature throughout the sequence. The use of a multi-sensor setting is one way to discriminate between these by recognising the shape of the human when performing different actions.
Conclusion and Future work
In this paper, we proposed a home activity monitoring method using an infrared sensor array. The proposed method uses a temporal and spatial Discrete Cosine Transform that is suitable for representing human activity in very low-resolution thermal images. Given that the sensor would be used in home environments, potential future directions include a multi-sensor system that comprises multiple viewing angles that can deal with view-invariance and occlusion.
| 1,849 |
1811.05416
|
2900965757
|
Action monitoring in a home environment provides important information for health monitoring and may serve as input into a smart home environment. Visual analysis using cameras can recognise actions in a complex scene, such as someone's living room. However, although there are huge potential benefits and importance, specifically for health, cameras are not widely accepted because of privacy concerns. This paper recognises human activities using a sensor that retains privacy. The sensor is not only different by being thermal, but it is also of low resolution: 8x8 pixels. The combination of the thermal imaging and the low spatial resolution ensures the privacy of individuals. We present an approach to recognise daily activities using this sensor based on a discrete cosine transform. We evaluate the proposed method on a state-of-the-art dataset and experimentally confirm that our approach outperforms the baseline method. We also introduce a new dataset, and evaluate the method on it. Here we show that the sensor is considered better at detecting the occurrence of falls and Activities of Daily Living. Our method achieves an overall accuracy of 87.50% across 7 activities with a fall detection sensitivity of 100% and specificity of 99.21%.
|
The visual trace of human activity in video forms a spatio-temporal pattern. Here the salient features are well-developed for images captured by conventional visible-light RGB cameras @cite_0 . However, the majority of well developed features, such as histogram of oriented gradients or optical flow, are not appropriate and applicable for very low resolution images such as those captured in this study, i.e. for 8x8 pixel resolution images.
|
{
"abstract": [
"Human activity recognition is an important area of computer vision research. Its applications include surveillance systems, patient monitoring systems, and a variety of systems that involve interactions between persons and electronic devices such as human-computer interfaces. Most of these applications require an automated recognition of high-level activities, composed of multiple simple (or atomic) actions of persons. This article provides a detailed overview of various state-of-the-art research papers on human activity recognition. We discuss both the methodologies developed for simple human actions and those for high-level activities. An approach-based taxonomy is chosen that compares the advantages and limitations of each approach. Recognition methodologies for an analysis of the simple actions of a single person are first presented in the article. Space-time volume approaches and sequential approaches that represent and recognize activities directly from input images are discussed. Next, hierarchical recognition methodologies for high-level activities are presented and compared. Statistical approaches, syntactic approaches, and description-based approaches for hierarchical recognition are discussed in the article. In addition, we further discuss the papers on the recognition of human-object interactions and group activities. Public datasets designed for the evaluation of the recognition methodologies are illustrated in our article as well, comparing the methodologies' performances. This review will provide the impetus for future research in more productive areas."
],
"cite_N": [
"@cite_0"
],
"mid": [
"1983705368"
]
}
|
Home Activity Monitoring using Low Resolution Infrared Sensor Array
|
The U.K., like many other countries, is facing an explosion of long term health conditions. In England alone, 15.4 million people have at least one chronic medical condition, such as, dementia, stroke, cardiovascular or musculoskeletal disease [3]. In such cases, continuous management and medical treatment may be required for many years outside of hospital. Currently this requires 70% of the total National Health Services (NHS) budget [7]. The huge costs involved could eventually make the NHS unsustainable unless a better solution can be found to reduce costs, while also giving those with long-term conditions better care and an improved quality of life.
For these reasons, developing a reliable home monitoring system has drawn much attention in recent years due to the growing demands for improved health-care and patient well-being. Current home monitoring systems often include environmental sensors, wearable inertial sensors and visual sensors. Such systems can enable various types of applications in healthcare provision, such as to help diagnose and manage health and well-being conditions [17].
Wearable sensor based techniques have emerged over recent years with a focus on coarse categorisations of activity that offer low cost, low energy consumption, and data simplicity [10]. Among these, tri-axial accelerometers are the most broadly used inertial sensors to recognise ambulation activities [13]. However, despite rapid developments in wearable sensor technology, issues surrounding missed communications, limited battery life, irregular wearing, and poor comfort remain problematic.
Contactless sensors, such as wireless passive sensing systems and visual sensors, have the potential to address several limitations of the wearable sensors. Because of the ubiquitous availability of Wi-Fi signals in indoor areas, wireless signals have been exploited to detect human movement [12] -that is, when a person engages in an activity, the body movement affects the wireless signals. This technology has shown capabilities in assisted living and residential care [11]. However, Wi-Fi sensing systems still suffer from low accuracy, single-user capability and signal source dependency problems. On the other hand, visual sensors can capture rich data and multiple events simultaneously. Recent advances in computer vision allow for a fine-grained analysis of human activity that have now opened up the possibility of integrating these devices seamlessly into home monitoring systems [16]. However, visual sensors have not been widely integrated. This is largely due to the ongoing privacy issue.
With this in mind, in this paper, we propose a home activity monitoring system using an 8 × 8 infrared sensor array. The sensor provides 64 pixels of thermal data and can be used to offer coarse-grained activity recognition while, importantly, preserving privacy for the users. This is a new form of sensor applied in the home monitoring system context, where only limited publicly available datasets are available. We evaluate our method on the Coventry-Activity dataset [8], and compare against a baseline method. We also introduce a new dataset, Infra-ADL2018, for monitoring activities of daily living and to detect the occurrence of falls. The dataset contains 7 daily activities performed by 8 subjects. The infrared sensor itself is mounted on the ceiling to give an overhead view. In summary, the major contributions of this paper are: (a) testing the ability of a low resolution thermal sensor to recognise basic human daily activities, (b) exploration of a low resolution thermal sensor in a healthcare scenario to preserve user privacy, and (c) demonstration that the proposed method is able to achieve high recognition results, especially for detecting fall events.
Proposed method
We propose a home activity monitoring pipeline to recognise basic daily activities using thermal images captured by an infrared sensor array. The pipeline of the proposed method is shown in Figure 1. Given a set of raw thermal images, background subtraction is applied first to reduce the effect of noise in background pixels, and then the sequences are re-sampled to the same length. Discrete Cosine Transform (DCT) based temporal and spatial features are extracted from each sequence. Using these features, the proposed method is able to classify activities and detect falls robustly and accurately.
Sequence pre-processing
The raw data collected by the infrared sensor array is noisy, especially in the background pixels. A sequence of background scene images with F frames is taken without human subjects in it. The background image B_i is formed by averaging the i-th pixel along the sequence,
B_i = (1/F) Σ_{f=1}^{F} B_i^f.
The processed background-subtracted image is obtained by subtracting the corresponding pixel of the background image, P_i = P̂_i − B_i, where P_i and P̂_i are the processed and raw images, respectively. Figure 2 shows examples of the background subtraction with and without human subjects present in the image. Subtracting the average background accentuates the spatial energy of an individual when the individual is present and inhibits energy in the image when an individual is absent from the scene.
The length of the sequences are different. This is because the duration of some actions are longer than others, e.g. sitting down is often a faster action to complete than walking across a room. It may have been possible to design the data collection to reduce this problem for analysis; perhaps to have a fixed time to record all actions, or to process the data differently i.e. crop all actions to be of the same length as the fastest action. This however, although important, is outside the scope of this paper; instead we are presenting a home monitoring approach using a low resolution sensor. The data has been pre-processed so that each action has an equal number of frames. This was achieved by sampling at equal intervals.
Temporal and spatial feature extraction
First, a one-dimensional DCT is performed on the time signals of a series of images, and the temporal feature vector is created by the DCT coefficients. The advantage of using the DCT is the ability to compactly represent an activity sequence using a fixed number of coefficients. Then, a two-dimensional DCT is performed for each image.
A temporal domain feature is used to represent the time series P_i = {P_i^1, P_i^2, ..., P_i^F} of the i-th pixel in the frequency domain by the 1-D DCT, where the feature takes the absolute value of the k lowest-frequency components of the frequency coefficients,
F_1D(P_i) = |X_{1:k} P_i|    (1)
where X is the discrete cosine transformation matrix. The temporal feature for all N pixels can be written as {F_1D(P_1), F_1D(P_2), ..., F_1D(P_N)}. For our experiments, we use the first 5 lowest-frequency components of the frequency coefficients.
A two-dimensional DCT for each image is calculated to form spatial domain features. This results in a matrix of 8 × 8 coefficients. A subset of these values is taken to construct the feature vector, where the low-frequency components within the processed image are chosen. Similar to the 1-D DCT, the spatial DCT feature for each frame is
$F_{2D}(P^f) = |Y_{1:k,1:k} P^f|$   (2)
where $Y$ is the 2-D discrete cosine transformation matrix. The spatial feature for all $F$ frames can be written as $\{F_{2D}(P^1), F_{2D}(P^2), \ldots, F_{2D}(P^F)\}$. In our experiment, we use the set of 3 × 3 coefficients located in the upper-left corner of the coefficient matrix. The activity is then inferred via a multi-class linear Support Vector Machine using the concatenation of the temporal and spatial features.
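The following sketch illustrates one plausible way to compute the spatial descriptor and train a multi-class linear SVM on top of it, assuming scipy.fft.dctn for the 2-D DCT and scikit-learn's LinearSVC; all names and the toy data are assumptions, and in the full pipeline the temporal descriptor from Equation (1) would be concatenated as well:

```python
import numpy as np
from scipy.fft import dctn
from sklearn.svm import LinearSVC

def spatial_dct_features(frames, k=3):
    """2-D DCT of each 8x8 frame; keep the k x k low-frequency block (absolute values)."""
    feats = [np.abs(dctn(frame, type=2, norm="ortho"))[:k, :k].ravel() for frame in frames]
    return np.concatenate(feats)  # length F * k * k

# Toy training set: 40 sequences of 60 frames each, with synthetic labels.
rng = np.random.default_rng(0)
sequences = rng.standard_normal((40, 60, 8, 8))
labels = rng.integers(0, 7, size=40)          # 7 activity classes

# In the full pipeline the temporal DCT vector would be concatenated here as well.
X = np.stack([spatial_dct_features(seq) for seq in sequences])
clf = LinearSVC(C=1.0, max_iter=5000).fit(X, labels)
print(clf.predict(X[:5]))
```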
Experimental results
In our experiments, all the data are collected from the Grid-EYE 8 × 8 infrared sensor array developed by Panasonic [1].
Datasets
The Coventry-Activity dataset [8] is designed for evaluation of activity recognition under a multi-sensor setting. The three sensors are placed 1.5 meters away from the subject as follows: in front, to the left and to the right. Eight activities are collected with one subject in the scene. The dataset contains 3 subjects performing each activity 10 times. We evaluate our activity recognition method on this dataset and compare against the baseline method [8]. The Infra-ADL2018 dataset is introduced in this paper for monitoring home activities and detecting the occurrence of falls. The dataset is generated over 24 sessions by 8 subjects, containing 7 activity categories per session: fall, sit still, stand still, sit to stand, stand to sit, walking from left to right, and walking from right to left.
Quantitative evaluation
We first compare the proposed method against the baseline method presented in [8] on the Coventry-Activity dataset. To follow the same setting, we perform 10-fold cross-validation on the dataset, and test using each of the three sensors alone and also when fusing them together. Sensor 1 (S1), Sensor 2 (S2) and Sensor 3 (S3) are placed on the right side, in front, and on the left side of the subject, respectively. The results are shown in Figure 3, where it can be seen that the proposed method significantly outperforms the baseline when S2 and S3 are used alone, and also when the three sensors are used together. Table 1 shows the average recognition accuracy for each activity when the three sensors are used together, with the best results for each activity highlighted. We note that the proposed method achieves high recognition accuracy throughout all activities, unlike the baseline method, which produces very low accuracy for some activities, e.g. 50% for move left to right and 57% for move forward and backward.
We then test the proposed method on our new dataset, Infra-ADL2018. We perform leave-one-subject-out cross-validation, where the final recognition results reported are averaged over all subjects to remove any bias. The confusion matrix of the activity recognition is shown in Figure 4. In general, the overall recognition rate of the proposed method is 87.50%. More precisely, it reaches 100% sensitivity for detecting the occurrence of a fall, and 99.21% specificity, indicating very few false fall alarms. The only action that is misclassified as a fall is stand to sit. This is likely to be because some subjects perform the action very fast, in which case the action looks similar to a fall. The most confused actions are sit still and stand still, due to the sensor position of the system. The sensor is mounted on the ceiling, so sit still and stand still look very similar in the thermal images, where the same few pixels show a higher temperature throughout the sequence. The use of a multi-sensor setting is one way to discriminate between these by recognising the shape of the human when performing different actions.
Conclusion and Future work
In this paper, we proposed a home activity monitoring method using an infrared sensor array. The proposed method uses a temporal and spatial Discrete Cosine Transform that is suitable for representing human activity in very low-resolution thermal images. Given that the sensor would be used in home environments, potential future directions include a multi-sensor system that comprises multiple viewing angles that can deal with view-invariance and occlusion.
| 1,849 |
1811.05416
|
2900965757
|
Action monitoring in a home environment provides important information for health monitoring and may serve as input into a smart home environment. Visual analysis using cameras can recognise actions in a complex scene, such as someone's living room. However, despite the huge potential benefits and importance, specifically for health, cameras are not widely accepted because of privacy concerns. This paper recognises human activities using a sensor that retains privacy. The sensor is not only different by being thermal, but it is also of low resolution: 8x8 pixels. The combination of thermal imaging and low spatial resolution ensures the privacy of individuals. We present an approach to recognise daily activities using this sensor based on a discrete cosine transform. We evaluate the proposed method on a state-of-the-art dataset and experimentally confirm that our approach outperforms the baseline method. We also introduce a new dataset and evaluate the method on it. Here we show that the sensor is well suited to detecting the occurrence of falls and Activities of Daily Living. Our method achieves an overall accuracy of 87.50% across 7 activities, with a fall detection sensitivity of 100% and specificity of 99.21%.
|
Several features have been investigated specifically for low resolution infrared sensors, most notably @cite_15 . Here, connected component analysis was used to evaluate the number of individuals present in the scene, which subsequently led to motion tracking of the individuals; however this method was sensitive to background noise. A thermo-spatial sensitive histogram feature approach was able to reduce the noise from background pixels @cite_10 . Although counting and tracking of individuals is a non-trivial task, here we are concerned with the activity of each individual. Intuitively, this would appear to require finer detail, and this poses a difficult task given the low spatial resolution of the image.
|
{
"abstract": [
"Indoor tracking has all-pervasive applications beyond mere surveillance, for example in education, health monitoring, marketing, energy management and so on. Image and video based tracking systems are intrusive. Thermal array sensors on the other hand can provide coarse-grained tracking while preserving privacy of the subjects. The goal of the project is to facilitate motion detection and group proxemics modeling using an 8 x 8 infrared sensor array. Each of the 8 x 8 pixels is a temperature reading in Fahrenheit. We refer to each 8 x 8 matrix as a scene. We collected approximately 902 scenes with different configurations of human groups and different walking directions. We infer direction of motion of a subject across a set of scenes as left-to-right, right-to-left, up-to-down and down-to-up using cross-correlation analysis. We used features from connected component analysis of each background subtracted scene and performed Support Vector Machine classification to estimate number of instances of human subjects in the scene.",
"We propose a human body tracking method using a far-infrared sensor array. A far-infrared sensor array captures the spatial distribution of temperature as a low-resolution image. Since it is difficult to identify a person from the low-resolution thermal image, we can avoid privacy issues. Therefore, it is expected to be applied for the analysis of human behaviors in various places. However, it is difficult to accurately track humans because of the lack of information sufficient to describe the feature of the target human body in the low-resolution thermal image. In order to solve this problem, we propose a thermo-spatial sensitive histogram suitable to represent the target in the low-resolution thermal image. Unlike the conventional histograms, in case of the thermo-spatial sensitive histogram, a voting value is weighted depending on the distance to the target’s position and the difference from the target’s temperature. This histogram allows the accurate tracking by representing the target with multiple histograms and reducing the influence of the background pixels. Based on this histogram, the proposed method tracks humans robustly to occlusions, pose variations, and background clutters. We demonstrate the effectiveness of the method through an experiment using various image sequences."
],
"cite_N": [
"@cite_15",
"@cite_10"
],
"mid": [
"2180097222",
"957840212"
]
}
|
Home Activity Monitoring using Low Resolution Infrared Sensor Array
|
The U.K., like many other countries, is facing an explosion of long-term health conditions. In England alone, 15.4 million people have at least one chronic medical condition, such as dementia, stroke, cardiovascular or musculoskeletal disease [3]. In such cases, continuous management and medical treatment may be required for many years outside of hospital. Currently this requires 70% of the total National Health Service (NHS) budget [7]. The huge costs involved could eventually make the NHS unsustainable unless a better solution can be found to reduce costs, while also giving those with long-term conditions better care and an improved quality of life.
For these reasons, developing a reliable home monitoring system has drawn much attention in recent years due to the growing demands for improved health-care and patient well-being. Current home monitoring systems often include environmental sensors, wearable inertial sensors and visual sensors. Such systems can enable various types of applications in healthcare provision, such as to help diagnose and manage health and well-being conditions [17].
Wearable sensor based techniques have emerged over recent years with a focus on coarse categorisations of activity that offer low cost, low energy consumption, and data simplicity [10]. Among these, tri-axial accelerometers are the most broadly used inertial sensors to recognise ambulation activities [13]. However, despite rapid developments in wearable sensor technology, issues surrounding missed communications, limited battery life, irregular wearing, and poor comfort remain problematic.
Contactless sensors, such as wireless passive sensing systems and visual sensors, have the potential to address several limitations of wearable sensors. Because of the ubiquitous availability of Wi-Fi signals in indoor areas, wireless signals have been exploited to detect human movement [12]; that is, when a person engages in an activity, the body movement affects the wireless signals. This technology has shown capabilities in assisted living and residential care [11]. However, Wi-Fi sensing systems still suffer from low accuracy, single-user capability and signal-source dependency problems. On the other hand, visual sensors can capture rich data and multiple events simultaneously. Recent advances in computer vision allow for a fine-grained analysis of human activity and have now opened up the possibility of integrating these devices seamlessly into home monitoring systems [16]. However, visual sensors have not been widely integrated, largely due to ongoing privacy concerns.
With this in mind, in this paper we propose a home activity monitoring system using an 8 × 8 infrared sensor array. The sensor provides 64 pixels of thermal data and can be used to offer coarse-grained activity recognition while, importantly, preserving privacy for the users. This is a new form of sensor in the home monitoring context, for which only limited publicly available datasets exist. We evaluate our method on the Coventry-Activity dataset [8], and compare against a baseline method. We also introduce a new dataset, Infra-ADL2018, for monitoring activities of daily living and detecting the occurrence of falls. The dataset contains 7 daily activities performed by 8 subjects. The infrared sensor itself is mounted on the ceiling to give an overhead view. In summary, the major contributions of this paper are: (a) testing the ability of a low-resolution thermal sensor to recognise basic human daily activities, (b) exploring a low-resolution thermal sensor in a healthcare scenario to preserve user privacy, and (c) demonstrating that the proposed method is able to achieve high recognition results, especially for detecting fall events.
Proposed method
We propose a home activity monitoring pipeline to recognise basic daily activities using thermal images captured by an infrared sensor array. The pipeline of the proposed method is shown in Figure 1. Given a set of raw thermal images, background subtraction is applied first to reduce the effect of noise in background pixels, and then the sequences are re-sampled to the same length. Discrete Cosine Transform (DCT) based temporal and spatial features are extracted from each sequence. Using these features, the proposed method is able to classify activities and detect falls robustly and accurately.
Sequence pre-processing
The raw data collected by the infrared sensor array is noisy, especially in the background pixels. A sequence of background scene images with F frames is taken without human subjects in it. The background image $B_i$ is formed by averaging the $i$-th pixel over all frames of the sequence,
$B_i = \frac{1}{F}\sum_{f=1}^{F} B_i^f$.
The processed, background-subtracted image is obtained by subtracting the corresponding pixel of the background image, $\tilde{P}_i = P_i - B_i$, where $\tilde{P}_i$ and $P_i$ are the processed image and raw image, respectively. Figure 2 shows examples of the background subtraction with and without human subjects present in the image. Subtracting the average background pronounces the spatial energy of an individual when the individual is present and inhibits energy in the image when an individual is absent from the scene.
The lengths of the sequences differ. This is because the durations of some actions are longer than others, e.g. sitting down is often a faster action to complete than walking across a room. It may have been possible to design the data collection to reduce this problem for analysis; perhaps to have a fixed time to record all actions, or to process the data differently, i.e. to crop all actions to the same length as the fastest action. This issue, although important, is outside the scope of this paper; instead we are presenting a home monitoring approach using a low-resolution sensor. The data has been pre-processed so that each action has an equal number of frames, achieved by sampling frames at equal intervals.
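One simple way to implement this equal-interval re-sampling is sketched below, under the assumption that each sequence is a NumPy array of shape (F, 8, 8); the target length is illustrative, not from the paper:

```python
import numpy as np

def resample_sequence(frames, target_length):
    """Select frames at (approximately) equal intervals so all sequences share one length."""
    idx = np.linspace(0, frames.shape[0] - 1, num=target_length).round().astype(int)
    return frames[idx]  # shorter sequences end up repeating some frames

rng = np.random.default_rng(0)
long_seq = rng.standard_normal((97, 8, 8))
short_seq = rng.standard_normal((41, 8, 8))
print(resample_sequence(long_seq, 60).shape, resample_sequence(short_seq, 60).shape)
# (60, 8, 8) (60, 8, 8)
```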
Temporal and spatial feature extraction
First, a one-dimensional DCT is performed on the time signals of a series of images, and the temporal feature vector is created by the DCT coefficients. The advantage of using the DCT is the ability to compactly represent an activity sequence using a fixed number of coefficients. Then, a two-dimensional DCT is performed for each image.
A temporal domain feature is used to represent the time series $P_i = \{P_i^1, P_i^2, \ldots, P_i^F\}$ of the $i$-th pixel in the frequency domain by the 1-D DCT, where the feature takes the absolute value of the $k$ lowest-frequency components of the frequency coefficients,
$F_{1D}(P_i) = |X_{1:k} P_i|$   (1)
where $X$ is the discrete cosine transformation matrix. The temporal feature for all $N$ pixels can be written as $\{F_{1D}(P_1), F_{1D}(P_2), \ldots, F_{1D}(P_N)\}$. For our experiments, we use the first 5 lowest-frequency components of the frequency coefficients.
A two-dimensional DCT for each image is calculated to form spatial domain features. This results in a matrix of 8 × 8 coefficients. A subset of these values is taken to construct the feature vector, where the low-frequency components within the processed image are chosen. Similar to the 1-D DCT, the spatial DCT feature for each frame is
$F_{2D}(P^f) = |Y_{1:k,1:k} P^f|$   (2)
where $Y$ is the 2-D discrete cosine transformation matrix. The spatial feature for all $F$ frames can be written as $\{F_{2D}(P^1), F_{2D}(P^2), \ldots, F_{2D}(P^F)\}$. In our experiment, we use the set of 3 × 3 coefficients located in the upper-left corner of the coefficient matrix. The activity is then inferred via a multi-class linear Support Vector Machine using the concatenation of the temporal and spatial features.
Experimental results
In our experiments, all the data are collected from the Grid-EYE 8 × 8 infrared sensor array developed by Panasonic [1].
Datasets
The Coventry-Activity dataset [8] is designed for evaluation of activity recognition under a multi-sensor setting. The three sensors are placed 1.5 meters away from the subject as follows: in front, to the left and to the right. Eight activities are collected with one subject in the scene. The dataset contains 3 subjects performing each activity 10 times. We evaluate our activity recognition method on this dataset and compare against the baseline method [8]. The Infra-ADL2018 dataset is introduced in this paper for monitoring home activities and detecting the occurrence of falls. The dataset is generated over 24 sessions by 8 subjects, containing 7 activity categories per session: fall, sit still, stand still, sit to stand, stand to sit, walking from left to right, and walking from right to left.
Quantitative evaluation
We first compare the proposed method against the baseline method presented in [8] on the Coventry-Activity dataset. To follow the same setting, we perform 10-fold cross-validation on the dataset, and test using each of the three sensors alone and also when fusing them together. Sensor 1 (S1), Sensor 2 (S2) and Sensor 3 (S3) are placed on the right side, in front, and on the left side of the subject, respectively. The results are shown in Figure 3, where it can be seen that the proposed method significantly outperforms the baseline when S2 and S3 are used alone, and also when the three sensors are used together. Table 1 shows the average recognition accuracy for each activity when the three sensors are used together, with the best results for each activity highlighted. We note that the proposed method achieves high recognition accuracy throughout all activities, unlike the baseline method, which produces very low accuracy for some activities, e.g. 50% for move left to right and 57% for move forward and backward.
We then test the proposed method on our new dataset, Infra-ADL2018. We perform leave-one-subject-out cross-validation, where the final recognition results reported are averaged over all subjects to remove any bias. The confusion matrix of the activity recognition is shown in Figure 4. In general, the overall recognition rate of the proposed method is 87.50%. More precisely, it reaches 100% sensitivity for detecting the occurrence of a fall, and 99.21% specificity, indicating very few false fall alarms. The only action that is misclassified as a fall is stand to sit. This is likely to be because some subjects perform the action very fast, in which case the action looks similar to a fall. The most confused actions are sit still and stand still, due to the sensor position of the system. The sensor is mounted on the ceiling, so sit still and stand still look very similar in the thermal images, where the same few pixels show a higher temperature throughout the sequence. The use of a multi-sensor setting is one way to discriminate between these by recognising the shape of the human when performing different actions.
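A leave-one-subject-out evaluation of this kind could be set up with scikit-learn's LeaveOneGroupOut splitter, as in the sketch below; the feature dimensions, labels and group assignments are synthetic placeholders, not the paper's data:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.standard_normal((168, 500))        # one DCT feature vector per recorded sequence
y = rng.integers(0, 7, size=168)           # 7 activity classes (fall, sit still, ...)
subjects = np.repeat(np.arange(8), 21)     # which of the 8 subjects produced each sequence

accuracies = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = LinearSVC(max_iter=5000).fit(X[train_idx], y[train_idx])
    accuracies.append((clf.predict(X[test_idx]) == y[test_idx]).mean())

print("mean accuracy over held-out subjects:", np.mean(accuracies))
```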
Conclusion and Future work
In this paper, we proposed a home activity monitoring method using an infrared sensor array. The proposed method uses a temporal and spatial Discrete Cosine Transform that is suitable for representing human activity in very low-resolution thermal images. Given that the sensor would be used in home environments, potential future directions include a multi-sensor system that comprises multiple viewing angles that can deal with view-invariance and occlusion.
| 1,849 |
1811.05416
|
2900965757
|
Action monitoring in a home environment provides important information for health monitoring and may serve as input into a smart home environment. Visual analysis using cameras can recognise actions in a complex scene, such as someone's living room. However, despite the huge potential benefits and importance, specifically for health, cameras are not widely accepted because of privacy concerns. This paper recognises human activities using a sensor that retains privacy. The sensor is not only different by being thermal, but it is also of low resolution: 8x8 pixels. The combination of thermal imaging and low spatial resolution ensures the privacy of individuals. We present an approach to recognise daily activities using this sensor based on a discrete cosine transform. We evaluate the proposed method on a state-of-the-art dataset and experimentally confirm that our approach outperforms the baseline method. We also introduce a new dataset and evaluate the method on it. Here we show that the sensor is well suited to detecting the occurrence of falls and Activities of Daily Living. Our method achieves an overall accuracy of 87.50% across 7 activities, with a fall detection sensitivity of 100% and specificity of 99.21%.
|
A large amount of research is underway in the development of a smart sensing system to detect falls in home environments. However, the use of thermal infrared arrays for fall detection has to date not been widely investigated. Although a real-time system to recognise fall and non-fall events has been presented in @cite_13 , their study overlooks the complexity of non-fall actions, where some actions, such as sitting down and inactivity, can be confused with falling @cite_3 . Taking this into consideration, various non-fall activities are specifically incorporated in our dataset, including those most likely to be confused with falling.
|
{
"abstract": [
"This paper presents new approach for unobtrusive indoor fall detection by an IR thermal array sensor. Unlike existing methods that run fall detection at server and require high communication and processing rates, we perform fall detection within the sensor node by a computationally inexpensive algorithm that signals the server only when a fall occurs. Experiments with prototype design show that such formulation provides robust and real-time fall detection even in a noisy environment.",
"Fall detection is a major challenge in the public health care domain, especially for the elderly, and reliable surveillance is a necessity to mitigate the effects of falls. The technology and products related to fall detection have always been in high demand within the security and the health-care industries. An effective fall detection system is required to provide urgent support and to significantly reduce the medical care costs associated with falls. In this paper, we give a comprehensive survey of different systems for fall detection and their underlying algorithms. Fall detection approaches are divided into three main categories: wearable device based, ambience device based and vision based. These approaches are summarised and compared with each other and a conclusion is derived with some discussions on possible future work."
],
"cite_N": [
"@cite_13",
"@cite_3"
],
"mid": [
"2771763340",
"2076068958"
]
}
|
Home Activity Monitoring using Low Resolution Infrared Sensor Array
|
The U.K., like many other countries, is facing an explosion of long-term health conditions. In England alone, 15.4 million people have at least one chronic medical condition, such as dementia, stroke, cardiovascular or musculoskeletal disease [3]. In such cases, continuous management and medical treatment may be required for many years outside of hospital. Currently this requires 70% of the total National Health Service (NHS) budget [7]. The huge costs involved could eventually make the NHS unsustainable unless a better solution can be found to reduce costs, while also giving those with long-term conditions better care and an improved quality of life.
For these reasons, developing a reliable home monitoring system has drawn much attention in recent years due to the growing demands for improved health-care and patient well-being. Current home monitoring systems often include environmental sensors, wearable inertial sensors and visual sensors. Such systems can enable various types of applications in healthcare provision, such as to help diagnose and manage health and well-being conditions [17].
Wearable sensor based techniques have emerged over recent years with a focus on coarse categorisations of activity that offer low cost, low energy consumption, and data simplicity [10]. Among these, tri-axial accelerometers are the most broadly used inertial sensors to recognise ambulation activities [13]. However, despite rapid developments in wearable sensor technology, issues surrounding missed communications, limited battery life, irregular wearing, and poor comfort remain problematic.
Contactless sensors, such as wireless passive sensing systems and visual sensors, have the potential to address several limitations of wearable sensors. Because of the ubiquitous availability of Wi-Fi signals in indoor areas, wireless signals have been exploited to detect human movement [12]; that is, when a person engages in an activity, the body movement affects the wireless signals. This technology has shown capabilities in assisted living and residential care [11]. However, Wi-Fi sensing systems still suffer from low accuracy, single-user capability and signal-source dependency problems. On the other hand, visual sensors can capture rich data and multiple events simultaneously. Recent advances in computer vision allow for a fine-grained analysis of human activity and have now opened up the possibility of integrating these devices seamlessly into home monitoring systems [16]. However, visual sensors have not been widely integrated, largely due to ongoing privacy concerns.
With this in mind, in this paper we propose a home activity monitoring system using an 8 × 8 infrared sensor array. The sensor provides 64 pixels of thermal data and can be used to offer coarse-grained activity recognition while, importantly, preserving privacy for the users. This is a new form of sensor in the home monitoring context, for which only limited publicly available datasets exist. We evaluate our method on the Coventry-Activity dataset [8], and compare against a baseline method. We also introduce a new dataset, Infra-ADL2018, for monitoring activities of daily living and detecting the occurrence of falls. The dataset contains 7 daily activities performed by 8 subjects. The infrared sensor itself is mounted on the ceiling to give an overhead view. In summary, the major contributions of this paper are: (a) testing the ability of a low-resolution thermal sensor to recognise basic human daily activities, (b) exploring a low-resolution thermal sensor in a healthcare scenario to preserve user privacy, and (c) demonstrating that the proposed method is able to achieve high recognition results, especially for detecting fall events.
Proposed method
We propose a home activity monitoring pipeline to recognise basic daily activities using thermal images captured by an infrared sensor array. The pipeline of the proposed method is shown in Figure 1. Given a set of raw thermal images, background subtraction is applied first to reduce the effect of noise in background pixels, and then the sequences are re-sampled to the same length. Discrete Cosine Transform (DCT) based temporal and spatial features are extracted from each sequence. Using these features, the proposed method is able to classify activities and detect falls robustly and accurately.
Sequence pre-processing
The raw data collected by the infrared sensor array is noisy, especially in the background pixels. A sequence of background scene images with F frames is taken without human subjects in it. The background image $B_i$ is formed by averaging the $i$-th pixel over all frames of the sequence,
$B_i = \frac{1}{F}\sum_{f=1}^{F} B_i^f$.
The processed, background-subtracted image is obtained by subtracting the corresponding pixel of the background image, $\tilde{P}_i = P_i - B_i$, where $\tilde{P}_i$ and $P_i$ are the processed image and raw image, respectively. Figure 2 shows examples of the background subtraction with and without human subjects present in the image. Subtracting the average background pronounces the spatial energy of an individual when the individual is present and inhibits energy in the image when an individual is absent from the scene.
The lengths of the sequences differ. This is because the durations of some actions are longer than others, e.g. sitting down is often a faster action to complete than walking across a room. It may have been possible to design the data collection to reduce this problem for analysis; perhaps to have a fixed time to record all actions, or to process the data differently, i.e. to crop all actions to the same length as the fastest action. This issue, although important, is outside the scope of this paper; instead we are presenting a home monitoring approach using a low-resolution sensor. The data has been pre-processed so that each action has an equal number of frames, achieved by sampling frames at equal intervals.
Temporal and spatial feature extraction
First, a one-dimensional DCT is performed on the time signals of a series of images, and the temporal feature vector is created by the DCT coefficients. The advantage of using the DCT is the ability to compactly represent an activity sequence using a fixed number of coefficients. Then, a two-dimensional DCT is performed for each image.
A temporal domain feature is used to represent the time series $P_i = \{P_i^1, P_i^2, \ldots, P_i^F\}$ of the $i$-th pixel in the frequency domain by the 1-D DCT, where the feature takes the absolute value of the $k$ lowest-frequency components of the frequency coefficients,
$F_{1D}(P_i) = |X_{1:k} P_i|$   (1)
where $X$ is the discrete cosine transformation matrix. The temporal feature for all $N$ pixels can be written as $\{F_{1D}(P_1), F_{1D}(P_2), \ldots, F_{1D}(P_N)\}$. For our experiments, we use the first 5 lowest-frequency components of the frequency coefficients.
A two-dimensional DCT for each image is calculated to form spatial domain features. This results in a matrix of 8 × 8 coefficients. A subset of these values is taken to construct the feature vector, where the low-frequency components within the processed image are chosen. Similar to the 1-D DCT, the spatial DCT feature for each frame is
$F_{2D}(P^f) = |Y_{1:k,1:k} P^f|$   (2)
where $Y$ is the 2-D discrete cosine transformation matrix. The spatial feature for all $F$ frames can be written as $\{F_{2D}(P^1), F_{2D}(P^2), \ldots, F_{2D}(P^F)\}$. In our experiment, we use the set of 3 × 3 coefficients located in the upper-left corner of the coefficient matrix. The activity is then inferred via a multi-class linear Support Vector Machine using the concatenation of the temporal and spatial features.
Experimental results
In our experiments, all the data are collected from the Grid-EYE 8 × 8 infrared sensor array developed by Panasonic [1].
Datasets
The Coventry-Activity dataset [8] is designed for evaluation of activity recognition under a multi-sensor setting. The three sensors are placed 1.5 meters away from the subject as follows: in front, to the left and to the right. Eight activities are collected with one subject in the scene. The dataset contains 3 subjects performing each activity 10 times. We evaluate our activity recognition method on this dataset and compare against the baseline method [8]. The Infra-ADL2018 dataset is introduced in this paper for monitoring home activities and detecting the occurrence of falls. The dataset is generated over 24 sessions by 8 subjects, containing 7 activity categories per session: fall, sit still, stand still, sit to stand, stand to sit, walking from left to right, and walking from right to left.
Quantitative evaluation
We first compare the proposed method against the baseline method presented in [8] on the Coventry-Activity dataset. To follow the same setting, we perform 10-fold cross-validation on the dataset, and test using each of the three sensors alone and also when fusing them together. Sensor 1 (S1), Sensor 2 (S2) and Sensor 3 (S3) are placed on the right side, in front, and on the left side of the subject, respectively. The results are shown in Figure 3, where it can be seen that the proposed method significantly outperforms the baseline when S2 and S3 are used alone, and also when the three sensors are used together. Table 1 shows the average recognition accuracy for each activity when the three sensors are used together, with the best results for each activity highlighted. We note that the proposed method achieves high recognition accuracy throughout all activities, unlike the baseline method, which produces very low accuracy for some activities, e.g. 50% for move left to right and 57% for move forward and backward.
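Feature-level fusion of the three sensors, followed by 10-fold cross-validation, could look like the following sketch; the feature dimensionalities and the synthetic data are assumptions used only to show the mechanics:

```python
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_sequences = 240
X_s1 = rng.standard_normal((n_sequences, 300))   # DCT features from sensor 1 (right)
X_s2 = rng.standard_normal((n_sequences, 300))   # sensor 2 (front)
X_s3 = rng.standard_normal((n_sequences, 300))   # sensor 3 (left)
y = rng.integers(0, 8, size=n_sequences)         # 8 activities in the Coventry set

X_fused = np.hstack([X_s1, X_s2, X_s3])          # simple feature-level fusion
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(LinearSVC(max_iter=5000), X_fused, y, cv=cv)
print(scores.mean())
```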
We then test the proposed method on our new dataset, Infra-ADL2018. We perform leave-one-subject-out cross-validation, where the final recognition results reported are averaged over all subjects to remove any bias. The confusion matrix of the activity recognition is shown in Figure 4. In general, the overall recognition rate of the proposed method is 87.50%. More precisely, it reaches 100% sensitivity for detecting the occurrence of a fall, and 99.21% specificity, indicating very few false fall alarms. The only action that is misclassified as a fall is stand to sit. This is likely to be because some subjects perform the action very fast, in which case the action looks similar to a fall. The most confused actions are sit still and stand still, due to the sensor position of the system. The sensor is mounted on the ceiling, so sit still and stand still look very similar in the thermal images, where the same few pixels show a higher temperature throughout the sequence. The use of a multi-sensor setting is one way to discriminate between these by recognising the shape of the human when performing different actions.
Conclusion and Future work
In this paper, we proposed a home activity monitoring method using an infrared sensor array. The proposed method uses a temporal and spatial Discrete Cosine Transform that is suitable for representing human activity in very low-resolution thermal images. Given that the sensor would be used in home environments, potential future directions include a multi-sensor system that comprises multiple viewing angles that can deal with view-invariance and occlusion.
| 1,849 |
1811.05008
|
2963321544
|
We provide a framework for modeling social network formation through conditional multinomial logit models from discrete choice and random utility theory, in which each new edge is viewed as a “choice” made by a node to connect to another node, based on (generic) features of the other nodes available to make a connection. This perspective on network formation unifies existing models such as preferential attachment, triadic closure, and node fitness, which are all special cases, and thereby provides a flexible means for conceptualizing, estimating, and comparing models. The lens of discrete choice theory also provides several new tools for analyzing social network formation; for example, the significance of node features can be evaluated in a statistically rigorous manner, and mixtures of existing models can be estimated by adapting known expectation-maximization algorithms. We demonstrate the flexibility of our framework through examples that analyze a number of synthetic and real-world datasets. For example, we provide rigorous methods for estimating preferential attachment models and show how to separate the effects of preferential attachment and triadic closure. Non-parametric estimates of the importance of degree show a highly linear trend, and we expose the importance of looking carefully at nodes with degree zero. Examining the formation of a large citation graph, we find evidence for an increased role of degree when accounting for age.
|
There is a long history of parameter estimation in network formation processes @cite_67 @cite_19 . Probably the most studied formation model is preferential attachment @cite_52 , which relates the likelihood of new nodes connecting to a node @math to @math 's degree. The original justification of the model was based on the degree distribution of the resulting graph qualitatively matching those of several early large-scale empirical networks. Shortly after, a number of authors proposed non-parametric methods to measure preferential attachment by counting the degrees of the nodes that receive new edges and normalizing them by the relative likelihood of those degrees @cite_41 @cite_7 @cite_73 . There are several generalizations of preferential attachment, where node @math is chosen proportional to some function @math of the degree @math of node @math (here, @math is standard preferential attachment). Much of this research has focused on the case when @math , i.e., attachment is preferential to some (latent) power @math of the degree @cite_17 .
|
{
"abstract": [
"",
"A key ingredient of many current models proposed to capture the topological evolution of complex networks is the hypothesis that highly connected nodes increase their connectivity faster than their less connected peers, a phenomenon called preferential attachment. Measurements on four networks, namely the science citation network, Internet, actor collaboration and science coauthorship network indicate that the rate at which nodes acquire links depends on the node's degree, offering direct quantitative support for the presence of preferential attachment. We find that for the first two systems the attachment rate depends linearly on the node degree, while for the last two the dependence follows a sublinear power law.",
"We study empirically the time evolution of scientific collaboration networks in physics and biology. In these networks, two scientists are considered connected if they have coauthored one or more papers together. We show that the probability of scientists collaborating increases with the number of other collaborators they have in common, and that the probability of a particular scientist acquiring new collaborators increases with the number of his or her past collaborators. These results provide experimental evidence in favor of previously conjectured mechanisms for clustering and power-law degree distributions in networks.",
"Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.",
"",
"Publicly available data reveal long-term systematic features about citation statistics and how papers are referenced. The data also tell fascinating citation histories of individual articles.",
"A solution for the time- and age-dependent connectivity distribution of a growing random network is presented. The network is built by adding sites that link to earlier sites with a probability A(k) which depends on the number of preexisting links k to that site. For homogeneous connection kernels, A(k) approximately k(gamma), different behaviors arise for gamma 1, and gamma = 1. For gamma 1, a single site connects to nearly all other sites. In the borderline case A(k) approximately k, the power law N(k) approximately k(-nu) is found, where the exponent nu can be tuned to any value in the range 2<nu<infinity."
],
"cite_N": [
"@cite_67",
"@cite_7",
"@cite_41",
"@cite_52",
"@cite_19",
"@cite_73",
"@cite_17"
],
"mid": [
"",
"1966342871",
"1556758605",
"2008620264",
"",
"1983454294",
"2032516423"
]
}
|
Choosing to Grow a Graph: Modeling Network Formation as Discrete Choice
|
Understanding how networks form and evolve is an essential component of understanding their structure, which in turn underlies the basis for understanding the broad range of processes that occur on networks. Models of social network formation can largely be decomposed into node formation and edge formation. In this work, we argue that edge formation can be effectively modeled as a choice made by an actor (or actors) in the network to instantiate a connection to another node. The diverse research on network formation has led to many models and mechanisms of edge formation, including preferential attachment [2], uniform attachment [12], triadic closure [31], random walks [65,78], homophily [55], copying edges from existing nodes [35,39], latent space structures [22,41,55], inherent node fitness [7,11], and combinations of all of these [28,40,43]. Here, we frame edge formation as a discrete choice process and derive a family of discrete choice models [47,74] that subsume a wide range of existing models in a unified framework and also naturally opens up a host of powerful extensions.
Discrete choice models are commonly employed in economics, social psychology, and statistics as a way to model how individuals make choices from a slate of discrete alternatives [1]. Typically, the alternatives have associated features, and statistical models of discrete choice make it possible to estimate the relative importance of such features. Such models have been used to answer questions such as how consumers choose goods [67], how people choose where they live [46], how students choose what college to attend [21], and how commuters choose between different modes of transportation [75]. Discrete choice analysis is also used to understand how choices vary depending on the context in which they are framed: in online commerce, this could be how web layouts lead to different purchasing priorities [26]; for choosing colleges, this could be incorporating the effect of the national economy. In this paper, we demonstrate how discrete choice models can similarly help us understand the factors driving social network evolution.
The starting point for the present work is the observation that edge formation events in social networks are naturally viewed as discrete choices. For simplicity, consider a directed graph where edges are formed one by one, where we can think of the formation of a directed edge (i, j) as i "choosing" to connect with j, where the set of alternatives available to i is the set of all other nodes. (While undirected graph models are common in social network analysis, the underlying formation procedure is almost always asymmetric. For example, the Facebook friendship graph is typically modeled as an undirected graph [77], but the friendships are proposed by one of the two nodes in an edge.) The key modeling question is easy to state: why did i choose j? This question has long been the informal subject of network formation modeling and at the same time the exact question that discrete choice models and analysis have been designed to answer. However, up to this point, network formation models have largely been decoupled from discrete choice theory.
In employing discrete choice analysis, we focus on the conditional multinomial logit model, commonly called the conditional logit model for short, which is a foundational workhorse of discrete choice modeling. The model belongs to the family of random utility models, where choices are interpretable as those of a rational actor selecting the alternative with the largest "utility" sampled from random variables that decompose into the inherent utility of the alternative and a noise term. With the conditional logit model, we can use existing optimization routines to estimate model parameters and existing statistical methods to assess the uncertainty of the estimates. Discrete choice models can also easily restrict the set of available alternatives, where it might not be reasonable to assume that the entire set of nodes is available for friendship. For example, sometimes only "friends of friends" are considered [24,28,40].
In this paper, we first show that many popular network formation mechanisms can be rewritten as conditional logit models, including preferential attachment, uniform attachment, node fitness, latent space models, and models of homophily. However, the real power of discrete choice models for social network analysis is the ability to combine different features (e.g., node degree and node age), as well as different mechanisms (e.g., triadic closure and preferential attachment) and estimate their relative roles. Social networks are enormously varied in their structure [27], but existing methods often do a poor job at modeling this diversity. Thus, beyond unifying the network formation and discrete choice literature, we also develop several new tools for social network analysis. For example, we show how to estimate models to distinguish the effects of preferential attachment and triadic closure. We demonstrate these tools by analyzing the formation of the Flickr social network and the formation of a citation network. We find on Flickr that accounting for triadic closure greatly reduces the estimated role of degree in choosing who to connect to, and that nodes with degree zero have a remarkably high utility. Our estimates of preferential attachment in the citation network are similar to those observed in prior studies. When accounting for the age of a paper, we find evidence for linear preferential attachment. However, for a fixed degree, we find that age is negatively correlated with the likelihood of a new citation (i.e., older papers are less likely to be cited).
The key assumption underlying our framework is that the available data actually captures edge formation events (either through edge timestamps or other sequential information). In contrast, many existing approaches to understanding network formation focus on observing only the structural properties of a network at a single point of observation, e.g., its degree distribution, and initiating a deductive process to try and understand how variations in edge formation would lead to different outcomes [2,7,28,43]. This approach leads to tidy analyses and easy-to-characterize asymptotic properties, but model selection in this context is strongly dependent on what properties are compared. Different underlying formation processes can lead to graphs with indistinguishable properties. For example, many different formation processes result in the same heavy-tailed degree distributions [52]. Thus, when "fitting" outcome measurements in this way, one has to know (or posit), e.g., the relative rates of node formation and edge formation. However, when temporal or sequential data is available [25,56], our framework overcomes these limitations by incorporating this structure.
Additional related work. There is a strong connection between our work and work on link prediction and missing data methods using network features to predict edges [15,42]. A network formation model implicitly makes claims about what edges are most likely to form next, and thus can be evaluated by the same metrics as link prediction algorithms [44]. We use predictive accuracy as a measure of goodness of fit, but our primary concern is interpretability of the model and estimates, which is one of the advantages of the conditional logit model.
In sociology, stochastic actor-oriented models (SAOMs) employ a similar logit choice [69,70]; however, these models are targeted towards data collected as a few snapshots rather than edge-by-edge formation. SAOMs also model the rate at which nodes form new relationships, whereas we condition on the node initiating the new edge, providing better estimates of model parameters. There are also sociological models such as relational event models [10] and dynamic network actor models [71] that use fine-grained temporal information, yet these also do not condition on the initiator node as we do. While these sociological models can incorporate notions of network formation (e.g., preferential attachment), our conditional logit framework actually cleanly subsumes a wide range of models as special cases.
Finally, estimating the parameters that drive edge formation is different from identifying the factors that could have lead to the observed graph. The latter question is often pursued with so-called exponential random graph models (ERGMs) [63,79,81]. However, these models do not consider individual edge events, are hard to estimate, and have known pathologies [13,66].
DISCRETE CHOICE AND EDGE FORMATION
We now develop network formation through the lens of discrete choice. Throughout this paper, we assume that the networks are directed. Again, while undirected graphs are common in social network analysis, the actual edge formation process often has directed initiation. In the common setting of "growing graphs, " nodes arrive one at a time and form edges when arriving in a network. In these cases, the newly arriving node is considered to be the node initiating the connection; such analysis is standard with, e.g., classical preferential attachment models [2]. When modeling the directed formation of an edge (i, j), two processes need to be distinguished, roughly corresponding to the questions "who is i?" (the chooser) and "who is j?" (the chosen). In this paper, we focus on understanding the latter, i.e., the formation of (i, j) as the selection of j conditional on knowing that i is ready to form an edge. Thus, our discrete choice models of edge formation can be readily estimated from data that implicitly or explicitly contains a record of initiating i nodes and used for subsequent analysis, as we show in Sections 3 and 4. Beyond the scope of this work, our model of "j conditional on i" can be paired with a model of "initiations by i" for a full generative model of network formation.
Edge formation as discrete choice
With the above formalisms in place, we now develop network formation from a discrete choice perspective. We begin by showing how several well-known models can be conveniently expressed as conditional logit models, with a summary given in Table 1. All models are designed to grow simple graphs (i.e., without multi-edges), and the choice set C excludes any nodes to which the chooser i is already connected. Every item is represented by its features that, importantly, can evolve over time. The features x j,t of node j at time t are thus always time-indexed, but we often suppress the t to reduce notational clutter.
Preferential attachment. We start with the generalized Barabási-Albert model [2,8,36], also known as the generalized Price model [59], one of the most studied models in the network formation literature. It is typically stated as a growth model of a time-evolving graph G t = (V t , E t ), t = 1, 2, 3, . . ., and when a new node arrives it connects to m distinct existing nodes j with a probability proportional to a power of their degree d j,t at time t,
$P(j, V_t) = \frac{d_{j,t}^{\alpha}}{\sum_{\ell \in V_t} d_{\ell,t}^{\alpha}}$   (2)
The exponent parameter α controls the relative importance of degree [36]. The case where α = 1 is called linear preferential attachment, and produces networks that can mimic a range of structural properties observed in empirical networks. If we represent each potential neighbor j with the time-indexed one-dimensional "feature vector" $x_{j,t} = \log d_{j,t}$ and employ a conditional logit model as in Equation (1), we obtain a utility of j for i at time t of $u_{i,j,t} = \theta \log d_{j,t}$. Here the choice model parameter θ plays the exact role of α, since $e^{\theta \log d_{j,t}} = d_{j,t}^{\theta}$.
Table 1: Network formation models framed as utility functions for a conditional logit. Where appropriate, we use the traditional notation for the parameters of each process. Each row lists the process, the utility $u_{i,j}$, and the choice set C.
Uniform attachment [12]: $u_{i,j} = 1$; C = V
Preferential attachment [2,36]: $u_{i,j} = \alpha \log d_j$; C = V
Non-parametric PA [54,58,62]: $u_{i,j} = \theta_{d_j}$; C = V
Triadic closure [61]: $u_{i,j} = 1$; C = {j : FoF_{i,j}}
FoF attachment [31,65,78]: $u_{i,j} = \alpha \log \eta_{i,j}$; C = V
PA, FoFs only: $u_{i,j} = \alpha \log d_j$; C = {j : FoF_{i,j}}
Individual node fitness [11]: $u_{i,j} = \theta_j$; C = V
PA with fitness [6,53]: $u_{i,j} = \alpha \log d_j + \theta_j$; C = V
Latent space [22,41,55]: $u_{i,j} = \beta \cdot d(i,j)$; C = V
Stochastic block model [33]: $u_{i,j} = \omega_{g_i, g_j}$; C = V
Homophily [48]: $u_{i,j} = h \cdot 1\{g_i = g_j\}$; C = V
Given a growing network $G_t$, we can construct a choice dataset D from this network by extracting the node $j_t$, node sets $V_t$, and degree sequence $(d_{1,t}, \ldots, d_{|V_t|,t})$ at each time-step. The preferential attachment model has only one parameter, θ = α. The log-likelihood for that parameter given a dataset is then:
$l(\alpha; D) = \sum_{(j,C)\in D} \log \frac{\exp(\alpha \log d_j)}{\sum_{\ell \in C} \exp(\alpha \log d_\ell)} = \sum_{(j,C)\in D} \left( \alpha \log d_j - \log \sum_{\ell \in C} \exp(\alpha \log d_\ell) \right).$
We've suppressed the time-index t from the features log d ℓ to reduce clutter, but emphasize that d ℓ is the degree at the time of the choice.
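As an illustration, the sketch below evaluates this conditional-logit log-likelihood on a list of (chosen alternative, degree vector) observations and maximizes it over α with SciPy; the data structures and the toy generator are assumptions, and degrees are kept strictly positive so that log d_j is defined:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def pa_log_likelihood(alpha, choices):
    """Conditional-logit log-likelihood for preferential attachment.

    choices: list of (chosen_index, degrees) pairs, where degrees holds the degrees
    of all alternatives in the choice set at the time of the choice.
    """
    ll = 0.0
    for chosen, degrees in choices:
        logits = alpha * np.log(degrees)                 # utilities alpha * log d_j
        ll += logits[chosen] - np.log(np.exp(logits).sum())
    return ll

# Toy data: each choice set has 50 alternatives with strictly positive degrees.
rng = np.random.default_rng(0)
true_alpha = 1.0
choices = []
for _ in range(500):
    degrees = rng.integers(1, 50, size=50)
    p = degrees**true_alpha / (degrees**true_alpha).sum()
    choices.append((rng.choice(50, p=p), degrees))

res = minimize_scalar(lambda a: -pa_log_likelihood(a, choices), bounds=(0, 3), method="bounded")
print("estimated alpha:", res.x)
```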
Non-parametric preferential attachment. The above model assumes an attachment kernel of a particular parametric form. From a discrete choice perspective, one can also estimate the role of degree in edge formation non-parametrically by estimating a coefficient θ k for each degree k = 0, . . . , n − 1 individually. This approach has the added benefit of being able to assign positive probability to choosing nodes with degree zero. Under this model, the log-likelihood of the parameters θ = (θ 0 , ..., θ n−1 ) given the dataset is:
$l(\theta; D) = \sum_{(j,C)\in D} \log \frac{\exp(\theta_{d_j})}{\sum_{\ell \in C} \exp(\theta_{d_\ell})} = \sum_{(j,C)\in D} \left( \theta_{d_j} - \log \sum_{\ell \in C} \exp(\theta_{d_\ell}) \right).$
Again we've suppressed time-indexing to simplify the presentation. Pham et al. [58] previously described a version of the above likelihood as a means of measuring the attachment kernel using maximum likelihood, albeit without making the connection to discrete choice.
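The non-parametric variant can be fitted with the same machinery by giving each degree its own coefficient; the sketch below pins θ_0 = 0 for identifiability (a common normalization, and an assumption here) and fits the remaining coefficients with L-BFGS on toy data:

```python
import numpy as np
from scipy.optimize import minimize

def nonparam_pa_nll(theta, choices):
    """Negative log-likelihood with one coefficient per degree; theta_0 is pinned to 0."""
    th = np.concatenate([[0.0], theta])      # th[d] is the coefficient for degree d
    nll = 0.0
    for chosen, degrees in choices:
        logits = th[degrees]                  # utility of alternative j is theta_{d_j}
        nll -= logits[chosen] - np.log(np.exp(logits).sum())
    return nll

rng = np.random.default_rng(0)
max_degree = 20
choices = []
for _ in range(300):
    degrees = rng.integers(0, max_degree + 1, size=40)
    p = (degrees + 1) / (degrees + 1).sum()   # toy choice probabilities
    choices.append((rng.choice(40, p=p), degrees))

res = minimize(nonparam_pa_nll, x0=np.zeros(max_degree), args=(choices,), method="L-BFGS-B")
print(res.x[:5])  # estimated theta_1..theta_5, relative to theta_0 = 0
```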
Uniform attachment. A simple edge formation model is to sample a new neighbor uniformly at random from all nodes [12]. There are no parameters in this model, but we can still write down the likelihood of the model given a dataset, which will be useful when
we later combine this model with others within a mixture model:
$l(D) = \sum_{(j,C)\in D} \log \frac{\exp(1)}{\sum_{\ell \in C} \exp(1)} = \sum_{(j,C)\in D} -\log |C|.$
Triadic closure. A variant of uniform attachment is for i to attach to new neighbors uniformly at random from the set of their friends-of-friends, as opposed to the set of all nodes. This process effectively models triadic closure [61]. It has the same simple functional form as the uniform model, but now the choice set C varies with each choice; namely, the choice set is restricted to be only the friends of friends of node i (the chooser) to which i is not already connected. This change in choice set can also be achieved by assuming the utility of j to i at time t is $u_{i,j,t} = \log(1\{FoF_{i,j,t}\})$, where $1\{FoF_{i,j,t}\}$ is a boolean indicating whether i and j are friends of friends at time t, and then letting the choice set revert to the full node set. An additional model that naturally combines the ideas of preferential attachment and befriending friends-of-friends takes the number of friends in common between i and j as a feature. We could define this feature as $\eta_{i,j,t} = |\{k : e_{i,k,t} \wedge e_{k,j,t}\}|$, where $e_{i,k,t}$ indicates whether there is an edge between i and k at time t. The corresponding utility would be $u_{i,j,t} = \alpha \log \eta_{i,j,t}$. This model is similar (but not equivalent) to random walk-based formation models [31,65,78], which emphasize formation within a local neighborhood.
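For concreteness, the friends-of-friends choice set and the common-neighbour feature η_{i,j} could be computed as in the following NetworkX sketch (the example graph and function names are illustrative, not part of the paper):

```python
import networkx as nx

def friends_of_friends(G, i):
    """Nodes at distance exactly two from i: the candidate set for triadic closure."""
    neighbors = set(G[i])
    fof = set()
    for k in neighbors:
        fof.update(G[k])
    return fof - neighbors - {i}

def common_neighbors_feature(G, i, j):
    """eta_{i,j}: the number of friends that i and j already have in common."""
    return len(set(G[i]) & set(G[j]))

G = nx.karate_club_graph()
print(sorted(friends_of_friends(G, 0))[:10])
print(common_neighbors_feature(G, 0, 33))
```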
Node fitness. Another line of formation models subsumed by the discrete choice framework involves node fitness. In these models, nodes choose to connect to others based on some intrinsic latent fitness score. Certain distributions of fitness values lead to a scale-free degree distribution [11], providing an alternative explanation to preferential attachment for modeling such degree distributions. We can express the node fitness model as a conditional logit model with a separate fixed effect θ_j for each node j (so the feature of a node is an indicator vector of its identity). The likelihood of the fitness parameters θ given the data is then:
$l(\theta; D) = \sum_{(j,C)\in D} \log \frac{\exp(\theta_j)}{\sum_{\ell \in C} \exp(\theta_\ell)} = \sum_{(j,C)\in D} \left( \theta_j - \log \sum_{\ell \in C} \exp(\theta_\ell) \right).$
This formation model is equivalent to the classic Bradley-Terry-Luce model of discrete choice for estimating the quality of alternatives [45]. Alternatively, one could replace the individual fixed effects with surrogate features of node fitness such as an auxiliary measure of gregariousness (in the case of social networks), or the impact factor of a paper's journal (in the case of citations networks).
A related model proposes selection probabilities proportional to the product of node fitness and degree [6,53]. This model can be written as a conditional logit model with u_{i,j,t} = α log d_{j,t} + θ_j.
Latent space models. Another class of network formation models postulates the existence of a latent space that drives connections between nodes. Examples of latent spaces include Euclidean space [22], hyperbolic space [37], a tree [41], a circle [55], or a set of discrete classes [23]. While the conditional logit model in the form that we describe it does not facilitate finding the best-fitting latent space assignment to explain the data, it can be used to estimate the relative importance of a known latent space given a distance function d(i, j). As one example from the family of latent space models, in the community-guided attachment (CGA) model [41] all nodes have a distance derived from the height h(i, j) of the common parent of i and j in a latent tree structure situating all nodes. Given this tree as known, a node connects to another proportionally to c^{-h(i,j)} for some scalar c > 0. As a conditional logit model, the corresponding utility function is u_{i,j} = -h(i, j) · log(c). The parameter θ = log c can be retrieved by fitting a conditional logit with a known h(i, j) as the only variable and transforming the estimated parameter with c = exp(θ). Assuming that the latent space representation is given is a strong assumption, and fitting such a model while estimating the latent space representation (e.g., as done by Hoff et al. [22] in Euclidean space) is much more difficult.
Additional models. Conditional logit models are very flexible and can deal with multiple features and interactions between them. Any number of features can be added, including node covariates and structural features like a node's clustering coefficient [3] or age [12,40]. Conditional logit models can also be used to investigate the role of homophily [48] in edge formation, by adding a binary feature indicating whether nodes i and j are part of the same class. Table 1 summarizes how several network formation models fit within the discrete choice framework via their corresponding utility functions and choice sets. A major advantage of this framework is that different features can easily be combined into a single model and jointly estimated. Or, when suitable, one can employ a mixture of conditional logit models, as we show in the next section.
Combining modes using Mixed Logit
So far we have written a range of existing and new edge formation models as conditional logit models, a specific type of discrete choice model. Several existing edge formation models that do not fit neatly into the conditional logit framework, meanwhile, align exactly with the use of mixture models in discrete choice modeling. Following our success formulating edge formation models as conditional logit models, in this subsection we develop mixed conditional logit formulations of several additional models.
A common proposal to make network formation models more flexible is to augment an existing model by allowing nodes to pick neighbors uniformly at random with some probability 1 − p, while running the ordinary model with probability p [17,35,39,43]. This augmentation increases flexibility because it enables the model to explain edge events that may otherwise have probability zero. Within discrete choice, this approach is precisely a mixed logit model where one of the mixture modes is uniform attachment.
While the conditional logit estimates a single parameter vector representing average preferences as shared by all agents, the mixed logit model is often used to account for differences in preferences across various types of agents. In its most general form, the mixed logit is expressed using a probability distribution f over different instantiations of the parameter vector θ :
P_i(j, C) = \int \frac{\exp(\theta^T x_j)}{\sum_{l \in C} \exp(\theta^T x_l)} \, f(\theta) \, d\theta.
Table 2:
Process | Modes
Copy model [35] | Uniform, PA
Node types [38] | New node, PA, none
Local search [24,28] | Uniform, Uniform FoF
(r, p)-model | Uniform, PA, Uniform FoF, PA FoF
In this work, we will only consider discrete mixtures of M logits, also called a latent class model [32]:
P_i(j, C) = \sum_{m=1}^{M} \pi_m \, \frac{\exp(\theta_m^T x_j)}{\sum_{l \in C} \exp(\theta_m^T x_l)},
where \sum_{m=1}^{M} \pi_m = 1 and the weights π_1, . . . , π_M model the relative prevalence of each mode.
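The following short sketch, under illustrative assumptions about the feature matrix and parameter values, computes these latent-class choice probabilities for a single choice set.

```python
import numpy as np

def mixed_logit_prob(X, thetas, pis):
    """Choice probabilities for a latent-class (mixed) logit.

    X: (n_alternatives, n_features) feature matrix for one choice set.
    thetas: list of M parameter vectors, one per mode.
    pis: length-M array of class probabilities summing to one.
    """
    probs = np.zeros(X.shape[0])
    for pi_m, theta_m in zip(pis, thetas):
        u = X @ theta_m
        u -= u.max()                              # numerical stability
        probs += pi_m * np.exp(u) / np.exp(u).sum()
    return probs

# Toy choice set with a single feature (log-degree) and two modes:
# mode 1 = preferential attachment (theta = [1.0]), mode 2 = uniform (theta = [0.0]).
X = np.log(np.array([[1.0], [2.0], [8.0]]))
print(mixed_logit_prob(X, [np.array([1.0]), np.array([0.0])], np.array([0.5, 0.5])))
```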
Copy model. The copy model is a classic formation process that can be written as a mixed logit with two modes. In the first mode, new edges connect proportional to degree with probability p, while in the second mode they connect uniformly at random with probability 1 − p [17,43]. As a conditional logit model, the utilities of the two modes are u^{(1)}_x = log d_x and u^{(2)}_x = 1, respectively, and the class probabilities are (π_1, π_2) = (p, 1 − p). (This is a special case of the original copy model where d edges are copied from a sampled vertex [39]; the model here is when d = 1, which is often used for analysis [19].) The connection between relaxations of preferential attachment and mixture models was also recently observed by Medina et al. [49].
Local search model. Another example of a model with multiple modes is the Jackson-Rogers model of edge formation as a mixture of uniform attachment and triadic closure [24,28]. The original model is based on a relative rate r* between edges forming at random and edges formed locally. It also has edges form based on respective acceptance probabilities. We describe a simplified version of this model, which we'll call the local search model, where edges connect to nodes selected uniformly at random from the full node set with probability r and uniformly at random from the set of friends-of-friends with probability 1 − r (footnote 1). We can represent this simplified process with a two-mode mixed logit model. In this case the mixture parameters are (π_1, π_2) = (r, 1 − r) and both modes have the same utility function u_x = 1, but their choice sets differ so that the second mode only considers friends-of-friends (footnote 2). Table 2 overviews the mixture model formulations described above, as well as a new model, the (r, p)-model, that we use in Section 4.2 to analyze preferential attachment effects.
Footnote 1: Since the r* parameter in the original presentation is actually the rate of uniform attachment, we can relate it to our r through r = r*/(1 + r*). For example, if the rate between random and friend-of-friend edges is one to one (r* = 1), then r = 0.5.
Footnote 2: A model with a restricted choice set, for example to only friends-of-friends, gives a likelihood of zero to choices outside the choice set.
ESTIMATION AND INFERENCE
To learn a discrete choice model of network formation from data, we assume that we have access to a sequence of directed edges, in chronological order. This sequence of edges needs to be recast as choice data in order to fit a choice model. For every formed edge (i, j), we create a data point consisting of the choice j, the choice set of candidate nodes at the time, and the features of each candidate node at the time.
Given a data set and a conditional logit model, one can write out the log-likelihood, as shown in Section 2.2. For any conditional logit model with a linear utility u_{i,j} = θ^T x_j, the likelihood function is convex with respect to the variables θ and can be efficiently maximized using standard gradient-based optimization (e.g., BFGS). The functional form of the logit leads to straightforward gradients. For example, for preferential attachment, the gradient is
\frac{\partial}{\partial \alpha} l(\alpha; D) = \sum_{(x,C) \in D} \left[ \log d_x - \frac{\sum_{y \in C} \log d_y \cdot \exp(\alpha \log d_y)}{\sum_{y \in C} \exp(\alpha \log d_y)} \right],
where the time-dependence of the features (degrees) has been suppressed to reduce clutter. Gradients for the other choice models in Section 2.2 are omitted but straightforward. One advantage of likelihood-based model fitting is that we can compute standard errors and confidence intervals of the parameters. In particular, the standard errors can be computed with \sqrt{H^{-1}} [74], where H is the Hessian matrix of second derivatives of the log-likelihood at the parameters.
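As a hedged illustration (not the authors' released implementation), the sketch below simulates choice data from the log-degree logit via the Gumbel-max trick and recovers α by minimizing the negative log-likelihood with its analytic gradient using SciPy's BFGS; the simulated candidate degrees and sample sizes are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik_and_grad(alpha, choices):
    """Negative log-likelihood of the log-degree logit and its analytic gradient."""
    a = np.atleast_1d(alpha)[0]
    nll, grad = 0.0, 0.0
    for d_x, cand in choices:
        logd = np.log(np.asarray(cand, dtype=float))
        w = np.exp(a * logd)
        nll -= a * np.log(d_x) - np.log(w.sum())
        grad -= np.log(d_x) - (logd * w).sum() / w.sum()
    return nll, np.array([grad])

# Simulate choices from the model itself (alpha = 1) with the Gumbel-max trick,
# which draws a choice exactly according to the logit probabilities.
rng = np.random.default_rng(0)
alpha_true, choices = 1.0, []
for _ in range(500):
    cand = rng.integers(1, 50, size=10)                       # candidate degrees
    pick = np.argmax(alpha_true * np.log(cand) + rng.gumbel(size=10))
    choices.append((cand[pick], cand))

res = minimize(neg_loglik_and_grad, x0=[0.5], args=(choices,), jac=True, method="BFGS")
print("estimated alpha:", res.x[0])
# res.hess_inv approximates the inverse Hessian; the square root of its diagonal
# gives approximate standard errors, in the spirit of the discussion above.
```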
Mixture models and expectation-maximization. For mixed conditional logit models, the log-likelihood is no longer convex in general, making optimization more difficult. To maximize the likelihood of mixed models we turn to expectation maximization (EM) techniques [18,73]. We briefly summarize the procedure described in Train's book [74, Chapter 14.3.2]. Assume that we have a model with M modes (i.e., mixture components), where every mode starts with initial parameter values θ_m (usually initialized at 1). Choices (x_k, C_k) ∈ D are again indexed with k, so that k ∈ {1, . . . , n} and n = |D|. The EM algorithm runs through the following steps:
(1) Initialize class probabilities uniformly with π_m = 1/M and initial class responsibilities γ_k^m = 1/M for each data point.
(2) For every data point k and every mode m, compute the class responsibility given by the relative individual likelihood:
\gamma_k^m = \frac{\pi_m \cdot L_m(\theta_m; (x_k, C_k))}{\sum_{\ell=1}^{M} \pi_\ell \cdot L_\ell(\theta_\ell; (x_k, C_k))}.
(3) For every mode m, update the total class probability with
\pi_m = \frac{1}{N} \sum_{k=1}^{N} \gamma_k^m.
(4) For every mode m, update the parameters θ_m using standard optimization for fitting a single model, weighing each choice set with its class responsibility γ_k^m.
(5) Repeat steps 2-4 until some convergence or stopping criterion is met.
The total likelihood of the parameters and class probabilities is:
l(\theta; D) = \sum_{m=1}^{M} l_m(\theta_m; \pi_m; D) = \sum_{m=1}^{M} \sum_{k=1}^{N} \log\big( L_m(\theta_m; (x_k, C_k)) \cdot \pi_m \big).
We monitor the convergence of the iterative procedure using the change in this total likelihood between iterations.
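A minimal sketch of these EM updates for a two-mode mixture (a degree-based mode and a uniform mode, with the degree exponent held fixed for brevity, so only steps 2 and 3 are iterated) is given below; the choice data are toy values and the simplifications are ours, not the paper's.

```python
import numpy as np

def mode_likelihoods(alpha, choices):
    """Per-choice likelihoods under (1) a degree^alpha logit and (2) uniform attachment."""
    L = np.zeros((len(choices), 2))
    for k, (d_j, cand) in enumerate(choices):
        w = np.asarray(cand, dtype=float) ** alpha
        L[k, 0] = d_j ** alpha / w.sum()       # preferential attachment mode
        L[k, 1] = 1.0 / len(cand)              # uniform mode
    return L

def em_two_modes(choices, alpha=1.0, n_iter=50):
    """EM for the class probabilities (pi) of a PA/uniform mixture, alpha held fixed."""
    pis = np.array([0.5, 0.5])
    for _ in range(n_iter):
        L = mode_likelihoods(alpha, choices)
        resp = pis * L
        resp /= resp.sum(axis=1, keepdims=True)    # E-step: class responsibilities
        pis = resp.mean(axis=0)                    # M-step: update class probabilities
    return pis

# `choices` as before: (chosen_degree, candidate_degrees) pairs.
choices = [(8, [1, 2, 8, 3]), (1, [1, 5, 9, 2]), (9, [9, 2, 1, 1])]
print(em_two_modes(choices))
```

In a full implementation, step 4 would also re-fit each mode's parameters with responsibility-weighted maximum likelihood inside the loop.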
Even though EM is theoretically an efficient estimator [82], there are cases when alternatives are appropriate. For example, if one has reasonable bounds or priors on the parameter values, then direct likelihood maximization could be used, and if the search space is low-dimensional, a grid search might be appropriate. Recent theoretical work has also developed algorithms for learning mixtures of two multinomial logit modes with theoretical guarantees assuming a separation between the modes [14].
Negative sampling. Every time an edge is formed by some node i, each node not yet connected to i is a candidate choice. For large sparse graphs, the full choice set of all nodes can become large and the gradients of the log-likelihood expensive to compute. To speed up this computation, s negative (non-chosen) examples can be sampled uniformly at random to create a (random) reduced dataset with smaller choice sets. For each choice (j, C), one forms a smaller random choice set out of the positive choice and the negative samples, C̃ ⊂ C with |C̃| = s + 1, and replaces the original choice data with (j, C̃). As long as the negative examples are sampled uniformly at random, parameter estimates on a dataset with negatively sampled choice sets are unbiased and consistent for the estimates on the full set [29,46,74]. Practically, there is a trade-off between feature computation and storage on the one hand, and the ability to estimate coefficients for rare features on the other.
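A possible implementation of this negative-sampling step is sketched below; the node identifiers and the choice of s are illustrative.

```python
import random

def negatively_sample(chosen, full_choice_set, s, rng=None):
    """Keep the chosen node plus s uniformly sampled non-chosen alternatives."""
    rng = rng or random.Random(0)
    negatives = [node for node in full_choice_set if node != chosen]
    return chosen, [chosen] + rng.sample(negatives, min(s, len(negatives)))

# Example: node 7 was chosen out of 1,000 candidate nodes; keep s = 24 negatives.
chosen, reduced_set = negatively_sample(7, list(range(1000)), s=24)
print(len(reduced_set))  # 25 alternatives: the chosen node plus 24 negative samples
```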
Typical likelihood surface. In Figure 1 we show the representative likelihood surface of a copy model to illustrate its properties. We generated a synthetic graph on n = 10,000 nodes according to the copy model with m = 4 edges per node and degree-attachment probability π_1 = 0.5. We fit a two-mode mixed logit model to this data with u^{(1)}_j = α log d_{j,t} and u^{(2)}_j = 1. We use s = 10 negative samples. There are two free parameters in this model: the degree exponent α and the mixture probability π_1. We plot the log-likelihood across a reasonable range of values to show that the surface is generally well behaved. We see that it is hard to distinguish between data generated under a copy model (α = 1) with probability π_1 = 0.5 from data generated from no-mixture (π_1 = 0) preferential attachment with α = 0.5, and there is a general trade-off between the exponent α and the mixture probability π_1.
Model comparison and the likelihood-ratio test. Another advantage of our discrete choice framework is that we can employ standard statistical methods for model selection. Specifically, when one model is a special case of another, their relative quality can be compared using the likelihood ratio test. In the case of the conditional logit, a model with additional features can be compared to one without them because the latter is a special case of the former with the coefficients of the additional features being set to 0. Or, in the case of the mixed logit, one can define a model with multiple modes and manually set some of their class probabilities to zero.
As a concrete example, suppose we wanted to know whether including the age of a node in a preferential attachment model results in a statistically significantly better model. To do so, we would first estimate the parameters θ_1 of the more complex model, u^{(1)}_j = θ_{1,1} log(d_j) + θ_{1,2} log(age_j). We would then estimate the parameters θ_0 of the simpler model, u^{(0)}_j = θ_{0,1} log(d_j). Let L_1 and L_0 be the likelihoods of the two models with fitted parameters θ_1 and θ_0. We can compute the likelihood ratio λ = L_0 / L_1. Under the null hypothesis of the simpler model, with some regularity conditions, −2 log λ is asymptotically distributed χ²_1 (more generally χ²_k, where k is the number of additional degrees of freedom in the more complex model).
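The nested-model comparison can be carried out with a few lines of SciPy, as in the sketch below; the two log-likelihood values are placeholders standing in for fitted models, not numbers from the paper.

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_simple, loglik_complex, extra_dof):
    """P-value for a nested-model comparison: -2 log(L0/L1) ~ chi^2_k under the null."""
    stat = -2.0 * (loglik_simple - loglik_complex)
    return stat, chi2.sf(stat, df=extra_dof)

# Placeholder log-likelihoods from fitting the simpler and the richer model.
stat, p_value = likelihood_ratio_test(-4520.3, -4481.7, extra_dof=1)
print(stat, p_value)
```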
APPLICATIONS
We now demonstrate how to use our conditional logit framework to analyze network formation processes. We first consider synthetic data and show how our tools can be used to better analyze preferential attachment mechanisms. We then analyze two empirical datasets that demonstrate how to integrate different structural features of the network or integrate node covariates. In both cases, our framework provides novel insights into the network formation processes. We provide code for processing data (converting edge lists to choice data) and for model fitting (with negative sampling), available here: https://github.com/janovergoor/choose2grow/.
Measuring preferential attachment
The question of whether and when preferential attachment is an important driver of network formation is widely debated [2,3,9,11,12,24,28,54,65,78]. Most prior research focuses on estimating the shape of the attachment kernel by comparing the degree of chosen nodes to the distribution of available degrees [30,54,62]. However, recent work by Pham et al. shows that previous measures are biased [58]. In particular, the bias comes from the assumption that the distribution of available nodes of varying degrees is constant throughout the formation process, but this distribution clearly changes as the network grows.
To estimate the exponent α of an attachment kernel, Pham et al. propose fitting something akin to a conditional logit with a separate coefficient for each degree, and then estimating α via a weighted least squares fit over the degree coefficients [58]. Compared to this method, fitting a log-degree logit directly is much simpler. In fact, it is the maximum likelihood estimator for α, and thus consistent and efficient.
Figure 2: Attachment kernel fits for a synthetic preferential attachment graph. The Newman measure computes the relative likelihood of selecting a node of that degree, as compared to the likelihood of selecting the lowest degree, but it is biased for higher degrees. The non-parametric logit is consistent but noisy for higher degrees.
To illustrate, we generate a graph with pure preferential attachment (n = 2,000, m = 1 edges per node, α = 1) and estimate the attachment kernel by the methods of Newman [54] and Pham et al. [58]. The maximum degree of this graph was 102, and the results of the different estimation procedures are shown in Figure 2. The non-parametric estimates are similar for lower degrees, but for higher degrees the Newman measure incorrectly drops, illustrating the bias that Pham et al. have previously documented. Fitting α directly using a log-degree conditional logit gives an estimate of α = 0.987. The Pham et al. least squares fit, α_LS = 1.012, is close to the MLE but may deviate considerably in more difficult instances.
Disentangling preferential attachment from triadic closure
Many models exhibit similar outcomes to preferential attachment [11,24,28,36,52,78], but there are few principled ways to rigorously test the relative validity of these models. In this section, we show how to use the discrete choice framework to estimate the relative importance of preferential attachment while accounting for other dynamics. To this end, we generate data according to a known generative process and fit various (possibly mis-specified) formation models. Our generative process is a hybrid between the copy model of preferential attachment (i.e., choose nodes proportional to degree) and the Jackson-Rogers local search model (i.e., connecting to friends-of-friends). The process, which we call the (r, p)-model, is parametrized by r ∈ (0, 1] and p ∈ (0, 1]. When a new edge is formed, with probability p it is formed uniformly at random and with probability 1 − p it is formed with linear preferential attachment (α = 1). Meanwhile, the choice set is determined by the second parameter r: with probability r, the choice set is all nodes not yet connected to i, while with probability 1 − r, the choice set is limited to available friends-of-friends of i. With r = 1 this model reduces to the copy model and with p = 1 it reduces to the simplified local search model; the (r, p)-model thus subsumes two popular models in a single, simple discrete choice framework. For a growth process on directed graphs, it is necessary that p > 0 and r > 0, otherwise new nodes will never be selected. With this general model, we investigate how estimating parameters of one of the more specific models goes awry when the true data generating process in fact comes from an instance of the more general model. For a range of values of p and r, we generated graphs using the following growth process. New nodes arrive, each creating m = 4 edges. For every edge, we sample the mode of the model (according to r and p) independently. If an edge is supposed to be a friend-of-friend edge, but no friends-of-friends are available (for example, for i's first edge), then the process reverts to uniformly random formation across the full node set. Sweeping through combinations of p and r parameter values, for each set of parameters we generated 10 undirected graphs with n = 20,000 nodes each.
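The following sketch grows a graph under our reading of the (r, p)-process described above (undirected, m edges per arriving node, seeded with a small clique); details such as the seed graph are assumptions made for illustration, not the exact simulation setup of the paper.

```python
import random
import networkx as nx

def rp_model(n, m, r, p, seed=0):
    """Grow an undirected graph under the (r, p)-model described in the text."""
    rng = random.Random(seed)
    G = nx.complete_graph(m + 1)                 # small clique to seed the process
    for i in range(m + 1, n):
        G.add_node(i)
        for _ in range(m):
            # Choice set: all unconnected nodes (prob. r) or friends-of-friends (prob. 1 - r).
            candidates = [v for v in G.nodes if v != i and not G.has_edge(i, v)]
            if rng.random() >= r:
                fofs = {w for u in G[i] for w in G[u]} - set(G[i]) - {i}
                if fofs:                          # revert to the full set if no FoFs exist
                    candidates = list(fofs)
            # Target choice: uniform (prob. p) or proportional to degree (prob. 1 - p).
            if rng.random() < p:
                j = rng.choice(candidates)
            else:
                weights = [G.degree(v) for v in candidates]
                j = rng.choices(candidates, weights=weights, k=1)[0]
            G.add_edge(i, j)
    return G

G = rp_model(n=2000, m=4, r=0.5, p=0.5)
print(G.number_of_nodes(), G.number_of_edges())
```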
Degree distributions. The local search and copy models both produce graphs with power-law degree distributions. Therefore, fitting a mis-specified model on a degree distribution can lead to misleading results. To illustrate, we fit a power-law distribution p(x) ∝ x^{-γ} to the degree distribution of graphs generated from (r, p)-models using maximum likelihood estimation [16], with estimates for γ in Figure 3. In theory, an undirected graph formed with the copy model process with probability parameter p leads to a degree distribution with power law exponent γ = (3 − p)/(1 − p) [8,52] (for directed graphs, γ = (2 − p)/(1 − p)). As p increases, the degree distribution looks more like a random graph without preferential attachment. However, as r goes down (increasing the relative role of friends-of-friends), the parameter estimate looks like the estimates for the copy model, even when p = 1.
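For reference, one standard maximum likelihood estimator for a power-law exponent (the continuous Hill/Clauset-style estimate, with x_min taken as given) is sketched below; this is a generic estimator and not necessarily the exact procedure of reference [16].

```python
import numpy as np

def powerlaw_mle(degrees, x_min=1):
    """Continuous-approximation MLE of gamma for p(x) ~ x^(-gamma), x >= x_min.

    For discrete data, a common refinement replaces x_min by (x_min - 0.5).
    """
    x = np.asarray([d for d in degrees if d >= x_min], dtype=float)
    return 1.0 + len(x) / np.log(x / x_min).sum()

degrees = [1, 1, 2, 1, 3, 5, 2, 1, 1, 8, 13, 2, 1, 1, 4]   # toy degree sequence
print(powerlaw_mle(degrees, x_min=1))
```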
To summarize, it is not recommended to estimate a formation model from an observed degree distribution. The parameter estimates are sensitive to small deviations in the generative process.
Figure 4: The log-likelihood of varying the class probabilities of the copy model (r = 1, p free) or the local search model (r free, p = 1) for two different synthetic graphs. In both cases the true model is the most likely. On the left we see a large difference in the log-likelihood between optima, while on the right we see a smaller difference. In both cases a likelihood ratio test is highly significant (P-values < 10^{-16}).
A more principled alternative is to compare the likelihoods of the two constituent models directly; we illustrate with two cases. As a first case, we generate graphs with r = 0.5 and p = 1, so half the edges are formed to friends-of-friends with no utility from degree. The likelihood under a local search model (r free, p = 1) as a mixed logit is maximized at r = 0.45, while for the copy model (r = 1, p free) it is maximized at p = 0.54. The former is a much better fit than the latter (P-value < 10^{-16}), and the copy model erroneously thinks that preferential attachment is driving 45% of the edges. As a second case, we look at a graph generated with r = 1 and p = 0.5, so half the edges are due to preferential attachment, and friend-of-friending plays no role. In this case, both models are correctly maximized at their relative values. Again, the correct model has a higher likelihood (P-value < 10^{-16}).
Choosing to follow on Flickr
We now apply our framework to examine a real-world network formation dataset capturing the growth of the Flickr social network. We find that incorporating a Friend-of-Friend feature beyond preferential attachment and link-reciprocation features substantially improves both likelihood and test accuracy and furthermore that the inclusion of this feature significantly reduces preference for degree-based attachment. However, omitting preferential attachment entirely leads to a worse model. We also find a preference for nodes with zero degree over low degree nodes. This hints that such nodes play a special role in the network formation process, even though they would be ignored in preferential attachment models.
Data. We use a scrape of the Flickr social network collected daily between October 2006 and May 2007 [50,51]. Users of Flickr can choose to follow other users and the "following" (but not the "followed by") connections are publicly accessible. The data was gathered using a breadth-first search crawl, which means that only the connected components reachable from the seed profiles are represented in the data. Since a full crawl was performed daily, the timing of new edges can be identified at the granularity of a day. The graph contains 3.2 million nodes and 33.1 million edges.
As described in the original papers, this data is consistent with both preferential attachment, as inferred from the in-degree distribution, and local search, as inferred from the over-representation of edges to nodes that are close to the linking node [50]. Fitting a power law to the distribution of in-degrees gives an estimated γ = 1.741, which would indicate super-linear preferential attachment. We can test the relative importance of triadic closure by fitting a Jackson-Rogers model using the degree distribution matching procedure described in [28]. This results in an estimated r = 0.252, estimating that three out of four edges are formed through triadic closure.
Discrete choice analysis. We fit a series of conditional logit models to further investigate the network formation process. We isolated a sample of 20,000 edge formation events occurring around the same date, to avoid time heterogeneity affecting the estimates. We fit several models, displayed in Table 3. Not-chosen alternatives are negatively sampled with s = 24. We log-transform in-degree (representing the number of followers), but in order to account for nodes with degree zero, we add a "has degree" feature for having a positive degree and use a modified version of log that returns 0 for input 0. In the first column, we fit a model using just these two degree-related features, and a reciprocity feature capturing whether the target node is already following the chooser. Reciprocity is a common phenomenon, with 60% of edges being followed back [50]. The estimated α (the coefficient for "log Followers") for this model is significantly larger than 1, again consistent with super-linear preferential attachment.
In the second model, we test the effect of the target node being a friend-of-friend of the choosing node. In the case of Flickr, this means that the choosing user already follows someone that follows the target user, which evidently is strongly correlated with following that user. However, combining these two features in a third model (column 3) leads to both estimated parameters dropping substantially. Most remarkable is the 40% drop in the estimate of α, which paints a very different picture about the role of degree.
In the fourth model, we measure network proximity as in the original paper, by counting the number of "hops" (path length) from i to the target before an edge was made. We integrate the hops as categorical variables to show the relative impact of each additional "hop". Being two hops away is equivalent to being a friend-of-friend, and thus has strongly positive coefficient. Every additional hop corresponds to a sharp decrease in choosing that node. Being five hops away is slightly worse than there not being a path at all. This could be an artifact of the way the data was gathered, so that new regions of the graph only get "discovered" when there is at least one link to them, or this could be due to path length not being an accurate measure of distance for newer nodes. Since the number of hops is co-linear with being a friend-of-friend, we can't test them both at the same time.
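To illustrate how such a multi-feature utility might be encoded, the sketch below builds a candidate feature matrix (has-degree indicator, zero-safe log of followers, reciprocity, friend-of-friend) and scores one synthetic choice set; both the data and the coefficient values are made up for illustration and are not the estimates reported in Table 3.

```python
import numpy as np

def log0(x):
    """log(x) for x > 0 and 0 for x == 0, matching the zero-safe transform in the text."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, np.log(np.where(x > 0, x, 1.0)), 0.0)

# One synthetic choice set; columns: has_degree, log0(followers), reciprocity, is_FoF.
followers   = np.array([0, 3, 120, 8, 1])
reciprocity = np.array([0, 1, 0, 0, 0], dtype=float)
is_fof      = np.array([0, 1, 1, 0, 0], dtype=float)
X = np.column_stack([(followers > 0).astype(float), log0(followers), reciprocity, is_fof])

theta = np.array([0.5, 0.7, 3.0, 3.5])   # made-up coefficients, not the paper's estimates
u = X @ theta
probs = np.exp(u - u.max()) / np.exp(u - u.max()).sum()
print(probs)   # predicted probability of following each candidate
```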
In Figure 5 we visually show the effect of different specifications on the estimate of α. The first model of the Flickr data looks like super-linear preferential attachment, while the role of degree in the other two is significantly reduced. However, fitting a non-parametric model shows that the estimated coefficients for individual degrees are remarkably linear, suggesting that the functional form of d_j^α is a good fit for this network. One important point is the role of zero-degree nodes. In most descriptions of preferential attachment, nodes with degree zero are not considered. However, in the Flickr data set, zero-degree nodes have a higher utility than positive low degree nodes, which could again be an artifact of the data collection process, or point to the special role of new nodes in the network. Either way, our framework allows one to find these kinds of patterns, and investigate them further.
Choosing to cite
We now turn to citation network data to show how a discrete choice framework facilitates the testing of network formation hypotheses. Previous analyses of citation networks have observed linear preferential attachment with respect to degree [62] and bias towards citing more recent work [62]. Here, we find consistent results that older papers are less likely to be cited but that accounting for age actually increases the importance of degree (i.e., after accounting for age, higher degree nodes are more likely to be cited).
Figure 5: The probability of being chosen by degree, as compared to a node with degree 1. We show the fits of parametric (lines) and non-parametric (points) conditional logit models of the Flickr and citation networks. The legend references model numbers in Table 3 and Table 4. The estimate for degree 0 is inserted for comparison. Dashed reference lines illustrate what exact linear preferential attachment (α = 1) would look like.
Data. We use the Microsoft Academic Graph dataset and focus on a representative subgraph of 459,000 "Climatology" papers. We focus on the subgraph of a single field to simplify the analysis since citations are predominantly within the same field of study (our analysis was similar on other subgraphs). We construct a graph out of this data by adding an edge each time a paper in our dataset cites another paper in our dataset. For our analysis of Climatology publications, 45% of edges are within the domain and citations to papers that are not labeled are excluded, leaving 3 million edges. We sample 10,000 citation events uniformly at random from papers published after 2010 and apply negative sampling (s = 24). This processing results in 10,000 choices with 25 alternatives in each choice set. For each possible choice, we compute four features: the number of citations at the time of citation, whether the paper shares authors with the citing paper, the age of the paper in years at the time of citation, and the maximum number of publications by any one of the authors at the time of publication. This last feature is a proxy for node fitness [11].
Discrete choice analysis. We fit conditional logit choice models relating these features to the likelihood of citation (Table 4). The first model (first column) is a simple log-degree model. We find that the estimated α (the coefficient for "log Citations") is substantially lower than one, consistent with sub-linear preferential attachment. Apart from the log-likelihood of the models, we also report the predictive accuracy (defined as the share of instances predicted correctly) on a holdout test set of 2,000 examples. Just relying on prior degree already gives an accuracy of 36%, which is high for a classification task with 25 classes. In model two (second column), we add a covariate for whether a paper shares an author with the citing paper. As expected, this has a strongly positive coefficient. For the third model we add a covariate for the age of the paper in log years (years is always at least one). Older papers are less likely to get cited (accounting for degree), but accounting for age increases the relative importance of degree significantly. This expanded model also increases the accuracy to 53%, indicating that these feature weights do capture substantially more predictive power. Finally, in model four we add the "max papers by authors" feature as a proxy for fitness. The coefficient is small but positive. Accounting for fitness slightly reduces the estimated relative importance of degree, but the α estimate is still close to 1. Adding this feature does not improve the log-likelihood or predictive accuracy; a better proxy for fitness may explain the data better. Looking back to the visual display of α for the citation models in Figure 5, the non-parametric coefficients are highly linear. In this data, zero-degree nodes are significantly less attractive than nodes with degree one. As with any regression, identifying causal effects from the model fit depends on the design of the study. The estimates we provide here, as is the case with most analyses of observational data, are descriptive and not meant to describe causal processes. The point is that discrete choice models provide a flexible framework to easily test and compare different hypotheses around network formation.
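The holdout accuracy reported above can be computed as in the following sketch, where the model's prediction for each test choice set is simply its highest-utility alternative; the data here are synthetic placeholders rather than the citation features.

```python
import numpy as np

def choice_accuracy(theta, X_list, chosen_idx):
    """Share of held-out choice sets where the max-utility alternative is the chosen one."""
    correct = 0
    for X, j in zip(X_list, chosen_idx):
        correct += int(np.argmax(X @ theta) == j)
    return correct / len(X_list)

# Toy holdout: 3 choice sets with 4 alternatives and 2 features each.
rng = np.random.default_rng(1)
X_list = [rng.normal(size=(4, 2)) for _ in range(3)]
chosen_idx = [int(np.argmax(X @ np.array([1.0, -0.5]))) for X in X_list]
print(choice_accuracy(np.array([1.0, -0.5]), X_list, chosen_idx))  # 1.0 by construction
```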
DISCUSSION
When modeling network formation, the majority of the literature analyzes networks that grow "externally," with new nodes arriving and choosing who to connect to, and this setting has also been our main focus here. External growth leads to convenient models that are relatively easy to analyze, with citation networks and patent networks as examples of empirical networks that follow this generative process reasonably closely. However, in many (especially social) networks, pairs of older nodes often form edges as well, edges that are "internal" to the existing set of nodes. An extreme example is the social networks of schools or classrooms, which have a fixed node population and "grow" purely through an internal growth process. A major advantage of modeling network formation as discrete choice is that it does not require any model of edge event initiation and simply conditions on the sequence of decisions to initiate, focusing the modeling on the choices made by the initiator. Discrete choice can therefore easily be used to model internal growth as well.
Another major advantage of discrete choice modeling is that it connects the analysis of large-scale network datasets to statistical methods (fitting generalized linear models) that are tremendously scalable. As we show in this work, additional techniques (e.g., negative sampling) makes it possible to efficiently scale the estimation process to very large network datasets.
Since the conditional logit model of discrete choice is a random utility model, the estimated parameters can be interpreted as the marginal utility of each feature. This allows one to question the functional form of features. For example, we show that preferential attachment is equivalent to the logarithmic utility of degree. Given that degree is commonly heavy-tailed, this is a natural functional form, but we point out that the conditional logit allows one to flexibly compare different specifications.
Our discrete choice perspective has implications for how network data is best collected and analyzed. It is useful to consider and record notions of directionality, even if edges can otherwise be considered to be undirected. With information about the choice set associated with each choice, we can see what each node j looked like at the time the choice was made. Datasets that record the exact time of all edge formation events, as opposed to lumping edge events at the granularity of days or years, makes it possible to further analyze the formation process in more detail.
There are a couple limitations to our proposed methodology. First, we cannot model purely undirected edges without some notion of direction. Second, even though the conditional logit and mixed logit models allow one to model similar mechanisms, the interpretations of their estimates are different. The estimates of a conditional logit are more akin to those of a linear regression model, where one estimates the expected change in an outcome from varying a covariate. A mixture model is a probabilistic combination of constituent modes, so the class probabilities indicate the relative importance to each mode, which makes it harder to compare the roles of individual features within or across modes. However, many traditional models of network formation are equivalent to mixture models, which motivated our consideration of them in this work.
By making foundational connections between network formation and discrete choice, we are hopeful that many further tools from discrete choice theory can be applied to the study of network formation. For example, there can be bias in network formation, e.g., men are more likely to cite themselves than women [34]. Our discrete choice framework can help study these cases more rigorously. For another example, discrete choice models of subset selection [5,20] could be applied to understand possible substitution and complementarity effects in network formation. And discrete choice interpretations of machine learning embeddings techniques [64] can likely help unpack the behavior of recent embedding-based network representation methods such as DeepWalk [57]. Networks fundamentally represent interactions between discrete entities, and it is therefore natural that methods for modeling and analyzing discrete choice should enable many contributions.
| 8,860 |
1811.05008
|
2963321544
|
We provide a framework for modeling social network formation through conditional multinomial logit models from discrete choice and random utility theory, in which each new edge is viewed as a “choice” made by a node to connect to another node, based on (generic) features of the other nodes available to make a connection. This perspective on network formation unifies existing models such as preferential attachment, triadic closure, and node fitness, which are all special cases, and thereby provides a flexible means for conceptualizing, estimating, and comparing models. The lens of discrete choice theory also provides several new tools for analyzing social network formation; for example, the significance of node features can be evaluated in a statistically rigorous manner, and mixtures of existing models can be estimated by adapting known expectation-maximization algorithms. We demonstrate the flexibility of our framework through examples that analyze a number of synthetic and real-world datasets. For example, we provide rigorous methods for estimating preferential attachment models and show how to separate the effects of preferential attachment and triadic closure. Non-parametric estimates of the importance of degree show a highly linear trend, and we expose the importance of looking carefully at nodes with degree zero. Examining the formation of a large citation graph, we find evidence for an increased role of degree when accounting for age.
|
Recently, @cite_56 proposed a principled method for estimating the attachment kernel. Their proposal corresponds precisely to maximum likelihood estimation of the attachment function as a conditional logit model with a parameter for every degree. They then find the corresponding @math using least squares, rather than also using maximum likelihood. The same authors also proposed an extension of their work that includes node fitness @cite_45 . Other recent work takes a maximum likelihood approach to estimating a mixture probability @math between uniform attachment and preferential attachment @cite_51 . With their focus on characterizing individual formation models, these prior works fall short of the full potential of discrete choice modeling; for example, they do not consider arbitrary combinations of features and formation mechanisms within a single estimated model.
|
{
"abstract": [
"",
"Our work introduces an approach for estimating the contribution of attachment mechanisms to the formation of growing networks. We present a generic model in which growth is driven by the continuous attachment of new nodes according to random and preferential linkage with a fixed probability. Past approaches apply likelihood analysis to estimate the probability of occurrence of each mechanism at a particular network instance, exploiting the concavity of the likelihood function at each point in time. However, the probability of connecting to existing nodes, and consequently the likelihood function itself, varies as networks grow. We establish conditions under which applying likelihood analysis guarantees the existence of a local maximum of the time-varying likelihood function and prove that a expectation maximization algorithm provides a convergent estimate. Furthermore, the in-degree distributions of the nodes in the growing networks is analytically characterized. Simulations show that, under the proposed conditions, expectation maximization and maximum-likelihood accurately estimate the actual contribution of each mechanism, and in-degree distributions converge to a stationary distributions.",
"Preferential attachment is a stochastic process that has been proposed to explain certain topological features characteristic of complex networks from diverse domains. The systematic investigation of preferential attachment is an important area of research in network science, not only for the theoretical matter of verifying whether this hypothesized process is operative in real-world networks, but also for the practical insights that follow from knowledge of its functional form. Here we describe a maximum likelihood based estimation method for the measurement of preferential attachment in temporal complex networks. We call the method PAFit, and implement it in an R package of the same name. PAFit constitutes an advance over previous methods primarily because we based it on a nonparametric statistical framework that enables attachment kernel estimation free of any assumptions about its functional form. We show this results in PAFit outperforming the popular methods of Jeong and Newman in Monte Carlo simulations. What is more, we found that the application of PAFit to a publically available Flickr social network dataset yielded clear evidence for a deviation of the attachment kernel from the popularly assumed log-linear form. Independent of our main work, we provide a correction to a consequential error in Newman’s original method which had evidently gone unnoticed since its publication over a decade ago."
],
"cite_N": [
"@cite_45",
"@cite_51",
"@cite_56"
],
"mid": [
"",
"2891925796",
"2204454685"
]
}
|
Choosing to Grow a Graph: Modeling Network Formation as Discrete Choice
|
Understanding how networks form and evolve is an essential component of understanding their structure, which in turn underlies the basis for understanding the broad range of processes that occur on networks. Models of social network formation can largely be decomposed into node formation and edge formation. In this work, we argue that edge formation can be effectively modeled as a choice made by an actor (or actors) in the network to instantiate a connection to another node. The diverse research on network formation has led to many models and mechanisms of edge formation, including preferential attachment [2], uniform attachment [12], triadic closure [31], random walks [65,78], homophily [55], copying edges from existing nodes [35,39], latent space structures [22,41,55], inherent node fitness [7,11], and combinations of all of these [28,40,43]. Here, we frame edge formation as a discrete choice process and derive a family of discrete choice models [47,74] that subsume a wide range of existing models in a unified framework and also naturally opens up a host of powerful extensions.
Discrete choice models are commonly employed in economics, social psychology, and statistics as a way to model how individuals make choices from a slate of discrete alternatives [1]. Typically, the alternatives have associated features, and statistical models of discrete choice make it possible to estimate the relative importance of such features. Such models have been used to answer questions such as how consumers choose goods [67], how people choose where they live [46], how students choose what college to attend [21], and how commuters choose between different modes of transportation [75]. Discrete choice analysis is also used to understand how choices vary depending on the context in which they are framed: in online commerce, this could be how web layouts lead to different purchasing priorities [26]; for choosing colleges, this could be incorporating the effect of the national economy. In this paper, we demonstrate how discrete choice models can similarly help us understand the factors driving social network evolution.
The starting point for the present work is the observation that edge formation events in social networks are naturally viewed as discrete choices. For simplicity, consider a directed graph where edges are formed one by one, where we can think of the formation of a directed edge (i, j) as i "choosing" to connect with j, where the set of alternatives available to i is the set of all other nodes. (While undirected graph models are common in social network analysis, the underlying formation procedure is almost always asymmetric. For example, the Facebook friendship graph is typically modeled as an undirected graph [77], but the friendships are proposed by one of the two nodes in an edge.) The key modeling question is easy to state: why did i choose j? This question has long been the informal subject of network formation modeling and at the same time the exact question that discrete choice models and analysis have been designed to answer. However, up to this point, network formation models have largely been decoupled from discrete choice theory.
In employing discrete choice analysis, we focus on the conditional multinomial logit model, commonly called the conditional logit model for short, which is a foundational workhorse of discrete choice modeling. The model belongs to the family of random utility models, where choices are interpretable as those of a rational actor selecting the alternative with the largest "utility" sampled from random variables that decompose into the inherent utility of the alternative and a noise term. With the conditional logit model, we can use existing optimization routines to estimate model parameters and existing statistical methods to asses the uncertainty of the estimates. Discrete choice models can also easily restrict the set of available alternatives, where it might not be reasonable to assume that the entire set of nodes is available for friendship. For example, sometimes only "friends of friends" are considered [24,28,40].
In this paper, we first show that many popular network formation mechanisms can be rewritten as conditional logit models, including preferential attachment, uniform attachment, node fitness, latent space models, and models of homophily. However, the real power of discrete choice models for social network analysis is the ability to combine different features (e.g., node degree and node age), as well as different mechanisms (e.g., triadic closure and preferential attachment) and estimate their relative roles. Social networks are enormously varied in their structure [27], but existing methods often do a poor job at modeling this diversity. Thus, beyond unifying the network formation and discrete choice literature, we also develop several new tools for social network analysis. For example, we show how to estimate models to distinguish the effects of preferential attachment and triadic closure. We demonstrate these tools by analyzing the formation of the Flickr social network and the formation of a citation network. We find on Flickr that accounting for triadic closure greatly reduces the estimated role of degree in choosing who to connect to, and that nodes with degree zero have a remarkably high utility. Our estimates of preferential attachment in the citation network are similar to those observed in prior studies. When accounting for the age of a paper, we find evidence for linear preferential attachment. However, for a fixed degree, we find that age is negatively correlated with the likelihood of a new citation (i.e., older papers are less likely to be cited).
The key assumption underlying our framework is that the available data actually captures edge formation events (either through edge timestamps or other sequential information). In contrast, many existing approaches to understanding network formation focus on observing only the structural properties of a network at a single point of observation, e.g., its degree distribution, and initiating a deductive process to try and understand how variations in edge formation would lead to different outcomes [2,7,28,43]. This approach leads to tidy analyses and easy-to-characterize asymptotic properties, but model selection in this context is strongly dependent on what properties are compared. Different underlying formation processes can lead to graphs with indistinguishable properties. For example, many different formation processes result in the same heavy-tailed degree distributions [52]. Thus, when "fitting" outcome measurements in this way, one has to know (or posit), e.g., the relative rates of node formation and edge formation. However, when temporal or sequential data is available [25,56], our framework overcomes these limitations by incorporating this structure.
Additional related work. There is a strong connection between our work and work on link prediction and missing data methods using network features to predict edges [15,42]. A network formation model implicitly makes claims about what edges are most likely to form next, and thus can be evaluated by the same metrics as link prediction algorithms [44]. We use predictive accuracy as a measure of goodness of fit, but our primary concern is interpretability of the model and estimates, which is one of the advantages of the conditional logit model.
In sociology, stochastic actor-oriented models (SAOMs) employ a similar logit choice [69,70]; however, these models are targeted towards data collected as a few snapshots rather than edge-by-edge formation. SAOMs also model the rate at which nodes form new relationships, whereas we condition on the node initiating the new edge, providing better estimates of model parameters. There are also sociological models such as relational event models [10] and dynamic network actor models [71] that use fine-grained temporal information, yet these also do not condition on the initiator node as we do. While these sociological models can incorporate notions of network formation (e.g., preferential attachment), our conditional logit framework actually cleanly subsumes a wide range of models as special cases.
Finally, estimating the parameters that drive edge formation is different from identifying the factors that could have lead to the observed graph. The latter question is often pursued with so-called exponential random graph models (ERGMs) [63,79,81]. However, these models do not consider individual edge events, are hard to estimate, and have known pathologies [13,66].
DISCRETE CHOICE AND EDGE FORMATION
We now develop network formation through the lens of discrete choice. Throughout this paper, we assume that the networks are directed. Again, while undirected graphs are common in social network analysis, the actual edge formation process often has directed initiation. In the common setting of "growing graphs, " nodes arrive one at a time and form edges when arriving in a network. In these cases, the newly arriving node is considered to be the node initiating the connection; such analysis is standard with, e.g., classical preferential attachment models [2]. When modeling the directed formation of an edge (i, j), two processes need to be distinguished, roughly corresponding to the questions "who is i?" (the chooser) and "who is j?" (the chosen). In this paper, we focus on understanding the latter, i.e., the formation of (i, j) as the selection of j conditional on knowing that i is ready to form an edge. Thus, our discrete choice models of edge formation can be readily estimated from data that implicitly or explicitly contains a record of initiating i nodes and used for subsequent analysis, as we show in Sections 3 and 4. Beyond the scope of this work, our model of "j conditional on i" can be paired with a model of "initiations by i" for a full generative model of network formation.
Edge formation as discrete choice
With the above formalisms in place, we now develop network formation from a discrete choice perspective. We begin by showing how several well-known models can be conveniently expressed as conditional logit models, with a summary given in Table 1. All models are designed to grow simple graphs (i.e., without multi-edges), and the choice set C excludes any nodes to which the chooser i is already connected. Every item is represented by its features that, importantly, can evolve over time. The features x_{j,t} of node j at time t are thus always time-indexed, but we often suppress the t to reduce notational clutter.
Preferential attachment. We start with the generalized Barabási-Albert model [2,8,36], also known as the generalized Price model [59], one of the most studied models in the network formation literature. It is typically stated as a growth model of a time-evolving graph G_t = (V_t, E_t), t = 1, 2, 3, . . ., and when a new node arrives it connects to m distinct existing nodes j with a probability proportional to a power of their degree d_{j,t} at time t,
P(j, V_t) = \frac{d_{j,t}^{\alpha}}{\sum_{\ell \in V_t} d_{\ell,t}^{\alpha}}. \qquad (2)
The exponent parameter α controls the relative importance of degree [36]. The case where α = 1 is called linear preferential attachment, and produces networks that can mimic a range of structural properties observed in empirical networks. If we represent each potential neighbor j with the time-indexed one-dimensional "feature vector" x_{j,t} = log d_{j,t} and employ a conditional logit model as in Equation (1), we obtain a utility of j for i at time t of u_{i,j,t} = θ log d_{j,t}. Here the choice model parameter θ plays the exact role of α, since e^{θ log d_{j,t}} = d_{j,t}^{θ}.

Table 1: Network formation models framed as utility functions for a conditional logit. Where appropriate, we use the traditional notation for the parameters of each process.

Process | u_{i,j} | C
Uniform attachment [12] | 1 | V
Preferential attachment [2,36] | α log d_j | V
Non-parametric PA [54,58,62] | θ_{d_j} | V
Triadic closure [61] | 1 | {j : FoF_{i,j}}
FoF attachment [31,65,78] | α log η_{i,j} | V
PA, FoFs only | α log d_j | {j : FoF_{i,j}}
Individual node fitness [11] | θ_j | V
PA with fitness [6,53] | α log d_j + θ_j | V
Latent space [22,41,55] | β · d(i, j) | V
Stochastic block model [33] | ω_{g_i, g_j} | V
Homophily [48] | h · 1{g_i = g_j} | V

Given a growing network G_t, we can construct a choice dataset D from this network by extracting the node j_t, node sets V_t, and degree sequence (d_{1,t}, . . . , d_{|V_t|,t}) at each time-step. The preferential attachment model has only one parameter, θ = α. The log-likelihood for that parameter given a dataset is then:
l(\alpha; D) = \sum_{(j,C) \in D} \log \frac{\exp(\alpha \log d_j)}{\sum_{\ell \in C} \exp(\alpha \log d_\ell)} = \sum_{(j,C) \in D} \left[ \alpha \log d_j - \log \sum_{\ell \in C} \exp(\alpha \log d_\ell) \right].
We've suppressed the time-index t from the features log d ℓ to reduce clutter, but emphasize that d ℓ is the degree at the time of the choice.
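As a sketch of this data preparation step (under the simplifying assumptions that the relevant feature is the in-degree of the chosen node and that the candidate set is all other nodes not yet connected to the chooser), the code below converts a chronological edge list into choice records; the edge list itself is a toy placeholder.

```python
from collections import defaultdict

def edges_to_choice_data(edges, nodes):
    """Convert a chronological directed edge list into discrete-choice records.

    Each record is (chooser i, chosen j, {candidate: degree at choice time}),
    where candidates are all nodes i is not already connected to.
    """
    degree = defaultdict(int)
    connected = defaultdict(set)
    records = []
    for i, j in edges:                       # edges in the order they formed
        candidates = {v: degree[v] for v in nodes if v != i and v not in connected[i]}
        records.append((i, j, candidates))
        degree[j] += 1                       # in-degree of the chosen node grows
        connected[i].add(j)
    return records

edges = [(1, 0), (2, 0), (2, 1), (3, 2), (3, 0)]   # toy chronological edge list
for rec in edges_to_choice_data(edges, nodes=range(4)):
    print(rec)
```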
Non-parametric preferential attachment. The above model assumes an attachment kernel of a particular parametric form. From a discrete choice perspective, one can also estimate the role of degree in edge formation non-parametrically by estimating a coefficient θ k for each degree k = 0, . . . , n − 1 individually. This approach has the added benefit of being able to assign positive probability to choosing nodes with degree zero. Under this model, the log-likelihood of the parameters θ = (θ 0 , ..., θ n−1 ) given the dataset is:
l(\theta; D) = \sum_{(j,C) \in D} \log \frac{\exp \theta_{d_j}}{\sum_{\ell \in C} \exp \theta_{d_\ell}} = \sum_{(j,C) \in D} \left[ \theta_{d_j} - \log \sum_{\ell \in C} \exp \theta_{d_\ell} \right].
Again we've suppressed time-indexing to simplify the presentation. Pham et al. [58] previously described a version of the above likelihood as a means of measuring the attachment kernel using maximum likelihood, albeit without making the connection to discrete choice.
Uniform attachment. A simple edge formation model is to sample a new neighbor uniformly at random from all nodes [12]. There are no parameters in this model, but we can still write down the likelihood of the model given a dataset, which will be useful when we later combine this model with others within a mixture model:
l(D) = \sum_{(j,C) \in D} \log \frac{\exp(1)}{\sum_{\ell \in C} \exp(1)} = \sum_{(j,C) \in D} -\log |C|.
Triadic closure. A variant of uniform attachment is for i to attach to new neighbors uniformly at random from the set of their friends-of-friends, as opposed to the set of all nodes. This process effectively models triadic closure [61]. It has the same simple functional form as the uniform model, but now the choice set C varies with each choice: it is restricted to the friends of friends of node i (the chooser) to which i is not already connected. This change in choice set can also be achieved by assuming the utility of j to i at time t is u_{i,j,t} = log(1{FoF_{i,j,t}}), where 1{FoF_{i,j,t}} is a boolean indicating whether i and j are friends of friends at time t, and then letting the choice set revert to the full node set. An additional model that naturally combines the ideas of preferential attachment and befriending friends-of-friends takes the number of friends in common between i and j as a feature. We could define this feature as η_{i,j,t} = |{k : e_{i,k,t} ∧ e_{k,j,t}}|, where e_{i,k,t} indicates whether there is an edge between i and k at time t. The corresponding utility would be u_{i,j,t} = α log η_{i,j,t}. This model is similar (but not equivalent) to random walk-based formation models [31,65,78], which emphasize formation within a local neighborhood.
Node fitness. Another line of formation models subsumed by the discrete choice framework involves node fitness. In these models, nodes choose to connect to others based on some intrinsic latent fitness score. Certain distributions of fitness values lead to a scale-free degree distribution [11], providing an alternative explanation to preferential attachment for modeling such degree distributions. We can express the node fitness model by a conditional logit model with a separate fixed effect θ_j for each node j (so the feature of a node is an indicator vector of its identity). The likelihood of the fitness parameters θ given the data is then:
l(\theta; D) = \sum_{(j,C) \in D} \log \frac{\exp \theta_j}{\sum_{\ell \in C} \exp \theta_\ell} = \sum_{(j,C) \in D} \left[ \theta_j - \log \sum_{\ell \in C} \exp \theta_\ell \right].
This formation model is equivalent to the classic Bradley-Terry-Luce model of discrete choice for estimating the quality of alternatives [45]. Alternatively, one could replace the individual fixed effects with surrogate features of node fitness, such as an auxiliary measure of gregariousness (in the case of social networks) or the impact factor of a paper's journal (in the case of citation networks).
A related model proposes selection probabilities proportional to the product of node fitness and degree [6,53]. This model can be written as a conditional logit model with u i, j,t = α log d j,t + θ j .
Latent space models. Another class of network formation models postulates the existence of a latent space that drives connections between nodes. Examples of latent spaces include Euclidean space [22], hyperbolic space [37], a tree [41], a circle [55], or a set of discrete classes [23]. While the conditional logit model in the form that we describe it does not facilitate finding the best-fitting latent space assignment to explain the data, it can be used to estimate the relative importance of a known latent space given a distance function d(i, j). As one example from the family of latent space models, in the community-guided attachment (CGA) model [41] all nodes have a distance derived from the height h(i, j) of common parents in a latent tree structure situating all nodes i and j. Given this tree as known, a node connects to another proportionally to c −h(i, j) for some scalar c > 0. As a conditional logit model, the corresponding utility function is u i, j = −h(i, j) · log(c). The parameter vector θ = log c can be retrieved by fitting a conditional logit with a known h(i, j) as the only variable and transforming the estimated parameter with c = exp(θ ). Assuming that the latent space representation is given is a strong assumption, and fitting such a model while estimating the latent space representation (e.g. as done by Hoff et al. [22] in Euclidean space) is much more difficult.
Additional models. Conditional logit models are very flexible and can deal with multiple features and interactions between them. Any number of features can be added, including node covariates and structural features like a node's clustering coefficient [3] or age [12,40]. Conditional logit models can also be used to investigate the role of homophily [48] in edge formation, by adding a binary feature indicating whether nodes i and j are part of the same class. Table 1 summarizes how several network formation models fit within the discrete choice framework via their corresponding utility functions and choice sets. A major advantage of this framework is that different features can easily be combined into a single model and jointly estimated. Or, when suitable, one can employ a mixture of conditional logit models, as we show in the next section.
Combining modes using Mixed Logit
So far we have written a range of existing and new edge formation models as conditional logit models, a specific type of discrete choice model. Several existing edge formation models that do not fit neatly into the conditional logit framework, however, align exactly with the use of mixture models in discrete choice modeling. In this subsection we therefore develop mixed conditional logit formulations of several additional models.
A common proposal to make network formation models more flexible is to augment an existing model by allowing nodes to pick neighbors uniformly at random with some probability 1 − p, while running the ordinary model with probability p [17,35,39,43]. This augmentation increases flexibility because it enables the model to explain edge events that may otherwise have probability zero. Within discrete choice, this approach is precisely a mixed logit model where one of the mixture modes is uniform attachment.
While the conditional logit estimates a single parameter vector representing average preferences as shared by all agents, the mixed logit model is often used to account for differences in preferences across various types of agents. In its most general form, the mixed logit is expressed using a probability distribution f over different instantiations of the parameter vector θ :
P_i(j, C) = \int \frac{\exp(\theta^T x_j)}{\sum_{\ell \in C} \exp(\theta^T x_\ell)} f(\theta) \, d\theta.
Table 2:
Process | Modes
Copy model [35] | Uniform, PA
Node types [38] | New node, PA, none
Local search [24,28] | Uniform, Uniform FoF
(r, p)-model | Uniform, PA, Uniform FoF, PA FoF
In this work, we will only consider discrete mixtures of M logits, also called a latent class model [32]:
P_i(j, C) = \sum_{m=1}^{M} \pi_m \frac{\exp(\theta_m^T x_j)}{\sum_{\ell \in C} \exp(\theta_m^T x_\ell)},
where \sum_{m=1}^{M} \pi_m = 1 and the weights \pi_1, \ldots, \pi_M model the relative prevalence of each mode.
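A direct translation of this latent class probability into code might look as follows (an illustrative sketch; the feature layout and names are ours):

```python
import numpy as np

def latent_class_prob(pis, thetas, x_chosen, X_candidates):
    """P_i(j, C) under a discrete mixture of M conditional logits.

    pis: length-M array of class probabilities (summing to one).
    thetas: M x d array, one parameter vector per mode.
    x_chosen: length-d feature vector of the chosen node j.
    X_candidates: n x d feature matrix of all nodes in the choice set C.
    """
    prob = 0.0
    for pi_m, theta_m in zip(pis, thetas):
        utilities = X_candidates @ theta_m
        prob += pi_m * np.exp(x_chosen @ theta_m) / np.exp(utilities).sum()
    return prob

# Two modes: preferential attachment (theta = [1.0]) and uniform (theta = [0.0]),
# with a single feature x = log-degree.
X = np.log(np.array([[1.0], [2.0], [4.0]]))
print(latent_class_prob(np.array([0.5, 0.5]),
                        np.array([[1.0], [0.0]]),
                        X[2], X))   # 0.5 * 4/7 + 0.5 * 1/3, roughly 0.452
```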
Copy model. The copy model is a classic formation process that can be written as a mixed logit with two modes. In the first mode, new edges connect proportional to degree with probability p, while in the second mode they connect uniformly at random with probability 1 − p [17,43]. As a conditional logit model, the utilities of the two modes are u^{(1)}_x = log d_x and u^{(2)}_x = 1, respectively, and the class probabilities are (π_1, π_2) = (p, 1 − p). (This is a special case of the original copy model where d edges are copied from a sampled vertex [39]; the model here is when d = 1, which is often used for analysis [19].) The connection between relaxations of preferential attachment and mixture models was also recently observed by Medina et al. [49].
Local search model. Another example of a model with multiple modes is the Jackson-Rogers model of edge formation as a mixture of uniform attachment and triadic closure [24,28]. The original model is based on a relative rate r* between edges forming at random and edges formed locally. It also has edges form based on respective acceptance probabilities. We describe a simplified version of this model, which we'll call the local search model, where edges connect to nodes selected uniformly at random from the full node set with probability r and uniformly at random from the set of friends-of-friends with probability 1 − r.¹ We can represent this simplified process with a two-mode mixed logit model. In this case the mixture parameters are (π_1, π_2) = (r, 1 − r) and both modes have the same utility function u_x = 1, but their choice sets differ so that the second mode only considers friends-of-friends.² Table 2 overviews the mixture model formulations described above, as well as a new model, the (r, p)-model, which we use in Section 4.2 to analyze preferential attachment effects.
¹ Since the r* parameter in the original presentation is actually the rate of uniform attachment, we can relate it to our r through r = r*/(1 + r*). For example, if the rate between random and friend-of-friend edges is one to one (r* = 1), then r = 0.5.
² A model with a restricted choice set, for example to only friends-of-friends, gives a likelihood of zero to choices outside the choice set.
ESTIMATION AND INFERENCE
To learn a discrete choice model of network formation from data, we assume that we have access to a sequence of directed edges, in chronological order. This sequence of edges needs to be recast as choice data in order to fit a choice model. For every formed edge (i, j), we create a data point consisting of the choice j, the choice set of candidate nodes at the time, and the features of each candidate node at the time.
Given a data set and a conditional logit model, one can write out the log-likelihood, as shown in Section 2.2. For any conditional logit model with a linear utility u i, j = θ T x j , the likelihood function is convex with respect to the variables θ and can be efficiently maximized using standard gradient-based optimization (e.g., BFGS). The functional form of the logit leads to straightforward gradients. For example, for preferential attachment, the gradient is
\frac{\partial}{\partial \alpha} l(\alpha; D) = \sum_{(x,C) \in D} \Bigg[ \log d_x - \frac{\sum_{y \in C} \log d_y \cdot \exp(\alpha \log d_y)}{\sum_{y \in C} \exp(\alpha \log d_y)} \Bigg],
where the time-dependence of the features (degrees) have been suppressed to reduce clutter. Gradients for the other choice models in Section 2.2 are omitted but straightforward. One advantage of likelihood-based model fitting is that we can compute standard errors and confidence intervals of the parameters. In particular, the standard errors can be computed with
\sqrt{H^{-1}} [74], where H is the Hessian matrix of second derivatives of the log-likelihood at the parameters.
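As a minimal sketch of this estimation procedure, the one-parameter log-degree logit can be fit by direct maximum likelihood; here we simulate choices under linear preferential attachment and recover α, reading a rough standard error off the BFGS approximation to the inverse Hessian (the synthetic data and names are our own):

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(alpha, choice_data):
    """Negative log-likelihood of the one-parameter log-degree conditional logit."""
    nll = 0.0
    for logd_chosen, logd_candidates in choice_data:
        utilities = alpha[0] * logd_candidates
        nll -= alpha[0] * logd_chosen - np.logaddexp.reduce(utilities)
    return nll

# Simulate choices under linear preferential attachment (alpha = 1) within random choice sets.
rng = np.random.default_rng(0)
choice_data = []
for _ in range(500):
    degs = rng.integers(1, 50, size=20)
    j = rng.choice(20, p=degs / degs.sum())
    choice_data.append((np.log(degs[j]), np.log(degs)))

res = minimize(neg_loglik, x0=[0.0], args=(choice_data,), method="BFGS")
alpha_hat = res.x[0]
se = np.sqrt(res.hess_inv[0, 0])   # rough standard error from the BFGS inverse-Hessian approximation
print(f"alpha_hat = {alpha_hat:.3f} +/- {1.96 * se:.3f}")
```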
Mixture models and expectation-maximization. For mixed conditional logit models, the log-likelihood is no longer convex in general, making optimization more difficult. To maximize the likelihood of mixed models we turn to expectation maximization (EM) techniques [18,73]. We briefly summarize the procedure described in Train's book [74, Chapter 14.3.2]. Assume that we have a model with M modes (i.e., mixture components), where every mode starts with initial parameter values θ_m (usually initialized at 1). Choices (x_k, C_k) ∈ D are again indexed with k, so that k ∈ {1, . . . , N} and N = |D|. The EM algorithm runs through the following steps:
(1) Initialize the class probabilities uniformly with π_m = 1/M and the class responsibilities γ_k^m = 1/M for each data point.
(2) For every data point k and every mode m, compute the class responsibility given by the relative individual likelihood:
γ_k^m = \frac{\pi_m \cdot L_m(\theta_m; (x_k, C_k))}{\sum_{\ell=1}^{M} \pi_\ell \cdot L_\ell(\theta_\ell; (x_k, C_k))}.
(3) For every mode m, update the total class probability with π_m = \frac{1}{N} \sum_{k=1}^{N} γ_k^m.
(4) For every mode m, update the parameters θ_m using standard optimization for fitting a single model, weighing each choice set with its class responsibility γ_k^m.
(5) Repeat steps 2-4 until some convergence or stopping criterion is met.
The total likelihood of the parameters and class probabilities is:
l(\theta; D) = \sum_{m=1}^{M} l_m(\theta_m; \pi_m; D) = \sum_{m=1}^{M} \sum_{k=1}^{N} \log \big[ L_m(\theta_m; (x_k, C_k)) \cdot \pi_m \big]
We monitor the convergence of the iterative procedure using the change in this total likelihood between iterations.
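The following sketch illustrates the E- and M-steps for the two-mode copy model, holding the degree exponent α fixed for brevity (in a full implementation the M-step would also re-fit α on responsibility-weighted choices); the data layout follows the earlier estimation sketch:

```python
import numpy as np

def mode_likelihoods(alpha, logd_chosen, logd_cands):
    """Per-choice likelihoods under (1) a log-degree logit and (2) uniform attachment."""
    l_pa = np.exp(alpha * logd_chosen) / np.exp(alpha * logd_cands).sum()
    l_unif = 1.0 / len(logd_cands)
    return np.array([l_pa, l_unif])

def em_copy_model(choice_data, alpha=1.0, n_iter=50):
    """EM for the class probabilities of the two-mode copy model (alpha held fixed for brevity)."""
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each mode for each observed choice.
        gammas = np.array([pi * mode_likelihoods(alpha, c, cs) for c, cs in choice_data])
        gammas /= gammas.sum(axis=1, keepdims=True)
        # M-step: update class probabilities (in general, also re-fit alpha on
        # choices weighted by their responsibilities).
        pi = gammas.mean(axis=0)
    return pi

# e.g. pi_hat = em_copy_model(choice_data)   # reusing `choice_data` from the sketch above
```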
Even though EM is theoretically an efficient estimator [82], there are cases when alternatives are appropriate. For example, if one has reasonable bounds or priors on the parameter values, then direct likelihood maximization could be used, and if the search space is low-dimensional, a grid search might be appropriate. Recent theoretical work has also developed algorithms for learning mixtures of two multinomial logit modes with theoretical guarantees assuming a separation between the modes [14].
Negative sampling. Every time an edge is formed by some node i, each node not yet connected to i is a candidate choice. For large sparse graphs, the full choice set of all nodes can become large and the gradients of the log-likelihood expensive to compute. To speed up this computation, s negative (non-chosen) examples can be sampled uniformly at random to create a (random) reduced dataset with smaller choice sets. For each choice (j, C), one forms a smaller random choice set C̃ ⊂ C out of the positive choice and the negative samples, with |C̃| = s + 1, and replaces the original choice data with (j, C̃). As long as the negative examples are sampled uniformly at random, parameter estimates on a dataset with negatively sampled choice sets are unbiased and consistent for the estimates on the full set [29,46,74]. Practically, there is a trade-off between feature computation and storage on the one hand, and the ability to estimate coefficients for rare features on the other.
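A sketch of the sampling step, assuming choice sets are stored as lists of candidate node ids:

```python
import numpy as np

def negative_sample(chosen, full_choice_set, s, rng):
    """Keep the chosen node plus s negative examples drawn uniformly at random."""
    negatives = [node for node in full_choice_set if node != chosen]
    sampled = rng.choice(negatives, size=s, replace=False)
    return [chosen] + list(sampled)

rng = np.random.default_rng(1)
print(negative_sample(7, list(range(1000)), s=24, rng=rng))  # 25 candidates in total
```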
Typical likelihood surface. In Figure 1 we show the representative likelihood surface of a copy model to illustrate its properties. We generated a synthetic graph on n = 10,000 nodes according to the copy model with m = 4 edges per node and degree-attachment probability π_1 = 0.5. We fit a two-mode mixed logit model to this data with u^{(1)}_j = α log d_{j,t} and u^{(2)}_j = 1. We use s = 10 negative samples. There are two free parameters in this model: the degree exponent α and the mixture probability π_1. We plot the log-likelihood across a reasonable range of values to show that the surface is generally well behaved. We see that it is hard to distinguish data generated under a copy model (α = 1) with probability π_1 = 0.5 from data generated from no-mixture (π_1 = 0) preferential attachment with α = 0.5, and there is a general trade-off between the exponent α and the mixture probability π_1.
Model comparison and the likelihood-ratio test. Another advantage of our discrete choice framework is that we can employ standard statistical methods for model selection. Specifically, when one model is a special case of another, their relative quality can be compared using the likelihood ratio test. In the case of the conditional logit, a model with additional features can be compared to one without them because the latter is a special case of the former with the coefficients of the additional features being set to 0. Or, in the case of the mixed logit, one can define a model with multiple modes and manually set some of their class probabilities to zero.
As a concrete example, suppose we wanted to know whether including the age of a node in a preferential attachment model results in a statistically significantly better model. To do so, we would first estimate the parameters θ_1 of the more complex model, u^{(1)}_j = θ_{1,1} log(d_j) + θ_{1,2} log(age). We would then estimate the parameters θ_0 of the simpler model, u^{(0)}_j = θ_{0,1} log(d_j). Let L_1 and L_0 be the likelihoods of the two models with parameters θ̂_1 and θ̂_0. We can compute the likelihood ratio λ = L_0/L_1. Under the null hypothesis of the simpler model, with some regularity conditions, −2 log λ is asymptotically distributed χ²_1 (χ²_k, where k is the number of additional degrees of freedom in the more complex model).
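In code, the test reduces to a few lines (the log-likelihood values below are hypothetical, for illustration only):

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_simple, loglik_complex, extra_df):
    """LR statistic -2*log(L0/L1) and its chi-squared p-value."""
    stat = -2.0 * (loglik_simple - loglik_complex)
    return stat, chi2.sf(stat, df=extra_df)

# Hypothetical log-likelihoods for a degree-only model vs. degree + age (one extra parameter):
stat, p = likelihood_ratio_test(-5421.3, -5390.8, extra_df=1)
print(f"LR statistic {stat:.1f}, p-value {p:.2g}")
```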
APPLICATIONS
We now demonstrate how to use our conditional logit framework to analyze network formation processes. We first consider synthetic data and show how our tools can be used to better analyze preferential attachment mechanisms. We then analyze two empirical datasets that demonstrate how to integrate different structural features of the network or integrate node covariates. In both cases, our framework provides novel insights into the network formation processes. We provide code for processing data (converting edge lists to choice data) and for model fitting (with negative sampling), available here: https://github.com/janovergoor/choose2grow/.
Measuring preferential attachment
The question of whether and when preferential attachment is an important driver of network formation is widely debated [2,3,9,11,12,24,28,54,54,65,78]. Most prior research focuses on estimating the shape of the attachment kernel by comparing the degree of chosen nodes to the distribution of available degrees [30,54,62]. However, recent work by Pham et al. shows that previous measures are biased [58]. In particular, the bias comes from the assumption that the distribution of available nodes of varying degrees is constant throughout the formation process, but this distribution clearly changes as the network grows.
To estimate the exponent α of an attachment kernel, Pham et al. propose fitting something akin to a conditional logit with a separate coefficient for each degree, and then estimating α via a weighted least squares fit over the degree coefficients [58]. Compared to this method, fitting a log-degree logit directly is much simpler. In fact, it is the maximum likelihood estimator for α, and thus consistent and efficient.
Figure 2: Attachment kernel fits for a synthetic preferential attachment graph. The Newman measure computes the relative likelihood of selecting a node of that degree, as compared to the likelihood of selecting the lowest degree, but it is biased for higher degrees. The non-parametric logit is consistent but noisy for higher degrees.
To illustrate, we generate a graph with pure preferential attachment (n = 2,000, m = 1 edges per node, α = 1) and estimate the attachment kernel by the methods of Newman [54] and Pham et al. [58]. The maximum degree of this graph was 102, and the results of the different estimation procedures are shown in Figure 2. The non-parametric estimates are similar for lower degrees, but for higher degrees the Newman measure incorrectly drops, illustrating the bias that Pham et al. have previously documented. Fitting α directly using a log-degree conditional logit gives an estimate of α̂ = 0.987. The Pham et al. least squares fit, α̂_LS = 1.012, is close to the MLE but may deviate considerably in more difficult instances.
Disentangling preferential attachment from triadic closure
Many models exhibit similar outcomes to preferential attachment [11,24,28,36,52,78], but there are few principled ways to rigorously test the relative validity of these models. In this section, we show how to use the discrete choice framework to estimate the relative importance of preferential attachment while accounting for other dynamics. To this end, we generate data according to a known generative process and fit various (possibly mis-specified) formation models. Our generative process is a hybrid between the copy model of preferential attachment (i.e., choose nodes proportional to degree) and the Jackson-Rogers local search model (i.e., connecting to friends-of-friends). The process, which we call the (r, p)-model, is parametrized by r ∈ (0, 1] and p ∈ (0, 1]. When a new edge is formed, with probability p it is formed uniformly at random and with probability 1 − p it is formed with linear preferential attachment (α = 1). Meanwhile, the choice set is determined by the second parameter r: with probability r, the choice set is all nodes not yet connected to i, while with probability 1 − r, the choice set is limited to available friends-of-friends of i. With r = 1 this model reduces to the copy model and with p = 1 it reduces to the simplified local search model; the (r, p)-model thus subsumes two popular models in a single, simple discrete choice framework. For a growth process on directed graphs, it is necessary that p > 0 and r > 0, otherwise new nodes will never be selected. With this general model, we investigate how estimating parameters of one of the more specific models goes awry when the true data generating process in fact comes from an instance of the more general model. For a range of values of p and r, we generated graphs using the following growth process. New nodes arrive, each creating m = 4 edges. For every edge, we sample the mode of the model (according to r and p) independently. If an edge is supposed to be a friend-of-friend edge, but no friends-of-friends are available (for example, i's first edge), then the process reverts to uniformly random formation across the full node set. Sweeping through combinations of p and r parameter values, for each set of parameters we generated 10 undirected graphs with n = 20,000 nodes each.
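A compact sketch of this growth process as we have described it (the seed-edge initialization and variable names are our own):

```python
import numpy as np

def fof(adj, i):
    """Friends-of-friends of node i that i is not already connected to."""
    out = set()
    for friend in adj[i]:
        out |= adj[friend]
    return out - adj[i] - {i}

def grow_rp_graph(n, m, r, p, seed=0):
    """Grow an undirected graph under the (r, p)-model; returns neighbor sets."""
    rng = np.random.default_rng(seed)
    adj = {0: {1}, 1: {0}}                      # seed edge between the first two nodes
    for i in range(2, n):
        adj[i] = set()
        for _ in range(min(m, i)):
            local = rng.random() >= r           # with prob 1-r, restrict to friends-of-friends
            pool = list(fof(adj, i)) if local else []
            if pool and rng.random() >= p:      # linear preferential attachment (alpha = 1)
                w = np.array([len(adj[j]) for j in pool], dtype=float)
            elif pool:                          # uniform over friends-of-friends
                w = np.ones(len(pool))
            else:                               # global mode, or no FoFs available (revert to uniform)
                pool = [j for j in range(i) if j not in adj[i]]
                w = (np.ones(len(pool)) if local or rng.random() < p
                     else np.array([len(adj[j]) for j in pool], dtype=float))
            target = int(rng.choice(pool, p=w / w.sum()))
            adj[i].add(target)
            adj[target].add(i)
    return adj

# For example, a pure local-search graph: half the edges target friends-of-friends, no degree effect.
adj = grow_rp_graph(n=2000, m=4, r=0.5, p=1.0, seed=1)
```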
Degree distributions. The local search and copy models both produce graphs with power-law degree distributions. Therefore, fitting a mis-specified model on a degree distribution can lead to misleading results. To illustrate, we fit a power-law distribution p(x) ∝ x −γ to the degree distribution of graphs generated from (r , p)-models using maximum likelihood estimation [16], with estimates for γ in Figure 3. In theory, an undirected graph formed with the copy model process with probability parameter p leads to a degree distribution with power law exponent γ = (3 − p)/(1 − p) [8,52] (for directed graphs, γ = (2 − p)/(1 − p)). As p increases, the degree distribution looks more like a random graph without preferential attachment. However, as r goes down (increasing the relative role of friend-of-friends), the parameter estimate looks like the estimates for the copy model, even when p = 1.
To summarize, it is not recommended to estimate a formation model from an observed degree distribution. The parameter estimates are sensitive to small deviations in the generative process.
Figure 4: The log-likelihood of varying the class probabilities of the copy model (r = 1, p free) or the local search model (r free, p = 1) for two different synthetic graphs. In both cases the true model is the most likely. On the left we see a large difference in the log-likelihood between optima, while on the right we see a smaller difference. In both cases a likelihood ratio test is highly significant (P-values < 10^{-16}).
Figure 4 compares the likelihoods of the copy model and the local search model fit to graphs generated by the (r, p)-model in two illustrative cases. As a first case, we generate graphs with r = 0.5 and p = 1, so half the edges are formed to friends-of-friends with no utility from degree. The likelihood under a local search model (r free, p = 1) as a mixed logit is maximized at r = 0.45, while for the copy model (r = 1, p free) it is maximized at p = 0.54. The former is a much better fit than the latter (P-value < 10^{-16}), and the copy model erroneously thinks that preferential attachment is driving 45% of the edges. As a second case, we look at a graph generated with r = 1 and p = 0.5, so half the edges are due to preferential attachment, and friend-of-friending plays no role. In this case, both models are correctly maximized at their relative values. Again, the correct model has a higher likelihood (P-value < 10^{-16}).
Choosing to follow on Flickr
We now apply our framework to examine a real-world network formation dataset capturing the growth of the Flickr social network. We find that incorporating a Friend-of-Friend feature beyond preferential attachment and link-reciprocation features substantially improves both likelihood and test accuracy and furthermore that the inclusion of this feature significantly reduces preference for degree-based attachment. However, omitting preferential attachment entirely leads to a worse model. We also find a preference for nodes with zero degree over low degree nodes. This hints that such nodes play a special role in the network formation process, even though they would be ignored in preferential attachment models.
Data. We use a scrape of the Flickr social network collected daily between October 2006 and May 2007 [50,51]. Users of Flickr can choose to follow other users and the "following" (but not the "followed by") connections are publicly accessible. The data was gathered using a breadth-first search crawl, which means that only the connected components reachable from the seed profiles are represented in the data. Since a full crawl was performed daily, the timing of new edges can be identified at the granularity of a day. The graph contains 3.2 million nodes and 33.1 million edges.
As described in the original papers, this data is consistent with both preferential attachment, as inferred from the in-degree distribution, and local search, as inferred from the over-representation of edges to nodes that are close to the linking node [50]. Fitting a power law to the distribution of in-degrees gives γ̂ = 1.741, which would indicate super-linear preferential attachment. We can test the relative importance of triadic closure by fitting a Jackson-Rogers model using the degree distribution matching procedure described in [28]. This results in r̂ = 0.252, estimating that three out of four edges are formed through triadic closure.
Discrete choice analysis. We fit a series of conditional logit models to further investigate the network formation process. We isolated a sample of 20,000 edge formation events occurring around the same date, to avoid time heterogeneity affecting the estimates. We fit several models, displayed in Table 3. Non-chosen alternatives are negatively sampled with s = 24. We log-transform in-degree (representing the number of followers), but in order to account for nodes with degree zero, we add a "has degree" feature for having a positive degree and use a modified version of log that returns 0 for input 0. In the first column, we fit a model using just these two degree-related features, and a reciprocity feature capturing whether the target node is already following the chooser. Reciprocity is a common phenomenon, with 60% of edges being followed back [50]. The estimate α̂ (the coefficient for "log Followers") for this model is significantly larger than 1, again consistent with super-linear preferential attachment.
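As an illustration of this feature construction, a sketch with our own helper names; the zero-preserving log and the positive-degree indicator are built per candidate:

```python
import numpy as np

def log0(x):
    """log that maps 0 to 0, so zero-degree nodes can stay in the model."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, np.log(np.maximum(x, 1e-12)), 0.0)

def flickr_features(followers, follows_back, is_fof):
    """One row per candidate: [log followers, has degree, reciprocity, friend-of-friend]."""
    followers = np.asarray(followers, dtype=float)
    return np.column_stack([log0(followers),
                            (followers > 0).astype(float),
                            np.asarray(follows_back, dtype=float),
                            np.asarray(is_fof, dtype=float)])

print(flickr_features([0, 3, 120], [0, 1, 0], [1, 1, 0]))
```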
In the second model, we test the effect of the target node being a friend-of-friend of the choosing node. In the case of Flickr, this means that the choosing user already follows someone that follows the target user, which evidently is strongly correlated with following that user. However, combining these two features in a third model (column 3) leads to both estimated parameters dropping substantially. Most remarkable is the 40% drop in the estimate of α, which paints a very different picture about the role of degree.
In the fourth model, we measure network proximity as in the original paper, by counting the number of "hops" (path length) from i to the target before an edge was made. We integrate the hops as categorical variables to show the relative impact of each additional "hop". Being two hops away is equivalent to being a friend-of-friend, and thus has strongly positive coefficient. Every additional hop corresponds to a sharp decrease in choosing that node. Being five hops away is slightly worse than there not being a path at all. This could be an artifact of the way the data was gathered, so that new regions of the graph only get "discovered" when there is at least one link to them, or this could be due to path length not being an accurate measure of distance for newer nodes. Since the number of hops is co-linear with being a friend-of-friend, we can't test them both at the same time.
In Figure 5 we visually show the effect of different specifications on the estimate of α̂. The first model of the Flickr data looks like super-linear preferential attachment, while the role of degree in the other two is significantly reduced. However, fitting a non-parametric model shows that the estimated coefficients for individual degrees are remarkably linear, suggesting that the functional form of d_j^α is a good fit for this network. One important point is the role of zero-degree nodes. In most descriptions of preferential attachment, nodes with degree zero are not considered. However, in the Flickr data set, zero-degree nodes have a higher utility than nodes with low positive degree, which could again be an artifact of the data collection process, or point to the special role of new nodes in the network. Either way, our framework allows one to find these kinds of patterns and investigate them further.
Choosing to cite
We now turn to citation network data to show how a discrete choice framework facilitates the testing of network formation hypotheses. Previous analyses of citation networks have observed linear preferential attachment with respect to degree [62] and bias towards citing more recent work [62]. Here, we find consistent results that older papers are less likely to be cited, but that accounting for age actually increases the importance of degree (i.e., after accounting for age, higher degree nodes are more likely to be cited).
Figure 5: The probability of being chosen by degree, as compared to a node with degree 1. We show the fits of parametric (lines) and non-parametric (points) conditional logit models of the Flickr and citation networks. The legend references model numbers in Table 3 and Table 4. The estimate for degree 0 is inserted for comparison. Dashed reference lines illustrate what exact linear preferential attachment (α = 1) would look like.
Data. We use the Microsoft Academic Graph dataset and focus on a representative subgraph of 459,000 "Climatology" papers. We focus on the subgraph of a single field to simplify the analysis since citations are predominantly within the same field of study (our analysis was similar on other subgraphs). We construct a graph out of this data by adding an edge each time a paper in our dataset cites another paper in our dataset. For our analysis of Climatology publications, 45% of edges are within the domain and citations to papers that are not labeled are excluded, leaving 3 million edges. We sample 10,000 citation events uniformly at random from papers published after 2010 and apply negative sampling (s = 24). This processing results in 10,000 choices with 25 alternatives in each choice set. For each possible choice, we compute four features: the number of citations at the time of citation, whether the paper shares authors with the citing paper, the age of the paper in years at the time of citation, and the maximum number of publications by any one of the authors at the time of publication. This last feature is a proxy for node fitness [11].
Discrete choice analysis. We fit conditional logit choice models relating these features to the likelihood of citation (Table 4). The first model (first column) is a simple log-degree model. We find that the estimate α̂ (the coefficient for "log Citations") is substantially lower than one, consistent with sub-linear preferential attachment. Apart from the log-likelihood of the models, we also report the predictive accuracy (defined as the share of instances predicted correctly) on a holdout test set of 2,000 examples. Just relying on prior degree already gives an accuracy of 36%, which is high for a classification task with 25 classes. In model two (second column), we add a covariate for whether a paper shares an author with the citing paper. As expected, this has a strongly positive coefficient. For the third model we add a covariate for the age of the paper in log years (years is always at least one). Older papers are less likely to get cited (accounting for degree), but accounting for age increases the relative importance of degree significantly. This expanded model also increases the accuracy to 53%, indicating that these feature weights do capture substantially more predictive power. Finally, in model four we add the "max papers by authors" feature as a proxy for fitness. The coefficient is small but positive. Accounting for fitness slightly reduces the estimated relative importance of degree, but the α̂ estimate is still close to 1. Adding this feature does not improve the log-likelihood or predictive accuracy; a better proxy for fitness may explain the data better. Looking back at the visual display of α̂ for the citation models in Figure 5, the non-parametric coefficients are highly linear. In this data, zero-degree nodes are significantly less attractive than nodes with degree one. As with any regression, identifying causal effects from model fits depends on the design of the study. The estimates we provide here, as is the case with most analyses of observational data, are descriptive and not meant to describe causal processes. The point is that discrete choice models provide a flexible framework to easily test and compare different hypotheses around network formation.
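A sketch of how this holdout accuracy can be computed, assuming the predicted choice is the candidate with the highest estimated utility (the toy data below is our own):

```python
import numpy as np

def choice_accuracy(theta, X_sets, chosen_idx):
    """Share of held-out choice sets whose highest-utility candidate was the one chosen."""
    correct = 0
    for X, j in zip(X_sets, chosen_idx):
        correct += int(np.argmax(X @ theta) == j)
    return correct / len(X_sets)

# Toy holdout: 3 choice sets of 25 candidates with 4 features each.
rng = np.random.default_rng(2)
theta = np.array([1.0, 0.5, -0.3, 0.1])
X_sets = [rng.normal(size=(25, 4)) for _ in range(3)]
chosen = [int(np.argmax(X @ theta)) for X in X_sets]
print(choice_accuracy(theta, X_sets, chosen))  # 1.0 on its own simulated choices
```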
DISCUSSION
When modeling network formation, the majority of the literature analyzes networks that grow "externally, " with new nodes arriving and choosing who to connect to, and this setting has also been our main focus here. External growth leads to convenient models that are relatively easy to analyze, with citation networks and patent networks as examples of empirical networks that follow this generative process reasonably closely. However, in many (especially social) networks, pairs of older nodes often form edges as well, edges that are "internal" to the existing set of nodes. An extreme example is the social networks of schools or classrooms, which have a fixed node population and "grow" purely through an internal growth process. A major advantage of modeling network formation as discrete choice is that it does not require any model of edge event initiation and simply conditions on the sequence of decisions to initiate, focusing the modeling on the choices made by the initiator. Discrete choice can therefore easily be used to model internal growth as well.
Another major advantage of discrete choice modeling is that it connects the analysis of large-scale network datasets to statistical methods (fitting generalized linear models) that are tremendously scalable. As we show in this work, additional techniques (e.g., negative sampling) make it possible to efficiently scale the estimation process to very large network datasets.
Since the conditional logit model of discrete choice is a random utility model, the estimated parameters can be interpreted as the marginal utility of each feature. This allows one to question the functional form of features. For example, we show that preferential attachment is equivalent to the logarithmic utility of degree. Given that degree is commonly heavy-tailed, this is a natural functional form, but we point out that the conditional logit allows one to flexibly compare different specifications.
Our discrete choice perspective has implications for how network data is best collected and analyzed. It is useful to consider and record notions of directionality, even if edges can otherwise be considered to be undirected. With information about the choice set associated with each choice, we can see what each node j looked like at the time the choice was made. Datasets that record the exact time of all edge formation events, as opposed to lumping edge events at the granularity of days or years, make it possible to analyze the formation process in more detail.
There are a couple of limitations to our proposed methodology. First, we cannot model purely undirected edges without some notion of direction. Second, even though the conditional logit and mixed logit models allow one to model similar mechanisms, the interpretations of their estimates are different. The estimates of a conditional logit are more akin to those of a linear regression model, where one estimates the expected change in an outcome from varying a covariate. A mixture model is a probabilistic combination of constituent modes, so the class probabilities indicate the relative importance of each mode, which makes it harder to compare the roles of individual features within or across modes. However, many traditional models of network formation are equivalent to mixture models, which motivated our consideration of them in this work.
By making foundational connections between network formation and discrete choice, we are hopeful that many further tools from discrete choice theory can be applied to the study of network formation. For example, there can be bias in network formation, e.g., men are more likely to cite themselves than women [34]. Our discrete choice framework can help study these cases more rigorously. For another example, discrete choice models of subset selection [5,20] could be applied to understand possible substitution and complementarity effects in network formation. And discrete choice interpretations of machine learning embedding techniques [64] can likely help unpack the behavior of recent embedding-based network representation methods such as DeepWalk [57]. Networks fundamentally represent interactions between discrete entities, and it is therefore natural that methods for modeling and analyzing discrete choice should enable many contributions.
| 8,860 |
1811.05008
|
2963321544
|
We provide a framework for modeling social network formation through conditional multinomial logit models from discrete choice and random utility theory, in which each new edge is viewed as a “choice” made by a node to connect to another node, based on (generic) features of the other nodes available to make a connection. This perspective on network formation unifies existing models such as preferential attachment, triadic closure, and node fitness, which are all special cases, and thereby provides a flexible means for conceptualizing, estimating, and comparing models. The lens of discrete choice theory also provides several new tools for analyzing social network formation; for example, the significance of node features can be evaluated in a statistically rigorous manner, and mixtures of existing models can be estimated by adapting known expectation-maximization algorithms. We demonstrate the flexibility of our framework through examples that analyze a number of synthetic and real-world datasets. For example, we provide rigorous methods for estimating preferential attachment models and show how to separate the effects of preferential attachment and triadic closure. Non-parametric estimates of the importance of degree show a highly linear trend, and we expose the importance of looking carefully at nodes with degree zero. Examining the formation of a large citation graph, we find evidence for an increased role of degree when accounting for age.
|
There is also a connection with the literature on link prediction in social networks @cite_9 . A network formation model implicitly makes claims about what edges are most likely to form next, and so can be evaluated by the same metrics as link prediction algorithms @cite_14 . Features like distance, common neighbors and degree have been shown to be predictive of link formation in multiple contexts @cite_9 @cite_29 . However, in that literature, the main focus of interest is usually predictive accuracy, rather than a robust understanding of the drivers of formation. While we use predictive accuracy as a measure of goodness of fit, we are more concerned with interpretability of the model and estimates, which is one of the advantages of the conditional logit model.
|
{
"abstract": [
"Networks have recently emerged as a powerful tool to describe and quantify many complex systems, with applications in engineering, communications, ecology, biochemistry and genetics. A general technique to divide network vertices in groups and sub-groups is reported. Revealing such underlying hierarchies in turn allows the predicting of missing links from partial data with higher accuracy than previous methods.",
"Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link-prediction problem, and we develop approaches to link prediction based on measures for analyzing the “proximity” of nodes in a network. Experiments on large coauthorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures. © 2007 Wiley Periodicals, Inc.",
"Link prediction in complex networks has attracted increasing attention from both physical and computer science communities. The algorithms can be used to extract missing information, identify spurious interactions, evaluate network evolving mechanisms, and so on. This article summaries recent progress about link prediction algorithms, emphasizing on the contributions from physical perspectives and approaches, such as the random-walk-based methods and the maximum likelihood methods. We also introduce three typical applications: reconstruction of networks, evaluation of network evolving mechanism and classification of partially labeled networks. Finally, we introduce some applications and outline future challenges of link prediction algorithms."
],
"cite_N": [
"@cite_29",
"@cite_9",
"@cite_14"
],
"mid": [
"2157082398",
"2148847267",
"1979104937"
]
}
|
Choosing to Grow a Graph: Modeling Network Formation as Discrete Choice
|
Understanding how networks form and evolve is an essential component of understanding their structure, which in turn forms the basis for understanding the broad range of processes that occur on networks. Models of social network formation can largely be decomposed into node formation and edge formation. In this work, we argue that edge formation can be effectively modeled as a choice made by an actor (or actors) in the network to instantiate a connection to another node. The diverse research on network formation has led to many models and mechanisms of edge formation, including preferential attachment [2], uniform attachment [12], triadic closure [31], random walks [65,78], homophily [55], copying edges from existing nodes [35,39], latent space structures [22,41,55], inherent node fitness [7,11], and combinations of all of these [28,40,43]. Here, we frame edge formation as a discrete choice process and derive a family of discrete choice models [47,74] that subsume a wide range of existing models in a unified framework and also naturally open up a host of powerful extensions.
Discrete choice models are commonly employed in economics, social psychology, and statistics as a way to model how individuals make choices from a slate of discrete alternatives [1]. Typically, the alternatives have associated features, and statistical models of discrete choice make it possible to estimate the relative importance of such features. Such models have been used to answer questions such as how consumers choose goods [67], how people choose where they live [46], how students choose what college to attend [21], and how commuters choose between different modes of transportation [75]. Discrete choice analysis is also used to understand how choices vary depending on the context in which they are framed: in online commerce, this could be how web layouts lead to different purchasing priorities [26]; for choosing colleges, this could be incorporating the effect of the national economy. In this paper, we demonstrate how discrete choice models can similarly help us understand the factors driving social network evolution.
The starting point for the present work is the observation that edge formation events in social networks are naturally viewed as discrete choices. For simplicity, consider a directed graph where edges are formed one by one, where we can think of the formation of a directed edge (i, j) as i "choosing" to connect with j, where the set of alternatives available to i is the set of all other nodes. (While undirected graph models are common in social network analysis, the underlying formation procedure is almost always asymmetric. For example, the Facebook friendship graph is typically modeled as an undirected graph [77], but the friendships are proposed by one of the two nodes in an edge.) The key modeling question is easy to state: why did i choose j? This question has long been the informal subject of network formation modeling and at the same time the exact question that discrete choice models and analysis have been designed to answer. However, up to this point, network formation models have largely been decoupled from discrete choice theory.
In employing discrete choice analysis, we focus on the conditional multinomial logit model, commonly called the conditional logit model for short, which is a foundational workhorse of discrete choice modeling. The model belongs to the family of random utility models, where choices are interpretable as those of a rational actor selecting the alternative with the largest "utility" sampled from random variables that decompose into the inherent utility of the alternative and a noise term. With the conditional logit model, we can use existing optimization routines to estimate model parameters and existing statistical methods to asses the uncertainty of the estimates. Discrete choice models can also easily restrict the set of available alternatives, where it might not be reasonable to assume that the entire set of nodes is available for friendship. For example, sometimes only "friends of friends" are considered [24,28,40].
In this paper, we first show that many popular network formation mechanisms can be rewritten as conditional logit models, including preferential attachment, uniform attachment, node fitness, latent space models, and models of homophily. However, the real power of discrete choice models for social network analysis is the ability to combine different features (e.g., node degree and node age), as well as different mechanisms (e.g., triadic closure and preferential attachment) and estimate their relative roles. Social networks are enormously varied in their structure [27], but existing methods often do a poor job at modeling this diversity. Thus, beyond unifying the network formation and discrete choice literature, we also develop several new tools for social network analysis. For example, we show how to estimate models to distinguish the effects of preferential attachment and triadic closure. We demonstrate these tools by analyzing the formation of the Flickr social network and the formation of a citation network. We find on Flickr that accounting for triadic closure greatly reduces the estimated role of degree in choosing who to connect to, and that nodes with degree zero have a remarkably high utility. Our estimates of preferential attachment in the citation network are similar to those observed in prior studies. When accounting for the age of a paper, we find evidence for linear preferential attachment. However, for a fixed degree, we find that age is negatively correlated with the likelihood of a new citation (i.e., older papers are less likely to be cited).
The key assumption underlying our framework is that the available data actually captures edge formation events (either through edge timestamps or other sequential information). In contrast, many existing approaches to understanding network formation focus on observing only the structural properties of a network at a single point of observation, e.g., its degree distribution, and initiating a deductive process to try and understand how variations in edge formation would lead to different outcomes [2,7,28,43]. This approach leads to tidy analyses and easy-to-characterize asymptotic properties, but model selection in this context is strongly dependent on what properties are compared. Different underlying formation processes can lead to graphs with indistinguishable properties. For example, many different formation processes result in the same heavy-tailed degree distributions [52]. Thus, when "fitting" outcome measurements in this way, one has to know (or posit), e.g., the relative rates of node formation and edge formation. However, when temporal or sequential data is available [25,56], our framework overcomes these limitations by incorporating this structure.
Additional related work. There is a strong connection between our work and work on link prediction and missing data methods using network features to predict edges [15,42]. A network formation model implicitly makes claims about what edges are most likely to form next, and thus can be evaluated by the same metrics as link prediction algorithms [44]. We use predictive accuracy as a measure of goodness of fit, but our primary concern is interpretability of the model and estimates, which is one of the advantages of the conditional logit model.
In sociology, stochastic actor-oriented models (SAOMs) employ a similar logit choice [69,70]; however, these models are targeted towards data collected as a few snapshots rather than edge-by-edge formation. SAOMs also model the rate at which nodes form new relationships, whereas we condition on the node initiating the new edge, providing better estimates of model parameters. There are also sociological models such as relational event models [10] and dynamic network actor models [71] that use fine-grained temporal information, yet these also do not condition on the initiator node as we do. While these sociological models can incorporate notions of network formation (e.g., preferential attachment), our conditional logit framework actually cleanly subsumes a wide range of models as special cases.
Finally, estimating the parameters that drive edge formation is different from identifying the factors that could have lead to the observed graph. The latter question is often pursued with so-called exponential random graph models (ERGMs) [63,79,81]. However, these models do not consider individual edge events, are hard to estimate, and have known pathologies [13,66].
DISCRETE CHOICE AND EDGE FORMATION
We now develop network formation through the lens of discrete choice. Throughout this paper, we assume that the networks are directed. Again, while undirected graphs are common in social network analysis, the actual edge formation process often has directed initiation. In the common setting of "growing graphs, " nodes arrive one at a time and form edges when arriving in a network. In these cases, the newly arriving node is considered to be the node initiating the connection; such analysis is standard with, e.g., classical preferential attachment models [2]. When modeling the directed formation of an edge (i, j), two processes need to be distinguished, roughly corresponding to the questions "who is i?" (the chooser) and "who is j?" (the chosen). In this paper, we focus on understanding the latter, i.e., the formation of (i, j) as the selection of j conditional on knowing that i is ready to form an edge. Thus, our discrete choice models of edge formation can be readily estimated from data that implicitly or explicitly contains a record of initiating i nodes and used for subsequent analysis, as we show in Sections 3 and 4. Beyond the scope of this work, our model of "j conditional on i" can be paired with a model of "initiations by i" for a full generative model of network formation.
Edge formation as discrete choice
With the above formalisms in place, we now develop network formation from a discrete choice perspective. We begin by showing how several well-known models can be conveniently expressed as conditional logit models, with a summary given in Table 1. All models are designed to grow simple graphs (i.e., without multi-edges), and the choice set C excludes any nodes to which the chooser i is already connected. Every item is represented by its features that, importantly, can evolve over time. The features x j,t of node j at time t are thus always time-indexed, but we often suppress the t to reduce notational clutter.
Preferential attachment. We start with the generalized Barabási-Albert model [2,8,36], also known as the generalized Price model [59], one of the most studied models in the network formation literature. It is typically stated as a growth model of a time-evolving graph G t = (V t , E t ), t = 1, 2, 3, . . ., and when a new node arrives it connects to m distinct existing nodes j with a probability proportional to a power of their degree d j,t at time t,
P(j, V_t) = \frac{d_{j,t}^{\alpha}}{\sum_{\ell \in V_t} d_{\ell,t}^{\alpha}}. (2)
The exponent parameter α controls the relative importance of degree [36]. The case where α = 1 is called linear preferential attachment, and produces networks that can mimic a range of structural properties observed in empirical networks. If we represent each potential neighbor j with the time-indexed one-dimensional "feature vector" x_{j,t} = log d_{j,t} and employ a conditional logit model as in Equation (1), we obtain a utility of j for i at time t of u_{i,j,t} = θ log d_{j,t}. Here the choice model parameter θ plays the exact role of α, since e^{θ log d_{j,t}} = d_{j,t}^θ.
Table 1: Network formation models framed as utility functions for a conditional logit. Where appropriate, we use the traditional notation for the parameters of each process.
Process | u_{i,j} | C
Uniform attachment [12] | 1 | V
Preferential attachment [2,36] | α log d_j | V
Non-parametric PA [54,58,62] | θ_{d_j} | V
Triadic closure [61] | 1 | {j : FoF_{i,j}}
FoF attachment [31,65,78] | α log η_{i,j} | V
PA, FoFs only | α log d_j | {j : FoF_{i,j}}
Individual node fitness [11] | θ_j | V
PA with fitness [6,53] | α log d_j + θ_j | V
Latent space [22,41,55] | β · d(i, j) | V
Stochastic block model [33] | ω_{g_i, g_j} | V
Homophily [48] | h · 1{g_i = g_j} | V
Given a growing network G_t, we can construct a choice dataset D from this network by extracting the node j_t, node sets V_t, and degree sequence (d_{1,t}, . . . , d_{|V_t|,t}) at each time-step. The preferential attachment model has only one parameter, θ = α. The log-likelihood for that parameter given a dataset is then:
l(\alpha; D) = \sum_{(j,C) \in D} \log \frac{\exp(\alpha \log d_j)}{\sum_{\ell \in C} \exp(\alpha \log d_\ell)} = \sum_{(j,C) \in D} \Big[ \alpha \log d_j - \log \sum_{\ell \in C} \exp(\alpha \log d_\ell) \Big].
We've suppressed the time-index t from the features log d ℓ to reduce clutter, but emphasize that d ℓ is the degree at the time of the choice.
Non-parametric preferential attachment. The above model assumes an attachment kernel of a particular parametric form. From a discrete choice perspective, one can also estimate the role of degree in edge formation non-parametrically by estimating a coefficient θ k for each degree k = 0, . . . , n − 1 individually. This approach has the added benefit of being able to assign positive probability to choosing nodes with degree zero. Under this model, the log-likelihood of the parameters θ = (θ 0 , ..., θ n−1 ) given the dataset is:
l(\theta; D) = \sum_{(j,C) \in D} \log \frac{\exp(\theta_{d_j})}{\sum_{\ell \in C} \exp(\theta_{d_\ell})} = \sum_{(j,C) \in D} \Big[ \theta_{d_j} - \log \sum_{\ell \in C} \exp(\theta_{d_\ell}) \Big].
Again we've suppressed time-indexing to simplify the presentation. Pham et al. [58] previously described a version of the above likelihood as a means of measuring the attachment kernel using maximum likelihood, albeit without making the connection to discrete choice.
Uniform attachment. A simple edge formation model is to sample a new neighbor uniformly at random from all nodes [12]. There are no parameters in this model, but we can still write down the likelihood of the model given a dataset, which will be useful when we later combine this model with others within a mixture model:
l(D) = \sum_{(j,C) \in D} \log \frac{\exp(1)}{\sum_{\ell \in C} \exp(1)} = \sum_{(j,C) \in D} -\log |C|.
Triadic closure. A variant of uniform attachment is for i to attach to new neighbors uniformly at random from the set of their friends-of-friends, as opposed to the set of all nodes. This process effectively models triadic closure [61]. It has the same simple functional form as the uniform model, but now the choice set C varies with each choice; namely, the choice set is restricted to be only the friends of friends of node i (the chooser) to which i is not already connected. This change in choice set can also be achieved by assuming the utility of j to i at time t is u i, j,t = log(1{FoF i, j,t }), where 1{FoF i, j,t } is a boolean indicating whether i and j are friends of friends at time t, and then letting the choice set revert to the full node set. An additional model that naturally combines the ideas of preferential attachment and befriending friends-of-friends takes the number of friends in common between i and j as a feature. We could define this feature as η i, j,t = |{k : e i,k,t ∧ e k, j,t }|, where e i,k,t indicates whether there is an edge between i and k at time t. The corresponding utility would be u i, j,t = α log η i, j,t . This model is similar (but not equivalent) to random walk-based formation models [31,65,78], which emphasize formation within a local neighborhood.
Node fitness. Another line of formation models subsumed by the discrete choice framework is the one involving node fitness. In this line of work, nodes choose to connect to others based on some intrinsic latent fitness score. Certain distributions of fitness values lead to a scale-free degree distribution [11], providing an alternative explanation to preferential attachment for modeling such degree distributions. We can express the node fitness model by a conditional logit model with a separate fixed effect θ j for each node j (so the feature of a node is an indicator vector of its identity). The likelihood of the fitness parameters θ given the data is then:
l(\theta; D) = \sum_{(j,C) \in D} \log \frac{\exp(\theta_j)}{\sum_{\ell \in C} \exp(\theta_\ell)} = \sum_{(j,C) \in D} \Big[ \theta_j - \log \sum_{\ell \in C} \exp(\theta_\ell) \Big].
This formation model is equivalent to the classic Bradley-Terry-Luce model of discrete choice for estimating the quality of alternatives [45]. Alternatively, one could replace the individual fixed effects with surrogate features of node fitness, such as an auxiliary measure of gregariousness (in the case of social networks) or the impact factor of a paper's journal (in the case of citation networks).
A related model proposes selection probabilities proportional to the product of node fitness and degree [6,53]. This model can be written as a conditional logit model with u i, j,t = α log d j,t + θ j .
Latent space models. Another class of network formation models postulates the existence of a latent space that drives connections between nodes. Examples of latent spaces include Euclidean space [22], hyperbolic space [37], a tree [41], a circle [55], or a set of discrete classes [23]. While the conditional logit model in the form that we describe it does not facilitate finding the best-fitting latent space assignment to explain the data, it can be used to estimate the relative importance of a known latent space given a distance function d(i, j). As one example from the family of latent space models, in the community-guided attachment (CGA) model [41] all nodes have a distance derived from the height h(i, j) of common parents in a latent tree structure situating all nodes i and j. Given this tree as known, a node connects to another proportionally to c −h(i, j) for some scalar c > 0. As a conditional logit model, the corresponding utility function is u i, j = −h(i, j) · log(c). The parameter vector θ = log c can be retrieved by fitting a conditional logit with a known h(i, j) as the only variable and transforming the estimated parameter with c = exp(θ ). Assuming that the latent space representation is given is a strong assumption, and fitting such a model while estimating the latent space representation (e.g. as done by Hoff et al. [22] in Euclidean space) is much more difficult.
Additional models. Conditional logit models are very flexible and can deal with multiple features and interactions between them. Any number of features can be added, including node covariates and structural features like a node's clustering coefficient [3] or age [12,40]. Conditional logit models can also be used to investigate the role of homophily [48] in edge formation, by adding a binary feature indicating whether nodes i and j are part of the same class. Table 1 summarizes how several network formation models fit within the discrete choice framework via their corresponding utility functions and choice sets. A major advantage of this framework is that different features can easily be combined into a single model and jointly estimated. Or, when suitable, one can employ a mixture of conditional logit models, as we show in the next section.
Combining modes using Mixed Logit
So far we have written a range of existing and new edge formation models as conditional logit models, a specific type of discrete choice model. But several existing edge formation models that do not fit neatly into the conditional logit framework, meanwhile, align exactly with the use of mixture models in discrete choice modeling. Following our success formulating edge formation models as conditional logit models, in this subsection we develop mixed conditional logit formulations of several additional models.
A common proposal to make network formation models more flexible is to augment an existing model by allowing nodes to pick neighbors uniformly at random with some probability 1 − p, while running the ordinary model with probability p [17,35,39,43]. This augmentation increases flexibility because it enables the model to explain edge events that may otherwise have probability zero. Within discrete choice, this approach is precisely a mixed logit model where one of the mixture modes is uniform attachment.
While the conditional logit estimates a single parameter vector representing average preferences as shared by all agents, the mixed logit model is often used to account for differences in preferences across various types of agents. In its most general form, the mixed logit is expressed using a probability distribution f over different instantiations of the parameter vector θ :
$$P_i(j, C) = \int \frac{\exp(\theta^T x_j)}{\sum_{l\in C}\exp(\theta^T x_l)} f(\theta)\, d\theta.$$
Table 2: Mixture model formulations expressed as modes of a mixed logit.

| Process | Modes |
| --- | --- |
| Copy model [35] | Uniform, PA |
| Node types [38] | New node, PA, none |
| Local search [24,28] | Uniform, Uniform FoF |
| (r, p)-model | Uniform, PA, Uniform FoF, PA FoF |
In this work, we will only consider discrete mixtures of M logits, also called a latent class model [32]:
$$P_i(j, C) = \sum_{m=1}^{M} \pi_m \frac{\exp(\theta_m^T x_j)}{\sum_{l\in C}\exp(\theta_m^T x_l)},$$
where $\sum_{m=1}^{M}\pi_m = 1$ and the weights $\pi_1, \dots, \pi_M$ model the relative prevalence of each mode.
Copy model. The copy model is a classic formation process that can be written as a mixed logit with two modes. In the first mode, new edges connect proportional to degree with probability p, while in the second mode they connect uniformly at random with probability 1 − p [17,43]. As a conditional logit model, the utilities of the two modes are $u^{(1)}_x = \log d_x$ and $u^{(2)}_x = 1$, respectively, and the class probabilities are $(\pi_1, \pi_2) = (p, 1-p)$. (This is a special case of the original copy model where d edges are copied from a sampled vertex [39]; the model here is when d = 1, which is often used for analysis [19].) The connection between relaxations of preferential attachment and mixture models was also recently observed by Medina et al. [49].
Local search model. Another example of a model with multiple modes is the Jackson-Rogers model of edge formation as a mixture of uniform attachment and triadic closure [24,28]. The original model is based on a relative rate $r^*$ between edges forming at random and edges formed locally. It also has edges form based on respective acceptance probabilities. We describe a simplified version of this model, which we'll call the local search model, where edges connect to nodes selected uniformly at random from the full node set with probability r and uniformly at random from the set of friends-of-friends with probability 1 − r.¹ We can represent this simplified process with a two-mode mixed logit model. In this case the mixture parameters are $(\pi_1, \pi_2) = (r, 1-r)$ and both modes have the same utility function $u_x = 1$, but their choice sets differ so that the second mode only considers friends-of-friends.² Table 2 overviews the mixture model formulations described above, as well as a new model, the (r, p)-model, that we use in Section 4.2 to analyze preferential attachment effects.
¹ Since the $r^*$ parameter in the original presentation is actually the rate of uniform attachment, we can relate it to our r through $r = \frac{r^*}{1+r^*}$. For example, if the rate between random and friend-of-friend edges is one to one ($r^* = 1$), then r = 0.5.
² A model with a restricted choice set, for example to only friends-of-friends, gives a likelihood of zero to choices outside the choice set.
ESTIMATION AND INFERENCE
To learn a discrete choice model of network formation from data, we assume that we have access to a sequence of directed edges, in chronological order. This sequence of edges needs to be recast as choice data in order to fit a choice model. For every formed edge (i, j), we create a data point consisting of the choice j, the choice set of candidate nodes at the time, and the features of each candidate node at the time.
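As a rough illustration of this recasting step (a minimal sketch with hypothetical variable names, not the authors' released code), one can scan the chronologically ordered edge list and emit one choice record per edge:

```python
from collections import defaultdict

def edges_to_choice_data(edge_sequence):
    # edge_sequence: iterable of (i, j) pairs in chronological order.
    # Returns (chooser, chosen, choice_set) records; candidate features
    # would be computed from the graph state at the moment of each choice.
    nodes = set()
    neighbors = defaultdict(set)
    records = []
    for i, j in edge_sequence:
        nodes.update([i, j])
        choice_set = nodes - neighbors[i] - {i}   # exclude i and its existing neighbors
        records.append((i, j, frozenset(choice_set)))
        neighbors[i].add(j)
    return records
```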
Given a data set and a conditional logit model, one can write out the log-likelihood, as shown in Section 2.2. For any conditional logit model with a linear utility $u_{i,j} = \theta^T x_j$, the likelihood function is convex with respect to the variables $\theta$ and can be efficiently maximized using standard gradient-based optimization (e.g., BFGS). The functional form of the logit leads to straightforward gradients. For example, for preferential attachment, the gradient is
$$\frac{\partial}{\partial\alpha} l(\alpha; D) = \sum_{(x,C)\in D} \left( \log d_x - \frac{\sum_{y\in C} \log d_y \cdot \exp(\alpha \log d_y)}{\sum_{y\in C} \exp(\alpha \log d_y)} \right),$$
where the time-dependence of the features (degrees) has been suppressed to reduce clutter. Gradients for the other choice models in Section 2.2 are omitted but straightforward. One advantage of likelihood-based model fitting is that we can compute standard errors and confidence intervals of the parameters. In particular, the standard errors can be computed with $\sqrt{H^{-1}}$ [74], where $H$ is the Hessian matrix of second derivatives of the log-likelihood at the parameters.
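For concreteness, here is a minimal sketch of this fitting procedure for the one-parameter log-degree (preferential attachment) logit, using scipy and the BFGS inverse-Hessian approximation for the standard error (the data layout is a hypothetical simplification):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def neg_log_likelihood(alpha, data):
    # data: list of (log_degree_of_chosen, array_of_log_degrees_of_candidates).
    # Assumes all candidates have positive degree; degree-zero nodes need separate handling.
    ll = 0.0
    for log_d_chosen, log_d_cands in data:
        ll += alpha[0] * log_d_chosen - logsumexp(alpha[0] * log_d_cands)
    return -ll

def fit_log_degree_logit(data):
    res = minimize(neg_log_likelihood, x0=np.array([1.0]), args=(data,), method="BFGS")
    alpha_hat = res.x[0]
    se = np.sqrt(res.hess_inv[0, 0])   # approximate standard error from the inverse Hessian
    return alpha_hat, se
```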
Mixture models and expectation-maximization. For mixed conditional logit models, the log-likelihood is no longer convex in general, making optimization more difficult. To maximize the likelihood of mixed models we turn to expectation-maximization (EM) techniques [18,73]. We briefly summarize the procedure described in Train's book [74, Chapter 14.3.2]. Assume that we have a model with M modes (i.e., mixture components), where every mode starts with initial parameter values $\vec\theta_m$ (usually initialized at 1). Choices $(x_k, C_k) \in D$ are again indexed with k, so that $k \in \{1, \dots, N\}$ and $N = |D|$. The EM algorithm runs through the following steps:
(1) Initiate class probabilities uniformly with $\pi_m = 1/M$ and initial class responsibilities $\gamma_k^m = 1/M$ for each data point.
(2) For every data point k and every mode m, compute the class responsibility given by the relative individual likelihood:
$$\gamma_k^m = \frac{\pi_m \cdot L_m(\theta_m; (x_k, C_k))}{\sum_{\ell=1}^{M} \pi_\ell \cdot L_\ell(\theta_\ell; (x_k, C_k))}.$$
(3) For every mode m, update the total class probability with $\pi_m = \frac{1}{N}\sum_{k=1}^{N} \gamma_k^m$.
(4) For every mode m, update the parameters $\vec\theta_m$ using standard optimization for fitting a single model, weighing each choice set with its class responsibility $\gamma_k^m$.
(5) Repeat steps 2-4 until some convergence or stopping criterion is met.
The total likelihood of the parameters and class probabilities is:
$$l(\theta; D) = \sum_{m=1}^{M} l_m(\theta_m; \pi_m; D) = \sum_{m=1}^{M} \sum_{k=1}^{N} \log\left( L_m(\theta_m; (x_k, C_k)) \cdot \pi_m \right).$$
We monitor the convergence of the iterative procedure using the change in this total likelihood between iterations.
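The following is a minimal sketch of this EM loop for a two-mode mixture of log-degree preferential attachment and uniform attachment; `pa_likelihoods` and `fit_pa_weighted` are hypothetical helpers (per-choice PA likelihoods at a given alpha, and a responsibility-weighted single-mode fit, respectively), not functions from the released code:

```python
import numpy as np

def em_two_mode(choice_sets, pa_likelihoods, fit_pa_weighted, max_iter=100, tol=1e-6):
    # choice_sets: list of candidate sets, one per observed choice.
    # Mode 1: preferential attachment (parameter alpha); mode 2: uniform attachment.
    pi = np.array([0.5, 0.5])                       # initial class probabilities
    alpha = 1.0                                     # initial PA exponent
    unif_lik = np.array([1.0 / len(C) for C in choice_sets])
    prev = -np.inf
    for _ in range(max_iter):
        # E-step: responsibilities of each mode for each choice.
        lik = np.vstack([pi[0] * pa_likelihoods(alpha), pi[1] * unif_lik])
        gamma = lik / lik.sum(axis=0)
        # M-step: update class probabilities and the PA parameter.
        pi = gamma.mean(axis=1)
        alpha = fit_pa_weighted(gamma[0])           # weighted conditional logit fit
        total = np.log(lik.sum(axis=0)).sum()       # mixture log-likelihood, for monitoring
        if abs(total - prev) < tol:
            break
        prev = total
    return pi, alpha
```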
Even though EM is theoretically an efficient estimator [82], there are cases when alternatives are appropriate. For example, if one has reasonable bounds or priors on the parameter values, then direct likelihood maximization could be used, and if the search space is low-dimensional, a grid search might be appropriate. Recent theoretical work has also developed algorithms for learning mixtures of two multinomial logit modes with theoretical guarantees assuming a separation between the modes [14].
Negative sampling. Every time an edge is formed by some node i, each node not yet connected to i is a candidate choice. For large sparse graphs, the full choice set of all nodes can become large and the gradients of the log-likelihood expensive to compute. To speed up this computation, s negative/non-chosen examples can be sampled uniformly at random to create a (random) reduced dataset with smaller choice sets. For each choice (j, C), one forms a smaller random choice set out of the positive choice and the negative samples, $\tilde C \subset C$ with $|\tilde C| = s + 1$, and replaces the original choice data with $(j, \tilde C)$. As long as the negative examples are sampled uniformly at random, parameter estimates on a dataset with negatively sampled choice sets are unbiased and consistent for the estimates on the full set [29,46,74]. Practically, there is a trade-off between feature computation and storage on the one hand, and the ability to estimate coefficients for rare features on the other.
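A minimal sketch of this sampling step (hypothetical data layout):

```python
import random

def negative_sample(chosen, choice_set, s=24, rng=random):
    # Keep the chosen node plus s alternatives drawn uniformly at random
    # from the non-chosen candidates.
    negatives = [c for c in choice_set if c != chosen]
    sampled = rng.sample(negatives, min(s, len(negatives)))
    return chosen, [chosen] + sampled
```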
Typical likelihood surface. In Figure 1 we show the representative likelihood surface of a copy model to illustrate its properties. We generated a synthetic graph on $n = 10{,}000$ nodes according to the copy model with m = 4 edges per node and degree-attachment probability $\pi_1 = 0.5$. We fit a two-mode mixed logit model to this data with $u^{(1)}_j = \alpha \log d_{j,t}$ and $u^{(2)}_j = 1$. We use s = 10 negative samples. There are two free parameters in this model: the degree exponent $\alpha$ and the mixture probability $\pi_1$. We plot the log-likelihood across a reasonable range of values to show that the surface is generally well behaved. We see that it is hard to distinguish data generated under a copy model ($\alpha = 1$) with probability $\pi_1 = 0.5$ from data generated from no-mixture ($\pi_1 = 0$) preferential attachment with $\alpha = 0.5$, and there is a general trade-off between the exponent $\alpha$ and the mixture probability $\pi_1$.
Model comparison and the likelihood-ratio test. Another advantage of our discrete choice framework is that we can employ standard statistical methods for model selection. Specifically, when one model is a special case of another, their relative quality can be compared using the likelihood ratio test. In the case of the conditional logit, a model with additional features can be compared to one without them because the latter is a special case of the former with the coefficients of the additional features being set to 0. Or, in the case of the mixed logit, one can define a model with multiple modes and manually set some of their class probabilities to zero.
As a concrete example, suppose we wanted to know whether including the age of a node in a preferential attachment model results in a statistically significantly better model. To do so, we would first estimate the parameters $\theta_1$ of the more complex model, $u^{(1)}_j = \theta_{1,1} \log(d_j) + \theta_{1,2} \log(\mathrm{age}_j)$. We would then estimate the parameters $\theta_0$ of the simpler degree-only model, $u^{(0)}_j = \theta_{0,1} \log(d_j)$, and let $L_1$ and $L_0$ be the likelihoods of the two models at the fitted parameters $\hat\theta_1$ and $\hat\theta_0$. We can compute the likelihood ratio $\lambda = L_0/L_1$. Under the null hypothesis of the simpler model, with some regularity conditions, $-2 \log \lambda$ is asymptotically distributed $\chi^2_1$ (more generally $\chi^2_k$, where k is the number of additional degrees of freedom in the more complex model).
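The test itself is routine to carry out; a small sketch using scipy, with placeholder log-likelihood values purely for illustration (not real estimates):

```python
from scipy.stats import chi2

def likelihood_ratio_pvalue(loglik_simple, loglik_complex, extra_params):
    stat = -2.0 * (loglik_simple - loglik_complex)   # -2 log(L0 / L1)
    return chi2.sf(stat, df=extra_params)

# Hypothetical fitted log-likelihoods: degree-only vs. degree + log(age), one extra parameter.
p_value = likelihood_ratio_pvalue(-5230.4, -5198.7, extra_params=1)
```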
APPLICATIONS
We now demonstrate how to use our conditional logit framework to analyze network formation processes. We first consider synthetic data and show how our tools can be used to better analyze preferential attachment mechanisms. We then analyze two empirical datasets that demonstrate how to integrate different structural features of the network or integrate node covariates. In both cases, our framework provides novel insights into the network formation processes. We provide code for processing data (converting edge lists to choice data) and for model fitting (with negative sampling), available here: https://github.com/janovergoor/choose2grow/.
Measuring preferential attachment
The question of whether and when preferential attachment is an important driver of network formation is widely debated [2,3,9,11,12,24,28,54,54,65,78]. Most prior research focuses on estimating the shape of the attachment kernel by comparing the degree of chosen nodes to the distribution of available degrees [30,54,62]. However, recent work by Pham et al. shows that previous measures are biased [58]. In particular, the bias comes from the assumption that the distribution of available nodes of varying degrees is constant throughout the formation process, but this distribution clearly changes as the network grows.
To estimate the exponent α of an attachment kernel, Pham et al. propose fitting something akin to a conditional logit with a separate coefficient for each degree, and then estimating α via a weighted least squares fit over the degree coefficients [58]. Compared to this method, fitting a log-degree logit directly is much simpler. In fact, it is the maximum likelihood estimator for α, and thus consistent and efficient.
Figure 2: Attachment kernel fits for a synthetic preferential attachment graph. The Newman measure computes the relative likelihood of selecting a node of that degree, as compared to the likelihood of selecting the lowest degree, but it is biased for higher degrees. The non-parametric logit is consistent but noisy for higher degrees.
To illustrate, we generate a graph with pure preferential attachment (n = 2,000, m = 1 edges per node, α = 1) and estimate the attachment kernel by the methods of Newman [54] and Pham et al. [58]. The maximum degree of this graph was 102, and the results of the different estimation procedures are shown in Figure 2. The non-parametric estimates are similar for lower degrees, but for higher degrees the Newman measure incorrectly drops, illustrating the bias that Pham et al. have previously documented. Fitting α directly using a log-degree conditional logit gives an estimate of $\alpha = 0.987$. The Pham et al. least squares fit, $\alpha_{LS} = 1.012$, is close to the MLE but may deviate considerably in more difficult instances.
Disentangling preferential attachment from triadic closure
Many models exhibit similar outcomes to preferential attachment [11,24,28,36,52,78], but there are few principled ways to rigorously test the relative validity of these models. In this section, we show how to use the discrete choice framework to estimate the relative importance of preferential attachment while accounting for other dynamics. To this end, we generate data according to a known generative process and fit various (possibly mis-specified) formation models. Our generative process is a hybrid between the copy model of preferential attachment (i.e., choose nodes proportional to degree) and the Jackson-Rogers local search model (i.e., connecting to friends-of-friends). The process, which we call the (r, p)-model, is parametrized by $r \in (0, 1]$ and $p \in (0, 1]$. When a new edge is formed, with probability p it is formed uniformly at random and with probability 1 − p it is formed with linear preferential attachment (α = 1). Meanwhile, the choice set is determined by the second parameter r: with probability r, the choice set is all nodes not yet connected to i, while with probability 1 − r, the choice set is limited to available friends-of-friends of i. With r = 1 this model reduces to the copy model and with p = 1 it reduces to the simplified local search model; the (r, p)-model thus subsumes two popular models in a single, simple discrete choice framework. For a growth process on directed graphs, it is necessary that p > 0 and r > 0, otherwise new nodes will never be selected. With this general model, we investigate how estimating the parameters of one of the more specific models goes awry when the true data generating process in fact comes from an instance of the more general model. For a range of values of p and r, we generated graphs using the following growth process. New nodes arrive, each creating m = 4 edges. For every edge, we sample the mode of the model (according to r and p) independently. If an edge is supposed to be a friend-of-friend edge, but no friends-of-friends are available (for example, i's first edge), then the process reverts to uniformly random formation across the full node set. Sweeping through combinations of p and r parameter values, for each set of parameters we generated 10 undirected graphs with n = 20,000 nodes each.
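The following Python sketch illustrates one edge-formation step of this (r, p)-process (our own simplification for illustration, not the exact generator used for the experiments):

```python
import random

def rp_model_target(degrees, fof_candidates, all_candidates, r, p, rng=random):
    # degrees: dict node -> current degree.
    # all_candidates / fof_candidates: nodes not yet connected to the chooser,
    # in the full node set and among friends-of-friends, respectively.
    use_full_set = rng.random() < r or not fof_candidates   # fall back if no FoFs exist
    candidates = list(all_candidates) if use_full_set else list(fof_candidates)
    if rng.random() < p:
        return rng.choice(candidates)                        # uniform within the choice set
    weights = [degrees[v] for v in candidates]               # linear PA (alpha = 1)
    if sum(weights) == 0:
        return rng.choice(candidates)
    return rng.choices(candidates, weights=weights, k=1)[0]
```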
Degree distributions. The local search and copy models both produce graphs with power-law degree distributions. Therefore, fitting a mis-specified model on a degree distribution can lead to misleading results. To illustrate, we fit a power-law distribution $p(x) \propto x^{-\gamma}$ to the degree distribution of graphs generated from (r, p)-models using maximum likelihood estimation [16], with estimates for γ in Figure 3. In theory, an undirected graph formed with the copy model process with probability parameter p leads to a degree distribution with power-law exponent $\gamma = (3 - p)/(1 - p)$ [8,52] (for directed graphs, $\gamma = (2 - p)/(1 - p)$). As p increases, the degree distribution looks more like a random graph without preferential attachment. However, as r goes down (increasing the relative role of friends-of-friends), the parameter estimate looks like the estimates for the copy model, even when p = 1.
To summarize, it is not recommended to estimate a formation model from an observed degree distribution. The parameter estimates are sensitive to small deviations in the generative process.
Figure 4: The log-likelihood of varying the class probabilities of the copy model (r = 1, p free) or the local search model (r free, p = 1) for two different synthetic graphs. In both cases the true model is the most likely. On the left we see a large difference in the log-likelihood between optima, while on the right we see a smaller difference. In both cases a likelihood ratio test is highly significant (P-values $< 10^{-16}$).
Fitting the formation models directly as discrete choice models, in contrast, lets us compare their likelihoods on the same data; we illustrate with two cases. As a first case, we generate graphs with r = 0.5 and p = 1, so half the edges are formed to friends-of-friends with no utility from degree. The likelihood under a local search model (r free, p = 1) as a mixed logit is maximized at r = 0.45, while for the copy model (r = 1, p free) it is maximized at p = 0.54. The former is a much better fit than the latter (P-value $< 10^{-16}$), and the copy model erroneously thinks that preferential attachment is driving 45% of the edges. As a second case, we look at a graph generated with r = 1 and p = 0.5, so half the edges are due to preferential attachment, and friend-of-friending plays no role. In this case, both models are correctly maximized at their relative values. Again, the correct model has a higher likelihood (P-value $< 10^{-16}$).
Choosing to follow on Flickr
We now apply our framework to examine a real-world network formation dataset capturing the growth of the Flickr social network. We find that incorporating a friend-of-friend feature beyond preferential attachment and link-reciprocation features substantially improves both likelihood and test accuracy, and furthermore that the inclusion of this feature significantly reduces the estimated preference for degree-based attachment. However, omitting preferential attachment entirely leads to a worse model. We also find a preference for nodes with zero degree over low-degree nodes. This hints that such nodes play a special role in the network formation process, even though they would be ignored in preferential attachment models.
Data. We use a scrape of the Flickr social network collected daily between October 2006 and May 2007 [50,51]. Users of Flickr can choose to follow other users and the "following" (but not the "followed by") connections are publicly accessible. The data was gathered using a breadth-first search crawl, which means that only the connected components reachable from the seed profiles are represented in the data. Since a full crawl was performed daily, the timing of new edges can be identified at the granularity of a day. The graph contains 3.2 million nodes and 33.1 million edges.
As described in the original papers, this data is consistent with both preferential attachment, as inferred from the in-degree distribution, and local search, as inferred from the over-representation of edges to nodes that are close to the linking node [50]. Fitting a power law to the distribution of in-degrees gives $\hat\gamma = 1.741$, which would indicate super-linear preferential attachment. We can test the relative importance of triadic closure by fitting a Jackson-Rogers model using the degree distribution matching procedure described in [28]. This results in $\hat r = 0.252$, estimating that three out of four edges are formed through triadic closure.
Discrete choice analysis. We fit a series of conditional logit models to further investigate the network formation process. We isolated a sample of 20,000 edge formation events occurring around the same date, to avoid time heterogeneity affecting the estimates. We fit several models, displayed in Table 3. Not-chosen alternatives are negatively sampled with s = 24. We log-transform in-degree (representing the number of followers), but in order to account for nodes with degree zero, we add a "has degree" feature for having a positive degree and use a modified version of log that returns 0 for input 0. In the first column, we fit a model using just these two degree-related features, and a reciprocity feature capturing whether the target node is already following the chooser. Reciprocity is a common phenomenon, with 60% of edges being followed back [50]. The estimate $\hat\alpha$ (the coefficient for "log Followers") for this model is significantly larger than 1, again consistent with super-linear preferential attachment.
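As an illustration of this feature construction (our own sketch; the column names are hypothetical, not those of the released code), the zero-safe log and the per-candidate feature matrix could be built as follows:

```python
import numpy as np

def log0(x):
    # Modified log: returns 0 for input 0, log(x) otherwise.
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, np.log(np.where(x > 0, x, 1.0)), 0.0)

def flickr_candidate_features(followers, follows_back, is_fof):
    # Columns: log Followers, has degree, reciprocity, friend-of-friend.
    followers = np.asarray(followers, dtype=float)
    return np.column_stack([
        log0(followers),
        (followers > 0).astype(float),
        np.asarray(follows_back, dtype=float),
        np.asarray(is_fof, dtype=float),
    ])
```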
In the second model, we test the effect of the target node being a friend-of-friend of the choosing node. In the case of Flickr, this means that the choosing user already follows someone that follows the target user, which evidently is strongly correlated with following that user. However, combining these two features in a third model (column 3) leads to both estimated parameters dropping substantially. Most remarkable is the 40% drop in the estimate of α, which paints a very different picture about the role of degree.
In the fourth model, we measure network proximity as in the original paper, by counting the number of "hops" (path length) from i to the target before an edge was made. We integrate the hops as categorical variables to show the relative impact of each additional "hop". Being two hops away is equivalent to being a friend-of-friend, and thus has a strongly positive coefficient. Every additional hop corresponds to a sharp decrease in the probability of choosing that node. Being five hops away is slightly worse than there being no path at all. This could be an artifact of the way the data was gathered, so that new regions of the graph only get "discovered" when there is at least one link to them, or this could be due to path length not being an accurate measure of distance for newer nodes. Since the number of hops is collinear with being a friend-of-friend, we can't test both features at the same time.
In Figure 5 we visually show the effect of different specifications on the estimate of $\hat\alpha$. The first model of the Flickr data looks like super-linear preferential attachment, while the role of degree in the other two is significantly reduced. However, fitting a non-parametric model shows that the estimated coefficients for individual degrees are remarkably linear, suggesting that the functional form of $d_j^\alpha$ is a good fit for this network. One important point is the role of zero-degree nodes. In most descriptions of preferential attachment, nodes with degree zero are not considered. However, in the Flickr data set, zero-degree nodes have a higher utility than nodes with low positive degree, which could again be an artifact of the data collection process, or point to the special role of new nodes in the network. Either way, our framework allows one to find these kinds of patterns, and investigate them further.
Choosing to cite
We now turn to citation network data to show how a discrete choice framework facilitates the testing of network formation hypotheses. Previous analyses of citation networks have observed linear preferential attachment with respect to degree [62] and bias towards citing more recent work [62]. Here, we find consistent results that older papers are less likely to be cited but that accounting for age actually increases the importance of degree (i.e., after accounting for age, higher degree nodes are more likely to be cited).
Figure 5: The probability of being chosen by degree, as compared to a node with degree 1. We show the fits of parametric (lines) and non-parametric (points) conditional logit models of the Flickr and citation networks. The legend references model numbers in Table 3 and Table 4. The estimate for degree 0 is inserted for comparison. Dashed reference lines illustrate what exact linear preferential attachment (α = 1) would look like.
Data. We use the Microsoft Academic Graph dataset and focus on a representative subgraph of 459,000 "Climatology" papers. We focus on the subgraph of a single field to simplify the analysis since citations are predominantly within the same field of study (our analysis was similar on other subgraphs). We construct a graph out of this data by adding an edge each time a paper in our dataset cites another paper in our dataset. For our analysis of Climatology publications, 45% of edges are within the domain and citations to papers that are not labeled are excluded, leaving 3 million edges. We sample 10,000 citation events uniformly at random from papers published after 2010 and apply negative sampling (s = 24). This processing results in 10,000 choices with 25 alternatives in each choice set. For each possible choice, we compute four features: the number of citations at the time of citation, whether the paper shares authors with the citing paper, the age of the paper in years at the time of citation, and the maximum number of publications by any one of the authors at the time of publication. This last feature is a proxy for node fitness [11].
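A small sketch of computing these time-indexed features for one candidate paper (the field names are hypothetical, not the Microsoft Academic Graph schema):

```python
import math

def citation_candidate_features(candidate, citing_paper, year_of_citation, prior_citations):
    # candidate / citing_paper: dicts with 'authors', 'year', 'papers_per_author' (hypothetical keys).
    shares_author = float(bool(set(candidate["authors"]) & set(citing_paper["authors"])))
    age_years = max(1, year_of_citation - candidate["year"])    # age is at least one year
    max_papers = max(candidate["papers_per_author"])            # proxy for node fitness
    return {
        "log_citations": math.log(prior_citations) if prior_citations > 0 else 0.0,
        "shares_author": shares_author,
        "log_age": math.log(age_years),
        "max_papers_by_authors": float(max_papers),
    }
```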
Discrete choice analysis. We fit conditional logit choice models relating these features to the likelihood of citation (Table 4). The first model (first column) is a simple log-degree model. We find that the estimate $\hat\alpha$ (the coefficient for "log Citations") is substantially lower than one, consistent with sub-linear preferential attachment. Apart from the log-likelihood of the models, we also report the predictive accuracy (defined as the share of instances predicted correctly) on a holdout test set of 2,000 examples. Just relying on prior degree already gives an accuracy of 36%, which is high for a classification task with 25 classes. In model two (second column), we add a covariate for whether a paper shares an author with the citing paper. As expected, this has a strongly positive coefficient. For the third model we add a covariate for the age of the paper in log years (years is always at least one). Older papers are less likely to get cited (accounting for degree), but accounting for age increases the relative importance of degree significantly. This expanded model also increases the accuracy to 53%, indicating that these feature weights do capture substantially more predictive power. Finally, in model four we add the "max papers by authors" feature as a proxy for fitness. The coefficient is small but positive. Accounting for fitness slightly reduces the estimated relative importance of degree, but the $\hat\alpha$ estimate is still close to 1. Adding this feature does not improve the log-likelihood or predictive accuracy; a better proxy for fitness may explain the data better. Looking back at the visual display of $\hat\alpha$ for the citation models in Figure 5, the non-parametric coefficients are highly linear. In this data, zero-degree nodes are significantly less attractive than nodes with degree one. As with any regression, identifying causal effects from model fit depends on the design of the study. The estimates we provide here, as is the case with most analyses of observational data, are descriptive and not meant to describe causal processes. The point is that discrete choice models provide a flexible framework to easily test and compare different hypotheses around network formation.
DISCUSSION
When modeling network formation, the majority of the literature analyzes networks that grow "externally, " with new nodes arriving and choosing who to connect to, and this setting has also been our main focus here. External growth leads to convenient models that are relatively easy to analyze, with citation networks and patent networks as examples of empirical networks that follow this generative process reasonably closely. However, in many (especially social) networks, pairs of older nodes often form edges as well, edges that are "internal" to the existing set of nodes. An extreme example is the social networks of schools or classrooms, which have a fixed node population and "grow" purely through an internal growth process. A major advantage of modeling network formation as discrete choice is that it does not require any model of edge event initiation and simply conditions on the sequence of decisions to initiate, focusing the modeling on the choices made by the initiator. Discrete choice can therefore easily be used to model internal growth as well.
Another major advantage of discrete choice modeling is that it connects the analysis of large-scale network datasets to statistical methods (fitting generalized linear models) that are tremendously scalable. As we show in this work, additional techniques (e.g., negative sampling) make it possible to efficiently scale the estimation process to very large network datasets.
Since the conditional logit model of discrete choice is a random utility model, the estimated parameters can be interpreted as the marginal utility of each feature. This allows one to question the functional form of features. For example, we show that preferential attachment is equivalent to the logarithmic utility of degree. Given that degree is commonly heavy-tailed, this is a natural functional form, but we point out that the conditional logit allows one to flexibly compare different specifications.
Our discrete choice perspective has implications for how network data is best collected and analyzed. It is useful to consider and record notions of directionality, even if edges can otherwise be considered to be undirected. With information about the choice set associated with each choice, we can see what each node j looked like at the time the choice was made. Datasets that record the exact time of all edge formation events, as opposed to lumping edge events at the granularity of days or years, make it possible to analyze the formation process in more detail.
There are a couple of limitations to our proposed methodology. First, we cannot model purely undirected edges without some notion of direction. Second, even though the conditional logit and mixed logit models allow one to model similar mechanisms, the interpretations of their estimates are different. The estimates of a conditional logit are more akin to those of a linear regression model, where one estimates the expected change in an outcome from varying a covariate. A mixture model is a probabilistic combination of constituent modes, so the class probabilities indicate the relative importance of each mode, which makes it harder to compare the roles of individual features within or across modes. However, many traditional models of network formation are equivalent to mixture models, which motivated our consideration of them in this work.
By making foundational connections between network formation and discrete choice, we are hopeful that many further tools from discrete choice theory can be applied to the study of network formation. For example, there can be bias in network formation, e.g., men are more likely to cite themselves than women [34]. Our discrete choice framework can help study these cases more rigorously. For another example, discrete choice models of subset selection [5,20] could be applied to understand possible substitution and complementarity effects in network formation. And discrete choice interpretations of machine learning embedding techniques [64] can likely help unpack the behavior of recent embedding-based network representation methods such as DeepWalk [57]. Networks fundamentally represent interactions between discrete entities, and it is therefore natural that methods for modeling and analyzing discrete choice should enable many contributions.
| 8,860 |
1811.05008
|
2963321544
|
We provide a framework for modeling social network formation through conditional multinomial logit models from discrete choice and random utility theory, in which each new edge is viewed as a “choice” made by a node to connect to another node, based on (generic) features of the other nodes available to make a connection. This perspective on network formation unifies existing models such as preferential attachment, triadic closure, and node fitness, which are all special cases, and thereby provides a flexible means for conceptualizing, estimating, and comparing models. The lens of discrete choice theory also provides several new tools for analyzing social network formation; for example, the significance of node features can be evaluated in a statistically rigorous manner, and mixtures of existing models can be estimated by adapting known expectation-maximization algorithms. We demonstrate the flexibility of our framework through examples that analyze a number of synthetic and real-world datasets. For example, we provide rigorous methods for estimating preferential attachment models and show how to separate the effects of preferential attachment and triadic closure. Non-parametric estimates of the importance of degree show a highly linear trend, and we expose the importance of looking carefully at nodes with degree zero. Examining the formation of a large citation graph, we find evidence for an increased role of degree when accounting for age.
|
A related line of research studies the so-called stochastic actor-oriented model @cite_40 @cite_27. This model combines multiple formation dynamics in a multinomial logit functional form, and develops connections between network formation and Markov chains in the space of graphs. However, these models are impractical to estimate, especially for larger datasets.
|
{
"abstract": [
"Abstract Stochastic actor-based models are models for network dynamics that can represent a wide variety of influences on network change, and allow to estimate parameters expressing such influences, and test corresponding hypotheses. The nodes in the network represent social actors, and the collection of ties represents a social relation. The assumptions posit that the network evolves as a stochastic process ‘driven by the actors’, i.e., the model lends itself especially for representing theories about how actors change their outgoing ties. The probabilities of tie changes are in part endogenously determined, i.e., as a function of the current network structure itself, and in part exogenously, as a function of characteristics of the nodes (‘actor covariates’) and of characteristics of pairs of nodes (‘dyadic covariates’). In an extended form, stochastic actor-based models can be used to analyze longitudinal data on social networks jointly with changing attributes of the actors: dynamics of networks and behavior. This paper gives an introduction to stochastic actor-based models for dynamics of directed networks, using only a minimum of mathematics. The focus is on understanding the basic principles of the model, understanding the results, and on sensible rules for model selection.",
"A class of statistical models is proposed for longitudinal network data. The dependent variable is the changing (or evolving) relation network, represented by two or more observations of a directed graph with a fixed set of actors. The network evolution is modeled as the consequence of the actors making new choices, or withdrawing existing choices, on the basis of functions, with fixed and random components, that the actors try to maximize. Individual and dyadic exogenous variables can be used as covariates. The change in the network is modeled as the stochastic result of network effects (reciprocity, transitivity, etc.) and these covariates. The existing network structure is a dynamic constraint for the evolution of the structure itself. The models are continuous-time Markov chain models that can be implemented as simulation models. The model parameters are estimated from observed data. For estimating and testing these models, statistical procedures are proposed that are based on the method of moments. The statistical procedures are implemented using a stochastic approximation algorithm based on computer simulations of the network evolution process."
],
"cite_N": [
"@cite_27",
"@cite_40"
],
"mid": [
"2099815494",
"2104725117"
]
}
|
Choosing to Grow a Graph: Modeling Network Formation as Discrete Choice
|
Understanding how networks form and evolve is an essential component of understanding their structure, which in turn forms the basis for understanding the broad range of processes that occur on networks. Models of social network formation can largely be decomposed into node formation and edge formation. In this work, we argue that edge formation can be effectively modeled as a choice made by an actor (or actors) in the network to instantiate a connection to another node. The diverse research on network formation has led to many models and mechanisms of edge formation, including preferential attachment [2], uniform attachment [12], triadic closure [31], random walks [65,78], homophily [55], copying edges from existing nodes [35,39], latent space structures [22,41,55], inherent node fitness [7,11], and combinations of all of these [28,40,43]. Here, we frame edge formation as a discrete choice process and derive a family of discrete choice models [47,74] that subsume a wide range of existing models in a unified framework and also naturally open up a host of powerful extensions.
Discrete choice models are commonly employed in economics, social psychology, and statistics as a way to model how individuals make choices from a slate of discrete alternatives [1]. Typically, the alternatives have associated features, and statistical models of discrete choice make it possible to estimate the relative importance of such features. Such models have been used to answer questions such as how consumers choose goods [67], how people choose where they live [46], how students choose what college to attend [21], and how commuters choose between different modes of transportation [75]. Discrete choice analysis is also used to understand how choices vary depending on the context in which they are framed: in online commerce, this could be how web layouts lead to different purchasing priorities [26]; for choosing colleges, this could be incorporating the effect of the national economy. In this paper, we demonstrate how discrete choice models can similarly help us understand the factors driving social network evolution.
The starting point for the present work is the observation that edge formation events in social networks are naturally viewed as discrete choices. For simplicity, consider a directed graph where edges are formed one by one, where we can think of the formation of a directed edge (i, j) as i "choosing" to connect with j, where the set of alternatives available to i is the set of all other nodes. (While undirected graph models are common in social network analysis, the underlying formation procedure is almost always asymmetric. For example, the Facebook friendship graph is typically modeled as an undirected graph [77], but the friendships are proposed by one of the two nodes in an edge.) The key modeling question is easy to state: why did i choose j? This question has long been the informal subject of network formation modeling and at the same time the exact question that discrete choice models and analysis have been designed to answer. However, up to this point, network formation models have largely been decoupled from discrete choice theory.
In employing discrete choice analysis, we focus on the conditional multinomial logit model, commonly called the conditional logit model for short, which is a foundational workhorse of discrete choice modeling. The model belongs to the family of random utility models, where choices are interpretable as those of a rational actor selecting the alternative with the largest "utility" sampled from random variables that decompose into the inherent utility of the alternative and a noise term. With the conditional logit model, we can use existing optimization routines to estimate model parameters and existing statistical methods to assess the uncertainty of the estimates. Discrete choice models can also easily restrict the set of available alternatives, where it might not be reasonable to assume that the entire set of nodes is available for friendship. For example, sometimes only "friends of friends" are considered [24,28,40].
In this paper, we first show that many popular network formation mechanisms can be rewritten as conditional logit models, including preferential attachment, uniform attachment, node fitness, latent space models, and models of homophily. However, the real power of discrete choice models for social network analysis is the ability to combine different features (e.g., node degree and node age), as well as different mechanisms (e.g., triadic closure and preferential attachment) and estimate their relative roles. Social networks are enormously varied in their structure [27], but existing methods often do a poor job at modeling this diversity. Thus, beyond unifying the network formation and discrete choice literature, we also develop several new tools for social network analysis. For example, we show how to estimate models to distinguish the effects of preferential attachment and triadic closure. We demonstrate these tools by analyzing the formation of the Flickr social network and the formation of a citation network. We find on Flickr that accounting for triadic closure greatly reduces the estimated role of degree in choosing who to connect to, and that nodes with degree zero have a remarkably high utility. Our estimates of preferential attachment in the citation network are similar to those observed in prior studies. When accounting for the age of a paper, we find evidence for linear preferential attachment. However, for a fixed degree, we find that age is negatively correlated with the likelihood of a new citation (i.e., older papers are less likely to be cited).
The key assumption underlying our framework is that the available data actually captures edge formation events (either through edge timestamps or other sequential information). In contrast, many existing approaches to understanding network formation focus on observing only the structural properties of a network at a single point of observation, e.g., its degree distribution, and initiating a deductive process to try and understand how variations in edge formation would lead to different outcomes [2,7,28,43]. This approach leads to tidy analyses and easy-to-characterize asymptotic properties, but model selection in this context is strongly dependent on what properties are compared. Different underlying formation processes can lead to graphs with indistinguishable properties. For example, many different formation processes result in the same heavy-tailed degree distributions [52]. Thus, when "fitting" outcome measurements in this way, one has to know (or posit), e.g., the relative rates of node formation and edge formation. However, when temporal or sequential data is available [25,56], our framework overcomes these limitations by incorporating this structure.
Additional related work. There is a strong connection between our work and work on link prediction and missing data methods using network features to predict edges [15,42]. A network formation model implicitly makes claims about what edges are most likely to form next, and thus can be evaluated by the same metrics as link prediction algorithms [44]. We use predictive accuracy as a measure of goodness of fit, but our primary concern is interpretability of the model and estimates, which is one of the advantages of the conditional logit model.
In sociology, stochastic actor-oriented models (SAOMs) employ a similar logit choice [69,70]; however, these models are targeted towards data collected as a few snapshots rather than edge-by-edge formation. SAOMs also model the rate at which nodes form new relationships, whereas we condition on the node initiating the new edge, providing better estimates of model parameters. There are also sociological models such as relational event models [10] and dynamic network actor models [71] that use fine-grained temporal information, yet these also do not condition on the initiator node as we do. While these sociological models can incorporate notions of network formation (e.g., preferential attachment), our conditional logit framework actually cleanly subsumes a wide range of models as special cases.
Finally, estimating the parameters that drive edge formation is different from identifying the factors that could have lead to the observed graph. The latter question is often pursued with so-called exponential random graph models (ERGMs) [63,79,81]. However, these models do not consider individual edge events, are hard to estimate, and have known pathologies [13,66].
DISCRETE CHOICE AND EDGE FORMATION
We now develop network formation through the lens of discrete choice. Throughout this paper, we assume that the networks are directed. Again, while undirected graphs are common in social network analysis, the actual edge formation process often has directed initiation. In the common setting of "growing graphs, " nodes arrive one at a time and form edges when arriving in a network. In these cases, the newly arriving node is considered to be the node initiating the connection; such analysis is standard with, e.g., classical preferential attachment models [2]. When modeling the directed formation of an edge (i, j), two processes need to be distinguished, roughly corresponding to the questions "who is i?" (the chooser) and "who is j?" (the chosen). In this paper, we focus on understanding the latter, i.e., the formation of (i, j) as the selection of j conditional on knowing that i is ready to form an edge. Thus, our discrete choice models of edge formation can be readily estimated from data that implicitly or explicitly contains a record of initiating i nodes and used for subsequent analysis, as we show in Sections 3 and 4. Beyond the scope of this work, our model of "j conditional on i" can be paired with a model of "initiations by i" for a full generative model of network formation.
Edge formation as discrete choice
With the above formalisms in place, we now develop network formation from a discrete choice perspective. We begin by showing how several well-known models can be conveniently expressed as conditional logit models, with a summary given in Table 1. All models are designed to grow simple graphs (i.e., without multi-edges), and the choice set C excludes any nodes to which the chooser i is already connected. Every item is represented by its features that, importantly, can evolve over time. The features x j,t of node j at time t are thus always time-indexed, but we often suppress the t to reduce notational clutter.
Preferential attachment. We start with the generalized Barabási-Albert model [2,8,36], also known as the generalized Price model [59], one of the most studied models in the network formation literature. It is typically stated as a growth model of a time-evolving graph $G_t = (V_t, E_t)$, $t = 1, 2, 3, \dots$, and when a new node arrives it connects to m distinct existing nodes j with a probability proportional to a power of their degree $d_{j,t}$ at time t,
$$P(j, V_t) = \frac{d_{j,t}^{\alpha}}{\sum_{\ell\in V_t} d_{\ell,t}^{\alpha}}. \qquad (2)$$
The exponent parameter α controls the relative importance of degree [36]. The case where α = 1 is called linear preferential attachment, and produces networks that can mimic a range of structural properties observed in empirical networks. If we represent each potential neighbor j with the time-indexed one-dimensional "feature vector" $x_{j,t} = \log d_{j,t}$ and employ a conditional logit model as in Equation (1), we obtain a utility of j for i at time t of $u_{i,j,t} = \theta \log d_{j,t}$. Here the choice model parameter θ plays the exact role of α, since $e^{\theta \log d_{j,t}} = d_{j,t}^{\theta}$.
Table 1: Network formation models framed as utility functions for a conditional logit. Where appropriate, we use the traditional notation for the parameters of each process.

| Process | $u_{i,j}$ | $C$ |
| --- | --- | --- |
| Uniform attachment [12] | $1$ | $V$ |
| Preferential attachment [2,36] | $\alpha \log d_j$ | $V$ |
| Non-parametric PA [54,58,62] | $\theta_{d_j}$ | $V$ |
| Triadic closure [61] | $1$ | $\{j : \mathrm{FoF}_{i,j}\}$ |
| FoF attachment [31,65,78] | $\alpha \log \eta_{i,j}$ | $V$ |
| PA, FoFs only | $\alpha \log d_j$ | $\{j : \mathrm{FoF}_{i,j}\}$ |
| Individual node fitness [11] | $\theta_j$ | $V$ |
| PA with fitness [6,53] | $\alpha \log d_j + \theta_j$ | $V$ |
| Latent space [22,41,55] | $\beta \cdot d(i,j)$ | $V$ |
| Stochastic block model [33] | $\omega_{g_i,g_j}$ | $V$ |
| Homophily [48] | $h \cdot 1\{g_i = g_j\}$ | $V$ |

Given a growing network $G_t$, we can construct a choice dataset D from this network by extracting the node $j_t$, node sets $V_t$, and degree sequence $(d_{1,t}, \dots, d_{|V_t|,t})$ at each time-step. The preferential attachment model has only one parameter, θ = α. The log-likelihood for that parameter given a dataset is then:
$$l(\alpha; D) = \sum_{(j,C)\in D} \log \frac{\exp(\alpha \log d_j)}{\sum_{\ell\in C}\exp(\alpha \log d_\ell)} = \sum_{(j,C)\in D} \left( \alpha \log d_j - \log \sum_{\ell\in C}\exp(\alpha \log d_\ell) \right).$$
We've suppressed the time-index t from the features $\log d_\ell$ to reduce clutter, but emphasize that $d_\ell$ is the degree at the time of the choice.
Non-parametric preferential attachment. The above model assumes an attachment kernel of a particular parametric form. From a discrete choice perspective, one can also estimate the role of degree in edge formation non-parametrically by estimating a coefficient $\theta_k$ for each degree $k = 0, \dots, n-1$ individually. This approach has the added benefit of being able to assign positive probability to choosing nodes with degree zero. Under this model, the log-likelihood of the parameters $\theta = (\theta_0, \dots, \theta_{n-1})$ given the dataset is:
$$l(\theta; D) = \sum_{(j,C)\in D} \log \frac{\exp(\theta_{d_j})}{\sum_{\ell\in C}\exp(\theta_{d_\ell})} = \sum_{(j,C)\in D} \left( \theta_{d_j} - \log \sum_{\ell\in C}\exp(\theta_{d_\ell}) \right).$$
Again we've suppressed time-indexing to simplify the presentation. Pham et al. [58] previously described a version of the above likelihood as a means of measuring the attachment kernel using maximum likelihood, albeit without making the connection to discrete choice.
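A rough sketch of this non-parametric fit with a standard optimizer (hypothetical data layout; one coefficient is pinned to zero for identifiability, since utilities are only defined up to an additive constant):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def fit_nonparametric_kernel(choice_degrees, max_degree, ref_degree=1):
    # choice_degrees: list of (degree_of_chosen, array_of_candidate_degrees).
    # Fits one theta_k per degree k = 0..max_degree, with theta[ref_degree] = 0.
    def nll(theta_free):
        theta = np.insert(theta_free, ref_degree, 0.0)
        ll = 0.0
        for d_chosen, d_cands in choice_degrees:
            ll += theta[d_chosen] - logsumexp(theta[np.asarray(d_cands)])
        return -ll
    res = minimize(nll, x0=np.zeros(max_degree), method="BFGS")
    return np.insert(res.x, ref_degree, 0.0)
```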
Uniform attachment. A simple edge formation model is to sample a new neighbor uniformly at random from all nodes [12]. There are no parameters in this model, but we can still write down the likelihood of the model given a dataset, which will be useful when
we later combine this model with others within a mixture model:
$$l(D) = \sum_{(j,C)\in D} \log \frac{\exp(1)}{\sum_{\ell\in C}\exp(1)} = \sum_{(j,C)\in D} -\log |C|.$$
Triadic closure. A variant of uniform attachment is for i to attach to new neighbors uniformly at random from the set of their friends-of-friends, as opposed to the set of all nodes. This process effectively models triadic closure [61]. It has the same simple functional form as the uniform model, but now the choice set C varies with each choice: the choice set is restricted to the friends of friends of node i (the chooser) to which i is not already connected. This change in choice set can also be achieved by assuming the utility of j to i at time t is $u_{i,j,t} = \log(\mathbb{1}\{\mathrm{FoF}_{i,j,t}\})$, where $\mathbb{1}\{\mathrm{FoF}_{i,j,t}\}$ is a boolean indicating whether i and j are friends of friends at time t, and then letting the choice set revert to the full node set. An additional model that naturally combines the ideas of preferential attachment and befriending friends-of-friends takes the number of friends in common between i and j as a feature. We could define this feature as $\eta_{i,j,t} = |\{k : e_{i,k,t} \wedge e_{k,j,t}\}|$, where $e_{i,k,t}$ indicates whether there is an edge between i and k at time t. The corresponding utility would be $u_{i,j,t} = \alpha \log \eta_{i,j,t}$. This model is similar (but not equivalent) to random walk-based formation models [31,65,78], which emphasize formation within a local neighborhood.
Node fitness. Another family of formation models subsumed by the discrete choice framework involves fitness. In these models, nodes choose to connect to others based on some intrinsic latent fitness score. Certain distributions of fitness values lead to a scale-free degree distribution [11], providing an alternative to preferential attachment as an explanation for such degree distributions. We can express the node fitness model as a conditional logit model with a separate fixed effect $\theta_j$ for each node j (so the feature of a node is an indicator vector of its identity). The likelihood of the fitness parameters $\theta$ given the data is then:
$$l(\theta; D) = \sum_{(j,C)\in D} \log \frac{\exp(\theta_j)}{\sum_{\ell\in C}\exp(\theta_\ell)} = \sum_{(j,C)\in D} \left( \theta_j - \log \sum_{\ell\in C}\exp(\theta_\ell) \right).$$
This formation model is equivalent to the classic Bradley-Terry-Luce model of discrete choice for estimating the quality of alternatives [45]. Alternatively, one could replace the individual fixed effects with surrogate features of node fitness such as an auxiliary measure of gregariousness (in the case of social networks), or the impact factor of a paper's journal (in the case of citation networks).
A related model proposes selection probabilities proportional to the product of node fitness and degree [6,53]. This model can be written as a conditional logit model with $u_{i,j,t} = \alpha \log d_{j,t} + \theta_j$.
Latent space models. Another class of network formation models postulates the existence of a latent space that drives connections between nodes. Examples of latent spaces include Euclidean space [22], hyperbolic space [37], a tree [41], a circle [55], or a set of discrete classes [23]. While the conditional logit model in the form that we describe it does not facilitate finding the best-fitting latent space assignment to explain the data, it can be used to estimate the relative importance of a known latent space given a distance function $d(i, j)$. As one example from the family of latent space models, in the community-guided attachment (CGA) model [41] all nodes have a distance derived from the height $h(i, j)$ of common parents in a latent tree structure situating all nodes i and j. Given this tree as known, a node connects to another proportionally to $c^{-h(i,j)}$ for some scalar $c > 0$. As a conditional logit model, the corresponding utility function is $u_{i,j} = -h(i,j) \cdot \log(c)$. The parameter $\theta = \log c$ can be retrieved by fitting a conditional logit with a known $h(i, j)$ as the only variable and transforming the estimated parameter with $c = \exp(\theta)$. Assuming that the latent space representation is given is a strong assumption, and fitting such a model while estimating the latent space representation (e.g. as done by Hoff et al. [22] in Euclidean space) is much more difficult.
Additional models. Conditional logit models are very flexible and can deal with multiple features and interactions between them. Any number of features can be added, including node covariates and structural features like a node's clustering coefficient [3] or age [12,40]. Conditional logit models can also be used to investigate the role of homophily [48] in edge formation, by adding a binary feature indicating whether nodes i and j are part of the same class. Table 1 summarizes how several network formation models fit within the discrete choice framework via their corresponding utility functions and choice sets. A major advantage of this framework is that different features can easily be combined into a single model and jointly estimated. Or, when suitable, one can employ a mixture of conditional logit models, as we show in the next section.
Combining modes using Mixed Logit
So far we have written a range of existing and new edge formation models as conditional logit models, a specific type of discrete choice model. But several existing edge formation models that do not fit neatly into the conditional logit framework, meanwhile, align exactly with the use of mixture models in discrete choice modeling. Following our success formulating edge formation models as conditional logit models, in this subsection we develop mixed conditional logit formulations of several additional models.
A common proposal to make network formation models more flexible is to augment an existing model by allowing nodes to pick neighbors uniformly at random with some probability 1 − p, while running the ordinary model with probability p [17,35,39,43]. This augmentation increases flexibility because it enables the model to explain edge events that may otherwise have probability zero. Within discrete choice, this approach is precisely a mixed logit model where one of the mixture modes is uniform attachment.
While the conditional logit estimates a single parameter vector representing average preferences as shared by all agents, the mixed logit model is often used to account for differences in preferences across various types of agents. In its most general form, the mixed logit is expressed using a probability distribution f over different instantiations of the parameter vector θ :
P_i(j, C) = \int \frac{\exp(θ^T x_j)}{\sum_{ℓ ∈ C} \exp(θ^T x_ℓ)} f(θ) \, dθ.
Table 2. Mixture-model formulations (process: modes).
Copy model [35]: Uniform, PA
Node types [38]: New node, PA, none
Local search [24,28]: Uniform, Uniform FoF
(r, p)-model: Uniform, PA, Uniform FoF, PA FoF
In this work, we will only consider discrete mixtures of M logits, also called a latent class model [32]:
P_i(j, C) = \sum_{m=1}^{M} π_m \frac{\exp(θ_m^T x_j)}{\sum_{ℓ ∈ C} \exp(θ_m^T x_ℓ)},
where \sum_{m=1}^{M} π_m = 1 and the weights π_1, ..., π_M model the relative prevalence of each mode.
Copy model. The copy model is a classic formation process that can be written as a mixed logit with two modes. In the first mode, new edges connect proportional to degree with probability p, while in the second mode they connect uniformly at random with probability 1 − p [17,43]. As a conditional logit model, the utilities of the two modes are u^{(1)}_x = log d_x and u^{(2)}_x = 1, respectively, and the class probabilities are (π_1, π_2) = (p, 1 − p). (This is a special case of the original copy model where d edges are copied from a sampled vertex [39]; the model here is when d = 1, which is often used for analysis [19].) The connection between relaxations of preferential attachment and mixture models was also recently observed by Medina et al. [49].
Local search model. Another example of a model with multiple modes is the Jackson-Rogers model of edge formation as a mixture of uniform attachment and triadic closure [24,28]. The original model is based on a relative rate r* between edges forming at random and edges formed locally. It also has edges form based on respective acceptance probabilities. We describe a simplified version of this model, which we'll call the local search model, where edges connect to nodes selected uniformly at random from the full node set with probability r and uniformly at random from the set of friends-of-friends with probability 1 − r.¹ We can represent this simplified process with a two-mode mixed logit model. In this case the mixture parameters are (π_1, π_2) = (r, 1 − r) and both modes have the same utility function u_x = 1, but their choice sets differ so that the second mode only considers friends-of-friends.² Table 2 overviews the mixture model formulations described above, as well as a new model, the (r, p)-model, that we use in Section 4.2 to analyze preferential attachment effects.
¹ Since the r* parameter in the original presentation is actually the rate of uniform attachment, we can relate it to our r through r = r*/(1 + r*). For example, if the rate between random and friend-of-friend edges is one to one (r* = 1), then r = 0.5.
² A model with a restricted choice set, for example to only friends-of-friends, gives a likelihood of zero to choices outside the choice set.
ESTIMATION AND INFERENCE
To learn a discrete choice model of network formation from data, we assume that we have access to a sequence of directed edges, in chronological order. This sequence of edges needs to be recast as choice data in order to fit a choice model. For every formed edge (i, j), we create a data point consisting of the choice j, the choice set of candidates nodes at the time, and the features of each candidate node at the time.
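To make this concrete, the following is a minimal sketch (not the authors' released code) of how a chronologically ordered edge list could be recast as choice data; the record layout and the single in-degree feature are our own illustrative assumptions.

```python
import networkx as nx

def edges_to_choice_data(edge_list):
    """Recast a chronologically ordered list of directed edges (i, j) into
    discrete-choice records. Each record holds the chosen node j, the
    candidate set (nodes i is not already linked to), and one feature per
    candidate: the candidate's in-degree at the time of the choice."""
    G = nx.DiGraph()
    records = []
    for i, j in edge_list:
        G.add_node(i)
        G.add_node(j)
        # Candidates: every existing node except i and i's current out-neighbors.
        candidates = [v for v in G.nodes if v != i and not G.has_edge(i, v)]
        features = {v: G.in_degree(v) for v in candidates}
        records.append({"chooser": i, "chosen": j,
                        "candidates": candidates, "in_degree": features})
        G.add_edge(i, j)  # only update the graph after recording the choice
    return records

# Toy usage: three edges arriving in order.
print(edges_to_choice_data([(1, 2), (3, 2), (3, 1)]))
```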
Given a data set and a conditional logit model, one can write out the log-likelihood, as shown in Section 2.2. For any conditional logit model with a linear utility u_{i,j} = θ^T x_j, the likelihood function is convex with respect to the variables θ and can be efficiently maximized using standard gradient-based optimization (e.g., BFGS). The functional form of the logit leads to straightforward gradients. For example, for preferential attachment, the gradient is
\frac{∂}{∂α} l(α; D) = \sum_{(x,C) ∈ D} \Big[ \log d_x − \frac{\sum_{y ∈ C} \log d_y \cdot \exp(α \log d_y)}{\sum_{y ∈ C} \exp(α \log d_y)} \Big],
where the time-dependence of the features (degrees) has been suppressed to reduce clutter. Gradients for the other choice models in Section 2.2 are omitted but straightforward. One advantage of likelihood-based model fitting is that we can compute standard errors and confidence intervals of the parameters. In particular, the standard errors can be computed as \sqrt{H^{−1}} [74], where H is the Hessian matrix of second derivatives of the log-likelihood at the estimated parameters.
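As an illustration of this fitting procedure, here is a hedged sketch of estimating the degree exponent α by direct maximum likelihood with SciPy. The toy data format (pairs of a chosen index and an array of candidate degrees) is our own, and the standard error uses the BFGS inverse-Hessian approximation rather than an exact Hessian.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def neg_log_likelihood(alpha, choices):
    """Negative log-likelihood of the log-degree conditional logit.
    choices: list of (chosen_idx, degrees) pairs, where degrees is an array
    of candidate degrees (all >= 1) and chosen_idx indexes the chosen one."""
    a = np.atleast_1d(alpha)[0]
    ll = 0.0
    for chosen_idx, degrees in choices:
        logits = a * np.log(degrees)
        ll += logits[chosen_idx] - logsumexp(logits)
    return -ll

def fit_alpha(choices):
    res = minimize(neg_log_likelihood, x0=np.array([1.0]),
                   args=(choices,), method="BFGS")
    alpha_hat = res.x[0]
    # BFGS keeps an approximate inverse Hessian of the negative
    # log-likelihood, which gives a rough standard error for alpha.
    std_err = float(np.sqrt(res.hess_inv[0, 0]))
    return alpha_hat, std_err

# Toy usage: three choice sets with the chosen candidate's index and degrees.
toy = [(0, np.array([5.0, 1.0, 1.0])),
       (1, np.array([1.0, 2.0, 7.0])),
       (0, np.array([3.0, 3.0, 1.0]))]
print(fit_alpha(toy))
```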
Mixture models and expectation-maximization. For mixed conditional logit models, the log-likelihood is no longer convex in general, making optimization more difficult. To maximize the likelihood of mixed models we turn to expectation-maximization (EM) techniques [18,73]. We briefly summarize the procedure described in Train's book [74, Chapter 14.3.2]. Assume that we have a model with M modes (i.e., mixture components), where every mode starts with initial parameter values θ_m (usually initialized at 1). Choices (x_k, C_k) ∈ D are again indexed with k, so that k ∈ {1, ..., n} and n = |D|. The EM algorithm runs through the following steps:
(1) Initialize class probabilities uniformly with π_m = 1/M and initial class responsibilities γ_k^m = 1/M for each data point.
(2) For every data point k and every mode m, compute the class responsibility given by the relative individual likelihood:
γ_k^m = \frac{π_m \cdot L_m(θ_m; (x_k, C_k))}{\sum_{ℓ=1}^{M} π_ℓ \cdot L_ℓ(θ_ℓ; (x_k, C_k))}.
(3) For every mode m, update the total class probability with
π_m = \frac{1}{N} \sum_{k=1}^{N} γ_k^m.
(4) For every mode m, update the parameters θ_m using standard optimization for fitting a single model, weighing each choice set with its class responsibility γ_k^m.
(5) Repeat steps 2-4 until some convergence or stopping criterion is met.
The total likelihood of the parameters and class probabilities is:
l(θ; D) = \sum_{m=1}^{M} l_m(θ_m; π_m; D) = \sum_{m=1}^{M} \sum_{k=1}^{N} \log \big( L_m(θ_m; (x_k, C_k)) \cdot π_m \big).
We monitor the convergence of the iterative procedure using the change in this total likelihood between iterations.
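The following sketch illustrates the EM loop above for a two-mode mixture (a log-degree mode and a uniform mode), using the same toy choice-data format as the earlier sketch. It runs a fixed number of iterations rather than monitoring the total likelihood, and is only meant to mirror steps 1-5, not to reproduce the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

def mode_likelihoods(alpha, choices):
    """Per-choice likelihoods under (1) the log-degree mode and (2) uniform."""
    L1, L2 = [], []
    for chosen_idx, degrees in choices:
        logits = alpha * np.log(degrees)
        L1.append(np.exp(logits[chosen_idx] - logsumexp(logits)))
        L2.append(1.0 / len(degrees))
    return np.array(L1), np.array(L2)

def em_two_mode(choices, n_iter=50):
    alpha, pi1 = 1.0, 0.5
    for _ in range(n_iter):
        # E-step: responsibility of the log-degree mode for each choice.
        L1, L2 = mode_likelihoods(alpha, choices)
        gamma1 = pi1 * L1 / (pi1 * L1 + (1 - pi1) * L2)
        # M-step: update the class probability and refit alpha with
        # each choice weighted by its responsibility.
        pi1 = gamma1.mean()
        def weighted_nll(a):
            nll = 0.0
            for g, (chosen_idx, degrees) in zip(gamma1, choices):
                logits = a * np.log(degrees)
                nll -= g * (logits[chosen_idx] - logsumexp(logits))
            return nll
        alpha = minimize_scalar(weighted_nll, bounds=(0.0, 5.0),
                                method="bounded").x
    return alpha, pi1

# Toy usage: three choice sets in the (chosen_idx, degrees) format.
toy = [(0, np.array([6.0, 1.0, 1.0])),
       (1, np.array([1.0, 1.0, 1.0])),
       (2, np.array([1.0, 2.0, 9.0]))]
print(em_two_mode(toy))
```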
Even though EM is theoretically an efficient estimator [82], there are cases when alternatives are appropriate. For example, if one has reasonable bounds or priors on the parameter values, then direct likelihood maximization could be used, and if the search space is low-dimensional, a grid search might be appropriate. Recent theoretical work has also developed algorithms for learning mixtures of two multinomial logit modes with theoretical guarantees assuming a separation between the modes [14].
Negative sampling. Every time an edge is formed by some node i, each node not yet connected to i is a candidate choice. For large sparse graphs, the full choice set of all nodes can become large and the gradients of the log-likelihood expensive to compute. To speed up this computation, s negative (non-chosen) examples can be sampled uniformly at random to create a (random) reduced dataset with smaller choice sets. For each choice (j, C), one forms a smaller random choice set C̃ ⊂ C consisting of the positive choice and the negative samples, with |C̃| = s + 1, and replaces the original choice data with (j, C̃). As long as the negative examples are sampled uniformly at random, parameter estimates on a dataset with negatively sampled choice sets are unbiased and consistent for the estimates on the full set [29,46,74]. Practically, there is a trade-off between feature computation and storage on the one hand, and the ability to estimate coefficients for rare features on the other.
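A minimal sketch of uniform negative sampling is below; the function name and record layout are ours.

```python
import random

def negative_sample(chosen, candidates, s, rng=random):
    """Keep the chosen node plus s non-chosen candidates drawn uniformly
    at random; returns the reduced choice set in shuffled order."""
    negatives = [v for v in candidates if v != chosen]
    sampled = rng.sample(negatives, min(s, len(negatives)))
    reduced = sampled + [chosen]
    rng.shuffle(reduced)
    return reduced

# Toy usage: keep the chosen node 7 plus 3 random non-chosen alternatives.
print(negative_sample(7, list(range(20)), s=3))
```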
Typical likelihood surface. In Figure 1 we show the representative likelihood surface of a copy model to illustrate its properties. We generated a synthetic graph on n = 10,000 nodes according to the copy model with m = 4 edges per node and degree-attachment probability π_1 = 0.5. We fit a two-mode mixed logit model to this data with u^{(1)}_j = α log d_{j,t} and u^{(2)}_j = 1, using s = 10 negative samples. There are two free parameters in this model: the degree exponent α and the mixture probability π_1. We plot the log-likelihood across a reasonable range of values to show that the surface is generally well behaved. We see that it is hard to distinguish data generated under a copy model (α = 1) with probability π_1 = 0.5 from data generated from no-mixture (π_1 = 0) preferential attachment with α = 0.5, and there is a general trade-off between the exponent α and the mixture probability π_1.
Model comparison and the likelihood-ratio test. Another advantage of our discrete choice framework is that we can employ standard statistical methods for model selection. Specifically, when one model is a special case of another, their relative quality can be compared using the likelihood ratio test. In the case of the conditional logit, a model with additional features can be compared to one without them because the latter is a special case of the former with the coefficients of the additional features being set to 0. Or, in the case of the mixed logit, one can define a model with multiple modes and manually set some of their class probabilities to zero.
As a concrete example, suppose we wanted to know whether including the age of a node in a preferential attachment model results in a statistically significantly better model. To do so, we would first estimate the parameters θ_1 of the more complex model, u^{(1)}_j = θ_{1,1} log(d_j) + θ_{1,2} log(age_j). We would then estimate the parameters θ_0 of the simpler model without the age term, u^{(0)}_j = θ_{0,1} log(d_j). Let L_1 and L_0 be the likelihoods of the two models with estimated parameters θ̂_1 and θ̂_0. We can compute the likelihood ratio λ = L_0/L_1. Under the null hypothesis of the simpler model, with some regularity conditions, −2 log λ is asymptotically distributed χ²_1 (more generally χ²_k, where k is the number of additional degrees of freedom in the more complex model).
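The corresponding computation is a one-liner with SciPy's chi-squared survival function; the log-likelihood values in the usage example are made up for illustration.

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_simple, loglik_complex, df):
    """P-value for the null that the simpler (nested) model suffices.
    df is the number of extra parameters in the complex model."""
    stat = -2.0 * (loglik_simple - loglik_complex)  # equals -2 log(lambda)
    return stat, chi2.sf(stat, df)

# Toy usage: adding log(age) improved the log-likelihood by 14.2 points.
stat, p = likelihood_ratio_test(-5230.7, -5216.5, df=1)
print(f"LR statistic = {stat:.1f}, p-value = {p:.2g}")
```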
APPLICATIONS
We now demonstrate how to use our conditional logit framework to analyze network formation processes. We first consider synthetic data and show how our tools can be used to better analyze preferential attachment mechanisms. We then analyze two empirical datasets that demonstrate how to integrate different structural features of the network or integrate node covariates. In both cases, our framework provides novel insights into the network formation processes. We provide code for processing data (converting edge lists to choice data) and for model fitting (with negative sampling), available here: https://github.com/janovergoor/choose2grow/.
Measuring preferential attachment
The question of whether and when preferential attachment is an important driver of network formation is widely debated [2,3,9,11,12,24,28,54,65,78]. Most prior research focuses on estimating the shape of the attachment kernel by comparing the degree of chosen nodes to the distribution of available degrees [30,54,62]. However, recent work by Pham et al. shows that previous measures are biased [58]. In particular, the bias comes from the assumption that the distribution of available nodes of varying degrees is constant throughout the formation process, but this distribution clearly changes as the network grows.
To estimate the exponent α of an attachment kernel, Pham et al. propose fitting something akin to a conditional logit with a separate coefficient for each degree, and then estimating α via a weighted least squares fit over the degree coefficients [58]. Compared to this method, fitting a log-degree logit directly is much simpler. In fact, it is the maximum likelihood estimator for α, and thus consistent and efficient.
Figure 2: Attachment kernel fits for a synthetic preferential attachment graph. The Newman measure computes the relative likelihood of selecting a node of that degree, as compared to the likelihood of selecting the lowest degree, but it is biased for higher degrees. The non-parametric logit is consistent but noisy for higher degrees.
To illustrate, we generate a graph with pure preferential attachment (n = 2,000, m = 1 edge per node, α = 1) and estimate the attachment kernel by the methods of Newman [54] and Pham et al. [58]. The maximum degree of this graph was 102, and the results of the different estimation procedures are shown in Figure 2. The non-parametric estimates are similar for lower degrees, but for higher degrees the Newman measure incorrectly drops, illustrating the bias that Pham et al. have previously documented. Fitting α directly using a log-degree conditional logit gives an estimate of α̂ = 0.987. The Pham et al. least squares fit, α̂_LS = 1.012, is close to the MLE but may deviate considerably in more difficult instances.
Disentangling preferential attachment from triadic closure
Many models exhibit similar outcomes to preferential attachment [11,24,28,36,52,78], but there are few principled ways to rigorously test the relative validity of these models. In this section, we show how to use the discrete choice framework to estimate the relative importance of preferential attachment while accounting for other dynamics. To this end, we generate data according to a known generative process and fit various (possibly mis-specified) formation models. Our generative process is a hybrid between the copy model of preferential attachment (i.e., choose nodes proportional to degree) and the Jackson-Rogers local search model (i.e., connecting to friends-of-friends). The process, which we call the (r, p)-model, is parametrized by r ∈ (0, 1] and p ∈ (0, 1]. When a new edge is formed, with probability p it is formed uniformly at random and with probability 1 − p it is formed with linear preferential attachment (α = 1). Meanwhile, the choice set is determined by the second parameter r: with probability r, the choice set is all nodes not yet connected to i, while with probability 1 − r, the choice set is limited to available friends-of-friends of i. With r = 1 this model reduces to the copy model and with p = 1 it reduces to the simplified local search model; the (r, p)-model thus subsumes two popular models in a single, simple discrete choice framework. For a growth process on directed graphs, it is necessary that p > 0 and r > 0, otherwise new nodes will never be selected. With this general model, we investigate how estimating parameters of one of the more specific models goes awry when the true data generating process in fact comes from an instance of the more general model.
For a range of values of p and r, we generated graphs using the following growth process. New nodes arrive, each creating m = 4 edges. For every edge, we sample the mode of the model (according to r and p) independently. If an edge is supposed to be a friend-of-friend edge, but no friends-of-friends are available (for example, for i's first edge), then the process reverts to uniformly random formation across the full node set. Sweeping through combinations of p and r parameter values, for each set of parameters we generated 10 undirected graphs with n = 20,000 nodes each.
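For concreteness, the following is a sketch of one way to implement this growth process with networkx; the fallback to a uniform draw when no friends-of-friends exist follows the rule stated above, while the seed graph and other details are our own assumptions.

```python
import random
import networkx as nx

def grow_rp_graph(n, m, r, p, seed=0):
    """Grow an undirected graph under the (r, p)-model sketched in the text:
    each arriving node forms m edges; per edge, the choice set is all nodes
    (prob. r) or friends-of-friends (prob. 1 - r), and the target is chosen
    uniformly (prob. p) or proportional to degree (prob. 1 - p)."""
    rng = random.Random(seed)
    G = nx.Graph()
    G.add_edge(0, 1)  # small seed graph
    for new in range(2, n):
        G.add_node(new)
        for _ in range(m):
            everyone = [v for v in G.nodes if v != new and not G.has_edge(new, v)]
            if not everyone:
                break
            if rng.random() < r:
                candidates = everyone
            else:
                fofs = {w for f in G.neighbors(new) for w in G.neighbors(f)}
                candidates = [v for v in fofs
                              if v != new and not G.has_edge(new, v)]
                if not candidates:
                    # No friends-of-friends available: revert to a uniform
                    # draw over the full node set, as described in the text.
                    G.add_edge(new, rng.choice(everyone))
                    continue
            if rng.random() < p:
                target = rng.choice(candidates)
            else:  # linear preferential attachment within the choice set
                weights = [G.degree(v) for v in candidates]
                target = rng.choices(candidates, weights=weights, k=1)[0]
            G.add_edge(new, target)
    return G

G = grow_rp_graph(n=2000, m=4, r=0.5, p=0.5)
print(G.number_of_nodes(), G.number_of_edges())
```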
Degree distributions. The local search and copy models both produce graphs with power-law degree distributions. Therefore, fitting a mis-specified model on a degree distribution can lead to misleading results. To illustrate, we fit a power-law distribution p(x) ∝ x^{−γ} to the degree distribution of graphs generated from (r, p)-models using maximum likelihood estimation [16], with estimates for γ in Figure 3. In theory, an undirected graph formed with the copy model process with probability parameter p leads to a degree distribution with power law exponent γ = (3 − p)/(1 − p) [8,52] (for directed graphs, γ = (2 − p)/(1 − p)). As p increases, the degree distribution looks more like a random graph without preferential attachment. However, as r goes down (increasing the relative role of friend-of-friends), the parameter estimate looks like the estimates for the copy model, even when p = 1.
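As a sketch of this kind of degree-distribution fit, one could use the third-party powerlaw package, which implements the maximum likelihood estimator of [16]; the package choice and the Barabási-Albert test graph are our own assumptions, not part of the original analysis.

```python
import networkx as nx
import powerlaw  # third-party package implementing the MLE fit of Clauset et al. [16]

def fit_degree_exponent(G):
    """Maximum likelihood power-law exponent for G's degree distribution,
    dropping isolated nodes."""
    degrees = [d for _, d in G.degree() if d > 0]
    fit = powerlaw.Fit(degrees, discrete=True)
    return fit.power_law.alpha, fit.power_law.xmin

# Toy check on a Barabasi-Albert graph, whose exponent should be near 3.
gamma_hat, xmin = fit_degree_exponent(nx.barabasi_albert_graph(20000, 4, seed=0))
print(f"estimated exponent {gamma_hat:.2f} (xmin = {xmin})")
```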
To summarize, it is not recommended to estimate a formation model from an observed degree distribution. The parameter estimates are sensitive to small deviations in the generative process.
Figure 4: The log-likelihood of varying the class probabilities of the copy model (r = 1, p free) or the local search model (r free, p = 1) for two different synthetic graphs. In both cases the true model is the most likely. On the left we see a large difference in the log-likelihood between optima, while on the right we see a smaller difference. In both cases a likelihood ratio test is highly significant (P-values < 10^{−16}).
Comparing the likelihoods of the fitted models directly is more reliable, which we illustrate with two cases. As a first case, we generate graphs with r = 0.5 and p = 1, so half the edges are formed to friends-of-friends with no utility from degree. The likelihood under a local search model (r free, p = 1) as a mixed logit is maximized at r = 0.45, while for the copy model (r = 1, p free) it is maximized at p = 0.54. The former is a much better fit than the latter (P-value < 10^{−16}), and the copy model erroneously concludes that preferential attachment is driving 45% of the edges. As a second case, we look at a graph generated with r = 1 and p = 0.5, so half the edges are due to preferential attachment, and friend-of-friending plays no role. In this case, both models are correctly maximized at their respective values. Again, the correct model has a higher likelihood (P-value < 10^{−16}).
Choosing to follow on Flickr
We now apply our framework to examine a real-world network formation dataset capturing the growth of the Flickr social network. We find that incorporating a Friend-of-Friend feature beyond preferential attachment and link-reciprocation features substantially improves both likelihood and test accuracy and furthermore that the inclusion of this feature significantly reduces preference for degree-based attachment. However, omitting preferential attachment entirely leads to a worse model. We also find a preference for nodes with zero degree over low degree nodes. This hints that such nodes play a special role in the network formation process, even though they would be ignored in preferential attachment models.
Data. We use a scrape of the Flickr social network collected daily between October 2006 and May 2007 [50,51]. Users of Flickr can choose to follow other users and the "following" (but not the "followed by") connections are publicly accessible. The data was gathered using a breadth-first search crawl, which means that only the connected components reachable from the seed profiles are represented in the data. Since a full crawl was performed daily, the timing of new edges can be identified at the granularity of a day. The graph contains 3.2 million nodes and 33.1 million edges.
As described in the original papers, this data is consistent with both preferential attachment, as inferred from the in-degree distribution, and local search, as inferred from the over-representation of edges to nodes that are close to the linking node [50]. Fitting a power law to the distribution of in-degrees gives γ̂ = 1.741, which would indicate super-linear preferential attachment. We can test the relative importance of triadic closure by fitting a Jackson-Rogers model using the degree distribution matching procedure described in [28]. This results in r̂ = 0.252, estimating that three out of four edges are formed through triadic closure.
Discrete choice analysis. We fit a series of conditional logit models to further investigate the network formation process. We isolated a sample of 20,000 edge formation events occurring around the same date, to avoid time heterogeneity affecting the estimates. We fit several models, displayed in Table 3. Not-chosen alternatives are negatively sampled with s = 24. We log-transform in-degree (representing the number of followers), but in order to account for nodes with degree zero, we add a "has degree" feature for having a positive degree and use a modified version of log that returns 0 for input 0. In the first column, we fit a model using just these two degree-related features, and a reciprocity feature capturing whether the target node is already following the chooser. Reciprocity is a common phenomenon, with 60% of edges being followed back [50]. The estimate α̂ (the coefficient for "log Followers") for this model is significantly larger than 1, again consistent with super-linear preferential attachment.
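A sketch of how these degree-related features might be constructed is below; the column names and data layout are ours, not those of the Flickr pipeline.

```python
import numpy as np
import pandas as pd

def log0(x):
    """Modified log used above: log(x) for x > 0 and 0 for x = 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, np.log(np.maximum(x, 1e-12)), 0.0)

def candidate_features(df):
    """df has one row per (choice event, candidate) with raw columns
    'followers' (candidate in-degree) and 'follows_chooser' (0/1)."""
    out = df.copy()
    out["has_degree"] = (out["followers"] > 0).astype(int)
    out["log_followers"] = log0(out["followers"])
    out["reciprocal"] = out["follows_chooser"].astype(int)
    return out

# Toy usage: three candidates in a single choice set.
toy = pd.DataFrame({"followers": [0, 3, 250], "follows_chooser": [0, 1, 0]})
print(candidate_features(toy))
```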
In the second model, we test the effect of the target node being a friend-of-friend of the choosing node. In the case of Flickr, this means that the choosing user already follows someone that follows the target user, which evidently is strongly correlated with following that user. However, combining these two features in a third model (column 3) leads to both estimated parameters dropping substantially. Most remarkable is the 40% drop in the estimate of α, which paints a very different picture about the role of degree.
In the fourth model, we measure network proximity as in the original paper, by counting the number of "hops" (path length) from i to the target before an edge was made. We integrate the hops as categorical variables to show the relative impact of each additional "hop". Being two hops away is equivalent to being a friend-of-friend, and thus has strongly positive coefficient. Every additional hop corresponds to a sharp decrease in choosing that node. Being five hops away is slightly worse than there not being a path at all. This could be an artifact of the way the data was gathered, so that new regions of the graph only get "discovered" when there is at least one link to them, or this could be due to path length not being an accurate measure of distance for newer nodes. Since the number of hops is co-linear with being a friend-of-friend, we can't test them both at the same time.
In Figure 5 we visually show the effect of different specifications on the estimate of α. The first model of the Flickr data looks like super-linear preferential attachment, while the role of degree in the other two is significantly reduced. However, fitting a non-parametric model shows that the estimated coefficients for individual degrees are remarkably linear, suggesting that the functional form of d_j^α is a good fit for this network. One important point is the role of zero-degree nodes. In most descriptions of preferential attachment, nodes with degree zero are not considered. However, in the Flickr data set, zero-degree nodes have a higher utility than nodes with low positive degree, which could again be an artifact of the data collection process, or point to the special role of new nodes in the network. Either way, our framework allows one to find these kinds of patterns, and investigate them further.
Choosing to cite
We now turn to citation network data to show how a discrete choice framework facilitates the testing of network formation hypotheses. Previous analyses of citation networks have observed linear preferential attachment with respect to degree [62] and bias towards citing more recent work [62]. Here, we find consistent results that older papers are less likely to be cited but that accounting for age actually increases the importance of degree (i.e., after accounting for age, higher degree nodes are more likely to be cited).
Figure 5: The probability of being chosen by degree, as compared to a node with degree 1. We show the fits of parametric (lines) and non-parametric (points) conditional logit models of the Flickr and citation networks. The legend references model numbers in Table 3 and Table 4. The estimate for degree 0 is inserted for comparison. Dashed reference lines illustrate what exact linear preferential attachment (α = 1) would look like.
Data. We use the Microsoft Academic Graph dataset and focus on a representative subgraph of 459,000 "Climatology" papers. We focus on the subgraph of a single field to simplify the analysis since citations are predominantly within the same field of study (our analysis was similar on other subgraphs). We construct a graph out of this data by adding an edge each time a paper in our dataset cites another paper in our dataset. For our analysis of Climatology publications, 45% of edges are within the domain and citations to papers that are not labeled are excluded, leaving 3 million edges. We sample 10,000 citation events uniformly at random from papers published after 2010 and apply negative sampling (s = 24). This processing results in 10,000 choices with 25 alternatives in each choice set. For each possible choice, we compute four features: the number of citations at the time of citation, whether the paper shares authors with the citing paper, the age of the paper in years at the time of citation, and the maximum number of publications by any one of the authors at the time of publication. This last feature is a proxy for node fitness [11].
Discrete choice analysis. We fit conditional logit choice models relating these features to the likelihood of citation (Table 4). The first model (first column) is a simple log-degree model. We find that the estimate α̂ (the coefficient for "log Citations") is substantially lower than one, consistent with sub-linear preferential attachment. Apart from the log-likelihood of the models, we also report the predictive accuracy (defined as the share of instances predicted correctly) on a holdout test set of 2,000 examples. Just relying on prior degree already gives an accuracy of 36%, which is high for a classification task with 25 classes. In model two (second column), we add a covariate for whether a paper shares an author with the citing paper. As expected, this has a strongly positive coefficient. For the third model we add a covariate for the age of the paper in log years (years is always at least one). Older papers are less likely to get cited (accounting for degree), but accounting for age increases the relative importance of degree significantly. This expanded model also increases the accuracy to 53%, indicating that these feature weights do capture substantially more predictive power. Finally, in model four we add the "max papers by authors" feature as a proxy for fitness. The coefficient is small but positive. Accounting for fitness slightly reduces the estimated relative importance of degree, but the α̂ estimate is still close to 1. Adding this feature does not improve the log-likelihood or predictive accuracy; a better proxy for fitness may explain the data better.
Looking back to the visual display of α for the citation models in Figure 5, the non-parametric coefficients are highly linear. In this data, zero-degree nodes are significantly less attractive than nodes with degree one. As with any regression, identifying causal effects from model fits depends on the design of the study. The estimates we provide here, as is the case with most analyses of observational data, are descriptive and not meant to describe causal processes. The point is that discrete choice models provide a flexible framework to easily test and compare different hypotheses around network formation.
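For completeness, here is a sketch of how the holdout accuracy could be computed from fitted coefficients; the feature layout and the coefficient values in the toy example are illustrative only.

```python
import numpy as np

def choice_accuracy(theta, holdout):
    """holdout: list of (chosen_idx, X) where X is a (num alternatives x
    num features) array; theta is the fitted coefficient vector. Predict
    the alternative with the highest utility and count exact matches."""
    theta = np.asarray(theta)
    hits = 0
    for chosen_idx, X in holdout:
        predicted = int(np.argmax(X @ theta))
        hits += (predicted == chosen_idx)
    return hits / len(holdout)

# Toy usage: two choice sets with features [log citations, shares author].
theta_hat = np.array([0.96, 2.0])
holdout = [(1, np.array([[0.0, 0.0], [3.2, 1.0], [1.1, 0.0]])),
           (0, np.array([[4.0, 0.0], [0.5, 0.0]]))]
print(choice_accuracy(theta_hat, holdout))
```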
DISCUSSION
When modeling network formation, the majority of the literature analyzes networks that grow "externally, " with new nodes arriving and choosing who to connect to, and this setting has also been our main focus here. External growth leads to convenient models that are relatively easy to analyze, with citation networks and patent networks as examples of empirical networks that follow this generative process reasonably closely. However, in many (especially social) networks, pairs of older nodes often form edges as well, edges that are "internal" to the existing set of nodes. An extreme example is the social networks of schools or classrooms, which have a fixed node population and "grow" purely through an internal growth process. A major advantage of modeling network formation as discrete choice is that it does not require any model of edge event initiation and simply conditions on the sequence of decisions to initiate, focusing the modeling on the choices made by the initiator. Discrete choice can therefore easily be used to model internal growth as well.
Another major advantage of discrete choice modeling is that it connects the analysis of large-scale network datasets to statistical methods (fitting generalized linear models) that are tremendously scalable. As we show in this work, additional techniques (e.g., negative sampling) makes it possible to efficiently scale the estimation process to very large network datasets.
Since the conditional logit model of discrete choice is a random utility model, the estimated parameters can be interpreted as the marginal utility of each feature. This allows one to question the functional form of features. For example, we show that preferential attachment is equivalent to the logarithmic utility of degree. Given that degree is commonly heavy-tailed, this is a natural functional form, but we point out that the conditional logit allows one to flexibly compare different specifications.
Our discrete choice perspective has implications for how network data is best collected and analyzed. It is useful to consider and record notions of directionality, even if edges can otherwise be considered to be undirected. With information about the choice set associated with each choice, we can see what each node j looked like at the time the choice was made. Datasets that record the exact time of all edge formation events, as opposed to lumping edge events at the granularity of days or years, make it possible to further analyze the formation process in more detail.
There are a couple limitations to our proposed methodology. First, we cannot model purely undirected edges without some notion of direction. Second, even though the conditional logit and mixed logit models allow one to model similar mechanisms, the interpretations of their estimates are different. The estimates of a conditional logit are more akin to those of a linear regression model, where one estimates the expected change in an outcome from varying a covariate. A mixture model is a probabilistic combination of constituent modes, so the class probabilities indicate the relative importance to each mode, which makes it harder to compare the roles of individual features within or across modes. However, many traditional models of network formation are equivalent to mixture models, which motivated our consideration of them in this work.
By making foundational connections between network formation and discrete choice, we are hopeful that many further tools from discrete choice theory can be applied to the study of network formation. For example, there can be bias in network formation, e.g., men are more likely to cite themselves than women [34]. Our discrete choice framework can help study these cases more rigorously. For another example, discrete choice models of subset selection [5,20] could be applied to understand possible substitution and complementarity effects in network formation. And discrete choice interpretations of machine learning embeddings techniques [64] can likely help unpack the behavior of recent embedding-based network representation methods such as DeepWalk [57]. Networks fundamentally represent interactions between discrete entities, and it is therefore natural that methods for modeling and analyzing discrete choice should enable many contributions.
| 8,860 |
1811.05008
|
2963321544
|
We provide a framework for modeling social network formation through conditional multinomial logit models from discrete choice and random utility theory, in which each new edge is viewed as a “choice” made by a node to connect to another node, based on (generic) features of the other nodes available to make a connection. This perspective on network formation unifies existing models such as preferential attachment, triadic closure, and node fitness, which are all special cases, and thereby provides a flexible means for conceptualizing, estimating, and comparing models. The lens of discrete choice theory also provides several new tools for analyzing social network formation; for example, the significance of node features can be evaluated in a statistically rigorous manner, and mixtures of existing models can be estimated by adapting known expectation-maximization algorithms. We demonstrate the flexibility of our framework through examples that analyze a number of synthetic and real-world datasets. For example, we provide rigorous methods for estimating preferential attachment models and show how to separate the effects of preferential attachment and triadic closure. Non-parametric estimates of the importance of degree show a highly linear trend, and we expose the importance of looking carefully at nodes with degree zero. Examining the formation of a large citation graph, we find evidence for an increased role of degree when accounting for age.
|
Estimating the parameters that drive edge formation is different from identifying the factors that could have led to the observed graph. There is a vast literature on pursuing the latter question by estimating a logit model with maximum likelihood, called exponential random graph models (ERGMs) @cite_46 @cite_42 @cite_54 . However, these models do not consider individual edge events, are hard to estimate, and have known pathologies @cite_2 @cite_68 .
|
{
"abstract": [
"This article provides an introductory summary to the formulation and application of exponential random graph models for social networks. The possible ties among nodes of a network are regarded as random variables, and assumptions about dependencies among these random tie variables determine the general form of the exponential random graph model for the network. Examples of different dependence assumptions and their associated models are given, including Bernoulli, dyad-independent and Markov random graph models. The incorporation of actor attributes in social selection models is also reviewed. Newer, more complex dependence assumptions are briefly outlined. Estimation procedures are discussed, including new methods for Monte Carlo maximum likelihood estimation. We foreshadow the discussion taken up in other papers in this special edition: that the homogeneous Markov random graph models of Frank and Strauss [Frank, O., Strauss, D., 1986. Markov graphs. Journal of the American Statistical Association 81, 832–842] are not appropriate for many observed networks, whereas the new model specifications of [Snijders, T.A.B., Pattison, P., Robins, G.L., Handock, M. New specifications for exponential random graph",
"Biological, sociological, and technological network data are often analyzed by using simple summary statistics, such as the observed degree distribution, and nonparametric bootstrap procedures to provide an adequate null distribution for testing hypotheses about the network. In this article we present a full-likelihood approach that allows us to estimate parameters for general models of network growth that can be expressed in terms of recursion relations. To handle larger networks we have developed an importance sampling scheme that allows us to approximate the likelihood and draw inference about the network and how it has been generated, estimate the parameters in the model, and perform parametric bootstrap analysis of network data. We illustrate the power of this approach by estimating growth parameters for the Caenorhabditis elegans protein interaction network.",
"We introduce a method for the theoretical analysis of exponential random graph models. The method is based on a large-deviations approximation to the normalizing constant shown to be consistent using theory developed by Chatterjee and Varadhan [European J. Combin. 32 (2011) 1000-1017]. The theory explains a host of difficulties encountered by applied workers: many distinct models have essentially the same MLE, rendering the problems practically'' ill-posed. We give the first rigorous proofs of degeneracy'' observed in these models. Here, almost all graphs have essentially no edges or are essentially complete. We supplement recent work of Bhamidi, Bresler and Sly [2008 IEEE 49th Annual IEEE Symposium on Foundations of Computer Science (FOCS) (2008) 803-812 IEEE] showing that for many models, the extra sufficient statistics are useless: most realizations look like the results of a simple Erd o s-R ' e nyi model. We also find classes of models where the limiting graphs differ from Erd o s-R ' e nyi graphs. A limitation of our approach, inherited from the limitation of graph limit theory, is that it works only for dense graphs.",
"Spanning nearly sixty years of research, statistical network analysis has passed through (at least) two generations of researchers and models. Beginning in the late 1930's, the first generation of research dealt with the distribution of various network statistics, under a variety of null models. The second generation, beginning in the 1970's and continuing into the 1980's, concerned models, usually for probabilities of relational ties among very small subsets of actors, in which various simple substantive tendencies were parameterized. Much of this research, most of which utilized log linear models, first appeared in applied statistics publications.",
"The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM’s expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses."
],
"cite_N": [
"@cite_54",
"@cite_42",
"@cite_2",
"@cite_46",
"@cite_68"
],
"mid": [
"2160268549",
"2051196362",
"1977990557",
"2076844992",
"2143270803"
]
}
|
Choosing to Grow a Graph: Modeling Network Formation as Discrete Choice
|
Understanding how networks form and evolve is an essential component of understanding their structure, which in turn underlies the basis for understanding the broad range of processes that occur on networks. Models of social network formation can largely be decomposed into node formation and edge formation. In this work, we argue that edge formation can be effectively modeled as a choice made by an actor (or actors) in the network to instantiate a connection to another node. The diverse research on network formation has led to many models and mechanisms of edge formation, including preferential attachment [2], uniform attachment [12], triadic closure [31], random walks [65,78], homophily [55], copying edges from existing nodes [35,39], latent space structures [22,41,55], inherent node fitness [7,11], and combinations of all of these [28,40,43]. Here, we frame edge formation as a discrete choice process and derive a family of discrete choice models [47,74] that subsume a wide range of existing models in a unified framework and also naturally opens up a host of powerful extensions.
Discrete choice models are commonly employed in economics, social psychology, and statistics as a way to model how individuals make choices from a slate of discrete alternatives [1]. Typically, the alternatives have associated features, and statistical models of discrete choice make it possible to estimate the relative importance of such features. Such models have been used to answer questions such as how consumers choose goods [67], how people choose where they live [46], how students choose what college to attend [21], and how commuters choose between different modes of transportation [75]. Discrete choice analysis is also used to understand how choices vary depending on the context in which they are framed: in online commerce, this could be how web layouts lead to different purchasing priorities [26]; for choosing colleges, this could be incorporating the effect of the national economy. In this paper, we demonstrate how discrete choice models can similarly help us understand the factors driving social network evolution.
The starting point for the present work is the observation that edge formation events in social networks are naturally viewed as discrete choices. For simplicity, consider a directed graph where edges are formed one by one, where we can think of the formation of a directed edge (i, j) as i "choosing" to connect with j, where the set of alternatives available to i is the set of all other nodes. (While undirected graph models are common in social network analysis, the underlying formation procedure is almost always asymmetric. For example, the Facebook friendship graph is typically modeled as an undirected graph [77], but the friendships are proposed by one of the two nodes in an edge.) The key modeling question is easy to state: why did i choose j? This question has long been the informal subject of network formation modeling and at the same time the exact question that discrete choice models and analysis have been designed to answer. However, up to this point, network formation models have largely been decoupled from discrete choice theory.
In employing discrete choice analysis, we focus on the conditional multinomial logit model, commonly called the conditional logit model for short, which is a foundational workhorse of discrete choice modeling. The model belongs to the family of random utility models, where choices are interpretable as those of a rational actor selecting the alternative with the largest "utility" sampled from random variables that decompose into the inherent utility of the alternative and a noise term. With the conditional logit model, we can use existing optimization routines to estimate model parameters and existing statistical methods to asses the uncertainty of the estimates. Discrete choice models can also easily restrict the set of available alternatives, where it might not be reasonable to assume that the entire set of nodes is available for friendship. For example, sometimes only "friends of friends" are considered [24,28,40].
In this paper, we first show that many popular network formation mechanisms can be rewritten as conditional logit models, including preferential attachment, uniform attachment, node fitness, latent space models, and models of homophily. However, the real power of discrete choice models for social network analysis is the ability to combine different features (e.g., node degree and node age), as well as different mechanisms (e.g., triadic closure and preferential attachment) and estimate their relative roles. Social networks are enormously varied in their structure [27], but existing methods often do a poor job at modeling this diversity. Thus, beyond unifying the network formation and discrete choice literature, we also develop several new tools for social network analysis. For example, we show how to estimate models to distinguish the effects of preferential attachment and triadic closure. We demonstrate these tools by analyzing the formation of the Flickr social network and the formation of a citation network. We find on Flickr that accounting for triadic closure greatly reduces the estimated role of degree in choosing who to connect to, and that nodes with degree zero have a remarkably high utility. Our estimates of preferential attachment in the citation network are similar to those observed in prior studies. When accounting for the age of a paper, we find evidence for linear preferential attachment. However, for a fixed degree, we find that age is negatively correlated with the likelihood of a new citation (i.e., older papers are less likely to be cited).
The key assumption underlying our framework is that the available data actually captures edge formation events (either through edge timestamps or other sequential information). In contrast, many existing approaches to understanding network formation focus on observing only the structural properties of a network at a single point of observation, e.g., its degree distribution, and initiating a deductive process to try and understand how variations in edge formation would lead to different outcomes [2,7,28,43]. This approach leads to tidy analyses and easy-to-characterize asymptotic properties, but model selection in this context is strongly dependent on what properties are compared. Different underlying formation processes can lead to graphs with indistinguishable properties. For example, many different formation processes result in the same heavy-tailed degree distributions [52]. Thus, when "fitting" outcome measurements in this way, one has to know (or posit), e.g., the relative rates of node formation and edge formation. However, when temporal or sequential data is available [25,56], our framework overcomes these limitations by incorporating this structure.
Additional related work. There is a strong connection between our work and work on link prediction and missing data methods using network features to predict edges [15,42]. A network formation model implicitly makes claims about what edges are most likely to form next, and thus can be evaluated by the same metrics as link prediction algorithms [44]. We use predictive accuracy as a measure of goodness of fit, but our primary concern is interpretability of the model and estimates, which is one of the advantages of the conditional logit model.
In sociology, stochastic actor-oriented models (SAOMs) employ a similar logit choice [69,70]; however, these models are targeted towards data collected as a few snapshots rather than edge-by-edge formation. SAOMs also model the rate at which nodes form new relationships, whereas we condition on the node initiating the new edge, providing better estimates of model parameters. There are also sociological models such as relational event models [10] and dynamic network actor models [71] that use fine-grained temporal information, yet these also do not condition on the initiator node as we do. While these sociological models can incorporate notions of network formation (e.g., preferential attachment), our conditional logit framework actually cleanly subsumes a wide range of models as special cases.
Finally, estimating the parameters that drive edge formation is different from identifying the factors that could have led to the observed graph. The latter question is often pursued with so-called exponential random graph models (ERGMs) [63,79,81]. However, these models do not consider individual edge events, are hard to estimate, and have known pathologies [13,66].
DISCRETE CHOICE AND EDGE FORMATION
We now develop network formation through the lens of discrete choice. Throughout this paper, we assume that the networks are directed. Again, while undirected graphs are common in social network analysis, the actual edge formation process often has directed initiation. In the common setting of "growing graphs, " nodes arrive one at a time and form edges when arriving in a network. In these cases, the newly arriving node is considered to be the node initiating the connection; such analysis is standard with, e.g., classical preferential attachment models [2]. When modeling the directed formation of an edge (i, j), two processes need to be distinguished, roughly corresponding to the questions "who is i?" (the chooser) and "who is j?" (the chosen). In this paper, we focus on understanding the latter, i.e., the formation of (i, j) as the selection of j conditional on knowing that i is ready to form an edge. Thus, our discrete choice models of edge formation can be readily estimated from data that implicitly or explicitly contains a record of initiating i nodes and used for subsequent analysis, as we show in Sections 3 and 4. Beyond the scope of this work, our model of "j conditional on i" can be paired with a model of "initiations by i" for a full generative model of network formation.
Edge formation as discrete choice
With the above formalisms in place, we now develop network formation from a discrete choice perspective. We begin by showing how several well-known models can be conveniently expressed as conditional logit models, with a summary given in Table 1. All models are designed to grow simple graphs (i.e., without multi-edges), and the choice set C excludes any nodes to which the chooser i is already connected. Every item is represented by its features that, importantly, can evolve over time. The features x j,t of node j at time t are thus always time-indexed, but we often suppress the t to reduce notational clutter.
Preferential attachment. We start with the generalized Barabási-Albert model [2,8,36], also known as the generalized Price model [59], one of the most studied models in the network formation literature. It is typically stated as a growth model of a time-evolving graph G t = (V t , E t ), t = 1, 2, 3, . . ., and when a new node arrives it connects to m distinct existing nodes j with a probability proportional to a power of their degree d j,t at time t,
P(j, V_t) = \frac{d_{j,t}^{α}}{\sum_{ℓ ∈ V_t} d_{ℓ,t}^{α}}.   (2)
The exponent parameter α controls the relative importance of degree [36]. The case where α = 1 is called linear preferential attachment, and produces networks that can mimic a range of structural properties observed in empirical networks. If we represent each potential neighbor j with the time-indexed one-dimensional "feature vector" x_{j,t} = log d_{j,t} and employ a conditional logit model as in Equation (1), we obtain a utility of j for i at time t of u_{i,j,t} = θ log d_{j,t}. Here the choice model parameter θ plays the exact role of α, since e^{θ log d_{j,t}} = d_{j,t}^{θ}.
Table 1: Network formation models framed as utility functions for a conditional logit. Where appropriate, we use the traditional notation for the parameters of each process (process: utility u_{i,j}; choice set C).
Uniform attachment [12]: u_{i,j} = 1; C = V
Preferential attachment [2,36]: u_{i,j} = α log d_j; C = V
Non-parametric PA [54,58,62]: u_{i,j} = θ_{d_j}; C = V
Triadic closure [61]: u_{i,j} = 1; C = {j : FoF_{i,j}}
FoF attachment [31,65,78]: u_{i,j} = α log η_{i,j}; C = V
PA, FoFs only: u_{i,j} = α log d_j; C = {j : FoF_{i,j}}
Individual node fitness [11]: u_{i,j} = θ_j; C = V
PA with fitness [6,53]: u_{i,j} = α log d_j + θ_j; C = V
Latent space [22,41,55]: u_{i,j} = β · d(i, j); C = V
Stochastic block model [33]: u_{i,j} = ω_{g_i,g_j}; C = V
Homophily [48]: u_{i,j} = h · 1{g_i = g_j}; C = V
Given a growing network G_t, we can construct a choice dataset D from this network by extracting the node j_t, node sets V_t, and degree sequence (d_{1,t}, ..., d_{|V_t|,t}) at each time-step. The preferential attachment model has only one parameter, θ = α. The log-likelihood for that parameter given a dataset is then:
l(α; D) = \sum_{(j,C) ∈ D} \log \frac{\exp(α \log d_j)}{\sum_{ℓ ∈ C} \exp(α \log d_ℓ)} = \sum_{(j,C) ∈ D} \Big[ α \log d_j − \log \sum_{ℓ ∈ C} \exp(α \log d_ℓ) \Big].
We've suppressed the time-index t from the features log d ℓ to reduce clutter, but emphasize that d ℓ is the degree at the time of the choice.
Non-parametric preferential attachment. The above model assumes an attachment kernel of a particular parametric form. From a discrete choice perspective, one can also estimate the role of degree in edge formation non-parametrically by estimating a coefficient θ k for each degree k = 0, . . . , n − 1 individually. This approach has the added benefit of being able to assign positive probability to choosing nodes with degree zero. Under this model, the log-likelihood of the parameters θ = (θ 0 , ..., θ n−1 ) given the dataset is:
l(θ; D) = \sum_{(j,C) ∈ D} \log \frac{\exp(θ_{d_j})}{\sum_{ℓ ∈ C} \exp(θ_{d_ℓ})} = \sum_{(j,C) ∈ D} \Big[ θ_{d_j} − \log \sum_{ℓ ∈ C} \exp(θ_{d_ℓ}) \Big].
Again we've suppressed time-indexing to simplify the presentation. Pham et al. [58] previously described a version of the above likelihood as a means of measuring the attachment kernel using maximum likelihood, albeit without making the connection to discrete choice.
Uniform attachment. A simple edge formation model is to sample a new neighbor uniformly at random from all nodes [12]. There are no parameters in this model, but we can still write down the likelihood of the model given a dataset, which will be useful when
we later combine this model with others within a mixture model:
l(D) = \sum_{(j,C) ∈ D} \log \frac{\exp(1)}{\sum_{ℓ ∈ C} \exp(1)} = \sum_{(j,C) ∈ D} − \log |C|.
Triadic closure. A variant of uniform attachment is for i to attach to new neighbors uniformly at random from the set of their friends-of-friends, as opposed to the set of all nodes. This process effectively models triadic closure [61]. It has the same simple functional form of the uniform model, but now the choice set C varies with each choice, namely, the choice set is restricted to be only the friends of friends of node i (the chooser) to which i is not already connected. This change in choice set can also be achieved by assuming the utility of j to i at time t is u_{i,j,t} = log(1{FoF_{i,j,t}}), where 1{FoF_{i,j,t}} is a boolean indicating whether i and j are friends of friends at time t, and then letting the choice set revert to the full node set. An additional model that naturally combines the ideas of preferential attachment and befriending friends-of-friends takes the number of friends in common between i and j as a feature. We could define this feature as η_{i,j,t} = |{k : e_{i,k,t} ∧ e_{k,j,t}}|, where e_{i,k,t} indicates whether there is an edge between i and k at time t. The corresponding utility would be u_{i,j,t} = α log η_{i,j,t}. This model is similar (but not equivalent) to random walk-based formation models [31,65,78], which emphasize formation within a local neighborhood.
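To illustrate, the following sketch computes the friend-of-friend indicator and the common-neighbor count η_{i,j} for a candidate edge with networkx; the function name and toy graph are ours.

```python
import networkx as nx

def fof_features(G, i, j):
    """For a directed graph G and a prospective edge (i, j), return the
    friend-of-friend indicator 1{FoF_ij} and the common-neighbor count
    eta_ij = |{k : (i, k) and (k, j) are both edges}|."""
    common = [k for k in G.successors(i) if G.has_edge(k, j)]
    return int(len(common) > 0), len(common)

# Toy usage: 0 follows 1 and 2, and both follow 3, so 3 is a friend-of-friend.
G = nx.DiGraph([(0, 1), (0, 2), (1, 3), (2, 3)])
print(fof_features(G, 0, 3))  # (1, 2)
print(fof_features(G, 0, 1))  # (0, 0): 1 is a direct neighbor, not a FoF here
```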
Node fitness. Another line of formation models that is subsumed by the discrete choice framework is the class of models involving node fitness. In these models, nodes choose to connect to others based on some intrinsic latent fitness score. Certain distributions of fitness values lead to a scale-free degree distribution [11], providing an alternative explanation to preferential attachment for such degree distributions. We can express the node fitness model as a conditional logit model with a separate fixed effect θ_j for each node j (so the feature of a node is an indicator vector of its identity). The likelihood of the fitness parameters θ given the data is then:
l(θ; D) = \sum_{(j,C) ∈ D} \log \frac{\exp(θ_j)}{\sum_{ℓ ∈ C} \exp(θ_ℓ)} = \sum_{(j,C) ∈ D} \Big[ θ_j − \log \sum_{ℓ ∈ C} \exp(θ_ℓ) \Big].
This formation model is equivalent to the classic Bradley-Terry-Luce model of discrete choice for estimating the quality of alternatives [45]. Alternatively, one could replace the individual fixed effects with surrogate features of node fitness such as an auxiliary measure of gregariousness (in the case of social networks), or the impact factor of a paper's journal (in the case of citation networks).
A related model proposes selection probabilities proportional to the product of node fitness and degree [6,53]. This model can be written as a conditional logit model with u_{i,j,t} = α log d_{j,t} + θ_j.
Latent space models. Another class of network formation models postulates the existence of a latent space that drives connections between nodes. Examples of latent spaces include Euclidean space [22], hyperbolic space [37], a tree [41], a circle [55], or a set of discrete classes [23]. While the conditional logit model in the form that we describe it does not facilitate finding the best-fitting latent space assignment to explain the data, it can be used to estimate the relative importance of a known latent space given a distance function d(i, j). As one example from the family of latent space models, in the community-guided attachment (CGA) model [41] all nodes have a distance derived from the height h(i, j) of common parents in a latent tree structure situating all nodes i and j. Given this tree as known, a node connects to another proportionally to c −h(i, j) for some scalar c > 0. As a conditional logit model, the corresponding utility function is u i, j = −h(i, j) · log(c). The parameter vector θ = log c can be retrieved by fitting a conditional logit with a known h(i, j) as the only variable and transforming the estimated parameter with c = exp(θ ). Assuming that the latent space representation is given is a strong assumption, and fitting such a model while estimating the latent space representation (e.g. as done by Hoff et al. [22] in Euclidean space) is much more difficult.
Additional models. Conditional logit models are very flexible and can deal with multiple features and interactions between them. Any number of features can be added, including node covariates and structural features like a node's clustering coefficient [3] or age [12,40]. Conditional logit models can also be used to investigate the role of homophily [48] in edge formation, by adding a binary feature indicating whether nodes i and j are part of the same class. Table 1 summarizes how several network formation models fit within the discrete choice framework via their corresponding utility functions and choice sets. A major advantage of this framework is that different features can easily be combined into a single model and jointly estimated. Or, when suitable, one can employ a mixture of conditional logit models, as we show in the next section.
Combining modes using Mixed Logit
So far we have written a range of existing and new edge formation models as conditional logit models, a specific type of discrete choice model. But several existing edge formation models that do not fit neatly into the conditional logit framework, meanwhile, align exactly with the use of mixture models in discrete choice modeling. Following our success formulating edge formation models as conditional logit models, in this subsection we develop mixed conditional logit formulations of several additional models.
A common proposal to make network formation models more flexible is to augment an existing model by allowing nodes to pick neighbors uniformly at random with some probability 1 − p, while running the ordinary model with probability p [17,35,39,43]. This augmentation increases flexibility because it enables the model to explain edge events that may otherwise have probability zero. Within discrete choice, this approach is precisely a mixed logit model where one of the mixture modes is uniform attachment.
While the conditional logit estimates a single parameter vector representing average preferences as shared by all agents, the mixed logit model is often used to account for differences in preferences across various types of agents. In its most general form, the mixed logit is expressed using a probability distribution f over different instantiations of the parameter vector θ :
P_i(j, C) = \int \frac{\exp(\theta^T x_j)}{\sum_{\ell \in C} \exp(\theta^T x_\ell)} \, f(\theta) \, d\theta.
Table 2:
Process               Modes
Copy model [35]       Uniform, PA
Node types [38]       New node, PA, none
Local search [24,28]  Uniform, Uniform FoF
(r, p)-model          Uniform, PA, Uniform FoF, PA FoF
In this work, we will only consider discrete mixtures of M logits, also called a latent class model [32]:
P_i(j, C) = \sum_{m=1}^{M} \pi_m \, \frac{\exp(\theta_m^T x_j)}{\sum_{\ell \in C} \exp(\theta_m^T x_\ell)},
where \sum_{m=1}^{M} \pi_m = 1 and the weights \pi_1, \ldots, \pi_M model the relative prevalence of each mode.
Copy model. The copy model is a classic formation process that can be written as a mixed logit with two modes. In the first mode, new edges connect proportional to degree with probability p, while in the second mode they connect uniformly at random with probability 1 − p [17,43]. As a conditional logit model, the utilities of the two modes are u^{(1)}_x = \log d_x and u^{(2)}_x = 1, respectively, and the class probabilities are (\pi_1, \pi_2) = (p, 1 − p). (This is a special case of the original copy model where d edges are copied from a sampled vertex [39]; the model here is when d = 1, which is often used for analysis [19].) The connection between relaxations of preferential attachment and mixture models was also recently observed by Medina et al. [49].
Local search model. Another example of a model with multiple modes is the Jackson-Rogers model of edge formation as a mixture of uniform attachment and triadic closure [24,28]. The original model is based on a relative rate r* between edges forming at random and edges formed locally, and it also has edges form based on respective acceptance probabilities. We describe a simplified version of this model, which we'll call the local search model, where edges connect to nodes selected uniformly at random from the full node set with probability r and uniformly at random from the set of friends-of-friends with probability 1 − r. (Since the r* parameter in the original presentation is actually the rate of uniform attachment, we can relate it to our r through r = r*/(1 + r*); for example, if the rate between random and friend-of-friend edges is one to one, r* = 1, then r = 0.5.) We can represent this simplified process with a two-mode mixed logit model. In this case the mixture parameters are (\pi_1, \pi_2) = (r, 1 − r) and both modes have the same utility function u_x = 1, but their choice sets differ so that the second mode only considers friends-of-friends. (A model with a restricted choice set, for example to only friends-of-friends, gives a likelihood of zero to choices outside the choice set.) Table 2 overviews the mixture model formulations described above, as well as a new model, the (r, p)-model, that we use in Section 4.2 to analyze preferential attachment effects.
ESTIMATION AND INFERENCE
To learn a discrete choice model of network formation from data, we assume that we have access to a sequence of directed edges, in chronological order. This sequence of edges needs to be recast as choice data in order to fit a choice model. For every formed edge (i, j), we create a data point consisting of the choice j, the choice set of candidates nodes at the time, and the features of each candidate node at the time.
Given a data set and a conditional logit model, one can write out the log-likelihood, as shown in Section 2.2. For any conditional logit model with a linear utility u i, j = θ T x j , the likelihood function is convex with respect to the variables θ and can be efficiently maximized using standard gradient-based optimization (e.g., BFGS). The functional form of the logit leads to straightforward gradients. For example, for preferential attachment, the gradient is
\frac{\partial}{\partial \alpha} l(\alpha; D) = \sum_{(x,C) \in D} \Bigg[ \log d_x - \sum_{y \in C} \log d_y \cdot \frac{\exp(\alpha \log d_y)}{\sum_{y' \in C} \exp(\alpha \log d_{y'})} \Bigg],
where the time-dependence of the features (degrees) has been suppressed to reduce clutter. Gradients for the other choice models in Section 2.2 are omitted but straightforward. One advantage of likelihood-based model fitting is that we can compute standard errors and confidence intervals of the parameters. In particular, the standard errors can be computed as \sqrt{H^{-1}} [74], where H is the Hessian matrix of second derivatives of the log-likelihood at the parameters.
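As a minimal sketch (ours, not the released code), the log-degree model u_j = α log d_j can be fit with scipy's BFGS, whose returned inverse-Hessian approximation plays the role of H^{-1} above for an approximate standard error:

```python
# Fit alpha by maximum likelihood on toy choice data.
# Each observation: (index of chosen alternative, degrees of all alternatives).
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

data = [(0, np.array([5, 1, 2, 1])), (2, np.array([1, 8, 2, 3])),
        (0, np.array([9, 1, 1, 1])), (3, np.array([2, 6, 1, 1]))]

def neg_loglik(alpha, data):
    nll = 0.0
    for chosen, degrees in data:
        u = alpha[0] * np.log(degrees)
        nll -= u[chosen] - logsumexp(u)
    return nll

res = minimize(neg_loglik, x0=np.array([1.0]), args=(data,), method="BFGS")
# res.hess_inv approximates the inverse Hessian of the negative log-likelihood,
# so the square root of its diagonal gives approximate standard errors.
alpha_hat = res.x[0]
se = np.sqrt(np.diag(res.hess_inv))[0]
print(f"alpha_hat = {alpha_hat:.3f}, approx. s.e. = {se:.3f}")
```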
Mixture models and expectation-maximization. For mixed conditional logit models, the log-likelihood is no longer convex in general, making optimization more difficult. To maximize the likelihood of mixed models we turn to expectation-maximization (EM) techniques [18,73]. We briefly summarize the procedure described in Train's book [74, Chapter 14.3.2]. Assume that we have a model with M modes (i.e., mixture components), where every mode starts with initial parameter values \theta_m (usually initialized at 1). Choices (x_k, C_k) ∈ D are again indexed with k, so that k ∈ {1, . . . , n} and n = |D|. The EM algorithm runs through the following steps:
(1) Initialize class probabilities uniformly with \pi_m = 1/M and initial class responsibilities \gamma_k^m = 1/M for each data point.
(2) For every data point k and every mode m, compute the class responsibility given by the relative individual likelihood:
\gamma_k^m = \frac{\pi_m \, L_m(\theta_m; (x_k, C_k))}{\sum_{\ell=1}^{M} \pi_\ell \, L_\ell(\theta_\ell; (x_k, C_k))}.
(3) For every mode m, update the total class probability with \pi_m = \frac{1}{N} \sum_{k=1}^{N} \gamma_k^m.
(4) For every mode m, update the parameters \theta_m using standard optimization for fitting a single model, weighing each choice set with its class responsibility \gamma_k^m.
(5) Repeat steps 2-4 until some convergence or stopping criterion is met.
The total likelihood of the parameters and class probabilities is:
l(\theta; D) = \sum_{m=1}^{M} l_m(\theta_m; \pi_m; D) = \sum_{m=1}^{M} \sum_{k=1}^{N} \log\big( L_m(\theta_m; (x_k, C_k)) \cdot \pi_m \big).
We monitor the convergence of the iterative procedure using the change in this total likelihood between iterations.
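A compact sketch of this EM loop for a two-mode mixture (log-degree preferential attachment plus uniform attachment) might look as follows; the data layout and variable names are our own simplifications:

```python
# EM for a two-mode mixed logit: mode 1 = alpha * log(degree), mode 2 = uniform.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

data = [(0, np.array([5, 1, 2, 1])), (1, np.array([1, 8, 2, 3])),
        (2, np.array([2, 2, 7, 1])), (0, np.array([9, 1, 1, 1]))]

def mode_lik(alpha, chosen, degrees):
    u = alpha * np.log(degrees)
    return np.exp(u[chosen] - logsumexp(u))          # P(choice | PA mode)

def em(data, iters=50):
    pi, alpha = np.array([0.5, 0.5]), 1.0
    for _ in range(iters):
        # E-step: class responsibilities gamma[k, m]
        gamma = np.zeros((len(data), 2))
        for k, (c, d) in enumerate(data):
            liks = np.array([mode_lik(alpha, c, d), 1.0 / len(d)])
            gamma[k] = pi * liks / np.sum(pi * liks)
        # M-step: update class probabilities and the PA-mode parameter
        pi = gamma.mean(axis=0)
        def weighted_nll(a):
            return -sum(g * np.log(mode_lik(a, c, d))
                        for g, (c, d) in zip(gamma[:, 0], data))
        alpha = minimize_scalar(weighted_nll, bounds=(-5, 5), method="bounded").x
    return pi, alpha

print(em(data))
```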
Even though EM is theoretically an efficient estimator [82], there are cases when alternatives are appropriate. For example, if one has reasonable bounds or priors on the parameter values, then direct likelihood maximization could be used, and if the search space is low-dimensional, a grid search might be appropriate. Recent theoretical work has also developed algorithms for learning mixtures of two multinomial logit modes with theoretical guarantees assuming a separation between the modes [14].
Negative sampling. Every time an edge is formed by some node i, each node not yet connected to i is a candidate choice. For large sparse graphs, the full choice set of all nodes can become large and the gradients of the log-likelihood expensive to compute. To speed up this computation, s negative/non-chosen examples can be sampled uniformly at random to create a (random) reduced dataset with smaller choice sets. For each choice (j, C), one forms a smaller random choice set out of the positive choice and the negative samples, C̃ ⊆ C with |C̃| = s + 1, and replaces the original choice data with (j, C̃). As long as the negative examples are sampled uniformly at random, parameter estimates on a dataset with negatively sampled choice sets are unbiased and consistent for the estimates on the full set [29,46,74]. Practically, there is a trade-off between feature computation and storage on the one hand, and the ability to estimate coefficients for rare features on the other.
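A minimal illustration of this sampling step (assuming the choice set is simply a list of candidate node ids):

```python
# Keep the chosen node plus s non-chosen candidates drawn uniformly at random.
import random

def negative_sample(chosen, choice_set, s, rng=random):
    negatives = [c for c in choice_set if c != chosen]
    sampled = rng.sample(negatives, min(s, len(negatives)))
    return chosen, [chosen] + sampled      # reduced choice set of size s + 1

full_choice_set = list(range(1000))        # e.g., all eligible nodes
print(negative_sample(chosen=42, choice_set=full_choice_set, s=10))
```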
Typical likelihood surface. In Figure 1 we show the representative likelihood surface of a copy model to illustrate its properties. We generated a synthetic graph on n = 10,000 nodes according to the copy model with m = 4 edges per node and degree-attachment probability π_1 = 0.5. We fit a two-mode mixed logit model to this data with u^{(1)}_j = α log d_{j,t} and u^{(2)}_j = 1. We use s = 10 negative samples. There are two free parameters in this model: the degree exponent α and the mixture probability π_1. We plot the log-likelihood across a reasonable range of values to show that the surface is generally well behaved. We see that it is hard to distinguish data generated under a copy model (α = 1) with probability π_1 = 0.5 from data generated from no-mixture (π_1 = 0) preferential attachment with α = 0.5, and there is a general trade-off between the exponent α and the mixture probability π_1.
Model comparison and the likelihood-ratio test. Another advantage of our discrete choice framework is that we can employ standard statistical methods for model selection. Specifically, when one model is a special case of another, their relative quality can be compared using the likelihood ratio test. In the case of the conditional logit, a model with additional features can be compared to one without them because the latter is a special case of the former with the coefficients of the additional features being set to 0. Or, in the case of the mixed logit, one can define a model with multiple modes and manually set some of their class probabilities to zero.
As a concrete example, suppose we wanted to know whether including the age of a node in a preferential attachment model results in a statistically significantly better model. To do so, we would first estimate the parameters θ_1 of the more complex model, u^{(1)}_j = θ_{1,1} log(d_j) + θ_{1,2} log(age_j). We would then estimate the parameters θ_0 of the simpler model u^{(0)}_j = θ_0 log(d_j), and let L_1 and L_0 be the likelihoods of the two models with fitted parameters θ̂_1 and θ̂_0. We can compute the likelihood ratio λ = L_0/L_1. Under the null hypothesis of the simpler model, with some regularity conditions, −2 log λ is asymptotically distributed χ²_1 (more generally χ²_k, where k is the number of additional degrees of freedom in the more complex model).
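The test itself is a one-liner once both models are fit; the numbers below are hypothetical placeholders:

```python
# Likelihood-ratio test comparing a simple model (ll0) to a nested, more
# complex model (ll1) with df_extra additional parameters.
from scipy.stats import chi2

ll0, ll1 = -1520.4, -1496.7     # hypothetical fitted log-likelihoods
df_extra = 1                    # e.g., adding log(age) to the degree model
stat = 2 * (ll1 - ll0)          # equals -2 log(lambda)
p_value = chi2.sf(stat, df_extra)
print(f"LR statistic = {stat:.1f}, p = {p_value:.2g}")
```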
APPLICATIONS
We now demonstrate how to use our conditional logit framework to analyze network formation processes. We first consider synthetic data and show how our tools can be used to better analyze preferential attachment mechanisms. We then analyze two empirical datasets that demonstrate how to integrate different structural features of the network or integrate node covariates. In both cases, our framework provides novel insights into the network formation processes. We provide code for processing data (converting edge lists to choice data) and for model fitting (with negative sampling), available here: https://github.com/janovergoor/choose2grow/.
Measuring preferential attachment
The question of whether and when preferential attachment is an important driver of network formation is widely debated [2,3,9,11,12,24,28,54,54,65,78]. Most prior research focuses on estimating the shape of the attachment kernel by comparing the degree of chosen nodes to the distribution of available degrees [30,54,62]. However, recent work by Pham et al. shows that previous measures are biased [58]. In particular, the bias comes from the assumption that the distribution of available nodes of varying degrees is constant throughout the formation process, but this distribution clearly changes as the network grows.
To estimate the exponent α of an attachment kernel, Pham et al. propose fitting something akin to a conditional logit with a separate coefficient for each degree, and then estimating α via a weighted least squares fit over the degree coefficients [58]. Compared to this method, fitting a log-degree logit directly is much simpler. In fact, it is the maximum likelihood estimator for α, and thus consistent and efficient.
Figure 2: Attachment kernel fits for a synthetic preferential attachment graph. The Newman measure computes the relative likelihood of selecting a node of that degree, as compared to the likelihood of selecting the lowest degree, but it is biased for higher degrees. The non-parametric logit is consistent but noisy for higher degrees.
To illustrate, we generate a graph with pure preferential attachment (n = 2,000, m = 1 edges per node, α = 1) and estimate the attachment kernel by the methods of Newman [54] and Pham et al. [58]. The maximum degree of this graph was 102, and the results of the different estimation procedures are shown in Figure 2. The non-parametric estimates are similar for lower degrees, but for higher degrees the Newman measure incorrectly drops, illustrating the bias that Pham et al. have previously documented. Fitting α directly using a log-degree conditional logit gives an estimate of α = 0.987. The Pham et al. least squares fit, α_LS = 1.012, is close to the MLE but may deviate considerably in more difficult instances.
Disentangling preferential attachment from triadic closure
Many models exhibit similar outcomes to preferential attachment [11,24,28,36,52,78], but there are few principled ways to rigorously test the relative validity of these models. In this section, we show how to use the discrete choice framework to estimate the relative importance of preferential attachment while accounting for other dynamics. To this end, we generate data according to a known generative process and fit various (possibly mis-specified) formation models. Our generative process is a hybrid between the copy model of preferential attachment (i.e., choose nodes proportional to degree) and the Jackson-Rogers local search model (i.e., connecting to friends-of-friends). The process, which we call the (r, p)-model, is parametrized by r ∈ (0, 1] and p ∈ (0, 1]. When a new edge is formed, with probability p it is formed uniformly at random and with probability 1 − p it is formed with linear preferential attachment (α = 1). Meanwhile, the choice set is determined by the second parameter r: with probability r, the choice set is all nodes not yet connected to i, while with probability 1 − r, the choice set is limited to available friends-of-friends of i. With r = 1 this model reduces to the copy model and with p = 1 it reduces to the simplified local search model; the (r, p)-model thus subsumes two popular models in a single, simple discrete choice framework. For a growth process on directed graphs, it is necessary that p > 0 and r > 0, otherwise new nodes will never be selected. With this general model, we investigate how estimating parameters of one of the more specific models goes awry when the true data generating process in fact comes from an instance of the more general model.
For a range of values of p and r, we generated graphs using the following growth process. New nodes arrive, each creating m = 4 edges. For every edge, we sample the mode of the model (according to r and p) independently. If an edge is supposed to be a friend-of-friend edge, but no friends-of-friends are available (for example, i's first edge), then the process reverts to uniformly random formation across the full node set. Sweeping through combinations of p and r parameter values, for each set of parameters we generated 10 undirected graphs with n = 20,000 nodes each.
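A rough generator for this process might look as follows (our own sketch, not the authors' reference implementation; e.g., we give zero-degree candidates weight one under preferential attachment so they remain selectable):

```python
# (r, p)-model generator: each new node adds m edges; each edge draws its
# choice set (all nodes w.p. r, friends-of-friends w.p. 1-r) and its
# attachment mode (uniform w.p. p, degree-proportional w.p. 1-p).
import random
import networkx as nx

def rp_model(n, m=4, r=0.5, p=0.5, seed=0):
    rng = random.Random(seed)
    G = nx.Graph([(0, 1)])                      # small seed graph
    for i in range(2, n):
        G.add_node(i)
        for _ in range(m):
            uniform_mode = rng.random() < p
            if rng.random() < r:
                candidates = [v for v in G.nodes if v != i and not G.has_edge(i, v)]
            else:
                nbrs = set(G.neighbors(i))
                candidates = list({w for v in nbrs for w in G.neighbors(v)}
                                  - nbrs - {i})
                if not candidates:              # no FoF available: revert to uniform
                    candidates = [v for v in G.nodes if v != i and not G.has_edge(i, v)]
                    uniform_mode = True
            if not candidates:
                continue
            if uniform_mode:
                j = rng.choice(candidates)
            else:                               # linear preferential attachment
                weights = [max(G.degree(v), 1) for v in candidates]
                j = rng.choices(candidates, weights=weights, k=1)[0]
            G.add_edge(i, j)
    return G

G = rp_model(200)
print(G.number_of_nodes(), G.number_of_edges())
```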
Degree distributions. The local search and copy models both produce graphs with power-law degree distributions. Therefore, fitting a mis-specified model on a degree distribution can lead to misleading results. To illustrate, we fit a power-law distribution p(x) ∝ x −γ to the degree distribution of graphs generated from (r , p)-models using maximum likelihood estimation [16], with estimates for γ in Figure 3. In theory, an undirected graph formed with the copy model process with probability parameter p leads to a degree distribution with power law exponent γ = (3 − p)/(1 − p) [8,52] (for directed graphs, γ = (2 − p)/(1 − p)). As p increases, the degree distribution looks more like a random graph without preferential attachment. However, as r goes down (increasing the relative role of friend-of-friends), the parameter estimate looks like the estimates for the copy model, even when p = 1.
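For reference, the theoretical exponent and a rough continuous-approximation MLE for γ on a toy degree sample (not the exact discrete estimator of [16]) can be compared as follows:

```python
import numpy as np

p = 0.25
print("theoretical gamma (undirected copy model):", (3 - p) / (1 - p))

degrees = np.array([1, 1, 2, 2, 3, 5, 8, 13, 40, 71])   # toy degree sample
x_min = 1
gamma_hat = 1 + len(degrees) / np.sum(np.log(degrees / x_min))
print("rough MLE gamma_hat:", round(gamma_hat, 3))
```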
To summarize, it is not recommended to estimate a formation model from an observed degree distribution. The parameter estimates are sensitive to small deviations in the generative process.
Figure 4: The log-likelihood of varying the class probabilities of the copy model (r = 1, p free) or the local search model (r free, p = 1) for two different synthetic graphs. In both cases the true model is the most likely. On the left we see a large difference in the log-likelihood between optima, while on the right we see a smaller difference. In both cases a likelihood ratio test is highly significant (P-values < 10^{-16}).
We consider two cases. As a first case, we generate graphs with r = 0.5 and p = 1, so half the edges are formed to friends-of-friends with no utility from degree. The likelihood under a local search model (r free, p = 1) as a mixed logit is maximized at r = 0.45, while for the copy model (r = 1, p free) it is maximized at p = 0.54. The former is a much better fit than the latter (P-value < 10^{-16}), and the copy model erroneously thinks that preferential attachment is driving 45% of the edges. As a second case, we look at a graph generated with r = 1 and p = 0.5, so half the edges are due to preferential attachment, and friend-of-friending plays no role. In this case, both models are correctly maximized at their relative values. Again, the correct model has a higher likelihood (P-value < 10^{-16}).
Choosing to follow on Flickr
We now apply our framework to examine a real-world network formation dataset capturing the growth of the Flickr social network. We find that incorporating a Friend-of-Friend feature beyond preferential attachment and link-reciprocation features substantially improves both likelihood and test accuracy and furthermore that the inclusion of this feature significantly reduces preference for degree-based attachment. However, omitting preferential attachment entirely leads to a worse model. We also find a preference for nodes with zero degree over low degree nodes. This hints that such nodes play a special role in the network formation process, even though they would be ignored in preferential attachment models.
Data. We use a scrape of the Flickr social network collected daily between October 2006 and May 2007 [50,51]. Users of Flickr can choose to follow other users and the "following" (but not the "followed by") connections are publicly accessible. The data was gathered using a breadth-first search crawl, which means that only the connected components reachable from the seed profiles are represented in the data. Since a full crawl was performed daily, the timing of new edges can be identified at the granularity of a day. The graph contains 3.2 million nodes and 33.1 million edges.
As described in the original papers, this data is consistent with both preferential attachment, as inferred from the in-degree distribution, and local search, as inferred from the over-representation of edges to nodes that are close to the linking node [50]. Fitting a power law to the distribution of in-degrees gives γ̂ = 1.741, which would indicate super-linear preferential attachment. We can test the relative importance of triadic closure by fitting a Jackson-Rogers model using the degree distribution matching procedure described in [28]. This results in r̂ = 0.252, estimating that three out of four edges are formed through triadic closure.
Discrete choice analysis. We fit a series of conditional logit models to further investigate the network formation process. We isolated a sample of 20,000 edge formation events occurring around the same date, to avoid time heterogeneity affecting the estimates. We fit several models, displayed in Table 3. Not-chosen alternatives are negatively sampled with s = 24. We log-transform in-degree (representing the number of followers), but in order to account for nodes with degree zero, we add a "has degree" feature for having a positive degree and use a modified version of log that returns 0 for input 0. In the first column, we fit a model using just these two degree-related features, and a reciprocity feature capturing whether the target node is already following the chooser. Reciprocity is a common phenomenon, with 60% of edges being followed back [50]. The estimate α̂ (the coefficient for "log Followers") for this model is significantly larger than 1, again consistent with super-linear preferential attachment.
In the second model, we test the effect of the target node being a friend-of-friend of the choosing node. In the case of Flickr, this means that the choosing user already follows someone that follows the target user, which evidently is strongly correlated with following that user. However, combining these two features in a third model (column 3) leads to both estimated parameters dropping substantially. Most remarkable is the 40% drop in the estimate of α, which paints a very different picture about the role of degree.
In the fourth model, we measure network proximity as in the original paper, by counting the number of "hops" (path length) from i to the target before an edge was made. We integrate the hops as categorical variables to show the relative impact of each additional "hop". Being two hops away is equivalent to being a friend-of-friend, and thus has strongly positive coefficient. Every additional hop corresponds to a sharp decrease in choosing that node. Being five hops away is slightly worse than there not being a path at all. This could be an artifact of the way the data was gathered, so that new regions of the graph only get "discovered" when there is at least one link to them, or this could be due to path length not being an accurate measure of distance for newer nodes. Since the number of hops is co-linear with being a friend-of-friend, we can't test them both at the same time.
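To make these feature definitions concrete, here is an illustrative (not the paper's) construction of per-candidate features for a single follow event on a small directed graph:

```python
# Per-candidate features for chooser i: log in-degree, has-degree,
# reciprocity, friend-of-friend indicator, and number of hops from i.
import math
import networkx as nx

G = nx.DiGraph([(1, 2), (2, 3), (3, 1), (2, 4), (4, 5)])
i, candidates = 1, [3, 4, 5]

dist = nx.single_source_shortest_path_length(G, i)
rows = []
for j in candidates:
    followers = G.in_degree(j)
    rows.append({
        "candidate": j,
        "log_followers": math.log(followers) if followers > 0 else 0.0,
        "has_degree": int(followers > 0),
        "reciprocity": int(G.has_edge(j, i)),     # j already follows i
        "is_fof": int(dist.get(j, math.inf) == 2),
        "hops": dist.get(j, None),                # None if unreachable
    })
print(rows)
```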
In Figure 5 we visually show the effect of different specifications on the estimate of α. The first model of the Flickr data looks like super-linear preferential attachment, while the role of degree in the other two is significantly reduced. However, fitting a non-parametric model shows that the estimated coefficients for individual degrees are remarkably linear, suggesting that the functional form of d_j^α is a good fit for this network. One important point is the role of zero-degree nodes. In most descriptions of preferential attachment, nodes with degree zero are not considered. However, in the Flickr data set, zero-degree nodes have a higher utility than positive low degree nodes, which could again be an artifact of the data collection process, or point to the special role of new nodes in the network. Either way, our framework allows one to find these kinds of patterns, and investigate them further.
Choosing to cite
We now turn to citation network data to show how a discrete choice framework facilitates the testing of network formation hypotheses. Previous analyses of citation networks have observed linear preferential attachment with respect to degree [62] and bias towards citing more recent work [62]. Here, we find consistent results that older papers are less likely to be cited but that accounting for age actually increases the importance of degree (i.e., after accounting for age, higher degree nodes are more likely to be cited).
Figure 5: The probability of being chosen by degree, as compared to a node with degree 1. We show the fits of parametric (lines) and non-parametric (points) conditional logit models of the Flickr and citation networks. The legend references model numbers in Table 3 and Table 4. The estimate for degree 0 is inserted for comparison. Dashed reference lines illustrate what exact linear preferential attachment (α = 1) would look like.
Data. We use the Microsoft Academic Graph dataset and focus on a representative subgraph of 459,000 "Climatology" papers. We focus on the subgraph of a single field to simplify the analysis since citations are predominantly within the same field of study (our analysis was similar on other subgraphs). We construct a graph out of this data by adding an edge each time a paper in our dataset cites another paper in our dataset. For our analysis of Climatology publications, 45% of edges are within the domain and citations to papers that are not labeled are excluded, leaving 3 million edges. We sample 10,000 citation events uniformly at random from papers published after 2010 and apply negative sampling (s = 24). This processing results in 10,000 choices with 25 alternatives in each choice set. For each possible choice, we compute four features: the number of citations at the time of citation, whether the paper shares authors with the citing paper, the age of the paper in years at the time of citation, and the maximum number of publications by any one of the authors at the time of publication. This last feature is a proxy for node fitness [11].
Discrete choice analysis. We fit conditional logit choice models relating these features to the likelihood of citation (Table 4). The first model (first column) is a simple log-degree model. We find that the estimate α̂ (the coefficient for "log Citations") is substantially lower than one, consistent with sub-linear preferential attachment. Apart from the log-likelihood of the models, we also report the predictive accuracy (defined as the share of instances predicted correctly) on a holdout test set of 2,000 examples. Just relying on prior degree already gives an accuracy of 36%, which is high for a classification task with 25 classes. In model two (second column), we add a covariate for whether a paper shares an author with the citing paper. As expected, this has a strongly positive coefficient. For the third model we add a covariate for the age of the paper in log years (years is always at least one). Older papers are less likely to get cited (accounting for degree), but accounting for age increases the relative importance of degree significantly. This expanded model also increases the accuracy to 53%, indicating that these feature weights do capture substantially more predictive power. Finally, in model four we add the "max papers by authors" feature as a proxy for fitness. The coefficient is small but positive. Accounting for fitness slightly reduces the estimated relative importance of degree, but the α̂ estimate is still close to 1. Adding this feature does not improve the log-likelihood or predictive accuracy; a better proxy for fitness may explain the data better. Looking back to the visual display of α for the citation models in Figure 5, the non-parametric coefficients are highly linear. In this data, zero-degree nodes are significantly less attractive than nodes with degree one. As with any regression, identifying causal effects from model fit depends on the design of the study. The estimates we provide here, as is the case with most analyses of observational data, are descriptive and not meant to describe causal processes. The point is that discrete choice models provide a flexible framework to easily test and compare different hypotheses around network formation.
DISCUSSION
When modeling network formation, the majority of the literature analyzes networks that grow "externally," with new nodes arriving and choosing who to connect to, and this setting has also been our main focus here. External growth leads to convenient models that are relatively easy to analyze, with citation networks and patent networks as examples of empirical networks that follow this generative process reasonably closely. However, in many (especially social) networks, pairs of older nodes often form edges as well, edges that are "internal" to the existing set of nodes. An extreme example is the social networks of schools or classrooms, which have a fixed node population and "grow" purely through an internal growth process. A major advantage of modeling network formation as discrete choice is that it does not require any model of edge event initiation and simply conditions on the sequence of decisions to initiate, focusing the modeling on the choices made by the initiator. Discrete choice can therefore easily be used to model internal growth as well.
Another major advantage of discrete choice modeling is that it connects the analysis of large-scale network datasets to statistical methods (fitting generalized linear models) that are tremendously scalable. As we show in this work, additional techniques (e.g., negative sampling) makes it possible to efficiently scale the estimation process to very large network datasets.
Since the conditional logit model of discrete choice is a random utility model, the estimated parameters can be interpreted as the marginal utility of each feature. This allows one to question the functional form of features. For example, we show that preferential attachment is equivalent to the logarithmic utility of degree. Given that degree is commonly heavy-tailed, this is a natural functional form, but we point out that the conditional logit allows one to flexibly compare different specifications.
Our discrete choice perspective has implications for how network data is best collected and analyzed. It is useful to consider and record notions of directionality, even if edges can otherwise be considered to be undirected. With information about the choice set associated with each choice, we can see what each node j looked like at the time the choice was made. Datasets that record the exact time of all edge formation events, as opposed to lumping edge events at the granularity of days or years, makes it possible to further analyze the formation process in more detail.
There are a couple limitations to our proposed methodology. First, we cannot model purely undirected edges without some notion of direction. Second, even though the conditional logit and mixed logit models allow one to model similar mechanisms, the interpretations of their estimates are different. The estimates of a conditional logit are more akin to those of a linear regression model, where one estimates the expected change in an outcome from varying a covariate. A mixture model is a probabilistic combination of constituent modes, so the class probabilities indicate the relative importance to each mode, which makes it harder to compare the roles of individual features within or across modes. However, many traditional models of network formation are equivalent to mixture models, which motivated our consideration of them in this work.
By making foundational connections between network formation and discrete choice, we are hopeful that many further tools from discrete choice theory can be applied to the study of network formation. For example, there can be bias in network formation, e.g., men are more likely to cite themselves than women [34]. Our discrete choice framework can help study these cases more rigorously. For another example, discrete choice models of subset selection [5,20] could be applied to understand possible substitution and complementarity effects in network formation. And discrete choice interpretations of machine learning embeddings techniques [64] can likely help unpack the behavior of recent embedding-based network representation methods such as DeepWalk [57]. Networks fundamentally represent interactions between discrete entities, and it is therefore natural that methods for modeling and analyzing discrete choice should enable many contributions.
| 8,860 |
1906.10562
|
2955300983
|
We develop new voting mechanisms for the case when voters and candidates are located in an arbitrary unknown metric space, and the goal is to choose a candidate minimizing social cost: the total distance from the voters to this candidate. Previous work has often assumed that only ordinal preferences of the voters are known (instead of their true costs), and focused on minimizing distortion: the quality of the chosen candidate as compared with the best possible candidate. In this paper, we instead assume that a (very small) amount of information is known about the voter preference strengths, not just about their ordinal preferences. We provide mechanisms with much better distortion when this extra information is known as compared to mechanisms which use only ordinal information. We quantify tradeoffs between the amount of information known about preference strengths and the achievable distortion. We further provide advice about which type of information about preference strengths seems to be the most useful. Finally, we conclude by quantifying the ideal candidate distortion, which compares the quality of the chosen outcome with the best possible candidate that could ever exist, instead of only the best candidate that is actually in the running.
|
Attempts to exploit preference strength information have led to various approaches for modeling, eliciting, measuring, and aggregating people's preference intensities in a variety of fields, including Likert scales, semantic differential scales, sliders, constant sum paired comparisons, graded pair comparisons, response times, willingness to pay, vote buying, and many others (see @cite_26 @cite_21 @cite_12 for summaries). In our work we specifically consider only a small amount of coarse information about preference strengths, since obtaining detailed information is extremely difficult. Intuitively, any rule used to aggregate preference strengths must ask under what circumstances an 'apathetic majority' should win over a more passionate minority @cite_10, and we provide a partial answer to this question when the objective is to minimize distortion.
|
{
"abstract": [
"Much of the dissatisfaction with Professor Arrow's formulation (1963) of the problem of social choice rests on Arrow's imposition of the controversial \"independence of irrelevant alternatives\" condition (A3). Arrow himself has ceased to insist upon this condition in its strong original form (Arrow 1967). The chief objection to A3 is that it prevents the collectivechoice rule from responding to intensity of individual preference, making the choice rule unacceptable on a priori grounds (Kemp and Asimakopulos 1952; Hildreth 1953; Coleman 1966; Quirk and Saposnik 1968; Sen 1970). This view is not correct if intensity of preference is measured by an ability to make side payments in order to influence collective decision making. I show that a modified version of A3 which takes explicit account of intensity of preference actually implies the original independence condition for realistic environment sets. This will be true no matter how side payments of private goods and services are evaluated. A modest amount of notation is required before the independence condition can be given formal expression. The set X, the nonnegative orthant of m-dimensional Euclidean space, is the space of hypothetical states of the economy. There are n individuals whose preferences are to be considered by any collective decision-making process, and to each individual i there corresponds a binary relation Ri which is complete on X. The term",
"Dinner is over. Mr. and Mrs. Jones and Mr. and Mrs. Smith are having coffee. The question arises: What shall we do this evening? Play bridge? Go to the movies? Listen to some chamber music from the local FM station? Sit and chat? Each, in due course, expresses a “preference” among these four alternatives but with this difference: Mr. and Mrs. Jones and Mrs. Smith, though each has a preference, “don't much care.” Their preferences are “mild” or “marginal.” Not so Mr. Smith. His preference is “strong.” He is tired, couldn't possibly get his mind on bridge, or muster the energies for going out to a movie. He has listened to chamber music all afternoon while working on an architectural problem, and couldn't bear any more. If the group does anything other than sit and chat, he at least will do it grudgingly. He “cares enormously” which alternative is chosen. Now: which is the “correct” choice among the four alternatives? Which, “distributive justice” to one side, is the choice most likely to preserve good relations among the members of the group? Some theorists, it would seem, find these two questions easy to answer. Mr. Smith ought to have his way, and good relations are likely to be endangered if he does not; and these answers are equally valid whether the other three all prefer the same thing or prefer different things. Since, for the latter, the choice is a matter of indifference, it is both “more fair” and “more expedient” (less likely to lead to a quarrel) for the group to do what Mr. Smith prefers to do.",
"The concept of preference intensity has been criticized over the past sixty years for having no substantive meaning. Much of the controversy stems from the inadequacy of measurement procedures. In reviewing the shortcomings of existing procedures, we identify three objectives for developing a satisfactory procedure: (1) the capability of validating expressed preference differences by actual choices among naturally occurring options, (2) compatibility with the existing problem structure, and (3) no confounding of extraneous factors in the measurement of preference intensity. Several recently developed measurement procedures are criticized for failing one or more of these objectives. We then examine three different approaches for measuring preference intensity based on multiple perspectives. Thereplication approach emerges as a promising way of satisfying the three objectives above. This methodology applies to problems where an attribute can be replicated by “parallel components” that are independent, identical copies of the attribute. We illustrate the approach with two applications reported in the decision analysis literature. We also offer guidance on how to construct parallel components satisfying the requisite properties.",
"This article revisits the problem of preference intensity modelling by proposing and analysing the novel concept of preference intensity functions. These retain what are argued to be the most essential properties expected from the consistent numerical representation of preference intensity orderings and the ordinary preferences induced by them. Their existence is characterized by simple and clearly interpretable axioms, while they are also shown to be genuinely ordinally unique and more general than utility functions associated with utility-difference representations. The empirical content of this model is then pinned down by means of a testable, intuitive and comparatively weak necessary and sufficient condition on observable behavioural data. In line with recent developments in the literature, these data are assumed to include choices and an additional variable foregone resource with intensity-revealing potential such as response times, willingness to pay or desirability ratings."
],
"cite_N": [
"@cite_26",
"@cite_10",
"@cite_21",
"@cite_12"
],
"mid": [
"2081359028",
"2316958765",
"1971181796",
"2890914144"
]
}
| 0 |
||
1906.10343
|
2954276933
|
Recent advances in semi-supervised learning have shown tremendous potential in overcoming a major barrier to the success of modern machine learning algorithms: access to vast amounts of human-labeled training data. Algorithms based on self-ensemble learning and virtual adversarial training can harness the abundance of unlabeled data to produce impressive state-of-the-art results on a number of semi-supervised benchmarks, approaching the performance of strong supervised baselines using only a fraction of the available labeled data. However, these methods often require careful tuning of many hyper-parameters and are usually not easy to implement in practice. In this work, we present a conceptually simple yet effective semi-supervised algorithm based on self-supervised learning to combine semantic feature representations from unlabeled data. Our models are efficiently trained end-to-end for the joint, multi-task learning of labeled and unlabeled data in a single stage. Striving for simplicity and practicality, our approach requires no additional hyper-parameters to tune for optimal performance beyond the standard set for training convolutional neural networks. We conduct a comprehensive empirical evaluation of our models for semi-supervised image classification on SVHN, CIFAR-10 and CIFAR-100, and demonstrate results competitive with, and in some cases exceeding, prior state of the art. Reference code and data are available at this https URL
|
Self-supervised learning is similar in flavor to unsupervised learning, where the goal is to learn visual representations from large-scale unlabeled images or videos without using any human annotations. Self-supervised representations are learned by first defining a pretext task, an objective function, for the model to solve and then producing proxy labels to guide the pretext task based solely on the visual information present in unlabeled data. The simplest self-supervised task is minimizing reconstruction error in autoencoders @cite_45 to create low-dimensional feature representations, where the proxy labels are the values of the image pixels. More sophisticated self-supervised tasks such as image inpainting @cite_35 , colorizing grayscale images @cite_33 @cite_41 , and predicting image rotations @cite_4 have shown impressive results for unsupervised visual feature learning. The key to utilizing self-supervision for SSL is to learn useful features from unlabeled data through the pretext task that can be transferred and adapted to downstream supervised applications where labeled training data is scarce.
|
{
"abstract": [
"We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders – a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.",
"Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similar striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification.",
"Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. We evaluate our algorithm using a “colorization Turing test,” asking human participants to choose between a generated and ground truth color image. Our method successfully fools humans on 32 of the trials, significantly higher than previous methods. Moreover, we show that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks.",
"",
"High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data."
],
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_33",
"@cite_41",
"@cite_45"
],
"mid": [
"2963420272",
"2962742544",
"2326925005",
"",
"2100495367"
]
}
|
Exploring Self-Supervised Regularization for Supervised and Semi-Supervised Learning
|
Contemporary approaches to supervised representation learning, such as the convolutional neural network (CNN), continue to push the boundaries of research across a number of domains including speech recognition, visual understanding, and language modeling. However, such progress usually requires massive amounts of human-labeled training data. The process of collecting, curating, and hand-labeling large amounts of training data is often tedious, time-consuming, and costly to scale. Thus, there is a growing body of research dedicated to learning with limited labels, enabling machines to do more with less human supervision, in order to fully harness the benefits of deep learning in real-world settings. Such emerging research directions include domain adaptation [51], low-shot learning [47], self-supervised learning [20], and multi-task learning [36]. In this work, we propose to combine multiple modes of supervision on sources of labeled and unlabeled data for enhanced learning and generalization. Specifically, our work falls within the framework of semi-supervised learning (SSL) [5], in the context of image classification, which can leverage abundant unlabeled data to significantly improve upon supervised classifiers in the limited labeled data setting. Indeed, in many cases, state-of-the-art semi-supervised algorithms have been shown to approach the performance of strong supervised baselines using only a fraction of the available labeled data [30,43,26].
[Figure 1(a) schematic; component labels recovered from the diagram: Neural Network, Data Augmentation, Geometric Transformation, Supervised Branch (Supervised Cross-Entropy Loss), Self-Supervised Branch (Self-Supervised Cross-Entropy Loss).]
Figure 1(a): SESEMI architecture for supervised and semi-supervised image classification, with the self-supervised task of recognizing geometric transformations. The function h(x) produces six proxy labels defined as image rotations belonging in the set of {0, 90, 180, 270} degrees along with horizontal (left-right) and vertical (up-down) flips.
Our approach to SSL belongs to a class of methods that produce proxy, or surrogate, labels from unlabeled data without requiring human annotations, which are used as targets together with labeled data. Although proxy labels may not reflect the ground truth, they provide surprisingly strong supervisory signals for learning the underlying structure of the data manifold. The training protocol for this class of SSL algorithms simply imposes an additional loss term to the overall objective function of an otherwise supervised algorithm. The auxiliary loss describes the contribution of unlabeled data and is referred to as the unsupervised loss component.
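One plausible implementation of h(x) consistent with this description (our sketch; the paper's exact preprocessing may differ) is:

```python
# Map one image to six transformed copies with proxy labels:
# 0-3 = rotations of 0/90/180/270 degrees, 4 = horizontal flip, 5 = vertical flip.
# Square inputs are assumed so all transforms share the same shape.
import numpy as np

def h(image):
    """image: H x W x C array. Returns (transformed_images, proxy_labels)."""
    transforms = [
        np.rot90(image, k=0),
        np.rot90(image, k=1),
        np.rot90(image, k=2),
        np.rot90(image, k=3),
        image[:, ::-1, :],      # horizontal (left-right) flip
        image[::-1, :, :],      # vertical (up-down) flip
    ]
    return np.stack(transforms), np.arange(6)

x = np.random.rand(32, 32, 3)
xs, ys = h(x)
print(xs.shape, ys)             # (6, 32, 32, 3) [0 1 2 3 4 5]
```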
Summary of Contributions
We introduce a new algorithm to jointly train a self-supervised loss term with the traditional supervised objective for the multi-task learning of both unlabeled and labeled data in a single stage. Our work is in direct contrast to prior SSL approaches based on unsupervised or self-supervised learning [10,11], which require the sequential combination of unsupervised or self-supervised pre-training followed by supervised fine-tuning.
Our approach to utilize the self-supervised loss term both as a regularizer (applied to labeled data) and SSL method (applied to unlabeled data) is analogous to consistency regularization. Although leading approaches based on consistency regularization achieve state-of-the-art SSL results, these methods require careful tuning of many hyper-parameters and are generally not easy to implement in practice. Striving for simplicity and pragmatism, our models with self-supervised regularization require no additional hyper-parameters to tune for optimal performance beyond the standard set for training neural networks. Our work is among the first to challenge the long-standing success of consistency regularization for SSL.
We conduct extensive comparative experiments to validate the effectiveness of our models by showing semi-supervised results competitive with, and in many cases surpassing, previous state-of-the-art consistency baselines. We also demonstrate that supervised learning augmented with self-supervised regularization is a viable and attractive alternative to transfer learning without the need to pre-train a separate model on large labeled datasets. Lastly, we perform an ablation study showing our proposed algorithm is the best among a family of self-supervised regularization techniques, when comparing their relative contributions to SSL performance.
Learning with Self-Supervised Regularization
We present SESEMI, a conceptually simple yet effective algorithm for enhancing supervised and semi-supervised image classification via self-supervision. The design of SESEMI is depicted in Figure 1a. The input to SESEMI is a training set of input-target pairs (x, y) ∈ D_L and (optional) unlabeled inputs x ∈ D_U. Typically, we assume D_L and D_U are sampled from the same distribution p(x), in which case D_L is a labeled subset of D_U. However, that assumption may not necessarily hold true in real-world settings where there exists the potential for class-distribution mismatch [33]. That is, D_L is sampled from p(x) but D_U may be sampled from a different, although somewhat related, distribution q(x). The goal of SESEMI is to train a prediction function f_θ(x), parametrized by θ, that utilizes a combination of D_L and D_U to obtain significantly better predictive performance than what would be achieved by using D_L alone.
In a toy demonstration on the two-moons and three-spirals datasets, the prediction function f_θ(x) is a multi-layer perceptron with three hidden layers, each containing 100 leaky ReLU units [28] with α = 0.1. We observe that the supervised model is unable to fully capture the underlying shape of the data manifold of two moons when trained only on the labeled examples. Together with unlabeled data, SESEMI is able to learn better decision boundaries of both two moons and three spirals, resulting in fewer mis-classifications on the test set. This demonstration illustrates the applicability of SESEMI to other data modalities besides images.
Algorithm 1: SESEMI mini-batch training.
Require: Training set of labeled input-target pairs (x, y) ∈ D_L; training set of unlabeled inputs x ∈ D_U; geometric transformation function h(x) producing proxy labels ỹ ∈ D_U; input data augmentation functions g(x) and g̃(x); neural network architecture f_θ(x) with trainable parameters θ.
for each epoch over D_U do
    B_L ← g(x_{i∈D_L})  ⊲ Sample mini-batches of augmented labeled inputs.
    B_U ← g̃(h(x_{i∈D_U}))  ⊲ Sample mini-batches of augmented unlabeled inputs.
    for each mini-batch do
        z_{i∈B_L} ← f_θ(B_L)  ⊲ Compute model outputs for labeled inputs.
        z̃_{i∈B_U} ← f_θ(B_U)  ⊲ Compute model outputs for unlabeled inputs.
        L ← −(1/|B_L|) Σ_{i∈B_L} Σ_{c∈C} y_ic log(z_ic)  ⊲ Supervised cross-entropy loss.
            −(1/|B_U|) Σ_{i∈B_U} Σ_{k∈K} ỹ_ik log(z̃_ik)  ⊲ Self-supervised cross-entropy loss.
        θ ← θ − ∇_θ L  ⊲ Update parameters via gradient descent.
    end for
end for
return f_θ(x)
Convolutional Architectures
In principle, the prediction function f_θ(x) could be any classifier. For comparison and analysis with previous work, we experiment with three high-performance CNN architectures: (i) the 13-layer max-pooling Network-in-Network (NiN) [11]; (ii) the 13-layer max-pooling ConvNet [22,43,30,34,27]; and (iii) the more modern wide residual network with depth 28 and width 2 (WRN-28-2) [48,33].
We faithfully follow the original specifications of the NiN, WRN, and ConvNet architectures, so we refer to their papers for details. All architectures have convolutional layers followed by batch normalization [18] and ReLU non-linearity [31], except the ConvNet architecture uses leaky ReLU [28] with α = 0.1. The NiN, WRN, ConvNet architectures have roughly 1.01, 1.47, and 3.13 million parameters, respectively.
For both supervised and semi-supervised settings, we separate the input data into labeled and unlabeled branches, and apply the same CNN model to both. Note that the unlabeled branch consists of all available training examples, but without ground truth label information. One can view SESEMI as a multi-task architecture that has a common CNN "backbone" to learn a shared representation of both labeled and unlabeled data, and an output "head" for each task. The ConvNet backbone computes an abstract 6 × 6 × 128 dimensional feature representation from the input image, while the NiN and WRN backbones have an output of 8 × 8 × 192 and 8 × 8 × 128 dimensions, respectively. Each task has extra layers in the head, which may have a complex structure, and computes a separate loss. The head of the labeled branch has a global average pooling layer followed by softmax activation to evaluate the supervised task with standard categorical cross-entropy loss. For the unlabeled branch, we define a self-supervised pretext task to be learned in conjunction with the labeled branch.
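To make the two-branch design concrete, the following PyTorch sketch shows a shared backbone with a supervised head and a six-way self-supervised head. The tiny convolutional backbone and layer widths here are illustrative placeholders standing in for the NiN/ConvNet/WRN-28-2 backbones, not the architectures used in the paper.

```python
import torch.nn as nn

class SesemiNet(nn.Module):
    """Shared CNN backbone with two heads: a supervised classifier over
    `num_classes` and a self-supervised classifier over six geometric
    transformations. Layer sizes are illustrative only."""
    def __init__(self, num_classes=10, num_proxy_classes=6):
        super().__init__()
        self.backbone = nn.Sequential(              # stand-in for NiN / ConvNet / WRN-28-2
            nn.Conv2d(3, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)         # global average pooling
        self.supervised_head = nn.Linear(128, num_classes)
        self.self_supervised_head = nn.Linear(128, num_proxy_classes)

    def forward(self, x, branch="supervised"):
        feats = self.pool(self.backbone(x)).flatten(1)
        if branch == "supervised":
            return self.supervised_head(feats)      # class logits (softmax applied in the loss)
        return self.self_supervised_head(feats)     # transform logits
```

Each head is trained with a standard cross-entropy loss on its own branch of the data, as described next.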
Recognizing Image Rotations and Flips as Self-Supervision
Following [11], we apply a set of discrete geometric transformations on the input image and train the network to recognize the resulting transformations as the self-supervised task. The network architecture for the self-supervised task shares the same CNN backbone with its supervised counterpart but has a separate output head consisting of a global average pooling layer followed by softmax activation. In their original work on self-supervised rotation recognition, Gidaris et al. [11] defined the proxy labels to be image rotations belonging in the set of {0, 90, 180, 270} degrees, resulting in a four-way classification task. Their models performed well on the rotation recognition task by learning salient visual features depicted in the image, such as location of objects, type, and pose. In this work, we extend the geometric transformations to include horizontal (left-right) and vertical (up-down) flips, resulting in the self-supervised cross-entropy loss over six classes. Further, we propose to train SESEMI on both labeled and unlabeled data simultaneously, which is more efficient and yields better performance than the approach of Gidaris et al. based on the sequential combination of self-supervised pre-training on unlabeled data followed by supervised fine-tuning on labeled data.
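A minimal sketch of such a proxy-label generator, assuming image tensors of shape (N, C, H, W), is given below; it is meant only to illustrate the six-way labeling scheme, not to reproduce the paper's exact augmentation pipeline.

```python
import torch

def make_proxy_batch(images):
    """Apply one of six geometric transforms to each image and return the
    transform index as the self-supervised proxy label:
    0-3 = rotation by 0/90/180/270 degrees, 4 = horizontal flip, 5 = vertical flip."""
    labels = torch.randint(0, 6, (images.size(0),))
    transformed = []
    for img, k in zip(images, labels.tolist()):
        if k < 4:
            transformed.append(torch.rot90(img, k, dims=(1, 2)))  # rotate in the spatial plane
        elif k == 4:
            transformed.append(torch.flip(img, dims=(2,)))        # left-right flip
        else:
            transformed.append(torch.flip(img, dims=(1,)))        # up-down flip
    return torch.stack(transformed), labels
```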
Integrating Self-Supervised Loss as Regularization
The algorithmic overview of SESEMI is provided in Algorithm 1. At each training step, we sample two mini-batches having the same number of labeled and unlabeled examples as inputs to a shared CNN backbone f_θ(x). Note that in a typical semi-supervised setting, labeled examples will repeat in a mini-batch because the number of unlabeled examples is much greater. We forward propagate f_θ(x) twice, once on the labeled branch x_{i∈D_L} and another pass on the unlabeled branch x_{i∈D_U}, resulting in softmax prediction vectors z_i and z̃_i, respectively. We compute the supervised cross-entropy loss L_SUPER(y_i, z_i) using ground truth labels y_i and compute the self-supervised cross-entropy loss L_SELF(ỹ_i, z̃_i) using proxy labels ỹ_i generated from image rotations and flips. The parameters θ are learned via backpropagation by minimizing the multi-task SESEMI objective function defined as the weighted sum of supervised and self-supervised loss components:
L_SESEMI = L_SUPER(y_i, z_i) + w · L_SELF(ỹ_i, z̃_i).
Our formulation of the SESEMI objective treats the self-supervised loss as a regularization term, and w > 0 is the regularization hyper-parameter that controls the relative contribution of self-supervision in the overall objective function.
In previous SSL approaches based on consistency regularization, such as Π model and Mean Teacher, w was formulated as the consistency coefficient and was subjected to considerable tuning, on a per-dataset basis, for optimal performance. We experimented with different values for the weighting parameter w in SESEMI and found w = 1 yields consistent results across all datasets and CNN architectures, suggesting that supervised and self-supervised losses are relatively balanced and compatible for image classification. Moreover, setting w = 1 leads to a convenient benefit of having one less hyper-parameter to tune. We backpropagate gradients to both branches of the network to update θ, similar to Π model. The self-supervised loss term has a dual purpose. First, it enables SESEMI to learn additional, complementary visual features from unlabeled data that help guide its decision boundaries along the data manifold. Second, it is compatible with conventional supervised learning without unlabeled data by serving as a strong regularizer against geometric transformations for improved generalization. We refer to SESEMI models trained with self-supervised regularization on labeled data as augmented supervised learning (ASL). At inference time, we simply take the supervised branch of the network to make predictions on test data and discard the self-supervised branch.
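Putting the pieces together, one SESEMI update can be written roughly as follows, using a two-head network such as the SesemiNet sketch above and the make_proxy_batch helper; this is a schematic reading of Algorithm 1 with w = 1, not the authors' reference implementation.

```python
import torch.nn.functional as F

def sesemi_step(model, labeled_batch, unlabeled_images, optimizer, w=1.0):
    """One multi-task update: supervised cross-entropy on labeled data plus
    self-supervised cross-entropy on proxy-labeled unlabeled data."""
    x_l, y_l = labeled_batch
    x_u, y_proxy = make_proxy_batch(unlabeled_images)

    loss_sup = F.cross_entropy(model(x_l, branch="supervised"), y_l)
    loss_self = F.cross_entropy(model(x_u, branch="self_supervised"), y_proxy)
    loss = loss_sup + w * loss_self              # w = 1 throughout the paper

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```

At test time only the supervised branch is queried, so inference reduces to model(x, branch="supervised").argmax(dim=1).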
Empirical Evaluation
We follow standard evaluation protocol for SSL, in which we randomly sample varying fractions of the training data as labeled examples while treating the entire training set, discarding all label information, as the source of unlabeled data. We train a model with both labeled and unlabeled data according to SESEMI (Algorithm 1) and compare its performance to that of the same model trained using only the labeled portion in the traditional supervised manner. For ASL, we train SESEMI using the ConvNet architecture on labeled data, but augment the supervised objective with self-supervised regularization. The performance metric is classification error rate. We expect a good SSL algorithm to yield better results (lower error rate) when unlabeled data is used together with labeled data. We closely follow the experimental protocols described in [11,22,43,33] to remain consistent with previous work.
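As an illustration of this protocol, the small NumPy sketch below subsamples which training indices keep their labels while the full training set serves as the unlabeled pool; it ignores the per-class balancing typically applied in SSL benchmarks and is only meant to show the data split, not the exact procedure used in the experiments.

```python
import numpy as np

def split_labels(num_examples, num_labeled, seed=0):
    """Randomly choose which training indices keep their labels; all indices
    are still used, with labels discarded, for the unlabeled branch."""
    rng = np.random.default_rng(seed)
    labeled_idx = rng.choice(num_examples, size=num_labeled, replace=False)
    unlabeled_idx = np.arange(num_examples)   # entire training set, labels dropped
    return labeled_idx, unlabeled_idx
```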
Datasets and Baselines
We evaluate our proposed SESEMI algorithm on three benchmark datasets for supervised and semi-supervised image classification: Street View House Numbers (SVHN) [32], CIFAR-10 and CIFAR-100 [21]. For details on the datasets and implementation, see Appendices A and B. We also use two auxiliary datasets to augment our experiments on supervised and semi-supervised learning: 80 million Tiny Images [44] and ImageNet-32 [7]. Tiny Images is the superset of CIFAR-10 and CIFAR-100 organized into 75,062 generic scene and object categories. ImageNet-32 is the full ImageNet dataset [8] down-sampled to 32 × 32 pixels. We use ImageNet-32 for supervised transfer learning experiments. We use Tiny Images as a source of unlabeled extra data to augment SSL on CIFAR-100 and to evaluate SESEMI under the condition of class-distribution mismatch.
We empirically compare our SESEMI models trained with self-supervised regularization against two state-of-the-art baselines for supervised and semi-supervised learning: (a) the RotNet models of Gidaris et al. (2018) [11], which were pre-trained on unlabeled data with self-supervised rotation loss followed by a separate step of supervised fine-tuning on labeled data; and (b) models jointly trained on both unlabeled and labeled data using consistency regularization as the unsupervised loss, namely Π model and its Temporal Ensembling (TempEns) variant [22], VAT [30], and Mean Teacher [43].
The RotNet baseline uses the 13-layer max-pooling NiN architecture, whereas the consistency models use the 13-layer max-pooling ConvNet architecture. We also provide a comparison of SESEMI within the unified evaluation framework of Oliver et al. (2018) [33], in which they re-implemented the consistency models using the WRN-28-2 architecture. Thus, our experiments report results from both ConvNet and WRN backbones to evaluate the relative impact of alternative convolutional architectures on SSL performance.
Results and Analysis
Self-Supervised Regularization Outperforms Pre-Training + Fine-Tuning on CIFAR-10
Following the protocol of Gidaris et al. (2018) [11], we evaluate the accuracy of SESEMI using varying quantities of labeled examples from 200 to 50,000. Figure 2 presents our supervised and semi-supervised results on CIFAR-10 with those previously obtained by RotNet. The best results are in boldface indicating the lowest classification error rate. For both supervised and semi-supervised learning, we find that our SESEMI models trained with self-supervised regularization significantly outperform RotNet models by as much as 23.9%. Why does SESEMI outperform RotNet when the two models ostensibly share the same architecture and self-supervised task? This question can be partly explained by empirical observations that suggest better performance on the pre-training task does not always translate to higher accuracy on downstream tasks via supervised fine-tuning [20]. By solving both supervised and self-supervised objectives during training, SESEMI is able to learn complementary visual features from labeled and unlabeled data simultaneously for enhanced generalization.
We also perform an experiment to pre-train the NiN model on the large ImageNet-32 dataset containing 1.28 million images and then transfer to CIFAR-10 via supervised fine-tuning. Our motivation is to gain insight into the potential upper bound of supervised learning with limited labels via transfer learning. We find that our SESEMI models compare favorably to supervised transfer learning without the need to pre-train a separate model on ImageNet-scale labeled dataset. The ImageNet-32 entry is regarded as the upper bound in performance, whereas the Supervised entry indicates the lower bound.
Self-Supervised Regularization Outperforms Consistency Regularization on CIFAR-10 and CIFAR-100
SVHN
Table 1 compares our supervised and semi-supervised results with consistency baselines. In analyzing the SVHN results on the left side of Table 1, we observe that SESEMI ASL surpasses the supervised baselines, including ImageNet-32 and those with strong Mixup [50] and Manifold Mixup [45] regularization, by a large margin for all experiments. However, the results are not satisfactory when compared against the semi-supervised baselines, especially Mean Teacher. We discuss the limitation of SESEMI for semi-supervised learning on the SVHN dataset in Section 4.
CIFAR-10
Experiments on CIFAR-10 tell a different story. The right side of Table 1 shows that SESEMI uniformly outperforms all supervised and semi-supervised baselines, improving on SSL results by as much as 17%. On CIFAR-10, the combination of supervised and self-supervised learning is a strength of SESEMI, but it is also a limitation in the case of SVHN. We observe that the ConvNet and WRN architectures produce comparable results across the board.
CIFAR-100 and Tiny Images
The successes of SESEMI on CIFAR-10 also transfer to experiments on CIFAR-100. The left side of Table 2 provides a comparison of SESEMI against the Π model and TempEns baselines, where we obtain competitive semi-supervised performance using 10,000 labels and achieve state-of-the-art supervised results when all 50,000 labels are available, matching the upper bound performance of ImageNet-32 supervised fine-tuning.
Additionally, we run two experiments to evaluate the performance of SESEMI in the case of class-distribution mismatch. Following Laine and Aila (2017) [22], our first experiment utilizes all 50,000 available labels from CIFAR-100 and randomly samples 500,000 unlabeled extra Tiny Images, most belonging to categories not found in CIFAR-100. Our second experiment uses a restricted set of 237,203 Tiny Images from categories found in CIFAR-100. The right side of Table 2 presents SSL error rates on CIFAR-100 augmented with Tiny Images. Results show that adding 500,000 unlabeled extra data with significant class-distribution mismatch does not degrade SESEMI performance, as observed by Oliver et al. (2018) [33] with other SSL approaches. For SESEMI with the WRN-28-2 architecture, the addition of (randomly selected) unlabeled extra data from Tiny Images further improves results.
SESEMI with Residual Networks
Table 3 provides a comparison of SESEMI for semi-supervised learning on SVHN and CIFAR-10 within the unified evaluation framework of Oliver et al. (2018) [33], in which they re-implemented the consistency baselines using the WRN-28-2 architecture, carried out large-scale hyper-parameter optimization specific to each technique, and reported best-case performances. Our SESEMI model with the WRN-28-2 architecture establishes a new upper bound in SSL performance by outperforming all methods, including ImageNet-32, under this evaluation setting for both SVHN and CIFAR-10.
It is important to note that we do not perform any hyper-parameter search in this experiment, but use the same set of hyper-parameters described in Appendix B along with w = 1 for the weighting of the self-supervised loss term in SESEMI. In practical applications where tuning many (possibly inter-dependent) hyper-parameters can be problematic [33], especially over small validation sets, our approach to supervised and semi-supervised learning using SESEMI offers a clear and significant benefit.
Comparison with State-of-the-Art Supervised Methods on CIFAR-10 and CIFAR-100
Motivated by the strong performances of SESEMI ASL for supervised learning augmented with self-supervised regularization, we provide a comparative analysis of SESEMI against several previous state-of-the-art supervised methods on CIFAR-10 and CIFAR-100 in Table 4. We observe that SESEMI ASL is competitive in predictive performance with advanced CNN architectures like FractalNet [24], Fractional Max-Pooling [14], ResNet-1001 [15], Wide ResNet-40-4 [48], and DenseNet [17] while requiring a fraction of the computational complexity, as measured in millions of trainable parameters. For those architectures having roughly the same number of trainable parameters, our SESEMI ASL models outperform Highway Network [41] and FitResNet with LSUV initialization [29] by a large margin.
The effectiveness of SESEMI ASL is directly attributed to self-supervised regularization and not to the CNN architecture. The same ConvNet architecture without self-supervised regularization performs significantly worse on both CIFAR-10 (5.82% error rate) and CIFAR-100 (26.42% error rate). In principle, self-supervised regularization could be incorporated into any CNN architecture for further reduction in classification error rate.
Ablation Study
Several studies have provided conclusive evidence that self-supervision is an effective unsupervised pre-training technique for downstream supervised visual understanding tasks such as image classification, object detection, and semantic segmentation [9,20]. However, the evaluation of self-supervised algorithms for SSL has not been explored, especially in the setting where the supervised and self-supervised losses are jointly trained, per Algorithm 1. We briefly describe the following self-supervised tasks based on image reconstruction and compare their SSL performances against the task of classifying image rotations and flips. All experiments use the same convolutional encoder-decoder framework, where the encoder backbone is the WRN-28-2 architecture, and the decoder head comprises a set of two deconvolutional layers [25] with batch normalization and ReLU non-linearity to produce a reconstructed output with the same dimensions as the input.
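For reference, a decoder head along these lines might be sketched as below; the kernel sizes, strides, and the 8×8×128 input shape (matching the WRN-28-2 backbone output) are assumptions made for illustration, and the final layer is left linear so the same head can regress pixel intensities or a*b* color values.

```python
import torch.nn as nn

class ReconstructionDecoder(nn.Module):
    """Two transposed-convolution (deconvolution) layers with batch norm and
    ReLU, upsampling an 8x8 feature map back to the 32x32 input resolution."""
    def __init__(self, in_channels=128, out_channels=3):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(in_channels, 64, kernel_size=4, stride=2, padding=1),   # 8x8 -> 16x16
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.ConvTranspose2d(64, out_channels, kernel_size=4, stride=2, padding=1),  # 16x16 -> 32x32
        )

    def forward(self, features):
        return self.decoder(features)
```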
Denoising Autoencoder
The self-supervised objective of this simple baseline is to minimize the mean pixel-wise squared error between the reconstructed output and the image input corrupted with Gaussian noise.
Image Inpainting
Following [35], the input to the encoder is an image with the central square patch, covering 1/4 of the image, masked out or set to zero. The decoder is trained to generate a prediction for the masked region using a masked L2 reconstruction loss as self-supervision.
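A rough sketch of this masked reconstruction objective, assuming square inputs and a central patch covering 1/4 of the image area, could look like the following:

```python
import torch

def mask_center(images):
    """Zero out the central square patch (half the side length, i.e. 1/4 of the
    area) and return the corrupted images plus a mask over the hidden region."""
    n, c, h, w = images.shape
    top, left = h // 4, w // 4
    mask = torch.zeros_like(images)
    mask[:, :, top:top + h // 2, left:left + w // 2] = 1.0
    return images * (1.0 - mask), mask

def masked_l2_loss(reconstruction, target, mask):
    """Mean squared error restricted to the masked (hidden) pixels."""
    return ((reconstruction - target) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)
```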
Image Colorization
Following [23], the input to the encoder is a grayscale image (the L* channel of the L*a*b* color space) and the decoder is trained to predict the a*b* color components at every pixel. The self-supervised loss is the mean squared error between the reconstructed a*b* color output and the ground truth a*b* color components.
Ablation Results
Figure 3 shows that the individual tasks of recognizing image flips and rotations outperform image reconstruction, inpainting, and colorization on the CIFAR-10 dataset. These results suggest that classification-based self-supervision provides a better, or perhaps more compatible, proxy label for semi-supervised image classification than reconstruction-based tasks. Our findings corroborate recent studies showing rotation-based self-supervision is the superior pre-training technique for downstream transfer learning tasks [11,20]. Lastly, combining horizontal and vertical flips with the rotation recognition task outperforms all other self-supervised tasks, leading to an improvement in SSL performance over rotation recognition by 8.2%.
Discussion
Limitation of SESEMI
We speculate the poor performance of SESEMI on the SVHN dataset stems from our chosen self-supervised task of predicting image rotations and flips. Gidaris et al. (2018) [11] showed that their self-supervised model focused its attention maps on salient parts of the images to aid in the rotation recognition task. We hypothesize similar dynamics are at play here, but the SVHN dataset presents an additional layer of complexity in which the centermost digits (the digits to be recognized) are often surrounded by "distractor" digits. When the digits are rotated and flipped, the self-supervised branch is likely picking up dominant visual features corresponding to the distractor digits and relating them to the supervised branch as belonging to the digits of interest. These "miscues" are most prominent when few labels are present, where the supervised branch is simply learning visual information from the self-supervised branch. However, when all labels are available, the supervised branch is able to correct the miscues, and our SESEMI models produce the best classification results.
Comparison with Recent Developments in SSL
Recent years have seen a flurry of research on SSL techniques that are related to or concurrent with our work. The prior work of Smooth Neighbors on Teacher Graphs [27], Virtual Adversarial Dropout [34], Interpolation Consistency Training [46], Stochastic Weight Averaging [2], and Label Propagation [19] advanced the field of semi-supervised learning by achieving impressive results on SVHN, CIFAR-10, and CIFAR-100 benchmarks. However, those methods all build upon strong consistency baselines by either adding a third loss (and new hyper-parameters to tune) to the overall objective of consistency models or averaging model weights. The concurrent work on self-supervised semi-supervised learning [49] independently explores the contributions of self-supervised regularization for SSL in a way similar to ours, but with a different evaluation protocol and end goal. MixMatch [4] combines strong Mixup [50] regularization with consistency regularization to achieve state-of-the-art SSL results.
Our goal for this work is to directly compare the effectiveness of self-supervised regularization to consistency regularization, which has not been done before. Moreover, we deliberately avoid increasing complexity in favor of simplicity and pragmatism with our design choices and training protocol. In principle, our work is potentially complementary to Label Propagation and MixMatch. Label Propagation requires the Mean Teacher prediction function to work well, which can be directly replaced by our SESEMI module. For MixMatch, we can integrate a third loss term into the objective function for learning additional self-supervised features. These are viable topics for future research.
Conclusion
We presented a conceptually simple yet effective multi-task CNN architecture for supervised and semi-supervised learning based on self-supervised regularization. Our approach produces proxy labels from geometric transformations on unlabeled data, which are combined with ground truth labels for improved learning and generalization in the limited labeled data setting. We provided a comprehensive empirical evaluation of our approach using three different CNN architectures, spanning multiple benchmark datasets and baselines to demonstrate its effectiveness and wide range of applicability. We highlight two attractive benefits of SESEMI. First, SESEMI achieves state-of-the-art predictive performance for both supervised and semi-supervised image classification without introducing additional hyper-parameters to tune. Second, SESEMI does not need a separate pre-training step, but is trained end-to-end for simplicity, efficiency, and practicality.
| 5,497 |
1906.10343
|
2954276933
|
Recent advances in semi-supervised learning have shown tremendous potential in overcoming a major barrier to the success of modern machine learning algorithms: access to vast amounts of human-labeled training data. Algorithms based on self-ensemble learning and virtual adversarial training can harness the abundance of unlabeled data to produce impressive state-of-the-art results on a number of semi-supervised benchmarks, approaching the performance of strong supervised baselines using only a fraction of the available labeled data. However, these methods often require careful tuning of many hyper-parameters and are usually not easy to implement in practice. In this work, we present a conceptually simple yet effective semi-supervised algorithm based on self-supervised learning to combine semantic feature representations from unlabeled data. Our models are efficiently trained end-to-end for the joint, multi-task learning of labeled and unlabeled data in a single stage. Striving for simplicity and practicality, our approach requires no additional hyper-parameters to tune for optimal performance beyond the standard set for training convolutional neural networks. We conduct a comprehensive empirical evaluation of our models for semi-supervised image classification on SVHN, CIFAR-10 and CIFAR-100, and demonstrate results competitive with, and in some cases exceeding, prior state of the art. Reference code and data are available at this https URL
|
Models belonging to the self-ensembling class, such as Pseudo-Ensembles @cite_6 , Ladder networks @cite_7 , @math model @cite_13 and Mean Teacher @cite_44 , utilize the output predictions on unlabeled data as proxy labels for SSL. This class of methods considers the model as a stochastic prediction function, in which different model configurations, such as dropout @cite_34 and data augmentation, along with varying levels of noise in the input data can produce drastically different output predictions. The unsupervised objective of self-ensemble models is to minimize the mean squared error of multiple model outputs under random perturbations and data augmentation for the same training examples. The motivation behind this approach is to further regularize the model through the principle that perturbations in the input data and/or data augmentation techniques should not significantly change the output of the model @cite_46 . Self-ensembling approaches are robust to random perturbations and geometric transformations, and are currently among the state of the art in SSL on several benchmark image classification datasets.
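For concreteness, the unsupervised consistency term used by these self-ensembling methods can be sketched roughly as below: two stochastic forward passes over the same unlabeled batch under different augmentation and dropout noise, penalized by mean squared error. The helper names and the PyTorch framing are illustrative assumptions, and some implementations additionally stop gradients through one of the two passes.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, images, augment):
    """Mean squared error between two stochastic predictions of the same
    inputs under different augmentations (and dropout noise)."""
    model.train()                                  # keep dropout active for both passes
    p1 = torch.softmax(model(augment(images)), dim=1)
    p2 = torch.softmax(model(augment(images)), dim=1)
    return F.mse_loss(p1, p2)
```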
|
{
"abstract": [
"We combine supervised learning with unsupervised learning in deep neural networks. The proposed model is trained to simultaneously minimize the sum of supervised and unsupervised cost functions by backpropagation, avoiding the need for layer-wise pre-training. Our work builds on top of the Ladder network proposed by Valpola [1] which we extend by combining the model with supervision. We show that the resulting model reaches state-of-the-art performance in semi-supervised MNIST and CIFAR-10 classification in addition to permutation-invariant MNIST classification with all labels.",
"",
"The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35 on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55 to 6.28 , and on ImageNet 2012 with 10 of the labels from 35.24 to 9.11 .",
"Effective convolutional neural networks are trained on large sets of labeled data. However, creating large labeled datasets is a very costly and time-consuming task. Semi-supervised learning uses unlabeled data to train a model with higher accuracy when there is a limited set of labeled data available. In this paper, we consider the problem of semi-supervised learning with convolutional neural networks. Techniques such as randomized data augmentation, dropout and random max-pooling provide better generalization and stability for classifiers that are trained using gradient descent. Multiple passes of an individual sample through the network might lead to different predictions due to the non-deterministic behavior of these techniques. We propose an unsupervised loss function that takes advantage of the stochastic nature of these methods and minimizes the difference between the predictions of multiple passes of a training sample through the network. We evaluate the proposed method on several benchmark datasets.",
"Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.",
"In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44 to 7.05 in SVHN with 500 labels and from 18.63 to 16.55 in CIFAR-10 with 4000 labels, and further to 5.12 and 12.16 by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels."
],
"cite_N": [
"@cite_7",
"@cite_6",
"@cite_44",
"@cite_46",
"@cite_34",
"@cite_13"
],
"mid": [
"830076066",
"",
"2592691248",
"2963435192",
"2095705004",
"2951970475"
]
}
|
Exploring Self-Supervised Regularization for Supervised and Semi-Supervised Learning
|
Contemporary approaches to supervised representation learning, such as the convolutional neural network (CNN), continue to push the boundaries of research across a number of domains including speech recognition, visual understanding, and language modeling. However, such progress usually requires massive amounts of human-labeled training data. The process of collecting, curating, and hand-labeling large amounts of training data is often tedious, time-consuming, and costly to scale. Thus, there is a growing body of research dedicated to learning with limited labels, enabling machines to do more with less human supervision, in order to fully harness the benefits of deep learning in real-world settings. Such emerging research directions include domain adaptation [51], low-shot learning [47], self-supervised learning [20], and multi-task learning [36]. In this work, we propose to combine multiple modes of supervision on sources of labeled and unlabeled data for enhanced learning and generalization. Specifically, our work falls within the framework of semi-supervised learning (SSL) [5], in the context of image classification, which can leverage abundant unlabeled data to significantly improve upon supervised classifiers in the limited labeled data setting. Indeed, in many cases, state-of-the-art semi-supervised algorithms have been shown to approach the performance of strong supervised baselines using only a fraction of the available labeled data [30,43,26].
[Extraction residue from the Figure 1a diagram removed; the recoverable block labels were: Data Augmentation, Neural Network, Geometric Transformation, Supervised Cross-Entropy Loss, Self-Supervised Cross-Entropy Loss, Supervised Branch, Self-Supervised Branch.]
(a) SESEMI architecture for supervised and semi-supervised image classification, with the self-supervised task of recognizing geometric transformations. The function h(x) produces six proxy labels defined as image rotations belonging in the set of {0, 90, 180, 270} degrees along with horizontal (left-right) and vertical (up-down) flips. Our approach to SSL belongs to a class of methods that produce proxy, or surrogate, labels from unlabeled data without requiring human annotations, which are used as targets together with labeled data. Although proxy labels may not reflect the ground truth, they provide surprisingly strong supervisory signals for learning the underlying structure of the data manifold. The training protocol for this class of SSL algorithms simply imposes an additional loss term to the overall objective function of an otherwise supervised algorithm. The auxiliary loss describes the contribution of unlabeled data and is referred to as the unsupervised loss component.
Summary of Contributions
We introduce a new algorithm to jointly train a self-supervised loss term with the traditional supervised objective for the multi-task learning of both unlabeled and labeled data in a single stage. Our work is in direct contrast to prior SSL approaches based on unsupervised or self-supervised learning [10,11], which require the sequential combination of unsupervised or self-supervised pre-training followed by supervised fine-tuning.
Our approach to utilize the self-supervised loss term both as a regularizer (applied to labeled data) and SSL method (applied to unlabeled data) is analogous to consistency regularization. Although leading approaches based on consistency regularization achieve state-of-the-art SSL results, these methods require careful tuning of many hyper-parameters and are generally not easy to implement in practice. Striving for simplicity and pragmatism, our models with self-supervised regularization require no additional hyper-parameters to tune for optimal performance beyond the standard set for training neural networks. Our work is among the first to challenge the long-standing success of consistency regularization for SSL.
We conduct extensive comparative experiments to validate the effectiveness of our models by showing semi-supervised results competitive with, and in many cases surpassing, previous state-of-the-art consistency baselines. We also demonstrate that supervised learning augmented with self-supervised regularization is a viable and attractive alternative to transfer learning without the need to pre-train a separate model on large labeled datasets. Lastly, we perform an ablation study showing our proposed algorithm is the best among a family of self-supervised regularization techniques, when comparing their relative contributions to SSL performance.
Learning with Self-Supervised Regularization
We present SESEMI, a conceptually simple yet effective algorithm for enhancing supervised and semi-supervised image classification via self-supervision. The design of SESEMI is depicted in Figure 1a. The input to SESEMI is a training set of input-target pairs (x, y) ∈ D_L and (optional) unlabeled inputs x ∈ D_U. Typically, we assume D_L and D_U are sampled from the same distribution p(x), in which case D_L is a labeled subset of D_U. However, that assumption may not necessarily hold true in real-world settings where there exists the potential for class-distribution mismatch [33]. That is, D_L is sampled from p(x) but D_U may be sampled from a different, although somewhat related, distribution q(x). The goal of SESEMI is to train a prediction function f_θ(x), parametrized by θ, that utilizes a combination of D_L and D_U to obtain significantly better predictive performance than what would be achieved by using D_L alone.
In a toy demonstration on the two-moons and three-spirals datasets, the prediction function f_θ(x) is a multi-layer perceptron with three hidden layers, each containing 100 leaky ReLU units [28] with α = 0.1. We observe that the supervised model is unable to fully capture the underlying shape of the data manifold of two moons when trained only on the labeled examples. Together with unlabeled data, SESEMI is able to learn better decision boundaries of both two moons and three spirals, resulting in fewer mis-classifications on the test set. This demonstration illustrates the applicability of SESEMI to other data modalities besides images.
Algorithm 1: SESEMI mini-batch training.
Require: Training set of labeled input-target pairs (x, y) ∈ D_L; training set of unlabeled inputs x ∈ D_U; geometric transformation function h(x) producing proxy labels ỹ ∈ D_U; input data augmentation functions g(x) and g̃(x); neural network architecture f_θ(x) with trainable parameters θ.
for each epoch over D_U do
    B_L ← g(x_{i∈D_L})  ⊲ Sample mini-batches of augmented labeled inputs.
    B_U ← g̃(h(x_{i∈D_U}))  ⊲ Sample mini-batches of augmented unlabeled inputs.
    for each mini-batch do
        z_{i∈B_L} ← f_θ(B_L)  ⊲ Compute model outputs for labeled inputs.
        z̃_{i∈B_U} ← f_θ(B_U)  ⊲ Compute model outputs for unlabeled inputs.
        L ← −(1/|B_L|) Σ_{i∈B_L} Σ_{c∈C} y_ic log(z_ic)  ⊲ Supervised cross-entropy loss.
            −(1/|B_U|) Σ_{i∈B_U} Σ_{k∈K} ỹ_ik log(z̃_ik)  ⊲ Self-supervised cross-entropy loss.
        θ ← θ − ∇_θ L  ⊲ Update parameters via gradient descent.
    end for
end for
return f_θ(x)
Convolutional Architectures
In principle, the prediction function f_θ(x) could be any classifier. For comparison and analysis with previous work, we experiment with three high-performance CNN architectures: (i) the 13-layer max-pooling Network-in-Network (NiN) [11]; (ii) the 13-layer max-pooling ConvNet [22,43,30,34,27]; and (iii) the more modern wide residual network with depth 28 and width 2 (WRN-28-2) [48,33].
We faithfully follow the original specifications of the NiN, WRN, and ConvNet architectures, so we refer to their papers for details. All architectures have convolutional layers followed by batch normalization [18] and ReLU non-linearity [31], except the ConvNet architecture uses leaky ReLU [28] with α = 0.1. The NiN, WRN, ConvNet architectures have roughly 1.01, 1.47, and 3.13 million parameters, respectively.
For both supervised and semi-supervised settings, we separate the input data into labeled and unlabeled branches, and apply the same CNN model to both. Note that the unlabeled branch consists of all available training examples, but without ground truth label information. One can view SESEMI as a multi-task architecture that has a common CNN "backbone" to learn a shared representation of both labeled and unlabeled data, and an output "head" for each task. The ConvNet backbone computes an abstract 6 × 6 × 128 dimensional feature representation from the input image, while the NiN and WRN backbones have an output of 8 × 8 × 192 and 8 × 8 × 128 dimensions, respectively. Each task has extra layers in the head, which may have a complex structure, and computes a separate loss. The head of the labeled branch has a global average pooling layer followed by softmax activation to evaluate the supervised task with standard categorical cross-entropy loss. For the unlabeled branch, we define a self-supervised pretext task to be learned in conjunction with the labeled branch.
Recognizing Image Rotations and Flips as Self-Supervision
Following [11], we apply a set of discrete geometric transformations on the input image and train the network to recognize the resulting transformations as the self-supervised task. The network architecture for the self-supervised task shares the same CNN backbone with its supervised counterpart but has a separate output head consisting of a global average pooling layer followed by softmax activation. In their original work on self-supervised rotation recognition, Gidaris et al. [11] defined the proxy labels to be image rotations belonging in the set of {0, 90, 180, 270} degrees, resulting in a four-way classification task. Their models performed well on the rotation recognition task by learning salient visual features depicted in the image, such as location of objects, type, and pose. In this work, we extend the geometric transformations to include horizontal (left-right) and vertical (up-down) flips, resulting in the self-supervised cross-entropy loss over six classes. Further, we propose to train SESEMI on both labeled and unlabeled data simultaneously, which is more efficient and yields better performance than the approach of Gidaris et al. based on the sequential combination of self-supervised pre-training on unlabeled data followed by supervised fine-tuning on labeled data.
Integrating Self-Supervised Loss as Regularization
The algorithmic overview of SESEMI is provided in Algorithm 1. At each training step, we sample two mini-batches having the same number of labeled and unlabeled examples as inputs to a shared CNN backbone f_θ(x). Note that in a typical semi-supervised setting, labeled examples will repeat in a mini-batch because the number of unlabeled examples is much greater. We forward propagate f_θ(x) twice, once on the labeled branch x_{i∈D_L} and another pass on the unlabeled branch x_{i∈D_U}, resulting in softmax prediction vectors z_i and z̃_i, respectively. We compute the supervised cross-entropy loss L_SUPER(y_i, z_i) using ground truth labels y_i and compute the self-supervised cross-entropy loss L_SELF(ỹ_i, z̃_i) using proxy labels ỹ_i generated from image rotations and flips. The parameters θ are learned via backpropagation by minimizing the multi-task SESEMI objective function defined as the weighted sum of supervised and self-supervised loss components:
L_SESEMI = L_SUPER(y_i, z_i) + w · L_SELF(ỹ_i, z̃_i).
Our formulation of the SESEMI objective treats the self-supervised loss as a regularization term, and w > 0 is the regularization hyper-parameter that controls the relative contribution of self-supervision in the overall objective function.
In previous SSL approaches based on consistency regularization, such as Π model and Mean Teacher, w was formulated as the consistency coefficient and was subjected to considerable tuning, on a per-dataset basis, for optimal performance. We experimented with different values for the weighting parameter w in SESEMI and found w = 1 yields consistent results across all datasets and CNN architectures, suggesting that supervised and self-supervised losses are relatively balanced and compatible for image classification. Moreover, setting w = 1 leads to a convenient benefit of having one less hyper-parameter to tune. We backpropagate gradients to both branches of the network to update θ, similar to Π model. The self-supervised loss term has a dual purpose. First, it enables SESEMI to learn additional, complementary visual features from unlabeled data that help guide its decision boundaries along the data manifold. Second, it is compatible with conventional supervised learning without unlabeled data by serving as a strong regularizer against geometric transformations for improved generalization. We refer to SESEMI models trained with self-supervised regularization on labeled data as augmented supervised learning (ASL). At inference time, we simply take the supervised branch of the network to make predictions on test data and discard the self-supervised branch.
Empirical Evaluation
We follow standard evaluation protocol for SSL, in which we randomly sample varying fractions of the training data as labeled examples while treating the entire training set, discarding all label information, as the source of unlabeled data. We train a model with both labeled and unlabeled data according to SESEMI (Algorithm 1) and compare its performance to that of the same model trained using only the labeled portion in the traditional supervised manner. For ASL, we train SESEMI using the ConvNet architecture on labeled data, but augment the supervised objective with self-supervised regularization. The performance metric is classification error rate. We expect a good SSL algorithm to yield better results (lower error rate) when unlabeled data is used together with labeled data. We closely follow the experimental protocols described in [11,22,43,33] to remain consistent with previous work.
Datasets and Baselines
We evaluate our proposed SESEMI algorithm on three benchmark datasets for supervised and semi-supervised image classification: Street View House Numbers (SVHN) [32], CIFAR-10 and CIFAR-100 [21]. For details on the datasets and implementation, see Appendices A and B. We also use two auxiliary datasets to augment our experiments on supervised and semi-supervised learning: 80 million Tiny Images [44] and ImageNet-32 [7]. Tiny Images is the superset of CIFAR-10 and CIFAR-100 organized into 75,062 generic scene and object categories. ImageNet-32 is the full ImageNet dataset [8] down-sampled to 32 × 32 pixels. We use ImageNet-32 for supervised transfer learning experiments. We use Tiny Images as a source of unlabeled extra data to augment SSL on CIFAR-100 and to evaluate SESEMI under the condition of class-distribution mismatch.
We empirically compare our SESEMI models trained with self-supervised regularization against two state-of-the-art baselines for supervised and semi-supervised learning: (a) the RotNet models of Gidaris et al. (2018) [11], which were pre-trained on unlabeled data with self-supervised rotation loss followed by a separate step of supervised fine-tuning on labeled data; and (b) models jointly trained on both unlabeled and labeled data using consistency regularization as the unsupervised loss, namely Π model and its Temporal Ensembling (TempEns) variant [22], VAT [30], and Mean Teacher [43].
The RotNet baseline uses the 13-layer max-pooling NiN architecture, whereas the consistency models use the 13-layer max-pooling ConvNet architecture. We also provide a comparison of SESEMI within the unified evaluation framework of Oliver et al. (2018) [33], in which they re-implemented the consistency models using the WRN-28-2 architecture. Thus, our experiments report results from both ConvNet and WRN backbones to evaluate the relative impact of alternative convolutional architectures on SSL performance.
Results and Analysis
Self-Supervised Regularization Outperforms Pre-Training + Fine-Tuning on CIFAR-10
Following the protocol of Gidaris et al. (2018) [11], we evaluate the accuracy of SESEMI using varying quantities of labeled examples from 200 to 50,000. Figure 2 presents our supervised and semi-supervised results on CIFAR-10 with those previously obtained by RotNet. The best results are in boldface indicating the lowest classification error rate. For both supervised and semi-supervised learning, we find that our SESEMI models trained with self-supervised regularization significantly outperform RotNet models by as much as 23.9%. Why does SESEMI outperform RotNet when the two models ostensibly share the same architecture and self-supervised task? This question can be partly explained by empirical observations that suggest better performance on the pre-training task does not always translate to higher accuracy on downstream tasks via supervised fine-tuning [20]. By solving both supervised and self-supervised objectives during training, SESEMI is able to learn complementary visual features from labeled and unlabeled data simultaneously for enhanced generalization.
We also perform an experiment to pre-train the NiN model on the large ImageNet-32 dataset containing 1.28 million images and then transfer to CIFAR-10 via supervised fine-tuning. Our motivation is to gain insight into the potential upper bound of supervised learning with limited labels via transfer learning. We find that our SESEMI models compare favorably to supervised transfer learning without the need to pre-train a separate model on ImageNet-scale labeled dataset. The ImageNet-32 entry is regarded as the upper bound in performance, whereas the Supervised entry indicates the lower bound.
Self-Supervised Regularization Outperforms Consistency Regularization on CIFAR-10 and CIFAR-100
SVHN
Table 1 compares our supervised and semi-supervised results with consistency baselines. In analyzing the SVHN results on the left side of Table 1, we observe that SESEMI ASL surpasses the supervised baselines, including ImageNet-32 and those with strong Mixup [50] and Manifold Mixup [45] regularization, by a large margin for all experiments. However, the results are not satisfactory when compared against the semi-supervised baselines, especially Mean Teacher. We discuss the limitation of SESEMI for semi-supervised learning on the SVHN dataset in Section 4.
CIFAR-10
Experiments on CIFAR-10 tell a different story. The right side of Table 1 shows that SESEMI uniformly outperforms all supervised and semi-supervised baselines, improving on SSL results by as much as 17%. On CIFAR-10, the combination of supervised and self-supervised learning is a strength of SESEMI, but it is also a limitation in the case of SVHN. We observe that the ConvNet and WRN architectures produce comparable results across the board.
CIFAR-100 and Tiny Images
The successes of SESEMI on CIFAR-10 also transfer to experiments on CIFAR-100. The left side of Table 2 provides a comparison of SESEMI against the Π model and TempEns baselines, where we obtain competitive semi-supervised performance using 10,000 labels and achieve state-of-the-art supervised results when all 50,000 labels are available, matching the upper bound performance of ImageNet-32 supervised fine-tuning.
Additionally, we run two experiments to evaluate the performance of SESEMI in the case of class-distribution mismatch. Following Laine and Aila (2017) [22], our first experiment utilizes all 50,000 available labels from CIFAR-100 and randomly samples 500,000 unlabeled extra Tiny Images, most belonging to categories not found in CIFAR-100. Our second experiment uses a restricted set of 237,203 Tiny Images from categories found in CIFAR-100. The right side of Table 2 presents SSL error rates on CIFAR-100 augmented with Tiny Images. Results show that adding 500,000 extra unlabeled images with significant class-distribution mismatch does not degrade SESEMI performance, in contrast to the degradation observed by Oliver et al. (2018) [33] with other SSL approaches. For SESEMI with the WRN-28-2 architecture, the addition of (randomly selected) unlabeled extra data from Tiny Images further improves performance.

SESEMI with Residual Networks

Table 3 provides a comparison of SESEMI for semi-supervised learning on SVHN and CIFAR-10 within the unified evaluation framework of Oliver et al. (2018) [33], in which they re-implemented the consistency baselines using the WRN-28-2 architecture, carried out large-scale hyper-parameter optimization specific to each technique, and reported best-case performances. Our SESEMI model with the WRN-28-2 architecture establishes a new upper bound in SSL performance by outperforming all methods, including ImageNet-32, under this evaluation setting for both SVHN and CIFAR-10.

It is important to note that we do not perform any hyper-parameter search in this experiment, but use the same set of hyper-parameters described in Appendix B along with w = 1 for the weighting of the self-supervised loss term in SESEMI. In practical applications where tuning many (possibly inter-dependent) hyper-parameters can be problematic [33], especially over small validation sets, our approach to supervised and semi-supervised learning using SESEMI offers a clear and significant benefit.
Comparison with State-of-the-Art Supervised Methods on CIFAR-10 and CIFAR-100
Motivated by the strong performances of SESEMI ASL for supervised learning augmented with self-supervised regularization, we provide a comparative analysis of SESEMI against several previous state-of-the-art supervised methods on CIFAR-10 and CIFAR-100 in Table 4. We observe that SESEMI ASL is competitive in predictive performance with advanced CNN architectures like FractalNet [24], Fractional Max-Pooling [14], ResNet-1001 [15], Wide ResNet-40-4 [48], and DenseNet [17] while requiring a fraction of the computational complexity, as measured in millions of trainable parameters. For those architectures having roughly the same number of trainable parameters, our SESEMI ASL models outperform Highway Network [41] and FitResNet with LSUV initialization [29] by a large margin.
The effectiveness of SESEMI ASL is directly attributed to self-supervised regularization and not to the CNN architecture. The same ConvNet architecture without self-supervised regularization performs significantly worse on both CIFAR-10 (5.82% error rate) and CIFAR-100 (26.42% error rate). In principle, self-supervised regularization could be incorporated into any CNN architecture for further reduction in classification error rate.
Ablation Study
Several studies have provided conclusive evidence that self-supervision is an effective unsupervised pre-training technique for downstream supervised visual understanding tasks such as image classification, object detection, and semantic segmentation [9,20]. However, the evaluation of self-supervised algorithms for SSL has not been explored, especially in the setting where the supervised and self-supervised losses are jointly trained, per Algorithm 1. We briefly describe the following self-supervised tasks based on image reconstruction and compare their SSL performances against the task of classifying image rotations and flips. All experiments use the same convolutional encoder-decoder framework, where the encoder backbone is the WRN-28-2 architecture, and the decoder head comprises a set of two deconvolutional layers [25] with batch normalization and ReLU non-linearity to produce a reconstructed output with the same dimensions as the input.
Denoising Autoencoder The self-supervised objective of this simple baseline is to minimize the mean pixel-wise squared error between the reconstructed output and image input corrupted with Gaussian noise.
Image Inpainting Following [35], the input to the encoder is an image with the central square patch covering 1/4 of the image masked out (i.e., set to zero). The decoder is trained to generate a prediction for the masked region using a masked L2 reconstruction loss as self-supervision.
Image Colorization Following [23], the input to the encoder is a grayscale image (the L* channel of the L*a*b* color space) and the decoder is trained to predict the a*b* color components at every pixel. The self-supervised loss is the mean squared error between the reconstructed a*b* color output and ground truth a*b* color components. Figure 3 shows the individual tasks of recognizing image flips and rotations outperform image reconstruction, inpainting, and colorization on the CIFAR-10 dataset. These results suggest that classification-based self-supervision provides a better, or perhaps more compatible, proxy label for semi-supervised image classification than reconstruction-based tasks. Our findings corroborate recent studies showing rotation-based self-supervision is the superior pre-training technique for downstream transfer learning tasks [11,20]. Lastly, combining horizontal and vertical flips with the rotation recognition task outperforms all other self-supervised tasks, leading to an improvement in SSL performance over rotation recognition by 8.2%.
Ablation Results
Discussion
Limitation of SESEMI We speculate that the poor performance of SESEMI on the SVHN dataset stems from our chosen self-supervised task of predicting image rotations and flips. Gidaris et al. (2018) [11] showed that their self-supervised model focused its attention maps on salient parts of the images to aid in the rotation recognition task. We hypothesize similar dynamics are at play here, but the SVHN dataset presents an additional layer of complexity in which the centermost digits (the digits to be recognized) are often surrounded by "distractor" digits. When the digits are rotated and flipped, the self-supervised branch is likely picking up dominant visual features corresponding to the distractor digits and relating them to the supervised branch as belonging to the digits of interest. These "miscues" are most prominent when few labels are present, where the supervised branch simply learns visual information from the self-supervised branch. However, when all labels are available, the supervised branch is able to correct the miscues, and our SESEMI models produce the best classification results.
Comparison with Recent Developments in SSL Recent years have seen a flurry of research on SSL techniques that are related to or concurrent with our work. The prior work of Smooth Neighbors on Teacher Graphs [27], Virtual Adversarial Dropout [34], Interpolation Consistency Training [46], Stochastic Weight Averaging [2], and Label Propagation [19] advanced the field of semi-supervised learning by achieving impressive results on SVHN, CIFAR-10, and CIFAR-100 benchmarks. However, those methods all build upon strong consistency baselines by either adding a third loss (and new hyper-parameters to tune) to the overall objective of consistency models or averaging model weights. The concurrent work on self-supervised semi-supervised learning [49] independently explores the contributions of self-supervised regularization for SSL in a way similar to ours, but with a different evaluation protocol and end goal. MixMatch [4] combines strong Mixup [50] regularization with consistency regularization to achieve state-of-the-art SSL results.
Our goal for this work is to directly compare the effectiveness of self-supervised regularization to consistency regularization, which has not been done before. Moreover, we deliberately avoid increasing complexity in favor of simplicity and pragmatism with our design choices and training protocol. In principle, our work is potentially complementary to Label Propagation and MixMatch. Label Propagation requires the Mean Teacher prediction function to work well, which can be directly replaced by our SESEMI module. For MixMatch, we can integrate a third loss term into the objective function for learning additional self-supervised features. These are viable topics for future research.
Conclusion
We presented a conceptually simple yet effective multi-task CNN architecture for supervised and semi-supervised learning based on self-supervised regularization. Our approach produces proxy labels from geometric transformations on unlabeled data, which are combined with ground truth labels for improved learning and generalization in the limited labeled data setting. We provided a comprehensive empirical evaluation of our approach using three different CNN architectures, spanning multiple benchmark datasets and baselines to demonstrate its effectiveness and wide range of applicability. We highlight two attractive benefits of SESEMI. First, SESEMI achieves state-of-the-art predictive performance for both supervised and semi-supervised image classification without introducing additional hyper-parameters to tune. Second, SESEMI does not need a separate pre-training step, but is trained end-to-end for simplicity, efficiency, and practicality.
| 5,497 |
1906.10343
|
2954276933
|
Recent advances in semi-supervised learning have shown tremendous potential in overcoming a major barrier to the success of modern machine learning algorithms: access to vast amounts of human-labeled training data. Algorithms based on self-ensemble learning and virtual adversarial training can harness the abundance of unlabeled data to produce impressive state-of-the-art results on a number of semi-supervised benchmarks, approaching the performance of strong supervised baselines using only a fraction of the available labeled data. However, these methods often require careful tuning of many hyper-parameters and are usually not easy to implement in practice. In this work, we present a conceptually simple yet effective semi-supervised algorithm based on self-supervised learning to combine semantic feature representations from unlabeled data. Our models are efficiently trained end-to-end for the joint, multi-task learning of labeled and unlabeled data in a single stage. Striving for simplicity and practicality, our approach requires no additional hyper-parameters to tune for optimal performance beyond the standard set for training convolutional neural networks. We conduct a comprehensive empirical evaluation of our models for semi-supervised image classification on SVHN, CIFAR-10 and CIFAR-100, and demonstrate results competitive with, and in some cases exceeding, prior state of the art. Reference code and data are available at this https URL
|
Rather than relying on the model to randomly perturb the input data by way of dropout or data augmentation, @cite_10 proposed the concept of adversarial training to approximate the perturbations in the direction that would most significantly alter the output of the model. While adversarial training requires access to ground truth labels to perform adversarial perturbations, the Virtual Adversarial Training (VAT) mechanism proposed by @cite_32 @cite_36 can be applied to unlabeled data and is thus suitable for SSL under the consistency regularization principle. Adversarial training is closely related to generative adversarial networks (GANs) @cite_20 , which have been proposed for semi-supervised learning with promising results @cite_37 @cite_43 @cite_22 . Most recently, the self-supervised GANs with auxiliary rotation loss @cite_25 have been shown to synthesize high-fidelity, diverse natural images at high resolution using only a fraction of the available labels.
|
{
"abstract": [
"Deep generative models parameterized by neural networks have recently achieved state-of-the-art performance in unsupervised and semi-supervised learning. We extend deep generative models with auxiliary variables which improves the variational approximation. The auxiliary variables leave the generative model unchanged but make the variational distribution more expressive. Inspired by the structure of the auxiliary variable we also propose a model with two stochastic layers and skip connections. Our findings suggest that more expressive and properly specified deep generative models converge faster with better results. We show state-of-the-art performance within semi-supervised learning on MNIST, SVHN and NORB datasets.",
"",
"We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only “virtually” adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.",
"Abstract: We propose local distributional smoothness (LDS), a new notion of smoothness for statistical model that can be used as a regularization term to promote the smoothness of the model distribution. We named the LDS based regularization as virtual adversarial training (VAT). The LDS of a model at an input datapoint is defined as the KL-divergence based robustness of the model distribution against local perturbation around the datapoint. VAT resembles adversarial training, but distinguishes itself in that it determines the adversarial direction from the model distribution alone without using the label information, making it applicable to semi-supervised learning. The computational cost for VAT is relatively low. For neural network, the approximated gradient of the LDS can be computed with no more than three pairs of forward and back propagations. When we applied our technique to supervised and semi-supervised learning for the MNIST dataset, it outperformed all the training methods other than the current state of the art method, which is based on a highly advanced generative model. We also applied our method to SVHN and NORB, and confirmed our method's superior performance over the current state of the art semi-supervised method applied to these datasets.",
"",
"Abstract: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.",
"Deep generative models are becoming a cornerstone of modern machine learning. Recent work on conditional generative adversarial networks has shown that learning complex, high-dimensional distributions over natural images is within reach. While the latest models are able to generate high-fidelity, diverse natural images at high resolution, they rely on a vast quantity of labeled data. In this work we demonstrate how one can benefit from recent work on self- and semi-supervised learning to outperform the state of the art on both unsupervised ImageNet synthesis, as well as in the conditional setting. In particular, the proposed approach is able to match the sample quality (as measured by FID) of the current state-of-the-art conditional model BigGAN on ImageNet using only 10 of the labels and outperform it using 20 of the labels.",
"For many AI projects, deep learning techniques are increasingly being used as the building blocks for innovative solutions ranging from image classification to object detection, image segmentation, image similarity, and text analytics (e.g., sentiment analysis, key phrase extraction). GANs, first introduced by (2014), are emerging as a powerful new approach toward teaching computers how to do complex tasks through a generative process. As noted by Yann LeCun (at http: bit.ly LeCunGANs ), GANs are truly the “coolest idea in machine learning in the last 20 years.”"
],
"cite_N": [
"@cite_37",
"@cite_22",
"@cite_36",
"@cite_32",
"@cite_43",
"@cite_10",
"@cite_25",
"@cite_20"
],
"mid": [
"2963043971",
"",
"2964159205",
"2964040467",
"",
"2963207607",
"2920684403",
"1710476689"
]
}
|
Exploring Self-Supervised Regularization for Supervised and Semi-Supervised Learning
|
Contemporary approaches to supervised representation learning, such as the convolutional neural network (CNN), continue to push the boundaries of research across a number of domains including speech recognition, visual understanding, and language modeling. However, such progress usually requires massive amounts of human-labeled training data. The process of collecting, curating, and hand-labeling large amounts of training data is often tedious, time-consuming, and costly to scale. Thus, there is a growing body of research dedicated to learning with limited labels, enabling machines to do more with less human supervision, in order to fully harness the benefits of deep learning in real-world settings. Such emerging research directions include domain adaptation [51], low-shot learning [47], self-supervised learning [20], and multi-task learning [36]. In this work, we propose to combine multiple modes of supervision on sources of labeled and unlabeled data for enhanced learning and generalization. Specifically, our work falls within the framework of semi-supervised learning (SSL) [5], in the context of image classification, which can leverage abundant unlabeled data to significantly improve upon supervised classifiers in the limited labeled data setting. Indeed, in many cases, state-of-the-art semi-supervised algorithms have been shown to approach the performance of strong supervised baselines using only a fraction of the available labeled data [30,43,26].
[Figure 1a: SESEMI architecture for supervised and semi-supervised image classification, with the self-supervised task of recognizing geometric transformations. A shared neural network backbone feeds a supervised branch trained with a supervised cross-entropy loss on augmented labeled data and a self-supervised branch trained with a self-supervised cross-entropy loss on geometrically transformed unlabeled data. The function h(x) produces six proxy labels defined as image rotations belonging in the set of {0, 90, 180, 270} degrees along with horizontal (left-right) and vertical (up-down) flips.]

Our approach to SSL belongs to a class of methods that produce proxy, or surrogate, labels from unlabeled data without requiring human annotations, which are used as targets together with labeled data. Although proxy labels may not reflect the ground truth, they provide surprisingly strong supervisory signals for learning the underlying structure of the data manifold. The training protocol for this class of SSL algorithms simply imposes an additional loss term to the overall objective function of an otherwise supervised algorithm. The auxiliary loss describes the contribution of unlabeled data and is referred to as the unsupervised loss component.
Summary of Contributions
We introduce a new algorithm to jointly train a self-supervised loss term with the traditional supervised objective for the multi-task learning of both unlabeled and labeled data in a single stage. Our work is in direct contrast to prior SSL approaches based on unsupervised or self-supervised learning [10,11], which require the sequential combination of unsupervised or self-supervised pre-training followed by supervised fine-tuning.
Our approach to utilize the self-supervised loss term both as a regularizer (applied to labeled data) and SSL method (applied to unlabeled data) is analogous to consistency regularization. Although leading approaches based on consistency regularization achieve state-of-the-art SSL results, these methods require careful tuning of many hyper-parameters and are generally not easy to implement in practice. Striving for simplicity and pragmatism, our models with self-supervised regularization require no additional hyper-parameters to tune for optimal performance beyond the standard set for training neural networks. Our work is among the first to challenge the long-standing success of consistency regularization for SSL.
We conduct extensive comparative experiments to validate the effectiveness of our models by showing semi-supervised results competitive with, and in many cases surpassing, previous state-of-the-art consistency baselines. We also demonstrate that supervised learning augmented with self-supervised regularization is a viable and attractive alternative to transfer learning without the need to pre-train a separate model on large labeled datasets. Lastly, we perform an ablation study showing our proposed algorithm is the best among a family of self-supervised regularization techniques, when comparing their relative contributions to SSL performance.
Learning with Self-Supervised Regularization
We present SESEMI, a conceptually simple yet effective algorithm for enhancing supervised and semi-supervised image classification via self-supervision. The design of SESEMI is depicted in Figure 1a. The input to SESEMI is a training set of input-target pairs (x, y) ∈ D_L and (optional) unlabeled inputs x ∈ D_U. Typically, we assume D_L and D_U are sampled from the same distribution p(x), in which case D_L is a labeled subset of D_U. However, that assumption may not necessarily hold true in real-world settings where there exists the potential for class-distribution mismatch [33]. That is, D_L is sampled from p(x) but D_U may be sampled from a different, although somewhat related, distribution q(x). The goal of SESEMI is to train a prediction function f_θ(x), parametrized by θ, that utilizes a combination of D_L and D_U to obtain significantly better predictive performance than what would be achieved by using D_L alone. For an illustrative demonstration on the toy two moons and three spirals datasets, the prediction function f_θ(x) is a multi-layer perceptron with three hidden layers, each hidden layer containing 100 leaky ReLU units [28] with α = 0.1. We observe that the supervised model is unable to fully capture the underlying shape of the data manifold of two moons when trained only on the labeled examples. Together with unlabeled data, SESEMI is able to learn better decision boundaries on both two moons and three spirals, resulting in fewer mis-classifications on the test set. This demonstration illustrates the applicability of SESEMI to other data modalities besides images.

Algorithm 1: SESEMI mini-batch training.
Require: Training set of labeled input-target pairs (x, y) ∈ D_L. Training set of unlabeled inputs x ∈ D_U. Geometric transformation function h(x) producing proxy labels ỹ ∈ D_U. Input data augmentation functions g(x) and g̃(x). Neural network architecture f_θ(x) with trainable parameters θ.
for each epoch over D_U do
    B_L ← g(x_i∈D_L). Sample mini-batches of augmented labeled inputs.
    B_U ← g̃(h(x_i∈D_U)). Sample mini-batches of augmented, transformed unlabeled inputs.
    for each mini-batch do
        z_i∈B_L ← f_θ(B_L). Compute model outputs for labeled inputs.
        z̃_i∈B_U ← f_θ(B_U). Compute model outputs for unlabeled inputs.
        L ← −(1/|B_L|) Σ_{i∈B_L} Σ_{c∈C} y_ic log(z_ic) − (1/|B_U|) Σ_{i∈B_U} Σ_{k∈K} ỹ_ik log(z̃_ik). Supervised plus self-supervised cross-entropy loss.
        θ ← θ − ∇_θ L. Update parameters via gradient descent.
    end
end
return f_θ(x)
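The update rule in Algorithm 1 can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not the authors' reference implementation: the two-head model interface (head="supervised" / head="self_supervised"), the variable names, and the explicit weight w are assumptions, and the transformed unlabeled batch is assumed to have already been produced by the geometric transformation h(x) described in the following subsections.

```python
# Minimal PyTorch sketch of one SESEMI mini-batch update (Algorithm 1).
# Assumptions (not from the paper's code): `model` exposes two output heads
# via a `head=` keyword, and `proxy_batch` contains unlabeled images already
# passed through the geometric transformation h(x) with their proxy labels.
import torch
import torch.nn.functional as F

def sesemi_update(model, optimizer, labeled_batch, proxy_batch, w=1.0):
    x_l, y_l = labeled_batch        # augmented labeled images and class labels
    x_u, y_proxy = proxy_batch      # transformed unlabeled images and proxy labels in {0..5}

    logits_sup = model(x_l, head="supervised")         # C-way classification logits
    logits_self = model(x_u, head="self_supervised")   # 6-way proxy-task logits

    # Weighted sum of supervised and self-supervised cross-entropy losses;
    # the paper reports that w = 1 works consistently across datasets.
    loss = F.cross_entropy(logits_sup, y_l) + w * F.cross_entropy(logits_self, y_proxy)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```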
Convolutional Architectures
In principle, the prediction function f_θ(x) could be any classifier. For comparison and analysis with previous work, we experiment with three high-performance CNN architectures: (i) the 13-layer max-pooling Network-in-Network (NiN) [11]; (ii) the 13-layer max-pooling ConvNet [22,43,30,34,27]; and (iii) the more modern wide residual network with depth 28 and width 2 (WRN-28-2) [48,33].
We faithfully follow the original specifications of the NiN, WRN, and ConvNet architectures, so we refer to their papers for details. All architectures have convolutional layers followed by batch normalization [18] and ReLU non-linearity [31], except that the ConvNet architecture uses leaky ReLU [28] with α = 0.1. The NiN, WRN, and ConvNet architectures have roughly 1.01, 1.47, and 3.13 million parameters, respectively.
For both supervised and semi-supervised settings, we separate the input data into labeled and unlabeled branches, and apply the same CNN model to both. Note that the unlabeled branch consists of all available training examples, but without ground truth label information. One can view SESEMI as a multi-task architecture that has a common CNN "backbone" to learn a shared representation of both labeled and unlabeled data, and an output "head" for each task. The ConvNet backbone computes an abstract 6 × 6 × 128 dimensional feature representation from the input image, while the NiN and WRN backbones have an output of 8 × 8 × 192 and 8 × 8 × 128 dimensions, respectively. Each task has extra layers in the head, which may have a complex structure, and computes a separate loss. The head of the labeled branch has a global average pooling layer followed by softmax activation to evaluate the supervised task with standard categorical cross-entropy loss. For the unlabeled branch, we define a self-supervised pretext task to be learned in conjunction with the labeled branch.
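As a rough sketch of this shared-backbone, two-head layout, the following PyTorch module uses a small placeholder backbone; the real backbones are the NiN, ConvNet, or WRN-28-2 networks described above, and the attribute and head names are illustrative assumptions rather than the authors' code.

```python
# Schematic PyTorch sketch of the SESEMI multi-task layout: a shared CNN
# backbone with one global-average-pooling classifier head per task.
# The tiny backbone is a placeholder, not the NiN / ConvNet / WRN-28-2.
import torch.nn as nn

class SesemiNet(nn.Module):
    def __init__(self, num_classes=10, num_proxy_classes=6, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(                 # placeholder feature extractor
            nn.Conv2d(3, feat_dim, 3, padding=1),
            nn.BatchNorm2d(feat_dim), nn.ReLU(inplace=True),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1),
            nn.BatchNorm2d(feat_dim), nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)            # global average pooling
        self.supervised_head = nn.Linear(feat_dim, num_classes)
        self.self_supervised_head = nn.Linear(feat_dim, num_proxy_classes)

    def forward(self, x, head="supervised"):
        z = self.pool(self.backbone(x)).flatten(1)
        if head == "supervised":
            return self.supervised_head(z)             # the only head used at inference time
        return self.self_supervised_head(z)
```

At inference time only the supervised head is queried and the self-supervised head is discarded, matching the description given later in this section.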
Recognizing Image Rotations and Flips as Self-Supervision
Following [11], we apply a set of discrete geometric transformations on the input image and train the network to recognize the resulting transformations as the self-supervised task. The network architecture for the self-supervised task shares the same CNN backbone with its supervised counterpart but has a separate output head consisting of a global average pooling layer followed by softmax activation. In their original work on self-supervised rotation recognition, Gidaris et al. [11] defined the proxy labels to be image rotations belonging in the set of {0, 90, 180, 270} degrees, resulting in a four-way classification task. Their models performed well on the rotation recognition task by learning salient visual features depicted in the image, such as location of objects, type, and pose. In this work, we extend the geometric transformations to include horizontal (left-right) and vertical (up-down) flips, resulting in the self-supervised cross-entropy loss over six classes. Further, we propose to train SESEMI on both labeled and unlabeled data simultaneously, which is more efficient and yields better performance than the approach of Gidaris et al. based on the sequential combination of self-supervised pre-training on unlabeled data followed by supervised fine-tuning on labeled data.
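One possible realization of the six-way transformation function h(x) is sketched below; the ordering of the six proxy classes and the assumption of square images (as in CIFAR and SVHN) are illustrative choices, not taken from the paper's code.

```python
# Sketch of the geometric transformation function h(x): given one image tensor
# of shape (C, H, W) with H == W, produce six transformed views with proxy
# labels 0..5 for {0, 90, 180, 270} degree rotations plus horizontal and
# vertical flips. The class ordering is an illustrative assumption.
import torch

def geometric_views(img):
    views = [torch.rot90(img, k, dims=(1, 2)) for k in range(4)]  # 0, 90, 180, 270 degrees
    views.append(torch.flip(img, dims=(2,)))                       # horizontal (left-right) flip
    views.append(torch.flip(img, dims=(1,)))                       # vertical (up-down) flip
    proxy_labels = torch.arange(6)
    return torch.stack(views), proxy_labels

# Example: views has shape (6, 3, 32, 32), labels are 0..5.
views, labels = geometric_views(torch.randn(3, 32, 32))
```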
Integrating Self-Supervised Loss as Regularization
The algorithmic overview of SESEMI is provided in Algorithm 1. At each training step, we sample two mini-batches having the same number of labeled and unlabeled examples as inputs to a shared CNN backbone f_θ(x). Note that in a typical semi-supervised setting, labeled examples will repeat in a mini-batch because the number of unlabeled examples is much greater. We forward propagate f_θ(x) twice, once on the labeled branch x_i ∈ D_L and another pass on the unlabeled branch x_i ∈ D_U, resulting in softmax prediction vectors z_i and z̃_i, respectively. We compute the supervised cross-entropy loss L_SUPER(y_i, z_i) using ground truth labels y_i and compute the self-supervised cross-entropy loss L_SELF(ỹ_i, z̃_i) using proxy labels ỹ_i generated from image rotations and flips. The parameters θ are learned via backpropagation by minimizing the multi-task SESEMI objective function defined as the weighted sum of supervised and self-supervised loss components:
L_SESEMI = L_SUPER(y_i, z_i) + w · L_SELF(ỹ_i, z̃_i).
Our formulation of the SESEMI objective treats the self-supervised loss as a regularization term, and w > 0 is the regularization hyper-parameter that controls the relative contribution of self-supervision in the overall objective function.
In previous SSL approaches based on consistency regularization, such as Π model and Mean Teacher, w was formulated as the consistency coefficient and was subjected to considerable tuning, on a per-dataset basis, for optimal performance. We experimented with different values for the weighting parameter w in SESEMI and found w = 1 yields consistent results across all datasets and CNN architectures, suggesting that supervised and self-supervised losses are relatively balanced and compatible for image classification. Moreover, setting w = 1 leads to a convenient benefit of having one less hyper-parameter to tune. We backpropagate gradients to both branches of the network to update θ, similar to Π model. The self-supervised loss term has a dual purpose. First, it enables SESEMI to learn additional, complementary visual features from unlabeled data that help guide its decision boundaries along the data manifold. Second, it is compatible with conventional supervised learning without unlabeled data by serving as a strong regularizer against geometric transformations for improved generalization. We refer to SESEMI models trained with self-supervised regularization on labeled data as augmented supervised learning (ASL). At inference time, we simply take the supervised branch of the network to make predictions on test data and discard the self-supervised branch.
Empirical Evaluation
We follow standard evaluation protocol for SSL, in which we randomly sample varying fractions of the training data as labeled examples while treating the entire training set, discarding all label information, as the source of unlabeled data. We train a model with both labeled and unlabeled data according to SESEMI (Algorithm 1) and compare its performance to that of the same model trained using only the labeled portion in the traditional supervised manner. For ASL, we train SESEMI using the ConvNet architecture on labeled data, but augment the supervised objective with self-supervised regularization. The performance metric is classification error rate. We expect a good SSL algorithm to yield better results (lower error rate) when unlabeled data is used together with labeled data. We closely follow the experimental protocols described in [11,22,43,33] to remain consistent with previous work.
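For concreteness, the labeled/unlabeled split used in this protocol might be implemented as in the NumPy sketch below; the function and variable names are assumptions, and in practice the labeled subset is often drawn per class so that the class distribution stays balanced, which is omitted here for brevity.

```python
# Sketch of the standard SSL evaluation split: a random subset of the training
# set keeps its labels, while the entire training set (labels discarded) serves
# as the unlabeled pool. Names and the uniform (non-class-balanced) sampling
# are illustrative assumptions.
import numpy as np

def ssl_split(x_train, y_train, num_labeled, seed=0):
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(x_train))[:num_labeled]
    x_labeled, y_labeled = x_train[idx], y_train[idx]
    x_unlabeled = x_train                      # all training images, labels ignored
    return (x_labeled, y_labeled), x_unlabeled
```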
Datasets and Baselines
We evaluate our proposed SESEMI algorithm on three benchmark datasets for supervised and semi-supervised image classification: Street View House Numbers (SVHN) [32], CIFAR-10 and CIFAR-100 [21]. For details on the datasets and implementation, see Appendices A and B. We also use two auxiliary datasets to augment our experiments on supervised and semi-supervised learning: 80 million Tiny Images [44] and ImageNet-32 [7]. Tiny Images is the superset of CIFAR-10 and CIFAR-100 organized into 75,062 generic scene and object categories. ImageNet-32 is the full ImageNet dataset [8] down-sampled to 32 × 32 pixels. We use ImageNet-32 for supervised transfer learning experiments. We use Tiny Images as a source of unlabeled extra data to augment SSL on CIFAR-100 and to evaluate SESEMI under the condition of class-distribution mismatch.
We empirically compare our SESEMI models trained with self-supervised regularization against two state-of-the-art baselines for supervised and semi-supervised learning: (a) the RotNet models of Gidaris et al. (2018) [11], which were pre-trained on unlabeled data with self-supervised rotation loss followed by a separate step of supervised fine-tuning on labeled data; and (b) models jointly trained on both unlabeled and labeled data using consistency regularization as the unsupervised loss, namely Π model and its Temporal Ensembling (TempEns) variant [22], VAT [30], and Mean Teacher [43].
The RotNet baseline uses the 13-layer max-pooling NiN architecture, whereas the consistency models use the 13-layer max-pooling ConvNet architecture. We also provide a comparison of SESEMI within the unified evaluation framework of Oliver et al. (2018) [33], in which they re-implemented the consistency models using the WRN-28-2 architecture. Thus, our experiments report results from both ConvNet and WRN backbones to evaluate the relative impact of alternative convolutional architectures on SSL performance.
Results and Analysis
Self-Supervised Regularization Outperforms Pre-Training + Fine-Tuning on CIFAR-10
Following the protocol of Gidaris et al. (2018) [11], we evaluate the accuracy of SESEMI using varying quantities of labeled examples from 200 to 50,000. Figure 2 presents our supervised and semi-supervised results on CIFAR-10 with those previously obtained by RotNet. The best results are in boldface indicating the lowest classification error rate. For both supervised and semi-supervised learning, we find that our SESEMI models trained with self-supervised regularization significantly outperform RotNet models by as much as 23.9%. Why does SESEMI outperform RotNet when the two models ostensibly share the same architecture and self-supervised task? This question can be partly explained by empirical observations that suggest better performance on the pre-training task does not always translate to higher accuracy on downstream tasks via supervised fine-tuning [20]. By solving both supervised and self-supervised objectives during training, SESEMI is able to learn complementary visual features from labeled and unlabeled data simultaneously for enhanced generalization.
We also perform an experiment to pre-train the NiN model on the large ImageNet-32 dataset containing 1.28 million images and then transfer to CIFAR-10 via supervised fine-tuning. Our motivation is to gain insight into the potential upper bound of supervised learning with limited labels via transfer learning. We find that our SESEMI models compare favorably to supervised transfer learning without the need to pre-train a separate model on ImageNet-scale labeled dataset. The ImageNet-32 entry is regarded as the upper bound in performance, whereas the Supervised entry indicates the lower bound.
Self-Supervised Regularization Outperforms Consistency Regularization on CIFAR-10 and CIFAR-100
SVHN Table 1 compares our supervised and semi-supervised results with consistency baselines. In analyzing the SVHN results on the left side of Table 1, we observe that SESEMI ASL surpasses the supervised baselines, including ImageNet-32 and those with strong Mixup [50] and Manifold Mixup [45] regularization, by a large margin for all experiments. However, the results are not satisfactory when compared against the semi-supervised baselines, especially Mean Teacher. We discuss the limitation of SESEMI for semi-supervised learning on the SVHN dataset in Section 4.
CIFAR-10
Experiments on CIFAR-10 tell a different story. The right side of Table 1 shows that SESEMI uniformly outperforms all supervised and semi-supervised baselines, improving on SSL results by as much as 17%. On CIFAR-10, the combination of supervised and self-supervised learning is a strength of SESEMI, but it is also a limitation in the case of SVHN. We observe that the ConvNet and WRN architectures produce comparable results across the board.
CIFAR-100 and Tiny Images
The successes of SESEMI on CIFAR-10 also transfer to experiments on CIFAR-100. The left side of Table 2 provides a comparison of SESEMI against the Π model and TempEns baselines, where we obtain competitive semi-supervised performance using 10,000 labels and achieve state-of-the-art supervised results when all 50,000 labels are available, matching the upper bound performance of ImageNet-32 supervised fine-tuning.
Additionally, we run two experiments to evaluate the performance of SESEMI in the case of class-distribution mismatch. Following Laine and Aila (2017) [22], our first experiment utilizes all 50,000 available labels from CIFAR-100 and randomly samples 500,000 unlabeled extra Tiny Images, most belonging to categories not found in CIFAR-100. Our second experiment uses a restricted set of 237,203 Tiny Images from categories found in CIFAR-100. The right side of Table 2 presents SSL error rates on CIFAR-100 augmented with Tiny Images. Results show that adding 500,000 extra unlabeled images with significant class-distribution mismatch does not degrade SESEMI performance, in contrast to the degradation observed by Oliver et al. (2018) [33] with other SSL approaches. For SESEMI with the WRN-28-2 architecture, the addition of (randomly selected) unlabeled extra data from Tiny Images further improves performance.

SESEMI with Residual Networks

Table 3 provides a comparison of SESEMI for semi-supervised learning on SVHN and CIFAR-10 within the unified evaluation framework of Oliver et al. (2018) [33], in which they re-implemented the consistency baselines using the WRN-28-2 architecture, carried out large-scale hyper-parameter optimization specific to each technique, and reported best-case performances. Our SESEMI model with the WRN-28-2 architecture establishes a new upper bound in SSL performance by outperforming all methods, including ImageNet-32, under this evaluation setting for both SVHN and CIFAR-10.

It is important to note that we do not perform any hyper-parameter search in this experiment, but use the same set of hyper-parameters described in Appendix B along with w = 1 for the weighting of the self-supervised loss term in SESEMI. In practical applications where tuning many (possibly inter-dependent) hyper-parameters can be problematic [33], especially over small validation sets, our approach to supervised and semi-supervised learning using SESEMI offers a clear and significant benefit.
Comparison with State-of-the-Art Supervised Methods on CIFAR-10 and CIFAR-100
Motivated by the strong performances of SESEMI ASL for supervised learning augmented with self-supervised regularization, we provide a comparative analysis of SESEMI against several previous state-of-the-art supervised methods on CIFAR-10 and CIFAR-100 in Table 4. We observe that SESEMI ASL is competitive in predictive performance with advanced CNN architectures like FractalNet [24], Fractional Max-Pooling [14], ResNet-1001 [15], Wide ResNet-40-4 [48], and DenseNet [17] while requiring a fraction of the computational complexity, as measured in millions of trainable parameters. For those architectures having roughly the same number of trainable parameters, our SESEMI ASL models outperform Highway Network [41] and FitResNet with LSUV initialization [29] by a large margin.
The effectiveness of SESEMI ASL is directly attributed to self-supervised regularization and not to the CNN architecture. The same ConvNet architecture without self-supervised regularization performs significantly worse on both CIFAR-10 (5.82% error rate) and CIFAR-100 (26.42% error rate). In principle, self-supervised regularization could be incorporated into any CNN architecture for further reduction in classification error rate.
Ablation Study
Several studies have provided conclusive evidence that self-supervision is an effective unsupervised pre-training technique for downstream supervised visual understanding tasks such as image classification, object detection, and semantic segmentation [9,20]. However, the evaluation of self-supervised algorithms for SSL has not been explored, especially in the setting where the supervised and self-supervised losses are jointly trained, per Algorithm 1. We briefly describe the following self-supervised tasks based on image reconstruction and compare their SSL performances against the task of classifying image rotations and flips. All experiments use the same convolutional encoder-decoder framework, where the encoder backbone is the WRN-28-2 architecture, and the decoder head comprises a set of two deconvolutional layers [25] with batch normalization and ReLU non-linearity to produce a reconstructed output with the same dimensions as the input.
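A decoder head of this kind might look like the following sketch, assuming the WRN-28-2 backbone produces an 8 × 8 × 128 feature map for 32 × 32 inputs as stated earlier; the kernel sizes, strides, and the omission of batch normalization and ReLU after the final layer (to avoid clipping the reconstructed pixel values) are assumptions.

```python
# Sketch of a reconstruction decoder head: two deconvolutional (transposed
# convolution) layers upsampling an 8x8x128 feature map back to a 32x32x3
# image. Kernel/stride choices and the final-layer activation are illustrative.
import torch.nn as nn

decoder_head = nn.Sequential(
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # 8x8 -> 16x16
    nn.BatchNorm2d(64), nn.ReLU(inplace=True),
    nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),    # 16x16 -> 32x32
)
```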
Denoising Autoencoder The self-supervised objective of this simple baseline is to minimize the mean pixel-wise squared error between the reconstructed output and image input corrupted with Gaussian noise.
Image Inpainting Following [35], the input to the encoder is an image with the central square patch covering 1/4 of the image masked out (i.e., set to zero). The decoder is trained to generate a prediction for the masked region using a masked L2 reconstruction loss as self-supervision.
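The masked L2 objective just described might be computed as in the sketch below; the mask construction (a central square covering 1/4 of the image area) follows the description, while the function name and the normalization by the number of masked elements are assumptions.

```python
# Sketch of the inpainting self-supervised objective: penalize the squared
# reconstruction error only inside the central square patch that covers 1/4
# of the image area (the region that was zeroed out in the encoder input).
import torch

def masked_inpainting_loss(decoder_output, original, img_size=32):
    half = img_size // 4                       # central (H/2 x W/2) patch = 1/4 of the area
    lo, hi = img_size // 2 - half, img_size // 2 + half
    mask = torch.zeros_like(original)
    mask[..., lo:hi, lo:hi] = 1.0
    # Mean squared error restricted to the masked region.
    return ((decoder_output - original) ** 2 * mask).sum() / mask.sum()
```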
Image Colorization Following [23], the input to the encoder is a grayscale image (the L* channel of the L*a*b* color space) and the decoder is trained to predict the a*b* color components at every pixel. The self-supervised loss is the mean squared error between the reconstructed a*b* color output and ground truth a*b* color components. Figure 3 shows the individual tasks of recognizing image flips and rotations outperform image reconstruction, inpainting, and colorization on the CIFAR-10 dataset. These results suggest that classification-based self-supervision provides a better, or perhaps more compatible, proxy label for semi-supervised image classification than reconstruction-based tasks. Our findings corroborate recent studies showing rotation-based self-supervision is the superior pre-training technique for downstream transfer learning tasks [11,20]. Lastly, combining horizontal and vertical flips with the rotation recognition task outperforms all other self-supervised tasks, leading to an improvement in SSL performance over rotation recognition by 8.2%.
Ablation Results
Discussion
Limitation of SESEMI We speculate that the poor performance of SESEMI on the SVHN dataset stems from our chosen self-supervised task of predicting image rotations and flips. Gidaris et al. (2018) [11] showed that their self-supervised model focused its attention maps on salient parts of the images to aid in the rotation recognition task. We hypothesize similar dynamics are at play here, but the SVHN dataset presents an additional layer of complexity in which the centermost digits (the digits to be recognized) are often surrounded by "distractor" digits. When the digits are rotated and flipped, the self-supervised branch is likely picking up dominant visual features corresponding to the distractor digits and relating them to the supervised branch as belonging to the digits of interest. These "miscues" are most prominent when few labels are present, where the supervised branch simply learns visual information from the self-supervised branch. However, when all labels are available, the supervised branch is able to correct the miscues, and our SESEMI models produce the best classification results.
Comparison with Recent Developments in SSL Recent years have seen a flurry of research on SSL techniques that are related to or concurrent with our work. The prior work of Smooth Neighbors on Teacher Graphs [27], Virtual Adversarial Dropout [34], Interpolation Consistency Training [46], Stochastic Weight Averaging [2], and Label Propagation [19] advanced the field of semi-supervised learning by achieving impressive results on SVHN, CIFAR-10, and CIFAR-100 benchmarks. However, those methods all build upon strong consistency baselines by either adding a third loss (and new hyper-parameters to tune) to the overall objective of consistency models or averaging model weights. The concurrent work on self-supervised semi-supervised learning [49] independently explores the contributions of self-supervised regularization for SSL in a way similar to ours, but with a different evaluation protocol and end goal. MixMatch [4] combines strong Mixup [50] regularization with consistency regularization to achieve state-of-the-art SSL results.
Our goal for this work is to directly compare the effectiveness of self-supervised regularization to consistency regularization, which has not been done before. Moreover, we deliberately avoid increasing complexity in favor of simplicity and pragmatism with our design choices and training protocol. In principle, our work is potentially complementary to Label Propagation and MixMatch. Label Propagation requires the Mean Teacher prediction function to work well, which can be directly replaced by our SESEMI module. For MixMatch, we can integrate a third loss term into the objective function for learning additional self-supervised features. These are viable topics for future research.
Conclusion
We presented a conceptually simple yet effective multi-task CNN architecture for supervised and semi-supervised learning based on self-supervised regularization. Our approach produces proxy labels from geometric transformations on unlabeled data, which are combined with ground truth labels for improved learning and generalization in the limited labeled data setting. We provided a comprehensive empirical evaluation of our approach using three different CNN architectures, spanning multiple benchmark datasets and baselines to demonstrate its effectiveness and wide range of applicability. We highlight two attractive benefits of SESEMI. First, SESEMI achieves state-of-the-art predictive performance for both supervised and semi-supervised image classification without introducing additional hyper-parameters to tune. Second, SESEMI does not need a separate pre-training step, but is trained end-to-end for simplicity, efficiency, and practicality.
| 5,497 |
1811.04441
|
2950393809
|
Knowledge graph embedding has been an active research topic for knowledge base completion, with progressive improvement from the initial TransE, TransH, to the current state-of-the-art ConvE. ConvE uses 2D convolution over embeddings and multiple layers of nonlinear features to model knowledge graphs. The model can be efficiently trained and is scalable to large knowledge graphs. However, there is no structure enforcement in the embedding space of ConvE. The recent graph convolutional network (GCN) provides another way of learning graph node embedding by successfully utilizing graph connectivity structure. In this work, we propose a novel end-to-end Structure-Aware Convolutional Network (SACN) that takes the benefit of GCN and ConvE together. SACN consists of an encoder of a weighted graph convolutional network (WGCN), and a decoder of a convolutional network called Conv-TransE. WGCN utilizes knowledge graph node structure, node attributes and edge relation types. It has learnable weights that adapt the amount of information from neighbors used in local aggregation, leading to more accurate embeddings of graph nodes. Node attributes in the graph are represented as additional nodes in the WGCN. The decoder Conv-TransE enables the state-of-the-art ConvE to be translational between entities and relations while keeping the same link prediction performance as ConvE. We demonstrate the effectiveness of the proposed SACN on standard FB15k-237 and WN18RR datasets, and it gives about 10% relative improvement over the state-of-the-art ConvE in terms of HITS@1, HITS@3 and HITS@10.
|
Knowledge graph embedding learning has been an active research area with direct applications in knowledge base completion (i.e., link prediction) and relation extraction. TransE @cite_28 started this line of work by projecting both entities and relations into the same embedding vector space, with the translational constraint of @math . Later works such as TransH @cite_5 , TransR @cite_23 , and TransD @cite_25 introduced new representations of relational translation and thus increased model complexity. These models are categorized as translational distance models @cite_21 or additive models, while DistMult @cite_6 and ComplEx @cite_0 are multiplicative models @cite_10 , owing to the multiplicative score functions used for computing the likelihood of entity-relation-entity triplets.
|
{
"abstract": [
"We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples.",
"Knowledge graph (KG) embedding is to embed components of a KG including entities and relations into continuous vector spaces, so as to simplify the manipulation while preserving the inherent structure of the KG. It can benefit a variety of downstream tasks such as KG completion and relation extraction, and hence has quickly gained massive attention. In this article, we provide a systematic review of existing techniques, including not only the state-of-the-arts but also those with latest trends. Particularly, we make the review based on the type of information used in the embedding task. Techniques that conduct embedding using only facts observed in the KG are first introduced. We describe the overall framework, specific model design, typical training procedures, as well as pros and cons of such techniques. After that, we discuss techniques that further incorporate additional information besides facts. We focus specifically on the use of entity types, relation paths, textual descriptions, and logical rules. Finally, we briefly introduce how KG embedding can be applied to and benefit a wide variety of downstream tasks such as KG completion, relation extraction, question answering, and so forth.",
"We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (, 2013) and TransE (, 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2 vs. 54.7 by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as \"BornInCity(a,b) and CityInCountry(b,c) => Nationality(a,c)\". We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-of-the-art confidence-based rule mining approach in mining Horn rules that involve compositional reasoning.",
"In statistical relational learning, the link prediction problem is key to automatically understand the structure of large knowledge bases. As in previous studies, we propose to solve this problem through latent factorization. However, here we make use of complex valued embeddings. The composition of complex embeddings can handle a large variety of binary relations, among them symmetric and antisymmetric relations. Compared to state-of-the-art models such as Neural Tensor Network and Holographic Embeddings, our approach based on complex embeddings is arguably simpler, as it only uses the Hermitian dot product, the complex counterpart of the standard dot product between real vectors. Our approach is scalable to large datasets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.",
"Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to state-of-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https: github.com mrlyk423 relation_extraction.",
"We deal with embedding a large scale knowledge graph composed of entities and relations into a continuous vector space. TransE is a promising method proposed recently, which is very efficient while achieving state-of-the-art predictive performance. We discuss some mapping properties of relations which should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many. We note that TransE does not do well in dealing with these properties. Some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, in this paper we propose TransH which models a relation as a hyperplane together with a translation operation on it. In this way, we can well preserve the above mapping properties of relations with almost the same model complexity of TransE. Additionally, as a practical knowledge graph is often far from completed, how to construct negative examples to reduce false negative labels in training is very important. Utilizing the one-to-many many-to-one mapping property of a relation, we propose a simple trick to reduce the possibility of false negative labeling. We conduct extensive experiments on link prediction, triplet classification and fact extraction on benchmark datasets like WordNet and Freebase. Experiments show TransH delivers significant improvements over TransE on predictive accuracy with comparable capability to scale up.",
"",
"Knowledge graphs are useful resources for numerous AI applications, but they are far from completeness. Previous work such as TransE, TransH and TransR CTransR regard a relation as translation from head entity to tail entity and the CTransR achieves state-of-the-art performance. In this paper, we propose a more fine-grained model named TransD, which is an improvement of TransR CTransR. In TransD, we use two vectors to represent a named symbol object (entity and relation). The first one represents the meaning of a(n) entity (relation), the other one is used to construct mapping matrix dynamically. Compared with TransR CTransR, TransD not only considers the diversity of relations, but also entities. TransD has less parameters and has no matrix-vector multiplication operations, which makes it can be applied on large scale graphs. In Experiments, we evaluate our model on two typical tasks including triplets classification and link prediction. Evaluation results show that our approach outperforms stateof-the-art methods."
],
"cite_N": [
"@cite_28",
"@cite_21",
"@cite_6",
"@cite_0",
"@cite_23",
"@cite_5",
"@cite_10",
"@cite_25"
],
"mid": [
"2127795553",
"2759136286",
"2951077644",
"2432356473",
"2184957013",
"2283196293",
"2798955995",
"2250342289"
]
}
|
End-to-end Structure-Aware Convolutional Networks for Knowledge Base Completion
|
Over the recent years, large-scale knowledge bases (KBs), such as Freebase (Bollacker et al. 2008), DBpedia (Auer et al. 2007), NELL (Carlson et al. 2010) and YAGO3 (Mahdisoltani, Biega, and Suchanek 2013), have been built to store structured information about common facts. KBs are multi-relational graphs whose nodes represent entities and edges represent relationships between entities, and the edges are labeled with different relations. The relationships are organized in the form of (s, r, o) triplets (e.g. entity s = Abraham Lincoln, relation r = DateOfBirth, entity o = 02-12-1809). These KBs are extensively used for web search, recommendation and question answering. Although these KBs already contain millions of entities and triplets, they are far from complete compared to existing facts and newly added knowledge of the real world. Therefore knowledge base completion is important in order to predict new triplets based on existing ones and thus further expand KBs.
One of the recent active research areas for knowledge base completion is knowledge graph embedding: it encodes the semantics of entities and relations in a continuous low-dimensional vector space (called embeddings). These embeddings are then used for predicting new relations. Starting from a simple and effective approach called TransE (Bordes et al. 2013), many knowledge graph embedding methods have been proposed, such as TransH (Wang et al. 2014), TransR (Lin et al. 2015), DistMult (Yang et al. 2014), TransD (Ji et al. 2015), ComplEx (Trouillon et al. 2016), and STransE (Nguyen et al. 2016). Some surveys (Nguyen 2017; Wang et al. 2017) give details and comparisons of these embedding methods.
The most recent ConvE (Dettmers et al. 2017) model uses 2D convolution over embeddings and multiple layers of nonlinear features, and achieves state-of-the-art performance on common benchmark datasets for knowledge graph link prediction. In ConvE, the embeddings of s and r are reshaped and concatenated into an input matrix and fed to the convolution layer. Convolutional filters of n × n are used to output feature maps that span different embedding dimensions. Thus ConvE does not keep the translational property of TransE, which is an additive embedding vector operation: $e_s + e_r \approx e_o$. In this paper, we remove the reshape step of ConvE and operate convolutional filters directly in the same dimensions of s and r. This modification gives better performance compared with the original ConvE, and has an intuitive interpretation which keeps the global learning metric the same for s, r, and o in an embedding triple $(e_s, e_r, e_o)$. We name this embedding Conv-TransE.
ConvE also does not incorporate connectivity structure in the knowledge graph into the embedding space. In contrast, graph convolutional network (GCN) has been an effective tool to create node embeddings which aggregate local information in the graph neighborhood for each node (Kipf and Welling 2016b;Hamilton, Ying, and Leskovec 2017a;Kipf and Welling 2016a;Pham et al. 2017;Shang et al. 2018). GCN models have additional benefits (Hamilton, Ying, and Leskovec 2017b), such as leveraging the attributes associated with nodes. They can also impose the same aggregation scheme when computing the convolution for each node, which can be considered a method of regularization, and improves efficiency. Although scalability is originally an issue for GCN models, the latest data-efficient GCN, Pin-Sage (Ying et al. 2018), is able to handle billions of nodes and edges.
In this paper, we propose an end-to-end graph Structure-Aware Convolutional Network (SACN) that takes all the benefits of GCN and ConvE together. SACN consists of an encoder of a weighted graph convolutional network (WGCN), and a decoder of a convolutional network called Conv-TransE. WGCN utilizes knowledge graph node structure, node attributes and relation types. It has learnable weights to determine the amount of information from neighbors used in local aggregation, leading to more accurate embeddings of graph nodes. Node attributes are added to the WGCN as additional nodes for easy integration. The output of WGCN becomes the input of the decoder Conv-TransE. Conv-TransE is similar to ConvE but with the difference that Conv-TransE keeps the translational characteristic between entities and relations. We show that Conv-TransE performs better than ConvE, and our SACN improves further on top of Conv-TransE on the standard benchmark datasets. The code for our model and experiments is publicly available 1 .
Our contributions are summarized as follows:
• We present an end-to-end network learning framework SACN that takes benefit of both GCN and Conv-TransE. The encoder GCN model leverages graph structure and attributes of graph nodes. The decoder Conv-TransE simplifies ConvE with special convolutions and keeps the translational property of TransE and the prediction performance of ConvE;
• We demonstrate the effectiveness of our proposed SACN on the standard FB15k-237 and WN18RR datasets, and show about 10% relative improvement over the state-of-the-art ConvE in terms of HITS@1, HITS@3 and HITS@10.
Method
In this section, we describe the proposed end-to-end SACN. The encoder WGCN is focused on representing entities by aggregating connected entities as specified by the relations in the KB. With node embeddings as the input, the decoder Conv-TransE network aims to represent the relations more accurately by recovering the original triplets in the KB. Both encoder and decoder are trained jointly by minimizing the discrepancy (cross-entropy) between the embeddings $e_s + e_r$ and $e_o$ to preserve the translational property $e_s + e_r \approx e_o$. We consider an undirected graph $G = (V, E)$ throughout this section, where $V$ is a set of nodes with $|V| = N$, and $E \subseteq V \times V$ is a set of edges with $|E| = M$.
Weighted Graph Convolutional Layer
The WGCN is an extension of classic GCN (Kipf and Welling 2016b) in the way that it weighs the different types of relations differently when aggregating and the weights are adaptively learned during the training of the network. By this adaptation, the WGCN can control the amount of information from neighboring nodes used in aggregation. Roughly speaking, the WGCN treats a multi-relational KB graph as multiple single-relational subgraphs where each subgraph entails a specific type of relations. The WGCN determines how much weights to give to each subgraph when combining the GCN embeddings for a node.
The l-th WGCN layer takes the output vector of length $F^l$ for each node from the previous layer as input and generates a new representation comprising $F^{l+1}$ elements. Let $h_i^l$ represent the input (row) vector of the node $v_i$ in the l-th layer, and thus $H^l \in \mathbb{R}^{N \times F^l}$ be the input matrix for this layer. The initial embedding $H^1$ is randomly drawn from a Gaussian. If there are a total of $L$ layers in the WGCN, the output $H^{L+1}$ of the L-th layer is the final embedding. Let the total number of edge types be $T$ in a multi-relational KB graph with $E$ edges. The interaction strength between two adjacent nodes is determined by their relation type, and this strength is specified by a parameter $\{\alpha_t, 1 \le t \le T\}$ for each edge type, which is automatically learned in the neural network. Figure 1 illustrates the entire process of SACN. In this example, the WGCN layers of the network compute the embeddings for the red node in the middle graph. These layers aggregate the embeddings of neighboring entity nodes as specified in the KB relations. Three colors (blue, yellow and green) of the edges indicate three different relation types in the graph. The corresponding three entity nodes are summed up with different weights according to $\alpha_t$ in this layer to obtain the embedding of the red node. The edges with the same color (same relation type) use the same $\alpha_t$. Each layer has its own set of relation weights $\alpha_t^l$. Hence, the output of the l-th layer for the node $v_i$ can be written as follows:

$$h_i^{l+1} = \sigma\left(\sum_{j \in N_i} \alpha_t^l\, g(h_i^l, h_j^l)\right), \qquad (1)$$

where $h_j^l \in \mathbb{R}^{F^l}$ is the input for node $v_j$, and $v_j$ is a node in the neighborhood $N_i$ of node $v_i$. The $g$ function specifies how to incorporate neighboring information. Note that the activation function $\sigma$ here is applied to every component of its input vector. Although any function $g$ suitable for a KB embedding can be used in conjunction with the proposed framework, we implement the following $g$ function:

$$g(h_i^l, h_j^l) = h_j^l W^l, \qquad (2)$$

where $W^l \in \mathbb{R}^{F^l \times F^{l+1}}$ is the connection coefficient matrix used to linearly transform $h_i^l$ to $h_i^{l+1} \in \mathbb{R}^{F^{l+1}}$. In Eq. (1), the input vectors of all neighboring nodes are summed up but not that of the node $v_i$ itself, hence self-loops must be enforced in the network. For node $v_i$, the propagation process is defined as:

$$h_i^{l+1} = \sigma\left(\sum_{j \in N_i} \alpha_t^l h_j^l W^l + h_i^l W^l\right). \qquad (3)$$
The output of layer $l$ is a node feature matrix $H^{l+1} \in \mathbb{R}^{N \times F^{l+1}}$, and $h_i^{l+1}$ is the i-th row of $H^{l+1}$, which represents the features of node $v_i$ in the (l+1)-th layer.

The above process can be organized as a matrix multiplication as shown in Figure 2 to simultaneously compute embeddings for all nodes through an adjacency matrix. For each relation (edge) type, an adjacency matrix $A_t$ is a binary matrix whose ij-th entry is 1 if an edge connecting $v_i$ and $v_j$ exists and 0 otherwise. The final adjacency matrix is written as follows:

$$A^l = \sum_{t=1}^{T} (\alpha_t^l A_t) + I, \qquad (4)$$

where $I$ is the identity matrix of size $N \times N$. Basically, $A^l$ is the weighted sum of the adjacency matrices of the subgraphs plus self-connections. In our implementation, we consider all first-order neighbors in the linear transformation for each layer as shown in Figure 2:

$$H^{l+1} = \sigma(A^l H^l W^l). \qquad (5)$$
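As an illustration of Eqs. (4)-(5), the following NumPy sketch (a minimal example, not the authors' released code; the function and variable names are hypothetical) implements one WGCN layer that mixes per-relation adjacency matrices with learnable weights alpha and adds self-connections before the usual GCN-style propagation.

```python
import numpy as np

def wgcn_layer(H, A_per_relation, alpha, W, activation=np.tanh):
    """One weighted-GCN layer following Eqs. (4)-(5).

    H              -- (N, F_l) node features from the previous layer
    A_per_relation -- list of T binary (N, N) adjacency matrices, one per relation type
    alpha          -- length-T array of learnable relation weights for this layer
    W              -- (F_l, F_{l+1}) linear transformation
    """
    N = H.shape[0]
    A = np.eye(N)                          # self-connections: the identity term in Eq. (4)
    for a_t, A_t in zip(alpha, A_per_relation):
        A = A + a_t * A_t                  # weighted sum over relation-specific subgraphs
    return activation(A @ H @ W)           # Eq. (5)
```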
Node Attributes. In a KB graph, nodes are often associated with several attributes in the form of (entity, relation, attribute). For example, (s = Tom, r = people.person.gender, a = male) is an instance where gender is an attribute associated with a person. If a vector representation is used for node attributes, there would be two potential problems. First, the number of attributes for each node is usually small, and differs from one to another.
Hence, the attribute vector would be very sparse. Second, the value of zero in the attribute vectors may have ambiguous meanings: the node does not have the specific attribute, or the node misses the value for this attribute. These zeros would affect the accuracy of the embedding. In this work, the entity attributes in the knowledge graph are represented by another set of nodes in the network called attribute nodes. Attribute nodes act as the "bridges" to link the related entities. The entity embeddings can be transported over these "bridges" to incorporate the entity's attribute into its embedding. Because these attributes exhibit in triplets, we represent the attributes similarly to the representation of the entity o in relation triplets. Note that each type of attribute corresponds to a node. For instance, in our example, gender is represented by a single node rather than two nodes for "male" and "female". In this way, the WGCN not only utilizes the graph connectivity structure (relations and relation types), but also leverages the node attributes (a kind of graph structure) effectively. That is why we name our WGCN as a structure-aware convolution network.
Conv-TransE
We develop the Conv-TransE model as a decoder that is based on ConvE but with the translational property of TransE: e s + e r ≈ e o . The key difference of our approach from ConvE is that there is no reshaping after stacking e s and e r . Filters (or kernels) of size 2 × k , k ∈ {1, 2, 3, ...}, are used in the convolution. The example in Figure 1 uses 2 × 3 kernels to compute 2D convolutions. We experimented with several of such settings in our empirical study.
Note that in the encoder of SACN, the dimension of the relation embedding is commonly chosen to be the same as the dimension of the entity embedding; in other words, both are equal to $F^L$. Hence, the two embeddings can be stacked. For the decoder, the inputs are two embedding matrices: one $\mathbb{R}^{N \times F^L}$ matrix from the WGCN for all entity nodes, and one $\mathbb{R}^{M \times F^L}$ relation embedding matrix which is trained as well. Because we use a mini-batch stochastic training algorithm, the first step of the decoder performs a look-up operation upon the embedding matrices to retrieve the input $e_s$ and $e_r$ for the triplets in the mini-batch.
More precisely, given $C$ different kernels where the c-th kernel is parameterized by $\omega_c$, the convolution in the decoder is computed as follows:

$$m_c(e_s, e_r, n) = \sum_{\tau=0}^{K-1} \omega_c(\tau, 0)\,\hat{e}_s(n + \tau) + \omega_c(\tau, 1)\,\hat{e}_r(n + \tau), \qquad (6)$$

where $K$ is the kernel width, $n$ indexes the entries in the output vector with $n \in [0, F^L - 1]$, and the kernel parameters $\omega_c$ are trainable. $\hat{e}_s$ and $\hat{e}_r$ are zero-padded versions of $e_s$ and $e_r$, respectively. If the kernel width $K$ is odd, the first $\lfloor K/2 \rfloor$ and last $\lfloor K/2 \rfloor$ components are filled with 0, where $\lfloor \cdot \rfloor$ denotes the floor operation; otherwise, the first $K/2 - 1$ and last $K/2$ components are filled with 0. The other components are copied from $e_s$ and $e_r$ directly. As shown in Eq. (6), the convolution operation produces, over all $C$ kernels, a feature map matrix $M(e_s, e_r) \in \mathbb{R}^{C \times F^L}$, which is used in the scoring function

$$\psi(e_s, e_o) = f\big(\mathrm{vec}(M(e_s, e_r))\,W\big)\, e_o, \qquad (7)$$

where $W \in \mathbb{R}^{CF^L \times F^L}$ is a matrix for the linear transformation, and $f$ denotes a non-linear function. The feature map matrix is reshaped into a vector $\mathrm{vec}(M) \in \mathbb{R}^{CF^L}$ and projected into an $F^L$-dimensional space using $W$ for the linear transformation. Then the calculated embedding is matched to $e_o$ by an appropriate distance metric. During training in our experiments, we apply the logistic sigmoid function to the score:

$$p(e_s, e_r, e_o) = \sigma(\psi(e_s, e_o)). \qquad (8)$$

In Table 1, we summarize the scoring functions used by several state-of-the-art models. The vectors $e_s$ and $e_o$ are the subject and object embeddings respectively, $e_r$ is the relation embedding, "concat" means concatenation of the inputs, and "$*$" denotes the convolution operator.
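A compact NumPy sketch of this decoder is given below (an illustrative reimplementation under the equations above, not the authors' code; names such as conv_transe_score are hypothetical). It applies the aligned-dimension convolution of Eq. (6), the projection of Eq. (7), and the sigmoid scoring of Eq. (8).

```python
import numpy as np

def conv_transe_score(e_s, e_r, e_o, kernels, W, f=np.tanh):
    """Score one (s, r, o) triple with a Conv-TransE-style decoder (Eqs. (6)-(8)).

    e_s, e_r, e_o -- (F,) embeddings of subject, relation and candidate object
    kernels       -- (C, K, 2) kernel bank; column 0 acts on e_s, column 1 on e_r
    W             -- (C * F, F) projection matrix
    """
    C, K, _ = kernels.shape
    F = e_s.shape[0]
    left = K // 2 if K % 2 == 1 else K // 2 - 1    # padding rule described after Eq. (6)
    right = K // 2
    es_pad = np.pad(e_s, (left, right))
    er_pad = np.pad(e_r, (left, right))
    # Eq. (6): each kernel mixes aligned windows of e_s and e_r into a length-F feature map.
    M = np.empty((C, F))
    for c in range(C):
        for n in range(F):
            M[c, n] = np.sum(kernels[c, :, 0] * es_pad[n:n + K]
                             + kernels[c, :, 1] * er_pad[n:n + K])
    psi = f(M.reshape(-1) @ W)                     # Eq. (7): flatten feature maps, project to F dims
    return 1.0 / (1.0 + np.exp(-psi @ e_o))        # Eq. (8): sigmoid of the match with e_o
```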
In summary, the proposed SACN model takes advantage of knowledge graph node connectivity, node attributes and relation types. The learnable weights in WGCN help to collect adaptive amount of information from neighboring graph nodes. The entity attributes are added as additional nodes in the network and are easily integrated into the WGCN. Conv-TransE keeps the translational property between entities and relations to learn node embeddings for the link prediction. We also emphasize that our SACN has significant improvements over ConvE with or without the use of node attributes.
Experiments Benchmark Datasets
Three benchmark datasets (FB15k-237, WN18RR and FB15k-237-Attr) are utilized in this study to evaluate the performance of link prediction.
FB15k-237. The FB15k-237 (Toutanova and Chen 2015) dataset contains knowledge base relation triples and textual mentions of Freebase entity pairs, as used in the work published in (Toutanova and Chen 2015). The knowledge base triples are a subset of the FB15K (Bordes et al. 2013),
Data Construction
Most of the previous methods only model the entities and relations, and ignore the abundant entity attributes. Our method can easily model a large number of entity attribute triples. In order to prove the efficiency, we extract the attribute triples from the FB24k (Lin, Liu, and Sun 2016) dataset to build the evaluation dataset called FB15k-237-Attr.
FB24k. FB24k (Lin, Liu, and Sun 2016) is built based on Freebase dataset. FB24k only selects the entities and relations which constitute at least 30 triples. The number of entities is 23,634, and the number of relations is 673. In addition, the reversed relations are removed from the original dataset. In the FB24k datasets, the attribute triples are provided. FB24k contains 207,151 attribute triples and 314 attributes. FB15k-237-Attr. We extract the attribute triples of entities in FB15k-237 from FB24k. During the mapping, there are 7,589 nodes from the original 14,541 entities which have the node attributes. Finally, we extract 78,334 attribute triples from FB24k. These triples include 203 attributes and 247 relations. Based on these triples, we create the "FB15k-237-Attr" dataset, which includes 14,541 entity nodes, 203 attribute nodes, 484 relation types. All the 78,334 attribute triples are combined with the training set of FB15k-237.
Experimental Setup
The hyperparameters in our Conv-TransE and SACN models are determined by a grid search during training. We manually specify the hyperparameter search ranges. Here all the models use the WGCN with two layers. For different datasets, we have found that the following settings work well: for FB15k-237, set the dropout to 0.2, the number of kernels to 100, the learning rate to 0.003 and the embedding size to 200 for SACN; for the WN18RR dataset, set the dropout to 0.2, the number of kernels to 300, the learning rate to 0.003, and the embedding size to 200 for SACN. When using the Conv-TransE model alone, these settings still work well.
Each dataset is split into three sets for training, validation and testing, which is the same as the setting of the original ConvE. We use the adaptive moment estimation (Adam) algorithm (Kingma and Ba 2014) for optimization.
Results
Evaluation Protocol
Our experiments use the proportion of correct entities ranked in the top 1, 3 and 10 (Hits@1, Hits@3, Hits@10) and the mean reciprocal rank (MRR) as the metrics. In addition, since some corrupted triples generated during evaluation are themselves valid triples in the knowledge graph, we use the filtered setting (Bordes et al. 2013), i.e. we filter out all valid triples before ranking.
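For concreteness, the snippet below (an illustrative sketch, not part of the paper's evaluation code) computes a filtered rank for a single query and aggregates MRR and Hits@k over a list of ranks, following the protocol described above.

```python
import numpy as np

def filtered_rank(scores, target, known_objects):
    """Rank of the true object under the filtered setting.

    scores        -- (N,) plausibility scores for every candidate object of a query (s, r, ?)
    target        -- index of the true object
    known_objects -- indices of all objects forming valid triples with (s, r) in train/valid/test
    """
    s = scores.astype(float).copy()
    mask = np.asarray([o for o in known_objects if o != target], dtype=int)
    s[mask] = -np.inf                  # remove other valid triples from the ranking
    return int(np.sum(s > s[target])) + 1

def summarize(ranks):
    ranks = np.asarray(ranks, dtype=float)
    metrics = {"MRR": float(np.mean(1.0 / ranks))}
    for k in (1, 3, 10):
        metrics[f"Hits@{k}"] = float(np.mean(ranks <= k))
    return metrics
```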
Link Prediction
Our results on the standard FB15k-237, WN18RR and FB15k-237-Attr datasets are shown in Table 3. Table 3 reports the Hits@10, Hits@3, Hits@1 and MRR results of four different baseline models and our two models on the three knowledge graph datasets. The FB15k-237-Attr dataset is used to demonstrate the benefit of node attributes, so we run our SACN on FB15k-237-Attr and compare it with SACN trained on FB15k-237.
We first compare our Conv-TransE model with the four baseline models. ConvE has the best performance among all baselines. On the FB15k-237 dataset, our Conv-TransE model improves upon ConvE's Hits@10 by a margin of 4.1%, and upon ConvE's Hits@3 by a margin of 5.7% on the test set. On the WN18RR dataset, Conv-TransE improves upon ConvE's Hits@10 by a margin of 8.3%, and upon ConvE's Hits@3 by a margin of 9.3% on the test set. From these results, we conclude that Conv-TransE, while using a neural network, keeps the translational characteristic between entities and relations and achieves better performance.
Second, structure information is added in our SACN model. In Table 3, SACN also achieves the best performance on the test set compared with all baseline methods. On FB15k-237, compared with ConvE, our SACN model improves the Hits@10 value by a margin of 10.2%, the Hits@3 value by a margin of 11.4%, the Hits@1 value by a margin of 8.3% and the MRR value by a margin of 9.4% on the test set. On the WN18RR dataset, compared with ConvE, our SACN model improves the Hits@10 value by a margin of 12.5%, the Hits@3 value by a margin of 11.6%, the Hits@1 value by a margin of 10.3% and the MRR value by a margin of 2.2% on the test set. Thus our method achieves significant improvements over ConvE without attributes.
Third, we add node attributes to our SACN model, i.e. we use FB15k-237-Attr to train SACN. Note that SACN already has significant improvements over ConvE without attributes, and adding attributes improves performance again. Our model using attributes improves upon ConvE's Hits@10 by a margin of 12.2%, Hits@3 by a margin of 14.3%, Hits@1 by a margin of 12.5% and MRR by a margin of 12.5%. In addition, our SACN using attributes improves Hits@10 by a margin of 1.9%, Hits@3 by a margin of 2.6%, Hits@1 by a margin of 3.8% and MRR by a margin of 2.9% compared with SACN without attributes.
In order to better compare with ConvE, we also feed the attributes into ConvE. Here the attributes are treated as ordinary entity triplets. Following the official ConvE code with the default setting, the test results on FB15k-237-Attr were: 0.46 (Hits@10), 0.33 (Hits@3), 0.22 (Hits@1) and 0.30 (MRR). Compared with the performance without the attributes, adding the attributes into ConvE did not improve performance.
Convergence Analysis
Figure 3 shows the convergence of the three models. We can see that SACN (the red line) is always better than Conv-TransE (the yellow line) after several epochs, and the performance of SACN keeps increasing after around 120 epochs, whereas Conv-TransE has already reached its best performance by around 120 epochs. The gap between these two models shows the usefulness of structural information. When using the FB15k-237-Attr dataset, the performance of "SACN + Attr" is better than that of SACN alone.
In Table 4, different kernel sizes are examined in our models. A kernel of "2 × 1" means that knowledge or information is translated between one dimension of the entity vector and the corresponding dimension of the relation vector. If we increase the kernel size to "2 × k" where k = {3, 5}, the information is translated between a combination of k dimensions in the entity vector and a combination of k dimensions in the relation vector. This larger view for collecting information helps to increase the performance, as shown in Table 4. All the values of Hits@1, Hits@3, Hits@10 and MRR can be improved by increasing the kernel size on the FB15k-237 and FB15k-237-Attr datasets. However, the optimal kernel size may be task dependent.
Node Indegree Analysis
The indegree of a node in the knowledge graph is the number of edges connected to the node. A node with a larger indegree has more neighboring nodes, and this kind of node can receive more information from neighboring nodes than nodes with a smaller indegree. As shown in Table 5, we present the results for different sets of nodes with different indegree ranges. The average Hits@10 and Hits@3 scores are calculated. As the indegree range increases, the average values of Hits@10 and Hits@3 increase. First, a node with a small indegree benefits from the aggregation of neighbor information in the WGCN layers of SACN, so its embedding can be estimated robustly. Second, for a node with a high indegree, a lot more information is aggregated through the GCN, and the estimate of its embedding is substantially smoothed among neighbors. Thus the embedding learned by SACN is worse than that from Conv-TransE. One solution to this problem would be neighbor selection as in (Ying et al. 2018).
Conclusion and Future Work
We have introduced an end-to-end structure-aware convolutional network (SACN). The encoding network is a weighted graph convolutional network that utilizes the knowledge graph connectivity structure, node attributes and relation types. The WGCN with learnable weights has the benefit of collecting an adaptive amount of information from neighboring graph nodes. In addition, the entity attributes are added as nodes in the network so that attributes are transformed into knowledge structure information, which is easily integrated into the node embedding. The scoring network of SACN is a convolutional neural model, called Conv-TransE. It uses a convolutional network to model the relationship as a translation operation and capture the translational characteristic between entities and relations. We also show that Conv-TransE alone already achieves state-of-the-art performance. Overall, SACN achieves about a 10% improvement over the state of the art such as ConvE.
In the future, we would like to incorporate the neighbor selection idea into our training framework, such as, importance pooling in (Ying et al. 2018) which takes into account the importance of neighbors when aggregating the vector representations of neighbors. We would also like to extend our model to be scalable with larger knowledge graphs encouraged by the results in (Ying et al. 2018).
| 4,074 |
1811.04441
|
2950393809
|
Knowledge graph embedding has been an active research topic for knowledge base completion, with progressive improvement from the initial TransE, TransH, to the current state-of-the-art ConvE. ConvE uses 2D convolution over embeddings and multiple layers of nonlinear features to model knowledge graphs. The model can be efficiently trained and scalable to large knowledge graphs. However, there is no structure enforcement in the embedding space of ConvE. The recent graph convolutional network (GCN) provides another way of learning graph node embedding by successfully utilizing graph connectivity structure. In this work, we propose a novel end-to-end Structure-Aware Convolutional Network (SACN) that takes the benefit of GCN and ConvE together. SACN consists of an encoder of a weighted graph convolutional network (WGCN), and a decoder of a convolutional network called Conv-TransE. WGCN utilizes knowledge graph node structure, node attributes and edge relation types. It has learnable weights that adapt the amount of information from neighbors used in local aggregation, leading to more accurate embeddings of graph nodes. Node attributes in the graph are represented as additional nodes in the WGCN. The decoder Conv-TransE enables the state-of-the-art ConvE to be translational between entities and relations while keeps the same link prediction performance as ConvE. We demonstrate the effectiveness of the proposed SACN on standard FB15k-237 and WN18RR datasets, and it gives about 10 relative improvement over the state-of-the-art ConvE in terms of HITS@1, HITS@3 and HITS@10.
|
The most recent KG embedding models are ConvE @cite_14 and ConvKB @cite_20 . ConvE was the first model using 2D convolutions over embeddings of different embedding dimensions, with the hope of extracting more feature interactions. ConvKB replaced 2D convolutions in ConvE with 1D convolutions, which constrains the convolutions to be the same embedding dimensions and keeps the translational property of TransE. ConvKB can be considered as a special case of Conv-TransE that only uses filters with width equal to @math . Although ConvKB was shown to be better than ConvE , the results on two datasets (FB15k-237 and WN18RR) were not consistent, so we leave these results out of our comparison table. The other major difference of ConvE and ConvKB is on the loss functions used in the models. ConvE used the cross-entropy loss that could be sped up with 1-N scoring in the decoder, while ConvKB used a hinge loss that was computed from positive examples and sampled negative examples. We take the decoder from ConvE because we can easily integrate the encoder of GCN and the decoder of ConvE into an end-to-end training framework, while ConvKB is not suitable for our approach.
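To illustrate the difference in training objectives mentioned here, the sketch below (a simplified, hypothetical example, not code from either paper) contrasts ConvE-style 1-N scoring with binary cross-entropy over all candidate objects against a margin-based loss computed from sampled negatives, as described for ConvKB and translational models.

```python
import numpy as np

def one_to_n_bce_loss(psi_query, E, positive_objects):
    """1-N scoring: score one (s, r) query against every entity at once and apply
    binary cross-entropy over a multi-hot label vector of true objects."""
    logits = E @ psi_query                       # (N,) scores for all candidate objects
    labels = np.zeros(E.shape[0])
    labels[positive_objects] = 1.0
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12
    return float(-np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps)))

def margin_loss(pos_scores, neg_scores, margin=1.0):
    """Margin-based (hinge-style) loss computed from positive triples and sampled negatives."""
    pos_scores = np.asarray(pos_scores, dtype=float)
    neg_scores = np.asarray(neg_scores, dtype=float)
    return float(np.mean(np.maximum(0.0, margin - pos_scores[:, None] + neg_scores[None, :])))
```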
|
{
"abstract": [
"Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models -- which potentially limits performance. In this work, we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree -- which are common in highly-connected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set -- however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets -- deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across most datasets.",
"We introduce a novel embedding method for knowledge base completion task. Our approach advances state-of-the-art (SOTA) by employing a convolutional neural network (CNN) for the task which can capture global relationships and transitional characteristics. We represent each triple (head entity, relation, tail entity) as a 3-column matrix which is the input for the convolution layer. Different filters having a same shape of 1x3 are operated over the input matrix to produce different feature maps which are then concatenated into a single feature vector. This vector is used to return a score for the triple via a dot product. The returned score is used to predict whether the triple is valid or not. Experiments show that ConvKB achieves better link prediction results than previous SOTA models on two current benchmark datasets WN18RR and FB15k-237."
],
"cite_N": [
"@cite_14",
"@cite_20"
],
"mid": [
"2728059831",
"2774837955"
]
}
|
End-to-end Structure-Aware Convolutional Networks for Knowledge Base Completion
|
Over the recent years, large-scale knowledge bases (KBs), such as Freebase (Bollacker et al. 2008), DBpedia (Auer et al. 2007), NELL (Carlson et al. 2010) and YAGO3 (Mahdisoltani, Biega, and Suchanek 2013), have been built to store structured information about common facts. KBs are multi-relational graphs whose nodes represent entities and edges represent relationships between entities, and the edges are labeled with different relations. The relationships are organized in the form of (s, r, o) triplets (e.g. entity s = Abraham Lincoln, relation r = DateOfBirth, entity o = 02-12-1809). These KBs are extensively used for web search, recommendation and question answering. Although these KBs already contain millions of entities and triplets, they are far from complete compared to existing facts and newly added knowledge of the real world. Therefore knowledge base completion is important in order to predict new triplets based on existing ones and thus further expand KBs.
One of the recent active research areas for knowledge base completion is knowledge graph embedding: it encodes the semantics of entities and relations in a continuous low-dimensional vector space (called embeddings). These embeddings are then used for predicting new relations. Starting from a simple and effective approach called TransE (Bordes et al. 2013), many knowledge graph embedding methods have been proposed, such as TransH (Wang et al. 2014), TransR (Lin et al. 2015), DistMult (Yang et al. 2014), TransD (Ji et al. 2015), ComplEx (Trouillon et al. 2016), and STransE (Nguyen et al. 2016). Some surveys (Nguyen 2017; Wang et al. 2017) give details and comparisons of these embedding methods.
The most recent ConvE (Dettmers et al. 2017) model uses 2D convolution over embeddings and multiple layers of nonlinear features, and achieves state-of-the-art performance on common benchmark datasets for knowledge graph link prediction. In ConvE, the embeddings of s and r are reshaped and concatenated into an input matrix and fed to the convolution layer. Convolutional filters of n × n are used to output feature maps that span different embedding dimensions. Thus ConvE does not keep the translational property of TransE, which is an additive embedding vector operation: $e_s + e_r \approx e_o$. In this paper, we remove the reshape step of ConvE and operate convolutional filters directly in the same dimensions of s and r. This modification gives better performance compared with the original ConvE, and has an intuitive interpretation which keeps the global learning metric the same for s, r, and o in an embedding triple $(e_s, e_r, e_o)$. We name this embedding Conv-TransE.
ConvE also does not incorporate connectivity structure in the knowledge graph into the embedding space. In contrast, graph convolutional network (GCN) has been an effective tool to create node embeddings which aggregate local information in the graph neighborhood for each node (Kipf and Welling 2016b;Hamilton, Ying, and Leskovec 2017a;Kipf and Welling 2016a;Pham et al. 2017;Shang et al. 2018). GCN models have additional benefits (Hamilton, Ying, and Leskovec 2017b), such as leveraging the attributes associated with nodes. They can also impose the same aggregation scheme when computing the convolution for each node, which can be considered a method of regularization, and improves efficiency. Although scalability is originally an issue for GCN models, the latest data-efficient GCN, Pin-Sage (Ying et al. 2018), is able to handle billions of nodes and edges.
In this paper, we propose an end-to-end graph Structure-Aware Convolutional Network (SACN) that takes all the benefits of GCN and ConvE together. SACN consists of an encoder of a weighted graph convolutional network (WGCN), and a decoder of a convolutional network called Conv-TransE. WGCN utilizes knowledge graph node structure, node attributes and relation types. It has learnable weights to determine the amount of information from neighbors used in local aggregation, leading to more accurate embeddings of graph nodes. Node attributes are added to the WGCN as additional nodes for easy integration. The output of WGCN becomes the input of the decoder Conv-TransE. Conv-TransE is similar to ConvE but with the difference that Conv-TransE keeps the translational characteristic between entities and relations. We show that Conv-TransE performs better than ConvE, and our SACN improves further on top of Conv-TransE on the standard benchmark datasets. The code for our model and experiments is publicly available 1 .
Our contributions are summarized as follows:
• We present an end-to-end network learning framework SACN that takes benefit of both GCN and Conv-TransE. The encoder GCN model leverages graph structure and attributes of graph nodes. The decoder Conv-TransE simplifies ConvE with special convolutions and keeps the translational property of TransE and the prediction performance of ConvE;
• We demonstrate the effectiveness of our proposed SACN on the standard FB15k-237 and WN18RR datasets, and show about 10% relative improvement over the state-of-the-art ConvE in terms of HITS@1, HITS@3 and HITS@10.
Method
In this section, we describe the proposed end-to-end SACN. The encoder WGCN is focused on representing entities by aggregating connected entities as specified by the relations in the KB. With node embeddings as the input, the decoder Conv-TransE network aims to represent the relations more accurately by recovering the original triplets in the KB. Both encoder and decoder are trained jointly by minimizing the discrepancy (cross-entropy) between the embeddings $e_s + e_r$ and $e_o$ to preserve the translational property $e_s + e_r \approx e_o$. We consider an undirected graph $G = (V, E)$ throughout this section, where $V$ is a set of nodes with $|V| = N$, and $E \subseteq V \times V$ is a set of edges with $|E| = M$.
Weighted Graph Convolutional Layer
The WGCN is an extension of classic GCN (Kipf and Welling 2016b) in the way that it weighs the different types of relations differently when aggregating and the weights are adaptively learned during the training of the network. By this adaptation, the WGCN can control the amount of information from neighboring nodes used in aggregation. Roughly speaking, the WGCN treats a multi-relational KB graph as multiple single-relational subgraphs where each subgraph entails a specific type of relations. The WGCN determines how much weights to give to each subgraph when combining the GCN embeddings for a node.
The l-th WGCN layer takes the output vector of length $F^l$ for each node from the previous layer as input and generates a new representation comprising $F^{l+1}$ elements. Let $h_i^l$ represent the input (row) vector of the node $v_i$ in the l-th layer, and thus $H^l \in \mathbb{R}^{N \times F^l}$ be the input matrix for this layer. The initial embedding $H^1$ is randomly drawn from a Gaussian. If there are a total of $L$ layers in the WGCN, the output $H^{L+1}$ of the L-th layer is the final embedding. Let the total number of edge types be $T$ in a multi-relational KB graph with $E$ edges. The interaction strength between two adjacent nodes is determined by their relation type, and this strength is specified by a parameter $\{\alpha_t, 1 \le t \le T\}$ for each edge type, which is automatically learned in the neural network. Figure 1 illustrates the entire process of SACN. In this example, the WGCN layers of the network compute the embeddings for the red node in the middle graph. These layers aggregate the embeddings of neighboring entity nodes as specified in the KB relations. Three colors (blue, yellow and green) of the edges indicate three different relation types in the graph. The corresponding three entity nodes are summed up with different weights according to $\alpha_t$ in this layer to obtain the embedding of the red node. The edges with the same color (same relation type) use the same $\alpha_t$. Each layer has its own set of relation weights $\alpha_t^l$. Hence, the output of the l-th layer for the node $v_i$ can be written as follows:

$$h_i^{l+1} = \sigma\left(\sum_{j \in N_i} \alpha_t^l\, g(h_i^l, h_j^l)\right), \qquad (1)$$

where $h_j^l \in \mathbb{R}^{F^l}$ is the input for node $v_j$, and $v_j$ is a node in the neighborhood $N_i$ of node $v_i$. The $g$ function specifies how to incorporate neighboring information. Note that the activation function $\sigma$ here is applied to every component of its input vector. Although any function $g$ suitable for a KB embedding can be used in conjunction with the proposed framework, we implement the following $g$ function:

$$g(h_i^l, h_j^l) = h_j^l W^l, \qquad (2)$$

where $W^l \in \mathbb{R}^{F^l \times F^{l+1}}$ is the connection coefficient matrix used to linearly transform $h_i^l$ to $h_i^{l+1} \in \mathbb{R}^{F^{l+1}}$. In Eq. (1), the input vectors of all neighboring nodes are summed up but not that of the node $v_i$ itself, hence self-loops must be enforced in the network. For node $v_i$, the propagation process is defined as:

$$h_i^{l+1} = \sigma\left(\sum_{j \in N_i} \alpha_t^l h_j^l W^l + h_i^l W^l\right). \qquad (3)$$
The output of layer $l$ is a node feature matrix $H^{l+1} \in \mathbb{R}^{N \times F^{l+1}}$, and $h_i^{l+1}$ is the i-th row of $H^{l+1}$, which represents the features of node $v_i$ in the (l+1)-th layer.

The above process can be organized as a matrix multiplication as shown in Figure 2 to simultaneously compute embeddings for all nodes through an adjacency matrix. For each relation (edge) type, an adjacency matrix $A_t$ is a binary matrix whose ij-th entry is 1 if an edge connecting $v_i$ and $v_j$ exists and 0 otherwise. The final adjacency matrix is written as follows:

$$A^l = \sum_{t=1}^{T} (\alpha_t^l A_t) + I, \qquad (4)$$

where $I$ is the identity matrix of size $N \times N$. Basically, $A^l$ is the weighted sum of the adjacency matrices of the subgraphs plus self-connections. In our implementation, we consider all first-order neighbors in the linear transformation for each layer as shown in Figure 2:

$$H^{l+1} = \sigma(A^l H^l W^l). \qquad (5)$$
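As a small illustration of how the relation-specific adjacency matrices $A_t$ and the weighted sum in Eq. (4) can be assembled in practice, consider the following sketch (an assumed, simplified construction; the function names are hypothetical and not from the released code).

```python
import numpy as np

def build_relation_adjacencies(triples, num_nodes, num_relations):
    """Binary per-relation adjacency matrices A_t used in Eq. (4).

    triples -- iterable of (subject_index, relation_type, object_index);
               the graph is treated as undirected here.
    """
    A = np.zeros((num_relations, num_nodes, num_nodes))
    for s, r, o in triples:
        A[r, s, o] = 1.0
        A[r, o, s] = 1.0
    return A

def weighted_adjacency(A, alpha):
    """A^l = sum_t alpha_t * A_t + I, as in Eq. (4)."""
    return np.tensordot(alpha, A, axes=1) + np.eye(A.shape[1])
```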
Node Attributes. In a KB graph, nodes are often associated with several attributes in the form of (entity, relation, attribute). For example, (s = Tom, r = people.person.gender, a = male) is an instance where gender is an attribute associated with a person. If a vector representation is used for node attributes, there would be two potential problems. First, the number of attributes for each node is usually small, and differs from one to another.
Hence, the attribute vector would be very sparse. Second, the value of zero in the attribute vectors may have ambiguous meanings: the node does not have the specific attribute, or the node misses the value for this attribute. These zeros would affect the accuracy of the embedding. In this work, the entity attributes in the knowledge graph are represented by another set of nodes in the network called attribute nodes. Attribute nodes act as the "bridges" to link the related entities. The entity embeddings can be transported over these "bridges" to incorporate the entity's attribute into its embedding. Because these attributes exhibit in triplets, we represent the attributes similarly to the representation of the entity o in relation triplets. Note that each type of attribute corresponds to a node. For instance, in our example, gender is represented by a single node rather than two nodes for "male" and "female". In this way, the WGCN not only utilizes the graph connectivity structure (relations and relation types), but also leverages the node attributes (a kind of graph structure) effectively. That is why we name our WGCN as a structure-aware convolution network.
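A minimal sketch of the attribute-node construction described above is given below (an assumed illustration; the helper name and triple format are hypothetical). Each attribute type becomes one extra node, and attribute triples are turned into ordinary edges so the WGCN can aggregate over them like any other relation.

```python
def add_attribute_nodes(entity_triples, attribute_triples, num_entities):
    """Augment the triple list with attribute nodes.

    attribute_triples -- iterable of (entity_index, attribute_relation, attribute_value);
                         following the description above, one node is created per attribute
                         type (e.g. a single "gender" node), not per attribute value.
    Returns the augmented triples and a mapping from attribute relation to its node index.
    """
    attr_node = {}
    augmented = list(entity_triples)
    for ent, rel, _value in attribute_triples:
        if rel not in attr_node:
            attr_node[rel] = num_entities + len(attr_node)   # new node appended after entities
        augmented.append((ent, rel, attr_node[rel]))
    return augmented, attr_node
```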
Conv-TransE
We develop the Conv-TransE model as a decoder that is based on ConvE but with the translational property of TransE: e s + e r ≈ e o . The key difference of our approach from ConvE is that there is no reshaping after stacking e s and e r . Filters (or kernels) of size 2 × k , k ∈ {1, 2, 3, ...}, are used in the convolution. The example in Figure 1 uses 2 × 3 kernels to compute 2D convolutions. We experimented with several of such settings in our empirical study.
Note that in the encoder of SACN, the dimension of the relation embedding is commonly chosen to be the same as the dimension of the entity embedding; in other words, both are equal to $F^L$. Hence, the two embeddings can be stacked. For the decoder, the inputs are two embedding matrices: one $\mathbb{R}^{N \times F^L}$ matrix from the WGCN for all entity nodes, and one $\mathbb{R}^{M \times F^L}$ relation embedding matrix which is trained as well. Because we use a mini-batch stochastic training algorithm, the first step of the decoder performs a look-up operation upon the embedding matrices to retrieve the input $e_s$ and $e_r$ for the triplets in the mini-batch.
More precisely, given $C$ different kernels where the c-th kernel is parameterized by $\omega_c$, the convolution in the decoder is computed as follows:

$$m_c(e_s, e_r, n) = \sum_{\tau=0}^{K-1} \omega_c(\tau, 0)\,\hat{e}_s(n + \tau) + \omega_c(\tau, 1)\,\hat{e}_r(n + \tau), \qquad (6)$$

where $K$ is the kernel width, $n$ indexes the entries in the output vector with $n \in [0, F^L - 1]$, and the kernel parameters $\omega_c$ are trainable. $\hat{e}_s$ and $\hat{e}_r$ are zero-padded versions of $e_s$ and $e_r$, respectively. If the kernel width $K$ is odd, the first $\lfloor K/2 \rfloor$ and last $\lfloor K/2 \rfloor$ components are filled with 0, where $\lfloor \cdot \rfloor$ denotes the floor operation; otherwise, the first $K/2 - 1$ and last $K/2$ components are filled with 0. The other components are copied from $e_s$ and $e_r$ directly. As shown in Eq. (6), the convolution operation produces, over all $C$ kernels, a feature map matrix $M(e_s, e_r) \in \mathbb{R}^{C \times F^L}$, which is used in the scoring function

$$\psi(e_s, e_o) = f\big(\mathrm{vec}(M(e_s, e_r))\,W\big)\, e_o, \qquad (7)$$

where $W \in \mathbb{R}^{CF^L \times F^L}$ is a matrix for the linear transformation, and $f$ denotes a non-linear function. The feature map matrix is reshaped into a vector $\mathrm{vec}(M) \in \mathbb{R}^{CF^L}$ and projected into an $F^L$-dimensional space using $W$ for the linear transformation. Then the calculated embedding is matched to $e_o$ by an appropriate distance metric. During training in our experiments, we apply the logistic sigmoid function to the score:

$$p(e_s, e_r, e_o) = \sigma(\psi(e_s, e_o)). \qquad (8)$$

In Table 1, we summarize the scoring functions used by several state-of-the-art models. The vectors $e_s$ and $e_o$ are the subject and object embeddings respectively, $e_r$ is the relation embedding, "concat" means concatenation of the inputs, and "$*$" denotes the convolution operator.
In summary, the proposed SACN model takes advantage of knowledge graph node connectivity, node attributes and relation types. The learnable weights in WGCN help to collect adaptive amount of information from neighboring graph nodes. The entity attributes are added as additional nodes in the network and are easily integrated into the WGCN. Conv-TransE keeps the translational property between entities and relations to learn node embeddings for the link prediction. We also emphasize that our SACN has significant improvements over ConvE with or without the use of node attributes.
Experiments Benchmark Datasets
Three benchmark datasets (FB15k-237, WN18RR and FB15k-237-Attr) are utilized in this study to evaluate the performance of link prediction.
FB15k-237. The FB15k-237 (Toutanova and Chen 2015) dataset contains knowledge base relation triples and textual mentions of Freebase entity pairs, as used in the work published in (Toutanova and Chen 2015). The knowledge base triples are a subset of the FB15K (Bordes et al. 2013),
Data Construction
Most of the previous methods only model the entities and relations, and ignore the abundant entity attributes. Our method can easily model a large number of entity attribute triples. In order to prove the efficiency, we extract the attribute triples from the FB24k (Lin, Liu, and Sun 2016) dataset to build the evaluation dataset called FB15k-237-Attr.
FB24k. FB24k (Lin, Liu, and Sun 2016) is built based on Freebase dataset. FB24k only selects the entities and relations which constitute at least 30 triples. The number of entities is 23,634, and the number of relations is 673. In addition, the reversed relations are removed from the original dataset. In the FB24k datasets, the attribute triples are provided. FB24k contains 207,151 attribute triples and 314 attributes. FB15k-237-Attr. We extract the attribute triples of entities in FB15k-237 from FB24k. During the mapping, there are 7,589 nodes from the original 14,541 entities which have the node attributes. Finally, we extract 78,334 attribute triples from FB24k. These triples include 203 attributes and 247 relations. Based on these triples, we create the "FB15k-237-Attr" dataset, which includes 14,541 entity nodes, 203 attribute nodes, 484 relation types. All the 78,334 attribute triples are combined with the training set of FB15k-237.
Experimental Setup
The hyperparameters in our Conv-TransE and SACN models are determined by a grid search during training. We manually specify the hyperparameter search ranges. Here all the models use the WGCN with two layers. For different datasets, we have found that the following settings work well: for FB15k-237, set the dropout to 0.2, the number of kernels to 100, the learning rate to 0.003 and the embedding size to 200 for SACN; for the WN18RR dataset, set the dropout to 0.2, the number of kernels to 300, the learning rate to 0.003, and the embedding size to 200 for SACN. When using the Conv-TransE model alone, these settings still work well.
Each dataset is split into three sets for training, validation and testing, which is the same as the setting of the original ConvE. We use the adaptive moment estimation (Adam) algorithm (Kingma and Ba 2014) for optimization.
Results
Evaluation Protocol
Our experiments use the proportion of correct entities ranked in the top 1, 3 and 10 (Hits@1, Hits@3, Hits@10) and the mean reciprocal rank (MRR) as the metrics. In addition, since some corrupted triples generated during evaluation are themselves valid triples in the knowledge graph, we use the filtered setting (Bordes et al. 2013), i.e. we filter out all valid triples before ranking.
Link Prediction
Our results on the standard FB15k-237, WN18RR and FB15k-237-Attr datasets are shown in Table 3. Table 3 reports the Hits@10, Hits@3, Hits@1 and MRR results of four different baseline models and our two models on the three knowledge graph datasets. The FB15k-237-Attr dataset is used to demonstrate the benefit of node attributes, so we run our SACN on FB15k-237-Attr and compare it with SACN trained on FB15k-237.
We first compare our Conv-TransE model with the four baseline models. ConvE has the best performance among all baselines. On the FB15k-237 dataset, our Conv-TransE model improves upon ConvE's Hits@10 by a margin of 4.1%, and upon ConvE's Hits@3 by a margin of 5.7% on the test set. On the WN18RR dataset, Conv-TransE improves upon ConvE's Hits@10 by a margin of 8.3%, and upon ConvE's Hits@3 by a margin of 9.3% on the test set. From these results, we conclude that Conv-TransE, while using a neural network, keeps the translational characteristic between entities and relations and achieves better performance.
Second, structure information is added in our SACN model. In Table 3, SACN also achieves the best performance on the test set compared with all baseline methods. On FB15k-237, compared with ConvE, our SACN model improves the Hits@10 value by a margin of 10.2%, the Hits@3 value by a margin of 11.4%, the Hits@1 value by a margin of 8.3% and the MRR value by a margin of 9.4% on the test set. On the WN18RR dataset, compared with ConvE, our SACN model improves the Hits@10 value by a margin of 12.5%, the Hits@3 value by a margin of 11.6%, the Hits@1 value by a margin of 10.3% and the MRR value by a margin of 2.2% on the test set. Thus our method achieves significant improvements over ConvE without attributes.
Third, we add node attributes to our SACN model, i.e. we use FB15k-237-Attr to train SACN. Note that SACN already has significant improvements over ConvE without attributes, and adding attributes improves performance again. Our model using attributes improves upon ConvE's Hits@10 by a margin of 12.2%, Hits@3 by a margin of 14.3%, Hits@1 by a margin of 12.5% and MRR by a margin of 12.5%. In addition, our SACN using attributes improves Hits@10 by a margin of 1.9%, Hits@3 by a margin of 2.6%, Hits@1 by a margin of 3.8% and MRR by a margin of 2.9% compared with SACN without attributes.
In order to better compare with ConvE, we also feed the attributes into ConvE. Here the attributes are treated as ordinary entity triplets. Following the official ConvE code with the default setting, the test results on FB15k-237-Attr were: 0.46 (Hits@10), 0.33 (Hits@3), 0.22 (Hits@1) and 0.30 (MRR). Compared with the performance without the attributes, adding the attributes into ConvE did not improve performance.
Convergence Analysis
Figure 3 shows the convergence of the three models. We can see that SACN (the red line) is always better than Conv-TransE (the yellow line) after several epochs, and the performance of SACN keeps increasing after around 120 epochs, whereas Conv-TransE has already reached its best performance by around 120 epochs. The gap between these two models shows the usefulness of structural information. When using the FB15k-237-Attr dataset, the performance of "SACN + Attr" is better than that of SACN alone.
In Table 4, different kernel sizes are examined in our models. A kernel of "2 × 1" means that knowledge or information is translated between one dimension of the entity vector and the corresponding dimension of the relation vector. If we increase the kernel size to "2 × k" where k = {3, 5}, the information is translated between a combination of k dimensions in the entity vector and a combination of k dimensions in the relation vector. This larger view for collecting information helps to increase the performance, as shown in Table 4. All the values of Hits@1, Hits@3, Hits@10 and MRR can be improved by increasing the kernel size on the FB15k-237 and FB15k-237-Attr datasets. However, the optimal kernel size may be task dependent.
Node Indegree Analysis
The indegree of a node in the knowledge graph is the number of edges connected to the node. A node with a larger indegree has more neighboring nodes, and this kind of node can receive more information from neighboring nodes than nodes with a smaller indegree. As shown in Table 5, we present the results for different sets of nodes with different indegree ranges. The average Hits@10 and Hits@3 scores are calculated. As the indegree range increases, the average values of Hits@10 and Hits@3 increase. First, a node with a small indegree benefits from the aggregation of neighbor information in the WGCN layers of SACN, so its embedding can be estimated robustly. Second, for a node with a high indegree, a lot more information is aggregated through the GCN, and the estimate of its embedding is substantially smoothed among neighbors. Thus the embedding learned by SACN is worse than that from Conv-TransE. One solution to this problem would be neighbor selection as in (Ying et al. 2018).
Conclusion and Future Work
We have introduced an end-to-end structure-aware convolutional network (SACN). The encoding network is a weighted graph convolutional network that utilizes the knowledge graph connectivity structure, node attributes and relation types. The WGCN with learnable weights has the benefit of collecting an adaptive amount of information from neighboring graph nodes. In addition, the entity attributes are added as nodes in the network so that attributes are transformed into knowledge structure information, which is easily integrated into the node embedding. The scoring network of SACN is a convolutional neural model, called Conv-TransE. It uses a convolutional network to model the relationship as a translation operation and capture the translational characteristic between entities and relations. We also show that Conv-TransE alone already achieves state-of-the-art performance. Overall, SACN achieves about a 10% improvement over the state of the art such as ConvE.
In the future, we would like to incorporate the neighbor selection idea into our training framework, such as the importance pooling in (Ying et al. 2018), which takes into account the importance of neighbors when aggregating the vector representations of neighbors. We would also like to extend our model to scale to larger knowledge graphs, encouraged by the results in (Ying et al. 2018).
| 4,074 |
1811.04441
|
2950393809
|
Knowledge graph embedding has been an active research topic for knowledge base completion, with progressive improvement from the initial TransE, TransH, to the current state-of-the-art ConvE. ConvE uses 2D convolution over embeddings and multiple layers of nonlinear features to model knowledge graphs. The model can be efficiently trained and is scalable to large knowledge graphs. However, there is no structure enforcement in the embedding space of ConvE. The recent graph convolutional network (GCN) provides another way of learning graph node embedding by successfully utilizing graph connectivity structure. In this work, we propose a novel end-to-end Structure-Aware Convolutional Network (SACN) that takes the benefit of GCN and ConvE together. SACN consists of an encoder of a weighted graph convolutional network (WGCN), and a decoder of a convolutional network called Conv-TransE. WGCN utilizes knowledge graph node structure, node attributes and edge relation types. It has learnable weights that adapt the amount of information from neighbors used in local aggregation, leading to more accurate embeddings of graph nodes. Node attributes in the graph are represented as additional nodes in the WGCN. The decoder Conv-TransE enables the state-of-the-art ConvE to be translational between entities and relations while keeping the same link prediction performance as ConvE. We demonstrate the effectiveness of the proposed SACN on standard FB15k-237 and WN18RR datasets, and it gives about 10% relative improvement over the state-of-the-art ConvE in terms of HITS@1, HITS@3 and HITS@10.
|
GCNs were first proposed in @cite_27 , where graph convolutional operations were defined in the Fourier domain. The eigendecomposition of the graph Laplacian required intensive computation. Later, smooth parametric spectral filters @cite_7 @cite_16 were introduced to achieve localization in the spatial domain and to improve computational efficiency. Recently, @cite_9 simplified these spectral methods with a first-order approximation using Chebyshev polynomials. Spatial graph convolution approaches @cite_3 define convolutions directly on the graph, summing node features over all spatial neighbors using the adjacency matrix.
|
{
"abstract": [
"Deep Learning's recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, that allows expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far less parameters to estimate.",
"We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.",
"Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.",
"Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.",
""
],
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_3",
"@cite_27",
"@cite_16"
],
"mid": [
"637153065",
"2519887557",
"2962767366",
"1662382123",
""
]
}
|
End-to-end Structure-Aware Convolutional Networks for Knowledge Base Completion
|
Over the recent years, large-scale knowledge bases (KBs), such as Freebase (Bollacker et al. 2008), DBpedia (Auer et al. 2007), NELL (Carlson et al. 2010) and YAGO3 (Mahdisoltani, Biega, and Suchanek 2013), have been built to store structured information about common facts. KBs are multi-relational graphs whose nodes represent entities and edges represent relationships between entities, and the edges are labeled with different relations. The relationships are organized in the form of (s, r, o) triplets (e.g. entity s = Abraham Lincoln, relation r = DateOfBirth, entity o = 02-12-1809). These KBs are extensively used for web search, recommendation and question answering. Although these KBs already contain millions of entities and triplets, they are far from complete compared to existing facts and newly added knowledge of the real world. Therefore knowledge base completion is important in order to predict new triplets based on existing ones and thus further expand KBs.
One of the recent active research areas for knowledge base completion is knowledge graph embedding: it encodes the semantics of entities and relations in a continuous low-dimensional vector space (called embeddings). These embeddings are then used for predicting new relations. Starting from a simple and effective approach called TransE (Bordes et al. 2013), many knowledge graph embedding methods have been proposed, such as TransH (Wang et al. 2014), TransR (Lin et al. 2015), DistMult (Yang et al. 2014), TransD (Ji et al. 2015), ComplEx (Trouillon et al. 2016), STransE (Nguyen et al. 2016). Some surveys (Nguyen 2017; Wang et al. 2017) give details and comparisons of these embedding methods.
The most recent ConvE (Dettmers et al. 2017) model uses 2D convolution over embeddings and multiple layers of nonlinear features, and achieves the state-of-the-art performance on common benchmark datasets for knowledge graph link prediction. In ConvE, the embeddings of s and r are reshaped and concatenated into an input matrix and fed to the convolution layer. Convolutional filters of size n × n are used to output feature maps that span different dimensions of the embedding entries. Thus ConvE does not keep the translational property of TransE, which is an additive embedding vector operation: e_s + e_r ≈ e_o. In this paper, we remove the reshape step of ConvE and operate the convolutional filters directly in the same dimensions of s and r. This modification gives better performance compared with the original ConvE, and has an intuitive interpretation which keeps the global learning metric the same for s, r, and o in an embedding triple (e_s, e_r, e_o). We name this embedding Conv-TransE.
ConvE also does not incorporate the connectivity structure of the knowledge graph into the embedding space. In contrast, the graph convolutional network (GCN) has been an effective tool to create node embeddings which aggregate local information in the graph neighborhood for each node (Kipf and Welling 2016b; Hamilton, Ying, and Leskovec 2017a; Kipf and Welling 2016a; Pham et al. 2017; Shang et al. 2018). GCN models have additional benefits (Hamilton, Ying, and Leskovec 2017b), such as leveraging the attributes associated with nodes. They can also impose the same aggregation scheme when computing the convolution for each node, which can be considered a method of regularization, and improves efficiency. Although scalability was originally an issue for GCN models, the latest data-efficient GCN, PinSage (Ying et al. 2018), is able to handle billions of nodes and edges.
In this paper, we propose an end-to-end graph Structure-Aware Convolutional Network (SACN) that takes all the benefits of GCN and ConvE together. SACN consists of an encoder of a weighted graph convolutional network (WGCN), and a decoder of a convolutional network called Conv-TransE. WGCN utilizes knowledge graph node structure, node attributes and relation types. It has learnable weights to determine the amount of information from neighbors used in local aggregation, leading to more accurate embeddings of graph nodes. Node attributes are added to the WGCN as additional nodes for easy integration. The output of the WGCN becomes the input of the decoder Conv-TransE. Conv-TransE is similar to ConvE but with the difference that Conv-TransE keeps the translational characteristic between entities and relations. We show that Conv-TransE performs better than ConvE, and our SACN improves further on top of Conv-TransE on the standard benchmark datasets. The code for our model and experiments is publicly available.
Our contributions are summarized as follows:
• We present an end-to-end network learning framework SACN that takes advantage of both GCN and Conv-TransE. The encoder GCN model leverages graph structure and attributes of graph nodes. The decoder Conv-TransE simplifies ConvE with special convolutions and keeps the translational property of TransE and the prediction performance of ConvE;
• We demonstrate the effectiveness of our proposed SACN on the standard FB15k-237 and WN18RR datasets, and show about 10% relative improvement over the state-of-the-art ConvE in terms of HITS@1, HITS@3 and HITS@10.
Method
In this section, we describe the proposed end-to-end SACN. The encoder WGCN is focused on representing entities by aggregating connected entities as specified by the relations in the KB. With node embeddings as the input, the decoder Conv-TransE network aims to represent the relations more accurately by recovering the original triplets in the KB. Both encoder and decoder are trained jointly by minimizing the discrepancy (cross-entropy) between the embeddings e_s + e_r and e_o to preserve the translational property e_s + e_r ≈ e_o. We consider an undirected graph G = (V, E) throughout this section, where V is a set of nodes with |V| = N, and E ⊆ V × V is a set of edges with |E| = M.
Weighted Graph Convolutional Layer
The WGCN is an extension of the classic GCN (Kipf and Welling 2016b) in that it weighs different relation types differently when aggregating, and the weights are adaptively learned during the training of the network. By this adaptation, the WGCN can control the amount of information from neighboring nodes used in aggregation. Roughly speaking, the WGCN treats a multi-relational KB graph as multiple single-relational subgraphs, where each subgraph entails a specific type of relation. The WGCN determines how much weight to give to each subgraph when combining the GCN embeddings for a node.
The l-th WGCN layer takes the output vector of length F_l for each node from the previous layer as input and generates a new representation comprising F_{l+1} elements. Let h_i^l represent the input (row) vector of the node v_i in the l-th layer, and thus H^l ∈ R^{N×F_l} be the input matrix for this layer. The initial embedding H^1 is randomly drawn from a Gaussian. If there are a total of L layers in the WGCN, the output H^{L+1} of the L-th layer is the final embedding. Let the total number of edge types be T in a multi-relational KB graph with E edges. The interaction strength between two adjacent nodes is determined by their relation type, and this strength is specified by a parameter {α_t, 1 ≤ t ≤ T} for each edge type, which is automatically learned in the neural network. Figure 1 illustrates the entire process of SACN. In this example, the WGCN layers of the network compute the embeddings for the red node in the middle graph. These layers aggregate the embeddings of neighboring entity nodes as specified in the KB relations. Three colors (blue, yellow and green) of the edges indicate three different relation types in the graph. The corresponding three entity nodes are summed up with different weights according to α_t in this layer to obtain the embedding of the red node. Edges with the same color (same relation type) use the same α_t. Each layer has its own set of relation weights α_t^l. Hence, the output of the l-th layer for the node v_i can be written as follows:
$h_i^{l+1} = \sigma\Big(\sum_{j \in N_i} \alpha_t^l \, g(h_i^l, h_j^l)\Big)$,  (1)
where h_j^l ∈ R^{F_l} is the input for node v_j, and v_j is a node in the neighborhood N_i of node v_i. The function g specifies how to incorporate neighboring information. Note that the activation function σ here is applied to every component of its input vector. Although any function g suitable for a KB embedding can be used in conjunction with the proposed framework, we implement the following g function:
$g(h_i^l, h_j^l) = h_j^l W^l$,  (2)
where W^l ∈ R^{F_l × F_{l+1}} is the connection coefficient matrix and is used to linearly transform h_i^l to h_i^{l+1} ∈ R^{F_{l+1}}. In Eq. (1), the input vectors of all neighboring nodes are summed up but not that of the node v_i itself, hence self-loops are enforced in the network. For node v_i, the propagation process is defined as:
$h_i^{l+1} = \sigma\Big(\sum_{j \in N_i} \alpha_t^l h_j^l W^l + h_i^l W^l\Big)$.  (3)
The output of the layer l is a node feature matrix H^{l+1} ∈ R^{N×F_{l+1}}, and h_i^{l+1} is the i-th row of H^{l+1}, which represents the features of the node v_i in the (l+1)-th layer.
The above process can be organized as a matrix multiplication, as shown in Figure 2, to simultaneously compute embeddings for all nodes through an adjacency matrix. For each relation (edge) type, the adjacency matrix A_t is a binary matrix whose ij-th entry is 1 if an edge connecting v_i and v_j exists and 0 otherwise. The final adjacency matrix is written as follows:
$A^l = \sum_{t=1}^{T} (\alpha_t^l A_t) + I$,  (4)
where I is the identity matrix of size N × N. Basically, A^l is the weighted sum of the adjacency matrices of the subgraphs plus self-connections. In our implementation, we consider all first-order neighbors in the linear transformation for each layer, as shown in Figure 2:
$H^{l+1} = \sigma(A^l H^l W^l)$.  (5)
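To make Eqs. (4)-(5) concrete, the following is a minimal PyTorch-style sketch of one WGCN layer. It is not the authors' released implementation: the class and argument names are illustrative, dense adjacency matrices are used purely for clarity, and tanh stands in for the unspecified nonlinearity σ.

```python
import torch
import torch.nn as nn

class WGCNLayer(nn.Module):
    """One weighted GCN layer: H^{l+1} = sigma(A^l H^l W^l) with
    A^l = sum_t alpha_t^l * A_t + I (Eqs. 4-5). Dense matrices for clarity only."""
    def __init__(self, in_dim, out_dim, num_rel_types):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)        # W^l
        self.alpha = nn.Parameter(torch.ones(num_rel_types))   # learnable alpha_t^l
        self.act = nn.Tanh()                                   # stand-in for sigma

    def forward(self, H, adj_per_relation):
        # adj_per_relation: list of T binary (N x N) tensors, one per relation type
        N = H.size(0)
        A = torch.eye(N, device=H.device)                      # self-connections I
        for t, A_t in enumerate(adj_per_relation):
            A = A + self.alpha[t] * A_t                        # weighted subgraph sum
        return self.act(A @ self.W(H))                         # Eq. (5)
```

Stacking two such layers, as in the experiments, produces the entity embedding matrix H^{L+1} that is fed to the decoder.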
Node Attributes. In a KB graph, nodes are often associated with several attributes in the form of (entity, relation, attribute). For example, (s = Tom, r = people.person.gender, a = male) is an instance where gender is an attribute associated with a person. If a vector representation were used for node attributes, there would be two potential problems. First, the number of attributes per node is usually small and differs from one node to another.
Hence, the attribute vector would be very sparse. Second, a value of zero in the attribute vector has an ambiguous meaning: either the node does not have the specific attribute, or the node is missing the value for this attribute. These zeros would affect the accuracy of the embedding. In this work, the entity attributes in the knowledge graph are represented by another set of nodes in the network called attribute nodes. Attribute nodes act as "bridges" that link the related entities. The entity embeddings can be transported over these "bridges" to incorporate an entity's attributes into its embedding. Because these attributes appear in triplets, we represent the attributes similarly to the representation of the entity o in relation triplets. Note that each type of attribute corresponds to one node. For instance, in our example, gender is represented by a single node rather than two nodes for "male" and "female". In this way, the WGCN not only utilizes the graph connectivity structure (relations and relation types), but also leverages the node attributes (a kind of graph structure) effectively. That is why we call our WGCN a structure-aware convolutional network.
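The following sketch shows one way the attribute triples could be merged into the edge list as extra nodes; it is a simplified reading of the description above (one node per attribute type, with attribute-node ids placed after the entity ids), and all names are hypothetical.

```python
def add_attribute_nodes(relation_triples, attribute_triples, num_entities):
    """Append attribute triples (s, r, attr_type) as edges to shared attribute
    nodes, one node per attribute *type* (e.g. a single 'gender' node),
    with attribute-node ids placed after the entity ids."""
    attr_node = {}                                   # attribute type -> node id
    edges = list(relation_triples)
    for s, r, attr_type in attribute_triples:
        if attr_type not in attr_node:
            attr_node[attr_type] = num_entities + len(attr_node)
        edges.append((s, r, attr_node[attr_type]))   # entity -- "bridge" -- attribute node
    return edges, num_entities + len(attr_node)
```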
Conv-TransE
We develop the Conv-TransE model as a decoder that is based on ConvE but keeps the translational property of TransE: e_s + e_r ≈ e_o. The key difference of our approach from ConvE is that there is no reshaping after stacking e_s and e_r. Filters (or kernels) of size 2 × k, k ∈ {1, 2, 3, ...}, are used in the convolution. The example in Figure 1 uses 2 × 3 kernels to compute 2D convolutions. We experimented with several such settings in our empirical study.
Note that in the encoder of SACN, the dimension of the relation embedding is commonly chosen to be the same as the dimension of the entity embedding, in other words, equal to F_L. Hence, the two embeddings can be stacked. For the decoder, the inputs are two embedding matrices: one in R^{N×F_L} from the WGCN for all entity nodes, and the other in R^{M×F_L} for the relation embeddings, which is trained as well. Because we use a mini-batch stochastic training algorithm, the first step of the decoder performs a look-up operation upon the embedding matrices to retrieve the inputs e_s and e_r for the triplets in the mini-batch.
More precisely, given C different kernels, where the c-th kernel is parameterized by ω_c, the convolution in the decoder is computed as follows:
$m_c(e_s, e_r, n) = \sum_{\tau=0}^{K-1} \omega_c(\tau, 0)\,\hat{e}_s(n+\tau) + \omega_c(\tau, 1)\,\hat{e}_r(n+\tau)$,  (6)
where K is the kernel width, n indexes the entries in the output vector with n ∈ [0, F_L − 1], and the kernel parameters ω_c are trainable. Here ê_s and ê_r are padded versions of e_s and e_r, respectively. If the kernel width K is odd, the first ⌊K/2⌋ and the last ⌊K/2⌋ components are filled with 0, where ⌊·⌋ denotes the floor function; otherwise, the first ⌊K/2⌋ − 1 and the last ⌊K/2⌋ components are filled with 0. The other components are copied from e_s and e_r directly. As shown in Eq. (6), the outputs of the C kernels form the feature map matrix M(e_s, e_r) ∈ R^{C×F_L}, and the convolution operation amounts to the scoring function
$\psi(e_s, e_o) = f(\mathrm{vec}(M(e_s, e_r))\, W)\, e_o$,  (7)
where W ∈ R^{CF_L × F_L} is a matrix for the linear transformation and f denotes a non-linear function. The feature map matrix is reshaped into a vector vec(M) ∈ R^{CF_L} and projected into an F_L-dimensional space using W. The calculated embedding is then matched to e_o by an appropriate distance metric. During training in our experiments, we apply the logistic sigmoid function to the score:
$p(e_s, e_r, e_o) = \sigma(\psi(e_s, e_o))$.  (8)
In Table 1, we summarize the scoring functions used by several state-of-the-art models. The vectors e_s and e_o are the subject and object embeddings respectively, e_r is the relation embedding, "concat" denotes concatenation of the inputs, and "*" denotes the convolution operator.
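As an illustration of Eqs. (6)-(8), the sketch below stacks e_s and e_r without reshaping, applies C kernels over the two stacked rows, and scores the projected vector against all candidate entity embeddings with a sigmoid. It is a simplified reading of the equations under same-length padding (here via an odd kernel width), not the authors' implementation, and the names are illustrative.

```python
import torch
import torch.nn as nn

class ConvTransE(nn.Module):
    """Illustrative decoder: score (e_s, e_r) against every candidate object."""
    def __init__(self, emb_dim, num_kernels=100, kernel_width=5):
        super().__init__()
        # 2 input channels = the stacked e_s and e_r rows (no reshaping),
        # so a Conv1d with 2 channels realises the 2 x K kernels of Eq. (6).
        self.conv = nn.Conv1d(2, num_kernels, kernel_size=kernel_width,
                              padding=kernel_width // 2)       # same-length output (odd K)
        self.proj = nn.Linear(num_kernels * emb_dim, emb_dim)  # W in Eq. (7)
        self.act = nn.ReLU()                                   # stand-in for f

    def forward(self, e_s, e_r, all_entity_emb):
        # e_s, e_r: (batch, F_L); all_entity_emb: (num_entities, F_L)
        x = torch.stack([e_s, e_r], dim=1)        # (batch, 2, F_L)
        m = self.conv(x)                          # feature maps M(e_s, e_r), Eq. (6)
        psi = self.act(self.proj(m.flatten(1)))   # f(vec(M) W), Eq. (7)
        scores = psi @ all_entity_emb.t()         # inner product with every candidate e_o
        return torch.sigmoid(scores)              # p(e_s, e_r, e_o), Eq. (8)
```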
In summary, the proposed SACN model takes advantage of knowledge graph node connectivity, node attributes and relation types. The learnable weights in the WGCN help to collect an adaptive amount of information from neighboring graph nodes. The entity attributes are added as additional nodes in the network and are easily integrated into the WGCN. Conv-TransE keeps the translational property between entities and relations to learn node embeddings for link prediction. We also emphasize that our SACN achieves significant improvements over ConvE both with and without the use of node attributes.
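Putting the encoder and decoder together, an end-to-end forward pass could look roughly as follows, reusing the illustrative WGCNLayer and ConvTransE classes sketched above; training would then minimize a cross-entropy between the returned probabilities and the indicator of the true object, matching the joint training described in this section.

```python
import torch
import torch.nn as nn

class SACN(nn.Module):
    """Illustrative encoder-decoder wiring of SACN (not the released code)."""
    def __init__(self, num_entities, num_relations, num_rel_types, emb_dim=200):
        super().__init__()
        self.entity_emb = nn.Parameter(torch.randn(num_entities, emb_dim))  # H^1 ~ Gaussian
        self.rel_emb = nn.Embedding(num_relations, emb_dim)
        self.gcn1 = WGCNLayer(emb_dim, emb_dim, num_rel_types)
        self.gcn2 = WGCNLayer(emb_dim, emb_dim, num_rel_types)
        self.decoder = ConvTransE(emb_dim)

    def forward(self, s_idx, r_idx, adj_per_relation):
        # two WGCN layers produce the structure-aware entity embeddings H^{L+1}
        H = self.gcn1(self.entity_emb, adj_per_relation)
        H = self.gcn2(H, adj_per_relation)
        e_s = H[s_idx]                    # mini-batch look-up of subject embeddings
        e_r = self.rel_emb(r_idx)         # mini-batch look-up of relation embeddings
        return self.decoder(e_s, e_r, H)  # probabilities over all candidate objects
```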
Experiments
Benchmark Datasets
Three benchmark datasets (FB15k-237, WN18RR and FB15k-237-Attr) are utilized in this study to evaluate the performance of link prediction.
FB15k-237. The FB15k-237 (Toutanova and Chen 2015) dataset contains knowledge base relation triples and textual mentions of Freebase entity pairs, as used in the work published in (Toutanova and Chen 2015). The knowledge base triples are a subset of the FB15K (Bordes et al. 2013),
Data Construction
Most of the previous methods only model the entities and relations, and ignore the abundant entity attributes. Our method can easily model a large number of entity attribute triples. In order to demonstrate this, we extract the attribute triples from the FB24k (Lin, Liu, and Sun 2016) dataset to build the evaluation dataset called FB15k-237-Attr.
FB24k. FB24k (Lin, Liu, and Sun 2016) is built from the Freebase dataset. FB24k only selects entities and relations that appear in at least 30 triples. The number of entities is 23,634, and the number of relations is 673. In addition, reversed relations are removed from the original dataset. The FB24k dataset also provides attribute triples: it contains 207,151 attribute triples and 314 attributes. FB15k-237-Attr. We extract the attribute triples of the entities in FB15k-237 from FB24k. During the mapping, 7,589 of the original 14,541 entities have node attributes. Finally, we extract 78,334 attribute triples from FB24k. These triples include 203 attributes and 247 relations. Based on these triples, we create the "FB15k-237-Attr" dataset, which includes 14,541 entity nodes, 203 attribute nodes and 484 relation types. All 78,334 attribute triples are combined with the training set of FB15k-237.
Experimental Setup
The hyperparameters in our Conv-TransE and SACN models are determined by a grid search during training, and we manually specify the hyperparameter search ranges. All the models use a WGCN with two layers. For the different datasets, we have found that the following settings work well: for FB15k-237, set the dropout to 0.2, the number of kernels to 100, the learning rate to 0.003 and the embedding size to 200 for SACN; for the WN18RR dataset, set the dropout to 0.2, the number of kernels to 300, the learning rate to 0.003 and the embedding size to 200 for SACN. When using the Conv-TransE model alone, these settings still work well.
Each dataset is split into three sets for training, validation and testing, which is the same as the setting of the original ConvE. We use the adaptive moment estimation (Adam) algorithm (Kingma and Ba 2014) for optimization.
Results
Evaluation Protocol Our experiments use the proportion of correct entities ranked in the top 1, 3 and 10 (Hits@1, Hits@3, Hits@10) and the mean reciprocal rank (MRR) as the metrics. In addition, since some corrupted triples are in fact valid triples in the knowledge graphs, we use the filtered setting (Bordes et al. 2013), i.e. we filter out all valid triples before ranking.
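A minimal sketch of the filtered evaluation protocol (the function and variable names are assumptions): for each test triple, all candidate objects are scored, every other known valid object is masked out, and the rank of the true object yields MRR and Hits@k.

```python
import numpy as np

def filtered_ranking(score_fn, test_triples, known_objects):
    """score_fn(s, r) -> 1-D array of scores over all entities (larger = better).
    known_objects[(s, r)] -> set of all object ids valid in train/valid/test."""
    ranks = []
    for s, r, o in test_triples:
        scores = np.array(score_fn(s, r), dtype=float)
        for other in known_objects[(s, r)] - {o}:
            scores[other] = -np.inf           # filtered setting: mask other valid triples
        ranks.append(1 + int(np.sum(scores > scores[o])))
    ranks = np.asarray(ranks)
    mrr = float(np.mean(1.0 / ranks))
    hits = {k: float(np.mean(ranks <= k)) for k in (1, 3, 10)}
    return mrr, hits
```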
Link Prediction
Our results on the standard FB15k-237, WN18RR and FB15k-237-Attr datasets are shown in Table 3. Table 3 reports the Hits@10, Hits@3, Hits@1 and MRR results of four different baseline models and our two models on the three knowledge graph datasets. The FB15k-237-Attr dataset is used to demonstrate the benefit of node attributes, so we run our SACN on FB15k-237-Attr and compare it with SACN trained on FB15k-237.
We first compare our Conv-TransE model with the four baseline models. ConvE performs best among all baselines. On the FB15k-237 dataset, our Conv-TransE model improves upon ConvE's Hits@10 by a margin of 4.1%, and upon ConvE's Hits@3 by a margin of 5.7% on the test set. On the WN18RR dataset, Conv-TransE improves upon ConvE's Hits@10 by a margin of 8.3%, and upon ConvE's Hits@3 by a margin of 9.3% on the test set. From these results, we conclude that Conv-TransE keeps the translational characteristic between entities and relations while using a neural network, and achieves better performance.
Second, structural information is added in our SACN model. As shown in Table 3, SACN also achieves the best performance on the test sets compared with all baseline methods. On FB15k-237, compared with ConvE, our SACN model improves the Hits@10 value by a margin of 10.2%, the Hits@3 value by a margin of 11.4%, the Hits@1 value by a margin of 8.3% and the MRR value by a margin of 9.4% on the test set. On the WN18RR dataset, compared with ConvE, our SACN model improves the Hits@10 value by a margin of 12.5%, the Hits@3 value by a margin of 11.6%, the Hits@1 value by a margin of 10.3% and the MRR value by a margin of 2.2% on the test set. Thus our method achieves significant improvements over ConvE even without attributes.
Third, we add node attributes to our SACN model, i.e. we use FB15k-237-Attr to train SACN. Note that SACN already has significant improvements over ConvE without attributes; adding attributes improves performance further. Our model using attributes improves upon ConvE's Hits@10 by a margin of 12.2%, Hits@3 by a margin of 14.3%, Hits@1 by a margin of 12.5% and MRR by a margin of 12.5%. In addition, our SACN using attributes improves Hits@10 by a margin of 1.9%, Hits@3 by a margin of 2.6%, Hits@1 by a margin of 3.8% and MRR by a margin of 2.9% compared with SACN without attributes.
In order to better compare with ConvE, we also add the attributes to ConvE, where they are treated as entity triplets. Following the official ConvE code with the default setting, the test result on FB15k-237-Attr was: 0.46 (Hits@10), 0.33 (Hits@3), 0.22 (Hits@1) and 0.30 (MRR). Compared to the performance without attributes, adding the attributes to ConvE did not improve performance.
Convergence Analysis
Figure 3 shows the convergence of the three models. We can see that SACN (the red line) is always better than Conv-TransE (the yellow line) after several epochs, and the performance of SACN keeps increasing after around 120 epochs, whereas Conv-TransE has already reached its best performance by around 120 epochs. The gap between these two models demonstrates the usefulness of structural information. When using the FB15k-237-Attr dataset, the performance of "SACN + Attr" is better than that of SACN.
Kernel Size Analysis
In Table 4, different kernel sizes are examined in our models. A kernel of size "2 × 1" means that information is translated between one dimension of the entity vector and the corresponding dimension of the relation vector. If we increase the kernel size to "2 × k" with k = {3, 5}, information is translated between a combination of k dimensions of the entity vector and a combination of k dimensions of the relation vector. This larger view for collecting information helps to increase the performance, as shown in Table 4. All values of Hits@1, Hits@3, Hits@10 and MRR are improved by increasing the kernel size on the FB15k-237 and FB15k-237-Attr datasets. However, the optimal kernel size may be task dependent.
Node Indegree Analysis The indegree of a node in the knowledge graph is the number of edges connected to the node. A node with a larger indegree has more neighboring nodes and can therefore receive more information from its neighbors than a node with a smaller indegree. As shown in Table 5, we present the results for different sets of nodes grouped by indegree scope. The average Hits@10 and Hits@3 scores are calculated. As the indegree scope increases, the average Hits@10 and Hits@3 values increase. First, a node with a small indegree benefits from the aggregation of neighbor information in the WGCN layers of SACN, so its embedding can be estimated robustly. Second, for a node with a high indegree, much more information is aggregated through the GCN and the estimate of its embedding is substantially smoothed among its neighbors; thus the embedding learned by SACN can be worse than that learned by Conv-TransE. One solution to this problem would be neighbor selection as in (Ying et al. 2018).
Conclusion and Future Work
We have introduced an end-to-end structure-aware convolutional network (SACN). The encoding network is a weighted graph convolutional network that utilizes the knowledge graph connectivity structure, node attributes and relation types. The WGCN, with learnable relation-specific weights, collects an adaptive amount of information from neighboring graph nodes. In addition, entity attributes are added as nodes in the network, so that attributes are transformed into structural knowledge that is easily integrated into the node embeddings. The scoring network of SACN is a convolutional neural model, called Conv-TransE. It uses a convolutional network to model the relation as a translation operation and to capture the translational characteristic between entities and relations. We also show that Conv-TransE alone already achieves state-of-the-art performance. SACN achieves an overall improvement of about 10% over the state of the art such as ConvE.
In the future, we would like to incorporate the neighbor selection idea into our training framework, such as the importance pooling in (Ying et al. 2018), which takes into account the importance of neighbors when aggregating the vector representations of neighbors. We would also like to extend our model to scale to larger knowledge graphs, encouraged by the results in (Ying et al. 2018).
| 4,074 |
1811.04441
|
2950393809
|
Knowledge graph embedding has been an active research topic for knowledge base completion, with progressive improvement from the initial TransE, TransH, to the current state-of-the-art ConvE. ConvE uses 2D convolution over embeddings and multiple layers of nonlinear features to model knowledge graphs. The model can be efficiently trained and is scalable to large knowledge graphs. However, there is no structure enforcement in the embedding space of ConvE. The recent graph convolutional network (GCN) provides another way of learning graph node embedding by successfully utilizing graph connectivity structure. In this work, we propose a novel end-to-end Structure-Aware Convolutional Network (SACN) that takes the benefit of GCN and ConvE together. SACN consists of an encoder of a weighted graph convolutional network (WGCN), and a decoder of a convolutional network called Conv-TransE. WGCN utilizes knowledge graph node structure, node attributes and edge relation types. It has learnable weights that adapt the amount of information from neighbors used in local aggregation, leading to more accurate embeddings of graph nodes. Node attributes in the graph are represented as additional nodes in the WGCN. The decoder Conv-TransE enables the state-of-the-art ConvE to be translational between entities and relations while keeping the same link prediction performance as ConvE. We demonstrate the effectiveness of the proposed SACN on standard FB15k-237 and WN18RR datasets, and it gives about 10% relative improvement over the state-of-the-art ConvE in terms of HITS@1, HITS@3 and HITS@10.
|
GCN models were mostly criticized for their huge memory requirements when scaling to massive graphs. However, @cite_26 developed a data-efficient GCN algorithm called PinSage, which combined efficient random walks and graph convolutions to generate embeddings of nodes that incorporate both graph structure and node features. The experiments on Pinterest data were the largest application of deep graph embeddings to date, with 3 billion nodes and 18 billion edges @cite_26 . This success paves the way for a new generation of web-scale recommender systems based on GCNs. Therefore we believe that our proposed model could take advantage of huge graph structures together with the high computational efficiency of Conv-TransE.
|
{
"abstract": [
"Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable to web-scale recommendation tasks with billions of items and hundreds of millions of users remains an unsolved challenge. Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. We develop a data-efficient Graph Convolutional Network (GCN) algorithm, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure as well as node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions and design a novel training strategy that relies on harder-and-harder training examples to improve robustness and convergence of the model. We also develop an efficient MapReduce model inference algorithm to generate embeddings using a trained model. Overall, we can train on and embed graphs that are four orders of magnitude larger than typical GCN implementations. We show how GCN embeddings can be used to make high-quality recommendations in various settings at Pinterest, which has a massive underlying graph with 3 billion nodes representing pins and boards, and 17 billion edges. According to offline metrics, user studies, as well as A B tests, our approach generates higher-quality recommendations than comparable deep learning based systems. To our knowledge, this is by far the largest application of deep graph embeddings to date and paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures."
],
"cite_N": [
"@cite_26"
],
"mid": [
"2807021761"
]
}
|
End-to-end Structure-Aware Convolutional Networks for Knowledge Base Completion
|
Over the recent years, large-scale knowledge bases (KBs), such as Freebase (Bollacker et al. 2008), DBpedia (Auer et al. 2007), NELL (Carlson et al. 2010) and YAGO3 (Mahdisoltani, Biega, and Suchanek 2013), have been built to store structured information about common facts. KBs are multi-relational graphs whose nodes represent entities and edges represent relationships between entities, and the edges are labeled with different relations. The relationships are organized in the form of (s, r, o) triplets (e.g. entity s = Abraham Lincoln, relation r = DateOfBirth, entity o = 02-12-1809). These KBs are extensively used for web search, recommendation and question answering. Although these KBs already contain millions of entities and triplets, they are far from complete compared to existing facts and newly added knowledge of the real world. Therefore knowledge base completion is important in order to predict new triplets based on existing ones and thus further expand KBs.
One of the recent active research areas for knowledge base completion is knowledge graph embedding: it encodes the semantics of entities and relations in a continuous low-dimensional vector space (called embeddings). These embeddings are then used for predicting new relations. Starting from a simple and effective approach called TransE (Bordes et al. 2013), many knowledge graph embedding methods have been proposed, such as TransH (Wang et al. 2014), TransR (Lin et al. 2015), DistMult (Yang et al. 2014), TransD (Ji et al. 2015), ComplEx (Trouillon et al. 2016), STransE (Nguyen et al. 2016). Some surveys (Nguyen 2017; Wang et al. 2017) give details and comparisons of these embedding methods.
The most recent ConvE (Dettmers et al. 2017) model uses 2D convolution over embeddings and multiple layers of nonlinear features, and achieves the state-of-the-art performance on common benchmark datasets for knowledge graph link prediction. In ConvE, the embeddings of s and r are reshaped and concatenated into an input matrix and fed to the convolution layer. Convolutional filters of size n × n are used to output feature maps that span different dimensions of the embedding entries. Thus ConvE does not keep the translational property of TransE, which is an additive embedding vector operation: e_s + e_r ≈ e_o. In this paper, we remove the reshape step of ConvE and operate the convolutional filters directly in the same dimensions of s and r. This modification gives better performance compared with the original ConvE, and has an intuitive interpretation which keeps the global learning metric the same for s, r, and o in an embedding triple (e_s, e_r, e_o). We name this embedding Conv-TransE.
ConvE also does not incorporate the connectivity structure of the knowledge graph into the embedding space. In contrast, the graph convolutional network (GCN) has been an effective tool to create node embeddings which aggregate local information in the graph neighborhood for each node (Kipf and Welling 2016b; Hamilton, Ying, and Leskovec 2017a; Kipf and Welling 2016a; Pham et al. 2017; Shang et al. 2018). GCN models have additional benefits (Hamilton, Ying, and Leskovec 2017b), such as leveraging the attributes associated with nodes. They can also impose the same aggregation scheme when computing the convolution for each node, which can be considered a method of regularization, and improves efficiency. Although scalability was originally an issue for GCN models, the latest data-efficient GCN, PinSage (Ying et al. 2018), is able to handle billions of nodes and edges.
In this paper, we propose an end-to-end graph Structure-Aware Convolutional Network (SACN) that takes all the benefits of GCN and ConvE together. SACN consists of an encoder of a weighted graph convolutional network (WGCN), and a decoder of a convolutional network called Conv-TransE. WGCN utilizes knowledge graph node structure, node attributes and relation types. It has learnable weights to determine the amount of information from neighbors used in local aggregation, leading to more accurate embeddings of graph nodes. Node attributes are added to the WGCN as additional nodes for easy integration. The output of the WGCN becomes the input of the decoder Conv-TransE. Conv-TransE is similar to ConvE but with the difference that Conv-TransE keeps the translational characteristic between entities and relations. We show that Conv-TransE performs better than ConvE, and our SACN improves further on top of Conv-TransE on the standard benchmark datasets. The code for our model and experiments is publicly available.
Our contributions are summarized as follows:
• We present an end-to-end network learning framework SACN that takes advantage of both GCN and Conv-TransE. The encoder GCN model leverages graph structure and attributes of graph nodes. The decoder Conv-TransE simplifies ConvE with special convolutions and keeps the translational property of TransE and the prediction performance of ConvE;
• We demonstrate the effectiveness of our proposed SACN on the standard FB15k-237 and WN18RR datasets, and show about 10% relative improvement over the state-of-the-art ConvE in terms of HITS@1, HITS@3 and HITS@10.
Method
In this section, we describe the proposed end-to-end SACN. The encoder WGCN is focused on representing entities by aggregating connected entities as specified by the relations in the KB. With node embeddings as the input, the decoder Conv-TransE network aims to represent the relations more accurately by recovering the original triplets in the KB. Both encoder and decoder are trained jointly by minimizing the discrepancy (cross-entropy) between the embeddings e_s + e_r and e_o to preserve the translational property e_s + e_r ≈ e_o. We consider an undirected graph G = (V, E) throughout this section, where V is a set of nodes with |V| = N, and E ⊆ V × V is a set of edges with |E| = M.
Weighted Graph Convolutional Layer
The WGCN is an extension of the classic GCN (Kipf and Welling 2016b) in that it weighs different relation types differently when aggregating, and the weights are adaptively learned during the training of the network. By this adaptation, the WGCN can control the amount of information from neighboring nodes used in aggregation. Roughly speaking, the WGCN treats a multi-relational KB graph as multiple single-relational subgraphs, where each subgraph entails a specific type of relation. The WGCN determines how much weight to give to each subgraph when combining the GCN embeddings for a node.
The l-th WGCN layer takes the output vector of length F_l for each node from the previous layer as input and generates a new representation comprising F_{l+1} elements. Let h_i^l represent the input (row) vector of the node v_i in the l-th layer, and thus H^l ∈ R^{N×F_l} be the input matrix for this layer. The initial embedding H^1 is randomly drawn from a Gaussian. If there are a total of L layers in the WGCN, the output H^{L+1} of the L-th layer is the final embedding. Let the total number of edge types be T in a multi-relational KB graph with E edges. The interaction strength between two adjacent nodes is determined by their relation type, and this strength is specified by a parameter {α_t, 1 ≤ t ≤ T} for each edge type, which is automatically learned in the neural network. Figure 1 illustrates the entire process of SACN. In this example, the WGCN layers of the network compute the embeddings for the red node in the middle graph. These layers aggregate the embeddings of neighboring entity nodes as specified in the KB relations. Three colors (blue, yellow and green) of the edges indicate three different relation types in the graph. The corresponding three entity nodes are summed up with different weights according to α_t in this layer to obtain the embedding of the red node. Edges with the same color (same relation type) use the same α_t. Each layer has its own set of relation weights α_t^l. Hence, the output of the l-th layer for the node v_i can be written as follows:
$h_i^{l+1} = \sigma\Big(\sum_{j \in N_i} \alpha_t^l \, g(h_i^l, h_j^l)\Big)$,  (1)
where h_j^l ∈ R^{F_l} is the input for node v_j, and v_j is a node in the neighborhood N_i of node v_i. The function g specifies how to incorporate neighboring information. Note that the activation function σ here is applied to every component of its input vector. Although any function g suitable for a KB embedding can be used in conjunction with the proposed framework, we implement the following g function:
$g(h_i^l, h_j^l) = h_j^l W^l$,  (2)
where W^l ∈ R^{F_l × F_{l+1}} is the connection coefficient matrix and is used to linearly transform h_i^l to h_i^{l+1} ∈ R^{F_{l+1}}. In Eq. (1), the input vectors of all neighboring nodes are summed up but not that of the node v_i itself, hence self-loops are enforced in the network. For node v_i, the propagation process is defined as:
$h_i^{l+1} = \sigma\Big(\sum_{j \in N_i} \alpha_t^l h_j^l W^l + h_i^l W^l\Big)$.  (3)
The output of the layer l is a node feature matrix H^{l+1} ∈ R^{N×F_{l+1}}, and h_i^{l+1} is the i-th row of H^{l+1}, which represents the features of the node v_i in the (l+1)-th layer.
The above process can be organized as a matrix multiplication, as shown in Figure 2, to simultaneously compute embeddings for all nodes through an adjacency matrix. For each relation (edge) type, the adjacency matrix A_t is a binary matrix whose ij-th entry is 1 if an edge connecting v_i and v_j exists and 0 otherwise. The final adjacency matrix is written as follows:
$A^l = \sum_{t=1}^{T} (\alpha_t^l A_t) + I$,  (4)
where I is the identity matrix of size N × N. Basically, A^l is the weighted sum of the adjacency matrices of the subgraphs plus self-connections. In our implementation, we consider all first-order neighbors in the linear transformation for each layer, as shown in Figure 2:
$H^{l+1} = \sigma(A^l H^l W^l)$.  (5)
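As a concrete reading of Eq. (4), the helper below builds the combined adjacency matrix A^l from the per-relation binary adjacency matrices and the learned weights α; it uses dense NumPy arrays purely for clarity and the names are illustrative.

```python
import numpy as np

def combined_adjacency(adj_per_relation, alpha):
    """A^l = sum_t alpha_t^l * A_t + I (Eq. 4): adj_per_relation is a list of T
    binary (N x N) matrices, alpha a length-T array of learned relation weights."""
    N = adj_per_relation[0].shape[0]
    A = np.eye(N)                        # self-connections
    for a_t, A_t in zip(alpha, adj_per_relation):
        A = A + a_t * A_t                # weighted sum over single-relation subgraphs
    return A
```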
Node Attributes. In a KB graph, nodes are often associated with several attributes in the form of (entity, relation, attribute). For example, (s = Tom, r = people.person.gender, a = male) is an instance where gender is an attribute associated with a person. If a vector representation were used for node attributes, there would be two potential problems. First, the number of attributes per node is usually small and differs from one node to another.
Hence, the attribute vector would be very sparse. Second, a value of zero in the attribute vector has an ambiguous meaning: either the node does not have the specific attribute, or the node is missing the value for this attribute. These zeros would affect the accuracy of the embedding. In this work, the entity attributes in the knowledge graph are represented by another set of nodes in the network called attribute nodes. Attribute nodes act as "bridges" that link the related entities. The entity embeddings can be transported over these "bridges" to incorporate an entity's attributes into its embedding. Because these attributes appear in triplets, we represent the attributes similarly to the representation of the entity o in relation triplets. Note that each type of attribute corresponds to one node. For instance, in our example, gender is represented by a single node rather than two nodes for "male" and "female". In this way, the WGCN not only utilizes the graph connectivity structure (relations and relation types), but also leverages the node attributes (a kind of graph structure) effectively. That is why we call our WGCN a structure-aware convolutional network.
Conv-TransE
We develop the Conv-TransE model as a decoder that is based on ConvE but keeps the translational property of TransE: e_s + e_r ≈ e_o. The key difference of our approach from ConvE is that there is no reshaping after stacking e_s and e_r. Filters (or kernels) of size 2 × k, k ∈ {1, 2, 3, ...}, are used in the convolution. The example in Figure 1 uses 2 × 3 kernels to compute 2D convolutions. We experimented with several such settings in our empirical study.
Note that in the encoder of SACN, the dimension of the relation embedding is commonly chosen to be the same as the dimension of the entity embedding, in other words, equal to F_L. Hence, the two embeddings can be stacked. For the decoder, the inputs are two embedding matrices: one in R^{N×F_L} from the WGCN for all entity nodes, and the other in R^{M×F_L} for the relation embeddings, which is trained as well. Because we use a mini-batch stochastic training algorithm, the first step of the decoder performs a look-up operation upon the embedding matrices to retrieve the inputs e_s and e_r for the triplets in the mini-batch.
More precisely, given C different kernels, where the c-th kernel is parameterized by ω_c, the convolution in the decoder is computed as follows:
$m_c(e_s, e_r, n) = \sum_{\tau=0}^{K-1} \omega_c(\tau, 0)\,\hat{e}_s(n+\tau) + \omega_c(\tau, 1)\,\hat{e}_r(n+\tau)$,  (6)
where K is the kernel width, n indexes the entries in the output vector with n ∈ [0, F_L − 1], and the kernel parameters ω_c are trainable. Here ê_s and ê_r are padded versions of e_s and e_r, respectively. If the kernel width K is odd, the first ⌊K/2⌋ and the last ⌊K/2⌋ components are filled with 0, where ⌊·⌋ denotes the floor function; otherwise, the first ⌊K/2⌋ − 1 and the last ⌊K/2⌋ components are filled with 0. The other components are copied from e_s and e_r directly. As shown in Eq. (6), the outputs of the C kernels form the feature map matrix M(e_s, e_r) ∈ R^{C×F_L}, and the convolution operation amounts to the scoring function
$\psi(e_s, e_o) = f(\mathrm{vec}(M(e_s, e_r))\, W)\, e_o$,  (7)
where W ∈ R^{CF_L × F_L} is a matrix for the linear transformation and f denotes a non-linear function. The feature map matrix is reshaped into a vector vec(M) ∈ R^{CF_L} and projected into an F_L-dimensional space using W. The calculated embedding is then matched to e_o by an appropriate distance metric. During training in our experiments, we apply the logistic sigmoid function to the score:
$p(e_s, e_r, e_o) = \sigma(\psi(e_s, e_o))$.  (8)
In Table 1, we summarize the scoring functions used by several state-of-the-art models. The vectors e_s and e_o are the subject and object embeddings respectively, e_r is the relation embedding, "concat" denotes concatenation of the inputs, and "*" denotes the convolution operator.
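The zero-padding rule for ê_s and ê_r stated above can be written compactly; the helper below (illustrative names, assuming a same-length output is the goal) pads a length-F_L vector so that a width-K kernel produces an output of length F_L.

```python
import numpy as np

def pad_for_kernel(e, K):
    """Zero-pad an embedding vector as described for Eq. (6): floor(K/2) zeros on
    both sides if K is odd, otherwise floor(K/2)-1 in front and floor(K/2) behind,
    so that a width-K kernel yields an output of the original length F_L."""
    left = K // 2 if K % 2 == 1 else K // 2 - 1
    right = K // 2
    return np.concatenate([np.zeros(left), np.asarray(e, dtype=float), np.zeros(right)])
```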
In summary, the proposed SACN model takes advantage of knowledge graph node connectivity, node attributes and relation types. The learnable weights in the WGCN help to collect an adaptive amount of information from neighboring graph nodes. The entity attributes are added as additional nodes in the network and are easily integrated into the WGCN. Conv-TransE keeps the translational property between entities and relations to learn node embeddings for link prediction. We also emphasize that our SACN achieves significant improvements over ConvE both with and without the use of node attributes.
Experiments
Benchmark Datasets
Three benchmark datasets (FB15k-237, WN18RR and FB15k-237-Attr) are utilized in this study to evaluate the performance of link prediction.
FB15k-237. The FB15k-237 (Toutanova and Chen 2015) dataset contains knowledge base relation triples and textual mentions of Freebase entity pairs, as used in the work published in (Toutanova and Chen 2015). The knowledge base triples are a subset of the FB15K (Bordes et al. 2013),
Data Construction
Most of the previous methods only model the entities and relations, and ignore the abundant entity attributes. Our method can easily model a large number of entity attribute triples. In order to demonstrate this, we extract the attribute triples from the FB24k (Lin, Liu, and Sun 2016) dataset to build the evaluation dataset called FB15k-237-Attr.
FB24k. FB24k (Lin, Liu, and Sun 2016) is built from the Freebase dataset. FB24k only selects entities and relations that appear in at least 30 triples. The number of entities is 23,634, and the number of relations is 673. In addition, reversed relations are removed from the original dataset. The FB24k dataset also provides attribute triples: it contains 207,151 attribute triples and 314 attributes. FB15k-237-Attr. We extract the attribute triples of the entities in FB15k-237 from FB24k. During the mapping, 7,589 of the original 14,541 entities have node attributes. Finally, we extract 78,334 attribute triples from FB24k. These triples include 203 attributes and 247 relations. Based on these triples, we create the "FB15k-237-Attr" dataset, which includes 14,541 entity nodes, 203 attribute nodes and 484 relation types. All 78,334 attribute triples are combined with the training set of FB15k-237.
Experimental Setup
The hyperparameters in our Conv-TransE and SACN models are determined by a grid search during training, and we manually specify the hyperparameter search ranges. All the models use a WGCN with two layers. For the different datasets, we have found that the following settings work well: for FB15k-237, set the dropout to 0.2, the number of kernels to 100, the learning rate to 0.003 and the embedding size to 200 for SACN; for the WN18RR dataset, set the dropout to 0.2, the number of kernels to 300, the learning rate to 0.003 and the embedding size to 200 for SACN. When using the Conv-TransE model alone, these settings still work well.
Each dataset is split into three sets for training, validation and testing, which is the same as the setting of the original ConvE. We use the adaptive moment estimation (Adam) algorithm (Kingma and Ba 2014) for optimization.
Results
Evaluation Protocol Our experiments use the proportion of correct entities ranked in the top 1, 3 and 10 (Hits@1, Hits@3, Hits@10) and the mean reciprocal rank (MRR) as the metrics. In addition, since some corrupted triples are in fact valid triples in the knowledge graphs, we use the filtered setting (Bordes et al. 2013), i.e. we filter out all valid triples before ranking.
Link Prediction
Our results on the standard FB15k-237, WN18RR and FB15k-237-Attr datasets are shown in Table 3. Table 3 reports the Hits@10, Hits@3, Hits@1 and MRR results of four different baseline models and our two models on the three knowledge graph datasets. The FB15k-237-Attr dataset is used to demonstrate the benefit of node attributes, so we run our SACN on FB15k-237-Attr and compare it with SACN trained on FB15k-237.
We first compare our Conv-TransE model with the four baseline models. ConvE performs best among all baselines. On the FB15k-237 dataset, our Conv-TransE model improves upon ConvE's Hits@10 by a margin of 4.1%, and upon ConvE's Hits@3 by a margin of 5.7% on the test set. On the WN18RR dataset, Conv-TransE improves upon ConvE's Hits@10 by a margin of 8.3%, and upon ConvE's Hits@3 by a margin of 9.3% on the test set. From these results, we conclude that Conv-TransE keeps the translational characteristic between entities and relations while using a neural network, and achieves better performance.
Second, structural information is added in our SACN model. As shown in Table 3, SACN also achieves the best performance on the test sets compared with all baseline methods. On FB15k-237, compared with ConvE, our SACN model improves the Hits@10 value by a margin of 10.2%, the Hits@3 value by a margin of 11.4%, the Hits@1 value by a margin of 8.3% and the MRR value by a margin of 9.4% on the test set. On the WN18RR dataset, compared with ConvE, our SACN model improves the Hits@10 value by a margin of 12.5%, the Hits@3 value by a margin of 11.6%, the Hits@1 value by a margin of 10.3% and the MRR value by a margin of 2.2% on the test set. Thus our method achieves significant improvements over ConvE even without attributes.
Third, we add node attributes to our SACN model, i.e. we use FB15k-237-Attr to train SACN. Note that SACN already has significant improvements over ConvE without attributes; adding attributes improves performance further. Our model using attributes improves upon ConvE's Hits@10 by a margin of 12.2%, Hits@3 by a margin of 14.3%, Hits@1 by a margin of 12.5% and MRR by a margin of 12.5%. In addition, our SACN using attributes improves Hits@10 by a margin of 1.9%, Hits@3 by a margin of 2.6%, Hits@1 by a margin of 3.8% and MRR by a margin of 2.9% compared with SACN without attributes.
In order to better compare with ConvE, we also add the attributes to ConvE, where they are treated as entity triplets. Following the official ConvE code with the default setting, the test result on FB15k-237-Attr was: 0.46 (Hits@10), 0.33 (Hits@3), 0.22 (Hits@1) and 0.30 (MRR). Compared to the performance without attributes, adding the attributes to ConvE did not improve performance.
Convergence Analysis
Figure 3 shows the convergence of the three models. We can see that SACN (the red line) is always better than Conv-TransE (the yellow line) after several epochs, and the performance of SACN keeps increasing after around 120 epochs, whereas Conv-TransE has already reached its best performance by around 120 epochs. The gap between these two models demonstrates the usefulness of structural information. When using the FB15k-237-Attr dataset, the performance of "SACN + Attr" is better than that of SACN.
Kernel Size Analysis
In Table 4, different kernel sizes are examined in our models. A kernel of size "2 × 1" means that information is translated between one dimension of the entity vector and the corresponding dimension of the relation vector. If we increase the kernel size to "2 × k" with k = {3, 5}, information is translated between a combination of k dimensions of the entity vector and a combination of k dimensions of the relation vector. This larger view for collecting information helps to increase the performance, as shown in Table 4. All values of Hits@1, Hits@3, Hits@10 and MRR are improved by increasing the kernel size on the FB15k-237 and FB15k-237-Attr datasets. However, the optimal kernel size may be task dependent.
Node Indegree Analysis The indegree of a node in the knowledge graph is the number of edges connected to the node. A node with a larger indegree has more neighboring nodes and can therefore receive more information from its neighbors than a node with a smaller indegree. As shown in Table 5, we present the results for different sets of nodes grouped by indegree scope. The average Hits@10 and Hits@3 scores are calculated. As the indegree scope increases, the average Hits@10 and Hits@3 values increase. First, a node with a small indegree benefits from the aggregation of neighbor information in the WGCN layers of SACN, so its embedding can be estimated robustly. Second, for a node with a high indegree, much more information is aggregated through the GCN and the estimate of its embedding is substantially smoothed among its neighbors; thus the embedding learned by SACN can be worse than that learned by Conv-TransE. One solution to this problem would be neighbor selection as in (Ying et al. 2018).
Conclusion and Future Work
We have introduced an end-to-end structure-aware convolutional network (SACN). The encoding network is a weighted graph convolutional network that utilizes the knowledge graph connectivity structure, node attributes and relation types. The WGCN, with learnable relation-specific weights, collects an adaptive amount of information from neighboring graph nodes. In addition, entity attributes are added as nodes in the network, so that attributes are transformed into structural knowledge that is easily integrated into the node embeddings. The scoring network of SACN is a convolutional neural model, called Conv-TransE. It uses a convolutional network to model the relation as a translation operation and to capture the translational characteristic between entities and relations. We also show that Conv-TransE alone already achieves state-of-the-art performance. SACN achieves an overall improvement of about 10% over the state of the art such as ConvE.
In the future, we would like to incorporate the neighbor selection idea into our training framework, such as the importance pooling in (Ying et al. 2018), which takes into account the importance of neighbors when aggregating their vector representations. Encouraged by the results in (Ying et al. 2018), we would also like to extend our model to scale to larger knowledge graphs.
| 4,074 |
1811.03492
|
2900151392
|
Generative Adversarial Networks have shown impressive results for the task of object translation, including face-to-face translation. A key component behind the success of recent approaches is the self-consistency loss, which encourages a network to recover the original input image when the output generated for a desired attribute is itself passed through the same network, but with the target attribute inverted. While the self-consistency loss yields photo-realistic results, it can be shown that the input and target domains, supposed to be close, differ substantially. This is empirically found by observing that a network recovers the input image even if attributes other than the inversion of the original goal are set as target. This stops one combining networks for different tasks, or using a network to do progressive forward passes. In this paper, we show empirical evidence of this effect, and propose a new loss to bridge the gap between the distributions of the input and target domains. This "triple consistency loss", aims to minimise the distance between the outputs generated by the network for different routes to the target, independent of any intermediate steps. To show this is effective, we incorporate the triple consistency loss into the training of a new landmark-guided face to face synthesis, where, contrary to previous works, the generated images can simultaneously undergo a large transformation in both expression and pose. To the best of our knowledge, we are the first to tackle the problem of mismatching distributions in self-domain synthesis, and to propose "in-the-wild" landmark-guided synthesis. Code will be available at this https URL
|
Generative Adversarial Networks (GANs) @cite_40 have proven to be a powerful tool in many Computer Vision disciplines, such as image generation @cite_14 , style transfer @cite_34 , or super-resolution @cite_31 . In the context of image to image translation @cite_32 @cite_18 , GANs are composed of a generator that aims to reproduce the target domain, and a discriminator that tells whether the output of the generator is close to the target distribution or not. Both are learnt simultaneously using the minimax strategy. Since the introduction of GANs, many improvements on adversarial learning have been proposed, including the Least-Squares GAN @cite_27 , the Wasserstein GAN @cite_16 @cite_22 , the Geometric GAN @cite_13 , or Spectral Normalisation @cite_0 @cite_8 @cite_29 , however there is no consensus as to which exhibits a systematic improvement.
|
{
"abstract": [
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"",
"In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. Furthermore, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we apply spectral normalization to the GAN generator and find that this improves training dynamics. The proposed SAGAN achieves the state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Frechet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset. Visualization of the attention layers shows that the generator leverages neighborhoods that correspond to object shapes rather than local regions of fixed shape.",
"One of the challenges in the study of generative adversarial networks is the instability of its training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on CIFAR10, STL-10, and ILSVRC2012 dataset, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) is capable of generating images of better or equal quality relative to the previous training stabilization techniques.",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.",
"Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.",
"Both convolutional and recurrent operations are building blocks that process one local neighborhood at a time. In this paper, we present non-local operations as a generic family of building blocks for capturing long-range dependencies. Inspired by the classical non-local means method in computer vision, our non-local operation computes the response at a position as a weighted sum of the features at all positions. This building block can be plugged into many computer vision architectures. On the task of video classification, even without any bells and whistles, our non-local models can compete or outperform current competition winners on both Kinetics and Charades datasets. In static image recognition, our non-local models improve object detection segmentation and pose estimation on the COCO suite of tasks. Code is available at this https URL .",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson X2 divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We evaluate LSGANs on LSUN and CIFAR-10 datasets and the experimental results show that the images generated by LSGANs are of better quality than the ones generated by regular GANs. We also conduct two comparison experiments between LSGANs and regular GANs to illustrate the stability of LSGANs.",
"Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.",
"We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.",
""
],
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_8",
"@cite_29",
"@cite_32",
"@cite_16",
"@cite_0",
"@cite_40",
"@cite_27",
"@cite_31",
"@cite_34",
"@cite_13"
],
"mid": [
"2962793481",
"2173520492",
"",
"2950893734",
"2785678896",
"2552465644",
"2605135824",
"2770201307",
"2099471712",
"2593414223",
"2523714292",
"2950689937",
""
]
}
|
Triple consistency loss for pairing distributions in GAN-based face synthesis
|
Recent advances in Generative Adversarial Networks (GANs [7]) have found a broad range of applications in the domain of face synthesis or face-to-face translation [12,5,6,16,26]. Most face-to-face synthesis scenarios translate a target set of attributes [5], landmarks [34], or expressions [26], onto the face present in the input image, where an additional goal is to preserve the identity of the input face. One would expect the generated images to follow a similar probability distribution as that of the input images. However, while the generated images can be said to be photo-realistic, the distributions generated by the current state of the art differ in an important way from the corresponding input domain. Upon closer inspection, we reveal an interesting phenomenon: when the generated images are re-introduced to the network with a new set of target attributes, the network yields poor results, and occasionally fails to produce even photo-realistic images. We will refer to this way of generating images one after another as "progressive image translation", or simply progressive.
This major problem has remained unnoticed mainly due to the fact that existing approaches deploy a one-to-many image translation, where the target domain is always overlaid onto the input image. However, the problem leaves existing approaches with no hope of achieving step-wise attribute translation, i.e. progressive image translation. Consider the following example goal: can we use a network to convert the hair of a given person in an image from blonde to brown, and then use another network to modify their facial expression? With the flaws of the current methods based on the self-consistency loss, the answer is no.
[Figure 2 caption: This figure illustrates the drawback of the self-consistency loss for identity preservation. The top two rows show the results of the original StarGAN [5] when the input image is translated into the attributes "Black Hair" or "Blonde Hair". The second row illustrates the result of generating the "Blonde Hair" attribute using as input the image generated from the "Black Hair" column. We can see that the output of "Blonde Hair" recovers the input image with just subtle changes in contrast, i.e. it fails to generate the target attribute. In the bottom rows, we illustrate the same process where StarGAN is trained using the triple consistency loss presented in this paper. It can be seen that the new attributes are now correctly placed even after a first pass through the network.]
In this paper, we argue that the reason behind this mismatch arises from the recently introduced self-consistency loss, where the network is expected to recover the input image from the generated one if the inverted attribute is set as the target. This loss is used to enforce the network to preserve identity. However, we observe that when this condition is met, the input image is well recovered no matter what target attribute is given back to the network; i.e., it appears that the network leaves a footprint in the generated image, not perceptible to the human eye, but evident when the generated image is re-introduced to the network. This is illustrated in Fig. 2 (second row), where we show the effect of using the pre-trained StarGAN [5] to progressively generate the "Blonde Hair" attribute after first having generated the "Black Hair" attribute.
Having discovered this problem, this paper presents a first approach to tackle it by introducing a new consistency loss, which we coin the triple consistency loss. This triple consistency loss (Fig. 1) aims to bridge the gap between the input and target distributions by imposing that any generated image has to be the same no matter whether it is reached by the network in one step or two. After retraining the StarGAN network with our triple consistency loss, we can see that the "Blonde Hair" attribute is correctly placed even when using as input the output of the network after generating the "Black Hair" attribute (Fig. 2, bottom row). In addition, we present our novel approach to unconstrained landmark-guided face-to-face synthesis, which we name GANnotation, and use this to illustrate the efficacy of the triple consistency loss.
GANnotation translates a given face to a set of target landmarks, given to the network in the form of heatmaps. To the best of our knowledge, GANnotation is the first network that allows synthesising faces in a wide range of poses and expressions, and it can be used to construct person-specific datasets with very little supervision. An example is depicted in Fig. 1, where the input image I is translated into the target point configurations s_1 (and s_2). We show that the target points become the ground truth of the generated images, thus being practical for face alignment applications. We will release our code and models for the community to construct their own datasets, as well as to encourage further research on this topic.
In short, the contributions of this paper can be summarised as follows:
• We propose a triple consistency loss to bridge the gap between the distributions of the input and generated images. This enables the training of networks that not only reproduce photo-realistic images, but are also suitable for its use in combination with other networks.
To the best of our knowledge, we are the first to introduce a triple consistency loss, which better represents the target distribution, allowing progressive image translation.
• We propose GANnotation, the first network that applies a face-to-face synthesis with simultaneous change in pose and expression. GANnotation is a network that can synthesise faces for a set of unconstrained target landmark annotations, whereby the given landmarks correspond to the ground-truth points in the generated images.
Proposed approach
Our goal is to generate (synthesise) a set of person-specific images driven by a set of landmarks, so that these become the ground-truth landmarks in the generated image. Contrary to previous works, we want our network to allow for simultaneous changes in both pose and expression. In addition, we want the generated images to be not only photo-realistic, but also distribution-wise close to the input images. To the best of our knowledge, this is the first work that directly permits changes in pose and expression simultaneously and that reduces the gap between the input and target distributions. An overall description of our proposed approach is depicted in Fig. 3.
[Figure 3 caption: The concatenated volume is sent to the generator G to produce the image Î; we overlay the target points on the generated image to illustrate the main task of the network.]
Notation
Let I ∈ I be a w × h pixel face image, for which a set of n indexed points s_i ∈ R^{2n} is available. I represents the space of original images of size w × h. The generator is a function G : I × H → Î, with Î the space of generated images, that receives as input an image I and a set of heatmaps H(s_t) ∈ H encoding a target shape s_t, and outputs the warped image. In particular, the estimated image Î is defined as:
Î = G(I; H(s_t))    (1)
where ; indicates that I and H are concatenated. The notation H(s) is used to represent the dependence of the heatmaps on the shape s. In particular, H(s) ∈ R^{n×w×h} is defined as a set of n heatmaps (one for each facial point), each being a w × h map in which a unit Gaussian is centred at the corresponding landmark. In general, we will assume that real images I are drawn from a distribution P_I, and that generated images Î are drawn from P_Î.
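A minimal sketch of how such landmark heatmaps H(s) could be built in PyTorch is given below; the 128 × 128 resolution, the 66-point configuration in the usage line and the function name are illustrative assumptions, not details taken from the paper:

import torch

def landmark_heatmaps(points, size=128, sigma=1.0):
    # One w x h map per landmark, each a unit Gaussian centred at the
    # landmark location, matching the description of H(s) above.
    ys = torch.arange(size, dtype=torch.float32).view(-1, 1)   # row index
    xs = torch.arange(size, dtype=torch.float32).view(1, -1)   # column index
    maps = []
    for px, py in points:
        d2 = (xs - px) ** 2 + (ys - py) ** 2
        maps.append(torch.exp(-d2 / (2.0 * sigma ** 2)))
    return torch.stack(maps)   # (n, w, h), concatenated channel-wise with the image

# usage with hypothetical 66-point annotations on a 128 x 128 crop
pts = torch.rand(66, 2) * 128
H = landmark_heatmaps(pts)     # (66, 128, 128)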
In some scenarios, like the one presented in this paper, we want P_Î to match P_I as closely as possible. The discriminator is defined as a function D that receives as input either an estimated image Î or a real image I, and aims to label it as real or fake.
Architecture
The generator is adapted from the architecture successfully proposed for the task of neural transfer [14], and later adapted to the image-to-image translation task [13,41]. This architecture has also proven successful for the task of face synthesis [26,21,11], and basically consists of two spatial downsampling convolutions, followed by a set of residual blocks [10], and two spatial upsampling blocks with 1/2 strided convolutions. The generator is modified to account for the 3 + n input channels, defined by the RGB input image and the heatmaps corresponding to the target landmark locations. As in [26], we adopt a mask-based approach, by splitting the last layer of the generator into a colour image C and a mask M . The output of the generator is thus defined as:
Î = (1 − M) • C + M • I,    (2)
where • represents an element-wise product. Without loss of generality, we refer to Î as the output of the generator. Further details of the network can be found on the main project site. The discriminator is adopted from the PatchGAN [13,41] architecture, and consists of several convolution-based downsampling blocks, progressively increasing the number of channels to 512, each followed by a LeakyReLU [22]. For an input resolution of 128 × 128 this network yields an output volume of 4 × 4 × 512, which is forwarded to an FCN to give a final score.
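The mask-based output head of Eq. (2) can be sketched as follows; the number of input feature channels, the kernel size and the tanh/sigmoid activations are assumptions, since the paper defers architectural details to the project site:

import torch
import torch.nn as nn

class MaskedHead(nn.Module):
    # Sketch of the mask-based output: the last layer is split into a colour
    # image C and a mask M, blended with the input image I as in Eq. (2).
    def __init__(self, in_channels=64):
        super().__init__()
        self.to_colour = nn.Conv2d(in_channels, 3, kernel_size=7, padding=3)
        self.to_mask = nn.Conv2d(in_channels, 1, kernel_size=7, padding=3)

    def forward(self, features, input_image):
        colour = torch.tanh(self.to_colour(features))
        mask = torch.sigmoid(self.to_mask(features))      # values in [0, 1]
        return (1 - mask) * colour + mask * input_image   # Eq. (2)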
Training
The loss function we aim to optimise consists of seven terms. Below we give a mathematical formulation for each, and introduce the triple-consistency loss, which is our main contribution.
Adversarial loss
We adopt the hinge adversarial loss proposed in [19], which has been shown to require fewer discriminator updates per generator update, allowing faster learning [38,24]. The loss for the discriminator is defined as:
L_adv(D) = −E_{Î∼P_Î}[min(0, −1 + D(Î))] − E_{I∼P_I}[min(0, −D(I) − 1)],    (3)
whereas the loss for the generator is defined as:
L_adv(G) = −E_{I∼P_I}[D(Î)].    (4)
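For illustration, a hinge adversarial loss of this kind can be written in PyTorch as below. We follow the conventional sign placement of the hinge formulation (real scores pushed above +1, fake scores below −1), so this is a sketch rather than a verbatim transcription of Eqs. (3)-(4):

import torch

def discriminator_hinge_loss(d_real, d_fake):
    # standard hinge formulation for the critic
    loss_real = torch.relu(1.0 - d_real).mean()
    loss_fake = torch.relu(1.0 + d_fake).mean()
    return loss_real + loss_fake

def generator_hinge_loss(d_fake):
    # the generator simply maximises the critic score on generated images (Eq. 4)
    return -d_fake.mean()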
Pixel Loss
In order to make the network learn the target representation, we use a pixel reconstruction loss. In particular, for a given input image I and target points s_t, corresponding to the ground-truth points of a "target" image I_t, the pixel loss is defined as:
L_pix = ‖G(I; H(s_t)) − I_t‖_2^2.    (5)
(For the sake of clarity, we henceforth omit the expectation terms.) This loss is used along with a total variation regularisation loss, L_tv [1,14], which encourages smoothness in the generated images.
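A hedged sketch of the pixel and total-variation terms follows; whether the loss is normalised by the number of pixels, and whether L_tv uses absolute or squared differences, is not specified here, so the choices below are assumptions:

import torch
import torch.nn.functional as F

def pixel_loss(generated, target):
    # Eq. (5): squared L2 reconstruction against the target image I_t
    # (mean-reduced here, i.e. up to a constant normalisation)
    return F.mse_loss(generated, target, reduction="mean")

def total_variation(img):
    # smoothness regulariser L_tv on a generated batch of shape (B, C, H, W)
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return dh + dw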
Consistency Loss
In the context of face-to-face synthesis, the generator is expected to be able to invert the transformation applied to the input image. In practice, this is accomplished by feeding the generator with its own output, together with the original points, after a first pass with the target points. This loss is also referred to as the identity loss in [26,5], and is defined as:
L_self = ‖G(G(I; H(s_t)); H(s_i)) − I‖_2.    (6)
where s_i represents the ground-truth points for the image I.
In practice, the consistency loss is obtained by first passing the input image with the target landmarks to the generator, and then by passing the corresponding output with the initial landmarks back to the generator.
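This two-pass procedure can be sketched as follows, assuming (as described in the Architecture section) that the generator takes the channel-wise concatenation of the image and the heatmaps; the function signature and the batch-wise mean reduction are our own:

import torch

def self_consistency_loss(G, I, H_target, H_source):
    # Eq. (6) sketch: warp to the target points, warp back to the source
    # points, and penalise the distance to the original image.
    I_hat = G(torch.cat([I, H_target], dim=1))       # first pass
    I_back = G(torch.cat([I_hat, H_source], dim=1))  # second pass
    return (I_back - I).flatten(1).norm(dim=1).mean()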
Triple Consistency Loss
The self-consistency loss shown above was presented in [26,5] to enforce the network to preserve identity. This means that the network should recover the original image when the original expression is given as a target to the output of a first pass. In [26,5] this approach is specifically defined as in Eq. 6. However, we have noticed that this loss causes the network to recover the input image no matter what further target is considered, whereas we expect this to happen only when the further target is the inverse of the original target. Rather than capturing the input distribution, the network translates images into a domain that encodes the input image along with the output, i.e., P_Î ≠ P_I. We conjecture that this problem has so far remained undiscovered due to the fact that existing works set a neutral-to-expression synthesis goal rather than expression-to-expression, which means that the input and output spaces do not need to overlap. However, we want the network not only to produce photo-realistic images, but also to make them reusable, and therefore the input and output domains need to be similar.
In order to solve this problem and allow progressive image generation, we introduce the triple consistency loss. In particular, when an image is sent to a target location and its output is re-sent to another location, we expect the network to produce the same result as in a single pass. Given the input image I and target points s_t, the output of the generator is Î = G(I; H(s_t)). Now, we observe that sending I and Î to another target location s_n should result in similar outputs. That is to say, we want G(Î; H(s_n)) to be similar to G(I; H(s_n)). The triple consistency loss is thus defined as:
L_triple = ‖G(Î; H(s_n)) − G(I; H(s_n))‖_2    (7)
The overall idea of the triple consistency loss is depicted in Fig. 1. This loss encourages P_Î ∼ P_I.
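A sketch of Eq. (7) under the same assumed generator interface is given below; the batch-wise mean reduction is again an assumption:

import torch

def triple_consistency_loss(G, I, H_t, H_n):
    # Eq. (7) sketch: the two-step route I -> s_t -> s_n should match the
    # one-step route I -> s_n.
    I_hat = G(torch.cat([I, H_t], dim=1))          # intermediate image at s_t
    two_step = G(torch.cat([I_hat, H_n], dim=1))   # re-targeted to s_n
    one_step = G(torch.cat([I, H_n], dim=1))       # direct mapping to s_n
    return (two_step - one_step).flatten(1).norm(dim=1).mean()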
Identity preserving loss
In order to enforce the network to preserve the identity wherever the target points allow the generated image to do so, we also use the identity preserving network, coined Light CNN, presented in [36]. We use a similar approach to [11,12] and define the identity loss as the l 1 norm between the features extracted at the last two layers of the Light CNN w.r.t. both the generated and the real images.
In particular, denoting fc and p as the fully connected layer and the last pooling layer of the Light CNN, respectively, and Φ^l_CNN the features extracted at layer l ∈ {fc, p}, the identity loss is defined as:
L_id = Σ_{l∈{fc,p}} ‖Φ^l_CNN(I) − Φ^l_CNN(Î)‖_1    (8)
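Assuming a Light CNN feature extractor that returns its last pooling and fully connected activations (an interface we invent here purely for illustration), Eq. (8) could be sketched as:

def identity_loss(light_cnn, real, generated):
    # Eq. (8) sketch: L1 distance between Light CNN features at the last
    # pooling layer "p" and the fully connected layer "fc".
    feats_real = light_cnn(real)        # assumed to return {"p": ..., "fc": ...}
    feats_fake = light_cnn(generated)
    return sum((feats_real[k] - feats_fake[k]).abs().mean() for k in ("p", "fc"))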
Perceptual loss
In order to provide the network with the ability to generate subtle details, we follow the line of recent approaches in super-resolution and style transfer [18,4], and use the perceptual loss defined by [14]. The perceptual loss enforces the features of the generated images to be similar to those of the real images when forwarded through a VGG-19 [31] network. The perceptual loss is split into a feature reconstruction loss and a style reconstruction loss. The feature reconstruction loss is computed as the l1-norm of the difference between the features Φ^l_VGG computed at the layers l = {relu1_2, relu2_2, relu3_3, relu4_3} of the input and generated images. The style reconstruction loss is computed as the Frobenius norm of the difference between the Gram matrices, Γ, of the output and target images, computed from the features extracted at the relu3_3 layer:
L_pp = Σ_l ‖Φ^l_VGG(I) − Φ^l_VGG(Î)‖_1 + ‖Γ(Φ^{relu3_3}_VGG(I)) − Γ(Φ^{relu3_3}_VGG(Î))‖_F.    (9)
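A sketch of the perceptual term of Eq. (9) is shown below; the dictionary interface for the VGG-19 features and the Gram-matrix normalisation by c·h·w are assumptions:

import torch

def gram(feat):
    # Gram matrix of a (B, C, H, W) feature map
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def perceptual_loss(vgg_feats_real, vgg_feats_fake, style_layer="relu3_3"):
    # Eq. (9) sketch: L1 feature reconstruction over several VGG layers plus a
    # style term on the Gram matrices at relu3_3. vgg_feats_* are assumed to be
    # dicts mapping layer names to feature maps.
    feat_term = sum((vgg_feats_real[l] - vgg_feats_fake[l]).abs().mean()
                    for l in vgg_feats_real)
    style_term = torch.norm(gram(vgg_feats_real[style_layer])
                            - gram(vgg_feats_fake[style_layer]), p="fro")
    return feat_term + style_term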
Full loss
The full loss for the generator is then defined as:
L(G) = λ_adv L_adv + λ_pix L_pix + λ_self L_self + λ_triple L_triple + λ_id L_id + λ_pp L_pp + λ_tv L_tv,    (10)
where, in our set-up, λ_adv = 1, λ_pix = 10, λ_self = 100, λ_triple = 100, λ_id = 1, λ_pp = 10, and λ_tv = 10^{-4}.
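Combining the terms with the reported weights is then a simple weighted sum; the dictionary-based helper below is purely illustrative:

def generator_loss(terms, weights=None):
    # Eq. (10) sketch: weighted sum of the individual generator losses.
    # 'terms' maps names to already-computed scalar losses.
    if weights is None:
        weights = {"adv": 1.0, "pix": 10.0, "self": 100.0, "triple": 100.0,
                   "id": 1.0, "pp": 10.0, "tv": 1e-4}
    return sum(weights[name] * value for name, value in terms.items())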
Training Datasets
Training the network requires paired data, i.e. pairs of images from the same subject for which the points are known. However, we train with triplets rather than pairs of images, so that we can also compare the output of the network after one and two passes with the ground-truth images. To this end, we use the training partition of 300VW [30], which is composed of annotated videos of 50 people. For each video, we choose a set of 3000 triplets, where each triplet is composed of random samples from the video. In addition, we use the public partition of the BP4D dataset [39,33], which is composed of videos of 40 subjects performing 8 different tasks. For each of the BP4D videos, we select 500 triplets. We found that using only these 90 subjects results in overfitting, which causes the network to lose its ability to preserve identity. To overcome this problem, we augment our training set with unpaired data. In particular, we use a subset of ∼8000 images collected from datasets that are annotated in a similar fashion to 300VW: Helen [17], LFPW [3], AFW [42], IBUG [29], and a subset of MultiPIE [8]. To ensure label consistency across datasets we use the facial landmark annotations provided by the 300W challenge [29]. To generate triplets from this data, we apply random affine transformations to the images and points, as well as random image mirroring, so that every image is "paired" with random affine perturbations of its landmarks. While the network learns to translate non-rigid deformations from the 300VW subset, it learns to preserve identity and to be robust to rigid perturbations, including mirroring, from the subset of unpaired data.
Experiments
All the experiments are implemented in PyTorch [25], using the Adam optimiser [15] with β_1 = 0.5 and β_2 = 0.9999. The input images are cropped according to a bounding box defined by the ground-truth landmarks with an added margin of 10 pixels on each side, and then re-scaled to 128 × 128.
The model is trained for 30 epochs, each consisting of 10,000 iterations, which takes approximately 24 hours to complete on two NVIDIA Titan X GPU cards. The batch size is 16, and the learning rate is set to 10^{-4} and linearly decreased to 10^{-6} over 20 epochs. The size of the heatmaps is 6 pixels, corresponding to a unit 2D Gaussian. For each iteration, a random batch is taken from either the paired or the unpaired data, as described in Section 4.
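The reported optimisation schedule can be sketched as follows; we assume the linear decay covers the final 20 of the 30 epochs, and the stand-in module is only a placeholder for the actual generator:

import torch

# Sketch of the reported setup: Adam with beta1=0.5, beta2=0.9999, learning
# rate 1e-4 decayed linearly to 1e-6 over the last 20 of 30 epochs (assumption).
generator = torch.nn.Conv2d(3, 3, 3)   # stand-in module for illustration
opt = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.5, 0.9999))

def lr_lambda(epoch, total=30, decay_epochs=20, lr0=1e-4, lr1=1e-6):
    start = total - decay_epochs
    if epoch < start:
        return 1.0
    frac = (epoch - start) / decay_epochs
    return (lr0 + frac * (lr1 - lr0)) / lr0   # multiplicative factor for LambdaLR

scheduler = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)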
On the use of a triple consistency loss
First, we want to validate the contribution of the triple consistency loss independently of our proposed approach. To do so, we re-use the StarGAN [5] implementation, as it is accompanied by the authors' trained model. We append the triple consistency loss to the training and compare the results of the retrained network with those of the model provided by the authors. The original StarGAN model was trained on the Celeb-A dataset [20], and it applies to a given face a set of attributes, namely "Black Hair", "Blonde Hair", "Brown Hair", "Gender", and "Age". The attributes "Gender" and "Age" have to be understood as generating the opposite attribute to the one shown in the input image. We show some results generated by the model in the first row of each example in Fig. 4 and Fig. 2. In these examples, the same input image is used to generate all the target attributes. Then, using the same network, we apply progressive image generation, whereby the output image after inserting the first attribute is forwarded to the network to create the second attribute, and so forth. In other words, the network takes as input the output of the network w.r.t. the previous attribute. The results of this progressive attribute translation are shown in the second row of Fig. 4. We can see that the images degrade substantially as soon as the network has to deal with a couple of generated images, producing burning-like artifacts. Then, we re-train the StarGAN network with the triple consistency loss included, and repeat the same process as before. The corresponding results are shown in the bottom rows of Fig. 4. As can be seen in the third row, the StarGAN trained with a triple consistency loss keeps a high-quality image generation with the target attributes, while having a distribution that is closer to that of the input images. This is illustrated in the bottom row.
Next, we show the contribution of the triple consistency loss within our GANnotation. We train two models under the same conditions, with and without the triple consistency loss. At test time, we use a set of images from the test partition of 300VW [30] for which points are available. Each image is first frontalised using the given landmarks (see Section 5.2 for further details), and then sent to a pose-specific angle. The results are shown in Fig. 5, where the top rows correspond to the images generated by the model trained with the triple consistency loss and the bottom rows to the images generated by the model trained without it. After the first pass, both models produce similar results and the generated images look alike, being similar to the input image. However, after the second pass, the generator trained without the triple consistency loss recovers the input image, with only subtle changes in contrast. This effect does not occur with the images generated by the network trained with the triple consistency loss, where the images are correctly mapped. We will release both models for further validation. As can be seen, while both networks generate plausible images at a first pass, the former fails after subsequent forward passes.
GANnotation
We now evaluate the consistency of our GANnotation for the task of landmark-guided face synthesis. In order to compare our GANnotation with the most recent works, we apply landmark-guided multi-view synthesis and compare our results against the publicly available code of CR-GAN [32]. We compare our method on the test partition of 300VW [30]. To generate pose-specific landmarks, we use a shape model trained on the datasets described in Section 4. The shape model includes a set of specific parameters that allow manipulating the in-plane rotation as well as the view angle (pose). Using the shape model, we first remove both the in-plane rotation and the pose, resulting in the frontalised image shown in the middle column. Then, the pose-specific parameter is manipulated to generate the synthetic poses shown in the left and right columns w.r.t. the frontalised face. In addition, when generating the pose-specific landmarks, we randomly perturb the expression-related parameters so as to generate different faces. The results are shown in Fig. 6, where we show both the results of a progressive image generation (first and second rows) and the one-to-one mapping (third row). Finally, we compare the results against those given by the CR-GAN model. To further show the performance of our GANnotation, we attach a video with a reenactment experiment (https://youtu.be/-8r7zexg4yg), where the appearance of a given face is transferred to the points extracted from each frame of another video.
[Figure 6 caption: Landmark-guided multi-view synthesis and comparison with CR-GAN [32]. The first and second rows correspond to a progressive image generation (with and without the landmarks for a clear visualisation), whereby the input image (leftmost) is first frontalised (middle column), and then sent progressively to the corresponding views. The third row corresponds to a one-to-one mapping.]
Remarks
We have shown that our network yields photo-realistic results whilst maintaining a certain consistency when applying multiple passes through the same network. In this section, we want to highlight an important aspect that needs consideration when using the triple consistency loss, as well as discuss to what extent the network preserves identity. The effectiveness of the triple consistency loss. This loss, when used without the self-consistency loss, can overcome the degradation problem completely. However, we have observed that when the self-consistency loss is removed from training, the network is prone to failing to preserve identity. Therefore, while the triple consistency loss pulls the input image out of the target domain, the self-consistency loss is needed to better preserve identity.
Preserving identity vs. preserving the landmarks. While our proposed approach can preserve identity in most cases, it is important to point out some cases where the network will likely fail: 1) When the target points force the network to do so. The network will generate plausible faces and will prioritise the target locations over the identity and even the gender. An example is depicted in the most extreme views shown in Fig. 6, where the network is forced to locate the eyes where they are targeted, even when it means a less realistic face. Thus, if the target landmarks do not show identity consistency, the network will likely fail to preserve it. 2) When there is a big mismatch between the ground-truth points of the given image and the target landmarks. Given that the network is not provided with any attention mechanism, one of its tasks is to locate which information needs to be transferred to the target points. When the network fails to do so, or the target points are displaced substantially from the input, identity can be poorly preserved.
Conclusion
In this paper, we have illustrated a drawback of face-to-face synthesis methods that aim to preserve identity by using a self-consistency loss. We have shown that despite images being realistic, they cannot be reused by the network for further tasks. Based on this evidence, we have introduced a triple consistency loss, which attempts to make the network reproduce similar results independently of the number of steps used to reach the target. We have incorporated this loss into a new landmark-guided face synthesis, coined GANnotation, which allows for high-quality image synthesis even from low resolution images. We showed how the target landmarks become the ground-truth points, thus making GANnotation a powerful tool. We believe this paper opens the research question of pairing distributions even when the results support plausible images. The models used to generate the images of this paper will be made publicly available.
| 4,379 |
1811.03492
|
2900151392
|
Generative Adversarial Networks have shown impressive results for the task of object translation, including face-to-face translation. A key component behind the success of recent approaches is the self-consistency loss, which encourages a network to recover the original input image when the output generated for a desired attribute is itself passed through the same network, but with the target attribute inverted. While the self-consistency loss yields photo-realistic results, it can be shown that the input and target domains, supposed to be close, differ substantially. This is empirically found by observing that a network recovers the input image even if attributes other than the inversion of the original goal are set as target. This stops one combining networks for different tasks, or using a network to do progressive forward passes. In this paper, we show empirical evidence of this effect, and propose a new loss to bridge the gap between the distributions of the input and target domains. This "triple consistency loss", aims to minimise the distance between the outputs generated by the network for different routes to the target, independent of any intermediate steps. To show this is effective, we incorporate the triple consistency loss into the training of a new landmark-guided face to face synthesis, where, contrary to previous works, the generated images can simultaneously undergo a large transformation in both expression and pose. To the best of our knowledge, we are the first to tackle the problem of mismatching distributions in self-domain synthesis, and to propose "in-the-wild" landmark-guided synthesis. Code will be available at this https URL
|
Most of the methods mentioned above apply a self-consistency loss to preserve identity. As shown in Fig. , this loss limits existing methods to one-to-one mappings, and renders generated images that cannot be used as the basis for generating further images. These methods might map a neutral face to a given expression @cite_17 , or a non-frontal face to a frontal one @cite_25 . In either case, the network is not required to perform more than one forward pass from a given image. Thus, a self-consistency loss is applied to either preserve appearance or identity. While this yields impressive results, it causes a mismatch between the input and target distributions, when a desired property would be to actually make them match. As we shall see, our proposed GANnotation does have this property, and thanks to that .
|
{
"abstract": [
"Photorealistic frontal view synthesis from a single face image has a wide range of applications in the field of face recognition. Although data-driven deep learning methods have been proposed to address this problem by seeking solutions from ample face data, this problem is still challenging because it is intrinsically ill-posed. This paper proposes a Two-Pathway Generative Adversarial Network (TP-GAN) for photorealistic frontal view synthesis by simultaneously perceiving global structures and local details. Four landmark located patch networks are proposed to attend to local textures in addition to the commonly used global encoderdecoder network. Except for the novel architecture, we make this ill-posed problem well constrained by introducing a combination of adversarial loss, symmetry loss and identity preserving loss. The combined loss function leverages both frontal face distribution and pre-trained discriminative deep face models to guide an identity preserving inference of frontal views from profiles. Different from previous deep learning methods that mainly rely on intermediate features for recognition, our method directly leverages the synthesized identity preserving image for downstream tasks like face recognition and attribution estimation. Experimental results demonstrate that our method not only presents compelling perceptual results but also outperforms state-of-theart results on large pose face recognition.",
"Recent advances in Generative Adversarial Networks (GANs) have shown impressive results for task of facial expression synthesis. The most successful architecture is StarGAN, that conditions GANs’ generation process with images of a specific domain, namely a set of images of persons sharing the same expression. While effective, this approach can only generate a discrete number of expressions, determined by the content of the dataset. To address this limitation, in this paper, we introduce a novel GAN conditioning scheme based on Action Units (AU) annotations, which describes in a continuous manifold the anatomical facial movements defining a human expression. Our approach allows controlling the magnitude of activation of each AU and combine several of them. Additionally, we propose a fully unsupervised strategy to train the model, that only requires images annotated with their activated AUs, and exploit attention mechanisms that make our network robust to changing backgrounds and lighting conditions. Extensive evaluation show that our approach goes beyond competing conditional generators both in the capability to synthesize a much wider range of expressions ruled by anatomically feasible muscle movements, as in the capacity of dealing with images in the wild."
],
"cite_N": [
"@cite_25",
"@cite_17"
],
"mid": [
"2964337551",
"2883861033"
]
}
|
Triple consistency loss for pairing distributions in GAN-based face synthesis
|
Recent advances in Generative Adversarial Networks (GANs [7]) have found a broad range of applications in the domain of face synthesis or face-to-face translation [12,5,6,16,26]. Most face-to-face synthesis scenarios translate a target set of attributes [5], landmarks [34], or expressions [26], onto the face present in the input image, where an additional goal is to preserve the identity of the input face. One would expect the generated images to follow a similar probability distribution as that of the input images. However, while the generated images can be said to be photo-realistic, the distributions generated by the current state of the art differ in an important way from the corresponding input domain. Upon closer inspection, we reveal an interesting phenomenon: when the generated images are re-introduced to the network with a new set of target attributes, the network yields poor results, and occasionally fails to produce even photo-realistic images. We will refer to this way of generating images one after another as "progressive image translation", or simply progressive.
This major problem has remained unnoticed mainly due to the fact that existing approaches deploy a one-to-many image translation, where the target domain is always overlaid onto the input image. However, the problem leaves existing approaches with no hope of achieving step-wise attribute translation, i.e. progressive image translation. Consider the following example goal: can we use a network to convert the hair of a given person in an image from blonde to brown, and then use another network to modify their facial expression? With the flaws of the current methods based on the self-consistency loss, the answer is no.
[Figure 2 caption: This figure illustrates the drawback of the self-consistency loss for identity preservation. The top two rows show the results of the original StarGAN [5] when the input image is translated into the attributes "Black Hair" or "Blonde Hair". The second row illustrates the result of generating the "Blonde Hair" attribute using as input the image generated from the "Black Hair" column. We can see that the output of "Blonde Hair" recovers the input image with just subtle changes in contrast, i.e. it fails to generate the target attribute. In the bottom rows, we illustrate the same process where StarGAN is trained using the triple consistency loss presented in this paper. It can be seen that the new attributes are now correctly placed even after a first pass through the network.]
In this paper, we argue that the reason behind this mismatch arises from the recently introduced self-consistency loss, where the network is expected to recover the input image from the generated one if the inverted attribute is set as the target. This loss is used to enforce the network to preserve identity. However, we observe that when this condition is met, the input image is well recovered no matter what target attribute is given back to the network; i.e., it appears that the network leaves a footprint in the generated image, not perceptible to the human eye, but evident when the generated image is re-introduced to the network. This is illustrated in Fig. 2 (second row), where we show the effect of using the pre-trained StarGAN [5] to progressively generate the "Blonde Hair" attribute after first having generated the "Black Hair" attribute.
Having discovered this problem, this paper presents a first approach to tackle it by introducing a new consistency loss, which we coin the triple consistency loss. This triple consistency loss (Fig. 1) aims to bridge the gap between the input and target distributions by imposing that any generated image has to be the same no matter whether it is reached by the network in one step or two. After retraining the StarGAN network with our triple consistency loss, we can see that the "Blonde Hair" attribute is correctly placed even when using as input the output of the network after generating the "Black Hair" attribute (Fig. 2, bottom row). In addition, we present our novel approach to unconstrained landmark-guided face-to-face synthesis, which we name GANnotation, and use this to illustrate the efficacy of the triple consistency loss.
GANnotation translates a given face to a set of target landmarks, given to the network in the form of heatmaps. To the best of our knowledge, GANnotation is the first network that allows synthesising faces in a wide range of poses and expressions, and it can be used to construct person-specific datasets with very little supervision. An example is depicted in Fig. 1, where the input image I is translated into the target point configurations s_1 (and s_2). We show that the target points become the ground truth of the generated images, thus being practical for face alignment applications. We will release our code and models for the community to construct their own datasets, as well as to encourage further research on this topic.
In short, the contributions of this paper can be summarised as follows:
• We propose a triple consistency loss to bridge the gap between the distributions of the input and generated images. This enables the training of networks that not only reproduce photo-realistic images, but are also suitable for its use in combination with other networks.
To the best of our knowledge, we are the first to introduce a triple consistency loss, which better represents the target distribution, allowing progressive image translation.
• We propose GANnotation, the first network that applies a face-to-face synthesis with simultaneous change in pose and expression. GANnotation is a network that can synthesise faces for a set of unconstrained target landmark annotations, whereby the given landmarks correspond to the ground-truth points in the generated images.
Proposed approach
Our goal is to generate (synthesise) a set of person-specific images driven by a set of landmarks, so that these become the ground-truth landmarks in the generated image. Contrary to previous works, we want our network to allow for simultaneous changes in both pose and expression. In addition, we want the generated images to be not only photo-realistic, but also distribution-wise close to the input images. To the best of our knowledge, this is the first work that directly permits changes in pose and expression simultaneously and that reduces the gap between the input and target distributions. An overall description of our proposed approach is depicted in Fig. 3.
[Figure 3 caption: The concatenated volume is sent to the generator G to produce the image Î; we overlay the target points on the generated image to illustrate the main task of the network.]
Notation
Let I ∈ I be a w × h pixel face image, for which a set of n indexed points s_i ∈ R^{2n} is available. I represents the space of original images of size w × h. The generator is a function G : I × H → Î, with Î the space of generated images, that receives as input an image I and a set of heatmaps H(s_t) ∈ H encoding a target shape s_t, and outputs the warped image. In particular, the estimated image Î is defined as:
Î = G(I; H(s_t))    (1)
where ; indicates that I and H are concatenated. The notation H(s) is used to represent the dependence of the heatmaps on the shape s. In particular, H(s) ∈ R^{n×w×h} is defined as a set of n heatmaps (one for each facial point), each being a w × h map in which a unit Gaussian is centred at the corresponding landmark. In general, we will assume that real images I are drawn from a distribution P_I, and that generated images Î are drawn from P_Î.
In some scenarios, like the one presented in this paper, we want P_Î to match P_I as closely as possible. The discriminator is defined as a function D that receives as input either an estimated image Î or a real image I, and aims to label it as real or fake.
Architecture
The generator is adapted from the architecture successfully proposed for the task of neural transfer [14], and later adapted to the image-to-image translation task [13,41]. This architecture has also proven successful for the task of face synthesis [26,21,11], and basically consists of two spatial downsampling convolutions, followed by a set of residual blocks [10], and two spatial upsampling blocks with 1/2 strided convolutions. The generator is modified to account for the 3 + n input channels, defined by the RGB input image and the heatmaps corresponding to the target landmark locations. As in [26], we adopt a mask-based approach, by splitting the last layer of the generator into a colour image C and a mask M . The output of the generator is thus defined as:
Î = (1 − M) • C + M • I,    (2)
where • represents an element-wise product. Without loss of generality, we refer to Î as the output of the generator. Further details of the network can be found on the main project site. The discriminator is adopted from the PatchGAN [13,41] architecture, and consists of several convolution-based downsampling blocks, progressively increasing the number of channels to 512, each followed by a LeakyReLU [22]. For an input resolution of 128 × 128 this network yields an output volume of 4 × 4 × 512, which is forwarded to an FCN to give a final score.
Training
The loss function we aim to optimise consists of seven terms. Below we give a mathematical formulation for each, and introduce the triple-consistency loss, which is our main contribution.
Adversarial loss
We adopt the hinge adversarial loss proposed in [19], which has been shown to require fewer discriminator updates per generator update, allowing faster learning [38,24]. The loss for the discriminator is defined as:
L_adv(D) = −E_{Î∼P_Î}[min(0, −1 + D(Î))] − E_{I∼P_I}[min(0, −D(I) − 1)],    (3)
whereas the loss for the generator is defined as:
L_adv(G) = −E_{I∼P_I}[D(Î)].    (4)
Pixel Loss
In order to make the network learn the target representation, we use a pixel reconstruction loss. In particular, for a given input image I and target points s_t, corresponding to the ground-truth points of a "target" image I_t, the pixel loss is defined as:
L_pix = ‖G(I; H(s_t)) − I_t‖_2^2.    (5)
(For the sake of clarity, we henceforth omit the expectation terms.) This loss is used along with a total variation regularisation loss, L_tv [1,14], which encourages smoothness in the generated images.
Consistency Loss
In the context of face-to-face synthesis, the generator is expected to be able to invert the transformation applied to the input image. In practice, this is accomplished by feeding the generator with its own output, together with the original points, after a first pass with the target points. This loss is also referred to as the identity loss in [26,5], and is defined as:
L_self = ‖G(G(I; H(s_t)); H(s_i)) − I‖_2.    (6)
where s_i represents the ground-truth points for the image I.
In practice, the consistency loss is obtained by first passing the input image with the target landmarks to the generator, and then by passing the corresponding output with the initial landmarks back to the generator.
Triple Consistency Loss
The self-consistency loss shown above was presented in [26,5] to enforce the network to preserve identity. This means that the network should recover the original image when the original expression is given as a target to the output of a first pass. In [26,5] this approach is specifically defined as in Eq. 6. However, we have noticed that this loss causes the network to recover the input image no matter what further target is considered, whereas we expect this to happen only when the further target is the inverse of the original target. Rather than capturing the input distribution, the network translates images into a domain that encodes the input image along with the output, i.e., P_Î ≠ P_I. We conjecture that this problem has so far remained undiscovered due to the fact that existing works set a neutral-to-expression synthesis goal rather than expression-to-expression, which means that the input and output spaces do not need to overlap. However, we want the network not only to produce photo-realistic images, but also to make them reusable, and therefore the input and output domains need to be similar.
In order to solve this problem, and to allow progressive image generation, we introduce the triple-consistency loss. In particular, when an image is sent to a target location and its output is re-sent to another location, we expect the network to be able to reach the same result in a single pass. Given the input image I and target points $s_t$, the output of the generator is $\hat{I} = G(I; H(s_t))$. Now, we observe that sending I and $\hat{I}$ to another target location $s_n$ should result in similar outputs. That is to say, we want $G(\hat{I}; H(s_n))$ to be similar to $G(I; H(s_n))$. The triple-consistency loss is thus defined as:
$$\mathcal{L}_{triple} = \big\| G(\hat{I}; H(s_n)) - G(I; H(s_n)) \big\|_2 \qquad (7)$$
The overall idea of the triple consistency loss is depicted in Fig. 1. This loss will try to enforce $\mathbb{P}_{\hat{I}} \approx \mathbb{P}_I$.
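A minimal sketch of the triple consistency loss follows, assuming a generator callable G(image, heatmaps); the callable signature and array handling are our assumptions for illustration, not the paper's code.

```python
import numpy as np

def triple_consistency_loss(G, I, h_t, h_n):
    """Triple consistency loss of Eq. (7), as a sketch.

    G   : generator callable G(image, heatmaps) -> image (assumed signature)
    I   : input image
    h_t : heatmaps H(s_t) of the intermediate target landmarks
    h_n : heatmaps H(s_n) of a further target location

    The two-hop result G(G(I, h_t), h_n) should match the direct
    one-hop result G(I, h_n).
    """
    I_hat = G(I, h_t)          # first pass: map I to the target s_t
    two_hop = G(I_hat, h_n)    # second pass: map the generated image to s_n
    one_hop = G(I, h_n)        # direct pass: map the original image to s_n
    return np.linalg.norm(two_hop - one_hop)
```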
Identity preserving loss
In order to enforce the network to preserve the identity wherever the target points allow the generated image to do so, we also use the identity-preserving network, coined Light CNN, presented in [36]. We use a similar approach to [11,12] and define the identity loss as the $\ell_1$ norm between the features extracted at the last two layers of the Light CNN w.r.t. both the generated and the real images.
In particular, denoting $fc$ and $p$ as the fully connected layer and last pooling layer of the Light CNN network, respectively, and $\Phi^{l}_{CNN}$ the features extracted at layer $l \in \{fc, p\}$, the identity loss is defined as:
$$\mathcal{L}_{id} = \sum_{l \in \{fc,\,p\}} \big\| \Phi^{l}_{CNN}(I) - \Phi^{l}_{CNN}(\hat{I}) \big\|_1 \qquad (8)$$
Perceptual loss
In order to provide the network with the ability to generate subtle details, we follow the line of recent approaches in super resolution and style transfer [18,4], and use the perceptual loss defined by [14]. The perceptual loss enforces the features of the generated images to be similar to those of the real images when forwarded through a VGG-19 [31] network. The perceptual loss is split between the feature reconstruction loss and the style reconstruction loss. The feature reconstruction loss is computed as the $\ell_1$-norm of the difference between the features $\Phi^{l}_{VGG}$ computed at the layers $l \in \{relu1\_2, relu2\_2, relu3\_3, relu4\_3\}$ of the input and generated images. The style reconstruction loss is computed as the Frobenius norm of the difference between the Gram matrices, $\Gamma$, of the output and target images, computed from the features extracted at the relu3_3 layer:
$$\mathcal{L}_{pp} = \sum_{l} \big\| \Phi^{l}_{VGG}(I) - \Phi^{l}_{VGG}(\hat{I}) \big\|_1 + \big\| \Gamma\big(\Phi^{relu3\_3}_{VGG}(I)\big) - \Gamma\big(\Phi^{relu3\_3}_{VGG}(\hat{I})\big) \big\|_F \qquad (9)$$
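The sketch below illustrates Eq. (9) given pre-extracted VGG-19 feature maps; the feature extraction itself and the exact normalisation of the Gram matrices are assumptions made for illustration.

```python
import numpy as np

def gram_matrix(feat):
    """Gram matrix of a feature map of shape (C, H, W),
    normalised by the number of entries (an assumed convention)."""
    C, H, W = feat.shape
    F = feat.reshape(C, H * W)
    return F @ F.T / (C * H * W)

def perceptual_loss(feats_real, feats_fake, style_real, style_fake):
    """Perceptual loss of Eq. (9), given pre-extracted VGG-19 features.

    feats_real / feats_fake : lists of feature maps (C, H, W) from the
        chosen relu layers for the real and generated images
    style_real / style_fake : the relu3_3 feature maps used for the Gram term
    """
    feature_term = sum(np.sum(np.abs(fr - ff))              # l1 feature reconstruction
                       for fr, ff in zip(feats_real, feats_fake))
    style_term = np.linalg.norm(gram_matrix(style_real)     # Frobenius norm of the
                                - gram_matrix(style_fake))  # Gram-matrix difference
    return feature_term + style_term
```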
Full loss
The full loss for the generator is then defined as:
$$\mathcal{L}(G) = \lambda_{adv}\mathcal{L}_{adv} + \lambda_{pix}\mathcal{L}_{pix} + \lambda_{self}\mathcal{L}_{self} + \lambda_{triple}\mathcal{L}_{triple} + \lambda_{id}\mathcal{L}_{id} + \lambda_{pp}\mathcal{L}_{pp} + \lambda_{tv}\mathcal{L}_{tv} \qquad (10)$$
where, in our set-up, $\lambda_{adv} = 1$, $\lambda_{pix} = 10$, $\lambda_{self} = 100$, $\lambda_{triple} = 100$, $\lambda_{id} = 1$, $\lambda_{pp} = 10$, and $\lambda_{tv} = 10^{-4}$.
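The weighted combination of Eq. (10) can be written as a simple sketch; the weights are the ones reported above, while the individual loss values are placeholders that would be computed from the current batch during training.

```python
# Weighted combination of Eq. (10) with the weights reported in the text.
weights = {"adv": 1.0, "pix": 10.0, "self": 100.0,
           "triple": 100.0, "id": 1.0, "pp": 10.0, "tv": 1e-4}

def total_generator_loss(losses, weights=weights):
    """losses: dict with the same keys as `weights`, holding scalar loss values."""
    return sum(weights[k] * losses[k] for k in weights)
```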
Training Datasets
Training the network requires the use of paired data, i.e. pairs of images from the same subject for which the points are known. However, we approach the training with triplets rather than pairs of images, in order to also be able to compare the output of the network after one and two passes with the ground-truth images. To this end, we use the training partition of the 300VW [30], which is composed of annotated videos of 50 people. For each video, we choose a set of 3000 triplets, where each triplet is composed of random samples from the video. In addition, we use the public partition of the BP4D dataset [39,33], which is composed of videos of 40 subjects performing 8 different tasks. For each of the BP4D videos, we select 500 triplets. We found that using only 90 subjects results in overfitting, which causes the network to lose its ability to preserve identity. To overcome this problem, we augment our training set with unpaired data. In particular, we use a subset of ∼8000 images collected from datasets that are annotated in a similar fashion to that of the 300VW. We use Helen [17], LFPW [3], AFW [42], IBUG [29], and a subset of MultiPIE [8]. To ensure label consistency across datasets we used the facial landmark annotations provided by the 300W challenge [29]. To generate triplets from this data, we apply random affine transformations to the images and points, as well as random image mirroring. This makes every image "paired" with random affine perturbations of its landmarks. While the network will learn to translate non-rigid deformations from the 300VW subset, it will learn to preserve identity and be robust to rigid perturbations, including mirroring, from the subset of unpaired data.
Experiments
All the experiments are implemented in PyTorch [25], using the Adam optimiser [15], with $\beta_1 = 0.5$ and $\beta_2 = 0.9999$. The input images are cropped according to a bounding box defined by the ground-truth landmarks with an added margin of 10 pixels on each side, and then re-scaled to 128 × 128.
The model is trained for 30 epochs, each consisting of 10,000 iterations, which takes approximately 24 hours to complete with two NVIDIA Titan X GPU cards. The batch size is 16, and the learning rate is set to $10^{-4}$ and linearly decreased over 20 epochs to $10^{-6}$. The size of the heatmaps is 6 pixels, corresponding to a unit 2D Gaussian. For each iteration a random batch is taken from either the paired or unpaired data, as described in Section 4.
On the use of a triple consistency loss
First, we want to validate the contribution of the triple consistency loss independently of our proposed approach. To do so, we re-use the StarGAN [5] implementation, as it is accompanied by the authors' trained model. We added the triple consistency loss to the training and we compare the results of the retrained network with those provided by the corresponding authors. The original StarGAN model was trained on the Celeb-A dataset [20], and it applies to a given face a set of attributes, namely "Black Hair", "Blonde Hair", "Brown Hair", "Gender", and "Age". The attributes "Gender" and "Age" have to be understood as generating the opposite attribute to the one given in the input image. We show some results generated by the model in the first row of each example in Fig. 4 and Fig. 2. In these examples, the same input image is used to generate all the target attributes. Then, using the same network, we apply a progressive image generation, whereby the output image after inserting the first attribute is forwarded to the network to create the second attribute, and so forth. In other words, the network takes as input the output of the network w.r.t. the previous attribute. The results of this progressive attribute translation are shown in the second row of Fig. 4. We can see that the images degrade substantially as soon as the network has to deal with a couple of generated images, generating burning-like artifacts. Then, we have re-trained the StarGAN network, just including the triple consistency loss, and repeated the same process as before. The corresponding results are shown in the bottom rows of Fig. 4. As can be seen in the third row, the StarGAN trained with a triple consistency loss keeps a high level of image generation quality with the target attributes, while having a distribution that is closer to that of the input images. This is illustrated in the bottom row.
Next, we show the contribution of the triple consistency loss within our GANnotation. We train two models under the same conditions, with and without the triple consistency loss. At test time, we use a set of images from the test partition of 300VW [30] for which points are available. Each image is first frontalised using the given landmarks (see Section 5.2 for further details), and then sent to a pose-specific angle. The results are shown in Fig. 5, where the top rows correspond to the images generated by a model trained with the triple consistency loss and the bottom rows represent the images generated by a model trained without the triple consistency loss. We show how, after the first pass, both images look alike, being similar to the input image. However, after the second pass, the generator trained without the triple consistency loss recovers the input images, with only subtle changes in contrast. This effect does not occur with the images generated by the network trained with the triple consistency loss, where the images are correctly mapped. We also show how both networks produce similar results after the first pass. We will release both models for further validation. As can be seen, while both networks generate plausible images at the first pass, the one trained without the triple consistency loss fails after subsequent forward passes.
GANnotation
We now evaluate the consistency of our GANnotation for the task of landmark-guided face synthesis. In order to compare our GANnotation w.r.t. the most recent works, we apply a landmark-guided multi-view synthesis, and compare our results against the publicly available code of CR-GAN [32]. We compare our method on the test partition of the 300VW [30]. To generate pose-specific landmarks, we use a shape model trained on the datasets described in Section 4. The shape model includes a set of specific parameters that allow manipulating the in-plane rotation, as well as the view angle (pose). Using the shape model, we first remove both the in-plane rotation and the pose, resulting in the frontalised image given in the middle column. Then, the pose-specific parameter is manipulated to generate the synthetic poses shown in the left and right columns w.r.t. the frontalised face. In addition, when generating the pose-specific landmarks, we randomly perturb the expression-related parameters, so as to generate different faces. The results are shown in Fig. 6. We show both the results of a progressive image generation (first and second rows), as well as the one-to-one mapping (third row). Finally, we compare the results w.r.t. those given by the CR-GAN model. To show the performance of our GANnotation, we attach a video with a reenactment experiment (https://youtu.be/-8r7zexg4yg), where the appearance of a given face is transferred to the points extracted from each frame of another video.
Figure 6: Landmark-guided multi-view synthesis and comparison with CR-GAN [32]. The first and second rows correspond to a progressive image generation (with and without the landmarks for a clear visualisation), whereby the input image (leftmost) is first frontalised (middle column), and then sent progressively to the corresponding views. The third row corresponds to a one-to-one mapping.
Remarks
We have shown that our network yields photo-realistic results whilst maintaining a certain consistency when applying multiple passes to the same network. In this Section, we want to remark on an important aspect that needs consideration when using the triple consistency loss, as well as discuss to what extent the network will preserve identity. The effectiveness of the triple consistency loss. This loss, when used with no self-consistency loss, can overcome the degradation problem completely. However, we have observed that when the self-consistency loss is removed from the training, the network is prone to failure at preserving identity. Therefore, while the triple consistency loss pulls the input image out of the target domain, the self-consistency loss is needed to better preserve identity.
Preserving identity vs. preserving the landmarks. While our proposed approach can preserve identity in most cases, it is important to remark on some cases where the network will likely fail: 1) When the target points force the network to do so. The network will generate plausible faces and will prioritise the target locations over the identity and even gender. An example is depicted in the most extreme views shown in Fig. 6, where the network is forced to locate the eyes where they are targeted, even when it means a less realistic face. Thus, if the target landmarks do not show identity consistency, the network will likely fail to preserve it. 2) When there is a big mismatch between the ground-truth points of the given image and the target landmarks. Given that the network is not provided with any attention mechanism, one of its tasks is to locate which information needs to be transferred to the target points. When the network fails to do so, or the target points are displaced substantially from the input, then identity can be poorly preserved.
Conclusion
In this paper, we have illustrated a drawback of face-to-face synthesis methods that aim to preserve identity by using a self-consistency loss. We have shown that despite the generated images being realistic, they cannot be reused by the network for further tasks. Based on this evidence, we have introduced a triple consistency loss, which encourages the network to produce similar results independently of the number of steps used to reach the target. We have incorporated this loss into a new landmark-guided face synthesis network, coined GANnotation, which allows for high-quality image synthesis even from low-resolution images. We showed how the target landmarks become the ground-truth points, thus making GANnotation a powerful tool. We believe this paper opens up the research question of matching the input and output distributions even when the generated results are plausible images. The models used to generate the images of this paper will be made publicly available.
| 4,379 |
1906.09744
|
2952161373
|
We identify a fundamental issue in the popular Stochastic Neighbour Embedding (SNE and t-SNE), i.e., the "learned" similarity of any two points in high-dimensional space is not defined and cannot be computed. It underlies two previously unexplored issues in the algorithm which have undermined the quality of its final visualisation output and its ability to process large datasets. The issues are: (a) the reference probability in high-dimensional space is set based on entropy, which has an undefined relation with local density; and (b) the use of a data-independent kernel, which leads to the need to determine n bandwidths for a dataset of n points. This paper establishes a principle to set the reference probability via a data-dependent kernel which has a well-defined kernel characteristic linked directly to local density. A solution based on a recent data-dependent kernel called Isolation Kernel addresses the fundamental issue as well as its two ensuing issues. As a result, it significantly improves the quality of the final visualisation output and removes one obstacle that prevents t-SNE from processing large datasets. The solution is extremely simple, i.e., simply replacing the existing data-independent kernel with Isolation Kernel, leaving the rest of the t-SNE procedure unchanged.
|
There are some improvements based on revised Gaussian kernel functions in order to obtain better similarity measurements. @cite_8 proposes a symmetrised SNE; @cite_3 enable t-SNE to accommodate various heavy-tailed embedding similarity functions; and @cite_1 propose an algorithm based on similarity triplets of the form ``A is more similar to B than to C'' to model the local structure of the data more effectively.
|
{
"abstract": [
"This paper considers the problem of learning an embedding of data based on similarity triplets of the form “A is more similar to B than to C”. This learning setting is of relevance to scenarios in which we wish to model human judgements on the similarity of objects. We argue that in order to obtain a truthful embedding of the underlying data, it is insufficient for the embedding to satisfy the constraints encoded by the similarity triplets. In particular, we introduce a new technique called t-Distributed Stochastic Triplet Embedding (t-STE) that collapses similar points and repels dissimilar points in the embedding — even when all triplet constraints are satisfied. Our experimental evaluation on three data sets shows that as a result, t-STE is much better than existing techniques at revealing the underlying data structure.",
"Stochastic Neighbor Embedding (SNE) has shown to be quite promising for data visualization. Currently, the most popular implementation, t-SNE, is restricted to a particular Student t-distribution as its embedding distribution. Moreover, it uses a gradient descent algorithm that may require users to tune parameters such as the learning step size, momentum, etc., in finding its optimum. In this paper, we propose the Heavy-tailed Symmetric Stochastic Neighbor Embedding (HSSNE) method, which is a generalization of the t-SNE to accommodate various heavy-tailed embedding similarity functions. With this generalization, we are presented with two difficulties. The first is how to select the best embedding similarity among all heavy-tailed functions and the second is how to optimize the objective function once the heavy-tailed function has been selected. Our contributions then are: (1) we point out that various heavy-tailed embedding similarities can be characterized by their negative score functions. Based on this finding, we present a parameterized subset of similarity functions for choosing the best tail-heaviness for HSSNE; (2) we present a fixed-point optimization algorithm that can be applied to all heavy-tailed functions and does not require the user to set any parameters; and (3) we present two empirical studies, one for unsupervised visualization showing that our optimization algorithm runs as fast and as good as the best known t-SNE implementation and the other for semi-supervised visualization showing quantitative superiority using the homogeneity measure as well as qualitative advantage in cluster separation over t-SNE.",
"A perpetual calendar that has a flat frame with two cavities. When cards are arranged in each cavity of the frame it will display a specific month. To display a particular number of days in a month, select the proper 31 day card and arrange the vertical slits on the card with the horizontal slit on the frame to obtain 28, 29, 30 or 31 days. To display a particular month choose the name of the month card to be viewed and insert it in the position of the frame provided for it. The few cards that are not used will remain in the frame cavities behind the month being displayed. This perpetual calendar can also display three consecutive months at one time."
],
"cite_N": [
"@cite_1",
"@cite_3",
"@cite_8"
],
"mid": [
"2088247287",
"2115584288",
"1539175566"
]
}
|
IMPROVING THE EFFECTIVENESS AND EFFICIENCY OF STOCHASTIC NEIGHBOUR EMBEDDING WITH ISOLATION KERNEL
|
1 Introduction and Motivation
t-SNE [18] has been a successful and popular dimensionality reduction method for visualisation. It aims to project high-dimensional datasets into lower-dimensional spaces while preserving the similarities between data points, as measured by the KL divergence. The original SNE [8] employs a Gaussian kernel to measure similarity in both high- and low-dimensional spaces. t-SNE replaces the Gaussian kernel with the distance-based similarity $(1 + d_{ij}^2)^{-1}$ (where $d_{ij}$ is the distance between instances i and j) in the low-dimensional space, while retaining the Gaussian kernel for the high-dimensional space.
When using the Gaussian kernel, t-SNE has to fine-tune a bandwidth of the Gaussian kernel centred at each point in the given dataset because Gaussian kernel is independent of data distribution. In other words, t-SNE must determine n bandwidths for a dataset of n points.
If we look into the bandwidth determination process, it is accompanied by a heuristic search with a single global parameter called perplexity such that the Shannon entropy is fixed for all probability distributions at all points, in adapting each bandwidth to the local density of the dataset. As the perplexity can be interpreted as a smooth measure of the effective number of neighbours [18], the method can be interpreted as using a user-specified number of nearest neighbours (aka kNN) in order to determine the n bandwidths (more on this point in the discussion section). Whilst there is a single external parameter perplexity, a bandwidth setting must be optimised for each data point internally.
This becomes the first obstacle in dealing with large datasets due to massive computational cost of the bandwidth search process. In addition, the point-based bandwidth is also the cause of misrepresentation in high-dimensional space under some conditions.
To date, the common practice is still using Gaussian kernel in t-SNE on high-dimensional datasets. However, sound and workable solutions to its drawbacks mentioned above have not been brought up yet. The contributions of this paper are:
(1) Uncovering two deficiencies due to the use of the Gaussian kernel. First, the point-based-bandwidth Gaussian kernel often creates misrepresented structure(s) which do not exist in high-dimensional space under some conditions. Second, the use of the data-independent kernel requires t-SNE to determine n bandwidths for a dataset of n points, despite the fact that a user needs to set one parameter only. This becomes one key obstacle in dealing with large datasets.
(2) Revealing the advantages of using a partition-based data-dependent kernel in t-SNE. First, this kernel represents the true structure(s) in the high-dimensional space under the same condition mentioned above. Second, the data-dependent similarity is set with a single parameter only; this allows it to be computed more efficiently. This enables t-SNE to deal with large-scale datasets without trading off accuracy with faster runtime, without resorting to approximation methods.
(3) Proposing an improvement to t-SNE by simply replacing the data-independent kernel with a data-dependent kernel, leaving the rest of the procedure unchanged.
(4) Verifying the effectiveness and efficiency of the data-dependent kernel in t-SNE.
The adopted data-dependent kernel is Isolation kernel [24,20] and the experiment result shows that using Isolation kernel will improve the performance of t-SNE and solve the issues brought by Gaussian kernel in t-SNE.
The rest of the paper is organised as follows. The current t-SNE and related work are described in Section 2. The deficiencies of using Gaussian kernel is presented in Section 3. In Section 4, we characterise the selected Isolation kernel and Section 5 presents the empirical evaluation of using Isolation kernel in t-SNE. Discussion and conclusions are given in the last two sections.
Basics of t-SNE
Given a dataset $D = \{x_1, \ldots, x_n\}$ in $\mathbb{R}^d$, t-SNE aims to map $D \subset \mathbb{R}^d$ to $D' \subset \mathbb{R}^{d'}$ where $d' \ll d$, such that the similarities between points are preserved as much as possible from the high-dimensional space to the low-dimensional space. As t-SNE is meant as a visualisation tool, usually $d' = 2$.
The similarity between a pair of points $x_i, x_j$ (resp. $x'_i, x'_j$) in the high (resp. low)-dimensional space is measured by a probability $p_{ij}$ (resp. $p'_{ij}$) that point $x_i$ picks $x_j$ as its neighbour. The probability distributions are computed based on distance measures between the points in the respective space. The aim of this family of projection methods is to project the points from $x$ to $x'$ in such a way that the probability distributions $p_{ij}$ and $p'_{ij}$ are as similar as possible.
The similarity between $x_i$ and $x_j$ is measured using a Gaussian kernel as follows:
$$K(x_i, x_j) = \exp\!\Big( \frac{-\|x_i - x_j\|^2}{2\sigma_i^2} \Big) \qquad (1)$$
t-SNE computes the conditional probability $p_{j|i}$ that $x_i$ would pick $x_j$ as its neighbour as follows:
$$p_{j|i} = \frac{K(x_i, x_j)}{\sum_{k \neq i} K(x_i, x_k)} \qquad (2)$$
The probability $p_{ij}$, a symmetric version of $p_{j|i}$, is computed as:
$$p_{ij} = \frac{p_{j|i} + p_{i|j}}{2n} \qquad (3)$$
t-SNE performs a binary search for the best value of $\sigma_i$ such that the perplexity of the conditional distribution equals a fixed perplexity specified by the user. Therefore, the bandwidth is adapted to the density of the data, i.e., small (large) values of $\sigma_i$ are used in dense (sparse) regions. The perplexity is defined as:
$$Perp(P_i) = 2^{H(P_i)} \qquad (4)$$
where $P_i$ represents the conditional probability distribution over all other data points given data point $x_i$, and $H(P_i)$ is the Shannon entropy:
$$H(P_i) = -\sum_j p_{j|i} \log_2 p_{j|i} \qquad (5)$$
The perplexity is a smooth measure of the effective number of neighbours, similar to the number of nearest neighbours k used in kNN methods [8]. Thus, $\sigma_i$ is adapted to the density of the data, i.e., it becomes small for dense data since the k-nearest neighbourhood is small, and vice versa. In addition, [18] point out that there is a monotonically increasing relationship between the perplexity and the bandwidth $\sigma_i$.
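The per-point bandwidth search of step 1 can be sketched as follows (our NumPy sketch, parameterised by $\beta_i = 1/(2\sigma_i^2)$); it is a minimal illustration of Eqs. (2), (4) and (5), not the reference implementation.

```python
import numpy as np

def conditional_probs(sq_dists_i, beta):
    """p_{j|i} of Eq. (2) for one point; sq_dists_i holds the squared
    distances from x_i to all other points (x_i itself excluded)."""
    p = np.exp(-sq_dists_i * beta)
    s = p.sum()
    return p / s if s > 0 else p

def find_bandwidth(sq_dists_i, target_perplexity, n_iter=50, tol=1e-5):
    """Binary search for beta_i so that 2^{H(P_i)} matches the target perplexity."""
    beta, beta_lo, beta_hi = 1.0, 0.0, np.inf
    target_entropy = np.log2(target_perplexity)
    for _ in range(n_iter):
        p = conditional_probs(sq_dists_i, beta)
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # H(P_i), Eq. (5)
        if abs(entropy - target_entropy) < tol:
            break
        if entropy > target_entropy:
            # distribution too flat -> increase beta (smaller sigma_i)
            beta_lo = beta
            beta = beta * 2.0 if np.isinf(beta_hi) else (beta + beta_hi) / 2.0
        else:
            # distribution too sharp -> decrease beta (larger sigma_i)
            beta_hi = beta
            beta = (beta + beta_lo) / 2.0
    return beta, conditional_probs(sq_dists_i, beta)
```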
The similarity between $x'_i$ and $x'_j$ in the low-dimensional space is measured as:
$$s(x'_i, x'_j) = \big(1 + \|x'_i - x'_j\|^2\big)^{-1}$$
and the corresponding probability is defined as:
$$p'_{ij} = \frac{s(x'_i, x'_j)}{\sum_{k \neq l} s(x'_k, x'_l)}$$
The distance-based similarity s is used because it has a heavy-tailed distribution, i.e., it approaches an inverse square law for large pairwise distances. This means that far-apart mapped points have $p'_{ij}$ which are almost invariant to changes in the scale of the low-dimensional space [18].
Note that the probability distributions are defined in such a way that $p_{ii} = 0$ and $p'_{ii} = 0$, i.e., a node does not pick itself as a neighbour.
The location of each point $x' \in D'$ is determined by minimising a cost function based on the (non-symmetric) Kullback-Leibler divergence of the joint probability distribution $P'$ in the low-dimensional space from the joint distribution $P$ in the high-dimensional space:
$$KL(P\,\|\,P') = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{p'_{ij}}$$
The use of the Gaussian kernel K sharpens the cost function in retaining the local structure of the data when mapping from the high-dimensional space to the low-dimensional space. The main computational step in applying t-SNE is to determine the value of bandwidth σ for each data point.
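For completeness, a sketch of the low-dimensional affinities and the KL cost follows; variable names are ours and the sketch is quadratic in the number of points.

```python
import numpy as np

def low_dim_affinities(Y):
    """p'_{ij} from the heavy-tailed similarity s(y_i, y_j) = (1 + ||y_i - y_j||^2)^{-1}."""
    sq_d = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    s = 1.0 / (1.0 + sq_d)
    np.fill_diagonal(s, 0.0)          # a point never picks itself
    return s / np.sum(s)

def kl_cost(P, Q, eps=1e-12):
    """KL(P || P') summed over i != j; P and Q are joint probability matrices."""
    mask = P > 0
    return np.sum(P[mask] * np.log((P[mask] + eps) / (Q[mask] + eps)))
```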
The procedure of t-SNE is provided in Algorithm 1. Note that m = n for small datasets. For large datasets, m ≪ n; this is discussed in Section 5.4.
Algorithm 1 t-SNE(D, Perp, m). Require: D, the dataset $\{x_1, \ldots, x_n\}$; Perp, the perplexity; m, the number of points used in the mapping steps.
Related work
SNE [8] and its variations have been widely applied in dimensionality reduction and visualisation. In addition to t-SNE [18], which is one of the commonly used visualisation methods, many other variations have been proposed to improve SNE in different aspects.
There are improvements based on some revised Gaussian kernel functions in order to get better similarity measurements. [5] propose a symmetrised SNE; [29] enable t-SNE to accommodate various heavy-tailed embedding similarity functions; and [26] propose an algorithm based on similarity triplets of the form "A is more similar to B than to C" so that it can model the local structure of the data more effectively.
Based on the concept of information retrieval, NeRV [27] uses a cost function to find a trade-off between precision and recall of "making true similarities visible and avoiding false similarities", when projecting data into 2-dimensional space for visualising similarity relationships. Unlike SNE which relies on a single Kullback-Leibler divergence, NeRV uses a weighted mixture of two dual Kullback-Leibler divergences in neighbourhood retrieval. Furthermore, JSE [11] enables t-SNE to use a different mixture of Kullback-Leibler divergences, a kind of generalised Jensen-Shannon divergence, to improve the embedding result.
To reduce the runtime of t-SNE, [25] explores tree-based indexing schemes and uses the Barnes-Hut approximation to reduce the time complexity to O(n log n), where n is the data size. This gives a trade-off between speed and mapping quality. To further reduce the time complexity to O(n), [14] utilise a fast Fourier transform to dramatically reduce the time of computing the gradient during each iteration. The method uses vantage-point trees and approximates nearest neighbours in the dissimilarity calculation with rigorous bounds on the approximation error.
Some works focus on analysing the heuristics methods for solving non-convex optimisation problems for the embedding [15,21]. Recently, [1] theoretically analyse this optimisation and provide a framework to make clusterable data visually identifiable in the 2-dimensional embedding space. These works focus on changing the optimisation problem and are not related to similarity measurements.
So far, however, none of these studies has investigated the suitability of the Gaussian kernel in t-SNE. The following two sections will uncover the issues of using the Gaussian kernel in t-SNE and propose to replace it with the Isolation kernel.
Deficiencies of Gaussian kernel when used in t-SNE
Here we list two identified deficiencies of the Gaussian kernel that cause poor visualisation outputs and high computational cost in t-SNE. As the bandwidth $\sigma_i$ of the Gaussian kernel is fixed for each point $x_i$, we identify the following observation:
Observation 1. A Gaussian kernel with point-based bandwidth can misrepresent the structure of a data distribution that has points significantly denser than the majority of the points in a sample generated from the distribution.
Intuitively, as each point-based bandwidth represents one local density only, the Gaussian kernel can misrepresent the relationship between multiple clusters in the joint distribution of the overlap region. We provide two example cases in which misrepresentation occurs, i.e., there are multiple subspace clusters; each is a Gaussian distribution of the same mean with: (i) different variances; and (ii) the same variance.
Let $X_1$ and $X_2$ be two subspace regions in a high-dimensional space, and let the points in the two clusters be generated from the Gaussian distributions $N[0, v_1]$ and $N[0, v_2]$, respectively; the distributions overlap only at the origin O.
In case (i), the variances satisfy $v_1 \ll v_2$. Let point $x_{k_1} \in X_1$ be the point closest to O in the dense cluster, and point $x_{k_2} \in X_2$ be the point closest to O in the sparse cluster. Then, $K(O, x_{k_1}) \gg K(O, x_{k_2})$ because $\|O - x_{k_1}\| \ll \|O - x_{k_2}\|$ and $K(O, \cdot)$ is inversely proportional to distance.
In case (ii), where $v_1 = v_2$, using an appropriate setting in the current t-SNE procedure, each point x in either $X_1$ or $X_2$ would have learned approximately the same bandwidth $\sigma$, except the origin O, because O has at least double the density of any point in either cluster. As a result, $\forall x_i, x_j \in X_1$ (or $\forall x_i, x_j \in X_2$) with $\|O - x_i\| = \|x_j - x_i\|$, $K(O, x_i) \ll K(x_j, x_i)$ because $\sigma_O \ll \sigma_j$.
This means that the origin is very dissimilar to any points in either cluster.
Simulations of the two cases are given below:
(i) Five subspace clusters having different variances in a 50-dimensional space (see the simulation details in footnote 1).
Table 1: Visualisation results of t-SNE using the Gaussian kernel and the Isolation kernel on a 50-dimensional dataset with 5 subspace clusters, each in a different 10-dimensional subspace. The black cross indicates the mapped point of the origin in the high-dimensional space shared by three clusters in different subspaces. Note that in (c), all points of the red cluster (cluster 1) are concentrated and they overlap with the mapped origin. Perplexity and ψ are the key parameters for the Gaussian kernel and the Isolation kernel, respectively. Panels (a)-(c): Gaussian kernel with perplexity = 50, 250, 500; panels (d)-(f): Isolation kernel with ψ = 50, 250, 500.
Using Gaussian kernel, SNE creates a misrepresentation of the structure in the high-dimensional space. The simulation result is shown in the first row in Table 1: t-SNE is unable to identify the joint component of the three clusters in different subspaces which share the same mean at the origin only in the high-dimensional space but nowhere else. Notice that the mapped origin point is misrepresented to be associated with one cluster only; and it is totally disassociated with the other two clusters. In contrast, the same t-SNE algorithm employing the Isolation kernel [24,20], instead of a Gaussian kernel, produces the mapping which truly represents the structure in the high-dimensional space: the three clusters are well separated and yet they share some common points, indicated by the mapped origin point as shown in the second row in Table 1.
(ii) Two subspace clusters in a 200-dimensional dataset, having the same Gaussian distribution N[0, 1] but in different subspaces. Table 2 shows the simulation results. When the Gaussian kernel is used, t-SNE with a small perplexity produces a small bandwidth for every point, leading each point to have almost the same low similarity with every other point in the dataset, as shown in panel (a) in Table 2. Note that the two clusters could not be distinguished in the visualisation if the colours, indicating the ground-truth labels, were not used in the plot. Yet, t-SNE with a large perplexity produces large bandwidths for all points, except the origin, which has a significantly smaller bandwidth; note that the origin (denoted as ×) and the rest of the points are at opposite corners in panel (c) in Table 2. This is because the origin, being the only overlap point between the two clusters, has a significantly higher density than all other points. As both clusters have the same variance, all their points, having low density (relative to the origin), are 'learned' to have approximately the same bandwidth, which is significantly larger than that of the origin. As a result, the origin is very dissimilar to all other points, though all the other points are correctly clustered into two separate groups. In contrast, when the Isolation kernel is used, the origin is always positioned in-between the two clusters, independent of the ψ parameter setting.
Footnote 1 (simulation details for case (i)): [...] four Gaussian distributions. In other words, no clusters share a single relevant attribute. In addition, all clusters have significantly different variances (the variance of the 5th cluster is 625 times larger than that of the 1st cluster). The first three clusters share the same mean, but the last two have different means.
Table 2: Visualisation results of t-SNE with the Gaussian kernel and the Isolation kernel on a 200-dimensional dataset with two equal-density subspace clusters. Note that in (c), the origin is far away from both clusters, although there is a clear gap between the two clusters. The green box in (c) presents a zoom-in view of the two clusters. Panels (a)-(c): Gaussian kernel with perplexity = 50, 210, 300; panels (d)-(f): Isolation kernel with ψ = 50, 210, 300.
Note the above-mentioned deficiency is not restricted to subspace clusters without shared attributes. An example using subspace clusters with shared attributes can be found in Appendix A.
No need for point-based bandwidth in Isolation kernel
The space partitioning mechanism of the Isolation kernel [24,20] determines the size of the partitions in the local region: it produces large partitions in the sparse region and small partitions in the dense region (see Section 4.2 for more details.) As it is partition-based, points in the local neighbourhood are most likely to be in the same partition. As such, points in the intersection of clusters (in different subspaces as shown in Table 1) are almost always captured by the same partition of Isolation kernel.
An example distribution of similarities based on the dataset shown in Table 1 is given in Figure 1. Let $x_{k_1}$ be the origin O's closest point in the dense cluster (i.e., cluster 1), and $x_{k_2}$ be O's closest point in a sparse cluster (cluster 2 or 3). Figure 1b shows that $K_\psi(O, x_{k_1}) \approx K_\psi(O, x_{k_2})$ when the Isolation kernel is used. When the Gaussian kernel is used, $K(O, x_{k_1}) \gg K(O, x_{k_2})$, as shown in Figure 1a. This explains why the points in the intersection are better mapped in the low-dimensional space by using the Isolation kernel than by using the Gaussian kernel.
In other words, the Isolation kernel ensures that the local structure is truly reflected in the similarities among local points in the high-dimensional space, unlike the misrepresentation exhibited in Table 1 and Table 2 when the Gaussian kernel is used. As a result, t-SNE using the Isolation kernel produces improved visualisation quality which has no misrepresentations.
Table 3: Time complexity of t-SNE with the Gaussian kernel versus the Isolation kernel. Step 1: O(rn²) versus O(tψ); Step 2 (matrix calculation): O(m²) versus O(tψm²); Step 3 (t-SNE mapping): O(sm²).
The second deficiency
Low computational efficiency problem with Gaussian kernel
The use of a Gaussian kernel necessitates the search for a local bandwidth for each local point. t-SNE utilises a binary search for the value of $\sigma_i$ that makes the entropy of the distribution over neighbours equal to log K, where K is the effective number of local neighbours or "perplexity" [18]. This search is the key component that determines the success or failure of t-SNE. A gradient descent search has been used successfully to perform the search for the n parameters for small datasets [18]. This formulation has two key limitations for large datasets. First, the need for an n-parameter search poses a real limitation in terms of finding appropriate settings for a large number of parameters. Second, it cannot deal with large datasets because of its low computational efficiency, i.e., the time complexity is O(n²).
High computational efficiency with Isolation Kernel
The computational complexities of the Gaussian kernel and the Isolation kernel [24,20] used in t-SNE are shown in Table 3. Although the parameter ψ of the Isolation kernel corresponds to the bandwidth parameter of the Gaussian kernel, the Isolation kernel needs no optimisation to determine n bandwidths locally. This is because the partitioning mechanism used by the Isolation kernel produces small partitions in dense regions and large partitions in sparse regions; and the sizes of the partitions are monotonically decreasing with respect to ψ. As the local adaptation has already been done during the process of deriving the kernel, no further adaptation is required after the kernel is derived.
Though the Isolation kernel derivation from data takes constant O(tψ) time, it is significantly less than the optimisation required to determine n bandwidths, which takes O(n²) time for the Gaussian kernel. For a large dataset, when using the Gaussian kernel, it is infeasible to estimate a large number of bandwidths with an appropriate degree of accuracy, and its computational cost is prohibitively high. In contrast, the consequence of using the Isolation kernel is that the runtime of step 1 in the t-SNE algorithm is significantly reduced. Thus, the Isolation kernel enables t-SNE to deal with large datasets. More experimental details are provided in Sections 5.4 and 6.3.
The proposed solution: using the Isolation kernel in t-SNE
Since t-SNE needs a data-dependent kernel, we propose to use a recent data-dependent kernel called Isolation kernel [24,20] to replace the data-independent Gaussian kernel in t-SNE.
The Isolation kernel is a perfect match for the task because a data-dependent kernel, by definition, adapts to local distribution without any additional optimisation. The kernel replacement is conducted in the component in the high-dimensional space only, leaving the other components of the t-SNE procedure unchanged.
Isolation kernel
The key idea of the Isolation kernel is to use a space-partitioning strategy to split the data space into different cells, e.g., we uniformly sample ψ points from the given dataset and generate ψ Voronoi cells; the similarity between any two points is then how likely the two points are to fall into the same cell.
The details of Isolation kernel [24,20] are provided below.
Let $D = \{x_1, \ldots, x_n\}$, $x_i \in \mathbb{R}^d$, be a dataset sampled from an unknown probability density function $x_i \sim F$. Moreover, let $\mathcal{H}_\psi(D)$ denote the set of all partitionings H admissible for the given dataset D, where each H covers the entire space of $\mathbb{R}^d$; and each of the ψ isolating partitions $\theta[z] \in H$ isolates one data point z from the rest of the points in a random subset $\mathcal{D} \subset D$, with $|\mathcal{D}| = \psi$. In our implementation, H is a Voronoi diagram generated from $\mathcal{D}$.
Definition 1. For any two points $x, y \in \mathbb{R}^d$, the Isolation kernel of x and y w.r.t. D is defined to be the expectation, taken over the probability distribution on all partitionings $H \in \mathcal{H}_\psi(D)$, that both x and y fall into the same isolating partition $\theta[z] \in H$, $z \in \mathcal{D}$:
$$K_\psi(x, y \mid D) = \mathbb{E}_{\mathcal{H}_\psi(D)}\big[\mathbb{1}(x, y \in \theta[z] \mid \theta[z] \in H)\big] \qquad (6)$$
where 1(·) is an indicator function.
In practice, the Isolation kernel $K_\psi$ is constructed using a finite number of partitionings $H_i$, $i = 1, \ldots, t$, where each $H_i$ is created using $\mathcal{D}_i \subset D$:
$$K_\psi(x, y \mid D) = \frac{1}{t}\sum_{i=1}^{t} \mathbb{1}(x, y \in \theta \mid \theta \in H_i) = \frac{1}{t}\sum_{i=1}^{t}\sum_{\theta \in H_i} \mathbb{1}(x \in \theta)\,\mathbb{1}(y \in \theta) \qquad (7)$$
where θ is a shorthand for θ[z]; and t can usually be set to a default value. ψ is the sharpness parameter and the only parameter of the Isolation kernel. The larger ψ is, the sharper the kernel distribution is. This corresponds to σ in the Gaussian kernel, i.e., the smaller σ is, the narrower the kernel distribution is. Note that t is the number of partitionings and t can be fixed to a large value to ensure the stability of the estimation.
As Equation (7) is quadratic, K ψ is a valid kernel. For brevity, K ψ (x, y) is used to denote K ψ (x, y|D) hereafter.
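A minimal sketch of the Isolation kernel of Eq. (7) using nearest-neighbour-induced Voronoi cells is given below; it is an illustrative implementation under the assumptions stated in the comments, not the authors' released code.

```python
import numpy as np

def isolation_kernel_matrix(X, psi=64, t=100, rng=None):
    """Isolation-kernel similarity matrix following Eq. (7), as a sketch.

    For each of the t partitionings, psi points are sampled uniformly from X
    (psi must not exceed the number of points). The Voronoi cells are realised
    implicitly by assigning every point to its nearest sampled point; two points
    share a cell in one partitioning if they have the same nearest sample, and
    the kernel value is the fraction of the t partitionings in which they do.
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    K = np.zeros((n, n))
    for _ in range(t):
        centres = X[rng.choice(n, size=psi, replace=False)]
        # index of the nearest sampled centre = implicit Voronoi cell membership
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=-1)
        cell = np.argmin(d, axis=1)
        K += (cell[:, None] == cell[None, :])
    return K / t
```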
How Isolation kernel differs from Gaussian kernel
The key difference is that the Isolation kernel adapts to local density distribution, but the Gaussian kernel is independent of the data distribution.
In addition, the technical differences can be observed in two aspects. First, the Isolation kernel has no closed-form expression. Second, it is derived directly from a dataset, without explicit learning or optimisation. Its adaptation to local density is a direct outcome of its isolation mechanism used to partition space, i.e., the mechanism produces large partitions in sparse regions and small partitions in dense regions [24,20]. A natural isolation mechanism that has this characteristic is a Voronoi diagram. Given a sample of the underlining distribution, each Voronoi cell isolates a point from the rest of the points in the sample; and the cells are small in the dense region and large in the sparse region. Note that the Voronoi diagram is obtained very efficiently, i.e., given a sample, nothing else needs to be done in the training stage because boundaries in the Voronoi diagram can be obtained at the testing stage as the equal distance between the two nearest points in the given sample.
The Isolation kernel makes full use of the distributional information in small samples
The Isolation kernel only requires small samples (ψ) for the space partitioning without a computationally expensive process.
Figure 2: Two examples of partitioning H using the nearest neighbour (a Voronoi diagram) on a dataset having two regions of uniform densities, where the left half has a lower density than the right half. (a) ψ = 16; (b) ψ = 64.
A small sample of a dataset contains data distributional information which is sufficient to build a data-dependent kernel.
The Isolation kernel extracts this information in the form of a Voronoi diagram, which depicts the relative densities between regions.
In contrast, using a data-independent measure such as the Gaussian kernel, the distributional information in a dataset is ignored and each point in the input space is treated as an independent point. In order to get the distributional information in the form of variable bandwidths that are adaptive to the local distribution, a separate optimisation process is required, as conducted in step 1 of the t-SNE algorithm.
It is important to note that when they could not handle a large dataset, most methods may use small samples as a mitigation approach, and this inevitably trades off runtime with accuracy. But it is not the case for the Isolation kernel where small samples are the key in achieving high accuracy; and samples larger than the optimal ψ will degrade the accuracy of Isolation kernel. See further discussion on this issue in Section 6.
In other words, by using the Gaussian kernel, t-SNE must employ a computationally expensive approach to get the distributional information in a dataset. It does not exploit the same information which is freely available in small samples of the dataset. The Isolation kernel is a direct approach that makes full use of the distributional information freely available in small samples of a dataset.
The Isolation kernel is well-defined
The Isolation kernel has the following well-defined data-dependent characteristic: two points in a sparse region are more similar than two points of equal inter-point distance in a dense region [24].
Using a specific implementation of the Isolation kernel (see Appendix B), [20] have provided the following lemma (see its proof in their paper):
Lemma 1 [20]. $\forall x_i, x_j \in X_S$ (sparse region) and $\forall x_k, x_l \in X_T$ (dense region) such that $\forall_{y \in X_S,\, z \in X_T}\ \rho(y) < \rho(z)$, the nearest-neighbour-induced Isolation kernel $K_\psi$ has the characteristic that
$$\|x_i - x_j\| = \|x_k - x_l\| \implies K_\psi(x_i, x_j) > K_\psi(x_k, x_l) \qquad (8)$$
where $\|x - y\|$ is the distance between x and y; and $\rho(x)$ denotes the density at point x.
Let $p_{b|a}$ be the probability that $x_a$ would pick $x_b$ as its neighbour. We provide two corollaries from Lemma 1 as follows.
Corollary 1. $x_i$ is more likely to pick $x_j$ as a neighbour than $x_k$ is to pick $x_l$ as a neighbour, i.e., $p_{j|i} > p_{l|k}$, for $\forall_{a,b}\ p_{b|a} \propto K_\psi(x_a, x_b)$. This is because $x_k$ in the dense region is more likely to pick a point closer than $x_l$ as its neighbour, in comparison with $x_i$ picking $x_j$ as a neighbour in the sparse region, given that $\|x_i - x_j\| = \|x_k - x_l\|$.
Corollary 2. $\forall_{a,b}\ p_{b|a} \propto \frac{1}{\bar\rho(X_A)}$, where $x_a, x_b \in X_A$, a region in $\mathcal{X}$; and $\bar\rho$ is the average density of a region.
Using a data-dependent kernel with a well-defined characteristic as specified in Lemma 1, we can establish that the probability $p_{b|a}$ that $x_a$ would pick $x_b$ is inversely proportional to the density of the local region.
This becomes the basis in setting a reference probability in the high-dimensional space.
It is interesting to note that the adaptation of Gaussian kernel by optimising n bandwidths attempts to achieve a similar outcome, as stipulated in Corollaries 1 and 2. Yet, it is unclear that a similar data-dependent characteristic, as stated in Lemma 1, can be formally stated for the adaptive Gaussian kernel. This is because the similarity cannot be computed for all x ∈ R d (except those in the given dataset.)
t-SNE with the Isolation kernel
We propose to replace K with $K_\psi$ in defining $p_{j|i}$ in Equation (2), i.e.,
$$p_{j|i} = \frac{K_\psi(x_i, x_j)}{\sum_{k \neq i} K_\psi(x_i, x_k)} \qquad (9)$$
The rest of the procedure of t-SNE remains unchanged.
The procedure of t-SNE with the Isolation kernel is provided in Algorithm 2.
Note that the only difference between the two algorithms is step 1; and Eq 9 (instead of Eq 2) in step 2.
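With a precomputed Isolation-kernel matrix (e.g., from the sketch in Section 4), step 1 reduces to a row normalisation followed by the usual symmetrisation of Eq. (3), as this sketch shows; no bandwidth search is involved.

```python
import numpy as np

def ik_conditional_probs(K):
    """p_{j|i} of Eq. (9) from an Isolation-kernel similarity matrix K."""
    P = K.astype(float).copy()
    np.fill_diagonal(P, 0.0)                              # a point never picks itself
    row_sums = P.sum(axis=1, keepdims=True)
    return np.divide(P, row_sums, out=np.zeros_like(P), where=row_sums > 0)

def symmetrise(P_cond):
    """Joint probabilities as in Eq. (3): p_ij = (p_{j|i} + p_{i|j}) / (2n)."""
    n = P_cond.shape[0]
    return (P_cond + P_cond.T) / (2 * n)
```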
Empirical Evaluation
This section presents the three evaluation methods we adopt, evaluation results, runtime comparison and a scalability test.
Evaluation measures
We used a qualitative assessment R(k) to evaluate the preservation of k-ary neighbourhoods [12,11,10], defined as follows:
$$R(k) = \frac{(n-1)\,Q(k) - k}{n - 1 - k} \qquad (10)$$
where $Q(k)$ measures the average k-ary neighbourhood agreement between the HD and corresponding LD spaces, i.e., the average fraction of the k nearest neighbours of each point in the HD space that are also among its k nearest neighbours in the LD space. R(k) ∈ [0, 1]; the higher the score, the better the neighbourhoods are preserved in the LD space. In our experiments, we recorded the assessment with k ∈ {0.01n, 0.03n, ..., 0.99n} and produced the curve of k vs R(k).
To aggregate the performance over the different k-ary neighbourhoods, we calculate the area under the R(k) curve in the log plot [11] as:
$$AUC_{RNX} = \frac{\sum_k R(k)/k}{\sum_k 1/k} \qquad (11)$$
$AUC_{RNX}$ assesses the average quality weighted by k, i.e., errors in large neighbourhoods (large k) contribute less than those in small neighbourhoods (small k) to the average quality.
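A brute-force sketch of R(k) and AUC_RNX follows, assuming Q(k) is the average fraction of shared k-nearest neighbours between the two spaces (as defined above); it is quadratic in n and intended only for illustration.

```python
import numpy as np

def neighbourhood_agreement(X_hd, X_ld, k):
    """Q(k): average fraction of the k nearest neighbours preserved
    between the high- and low-dimensional embeddings."""
    def knn_sets(X):
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        return np.argsort(d, axis=1)[:, :k]
    nn_hd, nn_ld = knn_sets(X_hd), knn_sets(X_ld)
    overlaps = [len(set(a) & set(b)) for a, b in zip(nn_hd, nn_ld)]
    return np.mean(overlaps) / k

def rnx(X_hd, X_ld, k):
    """R(k) of Eq. (10): agreement rescaled against the random baseline."""
    n = X_hd.shape[0]
    return ((n - 1) * neighbourhood_agreement(X_hd, X_ld, k) - k) / (n - 1 - k)

def auc_rnx(X_hd, X_ld, ks):
    """AUC_RNX of Eq. (11): log-weighted (1/k) average of R(k) over the chosen ks."""
    r = np.array([rnx(X_hd, X_ld, k) for k in ks])
    w = 1.0 / np.array(ks, dtype=float)
    return np.sum(r * w) / np.sum(w)
```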
In addition, the purpose of many methods of dimensionality reduction is to identify HD clusters in the LD space, such as in a 2-dimensional scatter plot. Since all the datasets we used for evaluation have ground truth (labels), we can use measures for clustering validation to evaluate whether all clusters can be correctly identified after they are projected into the LD space. Here we select two popular indices of cluster validation, i.e., the Davies-Bouldin (DB) index [6] and the Calinski-Harabasz (CH) index [3]. Their details are given as follows. Let x be an instance in a cluster $C_i$ which has $n_i$ instances with centre $c_i$. The Davies-Bouldin (DB) index is obtained as
$$DB = \frac{1}{N_C}\sum_i \max_{j,\, j\neq i}\left\{ \left[ \frac{1}{n_i}\sum_{x\in C_i}\|x - c_i\|_2 + \frac{1}{n_j}\sum_{x\in C_j}\|x - c_j\|_2 \right] \Big/\ \|c_i - c_j\|_2 \right\} \qquad (12)$$
where $N_C$ is the number of clusters in the dataset.
The Calinski-Harabasz (CH) index is calculated as
$$CH = \frac{\sum_i n_i \|c_i - c\|^2 / (N_C - 1)}{\sum_i \sum_{x \in C_i} \|x - c_i\|^2 / (n - N_C)} \qquad (13)$$
where c is the centre of the dataset.
Both measures take the similarity of points within a cluster and the similarity between clusters into consideration, but in different ways. These measures assign the best score to the algorithm that produces clusters with low intra-cluster distances and high inter-cluster distances. Note that the higher the CH score, the better the cluster distribution; while the lower the DB score is, the better the cluster distribution is.
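Both indices are available in scikit-learn, so the evaluation can be sketched as below; note that scikit-learn's implementations may differ in minor details from Eqs. (12)-(13) as printed.

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score, calinski_harabasz_score

# Y: 2-d embedding produced by t-SNE (min-max normalised, as in the text);
# labels: ground-truth class labels used purely for evaluation.
Y = np.random.rand(300, 2)                  # placeholder embedding
labels = np.random.randint(0, 3, size=300)  # placeholder labels

db = davies_bouldin_score(Y, labels)        # lower is better
ch = calinski_harabasz_score(Y, labels)     # higher is better
print(f"DB = {db:.3f}, CH = {ch:.1f}")
```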
All algorithms used in the following experiments were implemented in Matlab 2019b and were run on a machine with 14 cores (Intel Xeon E5-2690 v4 @ 2.59 GHz) and 256GB memory. All datasets were normalised using min-max normalisation so that each attribute is in [0,1] before the experiments began. We also applied min-max normalisation to the t-SNE results before calculating the DB and CH scores.
Evaluation results
This section presents the results of the utility evaluation of the Isolation kernel and the Gaussian kernel in t-SNE using 21 real-world datasets with different data sizes and dimensions. We report the best performance of each algorithm from a systematic parameter search over the ranges shown in Table 4. Note that there is only one manual parameter ψ to control the partitioning mechanism, and the other parameter t can be fixed to a default number. Table 5 shows the results of the two kernels used in t-SNE. The Isolation kernel performs better on 18 out of 21 datasets in terms of $AUC_{RNX}$, which means that the Isolation kernel enables t-SNE to preserve the local neighbourhoods much better than the Gaussian kernel. With regard to cluster quality, the Isolation kernel performs better than the Gaussian kernel on 18 out of 21 datasets in terms of both DB and CH. Notice that when the Gaussian kernel is better, the performance gaps are usually small in any of the three measures. Overall, the Isolation kernel is better than the Gaussian kernel on 16 out of 21 datasets in all three measures. The reverse is true on one dataset only, i.e., News20. The visualisation result on News20 indicates that there are significant overlaps between the two clusters in this dataset. This is reflected in the $AUC_{RNX}$ results, which are significantly less than a random assignment ($AUC_{RNX}$ = 0.5).
The visualization result of News20 is shown in Appendix C.
On the COIL20 dataset, we have identified a structural misrepresentation issue with the Gaussian kernel, similar to the one shown in Table 2. Table 6 shows the five clusters where the Gaussian kernel has misrepresented structures in the high-dimensional space. The 3-dimensional results show that the Isolation kernel depicts a more nuanced structural relationship between the five clusters, whereas the Gaussian kernel depicts them as five disparate clusters, as shown in Table 6. Also, note that a reference point × is close to all five clusters when the Isolation kernel is used, but it is far from many clusters when the Gaussian kernel is used.
Runtime comparison
Generally, both the Gaussian kernel and the Isolation kernel have quadratic time and space complexities. However, the Gaussian kernel in the original t-SNE needs a large number of iterations to search for the optimal local bandwidth for each point. As a result, the Gaussian kernel takes a much longer time in step 1 of the algorithm than the Isolation kernel. Figure 3 presents the two runtime comparisons of t-SNE with the two kernels on a synthetic dataset. Figure 3(a) shows that the Gaussian kernel is much slower than the Isolation kernel in similarity calculations. This is mainly due to the search required to tune the n bandwidths in step 1 of the algorithm. It is interesting to note that though both similarities have n² time complexity, the constant is significantly lower for the Isolation kernel: if the data size is increased 10 times, from 10,000 to 100,000, the Gaussian kernel increases its runtime 685 times, whereas the Isolation kernel increases it 91 times only. As a result, with a dataset of 100,000 data points, the Isolation kernel is two orders of magnitude faster than the Gaussian kernel (887 seconds versus 72,196 seconds). Figure 3(b) shows the runtime of the mapping process in step 3 of Algorithms 1 and 2, which is the same for both algorithms. It is not surprising that their runtimes are about the same in this step, regardless of the kernel employed.
Table 7 compares the CPU runtime of the Gaussian kernel and the Isolation kernel used in t-SNE on four real-world datasets. The t-SNE with the Isolation kernel is up to one order of magnitude faster than the t-SNE with the Gaussian kernel in the first two steps. In addition, the Isolation kernel is amenable to GPU acceleration [20]. Our experiment shows that the runtime of the Isolation kernel can be sped up by two orders of magnitude with a GPU machine, e.g., from 54 CPU seconds to 0.24 GPU seconds for a dataset of 25,000 data points.
Table 6: (a) and (c) show the t-SNE visualisation results on COIL20 in a two-dimensional space. (b) and (d) show the five clusters and a reference point (indicated as × with the class label "R") on t-SNE visualisation results in a three-dimensional space. Rows: Gaussian kernel, panels (a) and (b); Isolation kernel, panels (c) and (d). Columns: t-SNE in 2d; t-SNE on 5 selected classes in 3d.
Table 8: t-SNE visualisation results on the MNIST and MNIST8M datasets.
Scalability testing
Here we show that the Isolation kernel enables t-SNE to deal with large datasets because step 1 takes constant time (once the parameters are fixed), rather than O(n²) as when a Gaussian kernel is used.
This allows t-SNE to deal with a dataset with millions of data points in step 1, while using a subsample in steps 2 & 3 to visualise the dataset in a low-dimensional space.
To demonstrate this ability, we use the MNIST8M dataset [17] with 8.1 million points in step 1; and then use either the MNIST dataset or a subsample of 10,000 data points from MNIST8M in steps 2 & 3 of t-SNE. The results of t-SNE with the Isolation kernel are shown in the last two columns in Table 8. The results show that IK can get good CH scores with small ψ values. It took 334s (ψ = 2048) in steps 1 and 2, and 972s in step 3. Note that t-SNE with Gaussian kernel cannot be directly applied on this massive dataset in the same manner because it would take too long to complete step 1, as shown in Figure 3(a).
The use of a subsample in steps 2 and 3 was previously suggested by [18]. However, the suggestion was to replace the Gaussian kernel with a graph similarity that employs a random walk method. This graph similarity approach has the same limitation as the Gaussian kernel because of its high time complexity. It requires a neighbourhood graph to be generated before a random walk kernel (or any graph kernel) can be used to measure similarities. While many graph kernels (see e.g., [9]) may be applied here, the key obstacle is the generation of the neighbourhood graph which has at least O(n 2 ) time complexity.
In summary, employing Isolation kernel is the only method that takes constant time in step 1. Meanwhile, subsampling in step 2 and 3 enables t-SNE to process large-scale datasets without compromising the reference probability that needs to be established in step 1.
Discussion
The proposed method can benefit existing variants of t-SNE
The common feature of existing variants of t-SNE is that they all use the Gaussian kernel. The proposed idea can be applied to variants of stochastic neighbour embedding, e.g., NeRV [27] and JSE [11], since they employ the same algorithm procedure as t-SNE. The only difference is the use of variants of the cost function, i.e., a type 1 or type 2 mixture of KL divergences.
In addition, the Isolation kernel can be used in existing methods which aim to speed up t-SNE in step 3 of the algorithm. This is discussed in Section 6.3.
Isolation kernel performs optimally with small samples
The finding that small samples (as the ψ value) produce better visualisation results than large samples was formally analysed in the context of nearest neighbour anomaly detection [23]. The work is motivated by the previous finding that small samples can produce better detection accuracy for some anomaly detectors than large samples (e.g., [16,22]). The theoretical analysis based on computational geometry reveals that the geometry of the data distribution has a direct impact on the sample size setting which is essential to produce an optimal nearest neighbour anomaly detector [23]. In a simple geometry such as a Gaussian distribution, a sample size of one data point (at the mean of the Gaussian distribution) yields the optimal nearest neighbour anomaly detector; a sample of more data points will produce a worse performing detector. In a more complex geometry of data distribution (e.g., a mixture of multiple Gaussian distributions), while the optimal sample size is more than one data point, a sample size over the optimal one also produces a worse performing detector. See [23] for details.
The above result can explain the effect of small samples in Isolation kernel described in Section 4.3: the optimal sample size is the representative sample for the underlying geometry of data distribution, allowing the Isolation kernel to model relative similarities between different regions most effectively.
In summary, most methods use small samples only as a mitigation approach when they fail to handle large datasets, which comes at the cost of lower accuracy. In contrast, algorithms employing the Isolation kernel can process large datasets without trading accuracy for efficiency, because of the small sample it requires. While ψ of the Isolation kernel serves the primary purpose of a kernel parameter like the bandwidth parameter of the Gaussian kernel, the small sample size enables algorithms that employ the Isolation kernel to deal with large datasets without compromising the accuracy of the task.
Methods to speed up t-SNE
Scalability is an open issue for applying unsupervised distance metric learning approaches on large datasets [28]. As mentioned before, currently, there are two ways to speed up t-SNE: subsampling (which is a mitigation approach discussed in Section 4.3), and another is via some approximation to reduce runtime in step 3.
The two approximation methods mentioned in the literature review are (i) the Barnes-Hut algorithm in conjunction with the dual-tree algorithm [25], and (ii) interpolating onto an equispaced grid in order to use the fast Fourier transform to perform the convolution required in step 3 of the t-SNE algorithm [14]. However, these approximation methods sacrifice accuracy for efficiency. For example, opt-SNE [2] utilises Kullback-Leibler divergence evaluation to automatically identify the tailored parameters in the optimisation procedure of t-SNE, in order to reduce the iteration time and improve the embedding quality. Nevertheless, all of these methods are still based on Gaussian kernel. Therefore, they still have the same deficiency of misrepresented structures as the original t-SNE, as discussed in Section 3.1.1. Appendix E and Appendix F show examples of these outcomes of FIt-SNE [14] and opt-SNE [2], respectively.
In a nutshell, the proposed method of using Isolation kernel in t-SNE offers (i) the only way to establish the reference probability in step 1 using a large dataset (without parallelisation); and (ii) a way to speed up t-SNE, which is an alternative to existing speedup methods. The use of a subsample, as a mitigation approach, in step 1 compromises the accuracy of reference probability. The use of an approximation method in step 3 reduces the quality of the dimensionality reduction. These existing methods in speeding up t-SNE still employ Gaussian kernel; and thus they fail to address the two deficiencies we have identified.
Conclusions
This paper identifies two deficiencies in t-SNE due to the use of the Gaussian kernel. First, the point-based-bandwidth Gaussian kernel often creates misrepresented structure(s) which do not exist in the given dataset under some conditions. Second, the data-independent Gaussian kernel largely increases the computation load, resulting from the need to determine n bandwidths for a dataset of n points, and is thus unable to deal with large datasets. Though some methods have been suggested to trade off accuracy for faster running speed, the underlying issue due to the use of the Gaussian kernel remains unresolved.
Since the root cause of these deficiencies is the use of a data-independent kernel, we propose to simply replace Gaussian kernel with a data-dependent kernel called Isolation kernel.
We show that the use of Isolation kernel in t-SNE overcomes the drawback of misrepresenting some structures in the data, which often occurs when Gaussian kernel is applied in t-SNE. Also, the use of Isolation kernel yields a more efficient similarity computation because data-dependent Isolation kernel has only one parameter that needs to be tuned. Unlike the existing methods in speeding up t-SNE, this efficient feature of Isolation kernel enables t-SNE to deal with large-scale datasets without trading off accuracy.
Appendix A. Visualisation results of t-SNE on subspace clusters having some shared attributes
Here we use a dataset with three subspace clusters where all clusters share only two attributes. The three clusters have the same Gaussian distribution N[0, 1]. Cluster 1 has 500 points with relevant attributes in dimensions #1 to #51; cluster 2 has 500 points with relevant attributes in dimensions #50 to #100; and cluster 3 has 20 points with relevant attributes in dimensions #50 to #51. All irrelevant attributes of each cluster have zero values. Because most attributes of cluster 3 are zero, the overall distance between cluster 3 and cluster 1 or cluster 2 is much smaller than the distance between cluster 1 and cluster 2. Table 9 shows the visualisation results of t-SNE with the Gaussian kernel and the Isolation kernel on the above-mentioned 100-dimensional dataset. It can be seen from the table that the Isolation kernel with small ψ values presents the cluster structure correctly, i.e., the third cluster is in the centre and close to clusters 1 and 2.
In contrast, t-SNE with the Gaussian kernel using perplexity = 50 shows only a small gap between clusters 1 and 2, and the separation between cluster 3 and clusters 1 & 2 is not clear. If the perplexity is increased to 250, three points from cluster 3 that are close to the origin (including the origin) become far away from clusters 1 and 2. This is because they got much smaller bandwidths than all other points due to the high density around the origin. As a result, they are very dissimilar to most other points.

Table 9: Visualisation results of t-SNE with Gaussian kernel and Isolation kernel on a 100-dimensional dataset with three subspace clusters. Note that in (c), three points (including the origin) from cluster 3 are far away from clusters 1 and 2, as indicated with the red arrows.

where ℓ_p(x, y) is a distance function; we use p = 2 (Euclidean distance) in this paper.

Table 10 compares the contours of the Isolation kernel on two different data distributions with different ψ values. It shows that the Isolation kernel is adaptive to the local density. Under a uniform data distribution, the Isolation kernel's contour is symmetric with respect to the reference point at (0.5, 0.5). However, on the Parkinson dataset, the contour shows that, for points having equal inter-point distance from the reference point x at (0.5, 0.5), points in the sparse region are more similar to x than points in the dense region are to x. In addition, the larger the ψ, the sharper the kernel distribution of the Isolation kernel, as shown in Table 10. This is because a larger ψ produces more partitions (or Voronoi cells) of smaller sizes. This means that two points are less likely to fall into the same cell unless they are very close.
While this implementation of the Isolation kernel produces a contour similar to that of an exponential kernel k(x, y) = exp( −‖x − y‖ / (2σ²) ) under a uniform density distribution, different implementations have different contours. For example, using axis-parallel partitionings to implement the Isolation kernel produces a contour (with a diamond shape) which is more akin to that of the Laplacian kernel k(x, y) = exp( −‖x − y‖ / σ ) under a uniform density distribution [24]. Of course, both the exponential and Laplacian kernels, like the Gaussian kernel, are data-independent.
Appendix C. t-SNE visualisation on News20
We compare the visualisation results of News20 with different parameter settings in Table 11. It is interesting to note that t-SNE using the Isolation kernel with a small ψ produces better visualisation results, with more separable clusters, than t-SNE using the Gaussian kernel with a high perplexity, although the Isolation kernel got slightly lower evaluation measure values (compare Figures (c) and (d) in Table 11). However, the two clusters overlap significantly in most cases.
We suspect that the overlapping issue is caused by the sparsity. To verify it, we use the same data distribution from Table 1 and increase the dimensionality of the 5 subspace clusters. The results in Table 4 show that t-SNE with both

Table 11: Visualisation result of t-SNE on News20. t-SNE produced the best DB scores when using the Gaussian kernel with perplexity = 3700 and the Isolation kernel with ψ = 85.
The adaptive Gaussian kernel is defined as:

K_AG(x, y) = exp( −‖x − y‖² / (σ_x σ_y) )   (15)

where σ_x is the distance between x and x's k-th nearest neighbour.
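A small sketch of this kernel as we read Equation (15), with the k-th nearest-neighbour distances computed by brute force (an illustrative reading, not the implementation used in the experiments):

import numpy as np

def adaptive_gaussian_kernel(X, k=7):
    # K_AG(x, y) = exp(-||x - y||^2 / (sigma_x * sigma_y)),
    # where sigma_x is the distance from x to its k-th nearest neighbour.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    d = np.sqrt(sq)
    sigma = np.sort(d, axis=1)[:, k]        # column 0 is the self-distance 0
    return np.exp(-sq / (sigma[:, None] * sigma[None, :] + 1e-12))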
However, replacing the Gaussian kernel in t-SNE with either of these kernels produces poor outcomes. For example, on the Segment and Spam datasets, the adaptive Gaussian kernel produced AUC RN X scores of 0.35 and 0.22, respectively; and the kNN kernel yielded AUC RN X scores of 0.38 and 0.28, respectively. They are significantly poorer than those produced using the Gaussian kernel or Isolation kernel shown in Table 5. We postulate that this is because a global k is unable to make these kernels sufficiently adaptive to local distribution.
It is interesting to note that the current method used to get a data-dependent kernel is to begin with a data-independent kernel such as the Gaussian kernel, and then find ways to make it data-dependent. This is an indirect approach. The Isolation kernel is a direct approach to obtaining a data-dependent kernel: it is derived directly from a given dataset, without the intermediary of a data-independent kernel.
Appendix E. Visualisation results of Fast interpolation-based t-SNE
FIt-SNE [14] addresses the runtime issue in step 3 of the t-SNE algorithm only. Figure 5 demonstrates the visualisation results of FIt-SNE [14] on two datasets. It is clear that FIt-SNE has the same deficiency of misrepresented structures as in t-SNE, due to the use of Gaussian kernel, as discussed in Section 3.1.1.
Figure 5: Visualisation of FIt-SNE on two datasets: (a) 5 subspace clusters connected at one point; (b) COIL20.

Figure 6 shows the FIt-SNE results on the MNIST and MNIST8M datasets. FIt-SNE's results are worse than those of t-SNE based on either GK or IK in terms of the CH scores on both the MNIST and MNIST8M datasets, and so are the visualisation outcomes. Note that without the colours to differentiate between classes, most of the classes shown in Figure 6 cannot be identified as separate classes in the FIt-SNE results produced from the MNIST8M dataset.
FIt-SNE ran faster than t-SNE because of its grid-based approximation, and because it is implemented in C++ with multi-threading. The price it paid for this efficiency is worse visualisation outcomes.
Note that on the MNIST8M dataset, we could only use 2 million data points in FIt-SNE because of its high memory usage. In contrast, with the Isolation kernel, we could run t-SNE (in MatLab without multi-threading) on the same machine using the entire 8.1 million data points of MNIST8M (shown in Table 8).

Appendix F. Visualisation results of opt-SNE

Figure 7 shows the visualisation results on three datasets using opt-SNE. The source code is obtained from https://github.com/omiq-ai/Multicore-opt-SNE; all parameters in opt-SNE use the default settings except that we search for the best perplexity in the same range as for t-SNE stated in Table 4. As expected, opt-SNE produced similar results to t-SNE, having misrepresented structures in Figures 7a and 7b. On MNIST, opt-SNE got a slightly worse result than t-SNE (CH = 6129 versus CH = 6452) because it split the green cluster into two parts, as shown in Figure 7c.
| 8,697 |
1906.09744
|
2952161373
|
We identify a fundamental issue in the popular Stochastic Neighbour Embedding (SNE and t-SNE), i.e., the "learned" similarity of any two points in high-dimensional space is not defined and cannot be computed. It underlines two previously unexplored issues in the algorithm which have undermined the quality of its final visualisation output and its ability to process large datasets. The issues are: (a) the reference probability in high-dimensional space is set based on entropy, which has an undefined relation with local density; and (b) the use of a data-independent kernel, which leads to the need to determine n bandwidths for a dataset of n points. This paper establishes a principle to set the reference probability via a data-dependent kernel which has a well-defined kernel characteristic linked directly to local density. A solution based on a recent data-dependent kernel called Isolation Kernel addresses the fundamental issue as well as its two ensuing issues. As a result, it significantly improves the quality of the final visualisation output and removes one obstacle that prevents t-SNE from processing large datasets. The solution is extremely simple, i.e., simply replacing the existing data-independent kernel with Isolation Kernel, leaving the rest of the t-SNE procedure unchanged.
|
To reduce the runtime of t-SNE, @cite_2 explores tree-based indexing schemes and uses the Barnes-Hut approximation to reduce the time complexity to @math . This gives a trade-off between speed and mapping quality. To further reduce the time complexity to @math , @cite_6 utilise a fast Fourier transform to dramatically reduce the time of computing the gradient during each iteration. The method uses vantage-point trees and approximate nearest neighbours in dissimilarity calculation with rigorous bounds on the approximation error.
|
{
"abstract": [
"t-distributed stochastic neighbor embedding (t-SNE) is widely used for visualizing single-cell RNA-sequencing (scRNA-seq) data, but it scales poorly to large datasets. We dramatically accelerate t-SNE, obviating the need for data downsampling, and hence allowing visualization of rare cell populations. Furthermore, we implement a heatmap-style visualization for scRNA-seq based on one-dimensional t-SNE for simultaneously visualizing the expression patterns of thousands of genes. Software is available at https: github.com KlugerLab FIt-SNE and https: github.com KlugerLab t-SNE-Heatmaps .",
"The paper investigates the acceleration of t-SNE--an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots--using two tree-based algorithms. In particular, the paper develops variants of the Barnes-Hut algorithm and of the dual-tree algorithm that approximate the gradient used for learning t-SNE embeddings in O(N log N). Our experiments show that the resulting algorithms substantially accelerate t-SNE, and that they make it possible to learn embeddings of data sets with millions of objects. Somewhat counterintuitively, the Barnes-Hut variant of t-SNE appears to outperform the dual-tree variant."
],
"cite_N": [
"@cite_6",
"@cite_2"
],
"mid": [
"2777699329",
"1875842236"
]
}
|
IMPROVING THE EFFECTIVENESS AND EFFICIENCY OF STOCHASTIC NEIGHBOUR EMBEDDING WITH ISOLATION KERNEL
|
1 Introduction and Motivation

t-SNE [18] has been a successful and popular dimensionality reduction method for visualisation. It aims to project high-dimensional datasets into lower-dimensional spaces while preserving the similarities between data points, as measured by the KL divergence. The original SNE [8] employs a Gaussian kernel to measure similarity in both high- and low-dimensional spaces. t-SNE replaces the Gaussian kernel with the distance-based similarity (1 + d_{ij}²)^{−1} (where d_{ij} is the distance between instances i and j) in the low-dimensional space, while retaining the Gaussian kernel for the high-dimensional space.
When using the Gaussian kernel, t-SNE has to fine-tune a bandwidth of the Gaussian kernel centred at each point in the given dataset because Gaussian kernel is independent of data distribution. In other words, t-SNE must determine n bandwidths for a dataset of n points.
If we look into the bandwidth determination process, it is accompanied by using a heuristic search with a single global parameter called perplexity such that the Shannon entropy is fixed for all probability distributions at all points in adapting each bandwidth to the local density of the dataset. As the perplexity can be interpreted as a smooth measure of the effective number of neighbours [18], the method can be interpreted as using a user-specified number of nearest neighbours (aka kNN) in order to determine the n bandwidths (more on this point in the discussion section). Whilst there is a single external parameter perplexity, a bandwidth setting must be optimised for each data point internally.
This becomes the first obstacle in dealing with large datasets due to massive computational cost of the bandwidth search process. In addition, the point-based bandwidth is also the cause of misrepresentation in high-dimensional space under some conditions.
To date, the common practice is still to use the Gaussian kernel in t-SNE on high-dimensional datasets. However, sound and workable solutions to the drawbacks mentioned above have not yet been proposed. The contributions of this paper are:
(1) Uncovering two deficiencies due to the use of the Gaussian kernel. First, the point-based-bandwidth Gaussian kernel often creates misrepresented structure(s) which do not exist in high-dimensional space under some conditions. Second, the use of the data-independent kernel requires t-SNE to determine n bandwidths for a dataset of n points, despite the fact that a user needs to set one parameter only. This becomes one key obstacle in dealing with large datasets.
(2) Revealing the advantages of using a partition-based data-dependent kernel in t-SNE. First, this kernel represents the true structure(s) in the high-dimensional space under the same condition mentioned above. Second, the data-dependent similarity is set with a single parameter only; this allows it to be computed more efficiently. This enables t-SNE to deal with large-scale datasets without trading off accuracy with faster runtime, without resorting to approximation methods.
(3) Proposing an improvement to t-SNE by simply replacing the data-independent kernel with a data-dependent kernel, leaving the rest of the procedure unchanged.
(4) Verifying the effectiveness and efficiency of the data-dependent kernel in t-SNE.
The adopted data-dependent kernel is the Isolation kernel [24,20]; our experiments show that using the Isolation kernel improves the performance of t-SNE and resolves the issues caused by the Gaussian kernel in t-SNE.
The rest of the paper is organised as follows. The current t-SNE and related work are described in Section 2. The deficiencies of using the Gaussian kernel are presented in Section 3. In Section 4, we characterise the selected Isolation kernel, and Section 5 presents the empirical evaluation of using the Isolation kernel in t-SNE. Discussion and conclusions are given in the last two sections.
Basics of t-SNE
Given a dataset D = {x_1, . . . , x_n} in R^d, t-SNE aims to map D ⊂ R^d to D' ⊂ R^{d'} where d' ≪ d, such that the similarities between points are preserved as much as possible from the high-dimensional space to the low-dimensional space. As t-SNE is meant as a visualisation tool, usually d' = 2.
The similarity between a pair of points x_i, x_j (resp. x'_i, x'_j) in the high- (resp. low-) dimensional space is measured by a probability p_{ij} (resp. p'_{ij}) that point x_i picks x_j as its neighbour. The probability distributions are computed based on distance measures between the points in the respective space. The aim of this family of projection methods is to project the points from x to x' in such a way that the probability distributions p_{ij} and p'_{ij} are as similar as possible.
The similarity between x i and x j is measured using a Gaussian kernel as follows:
K(x_i, x_j) = exp( −‖x_i − x_j‖² / (2σ_i²) )   (1)
t-SNE computes the conditional probability p j|i that x i would pick x j as its neighbour as follows:
p_{j|i} = K(x_i, x_j) / Σ_{k≠i} K(x_i, x_k)   (2)
The probability p ij , a symmetric version of p j|i , is computed as:
p_{ij} = ( p_{j|i} + p_{i|j} ) / (2n)   (3)
t-SNE performs a binary search for the best value of σ i such that the perplexity of the conditional distribution equals a fixed perplexity specified by the user. Therefore, the bandwidth is adapted to the density of the data, i.e., small (large) values of σ i are used in dense (sparse) regions. The perplexity is defined as:
Perp(P_i) = 2^{H(P_i)}   (4)

where P_i represents the conditional probability distribution over all other data points given data point x_i and H(P_i) is the Shannon entropy:

H(P_i) = − Σ_j p_{j|i} log₂ p_{j|i}   (5)
The perplexity is a smooth measure of the effective number of neighbours, similar to the number of nearest neighbours k used in kNN methods [8]. Thus, σ i is adapted to the density of the data, i.e., it becomes small for dense data since the k-nearest neighbourhood is small and vice versa. In addition, [18] point out that there is a monotonically increasing relationship between perplexity and the bandwidth σ i .
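For concreteness, a minimal NumPy sketch of this per-point bandwidth search (a binary search on σ_i so that the entropy of p_{·|i} matches log₂(perplexity), following Eqs. (1)-(5)); the function names, iteration count and tolerances below are our own illustrative choices, not the reference implementation:

import numpy as np

def conditional_probs(sq_dists_i, sigma):
    # p_{j|i} of Eqs. (1)-(2): Gaussian affinities over the squared distances
    # from point i to every other point (the point itself is excluded by the caller).
    p = np.exp(-sq_dists_i / (2.0 * sigma ** 2))
    p = np.maximum(p, 1e-12)
    return p / p.sum()

def find_bandwidth(sq_dists_i, perplexity, n_iter=50):
    # Binary search for sigma_i such that 2^H(P_i) equals the target perplexity (Eqs. (4)-(5)).
    lo, hi = 1e-10, 1e10
    target = np.log2(perplexity)
    for _ in range(n_iter):
        sigma = np.sqrt(lo * hi)                 # geometric midpoint of the bracket
        p = conditional_probs(sq_dists_i, sigma)
        entropy = -np.sum(p * np.log2(p))        # Shannon entropy H(P_i)
        if entropy > target:
            hi = sigma                           # entropy too high: shrink the bandwidth
        else:
            lo = sigma
    return sigma

Repeating this search for every point is the O(n²) step that the Isolation kernel removes later in the paper.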
The similarity between x'_i and x'_j in the low-dimensional space is measured as:

s(x'_i, x'_j) = (1 + ‖x'_i − x'_j‖²)^{−1}

and the corresponding probability is defined as:

p'_{ij} = s(x'_i, x'_j) / Σ_{k≠l} s(x'_k, x'_l)

The distance-based similarity s is used because it has a heavy-tailed distribution, i.e., it approaches an inverse square law for large pairwise distances. This means that far-apart mapped points have p'_{ij} which are almost invariant to changes in the scale of the low-dimensional space [18].
Note that the probability distributions are defined in such a way that p_{ii} = 0 and p'_{ii} = 0, i.e., a point does not pick itself as a neighbour.
The location of each point x' ∈ D' is determined by minimising a cost function based on the (non-symmetric) Kullback-Leibler divergence of the joint probability distribution P' in the low-dimensional space from the joint distribution P in the high-dimensional space:

KL(P ‖ P') = Σ_{i≠j} p_{ij} log( p_{ij} / p'_{ij} )
The use of the Gaussian kernel K sharpens the cost function in retaining the local structure of the data when mapping from the high-dimensional space to the low-dimensional space. The main computational step in applying t-SNE is to determine the value of bandwidth σ for each data point.
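A small companion sketch (ours, with brute-force pairwise distances) of the low-dimensional probabilities and the KL cost that the mapping step minimises:

import numpy as np

def low_dim_probs(Y):
    # Heavy-tailed similarities s(x'_i, x'_j) = (1 + ||x'_i - x'_j||^2)^(-1),
    # normalised over all pairs; the diagonal is zeroed so that p'_{ii} = 0.
    sq = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    s = 1.0 / (1.0 + sq)
    np.fill_diagonal(s, 0.0)
    return s / s.sum()

def kl_cost(P, P_low):
    # KL(P || P') summed over i != j; both matrices sum to 1 and have zero diagonals.
    mask = (P > 0) & (P_low > 0)
    return float(np.sum(P[mask] * np.log(P[mask] / P_low[mask])))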
The procedure of t-SNE is provided in Algorithm 1. Note that m = n for small datasets. For large datasets, m ≪ n; this is discussed in Section 5.4.

Algorithm 1 t-SNE(D, Perp, m). Require: D, a dataset {x_1, . . . , x_n}; Perp, the perplexity; m, the subsample size used in steps 2 and 3.

t-SNE [18] and its variations have been widely applied in dimensionality reduction and visualisation. In addition to t-SNE [18], which is one of the commonly used visualisation methods, many other variations have been proposed to improve SNE in different aspects.
There are improvements based on some revised Gaussian kernel functions in order to get better similarity measurements. [5] propose a symmetrised SNE; [29] enable t-SNE to accommodate various heavy-tailed embedding similarity functions; and [26] propose an algorithm based on similarity triplets of the form "A is more similar to B than to C" so that it can model the local structure of the data more effectively.
Based on the concept of information retrieval, NeRV [27] uses a cost function to find a trade-off between precision and recall of "making true similarities visible and avoiding false similarities", when projecting data into 2-dimensional space for visualising similarity relationships. Unlike SNE which relies on a single Kullback-Leibler divergence, NeRV uses a weighted mixture of two dual Kullback-Leibler divergences in neighbourhood retrieval. Furthermore, JSE [11] enables t-SNE to use a different mixture of Kullback-Leibler divergences, a kind of generalised Jensen-Shannon divergence, to improve the embedding result.
To reduce the runtime of t-SNE, [25] explores tree-based indexing schemes and uses the Barnes-Hut approximation to reduce the time complexity to O(nlog(n)), where n is the data size. This gives a trade-off between speed and mapping quality. To further reduce the time complexity to O(n), [14] utilise a fast Fourier transform to dramatically reduce the time of computing the gradient during each iteration. The method uses vantage-point trees and approximates nearest neighbours in dissimilarity calculation with rigorous bounds on the approximation error.
Some works focus on analysing the heuristics methods for solving non-convex optimisation problems for the embedding [15,21]. Recently, [1] theoretically analyse this optimisation and provide a framework to make clusterable data visually identifiable in the 2-dimensional embedding space. These works focus on changing the optimisation problem and are not related to similarity measurements.
So far, however, none of these studies has investigated the suitability of Gaussian kernel in t-SNE. The following two sections will uncover the issues of using Gaussian kernel in t-SNE and propose to replace it with Isoaltion kernel.
Deficiencies of Gaussian kernel when used in t-SNE
Here we list two identified deficiencies of the Gaussian kernel that cause poor visualisation outputs and high computational cost in t-SNE. As the bandwidth σ_i of the Gaussian kernel is fixed for each point x_i, we identify the following observation:

Observation 1. The Gaussian kernel with point-based bandwidth can misrepresent the structure of a data distribution, having points significantly denser than the majority of the points in a sample generated from the distribution.
Intuitively, as each point-based bandwidth represents one local density only, the Gaussian kernel can misrepresent the relationship between multiple clusters in the joint distribution of the overlap region. We provide two example cases in which misrepresentation occurs, i.e., there are multiple subspace clusters; each is a Gaussian distribution of the same mean with: (i) different variances; and (ii) the same variance.
Let X 1 and X 2 be two subspace regions in a high-dimensional space, and points in the two clusters are generated from the Gaussian distributions N [0, v 1 ] and N [0, v 2 ], respectively; and the distributions only overlap at the origin O.
In case (i), the variances satisfy v_1 ≪ v_2. Let point x_{k1} ∈ X_1 be the point closest to O in the dense cluster, and point x_{k2} ∈ X_2 be the point closest to O in the sparse cluster. Then,

K(O, x_{k1}) ≫ K(O, x_{k2}) because ‖O − x_{k1}‖ ≪ ‖O − x_{k2}‖ and K(O, ·) is inversely proportional to distance.

In case (ii) where v_1 = v_2, using an appropriate setting in the current t-SNE procedure, each point x in either X_1 or X_2 would have learned approximately the same bandwidth σ, except the origin O, because O has at least double the density of any point in either cluster. As a result, for ∀ x_i, x_j ∈ X_1 (or ∀ x_i, x_j ∈ X_2) with

‖O − x_i‖ = ‖x_j − x_i‖, we have K(O, x_i) ≪ K(x_j, x_i) because σ_O ≪ σ_j.
This means that the origin is very dissimilar to any points in either cluster.
Simulations of the two cases are given below:
(i) Five subspace clusters having different variances in a 50-dimensional space (see the simulation details in footnote 1).

Table 1: Visualisation results of t-SNE using the Gaussian kernel and the Isolation kernel on a 50-dimensional dataset with 5 subspace clusters, each in a different 10-dimensional subspace. The black cross indicates the mapped point of the origin in the high-dimensional space shared by three clusters in different subspaces. Note that in (c), all points of the red cluster (cluster 1) are concentrated and they overlap with the mapped origin. perplexity and ψ are the key parameters for the Gaussian kernel and the Isolation kernel, respectively. Panels: Gaussian kernel: (a) perplexity = 50, (b) perplexity = 250, (c) perplexity = 500; Isolation kernel: (d) ψ = 50, (e) ψ = 250, (f) ψ = 500.
Using Gaussian kernel, SNE creates a misrepresentation of the structure in the high-dimensional space. The simulation result is shown in the first row in Table 1: t-SNE is unable to identify the joint component of the three clusters in different subspaces which share the same mean at the origin only in the high-dimensional space but nowhere else. Notice that the mapped origin point is misrepresented to be associated with one cluster only; and it is totally disassociated with the other two clusters. In contrast, the same t-SNE algorithm employing the Isolation kernel [24,20], instead of a Gaussian kernel, produces the mapping which truly represents the structure in the high-dimensional space: the three clusters are well separated and yet they share some common points, indicated by the mapped origin point as shown in the second row in Table 1.
(ii) Two subspace clusters in a 200-dimensional dataset, both having the same Gaussian distribution N[0, 1] but in different subspaces (see footnote 2). Table 2 shows the simulation results. When the Gaussian kernel is used, t-SNE with a small perplexity produces a small bandwidth for every point, so that each point has almost the same low similarity to every other point in the dataset, as shown in Figure (a) in Table 2. Note that the two clusters could not be distinguished in the visualisation if the colours, indicating the ground-truth labels, were not used in the plot. Yet, t-SNE with a large perplexity produces large bandwidths for all points, except the origin which has a significantly smaller bandwidth; note that the origin (denoted as ×) and the rest of the points are at opposite corners in Figure (c) in Table 2. This is because the origin, being the only overlap point between the two clusters, has a significantly higher density than all other points. As both clusters have the same variance, all their points have low density (relative to the origin) and are 'learned' to have approximately the same bandwidth, which is significantly larger than that of the origin. As a result, the origin is very dissimilar to all other points, though all the other points are correctly clustered into two separate groups. In contrast, when the Isolation kernel is used, the origin is always positioned in-between the two clusters, independent of the ψ parameter setting.
[Footnote 1, continued] ... four Gaussian distributions. In other words, no clusters share a single relevant attribute. In addition, all clusters have significantly different variances (the variance of the 5th cluster is 625 times larger than that of the 1st cluster). The first three clusters share the same mean, but the last two have different means.

Table 2: Visualisation results of t-SNE with Gaussian kernel and Isolation kernel on a 200-dimensional dataset with two equal-density subspace clusters. Note that in (c), the origin is far away from both clusters, although there is a clear gap between the two clusters. The green box in (c) presents a zoom-in view of the two clusters.
Panels in Table 2: Gaussian kernel: (a) perplexity = 50, (b) perplexity = 210, (c) perplexity = 300; Isolation kernel: (d) ψ = 50, (e) ψ = 210, (f) ψ = 300.
Note the above-mentioned deficiency is not restricted to subspace clusters without shared attributes. An example using subspace clusters with shared attributes can be found in Appendix A.
No need for point-based bandwidth in Isolation kernel
The space partitioning mechanism of the Isolation kernel [24,20] determines the size of the partitions in the local region: it produces large partitions in the sparse region and small partitions in the dense region (see Section 4.2 for more details.) As it is partition-based, points in the local neighbourhood are most likely to be in the same partition. As such, points in the intersection of clusters (in different subspaces as shown in Table 1) are almost always captured by the same partition of Isolation kernel.
An example distribution of similarities based on the dataset shown in Table 1 is given in Figure 1. Let x_{k1} be the origin O's closest point in the dense cluster (i.e., cluster 1); and x_{k2} be O's closest point in a sparse cluster (cluster 2 or 3). Figure 1b shows that K_ψ(O, x_{k1}) ≈ K_ψ(O, x_{k2}) when the Isolation kernel is used. When the Gaussian kernel is used, K(O, x_{k1}) ≫ K(O, x_{k2}), as shown in Figure 1a. This explains why the points in the intersection are better mapped into the low-dimensional space by using the Isolation kernel than by using the Gaussian kernel.
In other words, the Isolation kernel ensures that the local structure is truly reflected in the similarities among local points in the high-dimensional space, unlike the misrepresentation exhibited in Table 1 and Table 2 when the Gaussian kernel is used. As a result, t-SNE using the Isolation kernel produces improved visualisation quality which has no misrepresentations.

Table 3: Time complexities of the two kernels in t-SNE (Gaussian kernel vs Isolation kernel): Step 1: O(rn²) vs O(tψ); Step 2 (matrix calculation): O(m²) vs O(tψm²); Step 3 (t-SNE mapping): O(sm²) for both.
The second deficiency
Low computational efficiency problem with Gaussian kernel
The use of a Gaussian kernel necessitates the search for a local bandwidth for each point. t-SNE utilises a binary search for the value of σ_i that makes the entropy of the distribution over neighbours equal to log K, where K is the effective number of local neighbours or "perplexity" [18]. This search is the key component that determines the success or failure of t-SNE. A gradient descent search has been used successfully to perform the search for n parameters for small datasets [18]. This formulation has two key limitations for large datasets. First, the need for an n-parameter search poses a real limitation in terms of finding appropriate settings for a large number of parameters. Second, it cannot deal with large datasets because of its low computational efficiency, i.e., the time complexity is O(n²).
High computational efficiency with Isolation Kernel
The computational complexities of the Gaussian kernel and the Isolation kernel [24,20] used in t-SNE are shown in Table 3. Although the parameter ψ of the Isolation kernel corresponds to the bandwidth parameter of the Gaussian kernel, the Isolation kernel needs no optimisation to determine n bandwidths locally. This is because the partitioning mechanism used by the Isolation kernel produces small partitions in dense regions and large partitions in sparse regions; and the sizes of the partitions are monotonically decreasing with respect to ψ. As the local adaptation has already been done during the process of deriving the kernel, no further adaptation is required after the kernel is derived.

The Isolation kernel derivation from data takes constant O(tψ) time, which is significantly less than the optimisation required to determine n bandwidths, which takes O(n²) time for the Gaussian kernel. For a large dataset, when using the Gaussian kernel, it is infeasible to estimate a large number of bandwidths with an appropriate degree of accuracy, and its computational cost is prohibitively high. In contrast, the consequence of using the Isolation kernel is that the runtime of step 1 in the t-SNE algorithm is significantly reduced. Thus, the Isolation kernel enables t-SNE to deal with large datasets. More experimental details are provided in Sections 5.4 and 6.3.
The proposed solution: using the Isolation kernel in t-SNE
Since t-SNE needs a data-dependent kernel, we propose to use a recent data-dependent kernel called Isolation kernel [24,20] to replace the data-independent Gaussian kernel in t-SNE.
The Isolation kernel is a perfect match for the task because a data-dependent kernel, by definition, adapts to local distribution without any additional optimisation. The kernel replacement is conducted in the component in the high-dimensional space only, leaving the other components of the t-SNE procedure unchanged.
Isolation kernel
The key idea of the Isolation kernel is to use a space-partitioning strategy to split the data space into different cells, e.g., we uniformly sample ψ points from the given dataset and generate ψ Voronoi cells; then the similarity between any two points is how likely the two points are to fall into the same cell.
The details of Isolation kernel [24,20] are provided below.
Let D = {x_1, . . . , x_n}, x_i ∈ R^d, be a dataset sampled from an unknown probability density function x_i ∼ F. Moreover, let H_ψ(D) denote the set of all partitionings H admissible for the given dataset D, where each H covers the entire space of R^d; and each of the ψ isolating partitions θ[z] ∈ H isolates one data point z from the rest of the points in a random subset 𝒟 ⊂ D with |𝒟| = ψ. In our implementation, H is a Voronoi diagram generated from 𝒟.

Definition 1. For any two points x, y ∈ R^d, the Isolation kernel of x and y wrt D is defined to be the expectation, taken over the probability distribution on all partitionings H ∈ H_ψ(D), that both x and y fall into the same isolating partition θ[z] ∈ H, z ∈ 𝒟:
K_ψ(x, y | D) = E_{H_ψ(D)}[ 1(x, y ∈ θ[z] | θ[z] ∈ H) ]   (6)
where 1(·) is an indicator function.
In practice, the Isolation kernel K_ψ is constructed using a finite number of partitionings H_i, i = 1, . . . , t, where each H_i is created using a random subset 𝒟_i ⊂ D:

K_ψ(x, y | D) = (1/t) Σ_{i=1}^{t} 1(x, y ∈ θ | θ ∈ H_i) = (1/t) Σ_{i=1}^{t} Σ_{θ∈H_i} 1(x ∈ θ) 1(y ∈ θ)   (7)
where θ is a shorthand for θ[z]; and t can usually be set to a default value. ψ is the sharpness parameter and the only parameter of the Isolation kernel. The larger ψ is, the sharper the kernel distribution is. This corresponds to σ in the Gaussian kernel, i.e., the smaller σ is, the narrower the kernel distribution is. Note that t is the number of partitionings and t can be fixed to a large value to ensure the stability of the estimation.
As Equation (7) is quadratic, K ψ is a valid kernel. For brevity, K ψ (x, y) is used to denote K ψ (x, y|D) hereafter.
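A minimal sketch of this construction using nearest-neighbour (Voronoi) partitionings, as in Equation (7); the brute-force cell assignment and the function name are our own simplifications of [24,20], not their released code:

import numpy as np

def isolation_kernel_matrix(X, psi=16, t=200, seed=0):
    # K_psi(x, y | D) estimated with t partitionings: in each one, psi points are
    # sampled and every point is assigned to the Voronoi cell of its nearest sample;
    # the similarity of two points is the fraction of partitionings in which they share a cell.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = np.zeros((n, n))
    for _ in range(t):
        centres = X[rng.choice(n, size=psi, replace=False)]          # the sample D_i
        sq = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)    # n x psi squared distances
        cell = sq.argmin(axis=1)                                      # Voronoi cell of each point
        K += (cell[:, None] == cell[None, :])                         # 1(x and y in the same cell)
    return K / t

Note that no per-point parameter is fitted here: the adaptation to local density comes entirely from the Voronoi cells being small in dense regions and large in sparse regions.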
How Isolation kernel differs from Gaussian kernel
The key difference is that the Isolation kernel adapts to local density distribution, but the Gaussian kernel is independent of the data distribution.
In addition, the technical differences can be observed in two aspects. First, the Isolation kernel has no closed-form expression. Second, it is derived directly from a dataset, without explicit learning or optimisation. Its adaptation to local density is a direct outcome of its isolation mechanism used to partition space, i.e., the mechanism produces large partitions in sparse regions and small partitions in dense regions [24,20]. A natural isolation mechanism that has this characteristic is a Voronoi diagram. Given a sample of the underlining distribution, each Voronoi cell isolates a point from the rest of the points in the sample; and the cells are small in the dense region and large in the sparse region. Note that the Voronoi diagram is obtained very efficiently, i.e., given a sample, nothing else needs to be done in the training stage because boundaries in the Voronoi diagram can be obtained at the testing stage as the equal distance between the two nearest points in the given sample.
The Isolation kernel makes full use of the distributional information in small samples
The Isolation kernel only requires small samples (ψ) for the space partitioning without a computationally expensive process.
Figure 2: Two examples of partitioning H using the nearest neighbour (a Voronoi diagram) on a dataset having two regions of uniform densities, where the left half has a lower density than the right half: (a) ψ = 16; (b) ψ = 64.
A small sample of a dataset contains data distributional information which is sufficient to build a data-dependent kernel.
The Isolation kernel extracts this information in the form of a Voronoi diagram, which depicts the relative densities between regions.
In contrast, using a data-independent measure such as the Gaussian kernel, the distributional information in a dataset is ignored and each point in the input space is treated as an independent point. In order to get the distributional information in the form of variable bandwidths that are adaptive to the local distribution, a separate optimisation process is required, as conducted in step 1 of the t-SNE algorithm.
It is important to note that when they could not handle a large dataset, most methods may use small samples as a mitigation approach, and this inevitably trades off runtime with accuracy. But it is not the case for the Isolation kernel where small samples are the key in achieving high accuracy; and samples larger than the optimal ψ will degrade the accuracy of Isolation kernel. See further discussion on this issue in Section 6.
In other words, by using the Gaussian kernel, t-SNE must employ a computationally expensive approach to get the distributional information in a dataset. It does not exploit the same information which is freely available in small samples of the dataset. The Isolation kernel is a direct approach that makes full use of the distributional information freely available in small samples of a dataset.
The Isolation kernel is well-defined
The Isolation kernel has the following well-defined data-dependent characteristic: two points in a sparse region are more similar than two points of equal inter-point distance in a dense region [24].
Using a specific implementation of the Isolation kernel (see Appendix B), [20] have provided the following Lemma (see its proof in their paper):

Lemma 1 [20]. ∀ x_i, x_j ∈ X_S (sparse region) and ∀ x_k, x_l ∈ X_T (dense region) such that ∀ y ∈ X_S, z ∈ X_T: ρ(y) < ρ(z), the nearest-neighbour-induced Isolation kernel K_ψ has the characteristic that

‖x_i − x_j‖ = ‖x_k − x_l‖  implies  K_ψ(x_i, x_j) > K_ψ(x_k, x_l)   (8)

where ‖x − y‖ is the distance between x and y; and ρ(x) denotes the density at point x.
Let p_{b|a} be the probability that x_a would pick x_b as its neighbour. We provide two corollaries from Lemma 1 as follows.

Corollary 1. x_i is more likely to pick x_j as a neighbour than x_k is to pick x_l as a neighbour, i.e., p_{j|i} > p_{l|k}, given that ∀ a, b: p_{b|a} ∝ K_ψ(x_a, x_b).

This is because x_k in the dense region is more likely to pick a point closer than x_l as its neighbour, in comparison with x_i picking x_j as a neighbour in the sparse region, given that ‖x_i − x_j‖ = ‖x_k − x_l‖.

Corollary 2. ∀ a, b: p_{b|a} ∝ 1/ρ̄(X_A), where x_a, x_b ∈ X_A, X_A is a region in X; and ρ̄ is the average density of the region.
Using a data-dependent kernel with a well-defined characteristic as specified in Lemma 1, we can establish that the probability that x a would pick x b , p b|a , is inversely proportional to the density of the local region.
This becomes the basis in setting a reference probability in the high-dimensional space.
It is interesting to note that the adaptation of Gaussian kernel by optimising n bandwidths attempts to achieve a similar outcome, as stipulated in Corollaries 1 and 2. Yet, it is unclear that a similar data-dependent characteristic, as stated in Lemma 1, can be formally stated for the adaptive Gaussian kernel. This is because the similarity cannot be computed for all x ∈ R d (except those in the given dataset.)
t-SNE with the Isolation kernel
We propose to replace K with K_ψ in defining p_{j|i} in Equation (2), i.e.,

p_{j|i} = K_ψ(x_i, x_j) / Σ_{k≠i} K_ψ(x_i, x_k).   (9)
The rest of the procedure of t-SNE remains unchanged.
The procedure of t-SNE with the Isolation kernel is provided in Algorithm 2.
Note that the only difference between the two algorithms is step 1; and Eq 9 (instead of Eq 2) in step 2.
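The changed part of the pipeline can be sketched as follows (our own helper; it takes any precomputed Isolation-kernel matrix, e.g. from a routine such as the one sketched in Section 4.1, and produces the joint probabilities that the unchanged mapping step consumes):

import numpy as np

def ik_joint_probabilities(K):
    # Step 2 with the Isolation kernel: Eq. (9) for the conditionals, then Eq. (3) to symmetrise.
    K = K.astype(float)
    np.fill_diagonal(K, 0.0)                                       # a point never picks itself
    P_cond = K / np.maximum(K.sum(axis=1, keepdims=True), 1e-12)   # p_{j|i} of Eq. (9)
    return (P_cond + P_cond.T) / (2.0 * K.shape[0])                # p_{ij} of Eq. (3)

The resulting matrix P can be handed to any existing t-SNE optimiser in place of the Gaussian-kernel affinities, which is all that Algorithm 2 changes.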
Empirical Evaluation
This section presents the three evaluation methods we adopt, evaluation results, runtime comparison and a scalability test.
Evaluation measures
We used a qualitative assessment R(k) to evaluate the preservation of k-ary neighbourhoods [12,11,10], defined as follows:
R(k) = ( (n − 1) Q(k) − k ) / ( n − 1 − k )   (10)

where Q(k) measures the k-ary neighbourhood agreement between the HD and corresponding LD spaces, i.e., the average fraction of common members between the k nearest neighbourhoods of each point in the two spaces. R(k) ∈ [0, 1]; the higher the score, the better the neighbourhoods are preserved in the LD space. In our experiments, we recorded the assessment with k ∈ {0.01n, 0.03n, ..., 0.99n} and produced the curve of k vs R(k).
To aggregate the performance over the different k-ary neighbourhoods, we calculate the area under the R(k) curve in the log plot [11] as:
AUC_RNX = ( Σ_k R(k)/k ) / ( Σ_k 1/k )   (11)
AUC RN X assesses the average quality weighted by k, i.e., errors in large neighbourhoods with large k contribute less than that with small k to the average quality.
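A sketch of how R(k) and AUC_RNX can be computed for small datasets (brute-force neighbour ranking; the k-ary agreement Q(k) is taken as the average fraction of shared k nearest neighbours between the two spaces, following [12,11,10]):

import numpy as np

def rnx_curve_and_auc(X_hd, X_ld, ks):
    # Returns R(k) of Eq. (10) for the given k values and the log-weighted AUC of Eq. (11).
    n = X_hd.shape[0]
    def ranked_neighbours(X):
        d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d, np.inf)
        return np.argsort(d, axis=1)
    nn_hd, nn_ld = ranked_neighbours(X_hd), ranked_neighbours(X_ld)
    R = []
    for k in ks:
        k = int(k)
        shared = np.mean([len(set(nn_hd[i, :k]) & set(nn_ld[i, :k])) for i in range(n)])
        Q = shared / k                                  # k-ary neighbourhood agreement Q(k)
        R.append(((n - 1) * Q - k) / (n - 1 - k))       # Eq. (10)
    R, ks = np.array(R), np.array(ks, dtype=float)
    auc = float(np.sum(R / ks) / np.sum(1.0 / ks))      # Eq. (11)
    return R, auc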
In addition, the purpose of many methods of dimensionality reduction is to identify HD clusters in the LD space, such as in a 2-dimensional scatter plot. Since all the datasets we used for evaluation have ground truth (labels), we can use measures for clustering validation to evaluate whether all clusters can be correctly identified after they are projected into the LD space. Here we select two popular indices of cluster validation, i.e., the Davies-Bouldin (DB) index [6] and the Calinski-Harabasz (CH) index [3]. Their details are given as follows. Let x be an instance in a cluster C_i which has n_i instances with centre c_i. The Davies-Bouldin (DB) index can be obtained as

DB = (1/N_C) Σ_i max_{j≠i} { [ (1/n_i) Σ_{x∈C_i} ‖x − c_i‖_2 + (1/n_j) Σ_{x∈C_j} ‖x − c_j‖_2 ] / ‖c_i − c_j‖_2 }   (12)

where N_C is the number of clusters in the dataset.
The Calinski-Harabasz (CH) index is calculated as

CH = [ Σ_i n_i ‖c_i − c‖² / (N_C − 1) ] / [ Σ_i Σ_{x∈C_i} ‖x − c_i‖² / (n − N_C) ]   (13)
where c is the centre of the dataset.
Both measures take the similarity of points within a cluster and the similarity between clusters into consideration, but in different ways. These measures assign the best score to the algorithm that produces clusters with low intra-cluster distances and high inter-cluster distances. Note that the higher the CH score, the better the cluster distribution; while the lower the DB score is, the better the cluster distribution is.
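Both indices are available off the shelf, e.g. in scikit-learn; a small sketch of scoring an embedding against the ground-truth labels (the min-max rescaling mirrors the preprocessing described in the next paragraph):

import numpy as np
from sklearn.metrics import davies_bouldin_score, calinski_harabasz_score

def embedding_cluster_scores(Y, labels):
    # Min-max normalise the low-dimensional embedding, then compute
    # DB (Eq. (12), lower is better) and CH (Eq. (13), higher is better).
    Y = (Y - Y.min(axis=0)) / (Y.max(axis=0) - Y.min(axis=0) + 1e-12)
    return davies_bouldin_score(Y, labels), calinski_harabasz_score(Y, labels)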
All algorithms used in the following experiments were implemented in Matlab 2019b and were run on a machine with 14 cores (Intel Xeon E5-2690 v4 @ 2.59 GHz) and 256GB memory. 4 All datasets were normalised using the min-max normalisation to yield each attribute to be in [0,1] before the experiments began. We also use the min-max normalisation on the t-SNE results before calculating DB and CH scores.
Evaluation results
This section presents the results of the utility evaluation of the Isolation kernel and the Gaussian kernel in t-SNE using 21 real-world datasets with different data sizes and dimensions. We report the best performance of each algorithm with a systematic parameter search over the ranges shown in Table 4. Note that there is only one manual parameter ψ to control the partitioning mechanism, and the other parameter t can be fixed to a default number. Table 5 shows the results of the two kernels used in t-SNE. The Isolation kernel performs better on 18 out of 21 datasets in terms of AUC_RNX, which means that the Isolation kernel enables t-SNE to preserve the local neighbourhoods much better than the Gaussian kernel. With regard to the cluster quality, the Isolation kernel performs better than the Gaussian kernel on 18 out of 21 datasets in terms of both DB and CH. Notice that when the Gaussian kernel is better, the performance gaps are usually small in any of the three measures. Overall, the Isolation kernel is better than the Gaussian kernel on 16 out of 21 datasets in all three measures. The reverse is true on one dataset only, i.e., News20. The visualisation result on News20binary indicates there are significant overlaps between the two clusters in this dataset. This is reflected in the AUC_RNX results which are significantly less than a random assignment (AUC_RNX = 0.5).
The visualization result of News20 is shown in Appendix C.
On the COIL20 dataset, we have identified a structural misrepresentation issue with the Gaussian kernel, similar to the one shown in Table 2. Table 6 shows the five clusters where the Gaussian kernel has misrepresented structures in the high-dimensional space. The 3-dimensional results show that the Isolation kernel depicts a more nuanced structural relationship between the five clusters, whereas the Gaussian kernel depicts them as five disparate clusters, as shown in Table 6. Also, note that a reference point × is close to all five clusters when the Isolation kernel is used, but it is far from many clusters when the Gaussian kernel is used.
Runtime comparison
Generally, both the Gaussian kernel and the Isolation kernel have quadratic time and space complexities. However, the Gaussian kernel in the original t-SNE needs a large number of iterations to search for the optimal local bandwidth for each point. As a result, the Gaussian kernel takes a much longer time in step 1 of the algorithm than the Isolation kernel. Figure 3 presents the two runtime comparisons of t-SNE with the two kernels on a synthetic dataset. Figure 3(a) shows that the Gaussian kernel is much slower than the Isolation kernel in similarity calculations. This is mainly due to the search required to tune the n bandwidths in step 1 of the algorithm. It is interesting to note that though both similarities have n² time complexity, the constant is significantly lower for the Isolation kernel: if the data size is increased 10 times from 10,000 to 100,000, the Gaussian kernel increases its runtime 685 times, whereas the Isolation kernel increases 91 times only. As a result, with a dataset of 100,000 data points, the Isolation kernel is two orders of magnitude faster than the Gaussian kernel (887 seconds versus 72,196 seconds). In addition, the Isolation kernel is amenable to GPU acceleration [20]; our experiment shows that the runtime of the Isolation kernel can be sped up by two orders of magnitude with a GPU machine, e.g., from 54 CPU seconds to 0.24 GPU seconds for a dataset of 25,000 data points.

Figure 3(b) shows the runtime of the mapping process in step 3 of Algorithms 1 and 2, which is the same for both algorithms. It is not surprising that their runtimes are about the same in this step, regardless of the kernel employed. Table 7 compares the CPU runtime of the Gaussian kernel and the Isolation kernel used in t-SNE on four real-world datasets. The t-SNE with the Isolation kernel is up to one order of magnitude faster than the t-SNE with the Gaussian kernel in the first two steps.

Table 6: (a) and (c) show the t-SNE visualisation results on COIL20 in a two-dimensional space; (b) and (d) show the five clusters and a reference point (indicated as × with the class label "R") on t-SNE visualisation results in a three-dimensional space. Rows: Gaussian kernel (a), (b); Isolation kernel (c), (d). Columns: t-SNE in 2d; t-SNE on 5 selected classes in 3d.

Table 8: t-SNE visualisation results on the MNIST and MNIST8M datasets.
Scalability testing
Here we show that the Isolation kernel enables t-SNE to deal with large datasets because step 1 takes constant time (once the parameters are fixed), rather than n 2 when a Gaussian kernel is used.
This allows t-SNE to deal with a dataset with millions of data points in step 1, while using a subsample in steps 2 & 3 to visualise the dataset in a low-dimensional space.
To demonstrate this ability, we use the MNIST8M dataset [17] with 8.1 million points in step 1; and then use either the MNIST dataset or a subsample of 10,000 data points from MNIST8M in steps 2 & 3 of t-SNE. The results of t-SNE with the Isolation kernel are shown in the last two columns in Table 8. The results show that IK can get good CH scores with small ψ values. It took 334s (ψ = 2048) in steps 1 and 2, and 972s in step 3. Note that t-SNE with Gaussian kernel cannot be directly applied on this massive dataset in the same manner because it would take too long to complete step 1, as shown in Figure 3(a).
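A sketch of this workflow (our own simplification): the partitionings of step 1 are sampled from the full dataset, while the pairwise affinity matrix needed by steps 2 and 3 is only computed for a subsample of size m:

import numpy as np

def ik_affinities_with_subsample(X_full, m=10000, psi=2048, t=100, seed=0):
    # Step 1 on the full dataset: draw t samples of psi points each (constant cost per sample).
    # Steps 2-3 then only need the m x m Isolation-kernel matrix of a subsample.
    rng = np.random.default_rng(seed)
    n = X_full.shape[0]
    centre_sets = [X_full[rng.choice(n, size=psi, replace=False)] for _ in range(t)]
    sub = X_full[rng.choice(n, size=m, replace=False)]
    K = np.zeros((m, m), dtype=np.float32)
    sub_sq = (sub ** 2).sum(1)
    for C in centre_sets:
        # squared distances via the expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
        d = sub_sq[:, None] + (C ** 2).sum(1)[None, :] - 2.0 * sub @ C.T
        cell = d.argmin(axis=1)
        K += (cell[:, None] == cell[None, :])
    return sub, K / t      # feed K into Eq. (9) and run steps 2-3 of Algorithm 2 on sub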
The use of a subsample in steps 2 and 3 was previously suggested by [18]. However, the suggestion was to replace the Gaussian kernel with a graph similarity that employs a random walk method. This graph similarity approach has the same limitation as the Gaussian kernel because of its high time complexity. It requires a neighbourhood graph to be generated before a random walk kernel (or any graph kernel) can be used to measure similarities. While many graph kernels (see e.g., [9]) may be applied here, the key obstacle is the generation of the neighbourhood graph which has at least O(n 2 ) time complexity.
In summary, employing the Isolation kernel is the only method that takes constant time in step 1. Meanwhile, subsampling in steps 2 and 3 enables t-SNE to process large-scale datasets without compromising the reference probability that needs to be established in step 1.
Discussion
The proposed method can benefit existing variants of t-SNE
The common feature of existing variants of t-SNE is that they all use the Gaussian kernel. 8 The proposed idea can be applied to variants of stochastic neighbour embedding, e.g., NeRV [27] and JSE [11], since they employ the same algorithm procedure as t-SNE. The only difference is the use of variants of cost function, i.e., type 1 or type 2 mixture of KL divergences.
In addition, Isolation kernel can be used in existing methods which aims to speed up t-SNE in step 3 of the algorithm. This is discussed in Section 6.3.
Isolation kernel performs optimally with small samples
The finding that small samples (as the ψ value) give better visualisation results than large samples was formally analysed in the context of nearest neighbour anomaly detection [23]. That work is motivated by the earlier finding that small samples can produce better detection accuracy than large samples for some anomaly detectors (e.g., [16,22]). The theoretical analysis based on computational geometry reveals that the geometry of the data distribution has a direct impact on the sample size setting which is essential to produce an optimal nearest neighbour anomaly detector [23]. In a simple geometry such as a Gaussian distribution, a sample size of one data point (at the mean of the Gaussian distribution) yields the optimal nearest neighbour anomaly detector; a sample of more data points will produce a worse performing detector. In a more complex geometry of data distribution (e.g., a mixture of multiple Gaussian distributions), while the optimal sample size is more than one data point, a sample size over the optimal one also produces a worse performing detector. See [23] for details.
The above result can explain the effect of small samples in Isolation kernel described in Section 4.3: the optimal sample size is the representative sample for the underlying geometry of data distribution, allowing the Isolation kernel to model relative similarities between different regions most effectively.
In summary, most methods use small samples only as a mitigation approach when they fail to handle large datasets, which comes at the cost of lower accuracy. In contrast, algorithms employing the Isolation kernel can process large datasets without trading accuracy for efficiency, because of the small sample it requires. While ψ of the Isolation kernel serves the primary purpose of a kernel parameter like the bandwidth parameter of the Gaussian kernel, the small sample size enables algorithms that employ the Isolation kernel to deal with large datasets without compromising the accuracy of the task.
Methods to speed up t-SNE
Scalability is an open issue for applying unsupervised distance metric learning approaches on large datasets [28]. As mentioned before, currently, there are two ways to speed up t-SNE: subsampling (which is a mitigation approach discussed in Section 4.3), and another is via some approximation to reduce runtime in step 3.
The two approximation methods mentioned in the literature review are (i) the Barnes-Hut algorithm in conjunction with the dual-tree algorithm [25], and (ii) interpolating onto an equispaced grid in order to use the fast Fourier transform to perform the convolution required in step 3 of the t-SNE algorithm [14]. However, these approximation methods sacrifice accuracy for efficiency. For example, opt-SNE [2] utilises Kullback-Leibler divergence evaluation to automatically identify the tailored parameters in the optimisation procedure of t-SNE, in order to reduce the iteration time and improve the embedding quality. Nevertheless, all of these methods are still based on Gaussian kernel. Therefore, they still have the same deficiency of misrepresented structures as the original t-SNE, as discussed in Section 3.1.1. Appendix E and Appendix F show examples of these outcomes of FIt-SNE [14] and opt-SNE [2], respectively.
In a nutshell, the proposed method of using Isolation kernel in t-SNE offers (i) the only way to establish the reference probability in step 1 using a large dataset (without parallelisation); and (ii) a way to speed up t-SNE, which is an alternative to existing speedup methods. The use of a subsample, as a mitigation approach, in step 1 compromises the accuracy of reference probability. The use of an approximation method in step 3 reduces the quality of the dimensionality reduction. These existing methods in speeding up t-SNE still employ Gaussian kernel; and thus they fail to address the two deficiencies we have identified.
Conclusions
This paper identifies two deficiencies in t-SNE due to the use of the Gaussian kernel. First, the point-based-bandwidth Gaussian kernel often creates misrepresented structure(s) which do not exist in the given dataset under some conditions. Second, the data-independent Gaussian kernel largely increases the computational load resulting from the need to determine n bandwidths for a dataset of n points, and is thus unable to deal with large datasets. Though some methods have been suggested to trade off accuracy for faster running speed, the underlying issue due to the use of the Gaussian kernel remains unresolved.
Since the root cause of these deficiencies is the use of a data-independent kernel, we propose to simply replace Gaussian kernel with a data-dependent kernel called Isolation kernel.
We show that the use of Isolation kernel in t-SNE overcomes the drawback of misrepresenting some structures in the data, which often occurs when Gaussian kernel is applied in t-SNE. Also, the use of Isolation kernel yields a more efficient similarity computation because data-dependent Isolation kernel has only one parameter that needs to be tuned. Unlike the existing methods in speeding up t-SNE, this efficient feature of Isolation kernel enables t-SNE to deal with large-scale datasets without trading off accuracy.
Appendix A. Visualisation results of t-SNE on subspace clusters having some shared attributes
Here we use a dataset with three subspace clusters where all clusters share only the same two attributes (#50 and #51). The three clusters have the same Gaussian distribution N[0, 1]. Cluster 1 has 500 points with relevant attributes in dimensions #1 to #51; cluster 2 has 500 points with relevant attributes in dimensions #50 to #100; and cluster 3 has 20 points with relevant attributes in dimensions #50 to #51. All irrelevant attributes of each cluster have zero values. Because most attributes of cluster 3 are zero, the overall distance between cluster 3 and cluster 1 or cluster 2 is much smaller than the distance between cluster 1 and cluster 2. Table 9 shows the visualisation results of t-SNE with Gaussian kernel and Isolation kernel on the above-mentioned 100-dimensional dataset. It can be seen from the table that the Isolation kernel with small ψ values presents the cluster structure correctly, i.e., the third cluster is in the centre and close to clusters 1 and 2.
In contrast, t-SNE with the Gaussian kernel using perplexity = 50 shows only a small gap between clusters 1 and 2, and the separation between cluster 3 and clusters 1 & 2 is not clear. When the perplexity is increased to 250, three points from cluster 3 that are close to the origin (including the origin) become far away from clusters 1 and 2. This is because they get much smaller bandwidths than all other points due to the high density around the origin. As a result, they are very dissimilar to most other points.
Table 9: Visualisation results of t-SNE with Gaussian kernel and Isolation kernel on a 100-dimensional dataset with three subspace clusters. Note that in (c), three points (including the origin) from cluster 3 are far away from clusters 1 and 2, as indicated with the red arrows.
The implementation uses a distance function ℓ_p(x, y); we use p = 2 (Euclidean distance) in this paper. Table 10 compares the contours of the Isolation kernel on two different data distributions with different ψ values. It shows that the Isolation kernel is adaptive to the local density. Under a uniform data distribution, the Isolation kernel's contour is symmetric with respect to the reference point at (0.5, 0.5). However, on the Parkinson dataset, the contour shows that, for points having equal inter-point distance from the reference point x at (0.5, 0.5), points in the sparse region are more similar to x than points in the dense region are to x. In addition, the larger the ψ, the sharper the kernel distribution of the Isolation kernel, as shown in Table 10. This is because a larger ψ produces more partitions (or Voronoi cells) of smaller sizes. This means that two points are less likely to fall into the same cell unless they are very close.
While this implementation of the Isolation kernel produces a contour similar to that of an exponential kernel k(x, y) = \exp\left(-\frac{\|x - y\|}{2\sigma^2}\right) under a uniform density distribution, different implementations have different contours. For example, using axis-parallel partitionings to implement the Isolation kernel produces a contour (with a diamond shape) which is more akin to that of the Laplacian kernel k(x, y) = \exp\left(-\frac{\|x - y\|}{\sigma}\right) under a uniform density distribution [24]. Of course, both the exponential and Laplacian kernels, like the Gaussian kernel, are data-independent.
Appendix C. t-SNE visualisation on News20
We compare the visualisation results of News20 with different parameter settings in Table 11. It is interesting to note that t-SNE using the Isolation kernel with a small ψ produces better visualisation results, having more separable clusters, than those using the Gaussian kernel with high perplexity, although the IK got slightly lower evaluation measure values (compare Figures (c) and (d) in Table 11). However, the two clusters are significantly overlapped in most cases.
We suspect that the overlapping issue is caused by sparsity. To verify this, we use the same data distribution as in Table 1 and increase the dimensionality of the 5 subspace clusters. The results in Table 4 show that t-SNE with both …
Table 11: Visualisation result of t-SNE on News20. t-SNE produced the best DB scores when using the Gaussian kernel with perplexity = 3700 and the Isolation kernel with ψ = 85 (parameter search ranges as in Table 4).
The adaptive Gaussian kernel is defined as:
K_{AG}(x, y) = \exp\left(-\frac{\|x - y\|^2}{\sigma_x \sigma_y}\right)   (15)
where σ_x is the distance between x and its k-th nearest neighbour.
However, replacing the Gaussian kernel in t-SNE with either of these kernels produces poor outcomes. For example, on the Segment and Spam datasets, the adaptive Gaussian kernel produced AUC_RNX scores of 0.35 and 0.22, respectively; and the kNN kernel yielded AUC_RNX scores of 0.38 and 0.28, respectively. They are significantly poorer than those produced using the Gaussian kernel or Isolation kernel shown in Table 5. We postulate that this is because a global k is unable to make these kernels sufficiently adaptive to the local distribution.
It is interesting to note that the current method used to get a data-dependent kernel is to begin with a data-independent kernel such as the Gaussian kernel, and then find ways to make it data-dependent. This is an indirect approach. The Isolation kernel is a direct approach to getting a data-dependent kernel: it is derived directly from a given dataset, without the intermediary of a data-independent kernel.
Appendix E. Visualisation results of Fast interpolation-based t-SNE
FIt-SNE [14] addresses the runtime issue in step 3 of the t-SNE algorithm only. Figure 5 demonstrates the visualisation results of FIt-SNE [14] on two datasets. It is clear that FIt-SNE has the same deficiency of misrepresented structures as in t-SNE, due to the use of Gaussian kernel, as discussed in Section 3.1.1.
Figure 5: Visualisation of FIt-SNE on two datasets: (a) 5 subspace clusters connected at one point; (b) COIL20.
Figure 6 shows the FIt-SNE results on the MNIST and MNIST8M datasets. 9 FIt-SNE's results are worse than those of t-SNE based on either GK or IK in terms of the CH scores on both the MNIST and MNIST8M datasets, as are the visualisation outcomes. Note that without the colours to differentiate between classes, most of the classes shown in Figure 6 cannot be identified as separate classes in the FIt-SNE results produced from the MNIST8M dataset.
FIt-SNE ran faster than t-SNE because of its grid-based approximation, and because it is implemented in C++ with multi-threading. The price it pays for this efficiency is worse visualisation outcomes.
Note that on the MNIST8M dataset, we could only use 2 million data points in FIt-SNE because of its high memory usage. In contrast, with the Isolation kernel, we could run t-SNE (in MatLab without multithreading) on the same machine using the entire 8.1 million data points of MNIST8M (shown in Table 8).
Appendix F. Visualisation results of opt-SNE
Figure 7 shows the visualisation results on three datasets using opt-SNE. 10 As expected, opt-SNE produced similar results to t-SNE, having misrepresented structures in Figures 7a and 7b. On MNIST, opt-SNE got a slightly worse result than t-SNE (CH = 6129 versus CH = 6452) because it split the green cluster into two parts, as shown in Figure 7c.
Footnote 10: The source code is obtained from https://github.com/omiq-ai/Multicore-opt-SNE. All parameters in opt-SNE use the default settings, except that we search for the best perplexity in the same range as for t-SNE, as stated in Table 4.
| 8,697 |
1906.09744
|
2952161373
|
We identify a fundamental issue in the popular Stochastic Neighbour Embedding (SNE and t-SNE), i.e., the "learned" similarity of any two points in high-dimensional space is not defined and cannot be computed. It underlines two previously unexplored issues in the algorithm which have undermined the quality of its final visualisation output and its ability to process large datasets. The issues are: (a) the reference probability in high-dimensional space is set based on entropy, which has an undefined relation with local density; and (b) the use of a data-independent kernel, which leads to the need to determine n bandwidths for a dataset of n points. This paper establishes a principle to set the reference probability via a data-dependent kernel which has a well-defined kernel characteristic linked directly to local density. A solution based on a recent data-dependent kernel called Isolation Kernel addresses the fundamental issue as well as its two ensuing issues. As a result, it significantly improves the quality of the final visualisation output and removes one obstacle that prevents t-SNE from processing large datasets. The solution is extremely simple, i.e., simply replacing the existing data-independent kernel with Isolation Kernel, leaving the rest of the t-SNE procedure unchanged.
|
There are some works focusing on analysing the heuristic methods for solving non-convex optimisation problems for the embedding. Recently, @cite_5 theoretically analyse this optimisation and provide a framework to make clusterable data visually identifiable in the 2-dimensional embedding space. These works are not related to similarity measurements and are therefore not directly relevant to the work reported here.
|
{
"abstract": [
"A first line of attack in exploratory data analysis is data visualization, i.e., generating a 2-dimensional representation of data that makes clusters of similar points visually identifiable. Standard Johnson-Lindenstrauss dimensionality reduction does not produce data visualizations. The t-SNE heuristic of van der Maaten and Hinton, which is based on non-convex optimization, has become the de facto standard for visualization in a wide range of applications. This work gives a formal framework for the problem of data visualization - finding a 2-dimensional embedding of clusterable data that correctly separates individual clusters to make them visually identifiable. We then give a rigorous analysis of the performance of t-SNE under a natural, deterministic condition on the \"ground-truth\" clusters (similar to conditions assumed in earlier analyses of clustering) in the underlying data. These are the first provable guarantees on t-SNE for constructing good data visualizations. We show that our deterministic condition is satisfied by considerably general probabilistic generative models for clusterable data such as mixtures of well-separated log-concave distributions. Finally, we give theoretical evidence that t-SNE provably succeeds in partially recovering cluster structure even when the above deterministic condition is not met."
],
"cite_N": [
"@cite_5"
],
"mid": [
"2791925274"
]
}
|
IMPROVING THE EFFECTIVENESS AND EFFICIENCY OF STOCHASTIC NEIGHBOUR EMBEDDING WITH ISOLATION KERNEL
|
1 Introduction and Motivation
t-SNE [18] has been a successful and popular dimensionality reduction method for visualisation. It aims to project high-dimensional datasets into lower-dimensional spaces while preserving the similarities between data points, as measured by the KL divergence. The original SNE [8] employs a Gaussian kernel to measure similarity in both the high- and low-dimensional spaces. t-SNE replaces the Gaussian kernel with the distance-based similarity (1 + d_{ij}^2)^{-1} (where d_{ij} is the distance between instances i and j) in the low-dimensional space, while retaining the Gaussian kernel for the high-dimensional space.
When using the Gaussian kernel, t-SNE has to fine-tune a bandwidth of the Gaussian kernel centred at each point in the given dataset because Gaussian kernel is independent of data distribution. In other words, t-SNE must determine n bandwidths for a dataset of n points.
The bandwidth determination process relies on a heuristic search with a single global parameter called perplexity, such that the Shannon entropy is fixed for the probability distributions at all points while adapting each bandwidth to the local density of the dataset. As the perplexity can be interpreted as a smooth measure of the effective number of neighbours [18], the method can be interpreted as using a user-specified number of nearest neighbours (aka kNN) in order to determine the n bandwidths (more on this point in the discussion section). Whilst there is a single external parameter, perplexity, a bandwidth setting must be optimised for each data point internally.
This becomes the first obstacle in dealing with large datasets due to the massive computational cost of the bandwidth search process. In addition, the point-based bandwidth is also the cause of misrepresented structures in the high-dimensional space under some conditions.
To date, the common practice is still using Gaussian kernel in t-SNE on high-dimensional datasets. However, sound and workable solutions to its drawbacks mentioned above have not been brought up yet. The contributions of this paper are:
(1) Uncovering two deficiencies due to the use of the Gaussian kernel. First, the point-based-bandwidth Gaussian kernel often creates misrepresented structure(s) which do not exist in high-dimensional space under some conditions. Second, the use of the data-independent kernel requires t-SNE to determine n bandwidths for a dataset of n points, despite the fact that a user needs to set one parameter only. This becomes one key obstacle in dealing with large datasets.
(2) Revealing the advantages of using a partition-based data-dependent kernel in t-SNE. First, this kernel represents the true structure(s) in the high-dimensional space under the same condition mentioned above. Second, the data-dependent similarity is set with a single parameter only; this allows it to be computed more efficiently. This enables t-SNE to deal with large-scale datasets without trading off accuracy with faster runtime, without resorting to approximation methods.
(3) Proposing an improvement to t-SNE by simply replacing the data-independent kernel with a data-dependent kernel, leaving the rest of the procedure unchanged.
(4) Verifying the effectiveness and efficiency of the data-dependent kernel in t-SNE.
The adopted data-dependent kernel is Isolation kernel [24,20] and the experiment result shows that using Isolation kernel will improve the performance of t-SNE and solve the issues brought by Gaussian kernel in t-SNE.
The rest of the paper is organised as follows. The current t-SNE and related work are described in Section 2. The deficiencies of using the Gaussian kernel are presented in Section 3. In Section 4, we characterise the selected Isolation kernel, and Section 5 presents the empirical evaluation of using the Isolation kernel in t-SNE. Discussion and conclusions are given in the last two sections.
Basics of t-SNE
Given a dataset D = {x_1, . . . , x_n} in R^d, t-SNE aims to map D ∈ R^d to D' ∈ R^{d'}, where d' ≪ d, such that the similarities between points are preserved as much as possible from the high-dimensional space to the low-dimensional space. As t-SNE is meant to be a visualisation tool, d' = 2 usually.
The similarity between a pair of points x_i, x_j (resp. x'_i, x'_j) in the high (resp. low)-dimensional space is measured by a probability p_ij (resp. p'_ij) that point x_i picks x_j as its neighbour. The probability distributions are computed based on distance measures between the points in the respective space. The aim of this family of projection methods is to project the points from x to x' in such a way that the probability distributions p_ij and p'_ij are as similar as possible.
The similarity between x i and x j is measured using a Gaussian kernel as follows:
K(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^2}{2\sigma_i^2}\right)   (1)
t-SNE computes the conditional probability p j|i that x i would pick x j as its neighbour as follows:
p_{j|i} = \frac{K(x_i, x_j)}{\sum_{k \ne i} K(x_i, x_k)}   (2)
The probability p_ij, a symmetric version of p_{j|i}, is computed as:
p_{ij} = \frac{p_{j|i} + p_{i|j}}{2n}   (3)
t-SNE performs a binary search for the best value of σ i such that the perplexity of the conditional distribution equals a fixed perplexity specified by the user. Therefore, the bandwidth is adapted to the density of the data, i.e., small (large) values of σ i are used in dense (sparse) regions. The perplexity is defined as:
Perp(P_i) = 2^{H(P_i)}   (4)
where P_i represents the conditional probability distribution over all other data points given data point x_i, and H(P_i) is the Shannon entropy:
H(P_i) = -\sum_j p_{j|i} \log_2 p_{j|i}   (5)
The perplexity is a smooth measure of the effective number of neighbours, similar to the number of nearest neighbours k used in kNN methods [8]. Thus, σ i is adapted to the density of the data, i.e., it becomes small for dense data since the k-nearest neighbourhood is small and vice versa. In addition, [18] point out that there is a monotonically increasing relationship between perplexity and the bandwidth σ i .
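To make this per-point search concrete, here is a minimal sketch (illustrative Python, not the authors' code) of the binary search for σ_i against a target perplexity; the helper name sigma_for_perplexity and the input sq_dists_i (the squared distances from point i to all other points) are assumptions of this sketch.

```python
import numpy as np

def sigma_for_perplexity(sq_dists_i, target_perp, n_iter=50, tol=1e-5):
    """Binary-search the bandwidth sigma_i so that the Shannon entropy of
    p_{j|i} (Eqs. 1-2, 5) matches log2 of the target perplexity (Eq. 4)."""
    lo, hi = 0.0, np.inf
    sigma = 1.0
    target_entropy = np.log2(target_perp)
    p = None
    for _ in range(n_iter):
        p = np.exp(-sq_dists_i / (2.0 * sigma ** 2))
        p /= max(p.sum(), 1e-12)
        entropy = -np.sum(p * np.log2(p + 1e-12))
        if abs(entropy - target_entropy) < tol:
            break
        if entropy > target_entropy:      # distribution too flat: shrink the bandwidth
            hi = sigma
            sigma = (lo + sigma) / 2.0
        else:                             # distribution too peaked: enlarge the bandwidth
            lo = sigma
            sigma = sigma * 2.0 if np.isinf(hi) else (sigma + hi) / 2.0
    return sigma, p
```

Repeating this search independently for each of the n points is the O(n^2) step-1 cost discussed under the second deficiency below.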
The similarity between x'_i and x'_j in the low-dimensional space is measured as:
s(x'_i, x'_j) = (1 + \|x'_i - x'_j\|^2)^{-1}
and the corresponding probability is defined as:
p'_{ij} = \frac{s(x'_i, x'_j)}{\sum_{k \ne \ell} s(x'_\ell, x'_k)}
The distance-based similarity s is used because it has a heavy-tailed distribution, i.e., it approaches an inverse square law for large pairwise distances. This means that far-apart mapped points have p'_ij values which are almost invariant to changes in the scale of the low-dimensional space [18].
Note that the probability distributions are defined in such a way that p_ii = 0 and p'_ii = 0, i.e., a point does not pick itself as a neighbour.
The location of each point x' ∈ D' is determined by minimising a cost function based on the (non-symmetric) Kullback-Leibler divergence of the joint probability distribution P' in the low-dimensional space from the joint distribution P in the high-dimensional space:
KL(P \| P') = \sum_{i \ne j} p_{ij} \log \frac{p_{ij}}{p'_{ij}}
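For completeness, the gradient that drives the iterative mapping in step 3, as given in [18] (it is not restated in this text), is:

\frac{\partial\, KL(P \| P')}{\partial x'_i} = 4 \sum_{j \ne i} (p_{ij} - p'_{ij})\,(x'_i - x'_j)\,(1 + \|x'_i - x'_j\|^2)^{-1}

In practice the cost is minimised by gradient descent with momentum [18].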
The use of the Gaussian kernel K sharpens the cost function in retaining the local structure of the data when mapping from the high-dimensional space to the low-dimensional space. The main computational step in applying t-SNE is to determine the value of bandwidth σ for each data point.
The procedure of t-SNE is provided in Algorithm 1. Note that m = n for small datasets. For large datasets, m ≪ n; this is discussed in Section 5.4.
Algorithm 1: t-SNE(D, Perp, m). Require: D - dataset {x_1, . . . , x_n}.
Related work
SNE [18] and its variations have been widely applied in dimensionality reduction and visualisation. In addition to t-SNE [18], which is one of the most commonly used visualisation methods, many other variations have been proposed to improve SNE in different aspects.
There are improvements based on some revised Gaussian kernel functions in order to get better similarity measurements. [5] propose a symmetrised SNE; [29] enable t-SNE to accommodate various heavy-tailed embedding similarity functions; and [26] propose an algorithm based on similarity triplets of the form "A is more similar to B than to C" so that it can model the local structure of the data more effectively.
Based on the concept of information retrieval, NeRV [27] uses a cost function to find a trade-off between precision and recall of "making true similarities visible and avoiding false similarities", when projecting data into 2-dimensional space for visualising similarity relationships. Unlike SNE which relies on a single Kullback-Leibler divergence, NeRV uses a weighted mixture of two dual Kullback-Leibler divergences in neighbourhood retrieval. Furthermore, JSE [11] enables t-SNE to use a different mixture of Kullback-Leibler divergences, a kind of generalised Jensen-Shannon divergence, to improve the embedding result.
To reduce the runtime of t-SNE, [25] explores tree-based indexing schemes and uses the Barnes-Hut approximation to reduce the time complexity to O(nlog(n)), where n is the data size. This gives a trade-off between speed and mapping quality. To further reduce the time complexity to O(n), [14] utilise a fast Fourier transform to dramatically reduce the time of computing the gradient during each iteration. The method uses vantage-point trees and approximates nearest neighbours in dissimilarity calculation with rigorous bounds on the approximation error.
Some works focus on analysing the heuristics methods for solving non-convex optimisation problems for the embedding [15,21]. Recently, [1] theoretically analyse this optimisation and provide a framework to make clusterable data visually identifiable in the 2-dimensional embedding space. These works focus on changing the optimisation problem and are not related to similarity measurements.
So far, however, none of these studies has investigated the suitability of the Gaussian kernel in t-SNE. The following two sections uncover the issues of using the Gaussian kernel in t-SNE and propose to replace it with the Isolation kernel.
Deficiencies of Gaussian kernel when used in t-SNE
Here we list two identified deficiencies of the Gaussian kernel that cause poor visualisation outputs and high computational cost in t-SNE.
The first deficiency
As the bandwidth σ_i of the Gaussian kernel is fixed for each point x_i, we identify the following observation:
Observation 1. A Gaussian kernel with point-based bandwidth can misrepresent the structure of a data distribution having points significantly denser than the majority of the points in a sample generated from the distribution.
Intuitively, as each point-based bandwidth represents one local density only, the Gaussian kernel can misrepresent the relationship between multiple clusters in the joint distribution of the overlap region. We provide two example cases in which misrepresentation occurs, i.e., there are multiple subspace clusters; each is a Gaussian distribution of the same mean with: (i) different variances; and (ii) the same variance.
Let X 1 and X 2 be two subspace regions in a high-dimensional space, and points in the two clusters are generated from the Gaussian distributions N [0, v 1 ] and N [0, v 2 ], respectively; and the distributions only overlap at the origin O.
In case (i), the variance v_1 ≪ v_2. Let point x_{k1} ∈ X_1 be the point closest to O in the dense cluster, and point x_{k2} ∈ X_2 be the point closest to O in the sparse cluster. Then,
K(O, x_{k1}) ≫ K(O, x_{k2}) because ‖O − x_{k1}‖ ≪ ‖O − x_{k2}‖ and K(O, ·) is inversely proportional to distance.
In case (ii), where v_1 = v_2, using an appropriate setting in the current t-SNE procedure, each point x in either X_1 or X_2 would have learned approximately the same bandwidth σ, except the origin O, because O has at least double the density of any point in either cluster. As a result, ∀x_i, x_j ∈ X_1 (or ∀x_i, x_j ∈ X_2) with ‖O − x_i‖ = ‖x_j − x_i‖, K(O, x_i) ≪ K(x_j, x_i) because σ_O ≪ σ_j.
This means that the origin is very dissimilar to any points in either cluster.
Simulations of the two cases are given below:
(i) Five subspace clusters having different variances in a 50-dimensional space (see the simulation details in footnote 1).
Table 1: Visualisation results of t-SNE using Gaussian kernel and Isolation kernel on a 50-dimensional dataset with 5 subspace clusters, each in a different 10-dimensional subspace. The black cross indicates the mapped point of the origin in the high-dimensional space, which is shared by three clusters in different subspaces. Note that in (c), all points of the red cluster (cluster 1) are concentrated and overlap with the mapped origin. perplexity and ψ are the key parameters of the Gaussian kernel and the Isolation kernel, respectively.
Gaussian kernel: (a) perplexity = 50, (b) perplexity = 250, (c) perplexity = 500; Isolation kernel: (d) ψ = 50, (e) ψ = 250, (f) ψ = 500.
Using Gaussian kernel, SNE creates a misrepresentation of the structure in the high-dimensional space. The simulation result is shown in the first row in Table 1: t-SNE is unable to identify the joint component of the three clusters in different subspaces which share the same mean at the origin only in the high-dimensional space but nowhere else. Notice that the mapped origin point is misrepresented to be associated with one cluster only; and it is totally disassociated with the other two clusters. In contrast, the same t-SNE algorithm employing the Isolation kernel [24,20], instead of a Gaussian kernel, produces the mapping which truly represents the structure in the high-dimensional space: the three clusters are well separated and yet they share some common points, indicated by the mapped origin point as shown in the second row in Table 1.
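For illustration only, a generator of this kind of data can be sketched as follows; the exact cluster sizes, means, variances, and subspace assignments of footnote 1 are only partly recoverable here, so the specific values below are assumptions consistent with the description (a variance ratio of 625 between the 5th and 1st clusters, the first three clusters sharing the origin as their mean, and the last two having different means).

```python
import numpy as np

def make_subspace_clusters(n_per_cluster=500, dim=50, seed=0):
    """Five Gaussian clusters, each living in its own 10-dimensional subspace
    of a 50-dimensional space; irrelevant attributes are zero.
    Cluster sizes, variances and means below are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    # (first relevant dimension, std, mean) per cluster -- assumed values
    specs = [(0, 1.0, 0.0), (10, 2.0, 0.0), (20, 5.0, 0.0),
             (30, 10.0, 5.0), (40, 25.0, -5.0)]
    X, y = [], []
    for label, (start, std, mean) in enumerate(specs):
        pts = np.zeros((n_per_cluster, dim))
        pts[:, start:start + 10] = rng.normal(mean, std, (n_per_cluster, 10))
        X.append(pts)
        y.extend([label] * n_per_cluster)
    return np.vstack(X), np.array(y)
```

With this construction, the all-zero origin is the only point shared by the subspaces of the first three clusters, which is the structure the visualisations above are probing.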
(ii) A 200-dimensional dataset with two subspace clusters having the same Gaussian distribution N[0, 1] but in different subspaces 2. Table 2 shows the simulation results. When the Gaussian kernel is used, t-SNE with a small perplexity produces a small bandwidth for every point, so that each point has almost the same low similarity to every other point in the dataset, as shown in Figure (a) in Table 2. Note that the two clusters could not be distinguished in the visualisation if the colours, indicating the ground truth labels, were not used in the plot. Yet, t-SNE with a large perplexity produces large bandwidths for all points, except the origin, which has a significantly smaller bandwidth; note that the origin (denoted as ×) and the rest of the points are at opposite corners in Figure (c) in Table 2. This is because the origin, being the only overlap point between the two clusters, has a significantly higher density than all other points. As both clusters have the same variance, all their points have low density (relative to the origin) and are 'learned' to have approximately the same bandwidth, which is significantly larger than that of the origin. As a result, the origin is very dissimilar to all other points, though all the other points are correctly clustered into two separate groups. In contrast, when the Isolation kernel is used, the origin is always positioned in between the two clusters, independent of the ψ parameter setting.
Footnote 1 (continued): …four Gaussian distributions. In other words, no clusters share a single relevant attribute. In addition, all clusters have significantly different variances (the variance of the 5th cluster is 625 times larger than that of the 1st cluster). The first three clusters share the same mean, but the last two have different means.
Table 2: Visualisation results of t-SNE with Gaussian kernel and Isolation kernel on a 200-dimensional dataset with two equal-density subspace clusters. Note that in (c), the origin is far away from both clusters, although there is a clear gap between the two clusters. The green box in (c) presents a zoom-in view of the two clusters.
Gaussian kernel: (a) perplexity = 50, (b) perplexity = 210, (c) perplexity = 300; Isolation kernel: (d) ψ = 50, (e) ψ = 210, (f) ψ = 300.
Note the above-mentioned deficiency is not restricted to subspace clusters without shared attributes. An example using subspace clusters with shared attributes can be found in Appendix A.
No need for point-based bandwidth in Isolation kernel
The space partitioning mechanism of the Isolation kernel [24,20] determines the size of the partitions in the local region: it produces large partitions in the sparse region and small partitions in the dense region (see Section 4.2 for more details.) As it is partition-based, points in the local neighbourhood are most likely to be in the same partition. As such, points in the intersection of clusters (in different subspaces as shown in Table 1) are almost always captured by the same partition of Isolation kernel.
An example distribution of similarities based on the dataset shown in Table 1 is given in Figure 1. Let x_{k1} be the origin O's closest point in the dense cluster (i.e., cluster 1), and x_{k2} be O's closest point in a sparse cluster (cluster 2 or 3). Figure 1b shows that K_ψ(O, x_{k1}) ≈ K_ψ(O, x_{k2}) when the Isolation kernel is used. When the Gaussian kernel is used, K(O, x_{k1}) ≫ K(O, x_{k2}), as shown in Figure 1a. This explains why the points in the intersection are better mapped in the low-dimensional space when using the Isolation kernel than when using the Gaussian kernel.
In other words, the Isolation kernel ensures that the local structure is truly reflected in the similarities among local points in the high-dimensional space, unlike the misrepresentation exhibited in Table 1 and Table 2 when the Gaussian kernel is used. As a result, t-SNE using the Isolation kernel produces improved visualisation quality with no such misrepresentations.
Table 3: Time complexities of t-SNE using the Gaussian kernel versus the Isolation kernel. Step 1 (bandwidth search / kernel derivation): O(rn^2) versus O(tψ); Step 2 (matrix calculation): O(m^2) versus O(tψm^2); Step 3 (t-SNE mapping): O(sm^2) for both.
The second deficiency
Low computational efficiency problem with Gaussian kernel
The use of a Gaussian kernel necessitates the search for a local bandwidth for each local point. t-SNE utilises a binary search for the value of σ_i that makes the entropy of the distribution over neighbours equal to log K, where K is the effective number of local neighbours or "perplexity" [18]. This search is the key component that determines the success or failure of t-SNE. A gradient descent search has been used successfully to perform the search for n parameters for small datasets [18]. This formulation has two key limitations for large datasets. First, the need for an n-parameter search poses a real limitation in terms of finding appropriate settings for a large number of parameters. Second, it cannot deal with large datasets because of its low computational efficiency, i.e., the time complexity is O(n^2).
High computational efficiency with Isolation Kernel
The computational complexities of the Gaussian kernel and the Isolation kernel [24,20] used in t-SNE are shown in Table 3. 3 Although the parameter ψ of the Isolation kernel corresponds to the bandwidth parameter of the Gaussian kernel, the Isolation kernel needs no optimisation to determine n bandwidths locally. This is because the partitioning mechanism used by the Isolation kernel produces small partitions in dense regions and large partitions in sparse regions; and the sizes of the partitions are monotonically decreasing with respect to ψ. As the local adaptation has already been done during the process of deriving the kernel, no further adaptation is required after the kernel is derived.
The Isolation kernel derivation from data takes O(tψ) time, which is significantly less than the optimisation required to determine the n bandwidths of the Gaussian kernel, which takes O(n^2) time. For a large dataset, when using the Gaussian kernel, it is infeasible to estimate a large number of bandwidths with an appropriate degree of accuracy, and its computational cost is prohibitively high. In contrast, the consequence of using the Isolation kernel is that the runtime of step 1 in the t-SNE algorithm is significantly reduced. Thus, the Isolation kernel enables t-SNE to deal with large datasets. More experimental details are provided in Sections 5.4 and 6.3.
The proposed solution: using the Isolation kernel in t-SNE
Since t-SNE needs a data-dependent kernel, we propose to use a recent data-dependent kernel called Isolation kernel [24,20] to replace the data-independent Gaussian kernel in t-SNE.
The Isolation kernel is a perfect match for the task because a data-dependent kernel, by definition, adapts to local distribution without any additional optimisation. The kernel replacement is conducted in the component in the high-dimensional space only, leaving the other components of the t-SNE procedure unchanged.
Isolation kernel
The key idea of the Isolation kernel is to use a space partitioning strategy to split the data space into cells, e.g., we uniformly sample ψ points from the given dataset and generate the corresponding ψ Voronoi cells; the similarity between any two points is then how likely the two points are to fall into the same cell.
The details of Isolation kernel [24,20] are provided below.
Let D = {x_1, . . . , x_n}, x_i ∈ R^d, be a dataset sampled from an unknown probability density function x_i ∼ F. Moreover, let H_ψ(D) denote the set of all partitionings H admissible for the given dataset D, where each H covers the entire space of R^d; and each of the ψ isolating partitions θ[z] ∈ H isolates one data point z from the rest of the points in a random subset 𝒟 ⊂ D, with |𝒟| = ψ. In our implementation, H is a Voronoi diagram generated from 𝒟.
Definition 1. For any two points x, y ∈ R^d, the Isolation kernel of x and y wrt D is defined to be the expectation, taken over the probability distribution on all partitionings H ∈ H_ψ(D), that both x and y fall into the same isolating partition θ[z] ∈ H, z ∈ 𝒟:
K_\psi(x, y \mid D) = \mathbb{E}_{H_\psi(D)}\left[ \mathbb{1}(x, y \in \theta[z] \mid \theta[z] \in H) \right]   (6)
where 1(·) is an indicator function.
In practice, the Isolation kernel K_ψ is constructed using a finite number of partitionings H_i, i = 1, . . . , t, where each H_i is created using a subset 𝒟_i ⊂ D:
K_\psi(x, y \mid D) = \frac{1}{t} \sum_{i=1}^{t} \mathbb{1}(x, y \in \theta \mid \theta \in H_i) = \frac{1}{t} \sum_{i=1}^{t} \sum_{\theta \in H_i} \mathbb{1}(x \in \theta)\, \mathbb{1}(y \in \theta)   (7)
where θ is a shorthand for θ[z]; and t can usually be set to a default value. ψ is the sharpness parameter and the only parameter of the Isolation kernel. The larger ψ is, the sharper the kernel distribution is. This corresponds to σ in the Gaussian kernel, i.e., the smaller σ is, the narrower the kernel distribution is. Note that t is the number of partitionings and t can be fixed to a large value to ensure the stability of the estimation.
As Equation (7) is quadratic, K ψ is a valid kernel. For brevity, K ψ (x, y) is used to denote K ψ (x, y|D) hereafter.
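The following is a minimal sketch (not the authors' implementation) of this finite-sample estimator using the nearest-neighbour (Voronoi) partitioning described above: each of the t partitionings is induced by ψ points sampled from the data, and two points fall into the same isolating partition exactly when they share the same nearest sampled point.

```python
import numpy as np

def isolation_kernel_matrix(X, psi=64, t=200, seed=0):
    """Estimate K_psi(x, y) of Eq. (7) for all pairs in X (n x d).
    Each partitioning H_i is a Voronoi diagram over psi points sampled
    from X; the similarity of two points is the fraction of the t
    partitionings in which they fall into the same Voronoi cell."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = np.zeros((n, n))
    for _ in range(t):
        centres = X[rng.choice(n, size=psi, replace=False)]
        # the nearest sampled point identifies the Voronoi cell of each point
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        cell = d2.argmin(axis=1)                      # shape (n,)
        same_cell = (cell[:, None] == cell[None, :])  # indicator of Eq. (7)
        K += same_cell
    return K / t
```

By construction, K_ψ(x, x) = 1 and K_ψ(x, y) ∈ [0, 1].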
How Isolation kernel differs from Gaussian kernel
The key difference is that the Isolation kernel adapts to local density distribution, but the Gaussian kernel is independent of the data distribution.
In addition, the technical differences can be observed in two aspects. First, the Isolation kernel has no closed-form expression. Second, it is derived directly from a dataset, without explicit learning or optimisation. Its adaptation to local density is a direct outcome of its isolation mechanism used to partition space, i.e., the mechanism produces large partitions in sparse regions and small partitions in dense regions [24,20]. A natural isolation mechanism that has this characteristic is a Voronoi diagram. Given a sample of the underlining distribution, each Voronoi cell isolates a point from the rest of the points in the sample; and the cells are small in the dense region and large in the sparse region. Note that the Voronoi diagram is obtained very efficiently, i.e., given a sample, nothing else needs to be done in the training stage because boundaries in the Voronoi diagram can be obtained at the testing stage as the equal distance between the two nearest points in the given sample.
The Isolation kernel makes full use of the distributional information in small samples
The Isolation kernel only requires small samples (ψ) for the space partitioning without a computationally expensive process.
Figure 2: Two examples of partitioning H using the nearest neighbour (a Voronoi diagram) on a dataset having two regions of uniform densities, where the left half has a lower density than the right half: (a) ψ = 16; (b) ψ = 64.
A small sample of a dataset contains data distributional information which is sufficient to build a data-dependent kernel.
The Isolation kernel extracts this information in the form of a Voronoi diagram, which depicts the relative densities between regions.
In contrast, using a data-independent measure such as the Gaussian kernel, the distributional information in a dataset is ignored and each point in the input space is treated as an independent point. In order to get the distributional information in the form of variable bandwidths that are adaptive to the local distribution, a separate optimisation process is required, as conducted in step 1 of the t-SNE algorithm.
It is important to note that when they could not handle a large dataset, most methods may use small samples as a mitigation approach, and this inevitably trades off runtime with accuracy. But it is not the case for the Isolation kernel where small samples are the key in achieving high accuracy; and samples larger than the optimal ψ will degrade the accuracy of Isolation kernel. See further discussion on this issue in Section 6.
In other words, by using the Gaussian kernel, t-SNE must employ a computationally expensive approach to get the distributional information in a dataset. It does not exploit the same information which is freely available in small samples of the dataset. The Isolation kernel is a direct approach that makes full use of the distributional information freely available in small samples of a dataset.
The Isolation kernel is well-defined
The Isolation kernel has the following well-defined data-dependent characteristic: two points in a sparse region are more similar than two points of equal inter-point distance in a dense region [24].
Using a specific implementation of the Isolation kernel (see Appendix B), [20] have provided the following lemma (see its proof in their paper):
Lemma 1 [20]. ∀x_i, x_j ∈ X_S (sparse region) and ∀x_k, x_ℓ ∈ X_T (dense region) such that ∀y ∈ X_S, z ∈ X_T, ρ(y) < ρ(z), the nearest-neighbour-induced Isolation kernel K_ψ has the characteristic that
\|x_i - x_j\| = \|x_k - x_\ell\| \implies K_\psi(x_i, x_j) > K_\psi(x_k, x_\ell)   (8)
where \|x - y\| is the distance between x and y, and ρ(x) denotes the density at point x.
Let p_{b|a} be the probability that x_a would pick x_b as its neighbour. We provide two corollaries from Lemma 1 as follows.
Corollary 1. x_i is more likely to pick x_j as a neighbour than x_k is to pick x_ℓ as a neighbour, i.e., p_{j|i} > p_{ℓ|k}, for ∀_{a,b} p_{b|a} ∝ K_ψ(x_a, x_b). This is because x_k in the dense region is more likely to pick a point closer than x_ℓ as its neighbour, in comparison with x_i picking x_j as a neighbour in the sparse region, given that ‖x_i − x_j‖ = ‖x_k − x_ℓ‖.
Corollary 2. ∀_{a,b} p_{b|a} ∝ 1/ρ̄(X_A), where x_a, x_b ∈ X_A, X_A is a region in X, and ρ̄ is the average density of the region.
Using a data-dependent kernel with a well-defined characteristic as specified in Lemma 1, we can establish that the probability that x a would pick x b , p b|a , is inversely proportional to the density of the local region.
This becomes the basis in setting a reference probability in the high-dimensional space.
It is interesting to note that the adaptation of Gaussian kernel by optimising n bandwidths attempts to achieve a similar outcome, as stipulated in Corollaries 1 and 2. Yet, it is unclear that a similar data-dependent characteristic, as stated in Lemma 1, can be formally stated for the adaptive Gaussian kernel. This is because the similarity cannot be computed for all x ∈ R d (except those in the given dataset.)
t-SNE with the Isolation kernel
We propose to replace K with K_ψ in defining p_{j|i} in Equation (2), i.e.,
p_{j|i} = \frac{K_\psi(x_i, x_j)}{\sum_{k \ne i} K_\psi(x_i, x_k)}   (9)
The rest of the procedure of t-SNE remains unchanged.
The procedure of t-SNE with the Isolation kernel is provided in Algorithm 2.
Note that the only difference between the two algorithms is step 1; and Eq 9 (instead of Eq 2) in step 2.
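As an illustrative sketch (not the authors' MatLab code) of what actually changes, the reference probabilities of steps 1-2 can be formed from any precomputed Isolation kernel matrix, for example one produced by the sketch given in the previous section:

```python
import numpy as np

def reference_probabilities(K):
    """Turn a precomputed kernel matrix K (n x n), e.g. an Isolation kernel
    matrix, into the symmetric reference probabilities p_ij.
    Steps: zero the diagonal (p_ii = 0), row-normalise (Eq. 9), symmetrise (Eq. 3)."""
    n = K.shape[0]
    K = np.asarray(K, dtype=float).copy()
    np.fill_diagonal(K, 0.0)
    P_cond = K / K.sum(axis=1, keepdims=True)   # p_{j|i} of Eq. (9)
    return (P_cond + P_cond.T) / (2.0 * n)      # p_ij of Eq. (3)
```

Step 3 (the KL minimisation in the low-dimensional space) is unchanged, so any t-SNE optimiser that accepts precomputed p_ij can be reused.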
Empirical Evaluation
This section presents the three evaluation methods we adopt, evaluation results, runtime comparison and a scalability test.
Evaluation measures
We used a qualitative assessment R(k) to evaluate the preservation of k-ary neighbourhoods [12,11,10], defined as follows:
R(k) = \frac{(n - 1)\,Q(k) - k}{n - 1 - k}   (10)
where Q(k) = \frac{1}{kn} \sum_{i=1}^{n} |\nu_i^k \cap n_i^k| measures the k-ary neighbourhood agreement between the HD and corresponding LD spaces, with \nu_i^k and n_i^k denoting the sets of k nearest neighbours of point i in the HD and LD spaces, respectively. R(k) ∈ [0, 1]; the higher the score, the better the neighbourhoods are preserved in the LD space. In our experiments, we recorded the assessment with k ∈ {0.01n, 0.03n, ..., 0.99n} and produced the curve of k vs R(k).
To aggregate the performance over the different k-ary neighbourhoods, we calculate the area under the R(k) curve in the log plot [11] as:
AUC_{RNX} = \frac{\sum_k R(k)/k}{\sum_k 1/k}   (11)
AUC_RNX assesses the average quality weighted by k, i.e., errors in large neighbourhoods (large k) contribute less to the average quality than errors in small neighbourhoods.
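A minimal sketch of these neighbourhood-preservation measures, following Eqs. (10)-(11) (illustrative code, not the authors' evaluation script), is:

```python
import numpy as np

def neighbour_ranks(X):
    """Row i lists the indices of all other points sorted by distance to point i."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)

def rnx_curve(X_high, X_low, ks):
    """R(k) of Eq. (10) for a list of integer neighbourhood sizes ks."""
    n = X_high.shape[0]
    nn_h, nn_l = neighbour_ranks(X_high), neighbour_ranks(X_low)
    R = []
    for k in ks:
        # Q(k): average overlap of the k-nearest-neighbour sets in HD and LD
        q = np.mean([len(np.intersect1d(nn_h[i, :k], nn_l[i, :k]))
                     for i in range(n)]) / k
        R.append(((n - 1) * q - k) / (n - 1 - k))
    return np.array(R)

def auc_rnx(R, ks):
    """AUC_RNX of Eq. (11): the k-weighted average of R(k)."""
    w = 1.0 / np.asarray(ks, dtype=float)
    return float(np.sum(np.asarray(R) * w) / np.sum(w))
```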
In addition, the purpose of many methods of dimensionality reduction is to identify HD clusters in the LD space, such as in a 2-dimensional scatter plot. Since all the datasets we used for evaluation have ground truth (labels), we can use measures for clustering validation to evaluate whether all clusters can be correctly identified after they are projected into the LD space. Here we select two popular indices of cluster validation, i.e., the Davies-Bouldin (DB) index [6] and the Calinski-Harabasz (CH) index [3]. Their details are given as follows. Let x be an instance in a cluster C_i which has n_i instances with centre c_i. The Davies-Bouldin (DB) index is obtained as
DB = \frac{1}{N_C} \sum_i \max_{j, j \ne i} \left\{ \left[ \frac{1}{n_i} \sum_{x \in C_i} \|x - c_i\|_2 + \frac{1}{n_j} \sum_{x \in C_j} \|x - c_j\|_2 \right] \Big/ \|c_i - c_j\|_2 \right\}   (12)
where N_C is the number of clusters in the dataset.
The Calinski-Harabasz (CH) index is calculated as
CH = \frac{\sum_i n_i \|c_i - c\|^2 / (N_C - 1)}{\sum_i \sum_{x \in C_i} \|x - c_i\|^2 / (n - N_C)}   (13)
where c is the centre of the dataset.
Both measures take the similarity of points within a cluster and the similarity between clusters into consideration, but in different ways. These measures assign the best score to the algorithm that produces clusters with low intra-cluster distances and high inter-cluster distances. Note that the higher the CH score, the better the cluster distribution; while the lower the DB score is, the better the cluster distribution is.
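Both indices are available in common libraries; for example, the following usage sketch (with placeholder data; X_low stands for the 2-dimensional embedding and y for the ground-truth labels) uses scikit-learn:

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score, calinski_harabasz_score

# X_low: the 2-d embedding produced by t-SNE; y: ground-truth labels (placeholders here)
X_low = np.random.default_rng(0).normal(size=(100, 2))
y = np.repeat([0, 1], 50)

db = davies_bouldin_score(X_low, y)      # Eq. (12): lower is better
ch = calinski_harabasz_score(X_low, y)   # Eq. (13): higher is better
print(f"DB = {db:.3f}, CH = {ch:.1f}")
```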
All algorithms used in the following experiments were implemented in Matlab 2019b and were run on a machine with 14 cores (Intel Xeon E5-2690 v4 @ 2.59 GHz) and 256GB memory. 4 All datasets were normalised using the min-max normalisation to yield each attribute to be in [0,1] before the experiments began. We also use the min-max normalisation on the t-SNE results before calculating DB and CH scores.
Evaluation results
This section presents the results of the utility evaluation of the Isolation kernel and the Gaussian kernel in t-SNE using 21 real-world datasets 5 with different data sizes and dimensions. We report the best performance of each algorithm under a systematic parameter search with the ranges shown in Table 4. 6 Note that there is only one manual parameter ψ to control the partitioning mechanism; the other parameter t can be fixed to a default number. Table 5 shows the results of the two kernels used in t-SNE. The Isolation kernel performs better on 18 out of 21 datasets in terms of AUC_RNX, which means that the Isolation kernel enables t-SNE to preserve the local neighbourhoods much better than the Gaussian kernel. With regard to cluster quality, the Isolation kernel performs better than the Gaussian kernel on 18 out of 21 datasets in terms of both DB and CH. Notice that when the Gaussian kernel is better, the performance gaps are usually small in any of the three measures. Overall, the Isolation kernel is better than the Gaussian kernel on 16 out of 21 datasets in all three measures. The reverse is true on one dataset only, i.e., News20. The visualisation result on News20binary indicates that there are significant overlaps between the two clusters in this dataset. This is reflected in the AUC_RNX results, which are significantly less than a random assignment (AUC_RNX = 0.5).
The visualization result of News20 is shown in Appendix C.
On the COIL20 dataset, we have identified a structural misrepresentation issue with the Gaussian kernel, similar to the one shown in Table 2. Table 6 shows the five clusters where the Gaussian kernel has misrepresented structures in the high-dimensional space. The 3-dimensional results show that the Isolation kernel depicts a more nuanced structural relationship between the five clusters, whereas the Gaussian kernel depicts them as five disparate clusters (Table 6). Also, note that a reference point × is close to all five clusters when the Isolation kernel is used, but it is far from many clusters when the Gaussian kernel is used.
Table 6: t-SNE visualisation results on COIL20. Rows: Gaussian kernel (a, b) and Isolation kernel (c, d); columns: t-SNE in 2d and t-SNE on 5 selected classes in 3d. (a) and (c) show the visualisation results in a two-dimensional space; (b) and (d) show the five clusters and a reference point (indicated as × with the class label "R") in a three-dimensional space.
Runtime comparison
Generally, both the Gaussian kernel and the Isolation kernel have quadratic time and space complexities. However, the Gaussian kernel in the original t-SNE needs a large number of iterations to search for the optimal local bandwidth for each point. As a result, the Gaussian kernel takes a much longer time in step 1 of the algorithm than the Isolation kernel. Figure 3 presents two runtime comparisons of t-SNE with the two kernels on a synthetic dataset. Figure 3(a) shows that the Gaussian kernel is much slower than the Isolation kernel in similarity calculations. This is mainly due to the search required to tune the n bandwidths in step 1 of the algorithm. It is interesting to note that though both similarities have n^2 time complexity, the constant is significantly lower for the Isolation kernel: when the data size is increased 10 times from 10,000 to 100,000, the Gaussian kernel increases its runtime 685 times, whereas the Isolation kernel increases only 91 times. As a result, with a dataset of 100,000 data points, the Isolation kernel 7 is two orders of magnitude faster than the Gaussian kernel (887 seconds versus 72,196 seconds). Figure 3(b) shows the runtime of the mapping process in step 3 of Algorithms 1 and 2, which is the same for both algorithms. It is not surprising that their runtimes are about the same in this step, regardless of the kernel employed. Table 7 compares the CPU runtime of the Gaussian kernel and Isolation kernel used in t-SNE on four real-world datasets: t-SNE with the Isolation kernel is up to one order of magnitude faster than t-SNE with the Gaussian kernel in the first two steps.
Footnote 7: In addition, the Isolation kernel is amenable to GPU acceleration [20]. Our experiment shows that the runtime of the Isolation kernel can be sped up by two orders of magnitude with a GPU machine, e.g., from 54 CPU seconds to 0.24 GPU seconds for a dataset of 25,000 data points.
Table 8: t-SNE visualisation results on the MNIST and MNIST8M datasets.
Scalability testing
Here we show that the Isolation kernel enables t-SNE to deal with large datasets because step 1 takes constant time (once the parameters are fixed), rather than O(n^2) time as when a Gaussian kernel is used.
This allows t-SNE to deal with a dataset with millions of data points in step 1, while using a subsample in steps 2 & 3 to visualise the dataset in a low-dimensional space.
To demonstrate this ability, we use the MNIST8M dataset [17] with 8.1 million points in step 1; and then use either the MNIST dataset or a subsample of 10,000 data points from MNIST8M in steps 2 & 3 of t-SNE. The results of t-SNE with the Isolation kernel are shown in the last two columns in Table 8. The results show that IK can get good CH scores with small ψ values. It took 334s (ψ = 2048) in steps 1 and 2, and 972s in step 3. Note that t-SNE with Gaussian kernel cannot be directly applied on this massive dataset in the same manner because it would take too long to complete step 1, as shown in Figure 3(a).
The use of a subsample in steps 2 and 3 was previously suggested by [18]. However, the suggestion was to replace the Gaussian kernel with a graph similarity that employs a random walk method. This graph similarity approach has the same limitation as the Gaussian kernel because of its high time complexity. It requires a neighbourhood graph to be generated before a random walk kernel (or any graph kernel) can be used to measure similarities. While many graph kernels (see e.g., [9]) may be applied here, the key obstacle is the generation of the neighbourhood graph which has at least O(n 2 ) time complexity.
In summary, employing Isolation kernel is the only method that takes constant time in step 1. Meanwhile, subsampling in step 2 and 3 enables t-SNE to process large-scale datasets without compromising the reference probability that needs to be established in step 1.
Discussion
The proposed method can benefit existing variants of t-SNE
The common feature of existing variants of t-SNE is that they all use the Gaussian kernel. 8 The proposed idea can be applied to variants of stochastic neighbour embedding, e.g., NeRV [27] and JSE [11], since they employ the same algorithm procedure as t-SNE. The only difference is the use of variants of cost function, i.e., type 1 or type 2 mixture of KL divergences.
In addition, Isolation kernel can be used in existing methods which aims to speed up t-SNE in step 3 of the algorithm. This is discussed in Section 6.3.
Isolation kernel performs optimally with small samples
The finding that small samples (as the ψ value) have better visualisation results than large samples was formally analysed in the context of nearest neighbour anomaly detection [23]. The work is motivated by the previous finding that small samples can produce better detection accuracy for some anomaly detectors than large samples (e.g., [16,22]). The theoretical analysis based on computational geometry reveals that the geometry of the data distribution has a direct impact on the sample size setting, which is essential to produce an optimal nearest neighbour anomaly detector [23]. In a simple geometry such as a Gaussian distribution, a sample size of one data point (at the mean of the Gaussian distribution) yields the optimal nearest neighbour anomaly detector; a sample of more data points produces a worse performing detector. In a more complex geometry of data distribution (e.g., a mixture of multiple Gaussian distributions), while the optimal sample size is more than one data point, a sample size over the optimal one also produces a worse performing detector. See [23] for details.
The above result can explain the effect of small samples in Isolation kernel described in Section 4.3: the optimal sample size is the representative sample for the underlying geometry of data distribution, allowing the Isolation kernel to model relative similarities between different regions most effectively.
In summary, most methods use small samples as a compromising approach when failing to handle large datasets. It comes at the cost of low accuracy and longer runtime. However, algorithms employing Isolation kernel can process large datasets without trading off accuracy and efficiency due to the resultant sample. While ψ of the Isolation kernel serves the primary purpose of a kernel parameter like the bandwidth parameter of Gaussian kernel, the resultant sample size enables algorithms that employ the Isolation kernel to deal with large datasets without compromising the accuracy of the task.
Methods to speed up t-SNE
Scalability is an open issue for applying unsupervised distance metric learning approaches to large datasets [28]. As mentioned before, there are currently two ways to speed up t-SNE: one is subsampling (a mitigation approach discussed in Section 4.3), and the other is to use some approximation to reduce the runtime of step 3.
The two approximation methods mentioned in the literature review are (i) the Barnes-Hut algorithm in conjunction with the dual-tree algorithm [25], and (ii) interpolating onto an equispaced grid in order to use the fast Fourier transform to perform the convolution required in step 3 of the t-SNE algorithm [14]. However, these approximation methods sacrifice accuracy for efficiency. For example, opt-SNE [2] utilises Kullback-Leibler divergence evaluation to automatically identify the tailored parameters in the optimisation procedure of t-SNE, in order to reduce the iteration time and improve the embedding quality. Nevertheless, all of these methods are still based on Gaussian kernel. Therefore, they still have the same deficiency of misrepresented structures as the original t-SNE, as discussed in Section 3.1.1. Appendix E and Appendix F show examples of these outcomes of FIt-SNE [14] and opt-SNE [2], respectively.
In a nutshell, the proposed method of using Isolation kernel in t-SNE offers (i) the only way to establish the reference probability in step 1 using a large dataset (without parallelisation); and (ii) a way to speed up t-SNE, which is an alternative to existing speedup methods. The use of a subsample, as a mitigation approach, in step 1 compromises the accuracy of reference probability. The use of an approximation method in step 3 reduces the quality of the dimensionality reduction. These existing methods in speeding up t-SNE still employ Gaussian kernel; and thus they fail to address the two deficiencies we have identified.
Conclusions
This paper identifies two deficiencies in t-SNE due to the use of the Gaussian kernel. First, the point-based-bandwidth Gaussian kernel often creates misrepresented structure(s) which do not exist in the given dataset under some conditions. Second, the data-independent Gaussian kernel largely increases the computational load resulting from the need to determine n bandwidths for a dataset of n points, and is thus unable to deal with large datasets. Though some methods have been suggested to trade off accuracy for faster running speed, the underlying issue due to the use of the Gaussian kernel remains unresolved.
Since the root cause of these deficiencies is the use of a data-independent kernel, we propose to simply replace Gaussian kernel with a data-dependent kernel called Isolation kernel.
We show that the use of Isolation kernel in t-SNE overcomes the drawback of misrepresenting some structures in the data, which often occurs when Gaussian kernel is applied in t-SNE. Also, the use of Isolation kernel yields a more efficient similarity computation because data-dependent Isolation kernel has only one parameter that needs to be tuned. Unlike the existing methods in speeding up t-SNE, this efficient feature of Isolation kernel enables t-SNE to deal with large-scale datasets without trading off accuracy.
Appendix A. Visualisation results of t-SNE on subspace clusters having some shared attributes
Here we use a dataset with three subspace clusters where all clusters share only the same two attributes (#50 and #51). The three clusters have the same Gaussian distribution N[0, 1]. Cluster 1 has 500 points with relevant attributes in dimensions #1 to #51; cluster 2 has 500 points with relevant attributes in dimensions #50 to #100; and cluster 3 has 20 points with relevant attributes in dimensions #50 to #51. All irrelevant attributes of each cluster have zero values. Because most attributes of cluster 3 are zero, the overall distance between cluster 3 and cluster 1 or cluster 2 is much smaller than the distance between cluster 1 and cluster 2. Table 9 shows the visualisation results of t-SNE with Gaussian kernel and Isolation kernel on the above-mentioned 100-dimensional dataset. It can be seen from the table that the Isolation kernel with small ψ values presents the cluster structure correctly, i.e., the third cluster is in the centre and close to clusters 1 and 2.
In contrast, t-SNE with the Gaussian kernel using perplexity = 50 shows only a small gap between clusters 1 and 2, and the separation between cluster 3 and clusters 1 & 2 is not clear. When the perplexity is increased to 250, three points from cluster 3 that are close to the origin (including the origin) become far away from clusters 1 and 2. This is because they get much smaller bandwidths than all other points due to the high density around the origin. As a result, they are very dissimilar to most other points.
Table 9: Visualisation results of t-SNE with Gaussian kernel and Isolation kernel on a 100-dimensional dataset with three subspace clusters. Note that in (c), three points (including the origin) from cluster 3 are far away from clusters 1 and 2, as indicated with the red arrows.
The implementation uses a distance function ℓ_p(x, y); we use p = 2 (Euclidean distance) in this paper. Table 10 compares the contours of the Isolation kernel on two different data distributions with different ψ values. It shows that the Isolation kernel is adaptive to the local density. Under a uniform data distribution, the Isolation kernel's contour is symmetric with respect to the reference point at (0.5, 0.5). However, on the Parkinson dataset, the contour shows that, for points having equal inter-point distance from the reference point x at (0.5, 0.5), points in the sparse region are more similar to x than points in the dense region are to x. In addition, the larger the ψ, the sharper the kernel distribution of the Isolation kernel, as shown in Table 10. This is because a larger ψ produces more partitions (or Voronoi cells) of smaller sizes. This means that two points are less likely to fall into the same cell unless they are very close.
While this implementation of the Isolation kernel produces a contour similar to that of an exponential kernel k(x, y) = exp(−‖x − y‖ / (2σ²)) under a uniform density distribution, different implementations have different contours. For example, using axis-parallel partitionings to implement the Isolation kernel produces a (diamond-shaped) contour which is more akin to that of the Laplacian kernel k(x, y) = exp(−‖x − y‖ / σ) under a uniform density distribution [24]. Of course, both the exponential and Laplacian kernels, like Gaussian kernel, are data-independent.
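To make the Voronoi-based implementation concrete, the following is a minimal sketch of computing an Isolation kernel similarity matrix; the number of partitionings t, the sampling scheme, and all names are illustrative assumptions, not the reference implementation.

```python
import numpy as np

def isolation_kernel(X, psi=16, t=200, seed=None):
    """Sketch of a Voronoi-based Isolation kernel similarity matrix.

    For each of t partitionings, psi points are sampled from X and every
    point is assigned to the Voronoi cell of its nearest sample (Euclidean
    distance here, i.e. p = 2). The similarity of x and y is the fraction
    of partitionings in which they fall into the same cell.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = np.zeros((n, n))
    for _ in range(t):
        centers = X[rng.choice(n, size=psi, replace=False)]
        # index of the nearest sampled point = Voronoi cell id
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        cell = d.argmin(axis=1)
        K += (cell[:, None] == cell[None, :])
    return K / t
```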
Appendix C. t-SNE visualisation on News20
We compare the visualisation results of News20 with different parameter settings in Table 11. It is interesting to note that t-SNE using the Isolation Kernel with a small ψ produces better visualisation results, having more separable clusters, than t-SNE using the Gaussian kernel with a high perplexity, although the IK got slightly lower evaluation measure values (compare Figures (c) and (d) in Table 11). However, the two clusters are significantly overlapped in most cases.
We suspect that the overlapping issue is caused by the sparsity. To verify this, we use the same data distribution from Table 1 and increase the dimensionality of the 5 subspace clusters; the results are shown in Table 4.

Table 11: Visualisation results of t-SNE on News20. t-SNE produced the best DB scores when using the Gaussian Kernel with perplexity = 3700 and the Isolation Kernel with ψ = 85.
The adaptive Gaussian kernel is defined as:

K_AG(x, y) = exp(−‖x − y‖² / (σ_x σ_y))    (15)

where σ_x is the distance between x and x's k-th nearest neighbour.
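As an illustration of Eq. (15), the sketch below computes the adaptive Gaussian kernel matrix with numpy; the choice of k and the function name are assumptions for demonstration only.

```python
import numpy as np

def adaptive_gaussian_kernel(X, k=10):
    """Sketch of the adaptive Gaussian kernel of Eq. (15):
    K_AG(x, y) = exp(-||x - y||^2 / (sigma_x * sigma_y)),
    where sigma_x is the distance from x to its k-th nearest neighbour."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    d = np.sqrt(d2)
    # k-th nearest neighbour distance (each point itself sorts to index 0)
    sigma = np.sort(d, axis=1)[:, k]
    return np.exp(-d2 / (sigma[:, None] * sigma[None, :]))
```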
However, replacing the Gaussian kernel in t-SNE with either of these kernels produces poor outcomes. For example, on the Segment and Spam datasets, the adaptive Gaussian kernel produced AUC_RNX scores of 0.35 and 0.22, respectively, and the kNN kernel yielded AUC_RNX scores of 0.38 and 0.28, respectively. These are significantly poorer than the scores produced using the Gaussian kernel or Isolation kernel shown in Table 5. We postulate that this is because a global k is unable to make these kernels sufficiently adaptive to the local distribution.
It is interesting to note that the current method used to obtain a data-dependent kernel is to begin with a data-independent kernel such as the Gaussian kernel and then find ways to make it data-dependent. This is an indirect approach. The Isolation kernel is a direct approach to obtaining a data-dependent kernel: it is derived directly from a given dataset, without the intermediary of a data-independent kernel.
Appendix E. Visualisation results of Fast interpolation-based t-SNE
FIt-SNE [14] addresses the runtime issue in step 3 of the t-SNE algorithm only. Figure 5 demonstrates the visualisation results of FIt-SNE [14] on two datasets. It is clear that FIt-SNE has the same deficiency of misrepresented structures as in t-SNE, due to the use of Gaussian kernel, as discussed in Section 3.1.1.
Figure 5: Visualisation of FIt-SNE on two datasets: (a) 5 subspace clusters connected at one point; (b) COIL20.

Figure 6 shows the FIt-SNE results on the MNIST and MNIST8M datasets. FIt-SNE's results are worse than those of t-SNE based on either GK or IK in terms of the CH scores on both the MNIST and MNIST8M datasets, as are the visualisation outcomes. Note that without the colours to differentiate between classes, most of the classes shown in Figure 6 cannot be identified as separate classes in the FIt-SNE results produced from the MNIST8M dataset.
FIt-SNE ran faster than t-SNE because it uses a grid-based approximation and is implemented in C++ with multi-threading. The price it pays for this efficiency is worse visualisation outcomes.
Note that on the MNIST8M dataset, we could only use 2 million data points in FIt-SNE because of its high memory usage. In contrast, with the Isolation kernel, we could run t-SNE (in MATLAB without multi-threading) on the same machine using the entire 8.1 million data points of MNIST8M (shown in Table 8). Figure 7 shows the visualisation results on three datasets using opt-SNE. As expected, opt-SNE produced results similar to those of t-SNE, with misrepresented structures in Figures 7a and 7b. On MNIST, opt-SNE obtained a slightly worse result than t-SNE (CH = 6129 versus CH = 6452) because it split the green cluster into two parts, as shown in Figure 7c. The opt-SNE source code is obtained from https://github.com/omiq-ai/Multicore-opt-SNE; all parameters in opt-SNE use the default settings, except that we search for the best perplexity in the same range as for t-SNE, as stated in Table 4.
| 8,697 |
1906.09667
|
2949777943
|
The Log-Structured Merge-Tree (LSM-tree) has been widely adopted for use in modern NoSQL systems for its superior write performance. Despite the popularity of LSM-trees, they have been criticized for suffering from write stalls and large performance variances due to the inherent mismatch between their fast in-memory writes and slow background I/O operations. In this paper, we use a simple yet effective two-phase experimental approach to evaluate write stalls for various LSM-tree designs. We further explore the design choices of LSM merge schedulers to minimize write stalls given a disk bandwidth budget. We have conducted extensive experiments in the context of the Apache AsterixDB system and we present the results here.
|
Recently, a large number of improvements of the original LSM-tree @cite_45 have been proposed. Chen and Carey @cite_24 survey these improvements, which range from improving write performance @cite_23 @cite_26 @cite_4 @cite_21 @cite_20 and reducing the buffer cache misses due to merges @cite_16 @cite_27 , to supporting automatic design tuning of LSM-trees @cite_18 @cite_34 and optimizing LSM-based secondary indexes @cite_46 @cite_36 . However, all of these efforts focus on the throughput of LSM-trees, while performance variances and write stalls are largely ignored.
|
{
"abstract": [
"In this paper, we show that key-value stores backed by an LSM-tree exhibit an intrinsic trade-off between lookup cost, update cost, and main memory footprint, yet all existing designs expose a suboptimal and difficult to tune trade-off among these metrics. We pinpoint the problem to the fact that all modern key-value stores suboptimally co-tune the merge policy, the buffer size, and the Bloom filters' false positive rates in each level. We present Monkey, an LSM-based key-value store that strikes the optimal balance between the costs of updates and lookups with any given main memory budget. The insight is that worst-case lookup cost is proportional to the sum of the false positive rates of the Bloom filters across all levels of the LSM-tree. Contrary to state-of-the-art key-value stores that assign a fixed number of bits-per-element to all Bloom filters, Monkey allocates memory to filters across different levels so as to minimize this sum. We show analytically that Monkey reduces the asymptotic complexity of the worst-case lookup I O cost, and we verify empirically using an implementation on top of LevelDB that Monkey reduces lookup latency by an increasing margin as the data volume grows (50 -80 for the data sizes we experimented with). Furthermore, we map the LSM-tree design space onto a closed-form model that enables co-tuning the merge policy, the buffer size and the filters' false positive rates to trade among lookup cost, update cost and or main memory, depending on the workload (proportion of lookups and updates), the dataset (number and size of entries), and the underlying hardware (main memory available, disk vs. flash). We show how to use this model to answer what-if design questions about how changes in environmental parameters impact performance and how to adapt the various LSM-tree design elements accordingly.",
"We present WiscKey, a persistent LSM-tree-based key-value store with a performance-oriented data layout that separates keys from values to minimize I O amplification. The design of WiscKey is highly SSD optimized, leveraging both the sequential and random performance characteristics of the device. We demonstrate the advantages of WiscKey with both microbenchmarks and YCSB workloads. Microbenchmark results show that WiscKey is 2.5 × to 111 × faster than LevelDB for loading a database (with significantly better tail latencies) and 1.6 × to 14 × faster for random lookups. WiscKey is faster than both LevelDB and RocksDB in all six YCSB workloads.",
"Key-value (KV) stores based on multi-stage structures are widely deployed in the cloud to ingest massive amounts of easily searchable user data. However, current KV storage systems inevitably sacrifice at least one of the performance objectives, such as write, read, space efficiency etc., for the optimization of others. To understand the root cause of and ultimately remove such performance disparities among the representative existing KV stores, we analyze their enabling mechanisms and classify them into two models of data structures facilitating KV operations, namely, the multi-stage tree (MS-tree) as represented by LevelDB, and the multi-stage forest (MS-forest) as typified by the size-tiered compaction in Cassandra. We then build a KV store on a novel split MS-forest structure, called SifrDB, that achieves the lowest write amplification across all workload patterns and minimizes space reservation for the compaction. In addition, we design a highly efficient parallel search algorithm that fully exploits the access parallelism of modern flash-based storage devices to substantially boost the read performance. Evaluation results show that under both micro and YCSB benchmarks, SifrDB outperforms its closest competitors, i.e., the popular MS-forest implementations, making it a highly desirable choice for the modern large-dataset-driven KV stores.",
"NoSQL databases are increasingly used in big data applications, because they achieve fast write throughput and fast lookups on the primary key. Many of these applications also require queries on non-primary attributes. For that reason, several NoSQL databases have added support for secondary indexes. However, these works are fragmented, as each system generally supports one type of secondary index, and may be using different names or no name at all to refer to such indexes. As there is no single system that supports all types of secondary indexes, no experimental head-to-head comparison or performance analysis of the various secondary indexing techniques in terms of throughput and space exists. In this paper, we present a taxonomy of NoSQL secondary indexes, broadly split into two classes: Embedded Indexes (i.e. lightweight filters embedded inside the primary table) and Stand-Alone Indexes (i.e. separate data structures). To ensure the fairness of our comparative study, we built a system, LevelDB++, on top of Google's popular open-source LevelDB key-value store. There, we implemented two Embedded Indexes and three state-of-the-art Stand-Alone indexes, which cover most of the popular NoSQL databases. Our comprehensive experimental study and theoretical evaluation show that none of these indexing techniques dominate the others: the embedded indexes offer superior write throughput and are more space efficient, whereas the stand-alone secondary indexes achieve faster query response times. Thus, the optimal choice of secondary index depends on the application workload. This paper provides an empirical guideline for choosing secondary indexes",
"Key-value stores such as LevelDB and RocksDB offer excellent write throughput, but suffer high write amplification. The write amplification problem is due to the Log-Structured Merge Trees data structure that underlies these key-value stores. To remedy this problem, this paper presents a novel data structure that is inspired by Skip Lists, termed Fragmented Log-Structured Merge Trees (FLSM). FLSM introduces the notion of guards to organize logs, and avoids rewriting data in the same level. We build PebblesDB, a high-performance key-value store, by modifying HyperLevelDB to use the FLSM data structure. We evaluate PebblesDB using micro-benchmarks and show that for write-intensive workloads, PebblesDB reduces write amplification by 2.4-3x compared to RocksDB, while increasing write throughput by 6.7x. We modify two widely-used NoSQL stores, MongoDB and HyperDex, to use PebblesDB as their underlying storage engine. Evaluating these applications using the YCSB benchmark shows that throughput is increased by 18-105 when using PebblesDB (compared to their default storage engines) while write IO is decreased by 35-55 .",
"Multi-stage log-structured (MSLS) designs, such as LevelDB, RocksDB, HBase, and Cassandra, are a family of storage system designs that exploit the high sequential write speeds of hard disks and flash drives by using multiple append-only data structures. As a first step towards accurate and fast evaluation of MSLS, we propose new analytic primitives and MSLS design models that quickly give accurate performance estimates. Our model can almost perfectly estimate the cost of inserts in LevelDB, whereas the conventional worst-case analysis gives 1.8- 3.5× higher estimates than the actual cost. A few minutes of offline analysis using our model can find optimized system parameters that decrease LevelDB's insert cost by up to 9.4-26.2 ; our analytic primitives and model also suggest changes to RocksDB that reduce its insert cost by up to 32.0 , without reducing query performance or requiring extra memory.",
"Recently, the log-structured merge-tree (LSM-tree) has been widely adopted for use in the storage layer of modern NoSQL systems. Because of this, there have been a large number of research efforts, from both the database community and the operating systems community, that try to improve various aspects of LSM-trees. In this paper, we provide a survey of recent research efforts on LSM-trees so that readers can learn the state of the art in LSM-based storage techniques. We provide a general taxonomy to classify the literature of LSM-trees, survey the efforts in detail, and discuss their strengths and trade-offs. We further survey several representative LSM-based open-source NoSQL systems and discuss some potential future research directions resulting from the survey.",
"LSM-tree has been widely used in data management production systems for write-intensive workloads. However, as read and write workloads co-exist under LSM-tree, data accesses can experience long latency and low throughput due to the interferences to buffer caching from the compaction, a major and frequent operation in LSM-tree. After a compaction, the existing data blocks are reorganized and written to other locations on disks. As a result, the related data blocks that have been loaded in the buffer cache are invalidated since their referencing addresses are changed, causing serious performance degradations. In order to re-enable high-speed buffer caching during intensive writes, we propose Log-Structured buffered-Merge tree (simplified as LSbM-tree) by adding a compaction buffer on disks, to minimize the cache invalidations on buffer cache caused by compactions. The compaction buffer efficiently and adaptively maintains the frequently visited data sets. In LSbM, strong locality objects can be effectively kept in the buffer cache with minimum or without harmful invalidations. With the help of a small on-disk compaction buffer, LSbM achieves a high query performance by enabling effective buffer caching, while retaining all the merits of LSM-tree for write-intensive data processing, and providing high bandwidth of disks for range queries. We have implemented LSbM based on LevelDB. We show that with a standard buffer cache and a hard disk, LSbM can achieve 2x performance improvement over LevelDB. We have also compared LSbM with other existing solutions to show its strong effectiveness.",
"High-performance transaction system applications typically insert rows in a History table to provide an activity trace; at the same time the transaction system generates log records for purposes of system recovery. Both types of generated information can benefit from efficient indexing. An example in a well-known setting is the TPC-A benchmark application, modified to support efficient queries on the history for account activity for specific accounts. This requires an index by account-id on the fast-growing History table. Unfortunately, standard disk-based index structures such as the B-tree will effectively double the I O cost of the transaction to maintain an index such as this in real time, increasing the total system cost up to fifty percent. Clearly a method for maintaining a real-time index at low cost is desirable. The log-structured mergetree (LSM-tree) is a disk-based data structure designed to provide low-cost indexing for a file experiencing a high rate of record inserts (and deletes) over an extended period. The LSM-tree uses an algorithm that defers and batches index changes, cascading the changes from a memory-based component through one or more disk components in an efficient manner reminiscent of merge sort. During this process all index values are continuously accessible to retrievals (aside from very short locking periods), either through the memory component or one of the disk components. The algorithm has greatly reduced disk arm movements compared to a traditional access methods such as B-trees, and will improve cost-performance in domains where disk arm costs for inserts with traditional access methods overwhelm storage media costs. The LSM-tree approach also generalizes to operations other than insert and delete. However, indexed finds requiring immediate response will lose I O efficiency in some cases, so the LSM-tree is most useful in applications where index inserts are more common than finds that retrieve the entries. This seems to be a common property for history tables and log files, for example. The conclusions of Sect. 6 compare the hybrid use of memory and disk components in the LSM-tree access method with the commonly understood advantage of the hybrid method to buffer disk pages in memory.",
"In this paper, we show that all mainstream LSM-tree based key-value stores in the literature and in industry are suboptimal with respect to how they trade off among the I O costs of updates, point lookups, range lookups, as well as the cost of storage, measured as space-amplification. The reason is that they perform expensive merge operations in order to (1) bound the number of runs that a lookup has to probe, and to (2) remove obsolete entries to reclaim space. However, most of these merge operations reduce point lookup cost, long range lookup cost, and space-amplification by a negligible amount. To address this problem, we expand the LSM-tree design space with Lazy Leveling, a new design that prohibits merge operations at all levels of LSM-tree but the largest. We show that Lazy Leveling improves the worst-case cost complexity of updates while maintaining the same bounds on point lookup cost, long range lookup cost, and space-amplification. To be able to navigate between Lazy Leveling and other designs, we make the LSM-tree design space fluid by introducing Fluid LSM-tree, a generalization of LSM-tree that can be parameterized to assume all existing LSM-tree designs. We show how to fluidly transition from Lazy Leveling to (1) designs that are more optimized for updates by merging less at the largest level, and (2) designs that are more optimized for small range lookups by merging more at all other levels. We put everything together to design Dostoevsky, a key-value store that navigates the entire Fluid LSM-tree design space based on the application workload and hardware to maximize throughput using a novel closed-form performance model. We implemented Dostoevsky on top of RocksDB, and we show that it strictly dominates state-of-the-art LSM-tree based key-value stores in terms of performance and space-amplification.",
"In recent years, the Log Structured Merge (LSM) tree has been widely adopted by NoSQL and NewSQL systems for its superior write performance. Despite its popularity, however, most existing work has focused on LSM-based key-value stores with only a single LSM-tree; auxiliary structures, which are critical for supporting ad-hoc queries, have received much less attention. In this paper, we focus on efficient data ingestion and query processing for general-purpose LSM-based storage systems. We first propose and evaluate a series of optimizations for efficient batched point lookups, significantly improving the range of applicability of LSM-based secondary indexes. We then present several new and efficient maintenance strategies for LSM-based storage systems. Finally, we have implemented and experimentally evaluated the proposed techniques in the context of the Apache AsterixDB system, and we present the results here.",
"Compactions are a vital maintenance mechanism used by datastores based on the log-structured merge-tree to counter the continuous buildup of data files under update-intensive workloads. While compactions help keep read latencies in check over the long run, this comes at the cost of significantly degraded read performance over the course of the compaction itself. In this paper, we offer an in-depth analysis of compaction-related performance overheads and propose techniques for their mitigation. We offload large, expensive compactions to a dedicated compaction server to allow the datastore server to better utilize its resources towards serving the actual workload. Moreover, since the newly compacted data is already cached in the compaction server's main memory, we fetch this data over the network directly into the datastore server's local cache, thereby avoiding the performance penalty of reading it back from the filesystem. In fact, pre-fetching the compacted data from the remote cache prior to switching the workload over to it can eliminate local cache misses altogether. Therefore, we implement a smarter warmup algorithm that ensures that all incoming read requests are served from the datastore server's local cache even as it is warming up. We have integrated our solution into HBase, and using the YCSB and TPC-C benchmarks, we show that our approach significantly mitigates compaction-related performance problems. We also demonstrate the scalability of our solution by distributing compactions across multiple compaction servers.",
""
],
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_4",
"@cite_36",
"@cite_21",
"@cite_34",
"@cite_24",
"@cite_27",
"@cite_45",
"@cite_23",
"@cite_46",
"@cite_16",
"@cite_20"
],
"mid": [
"2605800201",
"2594680891",
"2892777373",
"2798536345",
"2764131694",
"2293496475",
"2963379900",
"2735400990",
"2068739275",
"2798441769",
"2888655500",
"2243935923",
""
]
}
|
On Performance Stability in LSM-based Storage Systems (Extended Version)
|
In recent years, the Log-Structured Merge-Tree (LSM-tree) [45,42] has been widely used in modern key-value stores and NoSQL systems [2,4,5,7,14]. Different from traditional index structures, such as B+-trees, which apply updates in-place, an LSM-tree always buffers writes into memory. When memory is full, writes are flushed to disk and subsequently merged using sequential I/Os. To improve efficiency and minimize blocking, flushes and merges are often performed asynchronously in the background.
Despite their popularity, LSM-trees have been criticized for suffering from write stalls and large performance variances [3,51,58]. To illustrate this problem, we conducted a micro-experiment on RocksDB [7], a state-of-the-art LSM-based key-value store, to evaluate its write throughput on SSDs using the YCSB benchmark [25]. The instantaneous write throughput over time is depicted in Figure 1. As one can see, the write throughput of RocksDB periodically slows down after the first 300 seconds, which is when the system has to wait for background merges to catch up. Write stalls can significantly impact percentile write latencies and must be minimized to improve the end-user experience or to meet strict service-level agreements [36].

Figure 1: Instantaneous write throughput of RocksDB: writes are periodically stalled to wait for lagging merges.
In this paper, we study the impact of write stalls and how to minimize write stalls for various LSM-tree designs. It should first be noted that some write stalls are inevitable. Due to the inherent mismatch between fast in-memory writes and slower background I/O operations, in-memory writes must be slowed down or stopped if background flushes or merges cannot catch up. Without such a flow control mechanism, the system will eventually run out of memory (due to slow flushes) or disk space (due to slow merges). Thus, it is not a surprise that an LSM-tree can exhibit large write stalls if one measures its maximum write throughput by writing data as quickly as possible, as we did in Figure 1.
This inevitability of write stalls does not necessarily limit the applicability of LSM-trees since in practice writes do not arrive as quickly as possible, but rather are controlled by the expected data arrival rate. The data arrival rate directly impacts the write stall behavior and resulting write latencies of an LSM-tree. If the data arrival rate is relatively low, then write stalls are unlikely to happen. However, it is also desirable to maximize the supported data arrival rate so that the system's resources can be fully utilized. Moreover, the expected data arrival rate is subject to an important constraint -it must be smaller than the processing capacity of the target LSM-tree. Otherwise, the LSM-tree will never be able to process writes as they arrive, causing infinite write latencies. Thus, to evaluate the write stalls of an LSM-tree, the first step is to choose a proper data arrival rate.
As the first contribution, we propose a simple yet effective approach to evaluate the write stalls of various LSM-tree designs by answering the following question: If we set the data arrival rate close to (e.g., 95% of) the maximum write throughput of an LSM-tree, will that cause write stalls? In other words, can a given LSM-tree design provide both a high write throughput and a low write latency? Briefly, the proposed approach consists of two phases: a testing phase and a running phase. During the testing phase, we experimentally measure the maximum write throughput of an LSM-tree by simply writing as much data as possible. During the running phase, we then set the data arrival rate close to the measured maximum write throughput as the limiting data arrival rate to evaluate its write stall behavior based on write latencies. If write stalls happen, the measured write throughput is not sustainable since it cannot be used in the long-term due to the large latencies. However, if write stalls do not happen, then write stalls are no longer a problem since the given LSM-tree can provide a high write throughput with small performance variance.
Although this approach seems to be straightforward at first glance, there exist two challenges that must be addressed. First, how can we accurately measure the maximum sustainable write rate of an LSM-tree experimentally? Second, how can we best schedule LSM I/O operations so as to minimize write stalls at runtime? In the remainder of this paper, we will see that the merge scheduler of an LSM-tree can have a large impact on write stalls. As the second contribution, we identify and explore the design choices for LSM merge schedulers and present a new merge scheduler to address these two challenges.
As the paper's final contribution, we have implemented the proposed techniques and various LSM-tree designs inside Apache AsterixDB [14]. This enabled us to carry out extensive experiments to evaluate the write stalls of LSM-trees and the effectiveness of the proposed techniques using our two-phase evaluation approach. We argue that with proper tuning and configuration, LSM-trees can achieve both a high write throughput and small performance variance.
The remainder of this paper is organized as follows: Section 2 provides background information on LSM-trees and briefly surveys related work. Section 3 describes the general experimental setup used throughout this paper. Section 4 identifies the design choices for LSM merge schedulers and evaluates bLSM's spring-and-gear scheduler [51]. Sections 5 and 6 present our techniques for minimizing write stalls for full merges and partitioned merges respectively. Section 8 summarizes the lessons and insights from our evaluation. Finally, Section 9 concludes the paper.
Log-Structured Merge Trees
The LSM-tree [45] is a persistent index structure optimized for write-intensive workloads. In an LSM-tree, writes are first buffered into a memory component. An insert or update simply adds a new entry with the same key, while a delete adds an anti-matter entry indicating that a key has been deleted. When the memory component is full, it is flushed to disk to form a new disk component, within which entries are ordered by keys. Once flushed, LSM disk components are immutable.
A query over an LSM-tree has to reconcile the entries with identical keys from multiple components, as entries from newer components override those from older components. A point lookup query simply searches all components from newest to oldest until the first match is found.
A range query searches all components simultaneously using a priority queue to perform reconciliation. To speed up point lookups, a common optimization is to build Bloom filters [19] over the sets of keys stored in disk components. If a Bloom filter reports that a key does not exist, then that disk component can be excluded from searching. As disk components accumulate, query performance tends to degrade since more components must be examined. To counter this, smaller disk components are gradually merged into larger ones. This is implemented by scanning old disk components to create a new disk component with unique entries. The decision of what disk components to merge is made by a pre-defined merge policy, which is discussed below.

Merge Policy. Two types of LSM merge policies are commonly used in practice [42], both of which organize components into "levels". The leveling merge policy (Figure 2a) maintains one component per level, and a component at Level i + 1 will be T times larger than that of Level i. As a result, the component at Level i will be merged multiple times with the component from Level i − 1 until it fills up and is then merged into Level i + 1. In contrast, the tiering merge policy (Figure 2b) maintains multiple components per level. When a Level i becomes full with T components, these T components are merged together into a new component at Level i + 1. In both merge policies, T is called the size ratio, as it controls the maximum capacity of each level. We will refer to both of these merge policies as full merges since components are merged entirely.
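To make the read path described above concrete, here is a minimal sketch of a point lookup that scans components from newest to oldest and uses a filter to skip disk components; the Component and Entry classes are toy stand-ins (the "Bloom filter" here is an exact set with no false positives), not AsterixDB's or RocksDB's actual API.

```python
from collections import namedtuple

Entry = namedtuple("Entry", ["value", "is_antimatter"])

class Component:
    """A toy LSM component: a sorted run modelled as a dict plus an
    optional set-based stand-in for a Bloom filter."""
    def __init__(self, entries, with_filter=True):
        self.entries = dict(entries)
        self.filter_keys = set(self.entries) if with_filter else None

    def may_contain(self, key):
        return self.filter_keys is None or key in self.filter_keys

    def get(self, key):
        return self.entries.get(key)

def point_lookup(key, components):
    """Search components from newest to oldest; skip a component whose
    filter rules the key out; an anti-matter entry means 'deleted'."""
    for comp in components:
        if not comp.may_contain(key):
            continue
        entry = comp.get(key)
        if entry is not None:
            return None if entry.is_antimatter else entry.value
    return None

# Newest first: the delete in the newer component shadows the older insert.
comps = [Component({"k1": Entry(None, True)}),
         Component({"k1": Entry("v1", False)})]
print(point_lookup("k1", comps))  # None
```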
In general, the leveling merge policy optimizes for query performance by minimizing the number of components but at the cost of write performance. This design also maximizes space efficiency, which measures the amount of space used for storing obsolete entries, by having most of the entries at the largest level. In contrast, the tiering merge policy is more write-optimized by reducing the merge frequency, but this leads to lower query performance and space utilization.
Partitioning. Partitioning is a commonly used optimization in modern LSM-based key-value stores that is often implemented together with the leveling merge policy, as pioneered by LevelDB [5]. In this optimization, a large LSM disk component is range-partitioned into multiple (often fixed-size) files. This bounds the processing time and the temporary space of each merge. An example of a partitioned LSM-tree with the leveling merge policy is shown in Figure 3, where each file is labeled with its key range. Note that partitioning starts from Level 1, as components in Level 0 are directly flushed from memory. To merge a file from Level i to Level i + 1, all of its overlapping files at Level i + 1 are selected and these files are merged to form new files at Level i + 1. For example in Figure 3, the file labeled 0-50 at Level 1 will be merged with the files labeled 0-20 and 22-52 at Level 2, which produce new files labeled 0-15, 17-30, and 32-52 at Level 2. To select which file to merge next, LevelDB uses a round-robin policy. Both full merges and partitioned merges are widely used in existing systems. Full merges are used in AsterixDB [1], Cassandra [2], HBase [4], ScyllaDB [8], Tarantool [9], and WiredTiger (MongoDB) [11]. Partitioned merges are used in LevelDB [5], RocksDB [7], and X-Engine [34].
Write Stalls in LSM-trees. Since in-memory writes are inherently faster than background I/O operations, writing to memory components sometimes must be stalled to ensure the stability of an LSM-tree, which, however, will negatively impact write latencies. This is often referred to as the write stall problem. If the incoming write speed is faster than the flush speed, writes will be stalled when all memory components are full. Similarly, if there are too many disk components, new writes should be stalled as well. In general, merges are the major source of write stalls since writes are flushed once but merged multiple times. Moreover, flush stalls can be avoided by giving higher I/O priority to flushes. In this paper, we thus focus on write stalls caused by merges.
Apache AsterixDB
Apache AsterixDB [1,14,22] is a parallel, semi-structured Big Data Management System (BDMS) that aims to manage large amounts of data efficiently. It supports a feed-based framework for efficient data ingestion [32,56]. The records of a dataset in AsterixDB are hash-partitioned based on their primary keys across multiple nodes of a shared-nothing cluster; thus, a range query has to search all nodes. Each partition of a dataset uses a primary LSM-based B+-tree index to store the data records, while local secondary indexes, including LSM-based B+-trees, R-trees, and inverted indexes, can be built to expedite query processing. AsterixDB internally uses a variation of the tiering merge policy to manage disk components, similar to the one used in existing systems [4,7]. Instead of organizing components into levels explicitly as in Figure 2b, AsterixDB's variation simply schedules merges based on the sizes of disk components. In this work, we do not focus on the LSM-tree implementation in AsterixDB but instead use AsterixDB as a common testbed to evaluate various LSM-tree designs.
Related Work
LSM-trees. Recently, a large number of improvements of the original LSM-tree [45] have been proposed. [42] surveys these improvements, ranging from improving write performance [18,28,29,38,40,44,47,57], optimizing memory management [13,17,53,59], and supporting automatic tuning of LSM-trees [26,27,39], to optimizing LSM-based secondary indexes [41,46] and extending the applicability of LSM-trees [43,49]. However, all of these efforts have largely ignored performance variances and write stalls of LSM-trees. Several LSM-tree implementations seek to bound the write processing latency to alleviate the negative impact of write stalls [5,7,37,58]. bLSM [51] proposes a spring-and-gear merge scheduler to avoid write stalls. As shown in Figure 4, bLSM has one memory component, C0, and two disk components, C1 and C2. The memory component C0 is continuously flushed and merged with C1. When C1 becomes full, a new C1 component is created, while the old C1, which now becomes C1', will be merged with C2. bLSM ensures that for each Level i, the progress of merging Ci' into Ci+1 (denoted as "out_i") will be roughly identical to the progress of the formation of a new Ci (denoted as "in_i"). This eventually limits the write rate for the memory component (in_0) and avoids blocking writes. However, we will see later that simply bounding the maximum write processing latency alone is insufficient, because a large variance in the processing rate can still cause large queuing delays for subsequent writes.
Performance Stability. Performance stability has long been recognized as a critical performance metric. The TPC-C benchmark [10] measures not only absolute throughput, but also specifies the acceptable upper bounds for the percentile latencies. Huang et al. [36] applied VProfiler [35] to identify major sources of variance in database transactions. Various techniques have been proposed to optimize the variance of query processing [15,16,20,24,48,55]. Cao et al. [21] found that variance is common in storage stacks and heavily depends on configurations and workloads. Dean and Barroso [30] discussed several engineering techniques to reduce performance variances at Google. Different from these efforts, in this work we focus on the performance variances of LSM-trees due to their inherent out-of-place update design.
EXPERIMENTAL METHODOLOGY
For ease of presentation, we will mix our techniques with a detailed performance analysis for each LSM-tree design. We now describe the general experimental setup and methodology for all experiments to follow.
Experimental Setup
All experiments were run on a single node with an 8-core Intel i7-7567U 3.5GHz CPU, 16 GB of memory, a 500GB SSD, and a 1TB 7200 rpm hard disk. We used the SSD for LSM storage and configured the hard disk for transaction logging due to its sufficiently high sequential throughput. We allocated 10GB of memory for the AsterixDB instance. Within that allocation, the buffer cache size was set at 2GB. Each LSM memory component had a 128MB budget, and each LSM-tree had two memory components to minimize stalls during flushes. Each disk component had a Bloom filter with a false positive rate setting of 1%. The data page size was set at 4KB to align with the SSD page size.
It is important to note that not all sources of performance variance can be eliminated [36]. For example, writing a key-value pair with a 1MB value inherently requires more work than writing one that only has 1KB. Moreover, short time periods with quickly occurring writes (workload bursts) will be much more likely to cause write stalls than a long period of slow writes, even though their long-term write rate may be the same. In this paper, we will focus on the avoidable variance [36] caused by the internal implementation of LSM-trees instead of variances in the workloads.
To evaluate the internal variances of LSM-trees, we adopt YCSB [25] as the basis for our experimental workload. Instead of using the pre-defined YCSB workloads, we designed our own workloads to better study the performance stability of LSM-trees. Each experiment first loads an LSM-tree with 100 million records, in random key order, where each record has size 1KB. It then runs for 2 hours to update the previously loaded LSM-tree. This ensures that the measured write throughput of an LSM-tree is stable over time. Unless otherwise noted, we used one writer thread for writing data to the LSM memory components. We evaluated two update workloads, where the updated keys follow either a uniform or Zipf distribution. The specific workload setups will be discussed in the subsequent sections.
We used two commonly used I/O optimizations when implementing LSM-trees, namely I/O throttling and periodic disk forces. In all experiments, we throttled the SSD write speed of all LSM flush and merge operations to 100MB/s. This was implemented by using a rate limiter to inject artificial sleeps into SSD writes. This mechanism bounds the negative impact of the SSD writes on query performance and allows us to more fairly compare the performance differences of various LSM merge schedulers. We further had each flush or merge operation force its SSD writes after each 16MB of data. This helps to limit the OS I/O queue length, reducing the negative impact of SSD writes on queries. We have verified that disabling this optimization would not impact the performance trends of writes; however, large forces at the end of each flush and merge operation, which are required for durability, can significantly interfere with queries.
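These two I/O optimizations can be sketched roughly as follows; the wrapper class, its parameter values, and the use of os.fsync are illustrative assumptions rather than the actual AsterixDB implementation.

```python
import os
import time

class RateLimitedWriter:
    """Sketch of the two optimizations described above: throttle LSM writes
    to a bandwidth budget by injecting artificial sleeps, and force (fsync)
    the file after every force_interval bytes. A binary file object opened
    for writing is assumed."""

    def __init__(self, f, budget_bytes_per_sec=100 * 1024 * 1024,
                 force_interval=16 * 1024 * 1024):
        self.f = f
        self.budget = budget_bytes_per_sec
        self.force_interval = force_interval
        self.unforced = 0
        self.written = 0
        self.start = time.monotonic()

    def write(self, data):
        self.f.write(data)
        self.written += len(data)
        self.unforced += len(data)
        if self.unforced >= self.force_interval:   # periodic disk force
            self.f.flush()
            os.fsync(self.f.fileno())
            self.unforced = 0
        # Sleep if we are ahead of the allowed bandwidth budget.
        expected_elapsed = self.written / self.budget
        elapsed = time.monotonic() - self.start
        if expected_elapsed > elapsed:
            time.sleep(expected_elapsed - elapsed)
```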
Performance Metrics
To quantify the impact of write stalls, we will not only present the write throughput of LSM-trees but also their write latencies. However, there are different models for measuring write latencies. Throughout the paper, we will use arrival rate to denote the rate at which writes are submitted by clients, processing rate to denote the rate at which writes can be processed by an LSM-tree, and write throughput to denote the number of writes processed by an LSM-tree per unit of time. The difference between the write throughput and arrival/processing rates is discussed further below.
The bLSM paper [51], as well as most of the existing LSM research, used the experimental setup depicted in Figure 5a to write as much data as possible and measure the latency of each write. In this closed system setup [33], the processing rate essentially controls the arrival rate, which further equals the write throughput. Although this model is sufficient for measuring the maximum write throughput of LSM-trees, it is not suitable for characterizing their write latencies for several reasons. First, writing to memory is inherently faster than background I/Os, so an LSM-tree will always have to stall writes in order to wait for lagged flushes and merges.
Moreover, under this model, a client cannot submit its next write until its current write is completed. Thus, when the LSM-tree is stalled, only a small number of ongoing writes will actually experience a large latency since the remaining writes have not been submitted yet.
In practice, a DBMS generally cannot control how quickly writes are submitted by external clients, nor will their writes always arrive as fast as possible. Instead, the arrival rate is usually independent from the processing rate, and when the system is not able to process writes as fast as they arrive, the newly arriving writes must be temporarily queued. In such an open system (Figure 5b), the measured write latency includes both the queuing latency and processing latency. Moreover, an important constraint is that the arrival rate must be smaller than the processing rate since otherwise the queue length will be unbounded. Thus, the (overall) write throughput is actually determined by the arrival rate.
A simple example will illustrate the important difference between these two models. Suppose that 5 clients are used to generate an intended arrival rate of 1000 writes/s and that the LSM-tree stalls for 1 second. Under the closed system model (Figure 5a), only 5 delayed writes will experience a write latency of 1s since the remaining (intended) 995 writes simply will not occur. However, under the open system model (Figure 5b), all 1000 writes will be queued and their average latency will be at least 0.5s.
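The arithmetic of this example can be checked with a few lines; the uniform arrival pattern and the assumption that post-stall processing is effectively instantaneous are simplifications.

```python
# Sketch of the example above: 1000 writes arrive uniformly over one second
# while the LSM-tree is stalled for that second; processing after the stall
# is assumed to be effectively instantaneous.
n = 1000
arrivals = [i / n for i in range(n)]           # open system: all writes arrive
stall_end = 1.0
latencies = [stall_end - t for t in arrivals]  # each write waits for the stall to end
print(sum(latencies) / n)                      # ~0.5s average latency
# Under the closed model, only the 5 in-flight writes would have been
# submitted, so only those 5 would record a ~1s latency.
```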
To evaluate write latencies in an open system, one must first set the arrival rate properly since the write latency heavily depends on the arrival rate. It is also important to maximize the arrival rate to maximize the system's utilization. For these reasons, we propose a two-phase evaluation approach with a testing phase and a running phase. During the testing phase, we use the closed system model ( Figure 5a) to measure the maximum write throughput of an LSM-tree, which is also its processing rate. When measuring the maximum write throughput, we excluded the initial 20-minute period (out of 2 hours) of the testing phase since the initially loaded LSM-tree has a relatively small number of disk components at first. During the running phase, we use the open system model (Figure 5b) to evaluate the write latencies under a constant arrival rate set at 95% of the measured maximum write throughput. Based on queuing theory [33], the queuing time approaches infinity when the utilization, which is the ratio between the arrival rate and the processing rate, approaches 100%. We thus empirically determine a high utilization load (95%) while leaving some room for the system to absorb variance. If the running phase then reports large write latencies, the maximum write throughput as determined in the testing phase is not sustainable; we must improve the implementation of the LSM-tree or reduce the expected arrival rate to reduce the latencies. In contrast, if the measured write latency is small, then the given LSM-tree can provide a high write throughput with a small performance variance.
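A schematic sketch of the two-phase approach is shown below; the lsm.write interface, the workload generator gen, and the threading details are placeholders, and the actual evaluation was of course carried out inside AsterixDB rather than with this toy harness.

```python
import queue
import threading
import time

def testing_phase(lsm, gen, duration_s, warmup_s):
    """Closed system (Figure 5a): write as fast as possible and report the
    maximum write throughput, excluding the initial warmup period."""
    done, start = 0, time.monotonic()
    while time.monotonic() - start < duration_s:
        lsm.write(next(gen))
        if time.monotonic() - start >= warmup_s:
            done += 1
    return done / (duration_s - warmup_s)

def running_phase(lsm, gen, arrival_rate, duration_s):
    """Open system (Figure 5b): writes arrive at a constant rate and are
    queued; the reported latency covers both queuing and processing time."""
    q, latencies = queue.Queue(), []

    def client():  # submits writes at the constant arrival rate
        start = time.monotonic()
        for i in range(int(arrival_rate * duration_s)):
            time.sleep(max(0.0, start + i / arrival_rate - time.monotonic()))
            q.put((time.monotonic(), next(gen)))
        q.put(None)

    threading.Thread(target=client, daemon=True).start()
    while (item := q.get()) is not None:  # the "server" drains the queue
        arrived, record = item
        lsm.write(record)
        latencies.append(time.monotonic() - arrived)
    return latencies

# The running phase uses 95% of the throughput measured in the testing phase:
# rate = 0.95 * testing_phase(lsm, gen, 2 * 3600, 20 * 60)
```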
LSM MERGE SCHEDULER
Different from a merge policy, which decides which components to merge, a merge scheduler is responsible for executing the merge operations created by the merge policy. In this section, we discuss the design choices for a merge scheduler and evaluate bLSM's spring-and-gear merge scheduler.
Scheduling Choices
The write cost of an LSM-tree, which is the number of I/Os per write, is determined by the LSM-tree design itself and the workload characteristics but not by how merges are executed [26]. Thus, a merge scheduler will have little impact on the overall write throughput of an LSM-tree as long as the allocated I/O bandwidth budget can be fully utilized. However, different scheduling choices can significantly impact the write stalls of an LSM-tree, and merge schedulers must be carefully designed to minimize write stalls. We have identified the following design choices for a merge scheduler.
Component Constraint: A merge scheduler usually specifies an upper-bound constraint on the total number of components allowed to accumulate before incoming writes to the LSM memory components should be stalled. We call this the component constraint. For example, bLSM [51] allows at most two disk components per level, while other systems like HBase [4] or Cassandra [2] specify the total number of disk components across all levels.
Interaction with Writes: There exist different strategies to enforce a given component constraint. One strategy is to simply stop processing writes once the component constraint is violated. Alternatively, the processing of writes can be degraded gracefully based on the merge pressure [51].
Degree of Concurrency: In general, an LSM-tree can often create multiple merge operations in the same time period. A merge scheduler should decide how these merge operations should be scheduled. Allowing concurrent merges will enable merges at multiple levels to proceed concurrently, but they will also compete for CPU and I/O resources, which can negatively impact query performance [13]. As two examples, bLSM [51] allows one merge operation per level, while LevelDB [5] uses just one single background thread to execute all merges one by one.
I/O Bandwidth Allocation: Given multiple concurrent merge operations, the merge scheduler should further decide how to allocate the available I/O bandwidth among these merge operations. A commonly used heuristic is to allocate I/O bandwidth "fairly" (evenly) to all active merge operations. Alternatively, bLSM [51] allocates I/O bandwidth based on the relative progress of the merge operations to ensure that merges at each level all make steady progress.
Evaluation of bLSM
Due to the implementation complexity of bLSM and its dependency on a particular storage system, Stasis [50], we chose to directly evaluate the released version of bLSM [6]. bLSM uses the leveling merge policy with two on-disk levels. We set its memory component size to 1GB and size ratio to 10 so that the experimental dataset with 100 million records can fit into the last level. We used 8 write threads to maximize the write throughput of bLSM.
Testing Phase. During the testing phase, we measured the maximum write throughput of bLSM by writing as much data as possible using both the uniform and Zipf update workloads. The instantaneous write throughput of bLSM under these two workloads is shown in Figure 6a. For readability, the write throughput is averaged over 30-second windows. (Unless otherwise noted, the same aggregation applies to all later experiments as well.)
Even though bLSM's merge scheduler prevents writes from being stalled, the instantaneous write throughput still exhibits a large variance with regular temporary peaks. Recall that bLSM uses the merge progress at each level to control its in-memory write speed. After the component C1 is full and becomes C1', the new C1 will be empty and will have much shorter merge times. This will temporarily increase the in-memory write speed of bLSM, which then quickly drops as C1 grows larger and larger. Moreover, the Zipf update workload increases the write throughput only because updated entries can be reclaimed earlier; the overall performance variance trends are still the same.
Running Phase. Based on the maximum write throughput measured in the testing phase, we then used a constant data arrival process (95% of the maximum) in the running phase to evaluate bLSM's behavior. Figure 6b shows the instantaneous write throughput of bLSM under the uniform and Zipf update workloads. bLSM maintains a sustained write throughput during the initial period of the experiment, but later has to slow down its in-memory write rate periodically due to background merge pressure. Figure 6c further shows the resulting percentile write and processing latencies. The processing latency measures only the time for the LSM-tree to process a write, while the write latency includes both the write's queuing time and processing time. By slowing down the in-memory write rate, bLSM indeed bounds the processing latency. However, the write latency is much larger because writes must be queued when they cannot be processed immediately. This suggests that simply bounding the maximum processing latency is far from sufficient; it is important to minimize the variance in an LSM-tree's processing rate to minimize write latencies.
FULL MERGES
In this section, we explore the scheduling choices of LSMtrees with full merges and then evaluate the impact of merge scheduling on write stalls using our two-phase approach. Finally, we examine other variations of the tiering merge policy that are used in practical systems.
Merge Scheduling for Full Merges
We first introduce some useful notation for use throughout our analysis in Table 1. To simplify the analysis, we will ignore the I/O cost of flushes since merges consume most of the I/O bandwidth.
Component Constraint
To provide acceptable query performance and space utilization, the total number of disk components of an LSM-tree must be bounded. We call this upper bound the component constraint, and it can be enforced either locally or globally. It remains a question how to determine the maximum number of disk components for the component constraint. In general, tolerating more disk components will increase the LSM-tree's ability to reduce write stalls and absorb write bursts, but it will decrease query performance and space utilization. Given the negative impact of stalls on write latencies, one solution is to tolerate a sufficient number of disk components to avoid write stalls while the worst-case query performance and space utilization are still bounded. For example, one conservative constraint would be to tolerate twice the expected number of disk components, e.g., 2 · L components for leveling and 2 · T · L components for tiering.
Interaction with Writes
When the component constraint is violated, the processing of writes by an LSM-tree has to be slowed down or stopped. Existing LSM-tree implementations [5,7,51] prefer to gracefully slow down the in-memory write rate by adding delays to some writes. This approach reduces the maximum processing latency, as large pauses are broken down into many smaller ones, but the overall processing rate of an LSM-tree, which depends on the I/O cost of each write, is not affected. Moreover, this approach will result in an even larger queuing latency. There may be additional considerations for gracefully slowing down writes, but we argue that processing writes as quickly as possible minimizes the overall write latency, as stated by the following theorem. See the Appendix for proofs for all theorems.
Theorem 1. Given any data arrival process and any LSM-tree, processing writes as quickly as possible minimizes the latency of each write.
Proof Sketch. Consider two merge schedulers S and S' which only differ in that S may add arbitrary delays to writes while S' processes writes as quickly as possible. For each write request r, r must be completed by S' no later than by S, because the LSM-tree has the same processing rate under both schedulers but S adds some delays to writes.
It should be noted that Theorem 1 only considers write latencies. By processing writes as quickly as possible, disk components can stack up more quickly (up to the component constraint), which may negatively impact query performance. Thus, a better approach may be to increase the write processing rate, e.g., by changing the structure of the LSM-tree. We leave the exploration of this direction as future work.
Degree of Concurrency
Our two-phase evaluation approach chooses the maximum write throughput of an LSM-tree as the arrival rate µ. For leveling, the maximum write throughput is approximately W_level = 2·B / (T·L), as each entry is merged T/2 times per level. For tiering, the maximum write throughput is approximately W_tier = B / L, as each entry is merged only once per level. A merge operation at Level i processes roughly T^i flushed components' worth of data, so approximately µ·T^i / B newly flushed components will be added while this merge operation is being executed, assuming that flushes can still proceed. By substituting W_level and W_tier for µ, one needs to tolerate at least 2·T^(i−1) / L flushed components for leveling and T^i / L flushed components for tiering to avoid write stalls. Since the term T^i grows exponentially, a large number of flushed components will have to be tolerated when a large disk component is being merged. Consider the leveling merge policy with a size ratio of 10. To merge a disk component at Level 5, approximately 2·10^4 / 5 = 4000 flushed components would need to be tolerated, which is highly unacceptable.
Clearly, concurrent merges must be performed to minimize write stalls. When a large merge is being processed, smaller merges can still be completed to reduce the number of components. By the definition of the tiering and leveling merge policies, there can be at most one active merge operation per level. Thus, given an LSM-tree with L levels, at most L merge operations can be scheduled concurrently.
I/O Bandwidth Allocation
Given multiple active merge operations, the merge scheduler must further decide how to allocate I/O bandwidth to these operations. A heuristic used by existing systems [2,4,7] is to allocate I/O bandwidth fairly (evenly) to all ongoing merges. We call this the fair scheduler. The fair scheduler ensures that all merges at different levels can proceed, thus eliminating potential starvation. Recall that write stalls occur when an LSM-tree has too many disk components, thus violating the component constraint. It is unclear whether or not the fair scheduler can minimize write stalls by minimizing the number of disk components over time.
Recall that both the leveling and tiering merge policies always merge the same number of disk components at once. We propose a novel greedy scheduler that always allocates the full I/O bandwidth to the merge operation with the smallest remaining number of bytes. The greedy scheduler has a useful property that it minimizes the number of disk components over time for a given set of merge operations.
Theorem 2. Given any set of merge operations that process the same number of disk components and any I/O bandwidth budget, the greedy scheduler minimizes the number of disk components at any time instant.
Proof Sketch. Consider an arbitrary scheduler S and the greedy scheduler S'. Given N merge operations, we can show that S' always completes the i-th (1 ≤ i ≤ N) merge operation no later than S. This can be done by noting that S' always processes the smallest merge operation first.
Theorem 2 only considers a set of statically created merge operations. This conclusion may not hold in general because sometimes completing a large merge may enable the merge policy to create smaller merges, which can then reduce the number of disk components more quickly. Because of this, there actually exists no merge scheduler that can always minimize the number of disk components over time, as stated by the following theorem. However, as we will see in our later evaluation, the greedy scheduler is still a very effective heuristic for minimizing write stalls.
Theorem 3. Given any I/O bandwidth budget, no merge scheduler can minimize the number of disk components at any time instant for any data arrival process and any LSM-tree under a deterministic merge policy where all merge operations process the same number of disk components.
Proof Sketch. Consider an LSM-tree that has created a small merge M_S and a large merge M_L. Completing M_L allows the LSM-tree to create a new small merge M′_S that is smaller than M_S. Consider two merge schedulers S1 and S2, where S1 first processes M_S and then M_L, and S2 first processes M_L and then M′_S. It can be shown that S1 has the earliest completion time for the first merge and S2 has the earliest completion time for the second merge, but no merge scheduler can outperform both S1 and S2.

The pseudocode for the greedy scheduling algorithm is shown in Figure 7. It stores the list of scheduled merge operations in mergeOps. At any time, there is at most one merge operation being executed by the merge thread, which is denoted by activeOp. The merge policy calls ScheduleMerge when a new merge operation is scheduled, and the merge thread calls CompleteMerge when a merge operation is completed. In both functions, mergeOps is updated accordingly and the merge scheduler is notified to check whether a new merge operation needs to be executed. It should be noted that in general one cannot exactly know which merge operation requires the least amount of I/O bandwidth until the new component has been fully produced. Thus, line 12 uses the number of remaining input pages as an approximation to determine the smallest merge operation. Finally, if the newly selected merge operation is inactive, i.e., not being executed, the scheduler pauses the previous active merge operation and activates the new one.
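The following Python sketch captures this scheduling logic; it is our illustration of the behavior described above, not the actual AsterixDB implementation, and MergeOp with its pause/resume interface is a hypothetical stand-in.

```python
class MergeOp:
    """Minimal stand-in for a merge operation (illustration only)."""
    def __init__(self, name, remaining_input_pages):
        self.name = name
        self.remaining_input_pages = remaining_input_pages
        self.running = False

    def pause(self):
        self.running = False

    def resume(self):
        self.running = True


class GreedyMergeScheduler:
    """Sketch of the greedy scheduler: at most one merge is active at a time,
    and it is always the scheduled merge with the fewest remaining input pages."""

    def __init__(self):
        self.merge_ops = []      # all scheduled (active or paused) merge operations
        self.active_op = None    # the merge currently receiving the full I/O budget

    def schedule_merge(self, op):
        # Called by the merge policy when it creates a new merge operation.
        self.merge_ops.append(op)
        self._reschedule()

    def complete_merge(self, op):
        # Called by the merge thread when a merge operation finishes.
        self.merge_ops.remove(op)
        if self.active_op is op:
            self.active_op = None
        self._reschedule()

    def _reschedule(self):
        if not self.merge_ops:
            return
        # Remaining input pages approximate the remaining work, since the
        # output size is unknown until the merged component is fully produced.
        smallest = min(self.merge_ops, key=lambda m: m.remaining_input_pages)
        if smallest is not self.active_op:
            if self.active_op is not None:
                self.active_op.pause()   # withdraw I/O from the previously active merge
            self.active_op = smallest
            smallest.resume()            # grant the full I/O bandwidth budget
```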
Putting Everything Together
Under the greedy scheduler, larger merges may be starved at times since they receive lower priority. This has a few implications. First, during normal user workloads, such starvation can only occur if the arrival rate is temporarily faster than the processing rate of an LSM-tree. Given the negative impact of write stalls on write latencies, it can actually be beneficial to temporarily delay large merges so that the system can better absorb write bursts. Second, the greedy scheduler should not be used in the testing phase because it would report a higher but unsustainable write throughput due to such starved large merges.
Finally, our discussions of the greedy scheduler as well as the single-threaded scheduler are based on an important assumption: that a single merge operation is able to fully utilize the allocated I/O bandwidth budget.
Experimental Evaluation
We now experimentally evaluate the write stalls of LSM-trees using our two-phase approach. We discuss the specific experimental setup followed by the detailed evaluation, including the impact of merge schedulers on write stalls, the benefit of enforcing the component constraint globally and of processing writes as quickly as possible, and the impact of merge scheduling on query performance.
Experimental Setup
All experiments in this section were performed using AsterixDB with the general setup described in Section 3. Unless otherwise noted, the size ratio of leveling was set at 10, which is a commonly used configuration in practice [5,7]. For the experimental dataset with 100 million unique records, this results in a three-level LSM-tree, where the last level is nearly full. For tiering, the size ratio was set at 3, which leads to better write performance than leveling without sacrificing too much on query performance. This ratio results in an eight-level LSM-tree.
We evaluated the single-threaded scheduler (Section 5.1.3), the fair scheduler (Section 5.1.3), and the proposed greedy scheduler (Section 5.1.5). The single-threaded scheduler only executes one merge at a time using a single thread. Both the fair and greedy schedulers are concurrent schedulers that execute each merge using a separate thread. The difference is that the fair scheduler allocates the I/O bandwidth to all ongoing merges evenly, while the greedy scheduler always allocates the full I/O bandwidth to the smallest merge. To minimize flush stalls, a flush operation is always executed in a separate thread and receives higher I/O priority. Unless otherwise noted, all three schedulers enforce global component constraints and process writes as quickly as possible.
The maximum number of disk components is set at twice the expected number of disk components for each merge policy. Each experiment was performed under both the uniform and Zipf update workloads. Since the Zipf update workload had little impact on the overall performance trends, except that it led to higher write throughput, its experimental results are omitted here for brevity.
Testing Phase
During the testing phase, we measured the maximum write throughput of an LSM-tree by writing as much data as possible. In general, alternative merge schedulers have little impact on the maximum write throughput since the I/O bandwidth budget is fixed, but their measured write throughput may differ due to the finite experimental period. Figures 8a and 8b show the instantaneous write throughput of LSM-trees using different merge schedulers for tiering and leveling. Under both merge policies, the single-threaded scheduler regularly exhibits long pauses, making its write throughput vary over time. The fair scheduler exhibits a relatively stable write throughput over time since all merge levels can proceed at the same rate. With leveling, its write throughput still varies slightly over time since the component size at each level varies. The greedy scheduler appears to achieve a higher write throughput than the fair scheduler by starving large merges. However, this higher write throughput eventually drops when no small merges can be scheduled. For example, the write throughput with tiering drops slightly at 1100s and 4000s, and there is a long pause from 6000s to 7000s with leveling. This result confirms that the fair scheduler is more suitable for testing the maximum write throughput of an LSM-tree, as merges at all levels can proceed at the same rate. In contrast, the single-threaded scheduler incurs many long pauses, causing a large variance in the measured write throughput. The greedy scheduler provides a higher write throughput by starving large merges, which would be undesirable at runtime.
Running Phase
Turning to the running phase, we used a constant data arrival process, configured based on 95% of the maximum write throughput measured by the fair scheduler, to evaluate the write stalls of LSM-trees.
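For concreteness, the sketch below shows one way such a constant (open-system) arrival process could be generated and its write latencies measured; it is our simplification, not the actual driver used in these experiments, and the ingest callback is a hypothetical stand-in.

```python
import time

def constant_arrival_writer(ingest, max_throughput, utilization=0.95, duration_s=60):
    """Sketch of a running-phase workload driver: issue writes at a constant
    rate of utilization * max_throughput and measure each write's latency from
    its scheduled arrival time, so queuing delays caused by stalls are captured."""
    interval = 1.0 / (utilization * max_throughput)
    latencies = []
    next_arrival = time.time()
    deadline = next_arrival + duration_s
    seq = 0
    while next_arrival < deadline:
        delay = next_arrival - time.time()
        if delay > 0:
            time.sleep(delay)
        ingest(seq)                                    # blocks if the LSM-tree stalls
        latencies.append(time.time() - next_arrival)   # includes any queuing delay
        seq += 1
        next_arrival += interval                       # arrivals are not slowed by stalls
    return latencies

# Example: drive a no-op sink at 95% of a measured 10,000 records/s for one second.
print(len(constant_arrival_writer(lambda i: None, max_throughput=10_000, duration_s=1)))
```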
LSM-trees can provide a stable write throughput. We first evaluated whether LSM-trees with different merge schedulers can support a high write throughput with low write latencies. For each experiment, we measured the instantaneous write throughput and the number of disk components over time as well as percentile write latencies.
The results for tiering are shown in Figure 9. Both the fair and greedy schedulers are able to provide stable write throughputs, and the total number of disk components never reaches the configured threshold. The greedy scheduler also minimizes the number of disk components over time. The single-threaded scheduler, however, causes a large number of write stalls due to the blocking of large merges, which confirms our previous analysis. Because of this, the single-threaded scheduler incurs large percentile write latencies. In contrast, both the fair and greedy schedulers provide small write latencies because of their stable write throughput. Figure 10 shows the corresponding results for leveling. Due to the inherent variance of merge times, the fair scheduler alone cannot provide a stable write throughput; this results in relatively large write latencies. In contrast, the greedy scheduler avoids write stalls by always minimizing the number of components, which results in small write latencies. This experiment confirms that LSM-trees can achieve a stable write throughput with a relatively small performance variance. Moreover, the write stalls of an LSM-tree heavily depend on the design of the merge scheduler.
Impact of Size Ratio. To verify our findings on LSM-trees with different shapes, we further carried out a set of experiments by varying the size ratio from 2 to 10 for both tiering and leveling. For leveling, we applied the dynamic level size optimization [31] so that the largest level remains almost full by slightly modifying the size ratio between Levels 0 and 1. This optimization maximizes space utilization without impacting write or query performance.
During the testing phase, we measured the maximum write throughput for each LSM-tree configuration using the fair scheduler, which is shown in Figure 11a. In general, a larger size ratio increases write throughput for tiering but decreases write throughput for leveling, because it decreases the merge frequency of tiering but increases that of leveling. During the running phase, we evaluated the 99th percentile write latency for each LSM-tree configuration using constant data arrivals, which is shown in Figure 11b. With tiering, both the fair and greedy schedulers are able to provide a stable write throughput with small write latencies. With leveling, the fair scheduler causes large write latencies when the size ratio becomes larger, as we have seen before. In contrast, the greedy scheduler is always able to provide a stable write throughput along with small write latencies. This again confirms that LSM-trees, regardless of their size ratios, can provide a high write throughput with a small variance under an appropriately chosen merge scheduler.
Benefit of Global Component Constraints. We next evaluated the benefit of global component constraints in terms of minimizing write stalls. We additionally included variations of the fair and greedy schedulers that enforce local component constraints, that is, 2 components per level for leveling and 2 · T components per level for tiering. The resulting write latencies are shown in Figure 12. In general, local component constraints have little impact on tiering since its merge time per level is relatively stable. However, the resulting write latencies for leveling become much larger due to the inherent variance of its merge times. Moreover, local component constraints have a larger negative impact on the greedy scheduler. The greedy scheduler prefers small merges, which may not be able to complete due to possible violations of the constraint at the next level. This in turn causes longer stalls and thus larger percentile write latencies. In contrast, global component constraints better absorb these variances, reducing the write latencies.
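A minimal sketch of the difference between the two ways of enforcing the component constraint, assuming the stall decision is made from per-level component counts (function and parameter names are ours):

```python
def writes_must_stall(level_counts, global_limit=None, local_limits=None):
    """Sketch of the two constraint-enforcement choices. level_counts[i] is the
    number of components at level i; pass either a single global_limit or a
    per-level list local_limits."""
    if global_limit is not None:
        # Global constraint: only the total matters, so a slow merge at one
        # level can borrow slack left by fast merges at other levels.
        return sum(level_counts) > global_limit
    # Local constraint: each level is capped individually (e.g., 2 per level
    # for leveling), so variance at a single level can already stall writes.
    return any(c > limit for c, limit in zip(level_counts, local_limits))
```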
Benefits of Processing Writes As Quickly As Possible. We further evaluated the benefit of processing writes as quickly as possible. We used the leveling merge policy with a bursty data arrival process that alternates between a normal arrival rate of 2000 records/s for 25 minutes and a high arrival rate of 8000 records/s for 5 minutes. We evaluated two variations of the greedy scheduler. The first variation processes writes as quickly as possible (denoted as "No Limit"), as we did before. The second variation enforces a maximum in-memory write rate of 4000 records/s (denoted as "Limit") to avoid write stalls.
The instantaneous write throughput and the percentile write latencies of the two variations are shown in Figures 13a and 13b respectively. As Figure 13a shows, delaying writes avoids write stalls and the resulting write throughput is more stable over time. However, this causes larger write latencies (Figure 13b) since delayed writes must be queued. In contrast, writing as quickly as possible causes occasional write stalls but still minimizes the overall write latencies. This confirms our previous analysis that processing writes as quickly as possible minimizes write latencies.
Impact on Query Performance. Finally, since the point of having data is to query it, we evaluated the impact of the fair and greedy schedulers on concurrent query performance. We evaluated three types of queries, namely point lookups, short scans, and long scans. A point lookup accesses 1 record given a primary key. A short scan query accesses 100 records and a long scan query accesses 1 million records. In each experiment, we executed one type of query concurrently with updates arriving at constant rates, as before. To maximize query performance while ensuring that LSM flush and merge operations receive enough I/O bandwidth, we used 8 query threads for point lookups and short scans and 4 query threads for long scans. We also evaluated the impact of forcing SSD writes regularly on query performance. For this purpose, we included variations of the fair and greedy schedulers that only force SSD writes when a merge completes.
The instantaneous query throughput under the tiering and leveling merge policies is depicted in Figure 14 and Figure 16 respectively. The corresponding percentile query latencies are shown in Figure 17 and Figure 15 respectively. For point lookups and short scans, the query throughput is averaged over 30-second windows. For long scans, the query throughput is averaged over 1-minute windows. As the results show, leveling has similar point lookup throughput to tiering because Bloom filters can filter out most unnecessary I/Os, but it has much better range query throughput than tiering. Moreover, the greedy scheduler always improves query performance by minimizing the number of components. Among the three types of queries, point lookups and short scans benefit more from the greedy scheduler since these two types of queries are more sensitive to the number of disk components. In contrast, long scans incur most of their I/O cost at the largest level. Moreover, the tiering merge policy benefits more from the greedy scheduler than the leveling merge policy since the performance difference between the greedy and fair schedulers is larger under the tiering merge policy. This is because the tiering merge policy has more disk components than the leveling merge policy. Note that under the leveling merge policy, there is a drop in query throughput under the fair scheduler at around 5400s, even though there is little difference in the number of disk components between the fair and greedy scheduler. This drop is caused by write stalls during that period, as indicated by the instantaneous write throughput of Figure 10. After the LSM-tree recovers from write stalls, it attempts to write as much data as possible in order to catch up, which results in a lower query throughput.
Forcing SSD writes regularly has a slight negative impact on query throughput, but it significantly reduces the percentile query latencies. The reason is that disk components must be forced to disk at the end of merges. Without forcing SSD writes regularly, these large forced writes significantly impact the query latencies.
Tiering in Practice
Existing LSM-based systems, such as BigTable [23] and HBase [4], use a slight variation of the tiering merge policy discussed in the literature. This variation, often referred to as the size-tiered merge policy, does not organize components into levels explicitly but simply schedules merges based on the sizes of disk components. This policy has three important parameters, namely the size ratio T, the minimum number of components to merge min, and the maximum number of components to merge max. It merges a sequence of components, whose length is at least min, when the total size of the sequence's younger components is T times larger than that of the oldest component in the sequence. It also seeks to merge as many components as possible at once until max is reached. Concurrent merges can also be performed. For example, in HBase [4], each execution of the size-tiered merge policy will always examine the longest prefix of the component sequence in which no component is being merged.
An example of the size-tiered merge policy is shown in Figure 18, where each disk component is labeled with its size. Let the size ratio be 1.2 and the minimum and maximum number of components per merge be 2 and 4 respectively. Suppose initially that no component is being merged. The first execution of the size-tiered merge policy starts from the oldest component, labeled 100GB. However, no merge is scheduled since this component is too large. It then examines the next component, labeled 10GB, and schedules a merge operation for the 4 components labeled from 10GB to 5GB. The next execution of the size-tiered merge policy starts from the component labeled 1GB, and it schedules a merge for the 3 components labeled from 128MB to 64MB.
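The sketch below implements our reading of this selection logic (not HBase's actual code) and reproduces the Figure 18 example; the component sizes that are not labeled in the figure are assumptions on our part.

```python
def pick_size_tiered_merge(sizes, T=1.2, min_merge=2, max_merge=4):
    """Sketch of one execution of the size-tiered policy described above.
    `sizes` lists component sizes from oldest to newest, excluding components
    already being merged. Returns the slice (start, end) to merge, or None."""
    n = len(sizes)
    for start in range(n - min_merge + 1):
        oldest = sizes[start]
        # Merge as many components as possible, up to max_merge, provided the
        # younger components together are at least T times the oldest one.
        for end in range(min(n, start + max_merge), start + min_merge - 1, -1):
            if sum(sizes[start + 1:end]) >= T * oldest:
                return (start, end)
    return None

# Figure 18's example (sizes in GB, oldest first); intermediate sizes are assumed.
sizes = [100, 10, 8, 6, 5, 1, 0.128, 0.100, 0.064]
print(pick_size_tiered_merge(sizes))       # -> (1, 5): the 10GB..5GB components
print(pick_size_tiered_merge(sizes[5:]))   # -> (1, 4): the 128MB..64MB components
```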
To evaluate the write stalls of the size-tiered merge policy, we repeated the experiments using our two-phase approach. In our evaluation, the size ratio was set at 1.2, which is the default value in HBase [4], and the minimum and maximum mergeable components were set at 2 and 10 respectively. The maximum tolerated disk components parameter was set at 50. During the testing phase, the maximum write throughput measured by using the fair scheduler was 17,008 records/s. Then during the running phase, we used a constant data arrival process based on 95% of this maximum throughput to evaluate write stalls. The instantaneous write throughput of the LSM-tree and the number of disk components over time are shown in Figures 19a and 19b respectively. As one can see, write stalls have occurred under the fair scheduler. Moreover, even though the greedy scheduler avoids write stalls, its number of disk components keeps increasing over time. This result indicates that the maximum write throughput measured during the testing phase is not sustainable. This problem is caused by the non-determinism of the size-tiered merge policy, since it tries to merge as many disk components as possible. This behavior impacts the maximum write throughput of the LSM-tree. During the testing phase, when writes are often blocked because of too many disk components, this merge policy tends to merge more disk components at once, which then leads to a higher write throughput. However, during the running phase, when writes arrive steadily, this merge policy tends to schedule smaller merges as flushed components accumulate. For example, during the testing phase of this experiment, 55 long merges that involved 10 components were scheduled, but only 24 long merges were scheduled under the fair scheduler during the running phase. Even worse, 99.76% of the scheduled merges under the greedy scheduler involved no more than 4 components since large merges were starved.
To address this problem and to minimize write stalls, the arrival rate must be reduced. However, finding the maximum "stall-free" arrival rate is non-trivial due to the non-determinism of the size-tiered merge policy. Instead, we propose a simple and conservative solution to avoid write stalls. During the testing phase, we propose to measure the lower-bound write throughput by always merging the minimum number of disk components. This write throughput will serve as a baseline of the arrival rate. During runtime, the size-tiered merge policy can merge more disk components to dynamically increase its write throughput to minimize stalls. We repeated the previous experiments based on this solution. During the testing phase, the merge policy always merged 2 disk components, which resulted in a lower maximum write throughput of 8,863 records/s. We then repeated the running phase based on this throughput. Figures 20a and 20b show the instantaneous write throughput and the number of disk components over time respectively during the running phase. In this case, both schedulers exhibit no write stalls and the number of disk components is more stable over time. Moreover, the greedy scheduler still slightly reduces the number of disk components.
PARTITIONED MERGES
We now examine the write stall behavior of partitioned LSM-trees using our two-phase approach. In a partitioned LSM-tree, a large disk component is range-partitioned into multiple small files and each merge operation only processes a small number of files with overlapping ranges. Since merges always happen immediately once a level is full, a single-threaded scheduler could be sufficient to minimize write stalls. In the remainder of this section, we will evaluate LevelDB's single-threaded scheduler.
LevelDB's Merge Scheduler
LevelDB's merge scheduler is single-threaded. It computes a score for each level and selects the level with the largest score to merge. Specifically, the score for Level 0 is computed as the total number of flushed components divided by the minimum number of flushed components to merge. For a partitioned level (1 and above), its score is defined as the total size of all files at this level divided by the configured maximum size. A merge operation is scheduled if the largest score is at least 1, which means that the selected level is full. If a partitioned level is chosen to merge, LevelDB selects the next file to merge in a round-robin way.
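A minimal sketch of this score-based selection, under our own naming (it is not LevelDB's actual code):

```python
def pick_level_to_merge(level0_count, level_sizes, level_max_sizes, min_level0_merge=4):
    """Sketch of LevelDB's score-based level selection as described above.
    level_sizes[i] and level_max_sizes[i] describe partitioned Level i+1."""
    scores = [level0_count / min_level0_merge]                 # Level 0 score
    scores += [size / cap for size, cap in zip(level_sizes, level_max_sizes)]
    best = max(range(len(scores)), key=lambda lvl: scores[lvl])
    # A merge is scheduled only if the chosen level is full (score >= 1);
    # within a partitioned level, LevelDB then picks the next file round-robin.
    return best if scores[best] >= 1.0 else None

# Example: 5 flushed components and two partitioned levels at 120% and 90% of
# their maximum sizes -> Level 0 has the highest score and is merged first.
print(pick_level_to_merge(5, [12_000, 90_000], [10_000, 100_000]))  # -> 0
```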
LevelDB only restricts the number of flushed components at Level 0. By default, the minimum number of flushed components to merge is 4. The processing of writes will be slowed down or stopped if the number of flushed components reaches 8 or 12, respectively. Since we have already shown in Section 5.1.2 that processing writes as quickly as possible reduces write latencies, we will only use the stop threshold (12) in our evaluation.
Experimental Evaluation. We have implemented LevelDB's partitioned leveling merge policy and its merge scheduler inside AsterixDB for evaluation. Similar to LevelDB, the minimum number of flushed components to merge was set at 4 and the stop threshold was set at 12 components. Unless otherwise noted, the maximum size of each file was fixed across these experiments. To minimize write stalls caused by flushes, we used two memory components and a separate flush thread. We further evaluated the impact of two widely used merge selection strategies on write stalls. The round-robin strategy chooses the next file to merge in a round-robin way. The choose-best strategy [54] chooses the file with the fewest overlapping files at the next level. We used our two-phase approach to evaluate this partitioned LSM-tree design. The instantaneous write throughput during the testing phase is shown in Figure 21a, where the write throughput of both strategies decreases over time due to more frequent stalls. Moreover, under the uniform update workload, the alternative selection strategies have little impact on the overall write throughput, as reported in [39]. During the running phase, we used a constant arrival process to evaluate write stalls. The instantaneous write throughput of both strategies is shown in Figure 21b. As the result shows, in both cases write stalls start to occur after time 6000s. This suggests that the measured write throughput during the testing phase is not sustainable.
Measuring Sustainable Write Throughput
One problem with LevelDB's score-based merge scheduler is that it merges as many components at Level 0 as possible at once. To see this, suppose that the minimum number of mergeable components at Level 0 is T0 and that the maximum number of components at Level 0 is T′0. During the testing phase, where writes pile up as quickly as possible, the merge scheduler tends to merge the maximum possible number of components T′0 instead of just T0 at once. Because of this, the LSM-tree will eventually transition from the expected shape (Figure 22a) to the actual shape (Figure 22b), where T is the size ratio of the partitioned levels. Note that the largest level is not affected since its size is determined by the number of unique entries, which is relatively stable. Even though this elastic design dynamically increases the processing rate as needed, it has the following problems.
Unsustainable Write Throughput. The measured maximum write throughput is based on merging T′0 flushed components at Level 0 at once. However, this is likely to cause write stalls during the running phase since flushes cannot proceed further.
Suboptimal Trade-Offs. The LSM-tree structure in Figure 22b is no longer making optimal performance trade-offs since the size ratios between its adjacent levels are not the same anymore [45]. By adjusting the sizes of intermediate levels so that adjacent levels have the same size ratio, one can improve both write throughput and space utilization without affecting query performance.

Low Space Utilization. One motivation for industrial systems to adopt partitioned LSM-trees is their higher space utilization [31]. However, the LSM-tree depicted in Figure 22b violates this performance guarantee because the ratio of wasted space increases from 1/T to (T′0/T0)·(1/T).
Because of these problems, the measured maximum write throughput cannot be used in the long-term. We propose a simple solution to address these problems. During the testing phase, we always merge exactly T0 components at Level 0. This ensures that merge preferences will be given equally to all levels so that the LSM-tree will stay in the expected shape ( Figure 22a). Then, during the running phase, the LSM-tree can elastically merge more components at Level 0 as needed to absorb write bursts.
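A minimal sketch of the proposed fix, with hypothetical names, is shown below; only the testing-phase behavior changes.

```python
def level0_merge_inputs(flushed_components, T0, phase):
    """Sketch of the proposed fix: during the testing phase, merge exactly T0
    flushed components so the LSM-tree keeps its expected shape; during the
    running phase, merge all accumulated flushed components to elastically
    absorb write bursts."""
    if phase == "testing":
        return flushed_components[:T0]
    return list(flushed_components)
```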
To verify the effectiveness of the proposed solution, we repeated the previous experiments on the partitioned LSM-tree. During the testing phase, the LSM-tree always merged 4 components at Level 0 at once. The measured instantaneous write throughput is shown in Figure 23a, which is 30% lower than that of the previous experiment. During the running phase, we used a constant arrival process based on this lower write throughput. The resulting instantaneous write throughput is shown in Figure 23b, where the LSM-tree successfully maintains a sustainable write throughput without any write stalls, which in turn results in low write latencies (not shown in the figure). This confirms that LevelDB's single-threaded scheduler is sufficient to minimize write stalls, given that a single merge thread can fully utilize the I/O bandwidth budget.
After fixing the unsustainable write throughput problem of LevelDB, we further evaluated the impact of partition size on the write stalls of partitioned LSM-trees. In this experiment, we varied the size of each partitioned file from 8MB to 32GB so that partitioned merges effectively transition into full merges. The maximum write throughput during the running phase and the 99th percentile write latencies are shown in Figures 24a and 24b respectively. Even though the partition size has little impact on the overall write throughput, a large partition size can cause large write latencies, since we have shown in Section 5 that a single-threaded scheduler is insufficient to minimize write stalls for full merges. Most implementations of partitioned LSM-trees today already choose a small partition size to bound the temporary space occupied by merges. We see here that one more reason to do so is to minimize write stalls under a single-threaded scheduler.
EXTENSION: SECONDARY INDEXES
We now extend our two-phase approach to evaluate LSM-based datasets in the presence of secondary indexes. We first discuss two secondary index maintenance strategies used in practical systems, followed by the experimental evaluation and analysis.
Secondary Index Maintenance
An LSM-based storage system often contains a primary index plus multiple secondary indexes for a given dataset [41]. The primary index stores the records indexed by their primary keys, while each secondary index stores the mapping from secondary keys to primary keys. During data ingestion, secondary indexes must be properly maintained to ensure correctness. In the primary LSM-tree, writes (inserts, deletes, and updates) can be added blindly to memory since entries with identical keys will be reconciled by queries automatically. However, this mechanism does not work for secondary indexes since the value of a secondary index key might change. Thus, in addition to adding the new entry to the secondary index, the old entry (if any) must be cleaned up as well. We now discuss two secondary index maintenance strategies used in practice [41].
The eager index maintenance strategy performs a point lookup to fetch the old record at ingestion time. If the old record exists, anti-matter entries are produced to clean up its secondary indexes. The new record is then added to the primary index and all secondary indexes. In an update-heavy workload, these point lookups can become the ingestion bottleneck instead of the LSM-tree write operations.
The lazy index maintenance strategy does not clean up secondary indexes at ingestion time. Instead, it only adds the new entry into secondary indexes without any point lookups. Secondary indexes are then cleaned up in the background, either when merging the primary index components [52] or when merging the secondary index components [41]. Evaluating different secondary index cleanup methods is beyond the scope of this work. Instead, we choose to evaluate the lazy strategy without cleaning up secondary indexes.
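The following sketch contrasts the two strategies using tiny in-memory stand-ins for the indexes; the classes and method names are ours, not AsterixDB's API.

```python
class MemIndex:
    """Tiny in-memory stand-in for an LSM index (illustration only); an
    anti-matter entry overwrites the key with a deletion marker."""
    def __init__(self):
        self.entries = {}
    def insert(self, key, value=None, anti_matter=False):
        self.entries[key] = "ANTI" if anti_matter else value
    def lookup(self, key):
        v = self.entries.get(key)
        return None if v == "ANTI" else v

def ingest(record, primary, secondaries, strategy="eager"):
    """Sketch of eager vs. lazy secondary index maintenance (our notation)."""
    pk = record["id"]
    if strategy == "eager":
        old = primary.lookup(pk)                      # point lookup of the old record
        if old is not None:
            for field, sidx in secondaries.items():   # cleanup via anti-matter entries
                sidx.insert((old[field], pk), anti_matter=True)
    primary.insert(pk, record)                        # both strategies add new entries
    for field, sidx in secondaries.items():
        sidx.insert((record[field], pk), value=pk)

# Updating the same key twice: the eager path first adds an anti-matter entry
# for ("Irvine", 1) before inserting ("Tustin", 1) into the secondary index.
primary, secondaries = MemIndex(), {"city": MemIndex()}
ingest({"id": 1, "city": "Irvine"}, primary, secondaries, strategy="eager")
ingest({"id": 1, "city": "Tustin"}, primary, secondaries, strategy="eager")
```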
Experimental Evaluation
Experiment Setup. In this set of experiments, we modified the YCSB benchmark to allow us to incorporate secondary indexes and formulate secondary index queries. Specifically, we generated records with multiple fields, with each secondary field value randomly following a uniform distribution based on the total number of base records. We built two secondary indexes in our experiment. The primary index and the two secondary indexes all used the tiering merge policy with size ratio 3.
In this set of experiments, we evaluated two merge schedulers, namely fair and greedy. Each LSM-tree is merged independently with a separate merge scheduler instance. However, these LSM-trees shared the same memory budget of 128MB for each memory component and the same I/O bandwidth budget of 100MB/s. We also evaluated two index maintenance strategies, namely eager and lazy. For the eager strategy, we used 8 writer threads to maximize the point lookup throughput. For the lazy strategy, 1 writer thread was sufficient to reach the maximum write throughput since there were no point lookups during data ingestion.

Testing Phase. We first measured the maximum write throughput of the lazy and eager strategies using the fair scheduler during the testing phase. The maximum write throughput was 9,731 records/s for the lazy strategy and 7,601 records/s for the eager strategy. (The eager strategy results in a slightly lower write throughput because it has to clean up secondary indexes using point lookups.)
Running Phase. During the running phase, we used constant data arrivals to evaluate write stalls. The instantaneous write throughput and percentile write latencies for the lazy and eager strategies are shown in Figures 25 and 26 respectively. The lazy strategy exhibits a relatively stable write throughput (Figure 25a) and lower write latencies (Figure 25b), which is similar to the single LSM-tree case. However, under the eager strategy, there are regular fluctuations in the write throughput (Figure 26a), resulting in larger write latencies (Figure 26b). This is because the write throughput of the eager strategy is bounded by point lookups in this experiment, and the point lookup throughput inherently varies due to ongoing disk activities and the number of disk components. Based on queuing theory [33], the system utilization must be reduced to minimize the write latency. Moreover, the greedy scheduler still achieves lower write latencies because it minimizes the number of disk components, which improves point lookup performance.
Since the eager strategy results in large percentile write latencies under a high data arrival rate, we further carried out another experiment to evaluate the percentile write latencies under different system utilizations, that is, different data arrival rates. The resulting 99th percentile write latencies under various utilizations are shown in Figure 27. As the result shows, the write latency becomes relatively small once the utilization is below 80%. This is much lower than the utilization used in our previous experiments, which was 95%. This result also confirms that because of the inherent variance of the point lookup throughput, one must reduce the data arrival rate, that is, the system utilization, to achieve smaller write latencies.
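As a rough illustration of this utilization/latency trade-off, the sketch below uses the textbook M/M/1 queue formulas; the real system is of course not M/M/1, and the printed numbers are illustrative rather than the measured latencies of Figure 27. The 7,601 records/s service rate is the eager strategy's measured maximum from the testing phase.

```python
import math

def mm1_latency(utilization, service_rate):
    """Mean and 99th percentile response time of an M/M/1 queue; a rough
    analogy only, used to illustrate why lowering utilization shrinks
    percentile latencies."""
    slack = service_rate * (1.0 - utilization)   # mu - lambda
    mean = 1.0 / slack
    p99 = mean * math.log(100)                   # P(W > t) = exp(-t/mean)
    return mean, p99

for u in (0.80, 0.95):
    mean, p99 = mm1_latency(u, service_rate=7601)
    print(f"utilization {u:.0%}: mean {mean*1e3:.1f} ms, p99 {p99*1e3:.1f} ms")
```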
Secondary Index Queries. Finally, we evaluated the impact of different merge schedulers and maintenance strategies on the performance of secondary index queries. We used 8 query threads to maximize query throughput. Each secondary index query first scans the secondary index to fetch primary keys, which are then sorted and used to fetch records from the primary index. We varied the query selectivity from 1 record to 1000 records so that the performance bottleneck eventually shifts from secondary index scans to primary index lookups.
The instantaneous query throughput for various query selectivities under the lazy and eager strategies is shown in Figures 28 and 29 respectively. The query throughput is averaged over 30-second windows. In general, the greedy scheduler improves secondary index query performance under all query selectivities since it reduces the number of disk components for both the primary index and the secondary indexes. The improvement is less significant under the eager strategy since the arrival rate is lower.
To summarize, under the lazy strategy, an LSM-based dataset with multiple secondary indexes has similar performance characteristics to the single LSM-tree case, because this setting can be viewed as a simple extension to multiple LSM-trees. The greedy scheduler also improves query performance by minimizing the number of disk components as before. However, under the eager strategy, the point lookups actually become the ingestion bottleneck instead of the LSM-tree write operations. This not only reduces the overall write throughput, but also causes larger write latencies due to the inherent variance of the point lookup throughput.
LESSONS AND INSIGHTS
Having studied and evaluated the write stall problem for various LSM-tree designs, here we summarize the lessons and insights observed from our evaluation.
The LSM-tree's write latency must be measured properly. The out-of-place update nature of LSM-trees has introduced the write stall problem. Throughout our evaluation, we have seen cases where one can obtain a higher but unsustainable write throughput. For example, the greedy scheduler would report a higher write throughput by starving large merges, and LevelDB's merge scheduler would report a higher but unsustainable write throughput by dynamically adjusting the shape of the LSM-tree. Based on our findings, we argue that in addition to the testing phase, used by existing LSM research, an extra running phase must be performed to evaluate the usability of the measured maximum write throughput. Moreover, the write latency must be measured properly due to queuing. One solution is to use the proposed two-phase evaluation approach to evaluate the resulting write latencies under high utilization, where the arrival rate is close to the processing rate.
Merge scheduling is critical to minimizing write stalls. Throughout our evaluation of various LSM-tree designs, including bLSM [51], full merges, and partitioned merges, we have seen that merge scheduling has a critical impact on write stalls. Comparing these LSM-tree designs in general depends on many factors and is beyond the scope of this paper; here we have focused on how to minimize write stalls for each LSM-tree design.
bLSM [51], an instance of full merges, introduces a sophisticated spring-and-gear merge scheduler to bound the processing latency of LSM-trees. However, we found that bLSM still has large variances in its processing rate, leading to large write latencies under high arrival rates. Among the three evaluated schedulers, namely single-threaded, fair, and greedy, the single-threaded scheduler should not be used in practical systems due to the long stalls caused by large merges. The fair scheduler should be used when measuring the maximum throughput because it provides fairness to all merges. The greedy scheduler should be used at runtime since it better minimizes the number of disk components, both reducing write stalls and improving query performance. Moreover, as an important design choice, global component constraints better minimize write stalls.
Partitioned merges simplify merge scheduling by breaking large merges into many smaller ones. However, we found a new problem: the measured maximum write throughput of LevelDB is unsustainable because it dynamically adjusts the size ratios under write-intensive workloads. After fixing this problem, a single-threaded scheduler with a small partition size, as used by LevelDB, is sufficient for delivering low write latencies under high utilization. However, fixing this problem reduced the maximum write throughput of LevelDB by roughly one-third in our evaluation.
For both full and partitioned merges, processing writes as quickly as possible better minimizes write latencies. Finally, with proper merge scheduling, all LSM-tree designs can indeed minimize write stalls by delivering low write latencies under high utilizations.
CONCLUSION
In this paper, we have studied and evaluated the write stall problem for various LSM-tree designs. We first proposed a two-phase approach to use in evaluating the impact of write stalls on percentile write latencies using a combination of closed and open system testing models. We then identified and explored the design choices for LSM merge schedulers. For full merges, we proposed a greedy scheduler that minimizes write stalls. For partitioned merges, we found that a single-threaded scheduler is sufficient to provide a stable write throughput but that the maximum write throughput must be measured properly. Based on these findings, we have shown that performance variance must be considered together with write throughput to ensure the actual usability of the measured throughput.
Proof. Given an LSM-tree, consider two merge schedulers S and S′ which differ only in that S may add arbitrary delays to writes to avoid write stalls while S′ processes writes as quickly as possible. Denote the total number of writes processed by S and S′ at time instant T as W_T and W′_T respectively. Since S′ processes writes as quickly as possible, we have W_T ≤ W′_T. In other words, given the same number of writes, S′ processes these writes no later than S.

Consider the i-th write request that arrives at time instant Ta_i. Suppose this write is processed by S and S′ at time instants Tp_i and T′p_i respectively. Based on the analysis above, it is straightforward that Tp_i ≥ T′p_i. Thus, we have Tp_i − Ta_i ≥ T′p_i − Ta_i, which implies that S′ minimizes the latency of each write.
Theorem 2. Given any set of merge operations that process the same number of disk components and any I/O bandwidth budget, the greedy scheduler minimizes the number of disk components at any time instant.
Proof. Let S be an arbitrary merge scheduler and S′ be the greedy scheduler. Suppose there are N merge operations in total and the initial time instant is t_0. Denote by t_i and t′_i the time instants when S and S′ complete their i-th merge operation, respectively. Since all merge operations always process the same number of disk components, we only need to show that for any i ∈ [1, N], t_i ≥ t′_i always holds. In other words, S′ completes each merge operation no later than S.

Suppose there exists i ∈ [1, N] s.t. t_i < t′_i. Denote by |S_≤i| and |S′_≤i| the total number of bytes read and written by S and S′ up to the completion of their i-th merge operations. By the definition of the greedy scheduler S′, which always processes the smallest remaining merge first, we have |S_≤i| ≥ |S′_≤i|. Since t_i < t′_i, we further have

|S_≤i| / (t_i − t_0) > |S′_≤i| / (t′_i − t_0).

This implies that the merge scheduler S requires a larger I/O bandwidth budget than S′, which leads to a contradiction. Thus, for any i ∈ [1, N], t_i ≥ t′_i always holds, which proves that S′ minimizes the number of disk components over time.
Theorem 3. Given any I/O bandwidth budget, no merge scheduler can minimize the number of disk components at any time instant for any data arrival process and any LSM-tree for a deterministic merge policy where all merge operations process the same number of disk components.
Proof. In this proof, we will construct an example showing that no such merge scheduler can be designed. Consider a two-level LSM-tree with a tiering merge policy. The size ratio of this merge policy is set at 2. Suppose Level 1, which is the last level, contains three disk components D1, D2, D3 and Level 0 contains two disk components, D4 and D5. For simplicity, assume that no more writes will arrive. Initially, the merge policy creates two merge operations, namely the merge operation M1−2 that processes D1 and D2 and the merge operation M4−5 that processes D4 and D5. Upon the completion of M1−2, which produces a new disk component D1−2, the merge policy will create a new merge operation M1−3 that processes D1−2 and D3. We further denote the amount of I/O bandwidth required by each merge operation M1−2, M4−5, and M1−3 as |M1−2|, |M4−5|, and |M1−3|. Finally, we assume that |M1−3| < |M4−5| < |M1−2|. This can happen if D2 contains a large number of deleted keys against D1 so that the merged disk component D1−2 is very small.
Suppose that the initial time instant is t_0 and let the given I/O bandwidth budget be B. Consider a merge scheduler S that first executes M4−5 and then M1−2, and another merge scheduler S′ that first executes M1−2 and then the newly created M1−3. S completes its two merges at time instants t_1 = t_0 + |M4−5|/B and t_2 = t_0 + (|M4−5| + |M1−2|)/B, while S′ completes its two merges at t′_1 = t_0 + |M1−2|/B and t′_2 = t_0 + (|M1−2| + |M1−3|)/B. Based on the assumption |M1−3| < |M4−5| < |M1−2|, it follows that t_1 < t′_1 and t′_2 < t_2. Suppose there exists a merge scheduler S* that minimizes the number of disk components over time. Then, S* must satisfy the following two constraints: (1) complete one merge operation no later than t_1; (2) complete two merge operations no later than t′_2.

To satisfy constraint (1), S* must execute M4−5 first. Then, S* must complete its second merge operation within the time interval t′_2 − t_1 = (|M1−2| + |M1−3| − |M4−5|)/B. Thus, S* cannot satisfy constraint (2) by completing a second merge operation no later than t′_2, because the only remaining merge operation M1−2 takes time |M1−2|/B to finish, which is longer than t′_2 − t_1 since |M1−3| < |M4−5|. This contradicts the assumption that S* minimizes the number of disk components over time. Thus, we have constructed an example for which no such merge scheduler can be designed, which proves the theorem.
| 12,298 |
1906.09667
|
2949777943
|
The Log-Structured Merge-Tree (LSM-tree) has been widely adopted for use in modern NoSQL systems for its superior write performance. Despite the popularity of LSM-trees, they have been criticized for suffering from write stalls and large performance variances due to the inherent mismatch between their fast in-memory writes and slow background I/O operations. In this paper, we use a simple yet effective two-phase experimental approach to evaluate write stalls for various LSM-tree designs. We further explore the design choices of LSM merge schedulers to minimize write stalls given a disk bandwidth budget. We have conducted extensive experiments in the context of the Apache AsterixDB system and we present the results here.
|
Performance stability has long been recognized as a critical performance metric. The TPC-C benchmark @cite_9 measures not only absolute throughput, but also specifies the acceptable upper bounds for the percentile latencies of the transactions. @cite_40 applied VProfiler @cite_0 to identify major sources of variance in database transactions and proposed a variance-aware transaction scheduling algorithm. @cite_10 proposed techniques to optimize parameterized queries while balancing the average and variance of query cost. To reduce the variance of query processing, most existing proposals have either emphasized the use of table scans @cite_14 @cite_31 @cite_42 or stuck to worst-case query plans @cite_2 @cite_44 . Cao @cite_8 conducted an experimental study of the performance variance of modern storage stacks; they found that variance is common in storage stacks and heavily depends on configurations and workloads. Dean and Barroso @cite_30 discussed several engineering techniques to reduce performance variance in large-scale distributed systems at Google. Different from these efforts, in this work we focus on evaluating and minimizing the performance variances of LSM-trees due to their inherent out-of-place update design.
|
{
"abstract": [
"Software techniques that tolerate latency variability are vital to building responsive large-scale Web services.",
"",
"Ensuring stable performance for storage stacks is important, especially with the growth in popularity of hosted services where customers expect QoS guarantees. The same requirement arises from benchmarking settings as well. One would expect that repeated, carefully controlled experiments might yield nearly identical performance results--but we found otherwise. We therefore undertook a study to characterize the amount of variability in benchmarking modern storage stacks. In this paper we report on the techniques used and the results of this study. We conducted many experiments using several popular workloads, file systems, and storage devices--and varied many parameters across the entire storage stack. In over 25 of the sampled configurations, we uncovered variations higher than 10 in storage performance between runs. We analyzed these variations and found that there was no single root cause: it often changed with the workload, hardware, or software configuration in the storage stack. In several of those cases we were able to fix the cause of variation and reduce it to acceptable levels. We believe our observations in benchmarking will also shed some light on addressing stability issues in production systems.",
"",
"",
"Most software profiling tools quantify average performance and rely on a program's control flow graph to organize and report results. However, in interactive server applications, performance predictability is often an equally important measure. Moreover, the end user is often concerned with the performance of a semantically defined interval of execution, such as a request or transaction, which may not directly map to any single function in the call graph, especially in high-performance applications that use asynchrony or event-based programming. It is difficult to distinguish functionality that lies on the critical path of a semantic interval from other activity (e.g., periodic logging or side operations) that may nevertheless appear prominent in a conventional profile. Existing profilers lack the ability to (i) aggregate results for a semantic interval and (ii) attribute its performance variance to individual functions. We propose a profiler called VProfiler that, given the source code of a software system and programmer annotations indicating the start and end of semantic intervals of interest, is able to identify the dominant sources of latency variance in a semantic context. Using a novel abstraction, called a variance tree, VProfiler analyzes the thread interleaving and deconstructs overall latency variance into variances and covariances of the execution time of individual functions. It then aggregates latency variance along a backwards path of dependence relationships among threads from the end of an interval to its start. We evaluate VProfiler's effectiveness on three popular open-source projects (MySQL, Postgres, and Apache Web Server). By identifying a few culprit functions in these complex code bases, VProfiler allows us to eliminate 27 --82 of the overall latency variance of these systems with a modest programming effort.",
"Developers of rapidly growing applications must be able to anticipate potential scalability problems before they cause performance issues in production environments. A new type of data independence, called scale independence, seeks to address this challenge by guaranteeing a bounded amount of work is required to execute all queries in an application, independent of the size of the underlying data. While optimization strategies have been developed to provide these guarantees for the class of queries that are scale-independent when executed using simple indexes, there are important queries for which such techniques are insufficient. Executing these more complex queries scale-independently requires precomputation using incrementally-maintained materialized views. However, since this precomputation effectively shifts some of the query processing burden from execution time to insertion time, a scale-independent system must be careful to ensure that storage and maintenance costs do not threaten scalability. In this paper, we describe a scale-independent view selection and maintenance system, which uses novel static analysis techniques that ensure that created views do not themselves become scaling bottlenecks. Finally, we present an empirical analysis that includes all the queries from the TPC-W benchmark and validates our implementation's ability to maintain nearly constant high-quantile query and update latency even as an application scales to hundreds of machines.",
"While much of the research on transaction processing has focused on improving overall performance in terms of throughput and mean latency, surprisingly less attention has been given to performance predictability: how often individual transactions exhibit execution latency far from the mean. Performance predictability is increasingly important when transactions lie on the critical path of latency-sensitive applications, enterprise software, or interactive web services. In this paper, we focus on understanding and mitigating the sources of performance unpredictability in today's transactional databases. We conduct the first quantitative study of major sources of variance in MySQL, Postgres (two of the largest and most popular open-source products on the market), and VoltDB (a non-conventional database). We carry out our study with a tool called TProfiler that, given the source code of a database system and programmer annotations indicating the start and end of a transaction, is able to identify the dominant sources of variance in transaction latency. Based on our findings, we investigate alternative algorithms, implementations, and tuning strategies to reduce latency variance without compromising mean latency or throughput. Most notably, we propose a new lock scheduling algorithm, called Variance-Aware Transaction Scheduling (VATS), and a lazy buffer pool replacement policy. In particular, our modified MySQL exhibits significantly lower variance and 99th percentile latencies by up to 5.6× and 6.3×, respectively. Our proposal has been welcomed by the open-source community, and our VATS algorithm has already been adopted as of MySQL's 5.7.17 release (and been made the default scheduling policy in MariaDB).",
"",
"",
"Parameterized queries are commonly used in database applications. In a parameterized query, the same SQL statement is potentially executed multiple times with different parameter values. In today's DBMSs the query optimizer typically chooses a single execution plan that is reused for multiple instances of the same query. A key problem is that even if a plan with low average cost across instances is chosen, its variance can be high, which is undesirable in many production settings. In this paper, we describe techniques for selecting a plan that can better address the trade-off between the average and variance of cost across instances of a parameterized query. We show how to efficiently compute the skyline in the average-variance cost space. We have implemented our techniques on top of a commercial DBMS. We present experimental results on benchmark and real-world decision support queries."
],
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_8",
"@cite_9",
"@cite_42",
"@cite_0",
"@cite_44",
"@cite_40",
"@cite_2",
"@cite_31",
"@cite_10"
],
"mid": [
"1982063824",
"",
"2604571800",
"",
"",
"2606834595",
"2057546223",
"2612779005",
"",
"",
"1994341124"
]
}
|
On Performance Stability in LSM-based Storage Systems (Extended Version)
|
In recent years, the Log-Structured Merge-Tree (LSM-tree) [45,42] has been widely used in modern key-value stores and NoSQL systems [2,4,5,7,14]. Different from traditional index structures, such as B+-trees, which apply updates in-place, an LSM-tree always buffers writes into memory. When memory is full, writes are flushed to disk and subsequently merged using sequential I/Os. To improve efficiency and minimize blocking, flushes and merges are often performed asynchronously in the background.
Despite their popularity, LSM-trees have been criticized for suffering from write stalls and large performance variances [3,51,58]. To illustrate this problem, we conducted a micro-experiment on RocksDB [7], a state-of-the-art LSM-based key-value store, to evaluate its write throughput on SSDs using the YCSB benchmark [25]. The instantaneous write throughput over time is depicted in Figure 1 (instantaneous write throughput of RocksDB: writes are periodically stalled to wait for lagging merges). As one can see, the write throughput of RocksDB periodically slows down after the first 300 seconds, which is when the system has to wait for background merges to catch up. Write stalls can significantly impact percentile write latencies and must be minimized to improve the end-user experience or to meet strict service-level agreements [36].
In this paper, we study the impact of write stalls and how to minimize write stalls for various LSM-tree designs. It should first be noted that some write stalls are inevitable. Due to the inherent mismatch between fast in-memory writes and slower background I/O operations, in-memory writes must be slowed down or stopped if background flushes or merges cannot catch up. Without such a flow control mechanism, the system will eventually run out of memory (due to slow flushes) or disk space (due to slow merges). Thus, it is not a surprise that an LSM-tree can exhibit large write stalls if one measures its maximum write throughput by writing data as quickly as possible, as we did in Figure 1.
This inevitability of write stalls does not necessarily limit the applicability of LSM-trees since in practice writes do not arrive as quickly as possible, but rather are controlled by the expected data arrival rate. The data arrival rate directly impacts the write stall behavior and resulting write latencies of an LSM-tree. If the data arrival rate is relatively low, then write stalls are unlikely to happen. However, it is also desirable to maximize the supported data arrival rate so that the system's resources can be fully utilized. Moreover, the expected data arrival rate is subject to an important constraint -it must be smaller than the processing capacity of the target LSM-tree. Otherwise, the LSM-tree will never be able to process writes as they arrive, causing infinite write latencies. Thus, to evaluate the write stalls of an LSM-tree, the first step is to choose a proper data arrival rate.
As the first contribution, we propose a simple yet effective approach to evaluate the write stalls of various LSM-tree designs by answering the following question: If we set the data arrival rate close to (e.g., 95% of) the maximum write throughput of an LSM-tree, will that cause write stalls? In other words, can a given LSM-tree design provide both a high write throughput and a low write latency? Briefly, the proposed approach consists of two phases: a testing phase and a running phase. During the testing phase, we experimentally measure the maximum write throughput of an LSM-tree by simply writing as much data as possible. During the running phase, we then set the data arrival rate close to the measured maximum write throughput as the limiting data arrival rate to evaluate its write stall behavior based on write latencies. If write stalls happen, the measured write throughput is not sustainable since it cannot be used in the long-term due to the large latencies. However, if write stalls do not happen, then write stalls are no longer a problem since the given LSM-tree can provide a high write throughput with small performance variance.
Although this approach seems to be straightforward at first glance, there exist two challenges that must be addressed. First, how can we accurately measure the maximum sustainable write rate of an LSM-tree experimentally? Second, how can we best schedule LSM I/O operations so as to minimize write stalls at runtime? In the remainder of this paper, we will see that the merge scheduler of an LSM-tree can have a large impact on write stalls. As the second contribution, we identify and explore the design choices for LSM merge schedulers and present a new merge scheduler to address these two challenges.
As the paper's final contribution, we have implemented the proposed techniques and various LSM-tree designs inside Apache AsterixDB [14]. This enabled us to carry out extensive experiments to evaluate the write stalls of LSM-trees and the effectiveness of the proposed techniques using our two-phase evaluation approach. We argue that with proper tuning and configuration, LSM-trees can achieve both a high write throughput and small performance variance.
The remainder of this paper is organized as follows: Section 2 provides background information on LSM-trees and briefly surveys related work. Section 3 describes the general experimental setup used throughout this paper. Section 4 identifies the design choices for LSM merge schedulers and evaluates bLSM's spring-and-gear scheduler [51]. Sections 5 and 6 present our techniques for minimizing write stalls for full merges and partitioned merges respectively. Section 7 extends the evaluation to LSM-based datasets with secondary indexes. Section 8 summarizes the lessons and insights from our evaluation. Finally, Section 9 concludes the paper.
Log-Structured Merge Trees
The LSM-tree [45] is a persistent index structure optimized for write-intensive workloads. In an LSM-tree, writes are first buffered into a memory component. An insert or update simply adds a new entry with the same key, while a delete adds an anti-matter entry indicating that a key has been deleted. When the memory component is full, it is flushed to disk to form a new disk component, within which entries are ordered by keys. Once flushed, LSM disk components are immutable.
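To make this write path concrete, here is a minimal sketch (in Python, with hypothetical names; not AsterixDB's actual implementation) of buffering writes in a memory component and flushing it into an immutable, key-ordered disk component; a delete is recorded as an anti-matter entry rather than removing data in place.

```python
ANTIMATTER = object()   # marker entry indicating that a key has been deleted

class MemoryComponent:
    def __init__(self, capacity):
        self.entries = {}            # the newest entry per key wins
        self.capacity = capacity

    def is_full(self):
        return len(self.entries) >= self.capacity

    def upsert(self, key, value):
        self.entries[key] = value    # inserts and updates simply add the newest entry

    def delete(self, key):
        self.entries[key] = ANTIMATTER   # deletes add an anti-matter entry

def flush(memory_component):
    """Turn a full memory component into an immutable, key-ordered disk component."""
    return sorted(memory_component.entries.items())
```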
A query over an LSM-tree has to reconcile the entries with identical keys from multiple components, as entries from newer components override those from older components. A point lookup query simply searches all components from newest to oldest until the first match is found.
A range query searches all components simultaneously using a priority queue to perform reconciliation. To speed up point lookups, a common optimization is to build Bloom filters [19] over the sets of keys stored in disk components. If a Bloom filter reports that a key does not exist, then that disk component can be excluded from searching. As disk components accumulate, query performance tends to degrade since more components must be examined. To counter this, smaller disk components are gradually merged into larger ones. This is implemented by scanning old disk components to create a new disk component with unique entries. The decision of what disk components to merge is made by a pre-defined merge policy, which is discussed below.

Merge Policy. Two types of LSM merge policies are commonly used in practice [42], both of which organize components into "levels". The leveling merge policy (Figure 2a) maintains one component per level, and a component at Level i + 1 will be T times larger than that of Level i. As a result, the component at Level i will be merged multiple times with the component from Level i − 1 until it fills up and is then merged into Level i + 1. In contrast, the tiering merge policy (Figure 2b) maintains multiple components per level. When a Level i becomes full with T components, these T components are merged together into a new component at Level i + 1. In both merge policies, T is called the size ratio, as it controls the maximum capacity of each level. We will refer to both of these merge policies as full merges since components are merged entirely.
In general, the leveling merge policy optimizes for query performance by minimizing the number of components but at the cost of write performance. This design also maximizes space efficiency, which measures the amount of space used for storing obsolete entries, by having most of the entries at the largest level. In contrast, the tiering merge policy is more write-optimized by reducing the merge frequency, but this leads to lower query performance and space utilization.
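As a rough back-of-the-envelope illustration of this trade-off (a sketch, not the paper's exact cost model from Table 1), the per-entry merge cost grows with roughly T/2 merges per level under leveling but stays at about one merge per level under tiering:

```python
def merge_cost_per_entry(policy, size_ratio, num_levels):
    """Approximate number of times each entry is rewritten by merges."""
    if policy == "leveling":
        return size_ratio / 2 * num_levels   # ~T/2 merges per level
    if policy == "tiering":
        return num_levels                    # ~1 merge per level
    raise ValueError(policy)

# e.g., T = 10, L = 3: leveling rewrites each entry ~15 times, tiering only ~3 times
print(merge_cost_per_entry("leveling", 10, 3), merge_cost_per_entry("tiering", 10, 3))
```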
Partitioning. Partitioning is a commonly used optimization in modern LSM-based key-value stores that is often implemented together with the leveling merge policy, as pioneered by LevelDB [5]. In this optimization, a large LSM disk component is range-partitioned into multiple (often fixed-size) files. This bounds the processing time and the temporary space of each merge. An example of a partitioned LSM-tree with the leveling merge policy is shown in Figure 3, where each file is labeled with its key range. Note that partitioning starts from Level 1, as components in Level 0 are directly flushed from memory. To merge a file from Level i to Level i + 1, all of its overlapping files at Level i + 1 are selected and these files are merged to form new files at Level i + 1. For example in Figure 3, the file labeled 0-50 at Level 1 will be merged with the files labeled 0-20 and 22-52 at Level 2, which produce new files labeled 0-15, 17-30, and 32-52 at Level 2. To select which file to merge next, LevelDB uses a round-robin policy. Both full merges and partitioned merges are widely used in existing systems. Full merges are used in AsterixDB [1], Cassandra [2], HBase [4], ScyllaDB [8], Tarantool [9], and WiredTiger (MongoDB) [11]. Partitioned merges are used in LevelDB [5], RocksDB [7], and X-Engine [34].
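The selection of overlapping files for a partitioned merge can be sketched as follows (hypothetical helper; LevelDB's actual implementation differs in detail), reproducing the 0-50 example above:

```python
def overlapping_files(merge_range, next_level_files):
    """Select the Level i+1 files whose key ranges overlap the file being merged.

    merge_range and each element of next_level_files are (min_key, max_key) pairs."""
    lo, hi = merge_range
    return [(a, b) for (a, b) in next_level_files if not (b < lo or a > hi)]

# The Figure 3 example: merging file 0-50 at Level 1 into Level 2
print(overlapping_files((0, 50), [(0, 20), (22, 52), (54, 70)]))  # -> [(0, 20), (22, 52)]
```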
Write Stalls in LSM-trees. Since in-memory writes are inherently faster than background I/O operations, writing to memory components sometimes must be stalled to ensure the stability of an LSM-tree, which, however, will negatively impact write latencies. This is often referred to as the write stall problem. If the incoming write speed is faster than the flush speed, writes will be stalled when all memory components are full. Similarly, if there are too many disk components, new writes should be stalled as well. In general, merges are the major source of write stalls since writes are flushed once but merged multiple times. Moreover, flush stalls can be avoided by giving higher I/O priority to flushes. In this paper, we thus focus on write stalls caused by merges.
Apache AsterixDB
Apache AsterixDB [1,14,22] is a parallel, semi-structured Big Data Management System (BDMS) that aims to manage large amounts of data efficiently. It supports a feed-based framework for efficient data ingestion [32,56]. The records of a dataset in AsterixDB are hash-partitioned based on their primary keys across multiple nodes of a shared-nothing cluster; thus, a range query has to search all nodes. Each partition of a dataset uses a primary LSM-based B+-tree index to store the data records, while local secondary indexes, including LSM-based B+-trees, R-trees, and inverted indexes, can be built to expedite query processing. AsterixDB internally uses a variation of the tiering merge policy to manage disk components, similar to the one used in existing systems [4,7]. Instead of organizing components into levels explicitly as in Figure 2b, AsterixDB's variation simply schedules merges based on the sizes of disk components. In this work, we do not focus on the LSM-tree implementation in AsterixDB but instead use AsterixDB as a common testbed to evaluate various LSM-tree designs.
Related Work
LSM-trees. Recently, a large number of improvements of the original LSM-tree [45] have been proposed. [42] surveys these improvements, ranging from improving write performance [18,28,29,38,40,44,47,57], optimizing memory management [13,17,53,59], supporting automatic tuning of LSM-trees [26,27,39], optimizing LSM-based secondary indexes [41,46], to extending the applicability of LSM-trees [43,49]. However, all of these efforts have largely ignored performance variances and write stalls of LSM-trees.

Several LSM-tree implementations seek to bound the write processing latency to alleviate the negative impact of write stalls [5,7,37,58]. bLSM [51] proposes a spring-and-gear merge scheduler to avoid write stalls. As shown in Figure 4, bLSM has one memory component, C0, and two disk components, C1 and C2. The memory component C0 is continuously flushed and merged with C1. When C1 becomes full, a new C1 component is created while the old C1, which now becomes C1′, will be merged with C2. bLSM ensures that for each Level i, the progress of merging Ci′ into Ci+1 (denoted as "out_i") will be roughly identical to the progress of the formation of a new Ci (denoted as "in_i"). This eventually limits the write rate for the memory component (in_0) and avoids blocking writes. However, we will see later that simply bounding the maximum write processing latency alone is insufficient, because a large variance in the processing rate can still cause large queuing delays for subsequent writes.
Performance Stability. Performance stability has long been recognized as a critical performance metric. The TPC-C benchmark [10] measures not only absolute throughput, but also specifies the acceptable upper bounds for the percentile latencies. Huang et al. [36] applied VProfiler [35] to identify major sources of variance in database transactions. Various techniques have been proposed to optimize the variance of query processing [15,16,20,24,48,55]. Cao et al. [21] found that variance is common in storage stacks and heavily depends on configurations and workloads. Dean and Barroso [30] discussed several engineering techniques to reduce performance variances at Google. Different from these efforts, in this work we focus on the performance variances of LSM-trees due to their inherent out-of-place update design.
EXPERIMENTAL METHODOLOGY
For ease of presentation, we will mix our techniques with a detailed performance analysis for each LSM-tree design. We now describe the general experimental setup and methodology for all experiments to follow.
Experimental Setup
All experiments were run on a single node with an 8-core Intel i7-7567U 3.5GHZ CPU, 16 GB of memory, a 500GB SSD, and a 1TB 7200 rpm hard disk. We used the SSD for LSM storage and configured the hard disk for transaction logging due to its sufficiently high sequential throughput. We allocated 10GB of memory for the AsterixDB instance. Within that allocation, the buffer cache size was set at 2GB. Each LSM memory component had a 128MB budget, and each LSM-tree had two memory components to minimize stalls during flushes. Each disk component had a Bloom filter with a false positive rate setting of 1%. The data page size was set at 4KB to align with the SSD page size.
It is important to note that not all sources of performance variance can be eliminated [36]. For example, writing a key-value pair with a 1MB value inherently requires more work than writing one that only has 1KB. Moreover, short time periods with quickly occurring writes (workload bursts) will be much more likely to cause write stalls than a long period of slow writes, even though their long-term write rate may be the same. In this paper, we will focus on the avoidable variance [36] caused by the internal implementation of LSM-trees instead of variances in the workloads.
To evaluate the internal variances of LSM-trees, we adopt YCSB [25] as the basis for our experimental workload. Instead of using the pre-defined YCSB workloads, we designed our own workloads to better study the performance stability of LSM-trees. Each experiment first loads an LSM-tree with 100 million records, in random key order, where each record has size 1KB. It then runs for 2 hours to update the previously loaded LSM-tree. This ensures that the measured write throughput of an LSM-tree is stable over time. Unless otherwise noted, we used one writer thread for writing data to the LSM memory components. We evaluated two update workloads, where the updated keys follow either a uniform or Zipf distribution. The specific workload setups will be discussed in the subsequent sections.
We used two commonly used I/O optimizations when implementing LSM-trees, namely I/O throttling and periodic disk forces. In all experiments, we throttled the SSD write speed of all LSM flush and merge operations to 100MB/s. This was implemented by using a rate limiter to inject artificial sleeps into SSD writes. This mechanism bounds the negative impact of the SSD writes on query performance and allows us to more fairly compare the performance differences of various LSM merge schedulers. We further had each flush or merge operation force its SSD writes after each 16MB of data. This helps to limit the OS I/O queue length, reducing the negative impact of SSD writes on queries. We have verified that disabling this optimization would not impact the performance trends of writes; however, large forces at the end of each flush and merge operation, which are required for durability, can significantly interfere with queries.
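The I/O throttling described above can be approximated by a simple rate limiter that injects artificial sleeps before each batch of SSD writes; this is only a sketch of the idea with hypothetical names, not AsterixDB's actual rate limiter:

```python
import time

class RateLimiter:
    """Throttle LSM flush/merge writes to a fixed bandwidth by injecting sleeps."""
    def __init__(self, bytes_per_second):
        self.bytes_per_second = bytes_per_second
        self.next_allowed_time = time.monotonic()

    def request(self, num_bytes):
        """Block until writing num_bytes stays within the bandwidth budget."""
        now = time.monotonic()
        # if we are ahead of schedule, wait until the budget allows the next write
        wait = self.next_allowed_time - now
        if wait > 0:
            time.sleep(wait)
        # reserve the time this write is allowed to take at the budgeted bandwidth
        self.next_allowed_time = max(self.next_allowed_time, now) + \
            num_bytes / self.bytes_per_second

# throttle all flush and merge writes to 100 MB/s, as in our experiments
limiter = RateLimiter(100 * 1024 * 1024)
```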
Performance Metrics
To quantify the impact of write stalls, we will not only present the write throughput of LSM-trees but also their write latencies. However, there are different models for measuring write latencies. Throughout the paper, we will use arrival rate to denote the rate at which writes are submitted by clients, processing rate to denote the rate at which writes can be processed by an LSM-tree, and write throughput to denote the number of writes processed by an LSM-tree per unit of time. The difference between the write throughput and arrival/processing rates is discussed further below.
The bLSM paper [51], as well as most of the existing LSM research, used the experimental setup depicted in Figure 5a to write as much data as possible and measure the latency of each write. In this closed system setup [33], the processing rate essentially controls the arrival rate, which further equals the write throughput. Although this model is sufficient for measuring the maximum write throughput of LSM-trees, it is not suitable for characterizing their write latencies for several reasons. First, writing to memory is inherently faster than background I/Os, so an LSM-tree will always have to stall writes in order to wait for lagged flushes and merges.
Moreover, under this model, a client cannot submit its next write until its current write is completed. Thus, when the LSM-tree is stalled, only a small number of ongoing writes will actually experience a large latency since the remaining writes have not been submitted yet.
In practice, a DBMS generally cannot control how quickly writes are submitted by external clients, nor will their writes always arrive as fast as possible. Instead, the arrival rate is usually independent from the processing rate, and when the system is not able to process writes as fast as they arrive, the newly arriving writes must be temporarily queued. In such an open system (Figure 5b), the measured write latency includes both the queuing latency and processing latency. Moreover, an important constraint is that the arrival rate must be smaller than the processing rate since otherwise the queue length will be unbounded. Thus, the (overall) write throughput is actually determined by the arrival rate.
A simple example will illustrate the important difference between these two models. Suppose that 5 clients are used to generate an intended arrival rate of 1000 writes/s and that the LSM-tree stalls for 1 second. Under the closed system model (Figure 5a), only 5 delayed writes will experience a write latency of 1s since the remaining (intended) 995 writes simply will not occur. However, under the open system model (Figure 5b), all 1000 writes will be queued and their average latency will be at least 0.5s.
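A tiny calculation (a sketch of the arithmetic in this example, under the simplifying assumption that queued writes cannot start before the stall ends) makes the difference concrete:

```python
# Open-system view of a 1-second stall at an arrival rate of 1000 writes/s:
# every write that arrives during the stall is queued until the stall ends.
arrival_rate = 1000                      # writes per second
stall = 1.0                              # seconds with no processing

# write i arrives at time i / arrival_rate; its queuing delay is at least
# the time remaining until the stall ends (a lower bound on its latency)
delays = [stall - i / arrival_rate for i in range(int(arrival_rate * stall))]
print(sum(delays) / len(delays))         # ~0.5s on average, versus only 5 affected
                                         # writes under the closed-system model
```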
To evaluate write latencies in an open system, one must first set the arrival rate properly since the write latency heavily depends on the arrival rate. It is also important to maximize the arrival rate to maximize the system's utilization. For these reasons, we propose a two-phase evaluation approach with a testing phase and a running phase. During the testing phase, we use the closed system model ( Figure 5a) to measure the maximum write throughput of an LSM-tree, which is also its processing rate. When measuring the maximum write throughput, we excluded the initial 20-minute period (out of 2 hours) of the testing phase since the initially loaded LSM-tree has a relatively small number of disk components at first. During the running phase, we use the open system model (Figure 5b) to evaluate the write latencies under a constant arrival rate set at 95% of the measured maximum write throughput. Based on queuing theory [33], the queuing time approaches infinity when the utilization, which is the ratio between the arrival rate and the processing rate, approaches 100%. We thus empirically determine a high utilization load (95%) while leaving some room for the system to absorb variance. If the running phase then reports large write latencies, the maximum write throughput as determined in the testing phase is not sustainable; we must improve the implementation of the LSM-tree or reduce the expected arrival rate to reduce the latencies. In contrast, if the measured write latency is small, then the given LSM-tree can provide a high write throughput with a small performance variance.
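In outline, the two-phase approach can be expressed as the following driver sketch (hypothetical callables supplied by the experiment harness; the 95% utilization target is the one used throughout this paper):

```python
def two_phase_evaluation(measure_max_throughput, run_with_constant_arrivals,
                         utilization=0.95):
    """Testing phase: measure the sustained closed-system write throughput.
    Running phase: replay writes at a constant arrival rate and observe latencies.

    measure_max_throughput: callable returning the sustained throughput (records/s).
    run_with_constant_arrivals: callable taking an arrival rate and returning a
    list of observed per-write latencies (queuing time + processing time).
    """
    max_throughput = measure_max_throughput()                      # testing phase
    arrival_rate = utilization * max_throughput                    # e.g., 95% of the maximum
    latencies = sorted(run_with_constant_arrivals(arrival_rate))   # running phase
    p99 = latencies[int(0.99 * (len(latencies) - 1))]
    return max_throughput, p99
```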
LSM MERGE SCHEDULER
Different from a merge policy, which decides which components to merge, a merge scheduler is responsible for executing the merge operations created by the merge policy. In this section, we discuss the design choices for a merge scheduler and evaluate bLSM's spring-and-gear merge scheduler.
Scheduling Choices
The write cost of an LSM-tree, which is the number of I/Os per write, is determined by the LSM-tree design itself and the workload characteristics but not by how merges are executed [26]. Thus, a merge scheduler will have little impact on the overall write throughput of an LSM-tree as long as the allocated I/O bandwidth budget can be fully utilized. However, different scheduling choices can significantly impact the write stalls of an LSM-tree, and merge schedulers must be carefully designed to minimize write stalls. We have identified the following design choices for a merge scheduler.
Component Constraint: A merge scheduler usually specifies an upper-bound constraint on the total number of components allowed to accumulate before incoming writes to the LSM memory components should be stalled. We call this the component constraint. For example, bLSM [51] allows at most two disk components per level, while other systems like HBase [4] or Cassandra [2] specify the total number of disk components across all levels.
Interaction with Writes: There exist different strategies to enforce a given component constraint. One strategy is to simply stop processing writes once the component constraint is violated. Alternatively, the processing of writes can be degraded gracefully based on the merge pressure [51].
Degree of Concurrency: In general, an LSM-tree can often create multiple merge operations in the same time period. A merge scheduler should decide how these merge operations should be scheduled. Allowing concurrent merges will enable merges at multiple levels to proceed concurrently, but they will also compete for CPU and I/O resources, which can negatively impact query performance [13]. As two examples, bLSM [51] allows one merge operation per level, while LevelDB [5] uses just one single background thread to execute all merges one by one.
I/O Bandwidth Allocation: Given multiple concurrent merge operations, the merge scheduler should further decide how to allocate the available I/O bandwidth among these merge operations. A commonly used heuristic is to allocate I/O bandwidth "fairly" (evenly) to all active merge operations. Alternatively, bLSM [51] allocates I/O bandwidth based on the relative progress of the merge operations to ensure that merges at each level all make steady progress.
Evaluation of bLSM
Due to the implementation complexity of bLSM and its dependency on a particular storage system, Stasis [50], we chose to directly evaluate the released version of bLSM [6]. bLSM uses the leveling merge policy with two on-disk levels. We set its memory component size to 1GB and size ratio to 10 so that the experimental dataset with 100 million records can fit into the last level. We used 8 write threads to maximize the write throughput of bLSM.
Testing Phase. During the testing phase, we measured the maximum write throughput of bLSM by writing as much data as possible using both the uniform and Zipf update workloads. The instantaneous write throughput of bLSM under these two workloads is shown in Figure 6a. For readability, the write throughput is averaged over 30-second windows. (Unless otherwise noted, the same aggregation applies to all later experiments as well.)
Even though bLSM's merge scheduler prevents writes from being stalled, the instantaneous write throughput still exhibits a large variance with regular temporary peaks. Recall that bLSM uses the merge progress at each level to control its in-memory write speed. After the component C1 is full and becomes C1′, the new C1 will be empty and will have much shorter merge times. This will temporarily increase the in-memory write speed of bLSM, which then quickly drops as C1 grows larger and larger. Moreover, the Zipf update workload increases the write throughput only because updated entries can be reclaimed earlier; the overall variance trends remain the same.
Running Phase. Based on the maximum write throughput measured in the testing phase, we then used a constant data arrival process (95% of the maximum) in the running phase to evaluate bLSM's behavior. Figure 6b shows the instantaneous write throughput of bLSM under the uniform and Zipf update workloads. bLSM maintains a sustained write throughput during the initial period of the experiment, but later has to slow down its in-memory write rate periodically due to background merge pressure. Figure 6c further shows the resulting percentile write and processing latencies. The processing latency measures only the time for the LSM-tree to process a write, while the write latency includes both the write's queuing time and processing time. By slowing down the in-memory write rate, bLSM indeed bounds the processing latency. However, the write latency is much larger because writes must be queued when they cannot be processed immediately. This suggests that simply bounding the maximum processing latency is far from sufficient; it is important to minimize the variance in an LSM-tree's processing rate to minimize write latencies.
FULL MERGES
In this section, we explore the scheduling choices of LSMtrees with full merges and then evaluate the impact of merge scheduling on write stalls using our two-phase approach. Finally, we examine other variations of the tiering merge policy that are used in practical systems.
Merge Scheduling for Full Merges
We first introduce some useful notation for use throughout our analysis in Table 1. To simplify the analysis, we will ignore the I/O cost of flushes since merges consume most of the I/O bandwidth.
Component Constraint
To provide acceptable query performance and space utilization, the total number of disk components of an LSM-tree must be bounded. We call this upper bound the component constraint, and it can be enforced either locally at each level or globally across all levels.

It remains a question how to determine the maximum number of disk components for the component constraint. In general, tolerating more disk components will increase the LSM-tree's ability to reduce write stalls and absorb write bursts, but it will decrease query performance and space utilization. Given the negative impact of stalls on write latencies, one solution is to tolerate a sufficient number of disk components to avoid write stalls while the worst-case query performance and space utilization are still bounded. For example, one conservative constraint would be to tolerate twice the expected number of disk components, e.g., 2 · L components for leveling and 2 · T · L components for tiering.
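A global component constraint of this conservative form can be checked with a sketch like the following (hypothetical helper functions) before admitting writes to the memory component:

```python
def max_tolerated_components(policy, size_ratio, num_levels):
    """Conservative global constraint: twice the expected number of disk components."""
    expected = num_levels if policy == "leveling" else size_ratio * num_levels
    return 2 * expected

def should_stall_writes(num_disk_components, policy, size_ratio, num_levels):
    """Stall in-memory writes once the global component constraint is violated."""
    return num_disk_components > max_tolerated_components(policy, size_ratio, num_levels)

# e.g., leveling with L = 3 tolerates 6 components; tiering with T = 3, L = 8 tolerates 48
print(max_tolerated_components("leveling", 10, 3), max_tolerated_components("tiering", 3, 8))
```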
Interaction with Writes
When the component constraint is violated, the processing of writes by an LSM-tree has to be slowed down or stopped. Existing LSM-tree implementations [5,7,51] prefer to gracefully slow down the in-memory write rate by adding delays to some writes. This approach reduces the maximum processing latency, as large pauses are broken down into many smaller ones, but the overall processing rate of an LSM-tree, which depends on the I/O cost of each write, is not affected. Moreover, this approach will result in an even larger queuing latency. There may be additional considerations for gracefully slowing down writes, but we argue that processing writes as quickly as possible minimizes the overall write latency, as stated by the following theorem. See the Appendix for proofs of all theorems.
Theorem 1. Given any data arrival process and any LSM-tree, processing writes as quickly as possible minimizes the latency of each write.
Proof Sketch. Consider two merge schedulers S and S′ which only differ in that S may add arbitrary delays to writes while S′ processes writes as quickly as possible. For each write request r, r must be completed by S′ no later than by S because the LSM-tree has the same processing rate under both schedulers but S adds some delays to writes.
It should be noted that Theorem 1 only considers write latencies. By processing writes as quickly as possible, disk components can stack up more quickly (up to the component constraint), which may negatively impact query performance. Thus, a better approach may be to increase the write processing rate, e.g., by changing the structure of the LSM-tree. We leave the exploration of this direction as future work.
Degree of Concurrency
Let us first consider a single-threaded merge scheduler, under which newly flushed components keep being added while a merge operation is being executed, assuming that flushes can still proceed. Our two-phase evaluation approach chooses the maximum write throughput of an LSM-tree as the arrival rate µ. For leveling, the maximum write throughput is approximately W_level = 2·B / (T·L), as each entry is merged about T/2 times per level. For tiering, the maximum write throughput is approximately W_tier = B / L, as each entry is merged only once per level. By substituting W_level and W_tier for µ, one needs to tolerate at least 2·T^(i−1)/L flushed components for leveling and T^i/L flushed components for tiering while a component at Level i is being merged, in order to avoid write stalls. Since the term T^i grows exponentially, a large number of flushed components will have to be tolerated when a large disk component is being merged. Consider the leveling merge policy with a size ratio of 10. To merge a disk component at Level 5, approximately 2·10^4 / 5 = 4000 flushed components would need to be tolerated, which is highly unacceptable.
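The following small helper (a sketch of the arithmetic above, assuming L = 5 levels in the example) reproduces the Level-5 calculation:

```python
def flushed_components_during_merge(policy, size_ratio, level, num_levels):
    """Approximate number of flushed components that pile up while a merge at
    the given level runs under a single-threaded scheduler (see text above)."""
    if policy == "leveling":
        return 2 * size_ratio ** (level - 1) / num_levels
    return size_ratio ** level / num_levels            # tiering

# the example from the text: leveling, T = 10, merging at Level 5 of a 5-level tree
print(flushed_components_during_merge("leveling", 10, 5, 5))   # -> 4000.0
```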
Clearly, concurrent merges must be performed to minimize write stalls. When a large merge is being processed, smaller merges can still be completed to reduce the number of components. By the definition of the tiering and leveling merge policies, there can be at most one active merge operation per level. Thus, given an LSM-tree with L levels, at most L merge operations can be scheduled concurrently.
I/O Bandwidth Allocation
Given multiple active merge operations, the merge scheduler must further decide how to allocate I/O bandwidth to these operations. A heuristic used by existing systems [2,4,7] is to allocate I/O bandwidth fairly (evenly) to all ongoing merges. We call this the fair scheduler. The fair scheduler ensures that all merges at different levels can proceed, thus eliminating potential starvation. Recall that write stalls occur when an LSM-tree has too many disk components, thus violating the component constraint. It is unclear whether or not the fair scheduler can minimize write stalls by minimizing the number of disk components over time.
Recall that both the leveling and tiering merge policies always merge the same number of disk components at once. We propose a novel greedy scheduler that always allocates the full I/O bandwidth to the merge operation with the smallest remaining number of bytes. The greedy scheduler has a useful property that it minimizes the number of disk components over time for a given set of merge operations.
Theorem 2. Given any set of merge operations that process the same number of disk components and any I/O bandwidth budget, the greedy scheduler minimizes the number of disk components at any time instant.
Proof Sketch. Consider an arbitrary scheduler S and the greedy scheduler S′. Given N merge operations, we can show that S′ always completes the i-th (1 ≤ i ≤ N) merge operation no later than S. This can be done by noting that S′ always processes the smallest merge operation first.
Theorem 2 only considers a set of statically created merge operations. This conclusion may not hold in general because sometimes completing a large merge may enable the merge policy to create smaller merges, which can then reduce the number of disk components more quickly. Because of this, there actually exists no merge scheduler that can always minimize the number of disk components over time, as stated by the following theorem. However, as we will see in our later evaluation, the greedy scheduler is still a very effective heuristic for minimizing write stalls.
Theorem 3. Given any I/O bandwidth budget, no merge scheduler can minimize the number of disk components at any time instant for any data arrival process and any LSM-tree under a deterministic merge policy where all merge operations process the same number of disk components.
Proof Sketch. Consider an LSM-tree that has created a small merge MS and a large merge ML. Completing ML allows the LSM-tree to create a new small merge MS′ that is smaller than MS. Consider two merge schedulers S1 and S2, where S1 first processes MS and then ML, and S2 first processes ML and then MS′. It can be shown that S1 has the earliest completion time for the first merge and S2 has the earliest completion time for the second merge, but no merge scheduler can outperform both S1 and S2.

The pseudocode for the greedy scheduling algorithm is shown in Figure 7. It stores the list of scheduled merge operations in mergeOps. At any time, there is at most one merge operation being executed by the merge thread, which is denoted by activeOp. The merge policy calls ScheduleMerge when a new merge operation is scheduled, and the merge thread calls CompleteMerge when a merge operation is completed. In both functions, mergeOps is updated accordingly and the merge scheduler is notified to check whether a new merge operation needs to be executed. It should be noted that in general one cannot exactly know which merge operation requires the least amount of I/O bandwidth until the new component has been fully produced. Thus, line 12 uses the number of remaining input pages as an approximation to determine the smallest merge operation. Finally, if the newly selected merge operation is inactive, i.e., not being executed, the scheduler pauses the previous active merge operation and activates the new one.
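Based on this description, a hedged sketch of the greedy scheduling logic might look as follows (hypothetical MergeOperation fields such as remaining_input_pages, pause, and resume; the actual Figure 7 pseudocode and the AsterixDB implementation may differ in detail):

```python
import threading

class GreedyScheduler:
    """Always execute the merge with the fewest remaining input pages."""

    def __init__(self):
        self.merge_ops = []       # scheduled but not yet completed merge operations
        self.active_op = None     # the single merge currently receiving I/O bandwidth
        self.lock = threading.Lock()

    def schedule_merge(self, op):         # called by the merge policy
        with self.lock:
            self.merge_ops.append(op)
            self._reschedule()

    def complete_merge(self, op):         # called by the merge thread
        with self.lock:
            self.merge_ops.remove(op)
            if self.active_op is op:
                self.active_op = None
            self._reschedule()

    def _reschedule(self):
        if not self.merge_ops:
            return
        # remaining input pages approximate the remaining bytes of each merge
        smallest = min(self.merge_ops, key=lambda m: m.remaining_input_pages)
        if smallest is not self.active_op:
            if self.active_op is not None:
                self.active_op.pause()    # hand the full I/O budget to the smallest merge
            self.active_op = smallest
            smallest.resume()
```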
Putting Everything Together
Under the greedy scheduler, larger merges may be starved at times since they receive lower priority. This has a few implications. First, during normal user workloads, such starvation can only occur if the arrival rate is temporarily faster than the processing rate of an LSM-tree. Given the negative impact of write stalls on write latencies, it can actually be beneficial to temporarily delay large merges so that the system can better absorb write bursts. Second, the greedy scheduler should not be used in the testing phase because it would report a higher but unsustainable write throughput due to such starved large merges.
Finally, our discussions of the greedy scheduler as well as the single-threaded scheduler are based on an important assumption that a single merge operation is able to fully utilize the allocated I/O bandwidth budget.
Experimental Evaluation
We now experimentally evaluate the write stalls of LSMtrees using our two-phase approach. We discuss the specific experimental setup followed by the detailed evaluation, including the impact of merge schedulers on write stalls, the benefit of enforcing the component constraint globally and of processing writes as quickly as possible, and the impact of merge scheduling on query performance.
Experimental Setup
All experiments in this section were performed using As-terixDB with the general setup described in Section 3. Unless otherwise noted, the size ratio of leveling was set at 10, which is a commonly used configuration in practice [5,7]. For the experimental dataset with 100 million unique records, this results in a three-level LSM-tree, where the last level is nearly full. For tiering, the size ratio was set at 3, which leads to better write performance than leveling without sacrificing too much on query performance. This ratio results in an eight-level LSM-tree.
We evaluated the single-threaded scheduler (Section 5.1.3), the fair scheduler (Section 5.1.3), and the proposed greedy scheduler (Section 5.1.5). The single-threaded scheduler only executes one merge at a time using a single thread. Both the fair and greedy schedulers are concurrent schedulers that execute each merge using a separate thread. The difference is that the fair scheduler allocates the I/O bandwidth to all ongoing merges evenly, while the greedy scheduler always allocates the full I/O bandwidth to the smallest merge. To minimize flush stalls, a flush operation is always executed in a separate thread and receives higher I/O priority. Unless otherwise noted, all three schedulers enforce global component constraints and process writes as quickly as possible.
The maximum number of disk components is set at twice the expected number of disk components for each merge policy. Each experiment was performed under both the uniform and Zipf update workloads. Since the Zipf update workload had little impact on the overall performance trends, except that it led to higher write throughput, its experiment results are omitted here for brevity.
Testing Phase
During the testing phase, we measured the maximum write throughput of an LSM-tree by writing as much data as possible. In general, alternative merge schedulers have little impact on the maximum write throughput since the I/O bandwidth budget is fixed, but their measured write throughput may be different due to the finite experimental period. Figures 8a and 8b show the instantaneous write throughput of LSM-trees using different merge schedulers for tiering and leveling. Under both merge policies, the single-threaded scheduler regularly exhibits long pauses, making its write throughput vary over time. The fair scheduler exhibits a relatively stable write throughput over time since all merge levels can proceed at the same rate. With leveling, its write throughput still varies slightly over time since the component size at each level varies. The greedy scheduler appears to achieve a higher write throughput than the fair scheduler by starving large merges. However, this higher write throughput eventually drops when no small merges can be scheduled. For example, the write throughput with tiering drops slightly at 1100s and 4000s, and there is a long pause from 6000s to 7000s with leveling. This result confirms that the fair scheduler is more suitable for testing the maximum write throughput of an LSM-tree, as merges at all levels can proceed at the same rate. In contrast, the single-threaded scheduler incurs many long pauses, causing a large variance in the measured write throughput. The greedy scheduler provides a higher write throughput by starving large merges, which would be undesirable at runtime.
Running Phase
Turning to the running phase, we used a constant data arrival process, configured based on 95% of the maximum write throughput measured by the fair scheduler, to evaluate the write stalls of LSM-trees.
LSM-trees can provide a stable write throughput. We first evaluated whether LSM-trees with different merge schedulers can support a high write throughput with low write latencies. For each experiment, we measured the instantaneous write throughput and the number of disk components over time as well as percentile write latencies.
The results for tiering are shown in Figure 9. Both the fair and greedy schedulers are able to provide stable write throughputs, and the total number of disk components never reaches the configured threshold. The greedy scheduler also minimizes the number of disk components over time. The single-threaded scheduler, however, causes a large number of write stalls due to the blocking of large merges, which confirms our previous analysis. Because of this, the single-threaded scheduler incurs large percentile write latencies. In contrast, both the fair and greedy schedulers provide small write latencies because of their stable write throughput.

Figure 10 shows the corresponding results for leveling. Due to the inherent variance of merge times under leveling, the fair scheduler alone cannot provide a stable write throughput; this results in relatively large write latencies. In contrast, the greedy scheduler avoids write stalls by always minimizing the number of components, which results in small write latencies. This experiment confirms that LSM-trees can achieve a stable write throughput with a relatively small performance variance. Moreover, the write stalls of an LSM-tree heavily depend on the design of the merge scheduler.
Impact of Size Ratio. To verify our findings on LSM-trees with different shapes, we further carried out a set of experiments by varying the size ratio from 2 to 10 for both tiering and leveling. For leveling, we applied the dynamic level size optimization [31] so that the largest level remains almost full by slightly modifying the size ratio between Levels 0 and 1. This optimization maximizes space utilization without impacting write or query performance.
During the testing phase, we measured the maximum write throughput for each LSM-tree configuration using the fair scheduler, which is shown in Figure 11a. In general, a larger size ratio increases write throughput for tiering but decreases write throughput for leveling because it decreases the merge frequency of tiering but increases that of leveling. During the running phase, we evaluated the 99th percentile write latency for each LSM-tree configuration using constant data arrivals, which is shown in Figure 11b. With tiering, both the fair and greedy schedulers are able to provide a stable write throughput with small write latencies. With leveling, the fair scheduler causes large write latencies when the size ratio becomes larger, as we have seen before. In contrast, the greedy scheduler is always able to provide a stable write throughput along with small write latencies. This again confirms that LSM-trees, regardless of their size ratios, can provide a high write throughput with a small variance given an appropriately chosen merge scheduler.
Benefit of Global Component Constraints. We next evaluated the benefit of global component constraints in terms of minimizing write stalls. We additionally included a variation of the fair and greedy schedulers that enforces local component constraints, that is, 2 components per level for leveling and 2 · T components per level for tiering. The resulting write latencies are shown in Figure 12. In general, local component constraints have little impact on tiering since its merge time per level is relatively stable. However, the resulting write latencies for leveling become much larger due to the inherent variance of its merge times. Moreover, local component constraints have a larger negative impact on the greedy scheduler. The greedy scheduler prefers small merges, which may not be able to complete due to possible violations of the constraint at the next level. This in turn causes longer stalls and thus larger percentile write latencies. In contrast, global component constraints better absorb these variances, reducing the write latencies.
Benefits of Processing Writes As Quickly As Possible. We further evaluated the benefit of processing writes as quickly as possible. We used the leveling merge policy with a bursty data arrival process that alternates between a normal arrival rate of 2000 records/s for 25 minutes and a high arrival rate of 8000 records/s for 5 minutes. We evaluated two variations of the greedy scheduler. The first variation processes writes as quickly as possible (denoted as "No Limit"), as we did before. The second variation enforces a maximum in-memory write rate of 4000 records/s (denoted as "Limit") to avoid write stalls.
The instantaneous write throughput and the percentile write latencies of the two variations are shown in Figures 13a and 13b respectively. As Figure 13a shows, delaying writes avoids write stalls and the resulting write throughput is more stable over time. However, this causes larger write latencies (Figure 13b) since delayed writes must be queued. In contrast, writing as quickly as possible causes occasional write stalls but still minimizes the overall write latencies. This confirms our previous analysis that processing writes as quickly as possible minimizes write latencies.
Impact on Query Performance. Finally, since the point of having data is to query it, we evaluated the impact of the fair and greedy schedulers on concurrent query performance. We evaluated three types of queries, namely point lookups, short scans, and long scans. A point lookup accesses 1 record given a primary key. A short scan query accesses 100 records and a long scan query accesses 1 million records. In each experiment, we executed one type of query concurrently with updates arriving at constant rates as before. To maximize query performance while ensuring that LSM flush and merge operations receive enough I/O bandwidth, we used 8 query threads for point lookups and short scans and used 4 query threads for long scans. We also evaluated the impact of forcing SSD writes regularly on query performance. For this purpose, we included the variations of the fair and greedy schedulers that only force SSD writes when a merge completes.
The instantaneous query throughput under the tiering and leveling merge policies is depicted in Figures 14 and 16 respectively. The corresponding percentile query latencies are shown in Figures 17 and 15 respectively. For point lookups and short scans, the query throughput is averaged over 30-second windows. For long scans, the query throughput is averaged over 1-minute windows. As the results show, leveling has similar point lookup throughput to tiering because Bloom filters can filter out most unnecessary I/Os, but it has much better range query throughput than tiering. Moreover, the greedy scheduler always improves query performance by minimizing the number of components. Among the three types of queries, point lookups and short scans benefit more from the greedy scheduler since these two types of queries are more sensitive to the number of disk components. In contrast, long scans incur most of their I/O cost at the largest level. Moreover, the tiering merge policy benefits more from the greedy scheduler than the leveling merge policy since the performance difference between the greedy and fair schedulers is larger under the tiering merge policy. This is because the tiering merge policy has more disk components than the leveling merge policy. Note that under the leveling merge policy, there is a drop in query throughput under the fair scheduler at around 5400s, even though there is little difference in the number of disk components between the fair and greedy schedulers. This drop is caused by write stalls during that period, as indicated by the instantaneous write throughput of Figure 10. After the LSM-tree recovers from write stalls, it attempts to write as much data as possible in order to catch up, which results in a lower query throughput.
Forcing SSD writes regularly has some slight negative impact on query throughput, but it significantly reduces the percentile query latencies. The reason is that disk components must be forced to disk at the end of merges. Without forcing SSD writes regularly, these large disk forces will significantly impact the query latencies.
Tiering in Practice
Existing LSM-based systems, such as BigTable [23] and HBase [4], use a slight variation of the tiering merge policy discussed in the literature. This variation, often referred to as the size-tiered merge policy, does not organize components into levels explicitly but simply schedules merges based on the sizes of disk components. This policy has three important parameters, namely the size ratio T, the minimum number of components to merge min, and the maximum number of components to merge max. It merges a sequence of components, whose length is at least min, when the total size of the sequence's younger components is T times larger than that of the oldest component in the sequence. It also seeks to merge as many components as possible at once until max is reached. Concurrent merges can also be performed. For example, in HBase [4], each execution of the size-tiered merge policy will always examine the longest prefix of the component sequence in which no component is being merged.
An example of the size-tiered merge policy is shown in Figure 18, where each disk component is labeled with its size. Let the size ratio be 1.2 and the minimum and maximum number of components per merge be 2 and 4 respectively. Suppose initially that no component is being merged. The first execution of the size-tiered merge policy starts from the oldest component, labeled 100GB. However, no merge is scheduled since this component is too large. It then examines the next component, labeled 10GB, and schedules a merge operation for the 4 components labeled from 10GB to 5GB. The next execution of the size-tiered merge policy starts from the component labeled 1GB, and it schedules a merge for the 3 components labeled from 128MB to 64MB.
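A sketch of the selection rule described above (hypothetical helper; the intermediate component sizes below are assumed for illustration, and HBase's actual implementation differs) reproduces the Figure 18 example:

```python
def pick_size_tiered_merge(sizes, size_ratio, min_merge, max_merge):
    """sizes: component sizes from oldest to newest (none currently being merged).
    Returns the index range [start, end) of the components to merge, or None.

    A sequence starting at `start` is mergeable when the total size of its
    younger components is at least size_ratio times the size of its oldest one;
    the sequence is extended greedily up to max_merge components."""
    n = len(sizes)
    for start in range(n):
        end = min(n, start + max_merge)
        if end - start < min_merge:
            break
        if sum(sizes[start + 1:end]) >= size_ratio * sizes[start]:
            return start, end
    return None

GB, MB = 1024 ** 3, 1024 ** 2
# The Figure 18 example with T = 1.2, min = 2, max = 4 (intermediate sizes assumed):
print(pick_size_tiered_merge([100 * GB, 10 * GB, 7 * GB, 6 * GB, 5 * GB], 1.2, 2, 4))
# -> (1, 5): the 4 components labeled from 10GB to 5GB
print(pick_size_tiered_merge([1 * GB, 128 * MB, 100 * MB, 64 * MB], 1.2, 2, 4))
# -> (1, 4): the 3 components labeled from 128MB to 64MB
```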
To evaluate the write stalls of the size-tiered merge policy, we repeated the experiments using our two-phase approach. In our evaluation, the size ratio was set at 1.2, which is the default value in HBase [4], and the minimum and maximum mergeable components were set at 2 and 10 respectively. The maximum tolerated disk components parameter was set at 50. During the testing phase, the maximum write throughput measured by using the fair scheduler was 17,008 records/s. Then during the running phase, we used a constant data arrival process based on 95% of this maximum throughput to evaluate write stalls. The instantaneous write throughput of the LSM-tree and the number of disk components over time are shown in Figures 19a and 19b respectively. As one can see, write stalls have occurred under the fair scheduler. Moreover, even though the greedy scheduler avoids write stalls, its number of disk components keeps increasing over time. This result indicates that the maximum write throughput measured during the testing phase is not sustainable. This problem is caused by the non-determinism of the size-tiered merge policy since it tries to merge as many disk components as possible. This behavior impacts the maximum write throughput of the LSM-tree. During the testing phase, when writes are often blocked because of too many disk components, this merge policy tends to merge more disk components at once, which then leads to a higher write throughput. However, during the running phase, when writes arrive steadily, this merge policy tends to schedule smaller merges as flushed components accumulate. For example, during the testing phase of this experiment, 55 long merges that involved 10 components were scheduled, but only 24 long merges were scheduled under the fair scheduler during the running phase. Even worse, 99.76% of the scheduled merges under the greedy scheduler involved no more than 4 components since large merges were starved.
To address this problem and to minimize write stalls, the arrival rate must be reduced. However, finding the maximum "stall-free" arrival rate is non-trivial due to the non-determinism of the size-tiered merge policy. Instead, we propose a simple and conservative solution to avoid write stalls. During the testing phase, we propose to measure the lower-bound write throughput by always merging the minimum number of disk components. This write throughput will serve as a baseline for the arrival rate. During runtime, the size-tiered merge policy can merge more disk components to dynamically increase its write throughput to minimize stalls. We repeated the previous experiments based on this solution. During the testing phase, the merge policy always merged 2 disk components, which resulted in a lower maximum write throughput of 8,863 records/s. We then repeated the running phase based on this throughput. Figures 20a and 20b show the instantaneous write throughput and the number of disk components over time respectively during the running phase. In this case, both schedulers exhibit no write stalls and the number of disk components is more stable over time. Moreover, the greedy scheduler still slightly reduces the number of disk components.
PARTITIONED MERGES
We now examine the write stall behavior of partitioned LSM-trees using our two-phase approach. In a partitioned LSM-tree, a large disk component is range-partitioned into multiple small files and each merge operation only processes a small number of files with overlapping ranges. Since merges always happen immediately once a level is full, a single-threaded scheduler could be sufficient to minimize write stalls. In the remainder of this section, we will evaluate LevelDB's single-threaded scheduler.
LevelDB's Merge Scheduler
LevelDB's merge scheduler is single-threaded. It computes a score for each level and selects the level with the largest score to merge. Specifically, the score for Level 0 is computed as the total number of flushed components divided by the minimum number of flushed components to merge. For a partitioned level (1 and above), its score is defined as the total size of all files at this level divided by the configured maximum size. A merge operation is scheduled if the largest score is at least 1, which means that the selected level is full. If a partitioned level is chosen to merge, LevelDB selects the next file to merge in a round-robin way.
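The score computation described above can be sketched as follows (hypothetical helper; LevelDB's actual C++ code differs):

```python
def pick_level_to_merge(level0_count, min_level0_merge, level_sizes, max_level_sizes):
    """Return the index of the level with the highest score, or None if no level is full.

    level0_count: number of flushed components at Level 0.
    level_sizes / max_level_sizes: total size and size limit of each partitioned
    level (Levels 1 and above)."""
    scores = [level0_count / min_level0_merge]                        # Level 0
    scores += [size / limit for size, limit in zip(level_sizes, max_level_sizes)]
    best = max(range(len(scores)), key=lambda i: scores[i])
    return best if scores[best] >= 1.0 else None

# e.g., 5 flushed components (minimum 4) and Level 1 at 80% of its limit: merge Level 0
print(pick_level_to_merge(5, 4, [800], [1000]))   # -> 0
```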
LevelDB only restricts the number of flushed components at Level 0. By default, the minimum number of flushed components to merge is 4. The processing of writes will be slowed down or stopped if the number of flushed components reaches 8 or 12, respectively. Since we have already shown in Section 5.1.2 that processing writes as quickly as possible reduces write latencies, we will only use the stop threshold (12) in our evaluation.
Experimental Evaluation. We have implemented LevelDB's partitioned leveling merge policy and its merge scheduler inside AsterixDB for evaluation. Similar to LevelDB, the minimum number of flushed components to merge was set at 4 and the stop threshold was set at 12 components. Unless otherwise noted, the maximum size of each file was kept fixed across experiments. To minimize write stalls caused by flushes, we used two memory components and a separate flush thread. We further evaluated the impact of two widely used merge selection strategies on write stalls. The round-robin strategy chooses the next file to merge in a round-robin way. The choose-best strategy [54] chooses the file with the fewest overlapping files at the next level.

We used our two-phase approach to evaluate this partitioned LSM-tree design. The instantaneous write throughput during the testing phase is shown in Figure 21a, where the write throughput of both strategies decreases over time due to more frequent stalls. Moreover, under the uniform update workload, the alternative selection strategies have little impact on the overall write throughput, as reported in [39]. During the running phase, we used a constant arrival process to evaluate write stalls. The instantaneous write throughput of both strategies is shown in Figure 21b. As the result shows, in both cases write stalls start to occur after time 6000s. This suggests that the measured write throughput during the testing phase is not sustainable.
Measuring Sustainable Write Throughput
One problem with LevelDB's score-based merge scheduler is that it merges as many components at Level 0 as possible at once. To see this, suppose that the minimum number of mergeable components at Level 0 is T0 and that the maximum number of components at Level 0 is T0′. During the testing phase, where writes pile up as quickly as possible, the merge scheduler tends to merge the maximum possible number of components T0′ instead of just T0 at once. Because of this, the LSM-tree will eventually transition from the expected shape (Figure 22a) to the actual shape (Figure 22b), where T is the size ratio of the partitioned levels. Note that the largest level is not affected since its size is determined by the number of unique entries, which is relatively stable. Even though this elastic design dynamically increases the processing rate as needed, it has the following problems.
Unsustainable Write Throughput. The measured maximum write throughput is based on merging T0′ flushed components at Level 0 at once. However, this is likely to cause write stalls during the running phase since flushes cannot proceed further.
Suboptimal Trade-Offs. The LSM-tree structure in Figure 22b is no longer making optimal performance trade-offs since the size ratios between its adjacent levels are not the same anymore [45]. By adjusting the sizes of intermediate levels so that adjacent levels have the same size ratio, one can improve both write throughput and space utilization without affecting query performance.

Low Space Utilization. One motivation for industrial systems to adopt partitioned LSM-trees is their higher space utilization [31]. However, the LSM-tree depicted in Figure 22b violates this performance guarantee because the ratio of wasted space increases from 1/T to (T0′/T0) · (1/T).
Because of these problems, the measured maximum write throughput cannot be used in the long-term. We propose a simple solution to address these problems. During the testing phase, we always merge exactly T0 components at Level 0. This ensures that merge preferences will be given equally to all levels so that the LSM-tree will stay in the expected shape ( Figure 22a). Then, during the running phase, the LSM-tree can elastically merge more components at Level 0 as needed to absorb write bursts.
To verify the effectiveness of the proposed solution, we repeated the previous experiments on the partitioned LSM-tree. During the testing phase, the LSM-tree always merged 4 components at Level 0 at once. The measured instantaneous write throughput is shown in Figure 23a, which is 30% lower than that of the previous experiment. During the running phase, we used a constant arrival process based on this lower write throughput. The resulting instantaneous write throughput is shown in Figure 23b, where the LSM-tree successfully maintains a sustainable write throughput without any write stalls, which in turn results in low write latencies (not shown in the figure). This confirms that LevelDB's single-threaded scheduler is sufficient to minimize write stalls, given that a single merge thread can fully utilize the I/O bandwidth budget.
After fixing the unsustainable write throughput problem of LevelDB, we further evaluated the impact of partition size on the write stalls of partitioned LSM-trees. In this experiment, we varied the size of each partitioned file from 8MB to 32GB so that partitioned merges effectively transition into full merges. The maximum write throughput during the running phase and the 99th percentile write latencies are shown in Figures 24a and 24b respectively. Even though the partition size has little impact on the overall write throughput, a large partition size can cause large write latencies since we have shown in Section 5 that a single-threaded scheduler is insufficient to minimize write stalls for full merges. Most implementations of partitioned LSM-trees today already choose a small partition size to bound the temporary space occupied by merges. We see here that one more reason to do so is to minimize write stalls under a single-threaded scheduler.
EXTENSION: SECONDARY INDEXES
We now extend our two-phase approach to evaluate LSMbased datasets in the presence of secondary indexes. We first discuss two secondary index maintenance strategies used in practical systems, followed by the experimental evaluation and analysis.
Secondary Index Maintenance
An LSM-based storage system often contains a primary index plus multiple secondary indexes for a given dataset [41]. The primary index stores the data records indexed by their primary keys, while each secondary index stores the mapping from secondary keys to primary keys. During data ingestion, secondary indexes must be properly maintained to ensure correctness. In the primary LSM-tree, writes (inserts, deletes, and updates) can be added blindly to memory since entries with identical keys will be reconciled by queries automatically. However, this mechanism does not work for secondary indexes since the value of a secondary index key might change. Thus, in addition to adding the new entry to the secondary index, the old entry (if any) must be cleaned up as well. We now discuss two secondary index maintenance strategies used in practice [41].
The eager index maintenance strategy performs a point lookup to fetch the old record during the ingestion time. If the old record exists, anti-matter entries are produced to cleanup its secondary indexes. The new record is then added to the primary index and all secondary indexes. In an update-heavy workload, these point lookups can become the ingestion bottleneck instead of the LSM-tree write operations.
The lazy index maintenance strategy does not clean up secondary indexes at ingestion time. Instead, it only adds the new entry into secondary indexes without any point lookups. Secondary indexes are then cleaned up in the background, either when merging the primary index components [52] or when merging the secondary index components [41]. Evaluating different secondary index cleanup methods is beyond the scope of this work. Instead, we choose to evaluate the lazy strategy without cleaning up secondary indexes.
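To illustrate the difference, here is a minimal sketch of the two strategies for a single upsert, using plain dictionaries as hypothetical stand-ins for the primary and secondary LSM-trees (none of these names are the API of any particular system):

def upsert_eager(primary, secondary, key, new_val):
    old_val = primary.get(key)               # point lookup to fetch the old record
    if old_val is not None and old_val != new_val:
        secondary.pop((old_val, key), None)   # anti-matter: clean up the old secondary entry
    primary[key] = new_val                    # add to the primary index
    secondary[(new_val, key)] = True          # add the new secondary entry

def upsert_lazy(primary, secondary, key, new_val):
    primary[key] = new_val                    # blind write, no point lookup
    secondary[(new_val, key)] = True          # stale secondary entries are left behind
                                              # and cleaned up later in the background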
Experimental Evaluation
Experiment Setup. In this set of experiments, we modified the YCSB benchmark to allow us to incorporate secondary indexes and formulate secondary index queries. Specifically, we generated records with multiple fields, with each secondary field value drawn randomly from a uniform distribution whose range is based on the total number of base records. We built two secondary indexes in our experiment. The primary index and the two secondary indexes all used the tiering merge policy with size ratio 3.
In this set of experiments, we evaluated two merge schedulers, namely fair and greedy. Each LSM-tree is merged independently with a separate merge scheduler instance. However, these LSM-trees shared the same memory budget of 128MB for each memory component and the same I/O bandwidth budget of 100MB/s. We also evaluated two index maintenance strategies, namely eager and lazy. For the eager strategy, we used 8 writer threads to maximize the point lookup throughput. For the lazy strategy, 1 writer thread was sufficient to reach the maximum write throughput since there were no point lookups during data ingestion. Testing Phase. We first measured the maximum write throughput of the lazy and eager strategies using the fair scheduler during the testing phase. The maximum write throughput was 9,731 records/s for the lazy strategy and 7,601 records/s for the eager strategy. (The eager strategy results in a slightly lower write throughput because it has to clean up secondary indexes using point lookups.)
Running Phase. During the running phase, we used constant data arrivals to evaluate write stalls. The instantaneous write throughput and percentile write latencies for the lazy and eager strategies are shown in Figures 25 and 26, respectively. The lazy strategy exhibits a relatively stable write throughput (Figure 25a) and lower write latencies (Figure 25b), which is similar to the single LSM-tree case. However, under the eager strategy, there are regular fluctuations in the write throughput (Figure 26a), resulting in larger write latencies (Figure 26b). This is because the write throughput of the eager strategy is bounded by point lookups in this experiment, and the point lookup throughput inherently varies due to ongoing disk activities and the number of disk components. Based on queuing theory [33], the system utilization must be reduced to minimize the write latency. Moreover, the greedy scheduler still yields lower write latencies because it minimizes the number of disk components, which improves point lookup performance.
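As a rough illustration only (reference [33] may use a different model), an M/M/1 approximation with arrival rate λ, service rate μ and utilization ρ = λ/μ gives a mean waiting time of ρ / (μ(1 − ρ)), which grows without bound as ρ approaches 1; this is why the utilization, i.e., the data arrival rate relative to the (variable) point lookup throughput, must be reduced to keep write latencies low.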
Since the eager strategy results in large percentile write latencies under a high data arrival rate, we further carried out another experiment to evaluate the percentile write latencies under different system utilizations, that is, different data arrival rates. The resulting 99th percentile write latencies under various utilizations are shown in Figure 27. As the results show, the write latency becomes relatively small once the utilization is below 80%. This is much lower than the utilization used in our previous experiments, which was 95%. This result also confirms that, because of the inherent variance of the point lookup throughput, one must reduce the data arrival rate, that is, the system utilization, to achieve smaller write latencies.
Secondary Index Queries. Finally, we evaluated the impact of different merge schedulers and maintenance strategies on the performance of secondary index queries. We used 8 query threads to maximize query throughput. Each secondary index query first scans the secondary index to fetch primary keys, which are then sorted and used to fetch records from the primary index. We varied the query selectivity from 1 record to 1000 records so that the performance bottleneck eventually shifts from secondary index scans to primary index lookups.
The instantaneous query throughput for various query selectivities under the lazy and eager strategies is shown in Figures 28 and 29, respectively. The query throughput is averaged over 30-second windows. In general, the greedy scheduler improves secondary index query performance under all query selectivities since it reduces the number of disk components for both the primary index and the secondary indexes. The improvement is less significant under the eager strategy since its arrival rate is lower.
To summarize, under the lazy strategy, an LSM-based dataset with multiple secondary indexes has similar performance characteristics to the single LSM-tree case, because it can be viewed as a simple extension to multiple LSM-trees. The greedy scheduler also improves query performance by minimizing the number of disk components, as before. However, under the eager strategy, the point lookups actually become the ingestion bottleneck instead of the LSM-tree write operations. This not only reduces the overall write throughput, but also causes larger write latencies due to the inherent variance of the point lookup throughput.
LESSONS AND INSIGHTS
Having studied and evaluated the write stall problem for various LSM-tree designs, here we summarize the lessons and insights observed from our evaluation.
The LSM-tree's write latency must be measured properly. The out-of-place update nature of LSM-trees has introduced the write stall problem. Throughout our evaluation, we have seen cases where one can obtain a higher but unsustainable write throughput. For example, the greedy scheduler would report a higher write throughput by starving large merges, and LevelDB's merge scheduler would report a higher but unsustainable write throughput by dynamically adjusting the shape of the LSM-tree. Based on our findings, we argue that in addition to the testing phase, used by existing LSM research, an extra running phase must be performed to evaluate the usability of the measured maximum write throughput. Moreover, the write latency must be measured properly due to queuing. One solution is to use the proposed two-phase evaluation approach to evaluate the resulting write latencies under high utilization, where the arrival rate is close to the processing rate.
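A minimal sketch of such a running-phase measurement under an open system model is given below; the db.put call, the key/value stream and the 95% utilization target are illustrative assumptions, not the API of a specific system:

import time

def running_phase(db, kv_stream, max_throughput, utilization=0.95):
    # Issue writes at a constant arrival rate derived from the testing phase and
    # measure each write's latency from its scheduled arrival time, so queuing
    # delays caused by write stalls are included.
    interval = 1.0 / (max_throughput * utilization)
    latencies = []
    arrival = time.monotonic()
    for key, value in kv_stream:
        arrival += interval
        now = time.monotonic()
        if now < arrival:
            time.sleep(arrival - now)
        db.put(key, value)
        latencies.append(time.monotonic() - arrival)
    if not latencies:
        return 0.0
    latencies.sort()
    return latencies[int(0.99 * len(latencies))]   # 99th percentile write latency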
Merge scheduling is critical to minimizing write stalls. Throughout our evaluation of various LSM-tree designs, including bLSM [51], full merges, and partitioned merges, we have seen that merge scheduling has a critical impact on write stalls. Comparing these LSM-tree designs in general depends on many factors and is beyond the scope of this paper; here we have focused on how to minimize write stalls for each LSM-tree design.
bLSM [51], an instance of full merges, introduces a sophisticated spring-and-gear merge scheduler to bound the processing latency of LSM-trees. However, we found that bLSM still has large variances in its processing rate, leading to large write latencies under high arrival rates. Among the three evaluated schedulers, namely single-threaded, fair, and greedy, the single-threaded scheduler should not be used in practical systems due to the long stalls caused by large merges. The fair scheduler should be used when measuring the maximum throughput because it provides fairness to all merges. The greedy scheduler should be used at runtime since it better minimizes the number of disk components, both reducing write stalls and improving query performance. Moreover, as an important design choice, global component constraints better minimize write stalls.
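For concreteness, here is a minimal sketch of a greedy scheduler under the assumption (ours, for illustration) that the full I/O bandwidth budget is always given to the pending merge with the fewest remaining bytes, so that merges complete as early as possible and the component count drops quickly:

import heapq

def greedy_completion_times(merge_sizes_bytes, bandwidth_bytes_per_s):
    # merge_sizes_bytes: remaining bytes of each pending merge operation.
    # Run the smallest pending merge to completion at full bandwidth, repeat.
    heap = list(merge_sizes_bytes)
    heapq.heapify(heap)
    t, completions = 0.0, []
    while heap:
        t += heapq.heappop(heap) / bandwidth_bytes_per_s
        completions.append(t)
    return completions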
Partitioned merges simplify merge scheduling by breaking large merges into many smaller ones. However, we found a new problem: the measured maximum write throughput of LevelDB is unsustainable because LevelDB dynamically adjusts its size ratios under write-intensive workloads. After fixing this problem, a single-threaded scheduler with a small partition size, as used by LevelDB, is sufficient for delivering low write latencies under high utilization. However, fixing this problem reduced the maximum write throughput of LevelDB by roughly one-third in our evaluation.
For both full and partitioned merges, processing writes as quickly as possible better minimizes write latencies. Finally, with proper merge scheduling, all LSM-tree designs can indeed minimize write stalls by delivering low write latencies under high utilizations.
CONCLUSION
In this paper, we have studied and evaluated the write stall problem for various LSM-tree designs. We first proposed a two-phase approach to use in evaluating the impact of write stalls on percentile write latencies using a combination of closed and open system testing models. We then identified and explored the design choices for LSM merge schedulers. For full merges, we proposed a greedy scheduler that minimizes write stalls. For partitioned merges, we found that a single-threaded scheduler is sufficient to provide a stable write throughput but that the maximum write throughput must be measured properly. Based on these findings, we have shown that performance variance must be considered together with write throughput to ensure the actual usability of the measured throughput.
Proof. Given an LSM-tree, consider two merge schedulers S and S' which differ only in that S may add arbitrary delays to writes to avoid write stalls while S' processes writes as quickly as possible. Denote the total number of writes processed by S and S' at time instant T as W_T and W'_T, respectively. Since S' processes writes as quickly as possible, we have W_T ≤ W'_T. In other words, given the same number of writes, S' processes these writes no later than S.
Consider the i-th write request, which arrives at time instant Ta_i. Suppose this write is processed by S and S' at time instants Tp_i and T'p_i, respectively. Based on the analysis above, it is straightforward that Tp_i ≥ T'p_i. Thus, we have Tp_i − Ta_i ≥ T'p_i − Ta_i, which implies that S' minimizes the latency of each write.
Theorem 2. Given any set of merge operations that process the same number of disk components and any I/O bandwidth budget, the greedy scheduler minimizes the number of disk components at any time instant.
Proof. Let S be an arbitrary merge scheduler and S' be the greedy scheduler. Suppose there are N merge operations in total and the initial time instant is t0. Denote by t_i and t'_i the time instants when S and S' complete their i-th merge operations, respectively. Since all merge operations always process the same number of disk components, we only need to show that for any i ∈ [1, N], t_i ≥ t'_i always holds. In other words, S' completes each merge operation no later than S.
Suppose there exists i ∈ [1, N] s.t. t_i < t'_i. Denote by |S≤i| and |S'≤i| the total number of bytes read and written by S and S' up to the completion of their i-th merge operations. By the definition of the greedy scheduler S', we have |S≤i| ≥ |S'≤i|. Since t_i < t'_i, we further have
|S≤i| / (t_i − t0) > |S'≤i| / (t'_i − t0).
This implies that the merge scheduler S requires a larger I/O bandwidth budget than S', which leads to a contradiction. Thus, for any i ∈ [1, N], t'_i ≤ t_i always holds, which proves that S' minimizes the number of disk components over time.
Theorem 3. Given any I/O bandwidth budget, no merge scheduler can minimize the number of disk components at any time instant for any data arrival process and any LSM-tree under a deterministic merge policy where all merge operations process the same number of disk components.
Proof. In this proof, we will construct an example showing that no such merge scheduler can be designed. Consider a two-level LSM-tree with a tiering merge policy. The size ratio of this merge policy is set at 2. Suppose Level 1, which is the last level, contains three disk components D1, D2, D3 and Level 0 contains two disk components, D4 and D5. For simplicity, assume that no more writes will arrive. Initially, the merge policy creates two merge operations, namely the merge operation M1−2 that processes D1 and D2 and the merge operation M4−5 that processes D4 and D5. Upon the completion of M1−2, which produces a new disk component D1−2, the merge policy will create a new merge operation M1−3 that processes D1−2 and D3. We further denote the amount of I/O bandwidth required by each merge operation M1−2, M4−5, and M1−3 as |M1−2|, |M4−5|, and |M1−3|. Finally, we assume that |M1−3| < |M4−5| < |M1−2|. This can happen if D2 contains a large number of deleted keys against D1 so that the merged disk component D1−2 is very small.
Suppose that the initial time instant is t0 and let the given I/O bandwidth budget be B. Consider a merge scheduler S that first executes M4−5 and then M1−2: S completes M4−5 at time instant t1 = t0 + |M4−5|/B and completes M1−2 at time instant t2 = t1 + |M1−2|/B. Consider another merge scheduler S' that first executes M1−2 and then M1−3: S' completes M1−2 at time instant t'1 = t0 + |M1−2|/B and completes M1−3 at time instant t'2 = t'1 + |M1−3|/B. Based on the assumption |M1−3| < |M4−5| < |M1−2|, it follows that t1 < t'1 and t'2 < t2. Suppose there exists a merge scheduler S* that minimizes the number of disk components over time. Then, S* must satisfy the following two constraints: (1) complete one merge operation no later than t1; (2) complete two merge operations no later than t'2.
To satisfy constraint (1), S* must execute M4−5 first. Then, S* must complete its second merge operation within the time interval t'2 − t1 = (|M1−2| + |M1−3| − |M4−5|)/B. Thus, S* cannot satisfy constraint (2) by completing the second merge operation no later than t'2, because the only remaining merge operation M1−2 takes time |M1−2|/B to finish, which is longer than this interval since |M1−3| < |M4−5|. This contradicts the assumption that S* minimizes the number of disk components over time. Thus, we have constructed an example for which no such merge scheduler can be designed, which proves the theorem.
| 12,298 |
1906.09715
|
2952773240
|
The widespread adoption of Internet of Things has led to many security issues. Post the Mirai-based DDoS attack in 2016 which compromised IoT devices, a host of new malware using Mirai's leaked source code and targeting IoT devices have cropped up, e.g. Satori, Reaper, Amnesia, Masuta etc. These malware exploit software vulnerabilities to infect IoT devices instead of open TELNET ports (like Mirai) making them more difficult to block using existing solutions such as firewalls. In this research, we present EDIMA, a distributed modular solution which can be used towards the detection of IoT malware network activity in large-scale networks (e.g. ISP, enterprise networks) during the scanning infecting phase rather than during an attack. EDIMA employs machine learning algorithms for edge devices' traffic classification, a packet traffic feature vector database, a policy module and an optional packet sub-sampling module. We evaluate the classification performance of EDIMA through testbed experiments and present the results obtained.
|
There are several works in the literature on detecting PC-based botnets using their CnC (Command-and-Control) server communication features. BotHunter @cite_24 builds an infection dialog model based on which three bot-specific sensors are constructed, and correlation is performed between inbound intrusion scan alarms and the infection dialog model to generate a consolidated report. Spatio-temporal similarities between bots in a botnet, in terms of coordinated bot-CnC activities, are captured from network traffic and leveraged towards botnet detection in a local area network in BotSniffer @cite_7 . In BotMiner @cite_17 , the authors have proposed a botnet detection system which clusters similar CnC communication traffic and similar malicious activity traffic, and uses cross-cluster correlation to detect bots in a monitored network.
|
{
"abstract": [
"We present a new kind of network perimeter monitoring strategy, which focuses on recognizing the infection and coordination dialog that occurs during a successful malware infection. BotHunter is an application designed to track the two-way communication flows between internal assets and external entities, developing an evidence trail of data exchanges that match a state-based infection sequence model. BotHunter consists of a correlation engine that is driven by three malware-focused network packet sensors, each charged with detecting specific stages of the malware infection process, including inbound scanning, exploit usage, egg downloading, outbound bot coordination dialog, and outbound attack propagation. The BotHunter correlator then ties together the dialog trail of inbound intrusion alarms with those outbound communication patterns that are highly indicative of successful local host infection. When a sequence of evidence is found to match BotHunter's infection dialog model, a consolidated report is produced to capture all the relevant events and event sources that played a role during the infection process. We refer to this analytical strategy of matching the dialog flows between internal assets and the broader Internet as dialog-based correlation, and contrast this strategy to other intrusion detection and alert correlation methods. We present our experimental results using BotHunter in both virtual and live testing environments, and discuss our Internet release of the BotHunter prototype. BotHunter is made available both for operational use and to help stimulate research in understanding the life cycle of malware infections.",
"Botnets are now recognized as one of the most serious security threats. In contrast to previous malware, botnets have the characteristic of a command and control (C&C) channel. Botnets also often use existing common protocols, e.g., IRC, HTTP, and in protocol-conforming manners. This makes the detection of botnet C&C a challenging problem. In this paper, we propose an approach that uses network-based anomaly detection to identify botnet C&C channels in a local area network without any prior knowledge of signatures or C&C server addresses. This detection approach can identify both the C&C servers and infected hosts in the network. Our approach is based on the observation that, because of the pre-programmed activities related to C&C, bots within the same botnet will likely demonstrate spatial-temporal correlation and similarity. For example, they engage in coordinated communication, propagation, and attack and fraudulent activities. Our prototype system, BotSniffer, can capture this spatial-temporal correlation in network traffic and utilize statistical algorithms to detect botnets with theoretical bounds on the false positive and false negative rates. We evaluated BotSniffer using many real-world network traces. The results show that BotSniffer can detect real-world botnets with high accuracy and has a very low false positive rate.",
"Botnets are now the key platform for many Internet attacks, such as spam, distributed denial-of-service (DDoS), identity theft, and phishing. Most of the current botnet detection approaches work only on specific botnet command and control (C&C) protocols (e.g., IRC) and structures (e.g., centralized), and can become ineffective as botnets change their C&C techniques. In this paper, we present a general detection framework that is independent of botnet C&C protocol and structure, and requires no a priori knowledge of botnets (such as captured bot binaries and hence the botnet signatures, and C&C server names addresses). We start from the definition and essential properties of botnets. We define a botnet as a coordinated group of malware instances that are controlled via C&C communication channels. The essential properties of a botnet are that the bots communicate with some C&C servers peers, perform malicious activities, and do so in a similar or correlated way. Accordingly, our detection framework clusters similar communication traffic and similar malicious traffic, and performs cross cluster correlation to identify the hosts that share both similar communication patterns and similar malicious activity patterns. These hosts are thus bots in the monitored network. We have implemented our BotMiner prototype system and evaluated it using many real network traces. The results show that it can detect real-world botnets (IRC-based, HTTP-based, and P2P botnets including Nugache and Storm worm), and has a very low false positive rate."
],
"cite_N": [
"@cite_24",
"@cite_7",
"@cite_17"
],
"mid": [
"191098608",
"1583098994",
"1775772884"
]
}
|
EDIMA: Early Detection of IoT Malware Network Activity Using Machine Learning Techniques
|
The Internet of Things (IoT) [1] is a network of sensing devices with limited resources and capable of wired/wireless communications with cloud services. IoT devices are being increasingly targeted by attackers using malware as they are easier to infect than conventional computers. This is due to several reasons [2] such as presence of legacy devices with no security updates, low priority given to security within the development cycle, weak login credentials, etc.
In a widely publicized attack, the IoT malware Mirai was used to propagate the biggest DDoS (Distributed Denial-of-Service) attack on record on October 21, 2016. The attack targeted the Dyn DNS (Domain Name Service) servers [3] and generated an attack throughput of the order of 1.2 Tbps. It disabled major internet services such as Amazon, Twitter and Netflix. The attackers had infected IoT devices such as IP cameras and DVR recorders with Mirai, thereby creating an army of bots (botnet) to take part in the DDoS attack.
The source code for Mirai was leaked in 2017 and since then there has been a proliferation of IoT malware. Script "kiddies" as well as professional blackhat/greyhat hackers have used the leaked source code to build their own IoT malware. These malware are usually variants of Mirai using a similar brute force technique of scanning random IP addresses for open TELNET ports and attempting to login using a built-in dictionary of commonly used credentials (Remaiten, Hajime), or more sophisticated malware that exploit software vulnerabilities to execute remote command injections on vulnerable devices (Reaper, Satori, Masuta, Linux.Darlloz, Amnesia etc.). Even though TELNET port scanning can be countered by deploying firewalls (at the user access gateway) which block incoming/outgoing TELNET traffic, malware exploiting software vulnerabilities involving application protocols such as HTTP, SOAP, PHP etc. are more difficult to block using firewalls because those application protocols form a part of legitimate traffic as well.
Bots compromised by Mirai or similar IoT malware can be used for DDoS attacks, phishing and spamming [4]. These attacks can cause network downtime for long periods which may lead to financial loss to network companies, and leak users' confidential data. Bitdefender mentioned in its blog in September 2017 [5] that researchers had estimated that at least 100,000 devices infected by Mirai or similar malware were revealed daily through TELNET scanning telemetry data. In an October 2017 article [6], Arbor researchers estimated that the actual size of the Reaper botnet fluctuated between 10,000-20,000 bots but warned that this number could change at any time with an additional 2 million devices having been identified by botnet scanners as potential Reaper bots. A Kaspersky lab report [7] released in September 2018 says that 121,588 IoT malware samples were identified in the first half of 2018 which was three times the number of IoT malware samples in the whole of 2017.
Further, many of the infected devices are expected to remain infected for a long time. Therefore, there is a substantial motivation for detecting these IoT bots and taking appropriate action against them so that they are unable to cause any further damage. As pointed out in [8], attempting to ensure that all IoT devices are secure-by-construction is futile and it is practically unfeasible to deploy traditional host-based detection and prevention mechanisms such as antivirus, firewalls for IoT devices. Therefore, it becomes imperative that the security mechanisms for the IoT ecosystem are designed to be networkbased rather than host-based.
In this research, we propose a solution towards detecting the network activity of IoT malware in large-scale networks such as enterprise and ISP (Internet Service Provider) networks.
Our proposed solution consists of machine learning (ML) algorithms running at the user access gateway which detect malware activity based on their scanning traffic patterns, a database that stores the malware scanning traffic patterns and can be used to retrieve or update those patterns, and a policy module which decides the further course of action after gateway traffic has been classified as malicious. It also includes an optional packet sub-sampling module which can be deployed for example, in case of enterprises where a number of IoT devices (≈ 10-100) are connected to a single access gateway. The bot detection solution can be deployed both on physical access gateways supplied by the ISP companies or as NFV (Network Function Virtualization) functions at the customer premises/enterprise in a SDN-NFV based network architecture, where SDN stands for Software-Defined Networking.
Bots scanning for and infecting vulnerable devices are targeted in particular by our solution. This is because the scanning and propagation phase of the botnet life-cycle stretches over many months and we can detect and isolate the bots before they can participate in an actual attack such as DDoS. If the DDoS attack has already occurred (due to a botnet), detecting the attack itself is not that difficult and there are already existing methods both in literature and industry to defend against such attacks. Once the IoT bots are detected, the network operators can take suitable countermeasures such as blocking the traffic originating from IoT bots and notifying the local network administrators. The major contributions of this paper are listed below: 1) We have categorized most of the current IoT malware into a few categories to help identify similar malware and simplify the task of designing detection methods for them. 2) We have analyzed the traffic patterns for IoT malware from each category through testbed experiments and packet capture utilities. 3) We have proposed a modular solution towards detection of IoT malware activity by using ML techniques with the above traffic patterns.
III. EDIMA ARCHITECTURE
Our proposed solution towards detecting the scanning packet traffic generated by IoT malware through the use of ML algorithms is called EDIMA (Early Detection of IoT Malware Network Activity) and is shown in Fig. 1. The feature database stores feature vectors extracted from the packet captures of gateways connected to devices infected with known IoT malware as well as gateways connected to uninfected devices. The database is updated frequently for newly discovered malware. The feature vectors and corresponding class labels are retrieved by the ML model constructor for training the ML classifier for the first time and also for re-training the classifier whenever a new malware is discovered. We envisage a community of security researchers, industry personnel and users who will collect traffic data for IoT malware through honeypots, consumer access gateways etc. The feature vectors extracted from the raw traffic data samples and the class labels assigned to those samples will be updated to the online feature database. 4) Policy Module: The policy module consists of a list of policies defined by the network administrator which decide the course of action to be taken once the traffic from an access gateway has been classified as malicious by the ML classifier module. For instance, the network administrator can block the entire traffic originating from bots and bring them back online only after it is confirmed that the malware has been removed from those IoT devices. 5) Sub-sampling Module (optional): For premises having thousands of IoT devices such as enterprises, industries etc., we also propose an optional sub-sampling module as introduced in [19]. This module samples the packet traffic from IoT devices both along time as well as across devices and presents it as input to the ML classifier module. The sub-sampling module would help reduce the computational overhead of the ML classifier module by forwarding only a fraction of the incoming IoT packet traffic.
We have categorized known IoT malware into three categories based on the type of vulnerability that they target: TELNET, HTTP POST and HTTP GET. TELNET is an application-layer protocol used for bidirectional byte-oriented communication. Typically, a user with a terminal running a TELNET client program accesses a remote host running a TELNET server by requesting a connection to the remote host and logging in by providing credentials. HTTP GET and POST are methods based on the HTTP (HyperText Transfer Protocol) application-layer protocol which are used to request data from and send data to server resources, respectively. For example, HTTP GET is commonly used for requesting web pages from remote web servers through a browser. We have presented the malware categories, the various malware belonging to those categories and brief descriptions of their operation in Table I.
B. ML Classification
The classification is performed on IoT access gateway-level traffic rather than device-level traffic as working on aggregate traffic is faster and reduces the memory space required. We define two classes of gateway-level traffic: benign and malicious. Benign traffic refers to gateway traffic with no malware-induced scanning packets, while malicious traffic refers to gateway traffic that includes malware-induced scanning packets from one of the three malware categories. For classification of gateway traffic, we have to first generate training data samples consisting of packet captures belonging to those classes. Benign traffic is not difficult to generate since it involves the normal operation of uninfected devices. However, malicious traffic would contain both benign traffic as well as scanning/infection packets generated by malware. To keep things simple, we chose to collect the gateway traffic statically in fixed session intervals. Further, we apply the classification algorithm on these traffic sessions rather than individual packets because per-packet classification is computationally much more costly and doesn't yield any significant benefits.
Table I. IoT malware categories, member malware and brief descriptions of their operation:
TELNET -- Mirai [20]. Hajime: Same propagation mechanism as Mirai, but no CnC server. Instead, it is built on a P2P network. Purpose seems to be to improve the security of IoT devices [21]. Remaiten: Same propagation mechanism as Mirai. Downloads a binary specific to the targeted platform. Uses the IRC protocol for CnC server communication [22]. Linux.Wifatch: Same propagation mechanism as Mirai. Apparently, it tries to secure IoT devices from other malware [23]. Brickerbot: Rewrites the device firmware, rendering the device permanently inoperable [24].
HTTP POST -- Satori: Sends a NewInternalClient request through the miniigd SOAP service (REALTEK SDK) or sends malicious packets to port 37215 (Huawei home gateway) [25]. Masuta: Forms a SOAP request which bypasses authentication and causes arbitrary code execution [26]. Linux.Darlloz: Sends HTTP POST requests using the PHP 'php-cgi' Information Disclosure Vulnerability to download the worm from a malicious server onto an unpatched device [27]. Reaper: Scans first on a list of TCP ports to fingerprint devices, then a second wave of scans on TCP ports running web services such as 80, 8080, etc., and sends an HTTP POST request for command injection [28].
HTTP GET -- Reaper: Scanning behavior similar to the above, sends an HTTP request for remote command execution, usually through CGI or PHP. Amnesia: Makes simple HTTP requests and searches for the special string "Cross Web Server" in the HTTP response from the target. If successful, sends four more HTTP requests which contain exploit payloads of four different shell commands [29].
The steps for gateway-level traffic classification are given below:
1) Filter each traffic session to include only TCP packets with SYN flag activated and destination port numbers belonging to a target list. 2) Extract the feature vectors for each traffic session.
3) Retrieve the trained classifier from the ML model constructor and apply it to the extracted feature vectors to classify the corresponding sessions. The target list of destination port numbers is made on the basis of information obtained from public malware exploits. For example, in the 'TELNET' category, the target destination port numbers are 23 and 2323. In the 'HTTP POST' category, the target destination port numbers are 37215, 80, 20736, 36895, etc. In the 'HTTP GET' category, the target destination port number is always 80.
In this work, we use a total of 4 features for ML model training and traffic classification: 1) the number of unique destination IP addresses, and 2) the number of packets per destination IP address (maximum, minimum, mean). The motivation behind selecting the first feature is that the malware generate random IP addresses and send malicious requests to them. Hence, the number of unique destination IP addresses in malware-induced scanning traffic will be far higher than in benign traffic. The second feature set seeks to exploit the fact that malware typically do not send multiple malicious packets to the same IP address (only a single packet is sent in most cases), possibly to cover as many devices as possible during the scanning/propagation phase.
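A minimal sketch of steps 1) and 2) and of the 4-dimensional feature vector is given below; it assumes each packet of a session has already been decoded into a (dst_ip, dst_port, tcp_flags) tuple, and the port set shown is only an example of a target list:

from collections import Counter

TARGET_PORTS = {23, 2323, 80, 8080, 37215, 20736, 36895}   # example target list
TCP_SYN = 0x02

def session_feature_vector(packets):
    # packets: iterable of (dst_ip, dst_port, tcp_flags) tuples for one traffic session.
    per_dst = Counter()
    for dst_ip, dst_port, flags in packets:
        if (flags & TCP_SYN) and dst_port in TARGET_PORTS:   # step 1: filter SYN packets
            per_dst[dst_ip] += 1
    if not per_dst:
        return [0, 0, 0, 0.0]
    counts = per_dst.values()                                # step 2: feature extraction
    return [len(per_dst), max(counts), min(counts), sum(counts) / len(counts)]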
One may argue that the malware author/attacker can adopt a less aggressive scanning strategy to avoid detection. The attacker will incur a cost though, in terms of the malware performance, resulting in fewer infected devices in a fixed time period. We plan to investigate this malware performance-scanning behavior trade-off by formulating an optimization problem in the future. For now, the duration of traffic sessions collected for training/classification can be increased to counter any decrease in scanning rates by the attacker.
V. PERFORMANCE EVALUATION A. Testbed Description
We built a testbed with IoT devices, a laptop PC, an Android smartphone and a wireless access gateway to collect ingress/egress traffic at the gateway, which would form a part of the training data used to train the ML algorithms to be deployed in the ML Classifier module. The IoT devices were: Philips Hue bridge, D-Link DCS-930L Wi-Fi IP camera and TP-Link HS110 Smart Wi-Fi Plug. The laptop PC has an Intel Core i3-5020U 2.2 GHz processor with 4GB RAM and runs Windows 10 OS. Network applications such as a web browser (accessing web pages, video streaming sites e.g. YouTube), email client, Wi-Fi camera online platform etc. were run on the laptop PC by a user. The Android smartphone has a Cortex-A53 Octa-core 1.6 GHz processor with 3GB RAM and runs Android 8.0 OS. Again, the same user ran applications such as web browser, social media (Facebook/Twitter/LinkedIn), chat (WhatsApp), Wi-Fi plug app, Hue app etc. on the smartphone, which also ran a few other network applications in the background. The wireless access gateway was a D-Link DIR-600 router with an Atheros AR7240 350 MHz network processor, Atheros AR9285 network adapter, 32MB RAM and 4MB flash, supporting IEEE 802.11b/g/n Wi-Fi standards. The testbed is shown in Fig. 2. We used a TP-Link TL-SG108E Gigabit Ethernet switch with a port-mirroring feature to mirror the traffic from all of the above devices (IoT, laptop, smartphone) to a Raspberry Pi 3B+ Ethernet port and monitor the cumulative traffic.
B. Evaluation Methodology
As we can't use real malware due to legal and ethical considerations, we wrote scripts to simulate the generation of malicious packets based on publicly available exploits [30] for the vulnerabilities exploited by those malware. The script generates random IP addresses and sends malicious requests to them in order to execute remote command injection attacks. The injected commands were non-malicious (e.g., ls -l, uname -a), thus causing no actual harm to any device in the network even if it was vulnerable. The scanning/infection rates in our scripts were designed keeping in mind the scanning/infection behavior reported online and the Mirai source code, which is the basis for most of the current IoT malware. We selected one malware per category for our performance evaluation since the malware in each category have similar scanning/infection behavior.
A total of 60 traffic sessions of 15 minutes duration each were collected for both benign and malicious classes through our testbed. The traffic sessions collected for each case were divided into two sets: training and test data using a 70:30 split. For the training data, the class labels were assigned to each feature vector extracted from the traffic sessions included in the training data.
C. Results
The distributions of the feature values for the benign and malicious training data, where the malware belongs to the TELNET, HTTP POST and HTTP GET categories, are shown in Fig. 3. The scikit-learn ML algorithms library [31] was used for training and classification purposes. We trained Gaussian Naive Bayes, k-NN (k-Nearest Neighbor) and Random Forest algorithms with our training data and evaluated the trained ML models with test data for all three malware categories. The classification accuracy, precision, recall and F1 scores obtained for the above three classification algorithms are shown in Table II.
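For reference, a minimal sketch of this training and evaluation step with scikit-learn is shown below, assuming the feature vectors and string labels ('benign'/'malicious') have already been split into training and test sets; the hyperparameters are left at their defaults and are not necessarily the ones used in our experiments:

from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def train_and_evaluate(X_train, y_train, X_test, y_test):
    models = {"Gaussian Naive Bayes": GaussianNB(),
              "k-NN": KNeighborsClassifier(),
              "Random Forest": RandomForestClassifier()}
    for name, clf in models.items():
        clf.fit(X_train, y_train)                       # train on labeled sessions
        y_pred = clf.predict(X_test)                    # classify test sessions
        acc = accuracy_score(y_test, y_pred)
        prec, rec, f1, _ = precision_recall_fscore_support(
            y_test, y_pred, average="binary", pos_label="malicious")
        print(name, acc, prec, rec, f1)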
The classification accuracy refers to the fraction of the total number of input samples whose labels are correctly predicted by a classifier. The precision is the ratio TP / (TP + FP), where TP is the number of true positives and FP is the number of false positives. It represents the ability of a classifier to avoid labeling samples that are negative as positive. The recall is the ratio TP / (TP + FN), where TP is the number of true positives and FN is the number of false negatives. It represents the ability of a classifier to avoid labeling samples that are positive as negative. The F1 score is the harmonic mean of precision and recall, expressed as 2 × (precision × recall) / (precision + recall). It represents the balance between precision and recall offered by a classifier. The scores in Table II show that the k-NN classifier performs the best, followed by the Random Forest classifier and the Gaussian Naive Bayes classifier.
VI. CONCLUSION
In this paper, we proposed EDIMA, a modular solution for early detection of network activity originating from IoT malware using ML classification techniques. Existing IoT malware were distributed among multiple categories based on their targeted software vulnerabilities. Then, the steps for the ML classifier operation and the features used for classification were listed. A testbed consisting of a PC, a smartphone and IoT devices connected to an access gateway was used to evaluate the classification performance of EDIMA. Using packet traffic captures at the access gateway level, feature vectors were extracted with class labels (benign or malicious) assigned to them. Subsequently, we depicted the distribution of benign and malicious traffic feature vectors for different malware categories. A proportion of the extracted feature vectors were used as training data to train a few standard ML algorithms, and the ML models thus obtained were applied to test data with their classification scores reported. As part of our future work, we are working on a software-based implementation of EDIMA and its performance evaluation. We are also planning to adapt some state-of-the-art botnet detection techniques using bot-CnC communication features and ML algorithms for malware activity detection and compare their performance with EDIMA.
| 2,971 |
1906.09715
|
2952773240
|
The widespread adoption of Internet of Things has led to many security issues. Post the Mirai-based DDoS attack in 2016 which compromised IoT devices, a host of new malware using Mirai's leaked source code and targeting IoT devices have cropped up, e.g. Satori, Reaper, Amnesia, Masuta etc. These malware exploit software vulnerabilities to infect IoT devices instead of open TELNET ports (like Mirai) making them more difficult to block using existing solutions such as firewalls. In this research, we present EDIMA, a distributed modular solution which can be used towards the detection of IoT malware network activity in large-scale networks (e.g. ISP, enterprise networks) during the scanning infecting phase rather than during an attack. EDIMA employs machine learning algorithms for edge devices' traffic classification, a packet traffic feature vector database, a policy module and an optional packet sub-sampling module. We evaluate the classification performance of EDIMA through testbed experiments and present the results obtained.
|
There has also been some research on intrusion detection and anomaly detection systems for IoT. A whitelist-based intrusion detection system for IoT devices (Heimdall) has been presented in @cite_2 . The authors in @cite_3 propose an intrusion detection model for IoT backbone networks leveraging two-layer dimension reduction and two-tier classification techniques to detect U2R (User-to-Root) and R2L (Remote-to-Local) attacks.
|
{
"abstract": [
"With increasing reliance on Internet of Things (IoT) devices and services, the capability to detect intrusions and malicious activities within IoT networks is critical for resilience of the network infrastructure. In this paper, we present a novel model for intrusion detection based on two-layer dimension reduction and two-tier classification module, designed to detect malicious activities such as User to Root (U2R) and Remote to Local (R2L) attacks. The proposed model is using component analysis and linear discriminate analysis of dimension reduction module to spate the high dimensional dataset to a lower one with lesser features. We then apply a two-tier classification module utilizing Naive Bayes and Certainty Factor version of K-Nearest Neighbor to identify suspicious behaviors. The experiment results using NSL-KDD dataset shows that our model outperforms previous models designed to detect U2R and R2L attacks.",
"The Internet of Things (IoT) is built of many small smart objects continuously connected to the Internet. This makes these devices an easy target for attacks exploiting vulnerabilities at the network, application, and mobile level. With that it comes as no surprise that distributed denial of service attacks leveraging these vulnerable devices have become a new standard for effective botnets. In this paper, we propose Heimdall, a whitelist-based intrusion detection technique tailored to IoT devices. Heimdall operates on routers acting as gateways for IoT as a homogeneous defense for all devices behind the router. Our experimental results show that our defense mechanism is effective and has minimal overhead."
],
"cite_N": [
"@cite_3",
"@cite_2"
],
"mid": [
"2557450880",
"2614230424"
]
}
|
EDIMA: Early Detection of IoT Malware Network Activity Using Machine Learning Techniques
|
The Internet of Things (IoT) [1] is a network of sensing devices with limited resources and capable of wired/wireless communications with cloud services. IoT devices are being increasingly targeted by attackers using malware as they are easier to infect than conventional computers. This is due to several reasons [2] such as presence of legacy devices with no security updates, low priority given to security within the development cycle, weak login credentials, etc.
In a widely publicized attack, the IoT malware Mirai was used to propagate the biggest DDoS (Distributed Denial-of-Service) attack on record on October 21, 2016. The attack targeted the Dyn DNS (Domain Name Service) servers [3] and generated an attack throughput of the order of 1.2 Tbps. It disabled major internet services such as Amazon, Twitter and Netflix. The attackers had infected IoT devices such as IP cameras and DVR recorders with Mirai, thereby creating an army of bots (botnet) to take part in the DDoS attack.
The source code for Mirai was leaked in 2017 and since then there has been a proliferation of IoT malware. Script "kiddies" as well as professional blackhat/greyhat hackers have used the leaked source code to build their own IoT malware. These malware are usually variants of Mirai using a similar brute force technique of scanning random IP addresses for open TELNET ports and attempting to login using a built-in dictionary of commonly used credentials (Remaiten, Hajime), or more sophisticated malware that exploit software vulnerabilities to execute remote command injections on vulnerable devices (Reaper, Satori, Masuta, Linux.Darlloz, Amnesia etc.). Even though TELNET port scanning can be countered by deploying firewalls (at the user access gateway) which block incoming/outgoing TELNET traffic, malware exploiting software vulnerabilities involving application protocols such as HTTP, SOAP, PHP etc. are more difficult to block using firewalls because those application protocols form a part of legitimate traffic as well.
Bots compromised by Mirai or similar IoT malware can be used for DDoS attacks, phishing and spamming [4]. These attacks can cause network downtime for long periods which may lead to financial loss to network companies, and leak users' confidential data. Bitdefender mentioned in its blog in September 2017 [5] that researchers had estimated that at least 100,000 devices infected by Mirai or similar malware were revealed daily through TELNET scanning telemetry data. In an October 2017 article [6], Arbor researchers estimated that the actual size of the Reaper botnet fluctuated between 10,000-20,000 bots but warned that this number could change at any time with an additional 2 million devices having been identified by botnet scanners as potential Reaper bots. A Kaspersky lab report [7] released in September 2018 says that 121,588 IoT malware samples were identified in the first half of 2018 which was three times the number of IoT malware samples in the whole of 2017.
Further, many of the infected devices are expected to remain infected for a long time. Therefore, there is a substantial motivation for detecting these IoT bots and taking appropriate action against them so that they are unable to cause any further damage. As pointed out in [8], attempting to ensure that all IoT devices are secure-by-construction is futile and it is practically unfeasible to deploy traditional host-based detection and prevention mechanisms such as antivirus, firewalls for IoT devices. Therefore, it becomes imperative that the security mechanisms for the IoT ecosystem are designed to be networkbased rather than host-based.
In this research, we propose a solution towards detecting the network activity of IoT malware in large-scale networks such as enterprise and ISP (Internet Service Provider) networks.
Our proposed solution consists of machine learning (ML) algorithms running at the user access gateway which detect malware activity based on their scanning traffic patterns, a database that stores the malware scanning traffic patterns and can be used to retrieve or update those patterns, and a policy module which decides the further course of action after gateway traffic has been classified as malicious. It also includes an optional packet sub-sampling module which can be deployed for example, in case of enterprises where a number of IoT devices (≈ 10-100) are connected to a single access gateway. The bot detection solution can be deployed both on physical access gateways supplied by the ISP companies or as NFV (Network Function Virtualization) functions at the customer premises/enterprise in a SDN-NFV based network architecture, where SDN stands for Software-Defined Networking.
Bots scanning for and infecting vulnerable devices are targeted in particular by our solution. This is because the scanning and propagation phase of the botnet life-cycle stretches over many months and we can detect and isolate the bots before they can participate in an actual attack such as DDoS. If the DDoS attack has already occurred (due to a botnet), detecting the attack itself is not that difficult and there are already existing methods both in literature and industry to defend against such attacks. Once the IoT bots are detected, the network operators can take suitable countermeasures such as blocking the traffic originating from IoT bots and notifying the local network administrators. The major contributions of this paper are listed below: 1) We have categorized most of the current IoT malware into a few categories to help identify similar malware and simplify the task of designing detection methods for them. 2) We have analyzed the traffic patterns for IoT malware from each category through testbed experiments and packet capture utilities. 3) We have proposed a modular solution towards detection of IoT malware activity by using ML techniques with the above traffic patterns.
III. EDIMA ARCHITECTURE
Our proposed solution towards detecting the scanning packet traffic generated by IoT malware through the use of ML algorithms is called EDIMA (Early Detection of IoT Malware Network Activity) and is shown in Fig. 1. The feature database stores feature vectors extracted from the packet captures of gateways connected to devices infected with known IoT malware as well as gateways connected to uninfected devices. The database is updated frequently for newly discovered malware. The feature vectors and corresponding class labels are retrieved by the ML model constructor for training the ML classifier for the first time and also for re-training the classifier whenever a new malware is discovered. We envisage a community of security researchers, industry personnel and users who will collect traffic data for IoT malware through honeypots, consumer access gateways etc. The feature vectors extracted from the raw traffic data samples and the class labels assigned to those samples will be updated to the online feature database. 4) Policy Module: The policy module consists of a list of policies defined by the network administrator which decide the course of action to be taken once the traffic from an access gateway has been classified as malicious by the ML classifier module. For instance, the network administrator can block the entire traffic originating from bots and bring them back online only after it is confirmed that the malware has been removed from those IoT devices. 5) Sub-sampling Module (optional): For premises having thousands of IoT devices such as enterprises, industries etc., we also propose an optional sub-sampling module as introduced in [19]. This module samples the packet traffic from IoT devices both along time as well as across devices and presents it as input to the ML classifier module. The sub-sampling module would help reduce the computational overhead of the ML classifier module by forwarding only a fraction of the incoming IoT packet traffic.
We have categorized known IoT malware into three categories based on the type of vulnerability that they target: TELNET, HTTP POST and HTTP GET. TELNET is an application-layer protocol used for bidirectional byte-oriented communication. Typically, a user with a terminal running a TELNET client program accesses a remote host running a TELNET server by requesting a connection to the remote host and logging in by providing credentials. HTTP GET and POST are methods based on the HTTP (HyperText Transfer Protocol) application-layer protocol which are used to request data from and send data to server resources, respectively. For example, HTTP GET is commonly used for requesting web pages from remote web servers through a browser. We have presented the malware categories, the various malware belonging to those categories and brief descriptions of their operation in Table I.
B. ML Classification
The classification is performed on IoT access gateway-level traffic rather than device-level traffic as working on aggregate traffic is faster and reduces the memory space required. We define two classes of gateway-level traffic: benign and malicious. Benign traffic refers to gateway traffic with no malware-induced scanning packets, while malicious traffic refers to gateway traffic that includes malware-induced scanning packets from one of the three malware categories. For classification of gateway traffic, we have to first generate training data samples consisting of packet captures belonging to those classes. Benign traffic is not difficult to generate since it involves the normal operation of uninfected devices. However, malicious traffic would contain both benign traffic as well as scanning/infection packets generated by malware. To keep things simple, we chose to collect the gateway traffic statically in fixed session intervals. Further, we apply the classification algorithm on these traffic sessions rather than individual packets because per-packet classification is computationally much more costly and doesn't yield any significant benefits.
Table I. IoT malware categories, member malware and brief descriptions of their operation:
TELNET -- Mirai [20]. Hajime: Same propagation mechanism as Mirai, but no CnC server. Instead, it is built on a P2P network. Purpose seems to be to improve the security of IoT devices [21]. Remaiten: Same propagation mechanism as Mirai. Downloads a binary specific to the targeted platform. Uses the IRC protocol for CnC server communication [22]. Linux.Wifatch: Same propagation mechanism as Mirai. Apparently, it tries to secure IoT devices from other malware [23]. Brickerbot: Rewrites the device firmware, rendering the device permanently inoperable [24].
HTTP POST -- Satori: Sends a NewInternalClient request through the miniigd SOAP service (REALTEK SDK) or sends malicious packets to port 37215 (Huawei home gateway) [25]. Masuta: Forms a SOAP request which bypasses authentication and causes arbitrary code execution [26]. Linux.Darlloz: Sends HTTP POST requests using the PHP 'php-cgi' Information Disclosure Vulnerability to download the worm from a malicious server onto an unpatched device [27]. Reaper: Scans first on a list of TCP ports to fingerprint devices, then a second wave of scans on TCP ports running web services such as 80, 8080, etc., and sends an HTTP POST request for command injection [28].
HTTP GET -- Reaper: Scanning behavior similar to the above, sends an HTTP request for remote command execution, usually through CGI or PHP. Amnesia: Makes simple HTTP requests and searches for the special string "Cross Web Server" in the HTTP response from the target. If successful, sends four more HTTP requests which contain exploit payloads of four different shell commands [29].
The steps for gateway-level traffic classification are given below:
1) Filter each traffic session to include only TCP packets with SYN flag activated and destination port numbers belonging to a target list. 2) Extract the feature vectors for each traffic session.
3) Retrieve the trained classifier from the ML model constructor and apply it to the extracted feature vectors to classify the corresponding sessions. The target list of destination port numbers is made on the basis of information obtained from public malware exploits. For example, in the 'TELNET' category, the target destination port numbers are 23 and 2323. In the 'HTTP POST' category, the target destination port numbers are 37215, 80, 20736, 36895, etc. In the 'HTTP GET' category, the target destination port number is always 80.
In this work, we use a total of 4 features for ML model training and traffic classification: 1) the number of unique destination IP addresses, and 2) the number of packets per destination IP address (maximum, minimum, mean). The motivation behind selecting the first feature is that the malware generate random IP addresses and send malicious requests to them. Hence, the number of unique destination IP addresses in malware-induced scanning traffic will be far higher than in benign traffic. The second feature set seeks to exploit the fact that malware typically do not send multiple malicious packets to the same IP address (only a single packet is sent in most cases), possibly to cover as many devices as possible during the scanning/propagation phase.
One may argue that the malware author/attacker can adopt a less aggressive scanning strategy to avoid detection. The attacker will incur a cost though, in terms of the malware performance, resulting in fewer infected devices in a fixed time period. We plan to investigate this malware performance-scanning behavior trade-off by formulating an optimization problem in the future. For now, the duration of traffic sessions collected for training/classification can be increased to counter any decrease in scanning rates by the attacker.
V. PERFORMANCE EVALUATION A. Testbed Description
We built a testbed with IoT devices, a laptop PC, an Android smartphone and a wireless access gateway to collect ingress/egress traffic at the gateway, which would form a part of the training data used to train the ML algorithms to be deployed in the ML Classifier module. The IoT devices were: Philips Hue bridge, D-Link DCS-930L Wi-Fi IP camera and TP-Link HS110 Smart Wi-Fi Plug. The laptop PC has an Intel Core i3-5020U 2.2 GHz processor with 4GB RAM and runs Windows 10 OS. Network applications such as a web browser (accessing web pages, video streaming sites e.g. YouTube), email client, Wi-Fi camera online platform etc. were run on the laptop PC by a user. The Android smartphone has a Cortex-A53 Octa-core 1.6 GHz processor with 3GB RAM and runs Android 8.0 OS. Again, the same user ran applications such as web browser, social media (Facebook/Twitter/LinkedIn), chat (WhatsApp), Wi-Fi plug app, Hue app etc. on the smartphone, which also ran a few other network applications in the background. The wireless access gateway was a D-Link DIR-600 router with an Atheros AR7240 350 MHz network processor, Atheros AR9285 network adapter, 32MB RAM and 4MB flash, supporting IEEE 802.11b/g/n Wi-Fi standards. The testbed is shown in Fig. 2. We used a TP-Link TL-SG108E Gigabit Ethernet switch with a port-mirroring feature to mirror the traffic from all of the above devices (IoT, laptop, smartphone) to a Raspberry Pi 3B+ Ethernet port and monitor the cumulative traffic.
B. Evaluation Methodology
As we can't use real malware due to legal and ethical considerations, we wrote scripts to simulate the generation of malicious packets based on publicly available exploits [30] for the vulnerabilities exploited by those malware. The scripts generate random IP addresses and send malicious requests to them in order to execute remote command injection attacks. The injected commands were non-malicious (e.g., ls -l, uname -a), thus causing no actual harm to any device in the network even if it was vulnerable. The scanning/infection rates in our scripts were designed keeping in mind the scanning/infection behavior reported online and the Mirai source code, which is the basis for most of the current IoT malware. We selected one malware per category for our performance evaluation since the malware in each category have similar scanning/infection behavior.
A total of 60 traffic sessions of 15 minutes duration each were collected for both benign and malicious classes through our testbed. The traffic sessions collected for each case were divided into two sets: training and test data using a 70:30 split. For the training data, the class labels were assigned to each feature vector extracted from the traffic sessions included in the training data.
C. Results
The distributions of the feature values for benign and malicious training data, where the malware belongs to the TELNET, HTTP POST and HTTP GET categories, are shown in Fig. 3. The scikit-learn ML algorithms library [31] was used for training and classification purposes. We trained Gaussian Naive Bayes, k-NN (k-Nearest Neighbor) and Random Forest algorithms with our training data and evaluated the trained ML models with test data for all three malware categories. The classification accuracy, precision, recall and F-1 scores obtained for the above three classification algorithms are shown in Table II.
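As an illustration of how this training and evaluation step could look in code, the following is a minimal scikit-learn sketch. It is not the authors' implementation: the synthetic feature values, the split parameters and the classifier hyperparameters (e.g. k for k-NN, number of trees) are assumptions made only to keep the example self-contained and runnable.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
# Synthetic stand-ins for the per-session feature vectors
# [unique dst IPs, max, min, mean packets per dst IP]; values are illustrative only.
X_benign = rng.normal([5, 30, 1, 8], 2.0, size=(42, 4))
X_malicious = rng.normal([400, 2, 1, 1.1], 2.0, size=(42, 4))
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 42 + [1] * 42)            # 0 = benign, 1 = malicious
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=1)

classifiers = {
    "Gaussian Naive Bayes": GaussianNB(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),                     # k is an assumption
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)                 # train on the labeled sessions
    y_pred = clf.predict(X_test)              # classify the held-out sessions
    print(name,
          "accuracy=%.3f" % accuracy_score(y_test, y_pred),
          "precision=%.3f" % precision_score(y_test, y_pred),
          "recall=%.3f" % recall_score(y_test, y_pred),
          "F1=%.3f" % f1_score(y_test, y_pred))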
The classification accuracy refers to the fraction of the total number of input samples whose labels are correctly predicted by a classifier. The precision is the ratio TP/(TP + FP), where TP is the number of true positives and FP is the number of false positives. It represents the ability of a classifier to avoid labeling samples that are negative as positive. The recall is the ratio TP/(TP + FN), where TP is the number of true positives and FN is the number of false negatives. It represents the ability of a classifier to avoid labeling samples that are positive as negative. The F1 score is the harmonic mean of precision and recall, expressed as 2 × (precision × recall)/(precision + recall). It represents the balance between precision and recall offered by a classifier. The scores in Table II show that the k-NN classifier performs the best, followed by the Random Forest classifier and the Gaussian Naive Bayes classifier.
VI. CONCLUSION
In this paper, we proposed EDIMA, a modular solution for early detection of network activity originating from IoT malware using ML classification techniques. Existing IoT malware were distributed among multiple categories based on their targeted software vulnerabilities. Next, the steps for the ML classifier operation and the features used for classification were listed. A testbed consisting of a PC, a smartphone and IoT devices connected to an access gateway was used to evaluate the classification performance of EDIMA. Using packet traffic captures at the access-gateway level, feature vectors were extracted with class labels (benign or malicious) assigned to them. Subsequently, we depicted the distribution of benign and malicious traffic feature vectors for different malware categories. A proportion of the extracted feature vectors were used as training data to train a few standard ML algorithms, and the ML models thus obtained were applied to test data, with their classification scores reported. As part of our future work, we are working on a software-based implementation of EDIMA and its performance evaluation. We are also planning to adapt some state-of-the-art botnet detection techniques using bot-CnC communication features and ML algorithms for malware activity detection and compare their performance with EDIMA.
| 2,971 |
1906.09715
|
2952773240
|
The widespread adoption of Internet of Things has led to many security issues. Post the Mirai-based DDoS attack in 2016 which compromised IoT devices, a host of new malware using Mirai's leaked source code and targeting IoT devices have cropped up, e.g. Satori, Reaper, Amnesia, Masuta etc. These malware exploit software vulnerabilities to infect IoT devices instead of open TELNET ports (like Mirai) making them more difficult to block using existing solutions such as firewalls. In this research, we present EDIMA, a distributed modular solution which can be used towards the detection of IoT malware network activity in large-scale networks (e.g. ISP, enterprise networks) during the scanning infecting phase rather than during an attack. EDIMA employs machine learning algorithms for edge devices' traffic classification, a packet traffic feature vector database, a policy module and an optional packet sub-sampling module. We evaluate the classification performance of EDIMA through testbed experiments and present the results obtained.
|
Of late, there has been an interest in IoT botnet and attack detection in the research community resulting in a number of papers addressing these problems. In @cite_6 , deep-autoencoders based anomaly detection has been used to detect attacks launched from IoT botnets. A few works have focused on building normal communication profiles for IoT devices which are not expected to deviate much over a long period of time. DEFT @cite_10 has used ML algorithms at SDN controllers and access gateways to build normal device traffic fingerprints while @cite_25 proposes a tool to automatically generate MUD (Manufacturer Usage Description) profiles for a number of consumer IoT devices. In DIoT @cite_9 , the authors have proposed a method to classify typically used IoT devices into various device types and build their normal traffic profiles so that a deviation from those profiles is flagged as anomalous traffic.
|
{
"abstract": [
"IoT devices are being widely deployed. Many of them are vulnerable due to insecure implementations and configuration. As a result, many networks already have vulnerable devices that are easy to compromise. This has led to a new category of malware specifically targeting IoT devices. Existing intrusion detection techniques are not effective in detecting compromised IoT devices given the massive scale of the problem in terms of the number of different manufacturers involved. In this paper, we present D \"IoT, a system for detecting compromised IoT devices effectively. In contrast to prior work, D \"IoT uses a novel self-learning approach to classify devices into device types and build for each of these normal communication profiles that can subsequently be used to detect anomalous deviations in communication patterns. D \"IoT is completely autonomous and can be trained in a distributed crowdsourced manner without requiring human intervention or labeled training data. Consequently, D \"IoT copes with the emergence of new device types as well as new attacks. By systematic experiments using more than 30 real-world IoT devices, we show that D \"IoT is effective (96 detection rate with 1 false alarms) and fast (<0.03 s.) at detecting devices compromised by the infamous Mirai malware.",
"Identifying IoT devices connected to a network has multiple security benefits, such as deployment of behavior-based anomaly detectors, automated vulnerability patching of specific device types, dynamic attack mitigation, etc. In this paper, we look into the problem of IoT device identification at network level, in particular from an ISP’s perspective. The simple solution of deploying a supervised machine learning algorithm at a centralized location in the network neither scales well nor can identify new devices. To tackle these challenges, we propose and develop a distributed device fingerprinting technique (DEFT), a distributed fingerprinting solution that addresses and exploits the presence of common devices, including new devices, across smart homes and enterprises in a network. A DEFT controller develops and maintains classifiers for fingerprinting, while gateways located closer to the IoT devices at homes perform device classification. Importantly, the controller and gateways coordinate to identify new devices in the network. DEFT is designed to be scalable and dynamic—it can be deployed, orchestrated, and controlled using software-defined networking and network function virtualization. DEFT is able to identify new device types automatically, while achieving high accuracy and low false positive rate. We demonstrate the effectiveness of DEFT by experimenting on data obtained from real-world IoT devices.",
"IoT devices are increasingly being implicated in cyber-attacks, raising community concern about the risks they pose to critical infrastructure, corporations, and citizens. In order to reduce this risk, the IETF is pushing IoT vendors to develop formal specifications of the intended purpose of their IoT devices, in the form of a Manufacturer Usage Description (MUD), so that their network behavior in any operating environment can be locked down and verified rigorously. This paper aims to assist IoT manufacturers in developing and verifying MUD profiles, while also helping adopters of these devices to ensure they are compatible with their organizational policies. Our first contribution is to develop a tool that takes the traffic trace of an arbitrary IoT device as input and automatically generates the MUD profile for it. We contribute our tool as open source, apply it to 28 consumer IoT devices, and highlight insights and challenges encountered in the process. Our second contribution is to apply a formal semantic framework that not only validates a given MUD profile for consistency, but also checks its compatibility with a given organizational policy. Finally, we apply our framework to representative organizations and selected devices, to demonstrate how MUD can reduce the effort needed for IoT acceptance testing.",
"The proliferation of IoT devices that can be more easily compromised than desktop computers has led to an increase in IoT-based botnet attacks. To mitigate this threat, there is a need for new methods that detect attacks launched from compromised IoT devices and that differentiate between hours- and milliseconds-long IoT-based attacks. In this article, we propose a novel network-based anomaly detection method for the IoT called N-BaIoT that extracts behavior snapshots of the network and uses deep autoencoders to detect anomalous network traffic from compromised IoT devices. To evaluate our method, we infected nine commercial IoT devices in our lab with two widely known IoT-based botnets, Mirai and BASHLITE. The evaluation results demonstrated our proposed methods ability to accurately and instantly detect the attacks as they were being launched from the compromised IoT devices that were part of a botnet."
],
"cite_N": [
"@cite_9",
"@cite_10",
"@cite_25",
"@cite_6"
],
"mid": [
"2804338997",
"2888096149",
"2887792592",
"2799758613"
]
}
|
EDIMA: Early Detection of IoT Malware Network Activity Using Machine Learning Techniques
|
The Internet of Things (IoT) [1] is a network of sensing devices with limited resources and capable of wired/wireless communications with cloud services. IoT devices are being increasingly targeted by attackers using malware as they are easier to infect than conventional computers. This is due to several reasons [2] such as presence of legacy devices with no security updates, low priority given to security within the development cycle, weak login credentials, etc.
In a widely publicized attack, the IoT malware Mirai was used to propagate the biggest DDoS (Distributed Denial-of-Service) attack on record on October 21, 2016. The attack targeted the Dyn DNS (Domain Name Service) servers [3] and generated an attack throughput of the order of 1.2 Tbps. It disabled major internet services such as Amazon, Twitter and Netflix. The attackers had infected IoT devices such as IP cameras and DVR recorders with Mirai, thereby creating an army of bots (botnet) to take part in the DDoS attack.
The source code for Mirai was leaked in 2017 and since then there has been a proliferation of IoT malware. Script "kiddies" as well as professional blackhat/greyhat hackers have used the leaked source code to build their own IoT malware. These malware are usually variants of Mirai using a similar brute force technique of scanning random IP addresses for open TELNET ports and attempting to login using a built-in dictionary of commonly used credentials (Remaiten, Hajime), or more sophisticated malware that exploit software vulnerabilities to execute remote command injections on vulnerable devices (Reaper, Satori, Masuta, Linux.Darlloz, Amnesia etc.). Even though TELNET port scanning can be countered by deploying firewalls (at the user access gateway) which block incoming/outgoing TELNET traffic, malware exploiting software vulnerabilities involving application protocols such as HTTP, SOAP, PHP etc. are more difficult to block using firewalls because those application protocols form a part of legitimate traffic as well.
Bots compromised by Mirai or similar IoT malware can be used for DDoS attacks, phishing and spamming [4]. These attacks can cause network downtime for long periods which may lead to financial loss to network companies, and leak users' confidential data. Bitdefender mentioned in its blog in September 2017 [5] that researchers had estimated at least 100,000 devices infected by Mirai or similar malware revealed daily through TELNET scanning telemetry data. In an October 2017 article [6], Arbor researchers estimated that the actual size of the Reaper botnet fluctuated between 10,000-20,000 bots but warned that this number could change at any time with an additional 2 million devices having been identified by botnet scanners as potential Reaper bots. A Kaspersky lab report [7] released in September 2018 says that 121,588 IoT malware samples were identified in the first half of 2018 which was three times the number of IoT malware samples in the whole of 2017.
Further, many of the infected devices are expected to remain infected for a long time. Therefore, there is a substantial motivation for detecting these IoT bots and taking appropriate action against them so that they are unable to cause any further damage. As pointed out in [8], attempting to ensure that all IoT devices are secure-by-construction is futile, and it is practically infeasible to deploy traditional host-based detection and prevention mechanisms such as antivirus and firewalls for IoT devices. Therefore, it becomes imperative that the security mechanisms for the IoT ecosystem are designed to be network-based rather than host-based.
In this research, we propose a solution towards detecting the network activity of IoT malware in large-scale networks such as enterprise and ISP (Internet Service Provider) networks.
Our proposed solution consists of machine learning (ML) algorithms running at the user access gateway which detect malware activity based on their scanning traffic patterns, a database that stores the malware scanning traffic patterns and can be used to retrieve or update those patterns, and a policy module which decides the further course of action after gateway traffic has been classified as malicious. It also includes an optional packet sub-sampling module which can be deployed for example, in case of enterprises where a number of IoT devices (≈ 10-100) are connected to a single access gateway. The bot detection solution can be deployed both on physical access gateways supplied by the ISP companies or as NFV (Network Function Virtualization) functions at the customer premises/enterprise in a SDN-NFV based network architecture, where SDN stands for Software-Defined Networking.
Bots scanning for and infecting vulnerable devices are targeted in particular by our solution. This is because the scanning and propagation phase of the botnet life-cycle stretches over many months and we can detect and isolate the bots before they can participate in an actual attack such as DDoS. If the DDoS attack has already occurred (due to a botnet), detecting the attack itself is not that difficult and there are already existing methods both in literature and industry to defend against such attacks. Once the IoT bots are detected, the network operators can take suitable countermeasures such as blocking the traffic originating from IoT bots and notifying the local network administrators. The major contributions of this paper are listed below: 1) We have categorized most of the current IoT malware into a few categories to help identify similar malware and simplify the task of designing detection methods for them. 2) We have analyzed the traffic patterns for IoT malware from each category through testbed experiments and packet capture utilities. 3) We have proposed a modular solution towards detection of IoT malware activity by using ML techniques with the above traffic patterns.
III. EDIMA ARCHITECTURE
Our proposed solution towards detecting the scanning packet traffic generated by IoT malware through the use of ML algorithms is called EDIMA (Early Detection of IoT Malware Network Activity) and is shown in Fig. 1. The packet traffic feature vector database stores feature vectors and class labels extracted from the traffic of gateways connected to devices infected with known IoT malware as well as gateways connected to uninfected devices. The database is updated frequently for newly discovered malware. The feature vectors and corresponding class labels are retrieved by the ML model constructor for training the ML classifier for the first time and also for re-training the classifier whenever a new malware is discovered. We envisage a community of security researchers, industry personnel and users who will collect traffic data for IoT malware through honeypots, consumer access gateways etc. The feature vectors extracted from the raw traffic data samples and the class labels assigned to those samples will be used to update the online feature database. 4) Policy Module: The policy module consists of a list of policies defined by the network administrator which decide the course of action to be taken once the traffic from an access gateway has been classified as malicious by the ML classifier module. For instance, the network administrator can block the entire traffic originating from bots and bring them back online only after it is confirmed that the malware has been removed from those IoT devices. 5) Sub-sampling Module (optional): For premises having thousands of IoT devices, such as enterprises and industries, we also propose an optional sub-sampling module as introduced in [19]. This module samples the packet traffic from IoT devices both across time and across devices and presents the result as input to the ML classifier module. The sub-sampling module would help reduce the computational overhead of the ML classifier module by forwarding only a fraction of the incoming IoT packet traffic. We have categorized known IoT malware into three categories based on the type of vulnerability that they target: TELNET, HTTP POST and HTTP GET. TELNET is an application-layer protocol used for bidirectional byte-oriented communication. Typically, a user with a terminal running a TELNET client program accesses a remote host running a TELNET server by requesting a connection to the remote host and logging in by providing its credentials. HTTP GET and POST are methods of the HTTP (HyperText Transfer Protocol) application-layer protocol which are used to request data from and send data to server resources, respectively. For example, HTTP GET is commonly used for requesting web pages from remote web servers through a browser. We have presented the malware categories, various malware belonging to those categories and brief descriptions of their operation in Table I.
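To make the role of the policy module concrete, below is a small hypothetical sketch of how a verdict from the ML classifier could be mapped to administrator-defined actions. The function name, policy table and action strings are illustrative assumptions; the paper does not define a specific policy API.

# Hypothetical policy-module sketch: map a classifier verdict for a gateway's
# traffic to administrator-defined actions (names are illustrative only).
POLICIES = {
    "malicious": ["block_gateway_traffic", "notify_admin"],
    "benign": ["allow"],
}

def apply_policy(gateway_id: str, verdict: str) -> list:
    actions = POLICIES.get(verdict, ["allow"])
    for action in actions:
        # In a real deployment this would call into the gateway/SDN controller;
        # here we only log the decision.
        print(f"[policy] gateway={gateway_id} verdict={verdict} action={action}")
    return actions

# Example: a traffic session from gateway 'gw-07' classified as malicious
apply_policy("gw-07", "malicious")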
B. ML Classification
The classification is performed on IoT access gateway-level traffic rather than device-level traffic as working on aggregate traffic is faster and reduces the memory space required. We define two classes of gateway-level traffic: benign and malicious. Benign traffic refers to the gateway traffic with no malware-induced scanning packets while malicious traffic refers to gateway traffic that includes malware-induced scanning packets from one of the three malware categories. For classification of gateway traffic, we have to first generate training data samples consisting of packet captures belonging to those classes. Benign traffic is not difficult to generate since it involves the normal operation of uninfected devices. However, malicious traffic would contain both benign traffic as well as scanning/infection packets generated by malware. To keep things simple, we chose to collect the gateway traffic statically in fixed session intervals. Further, we apply the classification algorithm on these traffic sessions rather than individual packets because per-packet classification is computationally much more costly and doesn't yield any significant benefits.
Table I (malware categories, member malware and brief descriptions of their operation):
TELNET category
- Mirai: Scans random IP addresses for open TELNET ports and attempts to log in using a built-in dictionary of commonly used credentials [20].
- Hajime: Same propagation mechanism as Mirai, but no CnC server. Instead, it is built on a P2P network. Its purpose seems to be to improve the security of IoT devices [21].
- Remaiten: Same propagation mechanism as Mirai. Downloads a binary specific to the targeted platform. Uses the IRC protocol for CnC server communication [22].
- Linux.Wifatch: Same propagation mechanism as Mirai. Apparently, it tries to secure IoT devices from other malware [23].
- Brickerbot: Rewrites the device firmware, rendering the device permanently inoperable [24].
HTTP POST category
- Satori: Sends a NewInternalClient request through the miniigd SOAP service (REALTEK SDK) or sends malicious packets to port 37215 (Huawei home gateway) [25].
- Masuta: Forms a SOAP request which bypasses authentication and causes arbitrary code execution [26].
- Linux.Darlloz: Sends HTTP POST requests using the PHP 'php-cgi' Information Disclosure Vulnerability to download the worm from a malicious server onto an unpatched device [27].
- Reaper: Scans first on a list of TCP ports to fingerprint devices, then performs a second wave of scans on TCP ports running web services such as 80, 8080, etc., and sends an HTTP POST request for command injection [28].
HTTP GET category
- Reaper: Scanning behavior similar to the above; sends an HTTP request for remote command execution, usually through CGI or PHP.
- Amnesia: Makes simple HTTP requests and searches for the special string "Cross Web Server" in the HTTP response from the target. If successful, sends four more HTTP requests which contain exploit payloads of four different shell commands [29].
The steps for gateway-level traffic classification are given below:
1) Filter each traffic session to include only TCP packets with SYN flag activated and destination port numbers belonging to a target list. 2) Extract the feature vectors for each traffic session.
3) Retrieve the trained classifier from ML model constructor and apply it on the extracted feature vectors to classify the corresponding sessions. The target list of destination port numbers is made on the basis of information obtained from public malware exploits. For example, in 'TELNET' category, target destination port numbers are 23 and 2323. In 'HTTP POST' category, target destination port numbers are 37215, 80, 20736, 36895 etc. In 'HTTP GET' category, target destination port number is always 80.
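A minimal sketch of the filtering step 1) is given below, assuming each captured packet has already been parsed into a record with protocol, TCP flags, destination port and destination IP fields (this input format is an assumption; the paper does not specify one):

# Sketch of the session pre-filter (step 1 above); the target port set is taken
# from the list given in the text.
TARGET_PORTS = {23, 2323, 37215, 80, 20736, 36895}
SYN = 0x02  # TCP SYN flag bit

def filter_session(packets):
    """Keep only TCP packets with the SYN flag set and a targeted destination port."""
    return [
        p for p in packets
        if p["proto"] == "TCP"
        and (p["tcp_flags"] & SYN)
        and p["dst_port"] in TARGET_PORTS
    ]

# The destination IPs of the surviving packets feed the feature extraction in step 2:
# syn_dst_ips = [p["dst_ip"] for p in filter_session(session_packets)]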
In this work, we use a total of 4 features for ML model training and traffic classification: 1) the number of unique destination IP addresses, and 2) the number of packets per destination IP address (maximum, minimum, mean). The motivation behind selecting the first feature is that the malware generate random IP addresses and send malicious requests to them. Hence, the number of unique destination IP addresses in the case of malware-induced scanning traffic will be far higher than in benign traffic. The second feature set seeks to exploit the fact that malware typically do not send multiple malicious packets to the same IP address (only a single packet is sent in most cases), possibly to cover as many devices as possible during the scanning/propagation phase.
One may argue that the malware author/attacker can adopt a less aggressive scanning strategy to avoid detection. The attacker will incur a cost though, in terms of the malware performance, resulting in fewer infected devices in a fixed time period. We plan to investigate this trade-off between malware performance and scanning behavior by formulating an optimization problem in the future. For now, the duration of traffic sessions collected for training/classification can be increased to counter any decrease in scanning rates by the attacker.
V. PERFORMANCE EVALUATION A. Testbed Description
We built a testbed with IoT devices, a laptop PC, an Android smartphone and a wireless access gateway to collect ingress/egress traffic at the gateway, which would form a part of the training data used to train the ML algorithms to be deployed in the ML Classifier module. The IoT devices were: Philips Hue bridge, D-Link DCS-930L Wi-Fi IP camera and TP-Link HS110 Smart Wi-Fi Plug. The laptop PC has an Intel Core i3-5020U 2.2 GHz processor with 4GB RAM and runs Windows 10 OS. Network applications such as a web browser (accessing web pages and video streaming sites, e.g. YouTube), an email client, the Wi-Fi camera online platform etc. were run on the laptop PC by a user. The Android smartphone has a Cortex-A53 Octa-core 1.6 GHz processor with 3GB RAM and runs Android 8.0 OS. Again, the same user ran applications such as a web browser, social media (Facebook/Twitter/LinkedIn), chat (WhatsApp), the Wi-Fi plug app, the Hue app etc. on the smartphone, which also ran a few other network applications in the background. The wireless access gateway was a D-Link DIR-600 router with an Atheros AR7240 350 MHz network processor, Atheros AR9285 network adapter, 32MB RAM and 4MB flash, supporting IEEE 802.11b/g/n Wi-Fi standards. The testbed is shown in Fig. 2. We used a TP-Link TL-SG108E Gigabit Ethernet switch with a port-mirroring feature to mirror the traffic from all of the above devices (IoT, laptop, smartphone) to a Raspberry Pi 3B+ Ethernet port and monitor the cumulative traffic.
B. Evaluation Methodology
As we can't use real malware due to legal and ethical considerations, we wrote scripts to simulate the generation of malicious packets based on publicly available exploits [30] for the vulnerabilities exploited by those malware. The scripts generate random IP addresses and send malicious requests to them in order to execute remote command injection attacks. The injected commands were non-malicious (e.g., ls -l, uname -a), thus causing no actual harm to any device in the network even if it was vulnerable. The scanning/infection rates in our scripts were designed keeping in mind the scanning/infection behavior reported online and the Mirai source code, which is the basis for most of the current IoT malware. We selected one malware per category for our performance evaluation since the malware in each category have similar scanning/infection behavior.
A total of 60 traffic sessions of 15 minutes duration each were collected for both benign and malicious classes through our testbed. The traffic sessions collected for each case were divided into two sets: training and test data using a 70:30 split. For the training data, the class labels were assigned to each feature vector extracted from the traffic sessions included in the training data.
C. Results
The distributions of the feature values for benign and malicious training data, where the malware belongs to the TELNET, HTTP POST and HTTP GET categories, are shown in Fig. 3. The scikit-learn ML algorithms library [31] was used for training and classification purposes. We trained Gaussian Naive Bayes, k-NN (k-Nearest Neighbor) and Random Forest algorithms with our training data and evaluated the trained ML models with test data for all three malware categories. The classification accuracy, precision, recall and F-1 scores obtained for the above three classification algorithms are shown in Table II.
The classification accuracy refers to the fraction of the total number of input samples whose labels are correctly predicted by a classifier. The precision is the ratio TP/(TP + FP), where TP is the number of true positives and FP is the number of false positives. It represents the ability of a classifier to avoid labeling samples that are negative as positive. The recall is the ratio TP/(TP + FN), where TP is the number of true positives and FN is the number of false negatives. It represents the ability of a classifier to avoid labeling samples that are positive as negative. The F1 score is the harmonic mean of precision and recall, expressed as 2 × (precision × recall)/(precision + recall). It represents the balance between precision and recall offered by a classifier. The scores in Table II show that the k-NN classifier performs the best, followed by the Random Forest classifier and the Gaussian Naive Bayes classifier.
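For illustration, these definitions can be checked with a small worked example; the confusion-matrix counts below are invented for the example and are not the values behind Table II:

# Worked example of the metric definitions above (hypothetical counts for 36 test sessions).
TP, FP, FN, TN = 17, 1, 2, 16

accuracy  = (TP + TN) / (TP + TN + FP + FN)                # (17 + 16) / 36  = 0.917
precision = TP / (TP + FP)                                  # 17 / 18         = 0.944
recall    = TP / (TP + FN)                                  # 17 / 19         = 0.895
f1        = 2 * precision * recall / (precision + recall)   # harmonic mean   = 0.919
print(accuracy, precision, recall, f1)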
VI. CONCLUSION
In this paper, we proposed EDIMA, a modular solution for early detection of network activity originating from IoT malware using ML classification techniques. Existing IoT malware were distributed among multiple categories based on their targeted software vulnerabilities. Next, the steps for the ML classifier operation and the features used for classification were listed. A testbed consisting of a PC, a smartphone and IoT devices connected to an access gateway was used to evaluate the classification performance of EDIMA. Using packet traffic captures at the access-gateway level, feature vectors were extracted with class labels (benign or malicious) assigned to them. Subsequently, we depicted the distribution of benign and malicious traffic feature vectors for different malware categories. A proportion of the extracted feature vectors were used as training data to train a few standard ML algorithms, and the ML models thus obtained were applied to test data, with their classification scores reported. As part of our future work, we are working on a software-based implementation of EDIMA and its performance evaluation. We are also planning to adapt some state-of-the-art botnet detection techniques using bot-CnC communication features and ML algorithms for malware activity detection and compare their performance with EDIMA.
| 2,971 |
1906.09715
|
2952773240
|
The widespread adoption of Internet of Things has led to many security issues. Post the Mirai-based DDoS attack in 2016 which compromised IoT devices, a host of new malware using Mirai's leaked source code and targeting IoT devices have cropped up, e.g. Satori, Reaper, Amnesia, Masuta etc. These malware exploit software vulnerabilities to infect IoT devices instead of open TELNET ports (like Mirai) making them more difficult to block using existing solutions such as firewalls. In this research, we present EDIMA, a distributed modular solution which can be used towards the detection of IoT malware network activity in large-scale networks (e.g. ISP, enterprise networks) during the scanning infecting phase rather than during an attack. EDIMA employs machine learning algorithms for edge devices' traffic classification, a packet traffic feature vector database, a policy module and an optional packet sub-sampling module. We evaluate the classification performance of EDIMA through testbed experiments and present the results obtained.
|
Our work addresses a few important gaps in the literature when it comes to distinguishing between legitimate and botnet IoT traffic. First, the works on detecting botnets using their CnC communication features @cite_0 @cite_20 @cite_7 @cite_17 are designed for PC-based botnets rather than IoT botnets which are the focus of our work. Second, we do not aim to detect botnets (networks of bots) but instead, network activity generated by individual bots. IoT botnets tend to consist of hundreds of thousands to millions of devices spread over vast geographies, hence, it is impractical to detect a whole network of IoT bots. Therefore, we do not require computationally expensive clustering algorithms as used in @cite_7 @cite_17 .
|
{
"abstract": [
"To date, techniques to counter cyber-attacks have predominantly been reactive; they focus on monitoring network traffic, detecting anomalies and cyber-attack traffic patterns, and, a posteriori, combating the cyber-attacks and mitigating their effects. Contrary to such approaches, we advocate proactively detecting and identifying botnets prior to their being used as part of a cyber-attack (, 2006). In this paper, we present our work on using machine learning-based classification techniques to identify the command and control (C2) traffic of IRC-based botnets - compromised hosts that are collectively commanded using Internet relay chat (IRC). We split this task into two stages: (I) distinguishing between IRC and non-IRC traffic, and (II) distinguishing between botnet and real IRC traffic. For stage I, we compare the performance of J48, naive Bayes, and Bayesian network classifiers, identify the features that achieve good overall classification accuracy, and determine the classification sensitivity to the training set size. While sensitive to the training data and the attributes used to characterize communication flows, machine learning-based classifiers show promise in identifying IRC traffic. Using classification in stage II is trickier, since accurately labeling IRC traffic as botnet and non-botnet is challenging. We are currently exploring labeling flows as suspicious and non-suspicious based on telltales of hosts being compromised",
"Botnets are now recognized as one of the most serious security threats. In contrast to previous malware, botnets have the characteristic of a command and control (C&C) channel. Botnets also often use existing common protocols, e.g., IRC, HTTP, and in protocol-conforming manners. This makes the detection of botnet C&C a challenging problem. In this paper, we propose an approach that uses network-based anomaly detection to identify botnet C&C channels in a local area network without any prior knowledge of signatures or C&C server addresses. This detection approach can identify both the C&C servers and infected hosts in the network. Our approach is based on the observation that, because of the pre-programmed activities related to C&C, bots within the same botnet will likely demonstrate spatial-temporal correlation and similarity. For example, they engage in coordinated communication, propagation, and attack and fraudulent activities. Our prototype system, BotSniffer, can capture this spatial-temporal correlation in network traffic and utilize statistical algorithms to detect botnets with theoretical bounds on the false positive and false negative rates. We evaluated BotSniffer using many real-world network traces. The results show that BotSniffer can detect real-world botnets with high accuracy and has a very low false positive rate.",
"",
"Botnets are now the key platform for many Internet attacks, such as spam, distributed denial-of-service (DDoS), identity theft, and phishing. Most of the current botnet detection approaches work only on specific botnet command and control (C&C) protocols (e.g., IRC) and structures (e.g., centralized), and can become ineffective as botnets change their C&C techniques. In this paper, we present a general detection framework that is independent of botnet C&C protocol and structure, and requires no a priori knowledge of botnets (such as captured bot binaries and hence the botnet signatures, and C&C server names addresses). We start from the definition and essential properties of botnets. We define a botnet as a coordinated group of malware instances that are controlled via C&C communication channels. The essential properties of a botnet are that the bots communicate with some C&C servers peers, perform malicious activities, and do so in a similar or correlated way. Accordingly, our detection framework clusters similar communication traffic and similar malicious traffic, and performs cross cluster correlation to identify the hosts that share both similar communication patterns and similar malicious activity patterns. These hosts are thus bots in the monitored network. We have implemented our BotMiner prototype system and evaluated it using many real network traces. The results show that it can detect real-world botnets (IRC-based, HTTP-based, and P2P botnets including Nugache and Storm worm), and has a very low false positive rate."
],
"cite_N": [
"@cite_0",
"@cite_7",
"@cite_20",
"@cite_17"
],
"mid": [
"1988741337",
"1583098994",
"",
"1775772884"
]
}
|
EDIMA: Early Detection of IoT Malware Network Activity Using Machine Learning Techniques
|
The Internet of Things (IoT) [1] is a network of sensing devices with limited resources and capable of wired/wireless communications with cloud services. IoT devices are being increasingly targeted by attackers using malware as they are easier to infect than conventional computers. This is due to several reasons [2] such as presence of legacy devices with no security updates, low priority given to security within the development cycle, weak login credentials, etc.
In a widely publicized attack, the IoT malware Mirai was used to propagate the biggest DDoS (Distributed Denial-of-Service) attack on record on October 21, 2016. The attack targeted the Dyn DNS (Domain Name Service) servers [3] and generated an attack throughput of the order of 1.2 Tbps. It disabled major internet services such as Amazon, Twitter and Netflix. The attackers had infected IoT devices such as IP cameras and DVR recorders with Mirai, thereby creating an army of bots (botnet) to take part in the DDoS attack.
The source code for Mirai was leaked in 2017 and since then there has been a proliferation of IoT malware. Script "kiddies" as well as professional blackhat/greyhat hackers have used the leaked source code to build their own IoT malware. These malware are usually variants of Mirai using a similar brute force technique of scanning random IP addresses for open TELNET ports and attempting to login using a built-in dictionary of commonly used credentials (Remaiten, Hajime), or more sophisticated malware that exploit software vulnerabilities to execute remote command injections on vulnerable devices (Reaper, Satori, Masuta, Linux.Darlloz, Amnesia etc.). Even though TELNET port scanning can be countered by deploying firewalls (at the user access gateway) which block incoming/outgoing TELNET traffic, malware exploiting software vulnerabilities involving application protocols such as HTTP, SOAP, PHP etc. are more difficult to block using firewalls because those application protocols form a part of legitimate traffic as well.
Bots compromised by Mirai or similar IoT malware can be used for DDoS attacks, phishing and spamming [4]. These attacks can cause network downtime for long periods which may lead to financial loss to network companies, and leak users' confidential data. Bitdefender mentioned in its blog in September 2017 [5] that researchers had estimated at least 100,000 devices infected by Mirai or similar malware revealed daily through TELNET scanning telemetry data. In an October 2017 article [6], Arbor researchers estimated that the actual size of the Reaper botnet fluctuated between 10,000-20,000 bots but warned that this number could change at any time with an additional 2 million devices having been identified by botnet scanners as potential Reaper bots. A Kaspersky lab report [7] released in September 2018 says that 121,588 IoT malware samples were identified in the first half of 2018 which was three times the number of IoT malware samples in the whole of 2017.
Further, many of the infected devices are expected to remain infected for a long time. Therefore, there is a substantial motivation for detecting these IoT bots and taking appropriate action against them so that they are unable to cause any further damage. As pointed out in [8], attempting to ensure that all IoT devices are secure-by-construction is futile, and it is practically infeasible to deploy traditional host-based detection and prevention mechanisms such as antivirus and firewalls for IoT devices. Therefore, it becomes imperative that the security mechanisms for the IoT ecosystem are designed to be network-based rather than host-based.
In this research, we propose a solution towards detecting the network activity of IoT malware in large-scale networks such as enterprise and ISP (Internet Service Provider) networks.
Our proposed solution consists of machine learning (ML) algorithms running at the user access gateway which detect malware activity based on their scanning traffic patterns, a database that stores the malware scanning traffic patterns and can be used to retrieve or update those patterns, and a policy module which decides the further course of action after gateway traffic has been classified as malicious. It also includes an optional packet sub-sampling module which can be deployed for example, in case of enterprises where a number of IoT devices (≈ 10-100) are connected to a single access gateway. The bot detection solution can be deployed both on physical access gateways supplied by the ISP companies or as NFV (Network Function Virtualization) functions at the customer premises/enterprise in a SDN-NFV based network architecture, where SDN stands for Software-Defined Networking.
Bots scanning for and infecting vulnerable devices are targeted in particular by our solution. This is because the scanning and propagation phase of the botnet life-cycle stretches over many months and we can detect and isolate the bots before they can participate in an actual attack such as DDoS. If the DDoS attack has already occurred (due to a botnet), detecting the attack itself is not that difficult and there are already existing methods both in literature and industry to defend against such attacks. Once the IoT bots are detected, the network operators can take suitable countermeasures such as blocking the traffic originating from IoT bots and notifying the local network administrators. The major contributions of this paper are listed below: 1) We have categorized most of the current IoT malware into a few categories to help identify similar malware and simplify the task of designing detection methods for them. 2) We have analyzed the traffic patterns for IoT malware from each category through testbed experiments and packet capture utilities. 3) We have proposed a modular solution towards detection of IoT malware activity by using ML techniques with the above traffic patterns.
III. EDIMA ARCHITECTURE
Our proposed solution towards detecting the scanning packet traffic generated by IoT malware through the use of ML algorithms is called EDIMA (Early Detection of IoT Malware Network Activity) and is shown in Fig. 1. The packet traffic feature vector database stores feature vectors and class labels extracted from the traffic of gateways connected to devices infected with known IoT malware as well as gateways connected to uninfected devices. The database is updated frequently for newly discovered malware. The feature vectors and corresponding class labels are retrieved by the ML model constructor for training the ML classifier for the first time and also for re-training the classifier whenever a new malware is discovered. We envisage a community of security researchers, industry personnel and users who will collect traffic data for IoT malware through honeypots, consumer access gateways etc. The feature vectors extracted from the raw traffic data samples and the class labels assigned to those samples will be used to update the online feature database. 4) Policy Module: The policy module consists of a list of policies defined by the network administrator which decide the course of action to be taken once the traffic from an access gateway has been classified as malicious by the ML classifier module. For instance, the network administrator can block the entire traffic originating from bots and bring them back online only after it is confirmed that the malware has been removed from those IoT devices. 5) Sub-sampling Module (optional): For premises having thousands of IoT devices, such as enterprises and industries, we also propose an optional sub-sampling module as introduced in [19]. This module samples the packet traffic from IoT devices both across time and across devices and presents the result as input to the ML classifier module. The sub-sampling module would help reduce the computational overhead of the ML classifier module by forwarding only a fraction of the incoming IoT packet traffic. We have categorized known IoT malware into three categories based on the type of vulnerability that they target: TELNET, HTTP POST and HTTP GET. TELNET is an application-layer protocol used for bidirectional byte-oriented communication. Typically, a user with a terminal running a TELNET client program accesses a remote host running a TELNET server by requesting a connection to the remote host and logging in by providing its credentials. HTTP GET and POST are methods of the HTTP (HyperText Transfer Protocol) application-layer protocol which are used to request data from and send data to server resources, respectively. For example, HTTP GET is commonly used for requesting web pages from remote web servers through a browser. We have presented the malware categories, various malware belonging to those categories and brief descriptions of their operation in Table I.
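The following is a rough sketch of such a sub-sampling step, assuming per-device packet lists as input; the uniform random strategy shown is only an assumption for illustration, and the actual scheme is the one introduced in [19]:

import random

def subsample(packets_by_device, device_fraction=0.5, packet_fraction=0.1, seed=0):
    """Keep a fraction of devices and, for each kept device, a fraction of its packets."""
    if not packets_by_device:
        return {}
    rnd = random.Random(seed)
    devices = list(packets_by_device)
    kept_devices = rnd.sample(devices, max(1, int(device_fraction * len(devices))))
    sampled = {}
    for dev in kept_devices:
        pkts = packets_by_device[dev]
        if not pkts:
            continue
        k = max(1, int(packet_fraction * len(pkts)))
        sampled[dev] = rnd.sample(pkts, k)    # only these packets reach the ML classifier
    return sampled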
B. ML Classification
The classification is performed on IoT access gateway-level traffic rather than device-level traffic as working on aggregate traffic is faster and reduces the memory space required. We define two classes of gateway-level traffic: benign and malicious. Benign traffic refers to the gateway traffic with no malware-induced scanning packets while malicious traffic refers to gateway traffic that includes malware-induced scanning packets from one of the three malware categories. For classification of gateway traffic, we have to first generate training data samples consisting of packet captures belonging to those classes. Benign traffic is not difficult to generate since it involves the normal operation of uninfected devices. However, malicious traffic would contain both benign traffic as well as scanning/infection packets generated by malware. To keep things simple, we chose to collect the gateway traffic statically in fixed session intervals. Further, we apply the classification algorithm on these traffic sessions rather than individual packets because per-packet classification is computationally much more costly and doesn't yield any significant benefits.
Table I (malware categories, member malware and brief descriptions of their operation):
TELNET category
- Mirai: Scans random IP addresses for open TELNET ports and attempts to log in using a built-in dictionary of commonly used credentials [20].
- Hajime: Same propagation mechanism as Mirai, but no CnC server. Instead, it is built on a P2P network. Its purpose seems to be to improve the security of IoT devices [21].
- Remaiten: Same propagation mechanism as Mirai. Downloads a binary specific to the targeted platform. Uses the IRC protocol for CnC server communication [22].
- Linux.Wifatch: Same propagation mechanism as Mirai. Apparently, it tries to secure IoT devices from other malware [23].
- Brickerbot: Rewrites the device firmware, rendering the device permanently inoperable [24].
HTTP POST category
- Satori: Sends a NewInternalClient request through the miniigd SOAP service (REALTEK SDK) or sends malicious packets to port 37215 (Huawei home gateway) [25].
- Masuta: Forms a SOAP request which bypasses authentication and causes arbitrary code execution [26].
- Linux.Darlloz: Sends HTTP POST requests using the PHP 'php-cgi' Information Disclosure Vulnerability to download the worm from a malicious server onto an unpatched device [27].
- Reaper: Scans first on a list of TCP ports to fingerprint devices, then performs a second wave of scans on TCP ports running web services such as 80, 8080, etc., and sends an HTTP POST request for command injection [28].
HTTP GET category
- Reaper: Scanning behavior similar to the above; sends an HTTP request for remote command execution, usually through CGI or PHP.
- Amnesia: Makes simple HTTP requests and searches for the special string "Cross Web Server" in the HTTP response from the target. If successful, sends four more HTTP requests which contain exploit payloads of four different shell commands [29].
The steps for gateway-level traffic classification are given below:
1) Filter each traffic session to include only TCP packets with SYN flag activated and destination port numbers belonging to a target list. 2) Extract the feature vectors for each traffic session.
3) Retrieve the trained classifier from ML model constructor and apply it on the extracted feature vectors to classify the corresponding sessions. The target list of destination port numbers is made on the basis of information obtained from public malware exploits. For example, in 'TELNET' category, target destination port numbers are 23 and 2323. In 'HTTP POST' category, target destination port numbers are 37215, 80, 20736, 36895 etc. In 'HTTP GET' category, target destination port number is always 80.
In this work, we use a total of 4 features for ML model training and traffic classification: 1) the number of unique destination IP addresses, and 2) the number of packets per destination IP address (maximum, minimum, mean). The motivation behind selecting the first feature is that the malware generate random IP addresses and send malicious requests to them. Hence, the number of unique destination IP addresses in the case of malware-induced scanning traffic will be far higher than in benign traffic. The second feature set seeks to exploit the fact that malware typically do not send multiple malicious packets to the same IP address (only a single packet is sent in most cases), possibly to cover as many devices as possible during the scanning/propagation phase.
One may argue that the malware author/attacker can adopt a less aggressive scanning strategy to avoid detection. The attacker will incur a cost though, in terms of the malware performance, resulting in fewer infected devices in a fixed time period. We plan to investigate this trade-off between malware performance and scanning behavior by formulating an optimization problem in the future. For now, the duration of traffic sessions collected for training/classification can be increased to counter any decrease in scanning rates by the attacker.
V. PERFORMANCE EVALUATION A. Testbed Description
We built a testbed with IoT devices, a laptop PC, an Android smartphone and a wireless access gateway to collect ingress/egress traffic at the gateway, which would form a part of the training data used to train the ML algorithms to be deployed in the ML Classifier module. The IoT devices were: Philips Hue bridge, D-Link DCS-930L Wi-Fi IP camera and TP-Link HS110 Smart Wi-Fi Plug. The laptop PC has an Intel Core i3-5020U 2.2 GHz processor with 4GB RAM and runs Windows 10 OS. Network applications such as a web browser (accessing web pages and video streaming sites, e.g. YouTube), an email client, the Wi-Fi camera online platform etc. were run on the laptop PC by a user. The Android smartphone has a Cortex-A53 Octa-core 1.6 GHz processor with 3GB RAM and runs Android 8.0 OS. Again, the same user ran applications such as a web browser, social media (Facebook/Twitter/LinkedIn), chat (WhatsApp), the Wi-Fi plug app, the Hue app etc. on the smartphone, which also ran a few other network applications in the background. The wireless access gateway was a D-Link DIR-600 router with an Atheros AR7240 350 MHz network processor, Atheros AR9285 network adapter, 32MB RAM and 4MB flash, supporting IEEE 802.11b/g/n Wi-Fi standards. The testbed is shown in Fig. 2. We used a TP-Link TL-SG108E Gigabit Ethernet switch with a port-mirroring feature to mirror the traffic from all of the above devices (IoT, laptop, smartphone) to a Raspberry Pi 3B+ Ethernet port and monitor the cumulative traffic.
B. Evaluation Methodology
As we can't use real malware due to legal and ethical considerations, we wrote scripts to simulate the generation of malicious packets based on publicly available exploits [30] for the vulnerabilities exploited by those malware. The scripts generate random IP addresses and send malicious requests to them in order to execute remote command injection attacks. The injected commands were non-malicious (e.g., ls -l, uname -a), thus causing no actual harm to any device in the network even if it was vulnerable. The scanning/infection rates in our scripts were designed keeping in mind the scanning/infection behavior reported online and the Mirai source code, which is the basis for most of the current IoT malware. We selected one malware per category for our performance evaluation since the malware in each category have similar scanning/infection behavior.
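A heavily simplified sketch of such a traffic-generation script is shown below. It only reproduces the scanning pattern (one connection attempt per pseudo-randomly chosen address) and deliberately omits any exploit payloads; the lab subnet, probe port and probe rate are assumptions and are not taken from [30] or from the scripts described above:

import random
import socket
import time

LAB_SUBNET = "192.168.1."     # assumed testbed subnet, so no outside host is ever contacted
PROBE_PORT = 80               # HTTP GET category target port
PROBES_PER_MINUTE = 60        # assumed rate, tuned to mimic reported scanning behavior

def probe_once():
    ip = LAB_SUBNET + str(random.randint(2, 254))   # pseudo-random destination within the lab
    try:
        with socket.create_connection((ip, PROBE_PORT), timeout=1):
            pass                                     # single connection attempt per IP
    except OSError:
        pass                                         # unreachable hosts are expected

if __name__ == "__main__":
    for _ in range(10):                              # short demo run
        probe_once()
        time.sleep(60 / PROBES_PER_MINUTE)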
A total of 60 traffic sessions of 15 minutes duration each were collected for both benign and malicious classes through our testbed. The traffic sessions collected for each case were divided into two sets: training and test data using a 70:30 split. For the training data, the class labels were assigned to each feature vector extracted from the traffic sessions included in the training data.
C. Results
The distributions of the feature values for benign and malicious training data, where the malware belongs to the TELNET, HTTP POST and HTTP GET categories, are shown in Fig. 3. The scikit-learn ML algorithms library [31] was used for training and classification purposes. We trained Gaussian Naive Bayes, k-NN (k-Nearest Neighbor) and Random Forest algorithms with our training data and evaluated the trained ML models with test data for all three malware categories. The classification accuracy, precision, recall and F-1 scores obtained for the above three classification algorithms are shown in Table II.
The classification accuracy refers to the fraction of the total number of input samples whose labels are correctly predicted by a classifier. The precision is the ratio TP/(TP + FP), where TP is the number of true positives and FP is the number of false positives. It represents the ability of a classifier to avoid labeling samples that are negative as positive. The recall is the ratio TP/(TP + FN), where TP is the number of true positives and FN is the number of false negatives. It represents the ability of a classifier to avoid labeling samples that are positive as negative. The F1 score is the harmonic mean of precision and recall, expressed as 2 × (precision × recall)/(precision + recall). It represents the balance between precision and recall offered by a classifier. The scores in Table II show that the k-NN classifier performs the best, followed by the Random Forest classifier and the Gaussian Naive Bayes classifier.
VI. CONCLUSION
In this paper, we proposed EDIMA, a modular solution for early detection of network activity originating from IoT malware using ML classification techniques. Existing IoT malware were distributed among multiple categories based on their targeted software vulnerabilities. Next, the steps for the ML classifier operation and the features used for classification were listed. A testbed consisting of a PC, a smartphone and IoT devices connected to an access gateway was used to evaluate the classification performance of EDIMA. Using packet traffic captures at the access-gateway level, feature vectors were extracted with class labels (benign or malicious) assigned to them. Subsequently, we depicted the distribution of benign and malicious traffic feature vectors for different malware categories. A proportion of the extracted feature vectors were used as training data to train a few standard ML algorithms, and the ML models thus obtained were applied to test data, with their classification scores reported. As part of our future work, we are working on a software-based implementation of EDIMA and its performance evaluation. We are also planning to adapt some state-of-the-art botnet detection techniques using bot-CnC communication features and ML algorithms for malware activity detection and compare their performance with EDIMA.
| 2,971 |
1906.09715
|
2952773240
|
The widespread adoption of Internet of Things has led to many security issues. Post the Mirai-based DDoS attack in 2016 which compromised IoT devices, a host of new malware using Mirai's leaked source code and targeting IoT devices have cropped up, e.g. Satori, Reaper, Amnesia, Masuta etc. These malware exploit software vulnerabilities to infect IoT devices instead of open TELNET ports (like Mirai) making them more difficult to block using existing solutions such as firewalls. In this research, we present EDIMA, a distributed modular solution which can be used towards the detection of IoT malware network activity in large-scale networks (e.g. ISP, enterprise networks) during the scanning infecting phase rather than during an attack. EDIMA employs machine learning algorithms for edge devices' traffic classification, a packet traffic feature vector database, a policy module and an optional packet sub-sampling module. We evaluate the classification performance of EDIMA through testbed experiments and present the results obtained.
|
Third, unlike @cite_6 @cite_9 , we aim to detect IoT malware activity much before the actual attack, during the scanning/infection phase. Finally, instead of fingerprinting the normal traffic of IoT devices @cite_10 @cite_9 and using those fingerprints towards anomaly detection, we detect the malware-induced scanning packet traffic generated by infected IoT devices. This is because the former approach suffers from limitations such as the possibility of misclassifying an infected device as a legitimate device type, testing against only simple malware (e.g. Mirai), which may result in failure to detect other, more sophisticated malware, etc. The latter approach is not free from limitations either, since it is not resilient against new, undiscovered malware whose scanning traffic features have not been updated in the database. We advocate for a combined approach consisting of both IoT device fingerprinting/anomaly detection and IoT malware scanning-traffic detection.
|
{
"abstract": [
"IoT devices are being widely deployed. Many of them are vulnerable due to insecure implementations and configuration. As a result, many networks already have vulnerable devices that are easy to compromise. This has led to a new category of malware specifically targeting IoT devices. Existing intrusion detection techniques are not effective in detecting compromised IoT devices given the massive scale of the problem in terms of the number of different manufacturers involved. In this paper, we present D \"IoT, a system for detecting compromised IoT devices effectively. In contrast to prior work, D \"IoT uses a novel self-learning approach to classify devices into device types and build for each of these normal communication profiles that can subsequently be used to detect anomalous deviations in communication patterns. D \"IoT is completely autonomous and can be trained in a distributed crowdsourced manner without requiring human intervention or labeled training data. Consequently, D \"IoT copes with the emergence of new device types as well as new attacks. By systematic experiments using more than 30 real-world IoT devices, we show that D \"IoT is effective (96 detection rate with 1 false alarms) and fast (<0.03 s.) at detecting devices compromised by the infamous Mirai malware.",
"Identifying IoT devices connected to a network has multiple security benefits, such as deployment of behavior-based anomaly detectors, automated vulnerability patching of specific device types, dynamic attack mitigation, etc. In this paper, we look into the problem of IoT device identification at network level, in particular from an ISP’s perspective. The simple solution of deploying a supervised machine learning algorithm at a centralized location in the network neither scales well nor can identify new devices. To tackle these challenges, we propose and develop a distributed device fingerprinting technique (DEFT), a distributed fingerprinting solution that addresses and exploits the presence of common devices, including new devices, across smart homes and enterprises in a network. A DEFT controller develops and maintains classifiers for fingerprinting, while gateways located closer to the IoT devices at homes perform device classification. Importantly, the controller and gateways coordinate to identify new devices in the network. DEFT is designed to be scalable and dynamic—it can be deployed, orchestrated, and controlled using software-defined networking and network function virtualization. DEFT is able to identify new device types automatically, while achieving high accuracy and low false positive rate. We demonstrate the effectiveness of DEFT by experimenting on data obtained from real-world IoT devices.",
"The proliferation of IoT devices that can be more easily compromised than desktop computers has led to an increase in IoT-based botnet attacks. To mitigate this threat, there is a need for new methods that detect attacks launched from compromised IoT devices and that differentiate between hours- and milliseconds-long IoT-based attacks. In this article, we propose a novel network-based anomaly detection method for the IoT called N-BaIoT that extracts behavior snapshots of the network and uses deep autoencoders to detect anomalous network traffic from compromised IoT devices. To evaluate our method, we infected nine commercial IoT devices in our lab with two widely known IoT-based botnets, Mirai and BASHLITE. The evaluation results demonstrated our proposed methods ability to accurately and instantly detect the attacks as they were being launched from the compromised IoT devices that were part of a botnet."
],
"cite_N": [
"@cite_9",
"@cite_10",
"@cite_6"
],
"mid": [
"2804338997",
"2888096149",
"2799758613"
]
}
|
EDIMA: Early Detection of IoT Malware Network Activity Using Machine Learning Techniques
|
The Internet of Things (IoT) [1] is a network of sensing devices with limited resources and capable of wired/wireless communications with cloud services. IoT devices are being increasingly targeted by attackers using malware as they are easier to infect than conventional computers. This is due to several reasons [2] such as presence of legacy devices with no security updates, low priority given to security within the development cycle, weak login credentials, etc.
In a widely publicized attack, the IoT malware Mirai was used to propagate the biggest DDoS (Distributed Denial-of-Service) attack on record on October 21, 2016. The attack targeted the Dyn DNS (Domain Name Service) servers [3] and generated an attack throughput of the order of 1.2 Tbps. It disabled major internet services such as Amazon, Twitter and Netflix. The attackers had infected IoT devices such as IP cameras and DVR recorders with Mirai, thereby creating an army of bots (botnet) to take part in the DDoS attack.
The source code for Mirai was leaked in 2017 and since then there has been a proliferation of IoT malware. Script "kiddies" as well as professional blackhat/greyhat hackers have used the leaked source code to build their own IoT malware. These malware are usually variants of Mirai using a similar brute force technique of scanning random IP addresses for open TELNET ports and attempting to login using a built-in dictionary of commonly used credentials (Remaiten, Hajime), or more sophisticated malware that exploit software vulnerabilities to execute remote command injections on vulnerable devices (Reaper, Satori, Masuta, Linux.Darlloz, Amnesia etc.). Even though TELNET port scanning can be countered by deploying firewalls (at the user access gateway) which block incoming/outgoing TELNET traffic, malware exploiting software vulnerabilities involving application protocols such as HTTP, SOAP, PHP etc. are more difficult to block using firewalls because those application protocols form a part of legitimate traffic as well.
Bots compromised by Mirai or similar IoT malware can be used for DDoS attacks, phishing and spamming [4]. These attacks can cause network downtime for long periods which may lead to financial loss to network companies, and leak users' confidential data. Bitdefender mentioned in its blog in September 2017 [5] that researchers had estimated at least 100,000 devices infected by Mirai or similar malware revealed daily through TELNET scanning telemetry data. In an October 2017 article [6], Arbor researchers estimated that the actual size of the Reaper botnet fluctuated between 10,000-20,000 bots but warned that this number could change at any time with an additional 2 million devices having been identified by botnet scanners as potential Reaper bots. A Kaspersky lab report [7] released in September 2018 says that 121,588 IoT malware samples were identified in the first half of 2018 which was three times the number of IoT malware samples in the whole of 2017.
Further, many of the infected devices are expected to remain infected for a long time. Therefore, there is a substantial motivation for detecting these IoT bots and taking appropriate action against them so that they are unable to cause any further damage. As pointed out in [8], attempting to ensure that all IoT devices are secure-by-construction is futile, and it is practically infeasible to deploy traditional host-based detection and prevention mechanisms such as antivirus and firewalls on IoT devices. Therefore, it becomes imperative that the security mechanisms for the IoT ecosystem are designed to be network-based rather than host-based.
In this research, we propose a solution towards detecting the network activity of IoT malware in large-scale networks such as enterprise and ISP (Internet Service Provider) networks.
Our proposed solution consists of machine learning (ML) algorithms running at the user access gateway which detect malware activity based on their scanning traffic patterns, a database that stores the malware scanning traffic patterns and can be used to retrieve or update those patterns, and a policy module which decides the further course of action after gateway traffic has been classified as malicious. It also includes an optional packet sub-sampling module which can be deployed for example, in case of enterprises where a number of IoT devices (≈ 10-100) are connected to a single access gateway. The bot detection solution can be deployed both on physical access gateways supplied by the ISP companies or as NFV (Network Function Virtualization) functions at the customer premises/enterprise in a SDN-NFV based network architecture, where SDN stands for Software-Defined Networking.
Bots scanning for and infecting vulnerable devices are targeted in particular by our solution. This is because the scanning and propagation phase of the botnet life-cycle stretches over many months and we can detect and isolate the bots before they can participate in an actual attack such as DDoS. If the DDoS attack has already occurred (due to a botnet), detecting the attack itself is not that difficult and there are already existing methods both in literature and industry to defend against such attacks. Once the IoT bots are detected, the network operators can take suitable countermeasures such as blocking the traffic originating from IoT bots and notifying the local network administrators. The major contributions of this paper are listed below:
1) We have categorized most of the current IoT malware into a few categories to help identify similar malware and simplify the task of designing detection methods for them.
2) We have analyzed the traffic patterns for IoT malware from each category through testbed experiments and packet capture utilities.
3) We have proposed a modular solution towards detection of IoT malware activity by using ML techniques with the above traffic patterns.
III. EDIMA ARCHITECTURE
Our proposed solution towards detecting the scanning packet traffic generated by IoT malware through the use of ML algorithms is called EDIMA (Early Detection of IoT Malware Network Activity) and is shown in Fig. 1. The packet traffic feature vector database stores feature vectors extracted from access gateways connected to devices infected with known IoT malware as well as gateways connected to uninfected devices. The database is updated frequently for newly discovered malware. The feature vectors and corresponding class labels are retrieved by the ML model constructor for training the ML classifier for the first time and also for re-training the classifier whenever a new malware is discovered. We envisage a community of security researchers, industry personnel and users who will collect traffic data for IoT malware through honeypots, consumer access gateways etc. The feature vectors extracted from the raw traffic data samples and the class labels assigned to those samples will be updated to the online feature database.
4) Policy Module: The policy module consists of a list of policies defined by the network administrator which decide the course of actions to be taken once the traffic from an access gateway has been classified as malicious by the ML classifier module. For instance, the network administrator can block the entire traffic originating from bots and bring them back online only after it is confirmed that the malware has been removed from those IoT devices.
5) Sub-sampling Module (optional): For premises having thousands of IoT devices, such as enterprises, industries etc., we also propose an optional sub-sampling module as introduced in [19]. This module samples the packet traffic from IoT devices both along time as well as across the devices and presents it as input to the ML classifier module. The sub-sampling module would help reduce the computational overhead for the ML classifier module by forwarding only a fraction of the incoming IoT packet traffic.
We have categorized known IoT malware into three categories based on the type of vulnerability they target: TELNET, HTTP POST and HTTP GET. TELNET is an application-layer protocol used for bidirectional byte-oriented communication. Typically, a user with a terminal and running a TELNET client program accesses a remote host running a TELNET server by requesting a connection to the remote host and logging in by providing its credentials. HTTP GET and POST are methods based on the HTTP (HyperText Transfer Protocol) application-layer protocol which are used to request data from and send data to server resources, respectively. For example, HTTP GET is commonly used for requesting web pages from remote web servers through a browser. We have presented the malware categories, various malware belonging to those categories and brief descriptions of their operation in Table I.
B. ML Classification
The classification is performed on IoT access gateway-level traffic rather than device-level traffic as working on aggregate traffic is faster and reduces the memory space required. We define two classes of gateway-level traffic: benign and malicious. Benign traffic refers to gateway traffic with no malware-induced scanning packets while malicious traffic refers to gateway traffic that includes malware-induced scanning packets from one of the three malware categories. For classification of gateway traffic, we have to first generate training data samples consisting of packet captures belonging to those classes. Benign traffic is not difficult to generate since it involves the normal operation of uninfected devices. However, malicious traffic would contain both benign traffic as well as scanning/infection packets generated by malware. To keep things simple, we chose to collect the gateway traffic statically in fixed session intervals. Further, we apply the classification algorithm on these traffic sessions rather than individual packets because per-packet classification is computationally much more costly and doesn't yield any significant benefits.
Table I. IoT malware categories, member malware and brief descriptions of their operation.
TELNET
- Mirai: Scans random IP addresses for open TELNET ports and attempts to log in using a built-in dictionary of commonly used credentials [20].
- Hajime: Same propagation mechanism as Mirai, but no CnC server. Instead, it is built on a P2P network. Purpose seems to be to improve security of IoT devices [21].
- Remaiten: Same propagation mechanism as Mirai. Downloads binary specific to targeted platform. Uses IRC protocol for CnC server communication [22].
- Linux.Wifatch: Same propagation mechanism as Mirai. Apparently, it tries to secure IoT devices from other malware [23].
- Brickerbot: Rewrites the device firmware, rendering the device permanently inoperable [24].
HTTP POST
- Satori: Sends NewInternalClient request through the miniigd SOAP service (REALTEK SDK) or sends malicious packets to port 37215 (Huawei home gateway) [25].
- Masuta: Forms SOAP request which bypasses authentication and causes arbitrary code execution [26].
- Linux.Darlloz: Sends HTTP POST requests using the PHP 'php-cgi' Information Disclosure Vulnerability to download the worm from a malicious server onto an unpatched device [27].
- Reaper: Scans first on a list of TCP ports to fingerprint devices, then a second wave of scans on TCP ports running web services such as 80, 8080, etc., and sends an HTTP POST request for command injection [28].
HTTP GET
- Reaper: Scanning behavior similar to the above; sends an HTTP request for remote command execution, usually through CGI or PHP.
- Amnesia: Makes simple HTTP requests, searching for the special string "Cross Web Server" in the HTTP response from the target. If successful, sends four more HTTP requests which contain exploit payloads of four different shell commands [29].
The steps for gateway-level traffic classification are given below:
1) Filter each traffic session to include only TCP packets with the SYN flag activated and destination port numbers belonging to a target list.
2) Extract the feature vectors for each traffic session.
3) Retrieve the trained classifier from the ML model constructor and apply it to the extracted feature vectors to classify the corresponding sessions.
The target list of destination port numbers is made on the basis of information obtained from public malware exploits. For example, in the 'TELNET' category, target destination port numbers are 23 and 2323. In the 'HTTP POST' category, target destination port numbers are 37215, 80, 20736, 36895 etc. In the 'HTTP GET' category, the target destination port number is always 80.
In this work, we use a total of 4 features for ML model training and traffic classification:
1) Number of unique destination IP addresses
2) Number of packets per destination IP address (maximum, minimum, mean)
The motivation behind selecting the first feature is that the malware generate random IP addresses and send malicious requests to them. Hence, the number of unique destination IP addresses in malware-induced scanning traffic will be far higher than in benign traffic. The second feature set seeks to exploit the fact that malware typically do not send multiple malicious packets to the same IP address (only a single packet is sent in most cases), possibly to cover as many devices as possible during the scanning/propagation phase.
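To make the filtering and feature-extraction steps above concrete, the following is a minimal sketch, assuming the session has been captured to a pcap file and is parsed with scapy; the file name, the TARGET_PORTS set and the helper name extract_features are illustrative assumptions rather than part of the EDIMA implementation.

```python
# A hedged sketch of per-session SYN filtering and 4-feature extraction.
from collections import Counter
from scapy.all import rdpcap, IP, TCP

TARGET_PORTS = {23, 2323, 80, 8080, 37215, 20736, 36895}  # example target list

def extract_features(pcap_path):
    pkts = rdpcap(pcap_path)                 # one fixed-length traffic session
    dst_counts = Counter()
    for pkt in pkts:
        if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
            continue
        tcp = pkt[TCP]
        if (tcp.flags & 0x02) and tcp.dport in TARGET_PORTS:   # SYN bit set
            dst_counts[pkt[IP].dst] += 1
    if not dst_counts:
        return [0, 0, 0, 0.0]
    per_dst = list(dst_counts.values())
    return [
        len(dst_counts),               # number of unique destination IPs
        max(per_dst),                  # max packets per destination IP
        min(per_dst),                  # min packets per destination IP
        sum(per_dst) / len(per_dst),   # mean packets per destination IP
    ]
```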
One may argue that the malware author/attacker can adopt a less aggressive scanning strategy to avoid detection. The attacker will incur a cost, though, in terms of malware performance, resulting in fewer infected devices in a fixed time period. We plan to investigate this malware performance vs. scanning behavior trade-off by formulating an optimization problem in the future. For now, the duration of traffic sessions collected for training/classification can be increased to counter any decrease in scanning rates by the attacker.
V. PERFORMANCE EVALUATION
A. Testbed Description
We built a testbed with IoT devices, a laptop PC, an Android smartphone and a wireless access gateway to collect ingress/egress traffic at the gateway, which would form a part of the training data used to train the ML algorithms to be deployed in the ML Classifier module. The IoT devices were: Philips Hue bridge, D-Link DCS-930L Wi-Fi IP camera and TP-Link HS110 Smart Wi-Fi Plug. The laptop PC has an Intel Core i3-5020U 2.2 GHz processor with 4GB RAM and runs Windows 10 OS. Network applications such as web browser (accessing web pages, video streaming sites e.g. YouTube), email client, WiFi camera online platform etc. were run on the laptop PC by a user. The Android smartphone has a Cortex-A53 Octa-core 1.6 GHz processor with 3GB RAM and runs Android 8.0 OS. Again, the same user ran applications such as web browser, social media (Facebook/Twitter/LinkedIn), chat (WhatsApp), Wi-Fi plug app, Hue app etc. on the smartphone, which also ran a few other network applications in the background. The wireless access gateway was a D-Link DIR-600 router with an Atheros AR7240 350 MHz network processor, Atheros AR9285 network adapter, 32MB RAM and 4MB flash, supporting IEEE 802.11b/g/n Wi-Fi standards. The testbed is shown in Fig. 2. We used a TP-Link TL-SG108E Gigabit Ethernet switch with a port-mirroring feature to mirror the traffic from all of the above devices (IoT, laptop, smartphone) to a Raspberry Pi 3B+ Ethernet port and monitor the cumulative traffic.
B. Evaluation Methodology
As we can't use real malware due to legal and ethical considerations, we wrote scripts to simulate the generation of malicious packets based on publicly available exploits [30] for the vulnerabilities exploited by those malware. The script generates random IP addresses and sends malicious requests to them in order to execute remote command injection attacks. The injected commands were non-malicious (e.g. ls -l, uname -a), thus causing no actual harm to any device in the network even if it was vulnerable. The scanning/infection rates in our scripts were designed keeping in mind the scanning/infection behavior reported online and the Mirai source code, which is the basis for most of the current IoT malware. We selected one malware per category for our performance evaluation since the malware in each category have similar scanning/infection behavior.
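For illustration only, below is a hedged sketch of such a traffic-generation script. It differs from the description above in that it only opens plain TCP connections (no request payload or exploit is sent) and confines the randomly generated targets to a lab subnet; the subnet, port and rate constants are assumptions for this sketch, not the parameters of the actual scripts.

```python
# A minimal, harmless stand-in for the scanning-traffic generator used on the
# testbed: it emulates the *traffic pattern* (many SYNs to random addresses),
# not any exploit behavior.
import random
import socket
import time

LAB_SUBNET = "192.168.1."   # assumption: keep simulated scans inside the testbed
TARGET_PORT = 80            # e.g. HTTP GET category; 23 for the TELNET category
SCAN_INTERVAL_S = 0.5       # assumed per-probe delay, roughly Mirai-like rates

def simulate_scanning(num_probes=100):
    for _ in range(num_probes):
        target = LAB_SUBNET + str(random.randint(2, 254))
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.2)
        try:
            s.connect((target, TARGET_PORT))   # most attempts simply time out
        except OSError:
            pass
        finally:
            s.close()
        time.sleep(SCAN_INTERVAL_S)
```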
A total of 60 traffic sessions of 15 minutes duration each were collected for both benign and malicious classes through our testbed. The traffic sessions collected for each case were divided into two sets: training and test data using a 70:30 split. For the training data, the class labels were assigned to each feature vector extracted from the traffic sessions included in the training data.
C. Results
The distributions of the feature values for benign and malicious training data, where the malware belongs to the TELNET, HTTP POST and HTTP GET categories, are shown in Fig. 3. The scikit-learn ML algorithms library [31] was used for training and classification purposes. We trained Gaussian Naive Bayes, k-NN (k-Nearest Neighbor) and Random Forest algorithms with our training data and evaluated the trained ML models with test data for all three malware categories. The classification accuracy, precision, recall and F1 scores obtained for the above three classification algorithms are shown in Table II.
The classification accuracy refers to the fraction of the total number of input samples whose labels are correctly predicted by a classifier. The precision is the ratio TP/(TP + FP), where TP is the number of true positives and FP is the number of false positives. It represents the ability of a classifier to avoid labeling samples that are negative as positive. The recall is the ratio TP/(TP + FN), where TP is the number of true positives and FN is the number of false negatives. It represents the ability of a classifier to avoid labeling samples that are positive as negative. The F1 score is the harmonic mean of precision and recall, expressed as 2 × (precision × recall)/(precision + recall). It represents the balance between precision and recall offered by a classifier. The scores in Table II show that the k-NN classifier performs the best, followed by the Random Forest classifier and the Gaussian Naive Bayes classifier.
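A minimal scikit-learn sketch of this training and evaluation procedure is given below; the stand-in feature matrices are random placeholders for the per-session feature vectors described earlier, and all variable names are illustrative assumptions.

```python
# Train the three classifiers on session feature vectors and report the metrics.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
# Stand-in data with the 4 features; in practice these come from extract_features()
# over the 70:30 split of captured sessions (labels: 0 = benign, 1 = malicious).
X_train = rng.normal(size=(84, 4)); y_train = rng.integers(0, 2, 84)
X_test  = rng.normal(size=(36, 4)); y_test  = rng.integers(0, 2, 36)

classifiers = {
    "GaussianNB":   GaussianNB(),
    "k-NN":         KNeighborsClassifier(n_neighbors=5),
    "RandomForest": RandomForestClassifier(n_estimators=100),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    print(name,
          "acc=%.3f"  % accuracy_score(y_test, y_pred),
          "prec=%.3f" % precision_score(y_test, y_pred),
          "rec=%.3f"  % recall_score(y_test, y_pred),
          "f1=%.3f"   % f1_score(y_test, y_pred))
```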
VI. CONCLUSION
In this paper, we proposed EDIMA, a modular solution for early detection of network activity originating from IoT malware using ML classification techniques. Existing IoT malware were distributed among multiple categories based on their targeted software vulnerabilities. Later, the steps for the ML classifier operation and the features used for classification were listed. A testbed consisting of a PC, a smartphone and IoT devices connected to an access gateway was used to evaluate the classification performance of EDIMA. Using packet traffic captures at the access gateway level, feature vectors were extracted with class labels (benign or malicious) assigned to them. Subsequently, we depicted the distribution of benign and malicious traffic feature vectors for different malware categories. A proportion of the extracted feature vectors was used as training data to train a few standard ML algorithms, and the ML models thus obtained were applied to test data with their classification scores reported. As part of our future work, we are working on the software-based implementation of EDIMA and its performance evaluation. We are also planning to adapt some state-of-the-art botnet detection techniques using bot-CnC communication features and ML algorithms for malware activity detection and compare their performance with EDIMA.
| 2,971 |
1906.09686
|
2952816888
|
Bayesian Neural Networks (BNNs) place priors over the parameters in a neural network. Inference in BNNs, however, is difficult; all inference methods for BNNs are approximate. In this work, we empirically compare the quality of predictive uncertainty estimates for 10 common inference methods on both regression and classification tasks. Our experiments demonstrate that commonly used metrics (e.g. test log-likelihood) can be misleading. Our experiments also indicate that inference innovations designed to capture structure in the posterior do not necessarily produce high quality posterior approximations.
|
In the literature, posteriors for Bayesian Neural Network models obtained by Hamiltonian Monte Carlo (HMC) @cite_12 are frequently used as ground truth. However, HMC scales poorly on high-dimensional parameter spaces and large datasets @cite_0 @cite_4 . Mini-batched versions of HMC, such as Stochastic Gradient Langevin Dynamics (SGLD) @cite_0 and Stochastic Gradient HMC @cite_5 , have been introduced to address the issue of scalability. However, these methods still suffer from lower mixing rates and are not theoretically guaranteed to converge to the true posterior when model assumptions are not met (e.g. when the true model of the gradient noise is not well-estimated).
|
{
"abstract": [
"In this paper we propose a new framework for learning from large scale datasets based on iterative learning from small mini-batches. By adding the right amount of noise to a standard stochastic gradient optimization algorithm we show that the iterates will converge to samples from the true posterior distribution as we anneal the stepsize. This seamless transition between optimization and Bayesian posterior sampling provides an inbuilt protection against overfitting. We also propose a practical method for Monte Carlo estimates of posterior statistics which monitors a \"sampling threshold\" and collects samples after it has been surpassed. We apply the method to three models: a mixture of Gaussians, logistic regression and ICA with natural gradients.",
"Hamiltonian Monte Carlo (HMC) sampling methods provide a mechanism for defining distant proposals with high acceptance probabilities in a Metropolis-Hastings framework, enabling more efficient exploration of the state space than standard random-walk proposals. The popularity of such methods has grown significantly in recent years. However, a limitation of HMC methods is the required gradient computation for simulation of the Hamiltonian dynamical system--such computation is infeasible in problems involving a large sample size or streaming data. Instead, we must rely on a noisy gradient estimate computed from a subset of the data. In this paper, we explore the properties of such a stochastic gradient HMC approach. Surprisingly, the natural implementation of the stochastic approximation can be arbitrarily bad. To address this problem we introduce a variant that uses second-order Langevin dynamics with a friction term that counteracts the effects of the noisy gradient, maintaining the desired target distribution as the invariant distribution. Results on simulated data validate our theory. We also provide an application of our methods to a classification task using neural networks and to online Bayesian matrix factorization.",
"",
"Large multilayer neural networks trained with backpropagation have recently achieved state-of-the-art results in a wide range of problems. However, using backprop for neural net learning still has some disadvantages, e.g., having to tune a large number of hyperparameters to the data, lack of calibrated probabilistic predictions, and a tendency to overfit the training data. In principle, the Bayesian approach to learning neural networks does not have these problems. However, existing Bayesian techniques lack scalability to large dataset and network sizes. In this work we present a novel scalable method for learning Bayesian neural networks, called probabilistic backpropagation (PBP). Similar to classical backpropagation, PBP works by computing a forward propagation of probabilities through the network and then doing a backward computation of gradients. A series of experiments on ten real-world datasets show that PBP is significantly faster than other techniques, while offering competitive predictive abilities. Our experiments also show that PBP provides accurate estimates of the posterior variance on the network weights."
],
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_4",
"@cite_12"
],
"mid": [
"2167433878",
"2144193737",
"",
"1719489212"
]
}
|
Quality of Uncertainty Quantification for Bayesian Neural Network Inference
|
While deep learning provides a flexible framework for function approximation that achieves impressive performance on many real-life tasks (LeCun et al., 2015), there has been a recent focus on providing predictive uncertainty estimates for deep models, making them better suited for use in risk-sensitive applications. Bayesian neural networks (BNNs) are neural network models that include uncertainty through priors on network weights, and thus provide uncertainty about the functional mean through posterior predictive distributions (MacKay, 1992;Neal, 2012). (Note: one can also place priors directly on functions rather than network weights (Sun et al., 2019); in this work, we focus on the more commonly used approach of placing priors over weights.) Unfortunately, characterizing uncertainty over parameters of neural networks is challenging due to the high dimensionality of the weight space and potentially complex dependencies among the weights. Markov-chain Monte Carlo (MCMC) techniques are often slow to mix. Standard variational inference methods with mean field approximations may struggle to escape local optima and, furthermore, are unable to capture dependencies between the weights.
There exists a large body of work to improve the quality of inference for Bayesian neural networks (BNNs) by improving the approximate inference procedure (e.g. Graves 2011;Blundell et al. 2015;Hernández-Lobato et al. 2016, to name a few), or by improving the flexibility of the variational approximation for variational inference (e.g. Gershman et al. 2012;Ranganath et al. 2016;Louizos & Welling 2017;Miller et al. 2017). On the other hand, a number of frequentist approaches, like ensemble methods (Lakshminarayanan et al., 2017;Pearce et al., 2018;Tagasovska & Lopez-Paz, 2018), provide predictive uncertainty estimates for neural networks while by-passing the challenges of Bayesian inference altogether.
The objective of this work is to provide an empirical comparison of common BNN inference approaches with a focus on the quality of uncertainty quantification. We perform a careful empirical comparison of 8 state-of-the-art approximate inference methods and 2 non-Bayesian frameworks, where we find that performance depends heavily on the training data. We characterize situations where metrics like log-likelihood and RMSE fail to distinguish good vs poor approximations of the true posterior, and, based on our observations, engineer synthetic datasets for comparing the predictive uncertainty estimates.
Related Works
In the literature, posteriors for Bayesian Neural Network models obtained by Hamiltonian Monte Carlo (HMC) (Neal, 2012) are frequently used as ground truth. However, HMC scales poorly on high-dimensional parameter spaces and large datasets (Welling & Teh, 2011;Chen et al., 2014). Mini-batched versions of HMC, such as Stochastic Gradient Langevin Dynamics (SGLD) (Welling & Teh, 2011) and Stochastic Gradient HMC (Chen et al., 2014), have been introduced to address the issue of scalability. However, these methods still suffer from lower mixing rates and are not theoretically guaranteed to converge to the true posterior when model assumptions are not met (e.g. when the true model of the gradient noise is not well-estimated).
As a result, much effort has been spent on variational methods. Mean Field Variational Bayes for BNNs was introduced in (Graves, 2011), the gradient computation of which was later improved in Bayes by Backprop (BBB) (Blundell et al., 2015). However, the fully factorized Gaussian variational family used in BBB is unable to capture correlation amongst the parameters in the posterior. In contrast, Matrix Gaussian Posteriors (MVG) (Louizos & Welling, 2016), Multiplicative Normalizing Flows (MNF) (Louizos & Welling, 2017), and Bayes by Hypernet (BBH) (Pawlowski et al., 2017) are explicitly designed to capture posterior correlation by imposing structured approximation families; works like Black Box α-Divergence (Hernández-Lobato et al., 2016) and Probabilistic Backpropagation (PBP) (Hernández-Lobato & Adams, 2015) use a richer family of divergence measures, encouraging approximate posteriors to capture important properties of true posterior distributions.
Finally, Dropout (Gal & Ghahramani, 2016) and ensemble methods (Lakshminarayanan et al., 2017;Pearce et al., 2018;Tagasovska & Lopez-Paz, 2018) by-pass the difficulties of performing Bayesian inference and obtain predictive uncertainty estimates through implicitly or explicitly training multiple models on the same data.
While there are numerous inference methods, there have been few exhaustive, independent comparisons (Myshkov & Julier, 2016;Zhao & Ji, 2018). In (Myshkov & Julier, 2016) BBB, PBP, and Dropout are compared with minibatched HMC on regression. The evaluation metrics consist of RMSE and divergence from HMC posteriors (considered as ground truth). In (Zhao & Ji, 2018), BBB and Dropout are compared with HMC and SGHMC on classification. Accuracy and calibration (how well predictive uncertainty align with empirical uncertainty) of the posterior predictive distribution are analyzed. Neither work indicates when predictive accuracy and calibration correspond to the fidelity of posterior approximation.
In this work, we provide a comparison of a wide range of inference methods on both regression and classification tasks. Furthermore, we investigate the usefulness of metrics for posterior predictive generalization and calibration for measuring the fidelity of posterior approximations. In particular, we identify situations in which these metrics are poor proxies for measuring divergence from true posteriors.
Challenges in Evaluating Uncertainty
Frequently in the literature, high test log likelihood is used as evidence that the inference procedure has more faithfully captured the true posterior. However, here we argue that while test log likelihood may be a good criterion for model selection, it is not a reliable criterion for determining how well an approximate posterior aligns with the true posterior.
Consider the example in Figure 1. The training data has a 'gap', namely there are no samples from [−1, 1]. We see that the posterior predictive means of the true posterior (i.e. the ground truth), as given by HMC (details in Section 5), and that obtained by PBP are identical. However, the PBP posterior predictive uncertainty is far smaller. The average test log-likelihood for data evenly spaced in [−4, 4] is -0.25 for PBP and -0.42 for HMC. In this case, the better number does not indicate a better model class (e.g. a prior p(W ) that appropriately puts more weight where the data lie). Rather, it is an artifact of the fact that the data happens to lie where an incorrect inference procedure put more mass. In short, the average test log-likelihood indicates that the approximate posterior predictive better aligns with the data and not that it is a faithful approximation of the true posterior predictive.
For the same reason, RMSE and other metrics for measuring predictive calibration (such as Prediction Interval Coverage Probability) are also unreliable indicators of the degree to which approximate posteriors align with the true ones. In this paper, we argue that issues of model selection should be addressed separately from issues associated with the approximation gaps of inference. For this, we engineer synthetic datasets on which our ground-truth BNN model produces well-calibrated posterior predictive distributions and hence generalization and calibration metrics are proxies for how well a given inference method captures the true posterior.
Experimental Set Up
Data Sets We perform experiments on univariate regression and two-dimensional binary classification tasks so that the ground truth distributions can be visualized. For each task, we consider two synthetic datasets. In one of these datasets, the a priori model uncertainty will be higher than the variation in the data warrants, whereas in the other dataset the data variation will match the a priori model uncertainty. Data generation details are in Appendix A.
Ground Truth Baselines We use Hamiltonian Monte Carlo (HMC) (Neal, 2012) to construct 'ground-truth' posterior and posterior predictive distributions. We run HMC for 50k iterations with 100 leapfrog steps and check for mixing. See Appendix E for full description.
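As an illustration of the kind of data described in the Data Sets paragraph above, the following sketch generates a univariate regression set with a gap in [-1, 1] (cf. Figure 1). The cubic ground-truth function and noise level are stand-in assumptions; the actual generating process is specified in Appendix A, which is not reproduced here.

```python
# A hedged sketch of a "gapped" 1D regression dataset on [-4, 4] \ [-1, 1].
import numpy as np

rng = np.random.default_rng(0)

def make_gap_dataset(n=100, noise_std=0.1):
    # Sample inputs from [-4, -1] and [1, 4], leaving a gap in the middle.
    x_left = rng.uniform(-4.0, -1.0, size=n // 2)
    x_right = rng.uniform(1.0, 4.0, size=n - n // 2)
    x = np.concatenate([x_left, x_right])
    y = 0.1 * x**3 + noise_std * rng.normal(size=x.shape)  # stand-in function
    return x[:, None], y

X_train, y_train = make_gap_dataset()
```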
Methods We evaluate 10 inference methods: Bayes by Backprop (BBB), Probabilistic Backpropagation (PBP), Black-box α-Divergence (BB-ALPHA), Multiplicative Normalizing Flows (MNF), Matrix-Variate Gaussian (MVG), Bayes by Hypernet (BBH), Dropout, Ensemble, Stochastic Gradient Langevin Dynamics (SGLD), and Stochastic Gradient HMC (SGHMC). We do not evaluate PBP and BB-ALPHA on classification tasks as they assume exponential-family likelihood distributions. All optimization is done with Adam except for HMC, SGLD and SG-HMC, which have their own scheduled gradient updates. We use existing code-bases for methods when available (BB-ALPHA, MVG, BBH, MNF). Full description of tuning schemes is in Appendix E.
Experimental Parameters For all tasks, we use neural networks with ReLU nonlinearities. We use 1 hidden layer with 50 hidden nodes for regression and 2 hidden layers with 10 nodes each for classification. Every method is run with 20 random restarts, each until convergence, using a fixed weight prior W ∼ N (0, I) and true output noise. For methods that include priors on the output noise, we disable these in the experiments. Out of the 20 restarts, we select the solution with the highest validation log-likelihood and estimated the posterior predictive distributions with 500 posterior samples (results given by selection by ELBO are indistinguishable and are in Appendix 6). Full description in Appendix E.
Evaluation Metrics Evaluating the fidelity of posterior approximations is challenging. As a result, in BNN literature, accuracy, average marginal log-likelihood, and frequentist metrics such as Prediction Interval Coverage Probability (PICP) -the percentage of observations for which the ground truth y lies within a 95% predictive-interval (PI) of the learned model -and the average width of the 95% PI (MPIW) are commonly used as indicators of the quality of posterior approximation (full description of metrics in Appendix B). Our experiments provide insights on when these metrics correspond with high quality posterior approximation and when they do not.
Results
Generalization and calibration metrics are not reliable indicators of the quality of posterior approximation. Figures 1 and 2 show that when the model class has large flexibility for the data, the ground truth posterior predictive may have lower log-likelihood and calibration scores than a poor approximation. In Figure 1, we see that most inference methods, though they underestimate the uncertainty, still produce high log-likelihood because the predictive mean aligns well with the true function, whereas HMC gets penalized for its large uncertainty in the middle, which stems from the flexibility of the model class. On the other hand, when the model class has the right capacity for the data, posterior predictive generalization and calibration are good but not definitive indicators of the quality of posterior approximation (Figure 3, Table 1). This is especially concerning for high-dimensional or large datasets on which ground truth distributions are hard to compute and appropriate model capacity is hard to ascertain. Here, generalization/calibration metrics often conflate bad models with bad inference. We note that evaluations of uncertainty estimates based on active learning will struggle similarly in distinguishing model and inference issues.
Inference methods designed to capture structure in the posterior do not necessarily produce better approximations of the true posterior. In our experiments, we do not see that methods using a richer divergence metric or structured variational family are able to better capture the ground truth posterior. This is likely due to the fact that the true posteriors in our experiments lack the patterns of dependencies that those inference methods aim to capture (Appendix D). However, this observation indicates the need for developing concrete guidelines for when it is beneficial to use alternative divergence metrics and structured variational families, since the extra flexibility of these methods often invites additional optimization challenges on real data.
Caption fragment: (Figure 1), generalization metrics combined with calibration metrics give a reasonable indication for the quality of posterior approximation (HMC scores highest). However, even here these metrics do not entirely capture our intuition for quality of fit (for example, the test log-likelihood of BB-ALPHA is higher than that of Ensemble).
Ensemble methods do not consistently produce the types of uncertainty estimates we want. Methods using an ensemble (whether explicit or implicit) of models to produce predictive distributions rely on model diversity to produce accurate uncertainty estimates. Ensemble methods may produce similar solutions due to initialization or optimization issues. When the ensemble includes many dissimilar plausible models for the data (Figure 3), the uncertainty estimates can be good; when ensemble training finds local optima with highly similar models for the data, the uncertainty estimates can be poor (Figure 7). Thus, uncertainty estimates from ensembles can be unreliable absent a structured way of including diversity-promoting training objectives.
SGHMC produces posterior predictives that are most similar to that of HMC. In our experiments, we see that SGLD drastically underestimates posterior predictive uncertainty. SGHMC, while tending to overestimate uncertainty, produces predictive distributions qualitatively similar to those of HMC.
Conclusion
In this paper, we compare 10 commonly used approximate inference procedures for Bayesian Neural Networks. Frequently, measurements of generalization and calibration of the posterior predictive are used to evaluate the quality of inference. We show that these metrics conflate issues of model selection with those of inference. On our data, we see that approximate Bayesian inference methods struggle to capture true posteriors and the non-Bayesian methods often do not capture the type of predictive uncertainty that we want. Our experiments show that we need more exhaustive and
B. Evaluation Metrics
The average marginal log-likelihood is computed as:
$\mathbb{E}_{(x_n, y_n) \sim \mathcal{D}} \, \mathbb{E}_{q(W)} \left[ p(y_n \mid x_n, W) \right]. \qquad (2)$
The predictive RMSE is computed as:
$\sqrt{\frac{1}{N} \sum_{n=1}^{N} \left\| y_n - \mathbb{E}_{q(W)}\left[ f(x_n, W) \right] \right\|_2^2}. \qquad (3)$
The Prediction Interval Coverage Probability (PICP) is computed as:
$\frac{1}{N} \sum_{n=1}^{N} \mathbb{1}_{\left[ y_n \le y_n^{\text{high}} \right]} \cdot \mathbb{1}_{\left[ y_n \ge y_n^{\text{low}} \right]}, \qquad (4)$
and the Mean Prediction Interval Width (MPIW) is computed as:
$\frac{1}{N} \sum_{n=1}^{N} \left( y_n^{\text{high}} - y_n^{\text{low}} \right), \qquad (5)$
where $y_n^{\text{high}}$ is the 97.5th percentile and $y_n^{\text{low}}$ is the 2.5th percentile of the predicted outputs for $x_n$. We want models to have PICP values close to 95% while minimizing the MPIW, thus formalizing our desideratum that well-calibrated posterior predictive uncertainty should be both necessary and sufficient to capture the variation in the data.
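A small numpy sketch of these four metrics, computed from posterior samples of the network output, is given below; the array shapes, the Gaussian likelihood with known output noise, and all variable names are illustrative assumptions rather than the paper's implementation.

```python
# Metrics of Eqs. (2)-(5) from posterior samples.
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def avg_marginal_log_likelihood(preds, y, noise_std):
    # preds: (S, N) sampled means f(x_n, W_s); per-point log of the Monte Carlo
    # average of the Gaussian likelihood, then averaged over test points.
    log_p = norm.logpdf(y[None, :], loc=preds, scale=noise_std)     # (S, N)
    return np.mean(logsumexp(log_p, axis=0) - np.log(preds.shape[0]))

def rmse(preds, y):
    # Eq. (3): error of the posterior predictive mean.
    return np.sqrt(np.mean((y - preds.mean(axis=0)) ** 2))

def picp_mpiw(pred_samples, y):
    # pred_samples: (S, N) draws from the posterior predictive (incl. output noise).
    lo, hi = np.percentile(pred_samples, [2.5, 97.5], axis=0)
    picp = np.mean((y >= lo) & (y <= hi))                            # Eq. (4)
    mpiw = np.mean(hi - lo)                                          # Eq. (5)
    return picp, mpiw
```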
C. Additional Results
• Figure 4 represents the posterior predictive distribution and Table 2 summarizes the metrics for all inference methods of regression task 1.
• Figure 5 is the complete plot of posterior predictive distribution for all inference methods of regression task 2.
• Figure 6 summarizes the posterior predictive distribution and Table 3 shows the metrics for all inference methods of classification task 1.
• Figure 7 summarizes the posterior predictive distribution and Table 4 shows the metrics for all inference methods of classification task 2.
• For BBB, MNF, MVG, BBH, we ran additional experiments using the KL divergence as the model selection criterion instead of the log-likelihood. A smaller KL divergence suggests that the approximated posterior is more similar to the true posterior. For BB-ALPHA, the measurements are not comparable when α is different. We fixed α to be 0.3 (chosen by cross-validation based on test log-likelihood) and selected the run with the smallest α-divergence. Figure 8 and Figure 9 summarize the posterior predictive distributions for those models on the regression tasks. Overall, those methods still do not produce satisfying approximations of the true posterior. Also, BB-ALPHA does not fit the data well and significantly overestimates the uncertainty.
• For BBB, BB-ALPHA, MVG, we helped with the optimization by initializing the variational parameters with the empirical mean of HMC samples. Figure 10 and Figure 11 summarizes the posterior predictive distribution for those models on regression tasks. Overall, the results are similar to Figure 4 and Figure 3.
D. Exploration of Structure in the HMC Posterior
We investigated the structure of the HMC posterior as it is critical to understand the types of dependencies among the weights. We found that for regression task 1, the marginal distribution of HMC samples is close to a normal distribution, as suggested by Figure 12. We thus approximated the posterior with a multivariate Gaussian distribution $q \sim \mathcal{N}(\hat{\mu}, \hat{\Sigma})$, where $\hat{\mu}$ and $\hat{\Sigma}$ are estimated from the HMC samples as
$\hat{\mu} = \frac{1}{S}\sum_{i=1}^{S} w_i, \qquad \hat{\Sigma} = \frac{1}{S-1}\sum_{i=1}^{S} (w_i - \hat{\mu})(w_i - \hat{\mu})^\top,$
and $w_i$ denotes the $i$th HMC sample. Figure 13 shows that such an approximation is not sufficient to capture the dependencies among weights, as both the posterior mean and the posterior variance differ considerably from the ground truth. The experiment suggests that there may be higher-moment correlations in the weight space. In the future, we plan to investigate what types of dependencies exist in the true weight space.
E. Hyperparameter Settings
• Hamiltonian Monte Carlo (HMC): initial stepsize of 2 × 10⁻³. The acceptance rate α is checked every 100 iterations; the stepsize is increased by a factor of 1.1 if α > 0.8 or decreased by a factor of 0.9 if α < 0.2. We used 50K iterations with a burn-in of 40K and a thinning interval of 20. Convergence is verified through trace plots and autocorrelation of the weights.
• Probabilistic Backpropagation (PBP): There are no hyperparameters to tune. We randomized the order of the data before each data sweep. The code is adapted from https://github.com/HIPS/Probabilistic-Backpropagation.
• Bayes by Hypernet (BBH): the optimal settings are given in Table 9.
Table 9. Optimal hyperparameters for BBH ([64, 64], [10], [10]).
• Dropout:
We implement Dropout ourselves, which is essentially identical to the code provided in https://github.com/yaringal/DropoutUncertaintyExps. We tested learning rate ∈ {0.001, 0.005, 0.01, 0.05, 0.1} and Bernoulli dropout rate γ ∈ {0.005, 0.01, 0.05}. For regression tasks, the regularization term λ is set as the noise of the corresponding task. For classification tasks, λ = 0.5.
Table 10. Optimal hyperparameters for Dropout.
                Reg 1    Reg 2    Class 1    Class 2
learning rate   0.05     0.05     0.005      0.01
γ               0.005    0.01     0.005      0.005
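For illustration, here is a hedged Keras sketch of the MC-dropout predictive procedure (Gal & Ghahramani, 2016): dropout is kept active at test time and the stochastic forward passes are treated as posterior predictive samples. The layer sizes mirror the regression setup described earlier (one hidden layer of 50 ReLU units) and the dropout rate is taken from Table 10, but this is not the implementation used in the paper.

```python
# MC-dropout prediction sketch: stochastic forward passes with dropout enabled.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dropout(0.005),     # gamma for regression task 1 (Table 10)
    tf.keras.layers.Dense(1),
])
# Assumed: model.compile(...) and model.fit(...) have been run with the optimal
# learning rate and the L2 term lambda described above.

def mc_dropout_predict(model, x, num_samples=500):
    # training=True keeps dropout active, so each pass uses a fresh dropout mask;
    # the stack of outputs has shape (num_samples, N, 1).
    return np.stack([model(x, training=True).numpy() for _ in range(num_samples)])
```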
• Ensemble: We implement Ensemble ourselves. We tested learning rate ∈ {0.001, 0.005, 0.01, 0.05, 0.1}. For regression tasks, the regularization term λ is set as the noise of the corresponding task. For classification tasks, λ = 0.5. The regularization term is chosen so that minimizing the objective function corresponds to maximizing the posterior. We collected 500 prediction samples from 500 random restarts.
Table 11. Optimal hyperparameters for Ensemble.
                Reg 1    Reg 2    Class 1    Class 2
learning rate   0.05     0.005    0.1        0.1
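A rough sketch of the ensemble procedure is given below, using scikit-learn's MLPRegressor purely for illustration; the paper trains its own networks with an L2 penalty tied to the output noise, so the regularization constant and member count here are assumptions.

```python
# Ensemble-from-random-restarts sketch: independently trained networks whose
# predictions are pooled as samples of the predictive distribution.
import numpy as np
from sklearn.neural_network import MLPRegressor

def ensemble_predict(X_train, y_train, X_test, n_members=50, alpha=0.5):
    preds = []
    for seed in range(n_members):
        net = MLPRegressor(hidden_layer_sizes=(50,), activation="relu",
                           alpha=alpha, random_state=seed, max_iter=2000)
        net.fit(X_train, y_train)
        preds.append(net.predict(X_test))
    return np.stack(preds)      # shape (n_members, N): predictive samples
```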
• Stochastic Gradient Langevin Dynamics (SGLD):
We implement SGLD ourselves. We set the batch size to be 32. We tested learning rate ∈ {0.0005, 0.001, 0.005, 0.01}. We used 500K iterations and a burnin of 450K and a thinning of interval 100.
Table 12. Optimal hyperparameters for SGLD.
                Reg 1    Reg 2    Class 1    Class 2
learning rate   0.001    0.001    0.01       0.01
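For reference, a minimal numpy sketch of a single SGLD update (Welling & Teh, 2011) is shown below: a mini-batch stochastic gradient step on the log posterior plus injected Gaussian noise whose variance equals the stepsize. The gradient functions are assumed to be supplied by the caller, and all names are illustrative.

```python
# One SGLD parameter update on a flattened weight vector theta.
import numpy as np

def sgld_step(theta, x_batch, y_batch, n_total, stepsize,
              grad_log_prior, grad_log_lik, rng):
    # Unbiased mini-batch estimate of the full-data gradient of the log posterior.
    scale = n_total / x_batch.shape[0]
    grad = grad_log_prior(theta) + scale * grad_log_lik(theta, x_batch, y_batch)
    noise = rng.normal(scale=np.sqrt(stepsize), size=theta.shape)
    return theta + 0.5 * stepsize * grad + noise
```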
• Stochastic Gradient Hamiltonian Monte Carlo (SGHMC): We implement SGHMC ourselves. We set the batch size to be 32. The momentum variable is sampled from N(0, I). L = 100 leapfrog steps are used and we tested stepsize ∈ {0.001, 0.002, 0.005}. We used stepsize = 0.002 for all tasks. We used the friction term C = 10I and B̂ = 0. We used 50K iterations with a burn-in of 40K and a thinning interval of 20.
| 3,100 |
1906.09686
|
2952816888
|
Bayesian Neural Networks (BNNs) place priors over the parameters in a neural network. Inference in BNNs, however, is difficult; all inference methods for BNNs are approximate. In this work, we empirically compare the quality of predictive uncertainty estimates for 10 common inference methods on both regression and classification tasks. Our experiments demonstrate that commonly used metrics (e.g. test log-likelihood) can be misleading. Our experiments also indicate that inference innovations designed to capture structure in the posterior do not necessarily produce high quality posterior approximations.
|
As a result, much effort has been spent on variational methods. Mean Field Variational Bayes for BNNs was introduced in @cite_3 , the gradient computation of which was later improved in Bayes by Backprop (BBB) . However, the fully factorized Gaussian variational family used in BBB is unable to capture correlation amongst the parameters in the posterior. In contrast, Matrix Gaussian Posteriors (MVG) @cite_2 , Multiplicative Normalizing Flows (MNF) @cite_8 , and Bayes by Hypernet (BBH) @cite_7 are explicitly designed to capture posterior correlation by imposing structured approximation families; works like Black Box @math -Divergence @cite_1 and Probabilistic Backpropagation (PBP) use a richer family of divergence measures, encouraging approximate posteriors to capture important properties of true posterior distributions.
|
{
"abstract": [
"We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning.",
"We reinterpret multiplicative noise in neural networks as auxiliary random variables that augment the approximate posterior in a variational setting for Bayesian neural networks. We show that through this interpretation it is both efficient and straightforward to improve the approximation by employing normalizing flows while still allowing for local reparametrizations and a tractable lower bound. In experiments we show that with this new approximation we can significantly improve upon classical mean field for Bayesian neural networks on both predictive accuracy as well as predictive uncertainty.",
"",
"Variational methods have been previously explored as a tractable approximation to Bayesian inference for neural networks. However the approaches proposed so far have only been applicable to a few simple network architectures. This paper introduces an easy-to-implement stochastic variational method (or equivalently, minimum description length loss function) that can be applied to most neural networks. Along the way it revisits several common regularisers from a variational perspective. It also provides a simple pruning heuristic that can both drastically reduce the number of network weights and lead to improved generalisation. Experimental results are provided for a hierarchical multidimensional recurrent neural network applied to the TIMIT speech corpus.",
"We introduce a variational Bayesian neural network where the parameters are governed via a probability distribution on random matrices. Specifically, we employ a matrix variate Gaussian (Gupta & Nagar, 1999) parameter posterior distribution where we explicitly model the covariance among the input and output dimensions of each layer. Furthermore, with approximate covariance matrices we can achieve a more efficient way to represent those correlations that is also cheaper than fully factorized parameter posteriors. We further show that with the \"local reprarametrization trick\" (, 2015) on this posterior distribution we arrive at a Gaussian Process (Rasmussen, 2006) interpretation of the hidden units in each layer and we, similarly with (Gal & Ghahramani, 2015), provide connections with deep Gaussian processes. We continue in taking advantage of this duality and incorporate \"pseudo-data\" (Snelson & Ghahramani, 2005) in our model, which in turn allows for more efficient posterior sampling while maintaining the properties of the original model. The validity of the proposed approach is verified through extensive experiments."
],
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_3",
"@cite_2"
],
"mid": [
"2951266961",
"2950471516",
"",
"2108677974",
"2302053044"
]
}
|
Quality of Uncertainty Quantification for Bayesian Neural Network Inference
|
While deep learning provides a flexible framework for function approximation that achieves impressive performance on many real-life tasks (LeCun et al., 2015), there has been a recent focus on providing predictive uncertainty estimates for deep models, making them better suited for use in risk-sensitive applications. Bayesian neural networks (BNNs) are neural network models that include uncertainty through priors on network weights, and thus provide uncertainty about the functional mean through posterior predictive distributions (MacKay, 1992;Neal, 2012). (Note: one can also place priors directly on functions rather than network weights (Sun et al., 2019); in this work, we focus on the more commonly used approach of placing priors over weights.) Unfortunately, characterizing uncertainty over parameters of neural networks is challenging due to the high dimensionality of the weight space and potentially complex dependencies among the weights. Markov-chain Monte Carlo (MCMC) techniques are often slow to mix. Standard variational inference methods with mean field approximations may struggle to escape local optima and, furthermore, are unable to capture dependencies between the weights.
There exists a large body of work to improve the quality of inference for Bayesian neural networks (BNNs) by improving the approximate inference procedure (e.g. Graves 2011;Blundell et al. 2015;Hernández-Lobato et al. 2016, to name a few), or by improving the flexibility of the variational approximation for variational inference (e.g. Gershman et al. 2012;Ranganath et al. 2016;Louizos & Welling 2017;Miller et al. 2017). On the other hand, a number of frequentist approaches, like ensemble methods (Lakshminarayanan et al., 2017;Pearce et al., 2018;Tagasovska & Lopez-Paz, 2018), provide predictive uncertainty estimates for neural networks while by-passing the challenges of Bayesian inference altogether.
The objective of this work is to provide an empirical comparison of common BNN inference approaches with a focus on the quality of uncertainty quantification. We perform a careful empirical comparison of 8 state-of-the-art approximate inference methods and 2 non-Bayesian frameworks, where we find that performance depends heavily on the training data. We characterize situations where metrics like log-likelihood and RMSE fail to distinguish good vs poor approximations of the true posterior, and, based on our observations, engineer synthetic datasets for comparing the predictive uncertainty estimates.
Related Works
In the literature, posteriors for Bayesian Neural Network models obtained by Hamiltonian Monte Carlo (HMC) (Neal, 2012) are frequently used as ground truth. However, HMC scales poorly on high-dimensional parameter spaces and large datasets (Welling & Teh, 2011;Chen et al., 2014). Mini-batched versions of HMC, such as Stochastic Gradient Langevin Dynamics (SGLD) (Welling & Teh, 2011) and Stochastic Gradient HMC (Chen et al., 2014), have been introduced to address the issue of scalability. However, these methods still suffer from lower mixing rates and are not theoretically guaranteed to converge to the true posterior when model assumptions are not met (e.g. when the true model of the gradient noise is not well-estimated).
As a result, much effort has been spent on variational methods. Mean Field Variational Bayes for BNNs was introduced in (Graves, 2011), the gradient computation of which was later improved in Bayes by Backprop (BBB) (Blundell et al., 2015). However, the fully factorized Gaussian variational family used in BBB is unable to capture correlation amongst the parameters in the posterior. In contrast, Matrix Gaussian Posteriors (MVG) (Louizos & Welling, 2016), Multiplicative Normalizing Flows (MNF) (Louizos & Welling, 2017), and Bayes by Hypernet (BBH) (Pawlowski et al., 2017) are explicitly designed to capture posterior correlation by imposing structured approximation families; works like Black Box α-Divergence (Hernández-Lobato et al., 2016) and Probabilistic Backpropagation (PBP) (Hernández-Lobato & Adams, 2015) use a richer family of divergence measures, encouraging approximate posteriors to capture important properties of true posterior distributions.
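For concreteness, the fully factorized Gaussian family used in BBB can be illustrated with a short sketch. The following is a hypothetical, minimal example (not the code of any of the cited methods) of drawing a layer's weights with the reparametrization trick and computing the KL term against an N(0, I) prior:

```python
import numpy as np

def sample_weights(mu, rho, rng):
    """Reparametrized sample from a fully factorized Gaussian posterior.
    The standard deviation is parametrized as softplus(rho) to keep it positive."""
    sigma = np.log1p(np.exp(rho))
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

def kl_to_standard_normal(mu, rho):
    """KL( N(mu, sigma^2) || N(0, 1) ), summed over all weights of the layer."""
    sigma2 = np.log1p(np.exp(rho)) ** 2
    return 0.5 * np.sum(sigma2 + mu ** 2 - 1.0 - np.log(sigma2))

rng = np.random.default_rng(0)
mu, rho = np.zeros((1, 50)), -3.0 * np.ones((1, 50))  # variational parameters of one layer
w = sample_weights(mu, rho, rng)
print(w.shape, kl_to_standard_normal(mu, rho))
```

A mean-field posterior of this form treats every weight independently, which is exactly the limitation that structured approximations such as MVG, MNF, and BBH try to address.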
Finally, Dropout (Gal & Ghahramani, 2016) and ensemble methods (Lakshminarayanan et al., 2017;Pearce et al., 2018;Tagasovska & Lopez-Paz, 2018) by-pass the difficulties of performing Bayesian inference and obtain predictive uncertainty estimates through implicitly or explicitly training multiple models on the same data.
While there are numerous inference methods, there have been few exhaustive, independent comparisons (Myshkov & Julier, 2016;Zhao & Ji, 2018). In (Myshkov & Julier, 2016), BBB, PBP, and Dropout are compared with minibatched HMC on regression. The evaluation metrics consist of RMSE and divergence from HMC posteriors (considered as ground truth). In (Zhao & Ji, 2018), BBB and Dropout are compared with HMC and SGHMC on classification. Accuracy and calibration (how well predictive uncertainty aligns with empirical uncertainty) of the posterior predictive distribution are analyzed. Neither work indicates when predictive accuracy and calibration correspond to the fidelity of posterior approximation.
In this work, we provide a comparison of a wide range of inference methods on both regression and classification tasks. Furthermore, we investigate the usefulness of metrics for posterior predictive generalization and calibration for measuring the fidelity of posterior approximations. In particular, we identify situations in which these metrics are poor proxies for measuring divergence from true posteriors.
Challenges in Evaluating Uncertainty
Frequently in the literature, high test log-likelihood is used as evidence that the inference procedure has more faithfully captured the true posterior. However, here we argue that while test log-likelihood may be a good criterion for model selection, it is not a reliable criterion for determining how well an approximate posterior aligns with the true posterior.
Consider the example in Figure 1. The training data has a 'gap', namely there are no samples from [−1, 1]. We see that the posterior predictive means of the true posterior (i.e. the ground truth), as given by HMC (details in Section 5), and that obtained by PBP are identical. However, the PBP posterior predictive uncertainty is far smaller. The average test log-likelihood for data evenly spaced in [−4, 4] is -0.25 for PBP and -0.42 for HMC. In this case, the better number does not indicate a better model class (e.g. a prior p(W ) that appropriately puts more weight where the data lie). Rather, it is an artifact of the fact that the data happens to lie where an incorrect inference procedure put more mass. In short, the average test log-likelihood indicates that the approximate posterior predictive better aligns with the data and not that it is a faithful approximation of the true posterior predictive.
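To make the role of this metric concrete, the average test log-likelihood in a regression setting like this one is typically estimated by Monte Carlo over posterior samples. The sketch below is our own illustration (assuming a Gaussian observation model and an array f_samples of posterior function draws, not the authors' evaluation code); it shows how a narrow predictive that happens to cover the test targets can score higher than a broader ground-truth predictive:

```python
import numpy as np
from scipy.special import logsumexp

def avg_test_loglik(y_test, f_samples, noise_std):
    """Average marginal log-likelihood under a Gaussian observation model.
    f_samples: shape (S, N), S posterior function samples at the N test inputs."""
    S = f_samples.shape[0]
    log_p = (-0.5 * ((y_test - f_samples) / noise_std) ** 2
             - np.log(noise_std) - 0.5 * np.log(2 * np.pi))   # shape (S, N)
    # log (1/S) sum_s p(y_n | x_n, W_s), averaged over the N test points
    return np.mean(logsumexp(log_p, axis=0) - np.log(S))
```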
For the same reason, RMSE and other metrics for measuring predictive calibration (such as Prediction Interval Coverage Probability) are also unreliable indicators of the degree to which approximate posteriors align with the true ones. In this paper, we argue that issues of model selection should be addressed separately from issues associated with the approximation gaps of inference. For this, we engineer synthetic datasets on which our ground-truth BNN model produces well-calibrated posterior predictive distributions and hence generalization and calibration metrics are proxies for how well a given inference method captures the true posterior.
Experimental Set Up
Data Sets. We perform experiments on univariate regression and two-dimensional binary classification tasks so that the ground truth distributions can be visualized. For each task, we consider two synthetic datasets. In one of these datasets, the a priori model uncertainty will be higher than the variation in the data warrants, whereas in the other dataset the data variation will match the a priori model uncertainty. Data generation details are in Appendix A.
Ground Truth Baselines. We use Hamiltonian Monte Carlo (HMC) (Neal, 2012) to construct 'ground-truth' posterior and posterior predictive distributions. We run HMC for 50k iterations with 100 leapfrog steps and check for mixing. See Appendix E for full description.
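As a rough illustration of how the ground-truth sampler proceeds, a single HMC transition with a given number of leapfrog steps can be sketched as below. This is a simplified, hypothetical implementation (it assumes a flat parameter vector and user-supplied log_post and grad_log_post functions), not the exact code used for the experiments:

```python
import numpy as np

def hmc_step(w, log_post, grad_log_post, step_size, n_leapfrog, rng):
    """One HMC transition: simulate Hamiltonian dynamics, then Metropolis-accept."""
    p = rng.standard_normal(w.shape)                    # momentum ~ N(0, I)
    w_new, p_new = w.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_post(w_new)     # half step for momentum
    for _ in range(n_leapfrog - 1):
        w_new += step_size * p_new                      # full step for position
        p_new += step_size * grad_log_post(w_new)       # full step for momentum
    w_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_post(w_new)     # final half step
    log_accept = (log_post(w_new) - 0.5 * p_new @ p_new) - (log_post(w) - 0.5 * p @ p)
    if np.log(rng.uniform()) < log_accept:
        return w_new, True
    return w, False
```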
Methods. We evaluate 10 inference methods: Bayes by Backprop (BBB), Probabilistic Backpropagation (PBP), Black-box α-Divergence (BB-ALPHA), Multiplicative Normalizing Flows (MNF), Matrix-Variate Gaussian (MVG), Bayes by Hypernet (BBH), Dropout, Ensemble, Stochastic Gradient Langevin Dynamics (SGLD), and Stochastic Gradient HMC (SGHMC). We do not evaluate PBP and BB-ALPHA on classification tasks as they assume exponential-family likelihood distributions. All optimization is done with Adam except for HMC, SGLD and SGHMC, which have their own scheduled gradient updates. We use existing code-bases for methods when available (BB-ALPHA, MVG, BBH, MNF). Full description of tuning schemes is in Appendix E.
Experimental Parameters. For all tasks, we use neural networks with ReLU nonlinearities. We use 1 hidden layer with 50 hidden nodes for regression and 2 hidden layers with 10 nodes each for classification. Every method is run with 20 random restarts, each until convergence, using a fixed weight prior W ∼ N (0, I) and the true output noise. For methods that include priors on the output noise, we disable these in the experiments. Out of the 20 restarts, we select the solution with the highest validation log-likelihood and estimate the posterior predictive distributions with 500 posterior samples (results given by selection by ELBO are indistinguishable and are in Appendix 6). Full description in Appendix E.
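For reference, the regression network used throughout (one hidden layer of 50 ReLU units, with weights drawn from the N(0, I) prior) can be sketched as follows; this is an illustrative, hypothetical implementation rather than the authors' code:

```python
import numpy as np

def init_from_prior(rng, d_in=1, d_hidden=50, d_out=1):
    """Draw all weights and biases of the regression network from the N(0, I) prior."""
    return {
        "W1": rng.standard_normal((d_in, d_hidden)),
        "b1": rng.standard_normal(d_hidden),
        "W2": rng.standard_normal((d_hidden, d_out)),
        "b2": rng.standard_normal(d_out),
    }

def forward(params, x):
    """Forward pass of the one-hidden-layer ReLU network; x has shape (N, d_in)."""
    h = np.maximum(0.0, x @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]
```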
Evaluation Metrics. Evaluating the fidelity of posterior approximations is challenging. As a result, in the BNN literature, accuracy, average marginal log-likelihood, and frequentist metrics such as the Prediction Interval Coverage Probability (PICP), the percentage of observations for which the ground truth y lies within a 95% predictive interval (PI) of the learned model, and the average width of the 95% PI (MPIW) are commonly used as indicators of the quality of posterior approximation (full description of metrics in Appendix B). Our experiments provide insights on when these metrics correspond with high quality posterior approximation and when they do not.
Results
Generalization and calibration metrics are not reliable indicators of the quality of posterior approximation. Figures 1 and 2 show that when the model class has large flexibility for the data, the ground truth posterior predictive may have lower log-likelihood and calibration scores than a poor approximation. In Figure 1, we see that most inference methods, though they underestimate the uncertainty, still produce high log-likelihood because the predictive mean aligns well with the true function. But HMC gets penalized for giving large uncertainty in the middle, due to the flexibility of the model class. On the other hand, when the model class has the right capacity for the data, posterior predictive generalization and calibration are good but not definitive indicators of the quality of posterior approximation (Figure 3, Table 1). This is especially concerning for high-dimensional or large datasets on which ground truth distributions are hard to compute and appropriate model capacity is hard to ascertain. Here, generalization/calibration metrics often conflate bad models with bad inference. We note that evaluations of uncertainty estimates based on active learning will struggle similarly in distinguishing model and inference issues.
Inference methods designed to capture structure in the posterior do not necessarily produce better approximations of the true posterior. In our experiments, we do not see that methods using a richer divergence metric or a structured variational family are able to better capture the ground truth posterior. This is likely due to the fact that the true posteriors in our experiments lack the patterns of dependencies that those inference methods aim to capture (Appendix D). However, this observation indicates the need for developing concrete guidelines for when it is beneficial to use alternative divergence metrics and structured variational families, since the extra flexibility of these methods often invites additional optimization challenges on real data.
[Caption fragment in the source: ... (Figure 1), generalization metrics combined with calibration metrics give a reasonable indication of the quality of posterior approximation (HMC scores highest). However, even here these metrics do not entirely capture our intuition for quality of fit (for example, the test log-likelihood of BB-ALPHA is higher than that of Ensemble).]
Ensemble methods do not consistently produce the types of uncertainty estimates we want. Methods using an ensemble (whether explicit or implicit) of models to produce predictive distributions rely on model diversity to produce accurate uncertainty estimates. Ensemble methods may produce similar solutions due to initialization or optimization issues. When the ensemble includes many dissimilar plausible models for the data (Figure 3), the uncertainty estimate can be good; when ensemble training finds local optima with highly similar models for the data, the uncertainty estimates can be poor (Figure 7). Thus, uncertainty estimates from ensembles can be unreliable absent a structured way of including diversity training objectives.
SGHMC produces posterior predictives that are most similar to that of HMC. In our experiments, we see that SGLD drastically underestimates posterior predictive uncertainty. SGHMC, while tending to overestimate uncertainty, produces predictive distributions qualitatively similar to those of HMC.
Conclusion
In this paper, we compare 10 commonly used approximate inference procedures for Bayesian Neural Networks. Frequently, measurements of generalization and calibration of the posterior predictive are used to evaluate the quality of inference. We show that these metrics conflate issues of model selection with those of inference. On our data, we see that approximate Bayesian inference methods struggle to capture true posteriors and the non-Bayesian methods often do not capture the type of predictive uncertainty that we want. Our experiments show that we need more exhaustive and
B. Evaluation Metrics
The average marginal log-likelihood is computed as:
$$\mathbb{E}_{(x_n, y_n) \sim \mathcal{D}}\, \mathbb{E}_{q(W)}\left[ p(y_n \mid x_n, W) \right]. \quad (2)$$
The predictive RMSE is computed as:
$$\sqrt{\frac{1}{N} \sum_{n=1}^{N} \left\| y_n - \mathbb{E}_{q(W)}\left[ f(x_n, W) \right] \right\|_2^2}. \quad (3)$$
The Prediction Interval Coverage Probability (PICP) is computed as:
$$\frac{1}{N} \sum_{n=1}^{N} \mathbb{1}_{\, y_n \le y_n^{\text{high}}} \cdot \mathbb{1}_{\, y_n \ge y_n^{\text{low}}}, \quad (4)$$
and the Mean Prediction Interval Width (MPIW) is computed as:
$$\frac{1}{N} \sum_{n=1}^{N} \left( y_n^{\text{high}} - y_n^{\text{low}} \right), \quad (5)$$
where $y_n^{\text{high}}$ is the 97.5th percentile and $y_n^{\text{low}}$ is the 2.5th percentile of the predicted outputs for $x_n$. We want models to have PICP values close to 95% while minimizing the MPIW, thus formalizing our desideratum that well-calibrated posterior predictive uncertainty should be both necessary and sufficient to capture the variation in the data.
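Under these definitions (and assuming S posterior predictive samples per test point), the interval-based metrics and the RMSE can be computed as in the following sketch; this is our own illustration rather than the evaluation code used in the paper:

```python
import numpy as np

def rmse(y, f_samples):
    """Predictive RMSE using the posterior predictive mean E_q[f(x_n, W)]."""
    return np.sqrt(np.mean((y - f_samples.mean(axis=0)) ** 2))

def picp_mpiw(y, y_samples):
    """PICP and MPIW from predictive samples y_samples of shape (S, N)."""
    lo = np.percentile(y_samples, 2.5, axis=0)
    hi = np.percentile(y_samples, 97.5, axis=0)
    picp = np.mean((y >= lo) & (y <= hi))   # fraction of targets inside the 95% interval
    mpiw = np.mean(hi - lo)                 # average width of the 95% interval
    return picp, mpiw
```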
C. Additional Results
• Figure 4 represents the posterior predictive distribution and Table 2 summarizes the metrics for all inference methods of regression task 1.
• Figure 5 is the complete plot of posterior predictive distribution for all inference methods of regression task 2.
• Figure 6 summarizes the posterior predictive distribution and Table 3 shows the metrics for all inference methods of classification task 1.
• Figure 7 summarizes the posterior predictive distribution and Table 4 shows the metrics for all inference methods of classification task 2.
• For BBB, MNF, MVG, BBH, we ran additional experiments using the KL divergence as the model selection criterion instead of the log-likelihood. A smaller KL divergence suggests that the approximated posterior is more similar to the true posterior. For BB-ALPHA, the measurements are not comparable when α is different. We fixed α to be 0.3 (chosen by cross validation based on test log-likelihood) and selected the run with the smallest α-divergence. Figure 8 and Figure 9 summarize the posterior predictive distributions for those models on the regression tasks. Overall, those methods still do not produce satisfying approximations of the true posterior. Also, BB-ALPHA does not fit the data well and significantly overestimates the uncertainty.
• For BBB, BB-ALPHA, MVG, we helped with the optimization by initializing the variational parameters with the empirical mean of the HMC samples. Figure 10 and Figure 11 summarize the posterior predictive distributions for those models on the regression tasks. Overall, the results are similar to Figure 4 and Figure 3.
D. Exploration of Structure in the HMC Posterior
We investigated the structure of the HMC posterior as it is critical to understand the types of dependencies among the weights. We found that for regression task 1, the marginal distribution of HMC samples is close to a normal distribution, as suggested by Figure 12. We thus approximated the posterior with a multivariate Gaussian distribution $q \sim \mathcal{N}(\mu, \Sigma)$, where $\mu$ and $\Sigma$ are approximated with
$$\hat{\mu} = \frac{1}{S} \sum_{i=1}^{S} w_i, \qquad \hat{\Sigma} = \frac{1}{S-1} \sum_{i=1}^{S} (w_i - \hat{\mu})(w_i - \hat{\mu})^{\top},$$
respectively, where $w_i$ denotes the $i$-th of the $S$ HMC samples. Figure 13 shows that such an approximation is not sufficient to capture the dependencies among the weights, as both the posterior mean and the posterior variance are very different from the ground truth. The experiment suggests that there may be higher-moment correlations in the weight space. In the future, we intend to investigate what types of dependencies exist in the true weight space ...
E. Hyperparameter Settings
• Hamiltonian Monte Carlo (HMC): ... an initial stepsize of $2 \times 10^{-3}$. The acceptance rate α is checked every 100 iterations; the stepsize is increased by a factor of 1.1 if α > 0.8 or decreased by a factor of 0.9 if α < 0.2. We used 50K iterations with a burn-in of 40K and a thinning interval of 20. Convergence is verified through trace plots and autocorrelation of the weights.
• Probabilistic Backpropagation (PBP): There are no hyperparameters to tune. We randomized the order of the data before each data sweep. The code is adapted from https://github.com/HIPS/Probabilistic-Backpropagation.
Table 9. Optimal hyperparameter for BBH.
• Dropout:
We implement Dropout ourselves, which is essentially identical to the code provided in https://github.com/yaringal/DropoutUncertaintyExps. We tested learning rate ∈ {0.001, 0.005, 0.01, 0.05, 0.1} and Bernoulli dropout rate γ ∈ {0.005, 0.01, 0.05}. For regression tasks, the regularization term λ is set as the noise of the corresponding task. For classification tasks, λ = 0.5.
Table 10. Optimal hyperparameter for Dropout.
                 Reg 1    Reg 2    Class 1    Class 2
learning rate    0.05     0.05     0.005      0.01
γ                0.005    0.01     0.005      0.005
• Ensemble: We implement Ensemble ourselves. We tested learning rate ∈ {0.001, 0.005, 0.01, 0.05, 0.1}. For regression tasks, the regularization term λ is set as the noise of the corresponding task. For classification tasks, λ = 0.5. The regularization term is chosen so that minimizing the objective function corresponds to maximizing the posterior. We collected 500 prediction samples from 500 random restarts.
Table 11. Optimal hyperparameter for Ensemble.
                 Reg 1    Reg 2    Class 1    Class 2
learning rate    0.05     0.005    0.1        0.1
• Stochastic Gradient Langevin Dynamics (SGLD):
We implement SGLD ourselves. We set the batch size to be 32. We tested learning rate ∈ {0.0005, 0.001, 0.005, 0.01}. We used 500K iterations with a burn-in of 450K and a thinning interval of 100.
Table 12. Optimal hyperparameter for SGLD.
                 Reg 1    Reg 2    Class 1    Class 2
learning rate    0.001    0.001    0.01       0.01
• Stochastic Gradient Hamiltonian Monte Carlo (SGHMC): We implement SGHMC ourselves. We set the batch size to be 32. The momentum variable is sampled from $\mathcal{N}(0, I)$. L = 100 leapfrog steps are used and we tested stepsize ∈ {0.001, 0.002, 0.005}. We used stepsize = 0.002 for all tasks. We used the friction term $C = 10I$ and $\hat{B} = 0$. We used 50K iterations with a burn-in of 40K and a thinning interval of 20.
| 3,100 |
1906.09686
|
2952816888
|
Bayesian Neural Networks (BNNs) place priors over the parameters in a neural network. Inference in BNNs, however, is difficult; all inference methods for BNNs are approximate. In this work, we empirically compare the quality of predictive uncertainty estimates for 10 common inference methods on both regression and classification tasks. Our experiments demonstrate that commonly used metrics (e.g. test log-likelihood) can be misleading. Our experiments also indicate that inference innovations designed to capture structure in the posterior do not necessarily produce high quality posterior approximations.
|
Finally, Dropout @cite_15 and ensemble methods @cite_10 @cite_9 @cite_14 by-pass the difficulties of performing Bayesian inference and obtain predictive uncertainty estimates through implicitly or explicitly training multiple models on the same data.
|
{
"abstract": [
"Understanding the uncertainty of a neural network's (NN) predictions is essential for many applications. The Bayesian framework provides a principled approach to this, however applying it to NNs is challenging due to the large number of parameters and data. Ensembling NNs provides an easily implementable, scalable method for uncertainty quantification, however, it has been criticised for not being Bayesian. In this work we propose one modification to the usual ensembling process that does result in Bayesian behaviour: regularising parameters about values drawn from a prior distribution. We provide theoretical support for this procedure as well as empirical evaluations on regression, image classification, and reinforcement learning problems.",
"Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs - extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and nonlinearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.",
"We provide frequentist estimates of aleatoric and epistemic uncertainty for deep neural networks. To estimate aleatoric uncertainty we propose simultaneous quantile regression, a loss function to learn all the conditional quantiles of a given target variable. These quantiles lead to well-calibrated prediction intervals. To estimate epistemic uncertainty we propose training certificates, a collection of diverse non-trivial functions that map all training samples to zero. These certificates map out-of-distribution examples to non-zero values, signaling high epistemic uncertainty. We compare our proposals to prior art in various experiments.",
"Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet."
],
"cite_N": [
"@cite_9",
"@cite_15",
"@cite_14",
"@cite_10"
],
"mid": [
"2896860804",
"2964059111",
"2898622970",
"2963238274"
]
}
|
Quality of Uncertainty Quantification for Bayesian Neural Network Inference
|
While deep learning provides a flexible framework for function approximation that achieves impressive performance on many real-life tasks (LeCun et al., 2015), there has been a recent focus on providing predictive uncertainty estimates for deep models, making them better suited for use in risk-sensitive applications. Bayesian neural networks (BNNs) are neural network models that include uncertainty through priors on network weights, and thus provide uncertainty about the functional mean through posterior predictive distributions (MacKay, 1992;Neal, 2012). (Note: one can also place priors directly on functions rather than network weights (Sun et al., 2019); in this work, we focus on the more commonly used approach of placing priors over weights.) Unfortunately, characterizing uncertainty over parameters of neural networks is challenging due to the high dimensionality of the weight space and potentially complex dependencies among the weights. Markov-chain Monte Carlo (MCMC) techniques are often slow to mix. Standard variational inference methods with mean field approximations may struggle to escape local optima and, furthermore, are unable to capture dependencies between the weights.
There exists a large body of work to improve the quality of inference for Bayesian neural networks (BNNs) by improving the approximate inference procedure (e.g. Graves 2011;Blundell et al. 2015;Hernández-Lobato et al. 2016, to name a few), or by improving the flexibility of the variational approximation for variational inference (e.g. Gershman et al. 2012;Ranganath et al. 2016;Louizos & Welling 2017;Miller et al. 2017). On the other hand, a number of frequentist approaches, like ensemble methods (Lakshminarayanan et al., 2017;Pearce et al., 2018;Tagasovska & Lopez-Paz, 2018), provide predictive uncertainty estimates for neural networks while bypassing the challenges of Bayesian inference altogether.
The objective of this work is to provide an empirical comparison of common BNN inference approaches with a focus on the quality of uncertainty quantification. We perform a careful empirical comparison of 8 state-of-the-art approximate inference methods and 2 non-Bayesian frameworks, where we find that performance depends heavily on the training data. We characterize situations where metrics like log-likelihood and RMSE fail to distinguish good vs poor approximations of the true posterior, and, based on our observations, engineer synthetic datasets for comparing the predictive uncertainty estimates.
Related Works
In the literature, posteriors for Bayesian Neural Network models obtained by Hamiltonian Monte Carlo (HMC) (Neal, 2012) are frequently used as ground truth. However, HMC scales poorly on high dimensional parameter spaces and large datasets (Welling & Teh, 2011;Chen et al., 2014). Mini-batched versions of HMC, such as Stochastic Gradient Langevin Dynamics (SGLD) (Welling & Teh, 2011) and Stochastic Gradient HMC (Chen et al., 2014), have been introduced to address the issue of scalability. However, these methods still suffer from slow mixing and are not theoretically guaranteed to converge to the true posterior when model assumptions are not met (e.g. when the true model of the gradient noise is not well-estimated).
As a result, much effort has been spent on variational methods. Mean Field Variational Bayes for BNNs was introduced in (Graves, 2011), the gradient computation of which was later improved in Bayes by Backprop (BBB) (Blundell et al., 2015). However, the fully factorized Gaussian variational family used in BBB is unable to capture correlation amongst the parameters in the posterior. In contrast, Matrix Gaussian Posteriors (MVG) (Louizos & Welling, 2016), Multiplicative Normalizing Flows (MNF) (Louizos & Welling, 2017), and Bayes by Hypernet (BBH) (Pawlowski et al., 2017) are explicitly designed to capture posterior correlation by imposing structured approximation families; works like Black Box α-Divergence (Hernández-Lobato et al., 2016) and Probabilistic Backpropagation (PBP) (Hernández-Lobato & Adams, 2015) use a richer family of divergence measures, encouraging approximate posteriors to capture important properties of true posterior distributions.
Finally, Dropout (Gal & Ghahramani, 2016) and ensemble methods (Lakshminarayanan et al., 2017;Pearce et al., 2018;Tagasovska & Lopez-Paz, 2018) by-pass the difficulties of performing Bayesian inference and obtain predictive uncertainty estimates through implicitly or explicitly training multiple models on the same data.
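As an illustration of how these non-Bayesian baselines turn repeated stochastic forward passes into a predictive distribution, the following hypothetical sketch (our own, not from any of the cited implementations) forms a predictive mean and variance from a stack of predictions; it covers both MC dropout (random masks at test time) and ensembles (one pass per independently trained member):

```python
import numpy as np

def predictive_moments(forward_passes):
    """forward_passes: array of shape (S, N) collecting S stochastic predictions
    (dropout masks or ensemble members) at N test inputs."""
    mean = forward_passes.mean(axis=0)
    var = forward_passes.var(axis=0)   # spread across passes/members
    return mean, var

# Example with made-up numbers: 5 ensemble members, 3 test points.
preds = np.array([[0.9, 1.1, 2.0],
                  [1.0, 1.0, 2.2],
                  [1.1, 0.9, 1.8],
                  [0.8, 1.2, 2.1],
                  [1.0, 1.0, 1.9]])
print(predictive_moments(preds))
```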
While there are numerous inference methods, there have been few exhaustive, independent comparisons (Myshkov & Julier, 2016;Zhao & Ji, 2018). In (Myshkov & Julier, 2016), BBB, PBP, and Dropout are compared with minibatched HMC on regression. The evaluation metrics consist of RMSE and divergence from HMC posteriors (considered as ground truth). In (Zhao & Ji, 2018), BBB and Dropout are compared with HMC and SGHMC on classification. Accuracy and calibration (how well predictive uncertainty aligns with empirical uncertainty) of the posterior predictive distribution are analyzed. Neither work indicates when predictive accuracy and calibration correspond to the fidelity of posterior approximation.
In this work, we provide a comparison of a wide range of inference methods on both regression and classification tasks. Furthermore, we investigate the usefulness of metrics for posterior predictive generalization and calibration for measuring the fidelity of posterior approximations. In particular, we identify situations in which these metrics are poor proxies for measuring divergence from true posteriors.
Challenges in Evaluating Uncertainty
Frequently in the literature, high test log-likelihood is used as evidence that the inference procedure has more faithfully captured the true posterior. However, here we argue that while test log-likelihood may be a good criterion for model selection, it is not a reliable criterion for determining how well an approximate posterior aligns with the true posterior.
Consider the example in Figure 1. The training data has a 'gap', namely there are no samples from [−1, 1]. We see that the posterior predictive means of the true posterior (i.e. the ground truth), as given by HMC (details in Section 5), and that obtained by PBP are identical. However, the PBP posterior predictive uncertainty is far smaller. The average test log-likelihood for data evenly spaced in [−4, 4] is -0.25 for PBP and -0.42 for HMC. In this case, the better number does not indicate a better model class (e.g. a prior p(W ) that appropriately puts more weight where the data lie). Rather, it is an artifact of the fact that the data happens to lie where an incorrect inference procedure put more mass. In short, the average test log-likelihood indicates that the approximate posterior predictive better aligns with the data and not that it is a faithful approximation of the true posterior predictive.
For the same reason, RMSE and other metrics for measuring predictive calibration (such as Prediction Interval Coverage Probability) are also unreliable indicators of the degree to which approximate posteriors align with the true ones. In this paper, we argue that issues of model selection should be addressed separately from issues associated with the approximation gaps of inference. For this, we engineer synthetic datasets on which our ground-truth BNN model produces well-calibrated posterior predictive distributions and hence generalization and calibration metrics are proxies for how well a given inference method captures the true posterior.
Experimental Set Up
Data Sets. We perform experiments on univariate regression and two-dimensional binary classification tasks so that the ground truth distributions can be visualized. For each task, we consider two synthetic datasets. In one of these datasets, the a priori model uncertainty will be higher than the variation in the data warrants, whereas in the other dataset the data variation will match the a priori model uncertainty. Data generation details are in Appendix A.
Ground Truth Baselines. We use Hamiltonian Monte Carlo (HMC) (Neal, 2012) to construct 'ground-truth' posterior and posterior predictive distributions. We run HMC for 50k iterations with 100 leapfrog steps and check for mixing. See Appendix E for full description.
Methods. We evaluate 10 inference methods: Bayes by Backprop (BBB), Probabilistic Backpropagation (PBP), Black-box α-Divergence (BB-ALPHA), Multiplicative Normalizing Flows (MNF), Matrix-Variate Gaussian (MVG), Bayes by Hypernet (BBH), Dropout, Ensemble, Stochastic Gradient Langevin Dynamics (SGLD), and Stochastic Gradient HMC (SGHMC). We do not evaluate PBP and BB-ALPHA on classification tasks as they assume exponential-family likelihood distributions. All optimization is done with Adam except for HMC, SGLD and SGHMC, which have their own scheduled gradient updates. We use existing code-bases for methods when available (BB-ALPHA, MVG, BBH, MNF). Full description of tuning schemes is in Appendix E.
Experimental Parameters. For all tasks, we use neural networks with ReLU nonlinearities. We use 1 hidden layer with 50 hidden nodes for regression and 2 hidden layers with 10 nodes each for classification. Every method is run with 20 random restarts, each until convergence, using a fixed weight prior W ∼ N (0, I) and the true output noise. For methods that include priors on the output noise, we disable these in the experiments. Out of the 20 restarts, we select the solution with the highest validation log-likelihood and estimate the posterior predictive distributions with 500 posterior samples (results given by selection by ELBO are indistinguishable and are in Appendix 6). Full description in Appendix E.
Evaluation Metrics. Evaluating the fidelity of posterior approximations is challenging. As a result, in the BNN literature, accuracy, average marginal log-likelihood, and frequentist metrics such as the Prediction Interval Coverage Probability (PICP), the percentage of observations for which the ground truth y lies within a 95% predictive interval (PI) of the learned model, and the average width of the 95% PI (MPIW) are commonly used as indicators of the quality of posterior approximation (full description of metrics in Appendix B). Our experiments provide insights on when these metrics correspond with high quality posterior approximation and when they do not.
Results
Generalization and calibration metrics are not reliable indicators of the quality of posterior approximation. Figures 1 and 2 show that when the model class has large flexibility for the data, the ground truth posterior predictive may have lower log-likelihood and calibration scores than a poor approximation. In Figure 1, we see that most inference methods, though they underestimate the uncertainty, still produce high log-likelihood because the predictive mean aligns well with the true function. But HMC gets penalized for giving large uncertainty in the middle, due to the flexibility of the model class. On the other hand, when the model class has the right capacity for the data, posterior predictive generalization and calibration are good but not definitive indicators of the quality of posterior approximation (Figure 3, Table 1). This is especially concerning for high-dimensional or large datasets on which ground truth distributions are hard to compute and appropriate model capacity is hard to ascertain. Here, generalization/calibration metrics often conflate bad models with bad inference. We note that evaluations of uncertainty estimates based on active learning will struggle similarly in distinguishing model and inference issues.
Inference methods designed to capture structure in the posterior do not necessarily produce better approximations of the true posterior. In our experiments, we do not see that methods using a richer divergence metric or a structured variational family are able to better capture the ground truth posterior. This is likely due to the fact that the true posteriors in our experiments lack the patterns of dependencies that those inference methods aim to capture (Appendix D). However, this observation indicates the need for developing concrete guidelines for when it is beneficial to use alternative divergence metrics and structured variational families, since the extra flexibility of these methods often invites additional optimization challenges on real data.
[Caption fragment in the source: ... (Figure 1), generalization metrics combined with calibration metrics give a reasonable indication of the quality of posterior approximation (HMC scores highest). However, even here these metrics do not entirely capture our intuition for quality of fit (for example, the test log-likelihood of BB-ALPHA is higher than that of Ensemble).]
Ensemble methods do not consistently produce the types of uncertainty estimates we want. Methods using an ensemble (whether explicit or implicit) of models to produce predictive distributions rely on model diversity to produce accurate uncertainty estimates. Ensemble methods may produce similar solutions due to initialization or optimization issues. When the ensemble includes many dissimilar plausible models for the data (Figure 3), the uncertainty estimate can be good; when ensemble training finds local optima with highly similar models for the data, the uncertainty estimates can be poor (Figure 7). Thus, uncertainty estimates from ensembles can be unreliable absent a structured way of including diversity training objectives.
SGHMC produces posterior predictives that are most similar to that of HMC. In our experiments, we see that SGLD drastically underestimates posterior predictive uncertainty. SGHMC, while tending to overestimate uncertainty, produces predictive distributions qualitatively similar to those of HMC.
Conclusion
In this paper, we compare 10 commonly used approximate inference procedures for Bayesian Neural Networks. Frequently, measurements of generalization and calibration of the posterior predictive are used to evaluate the quality of inference. We show that these metrics conflate issues of model selection with those of inference. On our data, we see that approximate Bayesian inference methods struggle to capture true posteriors and the non-Bayesian methods often do not capture the type of predictive uncertainty that we want. Our experiments show that we need more exhaustive and
B. Evaluation Metrics
The average marginal log-likelihood is computed as:
$$\mathbb{E}_{(x_n, y_n) \sim \mathcal{D}}\, \mathbb{E}_{q(W)}\left[ p(y_n \mid x_n, W) \right]. \quad (2)$$
The predictive RMSE is computed as:
$$\sqrt{\frac{1}{N} \sum_{n=1}^{N} \left\| y_n - \mathbb{E}_{q(W)}\left[ f(x_n, W) \right] \right\|_2^2}. \quad (3)$$
The Prediction Interval Coverage Probability (PICP) is computed as:
$$\frac{1}{N} \sum_{n=1}^{N} \mathbb{1}_{\, y_n \le y_n^{\text{high}}} \cdot \mathbb{1}_{\, y_n \ge y_n^{\text{low}}}, \quad (4)$$
and the Mean Prediction Interval Width (MPIW) is computed as:
$$\frac{1}{N} \sum_{n=1}^{N} \left( y_n^{\text{high}} - y_n^{\text{low}} \right), \quad (5)$$
where $y_n^{\text{high}}$ is the 97.5th percentile and $y_n^{\text{low}}$ is the 2.5th percentile of the predicted outputs for $x_n$. We want models to have PICP values close to 95% while minimizing the MPIW, thus formalizing our desideratum that well-calibrated posterior predictive uncertainty should be both necessary and sufficient to capture the variation in the data.
C. Additional Results
• Figure 4 represents the posterior predictive distribution and Table 2 summarizes the metrics for all inference methods of regression task 1.
• Figure 5 is the complete plot of posterior predictive distribution for all inference methods of regression task 2.
• Figure 6 summarizes the posterior predictive distribution and Table 3 shows the metrics for all inference methods of classification task 1.
• Figure 7 summarizes the posterior predictive distribution and Table 4 shows the metrics for all inference methods of classification task 2.
• For BBB, MNF, MVG, BBH, we ran additional experiments using the KL divergence as the model selection criterion instead of the log-likelihood. A smaller KL divergence suggests that the approximated posterior is more similar to the true posterior. For BB-ALPHA, the measurements are not comparable when α is different. We fixed α to be 0.3 (chosen by cross validation based on test log-likelihood) and selected the run with the smallest α-divergence. Figure 8 and Figure 9 summarize the posterior predictive distributions for those models on the regression tasks. Overall, those methods still do not produce satisfying approximations of the true posterior. Also, BB-ALPHA does not fit the data well and significantly overestimates the uncertainty.
• For BBB, BB-ALPHA, MVG, we helped with the optimization by initializing the variational parameters with the empirical mean of the HMC samples. Figure 10 and Figure 11 summarize the posterior predictive distributions for those models on the regression tasks. Overall, the results are similar to Figure 4 and Figure 3.
D. Exploration of Structure in the HMC Posterior
We investigated the structure of the HMC posterior as it is critical to understand the types of dependencies among the weights. We found that for regression task 1, the marginal distribution of HMC samples is close to a normal distribution, as suggested by Figure 12. We thus approximated the posterior with a multivariate Gaussian distribution $q \sim \mathcal{N}(\mu, \Sigma)$, where $\mu$ and $\Sigma$ are approximated with
$$\hat{\mu} = \frac{1}{S} \sum_{i=1}^{S} w_i, \qquad \hat{\Sigma} = \frac{1}{S-1} \sum_{i=1}^{S} (w_i - \hat{\mu})(w_i - \hat{\mu})^{\top},$$
respectively, where $w_i$ denotes the $i$-th of the $S$ HMC samples. Figure 13 shows that such an approximation is not sufficient to capture the dependencies among the weights, as both the posterior mean and the posterior variance are very different from the ground truth. The experiment suggests that there may be higher-moment correlations in the weight space. In the future, we intend to investigate what types of dependencies exist in the true weight space ...
E. Hyperparameter Settings
• Hamiltonian Monte Carlo (HMC): ... an initial stepsize of $2 \times 10^{-3}$. The acceptance rate α is checked every 100 iterations; the stepsize is increased by a factor of 1.1 if α > 0.8 or decreased by a factor of 0.9 if α < 0.2. We used 50K iterations with a burn-in of 40K and a thinning interval of 20. Convergence is verified through trace plots and autocorrelation of the weights.
• Probabilistic Backpropagation (PBP): There are no hyperparameters to tune. We randomized the order of the data before each data sweep. The code is adapted from https://github.com/HIPS/Probabilistic-Backpropagation.
Table 9. Optimal hyperparameter for BBH.
• Dropout:
We implement Dropout ourselves, which is essentially identical to the code provided in https://github.com/yaringal/DropoutUncertaintyExps. We tested learning rate ∈ {0.001, 0.005, 0.01, 0.05, 0.1} and Bernoulli dropout rate γ ∈ {0.005, 0.01, 0.05}. For regression tasks, the regularization term λ is set as the noise of the corresponding task. For classification tasks, λ = 0.5.
Table 10. Optimal hyperparameter for Dropout.
                 Reg 1    Reg 2    Class 1    Class 2
learning rate    0.05     0.05     0.005      0.01
γ                0.005    0.01     0.005      0.005
• Ensemble: We implement Ensemble ourselves. We tested learning rate ∈ {0.001, 0.005, 0.01, 0.05, 0.1}. For regression tasks, the regularization term λ is set as the noise of the corresponding task. For classification tasks, λ = 0.5. The regularization term is chosen so that minimizing the objective function corresponds to maximizing the posterior. We collected 500 prediction samples from 500 random restarts.
Table 11. Optimal hyperparameter for Ensemble.
                 Reg 1    Reg 2    Class 1    Class 2
learning rate    0.05     0.005    0.1        0.1
• Stochastic Gradient Langevin Dynamics (SGLD):
We implement SGLD ourselves. We set the batch size to be 32. We tested learning rate ∈ {0.0005, 0.001, 0.005, 0.01}. We used 500K iterations with a burn-in of 450K and a thinning interval of 100.
Table 12. Optimal hyperparameter for SGLD.
                 Reg 1    Reg 2    Class 1    Class 2
learning rate    0.001    0.001    0.01       0.01
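For reference, the SGLD parameter update described above can be sketched as follows; this is our own minimal illustration, and grad_log_post_minibatch is an assumed helper returning a stochastic estimate of the gradient of the log posterior:

```python
import numpy as np

def sgld_update(w, grad_log_post_minibatch, step_size, rng):
    """One SGLD step: half-step-size gradient ascent on the log posterior
    plus Gaussian noise with variance equal to the step size."""
    noise = rng.standard_normal(w.shape) * np.sqrt(step_size)
    return w + 0.5 * step_size * grad_log_post_minibatch(w) + noise
```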
• Stochastic Gradient Hamiltonian Monte Carlo (SGHMC): We implement SGHMC ourselves. We set the batch size to be 32. The momentum variable is sampled from $\mathcal{N}(0, I)$. L = 100 leapfrog steps are used and we tested stepsize ∈ {0.001, 0.002, 0.005}. We used stepsize = 0.002 for all tasks. We used the friction term $C = 10I$ and $\hat{B} = 0$. We used 50K iterations with a burn-in of 40K and a thinning interval of 20.
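Similarly, the SGHMC dynamics with friction term C (and with the noise estimate $\hat{B}$ taken to be 0, as above) can be sketched as follows; this is a simplified, hypothetical illustration rather than the implementation used here:

```python
import numpy as np

def sghmc_step(w, p, grad_log_post_minibatch, step_size, friction, rng):
    """One SGHMC update of position w and momentum p, assuming hat-B = 0."""
    w = w + step_size * p
    noise = rng.standard_normal(w.shape) * np.sqrt(2.0 * friction * step_size)
    p = p + step_size * grad_log_post_minibatch(w) - step_size * friction * p + noise
    return w, p
```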
| 3,100 |
1811.03325
|
2900399720
|
This paper presents a discovery that the length of the entities in various datasets follows a family of scale-free power law distributions. The concept of entity here broadly includes the named entity, entity mention, time expression, aspect term, and domain-specific entity that are well investigated in natural language processing and related areas. The entity length denotes the number of words in an entity. The power law distributions in entity length possess the scale-free property and have well-defined means and finite variances. We explain the phenomenon of power laws in entity length by the principle of least effort in communication and the preferential mechanism.
|
Our work is related to Zipf's law and the distributions of word length and sentence length. Power laws have been observed in numerous natural and man-made systems @cite_62 ; here we are concerned with their occurrence in language.
|
{
"abstract": [
"Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the detection and characterization of power laws is complicated by the large fluctuations that occur in the tail of the distribution—the part of the distribution representing large but rare events—and by the difficulty of identifying the range over which power-law behavior holds. Commonly used methods for analyzing power-law data, such as least-squares fitting, can produce substantially inaccurate estimates of parameters for power-law distributions, and even in cases where such methods return accurate answers they are still unsatisfactory because they give no indication of whether the data obey a power law at all. Here we present a principled statistical framework for discerning and quantifying power-law behavior in empirical data. Our approach combines maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov (KS) statistic and likelihood ratios. We evaluate the effectiveness of the approach with tests on synthetic data and give critical comparisons to previous approaches. We also apply the proposed methods to twenty-four real-world data sets from a range of different disciplines, each of which has been conjectured to follow a power-law distribution. In some cases we find these conjectures to be consistent with the data, while in others the power law is ruled out."
],
"cite_N": [
"@cite_62"
],
"mid": [
"2000042664"
]
}
|
Marshall-Olkin Power-Law Distributions in Length-Frequency of Entities
|
and Zipf (1936, 1949) found a very long time ago that the rank-frequency of words in natural languages follows a family of power-law distributions. During his exploration, Zipf also found that the meaning-frequency of words follows power-law distributions. The rank-frequency distribution of words is later credited as Zipf's law and provides a direction to understand the use of languages in our communicative system. Zipf's law has been observed in many languages (Zipf, 1949;Corominas-Murtra and Solé, 2010) and has attracted tremendous attention of researchers from diverse areas for more than eighty years (Piantadosi, 2014). The Zipf distribution has a linear behavior in the log-log scale and is widely used to model phenomena such as word frequencies, city sizes, income distribution, and network structures. However, the Zipf distribution may not fit well the probabilities of the first positive integer numbers, which are often observed to be higher or lower than expected by the linear model.
Besides the rank-frequency and meaning-frequency of words, Zipf also analyzed word length, sentence length, and phonemes (Zipf, 1949). Although Zipf explained the use of these three language units under the same principle of least effort as he explained word frequency and word meaning in a qualitative way, unfortunately, extensive studies have demonstrated that the frequencies of these three language units do not follow a power-law distribution, but follow variants of Poisson distributions, lognormal distributions, or gamma distributions (Williams, 1940;Fucks, 1955, 1956;Wake, 1957;Miller et al., 1958;Williams, 1975;Grotjahn and Altmann, 1993;Wimmer et al., 1994;Best, 1996;Sigurd et al., 2004).
In the last two decades, the field of natural language processing and related areas have constructed numerous datasets for diverse linguistic tasks (Manning and Schutze, 1999;Martin, 2008, 2020). Those datasets provide us opportunities to analyze some other forms of languages, among which entity is an important one. An entity is a real-world object, such as persons, locations, and organizations (Chinchor, 1997;Sang and Meulder, 2003). Entities generally involve important concepts with concrete meanings and usually act as (part of) the subject or the object or even both in a sentence. For example, in the sentence "Michael Jordan could be an NBA player, or a professor of University of California, Berkeley," the entity "Michael Jordan" acts as the subject while the other two entities "NBA" and "University of California, Berkeley" are parts of the object. Because of its importance in language, entities have been extensively studied and are involved in diverse linguistic tasks, such as named entity recognition (Chinchor, 1997;Sang and Meulder, 2003) and entity linking (Ji and Grishman, 2011; , 2015). To the best of our knowledge, however, there is no existing literature that investigates the underlying distribution(s) of entities, which may provide a better understanding of language use and provide insights into designing effective and efficient algorithms for entity-related linguistic tasks.
Table 1: Some examples of entities in English and their corresponding entity lengths (l). Symbols and punctuations in entities are taken into account during the calculation.
Entity                                              Entity Length (l)
NBA                                                 1
Michael Jordan                                      2
United Arab Emirates                                3
University of California , Berkeley                 5
10:00 p.m. on August 20 , 1940                      7
human cytomegalovirus ( HCMV ) major immediate      7
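Following the convention of Table 1 (every whitespace-separated token, including symbols and punctuation, counts toward the length), the length-frequency distribution of a collection of entities can be built as in the following illustrative sketch; the example entities are taken from Table 1, not from the studied datasets:

```python
from collections import Counter

def entity_length(entity):
    """Number of tokens in an entity; punctuation tokens are counted as well."""
    return len(entity.split())

entities = ["NBA", "Michael Jordan", "United Arab Emirates",
            "University of California , Berkeley"]
length_freq = Counter(entity_length(e) for e in entities)
print(sorted(length_freq.items()))   # [(1, 1), (2, 1), (3, 1), (5, 1)]
```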
In this paper, we fill in this gap and conduct a thorough investigation on the length-frequency distributions of entities in different types and different languages. We aim to fit the length-frequency of entities with a uniform model or a family of models. Entity length is defined by the number of words in an entity. Entity length is an important feature of natural language processing that reflects the complexity and structure of texts. Table 1 presents some examples of entities and their corresponding lengths. After a careful exploration, we find that the length-frequency of entities cannot be well characterized by pure power-law models, but can be well characterized by the Marshall-Olkin power-law (MOPL) models developed by Pérez-Casany and Casellas (2013). MOPL models are a family of generalized power-law models. Compared with pure power-law models, MOPL models have more flexibility to adjust the probabilities of the first few data points while keeping the linearity of the remaining probabilities.
Specifically, we collect twelve datasets about different types of entities (e.g., named entities and time expressions) and eighteen datasets about entities in different languages (e.g., English and French). Those datasets are dramatically diverse from each other in terms of their sources, domains, text genres, generated time, corpus sizes, and entity types, and those languages have significant differences in terms of their phonetic systems and spelling systems (see Section 4.1 for details). However, we find that the length of these diverse entities demonstrates some similar characteristics, and the length-frequency distributions of these diverse entities can be well characterized by a family of MOPL models.
To evaluate the quality of MOPL models fitting to the length-frequency of diverse entities, we use the Kolmogorov-Smirnov (KS) test (Smirnov, 1948;Stephens, 1974) and define an average-error metric to evaluate the goodness-of-fit of the MOPL models, and we compare the fitting results with two state-of-the-art power-law models, namely CSN2009 (Clauset et al., 2009) and LS_avg (Zhong et al., 2022b), and an alternative log-normal model. We conduct experiments on thirty datasets about entities in different types and different languages, and experimental results demonstrate that MOPL models well characterize the length-frequency distributions of diverse entities, and the fitting results of MOPL are much better than those of the three compared models. Specifically, MOPL achieves much better results in the KS test and the average-error metric than the three compared models. Experimental results also demonstrate that MOPL models fit the length-frequency of entities in an individual dataset in less than one minute, which is comparable with the most efficient model LS_avg and much better than the CSN2009 model. This indicates that MOPL models are more suitable to characterize the length-frequency of diverse entities than the three compared models and that MOPL models are scalable to entities in large-scale real-world datasets. To summarize, we make the following main contributions in this paper.
• We investigate the underlying distributions of diverse entities, finding that the length-frequency of entities in different types and languages can be characterized by MOPL models. Our finding adds a piece of stable knowledge to the field of language and provides insights for entity-related linguistic tasks.
• We demonstrate the superiority of MOPL models against two state-of-the-art power-law models and a log-normal model in terms of fitting to the length-frequency of diverse entities in different types and languages.
• Experiments demonstrate that MOPL is scalable to large-scale real-world datasets, without the runtime increasing linearly or exponentially as the number of entities grows.
The remainder of this paper is organized as follows. Section 2 reviews the literature about power-law distributions in languages. Section 3 introduces the MOPL models that we use to characterize the length-frequency of diverse entities. Section 4 reports experimental results and the computational efficiency of MOPL models and the compared models fitting to the length-frequency distributions of entities in different types and different languages. Section 5 discusses possible implications and limitations of this paper, and Section 6 draws the conclusion.
Power-Law Distributions in Languages
The most famous power-law distribution in languages is the one in the rank-frequency of words. This linguistic phenomenon was originally discovered by Jean-Baptiste Estoup (Estoup, 1916) and then further explored by George K. Zipf (Zipf, 1936, 1949); this linguistic phenomenon is later credited as Zipf's law. Zipf's law reveals that the r-th most frequently occurring word in a corpus has the frequency defined by $f(r) \propto r^{-z}$, where r denotes the frequency rank of a word in the corpus and $f(r)$ denotes its frequency. Zipf's law has been observed in many languages (Zipf, 1949;Li, 2002;Corominas-Murtra and Solé, 2010;Piantadosi, 2014), and the scaling exponent z is observed to be close to 1. During his exploration, Zipf found as well that the meaning-frequency of words in a corpus also follows a family of power-law distributions.
Besides real languages, researchers have also explored randomly generated texts and genetic regulatory networks (Pratap et al., 2019;Anbalagan et al., 2021;Pratap et al., 2022). Miller (1957, 1965) and Li (1992) found that the rank-frequency of random texts also follows power-law distributions. Malone and Maher (2012) and Wang et al. (2017) found that the rank-frequency of user passwords from different websites can be characterized by power-law distributions.
We now discover another form of human languages, namely entities, whose length-frequency distributions can be characterized by the Marshall-Olkin extended power-law distributions. There are significant differences between power-law distributions in the length-frequency of entities and in the rank-frequency of words. Firstly, the meanings and functions of words and of entities in a sentence are different. In the rank-frequency of words, those most frequent words are always auxiliary words without concrete meanings (random texts and user passwords have no concrete meanings as well), while entities generally involve important concepts with concrete meanings and play important roles in a sentence, such as the subject and the object.
Secondly, the numbers of their data points are different. In the rank-frequency of words, an r-rank word appears as a data point, while in the length-frequency of entities, all the l-length entities together constitute a single data point. So the number of data points in the rank-frequency of words is as large as the size of the vocabulary in a corpus, while the number of data points in the length-frequency of entities is generally less than 100, and our analysis shows that, in about 93.3% of datasets (28 out of 30), the longest entity contains no more than 100 words (see Tables 2 and 3).
Thirdly, the scaling exponents of these two kinds of power-law distributions are different. The scaling exponents in the rank-frequency of words are observed to be close to 1, indicating that these power-law distributions have neither well-defined theoretical means nor finite variances. By contrast, the exponents in the length-frequency of entities are greater than 2, theoretically indicating well-defined means in all these power-law distributions; and in real-world datasets, these power-law distributions have finite means and variances.
Length-Frequency Distributions of Words and Sentences
A line of research that is somewhat related to our work is about the length distributions of words and sentences. According to a review article by Grotjahn and Altmann (1993), Fucks (1955, 1956) first theoretically and experimentally demonstrated that the length-frequency of words in a corpus follows a family of Poisson distributions. This linguistic phenomenon has been observed in more than 32 languages (Best, 1996). On the other hand, Williams (1940) and Wake (1957) observed that the length-frequency of sentences in different languages can be characterized by a family of log-normal distributions. Sigurd et al. (2004) observed that the length-frequencies of words and sentences from English, Swedish, and German corpora can be characterized by variants of log-normal distributions or gamma distributions.
Unlike the length-frequency of words and sentences that can be characterized by variants of Poisson distributions, log-normal distributions, or gamma distributions, we find from experiments on datasets about entities in different types and different languages that the length-frequency of entities cannot be characterized by Poisson distributions nor log-normal distributions but are well characterized by a family of Marshall-Olkin power-law (MOPL) distributions. Moreover, our extensive experiments demonstrate that MOPL models characterize the length-frequency of entities much better than two state-of-the-art power-law models and one alternative log-normal model and that MOPL models are scalable to the length-frequency of entities in large-scale real-world datasets.
Methodology
We first briefly introduce the discrete power-law distributions and then detail the Marshall-Olkin power-law (MOPL) models that we use to characterize the length-frequency distributions of entities in different types and different languages. After that we introduce the Kolmogorov-Smirnov (KS) test (Smirnov, 1948; Stephens, 1974) and the average-error metric that are used to evaluate the goodness-of-fit.
Discrete Power-Law Distribution
The discrete power-law distribution is a special case of power-law distributions with discrete support. It is defined by Eq. (1):
P(X = x) = x^(−α) / ζ(α)    (1)
where x ∈ N⁺, α > 1 is the scaling exponent, and ζ(α) = Σ_{k=1}^{∞} k^(−α) is the Riemann zeta function.
Eq. (1) can be rewritten as Eq. (2), which demonstrates the linear behavior in the log-log scale:

log P(X = x) = −α log x − log ζ(α)    (2)
The survival function (SF) of the power-law distribution is given by Eq. (3):

F̄(x) = P(X > x) = ζ(α, x + 1) / ζ(α)    (3)

where ζ(α, x) = Σ_{k=x}^{∞} k^(−α) is the Hurwitz zeta function.
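As a concrete illustration of Eqs. (1) and (3), the following Python sketch evaluates the PMF and the survival function using SciPy's Hurwitz zeta function. This is our own illustration rather than the authors' implementation (the experiments below use R packages), and the function names and the example exponent are our own choices.

from scipy.special import zeta  # zeta(s, q) is the Hurwitz zeta function; zeta(s, 1) is the Riemann zeta

def powerlaw_pmf(x: int, alpha: float) -> float:
    """Eq. (1): P(X = x) = x^(-alpha) / zeta(alpha)."""
    return x ** (-alpha) / zeta(alpha, 1)

def powerlaw_sf(x: int, alpha: float) -> float:
    """Eq. (3): P(X > x) = zeta(alpha, x + 1) / zeta(alpha)."""
    return zeta(alpha, x + 1) / zeta(alpha, 1)

# Example with a hypothetical exponent in the range reported later for entities.
alpha = 2.7
print(powerlaw_pmf(2, alpha), powerlaw_sf(2, alpha))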
Marshall-Olkin Power-Law Distribution
Pérez-Casany and Casellas (2013) extend the original power-law distribution through the Marshall-Olkin transformation to a more general distribution called the Marshall-Olkin power-law distribution. This distribution has two parameters, α and β, and its survival function (SF) is given by Eq. (4):
P(X > x) = Ḡ(x; α, β) = β F̄(x) / (1 − β̄ F̄(x)) = β ζ(α, x + 1) / (ζ(α) − β̄ ζ(α, x + 1))    (4)

where β > 0, α > 1, and β̄ = 1 − β.
The probability mass function (PMF) can be computed through Eq. (5):
P(X = x) = Ḡ(x − 1; α, β) − Ḡ(x; α, β) = x^(−α) β ζ(α) / ([ζ(α) − β̄ ζ(α, x)][ζ(α) − β̄ ζ(α, x + 1)])    (5)
where x ∈ N⁺ and ζ(α, x) = Σ_{k=x}^{∞} k^(−α) is the Hurwitz zeta function, as in Eq. (3). The Marshall-Olkin power-law (MOPL) distributions generalize power-law distributions and overcome some limitations of pure power-law distributions by introducing the additional parameter β. This parameter allows more flexibility in adjusting the probabilities of small values while keeping the linearity in the tails. MOPL models are capable of fitting the concave and convex deviations encountered in realistic situations and have been applied to characterize various data such as music compositions and web page visits (Pérez-Casany and Casellas, 2013).
In this paper, we use the MOPL models to characterize the length-frequency distributions of entities in different types and different languages.
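To make Eqs. (4) and (5) concrete, the sketch below implements the MOPL survival function and PMF in Python. It is a minimal illustration with our own function names, not the zipfextR implementation used in the experiments; note that setting β = 1 recovers the pure power-law distribution of Eq. (1).

from scipy.special import zeta  # zeta(s, q): Hurwitz zeta function

def mopl_sf(x: int, alpha: float, beta: float) -> float:
    """Eq. (4): P(X > x) = beta * zeta(alpha, x+1) / (zeta(alpha) - (1 - beta) * zeta(alpha, x+1))."""
    return beta * zeta(alpha, x + 1) / (zeta(alpha, 1) - (1.0 - beta) * zeta(alpha, x + 1))

def mopl_pmf(x: int, alpha: float, beta: float) -> float:
    """Eq. (5), computed as the difference of consecutive survival values."""
    return mopl_sf(x - 1, alpha, beta) - mopl_sf(x, alpha, beta)

# Sanity checks: probabilities over the support sum to (approximately) one,
# and beta = 1 reduces MOPL to the pure power law.
alpha, beta = 2.7, 0.5
print(sum(mopl_pmf(x, alpha, beta) for x in range(1, 10000)))
print(mopl_pmf(3, alpha, 1.0))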
Kolmogorov-Smirnov Test
Like many previous studies (Clauset et al., 2009; Hanel et al., 2017; Wang et al., 2017; Gerlach and Altmann, 2019; Artico et al., 2020; Nettasinghe and Krishnamurthy, 2021; Zhong et al., 2022b), we employ the Kolmogorov-Smirnov (KS) test (Smirnov, 1948; Stephens, 1974) to examine the goodness-of-fit. The KS statistic (D_n) quantifies the distance between the cumulative distribution function (CDF) of a set of data points (F_n(l)) and the CDF of a theoretical distribution (F(l)), as defined by Eq. (6):
D_n = sup_l |F_n(l) − F(l)|    (6)
where sup_l denotes the supremum over l. The KS statistic D_n ∈ [0, 1] is the maximal distance between the two CDF curves F_n(l) and F(l). The smaller the D_n value, the better the theoretical distribution fits the data points.
The KS test can also be used to examine whether two underlying distributions are significantly different. In that case, the two-sample KS statistic (D_{n,m}) is defined by Eq. (7):

D_{n,m} = sup_l |F_n(l) − F_m(l)|    (7)

where F_n(l) and F_m(l) are the CDF curves of the two sets of data points.
In the KS test, the null hypothesis (H_0) is that the data points are drawn from a theoretical distribution, where the theoretical distribution can be any parametric distribution, such as the Zipf, normal, power-law, or log-normal distribution; the alternative (H_1) is that the data points are not drawn from that distribution. A larger p-value makes it safer to conclude that the data points are not significantly different from the hypothesized distribution. In the two-sample KS test, the null hypothesis (H'_0) is that the two sets of data points are drawn from the same underlying distribution, while the alternative (H'_1) is that they are not. Similarly, a larger p-value makes it safer to conclude that the two sets of data points are drawn from the same underlying distribution.
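The following Python sketch shows how the one-sample KS statistic of Eq. (6) can be computed for integer-valued entity lengths. It is only an illustration of the statistic itself, with our own function names; the Accept/Reject decisions and p-values reported later come from the dgof and KSgeneral R packages, which this sketch does not reproduce.

import numpy as np

def ks_statistic(lengths, theoretical_cdf):
    """Eq. (6): D_n = sup_l |F_n(l) - F(l)| for integer-valued lengths.

    lengths: sequence of observed entity lengths (positive integers).
    theoretical_cdf: function l -> F(l), e.g. lambda l: 1.0 - mopl_sf(l, alpha, beta).
    """
    lengths = np.asarray(lengths)
    support = np.arange(1, lengths.max() + 1)
    empirical_cdf = np.array([(lengths <= l).mean() for l in support])
    model_cdf = np.array([theoretical_cdf(l) for l in support])
    # For distributions supported on the positive integers, the supremum is attained on the support.
    return float(np.abs(empirical_cdf - model_cdf).max())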
Average Error
Besides the KS test, we also define a metric called average error to examine the goodness-of-fit. The average error is defined by Eq. (8):
E_avg = (1/N) Σ_{x_i} |p_N(x_i) − p(x_i)| / (p_N(x_i) · p(x_i))    (8)
where p_N(x) and p(x) are the probability density functions (PDF) of the raw data and the hypothesized distribution, respectively, and N = |{(x_i, p_N(x_i))}| is the number of data points. The average-error metric in Eq. (8) is defined so as to remove the impact of different sample sizes. For different models fitted to the same dataset, the smaller the E_avg a model achieves, the better the model fits the dataset.
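Under the reading of Eq. (8) given above, the average error can be computed as in the short Python sketch below; the dictionary-based interface is our own choice for illustration.

def average_error(p_empirical, p_model):
    """Eq. (8): mean of |p_N(x_i) - p(x_i)| / (p_N(x_i) * p(x_i)) over the observed lengths.

    p_empirical, p_model: dicts mapping each observed length x_i to p_N(x_i) and p(x_i).
    """
    xs = sorted(p_empirical)
    total = sum(abs(p_empirical[x] - p_model[x]) / (p_empirical[x] * p_model[x]) for x in xs)
    return total / len(xs)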
Experiments
We fit Marshall-Olkin power-law (MOPL) models to twelve datasets about different types of entities and eighteen datasets about entities in different languages and compare the fitting results of MOPL with two state-of-the-art models, namely CSN2009 (Clauset et al., 2009) and LS avg (Zhong et al., 2022b), and an alternative log-normal model.
Datasets
The datasets we use in this paper fall into two groups: (1) entities of different types and (2) entities in different languages. Most of these datasets contain manually annotated entities, while some contain automatically annotated entities. For each dataset, we collect entities from both its training and test sets.
Entities in Different Types
This group contains twelve datasets covering different types of entities collected from dramatically diverse sources, including general named entities (Grishman and Sundheim, 1996; Chinchor, 1997; Sang and Meulder, 2003), entity mentions (Ling and Weld, 2012; Pradhan et al., 2013), time expressions (Pustejovsky et al., 2003a,b; Zhong and Cambria, 2023), aspect terms (Liu, 2012; Pontiki et al., 2014), literary entities (Bamman et al., 2019), defense entities, informal entities (Ritter et al., 2011; Derczynski et al., 2016), and domain-specific entities (Fukuda et al., 1998; Takeuchi and Collier, 2005) that are well studied in natural language processing and related areas. In this paper, we use the term "entity" broadly to cover these diverse concepts and treat the specific concepts as different types of entities. Within a specific type of entities, researchers may also assign pre-defined labels (e.g., PERSON, LOCATION, and ORGANIZATION) to the entities. We use "different types of entities" or "entity types" to refer to general named entities, time expressions, aspect terms, and so on, and use "different categories of entities" or "entity categories" to refer to these pre-defined labels. In our analysis, we are concerned with different types of entities rather than different categories of entities, because each type of entities may contain several categories/labels and can reveal general habits of human language use, whereas a single category of entities reveals only specific and narrow habits. In this paper, we care more about such general habits and principles than about specific ones. Since English is the most studied language in natural language processing and related areas, we analyze these different types of entities in English.
The twelve datasets are (1) ABSA (Pontiki et al., 2014, 2015), (2) ACE04 (Doddington et al., 2004), (3) BBN (Weischedel and Brunstein, 2005), (4) BioMed (Crichton et al., 2017), (5) CoNLL03 (Sang and Meulder, 2003), (6) COVID19 (Wang et al., 2020), (7) LitBank (Bamman et al., 2019), (8) OntoNotes5 (Pradhan et al., 2013), (9) Re3d, (10) TimeExp (Pustejovsky et al., 2003b; Mazur and Dale, 2010; UzZaman et al., 2013; Zhong et al., 2017; Zhong and Cambria, 2018), (11) Twitter (Strauss et al., 2016; Derczynski et al., 2016), and (12) WikiAnchor (Ling and Weld, 2012). They are briefly described below in alphabetical order.
ABSA contains two corpora that are used in SemEval-2014 (Pontiki et al., 2014) and SemEval-2015 (Pontiki et al., 2015) for aspect-based sentiment analysis. While the two corpora have several language units for different tasks, we are concerned with aspect terms and collect these aspect terms for the analysis of their length-frequency distribution.
ACE04 is a benchmark dataset used for the 2004 Automatic Content Extraction (ACE) technology evaluation (Doddington et al., 2004). It consists of various types of data collected from different sources (e.g., newswire and broadcast news) for the analysis of entities and relations in three languages: Arabic, Chinese, and English. We use its English entities for the analysis of different types of entities, while use its Arabic entities for the analysis of entities in different languages.
BBN consists of Wall Street Journal articles for pronoun co-reference and entity analysis (Weischedel and Brunstein, 2005). It includes 28 entity categories in total. We collect all of its entities for analysis, without considering its entity categories.
BioMed contains fourteen corpora that are developed for the analysis of biomedical entities. Crichton et al. (2017) collected these fourteen corpora, and we obtain the biomedical entities from the corpora compiled in their paper.
CoNLL03 is a benchmark dataset with 1,393 news articles derived from the Reuters RCV1 Corpus, which is collected between the period of August 1996 and August 1997 (Sang and Meulder, 2003). We collect its entities without entity categories for the analysis of the length-frequency distribution.
COVID19 is a newly constructed dataset for the analysis of entities related to the recent COVID-19 pandemic (Wang et al., 2020). We collect and analyze its entities for the length-frequency analysis.
LitBank is a dataset collected from 100 different English-language literary articles across over a long period of time and it is developed for the analysis of literary entities (Bamman et al., 2019).
OntoNotes5 is a large-scale dataset collected from different sources (e.g., news articles, newswire and web data) over a long period of time for the comprehensive analyses of syntax, co-reference, proposition, word sense, and named entities in three languages (i.e., English, Chinese, and Arabic) (Pradhan et al., 2013). In this paper we are concerned with its entities in English for analysis.
Re3d 2 is a dataset with various documents relevant to the conflict in Syria and Iraq. The dataset is constructed for the analysis of entity and relation extraction in the domain of defense and security. We collect its entities for analysis.
TimeExp consists of three corpora that are developed for the analysis of time expressions (Zhong et al., 2017; Zhong and Cambria, 2018; Zhong et al., 2020). These corpora include TempEval-3 (including TimeBank (Pustejovsky et al., 2003b), TE3-Silver, AQUAINT, and the Platinum corpus) (UzZaman et al., 2013), WikiWars (Mazur and Dale, 2010), and Tweets (Zhong et al., 2017).

Twitter consists of two corpora whose text is collected from Twitter: WNUT16 (Strauss et al., 2016) and the Broad Twitter Corpus (Derczynski et al., 2016). These two corpora are developed for the analysis of entities in informal text.
WikiAnchor treats the anchor text (i.e., the text in the hyperlinks) from Wikipedia (the 20110513 version) as entity mentions (Ling and Weld, 2012). We collect these entity mentions (i.e., anchor text) for length-frequency analysis.
For each of the datasets that contain two or more corpora (i.e., ABSA, BioMed, TimeExp, and Twitter), we simply merge the entities from all the corpora. Note again that we collect from these datasets only their entities for the analysis of the length-frequency distribution; we do not consider their entity categories (or pre-defined labels). Table 2 reports the entity types and statistics of the twelve datasets. As mentioned in Section 3.2, the entity length l is defined as the number of words in an entity. Table 2 shows that the numbers of entities in the twelve datasets vary dramatically, ranging from 3,394 (Re3d) to 10,260,797 (COVID19); the maximal lengths and standard deviations of these entities are also diverse: the maximal lengths vary from 14 to 129 and the standard deviations vary from 0.36 to 19.66. However, the average lengths of these entities are comparable and range around 2 (from 1.26 to 2.93). This indicates that the average length is a common characteristic among these diverse entities.
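As a minimal illustration of how such statistics can be derived, the Python sketch below computes the length-frequency distribution and the average length from a list of entity strings. Whitespace tokenization and the toy input are our own assumptions here, since each dataset ships with its own tokenization.

from collections import Counter

def length_frequency(entities):
    """Map entity length (number of words) to its frequency in the collection."""
    return Counter(len(entity.split()) for entity in entities)

# Hypothetical toy input; the real datasets contain thousands to millions of entities.
entities = ["New York", "World Health Organization", "Obama", "last Friday"]
freq = length_frequency(entities)                      # {2: 2, 3: 1, 1: 1}
avg_len = sum(l * c for l, c in freq.items()) / sum(freq.values())
max_len = max(freq)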
Entities in Different Languages
This group contains named entities in eighteen different languages. These datasets are collected from the 2004 Automatic Content Extraction (ACE) evaluation (Doddington et al., 2004), European Newspapers 3 , the NCHLT Afrikaans Named Entity Annotated Corpus 4 , Basque EIEC (version 1.0) 5 , BSNLP 2017 6 , Italian KIND (Paccosi and Aprosio, 2021), Norwegian Navnkjenner (Johansen, 2019), and RONEC (Dumitrescu and Avram, 2019).
The eighteen languages are (1) Afrikaans, (2) Arabic, (3) Basque, (4) Bokmål, (5) Croatian, (6) Czech, (7) French, (8) German, (9) Italian, (10) Dutch, (11) Nynorsk, (12) Polish, (13) Romanian, (14) Russian, (15) Samnorsk, (16) Slovak, (17) Slovene, and (18) Ukrainian. We do not include English in this group because the different types of entities are already analyzed in English. Table 3 summarizes the statistics of entities in the eighteen languages. It shows that the numbers of these entities are significantly diverse, ranging from 4,748 (Basque) to 21,105,675 (Croatian). The maximal lengths and standard deviations of these entities in different languages are somewhat diverse, though not as dramatically as above, while the average lengths are comparable, ranging around 2 (specifically, from 1.10 to 2.35). These statistics are consistent with the corresponding ones for different types of entities reported in Table 2. This indicates that entities across different types and different languages share some similar characteristics.
Compared Methods
We evaluate the quality of MOPL models in fitting the length-frequency distributions of entities against two state-of-the-art models, namely CSN2009 (Clauset et al., 2009) and LS avg (Zhong et al., 2022b), and an alternative log-normal model.
CSN2009: Clauset et al. (2009) propose a maximum-likelihood fitting method, denoted CSN2009, combined with goodness-of-fit tests based on the Kolmogorov-Smirnov statistic, to fit power-law distributions to empirical data. CSN2009 estimates the exponent of a power-law model together with the minimal value from which the power-law behavior starts. Besides data fitting, CSN2009 also uses the KS test with likelihood ratios to evaluate how well a model fits the data. CSN2009 has been the most popular method for fitting power-law distributions over the last decade.
LS avg : Zhong et al. (2022b) demonstrate through extensive experiments that least-squares methods can accurately fit power-law distributions. They propose a least-squares method to fit power-law distributions to empirical data and use an averaging strategy to reduce the impact of noisy data that deviate from the fitted line.
LogNormal: Log-normal distributions are alternative distributions that researchers usually use to fit data when considering power-law distributions. Therefore, besides CSN2009 and LS avg , we also compare MOPL models with the log-normal model in terms of fitting the length-frequency of entities.
Implementation Details
For the data-fitting experiments, we use the zipfextR package (Pérez-Casany and Casellas, 2013) in the R programming language to implement our method and apply the released codes of CSN2009 7 and LS avg 8 to the datasets. For the KS test, we use the dgof and KSgeneral (Dimitrova et al., 2020) packages in the R programming language for MOPL, LS avg , and the log-normal model, and use CSN2009's own KS-test module for CSN2009. In experiments, we find that for the same model on the same dataset, dgof and KSgeneral yield the same D_n value (i.e., the KS statistic) but different p-values. This suggests that the D_n values are accurate while the p-values may not be. In this paper, we use the dgof package to report the D_n values and make the final Accept/Reject decisions. All our experiments are conducted on a Dell PowerEdge R740 server with 96 CPU cores, 256 GB of memory, and the CentOS 7 operating system.
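For readers who prefer a language-agnostic view of the fitting step, the following Python sketch estimates the MOPL parameters by directly minimizing the negative log-likelihood of Eq. (5). This is only a sketch of the maximum-likelihood principle, with our own parameter bounds and starting values; the reported results are produced with the zipfextR package in R, not with this code.

import numpy as np
from scipy.optimize import minimize
from scipy.special import zeta  # zeta(s, q): Hurwitz zeta function

def mopl_loglik(lengths, alpha, beta):
    """Log-likelihood of Eq. (5) for an array of observed entity lengths."""
    x = np.asarray(lengths, dtype=float)
    z = zeta(alpha, 1)
    log_num = np.log(beta) + np.log(z) - alpha * np.log(x)
    log_den = (np.log(z - (1.0 - beta) * zeta(alpha, x))
               + np.log(z - (1.0 - beta) * zeta(alpha, x + 1)))
    return np.sum(log_num - log_den)

def fit_mopl(lengths, alpha0=2.5, beta0=1.0):
    """Maximum-likelihood estimates of (alpha, beta) for observed entity lengths."""
    res = minimize(lambda p: -mopl_loglik(lengths, p[0], p[1]),
                   x0=[alpha0, beta0],
                   bounds=[(1.01, 20.0), (1e-6, 1e3)],
                   method="L-BFGS-B")
    return res.x  # estimated (alpha_hat, beta_hat)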
Experimental Results
Tables 4 and 5 report the fitting and goodness-of-fit testing results of MOPL and the three compared models on the length-frequency distributions of entities in different types. Specifically, Table 4 reports the estimated parameters of the models and the coverages (i.e., the percentages of data that the models cover), while Table 5 reports the goodness-of-fit testing results of the models on the datasets, including D_n, E_avg, and DEC, where DEC indicates the decision to accept or reject the hypothesis H_0. Figure 1 visualizes the results of MOPL and the three compared models fitting the length-frequency distributions of entities in different types. Table 6 reports the fitting results and Table 7 reports the goodness-of-fit testing results of MOPL and the three compared models fitting the length-frequency of entities in different languages. Figures 2 and 3 visualize those fittings for the length-frequency of entities in different languages.
What follows are separate discussions on model fitting and testing results on the length-frequency of entities in different types and different languages.
Results on the length-frequency of entities in different types
Let us first look at the three measures that examine the goodness-of-fit in Table 5: D_n, E_avg, and DEC. Table 5 shows that MOPL achieves the best results on all three measures on all twelve datasets, in comparison with the three compared models. Specifically, MOPL achieves D_n values in the range from 7.88E-05 to 1.22E-02 and E_avg values from 0.18 to 1.40, as well as all "Accept" decisions across the twelve datasets. By contrast, LS avg achieves D_n values in the range from 2.73E-01 to 8.00E-01 and E_avg values from 1.12 to 4.57, as well as all "Reject" decisions across the datasets. The three measures for CSN2009 are 4.46E-03∼6.02E-02 for D_n, 0.25∼0.66 for E_avg, and 5 "Accept" and 7 "Reject" for DEC. The three measures for LogNormal are 1.76E-02∼1.21E-01 for D_n, 0.36∼11.27 for E_avg, and all 12 "Reject" for DEC. This indicates that MOPL fits the length-frequency distributions of entities in different types much better than LS avg and CSN2009, which are developed to fit power-law distributions, and LogNormal, which is often used as an alternative to power-law models for fitting empirical data. Figure 1 intuitively visualizes the difference between MOPL and the three compared models in fitting the length-frequency distributions of entities on the twelve datasets. From Figure 1 we can see that the fittings of MOPL are much better than those of the three compared models. More importantly, the fact that MOPL achieves all "Accept" decisions on the twelve datasets indicates that MOPL is a suitable model to characterize the length-frequency of entities in different types.
The fact that MOPL achieves the best goodness-of-fit testing results indicates that MOPL achieves the best estimated parameters. As shown in Table 4, the α̂ of MOPL should therefore be considered relatively accurate estimates of the exponents of the power-law segments of the length-frequency distributions of entities in different types. The α̂ of MOPL fitted to these different types of entities range from 2.69 to 5.83, and most of them range from 2.69 to 4.74. This indicates that the length-frequency of entities in different types has a stable scaling property. Let us now look at the fittings of the two state-of-the-art compared models, LS avg and CSN2009. The α̂ of LS avg deviate relatively far from the α̂ of MOPL. The reason is that LS avg assumes that a power law starts from the very beginning of an empirical dataset, but Figure 1 shows that this assumption is not applicable to the length-frequency of entities. This indicates that a pure power-law model is unsuitable for characterizing the length-frequency of entities in different types. On the other hand, the α̂ of CSN2009 deviate only slightly from the α̂ of MOPL. The reason is that CSN2009 adopts a minimum-KS-statistic strategy to choose a larger lower bound (i.e., x̂_min) and fits only the long tails. Consequently, CSN2009 discards the majority of data and achieves low coverages, only from 1.23% to 70.99%. By contrast, the other models cover more than 98.70% of the data. This result that CSN2009 achieves low coverage in fitting empirical data is consistent with the observation reported in Zhong et al. (2022b).
Results on the length-frequency of entities in different languages
Let us first look at the three goodness-of-fit testing measures in Table 7: D_n, E_avg, and DEC. Table 7 shows that none of the four models (i.e., MOPL, LS avg , CSN2009, and LogNormal) perfectly characterizes the length-frequency distributions of entities in the eighteen languages. The fittings to the length-frequency of entities in different languages are much worse than the fittings to the length-frequency of entities in different types. A possible reason is that some of the datasets in the non-English languages contain a large amount of noise. As mentioned above, English is the most studied language in natural language processing and related areas; other languages are also studied, but their annotated datasets may not be as accurate as the datasets in English. Another possible reason is that none of the authors are familiar with those languages, so we cannot guarantee the accuracy of the annotations in these datasets.
Let us now look at the comparison among the four models fitting the length-frequency of entities. While MOPL does not perfectly characterize the length-frequency distributions of entities in all the eighteen languages, it outperforms the three compared models. Specifically, MOPL achieves D_n values in the range from 1.72E-03 to 4.01E-02, E_avg values in the range from 0.17 to 2.47, and 8 "Accept" and 10 "Reject" decisions for DEC across all the eighteen languages. By contrast, LS avg achieves D_n values from 1.00E-01 to 7.69E-01, E_avg values from 0.33 to 23.99, and all 18 "Reject" decisions for DEC across the eighteen languages. CSN2009 achieves D_n values from 4.92E-03 to 5.69E-02, E_avg values from 0.15 to 3.18, and 6 "Accept" and 12 "Reject" decisions for DEC. LogNormal achieves D_n values from 1.70E-02 to 1.24E-01, E_avg values from 0.34 to 6.81, and all 18 "Reject" decisions for DEC. The comparison among the four models is intuitively visualized in Figures 2 and 3. The fitting and testing results indicate that MOPL is more suitable for characterizing the length-frequency distributions of entities in different languages than LS avg , CSN2009, and LogNormal. Table 6 shows that the α̂ of MOPL fitted to the length-frequency distributions of entities in different languages range only from 2.66 to 5.12, which is consistent with the α̂ of MOPL fitted to different types of entities, as shown in Table 4. This indicates that the length-frequency distributions of entities in different languages also have a stable scaling property. In terms of data coverage, MOPL, LS avg , and LogNormal cover almost all the data (i.e., from 99.91% to 100%), while CSN2009 achieves relatively low coverages (down to 0.60%). Specifically, CSN2009 discards at least 50% of the data in 13 out of 18 languages, and at least 90% of the data in 8 out of 18 languages. The low coverage of CSN2009 on the length-frequency of entities in different languages is consistent with its coverage on the length-frequency of entities in different types reported in Table 4, as well as with the observation reported in Zhong et al. (2022b).

Computational Efficiency

Table 8 reports the runtimes of MOPL, LS avg , CSN2009, and LogNormal fitting the length-frequency distributions of entities in different types and different languages. Table 8 shows that while the runtimes of MOPL on the length-frequency of entities in both different types and different languages are less efficient than those of LS avg and LogNormal, they are significantly more efficient than those of CSN2009. Moreover, while the number of entities in an individual dataset ranges from 3,394 to 10,260,797 for different types (see Table 2) and from 4,748 to 21,105,675 for different languages (see Table 3), the runtime of MOPL on an individual dataset ranges only from 41.71 to 409.67 milliseconds, all of which are less than one second. That is, the runtime of MOPL increases neither linearly nor exponentially as the number of entities increases. This suggests that MOPL can be easily applied to large-scale datasets with high efficiency.
Discussion
Some Implications on Entity-related Linguistic Tasks
We briefly discuss some implications of this linguistic phenomenon (i.e., that the length-frequency of entities in different types and different languages can be characterized by Marshall-Olkin power-law distributions) for entity-related linguistic tasks. This phenomenon may help explain why many statistical and deep-learning models, such as conditional random fields (Lafferty et al., 2001), long short-term memory networks (Hochreiter and Schmidhuber, 1997), and transformers (Devlin et al., 2018), can be applied to recognize all these different types of entities from unstructured text (Fukuda et al., 1998; Sang and Meulder, 2003; Takeuchi and Collier, 2005; Nadeau and Sekine, 2007; Ritter et al., 2011; Liu, 2012; Pontiki et al., 2014; Krallinger et al., 2015; Derczynski et al., 2016; Yadav and Bethard, 2018; Zhong, 2020; Zhong et al., 2022a). The phenomenon may also provide insights into analyzing low-resource languages. Since entities in different types and different languages share many common characteristics (e.g., their length-frequency distributions, average lengths, and scaling property), we could transfer knowledge and resources available in well-studied languages to low-resource languages. We could also apply statistical and deep-learning models that have proven effective and efficient in well-studied languages to low-resource languages.
Distilling such knowledge about the length-frequency distributions of entities can also drive us to design effective and efficient algorithms for specific linguistic tasks. For example, Zhong et al. (2017) found that an average time expression contains only about two words, of which one is a time token and the other is a modifier or a numeral, and then designed appropriate rules to recognize time expressions from unstructured text. To apply this linguistic knowledge and achieve further progress in linguistic tasks, however, we still need a deeper understanding of this linguistic phenomenon.
Limitations
While we find that the length-frequency distributions of entities in different types can be well characterized by Marshall-Olkin power-law (MOPL) models, and that the length-frequency distributions of entities in different languages can also be roughly characterized by MOPL models, we note that our analysis of the datasets in different languages may be inaccurate, because many of these languages are not well studied in natural language processing and related areas, and the authors do not have sufficient expertise in these languages to verify the analysis.
Conclusion
In this paper, we discover that the length-frequency distributions of entities in different types and different languages can be characterized by a family of Marshall-Olkin power-law (MOPL) models. Our discovery adds a piece of stable knowledge about language, provides insights into conducting entity-related linguistic tasks, and may offer a new perspective for future research on understanding language use. Experimental results on the length-frequency of entities in both different types and different languages demonstrate the superiority of MOPL models over a log-normal model and two state-of-the-art power-law models, namely LS avg developed by Zhong et al. (2022b) and CSN2009 developed by Clauset et al. (2009). Experimental results also demonstrate that MOPL models are scalable to the length-frequency of entities in large-scale real-world datasets.
| 6,482 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.