aid (string) | mid (string) | abstract (string) | related_work (string) | ref_abstract (dict) | title (string) | text_except_rw (string) | total_words (int64)
---|---|---|---|---|---|---|---
1901.08043 | 2914868659 | With the advent of deep learning, object detection drifted from a bottom-up to a top-down recognition problem. State of the art algorithms enumerate a near-exhaustive list of object locations and classify each into: object or not. In this paper, we show that bottom-up approaches still perform competitively. We detect four extreme points (top-most, left-most, bottom-most, right-most) and one center point of objects using a standard keypoint estimation network. We group the five keypoints into a bounding box if they are geometrically aligned. Object detection is then a purely appearance-based keypoint estimation problem, without region classification or implicit feature learning. The proposed method performs on-par with the state-of-the-art region based detection methods, with a bounding box AP of 43.2 on COCO test-dev. In addition, our estimated extreme points directly span a coarse octagonal mask, with a COCO Mask AP of 18.9 , much better than the Mask AP of vanilla bounding boxes. Extreme point guided segmentation further improves this to 34.6 Mask AP. | Region-CNN family @cite_31 @cite_39 @cite_43 @cite_4 @cite_0 considers object detection as two sequential problems: first propose a (large) set of bounding box candidates, crop them, and use an image classification module to classify the cropped region or region feature. R-CNN @cite_31 uses selective search @cite_23 to generate region proposals and feeds them to an ImageNet classification network. SPP @cite_39 and Fast RCNN @cite_43 first feed an image through a convolutional network and crop an intermediate feature map to reduce computation. Faster RCNN @cite_4 further replaces region proposals @cite_23 with a Region Proposal Network. The detection-by-classification idea is intuitive and keeps the best performance so far @cite_14 @cite_32 @cite_21 @cite_25 @cite_33 @cite_40 @cite_38 @cite_41 @cite_12 @cite_26 . | {
"abstract": [
"The goal of this paper is to serve as a guide for selecting a detection architecture that achieves the right speed memory accuracy balance for a given application and platform. To this end, we investigate various ways to trade accuracy for speed and memory usage in modern convolutional object detection systems. A number of successful systems have been proposed in recent years, but apples-to-apples comparisons are difficult due to different base feature extractors (e.g., VGG, Residual Networks), different default image resolutions, as well as different hardware and software platforms. We present a unified implementation of the Faster R-CNN [, 2015], R-FCN [, 2016] and SSD [, 2015] systems, which we view as \"meta-architectures\" and trace out the speed accuracy trade-off curve created by using alternative feature extractors and varying other critical parameters such as image size within each of these meta-architectures. On one extreme end of this spectrum where speed and memory are critical, we present a detector that achieves real time speeds and can be deployed on a mobile device. On the opposite end in which accuracy is critical, we present a detector that achieves state-of-the-art performance measured on the COCO detection task.",
"We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast Faster R-CNN [7, 19] that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets) [10], for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6 mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: https: github.com daijifeng001 r-fcn.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.",
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.",
"Modern CNN-based object detectors rely on bounding box regression and non-maximum suppression to localize objects. While the probabilities for class labels naturally reflect classification confidence, localization confidence is absent. This makes properly localized bounding boxes degenerate during iterative regression or even suppressed during NMS. In the paper we propose IoU-Net learning to predict the IoU between each detected bounding box and the matched ground-truth. The network acquires this confidence of localization, which improves the NMS procedure by preserving accurately localized bounding boxes. Furthermore, an optimization-based bounding box refinement method is proposed, where the predicted IoU is formulated as the objective. Extensive experiments on the MS-COCO dataset show the effectiveness of IoU-Net, as well as its compatibility with and adaptivity to several state-of-the-art object detectors.",
"We demonstrate that many detection methods are designed to identify only a sufficently accurate bounding box, rather than the best available one. To address this issue we propose a simple and fast modification to the existing methods called Fitness NMS. This method is tested with the DeNet model and obtains a significantly improved MAP at greater localization accuracies without a loss in evaluation rate, and can be used in conjunction with Soft NMS for additional improvements. Next we derive a novel bounding box regression loss based on a set of IoU upper bounds that better matches the goal of IoU maximization while still providing good convergence properties. Following these novelties we investigate RoI clustering schemes for improving evaluation rates for the DeNet wide model variants and provide an analysis of localization performance at various input image dimensions. We obtain a MAP of 33.6 @79Hz and 41.8 @5Hz for MSCOCO and a Titan X (Maxwell). Source code available from: https: github.com lachlants denet",
"The development of object detection in the era of deep learning, from R-CNN [11], Fast Faster R-CNN [10, 31] to recent Mask R-CNN [14] and RetinaNet [24], mainly come from novel network, new framework, or loss design. However, mini-batch size, a key factor for the training of deep neural networks, has not been well studied for object detection. In this paper, we propose a Large Mini-Batch Object Detector (MegDet) to enable the training with a large minibatch size up to 256, so that we can effectively utilize at most 128 GPUs to significantly shorten the training time. Technically, we suggest a warmup learning rate policy and Cross-GPU Batch Normalization, which together allow us to successfully train a large mini-batch detector in much less time (e.g., from 33 hours to 4 hours), and achieve even better accuracy. The MegDet is the backbone of our submission (mmAP 52.5 ) to COCO 2017 Challenge, where we won the 1st place of Detection task.",
"Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in their building modules. In this work, we introduce two new modules to enhance the transformation modeling capability of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from the target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the performance of our approach. For the first time, we show that learning dense spatial transformation in deep CNNs is effective for sophisticated vision tasks such as object detection and semantic segmentation. The code is released at https: github.com msracver Deformable-ConvNets.",
"Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.",
"",
"",
"The region-based Convolutional Neural Network (CNN) detectors such as Faster R-CNN or R-FCN have already shown promising results for object detection by combining the region proposal subnetwork and the classification subnetwork together. Although R-FCN has achieved higher detection speed while keeping the detection performance, the global structure information is ignored by the position-sensitive score maps. To fully explore the local and global properties, in this paper, we propose a novel fully convolutional network, named as CoupleNet, to couple the global structure with local parts for object detection. Specifically, the object proposals obtained by the Region Proposal Network (RPN) are fed into the the coupling module which consists of two branches. One branch adopts the position-sensitive RoI (PSRoI) pooling to capture the local part information of the object, while the other employs the RoI pooling to encode the global and context information. Next, we design different coupling strategies and normalization ways to make full use of the complementary advantages between the global and local branches. Extensive experiments demonstrate the effectiveness of our approach. We achieve state-of-the-art results on all three challenging datasets, i.e. a mAP of 82.7 on VOC07, 80.4 on VOC12, and 34.4 on COCO. Codes will be made publicly available.",
"This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html ).",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"The way that information propagates in neural networks is of great importance. In this paper, we propose Path Aggregation Network (PANet) aiming at boosting information flow in proposal-based instance segmentation framework. Specifically, we enhance the entire feature hierarchy with accurate localization signals in lower layers by bottom-up path augmentation, which shortens the information path between lower layers and topmost feature. We present adaptive feature pooling, which links feature grid and all feature levels to make useful information in each level propagate directly to following proposal subnetworks. A complementary branch capturing different views for each proposal is created to further improve mask prediction. These improvements are simple to implement, with subtle extra computational overhead. Yet they are useful and make our PANet reach the 1st place in the COCO 2017 Challenge Instance Segmentation task and the 2nd place in Object Detection task without large-batch training. PANet is also state-of-the-art on MVD and Cityscapes.",
"We define the object detection from imagery problem as estimating a very large but extremely sparse bounding box dependent probability distribution. Subsequently we identify a sparse distribution estimation scheme, Directed Sparse Sampling, and employ it in a single end-to-end CNN based detection model. This methodology extends and formalizes previous state-of-the-art detection models with an additional emphasis on high evaluation rates and reduced manual engineering. We introduce two novelties, a corner based region-of-interest estimator and a deconvolution based CNN model. The resulting model is scene adaptive, does not require manually defined reference bounding boxes and produces highly competitive results on MSCOCO, Pascal VOC 2007 and Pascal VOC 2012 with real-time evaluation rates. Further analysis suggests our model performs particularly well when finegrained object localization is desirable. We argue that this advantage stems from the significantly larger set of available regions-of-interest relative to other methods. Source-code is available from: this https URL"
],
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_4",
"@cite_33",
"@cite_26",
"@cite_41",
"@cite_21",
"@cite_32",
"@cite_39",
"@cite_0",
"@cite_43",
"@cite_40",
"@cite_23",
"@cite_31",
"@cite_25",
"@cite_12"
],
"mid": [
"2953390309",
"2407521645",
"2613718673",
"2565639579",
"2886904239",
"2963402592",
"2963016543",
"2601564443",
"2179352600",
"",
"",
"2743620784",
"2088049833",
"2102605133",
"2963857746",
"2951649698"
]
} | Bottom-up Object Detection by Grouping Extreme and Center Points | Top-down approaches have dominated object detection for years. Prevalent detectors convert object detection into rectangular region classification, by either explicitly cropping the region [12] or region feature [11,41] (two-stage object detection) or implicitly setting fixed-size anchors as region proxies [25,28,38] (one-stage object detection). However, top-down detection is not without limits. A rectangular bounding box is not a natural object representation. Most objects are not axis-aligned boxes, and fitting them inside a box includes many distracting background pixels (Figure 1). In addition, top-down object detectors enumerate a large number of possible box locations without truly understanding the compositional visual grammars [9,13] of objects themselves. This is computationally expensive. Finally, boxes are a bad proxy for the objects themselves. They convey little detailed object information, e.g., object shape and pose. Figure 1: We propose to detect objects by finding their extreme points. They directly form a bounding box, but also give a much tighter octagonal approximation of the object.
In this paper, we propose ExtremeNet, a bottom-up object detection framework that detects four extreme points (top-most, left-most, bottom-most, right-most) of an object. We use a state-of-the-art keypoint estimation framework [3,5,30,31,49] to find extreme points, by predicting four multi-peak heatmaps for each object category. In addition, we use one heatmap per category predicting the object center, as the average of the two bounding box edges in both the x and y dimensions. We group extreme points into objects with a purely geometry-based approach. We group four extreme points, one from each map, if and only if their geometric center is predicted in the center heatmap with a score higher than a pre-defined threshold. We enumerate all O(n^4) combinations of extreme point predictions, and select the valid ones. The number of extreme point predictions n is usually quite small; for COCO [26] n ≤ 40, and a brute-force algorithm implemented on a GPU is sufficient. Figure 2 shows an overview of the proposed method.
We are not the first to use deep keypoint prediction for object detection. CornerNet [22] predicts two opposing corners of a bounding box. They group corner points into bounding boxes using an associative embedding feature [30]. Our approach differs in two key aspects: keypoint definition and grouping. A corner is another form of bounding box, and suffers from many of the issues top-down detection suffers from. A corner often lies outside an object, without strong appearance features. Extreme points, on the other hand, lie on objects, are visually distinguishable, and have consistent local appearance features. For example, the top-most point of a human is often the head, and the bottom-most point of a car or airplane will be a wheel. This makes extreme point detection easier. The second difference to CornerNet is the geometric grouping. Our detection framework is fully appearance-based, without any implicit feature learning. In our experiments, the appearance-based grouping works significantly better.
Our idea is motivated by Papadopoulos et al. [33], who proposed to annotate bounding boxes by clicking the four extreme points. This annotation is roughly four times faster to collect and provides richer information than bounding boxes. Extreme points also have a close connection to object masks. Directly connecting the inflated extreme points offers a more fine-grained object mask than the bounding box. In our experiment, we show that fitting a simple octagon to the extreme points yields a good object mask estimation. Our method can be further combined with Deep Extreme Cut (DEXTR) [29], which turns extreme point annotations into a segmentation mask for the indicated object. Directly feeding our extreme point predictions as guidance to DEXTR [29] leads to close to state-of-the-art instance segmentation results.
Our proposed method achieves a bounding box AP of 43.2% on COCO test-dev, outperforming all reported one-stage object detectors [22,25,40,52] and on-par with sophisticated two-stage detectors. A Pascal VOC [8,14] pre-trained DEXTR [29] model yields a Mask AP of 34.6%, without using any COCO mask annotations. Code is available at https://github.com/xingyizhou/ExtremeNet.
Preliminaries
Extreme and center points Let (x^(tl), y^(tl), x^(br), y^(br)) denote the four sides of a bounding box. To annotate a bounding box, a user commonly clicks on the top-left (x^(tl), y^(tl)) and bottom-right (x^(br), y^(br)) corners. As both points regularly lie outside an object, these clicks are often inaccurate and need to be adjusted a few times. The whole process takes 34.5 seconds on average [44]. Papadopoulos et al. [33] propose to annotate the bounding box by clicking the four extreme points (x^(t), y^(t)), (x^(l), y^(l)), (x^(b), y^(b)), (x^(r), y^(r)), where the box is (x^(l), y^(t), x^(r), y^(b)). An extreme point is a point (x^(a), y^(a)) such that no other point (x, y) on the object lies further along one of the four cardinal directions a: top, bottom, left, right. Extreme click annotation time is 7.2 seconds on average [33]. The resulting annotation is on-par with the more time-consuming box annotation. Here, we use the extreme click annotations directly and bypass the bounding box. We additionally use the center point of each object, ((x^(l) + x^(r))/2, (y^(t) + y^(b))/2).
Keypoint detection Keypoint estimation, e.g., human joint estimation [3,5,15,30,49] or chair corner point estimation [36,53], commonly uses a fully convolutional encoder-decoder network to predict a multi-channel heatmap for each type of keypoint (e.g., one heatmap for the human head, another for the human wrist). The network is trained in a fully supervised way, with either an L2 loss to a rendered Gaussian map [3,5,30,49] or with a per-pixel logistic regression loss [22,34,35]. State-of-the-art keypoint estimation networks, e.g., the 104-layer HourglassNetwork [22,31], are trained in a fully convolutional manner. They regress to a heatmap Ŷ ∈ (0, 1)^(H×W) of width W and height H for each output channel. The training is guided by a multi-peak Gaussian heatmap Y ∈ (0, 1)^(H×W), where each keypoint defines the mean of a Gaussian kernel. The standard deviation is either fixed, or set proportional to the object size [22]. The Gaussian heatmap serves as the regression target in the L2 loss case or as the weight map to reduce the penalty near a positive location in the logistic regression case [22].
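As an illustration of these heatmap targets, here is a minimal NumPy sketch that renders a multi-peak Gaussian map for a set of keypoints. The fixed sigma and the element-wise maximum for overlapping peaks are common choices, not necessarily the exact recipe used in the paper.

```python
import numpy as np

def render_gaussian_heatmap(keypoints, height, width, sigma=2.0):
    """Render a multi-peak Gaussian target heatmap.

    keypoints: list of (x, y) coordinates in output-map resolution.
    Each keypoint contributes a Gaussian bump; overlapping bumps are
    merged with an element-wise maximum, a common convention for targets.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    heatmap = np.zeros((height, width), dtype=np.float32)
    for (kx, ky) in keypoints:
        bump = np.exp(-((xs - kx) ** 2 + (ys - ky) ** 2) / (2.0 * sigma ** 2))
        heatmap = np.maximum(heatmap, bump)
    return heatmap
```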
CornerNet CornerNet [22] uses keypoint estimation with an HourglassNetwork [31] as an object detector. They predict two sets of heatmaps for the opposing corners of the box. In order to balance the positive and negative locations they use a modified focal loss [25] for training:
$$L_{det} = -\frac{1}{N}\sum_{i=1}^{H}\sum_{j=1}^{W}\begin{cases}(1-\hat{Y}_{ij})^{\alpha}\,\log(\hat{Y}_{ij}) & \text{if } Y_{ij}=1\\[2pt] (1-Y_{ij})^{\beta}\,(\hat{Y}_{ij})^{\alpha}\,\log(1-\hat{Y}_{ij}) & \text{otherwise,}\end{cases}\quad (1)$$
where α and β are hyper-parameters and fixed to α = 2 and β = 4 during training. N is the number of objects in the image.
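A hedged PyTorch sketch of Eq. (1) follows; the masking of positives via `gt == 1` and the epsilon for numerical stability are implementation assumptions, and the predictions are assumed to already be in (0, 1) after a sigmoid.

```python
import torch

def modified_focal_loss(pred, gt, alpha=2, beta=4, eps=1e-6):
    """Penalty-reduced focal loss of Eq. (1).

    pred: predicted heatmap in (0, 1), shape (B, C, H, W).
    gt:   Gaussian-weighted ground-truth heatmap in [0, 1], same shape;
          locations with gt == 1 are the positive keypoints.
    """
    pos = gt.eq(1).float()
    neg = 1.0 - pos

    pos_loss = torch.log(pred + eps) * (1 - pred) ** alpha * pos
    neg_loss = torch.log(1 - pred + eps) * pred ** alpha * (1 - gt) ** beta * neg

    # N in Eq. (1) is the number of objects; one positive location per object
    # and class is assumed here.
    num_pos = pos.sum().clamp(min=1)
    return -(pos_loss.sum() + neg_loss.sum()) / num_pos
```

The `(1 - gt)^beta` factor is what reduces the penalty for negatives that sit close to a ground-truth peak, which is the point of the Gaussian weight map described above.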
For sub-pixel accuracy of extreme points, CornerNet additionally regresses to a category-agnostic keypoint offset Δ^(a) for each corner. This regression recovers part of the information lost in the down-sampling of the hourglass network. The offset map is trained with a Smooth L1 loss SL1 [11] on ground truth extreme point locations:
$$L_{off} = \frac{1}{N}\sum_{k=1}^{N} \mathrm{SL}_1\!\left(\Delta^{(a)},\; \frac{x}{s} - \left\lfloor \frac{x}{s} \right\rfloor\right),\quad (2)$$
where s is the down-sampling factor (s = 4 for the Hourglass network), and x is the coordinate of the keypoint. CornerNet then groups opposing corners into detections using an associative embedding [30]. Our extreme point estimation uses the CornerNet architecture and loss, but not the associative embedding.
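A small sketch of the offset target and Smooth-L1 loss of Eq. (2), assuming the predicted offsets have already been gathered at the ground-truth keypoint locations (the gathering step is outside this function).

```python
import torch
import torch.nn.functional as F

def offset_loss(pred_offset, keypoints, stride=4):
    """Smooth-L1 offset regression of Eq. (2), sketched for one offset map.

    pred_offset: (N, 2) offsets predicted at the N ground-truth keypoint
                 locations of the down-sampled map.
    keypoints:   (N, 2) keypoint coordinates in input-image pixels.
    """
    # Target is the fractional part lost by down-sampling: x/s - floor(x/s).
    target = keypoints / stride - torch.floor(keypoints / stride)
    return F.smooth_l1_loss(pred_offset, target, reduction='mean')
```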
Deep Extreme Cut Deep Extreme Cut (DEXTR) [29] is an extreme point guided image segmentation method. It takes four extreme points and the cropped image region surrounding the bounding box spanned by the extreme points as input. From this it produces a category-agnostic foreground segmentation of the indicated object using the semantic segmentation network of Chen et al. [4]. The network learns to generate the segmentation mask that matches the input extreme point.
ExtremeNet for Object Detection
ExtremeNet uses an HourglassNetwork [31] to detect five keypoints per class (four extreme points, and one center). We follow the training setup, loss, and offset prediction of CornerNet [22]. The offset prediction is category-agnostic, but extreme-point specific. There is no offset prediction for the center map. The output of our network is thus 5 × C heatmaps and 4 × 2 offset maps, where C is the number of classes (C = 80 for MS COCO [26]). Figure 3 shows an overview. Once the extreme points are extracted, we group them into detections in a purely geometric manner.
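The head layout (5 × C heatmaps, 4 × 2 offset maps) can be sketched as below; `feat_dim`, the two-layer convolutional head, and the sigmoid on the heatmaps are assumptions for illustration, not the exact CornerNet/ExtremeNet head design.

```python
import torch.nn as nn

class ExtremeNetHeads(nn.Module):
    """Illustrative prediction heads on top of a backbone feature map:
    five per-class heatmaps (top, left, bottom, right, center) and four
    2-channel offset maps (no offset for the center map)."""

    def __init__(self, feat_dim=256, num_classes=80):
        super().__init__()

        def head(out_ch):
            return nn.Sequential(
                nn.Conv2d(feat_dim, feat_dim, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(feat_dim, out_ch, 1),
            )

        self.heatmaps = nn.ModuleList([head(num_classes) for _ in range(5)])  # t, l, b, r, center
        self.offsets = nn.ModuleList([head(2) for _ in range(4)])             # extreme points only

    def forward(self, x):
        hms = [h(x).sigmoid() for h in self.heatmaps]   # responses in (0, 1)
        offs = [o(x) for o in self.offsets]             # raw sub-pixel offsets
        return hms, offs
```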
Algorithm 1: Center Grouping
Input: Center and extreme point heatmaps of an image for one category: Ŷ^(c), Ŷ^(t), Ŷ^(l), Ŷ^(b), Ŷ^(r) ∈ (0, 1)^(H×W); center and peak selection thresholds τ_c and τ_p
Output: Bounding boxes with scores
// Convert heatmaps into coordinates of keypoints. T, L, B, R are sets of points.
T ← ExtractPeak(Ŷ^(t), τ_p); L ← ExtractPeak(Ŷ^(l), τ_p); B ← ExtractPeak(Ŷ^(b), τ_p); R ← ExtractPeak(Ŷ^(r), τ_p)
for t ∈ T, l ∈ L, b ∈ B, r ∈ R do
    // If the geometric center of t, l, b, r has a response of at least τ_c in Ŷ^(c), commit the box spanned by the four points (see Center Grouping below).
Center Grouping
Extreme points lie on different sides of an object. This complicates grouping. For example, an associative embedding [30] might not have a global enough view to group these keypoints. Here, we take a different approach that exploits the spread-out nature of extreme points.
The input to our grouping algorithm is five heatmaps per class: one center heatmap Ŷ^(c) ∈ (0, 1)^(H×W) and four extreme heatmaps Ŷ^(t), Ŷ^(l), Ŷ^(b), Ŷ^(r) ∈ (0, 1)^(H×W) for the top, left, bottom, and right, respectively. Given a heatmap, we extract the corresponding keypoints by detecting all peaks. A peak is any pixel location with a value greater than τ_p that is locally maximal in a 3 × 3 window surrounding the pixel. We call this procedure ExtractPeak.
Given four extreme points t, b, r, l extracted from heatmaps Ŷ^(t), Ŷ^(l), Ŷ^(b), Ŷ^(r), we compute their geometric center c = ((l_x + r_x)/2, (t_y + b_y)/2). If this center is predicted with a high response in the center map Ŷ^(c), i.e. Ŷ^(c)_{c_x, c_y} ≥ τ_c for a threshold τ_c, we commit the extreme points as a valid detection. We then enumerate over all quadruples of keypoints t, b, r, l in a brute-force manner. We extract detections for each class independently. Algorithm 1 summarizes this procedure. We set τ_p = 0.1 and τ_c = 0.1 in all experiments.
This brute-force grouping algorithm has a runtime of O(n^4), where n is the number of extracted extreme points for each cardinal direction. The supplementary material presents an O(n^2) algorithm that is faster on paper. However, it is harder to accelerate on a GPU and slower in practice for the MS COCO dataset, where n ≤ 40.
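A NumPy sketch of ExtractPeak and the brute-force center grouping described above; the thresholds follow the text, while the geometric-validity check and the five-way score average are reasonable assumptions rather than guaranteed details of Algorithm 1.

```python
import numpy as np

def extract_peaks(heatmap, tau_p=0.1):
    """Return (x, y, score) for every pixel above tau_p that is a local
    maximum in its 3x3 neighborhood (the ExtractPeak step); heatmap is a
    float array of shape (H, W)."""
    H, W = heatmap.shape
    padded = np.pad(heatmap, 1, mode='constant', constant_values=-np.inf)
    windows = np.stack([padded[dy:dy + H, dx:dx + W]
                        for dy in range(3) for dx in range(3)])
    is_peak = (heatmap >= windows.max(axis=0)) & (heatmap > tau_p)
    ys, xs = np.nonzero(is_peak)
    return [(x, y, heatmap[y, x]) for x, y in zip(xs, ys)]

def center_grouping(ct_hm, t_hm, l_hm, b_hm, r_hm, tau_c=0.1, tau_p=0.1):
    """Brute-force grouping for one class.

    Keep a (t, l, b, r) quadruple if its geometric center scores at least
    tau_c in the center heatmap ct_hm. The validity check and the averaged
    box score are plausible choices, not taken verbatim from the paper.
    """
    T = extract_peaks(t_hm, tau_p)
    L = extract_peaks(l_hm, tau_p)
    B = extract_peaks(b_hm, tau_p)
    R = extract_peaks(r_hm, tau_p)
    detections = []
    for tx, ty, ts in T:
        for lx, ly, ls in L:
            for bx, by, bs in B:
                for rx, ry, rs in R:
                    if ty > by or lx > rx:              # keep geometrically valid boxes only
                        continue
                    cx, cy = (lx + rx) // 2, (ty + by) // 2
                    if ct_hm[cy, cx] >= tau_c:
                        score = (ts + ls + bs + rs + ct_hm[cy, cx]) / 5.0
                        detections.append((lx, ty, rx, by, score))
    return detections
```

Because n ≤ 40 per direction on COCO, the quadruple enumeration stays small enough that this O(n^4) loop is practical, especially when vectorized on a GPU as the text notes.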
Ghost box suppression
Center grouping may give a high-confidence false-positive detection for three equally spaced collinear objects of the same size. The center object has two choices here: commit to the correct small box, or predict a much larger box containing the extreme points of its neighbors. We call these false-positive detections "ghost" boxes. As we show in our experiments, these ghost boxes are infrequent, but nonetheless a consistent error mode of our grouping.
We present a simple post-processing step to remove ghost boxes. By definition, a ghost box contains many other smaller detections. To discourage ghost boxes, we use a form of soft non-maxima suppression [1]. If the sum of scores of all boxes contained in a certain bounding box exceeds 3 times its own score, we divide its score by 2. This non-maxima suppression is similar to the standard overlap-based non-maxima suppression, but penalizes potential ghost boxes instead of multiple overlapping boxes.
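A direct, hedged reading of this rule in Python; whether a box counts as containing itself is not specified above, so it is excluded here.

```python
def suppress_ghost_boxes(detections):
    """Soft suppression of 'ghost' boxes.

    detections: list of (x1, y1, x2, y2, score). If the scores of all other
    boxes fully contained in a box sum to more than 3x its own score, its
    score is halved.
    """
    out = []
    for box in detections:
        x1, y1, x2, y2, s = box
        contained_sum = sum(ds for other in detections
                            if other != box
                            for (dx1, dy1, dx2, dy2, ds) in [other]
                            if dx1 >= x1 and dy1 >= y1 and dx2 <= x2 and dy2 <= y2)
        if contained_sum > 3 * s:
            s = s / 2.0
        out.append((x1, y1, x2, y2, s))
    return out
```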
Edge aggregation
Extreme points are not always uniquely defined. If a vertical or horizontal edge of an object forms the extreme point (e.g., the top of a car), any point along that edge might be considered an extreme point. As a result, our network produces a weak response along any axis-aligned edge of the object, instead of a single strong peak response. This weak response has two issues: First, the weaker response might be below our peak selection threshold τ_p, and we will miss the extreme point entirely. Second, even if we detect the keypoint, its score will be lower than that of a slightly rotated object with a strong peak response.
We use edge aggregation to address this issue. For each extreme point, extracted as a local maximum, we aggregate its score in either the vertical direction, for left and right extreme points, or the horizontal direction, for top and bottom keypoints. We aggregate all monotonically decreasing scores, and stop the aggregation at a local minimum along the aggregation direction. Specifically, let m be an extreme point and N
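The aggregation formula is truncated in the excerpt above, so the following sketch is only a plausible reading: it sums the monotonically decreasing neighboring scores along the relevant direction and adds them, scaled by an assumed factor `lam`, to the peak score.

```python
def edge_aggregate(heatmap, px, py, horizontal=True, lam=0.1):
    """Aggregate scores along one edge direction for the peak at (px, py).

    Top/bottom extreme points aggregate horizontally (along the row),
    left/right points vertically (along the column). `lam` is an assumed
    scaling factor; the exact weighting in the paper is not reproduced here.
    """
    line = heatmap[py, :] if horizontal else heatmap[:, px]
    start = px if horizontal else py
    total = 0.0
    for step in (-1, 1):                               # walk in both directions
        prev = line[start]
        i = start + step
        while 0 <= i < len(line) and line[i] <= prev:  # stop once scores rise again
            total += line[i]
            prev = line[i]
            i += step
    return heatmap[py, px] + lam * total
```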
Extreme Instance Segmentation
Extreme points carry considerably more information about an object than a simple bounding box, with at least twice as many annotated values (8 vs. 4). We propose a simple method to approximate the object mask using extreme points by creating an octagon whose edges are centered on the extreme points. Specifically, for each extreme point, we extend it in both directions on its corresponding edge to a segment of 1/4 of the entire edge length. The segment is truncated when it meets a corner. We then connect the end points of the four segments to form the octagon. See Figure 1 for an example.
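A small sketch of this octagon construction; the clipping of segment endpoints to the box plays the role of "truncated when it meets a corner", and the vertex ordering is one reasonable choice.

```python
def octagon_from_extremes(t, l, b, r):
    """Build the octagon described above.

    t, l, b, r are (x, y) extreme points. Each extreme point is extended in
    both directions along its edge to a segment of 1/4 of that edge length,
    clipped to the spanned box; the eight segment endpoints are returned in
    order around the box.
    """
    x1, y1, x2, y2 = l[0], t[1], r[0], b[1]   # box spanned by the extreme points
    w, h = x2 - x1, y2 - y1
    dx, dy = w / 8.0, h / 8.0                 # half of a 1/4-length segment

    def clip_x(x): return min(max(x, x1), x2)
    def clip_y(y): return min(max(y, y1), y2)

    return [
        (clip_x(t[0] - dx), y1), (clip_x(t[0] + dx), y1),   # top segment
        (x2, clip_y(r[1] - dy)), (x2, clip_y(r[1] + dy)),   # right segment
        (clip_x(b[0] + dx), y2), (clip_x(b[0] - dx), y2),   # bottom segment
        (x1, clip_y(l[1] + dy)), (x1, clip_y(l[1] - dy)),   # left segment
    ]
```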
To further refine the bounding box segmentation, we use Deep Extreme Cut (DEXTR) [29], a deep network trained to convert manually provided extreme points into an instance segmentation mask. In this work, we simply replace the manual input of DEXTR [29] with our extreme point prediction, to perform a 2-stage instance segmentation. Specifically, for each of our predicted bounding boxes, we crop the bounding box region, render a Gaussian map with our predicted extreme points, and then feed the concatenated image to the pre-trained DEXTR model. DEXTR [29] is class-agnostic, thus we directly use the detected class and score of ExtremeNet. No further post-processing is used.
Experiments
We evaluate our method on the popular MS COCO dataset [26]. COCO contains rich bounding box and instance segmentation annotations for 80 categories. We train on the train2017 split, which contains 118k images and 860k annotated objects. We perform all ablation studies on the val2017 split, with 5k images and 36k objects, and compare to prior work on the test-dev split, which contains 20k images. The main evaluation metric is average precision over a dense set of fixed recall thresholds. We report average precision at IoU thresholds 0.5 (AP50) and 0.75 (AP75), and averaged over all thresholds between 0.5 and 1 (AP). We also report AP for small, medium, and large objects (APS, APM, APL). The test evaluation is done on the official evaluation server. Qualitative results are shown in Table 4, and more can be found in the supplementary material.
Extreme point annotations
There are no direct extreme point annotations in COCO [26]. However, there are complete annotations for object segmentation masks. We thus find extreme points as extrema of the polygonal mask annotations. In cases where an edge is parallel to an axis, or within a 3° angle of it, we place the extreme point at the center of the edge. Although our training data is derived from the more expensive segmentation annotation, the extreme point data itself is 4× cheaper to collect than a standard bounding box [33].
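A minimal sketch of deriving extreme points from a COCO polygon annotation; it simply takes the argmin/argmax vertices and omits the special handling of near-axis-parallel edges (placing the point at the edge center) described above.

```python
import numpy as np

def extremes_from_polygon(poly):
    """Derive (top, left, bottom, right) extreme points from a polygon mask.

    poly: (N, 2) array of (x, y) vertices. Returns the four extreme vertices;
    ties and long axis-parallel edges are not treated specially here.
    """
    poly = np.asarray(poly, dtype=np.float32)
    top = poly[np.argmin(poly[:, 1])]
    bottom = poly[np.argmax(poly[:, 1])]
    left = poly[np.argmin(poly[:, 0])]
    right = poly[np.argmax(poly[:, 0])]
    return top, left, bottom, right
```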
Training details
Our implementation is based on the public implementation of CornerNet [22]. We strictly follow CornerNet's hyper-parameters: we set the input resolution to 511 × 511 and the output resolution to 128 × 128. Data augmentation consists of flipping, random scaling between 0.6 and 1.3, random cropping, and random color jittering. The network is optimized with Adam [21] with a learning rate of 2.5e-4. CornerNet [22] was originally trained on 10 GPUs for 500k iterations, an equivalent of over 140 GPU-days. Due to limited GPU resources, we fine-tune our network from a pre-trained CornerNet model with randomly initialized head layers on 5 GPUs for 250k iterations with a batch size of 24. The learning rate is dropped by 10× after 200k iterations.
Testing details
For each input image, our network produces four C-channel heatmaps for extreme points, one C-channel heatmap for center points, and four 2-channel offset maps. We apply edge aggregation (Section 4.3) to each extreme point heatmap, and multiply the center heatmap by 2 to correct for the overall scale change. We then apply the center grouping algorithm (Section 4.1) to the heatmaps. At most 40 top points are extracted in ExtractPeak to keep the enumeration efficient. The predicted bounding box coordinates are refined by adding the offsets at the corresponding locations of the offset maps.
Following CornerNet [22], we keep the original image resolution instead of resizing it to a fixed size. We use flip augmentation for testing. In our main comparison, we additionally use 5× multi-scale (0.5, 0.75, 1, 1.25, 1.5) augmentation. Finally, Soft-NMS [1] filters all augmented detection results. Testing on one image takes 322 ms (3.1 FPS), with 168 ms on network forwarding, 130 ms on grouping, and the rest on image loading and post-processing (NMS).
Ablation studies
Center Grouping vs. Associative Embedding Our ExtremeNet can also be trained with an associative embedding [30] instead of our geometric center grouping, by replacing the center map with a four-channel associative embedding feature map trained with a Hinge Loss [22]. Table 1 shows the result. We observe a 2.1% AP drop when using the associative embedding. While associative embeddings work well for human pose estimation and CornerNet, our extreme points lie on the very side of objects. Learning the identity and appearance of entire objects from the vantage point of their extreme points might simply be too hard. While it might work well for small objects, where the entire object easily fits into the effective receptive field of a keypoint, it fails for medium and large objects, as shown in Table 1. Furthermore, extreme points often lie at the intersection between overlapping objects, which further confuses the identity feature. Our geometric grouping method gracefully deals with these issues, as it only needs to reason about appearance.
Edge aggregation Edge aggregation (Section 4.3) gives a decent AP improvement of 0.7%. It proves more effective for larger objects, which are more likely to have long axis-aligned edges without a single well-defined extreme point. It effectively enhances the predicted heatmap in a simple post-processing step with minor additional cost.
Ghost box suppression Our simple ghost bounding box suppression (Section 4.2) yields a 0.3% AP improvement. This suggests that ghost boxes are not a significant practical issue in MS COCO. A more sophisticated false-positive removal algorithm, e.g., learned NMS [18], might yield a slightly better result.
Error Analysis To better understand where the error comes from and how well each of our components is trained, we provide an error analysis by replacing each output component with its ground truth. Table 1 shows the result. A ground truth center heatmap alone does not increase AP much. This indicates that our center heatmap is trained quite well, and shows that the implicit object center is learnable. Replacing the extreme point heatmap with the ground truth gives a 16.3% AP improvement, with nearly twice that improvement for small-object AP. This indicates that keypoints of small objects are still hard to learn. The keypoint extractor struggles both with exact localization and with outright missing objects. When replacing both the extreme point heatmap and the center heatmap, the result rises to 79.8%, much higher than replacing either one alone. This is because our center grouping is very strict about keypoint locations, so high performance requires both components to improve.
State-of-the-art comparisons
Table 2: State-of-the-art comparison on COCO test-dev. SS/MS are short for single-scale/multi-scale testing, respectively. It shows that our ExtremeNet is on-par with state-of-the-art region-based object detectors.
Table 2 compares ExtremeNet to other state-of-the-art methods on COCO test-dev. Our model with multi-scale testing achieves an AP of 43.2, outperforming all reported one-stage object detectors and on-par with popular two-stage detectors. Notably, it performs 1.1% higher than CornerNet, which shows the advantage of detecting extreme and center points over detecting corners with associative features. In the single-scale setting, our performance is 0.4% AP below CornerNet [22]. However, our method has higher AP than CornerNet for small and medium objects, which are known to be more challenging. For larger objects our center response map might not be accurate enough to perform well, as a few pixels of shift might make the difference between a detection and a false negative. Further, note that we used half the number of GPUs to train our model.
Instance Segmentation
Finally, we compare our instance segmentation results with and without DEXTR [29] to other baselines. Table 3 shows the results.
Table 4: Qualitative results on COCO val2017. First and second columns: our predicted (combined four) extreme point heatmap and center heatmap, respectively, overlaid on the input image; heatmaps of different categories are shown in different colors. Third column: our predicted bounding box and the octagon mask formed by the extreme points. Fourth column: resulting masks from feeding our extreme point predictions to DEXTR [29].
As a dummy baseline, we directly assign all pixels inside the rectangular bounding box as the segmentation mask. The result of our best model (with 43.3% bounding box AP) is 12.1% Mask AP. The simple octagon mask (Section 4.4) based on our predicted extreme points gets a Mask AP of 18.9%, much better than the bounding box baseline. This shows that the simple octagon mask can give a reasonably good object mask without additional cost. Note that directly using the quadrangle of the four extreme points yields a too-small mask, with a lower IoU.
When combined with DEXTR [29], our method achieves a Mask AP of 34.6% on COCO val2017. To put this result in context, the state-of-the-art Mask R-CNN [15] gets a Mask AP of 37.5% with a ResNeXt-101-FPN [24,50] backbone and 34.0% AP with Res50-FPN. Considering that our model has not been trained on the COCO segmentation annotations, or on any class-specific segmentations at all, our result, which is on-par with Res50 [17] and 2.9% AP below ResNeXt-101, is very competitive.
Conclusion
In conclusion, we present a novel object detection framework based on bottom-up extreme point estimation. Our framework extracts four extreme points and groups them in a purely geometric manner. The presented framework yields state-of-the-art detection results and produces competitive instance segmentation results on MS COCO, without seeing any COCO training instance segmentations. | 3,844 |
1901.08043 | 2914868659 | With the advent of deep learning, object detection drifted from a bottom-up to a top-down recognition problem. State of the art algorithms enumerate a near-exhaustive list of object locations and classify each into: object or not. In this paper, we show that bottom-up approaches still perform competitively. We detect four extreme points (top-most, left-most, bottom-most, right-most) and one center point of objects using a standard keypoint estimation network. We group the five keypoints into a bounding box if they are geometrically aligned. Object detection is then a purely appearance-based keypoint estimation problem, without region classification or implicit feature learning. The proposed method performs on-par with the state-of-the-art region based detection methods, with a bounding box AP of 43.2 on COCO test-dev. In addition, our estimated extreme points directly span a coarse octagonal mask, with a COCO Mask AP of 18.9 , much better than the Mask AP of vanilla bounding boxes. Extreme point guided segmentation further improves this to 34.6 Mask AP. | One-stage object detectors @cite_48 @cite_5 @cite_42 @cite_15 @cite_20 @cite_6 @cite_18 do not have a region cropping module. They can be considered as region or anchor proposal networks and directly assign a class label to each positive anchor. SSD @cite_42 @cite_27 uses different scale anchors in different network layers. YOLOv2 @cite_15 learns category-specific anchor shape priors. RetinaNet @cite_48 proposes a focal loss to balance the training contribution between positive and negative anchors. RefineDet @cite_37 learns to early reject negative anchors. Well-designed single-stage object detectors achieve very close performance with two-stage ones at higher efficiency. | {
"abstract": [
"We present Deeply Supervised Object Detector (DSOD), a framework that can learn object detectors from scratch. State-of-the-art object objectors rely heavily on the off the-shelf networks pre-trained on large-scale classification datasets like Image Net, which incurs learning bias due to the difference on both the loss functions and the category distributions between classification and detection tasks. Model fine-tuning for the detection task could alleviate this bias to some extent but not fundamentally. Besides, transferring pre-trained models from classification to detection between discrepant domains is even more difficult (e.g. RGB to depth images). A better solution to tackle these two critical problems is to train object detectors from scratch, which motivates our proposed DSOD. Previous efforts in this direction mostly failed due to much more complicated loss functions and limited training data in object detection. In DSOD, we contribute a set of design principles for training object detectors from scratch. One of the key findings is that deep supervision, enabled by dense layer-wise connections, plays a critical role in learning a good detector. Combining with several other principles, we develop DSOD following the single-shot detection (SSD) framework. Experiments on PASCAL VOC 2007, 2012 and MS COCO datasets demonstrate that DSOD can achieve better results than the state-of-the-art solutions with much more compact models. For instance, DSOD outperforms SSD on all three benchmarks with real-time detection speed, while requires only 1 2 parameters to SSD and 1 10 parameters to Faster RCNN.",
"For object detection, the two-stage approach (e.g., Faster R-CNN) has been achieving the highest accuracy, whereas the one-stage approach (e.g., SSD) has the advantage of high efficiency. To inherit the merits of both while overcoming their disadvantages, in this paper, we propose a novel single-shot based detector, called RefineDet, that achieves better accuracy than two-stage methods and maintains comparable efficiency of one-stage methods. RefineDet consists of two inter-connected modules, namely, the anchor refinement module and the object detection module. Specifically, the former aims to (1) filter out negative anchors to reduce search space for the classifier, and (2) coarsely adjust the locations and sizes of anchors to provide better initialization for the subsequent regressor. The latter module takes the refined anchors as the input from the former to further improve the regression accuracy and predict multi-class label. Meanwhile, we design a transfer connection block to transfer the features in the anchor refinement module to predict locations, sizes and class labels of objects in the object detection module. The multitask loss function enables us to train the whole network in an end-to-end way. Extensive experiments on PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO demonstrate that RefineDet achieves state-of-the-art detection accuracy with high efficiency. Code is available at https: github.com sfzhang15 RefineDet.",
"The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.",
"Object detection is a core problem in computer vision. With the development of deep ConvNets, the performance of object detectors has been dramatically improved. The deep ConvNets based object detectors mainly focus on regressing the coordinates of bounding box, e.g., Faster-R-CNN, YOLO and SSD. Different from these methods that considering bounding box as a whole, we propose a novel object bounding box representation using points and links and implemented using deep ConvNets, termed as Point Linking Network (PLN). Specifically, we regress the corner center points of bounding-box and their links using a fully convolutional network; then we map the corner points and their links back to multiple bounding boxes; finally an object detection result is obtained by fusing the multiple bounding boxes. PLN is naturally robust to object occlusion and flexible to object scale variation and aspect ratio variation. In the experiments, PLN with the Inception-v2 model achieves state-of-the-art single-model and single-scale results on the PASCAL VOC 2007, the PASCAL VOC 2012 and the COCO detection benchmarks without bells and whistles. The source code will be released.",
"The main contribution of this paper is an approach for introducing additional context into state-of-the-art general object detection. To achieve this we first combine a state-of-the-art classifier (Residual-101[14]) with a fast detection framework (SSD[18]). We then augment SSD+Residual-101 with deconvolution layers to introduce additional large-scale context in object detection and improve accuracy, especially for small objects, calling our resulting system DSSD for deconvolutional single shot detector. While these two contributions are easily described at a high-level, a naive implementation does not succeed. Instead we show that carefully adding additional stages of learned transformations, specifically a module for feed-forward connections in deconvolution and a new output module, enables this new approach and forms a potential way forward for further detection research. Results are shown on both PASCAL VOC and COCO detection. Our DSSD with @math input achieves 81.5 mAP on VOC2007 test, 80.0 mAP on VOC2012 test, and 33.2 mAP on COCO, outperforming a state-of-the-art method R-FCN[3] on each dataset.",
"",
"We introduce YOLO9000, a state-of-the-art, real-time object detection system that can detect over 9000 object categories. First we propose various improvements to the YOLO detection method, both novel and drawn from prior work. The improved model, YOLOv2, is state-of-the-art on standard detection tasks like PASCAL VOC and COCO. Using a novel, multi-scale training method the same YOLOv2 model can run at varying sizes, offering an easy tradeoff between speed and accuracy. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At 40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like Faster RCNN with ResNet and SSD while still running significantly faster. Finally we propose a method to jointly train on object detection and classification. Using this method we train YOLO9000 simultaneously on the COCO detection dataset and the ImageNet classification dataset. Our joint training allows YOLO9000 to predict detections for object classes that dont have labelled detection data. We validate our approach on the ImageNet detection task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite only having detection data for 44 of the 200 classes. On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP. YOLO9000 predicts detections for more than 9000 different object categories, all in real-time.",
"We propose CornerNet, a new approach to object detection where we detect an object bounding box as a pair of keypoints, the top-left corner and the bottom-right corner, using a single convolution neural network. By detecting objects as paired keypoints, we eliminate the need for designing a set of anchor boxes commonly used in prior single-stage detectors. In addition to our novel formulation, we introduce corner pooling, a new type of pooling layer that helps the network better localize corners. Experiments show that CornerNet achieves a 42.2 AP on MS COCO, outperforming all existing one-stage detectors."
],
"cite_N": [
"@cite_18",
"@cite_37",
"@cite_48",
"@cite_42",
"@cite_6",
"@cite_27",
"@cite_5",
"@cite_15",
"@cite_20"
],
"mid": [
"2963813458",
"2963786238",
"2743473392",
"2193145675",
"2625952574",
"2579985080",
"",
"2570343428",
"2886335102"
]
} | Bottom-up Object Detection by Grouping Extreme and Center Points | Top-down approaches have dominated object detection for years. Prevalent detectors convert object detection into rectangular region classification, by either explicitly cropping the region [12] or region feature [11,41] (two-stage object detection) or implicitly setting fix-sized anchors for region proxies [25,28,38] (one-stage object detection). However, top-down detection is not without limits. A rectangular bounding box is not a natural object representation. Most objects are not axis-aligned boxes, and fitting them inside a box includes many distracting background pixels ( Figure. 1). In addition, top-down object detectors enumerate a large number of possible box locations without truly understanding the compositional visual grammars [9,13] of objects themselves. This is computationally expensive. Finally, boxes are a bad proxy for the object themselves. They convey little detailed object information, e.g., object shape and pose. Figure 1: We propose to detect objects by finding their extreme points. They directly form a bounding box , but also give a much tighter octagonal approximation of the object.
In this paper, we propose ExtremeNet, a bottom-up object detection framework that detects four extreme points (top-most, left-most, bottom-most, right-most) of an object. We use a state-of-the-art keypoint estimation framework [3,5,30,31,49] to find extreme points, by predicting four multi-peak heatmaps for each object category. In addition, we use one heatmap per category predicting the object center, as the average of two bounding box edges in both the x and y dimension. We group extreme points into objects with a purely geometry-based approach. We group four extreme points, one from each map, if and only if their geometric center is predicted in the center heatmap with a score higher than a pre-defined threshold. We enumerate all O(n 4 ) combinations of extreme point prediction, and select the valid ones. The number of extreme point prediction n is usually quite small, for COCO [26] n ≤ 40, and a brute force algorithm implemented on GPU is sufficient. Figure 2 shows an overview of the proposed method.
We are not the first to use deep keypoint prediction for object detection. CornerNet [22] predicts two opposing corners of a bounding box. They group corner points into bounding boxes using an associative embedding feature [30]. Our approach differs in two key aspects: key- point definition and grouping. A corner is another form of bounding box, and suffers many of the issues top-down detection suffers from. A corner often lies outside an object, without strong appearance features. Extreme points, on the other hand, lie on objects, are visually distinguishable, and have consistent local appearance features. For example, the top-most point of human is often the head, and the bottommost point of a car or airplane will be a wheel. This makes the extreme point detection easier. The second difference to CornerNet is the geometric grouping. Our detection framework is fully appearance-based, without any implicit feature learning. In our experiments, the appearance-based grouping works significantly better.
Our idea is motivated by Papadopoulos et al. [33], who proposed to annotate bounding boxes by clicking the four extreme points. This annotation is roughly four times faster to collect and provides richer information than bounding boxes. Extreme points also have a close connection to object masks. Directly connecting the inflated extreme points offers a more fine-grained object mask than the bounding box. In our experiment, we show that fitting a simple octagon to the extreme points yields a good object mask estimation. Our method can be further combined with Deep Extreme Cut (DEXTR) [29], which turns extreme point annotations into a segmentation mask for the indicated object. Directly feeding our extreme point predictions as guidance to DEXTR [29] leads to close to state-of-the-art instance segmentation results.
Our proposed method achieves a bounding box AP of 43.2% on COCO test-dev, out-performing all reported one-stage object detectors [22,25,40,52] and on-par with sophisticated two-stage detectors. A Pascal VOC [8,14] pre-trained DEXTR [29] model yields a Mask AP of 34.6%, without using any COCO mask annotations. Code is available at https://github.com/xingyizhou/ExtremeNet.
Preliminaries
Extreme and center points Let (x^(tl), y^(tl), x^(br), y^(br)) denote the four sides of a bounding box. To annotate a bounding box, a user commonly clicks on the top-left (x^(tl), y^(tl)) and bottom-right (x^(br), y^(br)) corners. As both points regularly lie outside an object, these clicks are often inaccurate and need to be adjusted a few times. The whole process takes 34.5 seconds on average [44]. Papadopoulos et al. [33] propose to annotate the bounding box by clicking the four extreme points
(x^(t), y^(t)), (x^(l), y^(l)), (x^(b), y^(b)), (x^(r), y^(r)), where the box is (x^(l), y^(t), x^(r), y^(b)). An extreme point is a point (x^(a), y^(a)) such that no other point (x, y) on the object lies further along one of the four cardinal directions a: top, bottom, left, right. Extreme click annotation takes 7.2 seconds on average [33]. The resulting annotation is on-par with the more time-consuming box annotation. Here, we use the extreme click annotations directly and bypass the bounding box. We additionally use the center point of each object, ((x^(l) + x^(r))/2, (y^(t) + y^(b))/2).
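To make the bookkeeping concrete, the minimal sketch below converts four extreme points into the spanned bounding box and the derived center point. The `ExtremePoints` container, field names, and example coordinates are our own illustration, not part of the paper or its released code.

```python
from dataclasses import dataclass

@dataclass
class ExtremePoints:
    # (x, y) coordinates of the four extreme points of one object.
    top: tuple      # (x_t, y_t)
    left: tuple     # (x_l, y_l)
    bottom: tuple   # (x_b, y_b)
    right: tuple    # (x_r, y_r)

    def box(self):
        # The spanned bounding box (x_l, y_t, x_r, y_b).
        return (self.left[0], self.top[1], self.right[0], self.bottom[1])

    def center(self):
        # Center as the average of the box edges in x and in y.
        return ((self.left[0] + self.right[0]) / 2.0,
                (self.top[1] + self.bottom[1]) / 2.0)

# Hypothetical object annotated by its four extreme clicks.
ep = ExtremePoints(top=(120, 30), left=(80, 90), bottom=(130, 160), right=(180, 100))
print(ep.box())     # (80, 30, 180, 160)
print(ep.center())  # (130.0, 95.0)
```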
Keypoint detection Keypoint estimation, e.g., human joint estimation [3,5,15,30,49] or chair corner point estimation [36,53], commonly uses a fully convolutional encoder-decoder network to predict a multi-channel heatmap for each type of keypoint (e.g., one heatmap for the human head, another for the human wrist). The network is trained in a fully supervised way, with either an L2 loss to a rendered Gaussian map [3,5,30,49] or a per-pixel logistic regression loss [22,34,35]. State-of-the-art keypoint estimation networks, e.g., the 104-layer HourglassNetwork [22,31], are trained in a fully convolutional manner. They regress to a heatmap Ŷ ∈ (0, 1)^(H×W) of width W and height H for each output channel. The training is guided by a multi-peak Gaussian heatmap Y ∈ (0, 1)^(H×W), where each keypoint defines the mean of a Gaussian kernel. The standard deviation is either fixed or set proportional to the object size [22]. The Gaussian heatmap serves as the regression target in the L2-loss case, or as a weight map that reduces the penalty near a positive location in the logistic-regression case [22].
CornerNet CornerNet [22] uses keypoint estimation with an HourglassNetwork [31] as an object detector. They predict two sets of heatmaps for the opposing corners of the box. In order to balance the positive and negative locations they use a modified focal loss [25] for training:
L_{det} = -\frac{1}{N} \sum_{i=1}^{H} \sum_{j=1}^{W} \begin{cases} (1 - \hat{Y}_{ij})^{\alpha} \log(\hat{Y}_{ij}) & \text{if } Y_{ij} = 1 \\ (1 - Y_{ij})^{\beta} (\hat{Y}_{ij})^{\alpha} \log(1 - \hat{Y}_{ij}) & \text{otherwise,} \end{cases} \quad (1)
where α and β are hyper-parameters and fixed to α = 2 and β = 4 during training. N is the number of objects in the image.
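For concreteness, a hedged PyTorch sketch of the loss in Eq. (1) follows. It assumes `pred` is the sigmoid heatmap Ŷ and `gt` the Gaussian-weighted target Y; normalizing by the number of positive locations mirrors common open-source implementations rather than this paper's exact code.

```python
import torch

def modified_focal_loss(pred, gt, alpha=2, beta=4, eps=1e-6):
    """Eq. (1): penalty-reduced pixel-wise logistic regression with focal weighting.

    pred: predicted heatmap \\hat{Y} in (0, 1), shape (B, C, H, W).
    gt:   target heatmap Y in [0, 1]; Y == 1 marks keypoint locations.
    """
    pos = gt.eq(1).float()                    # positive locations (Y_ij = 1)
    neg = gt.lt(1).float()                    # all other locations
    neg_weight = torch.pow(1 - gt, beta)      # (1 - Y_ij)^beta reduces the penalty near positives

    pos_loss = torch.pow(1 - pred, alpha) * torch.log(pred + eps) * pos
    neg_loss = neg_weight * torch.pow(pred, alpha) * torch.log(1 - pred + eps) * neg

    num_pos = pos.sum().clamp(min=1)          # proxy for N, the number of objects in the image
    return -(pos_loss.sum() + neg_loss.sum()) / num_pos
```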
For sub-pixel accuracy of extreme points, CornerNet additionally regresses to a category-agnostic keypoint offset ∆^(a) for each corner. This regression recovers part of the information lost in the down-sampling of the hourglass network. The offset map is trained with a smooth L1 loss SL_1 [11] on ground-truth extreme point locations:
L_{off} = \frac{1}{N} \sum_{k=1}^{N} \mathrm{SL}_1\!\left(\Delta^{(a)},\ x/s - \lfloor x/s \rfloor\right), \quad (2)
where s is the down-sampling factor (s = 4 for the HourglassNetwork) and x is the coordinate of the keypoint. CornerNet then groups opposing corners into detections using an associative embedding [30]. Our extreme point estimation uses the CornerNet architecture and loss, but not the associative embedding.
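The offset target in Eq. (2) is simply the fractional residue lost by the stride-s downsampling. The snippet below is our own illustration with a hypothetical keypoint coordinate; it shows how such a target could be formed and compared against a predicted offset with a smooth L1 loss.

```python
import torch
import torch.nn.functional as F

def offset_target(xy, s=4):
    # Fractional part x/s - floor(x/s) that the stride-s heatmap cannot represent.
    xy = torch.as_tensor(xy, dtype=torch.float32)
    return xy / s - torch.floor(xy / s)

# Hypothetical extreme point at (x, y) = (213, 87) on the input image.
target = offset_target([213.0, 87.0])          # tensor([0.2500, 0.7500])
pred_offset = torch.tensor([0.30, 0.70])       # what the offset head predicts at that cell
loss = F.smooth_l1_loss(pred_offset, target)   # the SL_1 term of Eq. (2), before averaging over N
print(target, loss)
```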
Deep Extreme Cut Deep Extreme Cut (DEXTR) [29] is an extreme point guided image segmentation method. It takes as input four extreme points and the cropped image region surrounding the bounding box spanned by the extreme points. From this it produces a category-agnostic foreground segmentation of the indicated object using the semantic segmentation network of Chen et al. [4]. The network learns to generate a segmentation mask that matches the input extreme points.
ExtremeNet for Object detection
ExtremeNet uses an HourglassNetwork [31] to detect five keypoints per class (four extreme points and one center). We follow the training setup, loss, and offset prediction of CornerNet [22]. The offset prediction is category-agnostic, but extreme-point specific. There is no offset prediction for the center map. The output of our network is thus 5 × C heatmaps and 4 × 2 offset maps, where C is the number of classes (C = 80 for MS COCO [26]). Figure 3 shows an overview. Once the extreme points are extracted, we group them into detections in a purely geometric manner.
Algorithm 1: Center Grouping
Input: Center and extreme point heatmaps of an image for one category: Ŷ^(c), Ŷ^(t), Ŷ^(l), Ŷ^(b), Ŷ^(r) ∈ (0, 1)^(H×W); center and peak selection thresholds τ_c and τ_p
Output: Bounding boxes with scores
// Convert heatmaps into coordinates of keypoints. T, L, B, R are sets of points.
T ← ExtractPeak(Ŷ^(t), τ_p); L ← ExtractPeak(Ŷ^(l), τ_p); B ← ExtractPeak(Ŷ^(b), τ_p); R ← ExtractPeak(Ŷ^(r), τ_p)
for t ∈ T, l ∈ L, b ∈ B, r ∈ R do
    // If the geometric center of the four points has a high center response, commit the detection.
    c ← ((l_x + r_x)/2, (t_y + b_y)/2)
    if Ŷ^(c)_c ≥ τ_c then add bounding box (l_x, t_y, r_x, b_y) with a score combining the five peak responses
Center Grouping
Extreme points lie on different sides of an object. This complicates grouping. For example, an associative embedding [30] might not have a global enough view to group these keypoints. Here, we take a different approach that exploits the spread out nature of extreme points.
The input to our grouping algorithm is five heatmaps per class: one center heatmap Ŷ^(c) ∈ (0, 1)^(H×W) and four extreme heatmaps Ŷ^(t), Ŷ^(l), Ŷ^(b), Ŷ^(r) ∈ (0, 1)^(H×W) for the top, left, bottom, and right, respectively. Given a heatmap, we extract the corresponding keypoints by detecting all peaks. A peak is any pixel location with a value greater than τ_p that is locally maximal in the 3 × 3 window surrounding the pixel. We name this procedure ExtractPeak.
Given four extreme points t, l, b, r extracted from heatmaps Ŷ^(t), Ŷ^(l), Ŷ^(b), Ŷ^(r), we compute their geometric center c = ((l_x + r_x)/2, (t_y + b_y)/2). If this center is predicted with a high response in the center map Ŷ^(c), i.e., Ŷ^(c)_{c_x,c_y} ≥ τ_c for a threshold τ_c, we commit the extreme points as a valid detection. We then enumerate over all quadruples of keypoints t, l, b, r in a brute-force manner and extract detections for each class independently. Algorithm 1 summarizes this procedure. We set τ_p = 0.1 and τ_c = 0.1 in all experiments.
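Below is a compact NumPy sketch of ExtractPeak and the center grouping loop for one class. It follows the description above but is not the released implementation: the peak extraction is a slow reference version, and the detection score (mean of the five peak responses) is our own illustrative choice.

```python
import numpy as np

def extract_peak(heatmap, tau_p=0.1, max_det=40):
    """Return (score, x, y) of local maxima in a 3x3 window with score > tau_p."""
    H, W = heatmap.shape
    peaks = []
    for y in range(H):
        for x in range(W):
            v = heatmap[y, x]
            window = heatmap[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            if v > tau_p and v >= window.max():
                peaks.append((float(v), x, y))
    peaks.sort(reverse=True)
    return peaks[:max_det]                            # keep at most 40 peaks, as at test time

def center_group(ct, tp, lf, bt, rt, tau_p=0.1, tau_c=0.1):
    """Brute-force O(n^4) grouping of extreme points, validated by the center heatmap."""
    T, L, B, R = (extract_peak(h, tau_p) for h in (tp, lf, bt, rt))
    detections = []
    for st, tx, ty in T:
        for sl, lx, ly in L:
            for sb, bx, by in B:
                for sr, rx, ry in R:
                    if not (lx <= rx and ty <= by):   # geometrically invalid combination
                        continue
                    cx = int(round((lx + rx) / 2))
                    cy = int(round((ty + by) / 2))
                    sc = float(ct[cy, cx])
                    if sc >= tau_c:                   # center response confirms the grouping
                        score = (st + sl + sb + sr + sc) / 5.0   # illustrative scoring choice
                        detections.append((lx, ty, rx, by, score))
    return detections
```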
This brute-force grouping algorithm has a runtime of O(n^4), where n is the number of extracted extreme points in each cardinal direction. The supplementary material presents an O(n^2) algorithm that is faster on paper. However, it is harder to accelerate on a GPU and slower in practice on the MS COCO dataset, where n ≤ 40.
Ghost box suppression
Center grouping may give a high-confidence false-positive detection for three equally spaced collinear objects of the same size. The center object has two choices here: commit to the correct small box, or predict a much larger box containing the extreme points of its neighbors. We call these false-positive detections "ghost" boxes. As we show in our experiments, these ghost boxes are infrequent, but nonetheless a consistent error mode of our grouping.
We present a simple post-processing step to remove ghost boxes. By definition, a ghost box contains many other smaller detections. To discourage ghost boxes, we use a form of soft non-maxima suppression [1]: if the sum of the scores of all boxes contained in a given bounding box exceeds 3 times its own score, we divide its score by 2. This is similar to standard overlap-based non-maxima suppression, but penalizes potential ghost boxes instead of multiple overlapping boxes.
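A minimal sketch of this test is given below; treating "contained" as full containment of one box inside another is our reading of the description, and boxes are assumed to be (x1, y1, x2, y2, score) tuples.

```python
def suppress_ghost_boxes(boxes):
    """Halve the score of any box whose contained boxes' scores sum to more than 3x its own score."""
    def contains(outer, inner):
        # Full geometric containment of inner inside outer.
        return (outer[0] <= inner[0] and outer[1] <= inner[1] and
                outer[2] >= inner[2] and outer[3] >= inner[3])

    out = []
    for i, b in enumerate(boxes):
        contained_score = sum(o[4] for j, o in enumerate(boxes) if j != i and contains(b, o))
        score = b[4] / 2.0 if contained_score > 3.0 * b[4] else b[4]
        out.append((b[0], b[1], b[2], b[3], score))
    return out
```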
Edge aggregation
Extreme points are not always uniquely defined. If a vertical or horizontal edge of an object forms the extreme point (e.g., the top of a car), any point along that edge might be considered an extreme point. As a result, our network produces a weak response along any axis-aligned edge of the object, instead of a single strong peak response. This weak response has two issues: first, the weaker response might be below our peak selection threshold τ_p, and we will miss the extreme point entirely; second, even if we detect the keypoint, its score will be lower than that of a slightly rotated object with a strong peak response.
We use edge aggregation to address this issue. For each extreme point, extracted as a local maximum, we aggregate its score in either the vertical direction, for left and right extreme points, or the horizontal direction, for top and bottom keypoints. We aggregate all monotonically decreasing scores, stop the aggregation at a local minimum along the aggregation direction, and add the aggregated (weighted) sum to the score of the peak itself.
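The sketch below aggregates the monotonically decreasing responses on both sides of a peak along a horizontal slice (the case of top/bottom keypoints); vertical aggregation for left/right points is symmetric. The aggregation weight `lam` is a hypothetical parameter chosen for illustration, not a value taken from the paper.

```python
def edge_aggregate_score(row, px, lam=0.1):
    """Boost the peak score at column px by the monotonically decreasing neighborhood response.

    row: 1-D slice of the heatmap through the peak, e.g. heatmap[y_peak, :] for a
         top or bottom keypoint located at (x_peak, y_peak).
    """
    total = 0.0
    for step in (-1, +1):                               # walk left, then right, from the peak
        prev, i = row[px], px + step
        while 0 <= i < len(row) and row[i] <= prev:     # stop once past the local minimum
            total += row[i]
            prev, i = row[i], i + step
    return row[px] + lam * total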
Extreme Instance Segmentation
Extreme points carry considerably more information about an object than a simple bounding box, with at least twice as many annotated values (8 vs. 4). We propose a simple method to approximate the object mask using extreme points, by creating an octagon whose edges are centered on the extreme points. Specifically, for each extreme point, we extend it in both directions along its corresponding edge into a segment of 1/4 of the entire edge length. The segment is truncated when it meets a corner. We then connect the end points of the four segments to form the octagon. See Figure 1 for an example.
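To make the construction explicit, the sketch below extends each extreme point by a quarter of its edge length on each side (one reading of the description above), truncates the segments at the box corners, and returns the eight octagon vertices in order. It is our own illustration rather than the released code.

```python
def octagon_from_extreme_points(top, left, bottom, right):
    """Return 8 octagon vertices (clockwise) built around the four extreme points.

    Each extreme point is extended along its box edge by a quarter of that edge's
    length in both directions (an interpretation of the construction in the text),
    the segments are truncated at the box corners, and their end points connected.
    """
    xl, yt, xr, yb = left[0], top[1], right[0], bottom[1]
    w, h = xr - xl, yb - yt
    clip = lambda v, lo, hi: max(lo, min(hi, v))

    # Segment on the top edge, centered at the top extreme point.
    t0, t1 = clip(top[0] - w / 4, xl, xr), clip(top[0] + w / 4, xl, xr)
    # Segment on the bottom edge.
    b0, b1 = clip(bottom[0] - w / 4, xl, xr), clip(bottom[0] + w / 4, xl, xr)
    # Segments on the left and right edges.
    l0, l1 = clip(left[1] - h / 4, yt, yb), clip(left[1] + h / 4, yt, yb)
    r0, r1 = clip(right[1] - h / 4, yt, yb), clip(right[1] + h / 4, yt, yb)

    return [(t0, yt), (t1, yt),        # top segment, left to right
            (xr, r0), (xr, r1),        # right segment, top to bottom
            (b1, yb), (b0, yb),        # bottom segment, right to left
            (xl, l1), (xl, l0)]        # left segment, bottom to top
```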
To further refine the bounding box segmentation, we use Deep Extreme Cut (DEXTR) [29], a deep network trained to convert manually provided extreme points into an instance segmentation mask. In this work, we simply replace the manual input of DEXTR [29] with our extreme point predictions, performing a 2-stage instance segmentation. Specifically, for each of our predicted bounding boxes, we crop the bounding box region, render a Gaussian map at our predicted extreme points, and then feed the concatenated input to the pre-trained DEXTR model. DEXTR [29] is class-agnostic, thus we directly use the detected class and score of ExtremeNet. No further post-processing is used.
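The hand-off to DEXTR can be sketched as below. The `dextr_model` callable, the Gaussian radius, and the absence of resizing/padding are placeholders for whatever the pre-trained DEXTR setup expects; they are assumptions made for illustration, not values from this paper.

```python
import numpy as np

def gaussian_channel(shape, points, sigma=10.0):
    """Render one channel with a Gaussian bump at each (x, y) extreme point."""
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    heat = np.zeros((H, W), dtype=np.float32)
    for x, y in points:
        heat = np.maximum(heat, np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2)))
    return heat

def segment_with_dextr(image, box, extreme_points, dextr_model):
    """Crop the detected box, add the extreme-point channel, and query a pre-trained DEXTR model."""
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    crop = image[y1:y2, x1:x2]                                   # H x W x 3 crop around the detection
    pts = [(x - x1, y - y1) for x, y in extreme_points]          # shift points into crop coordinates
    heat = gaussian_channel(crop.shape[:2], pts)
    inp = np.dstack([crop, heat[..., None]])                     # 4-channel input: image + extreme-point map
    return dextr_model(inp)                                      # class-agnostic foreground mask
```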
Experiments
We evaluate our method on the popular MS COCO dataset [26]. COCO contains rich bounding box and instance segmentation annotations for 80 categories. We train on the train2017 split, which contains 118k images and 860k annotated objects. We perform all ablation studies on the val2017 split, with 5k images and 36k objects, and compare to prior work on the test-dev split, which contains 20k images. The main evaluation metric is average precision over a dense set of fixed recall thresholds. We report average precision at IoU threshold 0.5 (AP_50), 0.75 (AP_75), and averaged over all thresholds between 0.5 and 1 (AP). We also report AP for small, medium, and large objects (AP_S, AP_M, AP_L). The test evaluation is done on the official evaluation server. Qualitative results are shown in Table 4; more can be found in the supplementary material.
Extreme point annotations
There are no direct extreme point annotations in COCO [26]. However, there are complete annotations for object segmentation masks. We thus find extreme points as extrema of the polygonal mask annotations. In cases where an edge is parallel to an axis, or within a 3° angle of it, we place the extreme point at the center of the edge. Although our training data is derived from the more expensive segmentation annotations, the extreme point data itself is 4× cheaper to collect than a standard bounding box [33].
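A simple way to derive such labels from COCO polygons is sketched below. This is our own helper for illustration: it only takes the coordinate-wise extrema of the polygon vertices and omits the refinement of snapping to the midpoint of near-axis-parallel edges described above.

```python
import numpy as np

def extreme_points_from_polygon(poly):
    """poly: (N, 2) array of (x, y) polygon vertices of one instance mask.

    Returns the top, left, bottom, right extreme points as vertex coordinates.
    The 3-degree edge-centering rule from the text is not implemented here.
    """
    poly = np.asarray(poly, dtype=np.float32)
    top    = poly[poly[:, 1].argmin()]   # smallest y
    bottom = poly[poly[:, 1].argmax()]   # largest y
    left   = poly[poly[:, 0].argmin()]   # smallest x
    right  = poly[poly[:, 0].argmax()]   # largest x
    return top, left, bottom, right
```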
Training details
Our implementation is based on the public implementation of CornerNet [22]. We strictly follow CornerNet's hyper-parameters: we set the input resolution to 511 × 511 and the output resolution to 128 × 128. Data augmentation consists of flipping, random scaling between 0.6 and 1.3, random cropping, and random color jittering. The network is optimized with Adam [21] with a learning rate of 2.5e-4. CornerNet [22] was originally trained on 10 GPUs for 500k iterations, an equivalent of over 140 GPU days. Due to limited GPU resources, we fine-tune our network from a pre-trained CornerNet model, with randomly initialized head layers, on 5 GPUs for 250k iterations with a batch size of 24. The learning rate is dropped by 10× at 200k iterations.
Testing details
For each input image, our network produces four C-channel heatmaps for extreme points, one C-channel heatmap for center points, and four 2-channel offset maps. We apply edge aggregation (Section 4.3) to each extreme point heatmap, and multiply the center heatmap by 2 to correct for the overall scale change. We then apply the center grouping algorithm (Section 4.1) to the heatmaps. At most 40 top points are extracted by ExtractPeak to keep the enumeration efficient. The predicted bounding box coordinates are refined by adding the offsets at the corresponding locations of the offset maps.
Following CornerNet [22], we keep the original image resolution instead of resizing it to a fixed size. We use flip augmentation for testing. In our main comparison, we use additional 5× multi-scale (0.5, 0.75, 1, 1.25, 1.5) augmentation. Finally, Soft-NMS [1] filters all augmented detection results. Testing on one image takes 322 ms (3.1 FPS), with 168 ms on network forwarding, 130 ms on grouping, and the rest on image loading and post-processing (NMS).
Ablation studies
Center Grouping vs. Associative Embedding Our ExtremeNet can also be trained with an associative embedding [30] instead of center grouping: we replace the center map with a four-channel associative embedding feature map trained with a hinge loss [22]. Table 1 shows the result. We observe a 2.1% AP drop when using the associative embedding. While associative embeddings work well for human pose estimation and CornerNet, our extreme points lie on the very sides of objects. Learning the identity and appearance of an entire object from the vantage point of its extreme points might simply be too hard. While it might work well for small objects, where the entire object easily fits into the effective receptive field of a keypoint, it fails for medium and large objects, as shown in Table 1. Furthermore, extreme points often lie at the intersection between overlapping objects, which further confuses the identity feature. Our geometric grouping method gracefully deals with these issues, as it only needs to reason about local appearance, not object identity.
Edge aggregation Edge aggregation (Section 4.3) gives a decent AP improvement of 0.7%. It proves more effective for larger objects, which are more likely to have long axis-aligned edges without a single well-defined extreme point. It effectively enhances the predicted heatmap in a simple post-processing step at minor additional cost.
Ghost box suppression Our simple ghost bounding box suppression (Section 4.2) yields a 0.3% AP improvement. This suggests that ghost boxes are not a significant practical issue on MS COCO. A more sophisticated false-positive removal algorithm, e.g., learned NMS [18], might yield a slightly better result.
Error Analysis To better understand where the error comes from and how well each of our components is trained, we provide an error analysis by replacing each output component with its ground truth. Table 1 shows the result. A ground-truth center heatmap alone does not increase AP much. This indicates that our center heatmap is trained quite well, and shows that the implicit object center is learnable. Replacing the extreme point heatmap with its ground truth gives a 16.3% AP improvement, with nearly twice the improvement for small-object AP. This indicates that keypoints of small objects are still hard to learn: the keypoint extractor struggles both with exact localization and with outright missing objects. When replacing both the extreme point heatmap and the center heatmap, the result rises to 79.8%, much higher than replacing only one of them. This is because our center grouping is very strict about keypoint locations, and high performance requires improving both.
Table 2: State-of-the-art comparison on COCO test-dev. SS/MS are short for single-scale/multi-scale testing, respectively. It shows that our ExtremeNet is on-par with state-of-the-art region-based object detectors.
State-of-the-art comparisons
Table 2 compares ExtremeNet to other state-of-the-art methods on COCO test-dev. Our model with multi-scale testing achieves an AP of 43.2, outperforming all reported one-stage object detectors and on-par with popular two-stage detectors. Notably, it performs 1.1% higher than CornerNet, which shows the advantage of detecting extreme and center points over detecting corners with associative features. In the single-scale setting, our performance is 0.4% AP below CornerNet [22]. However, our method has higher AP for small and medium objects than CornerNet, which is known to be more challenging. For larger objects our center response map might not be accurate enough to perform well, as a shift of a few pixels might make the difference between a detection and a false negative. Further, note that we used half the number of GPUs to train our model.
Instance Segmentation
Finally, we compare our instance segmentation results, with and without DEXTR [29], to other baselines. Table 3 shows the results. As a dummy baseline, we directly assign all pixels inside the rectangular bounding box as the segmentation mask. The result on our best model (with 43.3% bounding box AP) is 12.1% Mask AP. The simple octagon mask (Section 4.4) based on our predicted extreme points gets a Mask AP of 18.9%, much better than the bounding box baseline. This shows that the simple octagon mask gives a relatively reasonable object mask without additional cost. Note that directly using the quadrangle of the four extreme points yields a too-small mask, with a lower IoU.
Table 4: Qualitative results on COCO val2017. First and second columns: our predicted (combined four-direction) extreme point heatmaps and center heatmaps, respectively, overlaid on the input image; heatmaps of different categories are shown in different colors. Third column: our predicted bounding boxes and the octagon masks formed by the extreme points. Fourth column: the masks resulting from feeding our extreme point predictions to DEXTR [29].
When combined with DEXTR [29], our method achieves a Mask AP of 34.6% on COCO val2017. To put this result in context, the state-of-the-art Mask RCNN [15] gets a Mask AP of 37.5% with a ResNeXt-101-FPN [24,50] backbone and 34.0% AP with Res50-FPN. Considering that our model has not been trained on COCO segmentation annotations, or on any class-specific segmentations at all, our result, which is on-par with Res50-FPN [17] and 2.9% AP below ResNeXt-101-FPN, is very competitive.
Conclusion
In conclusion, we present a novel object detection framework based on bottom-up extreme point estimation. Our framework extracts four extreme points and a center point per category and groups them in a purely geometric manner. The presented framework yields state-of-the-art detection results and produces competitive instance segmentation results on MS COCO, without seeing any COCO training instance segmentations. | 3,844
1901.08043 | 2914868659 | With the advent of deep learning, object detection drifted from a bottom-up to a top-down recognition problem. State of the art algorithms enumerate a near-exhaustive list of object locations and classify each into: object or not. In this paper, we show that bottom-up approaches still perform competitively. We detect four extreme points (top-most, left-most, bottom-most, right-most) and one center point of objects using a standard keypoint estimation network. We group the five keypoints into a bounding box if they are geometrically aligned. Object detection is then a purely appearance-based keypoint estimation problem, without region classification or implicit feature learning. The proposed method performs on-par with the state-of-the-art region based detection methods, with a bounding box AP of 43.2 on COCO test-dev. In addition, our estimated extreme points directly span a coarse octagonal mask, with a COCO Mask AP of 18.9 , much better than the Mask AP of vanilla bounding boxes. Extreme point guided segmentation further improves this to 34.6 Mask AP. | As a bottom-up object detection method, our idea of grouping center and extreme points is related to the Deformable Part Model @cite_13 . Our center point detector functions similarly to the root filter in DPM @cite_13 , and our four extreme points can be considered a universal part decomposition shared across all categories. Instead of learning the part configuration, our predicted center and four extreme points are related by a fixed geometric structure. In addition, we use a state-of-the-art keypoint detection network, instead of low-level image filters, for part detection. | {
"abstract": [
"We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function."
],
"cite_N": [
"@cite_13"
],
"mid": [
"2168356304"
]
}
1901.08043 | 2914868659 | With the advent of deep learning, object detection drifted from a bottom-up to a top-down recognition problem. State of the art algorithms enumerate a near-exhaustive list of object locations and classify each into: object or not. In this paper, we show that bottom-up approaches still perform competitively. We detect four extreme points (top-most, left-most, bottom-most, right-most) and one center point of objects using a standard keypoint estimation network. We group the five keypoints into a bounding box if they are geometrically aligned. Object detection is then a purely appearance-based keypoint estimation problem, without region classification or implicit feature learning. The proposed method performs on-par with the state-of-the-art region based detection methods, with a bounding box AP of 43.2 on COCO test-dev. In addition, our estimated extreme points directly span a coarse octagonal mask, with a COCO Mask AP of 18.9 , much better than the Mask AP of vanilla bounding boxes. Extreme point guided segmentation further improves this to 34.6 Mask AP. | Determining which keypoints belong to the same person is an important component of bottom-up multi-person pose estimation. There are multiple solutions: Newell et al. @cite_1 propose to learn an associative feature for each keypoint, trained using an embedding loss. Cao et al. @cite_30 learn an affinity field that resembles the edge between connected keypoints. Papandreou et al. @cite_2 learn the displacement to the parent joint on the human skeleton tree as a 2-d feature for each keypoint. Nie et al. @cite_50 also learn a per-keypoint feature, the offset with respect to the object center. | {
"abstract": [
"We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image. The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency.",
"We introduce associative embedding, a novel method for supervising convolutional neural networks for the task of detection and grouping. A number of computer vision problems can be framed in this manner including multi-person pose estimation, instance segmentation, and multi-object tracking. Usually the grouping of detections is achieved with multi-stage pipelines, instead we propose an approach that teaches a network to simultaneously output detections and group assignments. This technique can be easily integrated into any state-of-the-art network architecture that produces pixel-wise predictions. We show how to apply this method to multi-person pose estimation and report state-of-the-art performance on the MPII and MS-COCO datasets.",
"This paper proposes a novel Pose Partition Network (PPN) to address the challenging multi-person pose estimation problem. The proposed PPN is favorably featured by low complexity and high accuracy of joint detection and partition. In particular, PPN performs dense regressions from global joint candidates within a specific embedding space, which is parameterized by centroids of persons, to efficiently generate robust person detection and joint partition. Then, PPN infers body joint configurations through conducting graph partition for each person detection locally, utilizing reliable global affinity cues. In this way, PPN reduces computation complexity and improves multi-person pose estimation significantly. We implement PPN with the Hourglass architecture as the backbone network to simultaneously learn joint detector and dense regressor. Extensive experiments on benchmarks MPII Human Pose Multi-Person, extended PASCAL-Person-Part, and WAF show the efficiency of PPN with new state-of-the-art performance.",
"We present a box-free bottom-up approach for the tasks of pose estimation and instance segmentation of people in multi-person images using an efficient single-shot model. The proposed PersonLab model tackles both semantic-level reasoning and object-part associations using part-based modeling. Our model employs a convolutional network which learns to detect individual keypoints and predict their relative displacements, allowing us to group keypoints into person pose instances. Further, we propose a part-induced geometric embedding descriptor which allows us to associate semantic person pixels with their corresponding person instance, delivering instance-level person segmentations. Our system is based on a fully-convolutional architecture and allows for efficient inference, with runtime essentially independent of the number of people present in the scene. Trained on COCO data alone, our system achieves COCO test-dev keypoint average precision of 0.665 using single-scale inference and 0.687 using multi-scale inference, significantly outperforming all previous bottom-up pose estimation systems. We are also the first bottom-up method to report competitive results for the person class in the COCO instance segmentation task, achieving a person category average precision of 0.417."
],
"cite_N": [
"@cite_30",
"@cite_1",
"@cite_50",
"@cite_2"
],
"mid": [
"2559085405",
"2555751471",
"2895457201",
"2962773068"
]
} | Bottom-up Object Detection by Grouping Extreme and Center Points | Top-down approaches have dominated object detection for years. Prevalent detectors convert object detection into rectangular region classification, by either explicitly cropping the region [12] or region feature [11,41] (two-stage object detection) or implicitly setting fix-sized anchors for region proxies [25,28,38] (one-stage object detection). However, top-down detection is not without limits. A rectangular bounding box is not a natural object representation. Most objects are not axis-aligned boxes, and fitting them inside a box includes many distracting background pixels ( Figure. 1). In addition, top-down object detectors enumerate a large number of possible box locations without truly understanding the compositional visual grammars [9,13] of objects themselves. This is computationally expensive. Finally, boxes are a bad proxy for the object themselves. They convey little detailed object information, e.g., object shape and pose. Figure 1: We propose to detect objects by finding their extreme points. They directly form a bounding box , but also give a much tighter octagonal approximation of the object.
In this paper, we propose ExtremeNet, a bottom-up object detection framework that detects four extreme points (top-most, left-most, bottom-most, right-most) of an object. We use a state-of-the-art keypoint estimation framework [3,5,30,31,49] to find extreme points, by predicting four multi-peak heatmaps for each object category. In addition, we use one heatmap per category predicting the object center, as the average of two bounding box edges in both the x and y dimension. We group extreme points into objects with a purely geometry-based approach. We group four extreme points, one from each map, if and only if their geometric center is predicted in the center heatmap with a score higher than a pre-defined threshold. We enumerate all O(n 4 ) combinations of extreme point prediction, and select the valid ones. The number of extreme point prediction n is usually quite small, for COCO [26] n ≤ 40, and a brute force algorithm implemented on GPU is sufficient. Figure 2 shows an overview of the proposed method.
We are not the first to use deep keypoint prediction for object detection. CornerNet [22] predicts two opposing corners of a bounding box. They group corner points into bounding boxes using an associative embedding feature [30]. Our approach differs in two key aspects: key- point definition and grouping. A corner is another form of bounding box, and suffers many of the issues top-down detection suffers from. A corner often lies outside an object, without strong appearance features. Extreme points, on the other hand, lie on objects, are visually distinguishable, and have consistent local appearance features. For example, the top-most point of human is often the head, and the bottommost point of a car or airplane will be a wheel. This makes the extreme point detection easier. The second difference to CornerNet is the geometric grouping. Our detection framework is fully appearance-based, without any implicit feature learning. In our experiments, the appearance-based grouping works significantly better.
Our idea is motivated by Papadopoulos et al. [33], who proposed to annotate bounding boxes by clicking the four extreme points. This annotation is roughly four times faster to collect and provides richer information than bounding boxes. Extreme points also have a close connection to object masks. Directly connecting the inflated extreme points offers a more fine-grained object mask than the bounding box. In our experiment, we show that fitting a simple octagon to the extreme points yields a good object mask estimation. Our method can be further combined with Deep Extreme Cut (DEXTR) [29], which turns extreme point annotations into a segmentation mask for the indicated object. Directly feeding our extreme point predictions as guidance to DEXTR [29] leads to close to state-of-the-art instance segmentation results.
Our proposed method achieves a bounding box AP of 43.2% on COCO test-dev, out-performing all reported onestage object detectors [22,25,40,52] and on-par with sophisticated two-stage detectors. A Pascal VOC [8,14] pre-trained DEXTR [29] model yields a Mask AP of 34.6%, without using any COCO mask annotations. Code is available at https://github.com/xingyizhou/ ExtremeNet.
Preliminaries
Extreme and center points Let (x (tl) , y (tl) , x (br) , y (br) ) denote the four sides of a bounding box. To annotate a bounding box, a user commonly clicks on the top-left (x (tl) , y (tl) ) and bottom-right (x (br) , y (br) ) corners. As both points regularly lie outside an object, these clicks are often inaccuracy and need to be adjusted a few times. The whole process takes 34.5 seconds on average [44]. Papadopoulos et al. [33] propose to annotate the bounding box by clicking the four extreme points
(x (t) , y (t) ), (x (l) , y (l) ), (x (b) , y (b) ), (x (r) , y (r) ), where the box is (x (l) , y (t) , x (r) , y (b)
). An extreme point is a point (x (a) , y (a) ) such that no other point (x, y) on the object lies further along one of the four cardinal directions a: top, bottom, left, right. Extreme click annotation time is 7.2 seconds on average [33]. The resulting annotation is on-par with the more time-consuming box annotation. Here, we use the extreme click annotations directly and bypass the bounding box. We additionally use the center point of each object as (
x (l) +x (r) 2 , y (t) +y (b) 2 ).
Keypoint detection Keypoint estimation, e.g., human joint estimation [3,5,15,30,49] or chair corner point estimation [36,53], commonly uses a fully convolutional encoderdecoder network to predict a multi-channel heatmap for each type of keypoint (e.g., one heatmap for human head, another heatmap for human wrist). The network is trained in a fully supervised way, with either an L2 loss to a rendered Gaussian map [3,5,30,49] or with a per-pixel logistic regression loss [22,34,35]. State-of-the-art keypoint estimation networks, e.g., 104-layer HourglassNetwork [22,31], are trained in a fully convolutional manner. They regress to a heatmapŶ ∈ (0, 1) H×W of width W and height H for each output channel. The training is guided by a multi-peak Gaussian heatmap Y ∈ (0, 1) H×W , where each keypoint defines the mean of a Gaussian Kernel. The standard deviation is either fixed, or set proportional to the object size [22]. The Gaussian heatmap serves as the regression target in the L2 loss case or as the weight map to reduce penalty near a positive location in the logistic regression case [22].
CornerNet CornerNet [22] uses keypoint estimation with an HourglassNetwork [31] as an object detector. They predict two sets of heatmaps for the opposing corners of the box. In order to balance the positive and negative locations they use a modified focal loss [25] for training:
$$L_{det} = -\frac{1}{N} \sum_{i=1}^{H} \sum_{j=1}^{W} \begin{cases} (1-\hat{Y}_{ij})^{\alpha} \log(\hat{Y}_{ij}) & \text{if } Y_{ij} = 1 \\ (1-Y_{ij})^{\beta} (\hat{Y}_{ij})^{\alpha} \log(1-\hat{Y}_{ij}) & \text{otherwise,} \end{cases} \qquad (1)$$
where α and β are hyper-parameters and fixed to α = 2 and β = 4 during training. N is the number of objects in the image.
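For concreteness, a small NumPy sketch of this modified focal loss follows; the function name and the numerical clamping are our own illustrative choices, not the reference implementation.

```python
import numpy as np

def modified_focal_loss(pred, gt, alpha=2.0, beta=4.0, eps=1e-12):
    """CornerNet-style focal loss over one predicted heatmap.

    pred, gt: arrays of shape (H, W) with values in (0, 1); gt is the
    Gaussian-weighted ground-truth heatmap, where gt == 1 marks exact
    keypoint locations (one annotated peak per object in this channel).
    """
    pred = np.clip(pred, eps, 1.0 - eps)
    pos = (gt == 1.0)
    neg = ~pos

    pos_loss = ((1.0 - pred) ** alpha) * np.log(pred)
    neg_loss = ((1.0 - gt) ** beta) * (pred ** alpha) * np.log(1.0 - pred)

    num_objects = max(pos.sum(), 1)  # plays the role of N in Eq. (1)
    return -(pos_loss[pos].sum() + neg_loss[neg].sum()) / num_objects
```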
For sub-pixel accuracy of extreme points, CornerNet additionally regresses to a category-agnostic keypoint offset $\Delta^{(a)}$ for each corner. This regression recovers part of the information lost in the down-sampling of the hourglass network. The offset map is trained with a smooth L1 loss [11] $SL_1$ on ground truth extreme point locations:
$$L_{off} = \frac{1}{N} \sum_{k=1}^{N} SL_1\left(\Delta^{(a)}, \vec{x}/s - \lfloor \vec{x}/s \rfloor\right), \qquad (2)$$
where s is the down-sampling factor (s = 4 for Hourglass-Net), x is the coordinate of the keypoint. CornerNet then groups opposing corners into detection using an associative embedding [30]. Our extreme point estimation uses the CornerNet architecture and loss, but not the associative embedding.
Deep Extreme Cut Deep Extreme Cut (DEXTR) [29] is an extreme point guided image segmentation method. It takes four extreme points and the cropped image region surrounding the bounding box spanned by the extreme points as input. From this it produces a category-agnostic foreground segmentation of the indicated object using the semantic segmentation network of Chen et al. [4]. The network learns to generate the segmentation mask that matches the input extreme point.
ExtremeNet for Object detection
ExtremeNet uses an HourglassNetwork [31] to detect five keypoints per class (four extreme points, and one center). We follow the training setup, loss and offset prediction of CornerNet [22]. The offset prediction is categoryagnostic, but extreme-point specific. There is no offset prediction for the center map. The output of our network is thus 5 × C heatmaps and 4 × 2 offset maps, where C is the number of classes (C = 80 for MS COCO [26]). Figure 3 shows an overview. Once the extreme points are extracted, we group them into detections in a purely geometric manner.
Algorithm 1: Center Grouping
Input : Center and extreme point heatmaps of an image for one category: $\hat{Y}^{(c)}, \hat{Y}^{(t)}, \hat{Y}^{(l)}, \hat{Y}^{(b)}, \hat{Y}^{(r)} \in (0, 1)^{H \times W}$; center and peak selection thresholds: $\tau_c$ and $\tau_p$
Output: Bounding boxes with scores
// Convert heatmaps into coordinates of keypoints. T, L, B, R are sets of points.
T ← ExtractPeak($\hat{Y}^{(t)}$, $\tau_p$); L ← ExtractPeak($\hat{Y}^{(l)}$, $\tau_p$); B ← ExtractPeak($\hat{Y}^{(b)}$, $\tau_p$); R ← ExtractPeak($\hat{Y}^{(r)}$, $\tau_p$)
for t ∈ T, l ∈ L, b ∈ B, r ∈ R do
    // If the geometric center of the four points has a high response in the center heatmap, commit a detection.
    c ← $((l_x + r_x)/2, (t_y + b_y)/2)$
    if $\hat{Y}^{(c)}_{c_x, c_y} \geq \tau_c$ then add the bounding box $(l_x, t_y, r_x, b_y)$ with its score
end
Center Grouping
Extreme points lie on different sides of an object. This complicates grouping. For example, an associative embedding [30] might not have a global enough view to group these keypoints. Here, we take a different approach that exploits the spread out nature of extreme points.
The input to our grouping algorithm is five heatmaps per class: one center heatmap $\hat{Y}^{(c)} \in (0, 1)^{H \times W}$ and four extreme heatmaps $\hat{Y}^{(t)}, \hat{Y}^{(l)}, \hat{Y}^{(b)}, \hat{Y}^{(r)} \in (0, 1)^{H \times W}$ for the top, left, bottom, and right, respectively. Given a heatmap, we extract the corresponding keypoints by detecting all peaks. A peak is any pixel location with a value greater than $\tau_p$ that is locally maximal in a 3 × 3 window surrounding the pixel. We name this procedure ExtractPeak.
Given four extreme points $t, b, r, l$ extracted from heatmaps $\hat{Y}^{(t)}, \hat{Y}^{(l)}, \hat{Y}^{(b)}, \hat{Y}^{(r)}$, we compute their geometric center $c = \left(\frac{l_x + r_x}{2}, \frac{t_y + b_y}{2}\right)$. If this center is predicted with a high response in the center map $\hat{Y}^{(c)}$, i.e., $\hat{Y}^{(c)}_{c_x, c_y} \geq \tau_c$ for a threshold $\tau_c$, we commit the extreme points as a valid detection. We then enumerate over all quadruples of keypoints $t, b, r, l$ in a brute force manner. We extract detections for each class independently. Algorithm 1 summarizes this procedure. We set $\tau_p = 0.1$ and $\tau_c = 0.1$ in all experiments.
This brute force grouping algorithm has a runtime of $O(n^4)$, where $n$ is the number of extracted extreme points for each cardinal direction. The supplementary material presents an $O(n^2)$ algorithm that is faster on paper. However, it is harder to accelerate on a GPU and slower in practice for the MS COCO dataset, where $n \leq 40$.
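The following Python sketch illustrates ExtractPeak and the brute-force center grouping described above. The function names, the assumption that heatmaps are 2-D NumPy arrays, and the choice of box score (mean of the five keypoint responses) are illustrative assumptions, not the authors' released code.

```python
def extract_peak(heatmap, tau_p=0.1):
    """Return [(x, y, score)] for pixels > tau_p that are 3x3 local maxima."""
    H, W = heatmap.shape
    peaks = []
    for y in range(H):
        for x in range(W):
            v = heatmap[y, x]
            if v <= tau_p:
                continue
            window = heatmap[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            if v >= window.max():
                peaks.append((x, y, v))
    return peaks

def center_grouping(center, top, left, bottom, right, tau_c=0.1, tau_p=0.1):
    """Brute-force O(n^4) grouping of extreme points via the center heatmap."""
    T, L = extract_peak(top, tau_p), extract_peak(left, tau_p)
    B, R = extract_peak(bottom, tau_p), extract_peak(right, tau_p)
    detections = []
    for tx, ty, ts in T:
        for lx, ly, ls in L:
            for bx, by, bs in B:
                for rx, ry, rs in R:
                    cx, cy = int((lx + rx) / 2), int((ty + by) / 2)
                    if center[cy, cx] >= tau_c:
                        score = (ts + ls + bs + rs + center[cy, cx]) / 5.0  # assumed scoring
                        detections.append((lx, ty, rx, by, score))
    return detections
```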
Ghost box suppression
Center grouping may give a high-confidence false-positive detection for three equally spaced collinear objects of the same size. The center object has two choices here: commit to the correct small box, or predict a much larger box containing the extreme points of its neighbors. We call these false-positive detections "ghost" boxes. As we'll show in our experiments, these ghost boxes are infrequent, but nonetheless a consistent error mode of our grouping.
We present a simple post-processing step to remove ghost boxes. By definition, a ghost box contains many other smaller detections. To discourage ghost boxes, we use a form of soft non-maxima suppression [1]. If the sum of scores of all boxes contained in a certain bounding box exceeds 3 times its own score, we divide its score by 2. This non-maxima suppression is similar to the standard overlap-based non-maxima suppression, but penalizes potential ghost boxes instead of multiple overlapping boxes.
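A minimal sketch of this suppression rule is shown below, assuming a detection is a tuple (x1, y1, x2, y2, score) and that "contained" means fully enclosed by the outer box.

```python
def suppress_ghost_boxes(detections):
    """Halve the score of boxes whose contained boxes' scores sum to > 3x their own."""
    def contains(outer, inner):
        return (outer[0] <= inner[0] and outer[1] <= inner[1] and
                outer[2] >= inner[2] and outer[3] >= inner[3])

    out = []
    for i, box in enumerate(detections):
        contained_score = sum(
            other[4] for j, other in enumerate(detections)
            if j != i and contains(box, other)
        )
        score = box[4] / 2.0 if contained_score > 3.0 * box[4] else box[4]
        out.append((box[0], box[1], box[2], box[3], score))
    return out
```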
Edge aggregation
Extreme points are not always uniquely defined. If vertical or horizontal edges of an object form the extreme points (e.g., the top of a car) any point along that edge might be considered an extreme point. As a result, our network produces a weak response along any aligned edges of the object, instead of a single strong peak response. This weak response has two issues: First, the weaker response might be below our peak selection threshold τ p , and we will miss the extreme point entirely. Second, even if we detect the keypoint, its score will be lower than a slightly rotated object with a strong peak response.
We use edge aggregation to address this issue. For each extreme point, extracted as a local maximum, we aggregate its score in either the vertical direction, for left and right extreme points, or the horizontal direction, for top and bottom keypoints. We aggregate all monotonically decreasing scores, and stop the aggregation at a local minimum along the aggregation direction. The aggregated sum is then added to the score of the extreme point.
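A simplified sketch of the aggregation for a top or bottom keypoint (horizontal direction) is given below; the aggregation weight is an assumed illustrative parameter, and left or right keypoints would use the same walk along the column instead of the row.

```python
def aggregate_edge_score(heatmap, x, y, weight=0.1):
    """Aggregate monotonically decreasing responses left and right of (x, y).

    Walks outwards along the row while the response keeps decreasing
    (stopping at a local minimum) and adds the weighted sum of the
    visited responses to the peak score. heatmap is a 2-D NumPy array.
    """
    H, W = heatmap.shape
    total = 0.0
    for step in (-1, 1):  # walk left, then right
        prev = heatmap[y, x]
        cx = x + step
        while 0 <= cx < W and heatmap[y, cx] <= prev:
            total += heatmap[y, cx]
            prev = heatmap[y, cx]
            cx += step
    return heatmap[y, x] + weight * total
```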
Extreme Instance Segmentation
Extreme points carry considerably more information about an object than a simple bounding box, with at least twice as many annotated values (8 vs. 4). We propose a simple method to approximate the object mask using extreme points by creating an octagon whose edges are centered on the extreme points. Specifically, for each extreme point, we extend it in both directions on its corresponding edge to a segment of 1/4 of the entire edge length. The segment is truncated when it meets a corner. We then connect the end points of the four segments to form the octagon. See Figure 1 for an example.
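A possible construction of the octagon vertices is sketched below. The text leaves the 1/4 extension slightly ambiguous; this sketch assumes the segment extends by 1/4 of the edge length in each direction and clips at the box corners.

```python
def octagon_from_extreme_points(t, l, b, r):
    """Approximate an object mask as an octagon from four extreme points.

    t, l, b, r are (x, y) tuples. Each extreme point is extended along its
    bounding-box edge (clipped at the corners), and the segment endpoints
    are connected clockwise starting from the top edge.
    """
    x1, y1, x2, y2 = l[0], t[1], r[0], b[1]  # enclosing box
    w, h = x2 - x1, y2 - y1

    def clip(v, lo, hi):
        return max(lo, min(hi, v))

    return [
        (clip(t[0] - w / 4, x1, x2), y1), (clip(t[0] + w / 4, x1, x2), y1),
        (x2, clip(r[1] - h / 4, y1, y2)), (x2, clip(r[1] + h / 4, y1, y2)),
        (clip(b[0] + w / 4, x1, x2), y2), (clip(b[0] - w / 4, x1, x2), y2),
        (x1, clip(l[1] + h / 4, y1, y2)), (x1, clip(l[1] - h / 4, y1, y2)),
    ]
```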
To further refine the bounding box segmentation, we use Deep Extreme Cut (DEXTR) [29], a deep network trained to convert the manually provided extreme points into instance segmentation mask. In this work, we simply replace the manual input of DEXTR [29] with our extreme point prediction, to perform a 2-stage instance segmentation. Specifically, for each of our predicted bounding box , we crop the bounding box region, render a Gaussian map with our predicted extreme point, and then feed the concatenated image to the pre-trained DEXTR model. DEXTR [29] is class-agnostic, thus we directly use the detected class and score of ExtremeNet. No further post-processing is used.
Experiments
We evaluate our method on the popular MS COCO dataset [26]. COCO contains rich bounding box and instance segmentation annotations for 80 categories. We train on the train2017 split, which contains 118k images and 860k annotated objects. We perform all ablation studies on the val2017 split, with 5k images and 36k objects, and compare to prior work on the test-dev split, which contains 20k images. The main evaluation metric is average precision over a dense set of fixed recall thresholds. We show average precision at IoU threshold 0.5 (AP50), 0.75 (AP75), and averaged over all thresholds between 0.5 and 1 (AP). We also report AP for small, medium and large objects (APS, APM, APL). The test evaluation is done on the official evaluation server. Qualitative results are shown in Table 4, and more can be found in the supplementary material.
Extreme point annotations
There are no direct extreme point annotations in COCO [26]. However, there are complete annotations for object segmentation masks. We thus find extreme points as extrema of the polygonal mask annotations. In cases where an edge is parallel to an axis or within a 3° angle of it, we place the extreme point at the center of the edge. Although our training data is derived from the more expensive segmentation annotation, the extreme point data itself is 4× cheaper to collect than the standard bounding box [33].
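A minimal sketch of deriving extreme points from a COCO-style polygon annotation follows; the special handling of near-axis-parallel edges (placing the point at the edge center) is omitted for brevity.

```python
import numpy as np

def extreme_points_from_polygon(polygon):
    """Derive (top, left, bottom, right) extreme points from a polygon mask.

    polygon: array-like of shape (K, 2) with (x, y) vertices. This simple
    version picks the vertex that is furthest along each cardinal direction.
    """
    pts = np.asarray(polygon, dtype=np.float32)
    top = tuple(pts[np.argmin(pts[:, 1])])
    bottom = tuple(pts[np.argmax(pts[:, 1])])
    left = tuple(pts[np.argmin(pts[:, 0])])
    right = tuple(pts[np.argmax(pts[:, 0])])
    return top, left, bottom, right
```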
Training details
Our implementation is based on the public implementation of CornerNet [22]. We strictly follow CornerNet's hyper-parameters: we set the input resolution to 511 × 511 and the output resolution to 128 × 128. Data augmentation consists of flipping, random scaling between 0.6 and 1.3, random cropping, and random color jittering. The network is optimized with Adam [21] with a learning rate of 2.5e-4. CornerNet [22] was originally trained on 10 GPUs for 500k iterations, an equivalent of over 140 GPU days. Due to limited GPU resources, we fine-tune our network from a pre-trained CornerNet model with randomly initialized head layers on 5 GPUs for 250k iterations with a batch size of 24. The learning rate is dropped 10× at 200k iterations.
Testing details
For each input image, our network produces four C-channel heatmaps for extreme points, one C-channel heatmap for center points, and four 2-channel offset maps. We apply edge aggregation (Section 4.3) to each extreme point heatmap, and multiply the center heatmap by 2 to correct for the overall scale change. We then apply the center grouping algorithm (Section 4.1) to the heatmaps. At most 40 top points are extracted in ExtractPeak to keep the enumeration efficient. The predicted bounding box coordinates are refined by adding the offsets at the corresponding locations of the offset maps.
Following CornerNet [22], we keep the original image resolution instead of resizing it to a fixed size. We use flip augmentation for testing. In our main comparison, we use additional 5× multi-scale (0.5, 0.75, 1, 1.25, 1.5) augmentation. Finally, Soft-NMS [1] filters all augmented detection results. Testing on one image takes 322ms (3.1FPS), with 168ms on network forwarding, 130ms on grouping and rest time on image loading and post processing (NMS).
Ablation studies
Center Grouping vs. Associative Embedding Our ExtremeNet can also be trained with an Associative Embedding [30], by replacing the center map with a four-channel associative embedding feature map trained with a Hinge Loss [22]. Table 1 shows the result. We observe a 2.1% AP drop when using the associative embedding. While associative embeddings work well for human pose estimation and CornerNet, our extreme points lie on the very sides of objects. Learning the identity and appearance of entire objects from the vantage point of their extreme points might simply be too hard. While it might work well for small objects, where the entire object easily fits into the effective receptive field of a keypoint, it fails for medium and large objects, as shown in Table 1. Furthermore, extreme points often lie at the intersection between overlapping objects, which further confuses the identity feature. Our geometric grouping method gracefully deals with these issues, as it only needs to reason about appearance.
Edge aggregation Edge aggregation (Section 4.3) gives a decent AP improvement of 0.7%. It proves more effective for larger objects, which are more likely to have long axis-aligned edges without a single well-defined extreme point. It effectively enhances the predicted heatmap in a simple post-processing step with minor additional cost.
Ghost box suppression Our simple ghost bounding box suppression (Section 4.2) yields 0.3% AP improvement. This suggests that ghost boxes are not a significant practical issue in MS COCO. A more sophisticated false-positive removal algorithm, e.g., learn NMS [18], might yield a slightly better result.
Error Analysis To better understand where the error comes from and how well each of our components is trained, we provide an error analysis by replacing each output component with its ground truth. Table 1 shows the result. A ground truth center heatmap alone does not increase AP much. This indicates that our center heatmap is trained quite well, and shows that the implicit object center is learnable. Replacing the extreme point heatmap with ground truth gives a 16.3% AP improvement, with nearly twice that improvement for small object AP. This indicates that keypoints of small objects are still hard to learn. The keypoint extractor struggles both with exact localization and with outright missing objects. When replacing both the extreme point heatmap and the center heatmap, the result comes to 79.8%, much higher than replacing only one of them. This is because our center grouping is very strict in the keypoint location, and a high performance requires improving both heatmaps.
Table 2: State-of-the-art comparison on COCO test-dev. SS/MS are short for single-scale/multi-scale testing, respectively. It shows that our ExtremeNet is on-par with state-of-the-art region-based object detectors.
Table 2 compares ExtremeNet to other state-of-the-art methods on COCO test-dev. Our model with multi-scale testing achieves an AP of 43.2, outperforming all reported one-stage object detectors and on-par with popular two-stage detectors. Notably, it performs 1.1% higher than CornerNet, which shows the advantage of detecting extreme and center points over detecting corners with associative features. In the single-scale setting, our performance is 0.4% AP below CornerNet [22]. However, our method has higher AP for small and medium objects than CornerNet, which is known to be more challenging. For larger objects our center response map might not be accurate enough to perform well, as a shift of a few pixels might make the difference between a detection and a false-negative. Further, note that we used half the number of GPUs to train our model.
State-of-the-art comparisons
Instance Segmentation
Finally, we compare our instance segmentation results with and without DEXTR [29] to other baselines. Table 3 shows the results. As a dummy baseline, we directly assign all pixels inside the rectangular bounding box as the segmentation mask. The result on our best model (with 43.3% bounding box AP) is 12.1% Mask AP. The simple octagon mask (Section 4.4) based on our predicted extreme points gets a mask AP of 18.9%, much better than the bounding box baseline. This shows that this simple octagon mask can give a relatively reasonable object mask without additional cost. Note that directly using the quadrangle of the four extreme points yields a too-small mask, with a lower IoU.
Table 4: Qualitative results on COCO val2017. First and second columns: our predicted (combined four) extreme point heatmap and center heatmap, respectively, overlaid on the input image. We show heatmaps of different categories in different colors. Third column: our predicted bounding box and the octagon mask formed by extreme points. Fourth column: resulting masks of feeding our extreme point predictions to DEXTR [29].
When combined with DEXTR [29], our method achieves a mask AP of 34.6% on COCO val2017. To put this result in a context, the state-of-the-art Mask RCNN [15] gets a mask AP of 37.5% with ResNeXt-101-FPN [24,50] back-bone and 34.0% AP with Res50-FPN. Considering the fact that our model has not been trained on the COCO segmentation annotation, or any class specific segmentations at all, our result which is on-par with Res50 [17] and 2.9% AP below ResNeXt-101 is very competitive.
Conclusion
In conclusion, we present a novel object detection framework based on bottom-up extreme points estimation. Our framework extracts four extreme points and groups them in a purely geometric manner. The presented framework yields state-of-the-art detection results and produces competitive instance segmentation results on MSCOCO, without seeing any COCO training instance segmentations. | 3,844 |
1901.08043 | 2914868659 | With the advent of deep learning, object detection drifted from a bottom-up to a top-down recognition problem. State of the art algorithms enumerate a near-exhaustive list of object locations and classify each into: object or not. In this paper, we show that bottom-up approaches still perform competitively. We detect four extreme points (top-most, left-most, bottom-most, right-most) and one center point of objects using a standard keypoint estimation network. We group the five keypoints into a bounding box if they are geometrically aligned. Object detection is then a purely appearance-based keypoint estimation problem, without region classification or implicit feature learning. The proposed method performs on-par with the state-of-the-art region based detection methods, with a bounding box AP of 43.2 on COCO test-dev. In addition, our estimated extreme points directly span a coarse octagonal mask, with a COCO Mask AP of 18.9 , much better than the Mask AP of vanilla bounding boxes. Extreme point guided segmentation further improves this to 34.6 Mask AP. | Prevalent keypoint detection methods work on well-defined semantic keypoints, e.g., human joints. StarMap @cite_49 mixes all types of keypoints using a single heatmap for general keypoint detection. Our extreme and center points are a kind of such general implicit keypoints, but with more explicit geometry property. | {
"abstract": [
"Semantic keypoints provide concise abstractions for a variety of visual understanding tasks. Existing methods define semantic keypoints separately for each category with a fixed number of semantic labels in fixed indices. As a result, this keypoint representation is in-feasible when objects have a varying number of parts, e.g. chairs with varying number of legs. We propose a category-agnostic keypoint representation, which combines a multi-peak heatmap (StarMap) for all the keypoints and their corresponding features as 3D locations in the canonical viewpoint (CanViewFeature) defined for each instance. Our intuition is that the 3D locations of the keypoints in canonical object views contain rich semantic and compositional information. Using our flexible representation, we demonstrate competitive performance in keypoint detection and localization compared to category-specific state-of-the-art methods. Moreover, we show that when augmented with an additional depth channel (DepthMap) to lift the 2D keypoints to 3D, our representation can achieve state-of-the-art results in viewpoint estimation. Finally, we show that our category-agnostic keypoint representation can be generalized to novel categories."
],
"cite_N": [
"@cite_49"
],
"mid": [
"2795096917"
]
} | Bottom-up Object Detection by Grouping Extreme and Center Points | Top-down approaches have dominated object detection for years. Prevalent detectors convert object detection into rectangular region classification, by either explicitly cropping the region [12] or region feature [11,41] (two-stage object detection) or implicitly setting fix-sized anchors for region proxies [25,28,38] (one-stage object detection). However, top-down detection is not without limits. A rectangular bounding box is not a natural object representation. Most objects are not axis-aligned boxes, and fitting them inside a box includes many distracting background pixels ( Figure. 1). In addition, top-down object detectors enumerate a large number of possible box locations without truly understanding the compositional visual grammars [9,13] of objects themselves. This is computationally expensive. Finally, boxes are a bad proxy for the object themselves. They convey little detailed object information, e.g., object shape and pose. Figure 1: We propose to detect objects by finding their extreme points. They directly form a bounding box , but also give a much tighter octagonal approximation of the object.
In this paper, we propose ExtremeNet, a bottom-up object detection framework that detects four extreme points (top-most, left-most, bottom-most, right-most) of an object. We use a state-of-the-art keypoint estimation framework [3,5,30,31,49] to find extreme points, by predicting four multi-peak heatmaps for each object category. In addition, we use one heatmap per category predicting the object center, as the average of two bounding box edges in both the x and y dimension. We group extreme points into objects with a purely geometry-based approach. We group four extreme points, one from each map, if and only if their geometric center is predicted in the center heatmap with a score higher than a pre-defined threshold. We enumerate all O(n 4 ) combinations of extreme point prediction, and select the valid ones. The number of extreme point prediction n is usually quite small, for COCO [26] n ≤ 40, and a brute force algorithm implemented on GPU is sufficient. Figure 2 shows an overview of the proposed method.
We are not the first to use deep keypoint prediction for object detection. CornerNet [22] predicts two opposing corners of a bounding box. They group corner points into bounding boxes using an associative embedding feature [30]. Our approach differs in two key aspects: key- point definition and grouping. A corner is another form of bounding box, and suffers many of the issues top-down detection suffers from. A corner often lies outside an object, without strong appearance features. Extreme points, on the other hand, lie on objects, are visually distinguishable, and have consistent local appearance features. For example, the top-most point of human is often the head, and the bottommost point of a car or airplane will be a wheel. This makes the extreme point detection easier. The second difference to CornerNet is the geometric grouping. Our detection framework is fully appearance-based, without any implicit feature learning. In our experiments, the appearance-based grouping works significantly better.
Our idea is motivated by Papadopoulos et al. [33], who proposed to annotate bounding boxes by clicking the four extreme points. This annotation is roughly four times faster to collect and provides richer information than bounding boxes. Extreme points also have a close connection to object masks. Directly connecting the inflated extreme points offers a more fine-grained object mask than the bounding box. In our experiment, we show that fitting a simple octagon to the extreme points yields a good object mask estimation. Our method can be further combined with Deep Extreme Cut (DEXTR) [29], which turns extreme point annotations into a segmentation mask for the indicated object. Directly feeding our extreme point predictions as guidance to DEXTR [29] leads to close to state-of-the-art instance segmentation results.
Our proposed method achieves a bounding box AP of 43.2% on COCO test-dev, outperforming all reported one-stage object detectors [22,25,40,52] and on-par with sophisticated two-stage detectors. A Pascal VOC [8,14] pre-trained DEXTR [29] model yields a Mask AP of 34.6%, without using any COCO mask annotations. Code is available at https://github.com/xingyizhou/ExtremeNet.
Preliminaries
Extreme and center points Let $(x^{(tl)}, y^{(tl)}, x^{(br)}, y^{(br)})$ denote the four sides of a bounding box. To annotate a bounding box, a user commonly clicks on the top-left $(x^{(tl)}, y^{(tl)})$ and bottom-right $(x^{(br)}, y^{(br)})$ corners. As both points regularly lie outside an object, these clicks are often inaccurate and need to be adjusted a few times. The whole process takes 34.5 seconds on average [44]. Papadopoulos et al. [33] propose to annotate the bounding box by clicking the four extreme points $(x^{(t)}, y^{(t)})$, $(x^{(l)}, y^{(l)})$, $(x^{(b)}, y^{(b)})$, $(x^{(r)}, y^{(r)})$, where the box is $(x^{(l)}, y^{(t)}, x^{(r)}, y^{(b)})$. An extreme point is a point $(x^{(a)}, y^{(a)})$ such that no other point $(x, y)$ on the object lies further along one of the four cardinal directions $a$: top, bottom, left, right. Extreme click annotation time is 7.2 seconds on average [33]. The resulting annotation is on-par with the more time-consuming box annotation. Here, we use the extreme click annotations directly and bypass the bounding box. We additionally use the center point of each object, $\left(\frac{x^{(l)}+x^{(r)}}{2}, \frac{y^{(t)}+y^{(b)}}{2}\right)$.
Keypoint detection Keypoint estimation, e.g., human joint estimation [3,5,15,30,49] or chair corner point estimation [36,53], commonly uses a fully convolutional encoder-decoder network to predict a multi-channel heatmap for each type of keypoint (e.g., one heatmap for the human head, another heatmap for the human wrist). The network is trained in a fully supervised way, with either an L2 loss to a rendered Gaussian map [3,5,30,49] or with a per-pixel logistic regression loss [22,34,35]. State-of-the-art keypoint estimation networks, e.g., the 104-layer HourglassNetwork [22,31], are trained in a fully convolutional manner. They regress to a heatmap $\hat{Y} \in (0, 1)^{H \times W}$ of width $W$ and height $H$ for each output channel. The training is guided by a multi-peak Gaussian heatmap $Y \in (0, 1)^{H \times W}$, where each keypoint defines the mean of a Gaussian kernel. The standard deviation is either fixed, or set proportional to the object size [22]. The Gaussian heatmap serves as the regression target in the L2 loss case or as the weight map to reduce the penalty near a positive location in the logistic regression case [22].
CornerNet CornerNet [22] uses keypoint estimation with an HourglassNetwork [31] as an object detector. They predict two sets of heatmaps for the opposing corners of the box. In order to balance the positive and negative locations they use a modified focal loss [25] for training:
$$L_{det} = -\frac{1}{N} \sum_{i=1}^{H} \sum_{j=1}^{W} \begin{cases} (1-\hat{Y}_{ij})^{\alpha} \log(\hat{Y}_{ij}) & \text{if } Y_{ij} = 1 \\ (1-Y_{ij})^{\beta} (\hat{Y}_{ij})^{\alpha} \log(1-\hat{Y}_{ij}) & \text{otherwise,} \end{cases} \qquad (1)$$
where α and β are hyper-parameters and fixed to α = 2 and β = 4 during training. N is the number of objects in the image.
For sub-pixel accuracy of extreme points, CornerNet additionally regresses to a category-agnostic keypoint offset $\Delta^{(a)}$ for each corner. This regression recovers part of the information lost in the down-sampling of the hourglass network. The offset map is trained with a smooth L1 loss [11] $SL_1$ on ground truth extreme point locations:
$$L_{off} = \frac{1}{N} \sum_{k=1}^{N} SL_1\left(\Delta^{(a)}, \vec{x}/s - \lfloor \vec{x}/s \rfloor\right), \qquad (2)$$
where s is the down-sampling factor (s = 4 for Hourglass-Net), x is the coordinate of the keypoint. CornerNet then groups opposing corners into detection using an associative embedding [30]. Our extreme point estimation uses the CornerNet architecture and loss, but not the associative embedding.
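As an illustration of the regression target in Eq. (2), the sketch below computes the sub-pixel offset for a keypoint coordinate and evaluates a standard smooth L1 loss on it; both helpers are our own illustrative code.

```python
import numpy as np

def offset_target(x, s=4):
    """Sub-pixel offset target x/s - floor(x/s) for keypoint coordinates x."""
    x = np.asarray(x, dtype=np.float32)
    return x / s - np.floor(x / s)

def smooth_l1(pred, target):
    """Standard smooth L1 loss, applied element-wise and summed."""
    diff = np.abs(pred - target)
    return np.sum(np.where(diff < 1.0, 0.5 * diff ** 2, diff - 0.5))

# A keypoint at pixel (37, 101) maps to cell (9, 25) with offset (0.25, 0.25).
print(offset_target([37, 101]), smooth_l1(np.array([0.2, 0.3]), offset_target([37, 101])))
```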
Deep Extreme Cut Deep Extreme Cut (DEXTR) [29] is an extreme point guided image segmentation method. It takes four extreme points and the cropped image region surrounding the bounding box spanned by the extreme points as input. From this it produces a category-agnostic foreground segmentation of the indicated object using the semantic segmentation network of Chen et al. [4]. The network learns to generate the segmentation mask that matches the input extreme point.
ExtremeNet for Object detection
ExtremeNet uses an HourglassNetwork [31] to detect five keypoints per class (four extreme points, and one center). We follow the training setup, loss and offset prediction of CornerNet [22]. The offset prediction is categoryagnostic, but extreme-point specific. There is no offset prediction for the center map. The output of our network is thus 5 × C heatmaps and 4 × 2 offset maps, where C is the number of classes (C = 80 for MS COCO [26]). Figure 3 shows an overview. Once the extreme points are extracted, we group them into detections in a purely geometric manner.
Algorithm 1: Center Grouping
Input : Center and extreme point heatmaps of an image for one category: $\hat{Y}^{(c)}, \hat{Y}^{(t)}, \hat{Y}^{(l)}, \hat{Y}^{(b)}, \hat{Y}^{(r)} \in (0, 1)^{H \times W}$; center and peak selection thresholds: $\tau_c$ and $\tau_p$
Output: Bounding boxes with scores
// Convert heatmaps into coordinates of keypoints. T, L, B, R are sets of points.
T ← ExtractPeak($\hat{Y}^{(t)}$, $\tau_p$); L ← ExtractPeak($\hat{Y}^{(l)}$, $\tau_p$); B ← ExtractPeak($\hat{Y}^{(b)}$, $\tau_p$); R ← ExtractPeak($\hat{Y}^{(r)}$, $\tau_p$)
for t ∈ T, l ∈ L, b ∈ B, r ∈ R do
    // If the geometric center of the four points has a high response in the center heatmap, commit a detection.
    c ← $((l_x + r_x)/2, (t_y + b_y)/2)$
    if $\hat{Y}^{(c)}_{c_x, c_y} \geq \tau_c$ then add the bounding box $(l_x, t_y, r_x, b_y)$ with its score
end
Center Grouping
Extreme points lie on different sides of an object. This complicates grouping. For example, an associative embedding [30] might not have a global enough view to group these keypoints. Here, we take a different approach that exploits the spread out nature of extreme points.
The input to our grouping algorithm is five heatmaps per class: one center heatmap $\hat{Y}^{(c)} \in (0, 1)^{H \times W}$ and four extreme heatmaps $\hat{Y}^{(t)}, \hat{Y}^{(l)}, \hat{Y}^{(b)}, \hat{Y}^{(r)} \in (0, 1)^{H \times W}$ for the top, left, bottom, and right, respectively. Given a heatmap, we extract the corresponding keypoints by detecting all peaks. A peak is any pixel location with a value greater than $\tau_p$ that is locally maximal in a 3 × 3 window surrounding the pixel. We name this procedure ExtractPeak.
Given four extreme points $t, b, r, l$ extracted from heatmaps $\hat{Y}^{(t)}, \hat{Y}^{(l)}, \hat{Y}^{(b)}, \hat{Y}^{(r)}$, we compute their geometric center $c = \left(\frac{l_x + r_x}{2}, \frac{t_y + b_y}{2}\right)$. If this center is predicted with a high response in the center map $\hat{Y}^{(c)}$, i.e., $\hat{Y}^{(c)}_{c_x, c_y} \geq \tau_c$ for a threshold $\tau_c$, we commit the extreme points as a valid detection. We then enumerate over all quadruples of keypoints $t, b, r, l$ in a brute force manner. We extract detections for each class independently. Algorithm 1 summarizes this procedure. We set $\tau_p = 0.1$ and $\tau_c = 0.1$ in all experiments.
This brute force grouping algorithm has a runtime of $O(n^4)$, where $n$ is the number of extracted extreme points for each cardinal direction. The supplementary material presents an $O(n^2)$ algorithm that is faster on paper. However, it is harder to accelerate on a GPU and slower in practice for the MS COCO dataset, where $n \leq 40$.
Ghost box suppression
Center grouping may give a high-confidence false-positive detection for three equally spaced collinear objects of the same size. The center object has two choices here: commit to the correct small box, or predict a much larger box containing the extreme points of its neighbors. We call these false-positive detections "ghost" boxes. As we'll show in our experiments, these ghost boxes are infrequent, but nonetheless a consistent error mode of our grouping.
We present a simple post-processing step to remove ghost boxes. By definition, a ghost box contains many other smaller detections. To discourage ghost boxes, we use a form of soft non-maxima suppression [1]. If the sum of scores of all boxes contained in a certain bounding box exceeds 3 times its own score, we divide its score by 2. This non-maxima suppression is similar to the standard overlap-based non-maxima suppression, but penalizes potential ghost boxes instead of multiple overlapping boxes.
Edge aggregation
Extreme points are not always uniquely defined. If vertical or horizontal edges of an object form the extreme points (e.g., the top of a car) any point along that edge might be considered an extreme point. As a result, our network produces a weak response along any aligned edges of the object, instead of a single strong peak response. This weak response has two issues: First, the weaker response might be below our peak selection threshold τ p , and we will miss the extreme point entirely. Second, even if we detect the keypoint, its score will be lower than a slightly rotated object with a strong peak response.
We use edge aggregation to address this issue. For each extreme point, extracted as a local maximum, we aggregate its score in either the vertical direction, for left and right extreme points, or the horizontal direction, for top and bottom keypoints. We aggregate all monotonically decreasing scores, and stop the aggregation at a local minimum along the aggregation direction. The aggregated sum is then added to the score of the extreme point.
Extreme Instance Segmentation
Extreme points carry considerably more information about an object than a simple bounding box, with at least twice as many annotated values (8 vs. 4). We propose a simple method to approximate the object mask using extreme points by creating an octagon whose edges are centered on the extreme points. Specifically, for each extreme point, we extend it in both directions on its corresponding edge to a segment of 1/4 of the entire edge length. The segment is truncated when it meets a corner. We then connect the end points of the four segments to form the octagon. See Figure 1 for an example.
To further refine the bounding box segmentation, we use Deep Extreme Cut (DEXTR) [29], a deep network trained to convert the manually provided extreme points into instance segmentation mask. In this work, we simply replace the manual input of DEXTR [29] with our extreme point prediction, to perform a 2-stage instance segmentation. Specifically, for each of our predicted bounding box , we crop the bounding box region, render a Gaussian map with our predicted extreme point, and then feed the concatenated image to the pre-trained DEXTR model. DEXTR [29] is class-agnostic, thus we directly use the detected class and score of ExtremeNet. No further post-processing is used.
Experiments
We evaluate our method on the popular MS COCO dataset [26]. COCO contains rich bounding box and instance segmentation annotations for 80 categories. We train on the train2017 split, which contains 118k images and 860k annotated objects. We perform all ablation studies on the val2017 split, with 5k images and 36k objects, and compare to prior work on the test-dev split, which contains 20k images. The main evaluation metric is average precision over a dense set of fixed recall thresholds. We show average precision at IoU threshold 0.5 (AP50), 0.75 (AP75), and averaged over all thresholds between 0.5 and 1 (AP). We also report AP for small, medium and large objects (APS, APM, APL). The test evaluation is done on the official evaluation server. Qualitative results are shown in Table 4, and more can be found in the supplementary material.
Extreme point annotations
There are no direct extreme point annotations in COCO [26]. However, there are complete annotations for object segmentation masks. We thus find extreme points as extrema of the polygonal mask annotations. In cases where an edge is parallel to an axis or within a 3° angle of it, we place the extreme point at the center of the edge. Although our training data is derived from the more expensive segmentation annotation, the extreme point data itself is 4× cheaper to collect than the standard bounding box [33].
Training details
Our implementation is based on the public implementation of CornerNet [22]. We strictly follow CornerNet's hyper-parameters: we set the input resolution to 511 × 511 and the output resolution to 128 × 128. Data augmentation consists of flipping, random scaling between 0.6 and 1.3, random cropping, and random color jittering. The network is optimized with Adam [21] with a learning rate of 2.5e-4. CornerNet [22] was originally trained on 10 GPUs for 500k iterations, an equivalent of over 140 GPU days. Due to limited GPU resources, we fine-tune our network from a pre-trained CornerNet model with randomly initialized head layers on 5 GPUs for 250k iterations with a batch size of 24. The learning rate is dropped 10× at 200k iterations.
Testing details
For each input image, our network produces four C-channel heatmaps for extreme points, one C-channel heatmap for center points, and four 2-channel offset maps. We apply edge aggregation (Section 4.3) to each extreme point heatmap, and multiply the center heatmap by 2 to correct for the overall scale change. We then apply the center grouping algorithm (Section 4.1) to the heatmaps. At most 40 top points are extracted in ExtractPeak to keep the enumeration efficient. The predicted bounding box coordinates are refined by adding the offsets at the corresponding locations of the offset maps.
Following CornerNet [22], we keep the original image resolution instead of resizing it to a fixed size. We use flip augmentation for testing. In our main comparison, we use additional 5× multi-scale (0.5, 0.75, 1, 1.25, 1.5) augmentation. Finally, Soft-NMS [1] filters all augmented detection results. Testing on one image takes 322ms (3.1FPS), with 168ms on network forwarding, 130ms on grouping and rest time on image loading and post processing (NMS).
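For reference, a simplified linear Soft-NMS [1] routine is sketched below; the IoU threshold and the linear decay form are common defaults and not necessarily the exact settings used in our experiments.

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / max(area_a + area_b - inter, 1e-9)

def soft_nms(dets, iou_thresh=0.5):
    """Linear Soft-NMS: decay, rather than discard, overlapping detections.

    dets: list of (x1, y1, x2, y2, score). Returns rescored detections.
    """
    dets = [list(d) for d in dets]
    kept = []
    while dets:
        best = max(range(len(dets)), key=lambda i: dets[i][4])
        box = dets.pop(best)
        kept.append(tuple(box))
        for d in dets:
            o = iou(box, d)
            if o > iou_thresh:
                d[4] *= (1.0 - o)  # linear score decay
    return kept
```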
Ablation studies
Center Grouping vs. Associative Embedding Our ExtremeNet can also be trained with an Associative Embedding [30], by replacing the center map with a four-channel associative embedding feature map trained with a Hinge Loss [22]. Table 1 shows the result. We observe a 2.1% AP drop when using the associative embedding. While associative embeddings work well for human pose estimation and CornerNet, our extreme points lie on the very sides of objects. Learning the identity and appearance of entire objects from the vantage point of their extreme points might simply be too hard. While it might work well for small objects, where the entire object easily fits into the effective receptive field of a keypoint, it fails for medium and large objects, as shown in Table 1. Furthermore, extreme points often lie at the intersection between overlapping objects, which further confuses the identity feature. Our geometric grouping method gracefully deals with these issues, as it only needs to reason about appearance.
Edge aggregation Edge aggregation (Section 4.3) gives a decent AP improvement of 0.7%. It proves more effective for larger objects, which are more likely to have long axis-aligned edges without a single well-defined extreme point. It effectively enhances the predicted heatmap in a simple post-processing step with minor additional cost.
Ghost box suppression Our simple ghost bounding box suppression (Section 4.2) yields 0.3% AP improvement. This suggests that ghost boxes are not a significant practical issue in MS COCO. A more sophisticated false-positive removal algorithm, e.g., learn NMS [18], might yield a slightly better result.
Error Analysis To better understand where the error comes from and how well each of our components is trained, we provide an error analysis by replacing each output component with its ground truth. Table 1 shows the result. A ground truth center heatmap alone does not increase AP much. This indicates that our center heatmap is trained quite well, and shows that the implicit object center is learnable. Replacing the extreme point heatmap with ground truth gives a 16.3% AP improvement, with nearly twice that improvement for small object AP. This indicates that keypoints of small objects are still hard to learn. The keypoint extractor struggles both with exact localization and with outright missing objects. When replacing both the extreme point heatmap and the center heatmap, the result comes to 79.8%, much higher than replacing only one of them. This is because our center grouping is very strict in the keypoint location, and a high performance requires improving both heatmaps.
Table 2: State-of-the-art comparison on COCO test-dev. SS/MS are short for single-scale/multi-scale testing, respectively. It shows that our ExtremeNet is on-par with state-of-the-art region-based object detectors.
Table 2 compares ExtremeNet to other state-of-the-art methods on COCO test-dev. Our model with multi-scale testing achieves an AP of 43.2, outperforming all reported one-stage object detectors and on-par with popular two-stage detectors. Notably, it performs 1.1% higher than CornerNet, which shows the advantage of detecting extreme and center points over detecting corners with associative features. In the single-scale setting, our performance is 0.4% AP below CornerNet [22]. However, our method has higher AP for small and medium objects than CornerNet, which is known to be more challenging. For larger objects our center response map might not be accurate enough to perform well, as a shift of a few pixels might make the difference between a detection and a false-negative. Further, note that we used half the number of GPUs to train our model.
State-of-the-art comparisons
Instance Segmentation
Finally, we compare our instance segmentation results with and without DEXTR [29] to other baselines. Table 3 shows the results. As a dummy baseline, we directly assign all pixels inside the rectangular bounding box as the segmentation mask. The result on our best model (with 43.3% bounding box AP) is 12.1% Mask AP. The simple octagon mask (Section 4.4) based on our predicted extreme points gets a mask AP of 18.9%, much better than the bounding box baseline. This shows that this simple octagon mask can give a relatively reasonable object mask without additional cost. Note that directly using the quadrangle of the four extreme points yields a too-small mask, with a lower IoU.
Table 4: Qualitative results on COCO val2017. First and second columns: our predicted (combined four) extreme point heatmap and center heatmap, respectively, overlaid on the input image. We show heatmaps of different categories in different colors. Third column: our predicted bounding box and the octagon mask formed by extreme points. Fourth column: resulting masks of feeding our extreme point predictions to DEXTR [29].
When combined with DEXTR [29], our method achieves a mask AP of 34.6% on COCO val2017. To put this result in a context, the state-of-the-art Mask RCNN [15] gets a mask AP of 37.5% with ResNeXt-101-FPN [24,50] back-bone and 34.0% AP with Res50-FPN. Considering the fact that our model has not been trained on the COCO segmentation annotation, or any class specific segmentations at all, our result which is on-par with Res50 [17] and 2.9% AP below ResNeXt-101 is very competitive.
Conclusion
In conclusion, we present a novel object detection framework based on bottom-up extreme points estimation. Our framework extracts four extreme points and groups them in a purely geometric manner. The presented framework yields state-of-the-art detection results and produces competitive instance segmentation results on MSCOCO, without seeing any COCO training instance segmentations. | 3,844 |
1901.07786 | 2952145720 | Headline generation is a special type of text summarization task. While the amount of available training data for this task is almost unlimited, it still remains challenging, as learning to generate headlines for news articles implies that the model has strong reasoning about natural language. To overcome this issue, we applied recent Universal Transformer architecture paired with byte-pair encoding technique and achieved new state-of-the-art results on the New York Times Annotated corpus with ROUGE-L F1-score 24.84 and ROUGE-2 F1-score 13.48. We also present the new RIA corpus and reach ROUGE-L F1-score 36.81 and ROUGE-2 F1-score 22.15 on it. | In the recent work of Hayashi @cite_1 , an encoder-decoder approach was presented, where the first sentence was reformulated to a headline. Our Encoder-Decoder baseline (see section ) follows their setup. | {
"abstract": [
"Automatic headline generation is related to automatic text summarization and it is useful to solve information flood problems. This paper aims at generating a headline using a recurrent neural network which is based on a machine translation approach. Our headline generator consists of an encoder and a decoder and they are constructed with Long Short Term Memory, which is one of recurrent neural networks. The encoder constructs distributed representation from the first sentence in an article and the decoder generated headlines from the distributed representation. In our experiments, we confirmed that our proposed method could generate appropriate headlines but in some articles this method generates meaningless headlines. The results show that our proposed method is superior to another approach, statistical machine translation from the viewpoint of ROUGE, which is an evaluation score of automatic text summarization. Furthermore, we could find that using an input sentence in reverse order improves the quality of headline generation."
],
"cite_N": [
"@cite_1"
],
"mid": [
"2787752238"
]
} | Self-Attentive Model for Headline Generation | Headline writing style has broader applications than those used purely within the journalism community. So-called naming is one of the arts of journalism. Just as natural language processing techniques help people with tasks such as incoming message classification (see [5] or [6]), the naming problem could also be solved using modern machine learning and, in particular, deep learning techniques. In the field of machine learning, the naming problem is formulated as headline generation, i.e. given the text it is needed to generate a title.
Headline generation can also be seen as a special type of text summarization. The aim of summarization is to produce a shorter version of the text that captures the main idea of the source version. We focus on abstractive summarization when the summary is generated on the fly, conditioned on the source sentence, possibly containing novel words not used in the original text.
The downside of traditional summarization is that finding a source of summaries for a large number of texts is rather costly. The advantage of headline generation over the traditional approach is that we have an endless supply of news articles since they are available in every major language and almost always have a title.
This task could be considered language-independent due to the absence of the necessity of native speakers for markup and/or model development.
While the task of learning to generate article headlines may seem to be easier than generating full summaries, it still requires that the learning algorithm be able to catch structure dependencies in natural language and therefore could be an interesting benchmark for testing various approaches.
In this paper, we present a new approach to headline generation based on Universal Transformer architecture which explicitly learns non-local representations of the text and seems to be necessary to train summarization model. We also present the test results of our model on the New York Times Annotated corpus and the RIA corpus.
Related Work
Rush et al. [11] were the first to apply an attention mechanism to abstractive text summarization.
In the recent work of Hayashi [4], an encoder-decoder approach was presented, where the first sentence was reformulated to a headline. Our Encoder-Decoder baseline (see section 6.1) follows their setup.
The related approach was presented in [10], where the approach of the first sentence was expanded with a so-called topic sentence. The topic sentence is chosen to be the first sentence containing the most important information from a news article (so called 5W1H information, where 5W1H stands for who, what, where, when, why, how). Our Encoder-Decoder baseline could be considered to implement their approach in OF (trained On First sentence) setup.
Tan et al. in [15] present an encoder-decoder approach based on a pregenerated summary of the article. The summary is generated using a statistical summarization approach. The authors mention that the first sentence approach is not enough for New York Times corpora, but they only use a summary for their approach instead of the whole text, thus relying on external tools of summarization.
Our Approach
Universal Transformer
While RNNs could easily be used to define the Encoder-Decoder model, learning the recurrent model is very expensive from a computation perspective. The other drawback is that they use only local information while emitting the sequence of hidden states $H = \{h_1, ..., h_N\}$: any two hidden states $h_i$ and $h_j$ are connected by $j - i$ RNN computations, which makes it hard to catch all the dependencies between them due to limited capacity. To train a rich model that would learn complex text structure, we have to define a model that relies on non-local dependencies in the data.
In this work, we adopt the Universal Transformer model architecture [3], which is a modified version of Transformer [16]. This approach has several benefits over RNNs. First of all, it could be trained in parallel. Furthermore, all input vectors are connected to every other via the attention mechanism. It implies that Transformer architecture learns non-local dependencies between tokens regardless of the distance between them, and thus it is able to learn a more complex representation of the text in the article, which proves to be necessary to effectively solve the task of summarization. Also, unlike [4,15], our model is trained end-to-end using the text and title of each news article.
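To make the non-local connectivity concrete, here is a minimal NumPy sketch of single-head scaled dot-product self-attention, the core operation of (Universal) Transformer layers; it is an illustration rather than the model's actual implementation.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X: (N, d_model) token representations; Wq, Wk, Wv: (d_model, d_k) projections.
    Every token attends to every other token in a single step, so dependencies
    are modeled regardless of the distance between positions.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # (N, N) all-pairs scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V
```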
Byte Pair Encoding
We also adopt byte-pair encoding (BPE), introduced by Sennrich for the machine translation task in [13]. BPE is a data compression technique where often encountered pairs of bytes are replaced by additional extra-alphabet symbols. In the case of texts, like in the machine translation field, the most frequent words are kept in the vocabulary, while less frequent words are replaced by a sequence of (typically two) tokens. E.g., for morphologically rich languages, the word endings could be detached since each word form is definitely less frequent than its stem. BPE encoding allows us to represent all words, including the ones unseen during training, with a fixed vocabulary.
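The merge-learning idea behind BPE can be illustrated with the toy Python sketch below (count adjacent symbol pairs and merge the most frequent one); production tokenizers such as the one used in [13] additionally handle word boundaries, end-of-word markers and efficient application of the learned merges.

```python
from collections import Counter

def learn_bpe_merges(words, num_merges=10):
    """Learn BPE merge operations from a {word: frequency} dictionary.

    Words are represented as tuples of symbols; each step merges the
    most frequent adjacent pair into a single new symbol.
    """
    vocab = {tuple(word): freq for word, freq in words.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            merged[tuple(out)] = freq
        vocab = merged
    return merges

# Example: most frequent pairs in a tiny corpus.
print(learn_bpe_merges({"lower": 5, "newest": 3, "widest": 2}, num_merges=5))
```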
Experiments
In our experiments, we consider two corpora: one in Russian and another in English. It is important to mention that we have not done any additional preprocessing other than lowercasing, unlike other approaches [4,10]. We apply BPE encoding, which allows us to avoid the use of an <UNK> token for out-of-vocabulary words. For our experiments, we withheld 20,000 random articles to form the test set. We repeated our experiments 5 times with different random seeds and report mean values.
English Dataset We use the New York Times Annotated Corpus (NYT) as presented by the Linguistic Data Consortium in [12]. This dataset contains 1.8 million news articles from the New York Times news agency, written between the years 1987 and 2006. For our experiments, we filtered out news articles containing titles shorter than 3 words or longer than 15 words. We also filtered articles with a body text shorter than 20 words or longer than 2000 words. In addition, we skipped obituaries in the dataset. After filtering, we had 1444919 news available to us with a mean title length of 7.9 words and mean text length of 707.6 words.
Russian Dataset Russian news agency "Rossiya Segodnya" provided us with a dataset (RIA) for research purposes 1 . It contains news documents from January, 2010 to December, 2014. In total, there are 1003869 news articles in the provided corpus with a mean title length 9.5 words and mean text length of 315.6 words.
Experiments
Baseline models
First Sentence This model takes the first sentence of an article and uses it as its hypothesis for an article headline. This is a strong baseline for generating headlines from news articles.
Encoder-Decoder Following [10], we use the encoder-decoder architecture on the first sentence of an article. The model itself is the sequence-to-sequence RNN model of Sutskever et al. [14] already described in the related work section. For this approach, we use the same preprocessing as we did for our model, including byte pair encoding.
Training
For both datasets, NYT and RIA, we used the same set of hyper-parameters for the models, namely 4 layers in the encoder and decoder with 8 heads of attention. In addition, we added a Dropout of p = 0.3 before applying Layer Normalization [8].
The models were trained with the Adam optimizer using a scaled learning rate, as proposed by the authors of the original Transformer, with the number of warmup steps equal to 4000 in both cases and β = (0.9, 0.98). Both models were trained until convergence.
We trained the BPE tokenizer separately on the two datasets. NYT data was tokenized with an active vocabulary of 40000 tokens, while RIA data was tokenized using a 50000-token vocabulary. In addition, we limited the length of the documents to 3000 BPE tokens and 2000 BPE tokens for the RIA and NYT datasets, respectively; any exceeding tokens were omitted. word2vec [9] embeddings were trained on each dataset with the size of each embedding equal to 512. For headline generation, we adopted a beam-search size of 10.
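For reference, the scaled learning-rate schedule mentioned above (the warmup-then-decay rule from the original Transformer) can be written as follows; d_model = 512 matches the embedding size used here, while the function itself is our own illustrative code.

```python
def transformer_lr(step, d_model=512, warmup_steps=4000):
    """Noam learning-rate schedule from "Attention Is All You Need".

    The rate increases linearly for the first warmup_steps updates and
    then decays proportionally to the inverse square root of the step.
    """
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# Peak learning rate is reached exactly at step == warmup_steps.
print(transformer_lr(4000), transformer_lr(40000))
```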
Results
Table 1 (column headers): Model, R-1-f, R-1-r, R-2-f, R-2-r, R-L-f, R-L-r.
In Tab. 1 we present results based on two corpora: the New York Times Annotated (NYT) corpus for English, and the Rossiya Segodnya (RIA) corpus for Russian. For the NYT corpus, we reached a new state of the art on ROUGE-1, ROUGE-2 and ROUGE-L F1 scores. For the RIA corpus, since it has no previous art, we present results for the baselines and our model. For our model we also experimented with label smoothing following [7].
In our experiments, we noticed that some of the generated headlines are scored low by ROUGE metrics despite seeming reasonable, e.g., the top sample in Tab. 2. We therefore ran a human evaluation: 5 annotators marked up 100 randomly sampled articles from the train set of each corpus. Each number shows the percentage of annotator preference over three possible options: original headline (Human), generated headline (Machine), no preference (Tie).
For both corpora, we can see that our model is not reaching human parity yet, having 42.6% and 45.6% (Machine + Tie) user preference for the NYT and RIA datasets respectively, but this result is already close to human parity and leaves room for improvement.
Original text, truncated: Unethical and irresponsible as the assertion that antidepressant medication, an excellent treatment for some forms of depression, will turn a man into a fish. It does a disservice to psychoanalysis, which offers rich and valuable insights into the human mind. ... Homosexuality is not an illness by any of the usual criteria in medicine, such as an increased risk of morbidity or mortality, painful symptoms or social, interpersonal or occupational dysfunction as a result of homosexuality itself... Original headline: homosexuality, not an illness, can't be cured Generated headline: why we can't let gay therapy begin Original text, truncated: southwest airlines said yesterday that it would add 16 flights a day from chicago midway airport, moving to protect a valuable hub amid the fight breaking out over the assets of ata airlines, the airport's biggest carrier. southwest said that beginning in january, it would add the flights to 13 cities that it already served from midway... Original headline: southwest is adding flights to protect its chicago hub Generated headline: southwest airlines to add 16 flights from chicago Original text, truncated: москва, 1 апр -риа новости. количество сделок продажи элитных квартир в москве выросло в первом квартале этого года, по сравнению с аналогичным периодом предыдущего, в два раза, говорится в отчете компании intermarksavill s. при этом, также сообщается в нем, количество заключенных в столице первичных сделок в сегменте бизнес-класса в первом квартале 2010 года оказалось на 20 выше, чем в первом квартале прошлого года... Original headline: продажи элитного жилья в москве увеличились в 1 квартале в два раза Generated headline: продажи элитных квартир в москве в 1 квартале выросли вдвое Table 3. Samples of headlines generated by our model.
Conclusion
In this paper, we explore the application of the Universal Transformer architecture to the task of abstractive headline generation and outperform the abstractive state-of-the-art result on the New York Times Annotated corpus. We also present a newly released Rossiya Segodnya corpus and results achieved by our model applied to it. | 1,871
1901.07417 | 2914826069 | This paper shows that every sublevel set of the loss function of a class of deep over-parameterized neural nets with piecewise linear activation functions is connected and unbounded. This implies that the loss has no bad local valleys and all of its global minima are connected within a unique and potentially very large global valley. | Many interesting theoretical results have been developed on the loss surface of neural networks @cite_23 @cite_11 @cite_22 @cite_10 @cite_18 @cite_3 @cite_24 @cite_4 @cite_15 @cite_0 @cite_5 @cite_12 @cite_1 @cite_19 @cite_6 . There is also a whole line of researches studying convergence of learning algorithms in training neural networks and others studying generalization properties, which is however beyond the scope of this paper. | {
"abstract": [
"An emerging design principle in deep learning is that each layer of a deep artificial neural network should be able to easily express the identity transformation. This idea not only motivated various normalization techniques, such as batch normalization, but was also key to the immense success of residual networks. @PARASPLIT In this work, we put the principle of identity parameterization on a more solid theoretical footing alongside further empirical progress. We first give a strikingly simple proof that arbitrarily deep linear residual networks have no spurious local optima. The same result for feed-forward networks in their standard parameterization is substantially more delicate. Second, we show that residual networks with ReLu activations have universal finite-sample expressivity in the sense that the network can represent any function of its sample provided that the model has more parameters than the sample size. @PARASPLIT Directly inspired by our theory, we experiment with a radically simple residual architecture consisting of only residual convolutional layers and ReLu activations, but no batch normalization, dropout, or max pool. Our model improves significantly on previous all-convolutional networks on the CIFAR10, CIFAR100, and ImageNet classification benchmarks.",
"In deep learning, , as well as , create non-convex loss surfaces. Then, does depth alone create bad local minima? In this paper, we prove that without nonlinearity, depth alone does not create bad local minima, although it induces non-convex loss surface. Using this insight, we greatly simplify a recently proposed proof to show that all of the local minima of feedforward deep linear neural networks are global minima. Our theoretical results generalize previous results with fewer assumptions, and this analysis provides a method to show similar results beyond square loss in deep linear models.",
"The past few years have seen a dramatic increase in the performance of recognition systems thanks to the introduction of deep networks for representation learning. However, the mathematical reasons for this success remain elusive. A key issue is that the neural network training problem is nonconvex, hence optimization algorithms may not return a global minima. This paper provides sufficient conditions to guarantee that local minima are globally optimal and that a local descent strategy can reach a global minima from any initialization. Our conditions require both the network output and the regularization to be positively homogeneous functions of the network parameters, with the regularization being designed to control the network size. Our results apply to networks with one hidden layer, where size is measured by the number of neurons in the hidden layer, and multiple deep subnetworks connected in parallel, where size is measured by the number of subnetworks.",
"Several recently proposed architectures of neural networks such as ResNeXt, Inception, Xception, SqueezeNet and Wide ResNet are based on the designing idea of having multiple branches and have demonstrated improved performance in many applications. We show that one cause for such success is due to the fact that the multi-branch architecture is less non-convex in terms of duality gap. The duality gap measures the degree of intrinsic non-convexity of an optimization problem: smaller gap in relative value implies lower degree of intrinsic non-convexity. The challenge is to quantitatively measure the duality gap of highly non-convex problems such as deep neural networks. In this work, we provide strong guarantees of this quantity for two classes of network architectures. For the neural networks with arbitrary activation functions, multi-branch architecture and a variant of hinge loss, we show that the duality gap of both population and empirical risks shrinks to zero as the number of branches increases. This result sheds light on better understanding the power of over-parametrization where increasing the network width tends to make the loss surface less non-convex. For the neural networks with linear activation function and @math loss, we show that the duality gap of empirical risk is zero. Our two results work for arbitrary depths and adversarial data, while the analytical techniques might be of independent interest to non-convex optimization more broadly. Experiments on both synthetic and real-world datasets validate our results.",
"",
"",
"We study the error landscape of deep linear and nonlinear neural networks with the squared error loss. Minimizing the loss of a deep linear neural network is a nonconvex problem, and despite recent progress, our understanding of this loss surface is still incomplete. For deep linear networks, we present necessary and sufficient conditions for a critical point of the risk function to be a global minimum. Surprisingly, our conditions provide an efficiently checkable test for global optimality, while such tests are typically intractable in nonconvex optimization. We further extend these results to deep nonlinear neural networks and prove similar sufficient conditions for global optimality, albeit in a more limited function space setting.",
"Due to the success of deep learning to solving a variety of challenging machine learning tasks, there is a rising interest in understanding loss functions for training neural networks from a theoretical aspect. Particularly, the properties of critical points and the landscape around them are of importance to determine the convergence performance of optimization algorithms. In this paper, we provide full (necessary and sufficient) characterization of the analytical forms for the critical points (as well as global minimizers) of the square loss functions for various neural networks. We show that the analytical forms of the critical points characterize the values of the corresponding loss functions as well as the necessary and sufficient conditions to achieve global minimum. Furthermore, we exploit the analytical forms of the critical points to characterize the landscape properties for the loss functions of these neural networks. One particular conclusion is that: The loss function of linear networks has no spurious local minimum, while the loss function of one-hidden-layer nonlinear networks with ReLU activation function does have local minimum that is not global minimum.",
"With the increasing interest in deeper understanding of the loss surface of many non-convex deep models, this paper presents a unifying framework to study the local global optima equivalence of the optimization problems arising from training of such non-convex models. Using the \"local openness\" property of the underlying training models, we provide simple sufficient conditions under which any local optimum of the resulting optimization problem is globally optimal. We first completely characterize the local openness of matrix multiplication mapping in its range. Then we use our characterization to: 1) show that every local optimum of two layer linear networks is globally optimal. Unlike many existing results in the literature, our result requires no assumption on the target data matrix Y, and input data matrix X. 2) develop almost complete characterization of the local global optima equivalence of multi-layer linear neural networks. We provide various counterexamples to show the necessity of each of our assumptions. 3) show global local optima equivalence of non-linear deep models having certain pyramidal structure. Unlike some existing works, our result requires no assumption on the differentiability of the activation functions and can go beyond \"full-rank\" cases.",
"It is well-known that neural networks are computationally hard to train. On the other hand, in practice, modern day neural networks are trained efficiently using SGD and a variety of tricks that include different activation functions (e.g. ReLU), over-specification (i.e., train networks which are larger than needed), and regularization. In this paper we revisit the computational complexity of training neural networks from a modern perspective. We provide both positive and negative results, some of them yield new provably efficient and practical algorithms for training certain types of neural networks.",
"",
"Understanding the geometry of neural network loss surfaces is important for the development of improved optimization algorithms and for building a theoretical understanding of why deep learning works. In this paper, we study the geometry in terms of the distribution of eigenvalues of the Hessian matrix at critical points of varying energy. We introduce an analytical framework and a set of tools from random matrix theory that allow us to compute an approximation of this distribution under a set of simplifying assumptions. The shape of the spectrum depends strongly on the energy and another key parameter, ϕ, which measures the ratio of parameters to data points. Our analysis predicts and numerical simulations support that for critical points of small index, the number of negative eigenvalues scales like the 3 2 power of the energy. We leave as an open problem an explanation for our observation that, in the context of a certain memorization task, the energy of minimizers is well-approximated by the function 1 2(1 - ϕ)2.",
"Deep learning, in the form of artificial neural networks, has achieved remarkable practical success in recent years, for a variety of difficult machine learning applications. However, a theoretical explanation for this remains a major open problem, since training neural networks involves optimizing a highly non-convex objective function, and is known to be computationally hard in the worst case. In this work, we study the geometric structure of the associated non-convex objective function, in the context of ReLU networks and starting from a random initialization of the network parameters. We identify some conditions under which it becomes more favorable to optimization, in the sense of (i) High probability of initializing at a point from which there is a monotonically decreasing path to a global minimum; and (ii) High probability of initializing at a basin (suitably defined) with a small minimal objective value. A common theme in our results is that such properties are more likely to hold for larger (\"overspecified\") networks, which accords with some recent empirical and theoretical observations.",
"One of the main difficulties in analyzing neural networks is the non-convexity of the loss function which may have many bad local minima. In this paper, we study the landscape of neural networks for binary classification tasks. Under mild assumptions, we prove that after adding one special neuron with a skip connection to the output, or one special neuron per layer, every local minimum is a global minimum.",
"We study the connection between the highly non-convex loss function of a simple model of the fully-connected feed-forward neural network and the Hamiltonian of the spherical spin-glass model under the assumptions of: i) variable independence, ii) redundancy in network parametrization, and iii) uniformity. These assumptions enable us to explain the complexity of the fully decoupled neural network through the prism of the results from random matrix theory. We show that for large-size decoupled networks the lowest critical values of the random loss function form a layered structure and they are located in a well-defined band lower-bounded by the global minimum. The number of local minima outside that band diminishes exponentially with the size of the network. We empirically verify that the mathematical model exhibits similar behavior as the computer simulations, despite the presence of high dependencies in real networks. We conjecture that both simulated annealing and SGD converge to the band of low critical points, and that all critical points found there are local minima of high quality measured by the test error. This emphasizes a major difference between largeand small-size networks where for the latter poor quality local minima have nonzero probability of being recovered. Finally, we prove that recovering the global minimum becomes harder as the network size increases and that it is in practice irrelevant as global minimum often leads to overfitting."
],
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_1",
"@cite_3",
"@cite_6",
"@cite_24",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_5",
"@cite_15",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"2565538933",
"2593380010",
"2746420172",
"2807313858",
"2614119628",
"2964072429",
"2736030546",
"2765428107",
"2963076198",
"2963759574",
"2963336603",
"2618381130",
"2963326517",
"2949148114",
"1899249567"
]
} | On Connected Sublevel Sets in Deep Learning | It is commonly observed in deep learning that over-parameterization can sometimes be helpful for optimizing neural networks. Theoretically, several recent works (Allen-Zhu et al., 2018b; Du et al., 2018; Zou et al., 2018) have also established convergence of gradient descent for non-linear networks under excessive over-parameterization regimes. For instance, the above works require Ω(N^4) neurons (or more) per hidden layer to guarantee that (stochastic) gradient descent converges to a global minimum with zero training error. This is an inspiring result given the hardness of the problem; however, the question of which fundamental properties of the loss underpin these successes remains unanswered. We are interested in the following question:
Is there any underlying geometric structure of the loss function that can "intuitively" support the success of local search algorithms like gradient descent under excessive over-parameterization regimes?
This paper sheds light on this question by showing that every sublevel set of the loss function is connected if the network has a sufficiently wide hidden layer. The key idea of the paper is to show that, for linearly independent training data and under a relatively mild condition on the architecture, every sublevel set of the loss is connected. This allows us to obtain similar results for deep and wide neural nets with arbitrary data. In particular, we first show that if one of the hidden layers has more neurons than the number of training samples, then the loss has no bad local valleys in the sense that there is a continuous path from anywhere in parameter space on which the loss is non-increasing and gets arbitrarily close to the minimal value of the loss. In the special case where the first hidden layer has at least twice as many neurons as the number of training samples, we show that every sublevel set is connected, and thus there is a unique global valley. All our results hold for deep fully connected networks with standard architecture, for arbitrary convex losses and strictly monotonic and/or piecewise linear activation functions such as Leaky-ReLU. Most of the technical proofs are deferred to the appendix.
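As a purely empirical illustration of the over-parameterized regime studied below, the following NumPy sketch trains a one-hidden-layer Leaky-ReLU network whose hidden width exceeds 2N on random data with plain gradient descent; the training loss typically decreases steadily toward p* = 0. This is an illustrative toy with manual gradients and hypothetical sizes, seed and learning rate, not the paper's code.

```python
import numpy as np

# Toy check: a one-hidden-layer Leaky-ReLU net with n1 >= 2N fits random data
# with plain gradient descent; sizes, seed and learning rate are arbitrary.
rng = np.random.default_rng(0)
N, d, n1, m, a, lr = 8, 5, 16, 2, 0.1, 0.05
X, Y = rng.normal(size=(N, d)), rng.normal(size=(N, m))
W1, b1 = 0.5 * rng.normal(size=(d, n1)), np.zeros(n1)
W2, b2 = 0.5 * rng.normal(size=(n1, m)), np.zeros(m)

for step in range(20001):
    H = X @ W1 + b1
    A = np.where(H >= 0, H, a * H)            # Leaky-ReLU
    out = A @ W2 + b2
    G = 2.0 * (out - Y) / (N * m)             # gradient of the mean squared error
    dW2, db2 = A.T @ G, G.sum(0)
    dH = (G @ W2.T) * np.where(H >= 0, 1.0, a)
    dW1, db1 = X.T @ dH, dH.sum(0)
    W1, b1, W2, b2 = W1 - lr * dW1, b1 - lr * db1, W2 - lr * dW2, b2 - lr * db2
    if step % 5000 == 0:
        print(step, np.mean((out - Y) ** 2))  # should head toward ~0
```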
Key Result: Linearly Independent Data Leads to Connected Sublevel Sets
This section presents our key results for linearly independent data, which form the basis for our additional results in the next sections where we analyze deep over-parameterized networks with arbitrary data. Below we assume that the widths of the hidden layers are decreasing, i.e. $n_1 > \ldots > n_L$. Note that it is still possible to have $n_1 \ge d$ or $n_1 < d$. The above condition is quite natural as in practice (e.g., see Table 1 in (Nguyen & Hein, 2018)) the first hidden layer often has the largest number of neurons, after which the number of neurons decreases towards the output layer, which is helpful for the network to learn more compact representations at higher layers. We introduce the following property for a class of points $\theta = (W_l, b_l)_{l=1}^L$ in parameter space and refer to it later in our results and proofs.
Property 3.1 $W_l$ has full rank for every $l \in [2, L]$.
Our main result in this section is stated as follows.
Theorem 3.2 Let Assumption 2.1 hold, $\mathrm{rank}(X) = N$ and $n_1 > \ldots > n_L$, where $L \ge 2$. Then the following hold:
1. Every sublevel set of $\Phi$ is connected. Moreover, $\Phi$ can attain any value arbitrarily close to $p^*$.
2. Every non-empty connected component of every level set of $\Phi$ is unbounded.
We have the following decomposition of a sublevel set: $\Phi^{-1}((-\infty, \alpha]) = \Phi^{-1}(\alpha) \cup \Phi^{-1}((-\infty, \alpha))$. It follows that if $\Phi$ has unbounded level sets then its sublevel sets must also be unbounded. We note that the reverse is not true, e.g. the standard Gaussian distribution function has unbounded sublevel sets but its level sets are bounded. Given that, the two statements of Theorem 3.2 together imply that every sublevel set of the loss must be both connected and unbounded. While the first statement of Theorem 3.2 implies that $\Phi$ has a rather well-behaved loss surface, the second statement implies that it has no bounded valleys, regardless of whether these valleys contain a global minimum or not. This clearly also indicates that $\Phi$ has no strict local minima/maxima. In the remainder of this section, we present the proof of Theorem 3.2. The following lemmas will be helpful.
Lemma 3.3 Let the conditions of Theorem 3.2 hold and let $k \in [2, L]$. Then there is a continuous map $h : \Omega_2^* \times \ldots \times \Omega_k^* \times \mathbb{R}^{N \times n_k} \to \Omega_1$ which satisfies the following:
1. For every $\big((W_2, b_2), \ldots, (W_k, b_k), A\big) \in \Omega_2^* \times \ldots \times \Omega_k^* \times \mathbb{R}^{N \times n_k}$ it holds that $F_k\big(h((W_l, b_l)_{l=2}^k, A), (W_l, b_l)_{l=2}^k\big) = A.$
2. For every $\theta = (W_l^*, b_l^*)_{l=1}^L$ where all the matrices $(W_l^*)_{l=2}^k$ have full rank, there is a continuous curve from $\theta$ to $\big(h((W_l^*, b_l^*)_{l=2}^k, F_k(\theta)), (W_l^*, b_l^*)_{l=2}^L\big)$ on which the loss $\Phi$ is constant.
Proof: For every $\big((W_2, b_2), \ldots, (W_k, b_k), A\big) \in \Omega_2^* \times \ldots \times \Omega_k^* \times \mathbb{R}^{N \times n_k}$, let us define the value of the map $h$ as $h\big((W_l, b_l)_{l=2}^k, A\big) = (W_1, b_1)$, where $(W_1, b_1)$ is given by the following recursive formula
$$[W_1^T, b_1]^T = [X, 1_N]^\dagger \sigma^{-1}(B_1), \qquad B_l = \big(\sigma^{-1}(B_{l+1}) - 1_N b_{l+1}^T\big) W_{l+1}^\dagger \ \ \forall\, l \in [1, k-2],$$
$$B_{k-1} = \begin{cases} (A - 1_N b_L^T)\, W_L^\dagger & k = L,\\ \big(\sigma^{-1}(A) - 1_N b_k^T\big)\, W_k^\dagger & k \in [2, L-1]. \end{cases}$$
By our assumption $n_1 > \ldots > n_L$, it follows from the domain of $h$ that all the matrices $(W_l)_{l=2}^k$ have full column rank, and so they have a left inverse. Similarly, $[X, 1_N]$ has full row rank due to our assumption that $\mathrm{rank}(X) = N$, and so it has a right inverse. Moreover, $\sigma$ has a continuous inverse by Assumption 2.1. Thus $h$ is a continuous map as it is a composition of continuous functions. In the following, we prove that $h$ satisfies the two statements of the lemma.
1. Let $\big((W_2, b_2), \ldots, (W_k, b_k), A\big) \in \Omega_2^* \times \ldots \times \Omega_k^* \times \mathbb{R}^{N \times n_k}$. Since all the matrices $(W_l)_{l=2}^k$ have full column rank and $[X, 1_N]$ has full row rank, it holds that $W_l^\dagger W_l = I$ and $[X, 1_N][X, 1_N]^\dagger = I$, and thus we easily obtain from the above definition of $h$ that
$$B_1 = \sigma\big([X, 1_N]\,[W_1^T, b_1]^T\big), \qquad B_{l+1} = \sigma(B_l W_{l+1} + 1_N b_{l+1}^T) \ \ \forall\, l \in [1, k-2],$$
$$A = \begin{cases} B_{L-1} W_L + 1_N b_L^T & k = L,\\ \sigma(B_{k-1} W_k + 1_N b_k^T) & k \in [2, L-1]. \end{cases}$$
One can easily check that the above formula for $A$ is exactly the definition of $F_k$ from (1), and thus it holds that $F_k\big(h((W_l, b_l)_{l=2}^k, A), (W_l, b_l)_{l=2}^k\big) = A$ for every $\big((W_2, b_2), \ldots, (W_k, b_k), A\big) \in \Omega_2^* \times \ldots \times \Omega_k^* \times \mathbb{R}^{N \times n_k}$.
2. Let G l : R N ×n l−1 → R N ×n l be defined as
G l (Z) = ZW * L + 1 N (b * L ) T l = L σ ZW * l + 1 N (b * l ) T l ∈ [2, L − 1].
For convenience, let us group the parameters of the first layer into a matrix, say
U = [W T 1 , b 1 ] T ∈ R (d+1)×n1 . Similarly, let U * = [(W * 1 ) T , b * 1 ] T ∈ R (d+1)×n1 . Let f : R (d+1)×n1 → R N ×n k be a function of (W 1 , b 1 ) defined as f (U ) = G k • G k−1 . . . G 2 • G 1 (U ), where G 1 (U ) = σ([X, 1 N ]U ), U = [W T 1 , b 1 ] T .
We note that this definition of f is exactly F k from (1), but here we want to exploit the fact that f is a function of (W 1 , b 1 ) as all other parameters are fixed to the corresponding values of θ. Let A = F k (θ). By definition we have f (U * ) = A and thus U * ∈ f −1 (A). Let us denote
(W h 1 , b h 1 ) = h W * l , b * l ) k l=2 , A , U h = [(W h 1 ) T , b h 1 ] T .
By applying the first statement of the lemma to
(W * 2 , b * 2 ), . . . , (W * k , b * k ), A we have A = F k (W h 1 , b h 1 ), (W * l , b * l ) k l=2 = f (U h ) which implies U h ∈ f −1 (A). So far both U * and U h belong to f −1 (A). The idea now is that if one can show that f −1 (A)
is a connected set then there would exist a connected path between U * and U h (and thus a path between (W * 1 , b * 1 ) and (W h 1 , b h 1 )) on which the output at layer k is identical to A and hence the loss is invariant, which concludes the proof.
In the following, we show that $f^{-1}(A)$ is indeed connected. First, one observes that $\mathrm{range}(G_l) = \mathbb{R}^{N \times n_l}$ for every $l \in [2, k]$, since all the matrices $(W_l^*)_{l=2}^k$ have full column rank and $\sigma(\mathbb{R}) = \mathbb{R}$ due to Assumption 2.1. Similarly, it follows from our assumption $\mathrm{rank}(X) = N$ that $\mathrm{range}(G_1) = \mathbb{R}^{N \times n_1}$. By standard rules of composition, we have
$$f^{-1}(A) = G_1^{-1} \circ G_2^{-1} \circ \ldots \circ G_k^{-1}(A),$$
where all the inverse maps $G_l^{-1}$ have full domain. It holds that
$$G_k^{-1}(A) = \begin{cases} (A - 1_N (b_L^*)^T)(W_L^*)^\dagger + \{B \mid B W_L^* = 0\} & k = L,\\ \big(\sigma^{-1}(A) - 1_N (b_k^*)^T\big)(W_k^*)^\dagger + \{B \mid B W_k^* = 0\} & \text{else}, \end{cases}$$
which is a connected set in each case because of the following reasons: 1) the kernel of any matrix is connected, 2) the Minkowski sum of two connected sets is connected by Proposition 2.8, and 3) the image of a connected set under a continuous map is connected by Proposition 2.7. By repeating the similar argument for $k-1, \ldots, 2$, we conclude that $V := G_2^{-1} \circ \ldots \circ G_k^{-1}(A)$ is connected. Lastly, we have
$$G_1^{-1}(V) = [X, 1_N]^\dagger \sigma^{-1}(V) + \{B \mid [X, 1_N] B = 0\},$$
which is also connected by the same arguments above. Thus $f^{-1}(A)$ is a connected set.
Overall, we have shown in this proof that the set of $(W_1, b_1)$ which realizes the same output at layer $k$ (given that the parameters of the other layers in between are fixed) is a connected set. Since both $(W_1^*, b_1^*)$ and $h\big((W_l^*, b_l^*)_{l=2}^k, F_k(\theta)\big)$ belong to this solution set, there must exist a continuous path between them on which the loss $\Phi$ is constant.
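To make the map $h$ tangible, here is a minimal NumPy sketch for the simplest case $k = L = 2$ (one hidden layer): given fixed second-layer parameters of full column rank and any desired output $A$, it solves for a first layer realizing exactly that output, mirroring the recursion that defines $h$. The Leaky-ReLU slope, sizes and seed are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np

def leaky_relu(x, a=0.1):
    return np.where(x >= 0, x, a * x)

def leaky_relu_inv(y, a=0.1):
    return np.where(y >= 0, y, y / a)

def solve_first_layer(X, A, W2, b2, a=0.1):
    """Map h of Lemma 3.3 for k = L = 2: given a target output A and fixed
    (W2, b2) with W2 of full column rank, return (W1, b1) such that
    leaky_relu(X @ W1 + b1) @ W2 + b2 equals A (requires rank(X) = N)."""
    N = X.shape[0]
    X1 = np.hstack([X, np.ones((N, 1))])            # [X, 1_N], full row rank
    B1 = (A - b2) @ np.linalg.pinv(W2)              # B1 @ W2 = A - 1_N b2^T
    U = np.linalg.pinv(X1) @ leaky_relu_inv(B1, a)  # U = [W1; b1^T]
    return U[:-1, :], U[-1, :]

# sanity check with random data: N <= d so that rank(X) = N, and n1 > m so
# that W2 has full column rank (both hold almost surely for Gaussian draws)
rng = np.random.default_rng(0)
N, d, n1, m = 5, 8, 6, 3
X, A = rng.normal(size=(N, d)), rng.normal(size=(N, m))
W2, b2 = rng.normal(size=(n1, m)), rng.normal(size=m)
W1, b1 = solve_first_layer(X, A, W2, b2)
print(np.max(np.abs(leaky_relu(X @ W1 + b1) @ W2 + b2 - A)))   # ~1e-13
```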
Lemma 3.4 Let the conditions of Theorem 3.2 hold. Let $\theta = (W_l, b_l)_{l=1}^L$ be any point in parameter space. Then there is a continuous curve which starts from $\theta$ and ends at some $\theta' = (W_l', b_l')_{l=1}^L$ so that $\theta'$ satisfies Property 3.1 and the loss $\Phi$ is constant on the curve.
Proposition 3.5 (Evard & Jafari, 1994) The set of full rank matrices $A \in \mathbb{R}^{m \times n}$ is connected for $m \neq n$.
3.1. Proof of Theorem 3.2
1. Let $L_\alpha$ be some sublevel set of $\Phi$. Let $\theta = (W_l, b_l)_{l=1}^L$ and $\theta' = (W_l', b_l')_{l=1}^L$ be arbitrary points in $L_\alpha$. Let $F_L = F_L(\theta)$ and $F_L' = F_L(\theta')$. These two quantities are computed in the beginning and will never change during this proof; but when we write $F_L(\theta'')$ for some $\theta''$ we mean the network output evaluated at $\theta''$. The main idea is to construct two continuous paths which simultaneously start from $\theta$ and $\theta'$ and are entirely contained in $L_\alpha$ (this is done by making the loss on each individual path non-increasing), and then show that they meet at a common point in $L_\alpha$, which then implies that $L_\alpha$ is a connected set.
First of all, we can assume that both $\theta$ and $\theta'$ satisfy Property 3.1, because otherwise by Lemma 3.4 one can follow a continuous path from each point to arrive at some other point where this property holds and the loss on each path is invariant, meaning that we still stay inside $L_\alpha$. As $\theta$ and $\theta'$ satisfy Property 3.1, all the weight matrices $(W_l)_{l=2}^L$ and $(W_l')_{l=2}^L$ have full rank, and thus by applying the second statement of Lemma 3.3 with $k = L$ and using the similar argument above, we can simultaneously drive $\theta$ and $\theta'$ to the following points,
$$\theta = \Big(h\big((W_l, b_l)_{l=2}^L, F_L\big), (W_2, b_2), \ldots, (W_L, b_L)\Big), \qquad \theta' = \Big(h\big((W_l', b_l')_{l=2}^L, F_L'\big), (W_2', b_2'), \ldots, (W_L', b_L')\Big) \tag{3}$$
where $h : \Omega_2^* \times \ldots \times \Omega_L^* \times \mathbb{R}^{N \times m} \to \Omega_1$ is the continuous map from Lemma 3.3, which satisfies
$$F_L\Big(h\big((\hat W_l, \hat b_l)_{l=2}^L, A\big), (\hat W_l, \hat b_l)_{l=2}^L\Big) = A \tag{4}$$
for every $\big((\hat W_2, \hat b_2), \ldots, (\hat W_L, \hat b_L), A\big) \in \Omega_2^* \times \ldots \times \Omega_L^* \times \mathbb{R}^{N \times m}$.
Next, we construct a continuous path starting from $\theta$ on which the loss is constant and at whose end point all parameters from layer 2 till layer $L$ are equal to the corresponding parameters of $\theta'$. Indeed, by applying Proposition 3.5 to the pairs of full rank matrices $(W_l, W_l')$ for every $l \in [2, L]$, we obtain continuous curves $W_2(\lambda), \ldots, W_L(\lambda)$ so that $W_l(0) = W_l$, $W_l(1) = W_l'$ and $W_l(\lambda)$ has full rank for every $\lambda \in [0, 1]$. For every $l \in [2, L]$, let $c_l : [0, 1] \to \Omega_l^*$ be the curve of layer $l$ defined as
$$c_l(\lambda) = \big(W_l(\lambda),\, (1 - \lambda) b_l + \lambda b_l'\big).$$
We consider the curve $c : [0, 1] \to \Omega$ given by
$$c(\lambda) = \Big(h\big((c_l(\lambda))_{l=2}^L,\, F_L\big),\, c_2(\lambda), \ldots, c_L(\lambda)\Big).$$
Then one can easily check that $c(0) = \theta$ and $c$ is continuous as all the functions $h, c_2, \ldots, c_L$ are continuous. Moreover, we have $\big(c_2(\lambda), \ldots, c_L(\lambda)\big) \in \Omega_2^* \times \ldots \times \Omega_L^*$ and thus it follows from (4) that $F_L(c(\lambda)) = F_L$ for every $\lambda \in [0, 1]$, which leaves the loss invariant on $c$.
Since the curve $c$ above starts at $\theta$ and has constant loss, we can reset $\theta$ to the end point of this curve by setting $\theta = c(1)$, while keeping $\theta'$ from (3), which together give us
$$\theta = \Big(h\big((W_l', b_l')_{l=2}^L,\, F_L\big),\, (W_2', b_2'), \ldots, (W_L', b_L')\Big), \qquad \theta' = \Big(h\big((W_l', b_l')_{l=2}^L,\, F_L'\big),\, (W_2', b_2'), \ldots, (W_L', b_L')\Big).$$
Now we note that the parameters of $\theta$ and $\theta'$ coincide at all layers except the first one. We will construct two continuous paths inside $L_\alpha$, say $c_1(\cdot)$ and $c_2(\cdot)$, which start from $\theta$ and $\theta'$ respectively, and show that they meet at a common point in $L_\alpha$. Let $\hat Y \in \mathbb{R}^{N \times m}$ be any matrix so that
$$\varphi(\hat Y) \le \min\big(\Phi(\theta), \Phi(\theta')\big). \tag{5}$$
Consider the curve $c_1 : [0, 1] \to \Omega$ defined as
$$c_1(\lambda) = \Big(h\big((W_l', b_l')_{l=2}^L,\, (1 - \lambda) F_L + \lambda \hat Y\big),\, (W_l', b_l')_{l=2}^L\Big).$$
Note that $c_1$ is continuous as $h$ is continuous, and it holds that
$$c_1(0) = \theta, \qquad c_1(1) = \Big(h\big((W_l', b_l')_{l=2}^L,\, \hat Y\big),\, (W_l', b_l')_{l=2}^L\Big).$$
It follows from the definition of $\Phi$, $c_1(\lambda)$ and (4) that
$$\Phi(c_1(\lambda)) = \varphi\big(F_L(c_1(\lambda))\big) = \varphi\big((1 - \lambda) F_L + \lambda \hat Y\big),$$
and thus by convexity of $\varphi$,
$$\Phi(c_1(\lambda)) \le (1 - \lambda)\, \varphi(F_L) + \lambda\, \varphi(\hat Y) \le (1 - \lambda)\, \Phi(\theta) + \lambda\, \Phi(\theta) = \Phi(\theta),$$
which implies that $c_1[0, 1]$ is entirely contained in $L_\alpha$. Similarly, we can also construct a curve $c_2(\cdot)$ inside $L_\alpha$ which starts at $\theta'$ and satisfies
$$c_2(0) = \theta', \qquad c_2(1) = \Big(h\big((W_l', b_l')_{l=2}^L,\, \hat Y\big),\, (W_l', b_l')_{l=2}^L\Big).$$
So far, the curves $c_1$ and $c_2$ start at $\theta$ and $\theta'$ respectively and meet at the same point $c_1(1) = c_2(1)$.
Overall, we have shown that starting from any two points in $L_\alpha$ we can find two continuous curves so that the loss is non-increasing on each curve, and these curves meet at a common point in $L_\alpha$, and so $L_\alpha$ has to be connected. Moreover, the point where they meet satisfies $\Phi(c_1(1)) = \varphi(\hat Y)$. From (5), $\varphi(\hat Y)$ can be chosen arbitrarily small, and thus $\Phi$ can attain any value arbitrarily close to $p^*$.
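To make the construction of $c_1$ concrete, the following continuation of the sketch given after the proof of Lemma 3.3 (it reuses `solve_first_layer`, `leaky_relu`, `X`, `W1`, `b1`, `W2`, `b2`, `N`, `m` and `rng` from that snippet) traces $c_1(\lambda)$ for a one-hidden-layer network and the squared loss, and checks numerically that the loss is non-increasing along the curve. The choice of $\hat Y$ as the training labels is a hypothetical convenience with $\varphi(\hat Y) = 0$.

```python
# Continuation of the sketch after Lemma 3.3: trace c1(lambda) from the proof of
# Theorem 3.2 (interpolate the network *output* toward Y_hat and pull the first
# layer back through h) and verify the loss is non-increasing along the curve.
Y = rng.normal(size=(N, m))                       # training targets
Y_hat = Y                                         # any Y_hat satisfying (5) works
F_L = leaky_relu(X @ W1 + b1) @ W2 + b2           # current network output
losses = []
for lam in np.linspace(0.0, 1.0, 21):
    A_lam = (1 - lam) * F_L + lam * Y_hat
    W1_lam, b1_lam = solve_first_layer(X, A_lam, W2, b2)
    out = leaky_relu(X @ W1_lam + b1_lam) @ W2 + b2
    losses.append(np.mean((out - Y) ** 2))
assert all(l2 <= l1 + 1e-9 for l1, l2 in zip(losses, losses[1:]))
print(losses[0], losses[-1])                      # ends at (numerically) zero loss
```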
2. Let $C$ be a non-empty connected component of some level set, i.e. $C \subseteq \Phi^{-1}(\alpha)$ for some $\alpha \in \mathbb{R}$. Let $\theta = (W_l, b_l)_{l=1}^L \in C$. Similar as above, we first use Lemma 3.4 to find a continuous path from $\theta$ to some other point where $W_2$ attains full rank, with the loss invariant on the path. From that point, we apply Lemma 3.3 with $k = 2$ to obtain another continuous path (with constant loss) which leads us to $\theta' := \big(h\big((W_2, b_2), F_2(\theta)\big), (W_2, b_2), \ldots, (W_L, b_L)\big)$, where $h : \Omega_2^* \times \mathbb{R}^{N \times n_2} \to \Omega_1$ is a continuous map satisfying
$$F_2\Big(h\big((\hat W_2, \hat b_2), A\big), (\hat W_l, \hat b_l)_{l=2}^L\Big) = A$$
for every point $(\hat W_l, \hat b_l)_{l=1}^L$ such that $\hat W_2$ has full rank, and every $A \in \mathbb{R}^{N \times n_2}$. Note that $\theta' \in C$ as the loss is constant on the above paths. Consider the following continuous curve
$$c(\lambda) = \Big(h\big((\lambda W_2, b_2),\, F_2(\theta)\big),\, (\lambda W_2, b_2), (W_3, b_3), \ldots, (W_L, b_L)\Big)$$
for every $\lambda \ge 1$. This curve starts at $\theta'$ since $c(1) = \theta'$. We have $F_2(c(\lambda)) = F_2(\theta)$ for every $\lambda \ge 1$ and thus the loss is constant on this curve, meaning that the entire curve belongs to $C$. Lastly, the curve $c[1, \infty)$ is unbounded as $\lambda$ goes to infinity, and thus $C$ has to be unbounded.
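As a numerical companion (again reusing the names from the sketch after Lemma 3.3, with $k = L = 2$ so that $F_2$ is simply the network output), the following continuation walks along this unbounded constant-loss curve: the second-layer weights are scaled by $\lambda$ and the first layer is recomputed through $h$, leaving the output, and hence the loss, untouched while the parameter norm blows up.

```python
# Continuation: the unbounded constant-loss curve from the proof of Theorem 3.2(2),
# here with k = L = 2 so that F_2 is just the network output.
F_out = leaky_relu(X @ W1 + b1) @ W2 + b2
for lam in [1.0, 10.0, 1e4]:
    W1_lam, b1_lam = solve_first_layer(X, F_out, lam * W2, b2)
    out = leaky_relu(X @ W1_lam + b1_lam) @ (lam * W2) + b2
    print(lam, np.linalg.norm(lam * W2), np.max(np.abs(out - F_out)))  # norm grows, output fixed
```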
Large Width of One of Hidden Layers Leads to No Bad Local Valleys
In the previous section, we show that linearly independent training data essentially leads to connected sublevel sets. In this section, we show the first application of this result in proving absence of bad local valleys on the loss landscape of deep and wide neural nets with arbitrary training data.
Definition 4.1 A local valley is a nonempty connected component of some strict sublevel set $L_\alpha^s := \{\theta \mid \Phi(\theta) < \alpha\}$. A bad local valley is a local valley on which the training loss $\Phi$ cannot be made arbitrarily close to $p^*$.
The main result of this section is stated as follows.
Theorem 4.2 Let Assumption 2.1 and Assumption 2.2 hold. Suppose that there exists a layer $k \in [1, L-1]$ such that $n_k \ge N$ and $n_{k+1} > \ldots > n_L$. Then the following hold:
1. The loss $\Phi$ has no bad local valleys.
2. If $k \le L - 2$ then every local valley of $\Phi$ is unbounded.
(Figure 4 caption fragment, right panel: a different function which satisfies that every local minimum is a global minimum, but bad local valleys still exist at both infinities (exponential tails), where local search algorithms easily get stuck.)
The conditions of Theorem 4.2 are satisfied for any strictly monotonic and piecewise linear activation function such as Leaky-ReLU (see Lemma 2.3). We note that for Leaky-ReLU and other similar homogeneous activation functions, the second statement of Theorem 4.2 is quite straightforward. Indeed, if one scales all parameters of one hidden layer by some arbitrarily large factor k > 0 and the weight matrix of the following layer by 1/k then the network output will be unchanged, and so every connected component of every level set (also sublevel set) must extend to infinity and thus be unbounded. However, for general non-homogeneous activation functions, the second statement is non-trivial.
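The homogeneity argument in the previous paragraph is easy to check numerically. The sketch below, a standalone NumPy toy with arbitrary sizes rather than code from the paper, scales one hidden layer of a Leaky-ReLU network by $k$ and the following weight matrix by $1/k$ and confirms that the output, and hence the loss, does not change while the parameters grow without bound.

```python
import numpy as np

# Positive homogeneity of Leaky-ReLU: scaling one hidden layer by k and the
# next weight matrix by 1/k leaves the network output unchanged, so level-set
# components extend to infinity.
leaky = lambda x, a=0.1: np.where(x >= 0, x, a * x)
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 4))
W1, b1 = rng.normal(size=(4, 7)), rng.normal(size=7)
W2, b2 = rng.normal(size=(7, 3)), rng.normal(size=3)

out = lambda W1_, b1_, W2_: leaky(X @ W1_ + b1_) @ W2_ + b2
for k in [1.0, 10.0, 1e6]:
    print(k, np.max(np.abs(out(k * W1, k * b1, W2 / k) - out(W1, b1, W2))))  # ~0
```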
The first statement of Theorem 4.2 implies that there is a continuous path from any point in parameter space on which the loss is non-increasing and gets arbitrarily close to $p^*$. At this point, one might wonder whether a function satisfying "every local minimum is a global minimum" would automatically contain no bad local valleys. Unfortunately, this is not true in general. Indeed, Figure 4 shows two counter-examples where a function does not have any bad local minima, but bad local valleys still exist. The reason for this lies in the fact that bad local valleys in general need not contain any critical point, even though in theory they can have very large volume or even be unbounded. Thus any pure result on global optimality of local minima, with no further information on the loss, would not be sufficient to guarantee convergence of local search algorithms to a global minimum, especially if they are initialized in such regions. Similar to the second statement of Theorem 4.2, the first statement on the one hand can guarantee absence of strict local minima, but on the other hand cannot rule out the possibility of non-strict bad local minima. This suggests that it might be desirable to have in practice both properties for the loss surface of neural nets, that is, there are no bad local valleys and every local minimum is a global minimum. Overall, the statements of Theorem 4.2 altogether imply that every local valley must be an "unbounded" global valley in which the loss can attain any value arbitrarily close to $p^*$.
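A one-dimensional toy in the spirit of the right panel of Figure 4 makes this concrete. The function below is a hypothetical example chosen for illustration, not taken from the paper: it has a single local minimum at 0, which is global with value 0, yet its tails decrease monotonically toward 1, so the strict sublevel set {f < 1.05} has two unbounded components on which the infimum 1 is never attained; gradient descent initialized in a tail drifts outward and never approaches the global minimum.

```python
import numpy as np

# Toy 1-D loss: every local minimum is global (the only one is x = 0), yet the
# tails decrease toward 1 > p* = 0, forming unbounded bad local valleys.
f = lambda x: (x**2 - 1.0) * np.exp(-x**2) + 1.0
grad = lambda x: 2.0 * x * (2.0 - x**2) * np.exp(-x**2)

x = 3.0                      # initialize inside a tail component
for _ in range(10000):
    x -= 0.1 * grad(x)       # plain gradient descent
print(x, f(x))               # x drifts slowly to the right, f(x) stays near 1, never near 0
```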
The high level proof idea for Theorem 4.2 is that inside every local valley one can find a point where the feature representations of all training samples are linearly independent at the wide hidden layer, and thus an application of Theorem 3.2 to the subnetwork from this wide layer till the output layer would yield the result. Below we list several lemmas which are helpful for the proof of Theorem 4.2.
Lemma 4.3 Let $(F, W, I)$ be such that $F \in \mathbb{R}^{N \times n}$, $W \in \mathbb{R}^{n \times p}$, $\mathrm{rank}(F) < n$, and let $I \subset \{1, \ldots, n\}$ be a subset of columns of $F$ so that $\mathrm{rank}(F(:, I)) = \mathrm{rank}(F)$, with $\bar I$ the remaining columns. Then there exists a continuous curve $c : [0, 1] \to \mathbb{R}^{n \times p}$ which satisfies the following:
1. $c(0) = W$ and $F c(\lambda) = F W$ for all $\lambda \in [0, 1]$.
2. The product $F c(1)$ is independent of $F(:, \bar I)$.
Lemma 4.4 Given $v \in \mathbb{R}^n$ with $v_i \neq v_j$ for all $i \neq j$, and $\sigma : \mathbb{R} \to \mathbb{R}$ satisfying Assumption 2.2, let $S \subseteq \mathbb{R}^n$ be defined as $S = \{\sigma(v + b 1_n) \mid b \in \mathbb{R}\}$. Then it holds that $\mathrm{Span}(S) = \mathbb{R}^n$. (A numerical check of this lemma is sketched below.)
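A quick numerical sanity check of Lemma 4.4 for Leaky-ReLU, under the hypothetical choice of slope 0.1 and a random vector $v$: sampling the offset $b$ at two points inside every linear piece of $b \mapsto \sigma(v + b 1_n)$ already yields a matrix of rank $n$, in line with $\mathrm{Span}(S) = \mathbb{R}^n$.

```python
import numpy as np

# Numerical check of Lemma 4.4 for Leaky-ReLU: with v having distinct entries,
# finitely many shifted copies sigma(v + b*1) already span R^n.
leaky = lambda x, a=0.1: np.where(x >= 0, x, a * x)
rng = np.random.default_rng(0)
n = 6
v = rng.normal(size=n)                             # distinct entries almost surely
knots = np.sort(-v)                                # breakpoints of b -> sigma(v + b)
edges = np.concatenate([[knots[0] - 1.0], knots, [knots[-1] + 1.0]])
# two offsets strictly inside every linear piece (including the two outer ones)
B = np.concatenate([np.linspace(lo, hi, 4)[1:3] for lo, hi in zip(edges[:-1], edges[1:])])
S = np.stack([leaky(v + b) for b in B], axis=1)
print(np.linalg.matrix_rank(S), n)                 # both should be n
```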
We recall the following standard result from topology (e.g., see Apostol (1974), Theorem 4.23, p. 82). Step 1: Finding a point inside C where F k has full rank. Let θ ∈ C be such that the pre-activation outputs at the first hidden layer are distinct for all training samples. Note that such θ always exist since Assumption 2.4 implies that the set of W 1 where this does not hold has Lebesgue measure zero, whereas C has positive measure. This combined with Assumption 2.1 implies that the (post-activation) outputs at the first hidden layer are distinct for all training samples. Now one can view these outputs at the first layer as inputs to the next layer and argue similarly. By repeating this argument and using the fact that C has positive measure, we conclude that there exists θ ∈ C such that the outputs at layer k − 1 are distinct for all training samples, i.e. (F k−1 ) i: = (F k−1 ) j: for every i = j. Let V be the pre-activation output (without bias term) at layer k, in particular
V = F k−1 W k = [v 1 , . . . , v n k ] ∈ R N ×n k .
Since F k−1 has distinct rows, one can easily perturb W k so that every column of V has distinct entries. Note here that the set of W k where this does not hold has measure zero whereas C has positive measure. Equivalently, C must contain a point where every v j has distinct entries. To simplify notation, let a = b k ∈ R n k , then by definition, 1 N a 1 ), . . . , σ(v n k + 1 N a n k )]. (6) Suppose that F k has low rank, otherwise we are done. Let r = rank(F k ) < N ≤ n k and I ⊂ {1, . . . , n k } , |I| = r be the subset of columns of F k so that rank(F k (:, I)) = rank(F k ), andĪ the remaining columns. By applying Lemma 4.3 to (F k , W k+1 , I), we can follow a continuous path with invariant loss (i.e. entirely contained inside C) to arrive at some point where F k W k+1 is independent of F k (: .Ī). It remains to show how to change F k (:,Ī) by modifying certain parameters so that F k has full rank. Let p = |Ī| = n k − r andĪ = {j 1 , . . . , j p } . From (6) we have
F k = [σ(v 1 +F k (:,Ī) = [σ(v j1 + 1 N a j1 ), . . . , σ(v jp + 1 N a jp )].
Let col(·) denotes the column space of a matrix. Then dim(col(F k (:, I))) = r < N. Since v j1 has distinct entries, Lemma 4.4 implies that there must exist a j1 ∈ R so that σ(v j1 + 1 N a j1 ) / ∈ col(F k (:, I)), because otherwise Span {σ(v j1 + 1 N a j1 ) | a j1 ∈ R} ∈ col(F k (:, I)) whose dimension is strictly smaller than N and thus contradicts Lemma 4.4. So we pick one such value for a j1 and follow a direct line segment between its current value and the new value. Note that the loss is invariant on this segment since any changes on a j1 only affects F k (:,Ī) which however has no influence on the loss by above construction. Moreover, it holds at the new value of a j1 that rank(F k ) increases by 1. Since n k ≥ N by our assumption, it follows that p ≥ N − r and thus one can choose a j2 , . . . , a j N −r in a similar way and finally obtain rank(F k ) = N.
Step 2: Applying Theorem 3.2 to the subnetwork above k. Suppose that we have found from the previous step a point θ = ((W * l , b * l ) L l=1 ) ∈ C so that F k has full rank. Let the function g : Ω k+1 × . . . × Ω L → R be defined as
g (W l , b l ) L l=k+1 = Φ (W * l , b * l ) k l=1 , (W l , b l ) L l=k+1(7)
We recall that C is a connected component of L s α . It holds g (W * l , b * l ) L l=k+1 = Φ(θ) ≤ α. Now one can view g as the new loss for the subnetwork from layer k till layer L and F k can be seen as the new training data. Since rank(F k ) = N and n k+1 > . . . > n L , Theorem 3.2 implies that g has connected sublevel sets and g can attain any value arbitrarily close to p * . Let ∈ (p * , α) and (W l , b l ) L l=k+1 be any point such that g (W l , b l ) L l=k+1 ≤ . Since both (W * l , b * l ) L l=k+1 and (W l , b l ) L l=k+1 belongs to the αsublevel set of g, which is a connected set, there must exist a continuous path from (W * l , b * l ) L l=k+1 to (W l , b l ) L l=k+1 on which the value of g is not larger than α. This combined with (7) implies that there is also a continuous
path from θ = (W * l , b * l ) k l=1 , (W * l , b * l ) L l=k+1 to θ := (W * l , b * l ) k l=1 , (W l , b l ) L l=k+1
on which the loss Φ is not larger than α. Since C is connected, it must hold θ ∈ C. Moreover, we have Φ(θ ) = g (W l , b l ) L l=k+1 ≤ . Since can be chosen arbitrarily small and close to p * , we conclude that the loss Φ can be made arbitrarily small inside C, and thus Φ has no bad local valleys.
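The essence of Step 2 is that once the features at the wide layer have rank $N$, the subnetwork above can realize any target; with a single linear output layer this is just a pseudo-inverse solve. The following is a minimal self-contained sketch under the assumption that generic random weights make the hidden features full rank, which holds almost surely for the hypothetical sizes used here.

```python
import numpy as np

# Once the wide layer's features F_k have rank N, the layers above can fit any
# target; with one linear layer on top this is a closed-form pseudo-inverse solve.
leaky = lambda x, a=0.1: np.where(x >= 0, x, a * x)
rng = np.random.default_rng(1)
N, d, n_k, m = 5, 4, 8, 3                       # n_k >= N
X, Y = rng.normal(size=(N, d)), rng.normal(size=(N, m))
Fk = leaky(X @ rng.normal(size=(d, n_k)) + rng.normal(size=n_k))
print(np.linalg.matrix_rank(Fk))                # N for generic weights
W_top = np.linalg.pinv(Fk) @ Y                  # exact fit since rank(Fk) = N
print(np.max(np.abs(Fk @ W_top - Y)))           # ~1e-14
```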
2. Let C be a local valley, which by Definition 4.1 is a connected component of some strict sublevel set L s α = Φ −1 ((−∞, α)). According the the proof of the first statement above, one can find a θ = (W * l , b * l ) L l=1 ∈ C so that F k (θ) has full rank. Now one can view F k (θ) as the training data for the subnetwork from layer k till layer L. The new loss is defined for this subnetwork as
g (W l , b l ) L l=k+1 = Φ (W * l , b * l ) k l=1 , (W l , b l ) L l=k+1 .
By our assumptions, σ satisfies Assumption 2.1 and n k+1 > . . . > n L , thus the above subnetwork with the new loss g and training data F k (θ) satisfy all the conditions of Theorem 3.2, and so it follows that g has unbounded level set components. Let β :
= g (W * l , b * l ) L l=k+1 = Φ(θ) < α. Let E be a connected component of the level set g −1 (β) which contains (W * l , b * l ) L l=k+1 . Let D = (W * l , b * l ) k l=1 , (W l , b l ) L l=k+1 (W l , b l ) L l=k+1 ∈ E .
Then D is connected and unbounded since E is connected and unbounded. It holds for every θ ∈ D that Φ(θ ) = β, and thus D ⊆ Φ −1 (β) ⊆ L s α , where the last inclusion follows from β < α. Moreover, we have θ = (W * l , b * l ) k l=1 , (W * l , b * l ) L l=k+1 ∈ D and also θ ∈ C, it follows that D ⊆ C since C is already the maximal connected component of L s α . Since D is unbounded, C must also be unbounded, which finishes the proof.
Large Width of First Hidden Layer Leads to Connected Sublevel Sets
In the previous section (Theorem 4.2), we show that if one of the hidden layers has more than N neurons then the loss function has no bad local valleys. In this section, we treat a special case where the first hidden layer has at least 2N neurons. Under such setting, the next theorem shows in addition that every sublevel set must be also connected. (Draxler et al., 2018;Garipov et al., 2018) have shown that different global minima of several existing CNN architectures can be connected by a continuous path on which the loss has similar values. While our current results are not directly applicable to these models, we consider this as a stepping stone for such an extension in future work. Similar to previous results, the unboundedness of level sets as shown in the second statement of Theorem 5.1 implies that Φ has no bounded local valleys nor strict local extrema. The proof of Theorem 5.1 relies on the following lemmas.
Lemma 5.2 Let $(X, W, b, V) \in \mathbb{R}^{N \times d} \times \mathbb{R}^{d \times n} \times \mathbb{R}^n \times \mathbb{R}^{n \times p}$ and let $\sigma : \mathbb{R} \to \mathbb{R}$ satisfy Assumption 2.2. Suppose that $n \ge N$ and $X$ has distinct rows. Let $Z = \sigma(X W + 1_N b^T)\, V$. Then there is a continuous curve $c : [0, 1] \to \mathbb{R}^{d \times n} \times \mathbb{R}^n \times \mathbb{R}^{n \times p}$ with $c(\lambda) = (W(\lambda), b(\lambda), V(\lambda))$ satisfying:
1. $c(0) = (W, b, V)$.
2. $\sigma\big(X W(\lambda) + 1_N b(\lambda)^T\big)\, V(\lambda) = Z$ for all $\lambda \in [0, 1]$.
3. $\mathrm{rank}\big(\sigma(X W(1) + 1_N b(1)^T)\big) = N$.
Lemma 5.3 Let $(X, W, V, W') \in \mathbb{R}^{N \times d} \times \mathbb{R}^{d \times n} \times \mathbb{R}^{n \times p} \times \mathbb{R}^{d \times n}$ and let $\sigma : \mathbb{R} \to \mathbb{R}$ satisfy Assumption 2.2. Suppose that $n \ge 2N$ and $\mathrm{rank}(\sigma(X W)) = N$, $\mathrm{rank}(\sigma(X W')) = N$. Then there is a continuous curve $c : [0, 1] \to \mathbb{R}^{d \times n} \times \mathbb{R}^{n \times p}$ with $c(\lambda) = (W(\lambda), V(\lambda))$ which satisfies the following:
1. $c(0) = (W, V)$.
2. $\sigma(X W(\lambda))\, V(\lambda) = \sigma(X W)\, V$ for all $\lambda \in [0, 1]$.
3. $W(1) = W'$.
Proof of Theorem 5.1
Let $\theta = (W_l, b_l)_{l=1}^L$ and $\theta' = (W_l', b_l')_{l=1}^L$ be arbitrary points in some sublevel set $L_\alpha$. It is sufficient to show that there is a connected path between $\theta$ and $\theta'$ on which the loss is not larger than $\alpha$. The output at the first layer is given by
$$F_1(\theta) = \sigma\big([X, 1_N]\, [W_1^T, b_1]^T\big), \qquad F_1(\theta') = \sigma\big([X, 1_N]\, [(W_1')^T, b_1']^T\big).$$
First, by applying Lemma 5.2 to (X, W 1 , b 1 , W 2 ), we can assume that F 1 (θ) has full rank, because otherwise there is a continuous path starting from θ to some other point where the rank condition is fulfilled and the loss is invariant on the path, and so we can reset θ to this new point. Similarly, we can assume that F 1 (θ ) has full rank.
Next, by applying Lemma 5.3 to the tuple [X, 1 N ], [W T 1 , b 1 ] T , W 2 , [W T 1 , b 1 ] T , and using the similar argument as above, we can drive θ to some other point where the parameters of the first hidden layer agree with the corresponding values of θ . So we can assume w.l.o.g. that (W 1 , b 1 ) = (W 1 , b 1 ). Note that at this step we did not modify θ but θ and thus F 1 (θ ) still has full rank.
Once the first hidden layer of θ and θ coincide, one can view the output of this layer, say F 1 := F 1 (θ) = F 1 (θ ) with rank(F 1 ) = N , as the new training data for the subnetwork from layer 1 till layer L (given that (W 1 , b 1 ) is fixed). This subnetwork and the new data F 1 satisfy all the conditions of Theorem 3.2, and so it follows that the loss Φ restricted to this subnetwork has connected sublevel sets, which implies that there is a connected path between (W l , b l ) L l=2 and (W l , b l ) L l=2 on which the loss is not larger than α. This indicates that there is also a connected path between θ and θ in L α and so L α must be connected.
To show that every level set component of Φ is unbounded, let θ ∈ Ω be an arbitrary point. Denote F 1 = F 1 (θ) and let I ⊂ {1, . . . , N } be such that rank(F 1 (:, I)) = rank(F 1 ). Since rank(F 1 ) ≤ min(N, n 1 ) < n 1 , we can apply Lemma 4.3 to the tuple (F 1 , W 2 , I) to find a continuous path W 2 (λ) which drives θ to some other point where the output at 2nd layer F 1 W 2 is independent of F 1 (:,Ī). Note that the network output at 2nd layer is invariant on this path and hence the entire path belongs to the same level set component with θ. From that point, one can easily scale (W 1 (:,Ī), b 1 (Ī)) to arbitrarily large values without affecting the output. Since this path has constant loss and is unbounded, it follows that every level set component of Φ is unbounded.
Extensions to ReLU Activation Function
We discuss a possible approach to extend our previous results to the ReLU activation by removing Assumption 2.1.
Theorem 6.1 All of the following hold under Assumption 2.2:
1. If $\min\{n_1, \ldots, n_{L-1}\} \ge N$ then the loss function $\Phi$ has no bad local valleys.
2. If $\min\{n_1, \ldots, n_{L-1}\} \ge 2N$ then every sublevel set of $\Phi$ is connected.
It is clear that the conditions of Theorem 6.1 are far from practical settings, and theoretically they are also significantly stronger than those of Theorem 5.1, as they require all the hidden layers to be sufficiently over-parameterized. Nevertheless, we note that similar conditions have also been used by recent theoretical work (Allen-Zhu et al., 2018b; Du et al., 2018) in proving convergence guarantees of gradient descent methods. Theoretically, we find these results interesting as together they seem to suggest that Leaky-ReLU might lead to a much "easier" loss surface than ReLU.
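The constructive intuition behind the first statement, worked out in Appendix I, can be illustrated numerically: when every hidden width is at least $N$, generic weights already give full-row-rank features at each hidden layer, and the output layer can then fit any target in closed form. The sketch below is a hypothetical NumPy toy with arbitrary sizes, not the paper's construction; with ReLU an unlucky draw can zero out an entire row, so an occasional re-seed may be needed.

```python
import numpy as np

# Width >= N at every hidden layer (Theorem 6.1, ReLU): generic weights give
# full-rank features layer by layer, and the output layer then fits any target.
relu = lambda x: np.maximum(x, 0.0)
rng = np.random.default_rng(2)
N, d, m, widths = 6, 5, 2, [16, 12, 8]         # all hidden widths >= N
X, Y = rng.normal(size=(N, d)), rng.normal(size=(N, m))

F = X
for w in widths:                               # random hidden layers
    F = relu(F @ rng.normal(size=(F.shape[1], w)) + rng.normal(size=w))
    print("rank:", np.linalg.matrix_rank(F))   # expected N at every layer (generic weights)
W_out = np.linalg.pinv(F) @ Y                  # closed-form zero-loss output layer
print(np.max(np.abs(F @ W_out - Y)))           # ~0 whenever rank(F) = N
```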
Related Work
Many interesting theoretical results have been developed on the loss surface of neural networks (Livni et al., 2014;Choromanska et al., 2015;Haeffele & Vidal, 2017;Safran & Shamir, 2016;Hardt & Ma, 2017;Xie et al., 2017;Yun et al., 2017;Lu & Kawaguchi, 2017;Pennington & Bahri, 2017;Zhou & Liang, 2018;Liang et al., 2018b;a;Zhang et al., 2018;Nouiehed & Razaviyayn, 2018;Laurent & v. Brecht, 2018). There is also a whole line of researches studying convergence of learning algorithms in training neural networks (Andoni et al., 2014;Sedghi & Anandkumar, 2015;Janzamin et al., 2016;Gautier et al., 2016;Brutzkus & Globerson, 2017;Soltanolkotabi, 2017;Soudry & Hoffer, 2017;Tian, 2018;Wang et al., 2018;Ji & Telgarsky, 2019;Arora et al., 2019;Allen-Zhu et al., 2018a;Bartlett et al., 2018;Chizat & Bach, 2018) and others studying generalization properties, which is however beyond the scope of this paper.
The closest existing result is the work by (Venturi et al., 2018) who study the relationship between the intrinsic dimension of neural networks and the presence/absence of spurious valleys. They show that if the number of hidden neurons is greater than the intrinsic dimension of the network, defined as the dimension of some function space, then the loss has no spurious valley, and furthermore, if the number of hidden neurons is greater than two times the intrinsic dimension then every sublevel set is connected. The results apply to one hidden layer networks with population risk and square loss. As admitted by the authors in the paper, an extension of such result, in particular the notion of intrinsic dimension, to multiple layer networks would require the number of neurons to grow exponentially with depth.
More closely related in terms of the setting are the works by (Nguyen & Hein, 2017; 2018), who analyze the optimization landscape of standard deep and wide (convolutional) neural networks for multiclass problems. They both assume that the network has a wide hidden layer $k$ with $n_k \ge N$. This condition has been recently relaxed to $n_1 + \ldots + n_{L-1} \ge N$ by using flexible skip-connections (Nguyen et al., 2019). All of these results so far require real analytic activation functions, and thus are not applicable to the class of piecewise linear activations analyzed in this paper. Moreover, while the previous works focus on global optimality of critical points, this paper characterizes sublevel sets of the loss function, which gives us further insight and intuition about the underlying geometric structure of the optimization landscape.
Conclusion. We show that every sublevel set of the loss function in training a certain class of deep overparameterized neural nets is connected and unbounded.
Tian, Y. An analytical formula of population gradient for two-layered relu network and its applications in convergence and critical point analysis. In ICML, 2018.
Venturi, L., Bandeira, A. S., and Bruna, J. Spurious valleys in two-layer neural network optimization landscapes. arXiv:1802.06384v2, 2018.
Wang, G., Giannakis, G. B., and Chen, J. Learning relu networks on linearly separable data: Algorithm, optimality, and generalization. arXiv:1808.04685, 2018.
A. Proof of Lemma 2.3
A function $\sigma : \mathbb{R} \to \mathbb{R}$ is continuous piecewise linear with at least two pieces if it can be represented as
$$\sigma(x) = a_i x + b_i, \quad \forall\, x \in (x_{i-1}, x_i),\ \forall\, i \in [1, n+1],$$
for some $n \ge 1$, $x_0 = -\infty < x_1 < \ldots < x_n < x_{n+1} = \infty$ and coefficients $(a_i, b_i)_{i=1}^{n+1}$. We can assume that all the linear pieces agree at their intersections and that there are no consecutive pieces with the same slope: $a_i \neq a_{i+1}$ for every $i \in [1, n]$. Suppose by contradiction that $\sigma$ does not satisfy Assumption 2.2; then there are non-zero coefficients $(\lambda_i, y_i)_{i=1}^m$ with $y_i \neq y_j$ ($i \neq j$) such that $\sigma(x) = \sum_{i=1}^m \lambda_i \sigma(x - y_i)$ for every $x \in \mathbb{R}$. We assume w.l.o.g. that $y_1 < \ldots < y_m$.
Case 1: y 1 > 0. For every x ∈ (−∞, x 1 ) we have σ(x) = a 1 x+b 1 = m i=1 λ i (a 1 (x−y i )+b 1 ) and thus by comparing the coefficients on both sides we obtain
m i=1 λ i a 1 = a 1 . Moreover, for every x ∈ x 1 , min(x 1 + y 1 , x 2 ) it holds σ(x) = a 2 x + b 2 = m i=1 λ i (a 1 (x − y i ) + b 1 ) and so m i=1 λ i a 1 = a 2 .
Thus a 1 = a 2 , which is a contradiction. Case 2: y 1 < 0. By definition, for x ∈ (−∞,
x 1 + y 1 ) we have σ(x) = a 1 x + b 1 = m i=1 λ i (a 1 (x − y i ) + b 1 )
and thus by comparing the coefficients on both sides we obtain
m i=1 λ i a 1 = a 1 .(8)
For x ∈ x 1 + y 1 , min(x 1 + y 2 , x 1 , x 2 + y 1 ) it holds
σ(x) = a 1 x + b 1 = λ 1 (a 2 (x − y 1 ) + b 2 ) + m i=2 λ i (a 1 (x − y i ) + b 1 )
and thus by comparing the coefficients we have
λ 1 a 2 + m i=2 λ i a 1 = a 1 .
This combined with (8) leads to λ 1 a 1 = λ 1 a 2 , and thus a 1 = a 2 (since λ 1 = 0) which is a contradiction.
One can prove similarly for the ELU activation (Clevert et al., 2016),
$$\sigma(x) = \begin{cases} x & x \ge 0,\\ \alpha(e^x - 1) & x < 0, \end{cases} \qquad \text{where } \alpha > 0.$$
Suppose by contradiction that there exist non-zero coefficients $(\lambda_i, y_i)_{i=1}^m$ with $y_i \neq y_j$ ($i \neq j$) such that $\sigma(x) = \sum_{i=1}^m \lambda_i \sigma(x - y_i)$, and assume w.l.o.g. that $y_1 < \ldots < y_m$. If $y_m > 0$ then for every $x \in (\max(0, y_{m-1}), y_m)$ it holds that
$$\sigma(x) = x = \lambda_m \alpha\big(e^{x - y_m} - 1\big) + \sum_{i=1}^{m-1} \lambda_i (x - y_i) \ \Rightarrow\ e^x = \frac{x\, e^{y_m} - \sum_{i=1}^{m-1} \lambda_i (x - y_i)\, e^{y_m}}{\lambda_m \alpha} + e^{y_m},$$
which is a contradiction since $e^x$ cannot be identical to any affine function on any open interval. Thus it must hold that $y_m < 0$. But then for every $x \in (y_m, 0)$ we have
$$\sigma(x) = \alpha(e^x - 1) = \sum_{i=1}^m \lambda_i (x - y_i) \ \Rightarrow\ e^x = \frac{1}{\alpha} \sum_{i=1}^m \lambda_i (x - y_i) + 1,$$
which is a contradiction for the same reason above.
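As an empirical companion to this proof, one can check numerically that no small set of shifted Leaky-ReLUs reproduces Leaky-ReLU exactly: a least-squares fit over a grid with a few hypothetical non-zero shifts leaves a residual clearly bounded away from zero, since the kink at 0 cannot be matched by kinks at the shifted locations. The shifts, slope and grid below are arbitrary choices for illustration.

```python
import numpy as np

# Empirical illustration of Assumption 2.2 for Leaky-ReLU: no finite combination
# of shifted copies with non-zero shifts reproduces sigma exactly.
leaky = lambda x, a=0.1: np.where(x >= 0, x, a * x)
x = np.linspace(-3, 3, 601)
shifts = np.array([-1.0, -0.5, 0.5, 1.0])              # all non-zero
Phi = np.stack([leaky(x - y) for y in shifts], axis=1)
lam, *_ = np.linalg.lstsq(Phi, leaky(x), rcond=None)
print(np.max(np.abs(Phi @ lam - leaky(x))))            # noticeably > 0
```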
B. Proof of Proposition 2.7
Pick some $a, b \in f(A)$ and let $x, y \in A$ be such that $f(x) = a$ and $f(y) = b$. Since $A$ is connected, there is a continuous curve $r : [0, 1] \to A$ so that $r(0) = x$, $r(1) = y$. Consider the curve $f \circ r : [0, 1] \to f(A)$; then it holds that $f(r(0)) = a$, $f(r(1)) = b$. Moreover, $f \circ r$ is continuous as both $f$ and $r$ are continuous. Thus it follows from Definition 2.6 that $f(A)$ is connected.
C. Proof of Proposition 2.8
Let x, y ∈ U + V then there exist a, b ∈ U and c, d ∈ V such that x = a+c, y = b+d. Since U and V are connected sets, there exist two continuous curves p : [0, 1] → U and q : [0, 1] → V such that p(0) = a, p(1) = b and q(0) = c, q(1) = d. Consider the continuous curve r(t) := p(t) + q(t) then we have r(0) = a + c = x, r(1) = b + d = y and r(t) ∈ U + V for every t ∈ [0, 1]. This implies that every two elements in U + V can be connected by a continuous curve and thus U + V must be a connected set.
D. Proof of Lemma 3.4
The idea is to make one weight matrix full rank at a time while keeping the others fixed (except the first layer). Each step is done by following a continuous path which leads to a new point where the rank condition is fulfilled while keeping the loss constant along the path. Each time when we follow a continuous path, we reset our starting point to the end point of the path and proceed. This is repeated until all the matrices (W l ) L l=2 have full rank.
Step 1: Make W 2 full rank. If W 2 has full rank then we proceed to W 3 . Otherwise, let rank(W 2 ) = r < n 2 < n 1 . Let I ⊂ {1, . . . , n 1 } , |I| = r denote the set of indices of linearly independent rows of W 2 so that rank(W 2 (I, :)) = r. LetĪ denote the remaining rows of W 2 . Let E ∈ R (n1−r)×r be a matrix such that W 2 (Ī, :) = EW 2 (I, :). Let P ∈ R n1×n1 be a permutation matrix which permutes the rows of W 2 according to I so that we can write P W 2 = W 2 (I, :) W 2 (Ī, :) .
We recall that F 1 (θ) is the output of the network at the first layer, evaluated at θ. Below we drop θ and just write F 1 as it is clear from the context. By construction of P , we have
F 1 P T = [F 1 (:, I), F 1 (:,Ī)].
The first step is to turn W 1 into a canonical form. In particular, the set of all possible solutions of W 1 which realizes the same the output F 1 at the first hidden layer is characterized by X † σ −1 (F 1 ) − 1 N b T 1 + ker(X) where we denote, by abuse of notation, ker(X) = A ∈ R d×n1 XA = 0 . This solution set is connected because ker(X) is a connected set and the Minkowski-sum of two connected sets is known to be connected, and so there exists a continuous path between every two solutions in this set on which the output F 1 is invariant. Obviously the current W 1 and X † (σ −1 (F 1 ) − 1 N b T 1 ) are elements of this set, thus they must be connected by a continuous path on which the loss is invariant. So we can assume now that
W 1 = X † (σ −1 (F 1 ) − 1 N b T 1 ).
Next, consider the curve:
W 1 (λ) = X † σ −1 (A(λ)) − 1 N b T 1 , where A(λ) = [F 1 (:, I) + λF 1 (:,Ī)E, (1 − λ)F 1 (:,Ī)] P.
This curves starts at θ since W 1 (0) = W 1 , and it is continuous as σ has a continuous inverse by Assumption 2.1. Using XX † = I, one can compute the pre-activation output (without bias term) at the second layer as
σ XW 1 (λ) + 1 N b T 1 W 2 = A(λ) W 2 = F 1 W 2 ,
which implies that the loss is invariant on this curve, and so we can take its end point W 1 (1) as a new starting point:
W 1 = X † σ −1 (A) − 1 N b T 1 , where A = [F 1 (:, I) + F 1 (:,Ī)E, 0] P.
Now, the output at second layer above, given by AW 2 , is independent of W 2 (Ī, :) because it is canceled by the zero component in A. Thus one can easily change W 2 (Ī, :) so that W 2 has full rank while still keeping the loss invariant.
Step 2: Using induction to make W 3 , . . . , W L full rank. Let θ = (W l , b l ) L l=2 be our current point. Suppose that all the matrices (W l ) k l=2 already have full rank for some k ≥ 2 then we show below how to make W k+1 full rank. We write F k to denote F k (θ). By the second statement of Lemma 3.3, we can follow a continuous path (with invariant loss) to drive θ to the following point:
θ := h (W l , b l ) k l=2 , F k , (W l , b l ) L l=2(9)
where h : Ω * 2 × . . . × Ω * k × R N ×n k is the continuous map from Lemma 3.3 which satisfies for every A ∈ R N ×n k ,
F k h (W l , b l ) k l=2 , A , (W l , b l ) k l=2 = A.(10)
Now, if W k+1 already has full rank then we are done, otherwise we follow the similar steps as before. Indeed, let r = rank(W k+1 ) < n k+1 < n k and I ⊂ {1, . . . , n k } , |I| = r the set of indicies of r linearly independent rows of W k+1 . Then there is a permutation matrix P ∈ R n k ×n k and some matrix E ∈ R (n k −r)×r so that P W k+1 = W k+1 (I, :) W k+1 (Ī, :) , W k+1 (Ī, :) = EW k+1 (I, :).
Moreover it holds
F k P T = [F k (:, I), F k (:,Ī)].(12)
Consider the following curve c : [0, 1] → Ω which continuously update (W 1 , b 1 ) while keeping other layers fixed:
c(λ) = h (W l , b l ) k l=2 , A(λ) , (W 2 , b 2 ), . . . , (W L , b L ) , where A(λ) = [F k (:, I) + λF k (:,Ī)E, (1 − λ)F k (:,Ī)] P.
It is clear that c is continuous as h is continuous. One can easily verify that c(0) = θ by using (12) and (9). The preactivation output (without bias term) at layer k + 1 for every point on this curve is given by
F k (c(λ)) W k+1 = A(λ)W k+1 = F k W k+1 , ∀ λ ∈ [0, 1],
where the first equality follows from (10) and the second follows from (11) and (12). As the loss is invariant on this curve, we can take its end point c(1) as a new starting point:
θ := h (W l , b l ) k l=2 , A , (W 2 , b 2 ), . . . , (W L , b L ) , where A = [F k (:, I) + F k (:,Ī)E, 0] P.
At this point, the output at layer k + 1 as mentioned above is given by AW k+1 , which is independent of W k+1 (Ī, :) since it is canceled out by the zero component in A, and thus one can easily change the submatrix W k+1 (Ī, :) so that W k+1 has full rank while leaving the loss invariant.
Overall, by induction we can make all the weight matrices W 2 , . . . , W L full rank by following several continuous paths on which the loss is constant, which finishes the proof.
E. Proof of Lemma 4.3
Let r = rank(F ) < n. Since I contains r linearly independent columns of F , the remaining columns must lie on their span. In other words, there exists E ∈ R r×(n−r) so that F (:,Ī) = F (:, I) E. Let P ∈ R n×n be a permutation matrix which permutes the columns of F according to I so that we can write which is independent of F (:,Ī).
F. Proof of Lemma 4.4
Suppose by contradiction that $\dim(\mathrm{Span}(S)) < n$. Then there exists $\lambda \in \mathbb{R}^n$, $\lambda \neq 0$, such that $\lambda \perp \mathrm{Span}(S)$, and thus it holds that $\sum_{i=1}^n \lambda_i \sigma(v_i + b) = 0$ for every $b \in \mathbb{R}$. We assume w.l.o.g. that $\lambda_1 \neq 0$; then it holds that
$$\sigma(v_1 + b) = -\sum_{i=2}^n \frac{\lambda_i}{\lambda_1}\, \sigma(v_i + b), \quad \forall\, b \in \mathbb{R}.$$
By a change of variable, we have
$$\sigma(c) = -\sum_{i=2}^n \frac{\lambda_i}{\lambda_1}\, \sigma(c + v_i - v_1), \quad \forall\, c \in \mathbb{R},$$
which contradicts Assumption 2.2. Thus $\mathrm{Span}(S) = \mathbb{R}^n$.
G. Proof of Lemma 5.2
Let F = σ(XW + 1 N b T ) ∈ R N ×n . If F already has full rank then we are done. Otherwise let r = rank(F ) < N ≤ n. Let I denote a set of column indices of F so that rank(F (:, I)) = r andĪ the remaining columns. By applying Lemma 4.3 to (F, V, I), we can find a continuous path V (λ) so that we will arrive at some point where F V (λ) is invariant on the path and it holds at the end point of the path that F V is independent of F (:,Ī). This means that we can arbitrarily change the values of W (:,Ī) and b(Ī) without affecting the value of Z, because any changes of these variables are absorbed into F (:,Ī) which anyway has no influence on F V. Thus it is sufficient to show that there exist W (:,Ī) and b(Ī) for which F has full rank. Let p = n − r andĪ = {j 1 , . . . , j p } . Let A = XW then A(:,Ī) := [a j1 , . . . , a jp ] = XW (:,Ī). By assumption X has distinct rows, one can choose W (:,Ī) so that each a j k ∈ R N has distinct entries. Then we have
$F(:, \bar{I}) = [\sigma(a_{j_1} + 1_N b_{j_1}), \ldots, \sigma(a_{j_p} + 1_N b_{j_p})].$
Let col(·) denotes the column space of a matrix. It holds dim(col(F (:, I))) = r < N. Since a j1 has distinct entries, Lemma 4.4 implies that there must exist b j1 ∈ R so that σ(a j1 + 1 N b j1 ) / ∈ col(F (:, I)), because otherwise Span {σ(a j1 + 1 N b j1 ) | b j1 ∈ R} ∈ col(F (:, I)) whose dimension is strictly smaller than N , which contradicts Lemma 4.4. So it means that there is b j1 ∈ R so that rank(F ) increases by 1. By assumption n ≥ N, it follows that p ≥ N − r, and thus we can choose b j2 , . . . , b j N −r similarly to obtain rank(F ) = N.
H. Proof of Lemma 5.3
We need to show that there is a continuous path from (W, V ) to (W , V ) for some V ∈ R n×p , so that the output function, defined by Z := σ(XW )V, is invariant along the path. Let F = σ(XW ) ∈ R N ×n and F = σ(XW ). It holds Z = F V. Let I resp. I denote the maximum subset of linearly independent columns of F resp. F so that rank(F (:, I)) = rank(F (:, I )) = N, andĪ andĪ be their complements. By the rank condition, we have |I| = |I | = N. Since rank(F ) = N < n, we can apply Lemma 4.3 to the tuple (F, V, I) to arrive at some point where the output Z is independent of F (:,Ī). From here, we can update W (:,Ī) arbitrarily so that it does not affect Z because any change to these weights only lead to changes on F (:,Ī) which however has no influence on Z. So by taking a direct line segment from the current value of W (:,Ī) to W (I , :), we achieve W (:,Ī) = W (:, I ). We refer to this step below as a copy step. Note here that since n ≥ 2N by assumption, we must have |Ī| ≥ |I |. Moreover, if |Ī| > |I | then we can simply ignore the redundant space in W (:,Ī). Now we already copy W (:, I ) into W (:,Ī), so it holds that rank(F (:,Ī)) = rank(F (:, I )) = N. Let K = I ∩Ī and J = I ∩ I be disjoint subsets so that I = K ∪ J. Suppose w.l.o.g. that the above copy step has been done in such a way that W (:,Ī ∩ I ) = W (:, K). Now we apply Lemma 4.3 to (F, V,Ī) to arrive at some point where Z is independent of F (:, I), and thus we can easily obtain W (:, I ∩ I ) = W (:, J) by taking a direct line segment between these weights. So far, all the rows of W (:, K ∪ J) have been copied into W (:, I ) at the right positions so we obtain that W (:, I ) = W (:, I ). It follows that rank(F (:, I )) = rank(F (:, I )) = N and thus we can apply Lemma 4.3 to (F, V, I ) to arrive at some other point where Z is independent of F (:,Ī ). From here we can easily obtain W (:,Ī ) = W (:,Ī ) by taking a direct line segment between these variables. Till now we already have W = W . Moreover, all the paths which we have followed leave the output Z invariant.
I. Proof of Theorem 6.1
Case 1: min{n_1, . . . , n_{L−1}} ≥ N. Let θ = (W_l, b_l)_{l=1}^L be an arbitrary point of some strict sublevel set L^s_α, for some α > p^*. We will show that there is a continuous descent path starting from θ on which the loss is non-increasing and gets arbitrarily close to p^*. Indeed, for every ε arbitrarily close to p^* with ε ≤ α, let Ŷ ∈ R^{N×m} be such that ϕ(Ŷ) ≤ ε. Since X has distinct rows, n_1 ≥ N, and the activation σ satisfies Assumption 2.2, an application of Lemma 5.2 to (X, W_1, b_1, W_2) shows that there is a continuous path with constant loss which leads θ to some other point where the output at the first hidden layer is full rank. So we can assume w.l.o.g. that it holds for θ that rank(F_1) = N. By assumption n_1 ≥ N and F_1 ∈ R^{N×n_1}, it follows that F_1 must have distinct rows, and thus by applying Lemma 5.2 again to (F_1, W_2, b_2, W_3) we can assume w.l.o.g. that rank(F_2) = N. By repeating this argument to higher layers using our assumption on the width, we can eventually arrive at some θ = (W_l, b_l)_{l=1}^L where rank(F_{L−1}) = N. Thus there must exist W_L^* ∈ R^{n_{L−1}×m} so that F_{L−1} W_L^* = Ŷ − 1_N b_L^T. Consider the line segment W_L(λ) = (1 − λ)W_L + λW_L^*; then it holds by convexity of ϕ that
$\Phi\big((W_l, b_l)_{l=1}^{L-1}, (W_L(\lambda), b_L)\big) = \varphi\big(F_{L-1} W_L(\lambda) + 1_N b_L^T\big) = \varphi\big((1 - \lambda)(F_{L-1} W_L + 1_N b_L^T) + \lambda (F_{L-1} W_L^* + 1_N b_L^T)\big) \le (1 - \lambda)\,\varphi(F_L) + \lambda\,\varphi(\hat{Y}) < (1 - \lambda)\,\alpha + \lambda\,\epsilon \le \alpha.$
Thus the whole line segment is contained in L^s_α. By plugging in λ = 1 we obtain ((W_l, b_l)_{l=1}^{L−1}, (W_L^*, b_L)) ∈ L^s_α. Moreover, it holds Φ((W_l, b_l)_{l=1}^{L−1}, (W_L^*, b_L)) = ϕ(Ŷ) ≤ ε. As ε can be chosen arbitrarily close to p^*, we conclude that Φ can be made arbitrarily close to p^* in every strict sublevel set, which implies that Φ has no bad local valleys.
Case 2: min {n 1 , . . . , n L−1 } ≥ 2N. Our first step is similar to the first step in the proof of Theorem 5.1, which we repeat below for completeness. Let θ = (W l , b l ) L l=1 , θ = (W l , b l ) L l=1 be arbitrary points in some sublevel set L α . It is sufficient to show that there is a connected path between θ and θ on which the loss is not larger than α. In the following, we denote F k and F k as the output at a layer k for θ and θ respectively. The output at the first layer is:
$F_1 = \sigma\big([X, 1_N]\,[W_1^T, b_1]^T\big), \qquad F_1' = \sigma\big([X, 1_N]\,[W_1'^{T}, b_1']^T\big).$
By applying Lemma 5.2 to (X, W 1 , b 1 , W 2 ) and (X, W 1 , b 1 , W 2 ) we can assume w.l.o.g. that both F 1 and F 1 have full rank, since otherwise there is a continuous path starting from each point and leading to some other point where the rank condition is fulfilled and the network output at second layer is invariant on the path. Once F 1 and F 1 have full rank, we can apply Lemma 5.3 to [X, 1 N ], [W T 1 , b 1 ] T , W 2 , [W T 1 , b 1 ] T in order to drive θ to some other point where the parameters of the first layer are all equal to the corresponding ones of θ . So we can assume w.l.o.g. that (W 1 , b 1 ) = (W 1 , b 1 ).
Once the network parameters of θ and θ coincide at the first hidden layer, we can view the output of this layer, which is equal for both points (i.e., F 1 = F 1 ), as the new training data for the subnetwork from layer 2 till layer L. Same as before, we first apply Lemma 5.2 to (F 1 , W 2 , b 2 , W 3 ) and (F 1 , W 2 , b 2 , W 3 ) to drive θ and θ respectively to other new points where both F 2 and F 2 have full rank. Note that this path only acts on (W 2 , b 2 , W 3 ) and thus leaves everything else below layer 2 invariant, in particular we still have F 1 = F 1 . Then we can apply Lemma 5.3 again to the tuple [F 1 , 1 N ], [W T 2 , b 2 ] T , W 3 , [W T 2 , b 2 ] T to drive θ to some other point where (W 2 , b 2 ) = (W 2 , b 2 ).
By repeating the above argument to the last hidden layer, we can make all network parameters of θ and θ coincide for all layers, except the output layer. In particular, the path that each θ and θ has followed has invariant loss. The output of the last hidden layer for these points is A := F L−1 = F L−1 . The loss at these two points can be rewritten as
$\Phi(\theta) = \varphi\Big([A, 1_N]\begin{bmatrix} W_L \\ b_L^T \end{bmatrix}\Big), \qquad \Phi(\theta') = \varphi\Big([A, 1_N]\begin{bmatrix} W_L' \\ b_L'^{T} \end{bmatrix}\Big).$
Since ϕ is convex, the line segment
$(1 - \lambda)\begin{bmatrix} W_L \\ b_L^T \end{bmatrix} + \lambda \begin{bmatrix} W_L' \\ b_L'^{T} \end{bmatrix}$
must yield a continuous descent path between (W_L, b_L) and (W_L', b_L'), and so the loss of every point on this path cannot be larger than α. Moreover, this path connects θ and θ' together, and thus L_α has to be connected. | 12,196 |
1901.07440 | 2913669491 | Links are an essential feature of the World Wide Web, and source code repositories are no exception. However, despite their many undisputed benefits, links can suffer from decay, insufficient versioning, and lack of bidirectional traceability. In this paper, we investigate the role of links contained in source code comments from these perspectives. We conducted a large-scale study of around 9.6 million links to establish their prevalence, and we used a mixed-methods approach to identify the links' targets, purposes, decay, and evolutionary aspects. We found that links are prevalent in source code repositories, that licenses, software homepages, and specifications are common types of link targets, and that links are often included to provide metadata or attribution. Links are rarely updated, but many link targets evolve. Almost 10 of the links included in source code comments are dead. We then submitted a batch of link-fixing pull requests to open source software repositories, resulting in most of our fixes being merged successfully. Our findings indicate that links in source code comments can indeed be fragile, and our work opens up avenues for future work to address these problems. | One of the most related studies is the one by Xia et al @cite_33 . They investigated what developers search for on the Web, and found that developers search for explanations of unknown terminology, explanations for exceptions error messages (e.g., HTTP 404), reusable code snippets, solutions to common programming bugs, and suitable third-party libraries services. Furthermore, they found that searching for solutions to performance bugs, solutions to multi-threading bugs, public datasets to test newly developed algorithms or systems, reusable code snippets, best industrial practices, database optimization solutions, solutions to security bugs, and solutions to software configuration bugs are the most difficult search tasks that developers consider. | {
"abstract": [
"Developers commonly make use of a web search engine such as Google to locate online resources to improve their productivity. A better understanding of what developers search for could help us understand their behaviors and the problems that they meet during the software development process. Unfortunately, we have a limited understanding of what developers frequently search for and of the search tasks that they often find challenging. To address this gap, we collected search queries from 60 developers, surveyed 235 software engineers from more than 21 countries across five continents. In particular, we asked our survey participants to rate the frequency and difficulty of 34 search tasks which are grouped along the following seven dimensions: general search, debugging and bug fixing, programming, third party code reuse, tools, database, and testing. We find that searching for explanations for unknown terminologies, explanations for exceptions error messages (e.g., HTTP 404), reusable code snippets, solutions to common programming bugs, and suitable third-party libraries services are the most frequent search tasks that developers perform, while searching for solutions to performance bugs, solutions to multi-threading bugs, public datasets to test newly developed algorithms or systems, reusable code snippets, best industrial practices, database optimization solutions, solutions to security bugs, and solutions to software configuration bugs are the most difficult search tasks that developers consider. Our study sheds light as to why practitioners often perform some of these tasks and why they find some of them to be challenging. We also discuss the implications of our findings to future research in several research areas, e.g., code search engines, domain-specific search engines, and automated generation and refinement of search queries."
],
"cite_N": [
"@cite_33"
],
"mid": [
"2604794021"
]
} | 9.6 Million Links in Source Code Comments: Purpose, Evolution, and Decay | When Ted Nelson started Project Xanadu 1 in 1960, he envisioned "an entire form of literature where links do not break as versions change; where documents may be closely compared side by side and closely annotated; where it is possible to see the origins of every quotation; and in which there is a valid copyright system-a literary, legal and business arrangement-for friction-less, non-negotiated quotation at any time and in any amount" [25]. Links were supposed to be visible and could be followed from all endpoints, with permission to link to a document explicitly granted by the act of publication [2]. Decades later, Nelson witnessed the birth of the World Wide Web, which in his words "trivialized this original Xanadu model, vastly but incorrectly simplifying these problems to a world of fragile ever-breaking one-way links, with no recognition of change or copyright, and no support for multiple versions or principled re-use" [25]. As predicted by Nelson, the Internet and its implementation of links have afforded us countless opportunities since, but also experienced issues such as link decay [17], [22], digital plagiarism [9], and the need to rely on external services to keep historical copies of web content [24].
In this work, we investigate the role of links contained in source code comments from the perspective of these opportunities and challenges: what purposes do they serve, how do they and their targets evolve, and how often do they break? The significance of this work is closely related to software documentation [33] and self-admitted technical debt [28]. To improve documentation and mitigate potential issues, it is important to understand developers' typical knowledge sharing activities by referencing external sources, and to investigate link decay as a potential problem.
Our work is related to and inspired by recent research on source code comments in terms of documentation, traceability, licensing, and attribution. For example, source code comments have been found to document technical debt [28] and to support articulation work [36]. They are fragile with respect to identifier renaming, i.e., traceability between comments and code is easily lost [32]. Source code comments located at the beginning of a file often include a text or a link indicating the copyright and license information of the file [12]. These comments are updated during the evolution of a product by the copyright holders [42]. Links in source code comments are sometimes used for attribution when source code has been taken from elsewhere-however, the vast majority of code snippets is copied without attribution [7], [8]. Despite these research efforts, to the best of our knowledge, the role of links in source code comments has not been studied comprehensively so far.
To fill this gap, in this paper, we first lay the foundation for understanding the role of links in source code comments by collecting 9,654,702 links from source code comments in 25,925 Git repositories. Our parser is able to extract comments from source code written in 7 programming languages. We find that links in source code comments are common: more than 80% of the repositories in our study contained at least one link. Through a qualitative study of a stratified sample of 1,146 links, we establish the kinds of link targets that are referenced in source code comments. To understand how links are used to indicate issues related to attribution, technical debt, copyright, and licensing, our qualitative study also uncovers the various purposes for including links in source code comments. We find that licenses, software homepages, and specifications are among the most prevalent types of link targets, and that links are often used to provide metadata or attribution. Link decay has the potential of making documentation in source code comments fragile and buggy. We investigate this issue from two perspectives: we analyze the evolution of the links in the repositories' commit histories and we examine how often link targets referenced in source code comments change. We find that links are rarely updated, but their targets evolve, in almost 10% of all cases leading to dead links. We then submit fixes to a subset of these broken links as pull requests, most of which were successfully merged by the maintainers of the corresponding open source projects.
In summary, this paper's contributions are three-fold:
• a large-scale and comprehensive study of around 9.6 million links to establish the prevalence of links in source code comments, • a mixed-methods study to identify targets, purposes, and evolutionary aspects of links in source code comments, and • an analysis of the extent to which links in source code comments are affected by link decay, with all nine linkfixing pull requests submitted to active open source projects already merged by the projects' maintainers.
II. RESEARCH METHOD
In this section, we present our research questions and data collection methodology, and we introduce the data contained in our online appendix.
A. Research Questions
The main goal of the study is to gain insights into the purposes, evolution and decay of links in source code comments. Based on this goal, we constructed seven research questions to guide our study. We now present each of these questions, along with the motivation for each.
(RQ1): How prevalent are links in source code comments?
The motivation of RQ1 is to understand whether the use of links in source code is a common practice in the wild. Furthermore, we would like to quantitatively explore the distribution, diversity, and spread of these links across different types of software projects. (RQ2): What kind of link targets are referenced in source code comments? (RQ3): What purpose do links in source code serve? RQ2 and RQ3 require a deeper analysis of the repositories, where we would like to understand the nature and purpose that the links serve. The key motivation for RQ2 is to identify the types of link targets that developers are likely to refer to in source code comments. Furthermore, we would like to characterize the most common types of linked domains.
The key motivation for RQ3 is to determine the reasons why developers use links.
(RQ4): How do links in source code comments evolve? (RQ5): How frequently do link targets referenced in source code comments change? (RQ6): How many links in source code comments are dead?
B. Data Collection
We now describe our methods for repository preparation, comment extraction, and link identification.
Repository preparation. In this work, we analyzed active software development repositories on GitHub written in common programming languages. As common programming languages, we selected seven languages: C, C++, Java, JavaScript, Python, PHP, and Ruby. These languages have been ranked consistently in the top 10 languages on GitHub from 2008 to 2017 (based on the number of repositories from 2008 to 2015 [20], the number of pull requests from 2014 to 2017 [10], and the number of pull requests in 2017 [13]).
Using the GHTorrent dataset 2 [16], we collected active repositories for the seven languages using the following criteria: (i) having more than 500 commits in their entire history (the same threshold used in previous work [4]), and (ii) having at least 100 commits in the most active two years. We designed the second criterion to remove long-term less active repositories and short-term projects that have not been maintained for long (and may not be software development projects). For example, we were able to exclude software-engineering-amsterdam/sea-of-ql, which is a repository of a collaboration space for students in a particular university course, and was reported as a false positive of software project identification [23]. We determine repositories' languages based on the GHTorrent information. Forked repositories are excluded if repositories are recorded in GHTorrent as forks of other repositories.
With the above criteria, we prepared the candidate list of target repositories for the seven languages as shown in Table I.
When we collected these candidate repositories (from May to June 2018), some repositories were not available because they had been deleted or made private. In total, we obtained more than 25,000 repositories, which is almost 90% of the candidate repositories.
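This repository-selection step can be sketched as a small filter over a GHTorrent-style export. The snippet below is only an illustration and assumes hypothetical column names (language, fork, total_commits, commits_by_year); it is not the script used in the study, and the two-year criterion is approximated by the best sum over two consecutive years.

```python
import pandas as pd

LANGUAGES = {"C", "C++", "Java", "JavaScript", "Python", "PHP", "Ruby"}

def select_active_repositories(repos: pd.DataFrame) -> pd.DataFrame:
    """Filter a GHTorrent-style repository table (hypothetical columns:
    'language', 'fork', 'total_commits', and 'commits_by_year' holding a
    dict of year -> commit count) using the paper's criteria."""
    def max_commits_in_two_years(commits_by_year: dict) -> int:
        years = sorted(commits_by_year)
        # best total over any two consecutive years
        return max(
            (commits_by_year[y] + commits_by_year.get(y + 1, 0) for y in years),
            default=0,
        )

    mask = (
        repos["language"].isin(LANGUAGES)
        & ~repos["fork"]                                   # exclude forks
        & (repos["total_commits"] > 500)                   # >500 commits overall
        & (repos["commits_by_year"].apply(max_commits_in_two_years) >= 100)
    )
    return repos[mask]
```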
Comment extraction. From each Git repository, we extract source files of the labeled language in the HEAD commit (the latest snapshot of a cloned repository). For example, only .java files are extracted from a Java repository. To process source files, we employ ANTLR4 lexical analyzers for six languages other than Ruby because their grammar definitions are available in the official example repository. 3 For Ruby, we use a standard library, Ripper parser.
We extract all single-line comments (e.g., // in C) and multi-line comments (/* ... */) according to the grammars. In the case of Python, string literals (''' ... ''') are also regarded as comments because they include documentation (known as docstrings). In the case of PHP, both HTML comments and PHP code comments are extracted.
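As a simplified, Python-only illustration of this comment-extraction step (the study itself relies on ANTLR4 lexers for six languages and on Ruby's Ripper), the following sketch collects comment tokens and triple-quoted string literals from Python source:

```python
import io
import tokenize

def extract_python_comments(source: str):
    """Return (line number, text) pairs for comments and triple-quoted
    string literals in Python source. String literals are kept because
    docstrings are treated as comments in the study; a real implementation
    would keep only actual docstrings."""
    comments = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.COMMENT:
            comments.append((tok.start[0], tok.string))
        elif tok.type == tokenize.STRING and tok.string.startswith(("'''", '"""')):
            comments.append((tok.start[0], tok.string))
    return comments

if __name__ == "__main__":
    sample = '"""Docstring with a link: https://example.org/docs"""\nx = 1  # see https://example.org/issue\n'
    for line_no, text in extract_python_comments(sample):
        print(line_no, text)
```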
Link identification. From the extracted comments, links are identified using the regular expression /http\S+/ (localhost and IP addresses, which are mainly used for private addresses, are excluded) and validated with the Perl module Data::Validate::URI. We identified a total of 9,654,702 links from the collected repositories as seen in Table I. All links are recorded with the information of the corresponding file, repository identifiers (pairs of account and repository names), commit hashes, and the line number where the surrounding comment starts. Considering the number of repositories, we found that repositories written in C, C++, and Java tend to contain more links compared to repositories in Python and Ruby.
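The link-identification step can be approximated in Python with the standard library; the study used the Perl module Data::Validate::URI for validation, so the urlparse-based check and the trailing-punctuation trimming below are assumptions of this sketch, not the original implementation.

```python
import re
from urllib.parse import urlparse

LINK_PATTERN = re.compile(r"http\S+")           # same regular expression as above
PRIVATE_HOSTS = re.compile(r"^(localhost|\d{1,3}(\.\d{1,3}){3})$")

def extract_links(comment: str):
    """Yield syntactically valid http(s) links found in a comment text,
    excluding localhost and raw IP addresses."""
    for match in LINK_PATTERN.findall(comment):
        candidate = match.rstrip(").,;'\"")     # trim trailing punctuation
        parsed = urlparse(candidate)
        if parsed.scheme not in ("http", "https") or not parsed.hostname:
            continue
        if PRIVATE_HOSTS.match(parsed.hostname):
            continue
        yield candidate

print(list(extract_links("// see https://github.com/NAIST-SE/9.6MillionLinks and http://127.0.0.1:8080/")))
```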
C. Online Appendix
Our online appendix contains our 9,654,702 links associated with the information of languages and comment location (GitHub links including account names, repository names, commit hashes, file paths, and line numbers). The appendix is available at https://github.com/NAIST-SE/9.6MillionLinks.
III. FINDINGS
In this section, we present our findings for each research question.
A. Prevalence of Links (RQ1)
To understand the prevalence of links referenced in source code comments (RQ1), we conducted a quantitative analysis of our collected dataset in terms of link existence, domain diversity, and domain popularity.
Link existence. Figure 1a shows the percentages of repositories that have at least one link in their source code comments. We see that, in every language, more than 80% of the repositories contain links in source code comments. Especially for repositories written in C, C++, and PHP, more than 90% of the repositories refer to external sources via links.
Domain diversity. In the obtained 9,654,702 links, there are 57,039 distinct domains (Internet hostnames). Figure 1b shows the distribution of the number of distinct domains per repository, for repositories that have at least one link in their source code comments. Median values are presented in the figure. We found that there is a diversity of links in a single repository even when summarized by their domains. Especially in repositories written in C, C++, JavaScript, and PHP, source code comments link to 10 or more different domains (median).
Popular domains. Figure 1c illustrates the proportion of languages shared by the top 10 most referenced domains. Note that domain ranking is based on the number of repositories instead of the number of links. If links belonging to a domain appear in a small number of repositories, the domain will be low-ranked even if those repositories contain many links.
The github.com domain is the top referenced domain in our dataset. More than 14,000 repositories across seven languages referenced content on github.com. As we will describe in detail in Section III-B, such content includes software homepage, code, and profile of a GitHub contributor. However, we find in Section III-F that many links to github.com are no longer available. We also found many links to code.google.com (7th rank). Such content includes bug report, software homepage, and code. In a statistically representative sample of common domains (sampling described in Section III-B), two out of three links to code.google.com are redirected to github.com, and one links to code.google.com/archive/.
The stackoverflow.com domain is the second most referenced domain and has been linked to from 8,189 repositories. As identified in previous work, Stack Overflow is widely used as a knowledge exchange platform between programmers [38], where programmers can obtain knowledge of good practices from code examples [29], [35], for example. The large number of links to stackoverflow.com in source code comments can be another piece of evidence of developers' needs for knowledge acquisition from external resources. We study how code could be obsolete by not being updated when external sources change in Section III-E.
The top domains differ by programming language: The www.apache.org domain is frequently linked from Java repositories, and the www.gnu.org domain is referenced from C and C++ repositories. Repositories written in JavaScript have many links to the Web-related domains of www.w3.org and developer.mozilla.org.
Summary: We revealed that links in source code comments are prevalent. In more than 80% of the 25,925 active repositories written in seven common languages, there exists at least one link in each repository. The top three most frequently referenced domains per repository are github.com, stackoverflow.com, and en.wikipedia.com.
B. Link Targets (RQ2)
To understand what kind of link targets are referenced in source code comments (RQ2), we conducted a qualitative study of a statistically representative and stratified sample of all links in our dataset.
After an initial analysis of the link data, it quickly became obvious that some domains account for many links while other domains are rare. Based on this observation and to ensure diversity of our sample, we divided the data into three strata:
1) links to commonly linked domains, 2) links to domains sometimes linked, and 3) links to rarely linked domains.
To decide on thresholds for distinguishing domains into those that are commonly, sometimes, and rarely linked, we conducted a visual analysis of the distribution of links per domain in our dataset. Figure 2 shows this distribution using a log scale. While content from the most commonly linked domain was linked more than a million times, many domains appeared in our dataset with a much lower frequency. We used the "step" in the distribution on the left-hand side of Figure 2 to distinguish between domains that are commonly linked and domains that are sometimes linked, with a cutoff frequency of 230. We consider domains which account for exactly one link in our dataset to be rarely linked. Table II shows the number of domains and the number of links in each strata. We then drew a statistically representative sample from each bucket. The required sample size was calculated so that our conclusions about the ratio of links with a specific characteristic would generalize to all links in the same bucket with a confidence level of 95% and a confidence interval of 5. 4 The calculation of statistically significant sample sizes based on population size, confidence interval, and confidence level is well established (first published by Krejcie and Morgan in 1970 [19]). The qualitative analysis was conducted in multiple iterations: in the first iteration, the first two authors independently coded 20 links from the sample, discussed a common coding guide, and tested this coding guide on another 20 links from the sample, refining the guide, merging codes, and adding codes which had been missed. The initial codes were informed by those used by Aniche et al. [5] to categorize content posted on news aggregators, however, we found that their codes did not cover all types of link targets present in our dataset. In the second iteration, the four authors of this paper then independently coded another 30 links from the sample, using the coding guide designed by the first two authors. We then calculated the kappa agreement of this iteration between all four raters, for 30 cases and all 19 codes that emerged from the qualitative analysis. 5 The kappa agreement was 0.81 or "almost perfect" [39]. Based on this encouraging result, the remaining data was then coded by a single author.
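The per-stratum sample sizes (95% confidence level, confidence interval of 5) follow from the standard sample-size formula for proportions with a finite-population correction; the snippet below is a generic sketch of that calculation, with a placeholder population size, rather than the exact tool the authors used.

```python
import math

def required_sample_size(population: int, confidence_interval: float = 5.0,
                         confidence_level: float = 0.95, p: float = 0.5) -> int:
    """Sample size for estimating a proportion within +/- confidence_interval
    percentage points, with finite-population correction."""
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence_level]
    margin = confidence_interval / 100.0
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)      # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)             # finite-population correction
    return math.ceil(n)

print(required_sample_size(population=100_000))      # placeholder population size
```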
The following list shows the 19 codes that emerged from our analysis along with a short description which was available in the coding guide:
• 404: link target does not exist (anymore) or cannot be accessed • licence: licence of a software project • software homepage: main web presence of a library or software project • specification: anything that resembles a requirements document or a technical standard • organization homepage: main web presence of an organization or company • other: anything that does not fit the other codes, including if sign-in is required • tutorial or article: technical article or tutorial, without commenting section (blog post otherwise) • API documentation: documentation of an API element • blog post: technical content with a commenting section • application: interactive application (e.g., web application, online utility) • bug report: bug report or issue in an online bug/issue tracker • research paper: academic paper • personal homepage: personal homepage of one individual • code: a source code file • forum thread: thread in a forum or entire forum • GitHub profile: profile of a GitHub contributor • book content: chapter/section of a book or entire book • Q&A thread: question-and-answer thread, but not Stack Overflow • Stack Overflow: question-and-answer thread on Stack Overflow
Taxonomy of link targets. Table III shows the result of our qualitative analysis. For commonly-linked domains, license is the most common type of link target, accounting for more than half of the links in our sample, followed by software homepages, i.e., the main web presence of a library or software project. For domains that are linked sometimes from source code comments, the most common type of link target was 404, a non-existing link target. This is a first indicator of the decay of links in source code comments, which we will analyze in detail in the next sections. Software homepages are also prevalent, as are organization homepages, both accounting for more than 10% of all links in our sample. Finally, for links from domains which are rarely linked, the problem of decay is even more serious, affecting 37% of the links in this sample. 5 Kappa agreement was calculated using http://justusrandolph.net/kappa/.
In other words, we can conclude with a 95% confidence that between 32 and 42% of all links to domains which are rarely linked from source code comments are dead or inaccessible. The prevalence of the code "other" in the results for links to rarely linked domains is an indicator of the diversity of links present in source code comments.
Summary:
We identified more than a dozen different kinds of link targets, with dead links, licenses, and software homepages being the most prevalent. Dead links are particularly common for rarely linked domains.
C. Link Purpose (RQ3)
To understand the purpose of links referenced in source code comments (RQ3) and similar to (RQ2), we again employed a qualitative analysis of our statistically representative and stratified sample of 1,146 links, only this time focusing on the origin of a link (in a source comment) rather than the target of the link. We used the same iterative approach to design a coding guide, and validated the coding guide by having the four authors code 30 links independently, this time leading to a kappa agreement of 0.70 which indicates "substantial" agreement [39]. The somewhat lower agreement can be explained by the need to extrapolate the purpose of a link from its context in the source code alone, without being able to interview the contributor who added the link.
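The free-marginal multirater kappa reported for these agreement checks can be reproduced directly from its definition; the following function and toy example are a sketch, not the authors' calculation script (they used the web tool cited earlier).

```python
def free_marginal_kappa(ratings, num_categories):
    """Randolph's free-marginal multirater kappa.

    `ratings` is a list of cases, each a list of category labels, one per rater
    (at least two raters per case). Observed agreement is the mean proportion
    of agreeing rater pairs per case; expected agreement under free marginals
    is 1 / num_categories."""
    observed = 0.0
    for case in ratings:
        r = len(case)
        pairs = r * (r - 1)
        agreeing = sum(case.count(label) * (case.count(label) - 1) for label in set(case))
        observed += agreeing / pairs
    observed /= len(ratings)
    expected = 1.0 / num_categories
    return (observed - expected) / (1 - expected)

# toy example: 3 cases rated by 4 raters with 19 possible codes
print(free_marginal_kappa(
    [["licence"] * 4, ["404", "404", "404", "other"], ["code"] * 4],
    num_categories=19,
))
```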
The following list shows all 8 codes that emerged from our analysis for link purpose, along with a short description which was available in the coding guide. The coding guide was informed by work on source code comments (e.g., [36]), selfadmitted technical debt (e.g., [28]), and attribution (e.g., [7]).
• metadata: the link relates to the author of the source code, a related organization, or the license • source/attribution: the comment explicitly indicates that the link is a source of some aspect of the source code (e.g., algorithm) • source code context: the link adds additional information to the source code (use this code for things that do not obviously fit into any of the previous) • see-also: the comment explicitly indicates that the link points to additional reading material (usually accompanied by a phrase such as "see also"). • commented-out source code: the link is part of the source code, e.g., as a parameter value, but has been commented out • link-only: the comment only contains the link • self-admitted technical debt: bug-related, like workaround, under development, and so on • @see: the link is accompanied by "@see", but no further explanation Note that our coding guide required the indicators of seealso and source/attribution to be explicit, thus reducing the guesswork required as part of the qualitative analysis.
Taxonomy of link purpose. Table IV shows the results of the qualitative analysis. For links to commonly linked domains, providing metadata, e.g., in the forum of licenses or author information, is by far the most common purpose of a link, covering three quarters of the links in our sample. For links to domains which are only sometimes linked, metadata only accounts for one third of the data, followed by links included for the purpose of attribution, providing context, or see-also information. The results for links to rarely linked domains are even more diverse: we can see from the table that these links are used for context, attribution, and as part of the source code functionality (albeit commented out), to name the top three. Six of the eight codes account for at least 10% of the links in this part of our sample.
Matching link target with purpose. Based on the qualitative analysis conducted for answering RQ2 and RQ3 about the targets and purposes of links in source code comments, we are now able to investigate the relationships between the different types of link targets and the different purposes which emerged from our qualitative analysis. To do so, we applied association rule learning using the apriori algorithm [1] as implemented in the R package arules 6 to our data, treating each link as a transaction containing two items: its target type and its purpose. We used 4 as threshold for support and 0.7 as threshold for confidence, i.e., all rules that we extracted are supported by at least four data points and we have at least a 70% confidence that the left hand side of the rule implies the right hand side. Table V shows the association rules extracted from our data with these settings, separately for each stratum in our sample. Unsurprisingly, the link target type license and the purpose of providing metadata are tightly connected, in particular for links referring to commonly linked domains. In fact, all links to licenses were found to have been included for the reason of providing metadata, and 72% of the metadata is license information. Links to software, organization, and personal homepages are also associated with metadata, across all strata. Although with a relatively low support of seven instances, it is also interesting to note the tight coupling of the link target type bug report and the purpose of admitting technical debt.
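The rules above were mined with the apriori implementation in the R package arules; an equivalent analysis could be sketched in Python with mlxtend, keeping in mind that mlxtend expresses support as a fraction of transactions, so an absolute support of 4 becomes 4 divided by the number of coded links. The transactions below are toy placeholders that only show the expected input shape.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# each coded link becomes a transaction with two items: its target type and its purpose
base = [
    ["target=licence", "purpose=metadata"],
    ["target=software homepage", "purpose=metadata"],
    ["target=bug report", "purpose=self-admitted technical debt"],
]
transactions = base * 50        # toy data, repeated only so the example produces output

encoder = TransactionEncoder()
onehot = pd.DataFrame(encoder.fit_transform(transactions), columns=encoder.columns_)

min_support = 4 / len(transactions)    # absolute support of 4 transactions, as in the paper
itemsets = apriori(onehot, min_support=min_support, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```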
Summary: We identified different purposes for the inclusion of links in source code comments, with providing metadata and attribution being the most common. Links are also included for background information, to provide context, or to admit technical debt. In some cases, the link is part of source code which has been commented out.
D. Link Evolution (RQ4)
To understand how links evolve (RQ4), we investigated the revision histories of repositories in the samples from (RQ2). For each sample link, we searched an old version of the link that has been revised by a commit that introduced the link. We extracted such a commit introducing a link by using the git log command (-S option with tracking file renaming). We searched http(s) links removed from the code location where the sample link has been added. We identified 88 revised links out of 1,146 samples, including 24 (6.3%) in common, 31 (8.1%) in sometimes, and 33 (8.7%) in rare. Less than 10% of the links had been revised in each strata, that is, most of the links have never been updated. We manually analyzed the old and new paths of the links and identified the following evolution types:
• license replacement: a new link refers to a new software license. For example, a link to GNU GPL has been replaced with a link to the Apache License. • organization update: a project or an organization changed its name or website. For example, a project that acquired their own domain updated links to their project website. • change to https: a new link uses HTTPS instead of HTTP for the same location as the previous link. • content move: a new link refers to a slightly different location (e.g. the same path on a different server, the same document name on a different wiki), which is likely the same content. • content update: a new link refers to different content from the previous link, but the new content is likely updated. For example, the Apache Jackrabbit project replaced a link pointing to a draft version of a document 7 with a link to an RFC version. 8 • content change: a new link refers to relevant but different content from the previous link. For example, the Pi4J project replaced a link related to the usage of a serial port of Raspberry Pie 9 with another similar document. 10 • other: we could not identify types for some links whose contents are no longer available. It should be noted that the contents for 20 updated links are 404 Not Found.
Reasons for link evolution. Table VI shows the numbers of link evolution in the three strata. For commonly-linked domains, license replacement and updating organizational information account for about 80% of link revisions. For domains sometimes linked, organization update is the most common, followed by other and content change. For rarely- Summary: Links are rarely updated (less than 9%). Common modifications are updating licenses and organization homepages.
E. Link Target Evolution (RQ5)
After understanding the evolution of links, our next research question (RQ5) asks about the evolution of their targets. To investigate whether link targets referenced in source code comments evolve, we attempted to download all link targets in our sample of 1,146 links using the curl command with a timeout of 60 seconds. As already discussed as part of (RQ2), not all link targets are available. We were able to download a total of 1,034 link targets (90%). We then repeated the same download process exactly ten days later, to see how many of the link targets had changed within this short time frame and what kind of changes had happened.
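The two-snapshot comparison described above (curl with a 60-second timeout, followed by a byte-wise comparison ten days later) can be approximated as follows; requests stands in for curl and a content hash stands in for the file-compare tool, so this is a sketch of the idea rather than the authors' pipeline.

```python
import hashlib
import requests

def snapshot(url: str, timeout: int = 60):
    """Download a link target and return (status code, sha256 of the body),
    or None if the request fails or times out."""
    try:
        response = requests.get(url, timeout=timeout)
    except requests.RequestException:
        return None
    return response.status_code, hashlib.sha256(response.content).hexdigest()

def has_changed(first, second) -> bool:
    """Compare two snapshots taken at different dates (e.g., ten days apart)."""
    if first is None or second is None:
        return first != second
    return first[1] != second[1]

# usage: take snapshot_a today, persist it, take snapshot_b ten days later
snapshot_a = snapshot("https://github.com/NAIST-SE/9.6MillionLinks")
```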
Changes to the link target. Table VII summarizes the results of this analysis: out of the 1,034 link targets for which curl returned a result, 879 (85%) had not changed at all in the ten-day time frame (the downloaded content was exactly the same, as per the Windows file compare tool fc). We manually analyzed the 155 cases in which the content had changed by opening both versions in a web browser and conducting a visual comparison. The majority of the changes in the remaining 15% can be attributed to automatically generated changes, such as the display of a visitor count or the current date in a footer.
However, a non-negligible number of link targets underwent more significant changes in the ten-day time window: For six links for which we were able to retrieve data on the first download date, there was no content available anymore ten days later. For three links which had displayed an error message when we first attempted to download their content, the specific error message changed. Some link targets changed their website design, and for a few links, the content changed. For example, the download page of TaskWarrior 11 included the following notice when we first downloaded its content: "(For those of you wishing to build task from source on Cygwin, you will need some components installed (make, g++/clang, GnuTLS, libuuid, libreadline), but don't forget -task is a standard part of the Cygwin distribution, so you do not need to build from source, unless you want the latest development snapshot)." Ten days later, this notice was replaced with: "(Please note, that Cygwin is not supported anymore. Please use the Windows Subsystem for Linux to use Taskwarrior on Windows)." We argue that this kind of change is relevant to software developers.
Stack Overflow case study. To investigate this phenomenon in more detail, we conducted a case study with the subset of links pointing to Stack Overflow. As seen in Section III-A, stackoverflow.com is the second most referenced domain.
In all 9,654,702 obtained links, there are 32,197 links belonging to stackoverflow.com. Among those Stack Overflow links, there are varieties of expressions: an abbreviated path to an answer (/a/(answer id)), an abbreviated path to a question (/q/(question id)), and a full path to a question (/questions/(question id)/(title)). Older links start with 'http://' and newer links start with 'https://'. For each Stack Overflow link, we identified the timestamp of when the link was added to a repository by using the same git log command (-S option with tracking file renaming) used in Section III-D. For duplicate links, we consider only the oldest timestamp. Consequently, we obtained a list of 11,464 distinct links with their timestamps.
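The commit lookup described here (git log with the -S option and rename tracking) can be scripted, for example, as below; the repository path, file path, and link are placeholders, and the exact options used in the study may differ.

```python
import subprocess

def link_introduction_commits(repo_dir: str, file_path: str, link: str):
    """Return (commit hash, author date) pairs for commits whose diffs add or
    remove the given link in the given file, following file renames."""
    output = subprocess.run(
        ["git", "-C", repo_dir, "log", "--follow", "--format=%H %aI",
         "-S", link, "--", file_path],
        capture_output=True, text=True, check=True,
    ).stdout
    # the oldest matching commit (the introduction) is the last entry
    return [line.split(" ", 1) for line in output.splitlines() if line]

# example (placeholders): link_introduction_commits("some-repo", "src/util.c", "https://example.org/spec")
```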
We then made use of the SOTorrent dataset [8] to investigate the extent to which Stack Overflow content had changed since the link to the question or answer had been added to a source code comment in a Git repository. We created a statistically representative sample of 372 links from the population of all unique links to Stack Overflow content in our dataset, and we queried SOTorrent to determine the following metrics for each link:
• the number of text edits on any post (i.e., question or answer) in the same thread, • the number of new comments on any post (i.e., question or answer) in the same thread, • the number of new answers in the same thread, and • the number of edits to the thread title. Thread updates. Figure 3 shows the results of this analysis. More than half of all Stack Overflow threads had at least one change made to the text of a question or answer in the same thread (median: 1, third quartile: 3) after they were committed to a Git repository as part of a source code comment, and more than half of these links attracted at least one new comment in the meantime (median: 2, third quartile: 7). While the number of new answers to a thread was zero in the median case, a quarter of the Stack Overflow threads attracted at least 2 new answers after the link was added in a source code comment (median: 0, third quartile: 2). In total, only 91 (24%) of the 372 Stack Overflow threads in our sample did not undergo any changes after they were added to a Git repository. Summary: We found that even within a short ten-day time window, a non-negligible portion of link targets referenced in source code comments evolve, in some cases adding or modifying pertinent information. In our case study on links pointing to Stack Overflow, we found that more than three quarters of all Stack Overflow threads linked in source code comments attracted at least one change (edit, new answer, or new comment) after being first referenced in a source code comment.
F. Link Decay (RQ6)
Among the obtained 9,654,702 links, there are 382,650 distinct links. To investigate the amount of dead links in source code comments (RQ6), we accessed all Web contents from the 382,650 unique links by using the Perl module LWP. 13 Link retrieval responses.
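The liveness check, performed in the study with the Perl module LWP, could be approximated in Python as follows; the timeout, user agent, and response categories are illustrative choices rather than the authors' exact settings.

```python
import requests

def classify_link(url: str, timeout: int = 30) -> str:
    """Return a coarse liveness category for a link: 'ok', a client/server
    error with its status code, or 'unreachable' (DNS failure, timeout,
    SSL error, ...)."""
    try:
        response = requests.get(url, timeout=timeout, allow_redirects=True,
                                headers={"User-Agent": "link-checker-sketch"})
    except requests.RequestException:
        return "unreachable"
    if response.status_code == 200:
        return "ok"
    return f"client/server error ({response.status_code})"

print(classify_link("https://github.com/NAIST-SE/9.6MillionLinks"))
```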
G. Fixing Dead Links (RQ7)
To fix dead links (RQ7), we collected fixable dead links and submitted pull requests to fix them. We selected dead links that are neither metadata (which would need multiple files to be fixed) nor commented-out source code. Personal blog articles were avoided because they tend to be no longer available. Consequently, we obtained 14 dead links to API documentation, research papers, and so on. After checking the original content in the Wayback Machine 14 , we manually investigated new links by searching for specific keywords in the original content. Our fixing process included first forking a personal copy of the project, fixing the link, and then later submitting a pull request to the project.
Pull request results. Developers showed they cared about dead links by accepting all nine pull requests. 15 16 17 18 19 20 21 22 23 Since the link itself is a comment, we speculate that it has almost no conflicts with existing code, so our pull requests are likely to pass all tests and to be merged immediately. Developers responded with comments such as "LGTM (looks good to me)" and "Thanks for spotting the broken link".
Overall, the responses from developers provide sufficient motivation for tool support to assist with fixing broken links. We argue that such comments indicate that developers are concerned with keeping their links alive.
Summary: Developers generally responded positively to the request to fix dead links. All nine responsive projects accepted our pull requests to fix dead links.
IV. RECOMMENDATIONS
Our findings can be summarized into recommendations for developers and researchers.
Recommendations for software developers including links in source code comments are:
• Try referencing permanent links, as it is reported that more than 30% of links will not work after a 4 year period [18]. Referencing research papers with DOI is preferable instead of researchers' personal Web pages. Explicitly mentioning tags or commit hashes to referenced code in GitHub would be recommended, as software structure can be changed (we found many dead links to GitHub in Section III-F). • Check link targets for new information on a regular basis, as referenced external resources can be considered to be software documentation to support comprehension and maintenance activities. In addition, link target updates can be triggers of improving and updating code (as seen in Section III-E).
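For the GitHub-specific part of this recommendation, a branch-based blob URL can be rewritten into a permalink that embeds a commit hash. The helper below is hypothetical, handles only the common blob-URL shape, and assumes the commit hash is obtained separately (e.g., from git).

```python
import re

def pin_github_link(url: str, commit_sha: str) -> str:
    """Rewrite github.com/<owner>/<repo>/blob/<branch>/<path> so that it points
    at a specific commit instead of a moving branch. Other URLs are returned
    unchanged."""
    pattern = r"^(https?://github\.com/[^/]+/[^/]+/blob/)[^/]+(/.*)$"
    return re.sub(pattern, lambda m: m.group(1) + commit_sha + m.group(2), url)

print(pin_github_link(
    "https://github.com/owner/repo/blob/master/docs/spec.md",      # placeholder URL
    "0123456789abcdef0123456789abcdef01234567",                    # placeholder SHA
))
```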
We can also consider future work with the following possible challenges.
• Further understanding of external sources. We found many sources as shown in Figure 1c and Table III. Although some sources have been already studied, for example, licenses [12], self-admitted technical debt [28], and Stack Overflow [38], other sources have not been well-studied with regard to their impact and influence on software development, such as research papers and Wikipedia articles. • Further studies of source code comments to understand how knowledge (related to knowledge-based theory [44] and human capital [26], [40]) is summarized and shared via source code comments. Further analyses of source code comment contents [27] would be required. • Tool support for external source referencing, tracking, and updating. Although we recommend developers to maintain links and associated code, it is not always possible. Tools or systems to help developers fix link issues and maintain code automatically could be practically useful.
V. THREATS TO VALIDITY
Threats to the construct validity exist in our approach to link identification. Since we identified links per line in source code comments, links located across multiple lines cannot be extracted. Note that we did not encounter any such multipleline links in our representative sample of 1,146 links. Hence we consider that the impact of incorrect link identification because of multiple-line links is small.
Threats to the external validity exist in our repository preparation. Although we analyzed a large amount of repositories on GitHub, we cannot generalize our findings to industry nor open source projects in general; some open source repositories are hosted outside of GitHub, e.g., on GitLab or private servers.
To mitigate threats to reliability, we prepared an online appendix of our 9,654,702 links with associated information (see Section II-C).
VII. CONCLUSION
To understand purposes, evolution, and decay of links in source code comments, we conducted (i) a quantitative study of 9,654,702 links from source code comments in 25,925 Git repositories to establish the prevalence of links in source code comments; (ii) a qualitative study of a stratified sample of 1,146 links to determine the kinds of link targets and purposes for including links present in our dataset; (iii) a quantitative and qualitative study to investigate the evolution of links in source code comments and their targets; and (iv) a quantitative study to determine the extent to which links in source code comments are affected by link decay.
Our work has shown that links in source code comments indeed suffer from decay, from insufficient versioning (when link targets evolve), and from lack of bidirectional traceability (which could help avoid decay). Based on this work which has established the prevalence of links in source code comments, their multiple purposes and targets, issues of decay, and practical needs of fixing dead links, there are many open avenues for future work: understanding the role of external sources for software development, further studies of source code comments, and tool support for external source referencing, to name a few. | 6,149 |
1901.07440 | 2913669491 | Links are an essential feature of the World Wide Web, and source code repositories are no exception. However, despite their many undisputed benefits, links can suffer from decay, insufficient versioning, and lack of bidirectional traceability. In this paper, we investigate the role of links contained in source code comments from these perspectives. We conducted a large-scale study of around 9.6 million links to establish their prevalence, and we used a mixed-methods approach to identify the links' targets, purposes, decay, and evolutionary aspects. We found that links are prevalent in source code repositories, that licenses, software homepages, and specifications are common types of link targets, and that links are often included to provide metadata or attribution. Links are rarely updated, but many link targets evolve. Almost 10 of the links included in source code comments are dead. We then submitted a batch of link-fixing pull requests to open source software repositories, resulting in most of our fixes being merged successfully. Our findings indicate that links in source code comments can indeed be fragile, and our work opens up avenues for future work to address these problems. | Many researchers have made use of code comments in their work. @cite_24 automatically identify bugs by analyzing inconsistencies between code and comments. Ratol and Robillard @cite_38 used code comments to assist refactoring activities. Wong et al @cite_27 used code comments to map source code and Stack Overflow content. German et al @cite_25 developed the ninka tool that automatically identifies a software license in code comments. Goldman and Miller @cite_3 developed the tool CodeTrail, that demonstrates how the developer's use of web resources can be improved by connecting the Eclipse integrated development environment (IDE) and the Firefox web browser. | {
"abstract": [
"Refactoring is a common software development practice and many simple refactorings can be performed automatically by tools. Identifier renaming is a widely performed refactoring activity. With tool support, rename refactorings can rely on the program structure to ensure correctness of the code transformation. Unfortunately, the textual references to the renamed identifier present in the unstructured comment text cannot be formally detected through the syntax of the language, and are thus fragile with respect to identifier renaming. We designed a new rule-based approach to detect fragile comments. Our approach, called Fraco, takes into account the type of identifier, its morphology, the scope of the identifier and the location of comments. We evaluated the approach by comparing its precision and recall against hand-annotated benchmarks created for six target Java systems, and compared the results against the performance of Eclipse's automated in-comment identifier replacement feature. Fraco performed with near-optimal precision and recall on most components of our evaluation data set, and generally outperformed the baseline Eclipse feature. As part of our evaluation, we also noted that more than half of the total number of identifiers in our data set had fragile comments after renaming, which further motivates the need for research on automatic comment refactoring.",
"When faced with the need for documentation, examples, bug fixes, error descriptions, code snippets, workarounds, templates, patterns, or advice, software developers frequently turn to their web browser. Web resources both organized and authoritative as well as informal and community-driven are heavily used by developers. The time and attention devoted to finding (or re-finding) and navigating these sites is significant. We present Codetrail, a system that demonstrates how the developer's use of web resources can be improved by connecting the Eclipse integrated development environment (IDE) and the Firefox web browser. Codetrail uses a communication channel and shared data model between these applications to implement a variety of integrative tools. By combining information previously available only to the IDE or the web browser alone (such as editing history, code contents, and recent browsing), Codetrail can automate previously manual tasks and enable new interactions that exploit the marriage of data and functionality from Firefox and Eclipse. Just as the IDE will change the contents of peripheral views to focus on the particular code or task with which the developer is engaged, so, too, the web browser can be focused on the developer's current context and task.",
"Commenting source code has long been a common practice in software development. Compared to source code, comments are more direct, descriptive and easy-to-understand. Comments and sourcecode provide relatively redundant and independent information regarding a program's semantic behavior. As software evolves, they can easily grow out-of-sync, indicating two problems: (1) bugs -the source code does not follow the assumptions and requirements specified by correct program comments; (2) bad comments - comments that are inconsistent with correct code, which can confuse and mislead programmers to introduce bugs in subsequent versions. Unfortunately, as most comments are written in natural language, no solution has been proposed to automatically analyze commentsand detect inconsistencies between comments and source code. This paper takes the first step in automatically analyzing commentswritten in natural language to extract implicit program rulesand use these rules to automatically detect inconsistencies between comments and source code, indicating either bugs or bad comments. Our solution, iComment, combines Natural Language Processing(NLP), Machine Learning, Statistics and Program Analysis techniques to achieve these goals. We evaluate iComment on four large code bases: Linux, Mozilla, Wine and Apache. Our experimental results show that iComment automatically extracts 1832 rules from comments with 90.8-100 accuracy and detects 60 comment-code inconsistencies, 33 newbugs and 27 bad comments, in the latest versions of the four programs. Nineteen of them (12 bugs and 7 bad comments) have already been confirmed by the corresponding developers while the others are currently being analyzed by the developers.",
"Code comments improve software maintainability. To address the comment scarcity issue, we propose a new automatic comment generation approach, which mines comments from a large programming Question and Answer (Q&A) site. Q&A sites allow programmers to post questions and receive solutions, which contain code segments together with their descriptions, referred to as code-description mappings. We develop AutoComment to extract such mappings, and leverage them to generate description comments automatically for similar code segments matched in open-source projects. We apply AutoComment to analyze Java and Android tagged Q&A posts to extract 132,767 code-description mappings, which help AutoComment to generate 102 comments automatically for 23 Java and Android projects. The user study results show that the majority of the participants consider the generated comments accurate, adequate, concise, and useful in helping them understand the code.",
"The reuse of free and open source software (FOSS) components is becoming more prevalent. One of the major challenges in finding the right component is finding one that has a license that is e for its intended use. The license of a FOSS component is determined by the licenses of its source code files. In this paper, we describe the challenges of identifying the license under which source code is made available, and propose a sentence-based matching algorithm to automatically do it. We demonstrate the feasibility of our approach by implementing a tool named Ninka. We performed an evaluation that shows that Ninka outperforms other methods of license identification in precision and speed. We also performed an empirical study on 0.8 million source code files of Debian that highlight interesting facts about the manner in which licenses are used by FOSS"
],
"cite_N": [
"@cite_38",
"@cite_3",
"@cite_24",
"@cite_27",
"@cite_25"
],
"mid": [
"2767331170",
"2051074879",
"2152874840",
"2023925487",
"1992218759"
]
} | 9.6 Million Links in Source Code Comments: Purpose, Evolution, and Decay | When Ted Nelson started Project Xanadu 1 in 1960, he envisioned "an entire form of literature where links do not break as versions change; where documents may be closely compared side by side and closely annotated; where it is possible to see the origins of every quotation; and in which there is a valid copyright system-a literary, legal and business arrangement-for friction-less, non-negotiated quotation at any time and in any amount" [25]. Links were supposed to be visible and could be followed from all endpoints, with permission to link to a document explicitly granted by the act of publication [2]. Decades later, Nelson witnessed the birth of the World Wide Web, which in his words "trivialized this original Xanadu model, vastly but incorrectly simplifying these problems to a world of fragile ever-breaking one-way links, with no recognition of change or copyright, and no support for multiple versions or principled re-use" [25]. As predicted by Nelson, the Internet and its implementation of links have afforded us countless opportunities since, but also experienced issues such as link decay [17], [22], digital plagiarism [9], and the need to rely on external services to keep historical copies of web content [24].
In this work, we investigate the role of links contained in source code comments from the perspective of these opportunities and challenges: what purposes do they serve, how do they and their targets evolve, and how often do they break? The significance of this work is closely related to software documentation [33] and self-admitted technical debt [28]. To improve documentation and mitigate potential issues, it is important to understand developers' typical knowledge sharing activities by referencing external sources, and to investigate link decay as a potential problem.
Our work is related to and inspired by recent research on source code comments in terms of documentation, traceability, licensing, and attribution. For example, source code comments have been found to document technical debt [28] and to support articulation work [36]. They are fragile with respect to identifier renaming, i.e., traceability between comments and code is easily lost [32]. Source code comments located at the beginning of a file often include a text or a link indicating the copyright and license information of the file [12]. These comments are updated during the evolution of a product by the copyright holders [42]. Links in source code comments are sometimes used for attribution when source code has been taken from elsewhere-however, the vast majority of code snippets is copied without attribution [7], [8]. Despite these research efforts, to the best of our knowledge, the role of links in source code comments has not been studied comprehensively so far.
To fill this gap, in this paper, we first lay the foundation for understanding the role of links in source code comments by collecting 9,654,702 links from source code comments in 25,925 Git repositories. Our parser is able to extract comments from source code written in 7 programming languages. We find that links in source code comments are common: more than 80% of the repositories in our study contained at least one link. Through a qualitative study of a stratified sample of 1,146 links, we establish the kinds of link targets that are referenced in source code comments. To understand how links are used to indicate issues related to attribution, technical debt, copyright, and licensing, our qualitative study also uncovers the various purposes for including links in source code comments. We find that licenses, software homepages, and specifications are among the most prevalent types of link targets, and that links are often used to provide metadata or attribution. Link decay has the potential of making documentation in source code comments fragile and buggy. We investigate this issue from two perspectives: we analyze the evolution of the links in the repositories' commit histories and we examine how often link targets referenced in source code comments change. We find that links are rarely updated, but their targets evolve, in almost 10% of all cases leading to dead links. We then submit fixes to a subset of these broken links as pull requests, most of which were successfully merged by the maintainers of the corresponding open source projects.
In summary, this paper's contributions are three-fold:
• a large-scale and comprehensive study of around 9.6 million links to establish the prevalence of links in source code comments, • a mixed-methods study to identify targets, purposes, and evolutionary aspects of links in source code comments, and • an analysis of the extent to which links in source code comments are affected by link decay, with all nine linkfixing pull requests submitted to active open source projects already merged by the projects' maintainers.
II. RESEARCH METHOD
In this section, we present our research questions and data collection methodology, and we introduce the data contained in our online appendix.
A. Research Questions
The main goal of the study is to gain insights into the purposes, evolution and decay of links in source code comments. Based on this goal, we constructed seven research questions to guide our study. We now present each of these questions, along with the motivation for each.
(RQ1): How prevalent are links in source code comments?
The motivation of RQ1 is to understand whether the use of links in source code is a common practice in the wild. Furthermore, we would like to quantitatively explore the distribution, diversity, and spread of these links across different types of software projects. (RQ2): What kind of link targets are referenced in source code comments? (RQ3): What purpose do links in source code serve? RQ2 and RQ3 require a deeper analysis of the repositories, where we would like to understand the nature and purpose that the links serve. The key motivation for RQ2 is to identify the types of link targets that developers are likely to refer to in source code comments. Furthermore, we would like to characterize the most common types of linked domains.
The key motivation for RQ3 is to determine the reasons why developers use links.
(RQ4): How do links in source code comments evolve? (RQ5): How frequently do link targets referenced in source code comments change? (RQ6): How many links in source code comments are dead?
B. Data Collection
We now describe our methods for repository preparation, comment extraction, and link identification.
Repository preparation. In this work, we analyzed active software development repositories on GitHub written in common programming languages. As common programming languages, we selected seven languages: C, C++, Java, JavaScript, Python, PHP, and Ruby. These languages have been ranked consistently in the top 10 languages on GitHub from 2008 to 2017 (based on the number of repositories from 2008 to 2015 [20], the number of pull requests from 2014 to 2017 [10], and the number of pull requests in 2017 [13]).
Using the GHTorrent dataset 2 [16], we collected active repositories for the seven languages using the following criteria: (i) having more than 500 commits in their entire history (the same threshold used in previous work [4]), and (ii) having at least 100 commits in the most active two years. We designed the second criterion to remove long-term less active repositories and short-term projects that have not been maintained for long (and may not be software development projects). For example, we were able to exclude software-engineering-amsterdam/sea-of-ql, which is a repository of a collaboration space for students in a particular university course, and was reported as a false positive of software project identification [23]. We determine repositories' languages based on the GHTorrent information. Forked repositories are excluded if repositories are recorded in GHTorrent as forks of other repositories.
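To make the two selection criteria concrete, here is a minimal sketch of the filter, assuming commit timestamps per repository have already been pulled from GHTorrent; the function name and the reading of "most active two years" as a sliding 730-day window are our assumptions, not the authors' implementation.

```python
from bisect import bisect_right
from datetime import datetime, timedelta

def is_active_repository(commit_dates, min_total=500, min_two_years=100):
    """Apply the two selection criteria to a single repository.

    commit_dates: iterable of datetime objects, one per commit.
    Criterion (i): more than 500 commits in the entire history.
    Criterion (ii): at least 100 commits in the most active two-year window.
    """
    dates = sorted(commit_dates)
    if len(dates) <= min_total:
        return False
    window = timedelta(days=730)
    for i, start in enumerate(dates):
        # commits falling into [start, start + two years]
        if bisect_right(dates, start + window) - i >= min_two_years:
            return True
    return False

# e.g. is_active_repository([datetime(2015, 1, 1) + timedelta(days=d) for d in range(600)])
```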
With the above criteria, we prepared the candidate list of target repositories for the seven languages as shown in Table I.
When we collected these candidate repositories (from May to June 2018), some repositories were not available because they had been deleted or made private. In total, we obtained more than 25,000 repositories, which is almost 90% of the candidate repositories.
Comment extraction. From each Git repository, we extract source files of the labeled language in the HEAD commit (the latest snapshot of a cloned repository). For example, only .java files are extracted from a Java repository. To process source files, we employ ANTLR4 lexical analyzers for six languages other than Ruby because their grammar definitions are available in the official example repository. 3 For Ruby, we use a standard library, Ripper parser.
We extract all single line comments (e.g., // in C) and multiline comments (/* ... */) according to the grammars. In the case of Python, string literals (''' ... ''') are also regarded as comments because they contain documentation (known as docstrings). In the case of PHP, both HTML comments and PHP code comments are extracted.
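The extraction itself is grammar based (ANTLR4 lexers for six languages, Ripper for Ruby). As a simplified, single-language stand-in, the following sketch uses Python's built-in tokenize module to collect comments and docstring-like string literals from Python source; the docstring heuristic is ours and only approximates what a lexer-based pipeline would do.

```python
import io
import tokenize

def extract_python_comments(source: str):
    """Collect (line number, text) pairs for comments and docstring-like
    strings from Python source. Only an approximation of the grammar-based
    extraction described above."""
    results = []
    prev_type = None
    for tok_type, text, start, _, _ in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok_type == tokenize.COMMENT:
            results.append((start[0], text))
        elif tok_type == tokenize.STRING and prev_type in (
            None, tokenize.NEWLINE, tokenize.INDENT
        ):
            results.append((start[0], text))  # string at statement start: likely a docstring
        if tok_type not in (tokenize.NL, tokenize.COMMENT):
            prev_type = tok_type
    return results
```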
Link identification. From the extracted comments, links are identified using the regular expression /http\S+/ (localhost and IP addresses, which are mainly used for private addresses, are excluded) and validated with the Perl module Data::Validate::URI. We identified a total of 9,654,702 links from the collected repositories as seen in Table I. All links are recorded with the information of the corresponding file, repository identifiers (pairs of account and repository names), commit hashes, and the line number where the surrounding comment starts. Considering the number of repositories, we found that repositories written in C, C++, and Java tend to contain more links compared to repositories in Python and Ruby.
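A sketch of this step in Python; the regular expression is the one quoted above, while the trailing-punctuation trimming, the urllib-based validation (standing in for Perl's Data::Validate::URI), and the exact localhost/IP filter are our assumptions.

```python
import re
from urllib.parse import urlparse

LINK_RE = re.compile(r'http\S+')  # the expression quoted above
LOCAL_RE = re.compile(r'^(localhost|\d{1,3}(\.\d{1,3}){3})$')

def extract_links(comment_text: str):
    """Yield candidate links from a comment, skipping localhost and IP hosts."""
    for raw in LINK_RE.findall(comment_text):
        candidate = raw.rstrip('.,;:)*\'"')  # trim trailing punctuation (our choice)
        parsed = urlparse(candidate)
        if parsed.scheme not in ('http', 'https') or not parsed.hostname:
            continue
        if LOCAL_RE.match(parsed.hostname):
            continue
        yield candidate
```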
C. Online Appendix
Our online appendix contains our 9,654,702 links associated with the information of languages and comment location (GitHub links including account names, repository names, commit hashes, file paths, and line numbers). The appendix is available at https://github.com/NAIST-SE/9.6MillionLinks.
III. FINDINGS
In this section, we present our findings for each research question.
A. Prevalence of Links (RQ1)
To understand the prevalence of links referenced in source code comments (RQ1), we conducted a quantitative analysis of our collected dataset in terms of link existence, domain diversity, and domain popularity.
Link existence. Figure 1a shows the percentages of repositories that have at least one link in their source code comments. We see that, in every language, more than 80% of the repositories contain links in source code comments. Especially for repositories written in C, C++, and PHP, more than 90% of the repositories refer to external sources via links.
Domain diversity. In the obtained 9,654,702 links, there are 57,039 distinct domains (Internet hostnames). Figure 1b shows the distribution of the number of distinct domains per repository, for repositories that have at least one link in their source code comments. Median values are presented in the figure. We found that there is a diversity of links in a single repository even when summarized by their domains. Especially in repositories written in C, C++, JavaScript, and PHP, source code comments link to 10 or more different domains (median).
Popular domains. Figure 1c illustrates the proportion of languages shared by the top 10 most referenced domains. Note that domain ranking is based on the number of repositories instead of the number of links. If links belonging to a domain appear in a small number of repositories, the domain will be low-ranked even if those repositories contain many links.
The github.com domain is the top referenced domain in our dataset. More than 14,000 repositories across seven languages referenced content on github.com. As we will describe in detail in Section III-B, such content includes software homepage, code, and profile of a GitHub contributor. However, we find in Section III-F that many links to github.com are no longer available. We also found many links to code.google.com (7th rank). Such content includes bug report, software homepage, and code. In a statistically representative sample of common domains (sampling described in Section III-B), two out of three links to code.google.com are redirected to github.com, and one links to code.google.com/archive/.
The stackoverflow.com domain is the second most referenced domain and has been linked to from 8,189 repositories. As identified in previous work, Stack Overflow is widely used as a knowledge exchange platform between programmers [38], where programmers can obtain knowledge of good practices from code examples [29], [35], for example. The large number of links to stackoverflow.com in source code comments can be another piece of evidence of developers' needs for knowledge acquisition from external resources. We study how code could be obsolete by not being updated when external sources change in Section III-E.
The top domains differ by programming language: The www.apache.org domain is frequently linked from Java repositories, and the www.gnu.org domain is referenced from C and C++ repositories. Repositories written in JavaScript have many links to the Web-related domains of www.w3.org and developer.mozilla.org.
Summary: We revealed that links in source code comments are prevalent. In more than 80% of the 25,925 active repositories written in seven common languages, there exists at least one link in each repository. The top three most frequently referenced domains per repository are github.com, stackoverflow.com, and en.wikipedia.org.
B. Link Targets (RQ2)
To understand what kind of link targets are referenced in source code comments (RQ2), we conducted a qualitative study of a statistically representative and stratified sample of all links in our dataset.
After an initial analysis of the link data, it quickly became obvious that some domains account for many links while other domains are rare. Based on this observation and to ensure diversity of our sample, we divided the data into three strata:
1) links to commonly linked domains, 2) links to domains sometimes linked, and 3) links to rarely linked domains.
To decide on thresholds for distinguishing domains into those that are commonly, sometimes, and rarely linked, we conducted a visual analysis of the distribution of links per domain in our dataset. Figure 2 shows this distribution using a log scale. While content from the most commonly linked domain was linked more than a million times, many domains appeared in our dataset with a much lower frequency. We used the "step" in the distribution on the left-hand side of Figure 2 to distinguish between domains that are commonly linked and domains that are sometimes linked, with a cutoff frequency of 230. We consider domains which account for exactly one link in our dataset to be rarely linked. Table II shows the number of domains and the number of links in each strata. We then drew a statistically representative sample from each bucket. The required sample size was calculated so that our conclusions about the ratio of links with a specific characteristic would generalize to all links in the same bucket with a confidence level of 95% and a confidence interval of 5. 4 The calculation of statistically significant sample sizes based on population size, confidence interval, and confidence level is well established (first published by Krejcie and Morgan in 1970 [19]). The qualitative analysis was conducted in multiple iterations: in the first iteration, the first two authors independently coded 20 links from the sample, discussed a common coding guide, and tested this coding guide on another 20 links from the sample, refining the guide, merging codes, and adding codes which had been missed. The initial codes were informed by those used by Aniche et al. [5] to categorize content posted on news aggregators, however, we found that their codes did not cover all types of link targets present in our dataset. In the second iteration, the four authors of this paper then independently coded another 30 links from the sample, using the coding guide designed by the first two authors. We then calculated the kappa agreement of this iteration between all four raters, for 30 cases and all 19 codes that emerged from the qualitative analysis. 5 The kappa agreement was 0.81 or "almost perfect" [39]. Based on this encouraging result, the remaining data was then coded by a single author.
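The per-stratum sample sizes can be reproduced with the standard sample-size formula for estimating a proportion at 95% confidence with a margin of 5 percentage points, including a finite population correction; this is our reconstruction of the calculation, with the stratum populations taken from Table II.

```python
import math

def representative_sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Sample size for estimating a proportion within +/- margin at the
    confidence level implied by z, with finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # ~384.16 for 95% confidence, +/-5 points
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# e.g. representative_sample_size(1_000_000) -> 385, representative_sample_size(5_000) -> 357
```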
The following list shows the 19 codes that emerged from our analysis along with a short description which was available in the coding guide:
• 404: link target does not exist (anymore) or cannot be accessed • licence: licence of a software project • software homepage: main web presence of a library or software project • specification: anything that resembles a requirements document or a technical standard • organization homepage: main web presence of an organization or company • other: anything that does not fit the other codes, including if sign-in is required • tutorial or article: technical article or tutorial, without commenting section (blog post otherwise) • API documentation: documentation of an API element • blog post: technical content with a commenting section • application: interactive application (e.g., web application, online utility) • bug report: bug report or issue in an online bug/issue tracker • research paper: academic paper • personal homepage: personal homepage of one individual • code: a source code file • forum thread: thread in a forum or entire forum • GitHub profile: profile of a GitHub contributor • book content: chapter/section of a book or entire book • Q&A thread: question-and-answer thread, but not Stack Overflow • Stack Overflow: question-and-answer thread on Stack Overflow Taxonomy of link targets. Table III shows the result of our qualitative analysis. For commonly-linked domains, license is the most common type of link target, accounting for more than half of the links in our sample, followed by software homepages, i.e., the main web presence of a library or software project. For domains that are linked sometimes from source code comments, the most common type of link target was 404, a non-existing link target. This is a first indicator of the decay of links in source code comments, which we will analyze in detail in the next sections. Software homepages are also prevalent, as are organization homepages, both accounting for more than 10% of all links in our sample. Finally, for links from domains which are rarely linked, the problem of decay is even more serious, affecting 37% of the links in this sample.
In other words, we can conclude with a 95% confidence that between 32 and 42% of all links to domains which are rarely linked from source code comments are dead or inaccessible. The prevalence of the code "other" in the results for links to rarely linked domains is an indicator of the diversity of links present in source code comments.
Summary:
We identified more than a dozen different kinds of link targets, with dead links, licenses, and software homepages being the most prevalent. Dead links are particularly common for rarely linked domains.
C. Link Purpose (RQ3)
To understand the purpose of links referenced in source code comments (RQ3) and similar to (RQ2), we again employed a qualitative analysis of our statistically representative and stratified sample of 1,146 links, only this time focusing on the origin of a link (in a source comment) rather than the target of the link. We used the same iterative approach to design a coding guide, and validated the coding guide by having the four authors code 30 links independently, this time leading to a kappa agreement of 0.70 which indicates "substantial" agreement [39]. The somewhat lower agreement can be explained by the need to extrapolate the purpose of a link from its context in the source code alone, without being able to interview the contributor who added the link.
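A footnote in the original paper notes that kappa agreement was calculated with the calculator at http://justusrandolph.net/kappa/, i.e., Randolph's free-marginal multirater kappa. A minimal re-implementation, assuming every rater assigns exactly one of q codes to each link, might look like this.

```python
from collections import Counter

def free_marginal_kappa(coded_items, q):
    """Randolph's free-marginal multirater kappa.

    coded_items: list of items, each a list with one code per rater.
    q:           number of available codes (19 for RQ2, 8 for RQ3).
    """
    per_item_agreement = []
    for codes in coded_items:
        r = len(codes)
        counts = Counter(codes)
        # proportion of agreeing rater pairs on this item
        per_item_agreement.append(
            sum(c * (c - 1) for c in counts.values()) / (r * (r - 1))
        )
    p_obs = sum(per_item_agreement) / len(per_item_agreement)
    p_exp = 1.0 / q  # chance agreement under free marginals
    return (p_obs - p_exp) / (1 - p_exp)
```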
The following list shows all 8 codes that emerged from our analysis for link purpose, along with a short description which was available in the coding guide. The coding guide was informed by work on source code comments (e.g., [36]), self-admitted technical debt (e.g., [28]), and attribution (e.g., [7]).
• metadata: the link relates to the author of the source code, a related organization, or the license • source/attribution: the comment explicitly indicates that the link is a source of some aspect of the source code (e.g., algorithm) • source code context: the link adds additional information to the source code (use this code for things that do not obviously fit into any of the previous) • see-also: the comment explicitly indicates that the link points to additional reading material (usually accompanied by a phrase such as "see also"). • commented-out source code: the link is part of the source code, e.g., as a parameter value, but has been commented out • link-only: the comment only contains the link • self-admitted technical debt: bug-related, like workaround, under development, and so on • @see: the link is accompanied by "@see", but no further explanation Note that our coding guide required the indicators of see-also and source/attribution to be explicit, thus reducing the guesswork required as part of the qualitative analysis.
Taxonomy of link purpose. Table IV shows the results of the qualitative analysis. For links to commonly linked domains, providing metadata, e.g., in the form of licenses or author information, is by far the most common purpose of a link, covering three quarters of the links in our sample. For links to domains which are only sometimes linked, metadata only accounts for one third of the data, followed by links included for the purpose of attribution, providing context, or see-also information. The results for links to rarely linked domains are even more diverse: we can see from the table that these links are used for context, attribution, and as part of the source code functionality (albeit commented out), to name the top three. Six of the eight codes account for at least 10% of the links in this part of our sample.
Matching link target with purpose. Based on the qualitative analysis conducted for answering RQ2 and RQ3 about the targets and purposes of links in source code comments, we are now able to investigate the relationships between the different types of link targets and the different purposes which emerged from our qualitative analysis. To do so, we applied association rule learning using the apriori algorithm [1] as implemented in the R package arules 6 to our data, treating each link as a transaction containing two items: its target type and its purpose. We used 4 as threshold for support and 0.7 as threshold for confidence, i.e., all rules that we extracted are supported by at least four data points and we have at least a 70% confidence that the left hand side of the rule implies the right hand side. Table V shows the association rules extracted from our data with these settings, separately for each stratum in our sample. Unsurprisingly, the link target type license and the purpose of providing metadata are tightly connected, in particular for links referring to commonly linked domains. In fact, all links to licenses were found to have been included for the reason of providing metadata, and 72% of the metadata is license information. Links to software, organization, and personal homepages are also associated with metadata, across all strata. Although with a relatively low support of seven instances, it is also interesting to note the tight coupling of the link target type bug report and the purpose of admitting technical debt.
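Because every transaction here contains exactly two items (a target type and a purpose), the apriori mining with the arules package reduces to counting pairs; the following sketch applies the same thresholds (absolute support of at least 4, confidence of at least 0.7) and is only an approximation of the R-based analysis.

```python
from collections import Counter

def target_purpose_rules(coded_links, min_support=4, min_confidence=0.7):
    """coded_links: list of (target_type, purpose) pairs from the coded sample.
    Returns rules (lhs, rhs, support, confidence) in both directions."""
    coded_links = list(coded_links)
    pair_counts = Counter(coded_links)
    target_counts = Counter(t for t, _ in coded_links)
    purpose_counts = Counter(p for _, p in coded_links)
    rules = []
    for (target, purpose), support in pair_counts.items():
        if support < min_support:
            continue
        conf_t2p = support / target_counts[target]    # {target} -> {purpose}
        conf_p2t = support / purpose_counts[purpose]  # {purpose} -> {target}
        if conf_t2p >= min_confidence:
            rules.append((target, purpose, support, conf_t2p))
        if conf_p2t >= min_confidence:
            rules.append((purpose, target, support, conf_p2t))
    return rules
```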
Summary: We identified different purposes for the inclusion of links in source code comments, with providing metadata and attribution being the most common. Links are also included for background information, to provide context, or to admit technical debt. In some cases, the link is part of source code which has been commented out.
D. Link Evolution (RQ4)
To understand how links evolve (RQ4), we investigated the revision histories of repositories in the samples from (RQ2). For each sampled link, we searched for an older version of the link that had been revised by the commit introducing the current link. We extracted such link-introducing commits using the git log command (-S option with tracking of file renaming; a sketch of this step follows the list of evolution types below). Specifically, we searched for http(s) links that were removed from the code location where the sampled link was added. We identified 88 revised links out of 1,146 samples, including 24 (6.3%) in common, 31 (8.1%) in sometimes, and 33 (8.7%) in rare. Less than 10% of the links had been revised in each stratum, that is, most of the links have never been updated. We manually analyzed the old and new paths of the links and identified the following evolution types:
• license replacement: a new link refers to a new software license. For example, a link to GNU GPL has been replaced with a link to the Apache License. • organization update: a project or an organization changed its name or website. For example, a project that acquired its own domain updated links to its project website. • change to https: a new link uses HTTPS instead of HTTP for the same location as the previous link. • content move: a new link refers to a slightly different location (e.g. the same path on a different server, the same document name on a different wiki), which is likely the same content. • content update: a new link refers to different content from the previous link, but the new content is likely updated. For example, the Apache Jackrabbit project replaced a link pointing to a draft version of a document 7 with a link to an RFC version. 8 • content change: a new link refers to relevant but different content from the previous link. For example, the Pi4J project replaced a link related to the usage of a serial port of Raspberry Pi 9 with another similar document. 10 • other: we could not identify types for some links whose contents are no longer available. It should be noted that the contents for 20 updated links are 404 Not Found.
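As referenced before the list, link-introducing (and link-removing) commits were located with git log -S; a minimal sketch of that lookup follows, where repo_path, the output format, and the surrounding bookkeeping are our assumptions.

```python
import subprocess

def commits_touching_link(repo_path, file_path, link):
    """Commits whose diff added or removed the given link in file_path,
    following renames, newest first. Returns (commit hash, author date) pairs."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--follow", "-S", link,
         "--format=%H %aI", "--", file_path],
        capture_output=True, text=True, check=True,
    ).stdout
    return [tuple(line.split(" ", 1)) for line in out.splitlines() if line]
```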
Reasons for link evolution. Table VI shows the numbers of link evolution in the three strata. For commonly-linked domains, license replacement and updating organizational information account for about 80% of link revisions. For domains sometimes linked, organization update is the most common, followed by other and content change. For rarely- Summary: Links are rarely updated (less than 9%). Common modifications are updating licenses and organization homepages.
E. Link Target Evolution (RQ5)
After understanding the evolution of links, our next research question (RQ5) asks about the evolution of their targets. To investigate whether link targets referenced in source code comments evolve, we attempted to download all link targets in our sample of 1,146 links using the curl command with a timeout of 60 seconds. As already discussed as part of (RQ2), not all link targets are available. We were able to download a total of 1,034 link targets (90%). We then repeated the same download process exactly ten days later, to see how many of the link targets had changed within this short time frame and what kind of changes had happened.
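A sketch of the two-snapshot comparison, assuming curl is available on the PATH; the 60-second timeout matches the setup described above, while following redirects and the byte-for-byte comparison are our choices.

```python
import subprocess
from typing import Optional

def fetch(url: str, timeout: int = 60) -> Optional[bytes]:
    """Download a link target with curl; return None if curl fails."""
    result = subprocess.run(
        ["curl", "-sL", "--max-time", str(timeout), url],  # -L follows redirects (our choice)
        capture_output=True,
    )
    return result.stdout if result.returncode == 0 else None

def unchanged_since(url: str, first_snapshot: bytes) -> bool:
    """True if the content fetched now is byte-identical to an earlier snapshot."""
    second_snapshot = fetch(url)
    return second_snapshot is not None and second_snapshot == first_snapshot
```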
Changes to the link target. Table VII summarizes the results of this analysis: out of the 1,034 link targets for which curl returned a result, 879 (85%) had not changed at all in the ten-day time frame (the downloaded content was exactly the same, as per the Windows file compare tool fc). We manually analyzed the 155 cases in which the content had changed by opening both versions in a web browser and conducting a visual comparison. The majority of the changes in the remaining 15% can be attributed to automatically generated changes, such as the display of a visitor count or the current date in a footer.
However, a non-negligible number of link targets underwent more significant changes in the ten-day time window: For six links for which we were able to retrieve data on the first download date, there was no content available anymore ten days later. For three links which had displayed an error message when we first attempted to download their content, the specific error message changed. Some link targets changed their website design, and for a few links, the content changed. For example, the download page of TaskWarrior 11 included the following notice when we first downloaded its content: "(For those of you wishing to build task from source on Cygwin, you will need some components installed (make, g++/clang, GnuTLS, libuuid, libreadline), but don't forget -task is a standard part of the Cygwin distribution, so you do not need to build from source, unless you want the latest development snapshot)." Ten days later, this notice was replaced with: "(Please note, that Cygwin is not supported anymore. Please use the Windows Subsystem for Linux to use Taskwarrior on Windows)." We argue that this kind of change is relevant to software developers.
Stack Overflow case study. To investigate this phenomenon in more detail, we conducted a case study with the subset of links pointing to Stack Overflow. As seen in Section III-A, stackoverflow.com is the second most referenced domain.
Among all 9,654,702 obtained links, there are 32,197 links belonging to stackoverflow.com. These Stack Overflow links come in a variety of forms: an abbreviated path to an answer (/a/(answer id)), an abbreviated path to a question (/q/(question id)), and a full path to a question (/questions/(question id)/(title)). Older links start with 'http://' and newer links start with 'https://'. For each Stack Overflow link, we identified the timestamp of when the link was added to a repository by using the same git log command (-S option with tracking file renaming) used in Section III-D. For duplicate links, we consider only the oldest timestamp. Consequently, we obtained a list of 11,464 distinct links with their timestamps.
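The three URL forms listed above can be normalized to a post id before joining against SOTorrent; the regular expressions below are ours, and mapping an answer id to its parent question thread is left out of the sketch.

```python
import re

SO_PATTERNS = [
    (re.compile(r'https?://stackoverflow\.com/a/(\d+)'), 'answer'),
    (re.compile(r'https?://stackoverflow\.com/q/(\d+)'), 'question'),
    (re.compile(r'https?://stackoverflow\.com/questions/(\d+)'), 'question'),
]

def normalize_so_link(url):
    """Return (post type, post id) for the three URL forms described above,
    or None if the URL is not one of them."""
    for pattern, post_type in SO_PATTERNS:
        match = pattern.match(url)
        if match:
            return post_type, int(match.group(1))
    return None
```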
We then made use of the SOTorrent dataset [8] to investigate the extent to which Stack Overflow content had changed since the link to the question or answer had been added to a source code comment in a Git repository. We created a statistically representative sample of 372 links from the population of all unique links to Stack Overflow content in our dataset, and we queried SOTorrent to determine the following metrics for each link:
• the number of text edits on any post (i.e., question or answer) in the same thread, • the number of new comments on any post (i.e., question or answer) in the same thread, • the number of new answers in the same thread, and • the number of edits to the thread title. Thread updates. Figure 3 shows the results of this analysis. More than half of all Stack Overflow threads had at least one change made to the text of a question or answer in the same thread (median: 1, third quartile: 3) after they were committed to a Git repository as part of a source code comment, and more than half of these links attracted at least one new comment in the meantime (median: 2, third quartile: 7). While the number of new answers to a thread was zero in the median case, a quarter of the Stack Overflow threads attracted at least 2 new answers after the link was added in a source code comment (median: 0, third quartile: 2). In total, only 91 (24%) of the 372 Stack Overflow threads in our sample did not undergo any changes after they were added to a Git repository. Summary: We found that even within a short ten-day time window, a non-negligible portion of link targets referenced in source code comments evolve, in some cases adding or modifying pertinent information. In our case study on links pointing to Stack Overflow, we found that more than three quarters of all Stack Overflow threads linked in source code comments attracted at least one change (edit, new answer, or new comment) after being first referenced in a source code comment.
F. Link Decay (RQ6)
Among the obtained 9,654,702 links, there are 382,650 distinct links. To investigate the number of dead links in source code comments (RQ6), we attempted to retrieve the content of all 382,650 unique links using the Perl module LWP. 13 Link retrieval responses.
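The retrieval was done with Perl's LWP; a rough Python equivalent is sketched below, where treating any connection failure or HTTP status of 400 and above as "dead" is our assumption about the criterion.

```python
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

def link_status(url, timeout=30):
    """HTTP status code for url, or an error label if no response is obtained."""
    request = Request(url, headers={"User-Agent": "link-checker"})
    try:
        with urlopen(request, timeout=timeout) as response:
            return response.status
    except HTTPError as error:   # a response arrived, but with an error code (e.g. 404)
        return error.code
    except URLError as error:    # DNS failure, refused connection, timeout, ...
        return f"error: {error.reason}"

def is_dead(url):
    status = link_status(url)
    return not isinstance(status, int) or status >= 400
```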
G. Fixing Dead Links (RQ7)
To fix dead links (RQ7), we collected fixable dead links and submitted pull requests to fix them. We selected dead links that are neither metadata (which would require multiple files to be fixed) nor commented-out source code. Personal blog articles were avoided because they tend to be no longer available. Consequently, we obtained 14 dead links to API documentation, research papers, and so on. After checking the original content in the Wayback Machine 14 , we manually searched for replacement links using specific keywords from the original content. Our fixing process consisted of first forking a personal copy of the project, fixing the link, and then submitting a pull request to the project.
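A sketch of the Wayback Machine lookup using the public availability API (https://archive.org/wayback/available); the timestamp, the response-format handling, and the idea of automating this step are our assumptions, as the paper only states that the original content was checked manually in the Wayback Machine.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def closest_wayback_snapshot(dead_url, timestamp="20180601"):
    """Ask the Wayback Machine availability API for the snapshot closest to
    the given YYYYMMDD timestamp; return its archived URL or None."""
    query = urlencode({"url": dead_url, "timestamp": timestamp})
    with urlopen(f"https://archive.org/wayback/available?{query}", timeout=30) as resp:
        payload = json.load(resp)
    snapshot = payload.get("archived_snapshots", {}).get("closest")
    return snapshot["url"] if snapshot and snapshot.get("available") else None
```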
Pull request results. Developers showed they cared about dead links by accepting all nine pull requests. Since the link itself is a comment, we speculate that it causes almost no conflicts with existing code, so our pull requests were likely to pass all tests and to be merged immediately. Developers responded with comments such as "LGTM (looks good to me)" and "Thanks for spotting the broken link".
Overall, the responses from developers provide sufficient motivation for tool support to assist with fixing broken links. We argue that such comments indicate that developers are concerned with keeping their links alive.
Summary: Developers generally responded positively to the request to fix dead links. All nine responsive projects accepted our pull requests to fix dead links.
IV. RECOMMENDATIONS
Our findings can be summarized into recommendations for developers and researchers.
Recommendations for software developers including links in source code comments are:
• Try to reference permanent links, as it is reported that more than 30% of links no longer work after a four-year period [18]. Referencing research papers via DOI is preferable to linking researchers' personal Web pages. Explicitly mentioning tags or commit hashes when referencing code on GitHub is recommended, as software structure can change (we found many dead links to GitHub in Section III-F); a small sketch of such link pinning follows this list. • Check link targets for new information on a regular basis, as referenced external resources can be considered software documentation that supports comprehension and maintenance activities. In addition, link target updates can trigger improvements and updates to the code (as seen in Section III-E).
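For the GitHub recommendation above, a branch URL can be rewritten into a commit-pinned permalink mechanically; the function below is an illustrative sketch (owner/repo/path are placeholders, and pinning to the local HEAD is a simplification, since the correct commit is the one that actually contains the referenced content).

```python
import re
import subprocess

def pin_github_link(url, repo_path="."):
    """Rewrite .../blob/<branch>/... into .../blob/<commit sha>/... using the
    commit currently checked out in a local clone at repo_path."""
    sha = subprocess.run(
        ["git", "-C", repo_path, "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return re.sub(
        r"(https://github\.com/[^/]+/[^/]+/blob/)[^/]+/",
        rf"\g<1>{sha}/",
        url,
        count=1,
    )

# pin_github_link("https://github.com/owner/repo/blob/master/src/util.c")
# -> "https://github.com/owner/repo/blob/<sha of local HEAD>/src/util.c"
```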
We can also consider future work with the following possible challenges.
• Further understanding of external sources. We found many sources as shown in Figure 1c and Table III. Although some sources have been already studied, for example, licenses [12], self-admitted technical debt [28], and Stack Overflow [38], other sources have not been well-studied with regard to their impact and influence on software development, such as research papers and Wikipedia articles. • Further studies of source code comments to understand how knowledge (related to knowledge-based theory [44] and human capital [26], [40]) is summarized and shared via source code comments. Further analyses of source code comment contents [27] would be required. • Tool support for external source referencing, tracking, and updating. Although we recommend developers to maintain links and associated code, it is not always possible. Tools or systems to help developers fix link issues and maintain code automatically could be practically useful.
V. THREATS TO VALIDITY
Threats to the construct validity exist in our approach to link identification. Since we identified links per line in source code comments, links located across multiple lines cannot be extracted. Note that we did not encounter any such multiple-line links in our representative sample of 1,146 links. Hence we consider that the impact of incorrect link identification because of multiple-line links is small.
Threats to the external validity exist in our repository preparation. Although we analyzed a large amount of repositories on GitHub, we cannot generalize our findings to industry nor open source projects in general; some open source repositories are hosted outside of GitHub, e.g., on GitLab or private servers.
To mitigate threats to reliability, we prepared an online appendix of our 9,654,702 links with associated information (see Section II-C).
VII. CONCLUSION
To understand purposes, evolution, and decay of links in source code comments, we conducted (i) a quantitative study of 9,654,702 links from source code comments in 25,925 Git repositories to establish the prevalence of links in source code comments; (ii) a qualitative study of a stratified sample of 1,146 links to determine the kinds of link targets and purposes for including links present in our dataset; (iii) a quantitative and qualitative study to investigate the evolution of links in source code comments and their targets; and (iv) a quantitative study to determine the extent to which links in source code comments are affected by link decay.
Our work has shown that links in source code comments indeed suffer from decay, from insufficient versioning (when link targets evolve), and from lack of bidirectional traceability (which could help avoid decay). Based on this work which has established the prevalence of links in source code comments, their multiple purposes and targets, issues of decay, and practical needs of fixing dead links, there are many open avenues for future work: understanding the role of external sources for software development, further studies of source code comments, and tool support for external source referencing, to name a few. | 6,149 |
1901.07440 | 2913669491 | Links are an essential feature of the World Wide Web, and source code repositories are no exception. However, despite their many undisputed benefits, links can suffer from decay, insufficient versioning, and lack of bidirectional traceability. In this paper, we investigate the role of links contained in source code comments from these perspectives. We conducted a large-scale study of around 9.6 million links to establish their prevalence, and we used a mixed-methods approach to identify the links' targets, purposes, decay, and evolutionary aspects. We found that links are prevalent in source code repositories, that licenses, software homepages, and specifications are common types of link targets, and that links are often included to provide metadata or attribution. Links are rarely updated, but many link targets evolve. Almost 10% of the links included in source code comments are dead. We then submitted a batch of link-fixing pull requests to open source software repositories, resulting in most of our fixes being merged successfully. Our findings indicate that links in source code comments can indeed be fragile, and our work opens up avenues for future work to address these problems. | Self-admitted technical debt is a commenting activity that has been well-studied in recent years @cite_20 . @cite_18 and @cite_7 studied the removal of self-admitted technical debt based on the modification of comments. Our finding of referencing bug reports for self-admitted technical debt could be another opportunity to study development activities around technical debt. | {
"abstract": [
"Technical debt refers to the phenomena of taking shortcuts to achieve short term gain at the cost of higher maintenance efforts in the future. Recently, approaches were developed to detect technical debt through code comments, referred to as Self-Admitted Technical Debt (SATD). Due to its importance, several studies have focused on the detection of SATD and examined its impact on software quality. However, preliminary findings showed that in some cases SATD may live in a project for a long time, i.e., more than 10 years. These findings clearly show that not all SATD may be regarded as 'bad' and some SATD needs to be removed, while other SATD may be fine to take on.Therefore, in this paper, we study the removal of SATD. In an empirical study on five open source projects, we examine how much SATD is removed and who removes SATD? We also investigate for how long SATD lives in a project and what activities lead to the removal of SATD? Our findings indicate that the majority of SATD is removed and that the majority is self-removed (i.e., removed by the same person that introduced it). Moreover, we find that SATD can last between approx. 18-172 days, on median. Finally, through a developer survey, we find that developers mostly use SATD to track future bugs and areas of the code that need improvements. Also, developers mostly remove SATD when they are fixing bugs or adding new features. Our findings contribute to the body of empirical evidence on SATD, in particular, evidence pertaining to its removal.",
"Technical Debt (TD) has been defined as \"code being not quite right yet\", and its presence is often self-admitted by developers through comments. The purpose of such comments is to keep track of TD and appropriately address it when possible. Building on a previous quantitative investigation by on the removal of self-admitted technical debt (SATD), in this paper we perform an in-depth quantitative and qualitative study of how SATD is addressed in five Java open source projects. On the one hand, we look at whether SATD is \"accidentally\" removed, and the extent to which the SATD removal is being documented. We found that that (i) between 20 and 50 of SATD comments are accidentally removed while entire classes or methods are dropped, (ii) 8 of the SATD removal is acknowledged in commit messages, and (iii) while most of the changes addressing SATD require complex source code changes, very often SATD is addressed by specific changes to method calls or conditionals. Our results can be used to better plan TD management or learn patterns for addressing certain kinds of TD and provide recommendations to developers.",
"Throughout a software development life cycle, developers knowingly commit code that is either incomplete, requires rework, produces errors, or is a temporary workaround. Such incomplete or temporary workarounds are commonly referred to as 'technical debt'. Our experience indicates that self-admitted technical debt is common in software projects and may negatively impact software maintenance, however, to date very little is known about them. Therefore, in this paper, we use source-code comments in four large open source software projects-Eclipse, Chromium OS, Apache HTTP Server, and ArgoUML to identify self-admitted technical debt. Using the identified technical debt, we study 1) the amount of self-admitted technical debt found in these projects, 2) why this self-admitted technical debt was introduced into the software projects and 3) how likely is the self-admitted technical debt to be removed after their introduction. We find that the amount of self-admitted technical debt exists in 2.4 -- 31 of the files. Furthermore, we find that developers with higher experience tend to introduce most of the self-admitted technical debt and that time pressures and complexity of the code do not correlate with the amount of self-admitted technical debt. Lastly, although self-admitted technical debt is meant to be addressed or removed in the future, only between 26.3 -- 63.5 of self-admitted technical debt gets removed from projects after introduction."
],
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_20"
],
"mid": [
"2767729231",
"2805357019",
"2045336717"
]
} | 9.6 Million Links in Source Code Comments: Purpose, Evolution, and Decay | When Ted Nelson started Project Xanadu 1 in 1960, he envisioned "an entire form of literature where links do not break as versions change; where documents may be closely compared side by side and closely annotated; where it is possible to see the origins of every quotation; and in which there is a valid copyright system-a literary, legal and business arrangement-for friction-less, non-negotiated quotation at any time and in any amount" [25]. Links were supposed to be visible and could be followed from all endpoints, with permission to link to a document explicitly granted by the act of publication [2]. Decades later, Nelson witnessed the birth of the World Wide Web, which in his words "trivialized this original Xanadu model, vastly but incorrectly simplifying these problems to a world of fragile ever-breaking one-way links, with no recognition of change or copyright, and no support for multiple versions or principled re-use" [25]. As predicted by Nelson, the Internet and its implementation of links have afforded us countless opportunities since, but also experienced issues such as link decay [17], [22], digital plagiarism [9], and the need to rely on external services to keep historical copies of web content [24].
In this work, we investigate the role of links contained in source code comments from the perspective of these opportunities and challenges: what purposes do they serve, how do they and their targets evolve, and how often do they break? The significance of this work is closely related to software documentation [33] and self-admitted technical debt [28]. To improve documentation and mitigate potential issues, it is important to understand developers' typical knowledge sharing activities by referencing external sources, and to investigate link decay as a potential problem.
Our work is related to and inspired by recent research on source code comments in terms of documentation, traceability, licensing, and attribution. For example, source code comments have been found to document technical debt [28] and to support articulation work [36]. They are fragile with respect to identifier renaming, i.e., traceability between comments and code is easily lost [32]. Source code comments located at the beginning of a file often include a text or a link indicating the copyright and license information of the file [12]. These comments are updated during the evolution of a product by the copyright holders [42]. Links in source code comments are sometimes used for attribution when source code has been taken from elsewhere-however, the vast majority of code snippets is copied without attribution [7], [8]. Despite these research efforts, to the best of our knowledge, the role of links in source code comments has not been studied comprehensively so far.
To fill this gap, in this paper, we first lay the foundation for understanding the role of links in source code comments by collecting 9,654,702 links from source code comments in 25,925 Git repositories. Our parser is able to extract comments from source code written in 7 programming languages. We find that links in source code comments are common: more than 80% of the repositories in our study contained at least one link. Through a qualitative study of a stratified sample of 1,146 links, we establish the kinds of link targets that are referenced in source code comments. To understand how links are used to indicate issues related to attribution, technical debt, copyright, and licensing, our qualitative study also uncovers the various purposes for including links in source code comments. We find that licenses, software homepages, and specifications are among the most prevalent types of link targets, and that links are often used to provide metadata or attribution. Link decay has the potential of making documentation in source code comments fragile and buggy. We investigate this issue from two perspectives: we analyze the evolution of the links in the repositories' commit histories and we examine how often link targets referenced in source code comments change. We find that links are rarely updated, but their targets evolve, in almost 10% of all cases leading to dead links. We then submit fixes to a subset of these broken links as pull requests, most of which were successfully merged by the maintainers of the corresponding open source projects.
In summary, this paper's contributions are three-fold:
• a large-scale and comprehensive study of around 9.6 million links to establish the prevalence of links in source code comments, • a mixed-methods study to identify targets, purposes, and evolutionary aspects of links in source code comments, and • an analysis of the extent to which links in source code comments are affected by link decay, with all nine linkfixing pull requests submitted to active open source projects already merged by the projects' maintainers.
II. RESEARCH METHOD
In this section, we present our research questions and data collection methodology, and we introduce the data contained in our online appendix.
A. Research Questions
The main goal of the study is to gain insights into the purposes, evolution and decay of links in source code comments. Based on this goal, we constructed seven research questions to guide our study. We now present each of these questions, along with the motivation for each.
(RQ1): How prevalent are links in source code comments?
The motivation of RQ1 is to understand whether the use of links in source code is a common practice in the wild. Furthermore, we would like to quantitatively explore the distribution, diversity, and spread of these links across different types of software projects. (RQ2): What kind of link targets are referenced in source code comments? (RQ3): What purpose do links in source code serve? RQ2 and RQ3 require a deeper analysis of the repositories, where we would like to understand the nature and purpose that the links serve. The key motivation for RQ2 is to identify the types of link targets that developers are likely to refer to in source code comments. Furthermore, we would like to characterize the most common types of linked domains.
The key motivation for RQ3 is to determine the reasons why developers use links.
(RQ4): How do links in source code comments evolve? (RQ5): How frequently do link targets referenced in source code comments change? (RQ6): How many links in source code comments are dead?
B. Data Collection
We now describe our methods for repository preparation, comment extraction, and link identification.
Repository preparation. In this work, we analyzed active software development repositories on GitHub written in common programming languages. As common programming languages, we selected seven languages: C, C++, Java, JavaScript, Python, PHP, and Ruby. These languages have been ranked consistently in the top 10 languages on GitHub from 2008 to 2017 (based on the number of repositories from 2008 to 2015 [20], the number of pull requests from 2014 to 2017 [10], and the number of pull requests in 2017 [13]).
Using the GHTorrent dataset 2 [16], we collected active repositories for the seven languages using the following criteria: (i) having more than 500 commits in their entire history (the same threshold used in previous work [4]), and (ii) having at least 100 commits in the most active two years. We designed the second criterion to remove long-term less active repositories and short-term projects that have not been maintained for long (and may not be software development projects). For example, we were able to exclude software-engineering-amsterdam/sea-of-ql, which is a repository of a collaboration space for students in a particular university course, and was reported as a false positive of software project identification [23]. We determine repositories' languages based on the GHTorrent information. Forked repositories are excluded if repositories are recorded in GHTorrent as forks of other repositories.
With the above criteria, we prepared the candidate list of target repositories for the seven languages as shown in Table I.
When we collected these candidate repositories (from May to June 2018), some repositories were not available because they had been deleted or made private. In total, we obtained more than 25,000 repositories, which is almost 90% of the candidate repositories.
Comment extraction. From each Git repository, we extract source files of the labeled language in the HEAD commit (the latest snapshot of a cloned repository). For example, only .java files are extracted from a Java repository. To process source files, we employ ANTLR4 lexical analyzers for six languages other than Ruby because their grammar definitions are available in the official example repository. 3 For Ruby, we use a standard library, Ripper parser.
We extract all single line comments (e.g., // in C) and multiline comments (/* ... */) according to the grammars. In the case of Python, string literals (''' ... ''') are also regarded as comments because they contain documentation (known as docstrings). In the case of PHP, both HTML comments and PHP code comments are extracted.
Link identification. From the extracted comments, links are identified using the regular expression /http\S+/ (localhost and IP addresses, which are mainly used for private addresses, are excluded) and validated with the Perl module Data::Validate::URI. We identified a total of 9,654,702 links from the collected repositories as seen in Table I. All links are recorded with the information of the corresponding file, repository identifiers (pairs of account and repository names), commit hashes, and the line number where the surrounding comment starts. Considering the number of repositories, we found that repositories written in C, C++, and Java tend to contain more links compared to repositories in Python and Ruby.
C. Online Appendix
Our online appendix contains our 9,654,702 links associated with the information of languages and comment location (GitHub links including account names, repository names, commit hashes, file paths, and line numbers). The appendix is available at https://github.com/NAIST-SE/9.6MillionLinks.
III. FINDINGS
In this section, we present our findings for each research question.
A. Prevalence of Links (RQ1)
To understand the prevalence of links referenced in source code comments (RQ1), we conducted a quantitative analysis of our collected dataset in terms of link existence, domain diversity, and domain popularity.
Link existence. Figure 1a shows the percentages of repositories that have at least one link in their source code comments. We see that, in every language, more than 80% of the repositories contain links in source code comments. Especially for repositories written in C, C++, and PHP, more than 90% of the repositories refer to external sources via links.
Domain diversity. In the obtained 9,654,702 links, there are 57,039 distinct domains (Internet hostnames). Figure 1b shows the distribution of the number of distinct domains per repository, for repositories that have at least one link in their source code comments. Median values are presented in the figure. We found that there is a diversity of links in a single repository even when summarized by their domains. Especially in repositories written in C, C++, JavaScript, and PHP, source code comments link to 10 or more different domains (median).
Popular domains. Figure 1c illustrates the proportion of languages shared by the top 10 most referenced domains. Note that domain ranking is based on the number of repositories instead of the number of links. If links belonging to a domain appear in a small number of repositories, the domain will be low-ranked even if those repositories contain many links.
The github.com domain is the top referenced domain in our dataset. More than 14,000 repositories across seven languages referenced content on github.com. As we will describe in detail in Section III-B, such content includes software homepage, code, and profile of a GitHub contributor. However, we find in Section III-F that many links to github.com are no longer available. We also found many links to code.google.com (7th rank). Such content includes bug report, software homepage, and code. In a statistically representative sample of common domains (sampling described in Section III-B), two out of three links to code.google.com are redirected to github.com, and one links to code.google.com/archive/.
The stackoverflow.com domain is the second most referenced domain and has been linked to from 8,189 repositories. As identified in previous work, Stack Overflow is widely used as a knowledge exchange platform between programmers [38], where programmers can obtain knowledge of good practices from code examples [29], [35], for example. The large number of links to stackoverflow.com in source code comments can be another piece of evidence of developers' needs for knowledge acquisition from external resources. We study how code could be obsolete by not being updated when external sources change in Section III-E.
The top domains differ by programming language: The www.apache.org domain is frequently linked from Java repositories, and the www.gnu.org domain is referenced from C and C++ repositories. Repositories written in JavaScript have many links to the Web-related domains of www.w3.org and developer.mozilla.org.
Summary: We revealed that links in source code comments are prevalent. In more than 80% of the 25,925 active repositories written in seven common languages, there exists at least one link in each repository. The top three most frequently referenced domains per repository are github.com, stackoverflow.com, and en.wikipedia.org.
B. Link Targets (RQ2)
To understand what kind of link targets are referenced in source code comments (RQ2), we conducted a qualitative study of a statistically representative and stratified sample of all links in our dataset.
After an initial analysis of the link data, it quickly became obvious that some domains account for many links while other domains are rare. Based on this observation and to ensure diversity of our sample, we divided the data into three strata:
1) links to commonly linked domains, 2) links to domains sometimes linked, and 3) links to rarely linked domains.
To decide on thresholds for distinguishing domains into those that are commonly, sometimes, and rarely linked, we conducted a visual analysis of the distribution of links per domain in our dataset. Figure 2 shows this distribution using a log scale. While content from the most commonly linked domain was linked more than a million times, many domains appeared in our dataset with a much lower frequency. We used the "step" in the distribution on the left-hand side of Figure 2 to distinguish between domains that are commonly linked and domains that are sometimes linked, with a cutoff frequency of 230. We consider domains which account for exactly one link in our dataset to be rarely linked. Table II shows the number of domains and the number of links in each stratum.
We then drew a statistically representative sample from each bucket. The required sample size was calculated so that our conclusions about the ratio of links with a specific characteristic would generalize to all links in the same bucket with a confidence level of 95% and a confidence interval of 5. The calculation of statistically significant sample sizes based on population size, confidence interval, and confidence level is well established (first published by Krejcie and Morgan in 1970 [19]).
The qualitative analysis was conducted in multiple iterations: in the first iteration, the first two authors independently coded 20 links from the sample, discussed a common coding guide, and tested this coding guide on another 20 links from the sample, refining the guide, merging codes, and adding codes which had been missed. The initial codes were informed by those used by Aniche et al. [5] to categorize content posted on news aggregators; however, we found that their codes did not cover all types of link targets present in our dataset. In the second iteration, the four authors of this paper then independently coded another 30 links from the sample, using the coding guide designed by the first two authors. We then calculated the kappa agreement of this iteration between all four raters, for 30 cases and all 19 codes that emerged from the qualitative analysis (kappa agreement was calculated using http://justusrandolph.net/kappa/). The kappa agreement was 0.81 or "almost perfect" [39]. Based on this encouraging result, the remaining data was then coded by a single author.
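The agreement figures reported above were obtained with Randolph's online calculator; for illustration, a free-marginal multirater kappa can be computed with a few lines of Python. This is a sketch under the assumption that the standard Fleiss-style observed agreement is combined with a uniform chance baseline of 1/k; the ratings in the example are hypothetical, not taken from the study.

```python
def free_marginal_kappa(ratings, num_categories):
    """Randolph's free-marginal multirater kappa.

    `ratings` is a list of cases; each case is the list of codes assigned by the raters,
    e.g. [["metadata", "metadata", "see-also", "metadata"], ...].
    """
    observed = 0.0
    for case in ratings:
        n = len(case)                                   # raters for this case
        counts = {code: case.count(code) for code in set(case)}
        observed += sum(v * (v - 1) for v in counts.values()) / (n * (n - 1))
    p_o = observed / len(ratings)                       # mean observed agreement
    p_e = 1.0 / num_categories                          # uniform chance agreement
    return (p_o - p_e) / (1 - p_e)

cases = [["metadata"] * 4, ["metadata", "metadata", "see-also", "metadata"]]
print(round(free_marginal_kappa(cases, num_categories=8), 2))  # 0.71 for this toy example
```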
The following list shows the 19 codes that emerged from our analysis along with a short description which was available in the coding guide:
• 404: link target does not exist (anymore) or cannot be accessed
• licence: licence of a software project
• software homepage: main web presence of a library or software project
• specification: anything that resembles a requirements document or a technical standard
• organization homepage: main web presence of an organization or company
• other: anything that does not fit the other codes, including if sign-in is required
• tutorial or article: technical article or tutorial, without commenting section (blog post otherwise)
• API documentation: documentation of an API element
• blog post: technical content with a commenting section
• application: interactive application (e.g., web application, online utility)
• bug report: bug report or issue in an online bug/issue tracker
• research paper: academic paper
• personal homepage: personal homepage of one individual
• code: a source code file
• forum thread: thread in a forum or entire forum
• GitHub profile: profile of a GitHub contributor
• book content: chapter/section of a book or entire book
• Q&A thread: question-and-answer thread, but not Stack Overflow
• Stack Overflow: question-and-answer thread on Stack Overflow
Taxonomy of link targets. Table III shows the result of our qualitative analysis. For commonly-linked domains, license is the most common type of link target, accounting for more than half of the links in our sample, followed by software homepages, i.e., the main web presence of a library or software project. For domains that are linked sometimes from source code comments, the most common type of link target was 404, a non-existing link target. This is a first indicator of the decay of links in source code comments, which we will analyze in detail in the next sections. Software homepages are also prevalent, as are organization homepages, both accounting for more than 10% of all links in our sample. Finally, for links from domains which are rarely linked, the problem of decay is even more serious, affecting 37% of the links in this sample.
In other words, we can conclude with a 95% confidence that between 32 and 42% of all links to domains which are rarely linked from source code comments are dead or inaccessible. The prevalence of the code "other" in the results for links to rarely linked domains is an indicator of the diversity of links present in source code comments.
Summary:
We identified more than a dozen different kinds of link targets, with dead links, licenses, and software homepages being the most prevalent. Dead links are particularly common for rarely linked domains.
C. Link Purpose (RQ3)
To understand the purpose of links referenced in source code comments (RQ3) and similar to (RQ2), we again employed a qualitative analysis of our statistically representative and stratified sample of 1,146 links, only this time focusing on the origin of a link (in a source comment) rather than the target of the link. We used the same iterative approach to design a coding guide, and validated the coding guide by having the four authors code 30 links independently, this time leading to a kappa agreement of 0.70 which indicates "substantial" agreement [39]. The somewhat lower agreement can be explained by the need to extrapolate the purpose of a link from its context in the source code alone, without being able to interview the contributor who added the link.
The following list shows all 8 codes that emerged from our analysis for link purpose, along with a short description which was available in the coding guide. The coding guide was informed by work on source code comments (e.g., [36]), self-admitted technical debt (e.g., [28]), and attribution (e.g., [7]).
• metadata: the link relates to the author of the source code, a related organization, or the license
• source/attribution: the comment explicitly indicates that the link is a source of some aspect of the source code (e.g., algorithm)
• source code context: the link adds additional information to the source code (use this code for things that do not obviously fit into any of the previous)
• see-also: the comment explicitly indicates that the link points to additional reading material (usually accompanied by a phrase such as "see also")
• commented-out source code: the link is part of the source code, e.g., as a parameter value, but has been commented out
• link-only: the comment only contains the link
• self-admitted technical debt: bug-related, like workaround, under development, and so on
• @see: the link is accompanied by "@see", but no further explanation
Note that our coding guide required the indicators of see-also and source/attribution to be explicit, thus reducing the guesswork required as part of the qualitative analysis.
Taxonomy of link purpose. Table IV shows the results of the qualitative analysis. For links to commonly linked domains, providing metadata, e.g., in the form of licenses or author information, is by far the most common purpose of a link, covering three quarters of the links in our sample. For links to domains which are only sometimes linked, metadata only accounts for one third of the data, followed by links included for the purpose of attribution, providing context, or see-also information. The results for links to rarely linked domains are even more diverse: we can see from the table that these links are used for context, attribution, and as part of the source code functionality (albeit commented out), to name the top three. Six of the eight codes account for at least 10% of the links in this part of our sample.
Matching link target with purpose. Based on the qualitative analysis conducted for answering RQ2 and RQ3 about the targets and purposes of links in source code comments, we are now able to investigate the relationships between the different types of link targets and the different purposes which emerged from our qualitative analysis. To do so, we applied association rule learning using the apriori algorithm [1] as implemented in the R package arules 6 to our data, treating each link as a transaction containing two items: its target type and its purpose. We used 4 as threshold for support and 0.7 as threshold for confidence, i.e., all rules that we extracted are supported by at least four data points and we have at least a 70% confidence that the left hand side of the rule implies the right hand side. Table V shows the association rules extracted from our data with these settings, separately for each stratum in our sample. Unsurprisingly, the link target type license and the purpose of providing metadata are tightly connected, in particular for links referring to commonly linked domains. In fact, all links to licenses were found to have been included for the reason of providing metadata, and 72% of the metadata is license information. Links to software, organization, and personal homepages are also associated with metadata, across all strata. Although with a relatively low support of seven instances, it is also interesting to note the tight coupling of the link target type bug report and the purpose of admitting technical debt.
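Because every "transaction" here contains exactly two items (a target type and a purpose), the rule extraction can be sketched directly without a full apriori implementation. The snippet below is not the R/arules pipeline used in the study, but it applies the same support (>= 4) and confidence (>= 0.7) thresholds to hypothetical coded links.

```python
from collections import Counter

def target_purpose_rules(links, min_support=4, min_confidence=0.7):
    """Extract rules of the form target-type -> purpose (and the reverse).

    `links` is a list of (target_type, purpose) pairs, one per coded link.
    """
    pair_counts = Counter(links)
    item_counts = Counter()
    for target, purpose in links:
        item_counts[("target", target)] += 1
        item_counts[("purpose", purpose)] += 1

    rules = []
    for (target, purpose), support in pair_counts.items():
        if support < min_support:
            continue
        conf_t2p = support / item_counts[("target", target)]
        conf_p2t = support / item_counts[("purpose", purpose)]
        if conf_t2p >= min_confidence:
            rules.append((target, "->", purpose, support, round(conf_t2p, 2)))
        if conf_p2t >= min_confidence:
            rules.append((purpose, "->", target, support, round(conf_p2t, 2)))
    return rules

sample = [("licence", "metadata")] * 5 + [("software homepage", "metadata")] * 3
print(target_purpose_rules(sample))  # [('licence', '->', 'metadata', 5, 1.0)]
```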
Summary: We identified different purposes for the inclusion of links in source code comments, with providing metadata and attribution being the most common. Links are also included for background information, to provide context, or to admit technical debt. In some cases, the link is part of source code which has been commented out.
D. Link Evolution (RQ4)
To understand how links evolve (RQ4), we investigated the revision histories of repositories in the samples from (RQ2). For each sampled link, we searched for an older version of the link that had been revised by the commit that introduced the sampled link. We extracted the commit introducing a link by using the git log command (-S option, with file renames tracked). We then searched for http(s) links removed from the code location where the sampled link was added. We identified 88 revised links out of 1,146 samples, including 24 (6.3%) in the common, 31 (8.1%) in the sometimes, and 33 (8.7%) in the rarely linked stratum. Less than 10% of the links had been revised in each stratum, that is, most of the links have never been updated. We manually analyzed the old and new paths of the links and identified the following evolution types:
• license replacement: a new link refers to a new software license. For example, a link to GNU GPL has been replaced with a link to the Apache License.
• organization update: a project or an organization changed its name or website. For example, a project that acquired their own domain updated links to their project website.
• change to https: a new link uses HTTPS instead of HTTP for the same location as the previous link.
• content move: a new link refers to a slightly different location (e.g. the same path on a different server, the same document name on a different wiki), which is likely the same content.
• content update: a new link refers to different content from the previous link, but the new content is likely updated. For example, the Apache Jackrabbit project replaced a link pointing to a draft version of a document with a link to an RFC version.
• content change: a new link refers to relevant but different content from the previous link. For example, the Pi4J project replaced a link related to the usage of a serial port of the Raspberry Pi with another similar document.
• other: we could not identify types for some links whose contents are no longer available.
It should be noted that the contents for 20 updated links are 404 Not Found.
Reasons for link evolution. Table VI shows the number of link revisions of each type in the three strata. For commonly-linked domains, license replacement and updating organizational information account for about 80% of link revisions. For domains sometimes linked, organization update is the most common, followed by other and content change; Table VI also gives the breakdown for rarely-linked domains.
Summary: Links are rarely updated (less than 9%). Common modifications are updating licenses and organization homepages.
E. Link Target Evolution (RQ5)
After understanding the evolution of links, our next research question (RQ5) asks about the evolution of their targets. To investigate whether link targets referenced in source code comments evolve, we attempted to download all link targets in our sample of 1,146 links using the curl command with a timeout of 60 seconds. As already discussed as part of (RQ2), not all link targets are available. We were able to download a total of 1,034 link targets (90%). We then repeated the same download process exactly ten days later, to see how many of the link targets had changed within this short time frame and what kind of changes had happened.
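The two-snapshot comparison can be sketched as follows; the study used curl with a 60-second timeout and the Windows fc tool, while this rough Python equivalent shells out to curl and compares content hashes instead of raw bytes (an assumption that any byte-level difference should count as a change).

```python
import hashlib
import subprocess

def snapshot(url, timeout=60):
    """Download a link target with curl (as in the study) and return a content hash."""
    result = subprocess.run(
        ["curl", "--silent", "--location", "--max-time", str(timeout), url],
        capture_output=True,
    )
    if result.returncode != 0:
        return None                      # treat failures (timeouts, dead hosts) as unavailable
    return hashlib.sha256(result.stdout).hexdigest()

def changed_between_snapshots(url, first_hash):
    """Re-download the target later and report whether its content differs."""
    second_hash = snapshot(url)
    return first_hash is not None and second_hash is not None and first_hash != second_hash
```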
Changes to the link target. Table VII summarizes the results of this analysis: out of the 1,034 link targets for which curl returned a result, 879 (85%) had not changed at all in the ten-day time frame (the downloaded content was exactly the same, as per the Windows file compare tool fc). We manually analyzed the 155 cases in which the content had changed by opening both versions in a web browser and conducting a visual comparison. The majority of the changes in the remaining 15% can be attributed to automatically generated changes, such as the display of a visitor count or the current date in a footer.
However, a non-negligible number of link targets underwent more significant changes in the ten-day time window: For six links for which we were able to retrieve data on the first download date, there was no content available anymore ten days later. For three links which had displayed an error message when we first attempted to download their content, the specific error message changed. Some link targets changed their website design, and for a few links, the content changed. For example, the download page of TaskWarrior 11 included the following notice when we first downloaded its content: "(For those of you wishing to build task from source on Cygwin, you will need some components installed (make, g++/clang, GnuTLS, libuuid, libreadline), but don't forget -task is a standard part of the Cygwin distribution, so you do not need to build from source, unless you want the latest development snapshot)." Ten days later, this notice was replaced with: "(Please note, that Cygwin is not supported anymore. Please use the Windows Subsystem for Linux to use Taskwarrior on Windows)." We argue that this kind of change is relevant to software developers.
Stack Overflow case study. To investigate this phenomenon in more detail, we conducted a case study with the subset of links pointing to Stack Overflow. As seen in Section III-A, stackoverflow.com is the second most referenced domain.
Among all 9,654,702 obtained links, there are 32,197 links belonging to stackoverflow.com. These Stack Overflow links take a variety of forms: an abbreviated path to an answer (/a/(answer id)), an abbreviated path to a question (/q/(question id)), and a full path to a question (/questions/(question id)/(title)). Older links start with 'http://' and newer links start with 'https://'. For each Stack Overflow link, we identified the timestamp of when the link was added to a repository by using the same git log command (-S option with tracking file renaming) used in Section III-D. For duplicate links, we consider only the oldest timestamp. Consequently, we obtained a list of 11,464 distinct links with their timestamps.
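For illustration, the pickaxe-based lookup of the commit that introduced a link might look as follows in Python; -S and --follow correspond to the git log usage described above, while the remaining options (such as the date format) are our assumptions for this sketch.

```python
import subprocess

def link_added_date(repo_dir, file_path, url):
    """Find the oldest commit whose diff adds or removes `url` in `file_path`.

    Uses git's pickaxe search (-S) with rename tracking (--follow), as described above.
    """
    out = subprocess.run(
        ["git", "-C", repo_dir, "log", "--follow", "--format=%H %aI", "-S", url, "--", file_path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    if not out:
        return None
    oldest = out.splitlines()[-1]          # git log lists newest commits first
    commit_hash, author_date = oldest.split(" ", 1)
    return commit_hash, author_date
```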
We then made use of the SOTorrent dataset [8] to investigate the extent to which Stack Overflow content had changed since the link to the question or answer had been added to a source code comment in a Git repository. We created a statistically representative sample of 372 links from the population of all unique links to Stack Overflow content in our dataset, and we queried SOTorrent to determine the following metrics for each link:
• the number of text edits on any post (i.e., question or answer) in the same thread,
• the number of new comments on any post (i.e., question or answer) in the same thread,
• the number of new answers in the same thread, and
• the number of edits to the thread title.
Thread updates. Figure 3 shows the results of this analysis. More than half of all Stack Overflow threads had at least one change made to the text of a question or answer in the same thread (median: 1, third quartile: 3) after they were committed to a Git repository as part of a source code comment, and more than half of these links attracted at least one new comment in the meantime (median: 2, third quartile: 7). While the number of new answers to a thread was zero in the median case, a quarter of the Stack Overflow threads attracted at least 2 new answers after the link was added in a source code comment (median: 0, third quartile: 2). In total, only 91 (24%) of the 372 Stack Overflow threads in our sample did not undergo any changes after they were added to a Git repository.
Summary: We found that even within a short ten-day time window, a non-negligible portion of link targets referenced in source code comments evolve, in some cases adding or modifying pertinent information. In our case study on links pointing to Stack Overflow, we found that more than three quarters of all Stack Overflow threads linked in source code comments attracted at least one change (edit, new answer, or new comment) after being first referenced in a source code comment.
F. Link Decay (RQ6)
Among the obtained 9,654,702 links, there are 382,650 distinct links. To investigate the number of dead links in source code comments (RQ6), we accessed the Web content behind all 382,650 unique links by using the Perl module LWP.
Link retrieval responses.
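The study retrieved the link targets with the Perl module LWP; a rough Python equivalent of such a link-status check is sketched below, using the third-party requests library. The mapping of responses to the categories 'ok', 'dead', and 'error' is a simplifying assumption for illustration, not the paper's exact classification.

```python
import requests

def link_status(url, timeout=60):
    """Classify a link as 'ok', 'dead', or 'error' based on the HTTP response."""
    try:
        response = requests.get(url, timeout=timeout, allow_redirects=True,
                                headers={"User-Agent": "link-decay-study"})
    except requests.RequestException:
        return "error"                      # DNS failure, timeout, connection refused, ...
    if response.status_code == 200:
        return "ok"
    if response.status_code in (404, 410):
        return "dead"
    return "error"
```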
G. Fixing Dead Links (RQ7)
To fix dead links (RQ7), we collected fixable dead links and submitted pull requests to fix them. We selected dead links that are neither metadata (which would require multiple files to be fixed) nor commented-out source code. Personal blog articles were avoided because they tend to be no longer available. Consequently, we obtained 14 dead links to API documentation, research papers, and so on. After checking the original content in the Wayback Machine, we manually investigated new links by searching for specific keywords from the original content. Our fixing process consisted of first forking a personal copy of the project, fixing the link, and then submitting a pull request to the project.
Pull request results. Developers showed they cared about dead links by accepting all nine pull requests. Since the fixed link itself is part of a comment, we speculate that it causes almost no conflicts with existing code, so our pull requests are likely to pass all tests and to be merged immediately. Developers responded with comments such as "LGTM (looks good to me)" and "Thanks for spotting the broken link".
Overall, the responses from developers provide sufficient motivation for tool support to assist with fixing broken links. We argue that such comments indicate that developers are concerned with keeping their links alive.
Summary: Developers generally responded positively to the request to fix dead links. All nine responsive projects accepted our pull requests to fix dead links.
IV. RECOMMENDATIONS
Our findings can be summarized into recommendations for developers and researchers.
Recommendations for software developers who include links in source code comments are:
• Try referencing permanent links, as it is reported that more than 30% of links will not work after a 4-year period [18]. Referencing research papers via their DOIs is preferable to linking to researchers' personal Web pages. When referencing code on GitHub, explicitly mentioning tags or commit hashes is recommended, as the repository structure can change (we found many dead links to GitHub in Section III-F).
• Check link targets for new information on a regular basis, as referenced external resources can be considered software documentation that supports comprehension and maintenance activities. In addition, link target updates can be triggers for improving and updating code (as seen in Section III-E).
We also see the following possible challenges for future work:
• Further understanding of external sources. We found many sources as shown in Figure 1c and Table III. Although some sources have already been studied, for example, licenses [12], self-admitted technical debt [28], and Stack Overflow [38], other sources, such as research papers and Wikipedia articles, have not been well-studied with regard to their impact and influence on software development.
• Further studies of source code comments to understand how knowledge (related to knowledge-based theory [44] and human capital [26], [40]) is summarized and shared via source code comments. Further analyses of source code comment contents [27] would be required.
• Tool support for external source referencing, tracking, and updating. Although we recommend that developers maintain links and associated code, it is not always possible. Tools or systems that help developers fix link issues and maintain code automatically could be practically useful.
V. THREATS TO VALIDITY
Threats to the construct validity exist in our approach to link identification. Since we identified links per line in source code comments, links located across multiple lines cannot be extracted. Note that we did not encounter any such multiple-line links in our representative sample of 1,146 links. Hence we consider that the impact of incorrect link identification because of multiple-line links is small.
Threats to the external validity exist in our repository preparation. Although we analyzed a large amount of repositories on GitHub, we cannot generalize our findings to industry nor open source projects in general; some open source repositories are hosted outside of GitHub, e.g., on GitLab or private servers.
To mitigate threats to reliability, we prepared an online appendix of our 9,654,702 links with associated information (see Section II-C).
VII. CONCLUSION
To understand purposes, evolution, and decay of links in source code comments, we conducted (i) a quantitative study of 9,654,702 links from source code comments in 25,925 Git repositories to establish the prevalence of links in source code comments; (ii) a qualitative study of a stratified sample of 1,146 links to determine the kinds of link targets and purposes for including links present in our dataset; (iii) a quantitative and qualitative study to investigate the evolution of links in source code comments and their targets; and (iv) a quantitative study to determine the extent to which links in source code comments are affected by link decay.
Our work has shown that links in source code comments indeed suffer from decay, from insufficient versioning (when link targets evolve), and from lack of bidirectional traceability (which could help avoid decay). Based on this work which has established the prevalence of links in source code comments, their multiple purposes and targets, issues of decay, and practical needs of fixing dead links, there are many open avenues for future work: understanding the role of external sources for software development, further studies of source code comments, and tool support for external source referencing, to name a few. | 6,149 |
1901.07440 | 2913669491 | Links are an essential feature of the World Wide Web, and source code repositories are no exception. However, despite their many undisputed benefits, links can suffer from decay, insufficient versioning, and lack of bidirectional traceability. In this paper, we investigate the role of links contained in source code comments from these perspectives. We conducted a large-scale study of around 9.6 million links to establish their prevalence, and we used a mixed-methods approach to identify the links' targets, purposes, decay, and evolutionary aspects. We found that links are prevalent in source code repositories, that licenses, software homepages, and specifications are common types of link targets, and that links are often included to provide metadata or attribution. Links are rarely updated, but many link targets evolve. Almost 10 of the links included in source code comments are dead. We then submitted a batch of link-fixing pull requests to open source software repositories, resulting in most of our fixes being merged successfully. Our findings indicate that links in source code comments can indeed be fragile, and our work opens up avenues for future work to address these problems. | There are also studies which analyze link sharing occurring in other software artifacts. Gomez et al @cite_34 investigated link sharing on Stack Overflow to gain insights into how software developers discover and disseminate innovations. Rath et al @cite_30 investigated links to issue tracking systems in commit comments. They reported that developers often do not provide external links to issues. They evaluated several methods to automatically recover links by searching issues related to a given commit. Alqahtani et al @cite_1 proposed a tool to automatically link dependent components in a system to online resources for analyzing their vulnerabilities. Chen et al @cite_12 proposed a tool to link problematic source code to relevant Stack Overflow questions using similarity of source code fragments. | {
"abstract": [
"Software and systems traceability is widely accepted as an essential element for supporting many software development tasks. Today's version control systems provide inbuilt features that allow developers to tag each commit with one or more issue ID, thereby providing the building blocks from which project-wide traceability can be established between feature requests, bug fixes, commits, source code, and specific developers. However, our analysis of six open source projects showed that on average only 60 of the commits were linked to specific issues. Without these fundamental links the entire set of project-wide links will be incomplete, and therefore not trustworthy. In this paper we address the fundamental problem of missing links between commits and issues. Our approach leverages a combination of process and text-related features characterizing issues and code changes to train a classifier to identify missing issue tags in commit messages, thereby generating the missing links. We conducted a series of experiments to evaluate our approach against six open source projects and showed that it was able to effectively recommend links for tagging issues at an average of 96 recall and 33 precision. In a related task for augmenting a set of existing trace links, the classifier returned precision at levels greater than 89 in all projects and recall of 50 .",
"It is poorly understood how developers discover and adopt software development innovations such as tools, libraries, frameworks, or web sites that support developers. Yet, being aware of and choosing appropriate tools and components can have a significant impact on the outcome of a software project. In our study, we investigate link sharing on Stack Overflow to gain insights into how software developers discover and disseminate innovations. We find that link sharing is a significant phenomenon on Stack Overflow, that Stack Overflow is an important resource for software development innovation dissemination and that its part of a larger interconnected network of online resources used and referenced by developers. This knowledge can guide researchers and practitioners who build tools and services that support software developers in the exploration, discovery, and adoption of software development innovations.",
"Over the last decade, a globalization of the software industry took place, which facilitated the sharing and reuse of code across existing project boundaries. At the same time, such global reuse also introduces new challenges to the software engineering community, with not only components but also their problems and vulnerabilities being now shared. For example, vulnerabilities found in APIs no longer affect only individual projects but instead might spread across projects and even global software ecosystem borders. Tracing these vulnerabilities at a global scale becomes an inherently difficult task since many of the existing resources required for such analysis still rely on proprietary knowledge representation. In this research, we introduce an ontology-based knowledge modeling approach that can eliminate such information silos. More specifically, we focus on linking security knowledge with other software knowledge to improve traceability and trust in software products (APIs). Our approach takes advantage of the Semantic Web and its reasoning services, to trace and assess the impact of security vulnerabilities across project boundaries. We present a case study, to illustrate the applicability and flexibility of our ontological modeling approach by tracing vulnerabilities across project and resource boundaries.",
""
],
"cite_N": [
"@cite_30",
"@cite_34",
"@cite_1",
"@cite_12"
],
"mid": [
"2964064835",
"2128733218",
"2615622384",
""
]
} | 9.6 Million Links in Source Code Comments: Purpose, Evolution, and Decay | When Ted Nelson started Project Xanadu 1 in 1960, he envisioned "an entire form of literature where links do not break as versions change; where documents may be closely compared side by side and closely annotated; where it is possible to see the origins of every quotation; and in which there is a valid copyright system-a literary, legal and business arrangement-for friction-less, non-negotiated quotation at any time and in any amount" [25]. Links were supposed to be visible and could be followed from all endpoints, with permission to link to a document explicitly granted by the act of publication [2]. Decades later, Nelson witnessed the birth of the World Wide Web, which in his words "trivialized this original Xanadu model, vastly but incorrectly simplifying these problems to a world of fragile ever-breaking one-way links, with no recognition of change or copyright, and no support for multiple versions or principled re-use" [25]. As predicted by Nelson, the Internet and its implementation of links have afforded us countless opportunities since, but also experienced issues such as link decay [17], [22], digital plagiarism [9], and the need to rely on external services to keep historical copies of web content [24].
In this work, we investigate the role of links contained in source code comments from the perspective of these opportunities and challenges: what purposes do they serve, how do they and their targets evolve, and how often do they break? The significance of this work is closely related to software documentation [33] and self-admitted technical debt [28]. To improve documentation and mitigate potential issues, it is important to understand developers' typical knowledge sharing activities by referencing external sources, and to investigate link decay as a potential problem.
Our work is related to and inspired by recent research on source code comments in terms of documentation, traceability, licensing, and attribution. For example, source code comments have been found to document technical debt [28] and to support articulation work [36]. They are fragile with respect to identifier renaming, i.e., traceability between comments and code is easily lost [32]. Source code comments located at the beginning of a file often include a text or a link indicating the copyright and license information of the file [12]. These comments are updated during the evolution of a product by the copyright holders [42]. Links in source code comments are sometimes used for attribution when source code has been taken from elsewhere-however, the vast majority of code snippets is copied without attribution [7], [8]. Despite these research efforts, to the best of our knowledge, the role of links in source code comments has not been studied comprehensively so far.
To fill this gap, in this paper, we first lay the foundation for understanding the role of links in source code comments by collecting 9,654,702 links from source code comments in 25,925 Git repositories. Our parser is able to extract comments from source code written in 7 programming languages. We find that links in source code comments are common: more than 80% of the repositories in our study contained at least one link. Through a qualitative study of a stratified sample of 1,146 links, we establish the kinds of link targets that are referenced in source code comments. To understand how links are used to indicate issues related to attribution, technical debt, copyright, and licensing, our qualitative study also uncovers the various purposes for including links in source code comments. We find that licenses, software homepages, and specifications are among the most prevalent types of link targets, and that links are often used to provide metadata or attribution. Link decay has the potential of making documentation in source code comments fragile and buggy. We investigate this issue from two perspectives: we analyze the evolution of the links in the repositories' commit histories and we examine how often link targets referenced in source code comments change. We find that links are rarely updated, but their targets evolve, in almost 10% of all cases leading to dead links. We then submit fixes to a subset of these broken links as pull requests, most of which were successfully merged by the maintainers of the corresponding open source projects.
In summary, this paper's contributions are three-fold:
• a large-scale and comprehensive study of around 9.6 million links to establish the prevalence of links in source code comments,
• a mixed-methods study to identify targets, purposes, and evolutionary aspects of links in source code comments, and
• an analysis of the extent to which links in source code comments are affected by link decay, with all nine link-fixing pull requests submitted to active open source projects already merged by the projects' maintainers.
II. RESEARCH METHOD
In this section, we present our research questions and data collection methodology, and we introduce the data contained in our online appendix.
A. Research Questions
The main goal of the study is to gain insights into the purposes, evolution and decay of links in source code comments. Based on this goal, we constructed seven research questions to guide our study. We now present each of these questions, along with the motivation for each.
(RQ1): How prevalent are links in source code comments?
The motivation of RQ1 is to understand whether the use of links in source code is a common practice in the wild. Furthermore, we would like to quantitatively explore the distribution, diversity, and spread of these links across different types of software projects.
(RQ2): What kind of link targets are referenced in source code comments?
(RQ3): What purpose do links in source code serve?
RQ2 and RQ3 require a deeper analysis of the repositories, where we would like to understand the nature and purpose that the links serve. The key motivation for RQ2 is to identify the types of link targets that developers are likely to refer to in source code comments. Furthermore, we would like to characterize the most common types of linked domains.
The key motivation for RQ3 is to determine the reasons why developers use links.
(RQ4): How do links in source code comments evolve?
(RQ5): How frequently do link targets referenced in source code comments change?
(RQ6): How many links in source code comments are dead?
B. Data Collection
We now describe our methods for repository preparation, comment extraction, and link identification.
Repository preparation. In this work, we analyzed active software development repositories on GitHub written in common programming languages. As common programming languages, we selected seven languages: C, C++, Java, JavaScript, Python, PHP, and Ruby. These languages have been ranked consistently in the top 10 languages on GitHub from 2008 to 2017 (based on the number of repositories from 2008 to 2015 [20], the number of pull requests from 2014 to 2017 [10], and the number of pull requests in 2017 [13]).
Using the GHTorrent dataset 2 [16], we collected active repositories for the seven languages using the following criteria: (i) having more than 500 commits in their entire history (the same threshold used in previous work [4]), and (ii) having at least 100 commits in the most active two years. We designed the second criterion to remove long-term less active repositories and short-term projects that have not been maintained for long (and may not be software development projects). For example, we were able to exclude software-engineering-amsterdam/sea-of-ql, which is a repository of a collaboration space for students in a particular university course, and was reported as a false positive of software project identification [23]. We determine repositories' languages based on the GHTorrent information. Forked repositories are excluded if repositories are recorded in GHTorrent as forks of other repositories.
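A sketch of the two selection criteria is shown below; it assumes the commit timestamps of a repository are available as a list of datetime objects and interprets "the most active two years" as the best 730-day sliding window, which is our assumption rather than a detail stated above.

```python
from datetime import timedelta

def is_active_repository(commit_dates, min_total=500, min_window=100, window_days=730):
    """Check the two selection criteria given a repository's commit timestamps."""
    if len(commit_dates) < min_total:
        return False                                   # criterion (i): > 500 commits overall
    dates = sorted(commit_dates)
    window = timedelta(days=window_days)
    best, start = 0, 0
    for end in range(len(dates)):                      # sliding two-year window
        while dates[end] - dates[start] > window:
            start += 1
        best = max(best, end - start + 1)
    return best >= min_window                          # criterion (ii): >= 100 commits in that window
```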
With the above criteria, we prepared the candidate list of target repositories for the seven languages as shown in Table I.
When we collected these candidate repositories (from May to June 2018), some repositories were not available because they had been deleted or made private. In total, we obtained more than 25,000 repositories, which is almost 90% of the candidate repositories.
Comment extraction. From each Git repository, we extract source files of the labeled language in the HEAD commit (the latest snapshot of a cloned repository). For example, only .java files are extracted from a Java repository. To process source files, we employ ANTLR4 lexical analyzers for six languages other than Ruby because their grammar definitions are available in the official example repository. For Ruby, we use Ripper, a parser from the standard library.
We extract all single-line comments (e.g., // in C) and multi-line comments (/* ... */) according to the grammars. In the case of Python, string literals (''' ... ''') are also regarded as comments because they include documentation (known as docstrings). In the case of PHP, both HTML comments and PHP code comments are extracted.
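For illustration only, a regex-based sketch for C-style comments is shown below; it deliberately ignores comment markers inside string literals, which is exactly why the study relies on real lexers (ANTLR4 grammars and Ruby's Ripper) rather than regular expressions.

```python
import re

# Simplified illustration for C-style languages only; it does not handle comment
# markers inside string literals, which real lexers take care of.
C_STYLE_COMMENTS = re.compile(
    r'//[^\n]*'            # single-line comments
    r'|/\*.*?\*/',         # multi-line comments (non-greedy)
    re.DOTALL,
)

def extract_comments(source_code):
    return C_STYLE_COMMENTS.findall(source_code)

code = "int x = 1; // counter\n/* see https://www.gnu.org/licenses/ */\n"
print(extract_comments(code))
```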
Link identification. From the extracted comments, links are identified using the regular expression /http\S+/ (localhost and IP addresses, which are mainly used for private addresses, are excluded) and validated with the Perl module Data::Validate::URI. We identified a total of 9,654,702 links from the collected repositories as seen in Table I. All links are recorded with the information of the corresponding file, repository identifiers (pairs of account and repository names), commit hashes, and the line number where the surrounding comment starts. Considering the number of repositories, we found that repositories written in C, C++, and Java tend to contain more links compared to repositories in Python and Ruby.
C. Online Appendix
Our online appendix contains our 9,654,702 links associated with the information of languages and comment location (GitHub links including account names, repository names, commit hashes, file paths, and line numbers). The appendix is available at https://github.com/NAIST-SE/9.6MillionLinks.
III. FINDINGS
In this section, we present our findings for each research question.
A. Prevalence of Links (RQ1)
To understand the prevalence of links referenced in source code comments (RQ1), we conducted a quantitative analysis of our collected dataset in terms of link existence, domain diversity, and domain popularity.
Link existence. Figure 1a shows the percentages of repositories that have at least one link in their source code comments. We see that, in every language, more than 80% of the repositories contain links in source code comments. Especially for repositories written in C, C++, and PHP, more than 90% of the repositories refer to external sources via links.
Domain diversity. In the obtained 9,654,702 links, there are 57,039 distinct domains (Internet hostnames). Figure 1b shows the distribution of the number of distinct domains per repository, for repositories that have at least one link in their source code comments. Median values are presented in the figure. We found that there is a diversity of links in a single repository even when summarized by their domains. Especially in repositories written in C, C++, JavaScript, and PHP, source code comments link to 10 or more different domains (median).
Popular domains. Figure 1c illustrates the proportion of languages shared by the top 10 most referenced domains. Note that domain ranking is based on the number of repositories instead of the number of links. If links belonging to a domain appear in a small number of repositories, the domain will be low-ranked even if those repositories contain many links.
The github.com domain is the top referenced domain in our dataset. More than 14,000 repositories across seven languages referenced content on github.com. As we will describe in detail in Section III-B, such content includes software homepage, code, and profile of a GitHub contributor. However, we find in Section III-F that many links to github.com are no longer available. We also found many links to code.google.com (7th rank). Such content includes bug report, software homepage, and code. In a statistically representative sample of common domains (sampling described in Section III-B), two out of three links to code.google.com are redirected to github.com, and one links to code.google.com/archive/.
The stackoverflow.com domain is the second most referenced domain and has been linked to from 8,189 repositories. As identified in previous work, Stack Overflow is widely used as a knowledge exchange platform between programmers [38], where programmers can obtain knowledge of good practices from code examples [29], [35], for example. The large number of links to stackoverflow.com in source code comments can be another piece of evidence of developers' needs for knowledge acquisition from external resources. We study how code could be obsolete by not being updated when external sources change in Section III-E.
The top domains differ by programming language: The www.apache.org domain is frequently linked from Java repositories, and the www.gnu.org domain is referenced from C and C++ repositories. Repositories written in JavaScript have many links to the Web-related domains of www.w3.org and developer.mozilla.org.
Summary: We revealed that links in source code comments are prevalent. In more than 80% of the 25,925 active repositories written in seven common languages, there exists at least one link in each repository. The top three most frequently referenced domains per repository are github.com, stackoverflow.com, and en.wikipedia.org.
B. Link Targets (RQ2)
To understand what kind of link targets are referenced in source code comments (RQ2), we conducted a qualitative study of a statistically representative and stratified sample of all links in our dataset.
After an initial analysis of the link data, it quickly became obvious that some domains account for many links while other domains are rare. Based on this observation and to ensure diversity of our sample, we divided the data into three strata:
1) links to commonly linked domains, 2) links to domains sometimes linked, and 3) links to rarely linked domains.
To decide on thresholds for distinguishing domains into those that are commonly, sometimes, and rarely linked, we conducted a visual analysis of the distribution of links per domain in our dataset. Figure 2 shows this distribution using a log scale. While content from the most commonly linked domain was linked more than a million times, many domains appeared in our dataset with a much lower frequency. We used the "step" in the distribution on the left-hand side of Figure 2 to distinguish between domains that are commonly linked and domains that are sometimes linked, with a cutoff frequency of 230. We consider domains which account for exactly one link in our dataset to be rarely linked. Table II shows the number of domains and the number of links in each stratum.
We then drew a statistically representative sample from each bucket. The required sample size was calculated so that our conclusions about the ratio of links with a specific characteristic would generalize to all links in the same bucket with a confidence level of 95% and a confidence interval of 5. The calculation of statistically significant sample sizes based on population size, confidence interval, and confidence level is well established (first published by Krejcie and Morgan in 1970 [19]).
The qualitative analysis was conducted in multiple iterations: in the first iteration, the first two authors independently coded 20 links from the sample, discussed a common coding guide, and tested this coding guide on another 20 links from the sample, refining the guide, merging codes, and adding codes which had been missed. The initial codes were informed by those used by Aniche et al. [5] to categorize content posted on news aggregators; however, we found that their codes did not cover all types of link targets present in our dataset. In the second iteration, the four authors of this paper then independently coded another 30 links from the sample, using the coding guide designed by the first two authors. We then calculated the kappa agreement of this iteration between all four raters, for 30 cases and all 19 codes that emerged from the qualitative analysis (kappa agreement was calculated using http://justusrandolph.net/kappa/). The kappa agreement was 0.81 or "almost perfect" [39]. Based on this encouraging result, the remaining data was then coded by a single author.
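The per-stratum sample sizes described above (95% confidence level, confidence interval of 5) can be reproduced with the standard formula for estimating a proportion with a finite-population correction; the sketch below assumes maximum variability (p = 0.5) and may differ from the authors' calculator by rounding.

```python
def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Required sample size for a proportion, with finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)   # infinite-population size (~384)
    return n0 / (1 + (n0 - 1) / population)

# For any of the large strata the result is roughly 384 links.
print(round(sample_size(1_000_000)))  # about 384
```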
The following list shows the 19 codes that emerged from our analysis along with a short description which was available in the coding guide:
• 404: link target does not exist (anymore) or cannot be accessed
• licence: licence of a software project
• software homepage: main web presence of a library or software project
• specification: anything that resembles a requirements document or a technical standard
• organization homepage: main web presence of an organization or company
• other: anything that does not fit the other codes, including if sign-in is required
• tutorial or article: technical article or tutorial, without commenting section (blog post otherwise)
• API documentation: documentation of an API element
• blog post: technical content with a commenting section
• application: interactive application (e.g., web application, online utility)
• bug report: bug report or issue in an online bug/issue tracker
• research paper: academic paper
• personal homepage: personal homepage of one individual
• code: a source code file
• forum thread: thread in a forum or entire forum
• GitHub profile: profile of a GitHub contributor
• book content: chapter/section of a book or entire book
• Q&A thread: question-and-answer thread, but not Stack Overflow
• Stack Overflow: question-and-answer thread on Stack Overflow
Taxonomy of link targets. Table III shows the result of our qualitative analysis. For commonly-linked domains, license is the most common type of link target, accounting for more than half of the links in our sample, followed by software homepages, i.e., the main web presence of a library or software project. For domains that are linked sometimes from source code comments, the most common type of link target was 404, a non-existing link target. This is a first indicator of the decay of links in source code comments, which we will analyze in detail in the next sections. Software homepages are also prevalent, as are organization homepages, both accounting for more than 10% of all links in our sample. Finally, for links from domains which are rarely linked, the problem of decay is even more serious, affecting 37% of the links in this sample.
In other words, we can conclude with a 95% confidence that between 32 and 42% of all links to domains which are rarely linked from source code comments are dead or inaccessible. The prevalence of the code "other" in the results for links to rarely linked domains is an indicator of the diversity of links present in source code comments.
Summary:
We identified more than a dozen different kinds of link targets, with dead links, licenses, and software homepages being the most prevalent. Dead links are particularly common for rarely linked domains.
C. Link Purpose (RQ3)
To understand the purpose of links referenced in source code comments (RQ3) and similar to (RQ2), we again employed a qualitative analysis of our statistically representative and stratified sample of 1,146 links, only this time focusing on the origin of a link (in a source comment) rather than the target of the link. We used the same iterative approach to design a coding guide, and validated the coding guide by having the four authors code 30 links independently, this time leading to a kappa agreement of 0.70 which indicates "substantial" agreement [39]. The somewhat lower agreement can be explained by the need to extrapolate the purpose of a link from its context in the source code alone, without being able to interview the contributor who added the link.
The following list shows all 8 codes that emerged from our analysis for link purpose, along with a short description which was available in the coding guide. The coding guide was informed by work on source code comments (e.g., [36]), self-admitted technical debt (e.g., [28]), and attribution (e.g., [7]).
• metadata: the link relates to the author of the source code, a related organization, or the license
• source/attribution: the comment explicitly indicates that the link is a source of some aspect of the source code (e.g., algorithm)
• source code context: the link adds additional information to the source code (use this code for things that do not obviously fit into any of the previous)
• see-also: the comment explicitly indicates that the link points to additional reading material (usually accompanied by a phrase such as "see also")
• commented-out source code: the link is part of the source code, e.g., as a parameter value, but has been commented out
• link-only: the comment only contains the link
• self-admitted technical debt: bug-related, like workaround, under development, and so on
• @see: the link is accompanied by "@see", but no further explanation
Note that our coding guide required the indicators of see-also and source/attribution to be explicit, thus reducing the guesswork required as part of the qualitative analysis.
Taxonomy of link purpose. Table IV shows the results of the qualitative analysis. For links to commonly linked domains, providing metadata, e.g., in the form of licenses or author information, is by far the most common purpose of a link, covering three quarters of the links in our sample. For links to domains which are only sometimes linked, metadata only accounts for one third of the data, followed by links included for the purpose of attribution, providing context, or see-also information. The results for links to rarely linked domains are even more diverse: we can see from the table that these links are used for context, attribution, and as part of the source code functionality (albeit commented out), to name the top three. Six of the eight codes account for at least 10% of the links in this part of our sample.
Matching link target with purpose. Based on the qualitative analysis conducted for answering RQ2 and RQ3 about the targets and purposes of links in source code comments, we are now able to investigate the relationships between the different types of link targets and the different purposes which emerged from our qualitative analysis. To do so, we applied association rule learning using the apriori algorithm [1] as implemented in the R package arules 6 to our data, treating each link as a transaction containing two items: its target type and its purpose. We used 4 as threshold for support and 0.7 as threshold for confidence, i.e., all rules that we extracted are supported by at least four data points and we have at least a 70% confidence that the left hand side of the rule implies the right hand side. Table V shows the association rules extracted from our data with these settings, separately for each stratum in our sample. Unsurprisingly, the link target type license and the purpose of providing metadata are tightly connected, in particular for links referring to commonly linked domains. In fact, all links to licenses were found to have been included for the reason of providing metadata, and 72% of the metadata is license information. Links to software, organization, and personal homepages are also associated with metadata, across all strata. Although with a relatively low support of seven instances, it is also interesting to note the tight coupling of the link target type bug report and the purpose of admitting technical debt.
Summary: We identified different purposes for the inclusion of links in source code comments, with providing metadata and attribution being the most common. Links are also included for background information, to provide context, or to admit technical debt. In some cases, the link is part of source code which has been commented out.
D. Link Evolution (RQ4)
To understand how links evolve (RQ4), we investigated the revision histories of repositories in the samples from (RQ2). For each sampled link, we searched for an older version of the link that had been revised by the commit that introduced the sampled link. We extracted the commit introducing a link by using the git log command (-S option, with file renames tracked). We then searched for http(s) links removed from the code location where the sampled link was added. We identified 88 revised links out of 1,146 samples, including 24 (6.3%) in the common, 31 (8.1%) in the sometimes, and 33 (8.7%) in the rarely linked stratum. Less than 10% of the links had been revised in each stratum, that is, most of the links have never been updated. We manually analyzed the old and new paths of the links and identified the following evolution types:
• license replacement: a new link refers to a new software license. For example, a link to GNU GPL has been replaced with a link to the Apache License.
• organization update: a project or an organization changed its name or website. For example, a project that acquired their own domain updated links to their project website.
• change to https: a new link uses HTTPS instead of HTTP for the same location as the previous link.
• content move: a new link refers to a slightly different location (e.g. the same path on a different server, the same document name on a different wiki), which is likely the same content.
• content update: a new link refers to different content from the previous link, but the new content is likely updated. For example, the Apache Jackrabbit project replaced a link pointing to a draft version of a document with a link to an RFC version.
• content change: a new link refers to relevant but different content from the previous link. For example, the Pi4J project replaced a link related to the usage of a serial port of the Raspberry Pi with another similar document.
• other: we could not identify types for some links whose contents are no longer available.
It should be noted that the contents for 20 updated links are 404 Not Found.
Reasons for link evolution. Table VI shows the number of link evolutions in the three strata. For commonly-linked domains, license replacement and updating organizational information account for about 80% of link revisions. For domains sometimes linked, organization update is the most common, followed by other and content change. For rarely-
Summary: Links are rarely updated (less than 9%). Common modifications are updating licenses and organization homepages.
E. Link Target Evolution (RQ5)
After understanding the evolution of links, our next research question (RQ5) asks about the evolution of their targets. To investigate whether link targets referenced in source code comments evolve, we attempted to download all link targets in our sample of 1,146 links using the curl command with a timeout of 60 seconds. As already discussed as part of (RQ2), not all link targets are available. We were able to download a total of 1,034 link targets (90%). We then repeated the same download process exactly ten days later, to see how many of the link targets had changed within this short time frame and what kind of changes had happened.
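The snapshot comparison described above can be approximated with the short sketch below. It is not the authors' tooling: they used curl and the Windows fc file-compare tool, whereas this sketch shells out to curl with the same 60-second timeout and compares the two downloads by content hash; the directory layout and function names are our own.

```python
import hashlib
import subprocess
from pathlib import Path

def fetch(url: str, out: Path, timeout: int = 60) -> bool:
    """Download url with curl (60 s timeout, as in the study); True on success."""
    result = subprocess.run(
        ["curl", "--silent", "--location", "--max-time", str(timeout),
         "--output", str(out), url]
    )
    return result.returncode == 0 and out.exists()

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def compare_snapshots(first: Path, second: Path) -> dict:
    """Compare two snapshot directories taken ten days apart."""
    unchanged = changed = missing = 0
    for f1 in first.iterdir():
        f2 = second / f1.name
        if not f2.exists():
            missing += 1
        elif digest(f1) == digest(f2):
            unchanged += 1   # the "no change" cases of Table VII
        else:
            changed += 1
    return {"unchanged": unchanged, "changed": changed, "missing": missing}
```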
Changes to the link target. Table VII summarizes the results of this analysis: out of the 1,034 link targets for which curl returned a result, 879 (85%) had not changed at all in the ten-day time frame (the downloaded content was exactly the same, as per the Windows file compare tool fc). We manually analyzed the 155 cases in which the content had changed by opening both versions in a web browser and conducting a visual comparison. The majority of the changes in the remaining 15% can be attributed to automatically generated changes, such as the display of a visitor count or the current date in a footer.
However, a non-negligible number of link targets underwent more significant changes in the ten-day time window: For six links for which we were able to retrieve data on the first download date, there was no content available anymore ten days later. For three links which had displayed an error message when we first attempted to download their content, the specific error message changed. Some link targets changed their website design, and for a few links, the content changed. For example, the download page of TaskWarrior 11 included the following notice when we first downloaded its content: "(For those of you wishing to build task from source on Cygwin, you will need some components installed (make, g++/clang, GnuTLS, libuuid, libreadline), but don't forget -task is a standard part of the Cygwin distribution, so you do not need to build from source, unless you want the latest development snapshot)." Ten days later, this notice was replaced with: "(Please note, that Cygwin is not supported anymore. Please use the Windows Subsystem for Linux to use Taskwarrior on Windows)." We argue that this kind of change is relevant to software developers.
Stack Overflow case study. To investigate this phenomenon in more detail, we conducted a case study with the subset of links pointing to Stack Overflow. As seen in Section III-A, stackoverflow.com is the second most referenced domain.
In all 9,654,702 obtained links, there are 32,197 links belonging to stackoverflow.com. Among those Stack Overflow links, there are varieties of expressions: an abbreviated path to an answer (/a/(answer id)), an abbreviated path to a question (/q/(question id)), and a full path to a question (/questions/(question id)/(title)). Older links start with 'http://' and newer links start with 'https://'. For each Stack Overflow link, we identified the timestamp of when the link was added to a repository by using the same git log command (-S option with tracking file renaming) used in Section III-D. For duplicate links, we consider only the oldest timestamp. Consequently, we obtained a list of 11,464 distinct links with their timestamps.
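A rough Python sketch of this timestamp extraction is shown below. It approximates the described procedure with git's pickaxe search (-S) but omits the file-rename tracking and the check that the link was added rather than removed; the function names and the repository-path argument are ours.

```python
import subprocess

def link_added_timestamp(repo_dir: str, url: str):
    """Oldest commit timestamp (Unix epoch) whose diff touches `url`, or None."""
    out = subprocess.run(
        ["git", "-C", repo_dir, "log", "--all", "--reverse",
         f"-S{url}", "--format=%H %ct"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    if not out:
        return None
    oldest_commit = out.splitlines()[0]   # oldest first thanks to --reverse
    return int(oldest_commit.split()[1])

def oldest_per_url(records):
    """Keep only the oldest timestamp per URL, as done for duplicate links."""
    oldest = {}
    for url, ts in records:
        oldest[url] = min(ts, oldest.get(url, ts))
    return oldest
```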
We then made use of the SOTorrent dataset [8] to investigate the extent to which Stack Overflow content had changed since the link to the question or answer had been added to a source code comment in a Git repository. We created a statistically representative sample of 372 links from the population of all unique links to Stack Overflow content in our dataset, and we queried SOTorrent to determine the following metrics for each link:
• the number of text edits on any post (i.e., question or answer) in the same thread,
• the number of new comments on any post (i.e., question or answer) in the same thread,
• the number of new answers in the same thread, and
• the number of edits to the thread title.
Thread updates. Figure 3 shows the results of this analysis. More than half of all Stack Overflow threads had at least one change made to the text of a question or answer in the same thread (median: 1, third quartile: 3) after they were committed to a Git repository as part of a source code comment, and more than half of these links attracted at least one new comment in the meantime (median: 2, third quartile: 7). While the number of new answers to a thread was zero in the median case, a quarter of the Stack Overflow threads attracted at least 2 new answers after the link was added in a source code comment (median: 0, third quartile: 2). In total, only 91 (24%) of the 372 Stack Overflow threads in our sample did not undergo any changes after they were added to a Git repository.
Summary: We found that even within a short ten-day time window, a non-negligible portion of link targets referenced in source code comments evolve, in some cases adding or modifying pertinent information. In our case study on links pointing to Stack Overflow, we found that more than three quarters of all Stack Overflow threads linked in source code comments attracted at least one change (edit, new answer, or new comment) after being first referenced in a source code comment.
F. Link Decay (RQ6)
Among the obtained 9,654,702 links, there are 382,650 distinct links. To investigate the number of dead links in source code comments (RQ6), we accessed all Web contents from the 382,650 unique links by using the Perl module LWP. 13
Link retrieval responses.
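The retrieval step itself could be sketched as follows. The paper used the Perl module LWP; this Python version with the requests library is only a stand-in, and the timeout value and response categories are illustrative choices rather than the paper's settings.

```python
import requests

def check_link(url: str, timeout: int = 30) -> str:
    """Classify a link as 'ok', 'error <code>', or 'unreachable'."""
    try:
        # Try HEAD first; fall back to GET for servers that reject HEAD.
        resp = requests.head(url, timeout=timeout, allow_redirects=True)
        if resp.status_code == 405:
            resp = requests.get(url, timeout=timeout, allow_redirects=True, stream=True)
        return "ok" if resp.status_code < 400 else f"error {resp.status_code}"
    except requests.RequestException:
        return "unreachable"

# Tallying the responses over all unique links gives the dead-link counts:
# from collections import Counter
# print(Counter(check_link(u) for u in unique_links))
```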
G. Fixing Dead Links (RQ7)
To fix dead links (RQ7), we collected fixable dead links and submitted pull requests to fix them. We selected dead links that are neither metadata (which would require multiple files to be fixed) nor commented-out source code. Personal blog articles were avoided because they tend to be no longer available. Consequently, we obtained 14 dead links to API documentation, research papers, and so on. After checking the original content in the Wayback Machine 14 , we manually investigated new links by searching for specific keywords from the original content. Our fixing process consisted of first forking a personal copy of the project, fixing the link, and then submitting a pull request to the project.
Pull request results. Developers showed they cared about dead links by accepting all nine pull requests. 15 16 17 18 19 20 21 22 23 Since the fix only touches a link inside a comment, we speculate that it causes almost no conflicts with existing code, so our pull requests are likely to pass all tests and to be merged immediately. Developers responded with comments such as "LGTM (looks good to me)" and "Thanks for spotting the broken link".
Overall, the responses from developers provide sufficient motivation for tool support to assist with fixing broken links. We argue that such comments indicate that developers are concerned with keeping their links alive.
Summary: Developers generally responded positively to the request to fix dead links. All nine responsive projects accepted our pull requests to fix dead links.
IV. RECOMMENDATIONS
Our findings can be summarized into recommendations for developers and researchers.
Recommendations for software developers including links in source code comments are:
• Try referencing permanent links, as it is reported that more than 30% of links will not work after a 4-year period [18]. Referencing research papers via DOI is preferable to linking researchers' personal Web pages. Explicitly mentioning tags or commit hashes when referencing code on GitHub is recommended, as the software structure can change (we found many dead links to GitHub in Section III-F).
• Check link targets for new information on a regular basis, as referenced external resources can be considered to be software documentation that supports comprehension and maintenance activities. In addition, link target updates can trigger improvements and updates to the code (as seen in Section III-E).
We can also consider future work with the following possible challenges.
• Further understanding of external sources. We found many sources as shown in Figure 1c and Table III. Although some sources have been already studied, for example, licenses [12], self-admitted technical debt [28], and Stack Overflow [38], other sources have not been well-studied with regard to their impact and influence on software development, such as research papers and Wikipedia articles. • Further studies of source code comments to understand how knowledge (related to knowledge-based theory [44] and human capital [26], [40]) is summarized and shared via source code comments. Further analyses of source code comment contents [27] would be required. • Tool support for external source referencing, tracking, and updating. Although we recommend developers to maintain links and associated code, it is not always possible. Tools or systems to help developers fix link issues and maintain code automatically could be practically useful.
V. THREATS TO VALIDITY
Threats to the construct validity exist in our approach to link identification. Since we identified links per line in source code comments, links located across multiple lines cannot be extracted. Note that we did not encounter any such multiple-line links in our representative sample of 1,146 links. Hence we consider that the impact of incorrect link identification because of multiple-line links is small.
Threats to the external validity exist in our repository preparation. Although we analyzed a large number of repositories on GitHub, we cannot generalize our findings to industry or to open source projects in general; some open source repositories are hosted outside of GitHub, e.g., on GitLab or private servers.
To mitigate threats to reliability, we prepared an online appendix of our 9,654,702 links with associated information (see Section II-C).
VII. CONCLUSION
To understand purposes, evolution, and decay of links in source code comments, we conducted (i) a quantitative study of 9,654,702 links from source code comments in 25,925 Git repositories to establish the prevalence of links in source code comments; (ii) a qualitative study of a stratified sample of 1,146 links to determine the kinds of link targets and purposes for including links present in our dataset; (iii) a quantitative and qualitative study to investigate the evolution of links in source code comments and their targets; and (iv) a quantitative study to determine the extent to which links in source code comments are affected by link decay.
Our work has shown that links in source code comments indeed suffer from decay, from insufficient versioning (when link targets evolve), and from lack of bidirectional traceability (which could help avoid decay). Based on this work which has established the prevalence of links in source code comments, their multiple purposes and targets, issues of decay, and practical needs of fixing dead links, there are many open avenues for future work: understanding the role of external sources for software development, further studies of source code comments, and tool support for external source referencing, to name a few. | 6,149 |
1901.07440 | 2913669491 | Links are an essential feature of the World Wide Web, and source code repositories are no exception. However, despite their many undisputed benefits, links can suffer from decay, insufficient versioning, and lack of bidirectional traceability. In this paper, we investigate the role of links contained in source code comments from these perspectives. We conducted a large-scale study of around 9.6 million links to establish their prevalence, and we used a mixed-methods approach to identify the links' targets, purposes, decay, and evolutionary aspects. We found that links are prevalent in source code repositories, that licenses, software homepages, and specifications are common types of link targets, and that links are often included to provide metadata or attribution. Links are rarely updated, but many link targets evolve. Almost 10% of the links included in source code comments are dead. We then submitted a batch of link-fixing pull requests to open source software repositories, resulting in most of our fixes being merged successfully. Our findings indicate that links in source code comments can indeed be fragile, and our work opens up avenues for future work to address these problems. | Traceability links between source code and documents are another related research topic. Scanniello et al @cite_13 reported that developers can understand source code effectively if they can refer to design models including source code element names. Their observation has been obtained through a controlled experiment of program comprehension tasks with UML models produced in a requirements engineering phase and a design phase. Antoniol et al @cite_15 proposed a method to identify links between source files and design documents because developers may update source file names without updating related documents. Their method uses similarity of attribute names of a class to identify its original class definition in design documents. Rahimi et al @cite_5 proposed a rule-based method to update links between source files and requirements documents. Their method recognizes a change scenario from semantic differences of source code and then updates links according to a rule corresponding to the change scenario. Those methods would be effective for automatically updating traceability links. Similar tool support for external source referencing is a future direction of our research. | {
"abstract": [
"Traceability provides support for diverse software engineering activities including safety analysis, compliance verification, test-case selection, and impact prediction. However, in practice, there is a tendency for trace links to degrade over time as the system continually evolves. This is especially true for links between source-code and upstream artifacts such as requirements --- because developers frequently refactor and change code without updating the links. In this paper we present TLE (Trace Link Evolver), a solution for automating the evolution of bidirectional trace links between source code classes or methods and requirements. TLE depends on a set of heuristics coupled with refactoring detection tools and informational retrieval algorithms to detect predefined change scenarios that occur across contiguous versions of a software system. We first evaluate TLE at the class level in a controlled experiment to evolve trace links for revisions of two Java applications. Second, we comparatively evaluate several variants of TLE across six releases of our in-house Dronology project. We study the results of integrating human analyst feed back in the evolution cycle of this emerging project. Additionally, in this system, we compare the efficacy of class-level versus method-level evolution of trace links. Finally, we evaluate TLE in a larger scale across 27 releases of the Cassandra Database System and show that the evolved trace links are significantly more accurate than those generated using only information retrieval techniques.",
"Traceability is a key issue to ensure consistency among software artifacts of subsequent phases of the development cycle. However, few works have so far addressed the theme of tracing object oriented (OO) design into its implementation and evolving it. This paper presents an approach to checking the compliance of OO design with respect to source code and support its evolution. The process works on design artifacts expressed in the OMT (Object Modeling Technique) notation and accepts Cpp source code. It recovers an “as is” design from the code, compares the recovered design with the actual design and helps the user to deal with inconsistencies. The recovery process exploits the edit distance computation and the maximum match algorithm to determine traceability links between design and code. The output is a similarity measure associated to designdcode class pairs, which can be classified as matched and unmatched by means of a maximum likelihood threshold. A graphic display of the design with different green levels associated to different levels of match and red for the unmatched classes is provided as a support to update the design and improve its traceability to the code.",
"In this paper, we present the results of long-term research conducted in order to study the contribution made by software models based on the Unified Modeling Language (UML) to the comprehensibility of Java source-code deprived of comments. We have conducted 12 controlled experiments in different experimental contexts and on different sites with participants with different levels of expertise (i.e., Bachelor’s, Master’s, and PhD students and software practitioners from Italy and Spain). A total of 333 observations were obtained from these experiments. The UML models in our experiments were those produced in the analysis and design phases. The models produced in the analysis phase were created with the objective of abstracting the environment in which the software will work (i.e., the problem domain), while those produced in the design phase were created with the goal of abstracting implementation aspects of the software (i.e., the solution application domain). Source-code comprehensibility was assessed with regard to correctness of understanding, time taken to accomplish the comprehension tasks, and efficiency as regards accomplishing those tasks. In order to study the global effect of UML models on source-code comprehensibility, we aggregated results from the individual experiments using a meta-analysis. We made every effort to account for the heterogeneity of our experiments when aggregating the results obtained from them. The overall results suggest that the use of UML models affects the comprehensibility of source-code, when it is deprived of comments. Indeed, models produced in the analysis phase might reduce source-code comprehensibility, while increasing the time taken to complete comprehension tasks. That is, browsing source code and this kind of models together negatively impacts on the time taken to complete comprehension tasks without having a positive effect on the comprehensibility of source code. One plausible justification for this is that the UML models produced in the analysis phase focus on the problem domain. That is, models produced in the analysis phase say nothing about source code and there should be no expectation that they would, in any way, be beneficial to comprehensibility. On the other hand, UML models produced in the design phase improve source-code comprehensibility. One possible justification for this result is that models produced in the design phase are more focused on implementation details. Therefore, although the participants had more material to read and browse, this additional effort was paid back in the form of an improved comprehension of source code."
],
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_13"
],
"mid": [
"2766319419",
"1555930296",
"2786674049"
]
} | 9.6 Million Links in Source Code Comments: Purpose, Evolution, and Decay | When Ted Nelson started Project Xanadu 1 in 1960, he envisioned "an entire form of literature where links do not break as versions change; where documents may be closely compared side by side and closely annotated; where it is possible to see the origins of every quotation; and in which there is a valid copyright system-a literary, legal and business arrangement-for friction-less, non-negotiated quotation at any time and in any amount" [25]. Links were supposed to be visible and could be followed from all endpoints, with permission to link to a document explicitly granted by the act of publication [2]. Decades later, Nelson witnessed the birth of the World Wide Web, which in his words "trivialized this original Xanadu model, vastly but incorrectly simplifying these problems to a world of fragile ever-breaking one-way links, with no recognition of change or copyright, and no support for multiple versions or principled re-use" [25]. As predicted by Nelson, the Internet and its implementation of links have afforded us countless opportunities since, but also experienced issues such as link decay [17], [22], digital plagiarism [9], and the need to rely on external services to keep historical copies of web content [24].
In this work, we investigate the role of links contained in source code comments from the perspective of these opportunities and challenges: what purposes do they serve, how do they and their targets evolve, and how often do they break? The significance of this work is closely related to software documentation [33] and self-admitted technical debt [28]. To improve documentation and mitigate potential issues, it is important to understand developers' typical knowledge sharing activities by referencing external sources, and to investigate link decay as a potential problem.
Our work is related to and inspired by recent research on source code comments in terms of documentation, traceability, licensing, and attribution. For example, source code comments have been found to document technical debt [28] and to support articulation work [36]. They are fragile with respect to identifier renaming, i.e., traceability between comments and code is easily lost [32]. Source code comments located at the beginning of a file often include a text or a link indicating the copyright and license information of the file [12]. These comments are updated during the evolution of a product by the copyright holders [42]. Links in source code comments are sometimes used for attribution when source code has been taken from elsewhere-however, the vast majority of code snippets is copied without attribution [7], [8]. Despite these research efforts, to the best of our knowledge, the role of links in source code comments has not been studied comprehensively so far.
To fill this gap, in this paper, we first lay the foundation for understanding the role of links in source code comments by collecting 9,654,702 links from source code comments in 25,925 Git repositories. Our parser is able to extract comments from source code written in 7 programming languages. We find that links in source code comments are common: more than 80% of the repositories in our study contained at least one link. Through a qualitative study of a stratified sample of 1,146 links, we establish the kinds of link targets that are referenced in source code comments. To understand how links are used to indicate issues related to attribution, technical debt, copyright, and licensing, our qualitative study also uncovers the various purposes for including links in source code comments. We find that licenses, software homepages, and specifications are among the most prevalent types of link targets, and that links are often used to provide metadata or attribution. Link decay has the potential of making documentation in source code comments fragile and buggy. We investigate this issue from two perspectives: we analyze the evolution of the links in the repositories' commit histories and we examine how often link targets referenced in source code comments change. We find that links are rarely updated, but their targets evolve, in almost 10% of all cases leading to dead links. We then submit fixes to a subset of these broken links as pull requests, most of which were successfully merged by the maintainers of the corresponding open source projects.
In summary, this paper's contributions are three-fold:
• a large-scale and comprehensive study of around 9.6 million links to establish the prevalence of links in source code comments, • a mixed-methods study to identify targets, purposes, and evolutionary aspects of links in source code comments, and • an analysis of the extent to which links in source code comments are affected by link decay, with all nine linkfixing pull requests submitted to active open source projects already merged by the projects' maintainers.
II. RESEARCH METHOD
In this section, we present our research questions and data collection methodology, and we introduce the data contained in our online appendix.
A. Research Questions
The main goal of the study is to gain insights into the purposes, evolution and decay of links in source code comments. Based on this goal, we constructed seven research questions to guide our study. We now present each of these questions, along with the motivation for each.
(RQ1): How prevalent are links in source code comments?
The motivation of RQ1 is to understand whether the use of links in source code is a common practice in the wild. Furthermore, we would like to quantitatively explore the distribution, diversity, and spread of these links across different types of software projects.
(RQ2): What kind of link targets are referenced in source code comments?
(RQ3): What purpose do links in source code serve?
RQ2 and RQ3 require a deeper analysis of the repositories, where we would like to understand the nature and purpose that the links serve. The key motivation for RQ2 is to identify the types of link targets that developers are likely to refer to in source code comments. Furthermore, we would like to characterize the most common types of linked domains.
The key motivation for RQ3 is to determine the reasons why developers use links.
(RQ4): How do links in source code comments evolve?
(RQ5): How frequently do link targets referenced in source code comments change?
(RQ6): How many links in source code comments are dead?
B. Data Collection
We now describe our methods for repository preparation, comment extraction, and link identification.
Repository preparation. In this work, we analyzed active software development repositories on GitHub written in common programming languages. As common programming languages, we selected seven languages: C, C++, Java, JavaScript, Python, PHP, and Ruby. These languages have been ranked consistently in the top 10 languages on GitHub from 2008 to 2017 (based on the number of repositories from 2008 to 2015 [20], the number of pull requests from 2014 to 2017 [10], and the number of pull requests in 2017 [13]).
Using the GHTorrent dataset 2 [16], we collected active repositories for the seven languages using the following criteria: (i) having more than 500 commits in their entire history (the same threshold used in previous work [4]), and (ii) having at least 100 commits in the most active two years. We designed the second criterion to remove long-term less active repositories and short-term projects that have not been maintained for long (and may not be software development projects). For example, we were able to exclude software-engineering-amsterdam/sea-of-ql, which is a repository of a collaboration space for students in a particular university course, and was reported as a false positive of software project identification [23]. We determine repositories' languages based on the GHTorrent information. Forked repositories are excluded if repositories are recorded in GHTorrent as forks of other repositories.
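The two selection criteria can be expressed as a small filter over a repository's commit timestamps, for example as below. The sliding two-year window is our own reading of "the most active two years"; the paper does not spell out how that window is computed, so treat the details as an assumption.

```python
from datetime import datetime, timedelta

def is_active(commit_dates: list[datetime]) -> bool:
    """Criteria: (i) more than 500 commits overall, and
    (ii) at least 100 commits within the most active two-year window."""
    if len(commit_dates) <= 500:
        return False
    dates = sorted(commit_dates)
    window = timedelta(days=730)
    best = start = 0
    for end, d in enumerate(dates):
        while d - dates[start] > window:
            start += 1
        best = max(best, end - start + 1)
    return best >= 100
```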
With the above criteria, we prepared the candidate list of target repositories for the seven languages as shown in Table I.
When we collected these candidate repositories (from May to June 2018), some repositories were not available because they had been deleted or made private. In total, we obtained more than 25,000 repositories, which is almost 90% of the candidate repositories.
Comment extraction. From each Git repository, we extract source files of the labeled language in the HEAD commit (the latest snapshot of a cloned repository). For example, only .java files are extracted from a Java repository. To process source files, we employ ANTLR4 lexical analyzers for six languages other than Ruby because their grammar definitions are available in the official example repository. 3 For Ruby, we use a standard library, Ripper parser.
We extract all single-line comments (e.g., // in C) and multi-line comments (/* ... */) according to the grammars. In the case of Python, string literals (''' ... ''') are also regarded as comments because they include documentation (known as docstrings). In the case of PHP, both HTML comments and PHP code comments are extracted.
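As an illustration of this extraction step for one of the seven languages: the study relies on ANTLR4 lexers (and Ripper for Ruby), but for Python files the standard tokenize module yields a comparable comment stream. Docstrings, which the paper also treats as comments, would need separate handling and are omitted from this sketch; the example URL is made up.

```python
import io
import tokenize

def python_comments(source: str):
    """Return (line number, comment text) pairs for Python source code."""
    comments = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.COMMENT:
            comments.append((tok.start[0], tok.string))
    return comments

print(python_comments("x = 1  # see https://example.com/spec\n"))
# [(1, '# see https://example.com/spec')]
```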
Link identification. From the extracted comments, links are identified using the regular expression /http\S+/ (localhost and IP addresses, which are mainly used for private addresses, are excluded) and validated with the Perl module Data::Validate::URI. We identified a total of 9,654,702 links from the collected repositories as seen in Table I. All links are recorded with the information of the corresponding file, repository identifiers (pairs of account and repository names), commit hashes, and the line number where the surrounding comment starts. Considering the number of repositories, we found that repositories written in C, C++, and Java tend to contain more links compared to repositories in Python and Ruby.
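A Python equivalent of this identification step might look as follows. urlparse together with the ipaddress module stands in for the Perl Data::Validate::URI check, and the trailing-punctuation trimming is our own addition rather than part of the paper's regular expression.

```python
import ipaddress
import re
from urllib.parse import urlparse

LINK_RE = re.compile(r"http\S+")  # the same pattern as in the paper

def extract_links(comment: str):
    """Return http(s) links in a comment, excluding localhost and IP hosts."""
    links = []
    for candidate in LINK_RE.findall(comment):
        candidate = candidate.rstrip(").,;'\"")   # trimming not in the paper's regex
        parsed = urlparse(candidate)
        host = parsed.hostname
        if parsed.scheme not in ("http", "https") or not host:
            continue
        if host == "localhost":
            continue
        try:
            ipaddress.ip_address(host)
            continue                               # skip bare IP addresses
        except ValueError:
            pass
        links.append(candidate)
    return links

print(extract_links("// details: https://www.w3.org/TR/ and http://127.0.0.1/debug"))
# ['https://www.w3.org/TR/']
```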
C. Online Appendix
Our online appendix contains our 9,654,702 links associated with the information of languages and comment location (GitHub links including account names, repository names, commit hashes, file paths, and line numbers). The appendix is available at https://github.com/NAIST-SE/9.6MillionLinks.
III. FINDINGS
In this section, we present our findings for each research question.
A. Prevalence of Links (RQ1)
To understand the prevalence of links referenced in source code comments (RQ1), we conducted a quantitative analysis of our collected dataset in terms of link existence, domain diversity, and domain popularity.
Link existence. Figure 1a shows the percentages of repositories that have at least one link in their source code comments. We see that, in every language, more than 80% of the repositories contain links in source code comments. Especially for repositories written in C, C++, and PHP, more than 90% of the repositories refer to external sources via links.
Domain diversity. In the obtained 9,654,702 links, there are 57,039 distinct domains (Internet hostnames). Figure 1b shows the distribution of the number of distinct domains per repository, for repositories that have at least one link in their source code comments. Median values are presented in the figure. We found that there is a diversity of links in a single repository even when summarized by their domains. Especially in repositories written in C, C++, JavaScript, and PHP, source code comments link to 10 or more different domains (median).
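Counting distinct domains per repository reduces to grouping link hostnames by repository. The sketch below assumes the extracted links are available as (repository, url) pairs; the record layout and names are ours.

```python
from collections import defaultdict
from urllib.parse import urlparse

def domains_per_repository(link_records):
    """Number of distinct link domains (Internet hostnames) per repository."""
    domains = defaultdict(set)
    for repo, url in link_records:
        host = urlparse(url).hostname
        if host:
            domains[repo].add(host)
    return {repo: len(hosts) for repo, hosts in domains.items()}

example = [("octo/app", "https://www.gnu.org/licenses/"),
           ("octo/app", "https://stackoverflow.com/q/1"),
           ("octo/app", "http://www.gnu.org/licenses/gpl.html")]
print(domains_per_repository(example))  # {'octo/app': 2}
```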
Popular domains. Figure 1c illustrates the proportion of languages shared by the top 10 most referenced domains. Note that domain ranking is based on the number of repositories instead of the number of links. If links belonging to a domain appear in a small number of repositories, the domain will be low-ranked even if those repositories contain many links.
The github.com domain is the top referenced domain in our dataset. More than 14,000 repositories across seven languages referenced content on github.com. As we will describe in detail in Section III-B, such content includes software homepages, code, and profiles of GitHub contributors. However, we find in Section III-F that many links to github.com are no longer available. We also found many links to code.google.com (7th rank). Such content includes bug reports, software homepages, and code. In a statistically representative sample of common domains (sampling described in Section III-B), two out of three links to code.google.com are redirected to github.com, and one link points to code.google.com/archive/.
The stackoverflow.com domain is the second most referenced domain and has been linked to from 8,189 repositories. As identified in previous work, Stack Overflow is widely used as a knowledge exchange platform between programmers [38], where programmers can obtain knowledge of good practices from code examples [29], [35], for example. The large number of links to stackoverflow.com in source code comments can be another piece of evidence of developers' needs for knowledge acquisition from external resources. We study how code could be obsolete by not being updated when external sources change in Section III-E.
The top domains differ by programming language: The www.apache.org domain is frequently linked from Java repositories, and the www.gnu.org domain is referenced from C and C++ repositories. Repositories written in JavaScript have many links to the Web-related domains of www.w3.org and developer.mozilla.org.
Summary: We revealed that links in source code comments are prevalent. In more than 80% of the 25,925 active repositories written in seven common languages, there exists at least one link in the source code comments. The top three most frequently referenced domains per repository are github.com, stackoverflow.com, and en.wikipedia.org.
B. Link Targets (RQ2)
To understand what kind of link targets are referenced in source code comments (RQ2), we conducted a qualitative study of a statistically representative and stratified sample of all links in our dataset.
After an initial analysis of the link data, it quickly became obvious that some domains account for many links while other domains are rare. Based on this observation and to ensure diversity of our sample, we divided the data into three strata:
1) links to commonly linked domains, 2) links to domains sometimes linked, and 3) links to rarely linked domains.
To decide on thresholds for distinguishing domains into those that are commonly, sometimes, and rarely linked, we conducted a visual analysis of the distribution of links per domain in our dataset. Figure 2 shows this distribution using a log scale. While content from the most commonly linked domain was linked more than a million times, many domains appeared in our dataset with a much lower frequency. We used the "step" in the distribution on the left-hand side of Figure 2 to distinguish between domains that are commonly linked and domains that are sometimes linked, with a cutoff frequency of 230. We consider domains which account for exactly one link in our dataset to be rarely linked. Table II shows the number of domains and the number of links in each strata. We then drew a statistically representative sample from each bucket. The required sample size was calculated so that our conclusions about the ratio of links with a specific characteristic would generalize to all links in the same bucket with a confidence level of 95% and a confidence interval of 5. 4 The calculation of statistically significant sample sizes based on population size, confidence interval, and confidence level is well established (first published by Krejcie and Morgan in 1970 [19]). The qualitative analysis was conducted in multiple iterations: in the first iteration, the first two authors independently coded 20 links from the sample, discussed a common coding guide, and tested this coding guide on another 20 links from the sample, refining the guide, merging codes, and adding codes which had been missed. The initial codes were informed by those used by Aniche et al. [5] to categorize content posted on news aggregators, however, we found that their codes did not cover all types of link targets present in our dataset. In the second iteration, the four authors of this paper then independently coded another 30 links from the sample, using the coding guide designed by the first two authors. We then calculated the kappa agreement of this iteration between all four raters, for 30 cases and all 19 codes that emerged from the qualitative analysis. 5 The kappa agreement was 0.81 or "almost perfect" [39]. Based on this encouraging result, the remaining data was then coded by a single author.
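Such sample sizes can be reproduced with the standard Cochran-style formula with finite-population correction, which is consistent with Krejcie and Morgan [19]; the paper does not name the exact calculator it used, so the function below is an illustration. As a sanity check, a population of 11,464 distinct Stack Overflow links (Section III-E) yields the sample of 372 used there.

```python
import math

def sample_size(population: int, confidence_interval: float = 5.0,
                z: float = 1.96, p: float = 0.5) -> int:
    """Sample size for a proportion at 95% confidence (z = 1.96) and the given
    confidence interval in percentage points, with finite-population correction."""
    e = confidence_interval / 100.0
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(sample_size(11464))  # 372, matching the Stack Overflow sample in Section III-E
```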
The following list shows the 19 codes that emerged from our analysis along with a short description which was available in the coding guide:
• 404: link target does not exist (anymore) or cannot be accessed
• licence: licence of a software project
• software homepage: main web presence of a library or software project
• specification: anything that resembles a requirements document or a technical standard
• organization homepage: main web presence of an organization or company
• other: anything that does not fit the other codes, including if sign-in is required
• tutorial or article: technical article or tutorial, without commenting section (blog post otherwise)
• API documentation: documentation of an API element
• blog post: technical content with a commenting section
• application: interactive application (e.g., web application, online utility)
• bug report: bug report or issue in an online bug/issue tracker
• research paper: academic paper
• personal homepage: personal homepage of one individual
• code: a source code file
• forum thread: thread in a forum or entire forum
• GitHub profile: profile of a GitHub contributor
• book content: chapter/section of a book or entire book
• Q&A thread: question-and-answer thread, but not Stack Overflow
• Stack Overflow: question-and-answer thread on Stack Overflow
Taxonomy of link targets. Table III shows the result of our qualitative analysis. For commonly-linked domains, license is the most common type of link target, accounting for more than half of the links in our sample, followed by software homepages, i.e., the main web presence of a library or software project. For domains that are linked sometimes from source code comments, the most common type of link target was 404, a non-existing link target. This is a first indicator of the decay of links in source code comments, which we will analyze in detail in the next sections. 5 Kappa agreement was calculated using http://justusrandolph.net/kappa/.
In other words, we can conclude with a 95% confidence that between 32 and 42% of all links to domains which are rarely linked from source code comments are dead or inaccessible. The prevalence of the code "other" in the results for links to rarely linked domains is an indicator of the diversity of links present in source code comments.
Summary:
We identified more than a dozen different kinds of link targets, with dead links, licenses, and software homepages being the most prevalent. Dead links are particularly common for rarely linked domains.
C. Link Purpose (RQ3)
To understand the purpose of links referenced in source code comments (RQ3) and similar to (RQ2), we again employed a qualitative analysis of our statistically representative and stratified sample of 1,146 links, only this time focusing on the origin of a link (in a source comment) rather than the target of the link. We used the same iterative approach to design a coding guide, and validated the coding guide by having the four authors code 30 links independently, this time leading to a kappa agreement of 0.70 which indicates "substantial" agreement [39]. The somewhat lower agreement can be explained by the need to extrapolate the purpose of a link from its context in the source code alone, without being able to interview the contributor who added the link.
The following list shows all 8 codes that emerged from our analysis for link purpose, along with a short description which was available in the coding guide. The coding guide was informed by work on source code comments (e.g., [36]), self-admitted technical debt (e.g., [28]), and attribution (e.g., [7]).
• metadata: the link relates to the author of the source code, a related organization, or the license
• source/attribution: the comment explicitly indicates that the link is a source of some aspect of the source code (e.g., algorithm)
• source code context: the link adds additional information to the source code (use this code for things that do not obviously fit into any of the previous)
• see-also: the comment explicitly indicates that the link points to additional reading material (usually accompanied by a phrase such as "see also").
• commented-out source code: the link is part of the source code, e.g., as a parameter value, but has been commented out
• link-only: the comment only contains the link
• self-admitted technical debt: bug-related, like workaround, under development, and so on
• @see: the link is accompanied by "@see", but no further explanation
Note that our coding guide required the indicators of see-also and source/attribution to be explicit, thus reducing the guesswork required as part of the qualitative analysis.
Taxonomy of link purpose. Table IV shows the results of the qualitative analysis. For links to commonly linked domains, providing metadata, e.g., in the form of licenses or author information, is by far the most common purpose of a link, covering three quarters of the links in our sample. For links to domains which are only sometimes linked, metadata only accounts for one third of the data, followed by links included for the purpose of attribution, providing context, or see-also information. The results for links to rarely linked domains are even more diverse: we can see from the table that these links are used for context, attribution, and as part of the source code functionality (albeit commented out), to name the top three. Six of the eight codes account for at least 10% of the links in this part of our sample.
Matching link target with purpose. Based on the qualitative analysis conducted for answering RQ2 and RQ3 about the targets and purposes of links in source code comments, we are now able to investigate the relationships between the different types of link targets and the different purposes which emerged from our qualitative analysis. To do so, we applied association rule learning using the apriori algorithm [1] as implemented in the R package arules 6 to our data, treating each link as a transaction containing two items: its target type and its purpose. We used 4 as threshold for support and 0.7 as threshold for confidence, i.e., all rules that we extracted are supported by at least four data points and we have at least a 70% confidence that the left hand side of the rule implies the right hand side. Table V shows the association rules extracted from our data with these settings, separately for each stratum in our sample. Unsurprisingly, the link target type license and the purpose of providing metadata are tightly connected, in particular for links referring to commonly linked domains. In fact, all links to licenses were found to have been included for the reason of providing metadata, and 72% of the metadata is license information. Links to software, organization, and personal homepages are also associated with metadata, across all strata. Although with a relatively low support of seven instances, it is also interesting to note the tight coupling of the link target type bug report and the purpose of admitting technical debt.
Summary: We identified different purposes for the inclusion of links in source code comments, with providing metadata and attribution being the most common. Links are also included for background information, to provide context, or to admit technical debt. In some cases, the link is part of source code which has been commented out.
D. Link Evolution (RQ4)
To understand how links evolve (RQ4), we investigated the revision histories of repositories in the samples from (RQ2). For each sampled link, we searched for an older version of the link that was revised by the commit that introduced the current link. We extracted such link-introducing commits using the git log command (-S option with file-rename tracking) and searched for http(s) links that were removed from the code location where the sampled link was added. We identified 88 revised links out of 1,146 samples, including 24 (6.3%) in common, 31 (8.1%) in sometimes, and 33 (8.7%) in rare. Less than 10% of the links had been revised in each stratum, that is, most of the links have never been updated. We manually analyzed the old and new paths of the links and identified the following evolution types:
• license replacement: a new link refers to a new software license. For example, a link to the GNU GPL has been replaced with a link to the Apache License.
• organization update: a project or an organization changed its name or website. For example, a project that acquired its own domain updated links to its project website.
• change to https: a new link uses HTTPS instead of HTTP for the same location as the previous link.
• content move: a new link refers to a slightly different location (e.g., the same path on a different server, the same document name on a different wiki), which is likely the same content.
• content update: a new link refers to different content from the previous link, but the new content is likely updated. For example, the Apache Jackrabbit project replaced a link pointing to a draft version of a document 7 with a link to an RFC version. 8
• content change: a new link refers to relevant but different content from the previous link. For example, the Pi4J project replaced a link related to the usage of a serial port of the Raspberry Pi 9 with another similar document. 10
• other: we could not identify types for some links whose contents are no longer available. It should be noted that the contents for 20 updated links are 404 Not Found.
Reasons for link evolution. Table VI shows the number of link evolutions in the three strata. For commonly-linked domains, license replacement and updating organizational information account for about 80% of link revisions. For domains sometimes linked, organization update is the most common, followed by other and content change. For rarely-
Summary: Links are rarely updated (less than 9%). Common modifications are updating licenses and organization homepages.
E. Link Target Evolution (RQ5)
After understanding the evolution of links, our next research question (RQ5) asks about the evolution of their targets. To investigate whether link targets referenced in source code comments evolve, we attempted to download all link targets in our sample of 1,146 links using the curl command with a timeout of 60 seconds. As already discussed as part of (RQ2), not all link targets are available. We were able to download a total of 1,034 link targets (90%). We then repeated the same download process exactly ten days later, to see how many of the link targets had changed within this short time frame and what kind of changes had happened.
Changes to the link target. Table VII summarizes the results of this analysis: out of the 1,034 link targets for which curl returned a result, 879 (85%) had not changed at all in the ten-day time frame (the downloaded content was exactly the same, as per the Windows file compare tool fc). We manually analyzed the 155 cases in which the content had changed by opening both versions in a web browser and conducting a visual comparison. The majority of the changes in the remaining 15% can be attributed to automatically generated changes, such as the display of a visitor count or the current date in a footer.
However, a non-negligible number of link targets underwent more significant changes in the ten-day time window: For six links for which we were able to retrieve data on the first download date, there was no content available anymore ten days later. For three links which had displayed an error message when we first attempted to download their content, the specific error message changed. Some link targets changed their website design, and for a few links, the content changed. For example, the download page of TaskWarrior 11 included the following notice when we first downloaded its content: "(For those of you wishing to build task from source on Cygwin, you will need some components installed (make, g++/clang, GnuTLS, libuuid, libreadline), but don't forget -task is a standard part of the Cygwin distribution, so you do not need to build from source, unless you want the latest development snapshot)." Ten days later, this notice was replaced with: "(Please note, that Cygwin is not supported anymore. Please use the Windows Subsystem for Linux to use Taskwarrior on Windows)." We argue that this kind of change is relevant to software developers.
Stack Overflow case study. To investigate this phenomenon in more detail, we conducted a case study with the subset of links pointing to Stack Overflow. As seen in Section III-A, stackoverflow.com is the second most referenced domain.
In all 9,654,702 obtained links, there are 32,197 links belonging to stackoverflow.com. Among those Stack Overflow links, there are varieties of expressions: an abbreviated path to an answer (/a/(answer id)), an abbreviated path to a question (/q/(question id)), and a full path to a question (/questions/(question id)/(title)). Older links start with 'http://' and newer links start with 'https://'. For each Stack Overflow link, we identified the timestamp of when the link was added to a repository by using the same git log command (-S option with tracking file renaming) used in Section III-D. For duplicate links, we consider only the oldest timestamp. Consequently, we obtained a list of 11,464 distinct links with their timestamps.
We then made use of the SOTorrent dataset [8] to investigate the extent to which Stack Overflow content had changed since the link to the question or answer had been added to a source code comment in a Git repository. We created a statistically representative sample of 372 links from the population of all unique links to Stack Overflow content in our dataset, and we queried SOTorrent to determine the following metrics for each link:
• the number of text edits on any post (i.e., question or answer) in the same thread,
• the number of new comments on any post (i.e., question or answer) in the same thread,
• the number of new answers in the same thread, and
• the number of edits to the thread title.
Thread updates. Figure 3 shows the results of this analysis. More than half of all Stack Overflow threads had at least one change made to the text of a question or answer in the same thread (median: 1, third quartile: 3) after they were committed to a Git repository as part of a source code comment, and more than half of these links attracted at least one new comment in the meantime (median: 2, third quartile: 7). While the number of new answers to a thread was zero in the median case, a quarter of the Stack Overflow threads attracted at least 2 new answers after the link was added in a source code comment (median: 0, third quartile: 2). In total, only 91 (24%) of the 372 Stack Overflow threads in our sample did not undergo any changes after they were added to a Git repository.
Summary: We found that even within a short ten-day time window, a non-negligible portion of link targets referenced in source code comments evolve, in some cases adding or modifying pertinent information. In our case study on links pointing to Stack Overflow, we found that more than three quarters of all Stack Overflow threads linked in source code comments attracted at least one change (edit, new answer, or new comment) after being first referenced in a source code comment.
F. Link Decay (RQ6)
Among the obtained 9,654,702 links, there are 382,650 distinct links. To investigate the number of dead links in source code comments (RQ6), we accessed all Web contents from the 382,650 unique links by using the Perl module LWP. 13
Link retrieval responses.
G. Fixing Dead Links (RQ7)
To fix dead links (RQ7), we collected fixable dead links and submitted pull requests to fix them. We selected dead links that are neither metadata (which would require multiple files to be fixed) nor commented-out source code. Personal blog articles were avoided because they tend to be no longer available. Consequently, we obtained 14 dead links to API documentation, research papers, and so on. After checking the original content in the Wayback Machine 14 , we manually investigated new links by searching for specific keywords from the original content. Our fixing process consisted of first forking a personal copy of the project, fixing the link, and then submitting a pull request to the project.
Pull request results. Developers showed they cared about dead links by accepting all nine pull requests. 15 16 17 18 19 20 21 22 23 Since the fix only touches a link inside a comment, we speculate that it causes almost no conflicts with existing code, so our pull requests are likely to pass all tests and to be merged immediately. Developers responded with comments such as "LGTM (looks good to me)" and "Thanks for spotting the broken link".
Overall, the responses from developers provide sufficient motivation for tool support to assist with fixing broken links. We argue that such comments indicate that developers are concerned with keeping their links alive.
Summary: Developers generally responded positively to the request to fix dead links. All nine responsive projects accepted our pull requests to fix dead links.
IV. RECOMMENDATIONS
Our findings can be summarized into recommendations for developers and researchers.
Recommendations for software developers including links in source code comments are:
• Try referencing permanent links, as it is reported that more than 30% of links will not work after a 4-year period [18]. Referencing research papers via DOI is preferable to linking researchers' personal Web pages. Explicitly mentioning tags or commit hashes when referencing code on GitHub is recommended, as the software structure can change (we found many dead links to GitHub in Section III-F).
• Check link targets for new information on a regular basis, as referenced external resources can be considered to be software documentation that supports comprehension and maintenance activities. In addition, link target updates can trigger improvements and updates to the code (as seen in Section III-E).
We can also consider future work with the following possible challenges.
• Further understanding of external sources. We found many sources as shown in Figure 1c and Table III. Although some sources have been already studied, for example, licenses [12], self-admitted technical debt [28], and Stack Overflow [38], other sources have not been well-studied with regard to their impact and influence on software development, such as research papers and Wikipedia articles. • Further studies of source code comments to understand how knowledge (related to knowledge-based theory [44] and human capital [26], [40]) is summarized and shared via source code comments. Further analyses of source code comment contents [27] would be required. • Tool support for external source referencing, tracking, and updating. Although we recommend developers to maintain links and associated code, it is not always possible. Tools or systems to help developers fix link issues and maintain code automatically could be practically useful.
V. THREATS TO VALIDITY
Threats to the construct validity exist in our approach to link identification. Since we identified links per line in source code comments, links located across multiple lines cannot be extracted. Note that we did not encounter any such multiple-line links in our representative sample of 1,146 links. Hence we consider that the impact of incorrect link identification because of multiple-line links is small.
Threats to the external validity exist in our repository preparation. Although we analyzed a large number of repositories on GitHub, we cannot generalize our findings to industry or to open source projects in general; some open source repositories are hosted outside of GitHub, e.g., on GitLab or private servers.
To mitigate threats to reliability, we prepared an online appendix of our 9,654,702 links with associated information (see Section II-C).
VII. CONCLUSION
To understand purposes, evolution, and decay of links in source code comments, we conducted (i) a quantitative study of 9,654,702 links from source code comments in 25,925 Git repositories to establish the prevalence of links in source code comments; (ii) a qualitative study of a stratified sample of 1,146 links to determine the kinds of link targets and purposes for including links present in our dataset; (iii) a quantitative and qualitative study to investigate the evolution of links in source code comments and their targets; and (iv) a quantitative study to determine the extent to which links in source code comments are affected by link decay.
Our work has shown that links in source code comments indeed suffer from decay, from insufficient versioning (when link targets evolve), and from lack of bidirectional traceability (which could help avoid decay). Based on this work which has established the prevalence of links in source code comments, their multiple purposes and targets, issues of decay, and practical needs of fixing dead links, there are many open avenues for future work: understanding the role of external sources for software development, further studies of source code comments, and tool support for external source referencing, to name a few. | 6,149 |
1901.07366 | 2912586918 | Advertisements are unavoidable in modern society. Times Square is notorious for its incessant display of advertisements. Its popularity is worldwide and smaller cities possess miniature versions of the display, such as Pittsburgh and its digital works in Oakland on Forbes Avenue. Tokyo's Ginza district recently rose to popularity due to its upscale shops and constant onslaught of advertisements to pedestrians. Advertisements arise in other mediums as well. For example, they help popular streaming services, such as Spotify, Hulu, and Youtube TV gather significant streams of revenue to reduce the cost of monthly subscriptions for consumers. Ads provide an additional source of money for companies and entire industries to allocate resources toward alternative business motives. They are attractive to companies and nearly unavoidable for consumers. One challenge for advertisers is examining a advertisement's effectiveness or usefulness in conveying a message to their targeted demographics. Rather than constructing a single, static image of content, a video advertisement possesses hundreds of frames of data with varying scenes, actors, objects, and complexity. Therefore, measuring effectiveness of video advertisements is important to impacting a billion-dollar industry. This paper explores the combination of human-annotated features and common video processing techniques to predict effectiveness ratings of advertisements collected from Youtube. This task is seen as a binary (effective vs. non-effective), four-way, and five-way machine learning classification task. The first findings in terms of accuracy and inference on this dataset, as well as some of the first ad research, on a small dataset are presented. Accuracies of 84 , 65 , and 55 are reached on the binary, four-way, and five-way tasks respectively. | Finally, in 2006, researchers investigated the use of neural networks to predict television ad effectiveness @cite_30 . They achieved an accuracy of 99 | {
"abstract": [
"Abstract This study aims to incorporate Artificial Neural Network (ANN) for measuring the effectiveness of the TV broadcast advertisements (toothpaste) by discovering important factors that influence the advertisement effectiveness. The information about the effects of each of these factors has been studied and it is used for measuring the advertisement effectiveness. Fifty attributes are examined to derive values from thirteen factors. These thirteen factors are used as input to ANN model. The data collected from 837 respondents are used for training and testing the ANN. The backpropagation algorithm is used for adjusting the weights in the ANN. Experimental results show that the ANN model achieves 99 accuracy for measuring the advertisement effectiveness."
],
"cite_N": [
"@cite_30"
],
"mid": [
"2061533513"
]
} | Measuring Effectiveness of Video Advertisements | In modern society, advertisements touch nearly every aspect of a person's everyday living. Advertisements appear on television, the internet, and mobile videogames. They are also wildly popular in videos, busy city centers, radio, billboards, and posters. Behind every advertisement, there are content, market research, and ads research teams ensuring their company's advertisements achieve the highest click rate possible, converting the audience into active consumers. In general, when creating advertisements, the content and evoked emotion are heavily taken into consideration. For example, Ford wants to develop commercials with cars. Ford commercials generally want to evoke emotions to ensure their consumers feel 'safe' or 'positive'. In contrast, a company developing home security systems wants to evoke 'fear' into the audience in the sense that consumers are fearful of burglars, so they must act fast and buy a security system. Clearly, advertisements are multifaceted creations with the ability to convey symbolism, propaganda, emotions, and careful thinking.
In particular, when designing advertisements, the main goal is ensuring effectiveness. Generally, an effective ad implies a higher conversion rate for consumers buying a product. Effectiveness can be achieved by delivering a product and message in a unique, straightforward manner that stands out to consumers. The advertising industry's market cap was over $200 billion in 2018 [8]. In fact, in 2017, Fox charged $5 million for a 30-second ad during Super Bowl LII, reaching more than 100 million viewers [18]. Thus, even a trivial boost in an ad's effectiveness can lead to millions of dollars of added revenue. One rising problem with current advertisements is their failure to draw in millennials [13]. Therefore, the investigation of advertisement effectiveness is important not only to boost the revenue and prestige of a company's products, but also to engage the next generation of consumers. In fact, Hulu is implementing ad selectors [16]. These ad selectors allow viewers to select one of many displayed ads, therefore handing some of the power to the consumer, rather than force-feeding generalized content to large audiences. One research paper from 2012 investigated a similar idea to ad selectors, the video ad 'skip' feature on YouTube videos [24].
In this paper, we investigate a combination of features gathered from human annotators, common video processing techniques, and features extracted by Ye et al. Some of these features are low-level computer vision, such as shot boundary, average hue, and optical flow, others are higher level, such as object detection, facial expression detection, and text, while a few encapsulate higher level information about the advertisement, such as memorability, climaxes, and duration. A thorough investigation is performed to gather inference on the distribution and approaches to investigating this challenging task, while preprocessing any features and the dataset beforehand. Next, support vector machines, decision trees, and a logistic regression classifier are trained on the features, where one classifier is trained on one feature at a time. Individual classifiers performed near the baseline on the binary, four-way, and five-way classification tasks. However, a unique, hybrid ensemble learning algorithm is introduced to model the underlying distribution of the training dataset, which is used to predict effectiveness ratings of the testing dataset, resulting in significantly higher accuracies on all three tasks, ranging from a boost of 30-35%. As such, this paper utilizes a mix of human-annotated labels and computational video processing techniques to gather insights into effectiveness for advertisements, unseen in this field since most studies involve human surveys or a computational approach, but from a human observation standpoint. This process requires a trivial amount of input to predict any video's effectiveness on any media platform, any resolution, any type of advertisement, and in a non-specific culture. Thus, this paper presents a first contribution to automated, human-less video effectiveness contribution and inference, as shown in Figure 1.
III. METHODS
First, we will discuss the dataset, along with its features, data collection procedure, and brief overview into the values they can hold. Next, preprocessing of the data will be discussed in terms of removing class imbalance. Then, the discussion transitions to additional features extracted from the advertisements and features collected from Ye et al.'s dataset, as well as their importance. Fourth, this section contains exploratory analysis on the advertisements dataset, showcasing topics and sentiments distributions, effectiveness distributions for topics and sentiments, correlations between features and effectiveness, and reliability of the human annotators' ratings across the dataset. Also, we will discuss the learning methods through the use of support vector machines, decision trees, and logistic regression classifiers, as well as an ensemble of classifiers. Finally, the experiments section will showcase the key results of the learning process and wrap up any notable conclusions about this dataset.
A. Dataset
Hussain et al. released a dataset at CVPR 2017, collected from Amazon Mechanical Turk human annotators. The dataset consists of two parts: 64,832 static image advertisements and 3,477 video advertisements. Both datasets are similar, with few differences in features. The static image dataset possesses labels for the topic, sentiment, question/answer statement, symbolism, strategy, and slogan. In comparison, the video dataset has labels for topic, sentiment, action/reason statement, funniness, degree of excitement, language, and effectiveness.
Topics are in a range of thirty-eight possibilities describing the overall theme of the advertisement, such as 'cars and automobiles', 'safety', 'shopping', or 'domestic violence'. Sentiments describe the emotion evoked in the viewer, such as 'cheerful', 'jealous', 'disturbed', 'sad', and more, with thirty possibilities. Funniness and excitement are binary variables with value 0 indicating unfunny/unexciting and 1 implying funny/exciting. The ternary language feature takes the value 0 if the advertisement is non-English, 1 if it is English, and -1 if language is unimportant to the video, such as a voiceless ad. Action/reason statements consist of a simple call to action and motivation statement combined into one. For example, one automobile commercial uses the action/reason statement, "I should buy this car because it is pet-friendly." Every statement's action and reason are separated by 'because'. As such, the action asked of the consumer is to buy the car and the reason is its pet-friendly characteristic. Action/reason statements vary in complexity throughout the dataset. Finally, the goal of this research is to predict the output label, which is the 'effective' feature. Effectiveness is also a discrete value ranging from one to five.
All videos were gathered from YouTube and verified as an advertisement rather than an unrelated video. Then, human annotators on Amazon Mechanical Turk labelled all seven features for each video. Five annotators were assigned to each video to control for possible high variance in labels but were kept anonymous. Therefore, controlling for bias in work identity is unfeasible in these experiments. Additionally, the raw version of the video dataset contains all ratings for each feature for each video, while an alternative, clean version utilizes mean or mode across all five labels of each feature to compute a simplified, or 'clean', representation of that video's ratings.
B. Data Preprocessing
The most immediate issue in terms of preprocessing is ensuring class balance. After investigation, we discovered the dataset consists of 193 samples of effectiveness 1, 261 samples of effectiveness 2, 1319 samples of effectiveness 3, 426 samples of effectiveness 4, and 1278 samples of effectiveness 5. The overall effectiveness for a given video is computed as the mode of the five ratings for the video. In general, any numeric, discrete feature with several human-annotated labels was aggregated by use of mode to better represent the video's underlying ground truth value. This specifically refers to the provided clean dataset [17], which are the effectiveness, exciting, funny, language, sentiments, and topics features. In case of ties, the lowest value was chosen. To ensure class balance, the class with lowest count determined the number of randomly sampled videos from each class. Therefore, 193 samples were used for each class, reducing the dataset's size to 965 and increasing difficulty of the task significantly.
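A minimal sketch of this preprocessing step is given below; it assumes the raw annotations are loaded into a pandas DataFrame with hypothetical columns video_id and effective (one row per annotator rating), which may differ from the dataset's actual schema.

```python
import pandas as pd

def preprocess(raw: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """Aggregate the five annotator ratings per video by mode and
    downsample every effectiveness class to the size of the smallest one."""
    # Mode per video; ties are resolved by taking the lowest value, as in the text.
    clean = (raw.groupby("video_id")["effective"]
                .agg(lambda s: s.mode().min())
                .reset_index())
    # Balance classes by sampling min-class-count videos from each class.
    n = clean["effective"].value_counts().min()
    balanced = (clean.groupby("effective", group_keys=False)
                     .apply(lambda g: g.sample(n=n, random_state=seed)))
    return balanced.reset_index(drop=True)
```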
C. Features
The features gathered for analysis and future learning are a combination of human-annotated labels, output from popular video processing techniques, and features from Ye et al. [37]. Twenty-one features were used for the machine learning approach, while correlations were calculated for an additional six features. As such, this subsection is broken down into two further sections exploring features from video processing techniques and data collected from [37]. Features collected from humans are not discussed further, as they have been previously mentioned, are straightforward in their collection, and that data was already provided with the dataset.
1) Low-level and Computed Features:
• Color
  - Average Hue
  - Median Hue
  - Average Intensity
  - Average Intensity over middle 30%
  - Average Intensity over middle 60%
• Average Memorability
• Video Duration
• Text
  - Average Word Length
  - Meaningful Words
  - Average Sentence Length
  - Most Common Word
• Shot Boundaries
• Optical Flow
Next, to gain further insights into the factors that contribute to an ad video's effectiveness, fourteen new features were developed and computed from each video. The first five deal with the colors and visuals: average hue, median hue, average intensity, average intensity over the middle 30% of the video, and average intensity over the middle 60% of the video. Hue is classified as a 3-dimensional vector of a pixel's red, blue, and green color values. Intensity is calculated as the greyscale value of a pixel. The latter two features attempt to gauge the most captivating portions of the video. The middle 30% of a video is a window covering 30% of the height and width, located in the center of the image; the middle 60% is computed in similar fashion. Next, average memorability across frames is computed utilizing [19]. The duration of the video is gathered from calls to the YouTube API.
Text content is gathered from Google Cloud Vision's optical character recognition (OCR) API. The average duration of a video is 15 seconds at 24 frames per second, thus consisting of 360 frames. Every 60th frame is extracted from the video, totalling 6 frames of text information per video on average. All 6 frames were passed into the API and text was extracted, resulting in four new features: average word length, meaningful words contained, average sentence length, and most common word. A meaningful word is defined as any non-trivial word from the text, such as proper nouns, locations, objects, and adjectives. It is important to note that standard text preprocessing was performed to ensure useful results, especially when finding meaningful words. For example, each word was stemmed (i.e. 'confused'/'confusing' become 'confuse') and stop words (i.e. 'a'/'the'/'by'/etc.) were omitted to ensure meaningfulness of features. These preprocessing techniques were carried out with the popular Python library, NLTK [4].
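The text features described above could be derived from the OCR output roughly as follows with NLTK; treating any stemmed non-stop-word token as "meaningful" is a simplification of the paper's definition, and the function name is our own.

```python
from collections import Counter

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("stopwords", quiet=True)
nltk.download("punkt", quiet=True)

def text_features(frame_texts):
    """frame_texts: list of OCR strings, one per sampled frame."""
    stop = set(stopwords.words("english"))
    stemmer = PorterStemmer()
    sentences, words = [], []
    for text in frame_texts:
        sentences.extend(nltk.sent_tokenize(text))
        words.extend(w.lower() for w in nltk.word_tokenize(text) if w.isalpha())
    meaningful = [stemmer.stem(w) for w in words if w not in stop]
    return {
        "avg_word_length": sum(map(len, words)) / max(len(words), 1),
        "num_meaningful_words": len(meaningful),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "most_common_word": Counter(meaningful).most_common(1)[0][0] if meaningful else "",
    }
```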
Furthermore, shot boundaries were computed in addition to the average optical flow of videos. Shot boundaries were measured by counting the number of scene changes throughout the video. This measures the video's quickness; a higher number of scene changes equates to a faster video. The reason for using this metric is to analyze whether faster or slower videos translate to higher or lower effectiveness due to the speed of delivery of the author's message. Additionally, optical flow is computed as the sum of the average optical flow change from frame to frame. Therefore, it is seen as a summation of vector magnitudes, representing the change in the video's content; higher optical flow is equivalent to intense content shifts. Finally, optical flow was converted into a 30-bin histogram across the entire video. Every bin consists of the sum of vector magnitudes for that portion of frames in the video. Then, the histogram was normalized via the L1 norm such that all bin values sum to one.
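A sketch of the 30-bin, L1-normalized optical flow feature is given below, using OpenCV's dense Farnebäck flow; aggregating the mean flow magnitude per frame pair before binning is an assumption about the exact computation, not the paper's implementation.

```python
import cv2
import numpy as np

def optical_flow_histogram(video_path: str, bins: int = 30) -> np.ndarray:
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    magnitudes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        magnitudes.append(float(mag.mean()))
        prev_gray = gray
    cap.release()
    # Sum the per-frame magnitudes inside each of the `bins` temporal segments,
    # then L1-normalize so the histogram sums to one.
    chunks = np.array_split(np.array(magnitudes), bins)
    hist = np.array([c.sum() for c in chunks])
    return hist / hist.sum() if hist.sum() > 0 else hist
```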
2) Ye et al. features: [37] investigated climax in video advertisements. They gathered additional features on the video dataset including facial expressions, emotions, audio, common objects, detected climaxes, and scenes. All of these features were utilized to gain further insight into advertisement effectiveness.
• Audio
• Objects
• Places/Scenes
• Facial Expressions
• Emotions
The audio signal per frame was averaged to represent a video's 'loudness'. Furthermore, all objects contained in the videos was gathered. Then, the probability of each object's occurrence was calculated (i.e. 1000 objects were detected, and 800 were 'person', so 'person' now has value 0.8) on the entire dataset to provide prior probabilities. Finally, each video's individual object probability distribution was calculated and each object's probability was divided by the respective prior to calculate the final distribution. For example, if the probability of a person in a specific video is 0.6, but the aforementioned prior is 0.8, then the final value in the feature vector is 0.6 / 0.8 = 0.75. This indicates a person appears 75% of the time relative to the average. In total, 786,602 objects were discovered across 80 unique classes, representing a diverse array of objects. There were 872,870 detected places/scenes, 365 of which are unique, 34,572 detected facial expressions, 8 of which were unique, and 3,659,717 detected emotions, 26 of which were unique. These three aggregated features were preprocessed in similar fashion to the objects feature (i.e. a prior was calculated to determine the probability across the entire dataset and each video's value is a ratio between its probability distribution over the prior).
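The prior-normalized object distribution described above could be computed roughly as follows; the input format (a list of detected object labels per video) and function name are assumptions made for this sketch.

```python
from collections import Counter

def object_ratio_features(videos, classes):
    """videos: dict mapping video id -> list of detected object labels.
    classes: list of the 80 detector classes.
    Returns per-video vectors of (video probability / dataset prior)."""
    all_labels = [label for labels in videos.values() for label in labels]
    global_counts = Counter(all_labels)
    total = len(all_labels)
    prior = {c: global_counts[c] / total for c in classes}
    features = {}
    for vid, labels in videos.items():
        counts = Counter(labels)
        n = max(len(labels), 1)
        features[vid] = [
            (counts[c] / n) / prior[c] if prior[c] > 0 else 0.0
            for c in classes
        ]
    return features
```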
The audio signal (one-dimensional), object probability distribution (80-dimensional), as well as places, expressions, and emotions distributions, were used as separate feature vectors for future training. The number of climaxes per video was summed up to represent an additional feature to represent the amount of highlights in a video.
These features are valuable since they allow more analysis, correlations, and construction of learning models, which in turn provides more insight into the dataset. Also, each by itself is important in analyzing most of the content placed throughout the video, such as the objects and facial expressions, which were not gathered in the previous dataset, while climax data provides a high level perspective of how actionpacked the advertisement is, which can lead to the user being more engaged. Without these features, the overall dataset is left without important content in the video, not gathered previously, that helps provide insights into the content.
D. Data Analysis
Simple data analysis was performed to view general trends of the dataset. Since effectiveness is the output label, correlations between each feature and the output label were computed with results demonstrated in Table II. Correlations were performed on the entire dataset. All correlations were weak or non-existent. The features representing the number of shot boundaries and the number of unique annotated sentiments performed the worst. As such, fitting a linear regression line between any of the features and effectiveness will provide poor results and accuracy. Despite this, the duration, exciting, and audio features showcase the strongest positive or negative correlations, so they were later selected as features to include in our ensemble. Ensuring consistency across the human annotators and their effectiveness ratings is key to prevent outliers. Unfortunately, kappa statistics are not available for use on this dataset since annotators are anonymous and it is not possible to match one annotator's ratings to specific videos. However, the coefficient of variation, $c_v = \frac{\sigma}{\mu} \times 100\%$, measures the volatility of a distribution, which in this case will be the ratings for a given video, as a quality assurance check [11]. There were 3114 videos (89.56%) with c_v ≤ 0.5. Additionally, there were 2932 (84.33%) videos with c_v ≤ 0.4 and 2244 (64.54%) videos with c_v ≤ 0.3. This indicates a majority of videos were under 30% variability in terms of ratings and about 20% of videos had somewhere between 30% and 40% variability. Clearly, the ratings of videos were relatively consistent and reliable.
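A direct way to reproduce this reliability check, assuming the five raw ratings per video are arranged as rows of a NumPy array, is sketched below.

```python
import numpy as np

def cv_reliability(ratings: np.ndarray, thresholds=(0.3, 0.4, 0.5)):
    """ratings: array of shape (num_videos, num_annotators) with raw effectiveness ratings.
    Returns the fraction of videos whose coefficient of variation is at or below each threshold."""
    cv = ratings.std(axis=1) / ratings.mean(axis=1)
    return {t: float((cv <= t).mean()) for t in thresholds}
```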
In search of useful indications of effectiveness, the 200 most effective and least effective advertisements were grouped by topic and sentiment. Results can be seen in Figures 2 & 3 respectively. Keep in mind the dataset contains an uneven distribution of each topic and sentiment (e.g. 'safety' might show up three times more often than 'automobiles'). Therefore, if raw results were plotted in a pie chart, they may be skewed. For example, if videos with topic 'safety' take up 5% of the entire dataset and take up 5% of the 200 most effective ads, this is to be expected. This follows the [28] ground truth distribution of the entire dataset. Also, Figure 4 represents how much more likely a given sentiment is to appear in the 200 most or least effective ads compared to the overall dataset, while Figure 5 represents the same information for topics. As supplementary analysis, topics' and sentiments' vs. effectiveness distributions are shown in Figure 6. Each data point, or dot, represents at least one sample with that mean effectiveness rating.
Finally, analysis of some videos in the dataset was performed by watching close to two hundred different videos, roughly 5% of the population. One video, https://www.youtube.com/watch?v=jhFqSlvbKAM, stands out since it contains Kobe Bryant and Lionel Messi, two of the most popular basketball and soccer players in the world. Also, the video itself has 146M views on YouTube. The mode of annotator labels was '3' with individual ratings being ['5', '3', '3', '4', '5'], indicating celebrities are not guaranteed to make an ad effective. As another example, https://www.youtube.com/watch?v=-usbQDfTIqE is an Atari commercial from 1982. Although the production quality is poor compared to modern advertisements, annotators listed the video as a '5' with individual ratings of ['4', '5', '5', '3', '4'], which comes out to an average of 4.2. After averaging the previous video's ratings, coming out to a '4', the second video was deemed more effective than the first despite the presence of celebrities, higher production quality, more YouTube views, and more modernity. Additionally, many videos associated with obesity, drugs, addictions, and related topics typically contained dark music, low video saturation, and sad facial expressions. A final mention is the use of positive facial expressions with product placement. For example, take https://www.youtube.com/watch?v=2Md5lPyuvsk, starring Michael Jackson. Every scene change contains a can of Pepsi next to a happy child, furthering this claim. In general, every video contains a diverse mix of features, not always leading to the most reasonable deductive conclusion in terms of effectiveness. As shown, YouTube views do not always equate to higher effectiveness as mentioned earlier, and a plethora of features can be gathered from video processing to gain further insights into these advertisements' effectiveness ratings.
E. Learning
Training and testing classification algorithms on this data was simple, requiring anywhere from several seconds to several minutes. As such, any attempt at boosting accuracy never had to deal with changing hardware on a machine or optimizing code, allowing quick investigation into key challenges of this task. The first challenge to tackle was the dataset's size. After preprocessing and feature gathering, 23 features spanned 594 dimensions. With a dataset of 965 total samples, a 594 dimensional feature vector is an infeasible training vector. Therefore, individual support vector machines (SVMs) [7], decision trees [35], and logistic regression [1] models were trained on each feature and a hybrid of bagging and stacking [5], two common ensemble learning techniques, was used to aggregate class predictions of the classifiers. Not only does this approach prevent overfitting, a serious concern with this data, but it also boosts confidence of the individually weak classifiers' predictions.
Specifically, each SVM was trained to achieve optimal results. Accuracies in Table I were the average accuracies of each SVM trained on the respective feature across five simulations. Beforehand, the dataset was randomly split for 80% training and 20% testing. Each SVM was rigorously tested in terms of the hyperparameters. For every classifier, the one-vs-rest multiclass algorithm was utilized to convert the naturally binary SVM into a multiclass classifier. This almost always produced superior results compared to one-vs-one, as well as reduced training and testing time. Different kernels worked better for different SVMs, which was discovered after rigorous testing, but most classifiers used either a polynomial or linear kernel. In Table I, results for binary classification, four-way classification, and five-way classification across the effectiveness ratings are shown. Neural networks were avoided due to their requirement of large datasets and greater risk of overfitting.
In addition to the SVMs, a logistic regression classifier was trained on the 'exciting' feature and a decision tree was trained on the topics and sentiments features. By default, a logistic regression model in scikit-learn utilizes one-vs-rest for multi-class classification. Probabilities of classification were not utilized as a weighting parameter in terms of accuracy or further analysis. A decision tree was investigated for the topics and sentiments features as a test run to see if they boosted accuracy. Both tree classifiers were trained using the Gini impurity metric (the default in scikit-learn and standard in the scientific community) and new branches were created until a depth with a minimum splitting criterion was reached.
Fig. 7. Confusion Matrix of each multiclass classification task.
The unique hybrid ensemble learning algorithm, shown in Algorithm 1, provides the most significant boost in prediction accuracy. To implement this, all individual classifiers were first trained on 80% of the dataset. Then, for each classifier, the accuracy was computed for each topic and for each sentiment on the training data, resulting in 68 'bins' of accuracies. To classify the testing dataset, the classifier with the highest accuracy for the test video's topic or sentiment was chosen to predict the output label. In this sense, a form of stacking was utilized since each classifier had a 'weight' assigned to it (its accuracy on the test video's topic or sentiment). It is also a form of bagging since each classifier votes on a class for each iteration. Only the class with the largest vote dictates the final prediction. Pseudocode of this hybrid approach is represented in Algorithm 1.
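The selection step of this hybrid scheme could be sketched as follows; this is a simplified reading of Algorithm 1 that keeps one accuracy bin per topic (or sentiment) value, and all container formats and names are assumptions rather than the paper's actual implementation.

```python
import numpy as np

def fit_accuracy_bins(classifiers, X_train, y_train, groups):
    """classifiers: dict name -> fitted per-feature model; X_train: dict name -> matrix.
    groups: array with the topic (or sentiment) label of each training video.
    Returns bins[name][group] = training accuracy of that model on that group."""
    y_train, groups = np.asarray(y_train), np.asarray(groups)
    bins = {}
    for name, clf in classifiers.items():
        pred = clf.predict(X_train[name])
        bins[name] = {g: float((pred[groups == g] == y_train[groups == g]).mean())
                      for g in np.unique(groups)}
    return bins

def hybrid_predict(classifiers, bins, x_test, group):
    """Let the per-feature model with the best bin accuracy for this video's
    topic/sentiment cast the deciding vote."""
    best = max(classifiers, key=lambda name: bins[name].get(group, 0.0))
    return classifiers[best].predict(x_test[best].reshape(1, -1))[0]
```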
F. Experiments
As shown, the hybrid ensemble learning algorithm produces accuracies far surpassing the baseline accuracies of 20% on five-way classification, 25% on four-way classification, and 50% on binary classification. In addition, these accuracies were achieved on small dataset of merely 965 samples. This is a significant feat since many modern machine learning and computer vision tasks require several magnitudes of samples higher in order to accomplish notable results.
Of the features, the highest in terms of accuracy were the advertisement's topic, duration, and excitement level. Also, the classifier trained on all features stood out noticeably with a 56.69% binary accuracy. On four-way classification, results were relatively the same with a few more standout features, such as the ad's sentiment and average hue. Finally, most classifiers on five-way classification performed close to the baseline, but the average hue, human emotions, all features aggregated, sentiments, and exciting classifiers produced the best results, deviating significantly from the 20% ± 1% accuracies.
In Figure 7, the three confusion matrices displaying true positives, false positives, true negatives, and false negatives are shown for all three classification tasks (five-way, four-way, binary). The binary task matrix is straightforward. For any misclassified sample, it is simply placed into the other false positive or false negative bin. Thankfully, the true positive and true negative bins contain similar numbers, so there are no significant gaps in misclassification errors. For the fourway task, most classifications were true positives. Bins with misclassified samples largely remained in the 4-6 sample range. Statistically, there are no misclassification bin outliers; no misclassified bin contains a significant portion of that class's samples compared to other bins. Results are more ambiguous with the five-way classification, as to be expected. True positive results are consistent; effectiveness 1 and 5 have highest true positive ratios. Meanwhile, effectiveness rating 3 is the most ambiguous. Clearly, with the fewest true positive ratings, many videos were rated as 2 or 4. This issue did not arise with other effectiveness ratings, other than a medium amount of samples from effectiveness 4 being predicted as effectiveness 5. Interestingly, a large number of samples from each class were predicted as effectiveness 1. This includes both classes 4 and 5, which are the most shocking. Investigation into the machine learning models, for example the support vectors, does not provide insight into the reasoning why many samples are misclassified this way.
In general, most samples were classified correctly and appeared as true positives. This is to be expected from the achieved accuracies. Furthermore, in addition to successful prediction of effectiveness ratings, statistical inference was provided for further insights into misclassifications, specific advertisement analysis, and individual feature classifier accuracies showcasing its success, rather than aggregating all features together as a blackbox.
IV. CONCLUSIONS
As shown in Table I and discussed previously, no feature was a significant indicator of effectiveness by itself. However, our ensemble proved effective, boosting accuracy on the tasks by as much as 35%. Clearly, an ensemble of weak features on this dataset provides an accurate model for predicting advertisement effectiveness.
We presented a unique computational approach on a new advertisements dataset which has not been seen before in the fields of media studies and computer vision. This approach arose from simple human-annotated labels, with most of them being metafeatures of the video itself, such as language, topic, and sentiment, while others were extracted through common video processing techniques. Finally, features from an additional paper were brought in to showcase the usefulness of higher-level characteristics, such as climaxes, objects, and facial expressions. This standalone approach requires minimal human input, if any, relying heavily on computational aspects, which many marketing companies and media studies papers do not have in place.
With that being said, there is an abundance of future work in the field of computational advertisement understanding, and specifically in measuring effectiveness. In the related work, other papers explored additional features, such as celebrities, from a non-computational perspective, which can be brought into the computational sphere. This work also has the potential to create a pipeline for advertisement generation in the future. Finally, if a larger dataset is gathered, neural networks become more of a possibility, as they require large datasets to be trained effectively. There are several avenues to explore for future work in this field.
V. FUTURE WORK
Through this research, we have demonstrated the feasibility of measuring the effectiveness of advertisements through various audio, visual, and metadata features of a given video, excluding the use of YouTube comments or ratings, which are unavailable for a video before it is officially released to the public.
One future avenue of work is exploration of the static image dataset. The most significant challenge with the video dataset is the initially small sample size of 3,477 videos, which was further reduced to 965 samples after normalizing across effectiveness. The image dataset contains ≈ 64k images. Therefore, decreased risk of overfitting and more accurate computation of a ground-truth prior distribution for classifiers should result from the use of this dataset. It will also allow use of a validation set, many-fold cross-validation, and data modelling with higher complexity. Furthermore, the task will generally be less arduous. Videos contain more information than static images, such as duration, optical flow data, shot boundary data, climax data, and more. Despite this lack of information, static images should be more straightforward in which features to extract, resulting in higher clarity of results of the advertisement's faults.
In terms of harvesting additional information, the idea of celebrity detection was initially thought useful since popular clothing and product brands (e.g., Adidas for Lionel Messi and Nike for LeBron James) sign multi-million sponsorship deals [33]. Surely, this implies these brands value a celebrity's popularity to gain additional revenue for their brand. Also, GumGum [11], a computer vision startup based in Los Angeles, provides valuations of brands on specific sports teams' jerseys, further supporting this hypothesis. However, due to monetary concerns, this feature was not explored.
A unique approach with potential to boost accuracy is training a neural network, such as a convolutional neural network, to predict the topic or sentiment of a video. In turn, this showcases how well a computer is able to represent a video's overall theme. A higher accuracy indicates an advertisement's message is successfully transferred to a human audience, increasing effectiveness.
Recently, there has been research on infographics, a form of modern advertisement with heavy information content [6]. Bylinskii et al. investigated textual information flow and predicted tags suitable for the content in the infographic. This same idea can be applied to advertisements, but moreso with static images rather than video advertisements, to place tags on internet advertisements on social media sites, such as Facebook, Instagram, and YouTube, further enhancing effectiveness and outreach of a brand or product.
A feat with significant capital potential is constructing a formula for effective video advertisements. For example, generating permutations of features and testing their individual effectiveness, then ranking their effectiveness. Furthermore, this ranking can become more granular for specific demographics, such as kids, adults, cultures, geographic regions, and income levels. To supplement this idea, a tool to recommend improvements to an advertisement can help boost additional revenue or outreach initiatives before the advertisement is released to the public via the internet, television, or mobile devices.
As a notable recent technological advance, Burger King has experimented with the use of artificial intelligence to generate advertisement descriptions for their products [36]. Once generated, they utilize text-to-speech software to annotate their advertisements. A potential future application is automatic generation of video advertisements with the use of generative adversarial networks [15]. Paired with the aforementioned improvement recommendation tool, a challenge is posed to generate advertisements, provide feedback to itself, and create highly effective videos. This task is difficult due to lack of training data, high complexity of video advertisement features, and recent developments of GANs.
Finally, although the dataset provides a thorough insight into the world of advertisements, not all videos were from major brands. Therefore, a slight bias may have been introduced into the dataset. It would be interesting to investigate the comparison of these results with the results from a commercial advertisement dataset, perhaps from a major cable television network or popular streaming service.
As a result, this research has opened up several gateways to boosting how content creators generate advertisements and how consumers consume advertisements. Additionally, several avenues of research have been proposed to generate momentum within the computer vision advertisements community with new, interesting challenges. | 4,993 |
1907.07513 | 2961524200 | The phenomenon of residential segregation was captured by Schelling's famous segregation model where two types of agents are placed on a grid and an agent is content with her location if the fraction of her neighbors which have the same type as her is at least @math , for some @math . Discontent agents simply swap their location with a randomly chosen other discontent agent or jump to a random empty cell. We analyze a generalized game-theoretic model of Schelling segregation which allows more than two agent types and more general underlying graphs modeling the residential area. For this we show that both aspects heavily influence the dynamic properties and the tractability of finding an optimal placement. We map the boundary of when improving response dynamics (IRD), i.e., the natural approach for finding equilibrium states, are guaranteed to converge. For this we prove several sharp threshold results where guaranteed IRD convergence suddenly turns into the strongest possible non-convergence result: a violation of weak acyclicity. In particular, we show such threshold results also for Schelling's original model, which is in contrast to the standard assumption in many empirical papers. Furthermore, we show that in case of convergence, IRD find an equilibrium in @math steps, where @math is the number of edges in the underlying graph and show that this bound is met in empirical simulations starting from random initial agent placements. | Recently, a series of papers by Young @cite_24 , Zhang @cite_21 @cite_28 , @cite_13 , @cite_6 @cite_22 , @cite_18 @cite_27 and @cite_19 initiated a rigorous analysis of stochastic processes induced by Schelling's model. In these processes either two randomly chosen unhappy agents of different type swap positions @cite_24 @cite_21 @cite_28 or a randomly chosen agent changes her type with a certain probability @cite_6 @cite_18 @cite_19 @cite_27 @cite_22 . It is worth noticing that both types of processes are closely related but not identical to Schelling's original model where discontent agents move to different positions until they become content with their current location. The focus of the above mentioned works is on investigating the expected size of the obtained homogeneous regions, but it is also shown that the stochastic processes starting from a uniform random agent placement converge with high probability to a stable placement. The convergence time was considered by Mobius & Rosenblat @cite_12 who observe that the Markov chain analyzed in @cite_24 @cite_21 @cite_28 has a very high mixing time. @cite_19 show in the two-dimensional grid case a dichotomy in mixing times for high @math and very low @math values. | {
"abstract": [
"Schelling's model of segregation looks to explain the way in which particles or agents of two types may come to arrange themselves spatially into configurations consisting of large homogeneous clusters, i.e. connected regions consisting of only one type. As one of the earliest agent based models studied by economists and perhaps the most famous model of self-organising behaviour, it also has direct links to areas at the interface between computer science and statistical mechanics, such as the Ising model and the study of contagion and cascading phenomena in networks. While the model has been extensively studied it has largely resisted rigorous analysis, prior results from the literature generally pertaining to variants of the model which are tweaked so as to be amenable to standard techniques from statistical mechanics or stochastic evolutionary game theory. In BK , Brandt, Immorlica, Kamath and Kleinberg provided the first rigorous analysis of the unperturbed model, for a specific set of input parameters. Here we provide a rigorous analysis of the model's behaviour much more generally and establish some surprising forms of threshold behaviour, notably the existence of situations where an level of intolerance for neighbouring agents of opposite type leads almost certainly to segregation.",
"We prove that the two-dimensional Schelling segregation model yields monochromatic regions of size exponential in the area of individuals' neighborhoods, provided that the tolerance parameter is a constant strictly less than 1 2 but sufficiently close to it. Our analysis makes use of a connection with the first-passage percolation model from the theory of stochastic processes.",
"This paper presents a variation of the Schelling [J. Math. Sociol. 1 (1971) 143; T.C. Schelling, Micromotives and Macrobehavior, Norton, New York, 1978] model to show that segregation emerges and persists even if every person in the society prefers to live in a half-black, half-white neighborhood. In contrast to Schelling’s inductive approach, we formulate neighborhood transition as a spatial game played on a lattice graph. The model is rigorously analyzed using techniques recently developed in stochastic evolutionary game theory. We derive our primary results mathematically and use agent-based simulations to explore the dynamics of segregation.",
"Using dimes and pennies on a checkerboard, Schelling (1971, 1978) studied the link between residential preferences and segregational neighborhood patterns. While his approach clearly has methodological advantages in studying the dynamics of residential segregation, Schelling's checkerboard model has never been rigorously analyzed. We propose an extension of the Schelling model that incorporates economic variables. Using techniques recently developed in stochastic evolutionary game theory, we mathematically characterize the model's long-term dynamics.",
"We analyze the Schelling model of segregation in which a society of n individuals live in a ring. Each individual is one of two races and is only satisfied with his location so long as at least half his 2w nearest neighbors are of the same race as him. In the dynamics, randomly-chosen unhappy individuals successively swap locations. We consider the average size of monochromatic neighborhoods in the final stable state. Our analysis is the first rigorous analysis of the Schelling dynamics. We note that, in contrast to prior approximate analyses, the final state is nearly integrated: the average size of monochromatic neighborhoods is independent of n and polynomial in w.",
"Suggests a reorientation of game theory in which players are not hyper-rational and knowledge is incomplete; postulates a simple adaptive learning process; and applies this framework to the study of social and economic institutions. Discusses learning; dynamic and stochastic stability; adaptive learning in small games; variations on the learning process; local interaction; equilibrium and disequilibrium selection in general games; bargaining; and contracts. Young is Scott and Barbara Black Professor of Economics at Johns Hopkins University. Bibliography; index.",
"The Schelling segregation model attempts to explain possible causes of racial segregation in cities. Schelling considered residents of two types, where everyone prefers that the majority of his or her neighbors are of the same type. He showed through simulations that even mild preferences of this type can lead to segregation if residents move whenever they are not happy with their local environments. We generalize the Schelling model to include a broad class of bias functions determining individuals happiness or desire to move, called the General Influence Model. We show that for any influence function in this class, the dynamics will be rapidly mixing and cities will be integrated (i.e., there will not be clustering) if the racial bias is sufficiently low. Next we show complementary results for two broad classes of influence functions: Increasing Bias Functions (IBF), where an individual's likelihood of moving increases each time someone of the same color leaves (this does not include Schelling's threshold models), and Threshold Bias Functions (TBF) with the threshold exceeding one half, reminiscent of the model Schelling originally proposed. For both classes (IBF and TBF), we show that when the bias is sufficiently high, the dynamics take exponential time to mix and we will have segregation and a large \"ghetto\" will form.",
"Schelling’s models of segregation, first described in 1969 (Am Econ Rev 59:488–493, 1969) are among the best known models of self-organising behaviour. Their original purpose was to identify mechanisms of urban racial segregation. But his models form part of a family which arises in statistical mechanics, neural networks, social science, and beyond, where populations of agents interact on networks. Despite extensive study, unperturbed Schelling models have largely resisted rigorous analysis, prior results generally focusing on variants in which noise is introduced into the dynamics, the resulting system being amenable to standard techniques from statistical mechanics or stochastic evolutionary game theory (Young in Individual strategy and social structure: an evolutionary theory of institutions, Princeton University Press, Princeton, 1998). A series of recent papers ( in: Proceedings of the 44th annual ACM symposium on theory of computing (STOC 2012), 2012); in: 55th annual IEEE symposium on foundations of computer science, Philadelphia, 2014, J Stat Phys 158:806–852, 2015), has seen the first rigorous analyses of 1-dimensional unperturbed Schelling models, in an asymptotic framework largely unknown in statistical mechanics. Here we provide the first such analysis of 2- and 3-dimensional unperturbed models, establishing most of the phase diagram, and answering a challenge from in: Proceedings of the 44th annual ACM symposium on theory of computing (STOC 2012), 2012).",
"",
"Abstract The Schelling segregation models are “agent based” population models, where individual members of the population (agents) interact directly with other agents and move in space and time. In this note we study one-dimensional Schelling population models as finite dynamical systems. We define a natural notion of entropy which measures the complexity of the family of these dynamical systems. The entropy counts the asymptotic growth rate of the number of limit states. We find formulas and deduce precise asymptotics for the number of limit states, which enable us to explicitly compute the entropy."
],
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_28",
"@cite_21",
"@cite_6",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_13",
"@cite_12"
],
"mid": [
"2953060309",
"2951330075",
"2118282281",
"2170702389",
"2020712831",
"1481080069",
"2220177418",
"2963904935",
"",
"2074222073"
]
} | Convergence and Hardness of Strategic Schelling Segregation (full version) | Residential segregation is a well-known and remarkable phenomenon in many major metropolitan areas. There, local and myopic location choices by many individuals with preferences over their direct residential neighborhood yield cityscapes which are severely segregated along racial and ethnical lines (see Fig. 1(a)). Hence, local strategic choices on the micro level lead to an emergent phenomenon on the macro level. This paradigm of "micromotives" versus "macrobehavior" [33] was first investigated and modeled by Thomas Schelling who proposed a very simple stylized model for analyzing residential segregation [31,32]. With the use of two types of coins as two types of individual agents and graph paper serving as residential area, Schelling demonstrated the emergence of segregated neighborhoods under the simple assumption of the following threshold behavior: agents are content with their current location if the fraction of agents of their own type in their neighborhood is at least τ , where 0 < τ < 1 is a global parameter which applies to all agents. Content agents do not move, but discontent agents will swap their location with some other random discontent agent or perform a random jump to an unoccupied place. Given this, Schelling demonstrated by experiment that starting from a uniformly random distribution of the agents (see Fig. 1(b)) the induced random process yields a residential pattern which shows strong segregation (see Fig. 1(c)). While this is to be expected for intolerant agents, i.e., τ > 1 2 , the astonishing finding of Schelling was that this also happens for tolerant agents, i.e., τ ≤ 1 2 . This counter-intuitive observation explains why even in a very tolerant population segregation along racial/ethnical, religious or socio-economical lines can emerge.
Schelling's elegant model became one of the landmark models in sociology and it spurred a significant number of research articles which studied and motivated variants of the model, e.g. the works by Clark [12], Alba & Logan [1], Benard & Willer [5], Henry et al. [26] and Bruch [9], to name only a few. Interestingly, also a physical analogue of Schelling's model was found by Vinković & Kirman [35] but it was argued by Clark & Fosset [13] that such models do not enhance the understanding of the underlying social dynamics. In contrast, they promote simulation studies via agent-based models where the agents' utility function is inspired by real-world behavior. Schelling's model as an agent-based system can be easily simulated on a computer and many such empirical simulation studies were conducted to investigate the influence of various parameters on the obtained segregation, e.g. see the works by Fossett [17], which use the simulation framework SimSeg [18], Epstein & Axtell [16], Gaylord & d'Andria [21], Pancs & Vriend [30], Singh et al. [34] and Benenson et al. [6].
All these empirical studies consider essentially an induced random process, i.e., that discontent agents are activated at random and active agents then swap or jump to other randomly selected positions. In some frameworks, like SimSeg [18] or the model by Pancs & Vriend [30], agents only change their location if this yields an improvement according to some utility function. This assumption of having rational agents which act strategically matches the behavior of real-world agents which would only move if this improves their situation. This paper sets out to explore the properties of such strategic dynamic processes and the tractability of the induced optimization problems.
Model and Notation
We consider a network G = (V, E), where V is the set of nodes and E is the set of edges, which is connected, unweighted and undirected. The network G serves as the underlying graph modeling the residential area in which the agents will select a location. If every node in G has the same degree ∆, i.e., the same number of incident edges, then we say that G is a ∆-regular graph. Let deg G (v) be the degree of a node v ∈ V in G and for a given node u ∈ V let Γ G (u) denote the set of nodes v = u so that an edge {u, v} exists in E. We call Γ G (u) the neighborhood of u in network G. Let A be the set of agents and P (A) = {T 1 , T 2 , . . . , T k } be any partition of A into k non-empty distinct sets, called types, which model racial/ethnic, religious or socio-economic 3 groups. For k = 2 this corresponds to Schelling's original model [31,32] with two different types of agents. Let t : A → P (A) be a surjective function such that t(a) = T if a ∈ T . We say that agent a is of type t(a). A state of our games is defined by an injective placement p G : A → V which assigns every agent to a node in the network G and we call p G (a) agent a's location under placement p G . Two agents a, b ∈ A are neighbors under placement p G if p G (b) ∈ Γ G (p G (a)) and we denote the set of neighbors of a under placement p G as N p G (a). For any agent a ∈ A, we define N T p G (a) = {b ∈ T | b ∈ N p G (a)}, as the set of agents of type T in the neighborhood of agent a under placement p G .
For any agent a ∈ A in a placement p_G, we define agent a's positive neighborhood $N^{+}_{p_G}(a)$ as $N^{t(a)}_{p_G}(a)$. For agent a's negative neighborhood, we define two different versions, called the one-versus-all and one-versus-one versions. In the one-versus-all version an agent wants a certain fraction of agents of her own type in her neighborhood, regardless of the specific types of neighboring agents with other types, so
$$N^{-}_{p_G}(a) = N_{p_G}(a) \setminus N^{+}_{p_G}(a).$$
In contrast to this, in the one-versus-one version an agent only compares the number of own-type agents to the number of agents in the largest group of agents with a different type in her neighborhood. Thus, we define the negative neighborhood of an agent a under placement p_G as the set of neighboring agents of the type T ≠ t(a) that makes up the largest proportion among all neighbors, i.e.,
$$N^{-}_{p_G}(a) = N^{T}_{p_G}(a) \;\text{ such that }\; T \in P(A) \setminus \{t(a)\} \;\text{ and }\; |N^{T}_{p_G}(a)| \geq |N^{T'}_{p_G}(a)| \;\text{ for all }\; T' \in P(A) \setminus \{t(a)\}.$$
Notice that the one-versus-all and one-versus-one versions coincide for k = 2, thus both versions generalize the two-type case. If an agent a has no neighboring agents, i.e., $N_{p_G}(a) = \emptyset$, we say that a is isolated, otherwise a is un-isolated.
Let τ ∈ (0, 1) be the intolerance parameter. Similar to Schelling's model we say that an agent a is content with placement p G if agent a is un-isolated and at least a τ -fraction of the agents in agent a's positive and negative neighborhood under p G are in agent a's positive neighborhood.
Hence, agent a is content if she is un-isolated and
$$\frac{|N^{+}_{p_G}(a)|}{|N^{+}_{p_G}(a)| + |N^{-}_{p_G}(a)|} \geq \tau,$$
otherwise a is discontent with placement p_G. We call the ratio
$$\mathrm{pnr}_{p_G}(a) = \frac{|N^{+}_{p_G}(a)|}{|N^{+}_{p_G}(a)| + |N^{-}_{p_G}(a)|}$$
the positive neighborhood ratio of agent a. An agent's aim is to find a node in the given network where she is content or, if this is not possible, where she has the highest possible positive neighborhood ratio. Therefore, and analogous to [11], we define the cost function of an agent a in a placement p_G for network G as follows:
$$\mathrm{cost}_{p_G}(a) = \begin{cases} \max\{0, \tau - \mathrm{pnr}_{p_G}(a)\}, & \text{if } a \text{ is un-isolated,} \\ \tau, & \text{if } a \text{ is isolated.} \end{cases}$$
Thus, agent a is content with placement p_G if and only if $\mathrm{cost}_{p_G}(a) = 0$. The placement cost, denoted $\mathrm{cost}_{p_G}(A)$, of a placement p_G in a network G is simply the number of all discontent agents:
$$\mathrm{cost}_{p_G}(A) = |\{a \in A \mid \mathrm{cost}_{p_G}(a) \neq 0\}|.$$
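To make the definitions concrete, the following sketch computes the positive neighborhood ratio and cost of every agent for the one-versus-all version; the adjacency-dictionary representation and function names are choices made for this illustration, not code from the paper.

```python
def agent_costs(adj, placement, agent_type, tau):
    """adj: dict node -> set of neighboring nodes.
    placement: dict agent -> node (injective).
    agent_type: dict agent -> type label.  tau: intolerance parameter."""
    agent_at = {v: a for a, v in placement.items()}
    costs = {}
    for a, v in placement.items():
        neighbors = [agent_at[u] for u in adj[v] if u in agent_at]
        if not neighbors:                       # isolated agent
            costs[a] = tau
            continue
        same = sum(agent_type[b] == agent_type[a] for b in neighbors)
        pnr = same / len(neighbors)             # one-versus-all ratio
        costs[a] = max(0.0, tau - pnr)
    return costs
```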
The Strategic Games: The strategy space of an agent is the set of all nodes in the network G. An agent can change her strategy either via swapping with another agent who agrees or via jumping to another unoccupied node in the network. This yields the Swap Schelling Game (SSG) and the Jump Schelling Game (JSG). For the SSG we will assume that all nodes of G are occupied. A location swap, or swap, of two agents a, b ∈ A under placement p_G is to exchange the occupied nodes of both agents. This yields a new placement $p'_G$ with $p'_G(a) = p_G(b)$, $p'_G(b) = p_G(a)$ and $p'_G(x) = p_G(x)$ for any other agent $x \in A \setminus \{a, b\}$. Two agents a, b ∈ A would only agree to such a swap if it strictly decreases the cost of both agents, i.e., $\mathrm{cost}_{p'_G}(a) < \mathrm{cost}_{p_G}(a)$ and $\mathrm{cost}_{p'_G}(b) < \mathrm{cost}_{p_G}(b)$. Hence, swapping agents are always of different types. If for some placement p_G no improving swap exists, then we say that p_G is swap-stable.
In the JSG we assume that there exist empty nodes in the underlying graph and an agent can change her strategy to any currently empty node, which we denote as a jump to that node. An agent will only jump to another empty node, if this strictly decreases her cost. An equilibrium placement in the JSG where no agent can improve via jumping is called jump-stable.
If the game is clear from the context, we will simply say that a placement p G is stable. If we have more than two different agent types we denote the one-versus-all version of the SSG and the JSG as 1-k-SSG and 1-k-JSG, respectively and the one-versus-one version of both games as 1-1-SSG and 1-1-JSG, respectively.
Improving Response Dynamics and Potential Games: We analyze whether improving response dynamics (IRD), i.e., the natural approach for finding equilibrium states where agents sequentially try to change towards better strategies until no agent can further improve, will converge. For showing this we employ ordinal potential functions. Such a function Φ maps placements to real numbers such that if an agent (or a pair of agents) under placement p_G can improve by a jump (or a swap) which results in placement $p'_G$, then $\Phi(p_G) > \Phi(p'_G)$ holds. That is, any improving strategy change also decreases the potential function value. The existence of an ordinal potential function shows that a game is a potential game [29], which guarantees the existence of pure equilibria and that IRD must terminate in an equilibrium. In contrast, an improving response cycle (IRC) is a sequence of improving strategy changes which visits the same state of the game twice. The existence of an IRC directly implies that a potential function cannot exist and thus, that IRD may not terminate. However, even with existing IRCs it is still possible that from any state of the game there exists a finite sequence of improving strategy changes which leads to an equilibrium. In this case the game is weakly acyclic [36]. Thus, the strongest possible non-convergence result is a proof that a game is not weakly acyclic.
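As an illustration of IRD in the swap game, the sketch below repeatedly applies improving swaps until the placement is swap-stable, using a cost function with the signature of the sketch above; the step budget guards against the non-convergent cases discussed below, and all names are hypothetical.

```python
import itertools

def improving_response_dynamics(adj, placement, agent_type, tau, cost_fn, max_steps=10**6):
    """Repeatedly perform profitable location swaps until the placement is
    swap-stable (or the step budget runs out, since IRD need not converge)."""
    placement = dict(placement)
    for _ in range(max_steps):
        costs = cost_fn(adj, placement, agent_type, tau)
        for a, b in itertools.combinations(placement, 2):
            if agent_type[a] == agent_type[b]:
                continue                          # swapping agents differ in type
            swapped = dict(placement)
            swapped[a], swapped[b] = placement[b], placement[a]
            new_costs = cost_fn(adj, swapped, agent_type, tau)
            if new_costs[a] < costs[a] and new_costs[b] < costs[b]:
                placement = swapped               # improving swap found
                break
        else:                                     # no improving swap: swap-stable
            return placement
    return placement                              # budget exhausted; possibly cycling
```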
Our Contribution
Our main contribution is a thorough investigation of the convergence behavior of improving response dynamics in variants of Schelling's model. Previous work, including Schelling's original papers and all the mentioned empirical simulation studies, assumes that IRD always converge to an equilibrium. We challenge this basic assumption by precisely mapping the boundary of when IRD are assured to find an equilibrium. We show that IRD behave radically differently in the swap version compared to the jump version. Moreover, we show that this contrasting behavior can even be found within these two variants. We demonstrate the extreme cases of guaranteed IRD convergence, i.e., the existence of an ordinal potential function, and the strongest possible non-convergence result, i.e., that even weak acyclicity is violated. For this, we provide sharp threshold results where for some τ* IRD are guaranteed to converge for τ ≤ τ* and we have non-weak-acyclicity for τ > τ*, depending on the underlying graph. See Table 1.
In case of IRD convergence, we show that this happens after O(|E|) many jumps/swaps on an underlying graph G = (V, E). We show via experiments that instances with randomly chosen initial placements meet this upper bound.
Besides analyzing IRD, we start a discussion about segregation models with more than two agent types. Besides the simple generalization of differentiating only between own type and other types, i.e., the 1-k-SSG and 1-k-JSG, we propose a more natural alternative, called the 1-1-SSG and the 1-1-JSG, where agents compare the type ratios only with the largest subgroup in their neighborhood. The idea here is that a minority group mainly cares about if there is a dominant other group within the neighborhood.
Moreover, we investigate the influence of the underlying graph on the hardness of computing an optimal placement. We show that computing this is NP-hard for arbitrary underlying graphs if τ = 1/2 or if τ is sufficiently close to 1, with the threshold depending on the maximum degree of the graph. In contrast to this, we provide an efficient algorithm for computing the optimum placement on a 2-regular graph with two agent types.
Table 1: Results regarding IRD.
reg.: 1-k-SSG: ✓ (Thm. 2); 1-1-SSG: ✓ (Thm. 4) for τ ≤ 1/∆, o (Thm. 5) for τ ≥ 6/∆; 1-k-JSG: ✓ (Thm. 7) for τ ≤ 2/∆, o (Thm. 8) for τ > 2/∆; 1-1-JSG: ✓ (Thm. 10) for τ ≤ 1/∆, o (Thm. 11) for τ > 2/∆.
arb.: 1-k-SSG: ✓ [11] for k = 2 and τ ≤ 1/2, × (Thm. 1 & 3) otherwise; 1-1-SSG: × (Thm. 6); 1-k-JSG: × (Thm. 9); 1-1-JSG: × (Thm. 12).
Here "reg." stands for ∆-regular graphs and "arb." for arbitrary graphs, which model the residential area. "✓" denotes that IRD converge to an equilibrium, "o" denotes the existence of an IRC, and "×" denotes that the version is not weakly acyclic. If τ is omitted, the result holds for any 0 < τ < 1.
The number of agent types also has an influence: we establish NP-hardness even on 2-regular graphs if there are sufficiently many agent types.
Schelling Dynamics for the Swap Schelling Game
In the following section we analyze the convergence behavior of IRD for the strategic segregation process via swaps. Chauhan et al. [11] already proved initial results in this direction, in particular that the SSG for two types of agents converges for the whole range of τ, i.e., τ ∈ (0, 1), on ∆-regular graphs and for τ ≤ 1/2 on arbitrary graphs. We close the gap and present a matching non-convergence bound in the SSG on arbitrary graphs.
The 1-k-variant seems to be a straightforward generalization of the two type case. An agent simply compares the number of neighbors of her type with the total number of neighbors. Interestingly, our IRD convergence results for the 1-k-SSG with k > 2 for arbitrary networks for τ ≤ 1 2 are in sharp contrast to the results for k = 2: On arbitrary networks with tolerant agents, i.e., with τ ≤ 1 2 , and k > 2 types IRD convergence is no longer guaranteed. For the 1-1-variant an agent compares the number of neighboring agents of her type with the size of the largest group of agents with a different type in her neighborhood. This captures the realistic setting where agents simply try to avoid being in a neighborhood where another group of agents dominates. We will show that even on a ∆-regular network an improving response cycle exists for the 1-1-SSG for sufficiently high τ .
IRD Convergence for the One-versus-All Version
For SSGs with k = 2 on regular networks and arbitrary networks with τ ≤ 1 2 the existence of a potential function was shown before in [11]. We show that this bound is tight, i.e., that for τ > 1 2 IRD may not converge.
Theorem 1. IRD are not guaranteed to converge in the SSG with k = 2 for τ ∈ (1/2, 1) on arbitrary networks. Moreover, weak acyclicity is violated.
Proof. We prove the statement by providing an improving response cycle where in every step exactly one improving swap is possible. The construction is shown in Fig. 2 and we assume that x is sufficiently large, e.g.,
x = max{1/(τ − 0.5), 1/(2 − 2τ)}.
We have orange agents of type T_1 and blue agents of type T_2. The orange agents in the groups u_i and the blue agents in the groups v_i, respectively, with 1 ≤ i ≤ 4, are interconnected and form a clique.
[Figure 2: An IRC for the SSG with k = 2 for τ ∈ (1/2, 1). The agent types are marked orange and blue. Multiple nodes in series represent a clique of nodes of the stated size. Edges between cliques or between a clique and single nodes represent that all involved nodes are completely interconnected. Panels: (a) Initial placement; (b) Placement after the first swap; (c) Placement after the second swap; (d).]
During the whole cycle the agents in u i and v i , respectively, are content. An orange agent z ∈ u i has 4x neighbors and at most one neighbor is blue. Hence, the positive neighborhood ratio of agent z is larger than τ . The same applies for a blue agent y ∈ v i . The agent y has 4x − 3 neighbors and at most one neighbor is orange. Therefore, an agent z ∈ u i and an agent y ∈ v i , respectively, never have an incentive to swap their position with another agent, since they are content.
In the initial placement (Fig. 2(a)), both agents a and d are discontent. By swapping their positions, agent a can decrease her cost from τ − 1/3 to τ − (x−1)/(2x) and agent d decreases her cost from τ − (x+1)/(2x) to max{0, τ − 2/3}. This is the only possible swap since neither b nor c have the opportunity to improve their costs via swapping with c, d, and a, b, respectively. However, after the first swap (Fig. 2(b)) agent a is still not content. Swapping with agent c decreases agent a's cost to τ − (2x−1)/(4x), and agent c can decrease her cost from τ − (2x+1)/(4x) to τ − (x+1)/(2x). Again, no other swap is possible since agent b would increase her cost by swapping with agent c or d. After this (Fig. 2(c)), agents b and d have the opportunity to swap and decrease their costs from τ − (x+1)/(2x) to max{0, τ − 2/3} and from τ − 1/3 to τ − (x−1)/(2x), respectively. Once more there is no other valid swap. Agent a does not want to swap with agent d and agent b not with agent c. Finally (Fig. 2(d)), agents a and d swap and both agents decrease their costs to τ − 1/2. Neither does agent b want to swap with agent c nor can agent c improve by swapping with agent a. After the fourth step the obtained placement is equivalent to the initial placement (Fig. 2(a)); only the blue agents a and b, and the orange agents c and d, respectively, have exchanged positions.
Since all the executed swaps were the only possible strategy changes, this proves that the SSG is not weakly acyclic, since, starting with the given initial placement, there is no possibility to reach a stable placement via improving swaps.
We now generalize the results from [11] by showing that convergence is guaranteed for the 1-k-SSG for any k ≥ 2.
Theorem 2. IRD are guaranteed to converge in O(|E|) moves for the 1-k-SSG with τ ∈ (0, 1) on any ∆-regular network G = (V, E).
Proof. We show that Φ(p_G) = (1/2) Σ_{a∈A} |N^-_{p_G}(a)| is an ordinal potential function. An agent a has no incentive to swap if she is content, and she will never swap with an agent of her own type, since this cannot be an improvement for both agents. Therefore, there will only be swaps between discontent agents of different types. Since we consider a ∆-regular network we have |N_{p_G}(a)| = |N^+_{p_G}(a)| + |N^-_{p_G}(a)| = ∆ for all a ∈ A. A swap between two agents a and b changes the current placement p_G only in the locations of the involved agents and yields a new placement p'_G. Since a swap is an improvement for the agent a who swaps, it holds that
|N^+_{p_G}(a)| / ∆ < |N^+_{p'_G}(a)| / ∆.
The same is true for the other agent b. Thus, the following holds for agent a (and agent b likewise):
|N^+_{p_G}(a)| < |N^+_{p'_G}(a)| ⟺ ∆ − |N^-_{p_G}(a)| < ∆ − |N^-_{p'_G}(a)| ⟺ |N^-_{p'_G}(a)| < |N^-_{p_G}(a)|.
It follows that Φ(p_G) − Φ(p'_G) > 0 and therefore the potential function value decreases if two agents swap their current positions. Since Φ(p_G) ≤ m, where m is the number of edges in the underlying network, and Φ decreases after every swap by at least 1, the IRD find an equilibrium in O(m) swaps.
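This potential argument can be checked numerically. In the one-versus-all setting, (1/2) Σ_a |N^-(a)| equals the number of bichromatic edges, so the following illustrative sketch (all helper names are assumptions, not the paper's code) asserts that every improving swap on a random ∆-regular graph strictly decreases that count.

```python
# Numerical check (illustrative sketch): Phi(p) = (1/2) * sum_a |N^-(a)| equals the
# number of bichromatic edges; every improving swap on a Delta-regular graph must
# strictly decrease it.
import networkx as nx

def phi(G, type_of_node):
    return sum(1 for u, v in G.edges() if type_of_node[u] != type_of_node[v])

def pnr(G, type_of_node, node):
    nbrs = list(G.neighbors(node))
    return sum(1 for w in nbrs if type_of_node[w] == type_of_node[node]) / len(nbrs)

tau = 0.7
G = nx.random_regular_graph(6, 30, seed=1)                  # Delta-regular, fully occupied
type_of_node = {v: (0 if i < 15 else 1) for i, v in enumerate(G.nodes())}

checked = 0
for u in G.nodes():
    for v in G.nodes():
        if type_of_node[u] == type_of_node[v]:
            continue
        swapped = dict(type_of_node)
        swapped[u], swapped[v] = swapped[v], swapped[u]
        # the agent on u moves to v and vice versa; both must strictly improve
        if (max(0, tau - pnr(G, swapped, v)) < max(0, tau - pnr(G, type_of_node, u)) and
                max(0, tau - pnr(G, swapped, u)) < max(0, tau - pnr(G, type_of_node, v))):
            assert phi(G, swapped) < phi(G, type_of_node)   # potential strictly decreases
            checked += 1
print("improving swaps checked:", checked)
```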
We contrast the above result by showing that guaranteed IRD convergence is impossible for any τ on arbitrary networks. This emphasizes the influence of the number of agent types on the convergence behavior of the IRD.
Theorem 3. IRD are not guaranteed to converge in the 1-k-SSG with k > 2 for τ ∈ (0, 1) on arbitrary networks. Moreover, weak acyclicity is violated.
Proof. We give an example of an improving response cycle, where in every step exactly one improving swap exists, for any τ ≤ 0.5. Together with the improving response cycle given in Theorem 1 for τ > 0.5 this yields the statement.
Consider Fig. 3 with a sufficiently high x, e.g., x > 3/(4τ) − 1, and τ ≤ 0.5. We have orange agents of type T_1, blue agents of type T_2 and gray agents of type T_3. The agents in one group u_i and v_j, respectively, with 1 ≤ i ≤ 4 and 1 ≤ j ≤ 2, are interconnected and form a clique.
During the whole cycle the agents in u_i and v_j, respectively, are content. An agent in u_i ∪ v_j has at most two neighboring agents of a different type and at least two agents of her type. Since τ ≤ 0.5 these agents are content. Therefore they have no incentive to swap. In the initial placement (Fig. 3(a)), agents a and d are discontent and want to swap. Agent a decreases her cost from τ to τ − 1/(4(x+1)) while agent d becomes content after the swap. This is the only possible swap. Agent c does not want to swap with agent a or b since she would increase her cost, and agent b cannot improve by swapping with c or d. Then (Fig. 3(b)), agent a is still discontent and willing to swap her position with another agent. Swapping with agent c decreases her cost to τ − 3/(8(x+1)) while c can improve from τ − 5/(8(x+1)) to τ − 3/(4(x+1)). Again, this is the only possible swap, since d is content and c still does not want to swap with b. After this (Fig. 3(c)), agent d has no neighbor of her type, so she swaps with agent b who becomes content. Agent d reduces her cost from τ to τ − 1/(4(x+1)). Agent a does not want to swap with d and agent b not with c since both a and b would have no agent of their own type in their neighborhood. Finally (Fig. 3(d)), agents a and d want to swap. Agent d decreases her cost to τ − 1/(2(x+1)) and agent a decreases her cost from τ − 3/(8(x+1)) to τ − 1/(2(x+1)). No other two agents have any incentive to swap their position, since neither agent c nor d wants to swap with agent b, as they would not have a neighboring agent of their type. For the same reason agent a is not interested in swapping with c. The resulting placement is equivalent to the initial one; only the blue agents a and b and the orange agents c and d exchanged positions.
Since all swaps are the only ones possible, this shows that the 1-k-SSG is not weakly acyclic as there is no possibility to reach a stable placement.
[Figure caption fragment: Edges between cliques or between a clique and single nodes represent that all involved nodes are completely interconnected.]
IRD Convergence for the One-versus-One Version
Recall that in the 1-1-SSG and 1-1-JSG, respectively, an agent only considers the largest group of neighboring agents of one type which is different from her own type. We start with a simple positive result for the 1-1-SSG.
Theorem 4. IRD are guaranteed to converge in the 1-1-SSG for τ ≤ 1/∆ on ∆-regular networks after at most |A| swaps.
Proof. Any agent a of type T who has a neighbor b of the same type is content, since τ ≤ 1/∆. Since b has a as a neighbor, b will also be content. Since both agents are content, neither of them will consider swapping positions, and therefore both will remain content.
Any agent a who is discontent cannot have a neighbor of the same type; otherwise a would be content. The cost of a must be τ in this case. Since a only considers a swap that decreases her cost, after swapping the cost of a can be at most max{0, τ − 1/∆}, which equals 0 since τ ≤ 1/∆. Hence a is content and will continue to be so, as we showed before.
Since agents are content at least after their first swap, and agents that are content will never swap again, each agent will participate in at most one swap. Therefore, the game converges after at most |A| swaps.
If τ is high enough, then the 1-1-SSG is no longer a potential game.
Theorem 5. IRD are not guaranteed to converge in the 1-1-SSG for τ ≥ 6/∆ on ∆-regular networks.
Proof. We use a similar instance as in the proof of Theorem 6. Consider Fig. 4 with x > 5(1−τ)/(6τ). We omit the edges between the cliques u_1, u_2 and u_3 of gray agents. Now, the highest degree in the graph is 6(x + 1). In order to make the graph regular, we insert new nodes filled with agents such that each new agent is the only agent of its type, and connect these new nodes with existing nodes and each other as needed.
In the initial placement (Fig. 4(a)) agents a and d are discontent and want to swap. Agent a decreases her cost from τ to τ − 1/(3x+1) while agent d either becomes content after the swap or, if τ > 1/2, has cost τ − 1/2. Then (Fig. 4(b)), agent a is still discontent. Swapping with agent c decreases her cost to τ − 2/(4x+2) while agent c can improve from τ − 2/(4x+2) to τ − 2/(3x+2). In the next step (Fig. 4(c)), agent d has no neighboring agent of her type. Therefore she swaps with agent b, who becomes content as a result of the swap if τ ≤ 1/2 or otherwise has cost τ − 1/2. Agent d reduces her cost from τ to τ − 1/(6x+1). Finally (Fig. 4(d)) agents a and d want to swap. Agent d has the possibility to decrease her cost to τ − 1/(4x+1) and agent a can decrease her own cost from τ − 3/(4x+3) to τ − 5/(6x+5). From x > 5(1−τ)/(6τ) as our only limitation and ∆ = 6(x + 1) we obtain τ ≥ 6/∆, where equality is reached if x is chosen as low as possible.
The situation is much worse on arbitrary graphs as the following theorem shows.
Theorem 6. IRD are not guaranteed to converge in the 1-1-SSG for τ ∈ (0, 1) on arbitrary networks. Moreover, weak acyclicity is violated.
Proof. We show the statement by giving an example of an improving response cycle where in every step exactly one improving swap exists. Consider Fig. 4 with x > max{5(1−τ)/(6τ), τ/(1−τ)}.
We have orange agents of type T 1 , blue agents of type T 2 and gray agents of type T 3 . The agents in one group u i and v i , respectively, with i ∈ {1, 2, 3, 4, 5} are interconnected and form a clique.
During the whole cycle the agents in u_i and v_i, respectively, are content. An agent in v_2 has at most 2 neighbors of any type other than T_1 and at least 3x neighbors of her own type. All the other agents in u_i ∪ v_i have at most one neighbor of another type and at least x neighboring agents of their own type. Therefore the positive neighborhood ratio pnr of an agent z ∈ u_i ∪ v_i is larger than τ for x > 1 and z has no incentive to swap. In the initial placement (Fig. 4(a)) agents a and d are discontent and want to swap. Agent a decreases her cost from τ to τ − 1/(3x+1) while agent d becomes content after the swap. This is the only possible swap. Agent c does not want to swap with agent a or b since she would be worse off and agent b cannot improve by swapping with d. Then (Fig. 4(b)), agent a is still discontent. Swapping with agent c decreases her cost to τ − 2/(4x+2) while agent c can improve from τ − 2/(4x+2) to τ − 2/(3x+2). Again, this is the only possible swap, since d is content and c would not improve by swapping with agent b. In the next step (Fig. 4(c)), agent d has no neighboring agent of her type. Therefore she swaps with agent b who becomes content as a result of the swap. Agent d reduces her cost from τ to τ − 1/(6x+1). Agent a does not want to swap with d since at the new position she would not have a neighboring agent of her own type, and agent b not with c since this would not be an improvement for b. Finally (Fig. 4(d)) agents a and d want to swap. Agent d has the possibility to decrease her cost to τ − 1/(4x+1) and agent a can decrease her own cost from τ − 3/(4x+3) to τ − 5/(6x+5). No other two agents have an incentive to swap their positions, since agent c does not want to swap with agent a or b. We end up in a placement which is equivalent to the initial one; only the blue agents a and b and the orange agents c and d exchanged positions.
Since all swaps were the only ones possible, this shows that the 1-1-SSG is not weakly acyclic as there is no possibility to reach a stable placement.
Schelling Dynamics for the Jump Schelling Game
We now analyze the convergence behavior of IRD for the strategic segregation process via jumps. Chauhan et al. [11] proved that the JSG converges for τ ∈ (0, 1) on 2-regular graphs. Furthermore, they showed that there exists an IRC for τ ∈ (1/3, 2/3) on an 8-regular grid if the agents have a favorite location, i.e., a node to which an agent a wants to be as close as possible without increasing her cost. In particular, such a favorite location is necessary for their IRC. We show that convergence is not guaranteed even without a favorite location on arbitrary graphs and sharpen the threshold for ∆-regular graphs to τ = 2/∆. We first turn our focus to the 1-k-JSG, where an agent only distinguishes between own and other types. Hence, an agent simply compares the number of neighbors of her type with the total number of neighbors.
[Figure caption fragment: ... for any τ ∈ (0, 1). Agent types are marked orange, blue and gray. Multiple nodes in series represent a clique of nodes of the stated size. Edges between cliques or between a clique and single nodes represent that all involved nodes are completely interconnected.]
IRD Convergence for the One-versus-All Version
In [11], the existence of an ordinal potential function was shown only for the JSG on 2-regular graphs. In contrast, we prove a sharp threshold result, with the threshold being at τ = 2/∆, for the convergence of IRD for the 1-k-JSG on ∆-regular graphs, for any ∆ ≥ 2. Moreover, we show that the game is not weakly acyclic on arbitrary graphs.
Theorem 7. IRD are guaranteed to converge in O(|E|) moves for the 1-k-JSG with τ ≤ 2/∆ on any ∆-regular network G = (V, E).
Proof. For any ∆-regular network G we define the weight w_{p_G}(e) of any edge e = {u, v} ∈ E as:
w_{p_G}(e) = 1 if u and v are occupied by agents of different types under p_G; w_{p_G}(e) = c if either u or v, but not both, is empty under p_G; and w_{p_G}(e) = 0 otherwise, where 1/2 − 1/(2∆) < c < 1/2.
We prove that Φ(p_G) = Σ_{e∈E} w_{p_G}(e) is an ordinal potential function. Note that τ is sufficiently small, so that an agent becomes content if she has two neighbors of her type. Therefore, an agent who is willing to jump to another node has at most one neighbor of the same type. Without loss of generality, we assume the existence of a discontent agent y for placement p_G. Let p'_G be a placement that results from a jump of y. Let a = |N^+_{p_G}(y)|, b = |N^-_{p_G}(y)|, and let ε be the number of empty nodes in the neighborhood of p_G(y). Let a' = |N^+_{p'_G}(y)| and b' = |N^-_{p'_G}(y)| be the number of agents of the same type and of a different type, respectively, and let ε' be the number of empty nodes in the neighborhood of p'_G(y). We will show that if an agent jumps, Φ decreases, i.e., it holds that
Φ(p_G) − Φ(p'_G) = (0a + 1b + cε + ca' + cb' + 0ε') − (ca + cb + 0ε + 0a' + 1b' + cε')
= −ca + (1 − c)b + cε + ca' + (c − 1)b' − cε' > 0,
and therefore Φ decreases for every improving jump of an agent.
There is no incentive for agent y to decrease the number of neighbors of the same type, because decreasing this number would mean that either a ≥ 2, i.e., agent y is content and does not want to jump, or a = 1 and therefore a' = 0, which is never an improvement. Hence, we have to distinguish between two cases:
If a < a', then agent y increases the number of neighbors of the same type. Since we consider a ∆-regular network, we have a + b + ε = ∆ and a' + b' + ε' = ∆, so b = ∆ − a − ε and b' = ∆ − a' − ε'. Hence,
−ca + (1 − c)b + cε + ca' + (c − 1)b' − cε'
= −ca + (1 − c)(∆ − a − ε) + cε + ca' + (c − 1)(∆ − a' − ε') − cε'
= −ca + (1 − c)(−a − ε) + cε + ca' + (c − 1)(−a' − ε') − cε'
= −ca − a − ε + ca + cε + cε + ca' − ca' − cε' + a' + ε' − cε'
= (2c − 1)ε + (1 − 2c)ε' − a + a'
≥ (2c − 1)ε − a + a',
since 1 − 2c > 0 and ε' ≥ 0. If ε = 0, we obtain (2c − 1)ε − a + a' = −a + a' > 0. If ε > 0, we have (2c − 1)ε − a + a' > (2(1/2 − 1/(2∆)) − 1)ε − a + a' = −ε/∆ − a + a' ≥ 0, since ε/∆ ≤ 1 ≤ a' − a.
If a = a', then the number of same-type neighbors of agent y stays the same. Since y improves her positive neighborhood ratio and since a = a', the number of different-type neighbors of y has to decrease and therefore b' < b. We denote the difference by δ with b = b' + δ. Therefore it holds that δ > 0. Since we consider a ∆-regular network, it follows that ε' = ε + δ. Hence,
−ca + (1 − c)b + cε + ca' + (c − 1)b' − cε'
= −ca + (1 − c)(b' + δ) + cε + ca' + (c − 1)b' − c(ε + δ)
= −ca + (1 − c)δ + ca' − cδ
= (1 − c)δ − cδ = (1 − 2c)δ > 0,
where the second to last equality holds since a = a'.
Since Φ(p_G) ≤ m, where m is the number of edges in the underlying graph, and Φ decreases after every jump by at least 1 − 2c, the IRD find an equilibrium in O(m) steps. Actually, Theorem 7 is tight and convergence is not guaranteed if τ > 2/∆.
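For completeness, the edge-weight potential used in this proof can be written down directly. The sketch below is illustrative only; the graph, the type assignment (with None marking empty nodes) and the concrete choice of c within the admissible interval are assumptions of the example.

```python
# Sketch (illustrative): the edge-weight potential for the 1-k-JSG on a Delta-regular
# graph, with weight 1 for bichromatic edges, c for edges with exactly one empty
# endpoint, and 0 otherwise, where 1/2 - 1/(2*Delta) < c < 1/2.
import networkx as nx

def jump_potential(G, type_of_node, delta):
    """type_of_node[v] is an agent type or None if v is empty."""
    c = 0.5 - 1.0 / (4 * delta)           # any value in (1/2 - 1/(2*Delta), 1/2) works
    total = 0.0
    for u, v in G.edges():
        tu, tv = type_of_node[u], type_of_node[v]
        if tu is None and tv is None:
            continue                       # both endpoints empty: weight 0
        if tu is None or tv is None:
            total += c                     # exactly one endpoint empty
        elif tu != tv:
            total += 1.0                   # occupied by agents of different types
    return total

# Usage example: a 4-regular circulant graph with one empty node.
G = nx.circulant_graph(8, [1, 2])          # 4-regular on 8 nodes
types = {0: "A", 1: "A", 2: "B", 3: None, 4: "B", 5: "A", 6: "B", 7: "B"}
print(jump_potential(G, types, delta=4))
```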
Theorem 8. The 1-k-JSG for τ > 2/∆ on ∆-regular graphs is not a potential game.
Proof. We prove the statement by providing an improving response cycle. See Fig. 5. If we have more than two types of agents, all agents of types different from T_1 and T_2 can be placed outside of the neighborhood of the agents a, b and c who are involved in the IRC. Let τ > 2/∆. In the initial placement, agent a is discontent and has cost τ − 2/∆. By jumping next to agent c she becomes content. Because of this jump, agent b becomes isolated. Jumping next to the agents d and y decreases her cost from τ to τ − 1/(∆−1). After the second step, the obtained placement is equivalent to the initial placement; only agents a, b, and c changed their roles. Hence, the next two jumps of agents c and a are like the first two: first, agent c jumps next to agent b to become content, then agent a jumps next to the agents c and z to avoid an isolated position. We end up in a placement equivalent to the initial one.
If the underlying network is an arbitrary network the situation is worse.
Theorem 9. IRD are not guaranteed to converge in the 1-k-JSG for τ ∈ (0, 1) on arbitrary networks. Moreover, weak acyclicity is violated.
[Figure 5: An IRC for the JSG for τ > 2/∆ on a ∆-regular network. Empty nodes are white, agents of type T_1 are orange, agents of type T_2 are blue. Multiple nodes in series represent a clique of ∆ − 2 nodes. An edge between a clique and a single node denotes that each clique node is connected to that single node. An edge between two cliques represents that each clique node has exactly one neighbor in the other clique. With this, the network is indeed ∆-regular: each node is connected to all nodes of exactly one group of size ∆ − 2 and to two other nodes.]
[Figure caption fragment: ... for any τ ∈ (0, 1). Agents of type T_1 are orange, agents of type T_2 are blue. Multiple nodes in a series represent a clique of nodes of the stated size. Edges between cliques or between a clique and single nodes represent that all involved nodes are completely interconnected.]
Proof. We show the statement by giving an example of an improving response cycle where in every step exactly one agent has exactly one improving jump. Consider Fig. 6. We assume that x is sufficiently high, e.g., x > max{2/τ, 1/(1−τ)}. If we have more than two different types of agents, all agents of types dissimilar to T_1 and T_2 can be placed in cliques outside of the neighborhood of all of the agents involved in the IRC. If these cliques are placed inside network components which are neither connected to the IRC nodes, nor to each other, the agents of these types will never become discontent. Hence, the jumps of the given IRC are the only ones possible. In the construction we have four orange agents, a, b, c, d, of type T_1, 2x + 1 blue agents of type T_2 in the sets u and v and the agent f, and one white empty node. All nodes which are occupied by the blue agents are interconnected and form a clique.
During the whole cycle, all blue agents are content. A blue agent z ∈ T_2 has 2x+2 neighbors of whom at least 2x are of the same type. Hence, the positive neighborhood ratio of agent z is larger than τ and she has no incentive to jump to another currently empty node. Also the orange agent d remains content during the entire cycle since she is never isolated and never has a neighboring agent of a different type. In the initial placement (Fig. 6(a)), the orange agent a is discontent, since her only neighboring agent f is blue. Therefore, a jumps to the empty node. Agent b and, depending on the value of τ, agent c are discontent. However, jumping to the empty node next to agent d is not an improvement for them. Now (Fig. 6(b)) agent b is discontent, since x is chosen sufficiently high that the positive neighborhood ratio of b is smaller than τ. Hence, jumping to the empty node next to agent a improves the cost of b from τ − 2/(x+2) to max{0, τ − 0.5}. Again, this is the only valid jump, since agent c would still have exactly one blue agent and one orange agent in her neighborhood by jumping next to agent a. Two further jumps (Fig. 6(c) and 6(d)) by agents c and a, which are equivalent to those shown in Fig. 6(a) and Fig. 6(b), restore the initial placement.
Since all executed jumps were the only ones possible, this shows that the JSG is not weakly acyclic as there is no possibility to reach a stable placement via improving jumps.
IRD Convergence for the One-versus-One Version
Now we turn to the 1-1-JSG. By using the same proof as in Theorem 4 with jumps instead of swaps, we get the following positive result.
Theorem 10. IRD are guaranteed to converge in the 1-1-JSG for τ ≤ 1/∆ on ∆-regular networks.
The same IRC which proves Theorem 8 for the 1-k-JSG yields the next result.
Theorem 11. IRD may not converge in the 1-1-JSG for τ > 2/∆ on ∆-regular graphs.
Finally the proof of Theorem 9 works for the following result as well.
Theorem 12. IRD are not guaranteed to converge in the 1-1-JSG for τ ∈ (0, 1) on arbitrary networks. Moreover, weak acyclicity is violated.
Computational Hardness of Finding Optimal Placements
Here, we investigate the computational hardness of computing an optimal placement, i.e., a placement where as many agents as possible are content.
Hardness Properties for Two Types
We start with two types of agents and show that finding an optimal placement for the SSG in an arbitrary network G is NP-hard by giving a reduction from the Balanced Satisfactory Problem (BSP), which was introduced in [22,23] and proven to be NP-hard in [4]. This result directly implies that finding an optimal placement for the JSG with no empty nodes is NP-hard as well.
Theorem 13. Finding an optimal placement of agents for the SSG with two types of agents in a network G is NP-hard for τ = 1/2.
Proof. We prove the statement by giving a reduction from the BSP. Given a network G = (V, E) with an even number of nodes, let v ∈ V and V' ⊆ V. We denote by deg_{V'}(v) the number of nodes in V' which are adjacent to v. A balanced satisfactory partition exists if there is a non-trivial partition V_1, V_2 of the nodes V with V_1 ∪ V_2 = V, V_1 ∩ V_2 = ∅ and |V_1| = |V_2| such that each node v ∈ V_i with i ∈ {1, 2} satisfies deg_{V_i}(v) ≥ deg_G(v)/2, i.e., each node has at least as many neighbors in its own part as in the other. If such a partition exists, we can find it by computing an optimal placement p*_G in the network G for two different types of agents, each of size |V|/2, and τ = 1/2. The cost of a placement p_G is the number of discontent agents. Obviously, a placement p_G without discontent agents, and thus with placement cost cost_{p_G}(A) = 0, is optimal. For a content agent a ∈ A we have pnr_{p_G}(a) ≥ 1/2 = τ and thus, if there are no empty nodes, we know |N^+_{p_G}(a)| ≥ deg_G(p_G(a))/2. If we have a placement where all agents are content, we can gather all nodes which are occupied by agents of type T_1 into the subset V_1 and all nodes which are occupied by agents of type T_2 into the subset V_2. It holds for every a ∈ A that deg_{V_i}(p*_G(a)) = |N^+_{p*_G}(a)| ≥ deg_G(p*_G(a))/2. Hence, calculating an optimal placement must be NP-hard.
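The correspondence used in this proof can be stated operationally: a zero-cost placement for τ = 1/2 with no empty nodes induces a two-coloring of the nodes that is a balanced satisfactory partition. The following illustrative sketch (function and variable names are assumptions) checks that property.

```python
# Sketch: check that a two-coloring of the nodes forms a balanced satisfactory
# partition, i.e. |V1| = |V2| and every node has at least half of its neighbors
# on its own side (this is what a zero-cost placement for tau = 1/2 induces).
import networkx as nx

def is_balanced_satisfactory(G, side_of_node):
    V1 = [v for v in G.nodes() if side_of_node[v] == 1]
    V2 = [v for v in G.nodes() if side_of_node[v] == 2]
    if len(V1) != len(V2) or not V1 or not V2:
        return False
    for v in G.nodes():
        same = sum(1 for w in G.neighbors(v) if side_of_node[w] == side_of_node[v])
        if same < G.degree(v) / 2:
            return False
    return True

# Usage example: two triangles joined by a single edge, split evenly.
G = nx.Graph([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)])
print(is_balanced_satisfactory(G, {0: 1, 1: 1, 2: 1, 3: 2, 4: 2, 5: 2}))  # True
```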
The above proof relies on the fact that there are no empty nodes. The computational hardness of the JSG changes if many empty nodes exist. Obviously, it is easy to find an optimal placement if there are enough empty nodes to separate both types of agents completely and a suitable separator is known. Mapping the boundary for the transition from NP-hardness to efficient computation is a challenging question for future work.
Next we show that finding an optimal placement is hard for high τ via a reduction from Minimum Cut Into Equal Size (MCIES) which was proven to be NP-hard in [19].
Theorem 14. Finding an optimal placement in the SSG on an arbitrary network G = (V, E) with maximum node degree ∆_G = max{deg_G(v) | v ∈ V} is NP-hard for τ > 3∆_G/(3∆_G + 1).
Proof. We prove the statement by giving a reduction from MCIES. Given a network G = (V, E) and an integer W ∈ N, MCIES is the decision problem whether there is a non-trivial partition V_1, V_2 with V_1 ∪ V_2 = V, V_1 ∩ V_2 = ∅ and |V_1| = |V_2| such that |{{v_1, v_2} ∈ E | v_1 ∈ V_1, v_2 ∈ V_2}| ≤ W, i.e., there are at most W edges between the two parts.
Let ∆_G = max{deg_G(v) | v ∈ V} be the maximum node degree in G. We create a network G' = (V', E') in which every node v ∈ V is replaced by a clique C_v in G' of size 3∆_G + 1. Each edge {u, v} ∈ E is replaced by an edge {u', v'} between two nodes u' ∈ C_u and v' ∈ C_v such that each node in G' has at most one neighbor outside its clique. Therefore, the degree of nodes in G' is either 3∆_G or 3∆_G + 1, and so the maximum node degree ∆_{G'} in G' is 3∆_G + 1. We have two different agent types, each consisting of |V'|/2 agents. Let τ > (∆_{G'} − 1)/∆_{G'} = 3∆_G/(3∆_G + 1).
Because of this, an agent is content in G' if and only if she has no neighbors of a different type. For a placement p_{G'} to be optimal, all cliques C_v have to be uniform, i.e., agents of the same type are assigned to each node in C_v. Otherwise another non-uniform clique C' has to exist and we can re-assign the agents in both cliques, obtaining a placement p'_{G'} that makes C uniform. In p_{G'} all agents of both cliques are discontent, while in p'_{G'} at least 2∆_G + 1 agents in C that have no neighbors outside C are content. Since each clique is only connected to at most ∆_G other nodes, at most 2∆_G agents that were content in p_{G'} are discontent in p'_{G'}. Therefore, p_{G'} would not be optimal.
If we have an optimal placement with 2W' discontent agents, we can gather all v ∈ V where C_v is occupied by agents of type T_1 into V_1, and similarly into V_2 for T_2. We then have W' edges between the two sets V_1 and V_2. Hence, a placement with 2W' discontent agents corresponds to a cut with W' edges between the two parts, and vice versa.
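A sketch of the graph transformation used in this reduction is given below: every node of G becomes a clique of size 3∆_G + 1, and each original edge is routed through a distinct clique member so that every node of G' has at most one neighbor outside its clique. The code is illustrative; function and variable names are assumptions.

```python
# Sketch of the reduction's graph construction (Theorem 14): replace every node of G
# by a clique of size 3*Delta_G + 1 and route each original edge through a distinct
# clique member, so each node of G' has at most one neighbor outside its clique.
import itertools
import networkx as nx

def blow_up(G):
    delta = max(dict(G.degree()).values())
    size = 3 * delta + 1
    Gp = nx.Graph()
    for v in G.nodes():
        members = [(v, i) for i in range(size)]
        Gp.add_edges_from(itertools.combinations(members, 2))    # clique C_v
    port = {v: 0 for v in G.nodes()}                              # next free clique member
    for u, v in G.edges():
        Gp.add_edge((u, port[u]), (v, port[v]))                   # one inter-clique edge
        port[u] += 1
        port[v] += 1
    return Gp

# Usage example: a path on three nodes; degrees in G' are 3*Delta_G or 3*Delta_G + 1.
G = nx.path_graph(3)
Gp = blow_up(G)
print(sorted(set(dict(Gp.degree()).values())))   # [6, 7] here, since Delta_G = 2
```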
For the above theorems, we used a placement cost function which counts the number of discontent agents. However, we remark that even if we change this definition to summing up the costs of all agents, i.e., cost_{p_G}(A) = Σ_{a∈A} cost_{p_G}(a), akin to social cost, the above hardness results still hold. This relates to the hardness results from Elkind et al. [15] which hold for the JSG with τ = 1 in the presence of stubborn agents which are unwilling to move.
We contrast the above results by providing an efficient algorithm for computing an optimal placement for the SSG and the JSG on a 2-regular network with two different agent types by employing a well-known dynamic programming algorithm for Subset Sum [14,20].
Theorem 15. Finding an optimal placement of agents of two types in the SSG on a 2-regular network with n nodes can be done in O(n^2) time for τ > 1/2.
Proof. Let G = (V, E) be a 2-regular network consisting of m rings, where ring i has r_i nodes. Given a partition of the agents P(A) = {T_1, T_2} with |T_1| = n_1 and |T_2| = n_2.
For finding a placement that minimizes cost_{p_G}(A), we take the multiset r_1, . . . , r_m as elements and n_1 as the target sum of an instance of Subset Sum, which we can solve in O(n^2) time since n_1 ≤ n.
In case of a Yes-instance, we can place the agents of type T_1 on the rings indicated by the selected elements. Thus no agents of different types are on the same ring. If the instance is a No-instance, then in the optimal placement there is exactly one ring with agents of different types. This implies that at least 3 and at most 4 agents are discontent. To check if an optimal placement with 3 discontent agents is possible, we solve the Subset Sum instance with target sum n_1 + 1. If this is possible, then we place the n_1 agents on the respective rings such that exactly one node is empty. Then all empty nodes are filled with type T_2 agents. If the instance with target sum n_1 + 1 is a No-instance, we greedily fill the rings with consecutive type T_1 agents such that we get one ring with empty spots. Then we fill all the empty spots with type T_2 agents to obtain exactly 4 discontent agents.
Optimal placements for the JSG can be found with an analogous algorithm.
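The core of the algorithm is a standard Subset Sum dynamic program over the ring sizes. A minimal sketch (with illustrative names and a toy instance) is shown below; it returns one selection of rings that sums to the target, or None if no such selection exists.

```python
# Sketch: Subset Sum over the ring sizes r_1, ..., r_m with target n_1, as used in the
# O(n^2) algorithm on 2-regular networks.
def subset_of_rings(ring_sizes, target):
    reachable = {0: []}                        # achievable sum -> chosen ring indices
    for i, r in enumerate(ring_sizes):
        for s, chosen in list(reachable.items()):
            if s + r <= target and s + r not in reachable:
                reachable[s + r] = chosen + [i]
    return reachable.get(target)

# Usage example: rings of sizes 5, 6, 7 and n_1 = 12 agents of the first type.
print(subset_of_rings([5, 6, 7], 12))          # [0, 2], since 5 + 7 = 12
```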
Hardness Properties for More Types
Compared to the previous subsection we now show that also the number of different agent types has an influence on the computational hardness of finding an optimal placement. We establish NP-hardness even on 2-regular networks if there are sufficiently many agent types by giving a reduction from 3-Partition which was proven to be NP-hard in [20].
Theorem 16. Finding an optimal placement of agents of an arbitrary number of types in the 1-1-SSG and 1-k-SSG on a 2-regular network with τ > 1/2 is NP-hard.
Proof. We prove the statement by giving a polynomial time reduction from 3-Partition. Given a multiset S of 3k positive integers, 3-Partition asks whether S can be partitioned into k disjoint sets S_i with i ∈ {1, . . . , k} of size three, such that the sum of the numbers in each subset is equal, i.e., Σ_{s_i∈S_1} s_i = Σ_{s_i∈S_2} s_i = · · · = Σ_{s_i∈S_k} s_i. As these sets are disjoint and cover S, each S_i must sum to n/k, where n = Σ_{s_i∈S} s_i; as is standard for 3-Partition, we additionally assume that n/(4k) < s_i < n/(2k) holds for all s_i ∈ S. Based on a 3-Partition instance, we generate a 2-regular graph containing a ring with s_i nodes for each s_i ∈ S. Thus our graph has n = Σ_{s_i∈S} s_i nodes in total. We can assume s_i ≥ 3 for all s_i ∈ S, since adding a constant to all elements does not change the existence of a solution. We now take a set of n agents A partitioned into types P(A) = {T_1, . . . , T_k}. Each type consists of n/k agents. Assume we find an optimal placement with cost_{p_G}(A) = 0 for τ > 1/2. This means that there is no ring that contains agents of different types, since an agent is discontent if she has a neighboring agent of a different type. Thus, we have a disjoint partitioning of the rings such that the number of nodes in each partition adds up to n/k = (Σ_{s_i∈S} s_i)/k. We also assumed that n/(4k) < s_i < n/(2k); thus all agents of a type T_i have to be placed on exactly three rings. This directly implies a solution for the 3-Partition instance. If the corresponding 3-Partition instance has a solution S_1, . . . , S_k, this produces a partitioning of the rings such that each partition contains Σ_{s_i∈S_j} s_i = n/k nodes. Placing the agent types according to this partitioning will not produce any ring with agents of different types on it. Such a placement has cost_{p_G}(A) = 0, which has to be optimal.
Since our reduction can be done in polynomial time for unary encoded instances of 3-Partition, this proves the NP-hardness of finding an optimal placement.
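The instance built in this reduction, one ring (cycle) per integer s_i together with k equally sized agent types, can be generated as follows. The sketch and the toy multiset are illustrative assumptions, not taken from the paper.

```python
# Sketch of the instance built in the reduction from 3-Partition (Theorem 16): one
# ring (cycle) per integer s_i, and k agent types with n/k agents each.
import networkx as nx

def rings_instance(S, k):
    G = nx.Graph()
    for s in S:
        G = nx.disjoint_union(G, nx.cycle_graph(s))   # add one ring with s nodes
    n = sum(S)
    agents_per_type = n // k
    return G, agents_per_type

S = [3, 4, 5, 3, 4, 5]                                # toy multiset with 3k = 6 elements, k = 2
G, per_type = rings_instance(S, k=2)
print(G.number_of_nodes(), per_type)                  # 24 nodes, 12 agents per type
```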
To conclude the section on computational hardness, we want to emphasize that settling the question whether finding an optimal placement is easy or hard does not allow us to make equivalent statements for computing stable placements. The following example illustrates the rather counter-intuitive fact that an optimal placement is not necessarily stable.
[Figure 7: A network where the optimal placement p*_G is not in equilibrium for τ > 0.9. Multiple nodes in series represent a clique of nodes of the stated size. Edges between cliques or between a clique and single nodes represent that all involved nodes are completely interconnected.]
Theorem 17. For the SSG with two different types of agents there is a network G where no optimal placement is stable.
Proof. We prove the statement by giving an example. Consider Fig. 7. The pictured network has the cliques u_i and v_i, 1 ≤ i ≤ 3, each of size ten. Let τ > 0.9. The placement p*_G depicted in Fig. 7a has cost_{p*_G}(A) = 7, and the placement p_G in Fig. 7b has cost_{p_G}(A) = 8. The former is optimal since every placement other than the given two has to place agents of different types in at least one of the cliques. This would cause all agents in that clique to become discontent and thus yield a placement cost of at least 10. However, the agents a and b want to swap in placement p*_G. Hence, the unique optimal placement p*_G is not stable.
Simulation
As a final aspect, we enrich our theoretical results with empirical results for the versions where IRD convergence is guaranteed. We find that for the versions with two agent types the IRD starting from uniformly random placements produce an equilibrium in c · m steps, where c is a positive constant and m is the number of edges in the underlying network. See Fig. 8. This meets our upper bound of O(m). Interestingly, IRD convergence is faster on random 8-regular graphs than on 8-regular toroidal grids. This hints that geometry may influence the convergence speed. The details of the simulation can be found in the appendix.
Simulation Set-up.
For our simulations we considered two different network topologies: toroidal grids with the Moore neighborhood, i.e., grids in which the nodes also have diagonal edges so that every node has degree 8, and random 8-regular networks. We generated grids with 100 × 100 up to 300 × 300 nodes where the grid sides increased in steps of 20. To have comparable random 8-regular graphs, we generated them with the same number of nodes. For each configuration we ran the IRD starting from 100 random initial placements to derive the results depicted in Fig. 8.
To get the initial placements, the agents were placed uniformly at random on the nodes of the network and we assumed equal proportions of each agent type. For the jump game we used 6% empty nodes. In each round the discontent agents are activated in a random order and each activated agent iterates randomly over all possible locations for a swap or a jump and chooses the first location which yields an improvement.
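The two underlying topologies can be generated with standard library helpers. The following sketch is illustrative and is not the simulation code used for Fig. 8; it builds the toroidal grid with Moore neighborhood by adding the diagonal wrap-around edges and a random 8-regular graph of the same size.

```python
# Sketch: generate the two underlying networks used in the simulations, namely a
# toroidal grid with Moore neighborhood (every node has degree 8) and a random
# 8-regular graph on the same number of nodes.
import networkx as nx

def toroidal_moore_grid(rows, cols):
    G = nx.grid_2d_graph(rows, cols, periodic=True)                # 4 wrap-around neighbors
    for i in range(rows):
        for j in range(cols):
            G.add_edge((i, j), ((i + 1) % rows, (j + 1) % cols))   # diagonal edges
            G.add_edge((i, j), ((i + 1) % rows, (j - 1) % cols))
    return G

grid = toroidal_moore_grid(100, 100)
regular = nx.random_regular_graph(8, 100 * 100, seed=0)
print(all(d == 8 for _, d in grid.degree()), all(d == 8 for _, d in regular.degree()))
```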
Conclusion and Open Questions
We conducted a thorough analysis of the dynamic properties of the game-theoretic version of Schelling's segregation model and provided tight threshold results for the IRD convergence for several versions of the game. Furthermore, we found that the number of agent types and the underlying graph have a severe impact on the computational hardness of computing optimal placements.
It remains open whether IRD always converge for the 1-1-SSG with τ ∈ (1/∆, 6/∆), and for the 1-1-JSG with τ ∈ (1/∆, 2/∆). Since most versions are not guaranteed to converge via IRD, the existence of stable placements for all graph types is not given. Elkind et al. [15] showed for the 1-k-JSG that stable placements exist if the underlying network is a star or a graph with maximum degree 2 and τ = 1. Furthermore, they proved that if the underlying network is a tree, a stable placement may fail to exist for τ = 1 in the 1-k-JSG. However, in general, it remains an open question, in terms of different values of τ and for different underlying networks, whether stable placements exist and whether they can be computed efficiently. We conjecture the following: Conjecture 1. Equilibria are not guaranteed to exist in all cases for which we constructed IRCs.
Also the computational hardness of finding optimal placements for some variants deserves further study and this could be extended to study the existence of other interesting states, e.g., stable states with low segregation.
Our IRD convergence results can be straightforwardly adapted to hold for the extended model by Chauhan et al. [11], where agents also have single-peaked preferences over the locations. Moreover, we are positive that also our computational hardness results can be carried over.
Last but not least, we emphasize that there are many possible ways to model Schelling segregation with at least three agent types. For example, types could have preferences over other types, which then yields a rich unexplored setting. | 10,462 |
1907.07513 | 2961524200 | The phenomenon of residential segregation was captured by Schelling's famous segregation model where two types of agents are placed on a grid and an agent is content with her location if the fraction of her neighbors which have the same type as her is at least @math , for some @math . Discontent agents simply swap their location with a randomly chosen other discontent agent or jump to a random empty cell. We analyze a generalized game-theoretic model of Schelling segregation which allows more than two agent types and more general underlying graphs modeling the residential area. For this we show that both aspects heavily influence the dynamic properties and the tractability of finding an optimal placement. We map the boundary of when improving response dynamics (IRD), i.e., the natural approach for finding equilibrium states, are guaranteed to converge. For this we prove several sharp threshold results where guaranteed IRD convergence suddenly turns into the strongest possible non-convergence result: a violation of weak acyclicity. In particular, we show such threshold results also for Schelling's original model, which is in contrast to the standard assumption in many empirical papers. Furthermore, we show that in case of convergence, IRD find an equilibrium in @math steps, where @math is the number of edges in the underlying graph and show that this bound is met in empirical simulations starting from random initial agent placements. | Very recently, @cite_20 studied a variant of the model by @cite_8 , where the agents are partitioned into stubborn and strategic agents. The former agents do not move and the latter agents try to maximize the fraction of same-type agents in their neighborhood by jumping to a suitable empty location. This corresponds to a variant of the JSG with @math . They show that equilibria are not guaranteed to exist and that deciding equilibrium existence or the existence of an agent placement with certain social welfare is NP-hard. This relates to our hardness results for computing socially optimal states. They also prove that the price of anarchy and the price of stability can be unbounded. | {
"abstract": [
"We consider strategic games that are inspired by Schelling's model of residential segregation. In our model, the agents are partitioned into k types and need to select locations on an undirected graph. Agents can be either stubborn, in which case they will always choose their preferred location, or strategic, in which case they aim to maximize the fraction of agents of their own type in their neighborhood. We investigate the existence of equilibria in these games, study the complexity of finding an equilibrium outcome or an outcome with high social welfare, and also provide upper and lower bounds on the price of anarchy and stability. Some of our results extend to the setting where the preferences of the agents over their neighbors are defined by a social network rather than a partition into types.",
"Schelling’s segregation model is a landmark model in sociology. It shows the counter-intuitive phenomenon that residential segregation between individuals of different groups can emerge even when all involved individuals are tolerant. Although the model is widely studied, no pure game-theoretic version where rational agents strategically choose their location exists. We close this gap by introducing and analyzing generalized game-theoretic models of Schelling segregation, where the agents can also have individual location preferences."
],
"cite_N": [
"@cite_20",
"@cite_8"
],
"mid": [
"2917822652",
"2809709428"
]
} | Convergence and Hardness of Strategic Schelling Segregation (full version) | Residential segregation is a well-known and remarkable phenomenon in many major metropolitan areas. There, local and myopic location choices by many individuals with preferences over their direct residential neighborhood yield cityscapes which are severely segregated along racial and ethnical lines (see Fig. 1(a)). Hence, local strategic choices on the micro level lead to an emergent phenomenon on the macro level. This paradigm of "micromotives" versus "macrobehavior" [33] was first investigated and modeled by Thomas Schelling who proposed a very simple stylized model for analyzing residential segregation [31,32]. With the use of two types of coins as two types of individual agents and graph paper serving as residential area, Schelling demonstrated the emergence of segregated neighborhoods under the simple assumption of the following threshold behavior: agents are content with their current location if the fraction of agents of their own type in their neighborhood is at least τ , where 0 < τ < 1 is a global parameter which applies to all agents. Content agents do not move, but discontent agents will swap their location with some other random discontent agent or perform a random jump to an unoccupied place. Given this, Schelling demonstrated by experiment that starting from a uniformly random distribution of the agents (see Fig. 1(b)) the induced random process yields a residential pattern which shows strong segregation (see Fig. 1(c)). While this is to be expected for intolerant agents, i.e., τ > 1 2 , the astonishing finding of Schelling was that this also happens for tolerant agents, i.e., τ ≤ 1 2 . This counter-intuitive observation explains why even in a very tolerant population segregation along racial/ethnical, religious or socio-economical lines can emerge.
Schelling's elegant model became one of the landmark models in sociology and it spurred a significant number of research articles which studied and motivated variants of the model, e.g. the works by Clark [12], Alba & Logan [1], Benard & Willer [5], Henry et al. [26] and Bruch [9], to name only a few. Interestingly, also a physical analogue of Schelling's model was found by Vinković & Kirman [35] but it was argued by Clark & Fosset [13] that such models do not enhance the understanding of the underlying social dynamics. In contrast, they promote simulation studies via agent-based models where the agents' utility function is inspired by real-world behavior. Schelling's model as an agent-based system can be easily simulated on a computer and many such empirical simulation studies were conducted to investigate the influence of various parameters on the obtained segregation, e.g. see the works by Fossett [17], which use the simulation framework SimSeg [18], Epstein & Axtell [16], Gaylord & d'Andria [21], Pancs & Vriend [30], Singh et al. [34] and Benenson et al. [6].
All these empirical studies consider essentially an induced random process, i.e., that discontent agents are activated at random and active agents then swap or jump to other randomly selected positions. In some frameworks, like SimSeg [18] or the model by Pancs & Vriend [30], agents only change their location if this yields an improvement according to some utility function. This assumption of having rational agents which act strategically matches the behavior of real-world agents which would only move if this improves their situation. This paper sets out to explore the properties of such strategic dynamic processes and the tractability of the induced optimization problems.
Model and Notation
We consider a network G = (V, E), where V is the set of nodes and E is the set of edges, which is connected, unweighted and undirected. The network G serves as the underlying graph modeling the residential area in which the agents will select a location. If every node in G has the same degree ∆, i.e., the same number of incident edges, then we say that G is a ∆-regular graph. Let deg_G(v) be the degree of a node v ∈ V in G and for a given node u ∈ V let Γ_G(u) denote the set of nodes v ≠ u so that an edge {u, v} exists in E. We call Γ_G(u) the neighborhood of u in network G. Let A be the set of agents and P(A) = {T_1, T_2, . . . , T_k} be any partition of A into k non-empty distinct sets, called types, which model racial/ethnic, religious or socio-economic groups. Let t : A → P(A) be a surjective function such that t(a) = T if a ∈ T. We say that agent a is of type t(a). A state of our games is defined by an injective placement p_G : A → V which assigns every agent to a node in the network G and we call p_G(a) agent a's location under placement p_G. Two agents a, b ∈ A are neighbors under placement p_G if p_G(b) ∈ Γ_G(p_G(a)) and we denote the set of neighbors of a under placement p_G as N_{p_G}(a). For any agent a ∈ A, we define N^T_{p_G}(a) = {b ∈ T | b ∈ N_{p_G}(a)} as the set of agents of type T in the neighborhood of agent a under placement p_G.
For any agent a ∈ A in a placement p G , we define agent a's positive neighborhood N + p G (a) as N t(a) p G (a). For agent a's negative neighborhood, we define two different versions, called the one-versus-all and one-versus-one versions. In the one-versus-all version an agent wants a certain fraction of agents of her own type in her neighborhood, regardless of the specific types of neighboring agents with other types, so
N^-_{p_G}(a) = N_{p_G}(a) \ N^+_{p_G}(a). In contrast to this, in the one-versus-one version an agent only compares the number of own-type agents to the number of agents in the largest group of agents with a different type in her neighborhood. Thus, we define the negative neighborhood of an agent a under placement p_G as the set of neighboring agents of the type T ≠ t(a) that makes up the largest proportion among all neighbors, i.e., N^-_{p_G}(a) = N^T_{p_G}(a) such that T ∈ P(A) \ {t(a)} and |N^T_{p_G}(a)| ≥ |N^{T'}_{p_G}(a)| for all T' ∈ P(A) \ {t(a)}.
Notice that the one-versus-all and one-versus-one version coincide for k = 2, thus both versions generalize the two type case. If an agent a has no neighboring agents, i.e., N p G (a) = ∅, we say that a is isolated, otherwise a is un-isolated.
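To illustrate the difference between the two versions, the following sketch computes the positive neighborhood ratio under both definitions of the negative neighborhood. The graph, the placement, the type labels and the helper name are assumptions of this illustrative example.

```python
# Sketch: positive neighborhood ratio under the one-versus-all and the one-versus-one
# definition of the negative neighborhood (illustrative helper, not from the paper).
from collections import Counter
import networkx as nx

def positive_ratio(G, placement, types, agent, version="one-vs-all"):
    occupied = {node: a for a, node in placement.items()}
    nbr_types = [types[occupied[v]] for v in G.neighbors(placement[agent]) if v in occupied]
    if not nbr_types:
        return None                                  # isolated agent
    own = sum(1 for t in nbr_types if t == types[agent])
    others = Counter(t for t in nbr_types if t != types[agent])
    if version == "one-vs-all":
        negative = sum(others.values())              # all agents of other types
    else:
        negative = max(others.values(), default=0)   # only the largest other group
    return own / (own + negative)

G = nx.star_graph(4)                                 # center 0 with leaves 1..4
placement = {"a": 0, "b": 1, "c": 2, "d": 3, "e": 4}
types = {"a": 1, "b": 1, "c": 2, "d": 2, "e": 3}
print(positive_ratio(G, placement, types, "a", "one-vs-all"),   # 1/4
      positive_ratio(G, placement, types, "a", "one-vs-one"))   # 1/3
```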
Let τ ∈ (0, 1) be the intolerance parameter. Similar to Schelling's model we say that an agent a is content with placement p G if agent a is un-isolated and at least a τ -fraction of the agents in agent a's positive and negative neighborhood under p G are in agent a's positive neighborhood.
Hence, agent a is content if she is un-isolated and |N^+_{p_G}(a)| / (|N^+_{p_G}(a)| + |N^-_{p_G}(a)|) ≥ τ; otherwise a is discontent with placement p_G. We call the ratio pnr_{p_G}(a) = |N^+_{p_G}(a)| / (|N^+_{p_G}(a)| + |N^-_{p_G}(a)|) the positive neighborhood ratio of agent a. An agent's aim is to find a node in the given network where she is content or, if this is not possible, where she has the highest possible positive neighborhood ratio. Therefore, and analogous to [11], we define the cost function of an agent a in a placement p_G for network G as follows:
cost_{p_G}(a) = max{0, τ − pnr_{p_G}(a)} if a is un-isolated, and cost_{p_G}(a) = τ if a is isolated.
Thus, agent a is content with placement p G , if and only if cost p G (a) = 0. The placement cost, denoted cost p G (A), of a placement p G in a network G is simply the number of all discontent agents:
cost_{p_G}(A) = |{a ∈ A | cost_{p_G}(a) ≠ 0}|.
The Strategic Games: The strategy space of an agent is the set of all nodes in the network G. An agent can change her strategy either via swapping with another agent who agrees or via jumping to another unoccupied node in the network. This yields the Swap Schelling Game (SSG) and the Jump Schelling Game (JSG). For the SSG we will assume that all nodes of G are occupied. A location swap, or swap, of two agents a, b ∈ A under placement p_G is to exchange the occupied nodes of both agents. This yields a new placement p'_G with p'_G(a) = p_G(b), p'_G(b) = p_G(a) and p'_G(x) = p_G(x) for any other agent x ∈ A \ {a, b}. Two agents a, b ∈ A would only agree to such a swap if it strictly decreases the cost of both agents, i.e., cost_{p'_G}(a) < cost_{p_G}(a) and cost_{p'_G}(b) < cost_{p_G}(b). Hence, swapping agents are always of different types. If for some placement p_G no improving swap exists, then we say that p_G is swap-stable.
In the JSG we assume that there exist empty nodes in the underlying graph and an agent can change her strategy to any currently empty node, which we denote as a jump to that node. An agent will only jump to another empty node, if this strictly decreases her cost. An equilibrium placement in the JSG where no agent can improve via jumping is called jump-stable.
If the game is clear from the context, we will simply say that a placement p G is stable. If we have more than two different agent types we denote the one-versus-all version of the SSG and the JSG as 1-k-SSG and 1-k-JSG, respectively and the one-versus-one version of both games as 1-1-SSG and 1-1-JSG, respectively.
Improving Response Dynamics and Potential Games: We analyze whether improving response dynamics (IRD), i.e., the natural approach for finding equilibrium states where agents sequentially try to change towards better strategies until no agent can further improve, will converge. For showing this we employ ordinal potential functions. Such a function Φ maps placements to real numbers such that if an agent (or a pair of agents) under placement p_G can improve by a jump (or a swap) which results in placement p'_G, then Φ(p_G) > Φ(p'_G) holds. That is, any improving strategy change also decreases the potential function value. The existence of an ordinal potential function shows that a game is a potential game [29], which guarantees the existence of pure equilibria and that IRD must terminate in an equilibrium. In contrast, an improving response cycle (IRC) is a sequence of improving strategy changes which visits the same state of the game twice. The existence of an IRC directly implies that a potential function cannot exist and thus that IRD may not terminate. However, even with existing IRCs it is still possible that from any state of the game there exists a finite sequence of improving strategy changes which leads to an equilibrium. In this case the game is weakly acyclic [36]. Thus, the strongest possible non-convergence result is a proof that a game is not weakly acyclic.
Our Contribution
Our main contribution is a thorough investigation of the convergence behavior of improving response dynamics in variants of Schelling's model. Previous work, including Schelling's original papers and all the mentioned empirical simulation studies, assumes that IRD always converge to an equilibrium. We challenge this basic assumption by precisely mapping the boundary of when IRD are assured to find an equilibrium. We show that IRD behave radically differently in the swap version compared to the jump version. Moreover, we show that this contrasting behavior can even be found within these two variants. We demonstrate the extreme cases of guaranteed IRD convergence, i.e., the existence of an ordinal potential function, and the strongest possible non-convergence result, i.e., that even weak acyclicity is violated. For this, we provide sharp threshold results where for some τ* IRD are guaranteed to converge for τ ≤ τ* and we have non-weak-acyclicity for τ > τ*, depending on the underlying graph. See Table 1.
In case of IRD convergence, we show that this happens after O(|E|) many jumps/swaps on an underlying graph G = (V, E). We show via experiments that instances with randomly chosen initial placements meet this upper bound.
Besides analyzing IRD, we start a discussion about segregation models with more than two agent types. Besides the simple generalization of differentiating only between own type and other types, i.e., the 1-k-SSG and 1-k-JSG, we propose a more natural alternative, called the 1-1-SSG and the 1-1-JSG, where agents compare the type ratios only with the largest subgroup in their neighborhood. The idea here is that a minority group mainly cares about if there is a dominant other group within the neighborhood.
Moreover, we investigate the influence of the underlying graph on the hardness of computing an optimal placement. We show that computing this is NP-hard for arbitrary underlying graphs if τ = 1/2 or if τ is close to 1, with the exact bound depending on the maximum degree of the graph. In contrast to this, we provide an efficient algorithm for computing the optimum placement on a 2-regular graph with two agent types. The number of agent types also has an influence: we establish NP-hardness even on 2-regular graphs if there are sufficiently many agent types.

       1-k-SSG                  1-1-SSG               1-k-JSG               1-1-JSG
reg.   ✓ (Thm.2)                ✓ (Thm.4) τ ≤ 1/∆     ✓ (Thm.7) τ ≤ 2/∆     ✓ (Thm.10) τ ≤ 1/∆
                                o (Thm.5) τ ≥ 6/∆     o (Thm.8) τ > 2/∆     o (Thm.11) τ > 2/∆
arb.   ✓ [11] k = 2, τ ≤ 1/2    × (Thm.6)             × (Thm.9)             × (Thm.12)
       × (Thm.1&3) otherwise

Table 1: Results regarding IRD. "reg." stands for ∆-regular graphs, "arb." for arbitrary graphs, which model the residential area. "✓" denotes that IRD converge to an equilibrium, "o" denotes the existence of an IRC, and "×" denotes that the version is not weakly acyclic. If τ is omitted, the result holds for any 0 < τ < 1.
Schelling Dynamics for the Swap Schelling Game
In this section we analyze the convergence behavior of IRD for the strategic segregation process via swaps. Chauhan et al. [11] already proved initial results in this direction, in particular that the SSG for two types of agents converges for the whole range of τ, i.e., τ ∈ (0, 1), on ∆-regular graphs and for τ ≤ 1/2 on arbitrary graphs. We close the gap and present a matching non-convergence bound for the SSG on arbitrary graphs.
The 1-k-variant seems to be a straightforward generalization of the two-type case. An agent simply compares the number of neighbors of her type with the total number of neighbors. Interestingly, our IRD convergence results for the 1-k-SSG with k > 2 on arbitrary networks with τ ≤ 1/2 are in sharp contrast to the results for k = 2: on arbitrary networks with tolerant agents, i.e., with τ ≤ 1/2, and k > 2 types, IRD convergence is no longer guaranteed. For the 1-1-variant an agent compares the number of neighboring agents of her type with the size of the largest group of agents of a different type in her neighborhood. This captures the realistic setting where agents simply try to avoid being in a neighborhood where another group of agents dominates. We will show that even on a ∆-regular network an improving response cycle exists for the 1-1-SSG for sufficiently high τ.
IRD Convergence for the One-versus-All Version
For SSGs with k = 2 on regular networks and on arbitrary networks with τ ≤ 1/2, the existence of a potential function was shown before in [11]. We show that this bound is tight, i.e., that for τ > 1/2 IRD may not converge.
Theorem 1. IRD are not guaranteed to converge in the SSG with k = 2 for τ ∈ (1/2, 1) on arbitrary networks. Moreover, weak acyclicity is violated.
Proof. We prove the statement by providing an improving response cycle where in every step exactly one improving swap is possible. The construction is shown in Fig. 2 and we assume that x is sufficiently large, e.g., x = max{1/(τ − 0.5), 1/(2 − 2τ)}.
We have orange agents of type T_1 and blue agents of type T_2. The orange agents in the groups u_i and the blue agents in the groups v_i, respectively, with 1 ≤ i ≤ 4, are interconnected and form a clique.
Figure 2: The improving response cycle for τ ∈ (1/2, 1). The agent types are marked orange and blue. Multiple nodes in series represent a clique of nodes of the stated size. Edges between cliques or between a clique and single nodes represent that all involved nodes are completely interconnected. (a) Initial placement; (b) placement after the first swap; (c) placement after the second swap; (d) placement after the third swap.
During the whole cycle the agents in u i and v i , respectively, are content. An orange agent z ∈ u i has 4x neighbors and at most one neighbor is blue. Hence, the positive neighborhood ratio of agent z is larger than τ . The same applies for a blue agent y ∈ v i . The agent y has 4x − 3 neighbors and at most one neighbor is orange. Therefore, an agent z ∈ u i and an agent y ∈ v i , respectively, never have an incentive to swap their position with another agent, since they are content.
In the initial placement ( Fig. 2(a)), both agents a and d are discontent. By swapping their positions, agent a can decrease her cost from τ − 1 3 to τ − x−1 2x and agent d decreases her cost from τ − x+1 2x to max 0, τ − 2 3 . This is the only possible swap since neither b nor c have the opportunity to improve their costs via swapping with c, d, and a, b, respectively. However, after the first swap ( Fig. 2(b)) agent a is still not content. Swapping with agent c decreases agent a's cost to τ − 2x−1 4x , and agent c can decrease her cost from τ − 2x+1 4x to τ − x+1 2x . Again, no other swap is possible since agent b would increase her cost by swapping with agent c or d. After this (Fig. 2(c)), agent b and d have the opportunity to swap and decrease their cost from τ − x+1 2x to max 0, τ − 2 3 and τ − 1 3 to τ − x−1 2x , respectively. Once more there is no other valid swap. Agent a does not want to swap with agent d and agent b not with agent c. Finally (Fig. 2(d)), agent a and d swap and both agents decrease their costs to τ − 1 2 . Neither does agent b want to swap with agent c nor can agent c improve by swapping with agent a. After the fourth step the obtained placement is equivalent to the initial placement ( Fig. 2(a)), only the blue agents a and b, and the orange agents c and d, respectively, have exchanged positions.
Since all the executed swaps were the only possible strategy changes, this proves that the SSG is not weakly acyclic, since, starting with the given initial placement, there is no possibility to reach a stable placement via improving swaps.
We now generalize the results from [11] by showing that convergence is guaranteed for the 1-k-SSG for any k ≥ 2.
Theorem 2. IRD are guaranteed to converge in O(|E|) moves for the 1-k-SSG with τ ∈ (0, 1) on any ∆-regular network G = (V, E).
Proof. We show that Φ(p_G) = (1/2) Σ_{a∈A} |N^-_{p_G}(a)| is an ordinal potential function. An agent a has no incentive to swap if she is content, and she will never swap with an agent of her own type, since this cannot be an improvement for both agents. Therefore, there will only be swaps between discontent agents of different types. Since we consider a ∆-regular network, we have |N_{p_G}(a)| = |N^+_{p_G}(a)| + |N^-_{p_G}(a)| = ∆ for all a ∈ A. A swap between two agents a and b changes the current placement p_G only in the locations of the involved agents and yields a new placement p'_G. Since a swap is an improvement for the agent a who swaps, it holds that
|N^+_{p_G}(a)| / ∆ < |N^+_{p'_G}(a)| / ∆.
The same is true for the other agent b. Thus the following holds for agent a (and agent b likewise):
|N^+_{p_G}(a)| < |N^+_{p'_G}(a)| ⟺ ∆ − |N^-_{p_G}(a)| < ∆ − |N^-_{p'_G}(a)| ⟺ |N^-_{p'_G}(a)| < |N^-_{p_G}(a)|.
It follows that Φ(p_G) − Φ(p'_G) > 0, and therefore the potential function value decreases whenever two agents swap their current positions. Since Φ(p_G) ≤ m, where m is the number of edges in the underlying network, and Φ(p_G) decreases by at least 1 after every swap, the IRD find an equilibrium in O(m) swaps.
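For intuition, the potential Φ from this proof is simply the number of edges whose endpoints host agents of different types. The fragment below computes it and can be used to check numerically that it drops after every improving swap on ∆-regular instances; it reuses the helpers from the earlier sketches and is a sanity check only, not part of the proof. On non-regular graphs the check may fail, in line with the non-convergence results above.

```python
def conflict_potential(placement, adj):
    """Phi(p_G) = (1/2) * sum over agents of |N^-(a)| = number of bichromatic edges."""
    return sum(1 for u in placement for v in adj[u]
               if v in placement and placement[v] != placement[u]) // 2

def check_swap_decreases_potential(u, v, placement, adj, tau):
    """If (u, v) is an improving swap, the potential must strictly decrease."""
    if not improving_swap(u, v, placement, adj, tau):
        return True
    after = dict(placement)
    after[u], after[v] = placement[v], placement[u]
    return conflict_potential(after, adj) < conflict_potential(placement, adj)
```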
We contrast the above result by showing that guaranteed IRD convergence is impossible for any τ on arbitrary networks. This emphasizes the influence of the number of agent types on the convergence behavior of the IRD.
Theorem 3. IRD are not guaranteed to converge in the 1-k-SSG with k > 2 for τ ∈ (0, 1) on arbitrary networks. Moreover, weak acyclicity is violated.
Proof. We give an example of an improving response cycle, where in every step exactly one improving swap exists, for any τ ≤ 0.5. Together with the improving response cycle given in Theorem 1 for τ > 0.5 this yields the statement.
Consider Fig. 3 with a sufficiently high x, e.g., x > 3/(4τ) − 1, and τ ≤ 0.5. We have orange agents of type T_1, blue agents of type T_2 and gray agents of type T_3. The agents in one group u_i and v_j, respectively, with 1 ≤ i ≤ 4 and 1 ≤ j ≤ 2, are interconnected and form a clique.
During the whole cycle the agents in u i and v j , respectively, are content. An agent in u i ∪ v j has at most two neighboring agents of different type and at least two agents of her type. Since τ ≤ 0.5 these agents are content. Therefore they have no incentive to swap. In the initial placement ( Fig. 3(a)), agents a and d are discontent and want to swap. Agent a decreases her cost from τ to τ − 1 4(x+1) while agent d becomes content after the swap. This is the only possible swap. Agent c does not want to swap with agent a or b since she increase her cost, as well agent b cannot improve by swapping with c or d. Then (Fig. 3(b)), agent a is still discontent and willing to swap her position with another agent. Swapping with agent c decreases her cost to τ − 3 8(x+1) while c can improve from τ − 5 8(x+1) to τ − 3 4(x+1) . Again, this is the only possible swap, since d is content and c still doesn't want to swap with b. After this (Fig. 3(c)), agent d has no neighbor of her type, so she swaps with agent b who becomes content. Agent d reduces her cost from τ to τ − 1 4(x+1) . Agent a does not want to swap with d and agent b not with c since both a and b would have no agent of their own type in their neighborhood. Finally (Fig. 3(d)), agents a and d want to swap. Agent d decreases her cost to τ − 1 2(x+1) and agent a decreases her cost from τ − 3 8(x+1) to τ − 1 2(x+1) . No other two agents have any incentive to swap their position, since neither agent c nor d want to swap with agent b since they would not have a neighboring agent of their type. For the same reason agent a is not interested in swapping with c. The resulting placement is equivalent to the initial one, only the blue agents a and b and the orange agents c and d exchanged positions.
Since all of the executed swaps were the only ones possible, this shows that the 1-k-SSG is not weakly acyclic, as there is no possibility to reach a stable placement via improving swaps.
Figure 3: The construction used in the proof of Theorem 3. Edges between cliques or between a clique and single nodes represent that all involved nodes are completely interconnected.
IRD Convergence for the One-versus-One Version
Recall that in the 1-1-SSG and 1-1-JSG, respectively, an agent only considers the largest group of neighboring agents of a type different from her own. We start with a simple positive result for the 1-1-SSG.
Theorem 4. IRD are guaranteed to converge after at most |A| swaps in the 1-1-SSG with τ ≤ 1/∆ on any ∆-regular network.
Proof. Any agent a of type T who has a neighbor b of the same type is content, since τ ≤ 1/∆. Since b has a as a neighbor, b will also be content. Since both agents are content, neither of them will consider swapping positions, and therefore both will remain content.
Any agent a who is discontent cannot have a neighbor of the same type, as otherwise a would be content. The cost of a must be τ in this case. Since a only considers a swap that decreases her cost, after swapping the cost of a can be at most max(0, τ − 1/∆), which means a is content and will remain so, as we showed before.
Since agents are content at least after their first swap, and agents that are content will never swap again, each agent will participate in at most one swap. Therefore, the game converges after at most |A| swaps.
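For completeness, the one-versus-one ratio used in this subsection differs from the 1-k ratio of the earlier sketch only in the denominator; a possible implementation (our own naming) is:

```python
from collections import Counter

def positive_neighborhood_ratio_1_1(node, placement, adj):
    """1-1 ratio: own group versus the largest other group in the neighborhood."""
    own = placement[node]
    groups = Counter(placement[v] for v in adj[node] if v in placement)
    same = groups.pop(own, 0)
    largest_other = max(groups.values(), default=0)
    return 0.0 if same + largest_other == 0 else same / (same + largest_other)
```

With this ratio, an agent with at least one same-type neighbor on a ∆-regular network always has a ratio of at least 1/∆, which is exactly the property exploited in the proof above.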
If τ is high enough, then the 1-1-SSG is no longer a potential game.
Theorem 5. IRD are not guaranteed to converge in the 1-1-SSG for τ ≥ 6/∆ on ∆-regular networks.
Proof. We use a similar instance as in the proof of Theorem 6. Consider Fig. 4 with x > 5(1 − τ)/(6τ). We omit the edges between the cliques u_1, u_2 and u_3 of gray agents. Now, the highest degree in the graph is 6(x + 1). In order to make the graph regular, we insert new nodes filled with agents such that each new agent is the only agent of its type, and connect these new nodes with existing nodes and each other as needed.
In the initial placement (Fig. 4(a)) agents a and d are discontent and want to swap. Agent a decreases her cost from τ to τ − 1/(3x + 1), while agent d either becomes content after the swap or, if τ > 1/2, has cost τ − 1/2. Then (Fig. 4(b)), agent a is still discontent. Swapping with agent c decreases her cost to τ − 2/(4x + 2), while agent c can improve from τ − 2/(4x + 2) to τ − 2/(3x + 2). In the next step (Fig. 4(c)), agent d has no neighboring agent of her type. Therefore she swaps with agent b, who becomes content as a result of the swap if τ ≤ 1/2, or otherwise has cost equal to τ − 1/2. Agent d reduces her cost from τ to τ − 1/(6x + 1). Finally (Fig. 4(d)) agent a and agent d want to swap. Agent d has the possibility to decrease her cost to τ − 1/(4x + 1) and agent a can decrease her own cost from τ − 3/(4x + 3) to τ − 5/(6x + 5). From x > 5(1 − τ)/(6τ) as our only limitation and ∆ = 6(x + 1) we obtain τ ≥ 6/∆, where equality is reached if x is chosen as low as possible.
The situation is much worse on arbitrary graphs as the following theorem shows.
Theorem 6. IRD are not guaranteed to converge in the 1-1-SSG for τ ∈ (0, 1) on arbitrary networks. Moreover, weak acyclicity is violated.
Proof. We show the statement by giving an example of an improving response cycle where in every step exactly one improving swap exists. Consider Fig. 4 with x > max{5(1 − τ)/(6τ), τ/(1 − τ)}.
We have orange agents of type T 1 , blue agents of type T 2 and gray agents of type T 3 . The agents in one group u i and v i , respectively, with i ∈ {1, 2, 3, 4, 5} are interconnected and form a clique.
During the whole cycle the agents in u i and v i , respectively, are content. Agent v 2 has at most 2 neighbors of any type other than T 1 and at least 3x neighbors of her own type. All the other agents in u i ∪ v i have at most one neighbor of another type and at least x neighboring agents of their own type. Therefore the positive neighborhood ratio pnr of an agent z ∈ u i ∪ v i is larger than τ for x > 1 and z has no incentive to swap. In the initial placement ( Fig. 4(a)) agent a and d are discontent and want to swap. Agent a decreases her cost from τ to τ − 1 3x+1 while agent d becomes content after the swap. This is the only possible swap. Agent c does not want to swap with agent a or b since she would be worse off and agent b cannot improve by swapping with d. Then (Fig. 4(b)), agent a is still discontent. Swapping with agent c decreases her cost to τ − 2 4x+2 while agent c can improve from τ − 2 4x+2 to τ − 2 3x+2 . Again, this is the only possible swap, since d is content and c would not improve by swapping with agent b. In the next step (Fig. 4(c)), agent d has no neighboring agent of her type. Therefore she swaps with agent b who becomes content as a result of the swap. Agent d reduces her cost from τ to τ − 1 6x+1 . Agent a does not want to swap with d since at the new position she wouldn't have a neighboring agent of her own type and agent b not with c since this wouldn't be an improvement for b. Finally (Fig. 4(d)) agent a and agent d want to swap. Agent d has the possibility to decrease her cost to τ − 1 4x+1 and agent a can decrease her own cost from τ − 3 4x+3 to τ − 5 6x+5 . No other two agents have the incentive to swap their position, since agent c does not want to swap with agent a or b. We end up in a placement which is equivalent to the initial one, only the blue agents a and b and the orange agents c and d exchanged positions.
Since all swaps were the only ones possible, this shows that the 1-1-SSG is not weakly acyclic as there is no possibility to reach a stable placement.
Schelling Dynamics for the Jump Schelling Game
We now analyze the convergence behavior of IRD for the strategic segregation process via jumps. Chauhan et al. [11] proved that the JSG converges for τ ∈ (0, 1) on 2-regular graphs. Furthermore, they showed that there exists an IRC for τ between 1/3 and 2/3 on an 8-regular grid if the agents have a favorite location, i.e., a node to which an agent wants to be as close as possible without increasing her costs. In particular, such a favorite location is necessary for their IRC. We show that convergence is not guaranteed even without a favorite location on arbitrary graphs, and we establish a sharp threshold of τ = 2/∆ for ∆-regular graphs. We first turn our focus to the 1-k-JSG, where an agent only distinguishes between her own and other types. Hence, an agent simply compares the number of neighbors of her type with the total number of neighbors.
Figure 4: The construction used in the proofs of Theorems 5 and 6; it yields an IRC for any τ ∈ (0, 1). Agent types are marked orange, blue and gray. Multiple nodes in series represent a clique of nodes of the stated size. Edges between cliques or between a clique and single nodes represent that all involved nodes are completely interconnected.
IRD Convergence for the One-versus-All Version
In [11], the existence of an ordinal potential function was shown only for the JSG on 2-regular graphs. In contrast, we prove a sharp threshold result, with the threshold being at τ = 2/∆, for the convergence of IRD for the 1-k-JSG on ∆-regular graphs, for any ∆ ≥ 2. Moreover, we show that the game is not weakly acyclic on arbitrary graphs.
Theorem 7. IRD are guaranteed to converge in O(|E|) moves in the 1-k-JSG with τ ≤ 2/∆ on any ∆-regular network G = (V, E).
Proof. For any ∆-regular network G we define the weight w_{p_G}(e) of any edge e = {u, v} ∈ E as
w_{p_G}(e) = 1 if u and v are occupied by agents of different types under p_G; w_{p_G}(e) = c if either u or v, but not both, is empty under p_G; and w_{p_G}(e) = 0 otherwise, with 1/2 − 1/(2∆) < c < 1/2.
We prove that Φ(p_G) = Σ_{e∈E} w_{p_G}(e) is an ordinal potential function. Note that τ is sufficiently small, so that an agent becomes content if she has two neighbors of her type. Therefore, an agent who is willing to jump to another node has at most one neighbor of the same type. Without loss of generality, we assume the existence of a discontent agent y for placement p_G. Let p'_G be the placement that results from a jump of y. Let a = |N^+_{p_G}(y)|, b = |N^-_{p_G}(y)|, and let ε be the number of empty nodes in the neighborhood of p_G(y). Let a' = |N^+_{p'_G}(y)| and b' = |N^-_{p'_G}(y)| be the numbers of agents of the same and of a different type, respectively, and let ε' be the number of empty nodes in the neighborhood of p'_G(y). We will show that if an agent jumps, it holds that
Φ(p_G) − Φ(p'_G) = (0·a + 1·b + c·ε + c·a' + c·b' + 0·ε') − (c·a + c·b + 0·ε + 0·a' + 1·b' + c·ε') = −ca + (1 − c)b + cε + ca' + (c − 1)b' − cε' > 0,
and therefore Φ decreases for every improving jump of an agent.
There is no incentive for agent y to decrease the number of neighbors of the same type, because decreasing this number would mean that either a ≥ 2, i.e., agent y is content and does not want to jump, or a = 1 and therefore a' = 0, which is never an improvement. Hence, we have to distinguish between two cases:
If a < a', then agent y increases the number of neighbors of the same type. Since we consider a ∆-regular network, we have a + b + ε = ∆ and a' + b' + ε' = ∆, so b = ∆ − a − ε and b' = ∆ − a' − ε'. Hence,
−ca + (1 − c)b + cε + ca' + (c − 1)b' − cε' = −ca + (1 − c)(∆ − a − ε) + cε + ca' + (c − 1)(∆ − a' − ε') − cε' = (2c − 1)ε + (1 − 2c)ε' − a + a' ≥ (2c − 1)ε − a + a',
since 1 − 2c > 0 and ε' ≥ 0. If ε = 0, we obtain (2c − 1)ε − a + a' = −a + a' > 0. If ε > 0, we have (2c − 1)ε − a + a' > (2(1/2 − 1/(2∆)) − 1)ε − a + a' = −ε/∆ − a + a' ≥ 0, since ε/∆ ≤ 1 ≤ a' − a.
If a = a', then the number of same-type neighbors of agent y stays the same. Since y improves her positive neighborhood ratio and since a = a', the number of different-type neighbors of y has to decrease, and therefore b' < b. We denote the difference by δ with b = b' + δ; hence δ > 0. Since we consider a ∆-regular network, it follows that ε' = ε + δ. Hence,
−ca + (1 − c)b + cε + ca' + (c − 1)b' − cε' = −ca + (1 − c)(b' + δ) + cε + ca' + (c − 1)b' − c(ε + δ) = −ca + ca' + (1 − c)δ − cδ = (1 − 2c)δ > 0,
where the last equality holds since a = a'.
Since Φ(p_G) ≤ m, where m is the number of edges in the underlying graph, and Φ(p_G) decreases after every jump by at least 1 − 2c, the IRD find an equilibrium in O(m) jumps.
Actually, Theorem 7 is tight and convergence is not guaranteed if τ > 2/∆.
Theorem 8. The 1-k-JSG for τ > 2/∆ on ∆-regular graphs is not a potential game.
Proof. We prove the statement by providing an improving response cycle; see Fig. 5. If we have more than two different agent types, all agents of types other than T_1 and T_2 can be placed outside of the neighborhood of the agents a, b and c who are involved in the IRC. Let τ > 2/∆. In the initial placement, agent a is discontent and has cost τ − 2/∆. By jumping next to agent c she becomes content. Because of this jump, agent b becomes isolated. Jumping next to the agents d and y decreases her cost from τ to τ − 1/(∆ − 1). After the second step, the obtained placement is equivalent to the initial placement; only agents a, b, and c changed their roles. Hence, the next two jumps from agents c and a are like the first two: first, agent c jumps next to agent b to become content, then agent a jumps next to the agents c and z to avoid an isolated position. We end up in a placement equivalent to the initial one.
If the underlying network is an arbitrary network the situation is worse.
Theorem 9. IRD are not guaranteed to converge in the 1-k-JSG for τ ∈ (0, 1) on arbitrary networks. Moreover, weak acyclicity is violated.
Figure 5: An IRC for the JSG for τ > 2/∆ on a ∆-regular network. Empty nodes are white, agents of type T_1 are orange, type T_2 agents are blue. Multiple nodes in series represent a clique of ∆ − 2 nodes. An edge between a clique and a single node denotes that each clique node is connected to that single node. An edge between two cliques represents that each clique node has exactly one neighbor in the other clique. With this, the network is indeed ∆-regular: each node is connected to all nodes of exactly one group of size ∆ − 2 and to two other nodes.
Figure 6: The construction used in the proof of Theorem 9; it yields an IRC for any τ ∈ (0, 1). Agents of type T_1 are orange, type T_2 agents are blue. Multiple nodes in a series represent a clique of nodes of the stated size. Edges between cliques or between a clique and single nodes represent that all involved nodes are completely interconnected.
Proof. We show the statement by giving an example of an improving response cycle where in every step exactly one agent has exactly one improving jump. Consider Fig. 6. We assume that x is sufficiently high, e.g., x > max{2/τ, 1/(1 − τ)}. If we have more than two different types of agents, all agents of types other than T_1 and T_2 can be placed in cliques outside of the neighborhood of all of the agents involved in the IRC. If these cliques are placed inside network components which are neither connected to the IRC nodes nor to each other, the agents of these types will never become discontent. Hence, the jumps of the given IRC are the only ones possible. In the construction we have four orange agents, a, b, c, d, of type T_1, 2x + 1 blue agents in the sets u and v and f of type T_2, and one white empty node. All nodes which are occupied by the blue agents are interconnected and form a clique.
During the whole cycle, all blue agents are content. A blue agent z ∈ T_2 has 2x + 2 neighbors, of whom at least 2x are of the same type. Hence, the positive neighborhood ratio of agent z is larger than τ and she has no incentive to jump to another currently empty node. Also the orange agent d remains content during the entire cycle, since she is never isolated and never has a neighboring agent of a different type. In the initial placement (Fig. 6(a)), the orange agent a is discontent, since her only neighboring agent f is blue. Therefore, a jumps to the empty node. Agent b and, depending on the value of τ, agent c are discontent. However, jumping to the empty node next to agent d is not an improvement for them. Now (Fig. 6(b)) agent b is discontent, since x is chosen sufficiently high that the positive neighborhood ratio of b is smaller than τ. Hence, jumping to the empty node next to agent a improves the cost of b from τ − 2/(x + 2) to max(0, τ − 0.5). Again, this is the only valid jump, since agent c would still have exactly one blue agent and one orange agent in her neighborhood by jumping next to agent a. Two further jumps (Fig. 6(c) and 6(d)) by agents c and a, which are equivalent to those shown in Fig. 6(a) and Fig. 6(b), restore the initial placement.
Since all executed jumps were the only ones possible, this shows that the JSG is not weakly acyclic as there is no possibility to reach a stable placement via improving jumps.
IRD Convergence for the One-versus-One Version
Now we turn to the 1-1-JSG. By using the same proof as in Theorem 4 with jumps instead of swaps we get the following positive result.
Theorem 10. IRD are guaranteed to converge after at most |A| jumps in the 1-1-JSG with τ ≤ 1/∆ on any ∆-regular network.
The same IRC which proves Theorem 8 for the 1-k-JSG yields the next result.
Theorem 11. IRD may not converge in the 1-1-JSG for τ > 2/∆ on ∆-regular graphs.
Finally the proof of Theorem 9 works for the following result as well.
Theorem 12. IRD are not guaranteed to converge in the 1-1-JSG for τ ∈ (0, 1) on arbitrary networks. Moreover, weak acyclicity is violated.
Computational Hardness of Finding Optimal Placements
Here, we investigate the computational hardness of computing an optimal placement, i.e., a placement where as many agents as possible are content.
Hardness Properties for Two Types
We start with two types of agents and show that finding an optimal placement for the SSG in an arbitrary network G is NP-hard by giving a reduction from the Balanced Satisfactory Problem (BSP), which was introduced in [22,23] and proven to be NP-hard in [4]. This result directly implies that finding an optimal placement for the JSG with no empty nodes is NP-hard as well.
Theorem 13. Finding an optimal placement of agents for the two-type SSG in a network G is NP-hard for τ = 1/2.
Proof. We prove the statement by giving a reduction from the BSP. We are given a network G = (V, E) with an even number of nodes. For v ∈ V and V' ⊆ V, we denote by deg_{V'}(v) the number of nodes in V' which are adjacent to v. A balanced satisfactory partition exists if there is a non-trivial partition V_1, V_2 of the nodes V with V_1 ∪ V_2 = V, V_1 ∩ V_2 = ∅ and |V_1| = |V_2| such that each node v ∈ V_i with i ∈ {1, 2} satisfies deg_{V_i}(v) ≥ deg_G(v)/2, i.e., each node has at least as many neighbors in its own part as in the other. If such a partition exists, we can find it by computing an optimal placement p*_G in the network G for two different types of agents, each of size |V|/2, and τ = 1/2. The cost of a placement p_G is the number of discontent agents. Obviously, a placement p_G without discontent agents, and thus with placement cost cost_{p_G}(A) = 0, is optimal. For a content agent a ∈ A we have pnr_{p_G}(a) ≥ 1/2 = τ and thus, if there are no empty nodes, we know |N^+_{p_G}(a)| ≥ deg_G(p_G(a))/2. If we have a placement where all agents are content, we can gather all nodes which are occupied by agents of type T_1 into the subset V_1 and all nodes which are occupied by agents of type T_2 into the subset V_2. It then holds for every a ∈ A that deg_{V_i}(p*_G(a)) = |N^+_{p*_G}(a)| ≥ deg_G(p*_G(a))/2. Hence, calculating an optimal placement must be NP-hard.
The above proof relies on the fact that there are no empty nodes. The computational hardness of the JSG changes if many empty nodes exist. Obviously, it is easy to find an optimal placement if there are enough empty nodes to separate both types of agents completely and a suitable separator is known. Mapping the boundary for the transition from NP-hardness to efficient computation is a challenging question for future work.
Next we show that finding an optimal placement is hard for high τ via a reduction from Minimum Cut Into Equal Size (MCIES) which was proven to be NP-hard in [19].
Theorem 14. Finding an optimal placement in the SSG on an arbitrary network G = (V, E) with maximum node degree ∆_G = max{deg_G(v) | v ∈ V} is NP-hard for τ > 3∆_G/(3∆_G + 1).
Proof. We prove the statement by giving a reduction from MCIES. We are given a network G = (V, E) and an integer W ∈ N. MCIES is the decision problem whether there is a non-trivial partition V_1, V_2 with V_1 ∪ V_2 = V, V_1 ∩ V_2 = ∅ and |V_1| = |V_2| such that |{{v_1, v_2} ∈ E | v_1 ∈ V_1, v_2 ∈ V_2}| ≤ W, i.e., there are at most W edges between the two parts.
Let ∆_G = max{deg_G(v) | v ∈ V} be the maximum node degree in G. We create a network G' = (V', E') in which every node v ∈ V is replaced by a clique C_v in G' of size 3∆_G + 1. Each edge {u, v} ∈ E is replaced by an edge {u', v'} between two nodes u' ∈ C_u and v' ∈ C_v such that each node in G' has at most one neighbor outside its clique. Therefore, the degree of nodes in G' is either 3∆_G or 3∆_G + 1, and so the maximum node degree ∆_{G'} in G' is 3∆_G + 1. We have two different agent types, each consisting of |V'|/2 agents. Let τ > (∆_{G'} − 1)/∆_{G'} = 3∆_G/(3∆_G + 1).
Because of this, an agent is content in G if she has no neighbors of a different type. For a placement p G to be optimal, all cliques C have to be uniform, i.e. assign agents of the same type to each node in C. Otherwise another non-uniform clique C has to exist and we can re-assign the agents in both cliques in a placement p G to make C uniform. In p G all agent of both cliques are discontent, while in p G at least 2∆ G + 1 agents in C that have no neighbors outside C are content. Since each clique is only connected to at most ∆ G other nodes, at most 2∆ G agents are discontent in p G that were content in p G . Therefore, p G would not be optimal.
If we have an optimal placement with 2W discontent agents, we can gather all v ∈ V where C v is occupied by agents of type T 1 into V 1 , and similarly into V 2 for T 2 . We then have W edges between the two sets V 1 and V 2 . Hence, a placement with 2W discontent agents correspond to an MCIES with W = W edges between the partitions and vice versa.
For the above theorems, we used a placement cost function which counts the number of discontent agents. However, we remark that even if we change this definition into summing up the cost of all agents, i.e., cost_{p_G}(A) = Σ_{a∈A} cost_{p_G}(a), like social cost, the above hardness results still hold. This relates to the hardness results from Elkind et al. [15] which hold for the JSG with τ = 1 in the presence of stubborn agents which are unwilling to move.
We contrast the above results by providing an efficient algorithm for computing an optimal placement for the SSG and the JSG on a 2-regular network with two different agent types by employing a well-known dynamic programming algorithm for Subset Sum [14,20].
Theorem 15. Finding an optimal placement of agents of two types in the SSG on a 2-regular network with n nodes can be done in O(n^2) time for τ > 1/2.
Proof. Let G = (V, E) be a 2-regular network, consisting of m rings, where ring i has r_i nodes. We are given a partition of the agents P(A) = {T_1, T_2} with |T_1| = n_1 and |T_2| = n_2.
For finding a placement that minimizes cost_{p_G}(A), we take the multiset r_1, . . . , r_m as elements and n_1 as the target sum and treat this as an instance of Subset Sum, which we can solve in O(n^2) time since n_1 ≤ n.
In case of a Yes-instance, we can place the agents of type T 1 on the rings indicated by the selected elements. Thus no agents of different types are on the same ring. If the instance is a No-instance, then in the optimal placement there is exactly one ring with agents of different type. This implies that at least 3 and at most 4 agents are discontent. To check if an optimal placement with 3 discontent agent is possible, we solve the Subset Sum instance with target sum n 1 + 1. If this is possible, then we place the n 1 agents on the respective rings such that exactly one node is empty. Then all empty nodes are filled with type T 2 agents. If the instance with target sum n 1 +1 is a No-instance, we greedily fill the rings with consecutive type T 1 agents such that we get one ring with empty spots. Then we fill all the empty spots with type T 2 agents to obtain exactly 4 discontent agents.
Optimal placements for the JSG can be found with an analogous algorithm.
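The subset-sum step of this algorithm can be sketched as follows: a standard dynamic program over the ring sizes runs in O(m·n_1) ⊆ O(n^2) time and directly yields the case distinction from the proof (0, 3 or 4 discontent agents). Names and the exact interface are our own choice, not the paper's.

```python
def rings_summing_to(ring_sizes, target):
    """Indices of rings whose sizes sum to `target`, or None (subset-sum DP)."""
    reachable = {0: []}                     # achievable sum -> witness ring indices
    for i, r in enumerate(ring_sizes):
        for s, picked in list(reachable.items()):
            if s + r <= target and s + r not in reachable:
                reachable[s + r] = picked + [i]
    return reachable.get(target)

def min_discontent_on_rings(ring_sizes, n1):
    """Minimum number of discontent agents for two types on a union of rings, tau > 1/2."""
    if rings_summing_to(ring_sizes, n1) is not None:
        return 0        # the types can be separated onto disjoint rings
    if rings_summing_to(ring_sizes, n1 + 1) is not None:
        return 3        # exactly one mixed ring with a single foreign agent
    return 4            # otherwise one mixed ring with two type boundaries
```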
Hardness Properties for More Types
Compared to the previous subsection we now show that also the number of different agent types has an influence on the computational hardness of finding an optimal placement. We establish NP-hardness even on 2-regular networks if there are sufficiently many agent types by giving a reduction from 3-Partition which was proven to be NP-hard in [20].
Theorem 16. Finding an optimal placement of agents of an arbitrary number of types in the 1-1-SSG and 1-k-SSG on a 2-regular network with τ > 1/2 is NP-hard.
Proof. We prove the statement by giving a polynomial time reduction from 3-Partition. Given a multiset S of 3k positive integers, 3-Partition asks whether S can be partitioned into k disjoint sets S_i with i ∈ {1, . . . , k} of size three, such that the sum of the numbers in each subset is equal, i.e., Σ_{s_i∈S_1} s_i = Σ_{s_i∈S_2} s_i = · · · = Σ_{s_i∈S_k} s_i. As is standard for 3-Partition, we additionally assume that n/(4k) < s_i < n/(2k) holds for all s_i ∈ S, where n = Σ_{s_i∈S} s_i. Based on a 3-Partition instance, we generate a 2-regular graph containing a ring with s_i nodes for each s_i ∈ S. Thus our graph has n = Σ_{s_i∈S} s_i nodes in total. We can assume s_i ≥ 3 for all s_i ∈ S, since adding a constant to all elements does not change the existence of a solution. We now take a set of n agents A partitioned into types P(A) = {T_1, . . . , T_k}. Each type consists of n/k agents. Assume we find an optimal placement with cost_{p_G}(A) = 0 for τ > 1/2. This means that there is no ring that contains agents of different types, since an agent is discontent if she has a neighboring agent of a different type. Thus, we have a disjoint partitioning of the rings such that the number of nodes in each part adds up to n/k = (Σ_{s_i∈S} s_i)/k. We also assumed that n/(4k) < s_i < n/(2k), thus all agents of a type T_i have to be placed on exactly three rings. This directly implies a solution for the 3-Partition instance. If the corresponding 3-Partition instance has a solution S_1, . . . , S_k, this produces a partitioning of the rings such that each part contains Σ_{s_i∈S_j} s_i = n/k nodes. Placing the agent types according to this partitioning will not produce any ring with agents of different types on it. Such a placement has cost_{p_G}(A) = 0, which has to be optimal.
Since our reduction can be done in polynomial time for unary encoded instances of 3-Partition, this proves the NP-hardness of finding an optimal placement.
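The graph part of this reduction is easy to generate explicitly; the sketch below builds the 2-regular instance, one cycle per integer of the (unary-encoded) 3-Partition instance, under the assumption s_i ≥ 3 made in the proof. It only constructs the graph and does not solve the placement problem.

```python
def rings_from_3partition(S):
    """Adjacency dict of the 2-regular graph with one cycle of s_i nodes per s_i in S."""
    adj, base = {}, 0
    for s in S:                                   # assumes every s_i >= 3
        for j in range(s):
            adj[base + j] = {base + (j + 1) % s, base + (j - 1) % s}
        base += s
    return adj
```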
To conclude the section on computational hardness, we want to emphasize that settling the question of whether finding an optimal placement is easy or hard does not allow us to make equivalent statements for computing stable placements. The following example illustrates the rather counter-intuitive fact that an optimal placement is not necessarily stable.
Figure 7: A network where the optimal placement p*_G is not in equilibrium for τ > 0.9. Multiple nodes in series represent a clique of nodes of the stated size. Edges between cliques or between a clique and single nodes represent that all involved nodes are completely interconnected.
Theorem 17. For the SSG with two different types of agents there is a network G where no optimal placement is stable.
Proof. We prove the statement by giving an example. Consider Fig. 7. The pictured network has two cliques u i and v i with 1 ≤ i ≤ 3 of size ten. Let τ > 0.9. The placement p * G depicted in Fig. 7a has cost p * G (A) = 7, and the placement p G in Fig. 7b has cost p G (A) = 8. The former is optimal since every placement p G other than the given two has to place agents of different types in at least one of the cliques. This would cause all agents in the clique to become discontent and thus yield cost p G (A) ≥ 10. However, the agents a and b want to swap in placement p * G . Hence, the unique optimal placement p * G is not stable.
Simulation
As a final aspect, we enrich our theoretical results with empirical results for the versions where IRD convergence is guaranteed. We find that for the versions with two agent types the IRD starting from uniformly random placements produce an equilibrium in c · m steps, where c is a positive constant and m is the number of edges in the underlying network. See Fig. 8. This meets our upper bound of O(m). Interestingly, IRD convergence is faster on random 8-regular graphs than on 8-regular toroidal grids. This hints that geometry may influence the convergence speed. The details of the simulation can be found in the appendix.
Simulation Set-up.
For our simulations we considered two different network topologies: toroidal grids with the Moore neighborhood, i.e., the nodes have diagonal edges and all inner nodes have degree 8, and random 8-regular networks. We generated grids with 100 × 100 up to 300 × 300 nodes, where the grid sides increased in steps of 20. To have comparable random 8-regular graphs we generated them with the same number of nodes. For each configuration we ran the IRD starting from 100 random initial placements to derive the results depicted in Fig. 8.
To get the initial placements, the agents were placed uniformly at random on the nodes of the network and we assumed equal proportions of each agent type. For the jump game we used 6% empty nodes. In each round the discontent agents are activated in a random order and each activated agent iterates randomly over all possible locations for a swap or a jump and chooses the first location which yields an improvement.
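A stripped-down version of this set-up for the swap game, reusing the earlier helpers, is sketched below; the random activation order and the "first improving location" rule follow the description above, while graph generation and all names are our own, and the jump-game variant with 6% empty nodes is omitted.

```python
import random

def random_two_type_placement(nodes):
    """Equal proportions of two types, placed uniformly at random on all nodes."""
    nodes = list(nodes)
    random.shuffle(nodes)
    half = len(nodes) // 2
    return {v: ("T1" if i < half else "T2") for i, v in enumerate(nodes)}

def simulate_swap_game(adj, tau, max_rounds=10_000):
    """Run rounds of randomly activated discontent agents; return the number of swaps."""
    placement = random_two_type_placement(adj.keys())
    swaps = 0
    for _ in range(max_rounds):
        movers = [v for v in placement if cost(v, placement, adj, tau) > 0]
        random.shuffle(movers)                         # random activation order
        moved = False
        for u in movers:
            partners = list(placement)
            random.shuffle(partners)                   # scan possible locations in random order
            for v in partners:
                if v != u and improving_swap(u, v, placement, adj, tau):
                    placement[u], placement[v] = placement[v], placement[u]
                    swaps += 1
                    moved = True
                    break
        if not moved:
            return swaps, placement                    # swap-stable placement reached
    return swaps, placement
```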
Conclusion and Open Questions
We conducted a thorough analysis of the dynamic properties of the game-theoretic version of Schelling's segregation model and provided tight threshold results for the IRD convergence for several versions of the game. Furthermore, we found that the number of agent types and the underlying graph has severe impact on the computational hardness of computing optimal placements.
It remains open whether IRD always converge for the 1-1-SSG with τ between 1/∆ and 6/∆, and for the 1-1-JSG with τ between 1/∆ and 2/∆. Since most versions are not guaranteed to converge via IRD, the existence of stable placements for all graph types is not given. Elkind et al. [15] showed for the 1-k-JSG that stable placements exist if the underlying network is a star or a graph with maximum degree 2 and τ = 1. Furthermore, they proved that if the underlying network is a tree, a stable placement may fail to exist for τ = 1 in the 1-k-JSG. However, in general, it remains an open question, in terms of different values of τ and for different underlying networks, whether stable placements exist and whether they can be computed efficiently. We conjecture the following: Conjecture 1. Equilibria are not guaranteed to exist in all cases for which we constructed IRCs.
Also the computational hardness of finding optimal placements for some variants deserves further study and this could be extended to study the existence of other interesting states, e.g., stable states with low segregation.
Our IRD convergence results can be straightforwardly adapted to hold for the extended model by Chauhan et al. [11], where agents also have single-peaked preferences over the locations. Moreover, we are positive that also our computational hardness results can be carried over.
Last but not least, we emphasize that there are many possible ways to model Schelling segregation with at least three agent types. For example, types could have preferences over other types, which then yields a rich unexplored setting. | 10,462 |
1907.07574 | 2960213605 | In the time-decay model for data streams, elements of an underlying data set arrive sequentially with the recently arrived elements being more important. A common approach for handling large data sets is to maintain a , a succinct summary of the processed data that allows approximate recovery of a predetermined query. We provide a general framework that takes any offline-coreset and gives a time-decay coreset for polynomial time decay functions. We also consider the exponential time decay model for @math -median clustering, where we provide a constant factor approximation algorithm that utilizes the online facility location algorithm. Our algorithm stores @math points where @math is the half-life of the decay function and @math is the aspect ratio of the dataset. Our techniques extend to @math -means clustering and @math -estimators as well. | The first insertion-only streaming algorithm for the @math -median clustering problem was presented in 2000 by Guha, Mishra, Motwani, and O'Callaghan @cite_17 . Their algorithm uses @math space for a @math approximation, for some @math . Subsequently, Charikar al , @cite_0 present an @math -approximation algorithm for @math -means clustering that uses @math space. Their algorithm uses a number of phases, each corresponding to a different guess for the value of the cost of optimal solution. The guesses are then used in the online facility location ( ) algorithm of @cite_45 , which provides a set of centers whose number and cost allows the algorithm to reject or accept the guess. This technique is now one of the standard approaches for handling @math -service problems. Braverman al , @cite_3 improve the space usage of this technique to @math . @cite_20 and @cite_2 develop algorithms for @math -means clustering on sliding windows, in which expired data should not be included in determining the cost of a solution. | {
"abstract": [
"One of the central problems in data-analysis is k-means clustering. In recent years, considerable attention in the literature addressed the streaming variant of this problem, culminating in a series of results (Har-Peled and Mazumdar; Frahling and Sohler; Frahling, Monemizadeh, and Sohler; Chen) that produced a (1 + e)-approximation for k-means clustering in the streaming setting. Unfortunately, since optimizing the k-means objective is Max-SNP hard, all algorithms that achieve a (1 + e)-approximation must take time exponential in k unless P=NP. Thus, to avoid exponential dependence on k, some additional assumptions must be made to guarantee high quality approximation and polynomial running time. A recent paper of Ostrovsky, Rabani, Schulman, and Swamy (FOCS 2006) introduced the very natural assumption of data separability: the assumption closely reflects how k-means is used in practice and allowed the authors to create a high-quality approximation for k-means clustering in the non-streaming setting with polynomial running time even for large values of k. Their work left open a natural and important question: are similar results possible in a streaming setting? This is the question we answer in this paper, albeit using substantially different techniques. We show a near-optimal streaming approximation algorithm for k-means in high-dimensional Euclidean space with sublinear memory and a single pass, under the same data separability assumption. Our algorithm offers significant improvements in both space and running time over previous work while yielding asymptotically best-possible performance (assuming that the running time must be fully polynomial and P ≠ NP). The novel techniques we develop along the way imply a number of additional results: we provide a high-probability performance guarantee for online facility location (in contrast, Meyerson's FOCS 2001 algorithm gave bounds only in expectation); we develop a constant approximation method for the general class of semi-metric clustering problems; we improve (even without σ-separability) by a logarithmic factor space requirements for streaming constant-approximation for k-median; finally we design a \"re-sampling method\" in a streaming setting to convert any constant approximation for clustering to a [1 + O(σ2)]-approximation for σ-separable data.",
"We study clustering problems in the streaming model, where the goal is to cluster a set of points by making one pass (or a few passes) over the data using a small amount of storage space. Our main result is a randomized algorithm for the k--Median problem which produces a constant factor approximation in one pass using storage space O(k poly log n). This is a significant improvement of the previous best algorithm which yielded a 2O(1 e) approximation using O(ne) space. Next we give a streaming algorithm for the k--Median problem with an arbitrary distance function. We also study algorithms for clustering problems with outliers in the streaming model. Here, we give bicriterion guarantees, producing constant factor approximations by increasing the allowed fraction of outliers slightly.",
"",
"We explore clustering problems in the streaming sliding window model in both general metric spaces and Euclidean space. We present the first polylogarithmic space O(1)-approximation to the metric k-median and metric k-means problems in the sliding window model, answering the main open problem posed by Babcock, Datar, Motwani and O'Callaghan [5], which has remained unanswered for over a decade. Our algorithm uses O(k3log6W) space and poly(k, log W) update time, where W is the window size. This is an exponential improvement on the space required by the technique due to Babcock, et al We introduce a data structure that extends smooth histograms as introduced by Braverman and Ostrovsky [11] to operate on a broader class of functions. In particular, we show that using only polylogarithmic space we can maintain a summary of the current window from which we can construct an O(1)-approximate clustering solution. Merge-and-reduce is a generic method in computational geometry for adapting offline algorithms to the insertion-only streaming model. Several well-known coreset constructions are maintainable in the insertion-only streaming model using this method, including well-known coreset techniques for the k-median and k-means problems in both low-and high-dimensional Euclidean spaces [31, 15]. Previous work [27] has adapted coreset techniques to the insertion-deletion model, but translating them to the sliding window model has remained a challenge. We give the first algorithm that, given an insertion-only streaming coreset of space s (maintained using merge-and-reduce method), maintains this coreset in the sliding window model using O(s2e--2 log W) space. For clustering problems, our results constitute the first significant step towards resolving problem number 20 from the List of Open Problems in Sublinear Algorithms [39].",
"In PODS 2003, Babcock, Datar, Motwani and O'Callaghan gave the first streaming solution for the k-median problem on sliding windows using O(frack k tau^4 W^2tau log^2 W) space, with a O(2^O(1 tau)) approximation factor, where W is the window size and tau in (0,1 2) is a user-specified parameter. They left as an open question whether it is possible to improve this to polylogarithmic space. Despite much progress on clustering and sliding windows, this question has remained open for more than a decade. In this paper, we partially answer the main open question posed by Babcock, Datar, Motwani and O'Callaghan. We present an algorithm yielding an exponential improvement in space compared to the previous result given in Babcock, et al In particular, we give the first polylogarithmic space (alpha,beta)-approximation for metric k-median clustering in the sliding window model, where alpha and beta are constants, under the assumption, also made by , that the optimal k-median cost on any given window is bounded by a polynomial in the window size. We justify this assumption by showing that when the cost is exponential in the window size, no sublinear space approximation is possible. Our main technical contribution is a simple but elegant extension of smooth functions as introduced by Braverman and Ostrovsky, which allows us to apply well-known techniques for solving problems in the sliding window model to functions that are not smooth, such as the k-median cost.",
"We study clustering under the data stream model of computation where: given a sequence of points, the objective is to maintain a consistently good clustering of the sequence observed so far, using a small amount of memory and time. The data stream model is relevant to new classes of applications involving massive data sets, such as Web click stream analysis and multimedia data analysis. We give constant-factor approximation algorithms for the k-median problem in the data stream model of computation in a single pass. We also show negative results implying that our algorithms cannot be improved in a certain sense."
],
"cite_N": [
"@cite_3",
"@cite_0",
"@cite_45",
"@cite_2",
"@cite_20",
"@cite_17"
],
"mid": [
"1967187838",
"2091684877",
"",
"2264057010",
"2290182846",
"2100369465"
]
} | 0 |
||
1907.07574 | 2960213605 | In the time-decay model for data streams, elements of an underlying data set arrive sequentially with the recently arrived elements being more important. A common approach for handling large data sets is to maintain a , a succinct summary of the processed data that allows approximate recovery of a predetermined query. We provide a general framework that takes any offline-coreset and gives a time-decay coreset for polynomial time decay functions. We also consider the exponential time decay model for @math -median clustering, where we provide a constant factor approximation algorithm that utilizes the online facility location algorithm. Our algorithm stores @math points where @math is the half-life of the decay function and @math is the aspect ratio of the dataset. Our techniques extend to @math -means clustering and @math -estimators as well. | Another line of approach for @math -service problems is the construction of coresets, in particular when the data points lie in the Euclidean space. Har-Peled and Mazumdar @cite_28 give an insertion-only streaming algorithm for @math -medians and @math -means that provides a @math -approximation, using space @math , where @math is the dimension of the space. Similarly, Chen @cite_40 introduced an algorithm using @math space, with the same approximation guarantees. | {
"abstract": [
"In this paper, we show the existence of small coresets for the problems of computing k-median and k-means clustering for points in low dimension. In other words, we show that given a point set P in Rd, one can compute a weighted set S ⊆ P, of size O(k e-d log n), such that one can compute the k-median means clustering on S instead of on P, and get an (1+e)-approximation. As a result, we improve the fastest known algorithms for (1+e)-approximate k-means and k-median. Our algorithms have linear running time for a fixed k and e. In addition, we can maintain the (1+e)-approximate k-median or k-means clustering of a stream when points are being only inserted, using polylogarithmic space and update time.",
"We present new approximation algorithms for the @math -median and @math -means clustering problems. To this end, we obtain small coresets for @math -median and @math -means clustering in general metric spaces and in Euclidean spaces. In @math , these coresets are of size with polynomial dependency on the dimension @math . This leads to @math -approximation algorithms to the optimal @math -median and @math -means clustering in @math , with running time @math , where @math is the number of points. This improves over previous results. We use those coresets to maintain a @math -approximate @math -median and @math -means clustering of a stream of points in @math , using @math space. These are the first streaming algorithms, for those problems, that have space complexity with polynomial dependency on the dimension."
],
"cite_N": [
"@cite_28",
"@cite_40"
],
"mid": [
"2045964207",
"2094048240"
]
} | 0 |
||
1907.07574 | 2960213605 | In the time-decay model for data streams, elements of an underlying data set arrive sequentially with the recently arrived elements being more important. A common approach for handling large data sets is to maintain a , a succinct summary of the processed data that allows approximate recovery of a predetermined query. We provide a general framework that takes any offline-coreset and gives a time-decay coreset for polynomial time decay functions. We also consider the exponential time decay model for @math -median clustering, where we provide a constant factor approximation algorithm that utilizes the online facility location algorithm. Our algorithm stores @math points where @math is the half-life of the decay function and @math is the aspect ratio of the dataset. Our techniques extend to @math -means clustering and @math -estimators as well. | Cohen and Strauss @cite_47 study problems in time-decaying data streams in 2003. There are a number of results @cite_49 @cite_41 @cite_46 @cite_24 in this line of work, but the most prominent time-decay model is the sliding window model. Datar al , @cite_4 introduced the exponential histogram as a framework in the sliding window for estimating statistics such as count, sum of positive integers, average, and @math norms. This initiated an active line of research, including improvements to count and sum @cite_26 , frequent itemsets @cite_33 @cite_12 , frequency counts and quantiles @cite_32 @cite_44 , rarity and similarity @cite_11 , variance and @math -medians @cite_36 and other geometric and numerical linear algebra problems @cite_10 @cite_25 @cite_18 . | {
"abstract": [
"We initiate the study of numerical linear algebra in the sliding window model, where only the most recent @math updates in the data stream form the underlying set. Although most existing work in the sliding window model uses the smooth histogram framework, most interesting linear-algebraic problems are not smooth; we show that the spectral norm, vector induced matrix norms, generalized regression, and low-rank approximation are not amenable for the smooth histogram framework. To overcome this challenge, we first give a deterministic algorithm that achieves spectral approximation in the sliding window model that can be viewed as a generalization of smooth histograms, using the Loewner ordering of PSD matrices. We then give algorithms for both spectral approximation and low-rank approximation that are space-optimal up to polylogarithmic factors. Our algorithms are based on a new notion of \"reverse online\" leverage scores that account for both how unique and how recent a row is, while preserving sparsity so that both our algorithms run in input sparsity runtime, up to lower order factors. We show that our techniques have applications to linear-algebraic problems in other settings. Specifically, we show that our analysis immediately implies an algorithm for low-rank approximation in the online setting that is space-optimal up to logarithmic factors, as well as nearly input sparsity time. We show our deterministic spectral approximation algorithm can be used to handle @math spectral approximation in the sliding window model under a certain assumption on the bit complexity of the entries. Finally, we show that our downsampling framework can be applied to the problem of approximate matrix multiplication and provide upper and lower bounds that are tight up to @math factors.",
"This paper presents algorithms for estimating aggregate functions over a \"sliding window\" of the N most recent data items in one or more streams. Our results include: For a single stream, we present the first e-approximation scheme for the number of 1's in a sliding window that is optimal in both worst case time and space. We also present the first e for the sum of integers in [0..R] in a sliding window that is optimal in both worst case time and space (assuming R is at most polynomial in N). Both algorithms are deterministic and use only logarithmic memory words. In contrast, we show that an deterministic algorithm that estimates, to within a small constant relative error, the number of 1's (or the sum of integers) in a sliding window over the union of distributed streams requires O(N) space. We present the first randomized (e,s)-approximation scheme for the number of 1's in a sliding window over the union of distributed streams that uses only logarithmic memory words. We also present the first (e,s)-approximation scheme for the number of distinct values in a sliding window over distributed streams that uses only logarithmic memory words. < olOur results are obtained using a novel family of synopsis data structures.",
"We consider the problem of maintaining aggregates and statistics over data streams, with respect to the last N data elements seen so far. We refer to this model as the sliding window model. We consider the following basic problem: Given a stream of bits, maintain a count of the number of 1's in the last N elements seen from the stream. We show that, using @math bits of memory, we can estimate the number of 1's to within a factor of @math . We also give a matching lower bound of @math memory bits for any deterministic or randomized algorithms. We extend our scheme to maintain the sum of the last N positive integers and provide matching upper and lower bounds for this more general problem as well. We also show how to efficiently compute the Lp norms ( @math ) of vectors in the sliding window model using our techniques. Using our algorithm, one can adapt many other techniques to work for the sliding window model with a multiplicative overhead of @math in memory and a @math factor loss in accuracy. These include maintaining approximate histograms, hash tables, and statistics or aggregates such as sum and averages.",
"This paper considers the problem of mining closed frequent itemsets over a data stream sliding window using limited memory space. We design a synopsis data structure to monitor transactions in the sliding window so that we can output the current closed frequent itemsets at any time. Due to time and memory constraints, the synopsis data structure cannot monitor all possible itemsets. However, monitoring only frequent itemsets will make it impossible to detect new itemsets when they become frequent. In this paper, we introduce a compact data structure, the closed enumeration tree (CET), to maintain a dynamically selected set of itemsets over a sliding window. The selected itemsets contain a boundary between closed frequent itemsets and the rest of the itemsets. Concept drifts in a data stream are reflected by boundary movements in the CET. In other words, a status change of any itemset (e.g., from non-frequent to frequent) must occur through the boundary. Because the boundary is relatively stable, the cost of mining closed frequent itemsets over a sliding window is dramatically reduced to that of mining transactions that can possibly cause boundary movements in the CET. Our experiments show that our algorithm performs much better than representative algorithms for the sate-of-the-art approaches.",
"",
"",
"The sliding window model is useful for discounting stale data in data stream applications. In this model, data elements arrive continually and only the most recent N elements are used when answering queries. We present a novel technique for solving two important and related problems in the sliding window model---maintaining variance and maintaining a k--median clustering. Our solution to the problem of maintaining variance provides a continually updated estimate of the variance of the last N values in a data stream with relative error of at most e using O(1 e 2 log N) memory. We present a constant-factor approximation algorithm which maintains an approximate k--median solution for the last N data points using O(k τ4 N2τ log2 N) memory, where τ < 1 2 is a parameter which trades off the space bound with the approximation factor of O(2O(1 τ)).",
"We consider the problem of maintaining e-approximate counts and quantiles over a stream sliding window using limited space. We consider two types of sliding windows depending on whether the number of elements N in the window is fixed (fixed-size sliding window) or variable (variable-size sliding window). In a fixed-size sliding window, both the ends of the window slide synchronously over the stream. In a variable-size sliding window, an adversary slides the window ends independently, and therefore has the ability to vary the number of elements N in the window.We present various deterministic and randomized algorithms for approximate counts and quantiles. All of our algorithms require O(1 e polylog(1 e, N)) space. For quantiles, this space requirement is an improvement over the previous best bound of O(1 e2 polylog(1 e, N)). We believe that no previous work on space-efficient approximate counts over sliding windows exists.",
"",
"",
"We consider the problem of maintaining polynomial and exponential decay aggregates of a data stream, where the weight of values seen from the stream diminishes as time elapses. These types of aggregation were discussed by Cohen and Strauss (J. Algorithms 1(59), 2006), and can be used in many applications in which the relative value of streaming data decreases since the time the data was seen. Some recent work and space efficient algorithms were developed for time-decaying aggregations, and in particular polynomial and exponential decaying aggregations. All of the work done so far has maintained multiplicative approximations for the aggregates. In this paper we present the first O(log N) space algorithm for the polynomial decay under a multiplicative approximation, matching a lower bound. In addition, we explore and develop algorithms and lower bounds for approximations allowing an additive error in addition to the multiplicative error. We show that in some cases, allowing an additive error can decrease the amount of space required, while in other cases we cannot do any better than a solution without additive error.",
"We formalize the problem of maintaining time-decaying aggregates and statistics of a data stream: the relative contribution of each data item to the aggregate is scaled down by a factor that depends on, and is non-decreasing with, elapsed time. Time-decaying aggregates are used in applications where the significance of data items decreases over time. We develop storage-efficient algorithms, and establish upper and lower bounds. Surprisingly, even though maintaining decayed aggregates have become a widely-used tool, our work seems to be the first both to explore it formally and to provide storage-efficient algorithms for important families of decay functions, including polynomial decay.",
"We investigate the diameter problem in the streaming and sliding-window models. We show that, for a stream of @math points or a sliding window of size @math , any exact algorithm for diameter requires @math bits of space. We present a simple @math -approximation algorithm for computing the diameter in the streaming model. Our main result is an @math -approximation algorithm that maintains the diameter in two dimensions in the sliding-window model using @math bits of space, where @math is the maximum, over all windows, of the ratio of the diameter to the minimum non-zero distance between any two points in the window.",
"",
"",
"In the windowed data stream model, we observe items coming in over time. At any time t, we consider the window of the last N observations a t -(N-1), a t-(N-2),..., a t , each a i E 1,...,u ; we are required to support queries about the data in the window. A crucial restriction is that we are only allowed o(N) (often polylogarithmic in N) storage space, so not all items within the window can be archived. We study two basic problems in the windowed data stream model. The first is the estimation of the rarity of items in the window. Our second problem is one of estimating similarity between two data stream windows using the Jacard's coefficient. The problems of estimating rarity and similarity have many applications in mining massive data sets. We present novel, simple algorithms for estimating rarity and similarity on windowed data streams, accurate up to factor 1 ± e using space only logarithmic in the window size."
],
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_4",
"@cite_33",
"@cite_41",
"@cite_46",
"@cite_36",
"@cite_32",
"@cite_24",
"@cite_44",
"@cite_49",
"@cite_47",
"@cite_10",
"@cite_25",
"@cite_12",
"@cite_11"
],
"mid": [
"2801503411",
"1990465412",
"2004110412",
"2044541009",
"",
"",
"2124507579",
"2152637787",
"",
"",
"2046858995",
"2081903609",
"2007025103",
"",
"",
"2600017706"
]
} | 0 |
||
1907.07270 | 2962316770 | This paper proposes a face anti-spoofing user-centered model (FAS-UCM). The major difficulty, in this case, is obtaining fraudulent images from all users to train the models. To overcome this problem, the proposed method is divided into three main parts: generation of new spoof images, based on style transfer and spoof image representation models; training of a Convolutional Neural Network (CNN) for liveness detection; evaluation of the live and spoof testing images for each subject. The generalization of the CNN to perform style transfer has shown promising qualitative results. Preliminary results have shown that the proposed method is capable of distinguishing between live and spoof images on the SiW database, with an average classification error rate of 0.22. | Generative Adversarial Networks (GANs) and Convolutional Neural Networks (CNNs) have been extensively used in the process of generating new images given a specific database as a reference. The modeling of new images can be learned from the probability distribution of any set of images @cite_36 . This process can be seen in the literature in applications such as the generation of new images @cite_35 , the transfer of styles from one set of images to another @cite_22 , the modeling of new images combining features in the discriminative space @cite_32 , among others. | {
"abstract": [
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach."
],
"cite_N": [
"@cite_36",
"@cite_35",
"@cite_32",
"@cite_22"
],
"mid": [
"2099471712",
"2962770929",
"2173520492",
"2962793481"
]
} | Style Transfer Applied to Face Liveness Detection with User-Centered Models | With the growth of technology and the advance of computer vision techniques and machine learning, facial biometry has been receiving special attention in the last few years [1], [2], [3]. Not far from that, the ease of implementation and integration of facial biometric systems brings the concern with the security of these solutions. More specifically, when we consider facial verification, a major concern arises in regard to authenticity, in which one person tries to obtain access as another person. The problem addressed in this paper is the liveness detection on a face image, which means determining if there is really a living person in front of a camera and not an attempt to identity fraud by presenting a photo or a video in order to obtain improper access. Therefore, it is expected that a face anti-spoofing (FAS) system should be able to distinguish an image that does not have a fraud attempt from one that does have it. Thus, a given solution must be able to receive images captured from various sources (smartphones, webcams, professional cameras, etc.) and perform the classification of these images as authentic or fraudulent, which are defined from now on as spoof images.
Using machine learning algorithms to tackle the problem of face liveness detection requires examples of authentic images as well as fraudulent images. Several benchmark databases have been released in the last years in the context of face liveness detection [4], [5], [6], [7], with the objective of providing training and test data to solve the problem linked to authenticity in facial recognition.
Although there are many spoof databases, they are not always representative enough for a real application. Several face liveness detection methods have been proposed; however, their results hardly beat random classifiers [8]. In addition, it is observed that robust classifiers, such as deep neural networks, often learn not only the spoof representation but also facial characteristics of the subjects present in the database. An interesting idea to circumvent this problem is to create user-centered liveness detection models. However, the major difficulty in this case is obtaining spoof images from all subjects. In a real-world scenario, it is impracticable to ask subjects to provide examples of fraudulent images of themselves. Therefore, it is important to create a method for generating these fraudulent images automatically.
In this paper we propose an approach for generating fraudulent face images from authentic ones based on the idea of style transfer, and we use both authentic and fraudulent images to build user-centered face liveness detection models based on convolutional neural networks (CNNs). For such an aim, we use the CNN-based approach proposed by Gatys et al. [9] that creates artistic images of high perceptual quality. Even if their main purpose is to create artistic images, we adapt their approach to create more secure facial biometric systems. Therefore, we use style transfer techniques to dynamically create fraudulent images of real subjects. In addition, the idea of making the liveness detection user-centered brings new results in the context of facial biometrics and liveness detection.
The remainder of the paper is organized as follows: In Section II, the theoretical background of style transfer and data augmentation on facial biometrics is given. The proposed method is presented in Section III. Results and concluding remarks are given in Sections IV and V, respectively.
III. USER-CENTERED MODELS
A. Database
The database used in this paper was the Spoof in the Wild (SiW) database [7]. It was introduced in 2018 and consists of 165 subjects in 4478 different videos with 1080p HD resolution. Different sessions were recorded to capture the videos, varying the participants as well as the lighting. The following types of presentation attacks are present in the database: photos of printed photos; presentation attacks using cell phones; presentation attacks using monitor screens; and presentation attacks using tablet screens. The live videos were captured with two high-quality cameras (a Canon EOS T6 and a Logitech C920 webcam) in four different sessions: (i) subjects were asked to move their head at varying distances from the camera; (ii) subjects moved the yaw angle of the head from 90° to −90°, with different facial expressions; (iii) and (iv) the same movements as in (i) and (ii), but with variation in the light source illuminating the subject's face and changing orientation during the video capture, respectively.
For facial extraction, we used the Dlib library [25], with a height and width of 256×256 pixels for the facial images and a margin of 0.1. Fig. 1 shows some samples of the faces extracted from the SiW database. The same face extraction protocol was applied to both live and spoof images. All frames from the videos were used to extract faces. In total, around 1.1M live face images were obtained and around 1.4M spoof face images. The higher number of spoof images is due to the different presentation attacks present in the database, given that, for each live video, a number of different spoof videos were generated in the SiW database.
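As a rough illustration of this extraction step, the following sketch (not the authors' code) detects a face with Dlib and crops it to 256×256 pixels with a 10% margin; the exact margin convention and frame handling used in the paper are assumptions here.

```python
# Illustrative sketch: detect a face with Dlib and crop it to 256x256 with a 10% margin.
import dlib
import cv2

detector = dlib.get_frontal_face_detector()  # HOG-based frontal face detector

def extract_face(image_path, size=256, margin=0.1):
    img = cv2.imread(image_path)
    rects = detector(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 1)
    if not rects:
        return None  # no face found in this frame
    r = rects[0]
    # Expand the detected bounding box by the given margin on every side.
    mw, mh = int(r.width() * margin), int(r.height() * margin)
    top = max(r.top() - mh, 0)
    bottom = min(r.bottom() + mh, img.shape[0])
    left = max(r.left() - mw, 0)
    right = min(r.right() + mw, img.shape[1])
    face = img[top:bottom, left:right]
    return cv2.resize(face, (size, size))
```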
B. Proposed Method
The proposed face anti-spoofing method is divided into two main parts: (i) generation of new spoof images based on style transfer, and (ii) training of the user-centered CNN models (FAS-UCM) for liveness detection.
1) Style Transfer:
We used a CNN to perform style transfer based on just one reference image, following the implementation in [26]. The CNN architecture replicates the VGG19 [27] architecture, with the parametrization, optimization, and training method proposed in [9], [23], [24], with perceptual loss and instance normalization. One subject of the database was randomly chosen, and for each spoof distribution observed, one image of that subject was selected as a spoof reference image from the SiW database. Fig. 2 shows the 10 reference images of the spoof styles available in SiW. Fig. 3 presents the pipeline to generate spoof images using one image as a reference for each spoof style. First, a VGG19 [27] is used to extract information from the reference image in order to obtain its stylization. This step is performed for each spoof image representation, generating one model per spoof style representation. Next, the spoof images are generated using each one of the spoof models. A holdout approach was used in the experiments, and the database was split into 70% of the live images for training and the remaining 30% for testing. Each training image used as input to the style-transfer CNN provides 10 spoof samples at the output; to maintain data balancing, only 10% of the live training images were used during the spoof generation process.
2) Face Anti-Spoofing User-Centered Model (FAS-UCM): All generated spoof images and all live images from the training set are subsequently used to train the FAS-UCM to distinguish between live and spoof. Fig. 4 illustrates the training process. We used two CNN architectures for the user-centered models: a MobileNetV2 CNN [28], pre-trained on ImageNet [29], and the proposed Spoof-ModNet, as can be seen in Table I. While the Spoof-ModNet has a total number of parameters of 148k, the MobileNetV2 [28] has between 1.7M and 6.9M parameters, which shows that the proposed model is substantially less complex. Also, in Table I it is possible to see that the first convolutional layer takes a 32×32 image as input, which significantly reduces the complexity of the neural network and speeds up the training and testing process. In contrast, the MobileNetV2 takes a 224×224 image as input.
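The exact layer configuration of Spoof-ModNet is given in Table I and is not reproduced here; the following is only a hypothetical sketch of a compact CNN with the 32×32×3 input described above, using the Spoof-ModNet learning rate reported below (0.0001).

```python
# Hypothetical sketch of a compact liveness CNN; not the actual Spoof-ModNet layers.
import tensorflow as tf

def build_small_liveness_net(num_classes=2):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32, 32, 3)),  # small input, as in Table I
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_small_liveness_net()
# Training setup reported in the text: learning rate 0.0001, batch size 8, 50 epochs.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```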
The MobileNetV2 architecture was fine-tuned with a learning rate of 0.01, batch size of 100 and 4000 steps. The proposed Spoof-ModNet was randomly initialized and trained with a learning rate of 0.0001, batch size of 8 and 50 epochs. The performance of both architectures was evaluated in our experiments. In the next section we present the evaluation of the proposed method on the live and real spoof testing images -the spoof images present in the original database -for each subject.
IV. EXPERIMENTS AND RESULTS
The first important result obtained in this paper was the automatic generation of spoof images based on a single reference image. For each training image used as input to the style transfer CNN, 10 new spoof samples were generated. Examples of the generated spoof images can be seen in Fig. 5. The qualitative results show that the VGG19 network was able to capture and transfer the style of the spoof images. It is important to note that the network captured all the details present in the spoof images, for example, very bright spots, warped lines, changes in color distribution throughout the face image, and changes in the illumination. Another important aspect of the style transfer was the adaptation to subjects wearing glasses or not, given that all spoof image representations had a subject wearing glasses, which was not carried over into the resulting images. Also, it is possible to observe the good adaptation of the CNN to the gender and race of the subject.
The noise present in some of the generated images, usually close to the mouth and cheeks, can be explained as the CNN trying to transfer the very bright spots -for example, specular reflections captured by the camera -seen in the representative spoof images to an image that is not too similar from the reference image. Not far from that, noise can be perceived in some generated images with warped lines across the face, which is a result of the CNN trying to mimic the moiré pattern.
In order to evaluate the generalization of the CNNs trained with the generated spoof images, a testing protocol was applied using the test images. The classification test was performed for each subject, considering their respective previously trained model. Table II shows the results for the two CNN architectures. The Spoof-ModNet has better performance over the metrics analyzed, with an average classification error rate (ACER) of 0.22, while the MobileNetV2 presented an ACER of 0.26. It is important to note that the Spoof-ModNet has significantly fewer convolutional layers and the input images are also significantly smaller (32×32×3). In Fig. 6 it is possible to analyze the results considering the accuracy reported for each subject in the database. In total, there were 90 subjects in the SiW test database. The minimum accuracy reported with the proposed Spoof-ModNet was 34.69%, and the maximum 99.49%. Also, it is possible to see that more than 50% of the per-subject accuracies are above 70% using the Spoof-ModNet, while 50% of the data lies between 60.49% and 93.95% accuracy. On the other hand, the MobileNetV2 presented a minimum of 52.90% and a maximum of 88.53% accuracy, with 50% of the data between an accuracy of 65.07% and 76.28%. From the boxplot chart, it is also possible to observe that the MobileNetV2, despite having a worse average accuracy, had a more consistent performance, varying less than the Spoof-ModNet architecture.
Fig. 3. Spoof image generation protocol pipeline: the VGG19 [27] architecture is used to obtain the style of each spoof representation image, generating one style-transfer model per representation; for each subject, a set of spoof images is generated with each of these models.
For the performance evaluation and comparison with other published methods, besides the ACER metric, we also selected two other standardized ISO/IEC 30107-3 metrics [30]: attack presentation classification error rate (APCER) and normal presentation classification error rate (NPCER). Table III reports an indirect comparison between our method and other methods [7], [21], [20], [22] with results reported on the SiW database. The results reported in [7], [21], [20], [22] followed the intra-database protocol 1 proposed by Liu et al. [7], with a set of subjects in the training data and a distinct set of subjects in the testing data. However, given that our protocol relies on user-centered models, it is not possible to follow the same protocol and the results are not directly comparable, being used only as a baseline. Although our results yielded a worse performance, with error rates higher than those of the other approaches, the method is significantly superior to random classification, and it shows that it is feasible to generate a user's fraudulent images from real images and use them to train user-centered face liveness detection models. In addition, improvements in the models and in the methods for choosing the spoof representation images should improve the results.
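For reference, a minimal sketch of how these metrics relate, assuming binary labels with 1 for attack (spoof) and 0 for live; the per-attack grouping and thresholds of the cited protocols are omitted.

```python
# Minimal sketch of the ISO/IEC 30107-3 style metrics mentioned above.
import numpy as np

def presentation_attack_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    attacks = y_true == 1
    lives = y_true == 0
    # APCER: fraction of attack presentations wrongly classified as live.
    apcer = float(np.mean(y_pred[attacks] == 0)) if attacks.any() else 0.0
    # NPCER: fraction of live presentations wrongly classified as attacks.
    npcer = float(np.mean(y_pred[lives] == 1)) if lives.any() else 0.0
    acer = (apcer + npcer) / 2.0  # average classification error rate
    return apcer, npcer, acer
```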
V. CONCLUSIONS
We have presented a method that uses the style transfer technique to generate spoof images. We used the VGG19 network, which was able to capture and transfer the style from spoof representations. The generated spoof images were used to train two different architectures for each person to perform liveness detection: the MobileNetV2 and the proposed Spoof-ModNet. The Spoof-ModNet network had better performance, with an ACER of 0.22, while the MobileNetV2 presented an ACER of 0.26.
Further work will be done to explore other classifier architectures and even the combination of multiple classifiers. Further analysis will be done on other databases to evaluate the generalization of the proposed method. Other methods for choosing the spoof representation images will also be evaluated. Choosing the spoof representation based on the subject's particularities may bring better results than using the same representation for the entire database. | 2,213
1907.07270 | 2962316770 | This paper proposes a face anti-spoofing user-centered model (FAS-UCM). The major difficulty, in this case, is obtaining fraudulent images from all users to train the models. To overcome this problem, the proposed method is divided into three main parts: generation of new spoof images, based on style transfer and spoof image representation models; training of a Convolutional Neural Network (CNN) for liveness detection; evaluation of the live and spoof testing images for each subject. The generalization of the CNN to perform style transfer has shown promising qualitative results. Preliminary results have shown that the proposed method is capable of distinguishing between live and spoof images on the SiW database, with an average classification error rate of 0.22. | @cite_1 presented a neural algorithm for style transfer based on the extraction of image style through convolutional layers. The authors showed that the deeper the convolutional layers, the more the content of the image and the artistic style could be separated -- and, as a result, the more the artistic style could be extracted from the input image. Similarly, the higher layers of the CNN can generate more robust, sharp, and detailed artistic style images @cite_1 . @cite_37 brought to light an optimization of the neural algorithm proposed by @cite_1 , where the neural feed-forward network was trained with a perceptual loss instead of a per-pixel loss. Such an optimization had similar qualitative results with regard to the artistic style transfer, while being three orders of magnitude faster @cite_37 . The optimization proposed by @cite_8 also showed that instance normalization could be applied to the CNN with improved results over batch normalization, in training and testing time. | {
"abstract": [
"We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.",
"In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.",
"It this paper we revisit the fast stylization method introduced in Ulyanov et. al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and to apply the latter both at training and testing times. The resulting method can be used to train high-performance architectures for real-time image generation. The code will is made available on github at this https URL. Full paper can be found at arXiv:1701.02096."
],
"cite_N": [
"@cite_37",
"@cite_1",
"@cite_8"
],
"mid": [
"2331128040",
"1924619199",
"2502312327"
]
} | Style Transfer Applied to Face Liveness Detection with User-Centered Models | With the growth of technology and the advance of computer vision techniques and machine learning, facial biometry has been receiving special attention in the last few years [1], [2], [3]. Not far from that, the ease of implementation and integration of facial biometric systems brings the concern with the security of these solutions. More specifically, when we consider facial verification, a major concern arises in regard to authenticity, in which one person tries to obtain access as another person. The problem addressed in this paper is the liveness detection on a face image, which means determining if there is really a living person in front of a camera and not an attempt to identity fraud by presenting a photo or a video in order to obtain improper access. Therefore, it is expected that a face anti-spoofing (FAS) system should be able to distinguish an image that does not have a fraud attempt from one that does have it. Thus, a given solution must be able to receive images captured from various sources (smartphones, webcams, professional cameras, etc.) and perform the classification of these images as authentic or fraudulent, which are defined from now on as spoof images.
Using machine learning algorithms to tackle the problem of face liveness detection requires examples of authentic images as well as fraudulent images. Several benchmark databases have been released in the last years in the context of face liveness detection [4], [5], [6], [7], with the objective of providing training and test data to solve the problem linked to authenticity in facial recognition.
Although there are many spoof databases, they are not always representative enough for a real application. Several face liveness detection methods have been proposed; however, their results hardly beat random classifiers [8]. In addition, it is observed that robust classifiers, such as deep neural networks, often learn not only the spoof representation but also facial characteristics of the subjects present in the database. An interesting idea to circumvent this problem is to create user-centered liveness detection models. However, the major difficulty in this case is obtaining spoof images from all subjects. In a real-world scenario, it is impracticable to ask subjects to provide examples of fraudulent images of themselves. Therefore, it is important to create a method for generating these fraudulent images automatically.
In this paper we propose an approach for generating fraudulent face images from authentic ones based on the idea of style transfer, and we use both authentic and fraudulent images to build user-centered face liveness detection models based on convolutional neural networks (CNNs). For such an aim, we use the CNN-based approach proposed by Gatys et al. [9] that creates artistic images of high perceptual quality. Even if their main purpose is to create artistic images, we adapt their approach to create more secure facial biometric systems. Therefore, we use style transfer techniques to dynamically create fraudulent images of real subjects. In addition, the idea of making the liveness detection user-centered brings new results in the context of facial biometrics and liveness detection.
The remainder of the paper is organized as follows: In Section II, the theoretical background of style transfer and data augmentation on facial biometrics is given. The proposed method is presented in Section III. Results and concluding remarks are given in Sections IV and V, respectively.
III. USER-CENTERED MODELS
A. Database
The database used in this paper was the Spoof in the Wild (SiW) database [7]. It was introduced in 2018 and consists of 165 subjects in 4478 different videos with 1080p HD resolution. Different sessions were recorded to capture the videos, varying the participants as well as the lighting. The following types of presentation attacks are present in the database: photos of printed photos; presentation attacks using cell phones; presentation attacks using monitor screens; and presentation attacks using tablet screens. The live videos were captured with two high-quality cameras (a Canon EOS T6 and a Logitech C920 webcam) in four different sessions: (i) subjects were asked to move their head at varying distances from the camera; (ii) subjects moved the yaw angle of the head from 90° to −90°, with different facial expressions; (iii) and (iv) the same movements as in (i) and (ii), but with variation in the light source illuminating the subject's face and changing orientation during the video capture, respectively.
For facial extraction, we used the Dlib library [25], with a height and width of 256×256 pixels for the facial images and a margin of 0.1. Fig. 1 shows some samples of the faces extracted from the SiW database. The same face extraction protocol was applied to both live and spoof images. All frames from the videos were used to extract faces. In total, around 1.1M live face images were obtained and around 1.4M spoof face images. The higher number of spoof images is due to the different presentation attacks present in the database, given that, for each live video, a number of different spoof videos were generated in the SiW database.
B. Proposed Method
The proposed face anti-spoofing method is divided into two main parts: (i) generation of new spoof images based on style transfer, and (ii) training of the user-centered CNN models (FAS-UCM) for liveness detection.
1) Style Transfer:
We used a CNN to perform style transfer based on just one reference image, following the implementation in [26]. The CNN architecture replicates the VGG19 [27] architecture, with the parametrization, optimization, and training method proposed in [9], [23], [24], with perceptual loss and instance normalization. One subject of the database was randomly chosen, and for each spoof distribution observed, one image of that subject was selected as a spoof reference image from the SiW database. Fig. 2 shows the 10 reference images of the spoof styles available in SiW. Fig. 3 presents the pipeline to generate spoof images using one image as a reference for each spoof style. First, a VGG19 [27] is used to extract information from the reference image in order to obtain its stylization. This step is performed for each spoof image representation, generating one model per spoof style representation. Next, the spoof images are generated using each one of the spoof models. A holdout approach was used in the experiments, and the database was split into 70% of the live images for training and the remaining 30% for testing. Each training image used as input to the style-transfer CNN provides 10 spoof samples at the output; to maintain data balancing, only 10% of the live training images were used during the spoof generation process.
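As an illustration of the style-extraction idea described above, the sketch below (assuming PyTorch and a recent torchvision for the weights argument; the layer indices are illustrative, not the exact configuration of [26]) summarizes an image's style with Gram matrices of VGG19 feature maps, as in the perceptual-loss formulation.

```python
# Sketch: summarize an image's style with Gram matrices of VGG19 features.
import torch
import torchvision.models as models

vgg = models.vgg19(weights="IMAGENET1K_V1").features.eval()
STYLE_LAYERS = {1, 6, 11, 20, 29}  # selected ReLU outputs (illustrative choice)

def gram_matrix(feat):
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_representation(img):
    # img: tensor of shape (1, 3, H, W), ImageNet-normalized
    grams, x = [], img
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in STYLE_LAYERS:
                grams.append(gram_matrix(x))
    return grams
```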
2) Face Anti-Spoofing User-Centered Model (FAS-UCM): All generated spoof images and all live images from the training set are subsequently used to train the FAS-UCM to distinguish between live and spoof. Fig. 4 illustrates the training process. We used two CNN architectures for the user-centered models: a MobileNetV2 CNN [28], pre-trained on ImageNet [29], and the proposed Spoof-ModNet, as can be seen in Table I. While the Spoof-ModNet has a total number of parameters of 148k, the MobileNetV2 [28] has between 1.7M and 6.9M parameters, which shows that the proposed model is substantially less complex. Also, in Table I it is possible to see that the first convolutional layer takes a 32×32 image as input, which significantly reduces the complexity of the neural network and speeds up the training and testing process. In contrast, the MobileNetV2 takes a 224×224 image as input.
The MobileNetV2 architecture was fine-tuned with a learning rate of 0.01, batch size of 100 and 4000 steps. The proposed Spoof-ModNet was randomly initialized and trained with a learning rate of 0.0001, batch size of 8 and 50 epochs. The performance of both architectures was evaluated in our experiments. In the next section we present the evaluation of the proposed method on the live and real spoof testing images -the spoof images present in the original database -for each subject.
IV. EXPERIMENTS AND RESULTS
The first important result obtained in this paper was the automatic generation of spoof images based on a single reference image. For each training image used as input to the style transfer CNN, 10 new spoof samples were generated. Examples of the generated spoof images can be seen in Fig. 5. The qualitative results show that the VGG19 network was able to capture and transfer the style of the spoof images. It is important to note that the network captured all the details present in the spoof images, for example, very bright spots, warped lines, changes in color distribution throughout the face image, and changes in the illumination. Another important aspect of the style transfer was the adaptation to subjects wearing glasses or not, given that all spoof image representations had a subject wearing glasses, which was not carried over into the resulting images. Also, it is possible to observe the good adaptation of the CNN to the gender and race of the subject.
The noise present in some of the generated images, usually close to the mouth and cheeks, can be explained as the CNN trying to transfer the very bright spots -for example, specular reflections captured by the camera -seen in the representative spoof images to an image that is not too similar from the reference image. Not far from that, noise can be perceived in some generated images with warped lines across the face, which is a result of the CNN trying to mimic the moiré pattern.
In order to evaluate the generalization of the CNNs trained with the generated spoof images, a testing protocol was applied using the test images. The classification test was performed for each subject, considering their respective previously trained model. Table II shows the results for the two CNN architectures. The Spoof-ModNet has better performance over the metrics analyzed, with an average classification error rate (ACER) of 0.22, while the MobileNetV2 presented an ACER of 0.26. It is important to note that the Spoof-ModNet has significantly fewer convolutional layers and the input images are also significantly smaller (32×32×3). In Fig. 6 it is possible to analyze the results considering the accuracy reported for each subject in the database. In total, there were 90 subjects in the SiW test database. The minimum accuracy reported with the proposed Spoof-ModNet was 34.69%, and the maximum 99.49%. Also, it is possible to see that more than 50% of the per-subject accuracies are above 70% using the Spoof-ModNet, while 50% of the data lies between 60.49% and 93.95% accuracy. On the other hand, the MobileNetV2 presented a minimum of 52.90% and a maximum of 88.53% accuracy, with 50% of the data between an accuracy of 65.07% and 76.28%. From the boxplot chart, it is also possible to observe that the MobileNetV2, despite having a worse average accuracy, had a more consistent performance, varying less than the Spoof-ModNet architecture.
Fig. 3. Spoof image generation protocol pipeline: the VGG19 [27] architecture is used to obtain the style of each spoof representation image, generating one style-transfer model per representation; for each subject, a set of spoof images is generated with each of these models.
For the performance evaluation and comparison with other published methods, besides the ACER metric, we also selected two other standardized ISO/IEC 30107-3 metrics [30]: attack presentation classification error rate (APCER) and normal presentation classification error rate (NPCER). Table III reports an indirect comparison between our method and other methods [7], [21], [20], [22] with results reported on the SiW database. The results reported in [7], [21], [20], [22] followed the intra-database protocol 1 proposed by Liu et al. [7], with a set of subjects in the training data and a distinct set of subjects in the testing data. However, given that our protocol relies on user-centered models, it is not possible to follow the same protocol and the results are not directly comparable, being used only as a baseline. Although our results yielded a worse performance, with error rates higher than those of the other approaches, the method is significantly superior to random classification, and it shows that it is feasible to generate a user's fraudulent images from real images and use them to train user-centered face liveness detection models. In addition, improvements in the models and in the methods for choosing the spoof representation images should improve the results.
V. CONCLUSIONS
We have presented a method that uses the style transfer technique to generate spoof images. We used the VGG19 network, which was able to capture and transfer the style from spoof representations. The generated spoof images were used to train two different architectures for each person to perform liveness detection: the MobileNetV2 and the proposed Spoof-ModNet. The Spoof-ModNet network had better performance, with an ACER of 0.22, while the MobileNetV2 presented an ACER of 0.26.
Further work will be done to explore other classifier architectures and even the combination of multiple classifiers. Further analysis will be done on other databases to evaluate the generalization of the proposed method. Other methods for choosing the spoof representation images will also be evaluated. Choosing the spoof representation based on the subject's particularities may bring better results than using the same representation for the entire database. | 2,213
1907.07066 | 2959448612 | In a steady-state evolution, tournament selection traditionally uses the fitness function to select the parents, and negative selection chooses an individual to be replaced with an offspring. This contribution focuses on analyzing the behavior, in terms of performance, of different heuristics when used instead of the fitness function in tournament selection. The heuristics analyzed are related to measuring the similarity of the individuals in the semantic space. In addition, the analysis includes random selection and traditional tournament selection. These selection functions were implemented in our Semantic Genetic Programming system, namely EvoDAG, which is inspired by the geometric genetic operators and tested on 30 classification problems with a variable number of samples, variables, and classes. The results indicated that accuracy for parent selection combined with random selection in the negative tournament produces the best combination, and the difference in performance between this combination and tournament selection is statistically significant. Furthermore, we compare EvoDAG's performance using the selection heuristics against 18 classifiers that include traditional approaches as well as auto-machine-learning techniques. The results indicate that our proposal is competitive with state-of-the-art classifiers. Finally, it is worth mentioning that EvoDAG is available as open source software. | Let us recall that Semantic GP uses the information in the target behavior, i.e., @math , to guide the search. Notably, Krawiec @cite_29 affirmed that semantic-aware methods make search algorithms better informed. For example, Nguyen @cite_28 proposed Fitness Sharing, a technique that promotes dispersion and diversity of individuals. Their proposal consisted of calculating the individual fitness as @math , where @math is approximately equal to the number of individuals that behave similarly to individual @math . | {
"abstract": [
"This paper investigates the efficiency of using semantic and syntactic distance metrics in fitness sharing with Genetic Programming (GP). We modify the implementation of fitness sharing to speed up its execution, and used two distance metrics in calculating the distance between individuals in fitness sharing: semantic distance and syntactic distance. We applied fitness sharing with these two distance metrics to a class of real-valued symbolic regression. Experimental results show that using semantic distance in fitness sharing helps to significantly improve the performance of GP more frequently, and results in faster execution times than with the syntactic distance. Moreover, we also analyse the impact of the fitness sharing parameters on GP performance helping to indicate appropriate values for fitness sharing using a semantic distance metric.",
"Semantic genetic programming is a recent, rapidly growing trend in Genetic Programming (GP) that aims at opening the 'black box' of the evaluation function and make explicit use of more information on program behavior in the search. In the most common scenario of evaluating a GP program on a set of input-output examples (fitness cases), the semantic approach characterizes program with a vector of outputs rather than a single scalar value (fitness). The past research on semantic GP has demonstrated that the additional information obtained in this way facilitates designing more effective search operators. In particular, exploiting the geometric properties of the resulting semantic space leads to search operators with attractive properties, which have provably better theoretical characteristics than conventional GP operators. This in turn leads to dramatic improvements in experimental comparisons. The aim of the tutorial is to give a comprehensive overview of semantic methods in genetic programming, illustrate in an accessible way a formal geometric framework for program semantics to design provably good mutation and crossover operators for traditional GP problem domains, and to analyze rigorously their performance (runtime analysis). A number of real-world applications of this framework will be also presented. Other promising emerging approaches to semantics in GP will be reviewed. In particular, the recent developments in the behavioral programming, which aims at characterizing the entire program behavior (and not only program outputs) will be covered as well. Current challenges and future trends in semantic GP will be identified and discussed. Selected methods and concepts will be accompanied with live software demonstrations. Also, efficient implementation of semantic search operators may be challenging. We will illustrate very efficient, concise and elegant implementations of these operators, which are available for download from the web."
],
"cite_N": [
"@cite_28",
"@cite_29"
],
"mid": [
"2113192631",
"2043686990"
]
} | Selection Heuristics on Semantic Genetic Programming for Classification Problems | Classification is a supervised learning problem that consists in finding a function that learns a relation between inputs and outputs, where the outputs are a set of labels. The starting point is the training set composed of input-output pairs, i.e., X = {(x_1, y_1), . . . , (x_n, y_n)}. The training set is used to find a function, h, that minimizes a loss function ℓ, that is, h is the function that minimizes ∑_{(x,y)∈X} ℓ(h(x), y), where the ideal scenario would be ∀(x,y)∈X, h(x) = y, and that also accurately predicts the labels of unseen inputs. By fixing, a priori, an order on X, one can use a notation normally adopted in Semantic Genetic Programming (GP) (e.g., [1]), which is to represent the target behavior as y = (y_1, . . . , y_n) and the behavior of function h as h = (h(x_1), . . . , h(x_n)). Using this notation, the searched function h is the one whose h is as close as possible to y, where the closeness is measured using the loss function, referred to in GP as the fitness function.
EvoDAG
EvoDAG [13,14] is a Python library that implements a steady-state GP system with tournament selection. EvoDAG is inspired by the implementation of GSGP performed by Castelli et al. [47], where the main idea is to keep track of all the individuals and their behavior, leading to an efficient evaluation of the offspring whose complexity depends only on the number of fitness cases.
Let us recall that the offspring, in the geometric semantic crossover, is o = r p_1 + (1 − r) p_2, where r is a random function or a constant. In [33], we decided to extend this operation by allowing the offspring to be a linear combination of the parents, that is, o = θ_1 p_1 + θ_2 p_2, where θ_1 and θ_2 are obtained using ordinary least squares (OLS). Continuing with this line of research, in [13], we investigated the case where the offspring is a linear combination of more than two parents, and also included the possibility that the parents could be combined using a function randomly selected from the function set.
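A small numerical sketch of this idea, with made-up semantic vectors: the offspring's semantics is a linear combination of the parents' semantics, and the coefficients are fitted to the target behavior with ordinary least squares.

```python
# Illustrative sketch: offspring semantics as an OLS-fitted combination of parents.
import numpy as np

y = np.array([1.0, -1.0, 1.0, 1.0, -1.0])            # target behavior on 5 fitness cases
P = np.column_stack([                                  # semantics of three parents
    np.array([0.9, -0.8, 1.2, 0.7, -1.1]),
    np.array([0.2, 0.1, -0.3, 0.4, 0.0]),
    np.array([-1.0, 1.0, 0.5, 0.9, -0.7]),
])
theta, *_ = np.linalg.lstsq(P, y, rcond=None)          # OLS coefficients
offspring_semantics = P @ theta                        # o = sum_i theta_i * p_i
print(theta, offspring_semantics)
```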
EvoDAG, as customary, uses a function set, F = {Σ_60, Π_20, max_5, min_5, √·, |·|, sin, tan, atan, tanh, hypot, NB_5, MN_5, NC_2}, and a terminal set, T = {x_1, . . . , x_m}, to create the individuals. The functions in the function set are traditional operations, where the subscript indicates the number of arguments. Also included in F are classifiers such as Naive Bayes with a Gaussian distribution (NB_5), Naive Bayes with a Multinomial distribution (MN_5), and Nearest Centroid (NC_2).
The initial population starts with
P = {θ_1 x_1, . . . , θ_m x_m, NB(x_1, . . . , x_m), MN(x_1, . . . , x_m), NC(x_1, . . . , x_m)}, where x_i is the i-th input,
and θ_i is obtained using OLS. In the case |P| is lower than the population size, the process continues by including individuals created by randomly selecting a function from F whose arguments are drawn from the current population P. For example, let hypot be the selected function, and let the first and second arguments be θ_2 x_2 and NB(x_1, . . . , x_m). Then the individual inserted into P is θ hypot(θ_2 x_2, NB(x_1, . . . , x_m)), where θ is obtained using OLS. This process continues until the population size is reached; EvoDAG uses a population size of 4000.
EvoDAG uses a steady-state evolution; consequently, P is updated by replacing a current individual, selected using negative selection, with an offspring, which can be selected as a parent just after being inserted into P. The evolution process is similar to the one used to create the initial population, and the difference is in the procedure used to select the arguments. That is, a function f is selected from F, the arguments are selected from P using tournament selection or any of the heuristics analyzed here, and finally, the parameters associated with f are optimized using either OLS or the procedure used by the classifiers. The addition is defined as Σ_i θ_i x_i, where x_i is an individual in P. The rest of the arithmetic functions, the trigonometric functions, min, and max are defined as θ f(x_1, . . .), where f is the function at hand and x_1 is an individual in P. The process continues until the stopping criteria are met.
At this point, it is worth mentioning that EvoDAG uses a one-vs-rest scheme on classification problems. That is, a problem with k different classes is converted into k problems, each one assigning 1 to the current class and −1 to the other labels. Instead of evolving one tree per problem, as done, for example, in [45], we decided to use only one tree and optimize k different sets of θ parameters, one for each class. The result is that each node outputs k values, and the class is the one with the highest value. In the case of the classifiers, the output used is the log-likelihood.
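A minimal sketch of this one-vs-rest encoding and of the argmax prediction rule (the array layout is an assumption, not EvoDAG's internal representation):

```python
# Sketch: k-class target encoded as k {+1, -1} problems; prediction by argmax.
import numpy as np

def one_vs_rest_targets(labels, k):
    labels = np.asarray(labels)
    enc = -np.ones((len(labels), k))
    enc[np.arange(len(labels)), labels] = 1.0
    return enc  # shape (n_samples, k)

def predict_class(node_outputs):
    # node_outputs: (n_samples, k) values produced by a node (or log-likelihoods
    # in the case of the classifier functions); the class is the argmax per row.
    return np.argmax(node_outputs, axis=1)
```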
EvoDAG stops the evolutionary process using early stopping. That is, the training set is split into a smaller training set (50% reduction), and a validation set containing the remaining elements. The training set is used to calculate the fitness, and the parameters θ. The validation set is used to perform the early stopping and to keep the individual with the best performance in this set. The evolution stops when the best individual, on the validation set, has not been updated in a defined number of evaluations; EvoDAG sets this as 4000. The final model corresponds to the best individual, in the validation set, found during the whole evolutionary process.
In order to provide an idea of the type of models produced by EvoDAG, Figure 1 presents an example of an evolved model; one of its nodes is NB, i.e., Naive Bayes using a Gaussian distribution. The figure helps to understand the role of optimizing the k sets of parameters, one for each class, where each node outputs k values; consequently, each node is a classifier.
It is well known that in evolutionary algorithms there are runs that do not produce an acceptable result, so to improve the stability and also the accuracy we decided to use Bagging [48] in our approach. We implemented Bagging utilizing the characteristic that a bagging estimator can be expected to perform similarly by either drawing n elements from the training set with replacement or selecting n/2 elements without replacement (see [49]). In total, we create 30 models by using different seeds in the random function, and the final prediction is the average of the individual predictions.
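A sketch of this ensemble scheme, assuming a generic train_model function and a predict_proba interface (both hypothetical): 30 models are trained on n/2 points drawn without replacement with different seeds, and their predictions are averaged.

```python
# Sketch: 30-model ensemble with n/2 samples drawn without replacement per model.
import numpy as np

def bagging_predict(X_train, y_train, X_test, train_model, n_models=30):
    n = len(X_train)
    predictions = []
    for seed in range(n_models):
        rng = np.random.default_rng(seed)
        idx = rng.choice(n, size=n // 2, replace=False)   # without replacement
        model = train_model(X_train[idx], y_train[idx], seed=seed)
        predictions.append(model.predict_proba(X_test))   # assumed interface
    return np.mean(predictions, axis=0)                   # average of decision values
```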
Selection heuristics
Let us recall that in a steady-state evolution there are two stages where selection takes place: on the one hand, selection is used to choose the parents, and, on the other hand, selection is applied to decide which individual in the current population is replaced with the offspring. We analyzed the behavior of EvoDAG when different selection schemes are used: the first one uses the absolute value of the cosine similarity (sim), the second one is the accuracy (acc), and, for comparison purposes, the third is the traditional tournament selection (fit), and the fourth is random selection (rnd). Regarding the negative selection, two schemes are analyzed: the traditional negative tournament selection (fit) and random selection (rnd).
The selection heuristics proposed here complement the heuristics used in the related work. Novelty Search (NS) [11] measures novelty with a similarity between the k-nearest neighbors, GP with NS [12] uses accuracy, and the Angle-Driven GP [43] uses the relative angle between the parents and the target behavior. Our heuristic uses the angle between parents without considering the target behavior as done in Angle-Driven GP; the accuracy between parents is computed without considering the accuracy between the k-nearest neighbors as done in GP with NS.
The selection mechanism used in the first two heuristics (sim and acc) is the following. The first parent is selected at random. The remaining parents are chosen using tournament selection (tournament size equal to 2), where the fitness function is replaced with either the cosine similarity or the accuracy; the objective is to minimize the similarity between the parent being selected and the first parent. Furthermore, we analyzed this procedure in two scenarios. In the first one, it is applied only to a subset of the function set, namely {Σ_60, NB_5, MN_5, NC_2}, and random selection is applied to the rest of the functions. In the second scenario, the procedure is applied to all functions except those with one argument.
The cosine similarity between vectors u and v is defined as cos(θ) = (u · v) / (||u|| ||v||). The range of the function is [−1, 1], where −1 corresponds to 180°, 0 to 90°, and 1 to 0°. The idea of using the absolute value is to avoid, as far as possible, the inclusion of collinear parents, which are not useful for the subset of functions selected.
The second heuristic selects individuals based on the labels they predict. The similarity used is the accuracy, which normally counts the number of correct predictions between the target and a classifier; here, however, the accuracy is measured between the first parent, acting as the target, and the candidate parents. The idea is to choose parents that differ the most from the first one.
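Both heuristics can be sketched as follows (numpy only; the helpers are illustrative). The accuracy-style similarity assumes the +1/−1 encoding, so agreement is computed on the signs of the outputs.

```python
import numpy as np

def abs_cosine(u, v):
    return abs(float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def agreement(u, v):
    return float(np.mean(np.sign(u) == np.sign(v)))

def select_parents(population, n_parents, similarity, rng):
    """population: list of behaviour vectors. The first parent is random; the
    rest come from a size-2 tournament that keeps the candidate least similar
    to the first parent."""
    first = int(rng.integers(len(population)))
    chosen = [first]
    for _ in range(n_parents - 1):
        a, b = rng.integers(len(population), size=2)
        sa = similarity(population[a], population[first])
        sb = similarity(population[b], population[first])
        chosen.append(int(a if sa < sb else b))
    return chosen
```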
Experiments and Results
This section analyzes the performance of the different selection heuristics proposed and compares it with state-of-the-art classifiers. The classification problems used as benchmarks are 30 datasets taken from the UCI repository [18]. Table 1 shows the dataset information. It can be seen that the datasets are heterogeneous in terms of the number of samples, variables, and classes. Additionally, some of the classification problems are balanced and others are imbalanced; the table includes Shannon's entropy to indicate the degree of class imbalance in each problem, where 1.0 indicates a perfectly balanced problem.
The performance is measured on a test set. In the repository, some of the problems are already split into a training set and a test set; for those problems where this partition is not provided, we created the split ourselves.

In order to improve the reading of tables and figures, we use the following notation. The selection scheme used for selecting the parents is followed by the symbol "-", and then comes the abbreviation of the negative selection scheme. The abbreviations used for selecting the parents are sim, acc, fit, and rnd, which represent selection based on the absolute value of the cosine similarity, selection based on accuracy, tournament selection, and random selection, respectively. Furthermore, the superscript * indicates those systems where the proposed heuristics (sim and acc) are applied to all functions with more than one argument. In addition, the prefix "EvoDAG" is used when the system is compared with other state-of-the-art techniques.

Table 2 presents the performance, in terms of macro-F1, of EvoDAG with the different selection schemes. The systems are arranged column-wise and sorted by average rank to facilitate the reading. Each row presents the performance on a classification problem, and the best performance is shown in boldface. It can be seen that the system with the lowest average rank (lower is better) is the one with accuracy for parent selection and random negative selection (acc-rnd); this system also presents the highest average macro-F1. Comparing the performance of acc-rnd against all other selection schemes, using the Wilcoxon signed-rank test [50] with p-values adjusted by the Holm-Bonferroni method [51] to account for the multiple comparisons, a statistically significant difference (95% confidence) is observed with respect to sim-fit, sim-rnd, fit-fit, fit-rnd, acc-fit*, and acc-rnd*; interestingly, fit-fit corresponds to tournament selection with a negative tournament, as normally performed in a steady-state evolution. Additionally, it can be observed that acc-rnd is not statistically better than the system using random selection in both stages, i.e., rnd-rnd. Furthermore, rnd-rnd is in the third position by average rank and second by average macro-F1, being outperformed only by the schemes that use accuracy to select the parents.
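The statistical comparison described above can be reproduced along these lines, assuming scores is a dictionary mapping each selection scheme to its list of macro-F1 values over the 30 problems.

```python
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

def compare_against(reference, scores, alpha=0.05):
    """Wilcoxon signed-rank test of `reference` vs. every other scheme,
    with Holm-Bonferroni adjustment of the p-values."""
    others = [name for name in scores if name != reference]
    pvals = [wilcoxon(scores[reference], scores[name]).pvalue for name in others]
    reject, adjusted, _, _ = multipletests(pvals, alpha=alpha, method="holm")
    return dict(zip(others, zip(adjusted, reject)))
```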
Comparison of the different selection schemes
Comparing the average ranks of the schemes used to choose the parents, it can be seen that traditional tournament selection comes in ninth position; in fact, all of our selection heuristics have a better rank than tournament selection. On the other hand, the heuristics applied only to a subset of the function set (i.e., {Σ_60, NB_5, MN_5, NC_2}) obtained a better rank than their counterparts applied to all functions with arity greater than one; moreover, the worst systems correspond to the use of accuracy in this latter configuration. It is also observed that the systems using the absolute cosine similarity are less affected by the choice between a subset of functions and all the functions, whereas this decision affects the accuracy-based heuristic the most.

Table 2: Comparison of EvoDAG's performance using different selection schemes for selecting the parents and for the negative selection. The columns are ordered by the macro-F1 average rank. The symbol * indicates that the selection heuristic was applied to all functions with arity greater than one. The best performance in each problem is indicated in boldface.
(Table 2 column order, from best to worst average rank: acc-rnd, acc-fit, rnd-rnd, sim-fit, rnd-fit, sim-fit*, sim-rnd, sim-rnd*, fit-fit, fit-rnd, acc-fit*, acc-rnd*.)

Figure 2 shows the evolution of the best individuals found during the evolution on the training and validation sets; the agaricus-lepiota dataset is used as an example. The performance, in terms of macro-F1, of the best individual is recorded over thirty independent executions, and these values are presented as boxplots as a function of the number of evaluated individuals. It can be seen that, in all cases, the performance of the best individual on the training set is higher than that obtained on the validation set. Furthermore, it can be observed that the parent-selection scheme based on accuracy (acc) reaches slightly higher values in the first evaluations than tournament selection, which is reflected as outliers in the boxplot. This behavior continues throughout the evolution and is visible in both the training and validation sets.
Comparison of EvoDAG with other state-of-the-art classifiers
After analyzing the performance of the different selection schemes, we now compare EvoDAG, with its different selection schemes, against state-of-the-art classifiers. We decided to compare against sixteen classifiers, all of them used with their default parameters as implemented in the scikit-learn python library [16]; specifically, these classifiers are Perceptron, MLPClassifier, BernoulliNB, GaussianNB, KNeighborsClassifier, NearestCentroid, LogisticRegression, LinearSVC, SVC, SGDClassifier, PassiveAggressiveClassifier, DecisionTreeClassifier, ExtraTreesClassifier, RandomForestClassifier, AdaBoostClassifier, and GradientBoostingClassifier. Two auto-machine-learning libraries are also included in the comparison: autosklearn [17] and TPOT [9].

Figure 3 presents a boxplot of the ranks (using macro-F1 as the performance measure) of the state-of-the-art classifiers and of EvoDAG with the different selection schemes. In order to facilitate the reading, the boxplots are ordered by the average rank. It is observed from the figure that TPOT is the system with the lowest rank, followed by EvoDAG with accuracy and random negative selection, which in turn is followed by autosklearn. Comparing the performance of TPOT against the rest of the classifiers, one can see that TPOT is not statistically different from any of the EvoDAG systems nor from the classifiers that have a better average rank than LogisticRegression.
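A sketch of the kind of evaluation behind Figure 3: macro-F1 per dataset for a set of scikit-learn classifiers with default parameters, converted into per-dataset ranks. The datasets variable is assumed to be a list of (X_train, y_train, X_test, y_test) tuples; only a few classifiers are listed for brevity.

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.metrics import f1_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import ExtraTreesClassifier, GradientBoostingClassifier

def macro_f1_ranks(datasets, classifiers):
    scores = np.zeros((len(datasets), len(classifiers)))
    for i, (X_tr, y_tr, X_te, y_te) in enumerate(datasets):
        for j, make_clf in enumerate(classifiers):
            clf = make_clf().fit(X_tr, y_tr)
            scores[i, j] = f1_score(y_te, clf.predict(X_te), average="macro")
    return rankdata(-scores, axis=1)   # rank 1 = best macro-F1 on that dataset

classifiers = [LogisticRegression, ExtraTreesClassifier, GradientBoostingClassifier]
```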
As can be seen from Figure 3, only two classifiers are better than EvoDAG with random selection, namely TPOT and autosklearn; it is essential to note that both are auto-machine-learning systems. Furthermore, let us consider all the classifiers that have a better rank than EvoDAG fit-rnd, which is the EvoDAG variant in the lowest position. These are TPOT, autosklearn, GradientBoosting, and ExtraTrees; these classifiers have in common the use of decision trees at some point, that is, they are either a variant of decision trees or include them in their search space. Conversely, EvoDAG does not use any form of decision trees.
Besides measuring the performance using macro-F1, Figure 4 presents boxplots of the time required in the training phase by the different algorithms. The boxplot is on a log scale, given the differences in time between the algorithms, and uses time per sample to take into account that the training-set size varies across datasets. It is not surprising that the systems obtaining the best performance are also the slowest ones. As can be seen from the figure, TPOT is the most time-consuming system, followed by autosklearn and then the EvoDAG systems. On average, TPOT uses 38.7 seconds per sample, autosklearn requires 7.8 seconds per sample, and EvoDAG uses less than one second per sample. Looking at the EvoDAG systems, the selection schemes ordered from slowest to fastest are accuracy, absolute cosine similarity, tournament selection, and random selection. This behavior is expected given the algorithmic complexity: accuracy and cosine similarity require O(n) operations every time a parent is selected and, in addition, these systems compute the fitness to perform early stopping and the negative selection, whereas tournament and random selection require O(1) operations to complete a selection, although tournament selection still needs to create the tournament and random selection does not.
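The time-per-sample measurement can be sketched as follows; fit_fn stands for any training routine, e.g., the fit method of a scikit-learn estimator.

```python
import time

def seconds_per_sample(fit_fn, X_train, y_train):
    """Training time divided by the number of training samples."""
    start = time.perf_counter()
    fit_fn(X_train, y_train)
    return (time.perf_counter() - start) / len(y_train)
```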
One can combine the information presented in Figures 3 and 4 by performing a Pareto analysis. The classifiers on the Pareto frontier are TPOT, EvoDAG with acc-rnd, EvoDAG with rnd-rnd, GradientBoosting, ExtraTrees, and DecisionTree. From the figures, it can be inferred that the system closest to the elbow is GradientBoosting.
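A small, self-contained sketch of the Pareto analysis, where each classifier is described by its average macro-F1 rank and its training time per sample (both to be minimized).

```python
def pareto_frontier(points):
    """points: dict mapping classifier name -> (avg_rank, time_per_sample)."""
    frontier = []
    for name, (rank, time_) in points.items():
        dominated = any(
            (r <= rank and t <= time_) and (r < rank or t < time_)
            for other, (r, t) in points.items() if other != name
        )
        if not dominated:
            frontier.append(name)
    return frontier
```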
Conclusion
We presented the impact that different selection heuristics have on the performance of a steady-state semantic Genetic Programming system, namely EvoDAG. The selection process takes place at two moments during the evolution: when selecting the parents and when deciding which individual is replaced. The heuristics studied for the first stage are the absolute value of the cosine similarity, accuracy, tournament selection, and random selection; for the second stage, negative tournament selection and random selection are analyzed. The results show that our heuristics, cosine similarity and accuracy, outperform EvoDAG using tournament selection, i.e., selection based on fitness. Moreover, the heuristic that obtained the best performance was accuracy. It is interesting to note that random selection is competitive, achieving the third position among the different combinations studied.
The performance of EvoDAG with the selection heuristics is analyzed on 30 classification problems taken from the UCI repository. In addition, EvoDAG is compared with 18 state-of-the-art classifiers, 16 of them implemented in the scikit-learn python library plus two auto-machine-learning algorithms. The results show that EvoDAG using accuracy and random negative selection is competitive: by average rank (measured with macro-F1) it obtained the second position, where the best system is TPOT, an auto-machine-learning algorithm, and the third position corresponds to autosklearn. Interestingly, EvoDAG's performance is statistically equivalent to that of the two auto-machine-learning algorithms considered in this comparison, even though EvoDAG uses neither a feature-selection algorithm nor any form of decision trees, as the auto-machine-learning approaches do. We also compared the time required in the training phase of the classifiers. The auto-machine-learning algorithms were the slowest ones, followed by EvoDAG; nonetheless, the difference in time is considerable: TPOT uses, on average, more than 30 seconds per sample, autosklearn around 7, and EvoDAG less than one second per instance. | 3,325 |
1907.07066 | 2959448612 | In a steady-state evolution, tournament selection traditionally uses the fitness function to select the parents, and negative selection chooses an individual to be replaced with an offspring. This contribution focuses on analyzing the behavior, in terms of performance, of different heuristics when used instead of the fitness function in tournament selection. The heuristics analyzed are related to measuring the similarity of the individuals in the semantic space. In addition, the analysis includes random selection and traditional tournament selection. These selection functions were implemented on our Semantic Genetic Programming system, namely EvoDAG, which is inspired by the geometric genetic operators and tested on 30 classification problems with a variable number of samples, variables, and classes. The results indicated that accuracy, combined with random selection in the negative tournament, produces the best combination, and that the difference in performance between this combination and tournament selection is statistically significant. Furthermore, we compare EvoDAG's performance using the selection heuristics against 18 classifiers that include traditional approaches as well as auto-machine-learning techniques. The results indicate that our proposal is competitive with state-of-the-art classifiers. Finally, it is worth mentioning that EvoDAG is available as open source software. | Several crossover and mutation operators have been developed using semantics. Beadle and Johnson @cite_40 proposed a crossover operator that measures the semantic equivalence between parents and offspring and rejects any offspring that is semantically equivalent to its parents. Quang Uy @cite_42 proposed a semantic crossover and mutation; the crossover operator searches for a crossover point in each parent in such a way that the subtrees are semantically similar, and the mutation operator allows the replacement of a subtree only if the new subtree is semantically similar. Hara @cite_16 proposed the Semantic Control Crossover, which uses semantics to combine individuals so that a global search is performed in the first generations and a local search in the last ones. Graff used subtree semantics and partial derivatives to propose crossover @cite_18 @cite_49 and mutation @cite_36 operators. | {
"abstract": [
"There is great interest for the development of semantic genetic operators to improve the performance of genetic programming. Semantic genetic operators have traditionally been developed employing experimentally or theoretically-based approaches. Our current work proposes a novel semantic crossover developed amid the two traditional approaches. Our proposed semantic crossover operator is based on the use of the derivative of the error propagated through the tree. This process decides the crossing point of the second parent. The results show that our procedure improves the performance of genetic programming on rational symbolic regression problems.",
"There is a great interest in the Genetic Programming (GP) community to develop semantic genetic operators. Most recent approaches includes the genetic programming framework for symbolic regression called Error Space Alignment GP, the geometric semantic operators, and our previous work the semantic crossover based on the partial derivative error. To the best of our knowledge, there has not been a semantic genetic operator similar to the point mutation. In this contribution, we start filling this gap by proposing a semantic point mutation based on the derivative of the error. This novel operator complements our previous semantic crossover and, as the results show, there is an improvement in performance when this novel operator is used, and, furthermore, the best performance in our setting is the system that uses the semantic crossover and the semantic point mutation.",
"We investigate the effects of semantically-based crossover operators in genetic programming, applied to real-valued symbolic regression problems. We propose two new relations derived from the semantic distance between subtrees, known as semantic equivalence and semantic similarity. These relations are used to guide variants of the crossover operator, resulting in two new crossover operators--semantics aware crossover (SAC) and semantic similarity-based crossover (SSC). SAC, was introduced and previously studied, is added here for the purpose of comparison and analysis. SSC extends SAC by more closely controlling the semantic distance between subtrees to which crossover may be applied. The new operators were tested on some real-valued symbolic regression problems and compared with standard crossover (SC), context aware crossover (CAC), Soft Brood Selection (SBS), and No Same Mate (NSM) selection. The experimental results show on the problems examined that, with computational effort measured by the number of function node evaluations, only SSC and SBS were significantly better than SC, and SSC was often better than SBS. Further experiments were also conducted to analyse the perfomance sensitivity to the parameter settings for SSC. This analysis leads to a conclusion that SSC is more constructive and has higher locality than SAC, NSM and SC; we believe these are the main reasons for the improved performance of SSC.",
"Crossover forms one of the core operations in genetic programming and has been the subject of many different investigations. We present a novel technique, based on semantic analysis of programs, which forces each crossover to make candidate programs take a new step in the behavioural search space. We demonstrate how this technique results in better performance and smaller solutions in two separate genetic programming experiments.",
"",
"Genetic Programming (GP) is an evolutionary method for generating tree structural programs. Normal subtree crossover in GP randomly selects a crossover point in each parental tree, and offspring are created by exchanging the selected subtrees. In the normal crossover, it is difficult to control the global and local search because the similarity between the subtrees is not considered. In this paper, we propose a new crossover operation based on the semantic distance between the subtrees. We call this operation Semantic Control Crossover. By using the Semantic Control Crossover, the global search can be performed in the early stage of search, and the search property can be shifted to the local search as the search proceeds. As the results of experiments, the Semantic Control Crossover showed better performance than the conventional crossover."
],
"cite_N": [
"@cite_18",
"@cite_36",
"@cite_42",
"@cite_40",
"@cite_49",
"@cite_16"
],
"mid": [
"66126747",
"1966140552",
"2061011797",
"2164305954",
"",
"1998837010"
]
} | Selection Heuristics on Semantic Genetic Programming for Classification Problems | Classification is a supervised learning problem that consists in finding a function that learns a relation between inputs and outputs, where the outputs are a set of labels. The starting point is the training set composed of input-output pairs, i.e., X = {(x_1, y_1), . . . , (x_n, y_n)}. The training set is used to find a function h that minimizes a loss function ℓ, that is, h is the function that minimizes Σ_{(x,y)∈X} ℓ(h(x), y), where the ideal scenario would be ∀(x,y)∈X h(x) = y; the function should also accurately predict the labels of unseen inputs. By fixing, a priori, an order on X, one can use the notation normally adopted in Semantic Genetic Programming (GP) (e.g., [1]), which is to represent the target behavior as y = (y_1, . . . , y_n) and the behavior of function h as h = (h(x_1), . . . , h(x_n)). Using this notation, the sought function h is the one whose h is as close as possible to y, where the closeness is measured using the loss function, referred to in GP as the fitness function.
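The notation can be made concrete with a tiny sketch: the behavior of h on the fixed ordering of the training set and a distance-based fitness (here the negative Euclidean distance, an illustrative choice rather than the paper's exact loss).

```python
import numpy as np

def behaviour(h, X):
    """Behaviour vector of h over the (ordered) training inputs."""
    return np.array([h(x) for x in X])

def fitness(h_vec, y_vec):
    """Closeness to the target behaviour; larger is better."""
    return -np.linalg.norm(h_vec - y_vec)
```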
EvoDAG
EvoDAG 1 [13, 14] is a Python library that implements a steady-state GP system with tournament selection. EvoDAG is inspired by the implementation of GSGP by Castelli et al. [47], whose main idea is to keep track of all the individuals and their behavior, leading to an efficient evaluation of the offspring whose complexity depends only on the number of fitness cases.
Let us recall that, in the geometric semantic crossover, the offspring is o = r p_1 + (1 − r) p_2, where r is a random function or a constant. In [33], we extended this operation by allowing the offspring to be a linear combination of the parents, that is, o = θ_1 p_1 + θ_2 p_2, where θ_1 and θ_2 are obtained using ordinary least squares (OLS). Continuing with this line of research, in [13] we investigated the case in which the offspring is a linear combination of more than two parents and also included the possibility that the parents are combined using a function randomly selected from the function set.
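The two recombinations can be contrasted in a short numpy sketch: the geometric semantic crossover with a random constant r, and the OLS-fitted linear combination of an arbitrary number of parents.

```python
import numpy as np

def gsgp_crossover(p1, p2, rng):
    """Classic geometric semantic crossover with a random constant r in [0, 1)."""
    r = rng.random()
    return r * p1 + (1.0 - r) * p2

def ols_crossover(parents, y):
    """Offspring as an OLS-fitted linear combination of the parents' behaviours."""
    P = np.column_stack(parents)             # one column per parent behaviour
    theta, *_ = np.linalg.lstsq(P, y, rcond=None)
    return P @ theta
```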
EvoDAG, as customary, uses a function set, F = {Σ_60, Π_20, max_5, min_5, √·, |·|, sin, tan, atan, tanh, hypot, NB_5, MN_5, NC_2}, and a terminal set, T = {x_1, . . . , x_m}, to create the individuals. The functions in the function set are traditional operations, where the subscript indicates the number of arguments. F also includes classifiers, namely Naive Bayes with a Gaussian distribution (NB_5), Naive Bayes with a Multinomial distribution (MN_5), and Nearest Centroid (NC_2).
The initial population starts with P = {θ_1 x_1, . . . , θ_m x_m, NB(x_1, . . . , x_m), MN(x_1, . . . , x_m), NC(x_1, . . . , x_m)}, where x_i is the i-th input and θ_i is obtained using OLS. In the case that |P| is lower than the population size, the process continues by including individuals created by randomly selecting a function from F whose arguments are drawn from the current population P. For example, let hypot be the selected function, and let the first and second arguments be θ_2 x_2 and NB(x_1, . . . , x_m); then the individual inserted into P is θ hypot(θ_2 x_2, NB(x_1, . . . , x_m)), where θ is obtained using OLS. This process continues until the population size is reached; EvoDAG sets the population size to 4000.
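A rough sketch (not EvoDAG's API) of this initial population: one OLS-scaled node per input variable plus a classifier node; only the Gaussian Naive Bayes node is shown, using its per-class log-likelihoods as the node output.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def initial_population(X, y_ovr, labels):
    """X: (n, m) inputs; y_ovr: (n, k) one-vs-rest targets; labels: (n,) classes."""
    population = []
    for i in range(X.shape[1]):
        xi = X[:, [i]]
        theta, *_ = np.linalg.lstsq(xi, y_ovr, rcond=None)  # one theta per class
        population.append(xi @ theta)                       # node behaviour, shape (n, k)
    nb = GaussianNB().fit(X, labels)
    population.append(nb.predict_log_proba(X))              # classifier node: k outputs
    return population
```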
| 3,325 |
1907.07066 | 2959448612 | In a steady-state evolution, tournament selection traditionally uses the fitness function to select the parents, and negative selection chooses an individual to be replaced with an offspring. This contribution focuses on analyzing the behavior, in terms of performance, of different heuristics when used instead of the fitness function in tournament selection. The heuristics analyzed are related to measuring the similarity of the individuals in the semantic space. In addition, the analysis includes random selection and traditional tournament selection. These selection functions were implemented on our Semantic Genetic Programming system, namely EvoDAG, which is inspired by the geometric genetic operators and tested on 30 classification problems with a variable number of samples, variables, and classes. The results indicated that accuracy, combined with random selection in the negative tournament, produces the best combination, and that the difference in performance between this combination and tournament selection is statistically significant. Furthermore, we compare EvoDAG's performance using the selection heuristics against 18 classifiers that include traditional approaches as well as auto-machine-learning techniques. The results indicate that our proposal is competitive with state-of-the-art classifiers. Finally, it is worth mentioning that EvoDAG is available as open source software. | Moraglio @cite_21 @cite_27 proposed Geometric Semantic Genetic Programming (GSGP). Their work attracted the attention of the GP scientific community because the crossover operator produces an offspring that lies on the segment joining the parents' semantics; therefore, the offspring's fitness cannot be worse than the worst fitness of the parents. Given two parents @math and @math , the crossover operator generates an offspring as @math , where @math is a real value between @math and @math . This property transforms the fitness landscape into a cone. Unfortunately, the offspring is always bigger than the sum of the sizes of its parents, which makes the operator unusable in practice. Later, several operators appeared intending to improve Moraglio's GSGP, for example, Approximately Geometric Semantic Crossover (SX) @cite_10 , Deterministic Geometric Semantic Crossover @cite_16 , Locally Geometric Crossover (LGX) @cite_7 @cite_2 , Approximated Geometric Crossover (AGX) @cite_8 , Semantic Crossover and Mutation based on projections @cite_24 @cite_19 , and Subtree Semantic Geometric Crossover (SSGX) @cite_47 . | {
"abstract": [
"We propose Locally Geometric Crossover (LGX) for genetic programming. For a pair of homologous loci in the parent solutions, LGX finds a semantically intermediate procedure from a previously prepared library, and uses it as replacement code. The experiments involving six symbolic regression problems show significant increase in search performance when compared to standard subtree-swapping cross-over and other control methods. This suggests that semantically geometric manipulations on subprograms propagate to entire programs and improve their fitness.",
"In genetic programming, a search algorithm is expected to produce a program that achieves the desired final computation state (desired output). To reach that state, an executing program needs to traverse certain intermediate computation states. An evolutionary search process is expected to autonomously discover such states. This can be difficult for nontrivial tasks that require long programs to be solved. The semantic backpropagation algorithm proposed in this paper heuristically inverts the execution of evolving programs to determine the desired intermediate computation states. Two search operators, random desired operator and approximately geometric semantic crossover, use the intermediate states determined by semantic backpropagation to define subtasks of the original programming task, which are then solved using an exhaustive search. The operators outperform the standard genetic search operators and other semantic-aware operators when compared on a suite of symbolic regression and Boolean benchmarks. This result and additional analysis conducted in this paper indicate that semantic backpropagation helps evolution to identify the desired intermediate computation states and makes the search process more efficient.",
"In this paper we give a representation-independent topological defi- nition of crossover that links it tightly to the notion of fitness landscape. Building around this definition, a geometric topological framework for evolu- tionary algorithms is introduced that clarifies the connection between represen- tation, genetic operators, neighbourhood structure and distance in the land- scape. Traditional genetic operators for binary strings are shown to fit the framework. The advantages of this interpretation are discussed.",
"In the Genetic Programming (GP) community there has been a great interest in developing semantic genetic operators. These type of operators use information of the phenotype to create ospring. The most recent approaches of semantic GP include the GP framework based on the alignment of error space, the geometric semantic genetic operators, and backpropagation genetic operators. Our contribution proposes two semantic operators based on projections in the phenotype space. The proposed operators have the characteristic, by construction, that the ospring's tness is as at least as good as the tness of the best parent; using as tness the euclidean distance. The semantic operators proposed increment the learning capabilities of GP. These operators are compared against a traditional GP and Geometric Semantic GP in the Human oral bioavailability regression problem and 13 classication problems. The results show that a GP system with our novel semantic operators has the best performance in the training phase in all the problems tested.",
"",
"",
"This study presents an extensive account of Locally Geometric Semantic Crossover (LGX), a semantically-aware recombination operator for genetic programming (GP). LGX is designed to exploit the semantic properties of programs and subprograms, in particular the geometry of semantic space that results from distance-based fitness functions used predominantly in GP. When applied to a pair of parents, LGX picks in them at random a structurally common (homologous) locus, calculates the semantics of subprograms located at that locus, finds a procedure that is semantically medial with respect to these subprograms, and replaces them with that procedure. The library of procedures is prepared prior to the evolutionary run and indexed by a multidimensional structure (kd-tree) allowing for efficient search. The paper presents the rationale for LGX design and an extensive computational experiment concerning performance, computational cost, impact on program size, and capability of generalization. LGX is compared with six other operators, including conventional tree-swapping crossover, semantic-aware operators proposed in previous studies, and control methods designed to verify the importance of homology and geometry of the semantic space. The overall conclusion is that LGX, thanks to combination of the semantically medial operation with homology, improves the efficiency of evolutionary search, lowers the variance of performance, and tends to be more resistant to overfitting.",
"The semantic geometric crossover (SGX) proposed by has achieved very promising results and received great attention from researchers, but has a significant disadvantage in the exponential growth in size of the solutions. We propose a crossover operator named subtree semantic geometric crossover (SSGX), with the aim of addressing this issue. It is similar to SGX but uses subtree semantic similarity to approximate the geometric property. We compare SSGX to standard crossover (SC), to SGX, and to other recent semantic-based crossover operators, testing on several symbolic regression problems. Overall our new operator out-performs the other operators on test data performance, and reduces computational time relative to most of them. Further analysis shows that while SGX is rather exploitative, and SC rather explorative, SSGX achieves a balance between the two. A simple method of further enhancing SSGX performance is also demonstrated.",
"Genetic Programming (GP) is an evolutionary method for generating tree structural programs. Normal subtree crossover in GP randomly selects a crossover point in each parental tree, and offspring are created by exchanging the selected subtrees. In the normal crossover, it is difficult to control the global and local search because the similarity between the subtrees is not considered. In this paper, we propose a new crossover operation based on the semantic distance between the subtrees. We call this operation Semantic Control Crossover. By using the Semantic Control Crossover, the global search can be performed in the early stage of search, and the search property can be shifted to the local search as the search proceeds. As the results of experiments, the Semantic Control Crossover showed better performance than the conventional crossover.",
"We propose a crossover operator that works with genetic programming trees and is approximately geometric crossover in the semantic space. By defining semantic as program's evaluation profile with respect to a set of fitness cases and constraining to a specific class of metric-based fitness functions, we cause the fitness landscape in the semantic space to have perfect fitness-distance correlation. The proposed approximately geometric semantic crossover exploits this property of the semantic fitness landscape by an appropriate sampling. We demonstrate also how the proposed method may be conveniently combined with hill climbing. We discuss the properties of the methods, and describe an extensive computational experiment concerning logical function synthesis and symbolic regression."
],
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_21",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_2",
"@cite_47",
"@cite_16",
"@cite_10"
],
"mid": [
"2157723426",
"2076402663",
"1779560570",
"2295612807",
"",
"",
"1985901961",
"2294762224",
"1998837010",
"2072918701"
]
} | Selection Heuristics on Semantic Genetic Programming for Classification Problems | Classification is a supervised learning problem that consists in finding a function that learns a relation between inputs and outputs, where the outputs are a set of labels. The starting point would be the training set composed of input-output pairs, i.e., X = {( x 1 , y 1 ), . . . , ( x n , y n )}. The training set is used to find a function, h, that minimize a loss function, , that is, h is the function that minimize (x,y)∈X (h(x), y) where the ideal scenario would be ∀ (x,y)∈X h(x) = y, and, also to accurately predict the labels of unseen inputs. By fixing, a priori, an order to X , one can use a notation normally adopted on Semantic Genetic Programming (GP) (e.g., [1]) which is to represented the target behavior as y = (y 1 , . . . , y n ), and, the behavior of function h as h = (h( v 1 ), . . . , h( v n )). Using this notation the search function h is the one whose h is as close as possible to y, where the closeness is measured using the loss function referred, in GP, as the fitness function.
EvoDAG
EvoDAG 1 [13,14] is a python library that implements a steady-state GP system with tournament selection. EvoDAG is inspired by the implementation of GSGP performed by Castelli et al. [47]; where the main idea is to keep track of all the individuals and their behavior leading to an efficient evaluation of the offspring whose complexity depends only on the number of fitness cases.
Let us recall that the offspring, in the geometric semantic crossover, is o = r p 1 +(1−r) p 2 where r is a random function or a constant. In [33], we decided to extend this operation by allowing the offspring to be a linear combination of the parents, that is, o = θ 1 p 1 + θ 2 p 2 , where θ 1 and θ 2 are obtained using ordinary least squares (OLS). Continuing with this line of research, in [13], we investigate the case when the offspring is a linear combination of more than two parents, and, also, to include the possibility that the parents could be combined using a function randomly selected from the function set.
EvoDAG, as customary, uses a function set, F = { 60 , 20 , max 5 , min 5 , √ ·, | · |, sin, tan, atan, tanh, hypot, NB 5 , MN 5 , NC 2 }, and a terminal set, T = {x 1 , . . . , x m }, to create the individuals. The functions, in the function set, are traditional operations where the subscript indicates the number of arguments. It is also included in F classifiers such as Naive Bayes with Gaussian distribution (NB 5 ), with Multinomial distribution (MN 5 ) and Nearest Centroid (NC 2 ).
The initial population starts with
P = {θ 1 x 1 , . . . , θ m x m , NB(x 1 , . . . , x m ), MN(x 1 , . . . , x m ), NC(x 1 , . . . , x m )}, where x i is the i-th input,
and θ i is obtained using OLS. In the case | P | is lower than the population size, the process starts including an individual created by randomly selecting a function from F and the arguments are drawn from the current population P. For example, let hypot be the selected function, and the first and second arguments are θ 2 x 2 , and NB(x 1 , . . . , x m ). Then the individual inserted to P is θ hypot(θ 2 x 2 , NB(x 1 , . . . , x m )), where θ is obtained using OLS. This process continues until the population size is reached; EvoDAG sets population size of 4000.
EvoDAG uses a steady-state evolution; consequently, P is updated by replacing a current individual, selected using a negative selection, with an offspring which can be selected as a parent just after being inserted in P. The evolution process is similar to the one used to create the initial population, and the difference is on the procedure used to select the arguments. That is, function f is selected from F, the arguments are selected from P using tournament selection or any of the heuristics analyzed here, and finally, the parameters associated to f are optimized using either OLS or the procedure used by the classifiers. The addition is defined as i θ i x i , where x i is an individual in P. The rest of the arithmetic functions, trigonometric functions, min and max are defined as θf (x 1 , . . .) where f is the function at hand, and x 1 is an individual in P. The process continues until the stopping criteria are met.
At this point, it is worth to mention that EvoDAG uses one-vs-rest scheme on classification problems. That is, a problem with k different classes is converted into k problems each one assigns 1 to the current class and −1 to the other labels. Instead of evolving one tree per problem, as done, for example, in [45], we decided to use only one tree an optimize k different θ parameters, one for each class. The result is that each node outputs k values, and the class is the one with the highest value. In case of the classifiers used the output is the log-likelihood.
EvoDAG stops the evolutionary process using early stopping. That is, the training set is split into a smaller training set (50% reduction), and a validation set containing the remaining elements. The training set is used to calculate the fitness, and the parameters θ. The validation set is used to perform the early stopping and to keep the individual with the best performance in this set. The evolution stops when the best individual, on the validation set, has not been updated in a defined number of evaluations; EvoDAG sets this as 4000. The final model corresponds to the best individual, in the validation set, found during the whole evolutionary process.
In order to provide an idea of the type of models produced by EvoDAG, Figure 1 i.e., Naive Bayes using Gaussian distribution. The figure helps to understand the role of optimizing the k set of parameters, one for each class, where each node outputs k values; consequently, each node is a classifier.
It is well known that in evolutionary algorithms, there are runs that do not produce an accept-able result, so to improve the stability and also the accuracy we decided to use Bagging [48] in our approach. We implemented Bagging utilizing the characteristic that a bagging estimator can be expected to perform similarly by either drawing n elements from the training set with-replacement or selecting n 2 elements without-replacement (see [49]). In total, we create 30 models by using different seeds in the random function, and the final prediction is the average of the individual predictions.
Selection heuristics
Let us recall that in a steady-state evolution there are two stages where selection takes place, on the one hand, the selection is used to choose the parents, and on the other hand, the selection is applied to decide which individual, in the current population, is replaced with the offspring. We analyzed the behavior of EvoDAG when different selection schemes are used; the first one uses the absolute of the cosine similarity (sim), the second one is the accuracy (acc), and for comparison purposes, the third is the traditional tournament selection (fit), and the fourth is a random selection (rnd). Regarding the negative selection, it is analyzed two schemes, the traditional negative tournament selection (fit), and random selection (rnd).
The selection heuristics proposed here complement the heuristics used in the related work. Novelty Search (NS) [11] measures novelty with a similarity computed over the k-nearest neighbors, GP with NS [12] uses accuracy, and Angle-Driven GP [43] uses the relative angle between the parents and the target behavior. Our first heuristic uses the angle between parents but, unlike Angle-Driven GP, does not consider the target behavior; our second computes the accuracy between parents but, unlike GP with NS, does not involve the k-nearest neighbors.
The selection mechanism used in the first two heuristics (sim and acc) is the following. The first parent is selected at random. The remaining parents are chosen using tournament selection (tournament size 2) where the fitness function is replaced with either the cosine similarity or the accuracy. The objective is to minimize the similarity between the parent being selected and the first parent. Furthermore, we analyzed this procedure in two scenarios: in the first one, it is applied only to a subset of the function set, namely {Σ_60, NB_5, MN_5, NC_2}, and random selection is used for the rest of the functions. The second scenario applies this procedure to all the functions except those with one argument.
The cosine similarity between vectors u and v is defined as cos(θ) = (u · v) / (||u|| ||v||); the range of the function is [−1, 1], where −1 corresponds to 180°, 0 to 90°, and 1 to 0°. The idea of using the absolute value is to avoid, as much as possible, the inclusion of collinear parents, which are not useful for the subset of functions selected.
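For reference, the quantity used by the sim heuristic is the absolute value of this formula; in NumPy:

```python
import numpy as np

def abs_cosine_similarity(u, v):
    # |cos(theta)| between two semantic vectors; values near 1 indicate (anti-)collinear behaviors.
    return abs(float(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v)))
```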
The second heuristic selects individuals based on the labels they predict. The similarity used is the accuracy, which counts the number of matching predictions between the target and the classifier; however, here the accuracy is measured between the first parent, acting as the target, and the rest of the parents being selected. The idea is to choose those parents that present a more significant difference from the first one.
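Both heuristics fit the same template: pick the first parent at random and then run size-2 tournaments that minimize a pluggable similarity. The sketch below assumes each individual exposes a `semantics` vector; passing `abs_cosine_similarity` gives sim and `label_agreement` gives acc (names are illustrative, not EvoDAG's API).

```python
import numpy as np

def label_agreement(u, v):
    # 'Accuracy' between two parents: fraction of samples where their predicted labels coincide.
    return float(np.mean(np.sign(u) == np.sign(v)))

def select_parents(pop, n_parents, similarity, rng):
    # First parent at random; the remaining ones via tournaments of size 2 that
    # minimize `similarity` to the first parent, so the arguments behave differently.
    first = pop[rng.integers(len(pop))]
    chosen = [first]
    for _ in range(n_parents - 1):
        a = pop[rng.integers(len(pop))]
        b = pop[rng.integers(len(pop))]
        sa = similarity(first.semantics, a.semantics)
        sb = similarity(first.semantics, b.semantics)
        chosen.append(a if sa <= sb else b)
    return chosen
```

Random selection for the remaining function types corresponds to skipping the tournament altogether.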
Experiments and Results
This section analyzes the performance of the different selection heuristics proposed and compares them with state-of-the-art classifiers. The classification problems used as benchmarks are 30 datasets taken from the UCI repository [18]. Table 1 shows the dataset information. It can be seen that the datasets are heterogeneous in terms of the number of samples, variables, and classes. Additionally, some of the classification problems are balanced and others are imbalanced. The table includes Shannon's entropy to indicate the degree of class imbalance in each problem, where 1.0 indicates a perfectly balanced problem.
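The balance indicator can be reproduced as the Shannon entropy of the label distribution normalized by log k, so that 1.0 corresponds to equally frequent classes (a small sketch, assuming a 1-D array of labels):

```python
import numpy as np

def normalized_class_entropy(labels):
    # Shannon entropy of the class distribution divided by log(k); 1.0 = perfectly balanced.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(len(p))) if len(p) > 1 else 0.0
```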
The performance is measured on a test set; in the repository, some of the problems are already split into a training set and a test set. For the problems where this partition is not provided, we created the split ourselves. In order to improve the reading of tables and figures, we use the following notation. The selection scheme used for selecting the parents is followed by the symbol "-", and then comes the abbreviation of the negative selection scheme. The abbreviations used for selecting parents are sim, acc, fit, and rnd, which represent selection based on the absolute value of the cosine similarity, selection based on accuracy, tournament selection, and random selection, respectively. Furthermore, the superscript * indicates the systems where the proposed heuristics (sim and acc) are used in all the functions with more than one argument. In addition, the prefix "EvoDAG" is used when it is compared with other state-of-the-art techniques.
Table 2 presents the performance, in terms of macro-F1, of EvoDAG with different selection schemes. The systems are arranged column-wise and sorted by the average rank to facilitate the reading. Each row presents the performance on a classification problem, and the best performance is in boldface. It can be seen that the system with the lowest average rank (lower is better) is the one using accuracy for parent selection and random negative selection (acc-rnd); this system also presents the highest average macro-F1. Comparing the performance of acc-rnd against all the other selection schemes (using the Wilcoxon signed-rank test [50] and adjusting the p-values with the Holm-Bonferroni method [51] to account for the multiple comparisons), a statistically significant (95% confidence) difference is observed with sim-fit, sim-rnd, fit-fit, fit-rnd, acc-fit*, and acc-rnd*; interestingly, fit-fit corresponds to tournament selection with a negative tournament, as normally performed in a steady-state evolution. Additionally, it can be observed that acc-rnd is not statistically better than the system using random selection in both stages, i.e., rnd-rnd. Furthermore, rnd-rnd is in third position based on the average rank and second based on the average macro-F1, being outperformed only by the systems that use accuracy to select the parents.
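The statistical comparison described above can be reproduced with SciPy and statsmodels; the sketch below assumes a dict `scores` mapping each system name to its 30 per-dataset macro-F1 values (hypothetical variable names).

```python
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

def compare_against_best(scores, best="acc-rnd", alpha=0.05):
    # Wilcoxon signed-rank test of the best system against every other one,
    # with Holm-Bonferroni correction for the multiple comparisons.
    others = [name for name in scores if name != best]
    pvals = [wilcoxon(scores[best], scores[name]).pvalue for name in others]
    reject, adjusted, *_ = multipletests(pvals, alpha=alpha, method="holm")
    return {name: (p, r) for name, p, r in zip(others, adjusted, reject)}
```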
Comparison of the different selection schemes
Comparing the average rank of the selection schemes used to choose the parents, it can be seen that the traditional tournament selection comes in ninth position; in addition, all of our selection heuristics have a better rank than tournament selection. On the other hand, the heuristics applied only to a subset of the function set (i.e., {Σ_60, NB_5, MN_5, NC_2}) obtained a better rank than the counterpart systems that apply them to all functions with arity greater than one; moreover, the worst systems correspond to the use of accuracy in this latter configuration. It is also observed that the systems using the absolute cosine similarity are less affected by the choice between a subset of functions and all the functions, whereas this decision affects the use of accuracy the most. Table 2: Comparison of EvoDAG's performance using different selection schemes for selecting the parents and the negative selection. The columns are ordered based on the macro-F1 average rank. The symbol * indicates that the selection heuristic was applied to all functions with arity greater than one. The best performance in each problem is indicated in boldface.
(Table 2 columns, ordered by average rank: acc-rnd, acc-fit, rnd-rnd, sim-fit, rnd-fit, sim-fit*, sim-rnd, sim-rnd*, fit-fit, fit-rnd, acc-fit*, acc-rnd*.)
Figure 2 shows the evolution of the best individuals found, during the evolution, on the training and validation sets. We use the agaricus-lepiota dataset as an example. The performance, in terms of macro-F1, of the best individual is recorded during the evolution of thirty independent executions, and these are presented as boxplots as a function of the number of evaluated individuals. It can be seen that, in all cases, the performance of the best individual on the training set is higher than the one obtained on the validation set. Furthermore, it can be observed that the parent-selection scheme based on accuracy (acc) reaches slightly higher values in the first evaluations than tournament selection, which is reflected as outliers in the boxplot. This continues during the whole evolution and is reflected in both the training and validation sets.
Comparison of EvoDAG with other state-of-the-art classifiers
After analyzing the performance of the different selection schemes, we now compare EvoDAG, with its different selection schemes, against state-of-the-art classifiers. We decided to compare against sixteen classifiers, all of them using their default parameters and implemented in the scikit-learn python library [16]; specifically, these classifiers are Perceptron, MLPClassifier, BernoulliNB, GaussianNB, KNeighborsClassifier, NearestCentroid, LogisticRegression, LinearSVC, SVC, SGDClassifier, PassiveAggressiveClassifier, DecisionTreeClassifier, ExtraTreesClassifier, RandomForestClassifier, AdaBoostClassifier, and GradientBoostingClassifier. Two auto-machine-learning libraries are also included in the comparison: autosklearn [17] and TPOT [9]. Figure 3 presents a boxplot of the ranks (using macro-F1 as the performance measure) of the state-of-the-art classifiers and EvoDAG with the different selection schemes. In order to facilitate the reading, the boxplots are ordered by the average rank. It is observed from the figure that TPOT is the system with the lowest rank, followed by EvoDAG with accuracy and random negative selection, which in turn is followed by autosklearn. Comparing the performance of TPOT against the rest of the classifiers, one can see that TPOT is not statistically different from any of the EvoDAG systems nor from the classifiers that have a better average rank than LogisticRegression.
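A condensed version of this benchmark (default-parameter scikit-learn classifiers, macro-F1 on the test partition, average ranks across datasets) might look as follows; the classifier list is truncated and the dataset iterable is a placeholder.

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.metrics import f1_score
from sklearn.ensemble import GradientBoostingClassifier, ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression

CLASSIFIERS = {"GradientBoosting": GradientBoostingClassifier,
               "ExtraTrees": ExtraTreesClassifier,
               "LogisticRegression": LogisticRegression}   # subset of the sixteen used in the paper

def macro_f1_table(datasets):
    # datasets: iterable of (X_train, y_train, X_test, y_test) tuples.
    table = []
    for X_tr, y_tr, X_te, y_te in datasets:
        row = {}
        for name, cls in CLASSIFIERS.items():
            model = cls().fit(X_tr, y_tr)                   # default parameters, as in the comparison
            row[name] = f1_score(y_te, model.predict(X_te), average="macro")
        table.append(row)
    return table

def average_ranks(table):
    # Rank 1 = best macro-F1 on a dataset; average the ranks over all datasets.
    names = list(table[0])
    ranks = np.array([rankdata([-row[n] for n in names]) for row in table])
    return dict(zip(names, ranks.mean(axis=0)))
```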
As can be seen from Figure 3, only two classifiers are better than EvoDAG with random selection; these are TPOT and autosklearn, and it is essential to note that both are auto-machine-learning classifiers. Furthermore, let us consider all the classifiers that have a better rank than EvoDAG fit-rnd, which corresponds to the EvoDAG variant with the lowest position. These are TPOT, autosklearn, GradientBoosting, and ExtraTrees; these classifiers have in common the use of decision trees at some point, that is, they are either a variant of decision trees or include them in their search space. Conversely, EvoDAG does not use any form of decision trees.
Besides measuring the performance using macro-F1, Figure 4 presents boxplots of the time required by the different algorithms in the training phase. The boxplot is on a log scale, given the differences in time between the algorithms, and uses time per sample to take into account that the training set size varies across datasets. It is not surprising that the systems obtaining the best performance are also the slowest ones. As can be seen from the figure, TPOT is the most time-consuming system, followed by autosklearn, and then the EvoDAG systems. On average, TPOT uses 38.7 seconds per sample, autosklearn requires 7.8 seconds per sample, and EvoDAG uses less than one second per sample. Looking at the EvoDAG systems, it can be observed that the selection schemes ordered from slowest to fastest are accuracy, absolute cosine similarity, tournament selection, and random selection. This behavior is expected given the algorithmic complexity: accuracy and cosine similarity require O(n) operations every time a parent is selected and, in addition, these systems compute the fitness to perform early stopping and the negative selection; on the other hand, tournament and random selection require O(1) operations to complete a selection, although tournament selection needs to create the tournament while random selection does not.
One can combine the information presented in Figures 3 and 4 by performing a Pareto analysis. The classifiers on the Pareto frontier are TPOT, EvoDAG with acc-rnd, EvoDAG with rnd-rnd, GradientBoosting, ExtraTrees, and DecisionTree. From the figures, it can be inferred that the system closest to the elbow is GradientBoosting.
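For completeness, the Pareto frontier over (training time per sample, average rank) pairs, with both criteria minimized, can be computed with a short helper (illustrative, not tied to the paper's code):

```python
def pareto_frontier(systems):
    # systems: dict name -> (time_per_sample, average_rank); both criteria are minimized.
    # Returns the names that are not dominated by any other system.
    frontier = []
    for name, (t, r) in systems.items():
        dominated = any(t2 <= t and r2 <= r and (t2 < t or r2 < r)
                        for other, (t2, r2) in systems.items() if other != name)
        if not dominated:
            frontier.append(name)
    return frontier
```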
Conclusion
We presented the impact that different selection heuristics have on the performance of a steady-state semantic Genetic Programming system (namely EvoDAG). The selection process takes place at two moments during the evolution: when selecting the parents and when choosing an individual to replace. The selection heuristics studied for the first case are the absolute value of the cosine similarity, accuracy, tournament selection, and random selection; for the second case, negative tournament selection and random selection are analyzed. The results show that the use of our heuristics, cosine similarity and accuracy, outperforms EvoDAG using tournament selection, i.e., selection based on fitness. Moreover, the heuristic that obtained the best performance was accuracy. It is interesting to note that random selection is competitive, achieving the third position among the different combinations studied.
The performance of EvoDAG with the selection heuristics is analyzed on 30 classification problems taken from the UCI repository. Also, EvoDAG is compared with 18 state-of-the-art classifiers: 16 of them are implemented in the scikit-learn python library, and two are auto-machine-learning algorithms. The results show that EvoDAG using accuracy and random negative selection is competitive; based on the average rank (measured with macro-F1), it obtained the second position, where the best system is TPOT, an auto-machine-learning algorithm, and the third position corresponds to autosklearn. Interestingly, EvoDAG's performance is statistically equivalent to that of the two auto-machine-learning algorithms considered in this comparison. However, EvoDAG uses neither a feature selection algorithm nor any form of decision trees, as the auto-machine-learning approaches do. We also include in the comparison the time required in the training phase of the classifiers. The auto-machine-learning algorithms were the slowest ones, followed by EvoDAG. Nonetheless, the difference in time is considerable; TPOT uses, on average, more than 30 seconds per sample, autosklearn around 7, and EvoDAG less than one second per instance. | 3,325
1907.07066 | 2959448612 | In a steady-state evolution, tournament selection traditionally uses the fitness function to select the parents, and negative selection chooses an individual to be replaced with an offspring. This contribution focuses on analyzing the behavior, in terms of performance, of different heuristics when used instead of the fitness function in tournament selection. The heuristics analyzed are related to measuring the similarity of the individuals in the semantic space. In addition, the analysis includes random selection and traditional tournament selection. These selection functions were implemented in our Semantic Genetic Programming system, namely EvoDAG, which is inspired by the geometric genetic operators, and tested on 30 classification problems with a variable number of samples, variables, and classes. The results indicate that using accuracy for parent selection together with random selection in the negative tournament is the best combination, and the difference in performance between this combination and tournament selection is statistically significant. Furthermore, we compare EvoDAG's performance using the selection heuristics against 18 classifiers that include traditional approaches as well as auto-machine-learning techniques. The results indicate that our proposal is competitive with state-of-the-art classifiers. Finally, it is worth mentioning that EvoDAG is available as open-source software. | Pawlak @cite_8 proposed the Random Desired Operator (RDO). It back-propagates the target semantics to calculate the desired semantics at the node selected as the mutation point. This desired behavior is then used to search a library of procedures for the most similar subtree, which finally replaces the mutated node. RDO was extended by Szubert @cite_0 with the Forward Propagation Mutation (FPM), which uses a combination of forward and backward propagation to find a combination of unary and binary functions that is most similar to the desired behavior. | {
"abstract": [
"In recent years, a number of methods have been proposed that attempt to improve the performance of genetic programming by exploiting information about program semantics. One of the most important developments in this area is semantic backpropagation. The key idea of this method is to decompose a program into two parts—a subprogram and a context—and calculate the desired semantics of the subprogram that would make the entire program correct, assuming that the context remains unchanged. In this paper we introduce Forward Propagation Mutation, a novel operator that relies on the opposite assumption—instead of preserving the context, it retains the subprogram and attempts to place it in the semantically right context. We empirically compare the performance of semantic backpropagation and forward propagation operators on a set of symbolic regression benchmarks. The experimental results demonstrate that semantic forward propagation produces smaller programs that achieve significantly higher generalization performance.",
"In genetic programming, a search algorithm is expected to produce a program that achieves the desired final computation state (desired output). To reach that state, an executing program needs to traverse certain intermediate computation states. An evolutionary search process is expected to autonomously discover such states. This can be difficult for nontrivial tasks that require long programs to be solved. The semantic backpropagation algorithm proposed in this paper heuristically inverts the execution of evolving programs to determine the desired intermediate computation states. Two search operators, random desired operator and approximately geometric semantic crossover, use the intermediate states determined by semantic backpropagation to define subtasks of the original programming task, which are then solved using an exhaustive search. The operators outperform the standard genetic search operators and other semantic-aware operators when compared on a suite of symbolic regression and Boolean benchmarks. This result and additional analysis conducted in this paper indicate that semantic backpropagation helps evolution to identify the desired intermediate computation states and makes the search process more efficient."
],
"cite_N": [
"@cite_0",
"@cite_8"
],
"mid": [
"2508930158",
"2076402663"
]
} | Selection Heuristics on Semantic Genetic Programming for Classification Problems | Classification is a supervised learning problem that consists in finding a function that learns a relation between inputs and outputs, where the outputs are a set of labels. The starting point is the training set composed of input-output pairs, i.e., X = {(x_1, y_1), . . . , (x_n, y_n)}. The training set is used to find a function h that minimizes a loss function ℓ; that is, h is the function that minimizes Σ_{(x,y)∈X} ℓ(h(x), y), where the ideal scenario would be ∀(x,y)∈X h(x) = y, and h should also accurately predict the labels of unseen inputs. By fixing, a priori, an order on X, one can use a notation normally adopted in Semantic Genetic Programming (GP) (e.g., [1]), which is to represent the target behavior as y = (y_1, . . . , y_n) and the behavior of function h as h = (h(x_1), . . . , h(x_n)). Using this notation, the sought function h is the one whose behavior h is as close as possible to y, where the closeness is measured using the loss function, referred to in GP as the fitness function.
EvoDAG
EvoDAG 1 [13,14] is a python library that implements a steady-state GP system with tournament selection. EvoDAG is inspired by the implementation of GSGP by Castelli et al. [47], where the main idea is to keep track of all the individuals and their behavior, leading to an efficient evaluation of the offspring whose complexity depends only on the number of fitness cases.
Let us recall that the offspring, in the geometric semantic crossover, is o = r p_1 + (1 − r) p_2, where r is a random function or a constant. In [33], we decided to extend this operation by allowing the offspring to be a linear combination of the parents, that is, o = θ_1 p_1 + θ_2 p_2, where θ_1 and θ_2 are obtained using ordinary least squares (OLS). Continuing with this line of research, in [13] we investigated the case where the offspring is a linear combination of more than two parents, and also included the possibility that the parents be combined using a function randomly selected from the function set.
EvoDAG, as customary, uses a function set, F = {Σ_60, Π_20, max_5, min_5, √·, |·|, sin, tan, atan, tanh, hypot, NB_5, MN_5, NC_2}, and a terminal set, T = {x_1, . . . , x_m}, to create the individuals. The functions in the function set are traditional operations, where the subscript indicates the number of arguments. F also includes classifiers such as Naive Bayes with a Gaussian distribution (NB_5), Naive Bayes with a Multinomial distribution (MN_5), and Nearest Centroid (NC_2).
The initial population starts with P = {θ_1 x_1, . . . , θ_m x_m, NB(x_1, . . . , x_m), MN(x_1, . . . , x_m), NC(x_1, . . . , x_m)}, where x_i is the i-th input and θ_i is obtained using OLS. In case |P| is smaller than the population size, the process continues by including an individual created by randomly selecting a function from F, with its arguments drawn from the current population P. For example, let hypot be the selected function, and let the first and second arguments be θ_2 x_2 and NB(x_1, . . . , x_m); then the individual inserted into P is θ hypot(θ_2 x_2, NB(x_1, . . . , x_m)), where θ is obtained using OLS. This process continues until the population size is reached; EvoDAG uses a population size of 4000.
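A rough sketch of this initialization, using scikit-learn's GaussianNB, MultinomialNB, and NearestCentroid as stand-ins for the classifier nodes and a per-feature least-squares coefficient for the θ_i x_i terms (illustrative only; EvoDAG's internals differ):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.neighbors import NearestCentroid

def initial_population(X, y):
    # X: (n, m) inputs; y: one-vs-rest target in {+1, -1} for a single class.
    # Returns (description, semantics) pairs mirroring the population described above.
    pop = []
    for i in range(X.shape[1]):
        xi = X[:, [i]]
        theta, *_ = np.linalg.lstsq(xi, y, rcond=None)       # theta_i via OLS
        pop.append((f"theta_{i} * x_{i}", xi[:, 0] * float(theta[0])))
    classifiers = [("NB", GaussianNB()), ("MN", MultinomialNB()), ("NC", NearestCentroid())]
    for name, model in classifiers:                          # note: MultinomialNB expects non-negative X
        model.fit(X, y > 0)
        pop.append((name, np.where(model.predict(X), 1.0, -1.0)))
    return pop
```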
EvoDAG uses a steady-state evolution; consequently, P is updated by replacing a current individual, chosen through negative selection, with an offspring, which can in turn be selected as a parent immediately after being inserted in P. The evolution process is similar to the one used to create the initial population; the difference lies in the procedure used to select the arguments. That is, a function f is selected from F, its arguments are selected from P using tournament selection or any of the heuristics analyzed here, and finally, the parameters associated with f are optimized using either OLS or the procedure used by the classifiers. The addition is defined as Σ_i θ_i x_i, where x_i is an individual in P. The rest of the arithmetic functions, the trigonometric functions, min, and max are defined as θ f(x_1, . . .), where f is the function at hand and x_1 is an individual in P. The process continues until the stopping criteria are met.
At this point, it is worth mentioning that EvoDAG uses a one-vs-rest scheme on classification problems. That is, a problem with k different classes is converted into k problems, each one assigning 1 to the current class and −1 to the other labels. Instead of evolving one tree per problem, as done, for example, in [45], we decided to use only one tree and optimize k different θ parameters, one for each class. The result is that each node outputs k values, and the predicted class is the one with the highest value. For the classifier nodes, the output is the log-likelihood.
EvoDAG stops the evolutionary process using early stopping. That is, the training set is split into a smaller training set (50% reduction), and a validation set containing the remaining elements. The training set is used to calculate the fitness, and the parameters θ. The validation set is used to perform the early stopping and to keep the individual with the best performance in this set. The evolution stops when the best individual, on the validation set, has not been updated in a defined number of evaluations; EvoDAG sets this as 4000. The final model corresponds to the best individual, in the validation set, found during the whole evolutionary process.
In order to provide an idea of the type of models produced by EvoDAG, Figure 1 presents an example model that includes, among its nodes, Naive Bayes using a Gaussian distribution. The figure helps to understand the role of optimizing the k sets of parameters, one for each class, where each node outputs k values; consequently, each node is itself a classifier.
It is well known that in evolutionary algorithms there are runs that do not produce an acceptable result, so to improve both stability and accuracy we decided to use Bagging [48] in our approach. We implemented Bagging using the observation that a bagging estimator can be expected to perform similarly whether it draws n elements from the training set with replacement or selects n/2 elements without replacement (see [49]). In total, we create 30 models by using different seeds for the random number generator, and the final prediction is the average of the individual predictions.
Selection heuristics
Let us recall that in a steady-state evolution there are two stages where selection takes place, on the one hand, the selection is used to choose the parents, and on the other hand, the selection is applied to decide which individual, in the current population, is replaced with the offspring. We analyzed the behavior of EvoDAG when different selection schemes are used; the first one uses the absolute of the cosine similarity (sim), the second one is the accuracy (acc), and for comparison purposes, the third is the traditional tournament selection (fit), and the fourth is a random selection (rnd). Regarding the negative selection, it is analyzed two schemes, the traditional negative tournament selection (fit), and random selection (rnd).
The selection heuristics proposed here complement the heuristics used in the related work. Novelty Search (NS) [11] measures novelty with a similarity between the k-nearest neighbors, GP with NS [12] uses accuracy, and the Angle-Driven GP [43] uses the relative angle between the parents and the target behavior. Our heuristic uses the angle between parents without considering the target behavior as done in Angle-Driven GP; the accuracy between parents is computed without considering the accuracy between the k-nearest neighbors as done in GP with NS.
The selection mechanism used in the first two heuristics (sim and acc) is the following. The first parent is selected at random. The remaining parents are chosen using tournament selection (tournament size 2) where the fitness function is replaced with either the cosine similarity or the accuracy. The objective is to minimize the similarity between the parent being selected and the first parent. Furthermore, we analyzed this procedure in two scenarios: in the first one, it is applied only to a subset of the function set, namely {Σ_60, NB_5, MN_5, NC_2}, and random selection is used for the rest of the functions. The second scenario applies this procedure to all the functions except those with one argument.
The cosine similarity between vectors u and v is defined as cos(θ) = (u · v) / (||u|| ||v||); the range of the function is [−1, 1], where −1 corresponds to 180°, 0 to 90°, and 1 to 0°. The idea of using the absolute value is to avoid, as much as possible, the inclusion of collinear parents, which are not useful for the subset of functions selected.
The second heuristic consists of selecting individuals based on the labels predicted by the individual. The similarity used is the accuracy, which counts the number of correct prediction between the target and the classifier. Nonetheless, it is measured the accuracy between the first parent, acting as the target, and the rest of the parents selected. The idea is to choose those parents that present a more significant difference with the first one.
Experiments and Results
This section analyzed the performance of the different selection heuristic proposed and compared it with state-of-the-art classifiers. The classification problems used as benchmarks are 30 datasets taken from the UCI repository [18]. Table 1 shows the dataset information. It can be seen that the datasets are heterogeneous in terms of the number of samples, variables, and classes. Additionally, some of the classification problems are balanced, and others are imbalanced. The table includes Shannon's entropy to indicate the degree of the class-imbalance in the problem, where 1.0 indicates a perfect balance problem.
The performance is measured on a test set; in the repository, some of the problems are already split into a training set and a test set. For the problems where this partition is not provided, we created the split ourselves. In order to improve the reading of tables and figures, we use the following notation. The selection scheme used for selecting the parents is followed by the symbol "-", and then comes the abbreviation of the negative selection scheme. The abbreviations used for selecting parents are sim, acc, fit, and rnd, which represent selection based on the absolute value of the cosine similarity, selection based on accuracy, tournament selection, and random selection, respectively. Furthermore, the superscript * indicates the systems where the proposed heuristics (sim and acc) are used in all the functions with more than one argument. In addition, the prefix "EvoDAG" is used when it is compared with other state-of-the-art techniques.
Table 2 presents the performance, in terms of macro-F1, of EvoDAG with different selection schemes. The systems are arranged column-wise and sorted by the average rank to facilitate the reading. Each row presents the performance on a classification problem, and the best performance is in boldface. It can be seen that the system with the lowest average rank (lower is better) is the one using accuracy for parent selection and random negative selection (acc-rnd); this system also presents the highest average macro-F1. Comparing the performance of acc-rnd against all the other selection schemes (using the Wilcoxon signed-rank test [50] and adjusting the p-values with the Holm-Bonferroni method [51] to account for the multiple comparisons), a statistically significant (95% confidence) difference is observed with sim-fit, sim-rnd, fit-fit, fit-rnd, acc-fit*, and acc-rnd*; interestingly, fit-fit corresponds to tournament selection with a negative tournament, as normally performed in a steady-state evolution. Additionally, it can be observed that acc-rnd is not statistically better than the system using random selection in both stages, i.e., rnd-rnd. Furthermore, rnd-rnd is in third position based on the average rank and second based on the average macro-F1, being outperformed only by the systems that use accuracy to select the parents.
Comparison of the different selection schemes
Comparing the average rank of the selection schemes used to choose the parents, it can be seen that the traditional tournament selection comes in ninth position; in addition, all of our selection heuristics have a better rank than tournament selection. On the other hand, the heuristics applied only to a subset of the function set (i.e., {Σ_60, NB_5, MN_5, NC_2}) obtained a better rank than the counterpart systems that apply them to all functions with arity greater than one; moreover, the worst systems correspond to the use of accuracy in this latter configuration. It is also observed that the systems using the absolute cosine similarity are less affected by the choice between a subset of functions and all the functions, whereas this decision affects the use of accuracy the most. Table 2: Comparison of EvoDAG's performance using different selection schemes for selecting the parents and the negative selection. The columns are ordered based on the macro-F1 average rank. The symbol * indicates that the selection heuristic was applied to all functions with arity greater than one. The best performance in each problem is indicated in boldface.
(Table 2 columns, ordered by average rank: acc-rnd, acc-fit, rnd-rnd, sim-fit, rnd-fit, sim-fit*, sim-rnd, sim-rnd*, fit-fit, fit-rnd, acc-fit*, acc-rnd*.)
Figure 2 shows the evolution of the best individuals found, during the evolution, on the training and validation sets. We use the agaricus-lepiota dataset as an example. The performance, in terms of macro-F1, of the best individual is recorded during the evolution of thirty independent executions, and these are presented as boxplots as a function of the number of evaluated individuals. It can be seen that, in all cases, the performance of the best individual on the training set is higher than the one obtained on the validation set. Furthermore, it can be observed that the parent-selection scheme based on accuracy (acc) reaches slightly higher values in the first evaluations than tournament selection, which is reflected as outliers in the boxplot. This continues during the whole evolution and is reflected in both the training and validation sets.
Comparison of EvoDAG with other state-of-the-art classifiers
After analyzing the performance of the different selection schemes, we now compare EvoDAG, with its different selection schemes, against state-of-the-art classifiers. We decided to compare against sixteen classifiers, all of them using their default parameters and implemented in the scikit-learn python library [16]; specifically, these classifiers are Perceptron, MLPClassifier, BernoulliNB, GaussianNB, KNeighborsClassifier, NearestCentroid, LogisticRegression, LinearSVC, SVC, SGDClassifier, PassiveAggressiveClassifier, DecisionTreeClassifier, ExtraTreesClassifier, RandomForestClassifier, AdaBoostClassifier, and GradientBoostingClassifier. Two auto-machine-learning libraries are also included in the comparison: autosklearn [17] and TPOT [9]. Figure 3 presents a boxplot of the ranks (using macro-F1 as the performance measure) of the state-of-the-art classifiers and EvoDAG with the different selection schemes. In order to facilitate the reading, the boxplots are ordered by the average rank. It is observed from the figure that TPOT is the system with the lowest rank, followed by EvoDAG with accuracy and random negative selection, which in turn is followed by autosklearn. Comparing the performance of TPOT against the rest of the classifiers, one can see that TPOT is not statistically different from any of the EvoDAG systems nor from the classifiers that have a better average rank than LogisticRegression.
As can be seen from Figure 3, only two classifiers are better than EvoDAG with random selection; these are TPOT and autosklearn, and it is essential to note that both are auto-machine-learning classifiers. Furthermore, let us consider all the classifiers that have a better rank than EvoDAG fit-rnd, which corresponds to the EvoDAG variant with the lowest position. These are TPOT, autosklearn, GradientBoosting, and ExtraTrees; these classifiers have in common the use of decision trees at some point, that is, they are either a variant of decision trees or include them in their search space. Conversely, EvoDAG does not use any form of decision trees.
Besides measuring the performance using macro-F1, Figure 4 presents boxplots of the time required in the training phase by the different algorithms. The boxplot is on log-scale, given differences in time between the algorithms, and uses time per sample to take into consideration that the dataset varied in the training set size. It is not surprising that the systems obtaining the best performance are also the slowest systems. As can be seen from the figure, TPOT is the most time-consuming system, followed by autosklearn, and then EvoDAG systems. In average TPOT uses 38.7 seconds per sample, autosklearn requires 7.8 seconds per sample, and EvoDAG utilizes less the one second per sample. Looking at EvoDAG systems, it can be observed that the slowest selection schemes are accuracy, absolute cosine similarity, tournament selection, and random selection. This behavior is expected, given the algorithmic complexity. Accuracy and cosine similarity requires to perform O(n) operations every time a parent is selected; in addition, these systems compute the fitness to perform early stopping or the negative selection. On the other hand, tournament and the random selection, requires O(1) operations to complete the selection, although tournament selection needs to create the tournament, and random selection does not.
One can combine the information presented in Figures 3 and 4 by performing a Pareto analysis. The classifiers on the Pareto frontier are TPOT, EvoDAG with acc-rnd, EvoDAG with rnd-rnd, GradientBoosting, ExtraTrees, and DecisionTree. From the figures, it can be inferred that the system closest to the elbow is GradientBoosting.
Conclusion
We presented the impact that different selection heuristics have on the performance of a steady-state semantic Genetic Programming system (namely EvoDAG). The selection process takes place at two moments during the evolution: when selecting the parents and when choosing an individual to replace. The selection heuristics studied for the first case are the absolute value of the cosine similarity, accuracy, tournament selection, and random selection; for the second case, negative tournament selection and random selection are analyzed. The results show that the use of our heuristics, cosine similarity and accuracy, outperforms EvoDAG using tournament selection, i.e., selection based on fitness. Moreover, the heuristic that obtained the best performance was accuracy. It is interesting to note that random selection is competitive, achieving the third position among the different combinations studied.
The performance of EvoDAG with the selection heuristics is analyzed on 30 classification problems taken from the UCI repository. Also, EvoDAG is compared with 18 state-of-the-art classifiers, 16 of them are implemented in scikit-learn python library and two auto-machine learning algorithms. The result shows that EvoDAG using accuracy and the random selection is competitive, using the average rank (measured with macro-F1) it obtained the second position where the best system is TPOT which was an auto-machine learning algorithm, and the third position was autosklearn. Interesting, EvoDAG's performance is statistically equivalent to the two auto-machine learning algorithms considered in this comparison. However, EvoDAG uses neither feature selection algorithm nor any form of decision trees, as done by the auto-machine learning approaches. We also include in the comparison of the time required in the training phase of the classifiers. The auto-machine learning algorithms were the slowest ones, followed by EvoDAG. Nonetheless, the difference in time is considerable; TPOT uses, on average more than 30 seconds per sample, autosklearn 7, and EvoDAG less than one second per instance. | 3,325 |
1907.07066 | 2959448612 | In a steady-state evolution, tournament selection traditionally uses the fitness function to select the parents, and negative selection chooses an individual to be replaced with an offspring. This contribution focuses on analyzing the behavior, in terms of performance, of different heuristics when used instead of the fitness function in tournament selection. The heuristics analyzed are related to measuring the similarity of the individuals in the semantic space. In addition, the analysis includes random selection and traditional tournament selection. These selection functions were implemented in our Semantic Genetic Programming system, namely EvoDAG, which is inspired by the geometric genetic operators, and tested on 30 classification problems with a variable number of samples, variables, and classes. The results indicate that using accuracy for parent selection together with random selection in the negative tournament is the best combination, and the difference in performance between this combination and tournament selection is statistically significant. Furthermore, we compare EvoDAG's performance using the selection heuristics against 18 classifiers that include traditional approaches as well as auto-machine-learning techniques. The results indicate that our proposal is competitive with state-of-the-art classifiers. Finally, it is worth mentioning that EvoDAG is available as open-source software. | Chen @cite_11 proposed the Angle-Driven Selection (ADS), where the first parent is selected using fitness and the second is selected using an angle-distance defined as @math . One of our selection heuristics is similar to ADS; however, there are significant differences: the first parent is randomly selected and, although the second parent is selected using an equivalent similarity, the target behavior is not considered in our approach. | {
"abstract": [
"Geometric semantic genetic programming (GP) has recently attracted much attention. The key innovations are inducing a unimodal fitness landscape in the semantic space and providing a theoretical framework for designing geometric semantic operators. The geometric semantic operators aim to manipulate the semantics of programs by making a bounded semantic impact and generating child programs with similar or better behavior than their parents. These properties are shown to be highly related to a notable generalization improvement in GP. However, the potential ineffectiveness and difficulties in bounding the variations in these geometric operators still limits their positive effect on generalization. This paper attempts to further explore the geometry and search space of geometric operators to gain a greater generalization improvement in GP for symbolic regression. To this end, a new angle-driven selection operator and two new angle-driven geometric search operators are proposed. The angle-awareness brings new geometric properties to these geometric operators, which are expected to provide a greater leverage for approximating the target semantics in each operation, and more importantly, be resistant to overfitting. The experiments show that compared with two state-of-the-art geometric semantic operators, our angle-driven geometric operators not only drive the evolutionary process to fit the target semantics more efficiently but also improve the generalization performance. A further comparison between the evolved models shows that the new method generally produces simpler models with a much smaller size and is more likely to evolve toward the correct structure of the target models."
],
"cite_N": [
"@cite_11"
],
"mid": [
"2891680306"
]
} | Selection Heuristics on Semantic Genetic Programming for Classification Problems | Classification is a supervised learning problem that consists in finding a function that learns a relation between inputs and outputs, where the outputs are a set of labels. The starting point is the training set composed of input-output pairs, i.e., X = {(x_1, y_1), . . . , (x_n, y_n)}. The training set is used to find a function h that minimizes a loss function ℓ; that is, h is the function that minimizes Σ_{(x,y)∈X} ℓ(h(x), y), where the ideal scenario would be ∀(x,y)∈X h(x) = y, and h should also accurately predict the labels of unseen inputs. By fixing, a priori, an order on X, one can use a notation normally adopted in Semantic Genetic Programming (GP) (e.g., [1]), which is to represent the target behavior as y = (y_1, . . . , y_n) and the behavior of function h as h = (h(x_1), . . . , h(x_n)). Using this notation, the sought function h is the one whose behavior h is as close as possible to y, where the closeness is measured using the loss function, referred to in GP as the fitness function.
EvoDAG
EvoDAG 1 [13,14] is a python library that implements a steady-state GP system with tournament selection. EvoDAG is inspired by the implementation of GSGP by Castelli et al. [47], where the main idea is to keep track of all the individuals and their behavior, leading to an efficient evaluation of the offspring whose complexity depends only on the number of fitness cases.
Let us recall that the offspring, in the geometric semantic crossover, is o = r p_1 + (1 − r) p_2, where r is a random function or a constant. In [33], we decided to extend this operation by allowing the offspring to be a linear combination of the parents, that is, o = θ_1 p_1 + θ_2 p_2, where θ_1 and θ_2 are obtained using ordinary least squares (OLS). Continuing with this line of research, in [13] we investigated the case where the offspring is a linear combination of more than two parents, and also included the possibility that the parents be combined using a function randomly selected from the function set.
EvoDAG, as customary, uses a function set, F = {Σ_60, Π_20, max_5, min_5, √·, |·|, sin, tan, atan, tanh, hypot, NB_5, MN_5, NC_2}, and a terminal set, T = {x_1, . . . , x_m}, to create the individuals. The functions in the function set are traditional operations, where the subscript indicates the number of arguments. F also includes classifiers such as Naive Bayes with a Gaussian distribution (NB_5), Naive Bayes with a Multinomial distribution (MN_5), and Nearest Centroid (NC_2).
The initial population starts with P = {θ_1 x_1, . . . , θ_m x_m, NB(x_1, . . . , x_m), MN(x_1, . . . , x_m), NC(x_1, . . . , x_m)}, where x_i is the i-th input and θ_i is obtained using OLS. In case |P| is smaller than the population size, the process continues by including an individual created by randomly selecting a function from F, with its arguments drawn from the current population P. For example, let hypot be the selected function, and let the first and second arguments be θ_2 x_2 and NB(x_1, . . . , x_m); then the individual inserted into P is θ hypot(θ_2 x_2, NB(x_1, . . . , x_m)), where θ is obtained using OLS. This process continues until the population size is reached; EvoDAG uses a population size of 4000.
EvoDAG uses a steady-state evolution; consequently, P is updated by replacing a current individual, chosen through negative selection, with an offspring, which can in turn be selected as a parent immediately after being inserted in P. The evolution process is similar to the one used to create the initial population; the difference lies in the procedure used to select the arguments. That is, a function f is selected from F, its arguments are selected from P using tournament selection or any of the heuristics analyzed here, and finally, the parameters associated with f are optimized using either OLS or the procedure used by the classifiers. The addition is defined as Σ_i θ_i x_i, where x_i is an individual in P. The rest of the arithmetic functions, the trigonometric functions, min, and max are defined as θ f(x_1, . . .), where f is the function at hand and x_1 is an individual in P. The process continues until the stopping criteria are met.
At this point, it is worth mentioning that EvoDAG uses a one-vs-rest scheme on classification problems. That is, a problem with k different classes is converted into k problems, each one assigning 1 to the current class and −1 to the other labels. Instead of evolving one tree per problem, as done, for example, in [45], we decided to use only one tree and optimize k different θ parameters, one for each class. The result is that each node outputs k values, and the predicted class is the one with the highest value. For the classifier nodes, the output is the log-likelihood.
EvoDAG stops the evolutionary process using early stopping. That is, the training set is split into a smaller training set (50% reduction), and a validation set containing the remaining elements. The training set is used to calculate the fitness, and the parameters θ. The validation set is used to perform the early stopping and to keep the individual with the best performance in this set. The evolution stops when the best individual, on the validation set, has not been updated in a defined number of evaluations; EvoDAG sets this as 4000. The final model corresponds to the best individual, in the validation set, found during the whole evolutionary process.
In order to provide an idea of the type of models produced by EvoDAG, Figure 1 presents an example model that includes, among its nodes, Naive Bayes using a Gaussian distribution. The figure helps to understand the role of optimizing the k sets of parameters, one for each class, where each node outputs k values; consequently, each node is itself a classifier.
It is well known that in evolutionary algorithms there are runs that do not produce an acceptable result, so to improve both stability and accuracy we decided to use Bagging [48] in our approach. We implemented Bagging using the observation that a bagging estimator can be expected to perform similarly whether it draws n elements from the training set with replacement or selects n/2 elements without replacement (see [49]). In total, we create 30 models by using different seeds for the random number generator, and the final prediction is the average of the individual predictions.
Selection heuristics
Let us recall that in a steady-state evolution there are two stages where selection takes place, on the one hand, the selection is used to choose the parents, and on the other hand, the selection is applied to decide which individual, in the current population, is replaced with the offspring. We analyzed the behavior of EvoDAG when different selection schemes are used; the first one uses the absolute of the cosine similarity (sim), the second one is the accuracy (acc), and for comparison purposes, the third is the traditional tournament selection (fit), and the fourth is a random selection (rnd). Regarding the negative selection, it is analyzed two schemes, the traditional negative tournament selection (fit), and random selection (rnd).
The selection heuristics proposed here complement the heuristics used in the related work. Novelty Search (NS) [11] measures novelty with a similarity between the k-nearest neighbors, GP with NS [12] uses accuracy, and the Angle-Driven GP [43] uses the relative angle between the parents and the target behavior. Our heuristic uses the angle between parents without considering the target behavior as done in Angle-Driven GP; the accuracy between parents is computed without considering the accuracy between the k-nearest neighbors as done in GP with NS.
The selection mechanism used in the first two heuristics (sim and acc) is the following. The first parent is selected at random. The remaining parents are chosen using tournament selection (tournament size 2) where the fitness function is replaced with either the cosine similarity or the accuracy. The objective is to minimize the similarity between the parent being selected and the first parent. Furthermore, we analyzed this procedure in two scenarios: in the first one, it is applied only to a subset of the function set, namely {Σ_60, NB_5, MN_5, NC_2}, and random selection is used for the rest of the functions. The second scenario applies this procedure to all the functions except those with one argument.
The cosine similarity between vectors u and v is defined as cos(θ) = (u · v) / (||u|| ||v||); the range of the function is [−1, 1], where −1 corresponds to 180°, 0 to 90°, and 1 to 0°. The idea of using the absolute value is to avoid, as much as possible, the inclusion of collinear parents, which are not useful for the subset of functions selected.
The second heuristic consists of selecting individuals based on the labels predicted by the individual. The similarity used is the accuracy, which counts the number of correct prediction between the target and the classifier. Nonetheless, it is measured the accuracy between the first parent, acting as the target, and the rest of the parents selected. The idea is to choose those parents that present a more significant difference with the first one.
Experiments and Results
This section analyzed the performance of the different selection heuristic proposed and compared it with state-of-the-art classifiers. The classification problems used as benchmarks are 30 datasets taken from the UCI repository [18]. Table 1 shows the dataset information. It can be seen that the datasets are heterogeneous in terms of the number of samples, variables, and classes. Additionally, some of the classification problems are balanced, and others are imbalanced. The table includes Shannon's entropy to indicate the degree of the class-imbalance in the problem, where 1.0 indicates a perfect balance problem.
The performance is measured on a test set; in the repository, some of the problems are already split into a training set and a test set. For the problems where this partition is not provided, we created the split ourselves. In order to improve the reading of tables and figures, we use the following notation. The selection scheme used for selecting the parents is followed by the symbol "-", and then comes the abbreviation of the negative selection scheme. The abbreviations used for selecting parents are sim, acc, fit, and rnd, which represent selection based on the absolute value of the cosine similarity, selection based on accuracy, tournament selection, and random selection, respectively. Furthermore, the superscript * indicates the systems where the proposed heuristics (sim and acc) are used in all the functions with more than one argument. In addition, the prefix "EvoDAG" is used when it is compared with other state-of-the-art techniques.
Table 2 presents the performance, in terms of macro-F1, of EvoDAG with different selection schemes. The systems are arranged column-wise and sorted by the average rank to facilitate the reading. Each row presents the performance on a classification problem, and the best performance is in boldface. It can be seen that the system with the lowest average rank (lower is better) is the one using accuracy for parent selection and random negative selection (acc-rnd); this system also presents the highest average macro-F1. Comparing the performance of acc-rnd against all the other selection schemes (using the Wilcoxon signed-rank test [50] and adjusting the p-values with the Holm-Bonferroni method [51] to account for the multiple comparisons), a statistically significant (95% confidence) difference is observed with sim-fit, sim-rnd, fit-fit, fit-rnd, acc-fit*, and acc-rnd*; interestingly, fit-fit corresponds to tournament selection with a negative tournament, as normally performed in a steady-state evolution. Additionally, it can be observed that acc-rnd is not statistically better than the system using random selection in both stages, i.e., rnd-rnd. Furthermore, rnd-rnd is in third position based on the average rank and second based on the average macro-F1, being outperformed only by the systems that use accuracy to select the parents.
Comparison of the different selection schemes
Comparing the average rank of the selection schemes used to choose the parents, it can be seen that the traditional tournament selection comes in ninth position; in addition, all of our selection heuristics have a better rank than tournament selection. On the other hand, the heuristics applied only to a subset of the function set (i.e., {Σ_60, NB_5, MN_5, NC_2}) obtained a better rank than the counterpart systems that apply them to all functions with arity greater than one; moreover, the worst systems correspond to the use of accuracy in this latter configuration. It is also observed that the systems using the absolute cosine similarity are less affected by the choice between a subset of functions and all the functions, whereas this decision affects the use of accuracy the most. Table 2: Comparison of EvoDAG's performance using different selection schemes for selecting the parents and the negative selection. The columns are ordered based on the macro-F1 average rank. The symbol * indicates that the selection heuristic was applied to all functions with arity greater than one. The best performance in each problem is indicated in boldface.
(Table 2 columns, ordered by average rank: acc-rnd, acc-fit, rnd-rnd, sim-fit, rnd-fit, sim-fit*, sim-rnd, sim-rnd*, fit-fit, fit-rnd, acc-fit*, acc-rnd*.)
Figure 2 shows the evolution of the best individuals found, during the evolution, on the training and validation sets. We use the agaricus-lepiota dataset as an example. The performance, in terms of macro-F1, of the best individual is recorded during the evolution of thirty independent executions, and these are presented as boxplots as a function of the number of evaluated individuals. It can be seen that, in all cases, the performance of the best individual on the training set is higher than the one obtained on the validation set. Furthermore, it can be observed that the parent-selection scheme based on accuracy (acc) reaches slightly higher values in the first evaluations than tournament selection, which is reflected as outliers in the boxplot. This continues during the whole evolution and is reflected in both the training and validation sets.
Comparison of EvoDAG with other state-of-the-art classifiers
After analyzing the performance of the different selection schemes, we now compare EvoDAG, with its different selection schemes, against state-of-the-art classifiers. We decided to compare against sixteen classifiers, all of them using their default parameters and implemented in the scikit-learn python library [16]; specifically, these classifiers are Perceptron, MLPClassifier, BernoulliNB, GaussianNB, KNeighborsClassifier, NearestCentroid, LogisticRegression, LinearSVC, SVC, SGDClassifier, PassiveAggressiveClassifier, DecisionTreeClassifier, ExtraTreesClassifier, RandomForestClassifier, AdaBoostClassifier, and GradientBoostingClassifier. Two auto-machine-learning libraries are also included in the comparison: autosklearn [17] and TPOT [9]. Figure 3 presents a boxplot of the ranks (using macro-F1 as the performance measure) of the state-of-the-art classifiers and EvoDAG with the different selection schemes. In order to facilitate the reading, the boxplots are ordered by the average rank. It is observed from the figure that TPOT is the system with the lowest rank, followed by EvoDAG with accuracy and random negative selection, which in turn is followed by autosklearn. Comparing the performance of TPOT against the rest of the classifiers, one can see that TPOT is not statistically different from any of the EvoDAG systems nor from the classifiers that have a better average rank than LogisticRegression.
As can be seen from Figure 3, only two classifiers are better than EvoDAG with random selection, namely TPOT and autosklearn; it is worth noting that these are auto-machine-learning classifiers. Furthermore, let us consider all the classifiers that have a better rank than EvoDAG fit-rnd, which is the EvoDAG variant with the lowest position: TPOT, autosklearn, GradientBoosting, and ExtraTrees. These classifiers have in common the use of decision trees at some point; that is, they are either a variant of decision trees or include them in their search space. Conversely, EvoDAG does not use any form of decision trees.
Besides measuring the performance using macro-F1, Figure 4 presents boxplots of the time required in the training phase by the different algorithms. The boxplot is on a log scale, given the differences in time between the algorithms, and uses time per sample to take into account that the training-set size varies among the datasets. It is not surprising that the systems obtaining the best performance are also the slowest ones. As can be seen from the figure, TPOT is the most time-consuming system, followed by autosklearn and then the EvoDAG systems. On average, TPOT uses 38.7 seconds per sample, autosklearn requires 7.8 seconds per sample, and EvoDAG uses less than one second per sample. Looking at the EvoDAG systems, the selection schemes ordered from slowest to fastest are accuracy, absolute cosine similarity, tournament selection, and random selection. This behavior is expected given the algorithmic complexity: accuracy and cosine similarity require O(n) operations every time a parent is selected and, in addition, these systems compute the fitness in order to perform early stopping or the negative selection. On the other hand, tournament selection and random selection require O(1) operations to complete the selection, although tournament selection needs to create the tournament, whereas random selection does not.
One can combine the information presented in Figures 3 and 4 by performing a Pareto analysis. The classifiers on the Pareto frontier are TPOT, EvoDAG with acc-rnd, EvoDAG with rnd-rnd, GradientBoosting, ExtraTrees, and DecisionTree. From the figures, it can be inferred that the system closest to the elbow is GradientBoosting.
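The Pareto analysis itself is straightforward to reproduce: given a (training time, average rank) pair per system, keep the non-dominated ones. The numbers below are illustrative placeholders, not the measured values.

```python
# Each system is described by (training time per sample, average rank); both are minimized.
systems = {
    "TPOT": (38.7, 1.0),
    "autosklearn": (7.8, 3.0),
    "EvoDAG acc-rnd": (0.9, 2.0),
    "GradientBoosting": (0.05, 5.0),
    "DecisionTree": (0.001, 12.0),
}

def pareto_frontier(points):
    """Return the names of the non-dominated systems (minimization on both axes)."""
    frontier = []
    for name, (t, r) in points.items():
        dominated = any(
            (t2 <= t and r2 <= r) and (t2 < t or r2 < r)
            for other, (t2, r2) in points.items() if other != name
        )
        if not dominated:
            frontier.append(name)
    return frontier

print(pareto_frontier(systems))
```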
Conclusion
We presented the impact that different selection heuristics have on the performance of a steady-state semantic Genetic Programming system (namely EvoDAG). Selection takes place at two moments during the evolution: when choosing the parents and when deciding which individual to replace. For the former, the heuristics studied are the absolute value of the cosine similarity, accuracy, tournament selection, and random selection; for the latter, negative tournament selection and random selection are analyzed. The results show that our heuristics, cosine similarity and accuracy, outperform EvoDAG using tournament selection, i.e., selection based on fitness; moreover, the heuristic that obtained the best performance was accuracy. It is interesting to note that random selection is competitive, achieving the third position among the different combinations studied.
The performance of EvoDAG with the selection heuristics is analyzed on 30 classification problems taken from the UCI repository. EvoDAG is also compared with 18 state-of-the-art classifiers, 16 of them implemented in the scikit-learn Python library plus two auto-machine-learning algorithms. The results show that EvoDAG using accuracy and random selection is competitive: using the average rank (measured with macro-F1), it obtained the second position, where the best system is TPOT, an auto-machine-learning algorithm, and the third position corresponds to autosklearn. Interestingly, EvoDAG's performance is statistically equivalent to that of the two auto-machine-learning algorithms considered in this comparison, even though EvoDAG uses neither a feature-selection algorithm nor any form of decision trees, as the auto-machine-learning approaches do. We also compared the time required in the training phase of the classifiers. The auto-machine-learning algorithms were the slowest ones, followed by EvoDAG; nonetheless, the difference in time is considerable: TPOT uses, on average, more than 30 seconds per sample, autosklearn 7, and EvoDAG less than one second per instance. | 3,325
1907.07066 | 2959448612 | In a steady-state evolution, tournament selection traditionally uses the fitness function to select the parents, and negative selection chooses an individual to be replaced with an offspring. This contribution focuses on analyzing the behavior, in terms of performance, of different heuristics when used instead of the fitness function in tournament selection. The heuristics analyzed are related to measuring the similarity of the individuals in the semantic space; in addition, the analysis includes random selection and traditional tournament selection. These selection functions were implemented in our Semantic Genetic Programming system, namely EvoDAG, which is inspired by the geometric genetic operators, and were tested on 30 classification problems with a variable number of samples, variables, and classes. The results indicate that accuracy for selecting the parents, combined with random selection in the negative tournament, produces the best combination, and that the difference in performance between this combination and tournament selection is statistically significant. Furthermore, we compare EvoDAG's performance using the selection heuristics against 18 classifiers that include traditional approaches as well as auto-machine-learning techniques. The results indicate that our proposal is competitive with state-of-the-art classifiers. Finally, it is worth mentioning that EvoDAG is available as open-source software. | Loveard and Ciesielski @cite_5 proposed different techniques for representing classification problems in GP; one of them assigns the class based on a range, with as many intervals as classes. Muni @cite_51 proposed to evolve a tree for each class, following a strategy equivalent to the one-vs-all approach. Jaben and Baig @cite_17 developed a two-stage method: the first stage evolves a classifier for each class, and the second stage combines these classifiers. | {
"abstract": [
"Five alternative methods are proposed to perform multi-class classification tasks using genetic programming. These methods are: (1) binary decomposition, in which the problem is decomposed into a set of binary problems and standard genetic programming methods are applied; (2) static range selection, where the set of real values returned by a genetic program is divided into class boundaries using arbitrarily-chosen division points; (3) dynamic range selection, in which a subset of training examples are used to determine where, over the set of reals, class boundaries lie; (4) class enumeration, which constructs programs similar in syntactic structure to a decision tree; and (5) evidence accumulation, which allows separate branches of the program to add to the certainty of any given class. The results show that the dynamic range selection method is well-suited to the task of multi-class classification and is capable of producing classifiers that are more accurate than the other methods tried when comparable training times are allowed. The accuracy of the generated classifiers was comparable to alternative approaches over several data sets.",
"We propose a new approach for designing classifiers for a c-class (c spl ges 2) problem using genetic programming (GP). The proposed approach takes an integrated view of all classes when the GP evolves. A multitree representation of chromosomes is used. In this context, we propose a modified crossover operation and a new mutation operation that reduces the destructive nature of conventional genetic operations. We use a new concept of unfitness of a tree to select trees for genetic operations. This gives more opportunity to unfit trees to become fit. A new concept of OR-ing chromosomes in the terminal population is introduced, which enables us to get a classifier with better performance. Finally, a weight-based scheme and some heuristic rules characterizing typical ambiguous situations are used for conflict resolution. The classifier is capable of saying \"don't know\" when faced with unfamiliar examples. The effectiveness of our scheme is demonstrated on several real data sets.",
"Abstract This paper introduces a two-stage strategy for multi-class classification problems. The proposed technique is an advancement of tradition binary decomposition method. In the first stage, the classifiers are trained for each class versus the remaining classes. A modified fitness value is used to select good discriminators for the imbalanced data. In the second stage, the classifiers are integrated and treated as a single chromosome that can classify any of the classes from the dataset. A population of such classifier-chromosomes is created from good classifiers (for individual classes) of the first phase. This population is evolved further, with a fitness that combines accuracy and conflicts. The proposed method encourages the classifier combination with good discrimination among all classes and less conflicts. The two-stage learning has been tested on several benchmark datasets and results are found encouraging."
],
"cite_N": [
"@cite_5",
"@cite_51",
"@cite_17"
],
"mid": [
"1586014638",
"2113834117",
"2047471240"
]
} | Selection Heuristics on Semantic Genetic Programming for Classification Problems | Classification is a supervised learning problem that consists in finding a function that learns a relation between inputs and outputs, where the outputs are a set of labels. The starting point is the training set composed of input-output pairs, i.e., $\mathcal{X} = \{(\vec{x}_1, y_1), \ldots, (\vec{x}_n, y_n)\}$. The training set is used to find a function, $h$, that minimizes a loss function, $\ell$; that is, $h$ is the function that minimizes $\sum_{(\vec{x}, y) \in \mathcal{X}} \ell(h(\vec{x}), y)$, where the ideal scenario would be $\forall (\vec{x}, y) \in \mathcal{X}: h(\vec{x}) = y$, and the goal is also to accurately predict the labels of unseen inputs. By fixing, a priori, an order on $\mathcal{X}$, one can use a notation normally adopted in Semantic Genetic Programming (GP) (e.g., [1]), which is to represent the target behavior as $\vec{y} = (y_1, \ldots, y_n)$ and the behavior of a function $h$ as $\vec{h} = (h(\vec{x}_1), \ldots, h(\vec{x}_n))$. Using this notation, the function $h$ being searched for is the one whose $\vec{h}$ is as close as possible to $\vec{y}$, where the closeness is measured using the loss function, referred to in GP as the fitness function.
EvoDAG
EvoDAG 1 [13,14] is a Python library that implements a steady-state GP system with tournament selection. EvoDAG is inspired by the implementation of GSGP performed by Castelli et al. [47], where the main idea is to keep track of all the individuals and their behavior, leading to an efficient evaluation of the offspring whose complexity depends only on the number of fitness cases.
Let us recall that the offspring, in the geometric semantic crossover, is $\vec{o} = r\,\vec{p}_1 + (1 - r)\,\vec{p}_2$, where $r$ is a random function or a constant. In [33], we extended this operation by allowing the offspring to be a linear combination of the parents, that is, $\vec{o} = \theta_1 \vec{p}_1 + \theta_2 \vec{p}_2$, where $\theta_1$ and $\theta_2$ are obtained using ordinary least squares (OLS). Continuing with this line of research, in [13], we investigated the case where the offspring is a linear combination of more than two parents and also included the possibility that the parents could be combined using a function randomly selected from the function set.
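The OLS step in this construction boils down to a standard least-squares fit of the parents' semantics to the target semantics; the following minimal sketch illustrates it with made-up vectors (it is not EvoDAG's actual implementation).

```python
import numpy as np

# Semantics (behavior on the fitness cases) of two parents and the target.
p1 = np.array([0.2, 1.0, -0.5, 0.7])
p2 = np.array([1.1, -0.3, 0.4, 0.9])
y = np.array([1.0, 1.0, -1.0, 1.0])

# Offspring o = theta_1 * p1 + theta_2 * p2, with theta fitted by OLS.
P = np.column_stack([p1, p2])
theta, *_ = np.linalg.lstsq(P, y, rcond=None)
offspring = P @ theta
print("theta:", theta)
print("offspring semantics:", offspring)
```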
EvoDAG, as customary, uses a function set, $\mathcal{F} = \{\Sigma_{60}, \Pi_{20}, \max_5, \min_5, \sqrt{\cdot}, |\cdot|, \sin, \tan, \text{atan}, \tanh, \text{hypot}, \text{NB}_5, \text{MN}_5, \text{NC}_2\}$, and a terminal set, $\mathcal{T} = \{x_1, \ldots, x_m\}$, to create the individuals. The functions in the function set are traditional operations, where the subscript indicates the number of arguments. $\mathcal{F}$ also includes classifiers such as Naive Bayes with a Gaussian distribution (NB$_5$), Naive Bayes with a Multinomial distribution (MN$_5$), and Nearest Centroid (NC$_2$).
The initial population starts with $P = \{\theta_1 x_1, \ldots, \theta_m x_m, \text{NB}(x_1, \ldots, x_m), \text{MN}(x_1, \ldots, x_m), \text{NC}(x_1, \ldots, x_m)\}$, where $x_i$ is the $i$-th input and $\theta_i$ is obtained using OLS. In case $|P|$ is lower than the population size, the process continues by including individuals created by randomly selecting a function from $\mathcal{F}$ whose arguments are drawn from the current population $P$. For example, let hypot be the selected function, and let the first and second arguments be $\theta_2 x_2$ and $\text{NB}(x_1, \ldots, x_m)$; then the individual inserted into $P$ is $\theta\,\text{hypot}(\theta_2 x_2, \text{NB}(x_1, \ldots, x_m))$, where $\theta$ is obtained using OLS. This process continues until the population size is reached; EvoDAG sets the population size to 4000.
EvoDAG uses a steady-state evolution; consequently, $P$ is updated by replacing a current individual, selected using negative selection, with an offspring, which can be selected as a parent just after being inserted in $P$. The evolution process is similar to the one used to create the initial population; the difference is in the procedure used to select the arguments. That is, a function $f$ is selected from $\mathcal{F}$, the arguments are selected from $P$ using tournament selection or any of the heuristics analyzed here, and, finally, the parameters associated with $f$ are optimized using either OLS or the procedure used by the classifiers. The addition is defined as $\sum_i \theta_i x_i$, where $x_i$ is an individual in $P$. The rest of the arithmetic functions, the trigonometric functions, min, and max are defined as $\theta f(x_1, \ldots)$, where $f$ is the function at hand and $x_1, \ldots$ are individuals in $P$. The process continues until the stopping criteria are met.
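A highly simplified sketch of such a steady-state step is shown below, assuming random parent selection and an OLS-fitted sum node; EvoDAG's actual node types, caching, and selection heuristics are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_sum_node(parents, y):
    """Combine parent semantics with an OLS-fitted weighted sum."""
    P = np.column_stack(parents)
    theta, *_ = np.linalg.lstsq(P, y, rcond=None)
    return P @ theta

def steady_state_step(population, y, n_args=3):
    # Parent selection (random here; tournament or a similarity heuristic in general).
    idx = rng.choice(len(population), size=n_args, replace=False)
    offspring = fit_sum_node([population[i] for i in idx], y)
    # Negative selection: replace a randomly chosen individual (or the tournament loser).
    population[rng.integers(len(population))] = offspring
    return population

# Toy setup: semantics of 10 individuals over 6 fitness cases, target y.
y = np.array([1.0, -1.0, 1.0, 1.0, -1.0, 1.0])
population = [rng.normal(size=y.size) for _ in range(10)]
for _ in range(100):
    population = steady_state_step(population, y)
best = min(population, key=lambda h: np.sum((h - y) ** 2))
print("best squared error:", np.sum((best - y) ** 2))
```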
At this point, it is worth mentioning that EvoDAG uses a one-vs-rest scheme on classification problems. That is, a problem with $k$ different classes is converted into $k$ problems, each of which assigns 1 to the current class and $-1$ to the other labels. Instead of evolving one tree per problem, as done, for example, in [45], we decided to use only one tree and optimize $k$ different sets of $\theta$ parameters, one for each class. The result is that each node outputs $k$ values, and the predicted class is the one with the highest value. In the case of the classifiers included in the function set, the output is the log-likelihood.
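The following sketch illustrates the one-vs-rest encoding described above: each class receives its own ±1 target, one set of θ per class is fitted by OLS over the same inputs, and the predicted class is the arg max of the k outputs (toy data, not EvoDAG's code).

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_inputs, n_classes = 8, 3, 3
X = rng.normal(size=(n_samples, n_inputs))        # semantics of the node's arguments
labels = rng.integers(n_classes, size=n_samples)

# One-vs-rest targets: +1 for the class, -1 for the rest.
T = -np.ones((n_samples, n_classes))
T[np.arange(n_samples), labels] = 1.0

# One set of theta per class, all fitted by OLS on the same inputs.
Theta, *_ = np.linalg.lstsq(X, T, rcond=None)     # shape (n_inputs, n_classes)
outputs = X @ Theta                               # each node outputs k values
predicted = outputs.argmax(axis=1)
print("training accuracy:", (predicted == labels).mean())
```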
EvoDAG stops the evolutionary process using early stopping. That is, the training set is split into a smaller training set (a 50% reduction) and a validation set containing the remaining elements. The training set is used to calculate the fitness and the parameters $\theta$, whereas the validation set is used to perform the early stopping and to keep the individual with the best performance on this set. The evolution stops when the best individual on the validation set has not been updated for a defined number of evaluations; EvoDAG sets this to 4000. The final model corresponds to the best individual, on the validation set, found during the whole evolutionary process.
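A minimal sketch of this early-stopping bookkeeping is given below; the evaluation function is a stand-in, and the patience of 4000 evaluations used by EvoDAG is reduced to keep the example short.

```python
import numpy as np

rng = np.random.default_rng(2)

def early_stopping_loop(evaluate, patience=4000):
    """Stop when the best validation score has not improved for `patience` evaluations."""
    best_score, best_model, since_best = -np.inf, None, 0
    while since_best < patience:
        model, val_score = evaluate()
        if val_score > best_score:
            best_score, best_model, since_best = val_score, model, 0
        else:
            since_best += 1
    return best_model, best_score

# Toy evaluation: the "model" and its validation score are both random draws.
model, score = early_stopping_loop(lambda: (rng.random(), rng.random()), patience=50)
print("best validation score:", score)
```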
In order to provide an idea of the type of models produced by EvoDAG, Figure 1 presents an example of an evolved model; the classifier node shown there is NB, i.e., Naive Bayes using a Gaussian distribution. The figure helps to understand the role of optimizing the $k$ sets of parameters, one for each class, where each node outputs $k$ values; consequently, each node is itself a classifier.
It is well known that in evolutionary algorithms there are runs that do not produce an acceptable result, so to improve the stability and also the accuracy we decided to use Bagging [48] in our approach. We implemented Bagging utilizing the characteristic that a bagging estimator can be expected to perform similarly whether it draws $n$ elements from the training set with replacement or selects $n/2$ elements without replacement (see [49]). In total, we create 30 models by using different seeds in the random function, and the final prediction is the average of the individual predictions.
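The following sketch illustrates this style of bagging, with a scikit-learn classifier standing in for an EvoDAG model (an assumption for illustration): 30 models are trained on n/2 samples drawn without replacement, and their decision values are averaged.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(3)

# 30 models, each trained on n/2 examples drawn without replacement.
decision_values = []
for seed in range(30):
    idx = rng.choice(len(X), size=len(X) // 2, replace=False)
    clf = LogisticRegression(max_iter=2000).fit(X[idx], y[idx])
    decision_values.append(clf.decision_function(X))

# The ensemble prediction is the average of the individual predictions.
avg = np.mean(decision_values, axis=0)
print("accuracy of the averaged decision values:", ((avg > 0).astype(int) == y).mean())
```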
Selection heuristics
Let us recall that in a steady-state evolution there are two stages where selection takes place: on the one hand, selection is used to choose the parents, and, on the other hand, selection is applied to decide which individual in the current population is replaced with the offspring. We analyzed the behavior of EvoDAG when different selection schemes are used for the parents: the first one uses the absolute value of the cosine similarity (sim), the second one is the accuracy (acc), and, for comparison purposes, the third is the traditional tournament selection (fit) and the fourth is random selection (rnd). Regarding the negative selection, two schemes are analyzed: the traditional negative tournament selection (fit) and random selection (rnd).
The selection heuristics proposed here complement the heuristics used in the related work: Novelty Search (NS) [11] measures novelty with a similarity computed over the k-nearest neighbors, GP with NS [12] uses accuracy, and Angle-Driven GP [43] uses the relative angle between the parents and the target behavior. Our heuristic uses the angle between parents but, unlike Angle-Driven GP, does not consider the target behavior; likewise, the accuracy is computed between parents without the k-nearest-neighbor comparison used in GP with NS.
The selection mechanism used in the first two heuristics (sim and acc) is the following. The first parent is selected using random selection. The rest of the parents are chosen using tournament selection (tournament size equal to 2), where the fitness function is replaced with either the cosine similarity or the accuracy; the objective is to minimize the similarity between the parent being selected and the first parent. Furthermore, we analyzed this procedure in two scenarios: in the first one, it is applied only to a subset of the function set, namely $\{\Sigma_{60}, \text{NB}_5, \text{MN}_5, \text{NC}_2\}$, and, for the rest of the functions, random selection is applied; the second scenario applies the procedure to all the functions except those with one argument.
The cosine similarity between vectors $\vec{u}$ and $\vec{v}$ is defined as $\cos(\theta) = \frac{\vec{u} \cdot \vec{v}}{\|\vec{u}\|\,\|\vec{v}\|}$; the range of the function is $[-1, 1]$, where $-1$ corresponds to $180^{\circ}$, $0$ to $90^{\circ}$, and $1$ to $0^{\circ}$. The idea of using the absolute value is to avoid, as far as possible, the inclusion of collinear parents, which are not useful for the subset of functions selected.
The second heuristic consists of selecting individuals based on the labels predicted by the individual. The similarity used is the accuracy, which counts the number of matching predictions between the target and the classifier; here, however, the accuracy is measured between the first parent, acting as the target, and the rest of the parents being selected. The idea is to choose those parents that present a larger difference with respect to the first one.
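A minimal sketch of both heuristics and of the size-2 tournament described above follows; the individuals are random vectors standing in for semantics, and the sign of the output is used as the predicted label for the accuracy-based heuristic (an assumption for illustration).

```python
import numpy as np

rng = np.random.default_rng(4)

def abs_cosine(u, v):
    """Absolute value of the cosine similarity between two semantics."""
    return abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def label_agreement(u, v):
    # "Accuracy" between two individuals: fraction of fitness cases where the
    # predicted labels (here, the sign of the output) coincide.
    return (np.sign(u) == np.sign(v)).mean()

def select_parent(population, first_parent, similarity, tournament_size=2):
    """Size-2 tournament that keeps the candidate least similar to the first parent."""
    candidates = rng.choice(len(population), size=tournament_size, replace=False)
    return min(candidates, key=lambda i: similarity(population[i], first_parent))

population = [rng.normal(size=20) for _ in range(50)]
first = population[rng.integers(len(population))]
print("sim pick:", select_parent(population, first, abs_cosine))
print("acc pick:", select_parent(population, first, label_agreement))
```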
Experiments and Results
This section analyzes the performance of the different selection heuristics proposed and compares them with state-of-the-art classifiers. The classification problems used as benchmarks are 30 datasets taken from the UCI repository [18]. Table 1 shows the dataset information; it can be seen that the datasets are heterogeneous in terms of the number of samples, variables, and classes. Additionally, some of the classification problems are balanced, and others are imbalanced. The table includes Shannon's entropy to indicate the degree of class imbalance in each problem, where 1.0 indicates a perfectly balanced problem.
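The normalized Shannon entropy mentioned above can be computed as in the following sketch (the label lists are illustrative).

```python
import numpy as np
from collections import Counter

def normalized_entropy(labels):
    """Shannon entropy of the class distribution, normalized so 1.0 means perfectly balanced."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(len(p)))

print(normalized_entropy(["a", "a", "b", "b"]))   # 1.0: perfectly balanced
print(normalized_entropy(["a"] * 9 + ["b"]))      # about 0.47: imbalanced
```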
The performance is measured on a test set; in the repository, some of the problems are already split into a training set and a test set, and for those problems where this partition is not present, we created the partition ourselves. In order to improve the reading of tables and figures, we use the following notation. The selection scheme used for selecting the parents is followed by the symbol "-", and then comes the abbreviation of the negative selection scheme. The abbreviations used for selecting the parents are sim, acc, fit, and rnd, which represent selection based on the absolute value of the cosine similarity, selection based on accuracy, tournament selection, and random selection, respectively. Furthermore, the superscript * is used to indicate those systems where the proposed heuristics (sim and acc) are used in all the functions with more than one argument. In addition, the prefix "EvoDAG" is used when it is compared with other state-of-the-art techniques. Table 2 presents the performance, in terms of macro-F1, of EvoDAG with the different selection schemes. The systems are arranged column-wise and sorted by the average rank to facilitate the reading. Each row presents the performance on a classification problem, and the best performance is in boldface. It can be seen that the system with the lowest average rank (lower is better) is the one with accuracy and random in the negative selection (acc-rnd); this system also presents the highest average macro-F1. Comparing the performance of acc-rnd against all other selection schemes (using the Wilcoxon signed-rank test [50] and adjusting the p-values with the Holm-Bonferroni method [51] to account for the multiple comparisons), a statistically significant (95% confidence) difference is observed with sim-fit, sim-rnd, fit-fit, fit-rnd, acc-fit*, and acc-rnd*; interestingly, fit-fit corresponds to tournament selection with a negative tournament, as normally performed in a steady-state evolution. Additionally, it can be observed that acc-rnd is not statistically better than the system using random selection in the two stages of selection, i.e., rnd-rnd. Furthermore, rnd-rnd is in the third position based on the average rank and second using the average macro-F1, being only outperformed by the use of accuracy to select the parents.
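A sketch of this statistical comparison, assuming per-dataset macro-F1 scores for the reference system and two rivals (placeholder numbers, not the reported ones), could look as follows; the Holm-Bonferroni adjustment is implemented directly.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(5)

# Placeholder per-dataset macro-F1 scores (30 datasets) for the reference and two rivals.
reference = rng.uniform(0.6, 0.9, size=30)                      # e.g., acc-rnd
rivals = {"fit-fit": reference - rng.uniform(0.001, 0.05, 30),
          "rnd-rnd": reference + rng.normal(0, 0.01, 30)}

# Wilcoxon signed-rank test per rival, then Holm-Bonferroni adjustment of the p-values.
names = list(rivals)
pvals = np.array([wilcoxon(reference, rivals[n]).pvalue for n in names])
order = np.argsort(pvals)
m = len(pvals)
adjusted = np.empty(m)
running_max = 0.0
for rank, i in enumerate(order):
    running_max = max(running_max, (m - rank) * pvals[i])
    adjusted[i] = min(1.0, running_max)

for n, p, a in zip(names, pvals, adjusted):
    print(f"{n}: p = {p:.4f}, Holm-adjusted p = {a:.4f}")
```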
Comparison of the different selection schemes
Comparing the average rank of the selection schemes used to choose the parents, the traditional tournament selection comes in ninth position; in addition, all of our selection heuristics have a better rank than tournament selection. On the other hand, the heuristics applied only to a subset of the function set (i.e., $\{\Sigma_{60}, \text{NB}_5, \text{MN}_5, \text{NC}_2\}$) obtained a better rank than the counterpart systems that apply them to all functions with arity greater than one; moreover, the worst systems correspond to the use of accuracy in this latter configuration. It is also observed that the systems using the absolute cosine similarity are less affected by the choice between a subset of functions and all functions, whereas this decision affects the use of accuracy the most. Table 2: Comparison of EvoDAG's performance using different selection schemes for selecting the parents and for negative selection. The columns are ordered by the macro-F1 average rank (from best to worst: acc-rnd, acc-fit, rnd-rnd, sim-fit, rnd-fit, sim-fit*, sim-rnd, sim-rnd*, fit-fit, fit-rnd, acc-fit*, acc-rnd*). The symbol * indicates that the selection heuristic was applied to all functions with arity greater than one. The best performance in each problem is indicated in boldface.
Figure 2 shows the evolution, on the training and validation sets, of the best individual found during the search, using the agaricus-lepiota dataset as an example. The performance, in terms of macro-F1, of the best individual is recorded during the evolution of thirty independent executions, and these are presented as boxplots as a function of the number of evaluated individuals. In all cases, the performance of the best individual on the training set is higher than the one obtained on the validation set. Furthermore, it can be observed that the parents' selection scheme based on accuracy (acc) reaches slightly higher values in the first evaluations than tournament selection, which appears as outliers in the boxplots; this behavior persists throughout the evolution and is reflected in both the training and validation sets.
Comparison of EvoDAG with other state-of-the-art classifiers
After analyzing the performance of the different selection schemes, we now compare EvoDAG, under its different selection schemes, against state-of-the-art classifiers. We compare against sixteen classifiers, all of them used with their default parameters as implemented in the scikit-learn Python library [16]; specifically, these classifiers are Perceptron, MLPClassifier, BernoulliNB, GaussianNB, KNeighborsClassifier, NearestCentroid, LogisticRegression, LinearSVC, SVC, SGDClassifier, PassiveAggressiveClassifier, DecisionTreeClassifier, ExtraTreesClassifier, RandomForestClassifier, AdaBoostClassifier, and GradientBoostingClassifier. Two auto-machine-learning libraries are also included in the comparison: autosklearn [17] and TPOT [9]. Figure 3 presents a boxplot of the ranks (using macro-F1 as the performance measure) of the state-of-the-art classifiers and of EvoDAG with the different selection schemes. To facilitate the reading, the boxplots are ordered by the average rank. It is observed from the figure that TPOT is the system with the lowest (best) average rank, followed by EvoDAG with accuracy and random selection, which is in turn followed by autosklearn. Comparing the performance of TPOT against the rest of the classifiers, one can see that TPOT is not statistically different from any of the EvoDAG systems nor from the classifiers that have a better average rank than LogisticRegression.
As can be seen from Figure 3, only two classifiers are better than EvoDAG with random selection, namely TPOT and autosklearn; it is worth noting that these are auto-machine-learning classifiers. Furthermore, let us consider all the classifiers that have a better rank than EvoDAG fit-rnd, which is the EvoDAG variant with the lowest position: TPOT, autosklearn, GradientBoosting, and ExtraTrees. These classifiers have in common the use of decision trees at some point; that is, they are either a variant of decision trees or include them in their search space. Conversely, EvoDAG does not use any form of decision trees.
Besides measuring the performance using macro-F1, Figure 4 presents boxplots of the time required in the training phase by the different algorithms. The boxplot is on a log scale, given the differences in time between the algorithms, and uses time per sample to take into account that the training-set size varies among the datasets. It is not surprising that the systems obtaining the best performance are also the slowest ones. As can be seen from the figure, TPOT is the most time-consuming system, followed by autosklearn and then the EvoDAG systems. On average, TPOT uses 38.7 seconds per sample, autosklearn requires 7.8 seconds per sample, and EvoDAG uses less than one second per sample. Looking at the EvoDAG systems, the selection schemes ordered from slowest to fastest are accuracy, absolute cosine similarity, tournament selection, and random selection. This behavior is expected given the algorithmic complexity: accuracy and cosine similarity require O(n) operations every time a parent is selected and, in addition, these systems compute the fitness in order to perform early stopping or the negative selection. On the other hand, tournament selection and random selection require O(1) operations to complete the selection, although tournament selection needs to create the tournament, whereas random selection does not.
One can combine the information presented in Figures 3 and 4 by performing a Pareto analysis. The classifiers on the Pareto frontier are TPOT, EvoDAG with acc-rnd, EvoDAG with rnd-rnd, GradientBoosting, ExtraTrees, and DecisionTree. From the figures, it can be inferred that the system closest to the elbow is GradientBoosting.
Conclusion
We presented the impact that different selection heuristics have on the performance of a steady-state semantic Genetic Programming system (namely EvoDAG). Selection takes place at two moments during the evolution: when choosing the parents and when deciding which individual to replace. For the former, the heuristics studied are the absolute value of the cosine similarity, accuracy, tournament selection, and random selection; for the latter, negative tournament selection and random selection are analyzed. The results show that our heuristics, cosine similarity and accuracy, outperform EvoDAG using tournament selection, i.e., selection based on fitness; moreover, the heuristic that obtained the best performance was accuracy. It is interesting to note that random selection is competitive, achieving the third position among the different combinations studied.
The performance of EvoDAG with the selection heuristics is analyzed on 30 classification problems taken from the UCI repository. EvoDAG is also compared with 18 state-of-the-art classifiers, 16 of them implemented in the scikit-learn Python library plus two auto-machine-learning algorithms. The results show that EvoDAG using accuracy and random selection is competitive: using the average rank (measured with macro-F1), it obtained the second position, where the best system is TPOT, an auto-machine-learning algorithm, and the third position corresponds to autosklearn. Interestingly, EvoDAG's performance is statistically equivalent to that of the two auto-machine-learning algorithms considered in this comparison, even though EvoDAG uses neither a feature-selection algorithm nor any form of decision trees, as the auto-machine-learning approaches do. We also compared the time required in the training phase of the classifiers. The auto-machine-learning algorithms were the slowest ones, followed by EvoDAG; nonetheless, the difference in time is considerable: TPOT uses, on average, more than 30 seconds per sample, autosklearn 7, and EvoDAG less than one second per instance. | 3,325
1907.07066 | 2959448612 | In a steady-state evolution, tournament selection traditionally uses the fitness function to select the parents, and negative selection chooses an individual to be replaced with an offspring. This contribution focuses on analyzing the behavior, in terms of performance, of different heuristics when used instead of the fitness function in tournament selection. The heuristics analyzed are related to measuring the similarity of the individuals in the semantic space; in addition, the analysis includes random selection and traditional tournament selection. These selection functions were implemented in our Semantic Genetic Programming system, namely EvoDAG, which is inspired by the geometric genetic operators, and were tested on 30 classification problems with a variable number of samples, variables, and classes. The results indicate that accuracy for selecting the parents, combined with random selection in the negative tournament, produces the best combination, and that the difference in performance between this combination and tournament selection is statistically significant. Furthermore, we compare EvoDAG's performance using the selection heuristics against 18 classifiers that include traditional approaches as well as auto-machine-learning techniques. The results indicate that our proposal is competitive with state-of-the-art classifiers. Finally, it is worth mentioning that EvoDAG is available as open-source software. | Ingalalli @cite_13 introduced a GP framework called Multi-dimensional Multi-class Genetic Programming (M2GP). The main idea is to transform the original space into another one using functions evolved with GP; then, a centroid is calculated for each class, and the vectors are assigned to the class corresponding to the nearest centroid using the Mahalanobis distance. M2GP takes as an argument the dimension of the transformed space; this parameter is evolved in M3GP @cite_1 by including specialized search operators that can increase or decrease the number of feature dimensions produced by each tree. They extended M3GP and proposed M4GP @cite_12, which uses a stack-based representation in addition to new selection methods, namely lexicase selection and age-fitness Pareto survival. | {
"abstract": [
"",
"Classification problems are of profound interest for the machine learning community as well as to an array of application fields. However, multi-class classification problems can be very complex, in particular when the number of classes is high. Although very successful in so many applications, GP was never regarded as a good method to perform multi-class classification. In this work, we present a novel algorithm for tree based GP, that incorporates some ideas on the representation of the solution space in higher dimensions. This idea lays some foundations on addressing multi-class classification problems using GP, which may lead to further research in this direction. We test the new approach on a large set of benchmark problems from several different sources, and observe its competitiveness against the most successful state-of-the-art classifiers.",
"Abstract We describe a new multiclass classification method that learns multidimensional feature transformations using genetic programming. This method optimizes models by first performing a transformation of the feature space into a new space of potentially different dimensionality, and then performing classification using a distance function in the transformed space. We analyze a novel program representation for using genetic programming to represent multidimensional features and compare it to other approaches. Similarly, we analyze the use of a distance metric for classification in comparison to simpler techniques more commonly used when applying genetic programming to multiclass classification. Finally, we compare this method to several state-of-the-art classification techniques across a broad set of problems and show that this technique achieves competitive test accuracies while also producing concise models. We also quantify the scalability of the method on problems of varying dimensionality, sample size, and difficulty. The results suggest the proposed method scales well to large feature spaces."
],
"cite_N": [
"@cite_1",
"@cite_13",
"@cite_12"
],
"mid": [
"",
"2114299759",
"2797026760"
]
} | Selection Heuristics on Semantic Genetic Programming for Classification Problems | Classification is a supervised learning problem that consists in finding a function that learns a relation between inputs and outputs, where the outputs are a set of labels. The starting point is the training set composed of input-output pairs, i.e., $\mathcal{X} = \{(\vec{x}_1, y_1), \ldots, (\vec{x}_n, y_n)\}$. The training set is used to find a function, $h$, that minimizes a loss function, $\ell$; that is, $h$ is the function that minimizes $\sum_{(\vec{x}, y) \in \mathcal{X}} \ell(h(\vec{x}), y)$, where the ideal scenario would be $\forall (\vec{x}, y) \in \mathcal{X}: h(\vec{x}) = y$, and the goal is also to accurately predict the labels of unseen inputs. By fixing, a priori, an order on $\mathcal{X}$, one can use a notation normally adopted in Semantic Genetic Programming (GP) (e.g., [1]), which is to represent the target behavior as $\vec{y} = (y_1, \ldots, y_n)$ and the behavior of a function $h$ as $\vec{h} = (h(\vec{x}_1), \ldots, h(\vec{x}_n))$. Using this notation, the function $h$ being searched for is the one whose $\vec{h}$ is as close as possible to $\vec{y}$, where the closeness is measured using the loss function, referred to in GP as the fitness function.
EvoDAG
EvoDAG 1 [13,14] is a Python library that implements a steady-state GP system with tournament selection. EvoDAG is inspired by the implementation of GSGP performed by Castelli et al. [47], where the main idea is to keep track of all the individuals and their behavior, leading to an efficient evaluation of the offspring whose complexity depends only on the number of fitness cases.
Let us recall that the offspring, in the geometric semantic crossover, is $\vec{o} = r\,\vec{p}_1 + (1 - r)\,\vec{p}_2$, where $r$ is a random function or a constant. In [33], we extended this operation by allowing the offspring to be a linear combination of the parents, that is, $\vec{o} = \theta_1 \vec{p}_1 + \theta_2 \vec{p}_2$, where $\theta_1$ and $\theta_2$ are obtained using ordinary least squares (OLS). Continuing with this line of research, in [13], we investigated the case where the offspring is a linear combination of more than two parents and also included the possibility that the parents could be combined using a function randomly selected from the function set.
EvoDAG, as customary, uses a function set, $\mathcal{F} = \{\Sigma_{60}, \Pi_{20}, \max_5, \min_5, \sqrt{\cdot}, |\cdot|, \sin, \tan, \text{atan}, \tanh, \text{hypot}, \text{NB}_5, \text{MN}_5, \text{NC}_2\}$, and a terminal set, $\mathcal{T} = \{x_1, \ldots, x_m\}$, to create the individuals. The functions in the function set are traditional operations, where the subscript indicates the number of arguments. $\mathcal{F}$ also includes classifiers such as Naive Bayes with a Gaussian distribution (NB$_5$), Naive Bayes with a Multinomial distribution (MN$_5$), and Nearest Centroid (NC$_2$).
The initial population starts with $P = \{\theta_1 x_1, \ldots, \theta_m x_m, \text{NB}(x_1, \ldots, x_m), \text{MN}(x_1, \ldots, x_m), \text{NC}(x_1, \ldots, x_m)\}$, where $x_i$ is the $i$-th input and $\theta_i$ is obtained using OLS. In case $|P|$ is lower than the population size, the process continues by including individuals created by randomly selecting a function from $\mathcal{F}$ whose arguments are drawn from the current population $P$. For example, let hypot be the selected function, and let the first and second arguments be $\theta_2 x_2$ and $\text{NB}(x_1, \ldots, x_m)$; then the individual inserted into $P$ is $\theta\,\text{hypot}(\theta_2 x_2, \text{NB}(x_1, \ldots, x_m))$, where $\theta$ is obtained using OLS. This process continues until the population size is reached; EvoDAG sets the population size to 4000.
EvoDAG uses a steady-state evolution; consequently, $P$ is updated by replacing a current individual, selected using negative selection, with an offspring, which can be selected as a parent just after being inserted in $P$. The evolution process is similar to the one used to create the initial population; the difference is in the procedure used to select the arguments. That is, a function $f$ is selected from $\mathcal{F}$, the arguments are selected from $P$ using tournament selection or any of the heuristics analyzed here, and, finally, the parameters associated with $f$ are optimized using either OLS or the procedure used by the classifiers. The addition is defined as $\sum_i \theta_i x_i$, where $x_i$ is an individual in $P$. The rest of the arithmetic functions, the trigonometric functions, min, and max are defined as $\theta f(x_1, \ldots)$, where $f$ is the function at hand and $x_1, \ldots$ are individuals in $P$. The process continues until the stopping criteria are met.
At this point, it is worth mentioning that EvoDAG uses a one-vs-rest scheme on classification problems. That is, a problem with $k$ different classes is converted into $k$ problems, each of which assigns 1 to the current class and $-1$ to the other labels. Instead of evolving one tree per problem, as done, for example, in [45], we decided to use only one tree and optimize $k$ different sets of $\theta$ parameters, one for each class. The result is that each node outputs $k$ values, and the predicted class is the one with the highest value. In the case of the classifiers included in the function set, the output is the log-likelihood.
EvoDAG stops the evolutionary process using early stopping. That is, the training set is split into a smaller training set (a 50% reduction) and a validation set containing the remaining elements. The training set is used to calculate the fitness and the parameters $\theta$, whereas the validation set is used to perform the early stopping and to keep the individual with the best performance on this set. The evolution stops when the best individual on the validation set has not been updated for a defined number of evaluations; EvoDAG sets this to 4000. The final model corresponds to the best individual, on the validation set, found during the whole evolutionary process.
In order to provide an idea of the type of models produced by EvoDAG, Figure 1 presents an example of an evolved model; the classifier node shown there is NB, i.e., Naive Bayes using a Gaussian distribution. The figure helps to understand the role of optimizing the $k$ sets of parameters, one for each class, where each node outputs $k$ values; consequently, each node is itself a classifier.
It is well known that in evolutionary algorithms there are runs that do not produce an acceptable result, so to improve the stability and also the accuracy we decided to use Bagging [48] in our approach. We implemented Bagging utilizing the characteristic that a bagging estimator can be expected to perform similarly whether it draws $n$ elements from the training set with replacement or selects $n/2$ elements without replacement (see [49]). In total, we create 30 models by using different seeds in the random function, and the final prediction is the average of the individual predictions.
Selection heuristics
Let us recall that in a steady-state evolution there are two stages where selection takes place: on the one hand, selection is used to choose the parents, and, on the other hand, selection is applied to decide which individual in the current population is replaced with the offspring. We analyzed the behavior of EvoDAG when different selection schemes are used for the parents: the first one uses the absolute value of the cosine similarity (sim), the second one is the accuracy (acc), and, for comparison purposes, the third is the traditional tournament selection (fit) and the fourth is random selection (rnd). Regarding the negative selection, two schemes are analyzed: the traditional negative tournament selection (fit) and random selection (rnd).
The selection heuristics proposed here complement the heuristics used in the related work: Novelty Search (NS) [11] measures novelty with a similarity computed over the k-nearest neighbors, GP with NS [12] uses accuracy, and Angle-Driven GP [43] uses the relative angle between the parents and the target behavior. Our heuristic uses the angle between parents but, unlike Angle-Driven GP, does not consider the target behavior; likewise, the accuracy is computed between parents without the k-nearest-neighbor comparison used in GP with NS.
The selection mechanism used in the first two heuristics (sim and acc) is the following. The first parent is selected using random selection. The rest of the parents are chosen using tournament selection (tournament size equal to 2), where the fitness function is replaced with either the cosine similarity or the accuracy; the objective is to minimize the similarity between the parent being selected and the first parent. Furthermore, we analyzed this procedure in two scenarios: in the first one, it is applied only to a subset of the function set, namely $\{\Sigma_{60}, \text{NB}_5, \text{MN}_5, \text{NC}_2\}$, and, for the rest of the functions, random selection is applied; the second scenario applies the procedure to all the functions except those with one argument.
The cosine similarity between vectors $\vec{u}$ and $\vec{v}$ is defined as $\cos(\theta) = \frac{\vec{u} \cdot \vec{v}}{\|\vec{u}\|\,\|\vec{v}\|}$; the range of the function is $[-1, 1]$, where $-1$ corresponds to $180^{\circ}$, $0$ to $90^{\circ}$, and $1$ to $0^{\circ}$. The idea of using the absolute value is to avoid, as far as possible, the inclusion of collinear parents, which are not useful for the subset of functions selected.
The second heuristic consists of selecting individuals based on the labels predicted by the individual. The similarity used is the accuracy, which counts the number of matching predictions between the target and the classifier; here, however, the accuracy is measured between the first parent, acting as the target, and the rest of the parents being selected. The idea is to choose those parents that present a larger difference with respect to the first one.
Experiments and Results
This section analyzes the performance of the different selection heuristics proposed and compares them with state-of-the-art classifiers. The classification problems used as benchmarks are 30 datasets taken from the UCI repository [18]. Table 1 shows the dataset information; it can be seen that the datasets are heterogeneous in terms of the number of samples, variables, and classes. Additionally, some of the classification problems are balanced, and others are imbalanced. The table includes Shannon's entropy to indicate the degree of class imbalance in each problem, where 1.0 indicates a perfectly balanced problem.
The performance is measured on a test set; in the repository, some of the problems are already split into a training set and a test set, and for those problems where this partition is not present, we created the partition ourselves. In order to improve the reading of tables and figures, we use the following notation. The selection scheme used for selecting the parents is followed by the symbol "-", and then comes the abbreviation of the negative selection scheme. The abbreviations used for selecting the parents are sim, acc, fit, and rnd, which represent selection based on the absolute value of the cosine similarity, selection based on accuracy, tournament selection, and random selection, respectively. Furthermore, the superscript * is used to indicate those systems where the proposed heuristics (sim and acc) are used in all the functions with more than one argument. In addition, the prefix "EvoDAG" is used when it is compared with other state-of-the-art techniques. Table 2 presents the performance, in terms of macro-F1, of EvoDAG with the different selection schemes. The systems are arranged column-wise and sorted by the average rank to facilitate the reading. Each row presents the performance on a classification problem, and the best performance is in boldface. It can be seen that the system with the lowest average rank (lower is better) is the one with accuracy and random in the negative selection (acc-rnd); this system also presents the highest average macro-F1. Comparing the performance of acc-rnd against all other selection schemes (using the Wilcoxon signed-rank test [50] and adjusting the p-values with the Holm-Bonferroni method [51] to account for the multiple comparisons), a statistically significant (95% confidence) difference is observed with sim-fit, sim-rnd, fit-fit, fit-rnd, acc-fit*, and acc-rnd*; interestingly, fit-fit corresponds to tournament selection with a negative tournament, as normally performed in a steady-state evolution. Additionally, it can be observed that acc-rnd is not statistically better than the system using random selection in the two stages of selection, i.e., rnd-rnd. Furthermore, rnd-rnd is in the third position based on the average rank and second using the average macro-F1, being only outperformed by the use of accuracy to select the parents.
Comparison of the different selection schemes
Comparing the average rank of the selection schemes used to choose the parents, the traditional tournament selection comes in ninth position; in addition, all of our selection heuristics have a better rank than tournament selection. On the other hand, the heuristics applied only to a subset of the function set (i.e., $\{\Sigma_{60}, \text{NB}_5, \text{MN}_5, \text{NC}_2\}$) obtained a better rank than the counterpart systems that apply them to all functions with arity greater than one; moreover, the worst systems correspond to the use of accuracy in this latter configuration. It is also observed that the systems using the absolute cosine similarity are less affected by the choice between a subset of functions and all functions, whereas this decision affects the use of accuracy the most. Table 2: Comparison of EvoDAG's performance using different selection schemes for selecting the parents and for negative selection. The columns are ordered by the macro-F1 average rank (from best to worst: acc-rnd, acc-fit, rnd-rnd, sim-fit, rnd-fit, sim-fit*, sim-rnd, sim-rnd*, fit-fit, fit-rnd, acc-fit*, acc-rnd*). The symbol * indicates that the selection heuristic was applied to all functions with arity greater than one. The best performance in each problem is indicated in boldface.
Figure 2 shows the evolution, on the training and validation sets, of the best individual found during the search, using the agaricus-lepiota dataset as an example. The performance, in terms of macro-F1, of the best individual is recorded during the evolution of thirty independent executions, and these are presented as boxplots as a function of the number of evaluated individuals. In all cases, the performance of the best individual on the training set is higher than the one obtained on the validation set. Furthermore, it can be observed that the parents' selection scheme based on accuracy (acc) reaches slightly higher values in the first evaluations than tournament selection, which appears as outliers in the boxplots; this behavior persists throughout the evolution and is reflected in both the training and validation sets.
Comparison of EvoDAG with other state-of-the-art classifiers
After analyzing the performance of the different selection schemes, we now compare EvoDAG, under its different selection schemes, against state-of-the-art classifiers. We compare against sixteen classifiers, all of them used with their default parameters as implemented in the scikit-learn Python library [16]; specifically, these classifiers are Perceptron, MLPClassifier, BernoulliNB, GaussianNB, KNeighborsClassifier, NearestCentroid, LogisticRegression, LinearSVC, SVC, SGDClassifier, PassiveAggressiveClassifier, DecisionTreeClassifier, ExtraTreesClassifier, RandomForestClassifier, AdaBoostClassifier, and GradientBoostingClassifier. Two auto-machine-learning libraries are also included in the comparison: autosklearn [17] and TPOT [9]. Figure 3 presents a boxplot of the ranks (using macro-F1 as the performance measure) of the state-of-the-art classifiers and of EvoDAG with the different selection schemes. To facilitate the reading, the boxplots are ordered by the average rank. It is observed from the figure that TPOT is the system with the lowest (best) average rank, followed by EvoDAG with accuracy and random selection, which is in turn followed by autosklearn. Comparing the performance of TPOT against the rest of the classifiers, one can see that TPOT is not statistically different from any of the EvoDAG systems nor from the classifiers that have a better average rank than LogisticRegression.
As can be seen from Figure 3, only two classifiers are better than EvoDAG with random selection, namely TPOT and autosklearn; it is worth noting that these are auto-machine-learning classifiers. Furthermore, let us consider all the classifiers that have a better rank than EvoDAG fit-rnd, which is the EvoDAG variant with the lowest position: TPOT, autosklearn, GradientBoosting, and ExtraTrees. These classifiers have in common the use of decision trees at some point; that is, they are either a variant of decision trees or include them in their search space. Conversely, EvoDAG does not use any form of decision trees.
Besides measuring the performance using macro-F1, Figure 4 presents boxplots of the time required in the training phase by the different algorithms. The boxplot is on a log scale, given the differences in time between the algorithms, and uses time per sample to take into account that the training-set size varies among the datasets. It is not surprising that the systems obtaining the best performance are also the slowest ones. As can be seen from the figure, TPOT is the most time-consuming system, followed by autosklearn and then the EvoDAG systems. On average, TPOT uses 38.7 seconds per sample, autosklearn requires 7.8 seconds per sample, and EvoDAG uses less than one second per sample. Looking at the EvoDAG systems, the selection schemes ordered from slowest to fastest are accuracy, absolute cosine similarity, tournament selection, and random selection. This behavior is expected given the algorithmic complexity: accuracy and cosine similarity require O(n) operations every time a parent is selected and, in addition, these systems compute the fitness in order to perform early stopping or the negative selection. On the other hand, tournament selection and random selection require O(1) operations to complete the selection, although tournament selection needs to create the tournament, whereas random selection does not.
One can combine the information presented in Figures 3 and 4 by performing a Pareto analysis. The classifiers on the Pareto frontier are TPOT, EvoDAG with acc-rnd, EvoDAG with rnd-rnd, GradientBoosting, ExtraTrees, and DecisionTree. From the figures, it can be inferred that the system closest to the elbow is GradientBoosting.
Conclusion
We presented the impact that different selection heuristics have on the performance of a steady-state semantic Genetic Programming system (namely EvoDAG). Selection takes place at two moments during the evolution: when choosing the parents and when deciding which individual to replace. For the former, the heuristics studied are the absolute value of the cosine similarity, accuracy, tournament selection, and random selection; for the latter, negative tournament selection and random selection are analyzed. The results show that our heuristics, cosine similarity and accuracy, outperform EvoDAG using tournament selection, i.e., selection based on fitness; moreover, the heuristic that obtained the best performance was accuracy. It is interesting to note that random selection is competitive, achieving the third position among the different combinations studied.
The performance of EvoDAG with the selection heuristics is analyzed on 30 classification problems taken from the UCI repository. EvoDAG is also compared with 18 state-of-the-art classifiers, 16 of them implemented in the scikit-learn Python library plus two auto-machine-learning algorithms. The results show that EvoDAG using accuracy and random selection is competitive: using the average rank (measured with macro-F1), it obtained the second position, where the best system is TPOT, an auto-machine-learning algorithm, and the third position corresponds to autosklearn. Interestingly, EvoDAG's performance is statistically equivalent to that of the two auto-machine-learning algorithms considered in this comparison, even though EvoDAG uses neither a feature-selection algorithm nor any form of decision trees, as the auto-machine-learning approaches do. We also compared the time required in the training phase of the classifiers. The auto-machine-learning algorithms were the slowest ones, followed by EvoDAG; nonetheless, the difference in time is considerable: TPOT uses, on average, more than 30 seconds per sample, autosklearn 7, and EvoDAG less than one second per instance. | 3,325
1907.07066 | 2959448612 | In a steady-state evolution, tournament selection traditionally uses the fitness function to select the parents, and negative selection chooses an individual to be replaced with an offspring. This contribution focuses on analyzing the behavior, in terms of performance, of different heuristics when used instead of the fitness function in tournament selection. The heuristics analyzed are related to measuring the similarity of the individuals in the semantic space; in addition, the analysis includes random selection and traditional tournament selection. These selection functions were implemented in our Semantic Genetic Programming system, namely EvoDAG, which is inspired by the geometric genetic operators, and were tested on 30 classification problems with a variable number of samples, variables, and classes. The results indicate that accuracy for selecting the parents, combined with random selection in the negative tournament, produces the best combination, and that the difference in performance between this combination and tournament selection is statistically significant. Furthermore, we compare EvoDAG's performance using the selection heuristics against 18 classifiers that include traditional approaches as well as auto-machine-learning techniques. The results indicate that our proposal is competitive with state-of-the-art classifiers. Finally, it is worth mentioning that EvoDAG is available as open-source software. | Naredo @cite_33 use NS for evolving genetic programming classifiers based on M3GP, where the difference lies in the procedure used to compute the fitness. Each GP individual is represented as a binary vector whose length is the training-set size, and each vector element is set to 1 if the classifier assigns the class label correctly and 0 otherwise. These binary vectors are then used to measure the sparseness among individuals: the greater the sparseness, the higher the fitness value. Their results show that all their NS variants achieve competitive results relative to the traditional objective-based search. | {
"abstract": [
"Novelty Search is applied for the first time to supervised classification with GP.Two new variants of NS are proposed, overcoming some of its main shortcomings.NS achieves competitive results compared to objective-based search.Results show bloat control properties in binary tasks for NS. Novelty Search (NS) is a unique approach towards search and optimization, where an explicit objective function is replaced by a measure of solution novelty. However, NS has been mostly used in evolutionary robotics while its usefulness in classic machine learning problems has not been explored. This work presents a NS-based genetic programming (GP) algorithm for supervised classification. Results show that NS can solve real-world classification tasks, the algorithm is validated on real-world benchmarks for binary and multiclass problems. These results are made possible by using a domain-specific behavior descriptor. Moreover, two new versions of the NS algorithm are proposed, Probabilistic NS (PNS) and a variant of Minimal Criteria NS (MCNS). The former models the behavior of each solution as a random vector and eliminates all of the original NS parameters while reducing the computational overhead of the NS algorithm. The latter uses a standard objective function to constrain and bias the search towards high performance solutions. The paper also discusses the effects of NS on GP search dynamics and code growth. Results show that NS can be used as a realistic alternative for supervised classification, and specifically for binary problems the NS algorithm exhibits an implicit bloat control ability."
],
"cite_N": [
"@cite_33"
],
"mid": [
"2467404015"
]
} | Selection Heuristics on Semantic Genetic Programming for Classification Problems | Classification is a supervised learning problem that consists in finding a function that learns a relation between inputs and outputs, where the outputs are a set of labels. The starting point is the training set composed of input-output pairs, i.e., $\mathcal{X} = \{(\vec{x}_1, y_1), \ldots, (\vec{x}_n, y_n)\}$. The training set is used to find a function $h$ that minimizes a loss function $\ell$; that is, $h$ is the function that minimizes $\sum_{(\vec{x}, y) \in \mathcal{X}} \ell(h(\vec{x}), y)$, where the ideal scenario would be $\forall_{(\vec{x}, y) \in \mathcal{X}}\; h(\vec{x}) = y$, and which also accurately predicts the labels of unseen inputs. By fixing, a priori, an order on $\mathcal{X}$, one can use a notation normally adopted in Semantic Genetic Programming (GP) (e.g., [1]), which is to represent the target behavior as $\vec{y} = (y_1, \ldots, y_n)$ and the behavior of function $h$ as $\vec{h} = (h(\vec{v}_1), \ldots, h(\vec{v}_n))$. Using this notation, the function $h$ being sought is the one whose $\vec{h}$ is as close as possible to $\vec{y}$, where the closeness is measured using the loss function, referred to in GP as the fitness function.
EvoDAG
EvoDAG 1 [13,14] is a python library that implements a steady-state GP system with tournament selection. EvoDAG is inspired by the implementation of GSGP performed by Castelli et al. [47]; where the main idea is to keep track of all the individuals and their behavior leading to an efficient evaluation of the offspring whose complexity depends only on the number of fitness cases.
Let us recall that the offspring, in the geometric semantic crossover, is $\vec{o} = r\,\vec{p}_1 + (1-r)\,\vec{p}_2$, where $r$ is a random function or a constant. In [33], we decided to extend this operation by allowing the offspring to be a linear combination of the parents, that is, $\vec{o} = \theta_1 \vec{p}_1 + \theta_2 \vec{p}_2$, where $\theta_1$ and $\theta_2$ are obtained using ordinary least squares (OLS). Continuing with this line of research, in [13] we investigated the case where the offspring is a linear combination of more than two parents, and also included the possibility that the parents could be combined using a function randomly selected from the function set.
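To make the OLS-based combination concrete, the following sketch fits the coefficients of an offspring built as a linear combination of several parents' semantics. It is a simplified illustration, not EvoDAG's actual implementation; the function name and the use of NumPy's lstsq are assumptions.

```python
import numpy as np

def linear_combination_offspring(parents, target):
    """Fit offspring = sum_i theta_i * p_i via ordinary least squares.

    parents: list of 1-D arrays, each the semantics (behaviour) of a parent
             evaluated on the n fitness cases.
    target:  1-D array with the target behaviour y (length n).
    Returns the offspring semantics and the fitted coefficients theta.
    """
    P = np.column_stack(parents)               # n x k matrix of parent semantics
    theta, *_ = np.linalg.lstsq(P, target, rcond=None)
    offspring = P @ theta                      # behaviour of the new node
    return offspring, theta

# Usage with two parents evaluated on five fitness cases
p1 = np.array([0.1, 0.4, 0.3, 0.9, 0.2])
p2 = np.array([1.0, 0.2, 0.5, 0.1, 0.7])
y = np.array([1, -1, 1, 1, -1])
o, theta = linear_combination_offspring([p1, p2], y)
```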
EvoDAG, as customary, uses a function set, $\mathcal{F} = \{\sum_{60}, \prod_{20}, \max_5, \min_5, \sqrt{\cdot}, |\cdot|, \sin, \tan, \text{atan}, \tanh, \text{hypot}, \text{NB}_5, \text{MN}_5, \text{NC}_2\}$, and a terminal set, $\mathcal{T} = \{x_1, \ldots, x_m\}$, to create the individuals. The functions in the function set are traditional operations, where the subscript indicates the number of arguments. Also included in $\mathcal{F}$ are classifiers such as Naive Bayes with Gaussian distribution ($\text{NB}_5$), Naive Bayes with Multinomial distribution ($\text{MN}_5$), and Nearest Centroid ($\text{NC}_2$).
The initial population starts with $\mathcal{P} = \{\theta_1 x_1, \ldots, \theta_m x_m, \text{NB}(x_1, \ldots, x_m), \text{MN}(x_1, \ldots, x_m), \text{NC}(x_1, \ldots, x_m)\}$, where $x_i$ is the $i$-th input and $\theta_i$ is obtained using OLS. In case $|\mathcal{P}|$ is lower than the population size, the process continues by including individuals created by randomly selecting a function from $\mathcal{F}$, with the arguments drawn from the current population $\mathcal{P}$. For example, let hypot be the selected function, and let the first and second arguments be $\theta_2 x_2$ and $\text{NB}(x_1, \ldots, x_m)$. Then the individual inserted into $\mathcal{P}$ is $\theta\,\text{hypot}(\theta_2 x_2, \text{NB}(x_1, \ldots, x_m))$, where $\theta$ is obtained using OLS. This process continues until the population size is reached; EvoDAG sets the population size to 4000.
EvoDAG uses a steady-state evolution; consequently, $\mathcal{P}$ is updated by replacing a current individual, selected using negative selection, with an offspring, which can itself be selected as a parent just after being inserted into $\mathcal{P}$. The evolution process is similar to the one used to create the initial population; the difference is the procedure used to select the arguments. That is, a function $f$ is selected from $\mathcal{F}$, the arguments are selected from $\mathcal{P}$ using tournament selection or any of the heuristics analyzed here, and finally, the parameters associated with $f$ are optimized using either OLS or the procedure used by the classifiers. The addition is defined as $\sum_i \theta_i x_i$, where $x_i$ is an individual in $\mathcal{P}$. The rest of the arithmetic functions, the trigonometric functions, min, and max are defined as $\theta f(x_1, \ldots)$, where $f$ is the function at hand and $x_1$ is an individual in $\mathcal{P}$. The process continues until the stopping criteria are met.
At this point, it is worth mentioning that EvoDAG uses a one-vs-rest scheme on classification problems. That is, a problem with $k$ different classes is converted into $k$ problems, each one assigning 1 to the current class and −1 to the other labels. Instead of evolving one tree per problem, as done, for example, in [45], we decided to use only one tree and optimize $k$ different $\theta$ parameters, one for each class. The result is that each node outputs $k$ values, and the predicted class is the one with the highest value. In the case of the classifiers, the output is the log-likelihood.
EvoDAG stops the evolutionary process using early stopping. That is, the training set is split into a smaller training set (50% reduction), and a validation set containing the remaining elements. The training set is used to calculate the fitness, and the parameters θ. The validation set is used to perform the early stopping and to keep the individual with the best performance in this set. The evolution stops when the best individual, on the validation set, has not been updated in a defined number of evaluations; EvoDAG sets this as 4000. The final model corresponds to the best individual, in the validation set, found during the whole evolutionary process.
In order to provide an idea of the type of models produced by EvoDAG, Figure 1 presents an example model that includes, among other nodes, Naive Bayes using a Gaussian distribution. The figure helps to understand the role of optimizing the $k$ sets of parameters, one for each class, where each node outputs $k$ values; consequently, each node is a classifier.
It is well known that in evolutionary algorithms there are runs that do not produce an acceptable result, so to improve the stability and also the accuracy, we decided to use Bagging [48] in our approach. We implemented Bagging utilizing the characteristic that a bagging estimator can be expected to perform similarly by either drawing $n$ elements from the training set with replacement or selecting $n/2$ elements without replacement (see [49]). In total, we create 30 models by using different seeds in the random number generator, and the final prediction is the average of the individual predictions.
Selection heuristics
Let us recall that in a steady-state evolution there are two stages where selection takes place: on the one hand, selection is used to choose the parents, and on the other hand, selection is applied to decide which individual, in the current population, is replaced with the offspring. We analyzed the behavior of EvoDAG when different selection schemes are used; the first one uses the absolute value of the cosine similarity (sim), the second one is the accuracy (acc), and, for comparison purposes, the third is the traditional tournament selection (fit) and the fourth is random selection (rnd). Regarding the negative selection, two schemes are analyzed: the traditional negative tournament selection (fit) and random selection (rnd).
The selection heuristics proposed here complement the heuristics used in the related work. Novelty Search (NS) [11] measures novelty with a similarity between the k-nearest neighbors, GP with NS [12] uses accuracy, and the Angle-Driven GP [43] uses the relative angle between the parents and the target behavior. Our heuristic uses the angle between parents without considering the target behavior as done in Angle-Driven GP; the accuracy between parents is computed without considering the accuracy between the k-nearest neighbors as done in GP with NS.
The selection mechanism used in the first two heuristics (sim and acc) is the following. The first parent is selected using random selection. The rest of the parents are chosen using tournament selection (tournament size equal to 2), where the fitness function is replaced with either the cosine similarity or the accuracy. The objective is to minimize the similarity between the parent being selected and the first parent. Furthermore, we analyzed this procedure in two scenarios: the first one applies it only to a subset of the function set, namely $\{\sum_{60}, \text{NB}_5, \text{MN}_5, \text{NC}_2\}$, and, for the rest of the functions, random selection is applied. The second scenario applies this procedure to all the functions except those with one argument.
The cosine similarity between vectors $\vec{u}$ and $\vec{v}$ is defined as $\cos(\theta) = \frac{\vec{u} \cdot \vec{v}}{\|\vec{u}\|\,\|\vec{v}\|}$; the range of the function is $[-1, 1]$, where $-1$ corresponds to $180°$, $0$ to $90°$, and $1$ to $0°$. The idea of using the absolute value is to avoid, as much as possible, the inclusion of collinear parents, which are not useful for the subset of functions selected.
The second heuristic consists of selecting individuals based on the labels predicted by the individual. The similarity used is the accuracy, which counts the number of correct predictions between the target and the classifier. Here, however, the accuracy is measured between the first parent, acting as the target, and the rest of the parents being selected. The idea is to choose those parents that present a more significant difference from the first one.
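The following sketch illustrates the parent-selection mechanism described above. It is a simplified illustration rather than EvoDAG's actual code; the function names and the use of NumPy are assumptions. The first parent is drawn at random, and each additional parent is picked with a size-2 tournament that minimizes either the absolute cosine similarity or the label agreement with respect to the first parent.

```python
import numpy as np

def abs_cos_sim(u, v):
    # Absolute value of the cosine similarity between two behaviour vectors.
    return abs(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def label_accuracy(u, v):
    # Fraction of fitness cases where the two individuals predict the same label.
    return np.mean(np.sign(u) == np.sign(v))

def select_parents(population, n_parents, similarity, rng):
    """population: list of 1-D behaviour vectors; similarity: function to minimize."""
    first = population[rng.integers(len(population))]
    parents = [first]
    for _ in range(n_parents - 1):
        a, b = rng.integers(len(population), size=2)   # tournament of size 2
        cand_a, cand_b = population[a], population[b]
        winner = cand_a if similarity(first, cand_a) <= similarity(first, cand_b) else cand_b
        parents.append(winner)
    return parents

rng = np.random.default_rng(0)
pop = [rng.normal(size=10) for _ in range(100)]
parents = select_parents(pop, n_parents=5, similarity=abs_cos_sim, rng=rng)
```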
Experiments and Results
This section analyzes the performance of the different selection heuristics proposed and compares them with state-of-the-art classifiers. The classification problems used as benchmarks are 30 datasets taken from the UCI repository [18]. Table 1 shows the dataset information. It can be seen that the datasets are heterogeneous in terms of the number of samples, variables, and classes. Additionally, some of the classification problems are balanced and others are imbalanced. The table includes Shannon's entropy to indicate the degree of class imbalance in the problem, where 1.0 indicates a perfectly balanced problem.
The performance is measured on a test set; in the repository, some of the problems are already split into a training set and a test set. For those problems where this partition is not available, we created the split ourselves. In order to improve the reading of tables and figures, we use the following notation. The selection scheme used for selecting the parents is followed by the symbol "-", and then comes the abbreviation of the negative selection scheme. The abbreviations used for selecting parents are sim, acc, fit, and rnd, which represent selection based on the absolute value of the cosine similarity, selection based on accuracy, tournament selection, and random selection, respectively. Furthermore, the superscript * is used to indicate those systems where the proposed heuristics (sim and acc) are applied to all the functions with more than one argument. In addition, the prefix "EvoDAG" is used when it is compared with other state-of-the-art techniques. Table 2 presents the performance, in terms of macro-F1, of EvoDAG with different selection schemes. The systems are arranged column-wise and sorted by the average rank to facilitate the reading. Each row presents the performance on a classification problem, and the best performance is in boldface. It can be seen that the system with the lowest average rank (lower is better) is the system with accuracy and random selection in the negative selection (acc-rnd); this system also presents the highest average macro-F1. Comparing the performance of acc-rnd against all other selection schemes (using the Wilcoxon signed-rank test [50] and adjusting the p-values with the Holm-Bonferroni method [51] to account for the multiple comparisons), a statistically significant (95% confidence) difference is observed with sim-fit, sim-rnd, fit-fit, fit-rnd, acc-fit* and acc-rnd*; interestingly, fit-fit corresponds to tournament selection with a negative tournament as normally performed in a steady-state evolution. Additionally, it can be observed that acc-rnd is not statistically better than the system using random selection in the two stages of selection, i.e., rnd-rnd. Furthermore, rnd-rnd is in the third position based on average rank and second using average macro-F1, being outperformed only by accuracy used to select the parents.
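As a brief illustration of the statistical procedure mentioned above (a hedged sketch, not the authors' exact analysis script; the variable names, toy scores, and the use of SciPy are assumptions), the pairwise comparison of acc-rnd against each alternative scheme can be done with a Wilcoxon signed-rank test whose p-values are then adjusted with the Holm-Bonferroni method:

```python
import numpy as np
from scipy.stats import wilcoxon

def holm_bonferroni(pvalues):
    """Return Holm-Bonferroni adjusted p-values (step-down method)."""
    p = np.asarray(pvalues, dtype=float)
    order = np.argsort(p)
    m = len(p)
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * p[idx])
        adjusted[idx] = min(1.0, running_max)
    return adjusted

# macro-F1 of acc-rnd and of each competing scheme on the 30 datasets (toy numbers)
rng = np.random.default_rng(1)
acc_rnd = rng.uniform(0.6, 0.9, size=30)
others = {name: acc_rnd - rng.uniform(0.0, 0.05, size=30)
          for name in ["fit-fit", "fit-rnd", "sim-fit"]}

raw_p = [wilcoxon(acc_rnd, scores).pvalue for scores in others.values()]
adj_p = holm_bonferroni(raw_p)
for name, p in zip(others, adj_p):
    print(f"acc-rnd vs {name}: adjusted p = {p:.4f}")
```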
Comparison of the different selection schemes
Comparing the average rank of the selection schemes used to choose the parents, it can be seen that the traditional tournament selection comes in ninth position; in addition, all of our selection heuristics have a better rank than tournament selection. On the other hand, the heuristics applied only to a subset of the function set (i.e., $\{\sum_{60}, \text{NB}_5, \text{MN}_5, \text{NC}_2\}$) obtained a better rank than the counterpart systems applying them to all functions with arity greater than one; moreover, the worst systems correspond to the use of accuracy in this latter configuration. It is also observed that the systems using the absolute cosine similarity are less affected by the choice between a subset of functions and all the functions, whereas this decision affects the use of accuracy the most. Table 2: Comparison of EvoDAG's performance using different selection schemes for selecting the parents and negative selection. The columns are ordered based on the macro-F1 average rank. The symbol * indicates that the selection heuristic was applied to all functions with arity greater than one. The best performance in each problem is indicated in boldface.
(Table 2 columns, ordered by average rank: acc-rnd, acc-fit, rnd-rnd, sim-fit, rnd-fit, sim-fit*, sim-rnd, sim-rnd*, fit-fit, fit-rnd, acc-fit*, acc-rnd*; the numeric entries are omitted here.) Figure 2 shows the evolution of the best individuals found during the evolution in the training and validation sets. We use the agaricus-lepiota dataset as an example. The performance, in terms of macro-F1, of the best individual is recorded during the evolution of thirty independent executions, and these are presented as boxplots depending on the number of evaluated individuals. It can be seen that, in all cases, the performance of the best individual on the training set is higher than the one obtained on the validation set. Furthermore, it can be observed that the parent selection scheme based on accuracy (acc) has slightly higher values in the first evaluations than tournament selection, which is reflected as outliers in the boxplot. This behavior continues throughout the evolution and is reflected in both the training and validation sets.
Comparison of EvoDAG with other state-of-the-art classifiers
After analyzing the performance of the different selection schemes, we now compare EvoDAG with the different selection schemes against state-of-the-art classifiers. We compare against sixteen classifiers, all of them using their default parameters and implemented in the scikit-learn Python library [16]; specifically, these classifiers are Perceptron, MLPClassifier, BernoulliNB, GaussianNB, KNeighborsClassifier, NearestCentroid, LogisticRegression, LinearSVC, SVC, SGDClassifier, PassiveAggressiveClassifier, DecisionTreeClassifier, ExtraTreesClassifier, RandomForestClassifier, AdaBoostClassifier and GradientBoostingClassifier. Two auto-machine-learning libraries are also included in the comparison: autosklearn [17] and TPOT [9]. Figure 3 presents a boxplot of the ranks (using macro-F1 as the performance measure) of state-of-the-art classifiers and EvoDAG with the different selection schemes. In order to facilitate the reading, the boxplots are ordered by the average rank. It is observed from the figure that TPOT is the system with the lowest rank, followed by EvoDAG with accuracy and random selection, and EvoDAG is followed by autosklearn. Comparing the performance of TPOT against the performance of the rest of the classifiers, one can see that TPOT is not statistically different from all the EvoDAG systems and from the classifiers that have a better average rank than LogisticRegression.
As can be seen from Figure 3, only two classifiers are better than EvoDAG with random selection; these are TPOT and autosklearn, and it is essential to note that these are auto-machine-learning classifiers. Furthermore, let us consider all the classifiers that have a better rank than EvoDAG fit-rnd, which corresponds to the EvoDAG variant with the lowest position. These are TPOT, autosklearn, GradientBoosting, and ExtraTrees; these classifiers have in common the use of decision trees at some point, that is, they are either a variant of decision trees or include them in their search space. Conversely, EvoDAG does not use any form of decision trees.
Besides measuring the performance using macro-F1, Figure 4 presents boxplots of the time required in the training phase by the different algorithms. The boxplot is on a log scale, given the differences in time between the algorithms, and uses time per sample to take into consideration that the datasets vary in training set size. It is not surprising that the systems obtaining the best performance are also the slowest systems. As can be seen from the figure, TPOT is the most time-consuming system, followed by autosklearn, and then the EvoDAG systems. On average, TPOT uses 38.7 seconds per sample, autosklearn requires 7.8 seconds per sample, and EvoDAG utilizes less than one second per sample. Looking at the EvoDAG systems, it can be observed that the selection schemes, ordered from slowest to fastest, are accuracy, absolute cosine similarity, tournament selection, and random selection. This behavior is expected given the algorithmic complexity. Accuracy and cosine similarity require performing O(n) operations every time a parent is selected; in addition, these systems compute the fitness to perform early stopping or the negative selection. On the other hand, tournament selection and random selection require O(1) operations to complete the selection, although tournament selection needs to create the tournament, and random selection does not.
One can combine the information presented in Figures 3 and 4 by performing a Pareto analysis. The classifiers that are on the Pareto frontier are: TPOT, EvoDAG with acc-rnd, EvoDAG with rnd-rnd, GradientBoosting, ExtraTrees, and DecisionTree. From the figures, it can be inferred that the system closest to the elbow is GradientBoosting.
Conclusion
We presented the impact that different selection heuristics have on the performance of a steady-state semantic Genetic Programming system (namely EvoDAG). The selection process takes place at two moments during the evolution: during the selection of the parents, and when deciding which individual to replace. The selection heuristics studied for the first case are the absolute value of the cosine similarity, accuracy, tournament selection, and random selection; for the second case, negative tournament selection and random selection are analyzed. The results show that the use of our heuristics, cosine similarity and accuracy, outperforms EvoDAG using tournament selection, i.e., selection based on fitness. Moreover, the heuristic that obtained the best performance was accuracy. It is interesting to note that random selection is competitive, achieving the third position among the different combinations studied.
The performance of EvoDAG with the selection heuristics is analyzed on 30 classification problems taken from the UCI repository. Also, EvoDAG is compared with 18 state-of-the-art classifiers: 16 of them implemented in the scikit-learn Python library and two auto-machine-learning algorithms. The results show that EvoDAG using accuracy and random selection is competitive; using the average rank (measured with macro-F1), it obtained the second position, where the best system is TPOT, an auto-machine-learning algorithm, and the third position was autosklearn. Interestingly, EvoDAG's performance is statistically equivalent to the two auto-machine-learning algorithms considered in this comparison. However, EvoDAG uses neither a feature selection algorithm nor any form of decision trees, as done by the auto-machine-learning approaches. We also compare the time required in the training phase of the classifiers. The auto-machine-learning algorithms were the slowest ones, followed by EvoDAG. Nonetheless, the difference in time is considerable; TPOT uses, on average, more than 30 seconds per sample, autosklearn about 7, and EvoDAG less than one second per instance. | 3,325
1907.07066 | 2959448612 | In a steady-state evolution, tournament selection traditionally uses the fitness function to select the parents, and negative selection chooses an individual to be replaced with an offspring. This contribution analyzes the behavior, in terms of performance, of different heuristics when used instead of the fitness function in tournament selection. The heuristics analyzed are related to measuring the similarity of the individuals in the semantic space. In addition, the analysis includes random selection and traditional tournament selection. These selection functions were implemented on our Semantic Genetic Programming system, namely EvoDAG, which is inspired by the geometric genetic operators and tested on 30 classification problems with a variable number of samples, variables, and classes. The results indicate that the combination of accuracy for selecting the parents and random selection in the negative tournament performs best, and that the difference in performance between this combination and tournament selection is statistically significant. Furthermore, we compare EvoDAG's performance using the selection heuristics against 18 classifiers that include traditional approaches as well as auto-machine-learning techniques. The results indicate that our proposal is competitive with state-of-the-art classifiers. Finally, it is worth mentioning that EvoDAG is available as open-source software. | Auto machine learning consists of automatically obtaining a classifier (or regressor), covering the steps of preprocessing, feature selection, classifier selection, and hyperparameter tuning. Feurer @cite_26 developed a robust automated machine learning (AutoML) technique using Bayesian optimization methods. It is based on scikit-learn @cite_23, using 15 classifiers, 14 feature preprocessing methods, and 4 data preprocessing methods, giving rise to a structured hypothesis space with 110 hyperparameters. Olson @cite_52 proposed the use of GP to develop a powerful algorithm that automatically constructs and optimizes machine learning pipelines through a Tree-based Pipeline Optimization Tool (TPOT). On classification, the objective consists of maximizing the accuracy score by searching over combinations of 14 preprocessors, five feature selectors, and 11 classifiers, all of them implemented in scikit-learn @cite_23. | {
"abstract": [
"",
"Over the past decade, data science and machine learning has grown from a mysterious art form to a staple tool across a variety of fields in academia, business, and government. In this paper, we introduce the concept of tree-based pipeline optimization for automating one of the most tedious parts of machine learning---pipeline design. We implement a Tree-based Pipeline Optimization Tool (TPOT) and demonstrate its effectiveness on a series of simulated and real-world genetic data sets. In particular, we show that TPOT can build machine learning pipelines that achieve competitive classification accuracy and discover novel pipeline operators---such as synthetic feature constructors---that significantly improve classification accuracy on these data sets. We also highlight the current challenges to pipeline optimization, such as the tendency to produce pipelines that overfit the data, and suggest future research paths to overcome these challenges. As such, this work represents an early step toward fully automating machine learning pipeline design.",
"Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http: scikit-learn.sourceforge.net."
],
"cite_N": [
"@cite_26",
"@cite_52",
"@cite_23"
],
"mid": [
"",
"2951127468",
"2101234009"
]
} | Selection Heuristics on Semantic Genetic Programming for Classification Problems | Classification is a supervised learning problem that consists in finding a function that learns a relation between inputs and outputs, where the outputs are a set of labels. The starting point is the training set composed of input-output pairs, i.e., $\mathcal{X} = \{(\vec{x}_1, y_1), \ldots, (\vec{x}_n, y_n)\}$. The training set is used to find a function $h$ that minimizes a loss function $\ell$; that is, $h$ is the function that minimizes $\sum_{(\vec{x}, y) \in \mathcal{X}} \ell(h(\vec{x}), y)$, where the ideal scenario would be $\forall_{(\vec{x}, y) \in \mathcal{X}}\; h(\vec{x}) = y$, and which also accurately predicts the labels of unseen inputs. By fixing, a priori, an order on $\mathcal{X}$, one can use a notation normally adopted in Semantic Genetic Programming (GP) (e.g., [1]), which is to represent the target behavior as $\vec{y} = (y_1, \ldots, y_n)$ and the behavior of function $h$ as $\vec{h} = (h(\vec{v}_1), \ldots, h(\vec{v}_n))$. Using this notation, the function $h$ being sought is the one whose $\vec{h}$ is as close as possible to $\vec{y}$, where the closeness is measured using the loss function, referred to in GP as the fitness function.
EvoDAG
EvoDAG 1 [13,14] is a python library that implements a steady-state GP system with tournament selection. EvoDAG is inspired by the implementation of GSGP performed by Castelli et al. [47]; where the main idea is to keep track of all the individuals and their behavior leading to an efficient evaluation of the offspring whose complexity depends only on the number of fitness cases.
Let us recall that the offspring, in the geometric semantic crossover, is $\vec{o} = r\,\vec{p}_1 + (1-r)\,\vec{p}_2$, where $r$ is a random function or a constant. In [33], we decided to extend this operation by allowing the offspring to be a linear combination of the parents, that is, $\vec{o} = \theta_1 \vec{p}_1 + \theta_2 \vec{p}_2$, where $\theta_1$ and $\theta_2$ are obtained using ordinary least squares (OLS). Continuing with this line of research, in [13] we investigated the case where the offspring is a linear combination of more than two parents, and also included the possibility that the parents could be combined using a function randomly selected from the function set.
EvoDAG, as customary, uses a function set, $\mathcal{F} = \{\sum_{60}, \prod_{20}, \max_5, \min_5, \sqrt{\cdot}, |\cdot|, \sin, \tan, \text{atan}, \tanh, \text{hypot}, \text{NB}_5, \text{MN}_5, \text{NC}_2\}$, and a terminal set, $\mathcal{T} = \{x_1, \ldots, x_m\}$, to create the individuals. The functions in the function set are traditional operations, where the subscript indicates the number of arguments. Also included in $\mathcal{F}$ are classifiers such as Naive Bayes with Gaussian distribution ($\text{NB}_5$), Naive Bayes with Multinomial distribution ($\text{MN}_5$), and Nearest Centroid ($\text{NC}_2$).
The initial population starts with $\mathcal{P} = \{\theta_1 x_1, \ldots, \theta_m x_m, \text{NB}(x_1, \ldots, x_m), \text{MN}(x_1, \ldots, x_m), \text{NC}(x_1, \ldots, x_m)\}$, where $x_i$ is the $i$-th input and $\theta_i$ is obtained using OLS. In case $|\mathcal{P}|$ is lower than the population size, the process continues by including individuals created by randomly selecting a function from $\mathcal{F}$, with the arguments drawn from the current population $\mathcal{P}$. For example, let hypot be the selected function, and let the first and second arguments be $\theta_2 x_2$ and $\text{NB}(x_1, \ldots, x_m)$. Then the individual inserted into $\mathcal{P}$ is $\theta\,\text{hypot}(\theta_2 x_2, \text{NB}(x_1, \ldots, x_m))$, where $\theta$ is obtained using OLS. This process continues until the population size is reached; EvoDAG sets the population size to 4000.
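As a small illustration of how the coefficient of each initial individual $\theta_i x_i$ could be obtained (a sketch assuming a plain least-squares fit of a single input against the target; not EvoDAG's internal code, and the function name is hypothetical):

```python
import numpy as np

def fit_scale(x, y):
    """Closed-form OLS for a single input: theta = <x, y> / <x, x>."""
    return float(np.dot(x, y) / np.dot(x, x))

# One column of the training inputs and the {-1, +1} target of one class
x1 = np.array([0.5, 1.2, -0.3, 2.0, 0.7])
y = np.array([1, 1, -1, 1, -1])
theta1 = fit_scale(x1, y)
individual_behaviour = theta1 * x1   # semantics of the individual theta_1 * x_1
```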
EvoDAG uses a steady-state evolution; consequently, $\mathcal{P}$ is updated by replacing a current individual, selected using negative selection, with an offspring, which can itself be selected as a parent just after being inserted into $\mathcal{P}$. The evolution process is similar to the one used to create the initial population; the difference is the procedure used to select the arguments. That is, a function $f$ is selected from $\mathcal{F}$, the arguments are selected from $\mathcal{P}$ using tournament selection or any of the heuristics analyzed here, and finally, the parameters associated with $f$ are optimized using either OLS or the procedure used by the classifiers. The addition is defined as $\sum_i \theta_i x_i$, where $x_i$ is an individual in $\mathcal{P}$. The rest of the arithmetic functions, the trigonometric functions, min, and max are defined as $\theta f(x_1, \ldots)$, where $f$ is the function at hand and $x_1$ is an individual in $\mathcal{P}$. The process continues until the stopping criteria are met.
At this point, it is worth mentioning that EvoDAG uses a one-vs-rest scheme on classification problems. That is, a problem with $k$ different classes is converted into $k$ problems, each one assigning 1 to the current class and −1 to the other labels. Instead of evolving one tree per problem, as done, for example, in [45], we decided to use only one tree and optimize $k$ different $\theta$ parameters, one for each class. The result is that each node outputs $k$ values, and the predicted class is the one with the highest value. In the case of the classifiers, the output is the log-likelihood.
EvoDAG stops the evolutionary process using early stopping. That is, the training set is split into a smaller training set (50% reduction), and a validation set containing the remaining elements. The training set is used to calculate the fitness, and the parameters θ. The validation set is used to perform the early stopping and to keep the individual with the best performance in this set. The evolution stops when the best individual, on the validation set, has not been updated in a defined number of evaluations; EvoDAG sets this as 4000. The final model corresponds to the best individual, in the validation set, found during the whole evolutionary process.
In order to provide an idea of the type of models produced by EvoDAG, Figure 1 presents an example model that includes, among other nodes, Naive Bayes using a Gaussian distribution. The figure helps to understand the role of optimizing the $k$ sets of parameters, one for each class, where each node outputs $k$ values; consequently, each node is a classifier.
It is well known that in evolutionary algorithms there are runs that do not produce an acceptable result, so to improve the stability and also the accuracy, we decided to use Bagging [48] in our approach. We implemented Bagging utilizing the characteristic that a bagging estimator can be expected to perform similarly by either drawing $n$ elements from the training set with replacement or selecting $n/2$ elements without replacement (see [49]). In total, we create 30 models by using different seeds in the random number generator, and the final prediction is the average of the individual predictions.
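A brief sketch of the two sampling strategies mentioned above (an illustration of the equivalence claim only, with hypothetical function names; not the library's implementation):

```python
import numpy as np

def bootstrap_with_replacement(n, rng):
    # Draw n indices with replacement (classic bagging sample).
    return rng.integers(0, n, size=n)

def half_without_replacement(n, rng):
    # Select n/2 indices without replacement, expected to behave similarly [49].
    return rng.choice(n, size=n // 2, replace=False)

rng = np.random.default_rng(42)
n = 1000
a = bootstrap_with_replacement(n, rng)
b = half_without_replacement(n, rng)
print(len(np.unique(a)), len(b))   # roughly 0.63 * n distinct indices vs exactly n / 2
```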
Selection heuristics
Let us recall that in a steady-state evolution there are two stages where selection takes place, on the one hand, the selection is used to choose the parents, and on the other hand, the selection is applied to decide which individual, in the current population, is replaced with the offspring. We analyzed the behavior of EvoDAG when different selection schemes are used; the first one uses the absolute of the cosine similarity (sim), the second one is the accuracy (acc), and for comparison purposes, the third is the traditional tournament selection (fit), and the fourth is a random selection (rnd). Regarding the negative selection, it is analyzed two schemes, the traditional negative tournament selection (fit), and random selection (rnd).
The selection heuristics proposed here complement the heuristics used in the related work. Novelty Search (NS) [11] measures novelty with a similarity between the k-nearest neighbors, GP with NS [12] uses accuracy, and the Angle-Driven GP [43] uses the relative angle between the parents and the target behavior. Our heuristic uses the angle between parents without considering the target behavior as done in Angle-Driven GP; the accuracy between parents is computed without considering the accuracy between the k-nearest neighbors as done in GP with NS.
The selection mechanism used in the first two heuristics (sim and acc) is the following. The first parent is selected using random selection. The rest of the parents are chosen using tournament selection (tournament size equal to 2), where the fitness function is replaced with either the cosine similarity or the accuracy. The objective is to minimize the similarity between the parent being selected and the first parent. Furthermore, we analyzed this procedure in two scenarios: the first one applies it only to a subset of the function set, namely $\{\sum_{60}, \text{NB}_5, \text{MN}_5, \text{NC}_2\}$, and, for the rest of the functions, random selection is applied. The second scenario applies this procedure to all the functions except those with one argument.
The cosine similarity between vectors $\vec{u}$ and $\vec{v}$ is defined as $\cos(\theta) = \frac{\vec{u} \cdot \vec{v}}{\|\vec{u}\|\,\|\vec{v}\|}$; the range of the function is $[-1, 1]$, where $-1$ corresponds to $180°$, $0$ to $90°$, and $1$ to $0°$. The idea of using the absolute value is to avoid, as much as possible, the inclusion of collinear parents, which are not useful for the subset of functions selected.
The second heuristic consists of selecting individuals based on the labels predicted by the individual. The similarity used is the accuracy, which counts the number of correct predictions between the target and the classifier. Here, however, the accuracy is measured between the first parent, acting as the target, and the rest of the parents being selected. The idea is to choose those parents that present a more significant difference from the first one.
Experiments and Results
This section analyzed the performance of the different selection heuristic proposed and compared it with state-of-the-art classifiers. The classification problems used as benchmarks are 30 datasets taken from the UCI repository [18]. Table 1 shows the dataset information. It can be seen that the datasets are heterogeneous in terms of the number of samples, variables, and classes. Additionally, some of the classification problems are balanced, and others are imbalanced. The table includes Shannon's entropy to indicate the degree of the class-imbalance in the problem, where 1.0 indicates a perfect balance problem.
The performance is measured on a test set; in the repository, some of the problems are already split into a training set and a test set. For those problems where this partition is not available, we created the split ourselves. In order to improve the reading of tables and figures, we use the following notation. The selection scheme used for selecting the parents is followed by the symbol "-", and then comes the abbreviation of the negative selection scheme. The abbreviations used for selecting parents are sim, acc, fit, and rnd, which represent selection based on the absolute value of the cosine similarity, selection based on accuracy, tournament selection, and random selection, respectively. Furthermore, the superscript * is used to indicate those systems where the proposed heuristics (sim and acc) are applied to all the functions with more than one argument. In addition, the prefix "EvoDAG" is used when it is compared with other state-of-the-art techniques. Table 2 presents the performance, in terms of macro-F1, of EvoDAG with different selection schemes. The systems are arranged column-wise and sorted by the average rank to facilitate the reading. Each row presents the performance on a classification problem, and the best performance is in boldface. It can be seen that the system with the lowest average rank (lower is better) is the system with accuracy and random selection in the negative selection (acc-rnd); this system also presents the highest average macro-F1. Comparing the performance of acc-rnd against all other selection schemes (using the Wilcoxon signed-rank test [50] and adjusting the p-values with the Holm-Bonferroni method [51] to account for the multiple comparisons), a statistically significant (95% confidence) difference is observed with sim-fit, sim-rnd, fit-fit, fit-rnd, acc-fit* and acc-rnd*; interestingly, fit-fit corresponds to tournament selection with a negative tournament as normally performed in a steady-state evolution. Additionally, it can be observed that acc-rnd is not statistically better than the system using random selection in the two stages of selection, i.e., rnd-rnd. Furthermore, rnd-rnd is in the third position based on average rank and second using average macro-F1, being outperformed only by accuracy used to select the parents.
Comparison of the different selection schemes
Comparing the average rank of the selection schemes used to choose the parents, it can be seen that the traditional tournament selection comes in ninth position; in addition, all of our selection heuristics have a better rank than tournament selection. On the other hand, the heuristics applied only to a subset of the function set (i.e., $\{\sum_{60}, \text{NB}_5, \text{MN}_5, \text{NC}_2\}$) obtained a better rank than the counterpart systems applying them to all functions with arity greater than one; moreover, the worst systems correspond to the use of accuracy in this latter configuration. It is also observed that the systems using the absolute cosine similarity are less affected by the choice between a subset of functions and all the functions, whereas this decision affects the use of accuracy the most. Table 2: Comparison of EvoDAG's performance using different selection schemes for selecting the parents and negative selection. The columns are ordered based on the macro-F1 average rank. The symbol * indicates that the selection heuristic was applied to all functions with arity greater than one. The best performance in each problem is indicated in boldface.
(Table 2 columns, ordered by average rank: acc-rnd, acc-fit, rnd-rnd, sim-fit, rnd-fit, sim-fit*, sim-rnd, sim-rnd*, fit-fit, fit-rnd, acc-fit*, acc-rnd*; the numeric entries are omitted here.) Figure 2 shows the evolution of the best individuals found during the evolution in the training and validation sets. We use the agaricus-lepiota dataset as an example. The performance, in terms of macro-F1, of the best individual is recorded during the evolution of thirty independent executions, and these are presented as boxplots depending on the number of evaluated individuals. It can be seen that, in all cases, the performance of the best individual on the training set is higher than the one obtained on the validation set. Furthermore, it can be observed that the parent selection scheme based on accuracy (acc) has slightly higher values in the first evaluations than tournament selection, which is reflected as outliers in the boxplot. This behavior continues throughout the evolution and is reflected in both the training and validation sets.
Comparison of EvoDAG with other state-of-the-art classifiers
After analyzing the performance of the different selection schemes, we now compare EvoDAG with the different selection schemes against state-of-the-art classifiers. We compare against sixteen classifiers, all of them using their default parameters and implemented in the scikit-learn Python library [16]; specifically, these classifiers are Perceptron, MLPClassifier, BernoulliNB, GaussianNB, KNeighborsClassifier, NearestCentroid, LogisticRegression, LinearSVC, SVC, SGDClassifier, PassiveAggressiveClassifier, DecisionTreeClassifier, ExtraTreesClassifier, RandomForestClassifier, AdaBoostClassifier and GradientBoostingClassifier. Two auto-machine-learning libraries are also included in the comparison: autosklearn [17] and TPOT [9]. Figure 3 presents a boxplot of the ranks (using macro-F1 as the performance measure) of state-of-the-art classifiers and EvoDAG with the different selection schemes. In order to facilitate the reading, the boxplots are ordered by the average rank. It is observed from the figure that TPOT is the system with the lowest rank, followed by EvoDAG with accuracy and random selection, and EvoDAG is followed by autosklearn. Comparing the performance of TPOT against the performance of the rest of the classifiers, one can see that TPOT is not statistically different from all the EvoDAG systems and from the classifiers that have a better average rank than LogisticRegression.
As can be seen from Figure 3, only two classifiers are better than EvoDAG with random selection; these are TPOT and autosklearn, it is essential to note that these are auto-machine learning classifiers. Furthermore, let us consider all the classifiers that have a better rank than EvoDAG fit-rnd, which corresponds to EvoDAG with the lowest position. These are TPOT, autosklearn, GradientBoosting, and ExtraTrees; these classifiers have in common the use of decision trees at some points, that is, these are either a variant of decision trees or include them in their search space. Conversely, EvoDAG that do not use any form of decision trees.
Besides measuring the performance using macro-F1, Figure 4 presents boxplots of the time required in the training phase by the different algorithms. The boxplot is on log-scale, given differences in time between the algorithms, and uses time per sample to take into consideration that the dataset varied in the training set size. It is not surprising that the systems obtaining the best performance are also the slowest systems. As can be seen from the figure, TPOT is the most time-consuming system, followed by autosklearn, and then EvoDAG systems. In average TPOT uses 38.7 seconds per sample, autosklearn requires 7.8 seconds per sample, and EvoDAG utilizes less the one second per sample. Looking at EvoDAG systems, it can be observed that the slowest selection schemes are accuracy, absolute cosine similarity, tournament selection, and random selection. This behavior is expected, given the algorithmic complexity. Accuracy and cosine similarity requires to perform O(n) operations every time a parent is selected; in addition, these systems compute the fitness to perform early stopping or the negative selection. On the other hand, tournament and the random selection, requires O(1) operations to complete the selection, although tournament selection needs to create the tournament, and random selection does not.
One can combine the information presented in Figures 3 and 4 by performing a Pareto analysis. The classifiers that are on the Pareto frontier are: TPOT, EvoDAG with acc-rnd, EvoDAG with rnd-rnd, GradientBoosting, ExtraTrees, and DecisionTree. From the figures, it can be inferred that the system closest to the elbow is GradientBoosting.
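The Pareto analysis mentioned above can be reproduced with a few lines. This is a sketch with made-up numbers for illustration only; the actual ranks and times are those reported in Figures 3 and 4.

```python
def pareto_frontier(points):
    """points: dict name -> (avg_rank, seconds_per_sample); lower is better in both."""
    frontier = []
    for name, (rank, time) in points.items():
        dominated = any(r <= rank and t <= time and (r, t) != (rank, time)
                        for r, t in points.values())
        if not dominated:
            frontier.append(name)
    return frontier

classifiers = {
    "TPOT": (1.0, 38.7),
    "EvoDAG acc-rnd": (2.0, 0.9),
    "GradientBoosting": (5.0, 0.05),
    "DecisionTree": (12.0, 0.001),
}
print(pareto_frontier(classifiers))
```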
Conclusion
We presented the impact that different selection heuristics have on the performance of a steadystate semantic Genetic Programming system (namely EvoDAG). The selection process takes place in two moments during the evolution; during the selection of the parents and to replace an individual. The selection heuristics studied in the first place are the absolute of the cosine similarity, accuracy, tournament selection, and random selection; and on the second place, it is analyzed negative tournament selection and random selection. The results show that the use of our heuristics, cosine similarity, and accuracy outperforms EvoDAG using tournament selection, i.e., selection based on fitness. Besides, the heuristics that obtained the best performance was accuracy. It is interesting to note that random selection is competitive, achieving the third position among the different combination studied.
The performance of EvoDAG with the selection heuristics is analyzed on 30 classification problems taken from the UCI repository. Also, EvoDAG is compared with 18 state-of-the-art classifiers, 16 of them are implemented in scikit-learn python library and two auto-machine learning algorithms. The result shows that EvoDAG using accuracy and the random selection is competitive, using the average rank (measured with macro-F1) it obtained the second position where the best system is TPOT which was an auto-machine learning algorithm, and the third position was autosklearn. Interesting, EvoDAG's performance is statistically equivalent to the two auto-machine learning algorithms considered in this comparison. However, EvoDAG uses neither feature selection algorithm nor any form of decision trees, as done by the auto-machine learning approaches. We also include in the comparison of the time required in the training phase of the classifiers. The auto-machine learning algorithms were the slowest ones, followed by EvoDAG. Nonetheless, the difference in time is considerable; TPOT uses, on average more than 30 seconds per sample, autosklearn 7, and EvoDAG less than one second per instance. | 3,325 |
1907.07352 | 2960160011 | Dynamic malware analysis executes the program in an isolated environment and monitors its run-time behaviour (e.g., system API calls) for malware detection. This technique has been proven to be effective against various code obfuscation techniques and newly released ("zero-day") malware. However, existing works typically only consider the API name while ignoring the arguments, or require complex feature engineering operations and expert knowledge to process the arguments. In this paper, we propose a novel and low-cost feature extraction approach, and an effective deep neural network architecture for accurate and fast malware detection. Specifically, the feature representation approach utilizes a feature hashing trick to encode the API call arguments associated with the API name. The deep neural network architecture applies multiple Gated-CNNs (convolutional neural networks) to transform the extracted features of each API call. The outputs are further processed through LSTM (long-short term memory networks) to learn the sequential correlation among API calls. Experiments show that our solution outperforms baselines significantly on a large real dataset. Valuable insights about feature engineering and architecture design are derived from ablation study. | @cite_0 extend a controlled virtual environment called ANUBIS to collect sample's execution trace. An 8-tuple is constructed as the representation, which consists of the system call's name, corresponding objects such as files, and dependencies between these system calls and objects. | {
"abstract": [
"Anti-malware companies receive thousands of malware samples every day. To process this large quantity, a number of automated analysis tools were developed. These tools execute a malicious program in a controlled environment and produce reports that summarize the program’s actions. Of course, the problem of analyzing the reports still remains. Recently, researchers have started to explore automated clustering techniques that help to identify samples that exhibit similar behavior. This allows an analyst to discard reports of samples that have been seen before, while focusing on novel, interesting threats. Unfortunately, previous techniques do not scale well and frequently fail to generalize the observed activity well enough to recognize"
],
"cite_N": [
"@cite_0"
],
"mid": [
"1910686388"
]
} | Dynamic Malware Analysis with Feature Engineering and Feature Learning | Cybersecurity imposes substantial economic cost all over the world. A report (CEA 2018) from the United States government estimates that costs by malicious cyber activities in the U.S. economy lay between $57 billion and $109 billion in 2016. Malicious software (or malware) is one of the major cybersecurity threats that evolves rapidly. It is reported that more than 120 million new malware samples are being discovered every year (AV-TEST 2017). Therefore, the development of malware detection techniques is urgent and necessary.
Researchers have been working on malware detection for decades. The mainstream solutions include static analysis and dynamic analysis. Static analysis methods scan the binary byte-streams of the software to create signatures, such as printable strings, n-gram, instructions, etc (Kruegel et al. 2005). However, the signature-based static analysis might be vulnerable to code obfuscation (Rhode, Burnap, and Jones 2018;Gibert et al. 2018) or inadequate to detect new ("zeroday") malware (Vinod et al. 2009). In contrast, dynamic analysis algorithms execute each software in an isolated environment (e.g., a sandbox) to collect its run-time behaviour information. By using behaviour information, dynamic analysis exerts a higher detection rate and is more robust than static analysis (Damodaran et al. 2017). In this paper, we focus on dynamic analysis.
Among behaviour information, the system API call sequence is the most popular data source as it captures all the operations (including network access, file manipulation operations, etc.) executed by the software. Each API call in the sequence contains two important parts, the API name and the arguments. Each API may have zero or multiple arguments, each of which is represented as a name-value pair. To process behaviour information, a lot of feature engineering methods are proposed. For example, if we consider the API name as a string, then the most N (e.g., 1000) frequent n-gram features can be extracted (n = 1, 2, · · ·) from the sequence. However, it is non-trivial to extract the features from the arguments of heterogeneous types, including strings, integers, addresses, etc.
Recently, researchers have applied deep learning models to dynamic analysis. Deep learning models like convolutional neural network (CNN) and recurrent neural network (RNN) can learn features from the sequential data directly without feature engineering. Nonetheless, the data of traditional deep learning applications like computer vision and natural language processing is homogeneous, e.g., images (or text). It is still challenging to process the heterogeneous API arguments using deep learning models. Therefore, most existing approaches ignore the arguments. There are a few approaches Fang et al. 2017;Agrawal et al. 2018) leveraging API arguments. However, these approaches either treat all arguments as strings Agrawal et al. 2018) or only consider the statistical information of arguments (Ahmed et al. 2009;Tian et al. 2010;Islam et al. 2013). They consequently cannot fully exploit the heterogeneous information from different types of arguments.
In this paper, we propose a novel feature engineering method and a new deep learning architecture for malware detection. In particular, for different types of arguments, our feature engineering method leverages hashing approaches to extract the heterogeneous features separately. The features extracted from the API name, category, and the arguments, are further concatenated and fed into the deep learning model. We use multiple gated CNN models (Dauphin et al. 2017) to learn abstract lower dimensional features from the high dimensional hash features for each API call. The output from the gated CNN models is processed by a bidirectional LSTM to extract the sequential correlation of all API calls.
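As a rough sketch of this kind of architecture (a hedged illustration, not the authors' exact model; layer sizes, kernel widths, the single gated convolution, and the use of PyTorch are assumptions), a gated convolution followed by a bidirectional LSTM over the sequence of per-call feature vectors could be written as:

```python
import torch
import torch.nn as nn

class GatedCNNBiLSTM(nn.Module):
    def __init__(self, feat_dim=102, conv_channels=128, hidden=100, num_classes=2):
        super().__init__()
        # Two parallel 1-D convolutions over the API-call axis: one for values, one for gates.
        self.conv = nn.Conv1d(feat_dim, conv_channels, kernel_size=3, padding=1)
        self.gate = nn.Conv1d(feat_dim, conv_channels, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(conv_channels, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):
        # x: (batch, seq_len, feat_dim) -- one hashed feature vector per API call.
        h = x.transpose(1, 2)                           # (batch, feat_dim, seq_len)
        h = self.conv(h) * torch.sigmoid(self.gate(h))  # gated linear unit (Dauphin et al. 2017)
        h = h.transpose(1, 2)                           # back to (batch, seq_len, channels)
        out, _ = self.bilstm(h)
        return self.classifier(out[:, -1, :])           # last time step used for classification

model = GatedCNNBiLSTM()
logits = model(torch.randn(4, 1000, 102))               # 4 traces, 1000 API calls each
```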
Our solution outperforms all baselines with a large margin. Through extensive ablation study, we find that both feature engineering and model architecture design are crucial for achieving high generalization performance.
The main contributions of this paper include: 1. We propose a novel feature representation for system API arguments. The extracted features from our dataset will be released for public access. 2. We devise a deep neural network architecture to process the extracted features, which combines multiple gated CNNs and a bidirectional LSTM. It outperforms all existing solutions with a large margin. 3. We conduct extensive experiments over a large real dataset 1 . Valuable insights about the feature and model architecture are found through ablation study.
Deep Learning Based Approaches
Previous papers typically ignore the arguments. (Huang and Stokes 2016) use a feature representation with three parts: the presence of runnable code in the arguments, the combination of the API call name with one of its arguments (selected manually), and the 3-grams of the API call sequence. This feature representation is reduced from 50,000 to 4,000 dimensions by a random projection. (Agrawal et al. 2018) propose a feature representation with a one-hot vector from the API call name and the top N frequent n-grams of the argument strings. Their model uses several stacked LSTMs and shows better performance than (Kolosnjaji et al. 2016). They also claim that multiple LSTMs cannot increase the performance.
System Framework
To collect the run-time API calls, we implement the system shown in Figure 1. The system has three parts, PE files collection, behaviour information collection, and feature extraction as well as model training.
PE Files Collection
The workflow of our system starts from the portable executable (PE) files collection. In this paper, we focus on detecting malware in portable executable (PE) file format in Windows systems, which is the most popular malware file format (AV-TEST 2017). This collection part has been implemented by a local anti-virus company, SecureAge Technology of Singapore. In addition, the company maintains a platform with 12 anti-virus engines to classify the PE files. The classification results are aggregated to get the label of each PE file for model training. Once the model is trained, it will be added into the platform as the 13th anti-virus engine. After the collection, an execution queue is maintained to submit the PE files for execution. It monitors the storage usage and decides whether to execute more PE files.
Behaviour Information Collection
Cuckoo (https://cuckoosandbox.org/), an open-source sandbox, is used to run the PE files and gather execution logs. It executes PE files inside virtual machines and uses API hooks to monitor the API call trace (i.e., the behaviour information). Besides, Cuckoo simulates some user actions, such as clicking a button, typing some text, etc. In our system, we maintain dozens of virtual machines on each server. All virtual machines are installed with a 64-bit Windows 7 system and several daily-use applications. We leverage the snapshot feature of the virtual machines to roll them back after execution. All generated logs are stored locally on the Cuckoo server.
Feature Extraction and Model Training
The execution logs generated by the sandbox contain detailed runtime information of the PE files, whose size ranges from several KB to hundreds of GB. We design a feature engineering solution that can run in parallel to extract features from the raw execution logs efficiently. Once the features are extracted, we train our deep learning model on a model server with GPUs for malware classification.
Methodology
Feature Engineering
Most previous works (Qiao et al. 2013;Pascanu et al. 2015;Kolosnjaji et al. 2016) neglect the arguments of the API call, and only consider the API name and category. Consequently, some important (discriminative) information is lost (Agrawal et al. 2018). For example, the features of two write operations (API calls) would be exactly the same if the file path argument is ignored. However, the write operation might be benign when the target file is created by the program itself but be malicious if the target file is a system file. A few works (Trinius et al. 2009;Agrawal et al. 2018;Huang and Stokes 2016) that consider the arguments fail to exploit the heterogeneous information from different types of arguments.
We propose to adapt the hash method from (Weinberger et al. 2009) to encode the name, category and arguments of an API separately. As shown in Table 1, our feature representation consists of different types of information. The API name has 8 bins, and the API category has 4 bins. The API arguments part has 90 bins, 16 for the integer arguments and 74 for the string arguments. For the string arguments, several specific types of strings (file path, Dlls, etc.) are processed. Besides, 10 statistical features are extracted from all printable strings. All these features are concatenated to form a 102-dimension feature vector.
API Name and Category Cuckoo sandbox tracks 312 API calls in total, which belong to 17 categories. Each API name consists of multiple words with the first letter of each word capitalized, such as "GetFileSize". We split the API name into words and then process these words by applying the feature hashing trick below. For the API category, since the category typically is a single word, for example, "network", we split the word into characters and apply the feature hashing trick. In addition, we compute the MD5 value of the API name, category and arguments to remove any consecutively repeated API calls. We use feature hashing (Weinberger et al. 2009) in Equation 1 to encode a sequence of strings into a fixed-length vector. The random variable x denotes a sequence of elements, where each element is either a string or a character. M denotes the number of bins, i.e., 8 for the API name and 4 for the API category. The value of the i-th bin is calculated by:
\phi_i(x) = \sum_{j:\, h(x_j) = i} \xi(x_j) \qquad (1)
where h is a hash function that maps an element, e.g., x_j, to a natural number m ∈ {1, ..., M} as the bin index; ξ is another hash function that maps an element to {±1}. That is, for each element x_j of x whose bin index h(x_j) is i, we add ξ(x_j) into the i-th bin.
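To make Equation 1 concrete, below is a minimal sketch of the signed feature-hashing trick in Python. It is illustrative only, not the authors' code: the helper names are ours, both h and ξ are derived from a single MD5 digest for brevity, and the bin counts (8 for the API name, 4 for the category) follow Table 1.

```python
import hashlib

def _digest(element: str) -> int:
    # Stable integer digest of one element (a word or a character).
    return int(hashlib.md5(element.encode("utf-8")).hexdigest(), 16)

def feature_hash(elements, num_bins):
    """Equation 1: phi_i(x) = sum of xi(x_j) over all j with h(x_j) = i."""
    phi = [0.0] * num_bins
    for e in elements:
        d = _digest(e)
        h = d % num_bins                         # bin index h(x_j)
        xi = 1.0 if (d >> 8) % 2 == 0 else -1.0  # sign hash xi(x_j) in {+1, -1}
        phi[h] += xi
    return phi

# The API name "GetFileSize" is split into words; the category is split into characters.
name_bins = feature_hash(["Get", "File", "Size"], num_bins=8)
category_bins = feature_hash(list("network"), num_bins=4)
```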
API Arguments As for API arguments, there are only two types of values, namely integers and strings. The individual value of an integer is meaningless. The argument name is required to get the meaning of the value. The same integer value might indicate totally different semantics with different argument names. For example, number 22 with the name "port" is different from the one with the name "size".
We adapt the previous feature hashing method to encode the integer's argument name together with its value, as shown in Equation 2. We use the argument name to locate the hash bin. In particular, all the arguments whose names hash to i update the i-th bin via summation. For each such argument, the contribution to the bin is computed as in Equation 2, where ξ(x_j^{name}) is a hash function over the argument name and x_j^{value} is the value of the integer argument. Because integer values may be distributed sparsely over a wide range, we normalize the value using the logarithm to squash the range.
\phi_i(x) = \sum_{j:\, h(x_j^{name}) = i} \xi(x_j^{name}) \log(|x_j^{value}| + 1) \qquad (2)
where h and ξ are the same hash functions as in Equation 1. The string values of API arguments are more complicated than integers. Some strings starting with '0x' contain the address of an object, while others may contain a file path, IP address, URL, or plain text. Besides, some API arguments may even contain the content of an entire file. This variety makes strings challenging to process. According to previous work (Islam et al. 2010; Ahmed et al. 2009), the most important strings are the values of file paths, DLLs, registry keys, URLs, and IP addresses. Therefore, we use the feature hashing method in Equation 1 to extract features for these strings.
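Equation 2 for the integer arguments can be sketched in the same style (again a hypothetical helper, reusing `_digest` from the sketch above); string values go through the Equation 1 hashing as just described.

```python
import math

def hash_int_args(int_args, num_bins=16):
    """Equation 2: the argument name picks the bin and the sign; the value enters via a log term."""
    phi = [0.0] * num_bins
    for name, value in int_args:                 # e.g. [("port", 22), ("size", 4096)]
        d = _digest(name)
        h = d % num_bins                         # h(x_j^name)
        xi = 1.0 if (d >> 8) % 2 == 0 else -1.0  # xi(x_j^name)
        phi[h] += xi * math.log(abs(value) + 1)  # log squashes sparse integer ranges
    return phi
```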
To capture the hierarchical information contained in the strings, we parse the whole string into several substrings and process them individually. For example, we use "C:\\" to identify a file path. For a path like "C:\\a\\b\\c", four substrings are generated, namely "C:", "C:\\a", "C:\\a\\b", and "C:\\a\\b\\c". All these substrings are processed by Equation 1. The same processing method is applied to DLLs, registry keys and IPs. The DLLs are strings ending with ".dll". The registry keys often start with "HKEY ". IPs are strings with four numbers (ranging from 0 to 255) separated by dots. URLs are handled slightly differently: we only generate substrings from the hostname of the URL. For example, for "https://security.ai.cs.org/", the following substrings are generated: "org", "cs.org", "ai.cs.org" and "security.ai.cs.org". In this way, the domain and organization information contribute more to the feature.
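The prefix and suffix expansion described above could look roughly as follows; the delimiters and special cases are simplified assumptions, not the authors' exact rules.

```python
def path_prefixes(path):
    # "C:\a\b\c" -> ["C:", "C:\a", "C:\a\b", "C:\a\b\c"]
    parts = path.split("\\")
    return ["\\".join(parts[:i + 1]) for i in range(len(parts))]

def host_suffixes(host):
    # "security.ai.cs.org" -> ["org", "cs.org", "ai.cs.org", "security.ai.cs.org"]
    labels = host.split(".")
    return [".".join(labels[i:]) for i in range(len(labels) - 1, -1, -1)]

# Each generated substring is then hashed with Equation 1, so shared prefixes
# (drive, directory) and suffixes (domain, organization) land in the same bins.
```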
For the many other types of strings, following previous work (Ahmed et al. 2009; Tian et al. 2010; Islam et al. 2010), we extract statistical information from all the printable strings. The printable strings consist of characters ranging from 0x20 to 0x7f; therefore, all the paths, registry keys, URLs, IPs and other printable strings are included. A string starting with "MZ" is often a buffer that contains an entire PE file and usually occurs in malicious behaviour such as thread injection (Liu et al. 2011); therefore, we additionally count the occurrences of "MZ" strings. A 10-dimension vector records the number of strings, their average length, the number of characters, the entropy of characters across all printable strings, and the numbers of paths, DLLs, URLs, registry keys, IPs and "MZ" strings.
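A rough sketch of the 10-dimension statistical block is shown below; the recognition rules for paths, URLs and IPs are our simplifications of what the text describes.

```python
import math
from collections import Counter

def _is_ip(s):
    parts = s.split(".")
    return len(parts) == 4 and all(p.isdigit() and 0 <= int(p) <= 255 for p in parts)

def string_stats(strings):
    """10 summary features over all printable strings of one API call."""
    chars = "".join(strings)
    counts = Counter(chars)
    total = len(chars)
    entropy = -sum(c / total * math.log2(c / total) for c in counts.values()) if total else 0.0
    return [
        len(strings),                                       # number of strings
        total / len(strings) if strings else 0.0,           # average length
        total,                                              # number of characters
        entropy,                                            # character entropy
        sum(s.startswith("C:\\") for s in strings),         # paths (assumed marker)
        sum(s.lower().endswith(".dll") for s in strings),   # DLLs
        sum(s.startswith("http") for s in strings),         # URLs (assumed marker)
        sum(s.startswith("HKEY") for s in strings),         # registry keys
        sum(_is_ip(s) for s in strings),                    # IP addresses
        sum(s.startswith("MZ") for s in strings),           # embedded PE buffers
    ]
```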
We have not handled other argument types such as virtual addresses and structs, which are relatively less important than the types above. Although the proposed feature engineering method can easily be extended to them using extra bins, we leave a more targeted exploration of these arguments to future research.
Model Architecture
We present a deep neural network architecture that leverages the features from the proposed feature engineering step. Figure 2 is an overview of our proposed deep learning model.
Input Module
After feature engineering, we get an input of size (N, d), where N is the length of the API call sequence and d (102) is the dimension of each extracted API feature. We first normalize the input with a batch normalization layer (Ioffe and Szegedy 2015), which normalizes the input values by subtracting the batch mean and dividing by the batch standard deviation. Gated-CNNs Module Several gated CNNs (Dauphin et al. 2017) are applied after the input module. Gated CNNs allow the selection of important and relevant information, making them competitive with recurrent models on language tasks while consuming fewer resources and less time.
For each gated CNN, the input is fed into two convolution layers separately. Let X_A denote the output of the first convolution layer and X_B the output of the second one; they are combined as X_A ⊗ σ(X_B), where ⊗ denotes element-wise multiplication. Here, σ is the sigmoid function σ(x) = 1/(1 + e^{-x}), and σ(X_B) is regarded as the gate that controls how much of the information from X_A is passed to the next layer of the model.
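As an illustration of the gating X_A ⊗ σ(X_B), here is a small sketch assuming tensorflow.keras (not the authors' released code); the 128 filters and stride of 1 follow the next paragraph, while the padding is an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers

def gated_conv1d(x, kernel_size, filters=128):
    """Gated CNN block: element-wise X_A * sigmoid(X_B) from two parallel 1-D convolutions."""
    x_a = layers.Conv1D(filters, kernel_size, strides=1, padding="same")(x)
    x_b = layers.Conv1D(filters, kernel_size, strides=1, padding="same")(x)
    gate = layers.Activation("sigmoid")(x_b)   # sigma(X_B), the gate
    return layers.Multiply()([x_a, gate])      # X_A gated element-wise by sigma(X_B)
```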
Following the idea in (Shen et al. 2014), 1-D convolutional filters are used as n-gram detectors. As shown in Figure 2, we use two gated CNNs with kernel sizes 2 and 3 respectively. Each convolution layer has 128 filters and a stride of 1.
Bi-LSTM Module All outputs from the gated CNNs are concatenated together. A batch normalization layer is applied to these outputs to reduce overfitting. We use a bidirectional LSTM to learn sequential patterns; each LSTM has 100 units.
LSTM is a recurrent neural network architecture, in which several gates are designed to control the information transmission status so that it is able to capture the long-term context information (Pichotta and Mooney 2016). Bidirectional LSTM is two LSTMs stacking together but with different directional input. Compared to unidirectional LSTM, bidirectional LSTM is able to integrate the information from past and future states simultaneously. Bidirectional LSTM has been proved effective at malware detection by (Agrawal et al. 2018).
Classification Module After learning sequential patterns from Bi-LSTM module, a global max-pooling layer is applied to extract abstract features from the hidden vectors. Instead of using the final activation of the Bi-LSTM, a global max-pooling layer relies on each signal observed throughout the sequence, which helps retain the relevant information learned throughout the sequence.
After the global max-pooling layer, we use a dense layer with 64 units to reduce the dimension of the intermediate vector to 64, followed by a ReLU activation. We then apply a dropout layer with a rate of 0.5 to reduce overfitting. Finally, a dense layer with a single unit reduces the dimension to 1, and a sigmoid activation outputs the probability.
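Putting the modules together, a compact sketch of the overall architecture might look as follows (layer sizes follow the text; the sequence length, padding and other details are assumptions, and `gated_conv1d` refers to the earlier sketch).

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(seq_len, feat_dim=102):
    inputs = layers.Input(shape=(seq_len, feat_dim))
    x = layers.BatchNormalization()(inputs)               # input normalization
    g2 = gated_conv1d(x, kernel_size=2)                   # gated CNN, kernel size 2
    g3 = gated_conv1d(x, kernel_size=3)                   # gated CNN, kernel size 3
    x = layers.Concatenate()([g2, g3])
    x = layers.BatchNormalization()(x)                    # second BN layer
    x = layers.Bidirectional(layers.LSTM(100, return_sequences=True))(x)
    x = layers.GlobalMaxPooling1D()(x)
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)    # probability of malware
    return tf.keras.Model(inputs, outputs)
```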
Our model is supervised with the label associated with each input vector. The binary cross-entropy loss shown in Equation 3 is used to train the model.
\ell(X, y) = -\big( y \log P[Y = 1 \mid X] + (1 - y) \log P[Y = 0 \mid X] \big) \qquad (3)
In addition, we optimize the model with Adam, using a learning rate of 0.001.
Experiments
Dataset
As described before, 12 commercial anti-virus engines are set up to classify the PE files. We label a PE file as positive if 4 or more engines agree that it is malicious, and as negative if none of the engines classifies it as malware. In all other cases, we consider the results inconclusive and exclude the file from our dataset. The collected data are archived by date, and we pick two months of data (April and May) for our experiments. All these PE files are processed by our system (as shown in Figure 1) to collect the API call sequences. Table 2 summarizes the data; each row gives the statistics of one month.
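The labelling rule can be written down directly; this is an illustrative helper, not code from the platform itself.

```python
def label_from_votes(num_malicious_votes):
    """Aggregate the 12 engine verdicts: >=4 malicious -> positive, 0 -> negative, else excluded."""
    if num_malicious_votes >= 4:
        return 1       # malware
    if num_malicious_votes == 0:
        return 0       # benign
    return None        # inconclusive: the sample is dropped from the dataset
```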
Model Evaluation
In order to investigate the performance improvement, we compare the proposed model with three machine learning-based models and three deep learning-based models.
• (Uppal et al. 2014) extract 3-gram vectors from API call names, use the odds ratio to select the most important vectors, and apply an SVM as the model.
• ( ) use a hash table to indicate the presence of strings drawn from both API names and arguments. The generated hash table is then used as the feature vector, and the classifier is a Random Forest.
• (Pascanu et al. 2015) train an RNN language model that predicts the next API call given the previous API calls. The RNN is then frozen and its hidden features are extracted for malware detection. The input of the model is a sequence of d-dimensional one-hot vectors whose elements are all zeros except for the position of the corresponding API call, which is set to 1.
• (Kolosnjaji et al. 2016) propose a model which combines stacked CNNs and RNNs. The input is also one-hot vectors for the API call sequence.
• (Agrawal et al. 2018) extract one-hot vectors from the API call sequence and frequent n-gram vectors from the API arguments. The model uses several stacked LSTMs.
All the experiments are conducted on our dataset. We use 4-fold cross-validation (CV) over the April data to train the models and test on the May data. Since new malware is generated over time, the May dataset is likely to contain many PE files of new malware. Therefore, the test performance indicates, to a certain degree, the model's capability of detecting unknown malware.
Three metrics are considered: the ROC (receiver operating characteristic curve) AUC (Area Under the Curve) score, ACC (accuracy), and recall at an FP (false positive) rate of 0.1%. Recall is defined as the ratio of correctly detected malware PE files over all malware PE files, and the FP rate is the ratio of benign PE files incorrectly identified as malware. Anti-virus products are required to keep a low false alarm rate to avoid disturbing users frequently (Nicholas 2017), so a good model should achieve a high recall at a fixed low false positive rate. We provide 95% confidence intervals for all three metrics. In addition, the inference time per sample, which includes the time for feature processing and model prediction, is also taken into account.
From the experimental results in Table 3, our proposed model achieves the best AUC score, accuracy and recall among all the baseline models on both the CV and the test dataset. Figure 3 displays the ROC curves of all models: the dashed curves are the ROCs of the traditional machine learning models, while the solid lines are the ROCs of the deep learning models. The results illustrate that the traditional machine learning approaches and the deep learning approaches are comparable. It should be noted that the model of ( ) achieves quite good results using only a basic method to extract the string information, which indicates the importance of strings in feature processing; this is why we spend considerable effort on the feature engineering of string data. The results also show that models with argument features generally outperform the ones neglecting arguments: the argument features increase the test AUC score of the traditional machine learning method by 3% and the test AUC score of the deep learning methods by about 1%. Therefore, including API arguments is necessary. Figure 3 also shows a margin between the results on the validation and test datasets. Since the training data is collected before the testing data, the test data is likely to include new malware PE files. Nevertheless, our proposed solution achieves the best performance on the test dataset, which confirms its ability to detect new and constantly evolving malware. As for the inference time, models with argument features take slightly longer. However, hundreds of milliseconds of inference time are relatively small and acceptable, because data collection using the Cuckoo sandbox is time-consuming, costing 3-5 minutes per sample. Training takes about 10 minutes per epoch, which could easily be reduced via distributed training (Ooi et al. 2015).
Ablation Study
The proposed model consists of several components that can be flexibly adjusted, e.g., the gated CNNs, the Bi-LSTM and batch normalization. In order to explore the effects of different configurations, we run several sets of comparison experiments, fixing the other components and changing only the one under test. The results of these experiments serve as the basis for the final model structure.
• Gated CNNs, with three sets of experiments: a single gated CNN with kernel size 2 (2-GatedCNN), two gated CNNs with kernel sizes 2 and 3 (2,3-GatedCNN), and three gated CNNs with kernel sizes 2, 3 and 4 (2,3,4-GatedCNN).
• Batch normalization, with four sets of experiments: the model without any batch normalization (BN) layer, without the first BN layer (after the input), without the second BN layer (after the gated CNNs), and with both BN layers.
• Bi-LSTM, with three sets of experiments: no Bi-LSTM layer (0-Bi-LSTM), one Bi-LSTM layer (1-Bi-LSTM), and two stacked Bi-LSTM layers (2-Bi-LSTM).
Figure 4 depicts the comparison for different numbers of gated CNNs. 2-GatedCNN converges more slowly, although its final performance is very close to the other two models. In addition, increasing the number of gated CNNs from 2,3-GatedCNN to 2,3,4-GatedCNN does not bring any performance improvement. The best AUC scores of 2-GatedCNN and 2,3-GatedCNN are 98.80% and 98.86% respectively; therefore, we choose 2,3-GatedCNN for our model. Figure 5 displays the performance with different numbers of batch normalization layers. Although the four curves tend to become closer at later epochs, the curve with both BN layers shows slightly superior performance, with the highest AUC score at 98.80%. As for the number of Bi-LSTM layers, Figure 6 shows the performance of each configuration. In both figures, the curve of 0-Bi-LSTM is below the other two curves by a large margin, which indicates that the Bi-LSTM is vital. The other two curves are continuously staggered; however, 1-Bi-LSTM is slightly better, with its highest point reaching 98.80%. In addition, 1-Bi-LSTM is about twice as fast as 2-Bi-LSTM. Thus, we choose 1-Bi-LSTM as the final configuration of the proposed model.
Conclusion
In this work, we propose a novel feature engineering method and a new deep learning architecture for malware detection over API call sequences. Hashing tricks are applied to process the heterogeneous information from API calls, including the name, category and arguments, yielding a homogeneous and low-cost feature representation. Then, we use multiple gated CNNs to transform the high-dimensional hash features of each API call and feed the results into a Bi-LSTM to capture the sequential correlations of API calls within the sequence. The experiments show that our approach outperforms all baselines. An ablation study over multiple architecture variations verifies our architecture design decisions.
NCR002-020), and FY2017 SUG Grant. We also thank SecureAge Technology of Singapore for sharing the data. | 4,099 |
1907.07352 | 2960160011 | Dynamic malware analysis executes the program in an isolated environment and monitors its run-time behaviour (e.g., system API calls) for malware detection. This technique has been proven to be effective against various code obfuscation techniques and newly released ("zero-day") malware. However, existing works typically only consider the API name while ignoring the arguments, or require complex feature engineering operations and expert knowledge to process the arguments. In this paper, we propose a novel and low-cost feature extraction approach, and an effective deep neural network architecture for accurate and fast malware detection. Specifically, the feature representation approach utilizes a feature hashing trick to encode the API call arguments associated with the API name. The deep neural network architecture applies multiple Gated-CNNs (convolutional neural networks) to transform the extracted features of each API call. The outputs are further processed through LSTM (long-short term memory networks) to learn the sequential correlation among API calls. Experiments show that our solution outperforms baselines significantly on a large real dataset. Valuable insights about feature engineering and architecture design are derived from ablation study. | @cite_46 introduce a feature representation called Malware Instruction Set (MIST). MIST uses several levels of features to represent a system call. The first level represents the category and name of API calls. The following levels are specified manually for each API call to represent their arguments. Therefore, features at the same level but for different APIs could have different semantics. This inconsistency makes it challenging for machine learning models to learn patterns. Qiao et al. @cite_5 extend MIST and propose a representation called Byte-based Behavior Instruction Set (BBIS). They claim that only the first level of MIST is efficient, and thus BBIS only uses the category and name of API calls. Besides, they propose an algorithm (CARL) to process consecutively repeated API calls. | {
"abstract": [
"Analyzing the usage of Windows Application Program Interface (API) is a common way to understand behaviors of Malicious Software (malware) in either static analysis or dynamic analysis methods. In this work, we focus on the usage of frequent messages in API call sequences, and we hypothesize that frequent itemsets composed of API names and or API arguments could be valuable in the identification of the behavior of malware. For verification, we introduced clustering processes of malware binaries based on their frequent itemsets of API call sequences, and we evaluated the performance of malware clustering. Specific implementation processes for malware clustering, including API calls abstraction, frequent itemsets mining and similarity calculation, are illustrated. The experiment upon a big malware dataset demonstrated that merely using the frequent messages of API call sequences can achieve a high precision for malware clustering while significantly reducing the computation time. This also proves the importance of frequent itemsets in API call sequences for identifying the behavior of malware.",
"We introduce a new representation for monitored behavior of malicious software called Malware Instruction Set (MIST). The representation is optimized for effective and efficient analysis of behavior using data mining and machine learning techniques. It can be obtained automatically during analysis of malware with a behavior monitoring tool or by converting existing behavior reports. The representation is not restricted to a particular monitoring tool and thus can also be used as a meta language to unify behavior reports of different sources."
],
"cite_N": [
"@cite_5",
"@cite_46"
],
"mid": [
"1966917005",
"2121032650"
]
} | Dynamic Malware Analysis with Feature Engineering and Feature Learning | Cybersecurity imposes substantial economic cost all over the world. A report (CEA 2018) from the United States government estimates that costs by malicious cyber activities in the U.S. economy lay between $57 billion and $109 billion in 2016. Malicious software (or malware) is one of the major cybersecurity threats that evolves rapidly. It is reported that more than 120 million new malware samples are being discovered every year (AV-TEST 2017). Therefore, the development of malware detection techniques is urgent and necessary.
Researchers have been working on malware detection for decades. The mainstream solutions include static analysis and dynamic analysis. Static analysis methods scan the binary byte-streams of the software to create signatures, such as printable strings, n-grams, instructions, etc. (Kruegel et al. 2005). However, signature-based static analysis might be vulnerable to code obfuscation (Rhode, Burnap, and Jones 2018; Gibert et al. 2018) or inadequate for detecting new ("zero-day") malware (Vinod et al. 2009). In contrast, dynamic analysis executes the software in an isolated environment (e.g., a sandbox) to collect its run-time behaviour information. By using behaviour information, dynamic analysis achieves a higher detection rate and is more robust than static analysis (Damodaran et al. 2017). In this paper, we focus on dynamic analysis.
Among behaviour information, the system API call sequence is the most popular data source, as it captures all the operations (including network access, file manipulation, etc.) executed by the software. Each API call in the sequence contains two important parts, the API name and the arguments. An API may have zero or multiple arguments, each of which is represented as a name-value pair. To process behaviour information, many feature engineering methods have been proposed. For example, if we consider the API name as a string, then the N (e.g., 1000) most frequent n-gram features (n = 1, 2, · · ·) can be extracted from the sequence. However, it is non-trivial to extract features from arguments of heterogeneous types, including strings, integers, addresses, etc.
Recently, researchers have applied deep learning models to dynamic analysis. Deep learning models like convolutional neural networks (CNN) and recurrent neural networks (RNN) can learn features from the sequential data directly without feature engineering. Nonetheless, the data of traditional deep learning applications like computer vision and natural language processing is homogeneous, e.g., images (or text). It is still challenging to process the heterogeneous API arguments using deep learning models. Therefore, most existing approaches ignore the arguments. There are a few approaches (Fang et al. 2017; Agrawal et al. 2018) leveraging API arguments. However, these approaches either treat all arguments as strings (Agrawal et al. 2018) or only consider the statistical information of arguments (Ahmed et al. 2009; Tian et al. 2010; Islam et al. 2013). They consequently cannot fully exploit the heterogeneous information from different types of arguments.
In this paper, we propose a novel feature engineering method and a new deep learning architecture for malware detection. In particular, for different types of arguments, our feature engineering method leverages hashing approaches to extract the heterogeneous features separately. The features extracted from the API name, category, and the arguments, are further concatenated and fed into the deep learning model. We use multiple gated CNN models (Dauphin et al. 2017) to learn abstract lower dimensional features from the high dimensional hash features for each API call. The output from the gated CNN models is processed by a bidirectional LSTM to extract the sequential correlation of all API calls.
Our solution outperforms all baselines with a large margin. Through extensive ablation study, we find that both feature engineering and model architecture design are crucial for achieving high generalization performance.
The main contributions of this paper include:
1. We propose a novel feature representation for system API arguments. The extracted features from our dataset will be released for public access.
2. We devise a deep neural network architecture to process the extracted features, which combines multiple gated CNNs and a bidirectional LSTM. It outperforms all existing solutions by a large margin.
3. We conduct extensive experiments over a large real dataset 1 . Valuable insights about the features and model architecture are obtained through an ablation study.
Deep Learning Based Approaches
The previous papers typically ignore arguments. (Huang and Stokes 2016) use a feature representation with three parts: the presence of runnable code in the arguments, the combination of the API call name with one of its arguments (selected manually), and the 3-grams of the API call sequence. This feature representation is reduced from 50,000 dimensions to 4,000 by random projection. (Agrawal et al. 2018) propose a feature representation with a one-hot vector for the API call name and the top-N frequent n-grams of the argument strings. Their model uses several stacked LSTMs and shows better performance than (Kolosnjaji et al. 2016). They also claim that stacking more LSTMs does not increase the performance.
System Framework
To collect the run-time API calls, we implement the system shown in Figure 1. The system has three parts: PE file collection, behaviour information collection, and feature extraction together with model training.
PE Files Collection
The workflow of our system starts from the collection of portable executable (PE) files. In this paper, we focus on detecting malware in the PE file format on Windows, which is the most popular malware file format (AV-TEST 2017). The collection is implemented by a local anti-virus company, SecureAge Technology of Singapore. In addition, the company maintains a platform with 12 anti-virus engines to classify the PE files. The classification results are aggregated to obtain the label of each PE file for model training. Once the model is trained, it will be added to the platform as the 13th anti-virus engine. After collection, an execution queue is maintained to submit the PE files for execution; the queue monitors storage usage and decides whether to execute more PE files.
Behaviour Information Collection
Cuckoo 2 (https://cuckoosandbox.org/), an open-source sandbox, is used to run the PE files and gather execution logs. It executes PE files inside virtual machines and uses API hooks to monitor the API call trace (i.e., the behaviour information). Besides, Cuckoo simulates some user actions, such as clicking a button, typing some text, etc. In our system, we maintain dozens of virtual machines on each server. All virtual machines are installed with a 64-bit Windows 7 system and several daily-use applications. We leverage the snapshot feature of the virtual machines to roll them back after each execution. All generated logs are stored locally on the Cuckoo server.
Feature Extraction and Model Training
The execution logs generated by the sandbox contain detailed runtime information of the PE files; their size ranges from several KB to hundreds of GB. We design a feature engineering solution that can run in parallel to extract features from the raw execution logs efficiently. Once the features are extracted, we train our deep learning model on a model server with GPUs for malware classification.
Methodology
Feature Engineering
Most previous works (Qiao et al. 2013;Pascanu et al. 2015;Kolosnjaji et al. 2016) neglect the arguments of the API call, and only consider the API name and category. Consequently, some important (discriminative) information is lost (Agrawal et al. 2018). For example, the features of two write operations (API calls) would be exactly the same if the file path argument is ignored. However, the write operation might be benign when the target file is created by the program itself but be malicious if the target file is a system file. A few works (Trinius et al. 2009;Agrawal et al. 2018;Huang and Stokes 2016) that consider the arguments fail to exploit the heterogeneous information from different types of arguments.
We propose to adapt the hash method from (Weinberger et al. 2009) to encode the name, category and arguments of an API separately. As shown in Table 1, our feature representation consists of different types of information. The API name has 8 bins, and the API category has 4 bins. The API arguments part has 90 bins, 16 for the integer arguments and 74 for the string arguments. For the string arguments, several specific types of strings (file path, Dlls, etc.) are processed. Besides, 10 statistical features are extracted from all printable strings. All these features are concatenated to form a 102-dimension feature vector.
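To picture the layout in Table 1, a hypothetical assembly of one per-API-call vector is sketched below; `feature_hash` and `hash_int_args` are the earlier illustrative helpers, `hash_string_args` stands in for the string hashing and statistics described next, and the exact split of the 74 string bins is our assumption.

```python
def api_call_vector(name_words, category_chars, int_args, string_args):
    # 8 (name) + 4 (category) + 16 (integer args) + 74 (string args incl. statistics) = 102
    vec = []
    vec += feature_hash(name_words, num_bins=8)
    vec += feature_hash(category_chars, num_bins=4)
    vec += hash_int_args(int_args, num_bins=16)
    vec += hash_string_args(string_args)          # assumed to return 74 values
    assert len(vec) == 102
    return vec
```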
API Name and Category Cuckoo sandbox tracks 312 API calls in total which belong to 17 categories. Each API name consists of multiple words with the first letter of each word capitalized, such as "GetFileSize". We split the API name into words and then process these words by applying the feature hashing trick below. For the API category, since the category typically is a single word, for example, "network", we split the word into characters and apply the feature hashing trick. In addition, we compute the MD5 value of the API name, category and arguments to remove any consecutively repeated API calls. We use feature hashing (Weinberger et al. 2009) in Equation 1 to encode a sequence of strings into a fixed-length vector. The random variable x denotes a sequence of elements, where each element is either a string or a character. M denotes the number of bins, i.e., 8 for API name, and 4 for API category. The value of the i-th bin is calculated by:
\phi_i(x) = \sum_{j:\, h(x_j) = i} \xi(x_j) \qquad (1)
where h is a hash function that maps an element, e.g., x_j, to a natural number m ∈ {1, ..., M} as the bin index; ξ is another hash function that maps an element to {±1}. That is, for each element x_j of x whose bin index h(x_j) is i, we add ξ(x_j) into the i-th bin.
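The removal of consecutively repeated API calls mentioned in the previous paragraph can be sketched as follows (a hypothetical helper; the paper only states that an MD5 over the name, category and arguments is compared).

```python
import hashlib
import json

def drop_consecutive_repeats(api_calls):
    """Keep a call only when its (name, category, arguments) digest differs from the previous call."""
    kept, prev = [], None
    for call in api_calls:   # each call: {"name": ..., "category": ..., "arguments": {...}}
        digest = hashlib.md5(json.dumps(call, sort_keys=True).encode("utf-8")).hexdigest()
        if digest != prev:
            kept.append(call)
        prev = digest
    return kept
```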
API Arguments As for API arguments, there are only two types of values, namely integers and strings. The individual value of an integer is meaningless. The argument name is required to get the meaning of the value. The same integer value might indicate totally different semantics with different argument names. For example, number 22 with the name "port" is different from the one with the name "size".
We adapt the previous feature hashing method to encode the integer's argument name together with its value, as shown in Equation 2. We use the argument name to locate the hash bin. In particular, all the arguments whose names hash to i update the i-th bin via summation. For each such argument, the contribution to the bin is computed as in Equation 2, where ξ(x_j^{name}) is a hash function over the argument name and x_j^{value} is the value of the integer argument. Because integer values may be distributed sparsely over a wide range, we normalize the value using the logarithm to squash the range.
\phi_i(x) = \sum_{j:\, h(x_j^{name}) = i} \xi(x_j^{name}) \log(|x_j^{value}| + 1) \qquad (2)
where h and ξ are the same hash functions as in Equation 1. The string values of API arguments are more complicated than integers. Some strings starting with '0x' contain the address of an object, while others may contain a file path, IP address, URL, or plain text. Besides, some API arguments may even contain the content of an entire file. This variety makes strings challenging to process. According to previous work (Islam et al. 2010; Ahmed et al. 2009), the most important strings are the values of file paths, DLLs, registry keys, URLs, and IP addresses. Therefore, we use the feature hashing method in Equation 1 to extract features for these strings.
To capture the hierarchical information contained in the strings, we parse the whole string into several substrings and process them individually. For example, we use "C:\\" to identify a file path. For a path like "C:\\a\\b\\c", four substrings are generated, namely "C:", "C:\\a", "C:\\a\\b", and "C:\\a\\b\\c". All these substrings are processed by Equation 1. The same processing method is applied to DLLs, registry keys and IPs. The DLLs are strings ending with ".dll". The registry keys often start with "HKEY ". IPs are strings with four numbers (ranging from 0 to 255) separated by dots. URLs are handled slightly differently: we only generate substrings from the hostname of the URL. For example, for "https://security.ai.cs.org/", the following substrings are generated: "org", "cs.org", "ai.cs.org" and "security.ai.cs.org". In this way, the domain and organization information contribute more to the feature.
For the many other types of strings, following previous work (Ahmed et al. 2009; Tian et al. 2010; Islam et al. 2010), we extract statistical information from all the printable strings. The printable strings consist of characters ranging from 0x20 to 0x7f; therefore, all the paths, registry keys, URLs, IPs and other printable strings are included. A string starting with "MZ" is often a buffer that contains an entire PE file and usually occurs in malicious behaviour such as thread injection (Liu et al. 2011); therefore, we additionally count the occurrences of "MZ" strings. A 10-dimension vector records the number of strings, their average length, the number of characters, the entropy of characters across all printable strings, and the numbers of paths, DLLs, URLs, registry keys, IPs and "MZ" strings.
We have not handled other argument types such as virtual addresses and structs, which are relatively less important than the types above. Although the proposed feature engineering method can easily be extended to them using extra bins, we leave a more targeted exploration of these arguments to future research.
Model Architecture
We present a deep neural network architecture that leverages the features from the proposed feature engineering step. Figure 2 is an overview of our proposed deep learning model.
Input Module
After feature engineering, we get an input of size (N, d), where N is the length of the API call sequence and d (102) is the dimension of each extracted API feature. We first normalize the input with a batch normalization layer (Ioffe and Szegedy 2015), which normalizes the input values by subtracting the batch mean and dividing by the batch standard deviation. Gated-CNNs Module Several gated CNNs (Dauphin et al. 2017) are applied after the input module. Gated CNNs allow the selection of important and relevant information, making them competitive with recurrent models on language tasks while consuming fewer resources and less time.
For each gated CNN, the input is fed into two convolution layers separately. Let X_A denote the output of the first convolution layer and X_B the output of the second one; they are combined as X_A ⊗ σ(X_B), where ⊗ denotes element-wise multiplication. Here, σ is the sigmoid function σ(x) = 1/(1 + e^{-x}), and σ(X_B) is regarded as the gate that controls how much of the information from X_A is passed to the next layer of the model.
Following the idea in (Shen et al. 2014), 1-D convolutional filters are used as n-gram detectors. As shown in Figure 2, we use two gated CNNs with kernel sizes 2 and 3 respectively. Each convolution layer has 128 filters and a stride of 1.
Bi-LSTM Module All outputs from the gated CNNs are concatenated together. A batch normalization layer is applied to these outputs to reduce overfitting. We use a bidirectional LSTM to learn sequential patterns; each LSTM has 100 units.
LSTM is a recurrent neural network architecture, in which several gates are designed to control the information transmission status so that it is able to capture the long-term context information (Pichotta and Mooney 2016). Bidirectional LSTM is two LSTMs stacking together but with different directional input. Compared to unidirectional LSTM, bidirectional LSTM is able to integrate the information from past and future states simultaneously. Bidirectional LSTM has been proved effective at malware detection by (Agrawal et al. 2018).
Classification Module After learning sequential patterns from Bi-LSTM module, a global max-pooling layer is applied to extract abstract features from the hidden vectors. Instead of using the final activation of the Bi-LSTM, a global max-pooling layer relies on each signal observed throughout the sequence, which helps retain the relevant information learned throughout the sequence.
After the global max-pooling layer, we use a dense layer with 64 units to reduce the dimension of the intermediate vector to 64, followed by a ReLU activation. We then apply a dropout layer with a rate of 0.5 to reduce overfitting. Finally, a dense layer with a single unit reduces the dimension to 1, and a sigmoid activation outputs the probability.
Our model is supervised with the label associated with each input vector. The binary cross-entropy loss shown in Equation 3 is used to train the model.
\ell(X, y) = -\big( y \log P[Y = 1 \mid X] + (1 - y) \log P[Y = 0 \mid X] \big) \qquad (3)
In addition, we optimize the model with Adam, using a learning rate of 0.001.
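With a Keras-style model such as the earlier sketch, the stated objective and optimizer would be configured roughly as below; the sequence length, batch size and number of epochs are not given in the text and are therefore placeholders.

```python
import tensorflow as tf

model = build_model(seq_len=1000, feat_dim=102)   # seq_len is a placeholder assumption
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="binary_crossentropy",                   # Equation 3
    metrics=[tf.keras.metrics.AUC(name="auc")],
)
# model.fit(train_x, train_y, validation_data=(val_x, val_y), epochs=..., batch_size=...)
```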
Experiments
Dataset
As described before, 12 commercial anti-virus engines are set up to classify the PE files. We label a PE file as positive if 4 or more engines agree that it is malicious, and as negative if none of the engines classifies it as malware. In all other cases, we consider the results inconclusive and exclude the file from our dataset. The collected data are archived by date, and we pick two months of data (April and May) for our experiments. All these PE files are processed by our system (as shown in Figure 1) to collect the API call sequences. Table 2 summarizes the data; each row gives the statistics of one month.
Model Evaluation
In order to investigate the performance improvement, we compare the proposed model with three machine learning-based models and three deep learning-based models.
• (Uppal et al. 2014) extract 3-gram vectors from API call names, use the odds ratio to select the most important vectors, and apply an SVM as the model.
• ( ) use a hash table to indicate the presence of strings drawn from both API names and arguments. The generated hash table is then used as the feature vector, and the classifier is a Random Forest.
• (Pascanu et al. 2015) train an RNN language model that predicts the next API call given the previous API calls. The RNN is then frozen and its hidden features are extracted for malware detection. The input of the model is a sequence of d-dimensional one-hot vectors whose elements are all zeros except for the position of the corresponding API call, which is set to 1.
• (Kolosnjaji et al. 2016) propose a model which combines stacked CNNs and RNNs. The input is also one-hot vectors for the API call sequence.
• (Agrawal et al. 2018) extract one-hot vectors from the API call sequence and frequent n-gram vectors from the API arguments. The model uses several stacked LSTMs.
All the experiments are conducted on our dataset. We use 4-fold cross-validation (CV) over the April data to train the models and test on the May data. Since new malware is generated over time, the May dataset is likely to contain many PE files of new malware. Therefore, the test performance indicates, to a certain degree, the model's capability of detecting unknown malware.
Three metrics are considered: the ROC (receiver operating characteristic curve) AUC (Area Under the Curve) score, ACC (accuracy), and recall at an FP (false positive) rate of 0.1%. Recall is defined as the ratio of correctly detected malware PE files over all malware PE files, and the FP rate is the ratio of benign PE files incorrectly identified as malware. Anti-virus products are required to keep a low false alarm rate to avoid disturbing users frequently (Nicholas 2017), so a good model should achieve a high recall at a fixed low false positive rate. We provide 95% confidence intervals for all three metrics. In addition, the inference time per sample, which includes the time for feature processing and model prediction, is also taken into account.
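The recall at a 0.1% false-positive rate can be read off the ROC curve; a small convenience function (ours, using scikit-learn) is shown below.

```python
from sklearn.metrics import roc_curve

def recall_at_fpr(y_true, y_score, target_fpr=0.001):
    """Largest TPR (recall) achievable while the FPR stays at or below target_fpr."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    feasible = [t for f, t in zip(fpr, tpr) if f <= target_fpr]
    return max(feasible) if feasible else 0.0
```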
From the experimental results in Table 3, our proposed model achieves the best AUC score, accuracy and recall among all the baseline models on both the CV and the test dataset. Figure 3 displays the ROC curves of all models: the dashed curves are the ROCs of the traditional machine learning models, while the solid lines are the ROCs of the deep learning models. The results illustrate that the traditional machine learning approaches and the deep learning approaches are comparable. It should be noted that the model of ( ) achieves quite good results using only a basic method to extract the string information, which indicates the importance of strings in feature processing; this is why we spend considerable effort on the feature engineering of string data. The results also show that models with argument features generally outperform the ones neglecting arguments: the argument features increase the test AUC score of the traditional machine learning method by 3% and the test AUC score of the deep learning methods by about 1%. Therefore, including API arguments is necessary. Figure 3 also shows a margin between the results on the validation and test datasets. Since the training data is collected before the testing data, the test data is likely to include new malware PE files. Nevertheless, our proposed solution achieves the best performance on the test dataset, which confirms its ability to detect new and constantly evolving malware. As for the inference time, models with argument features take slightly longer. However, hundreds of milliseconds of inference time are relatively small and acceptable, because data collection using the Cuckoo sandbox is time-consuming, costing 3-5 minutes per sample. Training takes about 10 minutes per epoch, which could easily be reduced via distributed training (Ooi et al. 2015).
Ablation Study
The proposed model consists of several components that can be flexibly adjusted, e.g., the gated CNNs, the Bi-LSTM and batch normalization. In order to explore the effects of different configurations, we run several sets of comparison experiments, fixing the other components and changing only the one under test. The results of these experiments serve as the basis for the final model structure.
• Gated CNNs, with three sets of experiments: a single gated CNN with kernel size 2 (2-GatedCNN), two gated CNNs with kernel sizes 2 and 3 (2,3-GatedCNN), and three gated CNNs with kernel sizes 2, 3 and 4 (2,3,4-GatedCNN).
• Batch normalization, with four sets of experiments: the model without any batch normalization (BN) layer, without the first BN layer (after the input), without the second BN layer (after the gated CNNs), and with both BN layers.
• Bi-LSTM, with three sets of experiments: no Bi-LSTM layer (0-Bi-LSTM), one Bi-LSTM layer (1-Bi-LSTM), and two stacked Bi-LSTM layers (2-Bi-LSTM).
Figure 4 depicts the comparison for different numbers of gated CNNs. 2-GatedCNN converges more slowly, although its final performance is very close to the other two models. In addition, increasing the number of gated CNNs from 2,3-GatedCNN to 2,3,4-GatedCNN does not bring any performance improvement. The best AUC scores of 2-GatedCNN and 2,3-GatedCNN are 98.80% and 98.86% respectively; therefore, we choose 2,3-GatedCNN for our model. Figure 5 displays the performance with different numbers of batch normalization layers. Although the four curves tend to become closer at later epochs, the curve with both BN layers shows slightly superior performance, with the highest AUC score at 98.80%. As for the number of Bi-LSTM layers, Figure 6 shows the performance of each configuration. In both figures, the curve of 0-Bi-LSTM is below the other two curves by a large margin, which indicates that the Bi-LSTM is vital. The other two curves are continuously staggered; however, 1-Bi-LSTM is slightly better, with its highest point reaching 98.80%. In addition, 1-Bi-LSTM is about twice as fast as 2-Bi-LSTM. Thus, we choose 1-Bi-LSTM as the final configuration of the proposed model.
Conclusion
In this work, we propose a novel feature engineering method and a new deep learning architecture for malware detection over API call sequences. Hashing tricks are applied to process the heterogeneous information from API calls, including the name, category and arguments, yielding a homogeneous and low-cost feature representation. Then, we use multiple gated CNNs to transform the high-dimensional hash features of each API call and feed the results into a Bi-LSTM to capture the sequential correlations of API calls within the sequence. The experiments show that our approach outperforms all baselines. An ablation study over multiple architecture variations verifies our architecture design decisions.
NCR002-020), and FY2017 SUG Grant. We also thank SecureAge Technology of Singapore for sharing the data. | 4,099 |
1907.07352 | 2960160011 | Dynamic malware analysis executes the program in an isolated environment and monitors its run-time behaviour (e.g., system API calls) for malware detection. This technique has been proven to be effective against various code obfuscation techniques and newly released ("zero-day") malware. However, existing works typically only consider the API name while ignoring the arguments, or require complex feature engineering operations and expert knowledge to process the arguments. In this paper, we propose a novel and low-cost feature extraction approach, and an effective deep neural network architecture for accurate and fast malware detection. Specifically, the feature representation approach utilizes a feature hashing trick to encode the API call arguments associated with the API name. The deep neural network architecture applies multiple Gated-CNNs (convolutional neural networks) to transform the extracted features of each API call. The outputs are further processed through LSTM (long-short term memory networks) to learn the sequential correlation among API calls. Experiments show that our solution outperforms baselines significantly on a large real dataset. Valuable insights about feature engineering and architecture design are derived from ablation study. | @cite_29 propose a feature representation associating the API call sequence with its arguments. It binds each argument to its API call to form a new sequence. However, this approach leads to an extremely long feature vector and might lose the pattern of the API call sequence. @cite_27 propose another two feature representations. These representations consist of the first 200 API calls as well as their "argument". However, this "argument" only indicates whether the API call is connected with the later one, and it might not preserve sufficient information from the arguments. | {
"abstract": [
"Malware, i.e., malicious software, represents one of the main cyber security threats today. Over the last decade malware has been evolving in terms of the complexity of malicious software and the diversity of attack vectors. As a result modern malware is characterized by sophisticated obfuscation techniques, which hinder the classical static analysis approach. Furthermore, the increased amount of malware that emerges every day, renders a manual approach inefficient. This study tackles the problem of analyzing, detecting and classifying the vast amount of malware in a scalable, efficient and accurate manner. We propose a novel approach for detecting malware and classifying it to either known or novel, i.e., previously unseen malware family. The approach relies on Random Forests classifier for performing both malware detection and family classification. Furthermore, the proposed approach employs novel feature representations for malware classification, that significantly reduces the feature space, while achieving encouraging predictive performance. The approach was evaluated using behavioral traces of over 270,000 malware samples and 837 samples of benign software. The behavioral traces were obtained using a modified version of Cuckoo sandbox, that was able to harvest behavioral traces of the analyzed samples in a time-efficient manner. The proposed system achieves high malware detection rate and promising predictive performance in the family classification, opening the possibility of coping with the use of obfuscation and the growing number of malware.",
"Since signature based methods cannot identify sophisticated malware quickly and effectively, research is moving toward using samples' runtime behavior. But these methods are often slow and have lower detection rate and are not usually used in antivirus software. In this article we introduce a scalable method that relies on utilizing features other than traditional API calls to obtain higher accuracies. Two feature categories including API names and a combination of API names and their input arguments were extracted to investigate their effect in identifying and distinguishing malware and benign applications. Feature selection techniques are then applied to reduce the number of features and enhance the analysis time. Various classifiers were then utilized along with 10-fold cross validation approach to achieve an accuracy of 98.4 with a false positive rate less than two percent in best case. The small number of extracted features in the proposed technique and the high accuracy achieved makes it an appropriate approach to be used in industrial applications."
],
"cite_N": [
"@cite_27",
"@cite_29"
],
"mid": [
"2307930854",
"2018022926"
]
} | Dynamic Malware Analysis with Feature Engineering and Feature Learning | Cybersecurity imposes substantial economic cost all over the world. A report (CEA 2018) from the United States government estimates that costs by malicious cyber activities in the U.S. economy lay between $57 billion and $109 billion in 2016. Malicious software (or malware) is one of the major cybersecurity threats that evolves rapidly. It is reported that more than 120 million new malware samples are being discovered every year (AV-TEST 2017). Therefore, the development of malware detection techniques is urgent and necessary.
Researchers have been working on malware detection for decades. The mainstream solutions include static analysis and dynamic analysis. Static analysis methods scan the binary byte-streams of the software to create signatures, such as printable strings, n-grams, instructions, etc. (Kruegel et al. 2005). However, signature-based static analysis might be vulnerable to code obfuscation (Rhode, Burnap, and Jones 2018; Gibert et al. 2018) or inadequate for detecting new ("zero-day") malware (Vinod et al. 2009). In contrast, dynamic analysis executes the software in an isolated environment (e.g., a sandbox) to collect its run-time behaviour information. By using behaviour information, dynamic analysis achieves a higher detection rate and is more robust than static analysis (Damodaran et al. 2017). In this paper, we focus on dynamic analysis.
Among behaviour information, the system API call sequence is the most popular data source, as it captures all the operations (including network access, file manipulation, etc.) executed by the software. Each API call in the sequence contains two important parts, the API name and the arguments. An API may have zero or multiple arguments, each of which is represented as a name-value pair. To process behaviour information, many feature engineering methods have been proposed. For example, if we consider the API name as a string, then the N (e.g., 1000) most frequent n-gram features (n = 1, 2, · · ·) can be extracted from the sequence. However, it is non-trivial to extract features from arguments of heterogeneous types, including strings, integers, addresses, etc.
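As a concrete picture of the n-gram features mentioned above, a toy sketch is given below; a real pipeline would additionally keep only the N most frequent grams.

```python
from collections import Counter

def api_ngrams(api_names, n_values=(1, 2, 3)):
    """Count n-grams over the API-name sequence."""
    counts = Counter()
    for n in n_values:
        for i in range(len(api_names) - n + 1):
            counts[tuple(api_names[i:i + n])] += 1
    return counts

# e.g. api_ngrams(["NtCreateFile", "NtWriteFile", "NtClose"]).most_common(1000)
```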
Recently, researchers have applied deep learning models to dynamic analysis. Deep learning models like convolutional neural networks (CNN) and recurrent neural networks (RNN) can learn features from the sequential data directly without feature engineering. Nonetheless, the data of traditional deep learning applications like computer vision and natural language processing is homogeneous, e.g., images (or text). It is still challenging to process the heterogeneous API arguments using deep learning models. Therefore, most existing approaches ignore the arguments. There are a few approaches (Fang et al. 2017; Agrawal et al. 2018) leveraging API arguments. However, these approaches either treat all arguments as strings (Agrawal et al. 2018) or only consider the statistical information of arguments (Ahmed et al. 2009; Tian et al. 2010; Islam et al. 2013). They consequently cannot fully exploit the heterogeneous information from different types of arguments.
In this paper, we propose a novel feature engineering method and a new deep learning architecture for malware detection. In particular, for different types of arguments, our feature engineering method leverages hashing approaches to extract the heterogeneous features separately. The features extracted from the API name, category, and the arguments, are further concatenated and fed into the deep learning model. We use multiple gated CNN models (Dauphin et al. 2017) to learn abstract lower dimensional features from the high dimensional hash features for each API call. The output from the gated CNN models is processed by a bidirectional LSTM to extract the sequential correlation of all API calls.
Our solution outperforms all baselines with a large margin. Through extensive ablation study, we find that both feature engineering and model architecture design are crucial for achieving high generalization performance.
The main contributions of this paper include:
1. We propose a novel feature representation for system API arguments. The extracted features from our dataset will be released for public access.
2. We devise a deep neural network architecture to process the extracted features, which combines multiple gated CNNs and a bidirectional LSTM. It outperforms all existing solutions by a large margin.
3. We conduct extensive experiments over a large real dataset 1 . Valuable insights about the features and model architecture are obtained through an ablation study.
Deep Learning Based Approaches
The previous papers typically ignore arguments. (Huang and Stokes 2016) use a feature representation with three parts: the presence of runnable code in the arguments, the combination of the API call name with one of its arguments (selected manually), and the 3-grams of the API call sequence. This feature representation is reduced from 50,000 dimensions to 4,000 by random projection. (Agrawal et al. 2018) propose a feature representation with a one-hot vector for the API call name and the top-N frequent n-grams of the argument strings. Their model uses several stacked LSTMs and shows better performance than (Kolosnjaji et al. 2016). They also claim that stacking more LSTMs does not increase the performance.
System Framework
To collect the run-time API calls, we implement the system shown in Figure 1. The system has three parts: PE file collection, behaviour information collection, and feature extraction together with model training.
PE Files Collection
The workflow of our system starts from the collection of portable executable (PE) files. In this paper, we focus on detecting malware in the PE file format on Windows, which is the most popular malware file format (AV-TEST 2017). The collection is implemented by a local anti-virus company, SecureAge Technology of Singapore. In addition, the company maintains a platform with 12 anti-virus engines to classify the PE files. The classification results are aggregated to obtain the label of each PE file for model training. Once the model is trained, it will be added to the platform as the 13th anti-virus engine. After collection, an execution queue is maintained to submit the PE files for execution; the queue monitors storage usage and decides whether to execute more PE files.
Behaviour Information Collection
Cuckoo 2 (https://cuckoosandbox.org/), an open-source sandbox, is used to run the PE files and gather execution logs. It executes PE files inside virtual machines and uses API hooks to monitor the API call trace (i.e., the behaviour information). Besides, Cuckoo simulates some user actions, such as clicking a button, typing some text, etc. In our system, we maintain dozens of virtual machines on each server. All virtual machines are installed with a 64-bit Windows 7 system and several daily-use applications. We leverage the snapshot feature of the virtual machines to roll them back after each execution. All generated logs are stored locally on the Cuckoo server.
Feature Extraction and Model Training
The execution logs generated by the sandbox contain detailed runtime information of the PE files; their size ranges from several KB to hundreds of GB. We design a feature engineering solution that can run in parallel to extract features from the raw execution logs efficiently. Once the features are extracted, we train our deep learning model on a model server with GPUs for malware classification.
Methodology
Feature Engineering
Most previous works (Qiao et al. 2013;Pascanu et al. 2015;Kolosnjaji et al. 2016) neglect the arguments of the API call, and only consider the API name and category. Consequently, some important (discriminative) information is lost (Agrawal et al. 2018). For example, the features of two write operations (API calls) would be exactly the same if the file path argument is ignored. However, the write operation might be benign when the target file is created by the program itself but be malicious if the target file is a system file. A few works (Trinius et al. 2009;Agrawal et al. 2018;Huang and Stokes 2016) that consider the arguments fail to exploit the heterogeneous information from different types of arguments.
We propose to adapt the hash method from (Weinberger et al. 2009) to encode the name, category and arguments of an API separately. As shown in Table 1, our feature representation consists of different types of information. The API name has 8 bins, and the API category has 4 bins. The API arguments part has 90 bins, 16 for the integer arguments and 74 for the string arguments. For the string arguments, several specific types of strings (file path, Dlls, etc.) are processed. Besides, 10 statistical features are extracted from all printable strings. All these features are concatenated to form a 102-dimension feature vector.
API Name and Category Cuckoo sandbox tracks 312 API calls in total which belong to 17 categories. Each API name consists of multiple words with the first letter of each word capitalized, such as "GetFileSize". We split the API name into words and then process these words by applying the feature hashing trick below. For the API category, since the category typically is a single word, for example, "network", we split the word into characters and apply the feature hashing trick. In addition, we compute the MD5 value of the API name, category and arguments to remove any consecutively repeated API calls. We use feature hashing (Weinberger et al. 2009) in Equation 1 to encode a sequence of strings into a fixed-length vector. The random variable x denotes a sequence of elements, where each element is either a string or a character. M denotes the number of bins, i.e., 8 for API name, and 4 for API category. The value of the i-th bin is calculated by:
\phi_i(x) = \sum_{j:\, h(x_j) = i} \xi(x_j) \qquad (1)
where h is a hash function that maps an element, e.g., x_j, to a natural number m ∈ {1, ..., M} as the bin index; ξ is another hash function that maps an element to {±1}. That is, for each element x_j of x whose bin index h(x_j) is i, we add ξ(x_j) into the bin.
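To make this concrete, the following is a minimal sketch of Equation 1 applied to the API name and category. It is not the released implementation; in particular, the concrete hash functions (MD5 and SHA-1 here) are assumptions, since any pair of independent hash functions h and ξ can be used.

```python
import hashlib
import re

def _h(element, M):
    # h: map an element to a bin index in {0, ..., M-1}
    return int(hashlib.md5(element.encode()).hexdigest(), 16) % M

def _xi(element):
    # xi: map an element to +1 or -1
    return 1 if int(hashlib.sha1(element.encode()).hexdigest(), 16) % 2 == 0 else -1

def feature_hash(elements, M):
    # phi_i(x) = sum of xi(e) over all elements e hashed into bin i (Equation 1)
    phi = [0.0] * M
    for e in elements:
        phi[_h(e, M)] += _xi(e)
    return phi

# API name: split the CamelCase name into words and hash them into 8 bins.
words = re.findall(r"[A-Z][a-z0-9]*", "GetFileSize")   # ['Get', 'File', 'Size']
name_vec = feature_hash(words, M=8)

# API category: split the single word into characters and hash them into 4 bins.
category_vec = feature_hash(list("network"), M=4)
```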
API Arguments As for API arguments, there are only two types of values, namely integers and strings. The individual value of an integer is meaningless. The argument name is required to get the meaning of the value. The same integer value might indicate totally different semantics with different argument names. For example, number 22 with the name "port" is different from the one with the name "size".
We adapt the previous feature hashing method to encode the integer's argument name as well as its value, as shown in Equation 2. We use the argument name to locate the hash bin. In particular, we use all the arguments whose names' hash value is i to update the i-th bin via summation. For each such argument, we compute the contribution to the bin as shown in Equation 2, where ξ(x_j^name) is a hash function over the argument name and x_j^value is the value of the integer argument. Because integers may distribute sparsely within a range, we normalize the value using the logarithm to squash the range.
\phi_i(x) = \sum_{j:\, h(x_j^{name}) = i} \xi(x_j^{name}) \log\left(|x_j^{value}| + 1\right) \qquad (2)
where h and ξ are the same hash functions as in Equation 1. For strings of API arguments, their values are more complicated than integers. Some strings starting with '0x' contain the address of some object, while others may contain a file path, IP address, URL, or plain text. Besides, some API arguments may even contain the content of an entire file. The variety of strings makes it challenging to process them. According to the previous work (Islam et al. 2010; Ahmed et al. 2009), the most important strings are the values of file paths, DLLs, registry keys, URLs, and IP addresses. Therefore, we use the feature hashing method in Equation 1 to extract features for these strings.
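Before turning to the string substrings below, the integer-argument encoding of Equation 2 can be sketched as follows, reusing the _h and _xi helpers from the previous snippet (again an assumed implementation, not the authors' code).

```python
import math

def hash_int_args(int_args, M=16):
    # int_args: list of (argument_name, integer_value) pairs for one API call.
    # The bin is chosen by hashing the argument name; the log-scaled value,
    # signed by xi(name), is accumulated into that bin (Equation 2).
    phi = [0.0] * M
    for name, value in int_args:
        phi[_h(name, M)] += _xi(name) * math.log(abs(value) + 1)
    return phi

# e.g. hash_int_args([("port", 22), ("size", 4096)]) -> 16-dimensional vector
```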
To capture the hierarchical information contained in the strings, we parse the whole string into several substrings and process them individually. For example, we use "C:\\" to identify a file path. For a path like "C:\\a\\b\\c", four substrings are generated, namely "C:", "C:\\a", "C:\\a\\b", and "C:\\a\\b\\c". All these substrings are processed by Equation 1. The same processing method is applied for DLLs, registry keys and IPs. The DLLs are strings ending with ".dll". The registry keys often start with "HKEY ". IPs are strings with four numbers (ranging from 0 to 255) separated by dots. URLs are handled slightly differently: we only generate substrings from the hostname of the URL. For example, for "https://security.ai.cs.org/", the following substrings are generated: "org", "cs.org", "ai.cs.org" and "security.ai.cs.org". In this way, the domain and organization information contributes more to the feature.
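A sketch of this hierarchical substring generation is shown below; the helper names and the use of urllib are illustrative assumptions. Each returned list of substrings would then be encoded with Equation 1.

```python
from urllib.parse import urlparse

def path_prefixes(path):
    # "C:\a\b\c" -> ["C:", "C:\a", "C:\a\b", "C:\a\b\c"]
    parts = path.split("\\")
    return ["\\".join(parts[:i + 1]) for i in range(len(parts))]

def host_suffixes(url):
    # "https://security.ai.cs.org/" -> ["org", "cs.org", "ai.cs.org", "security.ai.cs.org"]
    labels = urlparse(url).hostname.split(".")
    return [".".join(labels[i:]) for i in range(len(labels) - 1, -1, -1)]
```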
For lots of other types of strings, based on the previous work (Ahmed et al. 2009;Tian et al. 2010;Islam et al. 2010), we extract statistical information from all the printable strings. The printable strings consist of characters ranging from 0x20 to 0x7f. Therefore, all the paths, registry keys, URLs, IPs and some other printable strings are included. One type of strings starting with "MZ" is often a buffer that contains an entire PE file and usually occurs in malicious PE files such as thread injection (Liu et al. 2011). Therefore, we additionally count the occurrences of "MZ" strings. A 10-dimension vector is used to record the number of strings, their average length, the number of characters, the entropy of characters across all printable strings, and the number of paths, DLLs, URLs, registry keys, IPs and "MZ" strings.
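One possible implementation of this 10-dimensional statistical vector is sketched below; the detection rules for paths, DLLs, URLs, registry keys, IPs and "MZ" buffers are illustrative assumptions rather than the authors' exact definitions.

```python
import math
import re
from collections import Counter

IP_RE = re.compile(r"^(\d{1,3}\.){3}\d{1,3}$")

def printable_string_stats(strings):
    # 10 features: string count, mean length, character count, character entropy,
    # and counts of paths, DLLs, URLs, registry keys, IPs and "MZ" buffers.
    printable = [s for s in strings if s and all(0x20 <= ord(c) <= 0x7f for c in s)]
    chars = "".join(printable)
    counts = Counter(chars)
    total = len(chars)
    entropy = -sum(c / total * math.log2(c / total) for c in counts.values()) if total else 0.0
    return [
        len(printable),
        sum(len(s) for s in printable) / len(printable) if printable else 0.0,
        total,
        entropy,
        sum(s.startswith("C:\\") for s in printable),                   # file paths
        sum(s.lower().endswith(".dll") for s in printable),             # DLLs
        sum(s.startswith(("http://", "https://")) for s in printable),  # URLs
        sum(s.startswith("HKEY") for s in printable),                   # registry keys
        sum(bool(IP_RE.match(s)) for s in printable),                   # IP addresses
        sum(s.startswith("MZ") for s in printable),                     # embedded PE buffers
    ]
```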
We have not handled other arguments such as virtual addresses and structs, which are relatively less important than the argument types above. Although the proposed feature engineering method can easily be applied to them using extra bins, we look forward to more targeted research exploring these arguments.
Model Architecture
We present a deep neural network architecture that leverages the features from the proposed feature engineering step. Figure 2 is an overview of our proposed deep learning model.
Input Module
After feature engineering, we get the input vector whose size is (N, d), where N is the length of the API call sequence, and d (102) is the dimension of each extracted API feature. We first normalize the input by a batch normalization layer (Ioffe and Szegedy 2015). This batch normalization layer normalizes the input values by subtracting the batch mean and dividing by the batch standard deviation.
Gated-CNNs Module Several gated CNNs (Dauphin et al. 2017) are applied after the input module. Gated CNNs allow the selection of important and relevant information, making them competitive with recurrent models on language tasks while consuming fewer resources and less time.
For each gated CNN, the input is fed into two convolution layers respectively. Let X_A denote the output of the first convolution layer and X_B the output of the second one; they are combined by X_A ⊗ σ(X_B), which involves an element-wise multiplication operation. Here, σ is the sigmoid function σ(x) = 1/(1 + e^{-x}). σ(X_B) is regarded as the gate that controls how much of the information from X_A is passed to the next layer in the model.
Following the idea in (Shen et al. 2014), 1-D convolutional filters are used as n-gram detectors. As shown in Figure 2, we use two gated CNNs whose kernel sizes are 2 and 3, respectively. All convolution layers have 128 filters and a stride of 1.
Bi-LSTM Module All outputs from the gated CNNs are concatenated together. A batch normalization layer is applied to these outputs to reduce overfitting. We use a bidirectional LSTM to learn sequential patterns. The number of units of each LSTM is 100.
LSTM is a recurrent neural network architecture, in which several gates are designed to control the information transmission status so that it is able to capture the long-term context information (Pichotta and Mooney 2016). Bidirectional LSTM is two LSTMs stacking together but with different directional input. Compared to unidirectional LSTM, bidirectional LSTM is able to integrate the information from past and future states simultaneously. Bidirectional LSTM has been proved effective at malware detection by (Agrawal et al. 2018).
Classification Module After learning sequential patterns from Bi-LSTM module, a global max-pooling layer is applied to extract abstract features from the hidden vectors. Instead of using the final activation of the Bi-LSTM, a global max-pooling layer relies on each signal observed throughout the sequence, which helps retain the relevant information learned throughout the sequence.
After the global max-pooling layer, we use a dense layer with 64 units to reduce the dimension of the intermediate vector to 64. A ReLU activation is applied to this dense layer. Then we use a dropout layer with a rate of 0.5 to reduce overfitting. Finally, a dense layer with a single unit reduces the dimension to 1. A sigmoid activation is appended after this dense layer to output the probability.
Our model is supervised with the label associated with each input vector. To measure the loss for training the model, binary cross-entropy function is used as Equation 3.
\ell(X, y) = -\left(y \log(P[Y = 1 \mid X]) + (1 - y)\log(P[Y = 0 \mid X])\right) \qquad (3)
In addition, the optimization method we take is Adam, and the learning rate is 0.001.
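Putting the modules together, a minimal tf.keras sketch of the described architecture is given below. It is an assumed reconstruction for illustration, not the authors' code: SEQ_LEN is a placeholder for the maximum API-call sequence length N, while the remaining hyperparameters follow the text (kernel sizes 2 and 3 with 128 filters, a 100-unit Bi-LSTM, a 64-unit dense layer, dropout 0.5, and Adam with learning rate 0.001).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 1000    # assumed maximum API-call sequence length (N)
FEAT_DIM = 102    # dimension of each hashed API-call feature vector (d)

def gated_cnn(x, kernel_size, filters=128):
    # Gated convolution: element-wise product of a linear conv and a sigmoid gate.
    a = layers.Conv1D(filters, kernel_size, strides=1, padding="same")(x)
    b = layers.Conv1D(filters, kernel_size, strides=1, padding="same",
                      activation="sigmoid")(x)
    return layers.Multiply()([a, b])

inputs = layers.Input(shape=(SEQ_LEN, FEAT_DIM))
x = layers.BatchNormalization()(inputs)                      # first BN layer
g = layers.Concatenate()([gated_cnn(x, 2), gated_cnn(x, 3)])
g = layers.BatchNormalization()(g)                           # second BN layer
h = layers.Bidirectional(layers.LSTM(100, return_sequences=True))(g)
h = layers.GlobalMaxPooling1D()(h)
h = layers.Dense(64, activation="relu")(h)
h = layers.Dropout(0.5)(h)
outputs = layers.Dense(1, activation="sigmoid")(h)

model = models.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
```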
Experiments
Dataset
As described before, 12 commercial anti-virus engines are set up to classify the PE file. We set a PE file as positive if 4 or more engines agree that it is malicious. And if none of the engines classifies it as malware, we set it as negative. For other cases, we think the results are inconclusive and therefore exclude them from our dataset. The collected data are archived by the date and we pick two months (April and May) data to conduct our experiments. All these PE files are processed by our system (as shown in Figure 1) to collect the API call sequences. Table 2 is a summary of the data, where the row represents the statistics of the data in a month.
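The labelling rule can be summarized by the short helper below (a sketch; the function name and input format are assumptions).

```python
def label_from_engines(engine_verdicts):
    # engine_verdicts: list of 12 booleans, True meaning the engine flags the file.
    positives = sum(engine_verdicts)
    if positives >= 4:
        return 1       # malware
    if positives == 0:
        return 0       # benign
    return None        # inconclusive; excluded from the dataset
```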
Model Evaluation
In order to investigate the performance improvement, we compare the proposed model with three machine learningbased models and three deep learning-based models.
• (Uppal et al. 2014) extract 3-gram vectors from API call names. Then they use the odds ratio to select the most important vectors. SVM is applied as the model. • ) use a hash table to indicate the presence of strings. The strings come from both API names and arguments. The generated hash table is then used as features, and the classifier is Random Forest. • (Pascanu et al. 2015) train a language model using an RNN which can predict the next API call given the previous API calls. Then the RNN model is frozen and the hidden features are extracted for malware detection. The input of the model is a sequence of d-dimensional one-hot vectors whose elements are all zeros except the position (with element value 1) of the corresponding API call.
• (Kolosnjaji et al. 2016) propose a model which combines stacked CNNs and RNNs. The input is also one-hot vectors for the API call sequence.
• (Agrawal et al. 2018) extract one-hot vectors from the API call sequence and frequent n-gram vectors from the API arguments. The model uses several stacked LSTMs.
All the experiments are conducted on our dataset. We use 4-fold cross-validation (or CV) over the April dataset to train the models and do the testing over the May dataset. Considering that new malware is being generated over time, there could be many PE files of new malware in the May dataset. Therefore, the performance indicates, to a certain degree, the model's capability for detecting unknown malware.
Three metrics are considered: ROC (receiver operating characteristic curve) AUC (Area Under the Curve) score, ACC (accuracy) and Recall when FP (false positive) rate is 0.1%. The recall is defined as the ratio of the correctly detected malware PE files over all malware PE files. The FP rate is the ratio of benign PE files incorrectly identified as malware. Anti-virus products are required to keep a low false alarm rate to avoid disturbing users frequently (Nicholas 2017). A good model should achieve a high recall rate for a fixed low false positive rate. We provide 95% confidence intervals for all three metrics. In addition, the inference time per sample, which includes the time for feature processing and model prediction, is also taken into account.
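For reference, the recall at a fixed 0.1% false-positive rate can be derived from the ROC curve, for example with scikit-learn as sketched below (an assumed tooling choice; the paper does not state how the metric is computed).

```python
import numpy as np
from sklearn.metrics import roc_curve

def recall_at_fpr(y_true, y_score, target_fpr=0.001):
    fpr, tpr, _ = roc_curve(y_true, y_score)
    mask = fpr <= target_fpr
    # largest recall (TPR) achievable while keeping the FP rate at or below the target
    return float(tpr[mask].max()) if np.any(mask) else 0.0
```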
From the experimental results in Table 3, our proposed model achieves the best AUC score, accuracy and recall among all the baseline models on both the CV and the test dataset. Figure 3 displays the ROC curves of all models. The dashed curves are the ROCs of the traditional machine learning models, while the solid lines are the ROCs of the deep learning models. The experimental results illustrate that the traditional machine learning approaches and deep learning approaches are comparable. It should be noted that the model ) achieves quite good results by using a basic method to extract the string information. This indicates the importance of strings in feature processing. Therefore, we spend a lot of effort on the feature engineering of string data. The results also show that models with argument features generally outperform the ones neglecting arguments. The argument features increase the test AUC score of the traditional machine learning method by 3% and also increase the test AUC score of deep learning by about 1%. Therefore, including API arguments is necessary. Figure 3 shows a margin between the results on the validation and test datasets. Since the training dataset is collected before the testing dataset, the test data is likely to include new malware PE files. However, our proposed solution achieves the best performance on the test dataset, which confirms its ability to detect new and constantly evolving malware. As for the inference time, models with the argument features take a slightly longer time. However, hundreds of milliseconds of inference time are relatively small and acceptable, because the data collection using the Cuckoo sandbox is time-consuming and costs 3-5 minutes per sample. The training takes about 10 minutes per epoch, which could easily be reduced via distributed training (Ooi et al. 2015).
Ablation Study
The proposed model consists of several components that can be flexibly adjusted, e.g., the gated CNNs, Bi-LSTM and batch normalization. In order to explore the effects of different configurations, we employ several sets of comparison experiments by fixing the other structures and only changing the component under test. The results of these experiments serve as the basis for the decision on our final model structure.
• Gated CNNs, with three sets of experiments: a single gated CNN with kernel size 2 (2-GatedCNN), two gated CNNs with kernel sizes 2 and 3 (2,3-GatedCNN), and three gated CNNs with kernel sizes 2, 3 and 4 (2,3,4-GatedCNN). • Batch Normalization, with four sets of experiments: the model without any batch normalization (BN) layer, without the first BN layer (after the input), without the second BN layer (after the gated CNNs), and with both BN layers. • Bi-LSTM, with three sets of experiments: no Bi-LSTM layer (0-Bi-LSTM), one Bi-LSTM layer (1-Bi-LSTM), and two Bi-LSTM layers (2-Bi-LSTM). Figure 4 depicts the comparisons for different numbers of gated CNNs. 2-GatedCNN converges more slowly, although its final performance is very close to the other two models. In addition, increasing the number of gated CNNs from 2,3-GatedCNN to 2,3,4-GatedCNN does not bring any performance improvement. The best AUC scores of 2-GatedCNN and 2,3-GatedCNN are 98.80% and 98.86% respectively. Therefore, we choose 2,3-GatedCNN in our model. Figure 5 displays the performance with different numbers of batch normalization layers. Although these four curves tend to be closer at later epochs, the curve with both BN layers shows slightly superior performance with the highest AUC score at 98.80%. As for various numbers of Bi-LSTM layers, Figure 6 shows the performance for each configuration. Obviously, in both figures, the curve of 0-Bi-LSTM is below the other two curves by a large margin, which indicates the Bi-LSTM is vital. The other two curves in both figures are continuously staggered; however, 1-Bi-LSTM is slightly better with the highest point reaching 98.80%. In addition, 1-Bi-LSTM is about twice as fast as 2-Bi-LSTM. Thus, we choose 1-Bi-LSTM as the final configuration of the proposed model.
Conclusion
In this work, we propose a novel feature engineering method and a new deep learning architecture for malware detection over the API call sequence. Hashing tricks are applied to process the heterogeneous information from API calls, including the name, category and arguments. A homogeneous and low-cost feature representation is extracted. Then, we use multiple gated-CNNs to transform the high dimensional hash features from each API call, and feed the results into a Bi-LSTM to capture the sequential correlations of API calls within the sequence. The experiments show that our approach outperforms all baselines. An ablation study over multiple architecture variations verifies our architecture design decisions.
NCR002-020), and FY2017 SUG Grant. We also thank SecureAge Technology of Singapore for sharing the data. | 4,099 |
1907.07352 | 2960160011 | Dynamic malware analysis executes the program in an isolated environment and monitors its run-time behaviour (e.g., system API calls) for malware detection. This technique has been proven to be effective against various code obfuscation techniques and newly released ("zero-day") malware. However, existing works typically only consider the API name while ignoring the arguments, or require complex feature engineering operations and expert knowledge to process the arguments. In this paper, we propose a novel and low-cost feature extraction approach, and an effective deep neural network architecture for accurate and fast malware detection. Specifically, the feature representation approach utilizes a feature hashing trick to encode the API call arguments associated with the API name. The deep neural network architecture applies multiple Gated-CNNs (convolutional neural networks) to transform the extracted features of each API call. The outputs are further processed through LSTM (long-short term memory networks) to learn the sequential correlation among API calls. Experiments show that our solution outperforms baselines significantly on a large real dataset. Valuable insights about feature engineering and architecture design are derived from ablation study. | David and Netanyahu @cite_36 treat the sandbox report as an entire text string, and then split all strings by any special character. They count the frequency of each string and keep only the top 20,000 frequent ones by using a 20,000-bit vector to represent it. Their model is a deep belief network (DBN) which consists of eight layers, from 20,000-sized vectors to 30-sized vectors. They append a softmax layer after the top layer. Cross-entropy loss is used to train the model, which attains 98.6% classification accuracy. @cite_48 propose a two-stage approach, with a feature learning stage and a classification stage. At the first stage, they use recurrent neural networks (RNNs) to predict the next possible API call based on the previous API call sequence. For the classification stage, they freeze the RNNs and feed the outputs into a max-pooling layer to aggregate the features for classification. They attain 71.71%. @cite_10 propose an approach which combines convolutional neural networks (CNNs) with RNNs. Their approach stacks two CNN layers, and each CNN layer uses a 3-sized filter to simulate the 3-grams approach. A long short-term memory (LSTM) with a 100-sized hidden vector is appended to handle the time-series sequence. | {
"abstract": [
"This paper presents a novel deep learning based method for automatic malware signature generation and classification. The method uses a deep belief network (DBN), implemented with a deep stack of denoising autoencoders, generating an invariant compact representation of the malware behavior. While conventional signature and token based methods for malware detection do not detect a majority of new variants for existing malware, the results presented in this paper show that signatures generated by the DBN allow for an accurate classification of new malware variants. Using a dataset containing hundreds of variants for several major malware families, our method achieves 98.6 classification accuracy using the signatures generated by the DBN. The presented method is completely agnostic to the type of malware behavior that is logged (e.g., API calls and their parameters, registry entries, websites and ports accessed, etc.), and can use any raw input from a sandbox to successfully train the deep neural network which is used to generate malware signatures.",
"Attackers often create systems that automatically rewrite and reorder their malware to avoid detection. Typical machine learning approaches, which learn a classifier based on a handcrafted feature vector, are not sufficiently robust to such reorderings. We propose a different approach, which, similar to natural language modeling, learns the language of malware spoken through the executed instructions and extracts robust, time domain features. Echo state networks (ESNs) and recurrent neural networks (RNNs) are used for the projection stage that extracts the features. These models are trained in an unsupervised fashion. A standard classifier uses these features to detect malicious files. We explore a few variants of ESNs and RNNs for the projection stage, including Max-Pooling and Half-Frame models which we propose. The best performing hybrid model uses an ESN for the recurrent model, Max-Pooling for non-linear sampling, and logistic regression for the final classification. Compared to the standard trigram of events model, it improves the true positive rate by 98.3 at a false positive rate of 0.1 .",
"The increase in number and variety of malware samples amplifies the need for improvement in automatic detection and classification of the malware variants. Machine learning is a natural choice to cope with this increase, because it addresses the need of discovering underlying patterns in large-scale datasets. Nowadays, neural network methodology has been grown to the state that can surpass limitations of previous machine learning methods, such as Hidden Markov Models and Support Vector Machines. As a consequence, neural networks can now offer superior classification accuracy in many domains, such as computer vision or natural language processing. This improvement comes from the possibility of constructing neural networks with a higher number of potentially diverse layers and is known as Deep Learning."
],
"cite_N": [
"@cite_36",
"@cite_48",
"@cite_10"
],
"mid": [
"1666731339",
"1545528966",
"2557513839"
]
} | Dynamic Malware Analysis with Feature Engineering and Feature Learning | Cybersecurity imposes substantial economic cost all over the world. A report (CEA 2018) from the United States government estimates that costs by malicious cyber activities in the U.S. economy lay between $57 billion and $109 billion in 2016. Malicious software (or malware) is one of the major cybersecurity threats that evolves rapidly. It is reported that more than 120 million new malware samples are being discovered every year (AV-TEST 2017). Therefore, the development of malware detection techniques is urgent and necessary.
Researchers have been working on malware detection for decades. The mainstream solutions include static analysis and dynamic analysis. Static analysis methods scan the binary byte-streams of the software to create signatures, such as printable strings, n-gram, instructions, etc (Kruegel et al. 2005). However, the signature-based static analysis might be vulnerable to code obfuscation (Rhode, Burnap, and Jones 2018;Gibert et al. 2018) or inadequate to detect new ("zeroday") malware (Vinod et al. 2009). In contrast, dynamic analysis algorithms execute each software in an isolated environment (e.g., a sandbox) to collect its run-time behaviour information. By using behaviour information, dynamic analysis exerts a higher detection rate and is more robust than static analysis (Damodaran et al. 2017). In this paper, we focus on dynamic analysis.
Among behaviour information, the system API call sequence is the most popular data source as it captures all the operations (including network access, file manipulation operations, etc.) executed by the software. Each API call in the sequence contains two important parts, the API name and the arguments. Each API may have zero or multiple arguments, each of which is represented as a name-value pair. To process behaviour information, many feature engineering methods have been proposed. For example, if we consider the API name as a string, then the N (e.g., 1000) most frequent n-gram features (n = 1, 2, ...) can be extracted from the sequence. However, it is non-trivial to extract features from arguments of heterogeneous types, including strings, integers, addresses, etc.
Recently, researchers have applied deep learning models to dynamic analysis. Deep learning models like convolutional neural network (CNN) and recurrent neural network (RNN) can learn features from the sequential data directly without feature engineering. Nonetheless, the data of traditional deep learning applications like computer vision and natural language processing is homogeneous, e.g., images (or text). It is still challenging to process the heterogeneous API arguments using deep learning models. Therefore, most existing approaches ignore the arguments. There are a few approaches (Fang et al. 2017; Agrawal et al. 2018) leveraging API arguments. However, these approaches either treat all arguments as strings (Agrawal et al. 2018) or only consider the statistical information of arguments (Ahmed et al. 2009; Tian et al. 2010; Islam et al. 2013). They consequently cannot fully exploit the heterogeneous information from different types of arguments.
In this paper, we propose a novel feature engineering method and a new deep learning architecture for malware detection. In particular, for different types of arguments, our feature engineering method leverages hashing approaches to extract the heterogeneous features separately. The features extracted from the API name, category, and the arguments, are further concatenated and fed into the deep learning model. We use multiple gated CNN models (Dauphin et al. 2017) to learn abstract lower dimensional features from the high dimensional hash features for each API call. The output from the gated CNN models is processed by a bidirectional LSTM to extract the sequential correlation of all API calls.
Our solution outperforms all baselines with a large margin. Through extensive ablation study, we find that both feature engineering and model architecture design are crucial for achieving high generalization performance.
The main contributions of this paper include: 1. We propose a novel feature representation for system API arguments. The extracted features from our dataset will be released for public access. 2. We devise a deep neural network architecture to process the extracted features, which combines multiple gated CNNs and a bidirectional LSTM. It outperforms all existing solutions with a large margin. 3. We conduct extensive experiments over a large real dataset 1 . Valuable insights about the feature and model architecture are found through ablation study.
Deep Learning Based Approaches
The previous papers typically ignore arguments. (Huang and Stokes 2016) use a feature representation with three parts: the presence of runnable code in arguments, the combination of the API call name with one of its arguments (selected manually), and the 3-gram of the API call sequence. This feature representation is reduced from 50,000 dimensions to 4,000 by a random projection. (Agrawal et al. 2018) propose a feature representation with a one-hot vector from the API call name and the top N frequent n-grams of the argument strings. Their model uses several stacked LSTMs, which show better performance than (Kolosnjaji et al. 2016). They also claim that multiple LSTMs cannot increase the performance.
System Framework
To collect the run-time API calls, we implement the system shown in Figure 1. The system has three parts, PE files collection, behaviour information collection, and feature extraction as well as model training.
PE Files Collection
The workflow of our system starts from the portable executable (PE) files collection. In this paper, we focus on detecting malware in portable executable (PE) file format in Windows systems, which is the most popular malware file format (AV-TEST 2017). This collection part has been implemented by a local anti-virus company, SecureAge Technology of Singapore. In addition, the company maintains a platform with 12 anti-virus engines to classify the PE files. The classification results are aggregated to get the label of each PE file for model training. Once the model is trained, it will be added into the platform as the 13th anti-virus engine. After the collection, an execution queue is maintained to submit the PE files for execution. It monitors the storage usage and decides whether to execute more PE files.
Behaviour Information Collection
Cuckoo (https://cuckoosandbox.org/), an open-source sandbox, is used to run the PE files and gather execution logs. It executes PE files inside virtual machines and uses API hooks to monitor the API call trace (i.e., the behaviour information). Besides, Cuckoo simulates some user actions, such as clicking a button, typing some text, etc. In our system, we maintain dozens of virtual machines on each server. All virtual machines are installed with a 64-bit Windows 7 system and several daily-use applications. We leverage the snapshot feature of the virtual machine to roll it back after execution. All generated logs are stored locally on the Cuckoo server.
Feature Extraction and Model Training
The execution logs generated by the sandbox contain detailed runtime information of the PE files, and their size ranges from several KB to hundreds of GB. We design a feature engineering solution that can run in parallel to extract features from the raw execution logs efficiently. Once the features are extracted, we train our deep learning model on a model server with GPUs for malware classification.
Methodology
Feature Engineering
Most previous works (Qiao et al. 2013;Pascanu et al. 2015;Kolosnjaji et al. 2016) neglect the arguments of the API call, and only consider the API name and category. Consequently, some important (discriminative) information is lost (Agrawal et al. 2018). For example, the features of two write operations (API calls) would be exactly the same if the file path argument is ignored. However, the write operation might be benign when the target file is created by the program itself but be malicious if the target file is a system file. A few works (Trinius et al. 2009;Agrawal et al. 2018;Huang and Stokes 2016) that consider the arguments fail to exploit the heterogeneous information from different types of arguments.
We propose to adapt the hash method from (Weinberger et al. 2009) to encode the name, category and arguments of an API separately. As shown in Table 1, our feature representation consists of different types of information. The API name has 8 bins, and the API category has 4 bins. The API arguments part has 90 bins, 16 for the integer arguments and 74 for the string arguments. For the string arguments, several specific types of strings (file path, Dlls, etc.) are processed. Besides, 10 statistical features are extracted from all printable strings. All these features are concatenated to form a 102-dimension feature vector.
API Name and Category Cuckoo sandbox tracks 312 API calls in total which belong to 17 categories. Each API name consists of multiple words with the first letter of each word capitalized, such as "GetFileSize". We split the API name into words and then process these words by applying the feature hashing trick below. For the API category, since the category typically is a single word, for example, "network", we split the word into characters and apply the feature hashing trick. In addition, we compute the MD5 value of the API name, category and arguments to remove any consecutively repeated API calls. We use feature hashing (Weinberger et al. 2009) in Equation 1 to encode a sequence of strings into a fixed-length vector. The random variable x denotes a sequence of elements, where each element is either a string or a character. M denotes the number of bins, i.e., 8 for API name, and 4 for API category. The value of the i-th bin is calculated by:
\phi_i(x) = \sum_{j:\, h(x_j) = i} \xi(x_j) \qquad (1)
where h is a hash function that maps an element, e.g., x_j, to a natural number m ∈ {1, ..., M} as the bin index; ξ is another hash function that maps an element to {±1}. That is, for each element x_j of x whose bin index h(x_j) is i, we add ξ(x_j) into the bin.
API Arguments As for API arguments, there are only two types of values, namely integers and strings. The individual value of an integer is meaningless. The argument name is required to get the meaning of the value. The same integer value might indicate totally different semantics with different argument names. For example, number 22 with the name "port" is different from the one with the name "size".
We adapt the previous feature hashing method to encode the integer's argument name as well as its value, as shown in Equation 2. We use the argument name to locate the hash bin. In particular, we use all the arguments whose names' hash value is i to update the i-th bin via summation. For each such argument, we compute the contribution to the bin as shown in Equation 2, where ξ(x_j^name) is a hash function over the argument name and x_j^value is the value of the integer argument. Because integers may distribute sparsely within a range, we normalize the value using the logarithm to squash the range.
\phi_i(x) = \sum_{j:\, h(x_j^{name}) = i} \xi(x_j^{name}) \log\left(|x_j^{value}| + 1\right) \qquad (2)
where h and ξ are the same hash functions as in Equation 1. For strings of API arguments, their values are more complicated than integers. Some strings starting with '0x' contain the address of some object, while others may contain a file path, IP address, URL, or plain text. Besides, some API arguments may even contain the content of an entire file. The variety of strings makes it challenging to process them. According to the previous work (Islam et al. 2010; Ahmed et al. 2009), the most important strings are the values of file paths, DLLs, registry keys, URLs, and IP addresses. Therefore, we use the feature hashing method in Equation 1 to extract features for these strings.
To capture the hierarchical information contained in the strings, we parse the whole string into several substrings and process them individually. For example, we use "C:\\" to identify a file path. For a path like "C:\\a\\b\\c", four substrings are generated, namely "C:", "C:\\a", "C:\\a\\b", and "C:\\a\\b\\c". All these substrings are processed by Equation 1. The same processing method is applied for DLLs, registry keys and IPs. The DLLs are strings ending with ".dll". The registry keys often start with "HKEY ". IPs are strings with four numbers (ranging from 0 to 255) separated by dots. URLs are handled slightly differently: we only generate substrings from the hostname of the URL. For example, for "https://security.ai.cs.org/", the following substrings are generated: "org", "cs.org", "ai.cs.org" and "security.ai.cs.org". In this way, the domain and organization information contributes more to the feature.
For lots of other types of strings, based on the previous work (Ahmed et al. 2009;Tian et al. 2010;Islam et al. 2010), we extract statistical information from all the printable strings. The printable strings consist of characters ranging from 0x20 to 0x7f. Therefore, all the paths, registry keys, URLs, IPs and some other printable strings are included. One type of strings starting with "MZ" is often a buffer that contains an entire PE file and usually occurs in malicious PE files such as thread injection (Liu et al. 2011). Therefore, we additionally count the occurrences of "MZ" strings. A 10-dimension vector is used to record the number of strings, their average length, the number of characters, the entropy of characters across all printable strings, and the number of paths, DLLs, URLs, registry keys, IPs and "MZ" strings.
We have not handled other arguments such as virtual addresses and structs, which are relatively less important than the argument types above. Although the proposed feature engineering method can easily be applied to them using extra bins, we look forward to more targeted research exploring these arguments.
Model Architecture
We present a deep neural network architecture that leverages the features from the proposed feature engineering step. Figure 2 is an overview of our proposed deep learning model.
Input Module
After feature engineering, we get the input vector whose size is (N, d), where N is the length of the API call sequence, and d (102) is the dimension of each extracted API feature. We first normalize the input by a batch normalization layer (Ioffe and Szegedy 2015). This batch normalization layer normalizes the input values by subtracting the batch mean and dividing by the batch standard deviation.
Gated-CNNs Module Several gated CNNs (Dauphin et al. 2017) are applied after the input module. Gated CNNs allow the selection of important and relevant information, making them competitive with recurrent models on language tasks while consuming fewer resources and less time.
For each gated CNN, the input is fed into two convolution layers respectively. Let X_A denote the output of the first convolution layer and X_B the output of the second one; they are combined by X_A ⊗ σ(X_B), which involves an element-wise multiplication operation. Here, σ is the sigmoid function σ(x) = 1/(1 + e^{-x}). σ(X_B) is regarded as the gate that controls how much of the information from X_A is passed to the next layer in the model.
Following the idea in (Shen et al. 2014), 1-D convolutional filters are used as n-gram detectors. As shown in Figure 2, we use two gated CNNs whose kernel sizes are 2 and 3, respectively. All convolution layers have 128 filters and a stride of 1.
Bi-LSTM Module All outputs from the gated CNNs are concatenated together. A batch normalization layer is applied to these outputs to reduce overfitting. We use a bidirectional LSTM to learn sequential patterns. The number of units of each LSTM is 100.
LSTM is a recurrent neural network architecture, in which several gates are designed to control the information transmission status so that it is able to capture the long-term context information (Pichotta and Mooney 2016). Bidirectional LSTM is two LSTMs stacking together but with different directional input. Compared to unidirectional LSTM, bidirectional LSTM is able to integrate the information from past and future states simultaneously. Bidirectional LSTM has been proved effective at malware detection by (Agrawal et al. 2018).
Classification Module After learning sequential patterns from Bi-LSTM module, a global max-pooling layer is applied to extract abstract features from the hidden vectors. Instead of using the final activation of the Bi-LSTM, a global max-pooling layer relies on each signal observed throughout the sequence, which helps retain the relevant information learned throughout the sequence.
After the global max-pooling layer, we use a dense layer with 64 units to reduce the dimension of the intermediate vector to 64. A ReLU activation is applied to this dense layer. Then we use a dropout layer with a rate of 0.5 to reduce overfitting. Finally, a dense layer with a single unit reduces the dimension to 1. A sigmoid activation is appended after this dense layer to output the probability.
Our model is supervised with the label associated with each input vector. To measure the loss for training the model, binary cross-entropy function is used as Equation 3.
\ell(X, y) = -\left(y \log(P[Y = 1 \mid X]) + (1 - y)\log(P[Y = 0 \mid X])\right) \qquad (3)
In addition, the optimization method we take is Adam, and the learning rate is 0.001.
Experiments
Dataset
As described before, 12 commercial anti-virus engines are set up to classify the PE file. We set a PE file as positive if 4 or more engines agree that it is malicious. And if none of the engines classifies it as malware, we set it as negative. For other cases, we think the results are inconclusive and therefore exclude them from our dataset. The collected data are archived by the date and we pick two months (April and May) data to conduct our experiments. All these PE files are processed by our system (as shown in Figure 1) to collect the API call sequences. Table 2 is a summary of the data, where the row represents the statistics of the data in a month.
Model Evaluation
In order to investigate the performance improvement, we compare the proposed model with three machine learningbased models and three deep learning-based models.
• (Uppal et al. 2014) extract 3-gram vectors from API call names. Then they use the odds ratio to select the most important vectors. SVM is applied as the model. • ) use a hash table to indicate the presence of strings. The strings come from both API names and arguments. The generated hash table is then used as features, and the classifier is Random Forest. • (Pascanu et al. 2015) train a language model using an RNN which can predict the next API call given the previous API calls. Then the RNN model is frozen and the hidden features are extracted for malware detection. The input of the model is a sequence of d-dimensional one-hot vectors whose elements are all zeros except the position (with element value 1) of the corresponding API call.
• (Kolosnjaji et al. 2016) propose a model which combines stacked CNNs and RNNs. The input is also one-hot vectors for the API call sequence.
• (Agrawal et al. 2018) extract one-hot vectors from the API call sequence and frequent n-gram vectors from the API arguments. The model uses several stacked LSTMs.
All the experiments are conducted on our dataset. We use 4-fold cross-validation (or CV) over the April dataset to train the models and do the testing over the May dataset. Considering that new malware is being generated over time, there could be many PE files of new malware in the May dataset. Therefore, the performance indicates, to a certain degree, the model's capability for detecting unknown malware.
Three metrics are considered: ROC (receiver operating characteristic curve) AUC (Area Under the Curve) score, ACC (accuracy) and Recall when FP (false positive) rate is 0.1%. The recall is defined as the ratio of the correctly detected malware PE files over all malware PE files. The FP rate is the ratio of benign PE files incorrectly identified as malware. Anti-virus products are required to keep a low false alarm rate to avoid disturbing users frequently (Nicholas 2017). A good model should achieve a high recall rate for a fixed low false positive rate. We provide 95% confidence intervals for all three metrics. In addition, the inference time per sample, which includes the time for feature processing and model prediction, is also taken into account.
From the experimental results in Table 3, our proposed model achieves the best AUC score, accuracy and recall among all the baseline models on both the CV and the test dataset. Figure 3 displays the ROC curves of all models. The dashed curves are the ROCs of the traditional machine learning models, while the solid lines are the ROCs of the deep learning models. The experimental results illustrate that the traditional machine learning approaches and deep learning approaches are comparable. It should be noted that the model ) achieves quite good results by using a basic method to extract the string information. This indicates the importance of strings in feature processing. Therefore, we spend a lot of effort on the feature engineering of string data. The results also show that models with argument features generally outperform the ones neglecting arguments. The argument features increase the test AUC score of the traditional machine learning method by 3% and also increase the test AUC score of deep learning by about 1%. Therefore, including API arguments is necessary. Figure 3 shows a margin between the results on the validation and test datasets. Since the training dataset is collected before the testing dataset, the test data is likely to include new malware PE files. However, our proposed solution achieves the best performance on the test dataset, which confirms its ability to detect new and constantly evolving malware. As for the inference time, models with the argument features take a slightly longer time. However, hundreds of milliseconds of inference time are relatively small and acceptable, because the data collection using the Cuckoo sandbox is time-consuming and costs 3-5 minutes per sample. The training takes about 10 minutes per epoch, which could easily be reduced via distributed training (Ooi et al. 2015).
Ablation Study
The proposed model consists of several components that can be flexibly adjusted, e.g., the gated CNNs, Bi-LSTM and batch normalization. In order to explore the effects of different configurations, we employ several sets of comparison experiments by fixing the other structures and only changing the component under test. The results of these experiments serve as the basis for the decision on our final model structure.
• Gated CNNs, with three sets of experiments: a single gated CNN with kernel size 2 (2-GatedCNN), two gated CNNs with kernel sizes 2 and 3 (2,3-GatedCNN), and three gated CNNs with kernel sizes 2, 3 and 4 (2,3,4-GatedCNN). • Batch Normalization, with four sets of experiments: the model without any batch normalization (BN) layer, without the first BN layer (after the input), without the second BN layer (after the gated CNNs), and with both BN layers. • Bi-LSTM, with three sets of experiments: no Bi-LSTM layer (0-Bi-LSTM), one Bi-LSTM layer (1-Bi-LSTM), and two Bi-LSTM layers (2-Bi-LSTM). Figure 4 depicts the comparisons for different numbers of gated CNNs. 2-GatedCNN converges more slowly, although its final performance is very close to the other two models. In addition, increasing the number of gated CNNs from 2,3-GatedCNN to 2,3,4-GatedCNN does not bring any performance improvement. The best AUC scores of 2-GatedCNN and 2,3-GatedCNN are 98.80% and 98.86% respectively. Therefore, we choose 2,3-GatedCNN in our model. Figure 5 displays the performance with different numbers of batch normalization layers. Although these four curves tend to be closer at later epochs, the curve with both BN layers shows slightly superior performance with the highest AUC score at 98.80%. As for various numbers of Bi-LSTM layers, Figure 6 shows the performance for each configuration. Obviously, in both figures, the curve of 0-Bi-LSTM is below the other two curves by a large margin, which indicates the Bi-LSTM is vital. The other two curves in both figures are continuously staggered; however, 1-Bi-LSTM is slightly better with the highest point reaching 98.80%. In addition, 1-Bi-LSTM is about twice as fast as 2-Bi-LSTM. Thus, we choose 1-Bi-LSTM as the final configuration of the proposed model.
Conclusion
In this work, we propose a novel feature engineering method and a new deep learning architecture for malware detection over the API call sequence. Hashing tricks are applied to process the heterogeneous information from API calls, including the name, category and arguments. A homogeneous and low-cost feature representation is extracted. Then, we use multiple gated-CNNs to transform the high dimensional hash features from each API call, and feed the results into a Bi-LSTM to capture the sequential correlations of API calls within the sequence. The experiments show that our approach outperforms all baselines. An ablation study over multiple architecture variations verifies our architecture design decisions.
NCR002-020), and FY2017 SUG Grant. We also thank SecureAge Technology of Singapore for sharing the data. | 4,099 |
1907.07352 | 2960160011 | Dynamic malware analysis executes the program in an isolated environment and monitors its run-time behaviour (e.g., system API calls) for malware detection. This technique has been proven to be effective against various code obfuscation techniques and newly released ("zero-day") malware. However, existing works typically only consider the API name while ignoring the arguments, or require complex feature engineering operations and expert knowledge to process the arguments. In this paper, we propose a novel and low-cost feature extraction approach, and an effective deep neural network architecture for accurate and fast malware detection. Specifically, the feature representation approach utilizes a feature hashing trick to encode the API call arguments associated with the API name. The deep neural network architecture applies multiple Gated-CNNs (convolutional neural networks) to transform the extracted features of each API call. The outputs are further processed through LSTM (long-short term memory networks) to learn the sequential correlation among API calls. Experiments show that our solution outperforms baselines significantly on a large real dataset. Valuable insights about feature engineering and architecture design are derived from ablation study. | The previous three papers only use the API call sequence but ignore the arguments. Huang and Stokes @cite_1 use a feature representation consisting of three parts: the presence of unpacked code fragments in the arguments, the combination of the API call name with one of its arguments (selected manually), and the 3-gram of the API call sequence. This feature representation has 50,000 features, which are reduced to 4,000 by a random projection. They claim that, for the first time, the deep learning model (i.e., RNN) outperforms a shallow architecture proposed by @cite_13. @cite_43 also use the API call sequence and the arguments. Their feature representation consists of a one-hot vector from the API call name and the top N frequent n-grams of the argument strings. The model uses several stacked LSTMs and shows better performance than @cite_10. They also claim that more LSTMs cannot increase the performance. | {
"abstract": [
"Automatically generated malware is a significant problem for computer users. Analysts are able to manually investigate a small number of unknown files, but the best large-scale defense for detecting malware is automated malware classification. Malware classifiers often use sparse binary features, and the number of potential features can be on the order of tens or hundreds of millions. Feature selection reduces the number of features to a manageable number for training simpler algorithms such as logistic regression, but this number is still too large for more complex algorithms such as neural networks. To overcome this problem, we used random projections to further reduce the dimensionality of the original input space. Using this architecture, we train several very large-scale neural network systems with over 2.6 million labeled samples thereby achieving classification results with a two-class error rate of 0.49 for a single neural network and 0.42 for an ensemble of neural networks.",
"Sequential models which analyze system API calls have shown promise for detecting unknown malware. Athiwaratkun and Stokes recently proposed a two-stage model which uses a long short-term memory (LSTM) model for learning a set of features which are then input to a second classifier. , first use a convolutional neural network followed by an LSTM to predict unknown malware. However, neither of these models consider the parameters which are input to the system API calls. These input parameters offer significant information regarding malicious intent. In this paper, we extend Athiwaratkun's model to include each system API call's two most input parameters. We then show that the proposed model dominates these previously proposed models in terms of the receiver operating characteristic (ROC) curve.",
"In this paper, we propose a new multi-task, deep learning architecture for malware classification for the binary i.e. malware versus benign malware classification task. All models are trained with data extracted from dynamic analysis of malicious and benign files. For the first time, we see improvements using multiple layers in a deep neural network architecture for malware classification. The system is trained on 4.5 million files and tested on a holdout test set of 2 million files which is the largest study to date. To achieve a binary classification error rate of 0.358i¾? , the objective functions for the binary classification task and malware family classification task are combined in the multi-task architecture. In addition, we propose a standard i.e. non multi-task malware family classification architecture which also achieves a malware family classification error rate of 2.94i¾? .",
"The increase in number and variety of malware samples amplifies the need for improvement in automatic detection and classification of the malware variants. Machine learning is a natural choice to cope with this increase, because it addresses the need of discovering underlying patterns in large-scale datasets. Nowadays, neural network methodology has been grown to the state that can surpass limitations of previous machine learning methods, such as Hidden Markov Models and Support Vector Machines. As a consequence, neural networks can now offer superior classification accuracy in many domains, such as computer vision or natural language processing. This improvement comes from the possibility of constructing neural networks with a higher number of potentially diverse layers and is known as Deep Learning."
],
"cite_N": [
"@cite_13",
"@cite_43",
"@cite_1",
"@cite_10"
],
"mid": [
"1966948031",
"2890092828",
"2476429474",
"2557513839"
]
} | Dynamic Malware Analysis with Feature Engineering and Feature Learning | Cybersecurity imposes substantial economic cost all over the world. A report (CEA 2018) from the United States government estimates that costs by malicious cyber activities in the U.S. economy lay between $57 billion and $109 billion in 2016. Malicious software (or malware) is one of the major cybersecurity threats that evolves rapidly. It is reported that more than 120 million new malware samples are being discovered every year (AV-TEST 2017). Therefore, the development of malware detection techniques is urgent and necessary.
Researchers have been working on malware detection for decades. The mainstream solutions include static analysis and dynamic analysis. Static analysis methods scan the binary byte-streams of the software to create signatures, such as printable strings, n-gram, instructions, etc (Kruegel et al. 2005). However, the signature-based static analysis might be vulnerable to code obfuscation (Rhode, Burnap, and Jones 2018;Gibert et al. 2018) or inadequate to detect new ("zeroday") malware (Vinod et al. 2009). In contrast, dynamic analysis algorithms execute each software in an isolated environment (e.g., a sandbox) to collect its run-time behaviour information. By using behaviour information, dynamic analysis exerts a higher detection rate and is more robust than static analysis (Damodaran et al. 2017). In this paper, we focus on dynamic analysis.
Among behaviour information, the system API call sequence is the most popular data source as it captures all the operations (including network access, file manipulation operations, etc.) executed by the software. Each API call in the sequence contains two important parts, the API name and the arguments. Each API may have zero or multiple arguments, each of which is represented as a name-value pair. To process behaviour information, many feature engineering methods have been proposed. For example, if we consider the API name as a string, then the N (e.g., 1000) most frequent n-gram features (n = 1, 2, ...) can be extracted from the sequence. However, it is non-trivial to extract features from arguments of heterogeneous types, including strings, integers, addresses, etc.
Recently, researchers have applied deep learning models to dynamic analysis. Deep learning models like convolutional neural network (CNN) and recurrent neural network (RNN) can learn features from the sequential data directly without feature engineering. Nonetheless, the data of traditional deep learning applications like computer vision and natural language processing is homogeneous, e.g., images (or text). It is still challenging to process the heterogeneous API arguments using deep learning models. Therefore, most existing approaches ignore the arguments. There are a few approaches (Fang et al. 2017; Agrawal et al. 2018) leveraging API arguments. However, these approaches either treat all arguments as strings (Agrawal et al. 2018) or only consider the statistical information of arguments (Ahmed et al. 2009; Tian et al. 2010; Islam et al. 2013). They consequently cannot fully exploit the heterogeneous information from different types of arguments.
In this paper, we propose a novel feature engineering method and a new deep learning architecture for malware detection. In particular, for different types of arguments, our feature engineering method leverages hashing approaches to extract the heterogeneous features separately. The features extracted from the API name, category, and the arguments, are further concatenated and fed into the deep learning model. We use multiple gated CNN models (Dauphin et al. 2017) to learn abstract lower dimensional features from the high dimensional hash features for each API call. The output from the gated CNN models is processed by a bidirectional LSTM to extract the sequential correlation of all API calls.
Our solution outperforms all baselines with a large margin. Through extensive ablation study, we find that both feature engineering and model architecture design are crucial for achieving high generalization performance.
The main contributions of this paper include: 1. We propose a novel feature representation for system API arguments. The extracted features from our dataset will be released for public access. 2. We devise a deep neural network architecture to process the extracted features, which combines multiple gated CNNs and a bidirectional LSTM. It outperforms all existing solutions with a large margin. 3. We conduct extensive experiments over a large real dataset 1 . Valuable insights about the feature and model architecture are found through ablation study.
Deep Learning Based Approaches
The previous papers typically ignore arguments. (Huang and Stokes 2016) use a feature representation with three parts: the presence of runnable code in arguments, the combination of the API call name with one of its arguments (selected manually), and the 3-gram of the API call sequence. This feature representation is reduced from 50,000 to 4,000 dimensions by a random projection. (Agrawal et al. 2018) propose a feature representation with a one-hot vector from the API call name and the top N frequent n-grams of the argument strings. The model uses several stacked LSTMs, which show better performance than (Kolosnjaji et al. 2016). They also claim that using multiple LSTMs does not increase the performance.
System Framework
To collect the run-time API calls, we implement the system shown in Figure 1. The system has three parts, PE files collection, behaviour information collection, and feature extraction as well as model training.
PE Files Collection
The workflow of our system starts from the portable executable (PE) files collection. In this paper, we focus on detecting malware in portable executable (PE) file format in Windows systems, which is the most popular malware file format (AV-TEST 2017). This collection part has been implemented by a local anti-virus company, SecureAge Technology of Singapore. In addition, the company maintains a platform with 12 anti-virus engines to classify the PE files. The classification results are aggregated to get the label of each PE file for model training. Once the model is trained, it will be added into the platform as the 13th anti-virus engine. After the collection, an execution queue is maintained to submit the PE files for execution. It monitors the storage usage and decides whether to execute more PE files.
Behaviour Information Collection
Cuckoo (https://cuckoosandbox.org/), an open-source sandbox, is used to run the PE files and gather execution logs. It executes PE files inside virtual machines and uses API hooks to monitor the API call trace (i.e., the behaviour information). Besides, Cuckoo simulates some user actions, such as clicking a button, typing some texts, etc. In our system, we maintain dozens of virtual machines on each server. All virtual machines are installed with a 64-bit Windows 7 system and several daily-use software applications. We leverage the snapshot feature of the virtual machine to roll it back after execution. All generated logs are stored locally on the Cuckoo server.
Feature Extraction and Model Training
The execution logs generated by the sandbox contain detailed runtime information of the PE files, whose size ranges from several KB to hundreds of GB. We design a feature engineering solution that can run in parallel to extract features from the raw execution logs efficiently. Once the features are extracted, we train our deep learning model on a model server with GPUs for malware classification.
Methodology
Feature Engineering
Most previous works (Qiao et al. 2013;Pascanu et al. 2015;Kolosnjaji et al. 2016) neglect the arguments of the API call, and only consider the API name and category. Consequently, some important (discriminative) information is lost (Agrawal et al. 2018). For example, the features of two write operations (API calls) would be exactly the same if the file path argument is ignored. However, the write operation might be benign when the target file is created by the program itself but be malicious if the target file is a system file. A few works (Trinius et al. 2009;Agrawal et al. 2018;Huang and Stokes 2016) that consider the arguments fail to exploit the heterogeneous information from different types of arguments.
We propose to adapt the hash method from (Weinberger et al. 2009) to encode the name, category and arguments of an API separately. As shown in Table 1, our feature representation consists of different types of information. The API name has 8 bins, and the API category has 4 bins. The API arguments part has 90 bins, 16 for the integer arguments and 74 for the string arguments. For the string arguments, several specific types of strings (file path, Dlls, etc.) are processed. Besides, 10 statistical features are extracted from all printable strings. All these features are concatenated to form a 102-dimension feature vector.
API Name and Category Cuckoo sandbox tracks 312 API calls in total which belong to 17 categories. Each API name consists of multiple words with the first letter of each word capitalized, such as "GetFileSize". We split the API name into words and then process these words by applying the feature hashing trick below. For the API category, since the category typically is a single word, for example, "network", we split the word into characters and apply the feature hashing trick. In addition, we compute the MD5 value of the API name, category and arguments to remove any consecutively repeated API calls. We use feature hashing (Weinberger et al. 2009) in Equation 1 to encode a sequence of strings into a fixed-length vector. The random variable x denotes a sequence of elements, where each element is either a string or a character. M denotes the number of bins, i.e., 8 for the API name, and 4 for the API category. The value of the i-th bin is calculated by:
$\phi_i(x) = \sum_{j : h(x_j) = i} \xi(x_j) \qquad (1)$
where h is a hash function that maps an element, e.g., x j , to a natural number m ∈ {1, ..., M } as the bin index; ξ is another hash function that maps an element to {±1}. That is, for each element x j of x whose bin index h(x j ) is i, we add ξ(x j ) into the bin.
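To make this step concrete, the following sketch implements the hashing trick of Equation 1 in Python; the use of MD5-based hash functions for h and ξ is our own assumption, since the text does not specify which hash functions are used.

```python
import hashlib
import re

def _stable_hash(s: str) -> int:
    # Deterministic hash of a string (MD5 is an assumption; the paper only
    # requires some hash function).
    return int(hashlib.md5(s.encode("utf-8")).hexdigest(), 16)

def feature_hash(elements, num_bins):
    """Feature hashing (Equation 1): phi_i(x) = sum_{j: h(x_j)=i} xi(x_j)."""
    phi = [0.0] * num_bins
    for elem in elements:
        i = _stable_hash("bin:" + elem) % num_bins                 # h(x_j)
        sign = 1.0 if _stable_hash("sign:" + elem) % 2 else -1.0   # xi(x_j)
        phi[i] += sign
    return phi

# Example: hash the words of the API name "GetFileSize" into 8 bins
# and the characters of its category "file" into 4 bins.
words = re.findall(r"[A-Z][a-z0-9]*", "GetFileSize")   # ["Get", "File", "Size"]
name_features = feature_hash(words, num_bins=8)
category_features = feature_hash(list("file"), num_bins=4)
print(name_features, category_features)
```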
API Arguments As for API arguments, there are only two types of values, namely integers and strings. The individual value of an integer is meaningless. The argument name is required to get the meaning of the value. The same integer value might indicate totally different semantics with different argument names. For example, number 22 with the name "port" is different from the one with the name "size".
We adapt the previous feature hashing method to encode the integer's argument name as well as its value, as shown in Equation 2. We use the argument name to locate the hash bin. In particular, we use all the arguments whose names' hash value is i to update the i-th bin via summation. For each such argument, we compute the contribution to the bin as shown in Equation 2, where ξ(x name j ) is a hash function over the argument name and x value j is the value of the integer argument. Because integers may distribute sparsely within a range, we normalize the value using the logarithm to squash the range.
$\phi_i(x) = \sum_{j : h(x_j^{\text{name}}) = i} \xi(x_j^{\text{name}}) \, \log\left(|x_j^{\text{value}}| + 1\right) \qquad (2)$
where h and ξ are the same hash functions as in Equation 1. For strings of API arguments, their values are more complicated than integers. Some strings starting with '0x' contain the address of some object. Others may contain the file path, IP address, URL, or plain text. Besides, some API arguments may even contain the content of an entire file. The variety of strings makes it challenging to process them. According to the previous work (Islam et al. 2010; Ahmed et al. 2009), the most important strings are the values of file paths, DLLs, registry keys, URLs, and IP addresses. Therefore, we use the feature hashing method in Equation 1 to extract features for these strings.
To capture the hierarchical information contained in the strings, we parse the whole string into several substrings and process them individually. For example, we use "C:\\" to identify a file path. For a path like "C:\\a\\b\\c", four substrings are generated, namely "C:", "C:\\a", "C:\\a\\b", and "C:\\a\\b\\c". All these substrings are processed by Equation 1. The same processing method is applied to DLLs, registry keys and IPs. The DLLs are strings ending with ".dll". The registry keys often start with "HKEY ". IPs are those strings with four numbers (ranging from 0 to 255) separated by dots. URLs are treated slightly differently: we only generate substrings from the hostname of the URL. For example, for "https://security.ai.cs.org/", the following substrings will be generated: "org", "cs.org", "ai.cs.org" and "security.ai.cs.org". In this way, the domain and organization information contribute more to the feature.
For lots of other types of strings, based on the previous work (Ahmed et al. 2009;Tian et al. 2010;Islam et al. 2010), we extract statistical information from all the printable strings. The printable strings consist of characters ranging from 0x20 to 0x7f. Therefore, all the paths, registry keys, URLs, IPs and some other printable strings are included. One type of strings starting with "MZ" is often a buffer that contains an entire PE file and usually occurs in malicious PE files such as thread injection (Liu et al. 2011). Therefore, we additionally count the occurrences of "MZ" strings. A 10-dimension vector is used to record the number of strings, their average length, the number of characters, the entropy of characters across all printable strings, and the number of paths, DLLs, URLs, registry keys, IPs and "MZ" strings.
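The sketch below illustrates, under our own assumptions about the exact matching rules (e.g., how registry keys and IPs are recognized and the ordering of the statistics vector), how the hierarchical substrings and the 10-dimensional statistical summary described above could be computed.

```python
import math
from collections import Counter

def path_prefixes(path):
    # "C:\\a\\b\\c" -> ["C:", "C:\\a", "C:\\a\\b", "C:\\a\\b\\c"]
    parts = path.split("\\")
    return ["\\".join(parts[:k]) for k in range(1, len(parts) + 1)]

def host_suffixes(url_host):
    # "security.ai.cs.org" -> ["org", "cs.org", "ai.cs.org", "security.ai.cs.org"]
    labels = url_host.split(".")
    return [".".join(labels[-k:]) for k in range(1, len(labels) + 1)]

def printable_string_stats(strings):
    """10-dimensional summary of all printable strings (the exact ordering and
    matching heuristics are assumptions)."""
    joined = "".join(strings)
    counts = Counter(joined)
    total = sum(counts.values())
    entropy = -sum(c / total * math.log2(c / total) for c in counts.values()) if total else 0.0
    return [
        len(strings),                                              # number of strings
        sum(len(s) for s in strings) / max(len(strings), 1),       # average length
        total,                                                     # number of characters
        entropy,                                                   # character entropy
        sum(":\\" in s for s in strings),                          # paths
        sum(s.lower().endswith(".dll") for s in strings),          # DLLs
        sum(s.startswith("http") for s in strings),                # URLs
        sum(s.startswith("HKEY") for s in strings),                # registry keys
        sum(s.count(".") == 3 and s.replace(".", "").isdigit() for s in strings),  # IPs
        sum(s.startswith("MZ") for s in strings),                  # "MZ" buffers
    ]

print(path_prefixes("C:\\a\\b\\c"))
print(host_suffixes("security.ai.cs.org"))
```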
We have not handled other arguments such as virtual addresses and structs, which are less important than the argument types above. Although the proposed feature engineering method can easily be extended to them using extra bins, we leave a more targeted exploration of these arguments to future research.
Model Architecture
We present a deep neural network architecture that leverages the features from the proposed feature engineering step. Figure 2 is an overview of our proposed deep learning model.
Input Module
After feature engineering, we get the input vector whose size is (N, d), where N is the length of the API call sequence, and d (= 102) is the dimension of each extracted API feature. We first normalize the input by a batch normalization layer (Ioffe and Szegedy 2015). This batch normalization layer normalizes the input values by subtracting the batch mean and dividing by the batch standard deviation. Gated-CNNs Module Several gated CNNs (Dauphin et al. 2017) are applied after the input module. Gated CNNs allow the selection of important and relevant information, making them competitive with recurrent models on language tasks while consuming fewer resources and less time.
For each gated CNN, the input is fed into two convolution layers respectively. Let $X_A$ denote the output of the first convolution layer and $X_B$ the output of the second one; they are combined as $X_A \otimes \sigma(X_B)$, which involves an element-wise multiplication operation. Here, $\sigma$ is the sigmoid function $\sigma(x) = \frac{1}{1 + e^{-x}}$. $\sigma(X_B)$ is regarded as the gate that controls the information from $X_A$ passed to the next layer in the model.
Following the idea in (Shen et al. 2014), 1-D convolutional filters are used as n-gram detectors. As shown in Figure 2, we use two gated CNNs whose kernel sizes are 2 and 3 respectively. All convolution layers have 128 filters and a stride of 1.
Bi-LSTM Module All outputs from the gated CNNs are concatenated together. A batch normalization layer is applied to these outputs to reduce overfitting. We use a bidirectional LSTM to learn sequential patterns. The number of units of each LSTM is 100.
LSTM is a recurrent neural network architecture in which several gates are designed to control the information transmission status so that it is able to capture long-term context information (Pichotta and Mooney 2016). A bidirectional LSTM consists of two LSTMs stacked together but processing the input in opposite directions. Compared to a unidirectional LSTM, a bidirectional LSTM is able to integrate information from past and future states simultaneously. Bidirectional LSTMs have been shown to be effective for malware detection by (Agrawal et al. 2018).
Classification Module After learning sequential patterns from Bi-LSTM module, a global max-pooling layer is applied to extract abstract features from the hidden vectors. Instead of using the final activation of the Bi-LSTM, a global max-pooling layer relies on each signal observed throughout the sequence, which helps retain the relevant information learned throughout the sequence.
After the global max-pooling layer, we use a dense layer with 64 units to reduce the dimension of the intermediate vector to 64. A ReLU activation is applied to this dense layer. Then we use a dropout layer with a rate of 0.5 to reduce overfitting. Finally, a dense layer with a single unit reduces the dimension to 1. A sigmoid activation is appended after this dense layer to output the probability.
Our model is supervised with the label associated with each input vector. To measure the loss for training the model, the binary cross-entropy function is used, as shown in Equation 3.
$\ell(X, y) = -\big(y \log(P[Y = 1 \mid X]) + (1 - y)\log(P[Y = 0 \mid X])\big) \qquad (3)$
In addition, the optimization method we take is Adam, and the learning rate is 0.001.
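A compact sketch of the described architecture is given below. PyTorch is our choice of framework (the paper does not name one), and the padding and length-alignment details are assumptions; the hyperparameters (128 filters, kernel sizes 2 and 3, 100 LSTM units, a 64-unit dense layer, dropout 0.5, and binary cross-entropy) follow the text.

```python
import torch
import torch.nn as nn

class MalwareNet(nn.Module):
    """Sketch of the gated-CNN + Bi-LSTM architecture described in the text."""
    def __init__(self, feat_dim=102, filters=128, lstm_units=100):
        super().__init__()
        self.input_bn = nn.BatchNorm1d(feat_dim)
        # Two gated CNN blocks with kernel sizes 2 and 3, 128 filters, stride 1.
        self.convs = nn.ModuleList([
            nn.ModuleDict({
                "a": nn.Conv1d(feat_dim, filters, k, padding=k // 2),
                "b": nn.Conv1d(feat_dim, filters, k, padding=k // 2),
            }) for k in (2, 3)
        ])
        self.mid_bn = nn.BatchNorm1d(2 * filters)
        self.bilstm = nn.LSTM(2 * filters, lstm_units, batch_first=True,
                              bidirectional=True)
        self.head = nn.Sequential(
            nn.Linear(2 * lstm_units, 64), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, x):                              # x: (batch, seq_len, feat_dim)
        z = self.input_bn(x.transpose(1, 2))           # (batch, feat_dim, seq_len)
        gated = [c["a"](z) * torch.sigmoid(c["b"](z)) for c in self.convs]
        min_len = min(g.size(2) for g in gated)        # align lengths after conv
        z = torch.cat([g[:, :, :min_len] for g in gated], dim=1)
        z = self.mid_bn(z).transpose(1, 2)             # (batch, seq, 2*filters)
        h, _ = self.bilstm(z)
        pooled = h.max(dim=1).values                   # global max pooling over time
        return self.head(pooled)

model = MalwareNet()
probs = model(torch.randn(4, 1000, 102))               # 4 sequences of 1000 API calls
loss = nn.BCELoss()(probs.squeeze(1), torch.ones(4))    # binary cross-entropy (Eq. 3)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```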
Experiments
Dataset
As described before, 12 commercial anti-virus engines are set up to classify the PE files. We label a PE file as positive if 4 or more engines agree that it is malicious, and as negative if none of the engines classifies it as malware. For the remaining cases, we consider the results inconclusive and therefore exclude them from our dataset. The collected data are archived by date, and we pick two months of data (April and May) to conduct our experiments. All these PE files are processed by our system (as shown in Figure 1) to collect the API call sequences. Table 2 is a summary of the data, where each row represents the statistics of the data in a month.
Model Evaluation
In order to investigate the performance improvement, we compare the proposed model with three machine learning-based models and three deep learning-based models.
• (Uppal et al. 2014) extract 3-gram vectors from API call names. Then they use the odds ratio to select the most important vectors. SVM is applied as the model. • ) use a hash table to indicate the presence of strings. The strings come from both API names and arguments. The generated hash table is then used as features and the classifier is Random Forest. • (Pascanu et al. 2015) train a language model using an RNN which can predict the next API call given the previous API calls. Then the RNN model is frozen and the hidden features are extracted for malware detection. The input of the model is a sequence of d-dimensional one-hot vectors whose elements are all zeros except at the position (with element value 1) corresponding to the API call.
• (Kolosnjaji et al. 2016) propose a model which combines stacked CNNs and RNNs. The input is also one-hot vectors for the API call sequence.
• (Agrawal et al. 2018) extract one-hot vectors from the API call sequence and frequent n-gram vectors from the API arguments. The model uses several stacked LSTMs.
All the experiments are conducted on our dataset. We use 4-fold cross-validation (CV) over the April dataset to train the models and test on the May dataset. Considering that new malware is generated over time, there could be many PE files of new malware in the May dataset. Therefore, the performance indicates the model's capability for detecting unknown malware to a certain degree.
Three metrics are considered: ROC (receiver operating characteristic curve) AUC (Area Under the Curve) score, ACC (accuracy) and Recall when the FP (false positive) rate is 0.1%. The recall is defined as the ratio of the correctly detected malware PE files over all malware PE files. The FP rate is the ratio of benign PE files incorrectly identified as malware. Anti-virus products are required to keep a low false alarm rate to avoid disturbing users frequently (Nicholas 2017). A good model should achieve a high recall rate for a fixed low false positive rate. We provide 95% confidence intervals for all these three metrics. In addition, the inference time per sample, which includes the time for feature processing and model prediction, is also taken into account.
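As an illustration, the recall at a fixed false-positive rate can be read off the ROC curve; the following sketch does this with scikit-learn on synthetic scores (the labels and scores below are purely illustrative).

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def recall_at_fpr(y_true, y_score, target_fpr=0.001):
    """Largest TPR (recall) achievable while keeping FPR <= target_fpr."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    feasible = tpr[fpr <= target_fpr]
    return feasible.max() if feasible.size else 0.0

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=10_000)                              # toy labels
y_score = np.clip(y_true * 0.7 + rng.normal(0, 0.3, 10_000), 0, 1)    # toy scores
print("AUC:", roc_auc_score(y_true, y_score))
print("Recall @ 0.1% FPR:", recall_at_fpr(y_true, y_score))
```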
From the experimental results in Table 3, our proposed model achieves the best AUC score, accuracy and recall among all the baseline models on both the CV and test datasets. Figure 3 displays the ROC curve of all models. The dashed curves are the ROCs of the traditional machine learning models, while the solid lines are the ROCs of the deep learning models. The experimental results illustrate that the traditional machine learning approaches and deep learning approaches are comparable. It should be noted that the model ) achieves quite good results by using a basic method to extract the string information. This indicates the importance of strings in feature processing. Therefore, we spend a lot of effort on the feature engineering of string data. The results also show that models with argument features generally outperform the ones neglecting arguments. The argument features increase the test AUC score of the traditional machine learning method by 3% and the test AUC score of deep learning by about 1%. Therefore, including API arguments is necessary. Figure 3 shows a margin between the results on the validation and test datasets. Since the training dataset is collected before the testing dataset, the test data is likely to include new malware PE files. However, our proposed solution achieves the best performance on the test dataset, which confirms its ability to detect new and constantly evolving malware. As for the inference time, models with the argument features take a slightly longer time. However, hundreds of milliseconds of inference time are relatively small and acceptable, because the data collection using the Cuckoo sandbox is time-consuming and costs 3-5 minutes per sample. The training takes about 10 minutes per epoch, which could be easily reduced via distributed training (Ooi et al. 2015).
Ablation Study
The proposed model consists of several components that can be flexibly adjusted, e.g., the gated CNNs, Bi-LSTM and batch normalization. In order to explore the effects of different configurations, we employ several sets of comparison experiments by fixing the other structures and only changing the component under test. The results of these experiments serve as the basis for the decision of our final model structure.
• Gated CNNs with three configurations: a single gated CNN with kernel size 2 (2-GatedCNN), two gated CNNs with kernel sizes 2 and 3 (2,3-GatedCNN), and three gated CNNs with kernel sizes 2, 3 and 4 (2,3,4-GatedCNN). • Batch normalization with four configurations: the model without any batch normalization (BN) layer, without the first BN layer (after the input), without the second BN layer (after the gated CNNs), and with both BN layers. Figure 4 depicts the comparisons for different numbers of gated CNNs. 2-GatedCNN converges more slowly, although its final performance is very close to the other two models. In addition, increasing the number of gated CNNs from 2,3-GatedCNN to 2,3,4-GatedCNN does not bring any performance improvement. The best AUC scores of 2-GatedCNN and 2,3-GatedCNN are 98.80% and 98.86% respectively. Therefore, we choose 2,3-GatedCNN in our model. Figure 5 displays the performance with different numbers of batch normalization layers. Although these four curves tend to be closer at later epochs, the curve with both BN layers shows slightly superior performance with the highest AUC score at 98.80%. As for different numbers of Bi-LSTM layers, Figure 6 shows the performance for each configuration. Obviously, in both figures, the curve of 0-Bi-LSTM is below the other two curves by a large margin, which indicates that the Bi-LSTM is vital. The other two curves in both figures are closely interleaved; however, 1-Bi-LSTM is slightly better, with the highest point reaching 98.80%. In addition, 1-Bi-LSTM is about twice as fast as 2-Bi-LSTM in computation time. Thus, we choose 1-Bi-LSTM as the final configuration of the proposed model.
Conclusion
In this work, we propose a novel feature engineering method and a new deep learning architecture for malware detection over the API call sequence. Hashing tricks are applied to process the heterogeneous information from API calls, including the name, category and arguments. A homogeneous and low-cost feature representation is extracted. Then, we use multiple gated CNNs to transform the high-dimensional hash features from each API call, and feed the results into a Bi-LSTM to capture the sequential correlations of API calls within the sequence. The experiments show that our approach outperforms all baselines. An ablation study over multiple architecture variations verifies our architecture design decisions.
NCR002-020), and FY2017 SUG Grant. We also thank SecureAge Technology of Singapore for sharing the data. | 4,099
1907.07307 | 2961578905 | We present a data-driven framework for incorporating side information in dynamic optimization under uncertainty. Specifically, our approach uses predictive machine learning methods (such as k-nearest neighbors, kernel regression, and random forests) to weight the relative importance of various data-driven uncertainty sets in a robust optimization formulation. Through a novel measure concentration result for local machine learning methods, we prove that the proposed framework is asymptotically optimal for stochastic dynamic optimization with covariates. We also describe a general-purpose approximation for the proposed framework, based on overlapping linear decision rules, which is computationally tractable and produces high-quality solutions for dynamic problems with many stages. Across a variety of examples in shipment planning, inventory management, and finance, our method achieves improvements of up to 15 over alternatives and requires less than one minute of computation time on problems with twelve stages. | This paper follows a recent body of literature on data-driven optimization under uncertainty in operations research and management science. Much of this work has focused on the paradigm of distributionally robust optimization, in which the optimal solution is that which performs best in expectation over a worst-case probability distribution from an ambiguity set. Motivated by probabilistic guarantees, distributionally robust optimization has found particular applicability in data-driven settings in which the ambiguity set is constructed using historical data, such as @cite_7 @cite_9 @cite_5 @cite_14 . In particular, the final steps in our convergence result () draw heavily from similar techniques from @cite_5 and @cite_10 . In contrast to previous work, this paper develops a new measure concentration result for the weighted empirical distribution () which enables machine learning and covariates to be incorporated into sample robust optimization and Wasserstein-based distributionally robust optimization for the first time. | {
"abstract": [
"We study stochastic programs where the decision-maker cannot observe the distribution of the exogenous uncertainties but has access to a finite set of independent samples from this distribution. In this setting, the goal is to find a procedure that transforms the data to an estimate of the expected cost function under the unknown data-generating distribution, i.e., a predictor, and an optimizer of the estimated cost function that serves as a near-optimal candidate decision, i.e., a prescriptor. As functions of the data, predictors and prescriptors constitute statistical estimators. We propose a meta-optimization problem to find the least conservative predictors and prescriptors subject to constraints on their out-of-sample disappointment. The out-of-sample disappointment quantifies the probability that the actual expected cost of the candidate decision under the unknown true distribution exceeds its predicted cost. Leveraging tools from large deviations theory, we prove that this meta-optimization problem admits a unique solution: The best predictor-prescriptor pair is obtained by solving a distributionally robust optimization problem over all distributions within a given relative entropy distance from the empirical distribution of the data.",
"Stochastic programming can effectively describe many decision-making problems in uncertain environments. Unfortunately, such programs are often computationally demanding to solve. In addition, their solution can be misleading when there is ambiguity in the choice of a distribution for the random parameters. In this paper, we propose a model that describes uncertainty in both the distribution form (discrete, Gaussian, exponential, etc.) and moments (mean and covariance matrix). We demonstrate that for a wide range of cost functions the associated distributionally robust (or min-max) stochastic program can be solved efficiently. Furthermore, by deriving a new confidence region for the mean and the covariance matrix of a random vector, we provide probabilistic arguments for using our model in problems that rely heavily on historical data. These arguments are confirmed in a practical example of portfolio selection, where our framework leads to better-performing policies on the “true” distribution underlying the daily returns of financial assets.",
"Motivated by data-driven decision making and sampling problems, we investigate probabilistic interpretations of robust optimization (RO). We establish a connection between RO and distributionally robust stochastic programming (DRSP), showing that the solution to any RO problem is also a solution to a DRSP problem. Specifically, we consider the case where multiple uncertain parameters belong to the same fixed dimensional space and find the set of distributions of the equivalent DRSP problem. The equivalence we derive enables us to construct RO formulations for sampled problems (as in stochastic programming and machine learning) that are statistically consistent, even when the original sampled problem is not. In the process, this provides a systematic approach for tuning the uncertainty set. The equivalence further provides a probabilistic explanation for the common shrinkage heuristic, where the uncertainty set used in an RO problem is a shrunken version of the original uncertainty set.",
"We consider stochastic programs where the distribution of the uncertain parameters is only observable through a finite training dataset. Using the Wasserstein metric, we construct a ball in the space of (multivariate and non-discrete) probability distributions centered at the uniform distribution on the training samples, and we seek decisions that perform best in view of the worst-case distribution within this Wasserstein ball. The state-of-the-art methods for solving the resulting distributionally robust optimization problems rely on global optimization techniques, which quickly become computationally excruciating. In this paper we demonstrate that, under mild assumptions, the distributionally robust optimization problems over Wasserstein balls can in fact be reformulated as finite convex programs—in many interesting cases even as tractable linear programs. Leveraging recent measure concentration results, we also show that their solutions enjoy powerful finite-sample performance guarantees. Our theoretical results are exemplified in mean-risk portfolio optimization as well as uncertainty quantification.",
""
],
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_9",
"@cite_5",
"@cite_10"
],
"mid": [
"2606985488",
"1968355947",
"2100072100",
"2963450292",
"2966701082"
]
} | Dynamic Optimization with Side Information | Dynamic decision making under uncertainty forms the foundation for numerous fundamental problems in operations research and management science. In these problems, a decision maker attempts to minimize an uncertain objective over time, as information incrementally becomes available. For example, consider a retailer with the goal of managing the inventory of a new short life cycle product. Each week, the retailer must decide an ordering quantity to replenish its inventory. Future demand for the product is unknown, but the retailer can base its ordering decisions on the remaining inventory level, which depends on the realized demands in previous weeks. A risk-averse investor faces a similar problem when constructing and adjusting a portfolio of assets in order to achieve a desirable risk-return tradeoff over a horizon of many months. Additional examples abound in energy planning, airline routing, and ride sharing, as well as in many other areas.
To make high quality decisions in dynamic environments, the decision maker must accurately model future uncertainty. Often, practitioners have access to side information or auxiliary covariates, which can help predict that uncertainty. For a retailer, although the future demand for a newly introduced clothing item is unknown, data on the brand, style, and color of the item, as well as data on market trends and social media, can help predict it. For a risk-averse investor, while the returns of the assets in future stages are uncertain, recent asset returns and prices of relevant options can provide crucial insight into upcoming volatility. Consequently, organizations across many industries are continuing to prioritize the use of predictive analytics in order to leverage vast quantities of data to understand future uncertainty and make better operational decisions.
A recent body of work has aimed to leverage predictive analytics in decision making under uncertainty. For example, Hannah et al. (2010), Ban and Rudin (2018), Bertsimas and Kallus (2014) and Ho and Hanasusanto (2019) investigate prescriptive approaches, based on sample average approximation, that use local machine learning to assign weights to the historical data based on covariates. Bertsimas and Van Parys (2017) propose adding robustness to those weights to achieve optimal asymptotic budget guarantees. Elmachtoub and Grigas (2017) develop an approach for linear optimization problems in which a machine learning model is trained to minimize the decision cost. All of these approaches are specialized for single-stage or two-stage optimization problems, and do not readily generalize to problems with many stages. For a class of dynamic inventory problems, propose a data-driven approach by fitting the stochastic process and covariates to a parametric regression model, which is asymptotically optimal when the model is correctly specified. Bertsimas and McCord (2019) propose a different approach based on dynamic programming that uses nonparametric machine learning methods to handle auxiliary covariates.
However, these dynamic approaches require scenario tree enumeration and suffer from the curse of dimensionality. To the best of our knowledge, no previous work leverages machine learning in a computationally tractable, data-driven framework for decision making in dynamic environments with covariates.
Recently, Bertsimas et al. (2018a) developed a data-driven approach for dynamic optimization under uncertainty that they call sample robust optimization (SRO). Their SRO framework solves a robust optimization problem in which an uncertainty set is constructed around each historical sample path. They show this data-driven framework enjoys nonparametric out-of-sample performance guarantees for a class of dynamic linear optimization problems without covariates and show that this framework can be approximated using decision rule techniques from robust optimization.
Contributions
In this paper, we present a new framework for leveraging side information in dynamic optimization. Specifically, we propose combining local machine learning methods with the sample robust optimization framework. Through a new measure concentration result, we show that the proposed sample robust optimization with covariates framework is asymptotically optimal, providing the assurance that the resulting decisions are nearly optimal in the presence of big data. We also demonstrate the tractability of the approach via an approximation algorithm based on overlapping linear decision rules. To the best of our knowledge, our method is the first nonparametric approach for tractably solving dynamic optimization problems with covariates, offering practitioners a general-purpose tool for better decision making with predictive analytics. We summarize our main contributions as follows:
• We present a general-purpose framework for leveraging machine learning in data-driven dynamic optimization with covariates. Our approach extends the sample robust optimization framework by assigning weights to the uncertainty sets based on covariates. The weights are computed using machine learning methods such as k-nearest neighbor regression, kernel regression, and random forest regression.
• We provide theoretical justification for the proposed framework in the big data setting. First, we develop a new measure concentration result for local machine learning methods (Theorem 2), which shows that the weighted empirical distribution produced by local predictors converges quickly to the true conditional distribution. To the best of our knowledge, such a result for local machine learning is the first of its kind. We use Theorem 2 to establish that the proposed framework is asymptotically optimal for dynamic optimization with covariates without any parametric assumptions (Theorem 1).
• To find high quality solutions for problems with many stages in practical computation times, we present an approximation scheme based on overlapping linear decision rules. Specifically, we propose using separate linear decision rules for each uncertainty set to approximate the costs incurred in each stage. We show that the approximation is computationally tractable, both with respect to the number of stages and size of the historical dataset.
• By using all available data, we show that our method produces decisions that achieve improved out-of-sample performance. Specifically, in a variety of examples (shipment planning, inventory management, and finance), across a variety of time horizons, our proposed method outperforms alternatives, in a statistically significant manner, achieving up to 15% improvement in average out-of-sample cost. Moreover, our algorithm is practical and scalable, requiring less than one minute on examples with up to twelve stages.
The paper is organized as follows. Section 2 introduces the problem setting and notation. Section 3 proposes the new framework for incorporating machine learning into dynamic optimization. Section 4 develops theoretical guarantees on the proposed framework. Section 5 presents the general multi-policy approximation scheme for dynamic optimization with covariates. Section 6 presents a detailed investigation and computational simulations of the proposed methodology in shipment planning, inventory management, and finance. We conclude in Section 7.
Problem Setting
We consider finite-horizon discrete-time stochastic dynamic optimization problems. The uncertain quantities observed in each stage are denoted by random variables
$\xi_1 \in \Xi_1 \subseteq \mathbb{R}^{d_\xi^1}, \ldots, \xi_T \in \Xi_T \subseteq \mathbb{R}^{d_\xi^T}$. The decisions made in each stage are denoted by $x_1 \in \mathcal{X}_1 \subseteq \mathbb{R}^{d_x^1}, \ldots, x_T \in \mathcal{X}_T \subseteq \mathbb{R}^{d_x^T}$.
Given realizations of the uncertain quantities and decisions, we incur a cost of
$c(\xi_1, \ldots, \xi_T, x_1, \ldots, x_T) \in \mathbb{R}$.
A decision rule $\pi = (\pi_1, \ldots, \pi_T)$ is a collection of measurable functions $\pi_t : \Xi_1 \times \cdots \times \Xi_{t-1} \to \mathcal{X}_t$ which specify what decision to make in stage $t$ based on the information observed up to that point.
Given realizations of the uncertain quantities and choice of decision rules, the resulting cost is $c^\pi(\xi_1, \ldots, \xi_T) := c\big(\xi_1, \ldots, \xi_T, \pi_1, \ldots, \pi_T(\xi_1, \ldots, \xi_{T-1})\big)$.
Before selecting the decision rules, we observe auxiliary covariates $\gamma \in \Gamma \subseteq \mathbb{R}^{d_\gamma}$. For example, in the aforementioned fashion setting, the auxiliary covariates may include information on the brand, style, and color of a new clothing item, with the remaining uncertainties representing the demand for the product in each week of the lifecycle. Given a realization of the covariates $\gamma = \bar\gamma$, our goal is to find decision rules which minimize the conditional expected cost:
$v^*(\bar\gamma) := \underset{\pi \in \Pi}{\text{minimize}} \;\; \mathbb{E}\left[ c^\pi(\xi_1, \ldots, \xi_T) \,\middle|\, \gamma = \bar\gamma \right]. \qquad (1)$
We refer to (1) as dynamic optimization with covariates. The optimization takes place over a collection Π which is any subset of the space of all non-anticipative decision rules.
In this paper, we assume that the joint distribution of the covariates and uncertain quantities (γ, ξ 1 , . . . , ξ T ) is unknown, and our knowledge consists of historical data of the form
$(\gamma^1, \xi^1_1, \ldots, \xi^1_T), \ldots, (\gamma^N, \xi^N_1, \ldots, \xi^N_T),$
where each of these tuples consists of a realization of the auxiliary covariates and the following realization of the random variables over the stages. For example, in the aforementioned fashion setting, each tuple corresponds to the covariates of a past fashion item as well as its demand over its lifecycle. We will not assume any parametric structure on the relationship between the covariates and future uncertainty.
The goal of this paper is a general-purpose, computationally tractable, data-driven approach for approximately solving dynamic optimization with covariates. In the following sections, we propose and analyze a new framework which leverages nonparametric machine learning, trained from historical data, to predict future uncertainty from covariates in a way that leads to near-optimal decision rules to (1).
Notation
The joint probability distribution of the covariates γ and uncertain quantities ξ = (ξ 1 , . . . , ξ T ) is denoted by P. For the purpose of proving theorems, we assume throughout this paper that the historical data are independent and identically distributed (i.i.d.) samples from this distribution P. In other words, we assume that the historical data satisfies
$((\gamma^1, \xi^1), \ldots, (\gamma^N, \xi^N)) \sim \mathbb{P}^N,$
where $\mathbb{P}^N := \mathbb{P} \times \cdots \times \mathbb{P}$ is the product measure. The set of all probability distributions supported on $\Xi := \Xi_1 \times \cdots \times \Xi_T \subseteq \mathbb{R}^{d_\xi}$ is denoted by $\mathcal{P}(\Xi)$. For each realization of the covariates $\bar\gamma \in \Gamma$, we assume that the conditional probability distribution satisfies $\mathbb{P}_{\bar\gamma} \in \mathcal{P}(\Xi)$, where $\mathbb{P}_{\bar\gamma}(\cdot)$ is shorthand for $\mathbb{P}(\cdot \mid \gamma = \bar\gamma)$. We sometimes use subscript notation for expectations to specify the underlying probability distribution; for example, the following two expressions are equivalent:
$\mathbb{E}_{\xi \sim \mathbb{P}_{\bar\gamma}}\left[ f(\xi_1, \ldots, \xi_T) \right] \equiv \mathbb{E}\left[ f(\xi_1, \ldots, \xi_T) \mid \gamma = \bar\gamma \right].$
Finally, we say that the cost function resulting from a policy $\pi$ is upper semicontinuous if $\limsup_{\zeta \to \bar\zeta} c^\pi(\zeta_1, \ldots, \zeta_T) \le c^\pi(\bar\zeta_1, \ldots, \bar\zeta_T)$ for all $\bar\zeta \in \Xi$.
Sample Robust Optimization with Covariates
In this section, we present our approach for incorporating machine learning in dynamic optimization. We first review sample robust optimization, and then we introduce our new sample robust optimization with covariates framework.
Preliminary: sample robust optimization
Consider a stochastic dynamic optimization problem of the form (1) in which there are no auxiliary covariates. The underlying joint distribution of the random variables $\xi \equiv (\xi_1, \ldots, \xi_T)$ is unknown, but we have data consisting of sample paths, $\xi^1 \equiv (\xi^1_1, \ldots, \xi^1_T), \ldots, \xi^N \equiv (\xi^N_1, \ldots, \xi^N_T)$. For this setting, sample robust optimization can be used to find approximate solutions in stochastic dynamic optimization. To apply the framework, one constructs an uncertainty set around each sample path in the training data and then chooses the decision rules that optimize the average of the worst-case realizations of the cost. Formally, this framework results in the following robust optimization problem:
$\underset{\pi \in \Pi}{\text{minimize}} \;\; \sum_{i=1}^{N} \frac{1}{N} \sup_{\zeta \in \mathcal{U}^i_N} c^\pi(\zeta_1, \ldots, \zeta_T), \qquad (2)$
where U i N ⊆ Ξ is an uncertainty set around ξ i . Intuitively speaking, (2) chooses the decision rules by averaging over the historical sample paths which are adversarially perturbed. Under mild probabilistic assumptions on the underlying joint distribution and appropriately constructed uncertainty sets, Bertsimas et al. (2018a) show that sample robust optimization converges asymptotically to the underlying stochastic problem and that (2) is amenable to approximations similar to dynamic robust optimization.
Incorporating covariates into sample robust optimization
We now present our new framework, based on sample robust optimization, for solving dynamic optimization with covariates. In the proposed framework, we first train a machine learning algorithm on the historical data to predict future uncertainty (ξ 1 , . . . , ξ T ) as a function of the covariates.
From the trained learner, we obtain weight functions $w^i_N(\bar\gamma)$, for $i = 1, \ldots, N$, each of which captures the relevance of the $i$th training sample to the new covariates $\bar\gamma$. We incorporate the weights into sample robust optimization by multiplying the cost associated with each training example by the corresponding weight function. The resulting sample robust optimization with covariates framework is as follows:
$\hat{v}_N(\bar\gamma) := \underset{\pi \in \Pi}{\text{minimize}} \;\; \sum_{i=1}^{N} w^i_N(\bar\gamma) \sup_{\zeta \in \mathcal{U}^i_N} c^\pi(\zeta_1, \ldots, \zeta_T), \qquad (3)$
where the uncertainty sets are defined as
$\mathcal{U}^i_N := \left\{ \zeta \in \Xi : \|\zeta - \xi^i\| \le \epsilon_N \right\},$
and $\|\cdot\|$ is some $\ell_p$ norm with $p \ge 1$.
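As an illustration of how (3) is used, the following sketch solves a single-stage newsvendor instance of the framework with k-nearest-neighbor weights. The cost parameters, data-generating process, and uncertainty-set radius are our own illustrative assumptions; the worst case over each interval uncertainty set is evaluated at the endpoints because the newsvendor cost is convex piecewise linear in the demand.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
N = 200
gamma = rng.uniform(0, 1, size=(N, 2))                     # historical covariates
xi = 100 + 60 * gamma[:, 0] + 10 * rng.standard_normal(N)  # historical demands

h, b = 1.0, 4.0                                            # holding / backorder cost
eps = 5.0                                                  # uncertainty-set radius
gamma_new = np.array([0.8, 0.3])                           # new covariate observation

# k-nearest-neighbor weights w_N^i(gamma_new)
k = 20
idx = np.argsort(np.linalg.norm(gamma - gamma_new, axis=1))[:k]
w = np.zeros(N)
w[idx] = 1.0 / k

def newsvendor_cost(x, d):
    return h * np.maximum(x - d, 0.0) + b * np.maximum(d - x, 0.0)

def sro_objective(x):
    # Worst case over each interval [xi_i - eps, xi_i + eps] is attained at an
    # endpoint since the cost is convex piecewise linear in the demand.
    worst = np.maximum(newsvendor_cost(x, xi - eps), newsvendor_cost(x, xi + eps))
    return float(w @ worst)

res = minimize_scalar(sro_objective, bounds=(0.0, 300.0), method="bounded")
print("order quantity:", round(res.x, 1), "objective:", round(res.fun, 2))
```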
The above framework provides the flexibility for the practitioner to construct weights from a variety of machine learning algorithms. We focus in this paper on weight functions which come from nonparametric machine learning methods. Examples of viable predictive models include k-nearest neighbors (kNN), kernel regression, classification and regression trees (CART), and random forests (RF). We describe these four classes of weight functions.
Definition 1. The k-nearest neighbor weight functions are given by:
$w^i_{N,\text{kNN}}(\bar\gamma) := \begin{cases} \frac{1}{k_N}, & \text{if } \gamma^i \text{ is a } k_N\text{-nearest neighbor of } \bar\gamma, \\ 0, & \text{otherwise.} \end{cases}$ Formally, $\gamma^i$ is a $k_N$-nearest neighbor of $\bar\gamma$ if $\left|\left\{ j \in \{1, \ldots, N\} \setminus \{i\} : \|\gamma^j - \bar\gamma\| < \|\gamma^i - \bar\gamma\| \right\}\right| < k_N$.
For more technical details, we refer the reader to Biau and Devroye (2015).
Definition 2. The kernel regression weight functions are given by:
$w^i_{N,\text{KR}}(\bar\gamma) := \dfrac{K\left( \|\gamma^i - \bar\gamma\| / h_N \right)}{\sum_{j=1}^{N} K\left( \|\gamma^j - \bar\gamma\| / h_N \right)},$
where $K(\cdot)$ is the kernel function and $h_N$ is the bandwidth parameter. Examples of kernel functions include the Gaussian kernel, $K(u) = \frac{1}{\sqrt{2\pi}} e^{-u^2/2}$, the triangular kernel, $K(u) = (1-u)\,\mathbb{1}\{u \le 1\}$, and the Epanechnikov kernel, $K(u) = \frac{3}{4}(1-u^2)\,\mathbb{1}\{u \le 1\}$. For more information on kernel regression, see Friedman et al. (2001, Chapter 6).
The next two types of weight functions we present are based on classification and regression trees (Breiman et al. 1984) and random forests (Breiman 2001). We refer the reader to Bertsimas and Kallus (2014) for technical implementation details.
Definition 3. The classification and regression tree weight functions are given by:
$w^i_{N,\text{CART}}(\bar\gamma) := \begin{cases} \frac{1}{|l_N(\bar\gamma)|}, & i \in l_N(\bar\gamma), \\ 0, & \text{otherwise,} \end{cases}$
where $l_N(\bar\gamma)$ is the set of indices $i$ such that $\gamma^i$ is contained in the same leaf of the tree as $\bar\gamma$.
Definition 4. The random forest weight functions are given by:
$w^i_{N,\text{RF}}(\bar\gamma) := \frac{1}{B} \sum_{b=1}^{B} w^{i,b}_{N,\text{CART}}(\bar\gamma),$
where $B$ is the number of trees in the ensemble, and $w^{i,b}_{N,\text{CART}}(\bar\gamma)$ refers to the weight function of the $b$th tree in the ensemble.
All of the above weight functions come from nonparametric machine learning methods. They are highly effective as predictive methods because they can learn complex relationships between the covariates and the response variable without requiring the practitioner to state an explicit parametric form. Similarly, as we prove in Section 4, solutions to (3) with these weight functions are asymptotically optimal for (1) without any parametric restrictions on the relationship between γ and ξ. In other words, incorporating covariates into sample robust optimization via (3) leads to better decisions asymptotically, even without specific knowledge of how the covariates affect the uncertainty.
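The following sketch computes kernel-regression and random-forest weight functions (Definitions 2 and 4) on synthetic data. The data and hyperparameters are illustrative, and fitting the forest with scikit-learn and reading off leaf memberships via `apply` is one possible implementation, not necessarily the one used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def kernel_weights(gammas, gamma_new, h_N, kernel=lambda u: np.exp(-u**2 / 2)):
    """Definition 2 with a Gaussian kernel and bandwidth h_N."""
    u = np.linalg.norm(gammas - gamma_new, axis=1) / h_N
    k = kernel(u)
    return k / k.sum()

def random_forest_weights(gammas, xis, gamma_new, n_trees=100, seed=0):
    """Definition 4: average of per-tree CART weights (Definition 3)."""
    forest = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
    forest.fit(gammas, xis)
    train_leaves = forest.apply(gammas)               # (N, n_trees) leaf indices
    new_leaves = forest.apply(gamma_new.reshape(1, -1))[0]
    same_leaf = (train_leaves == new_leaves)          # which samples share the leaf
    per_tree = same_leaf / same_leaf.sum(axis=0)      # w^{i,b}_{N,CART}
    return per_tree.mean(axis=1)                      # average over trees

rng = np.random.default_rng(2)
gammas = rng.uniform(0, 1, size=(500, 3))
xis = gammas[:, 0] + 0.1 * rng.standard_normal(500)
gamma_new = np.array([0.5, 0.2, 0.9])

w_kr = kernel_weights(gammas, gamma_new, h_N=0.2)
w_rf = random_forest_weights(gammas, xis, gamma_new)
print(w_kr.sum(), w_rf.sum())   # both weight vectors sum to 1
```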
Asymptotic Optimality
In this section, we establish asymptotic optimality guarantees for sample robust optimization with auxiliary covariates. We prove that, under mild conditions, (3) converges to (1) as the number of training samples goes to infinity. Thus, as the amount of data grows, sample robust optimization with covariates becomes an optimal approximation of the underlying stochastic dynamic optimization problem. Crucially, our convergence guarantee does not require parametric restrictions on the space of decision rules (e.g., linearity) or parametric restrictions on the joint distribution of the covariates and uncertain quantities. These theoretical results are consistent with empirical experiments in Section 6.
Main result
We begin by presenting our main result. The proof of the result depends on some technical assumptions and concepts from distributionally robust optimization. For simplicity, we defer the statement and discussion of technical assumptions regarding the underlying probability distribution and cost until Sections 4.3 and 4.4, and first discuss what is needed to apply the method in practice. The practitioner needs to select a weight function, parameters associated with that weight function, and the radius, N , of the uncertainty sets. While these may be selected by cross validation, we show that the method will in general converge if the parameters are selected to satisfy the following: Assumption 1. The weight functions and uncertainty set radius satisfy one of the following:
1. $\{w^i_N(\cdot)\}$ are $k$-nearest neighbor weight functions with $k_N = \min(\lceil k_3 N^{\delta} \rceil, N-1)$ for constants $k_3 > 0$ and $\delta \in (\tfrac{1}{2}, 1)$, and $\epsilon_N = k_1 / N^p$ for constants $k_1 > 0$ and $0 < p < \min\left\{ \frac{1-\delta}{d_\gamma}, \frac{2\delta - 1}{d_\xi + 2} \right\}$.
2. $\{w^i_N(\cdot)\}$ are kernel regression weight functions with the Gaussian, triangular, or Epanechnikov kernel function and $h_N = k_4 N^{-\delta}$ for constants $k_4 > 0$ and $\delta \in \left(0, \frac{1}{2 d_\gamma}\right)$, and $\epsilon_N = k_1 / N^p$ for constants $k_1 > 0$ and $0 < p < \min\left\{ \delta, \frac{1 - \delta d_\gamma}{2 + d_\xi} \right\}$.
Given Assumption 1, our main result is the following.
Theorem 1. Suppose the weight function and uncertainty sets satisfy Assumption 1, the joint probability distribution of $(\gamma, \xi)$ satisfies Assumptions 2-4 from Section 4.3, and the cost function satisfies Assumption 5 from Section 4.4. Then, for every $\bar\gamma \in \Gamma$, $\lim_{N \to \infty} \hat{v}_N(\bar\gamma) = v^*(\bar\gamma)$, $\mathbb{P}^\infty$-almost surely.
The theorem says that the objective value of (3) converges almost surely to the optimal value of the full-information problem, (1), as $N$ goes to infinity. The assumptions of the theorem require that the joint distribution and the feasible decision rules are well behaved. We will discuss these technical assumptions in more detail in the following sections.
In order to prove the asymptotic optimality of sample robust optimization with covariates, we view (3) through the more general lens of Wasserstein-based distributionally robust optimization.
We first review some properties of the Wasserstein metric and then prove a key intermediary result, from which our main result follows.
Review of the Wasserstein metric
The Wasserstein metric provides a distance function between probability distributions. In particular, given two probability distributions $\mathbb{Q}, \mathbb{Q}' \in \mathcal{P}(\Xi)$, the type-1 Wasserstein distance is defined as the optimal objective value of a minimization problem:
$d_1(\mathbb{Q}, \mathbb{Q}') := \inf\left\{ \mathbb{E}_{(\xi, \xi') \sim \Pi} \|\xi - \xi'\| \;:\; \Pi \text{ is a joint distribution of } \xi \text{ and } \xi' \text{ with marginals } \mathbb{Q} \text{ and } \mathbb{Q}', \text{ respectively} \right\}.$
The Wasserstein metric is particularly appealing because a distribution with finite support can have a finite distance to a continuous distribution. This allows us to construct a Wasserstein ball around an empirical distribution that includes continuous distributions, which cannot be done with other popular measures such as the Kullback-Leibler divergence (Kullback and Leibler 1951).
We remark that the 1-Wasserstein metric satisfies the axioms of a metric, including the triangle inequality (Clement and Desch 2008):
$d_1(\mathbb{Q}_1, \mathbb{Q}_2) \le d_1(\mathbb{Q}_1, \mathbb{Q}_3) + d_1(\mathbb{Q}_3, \mathbb{Q}_2), \qquad \forall \mathbb{Q}_1, \mathbb{Q}_2, \mathbb{Q}_3 \in \mathcal{P}(\Xi).$
Important to this paper, the 1-Wasserstein metric admits a dual form, as shown by Kantorovich and Rubinstein (1958),
$d_1(\mathbb{Q}, \mathbb{Q}') = \sup_{\text{Lip}(h) \le 1} \left| \mathbb{E}_{\xi \sim \mathbb{Q}}[h(\xi)] - \mathbb{E}_{\xi \sim \mathbb{Q}'}[h(\xi)] \right|,$
where the supremum is taken over all 1-Lipschitz functions. Note that the absolute value is optional in the dual form of the metric, and the space of Lipschitz functions can be restricted to those which satisfy h(0) = 0 without loss of generality. Finally, we remark that Fournier and Guillin (2015) prove under a light-tailed assumption that the 1-Wasserstein distance between the empirical distribution and its underlying distribution concentrates around zero with high probability. Theorem 2 in the following section extends this concentration result to the setting with auxiliary covariates.
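For intuition, the 1-Wasserstein distance between a weighted empirical measure (such as the one introduced in the next subsection) and a reference distribution can be computed numerically in one dimension; the sketch below uses SciPy on toy data.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(3)
# Weighted empirical distribution (support points with weights) versus a
# large sample approximating the target conditional distribution.
support = rng.normal(1.0, 1.0, size=50)
weights = rng.random(50)
weights /= weights.sum()
target_sample = rng.normal(1.0, 1.0, size=100_000)

d1 = wasserstein_distance(support, target_sample, u_weights=weights)
print("1-Wasserstein distance:", round(d1, 4))
```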
Concentration of the weighted empirical measure
Given a local predictive method, let the corresponding weighted empirical measure be defined as
$\hat{\mathbb{P}}^N_{\bar\gamma} := \sum_{i=1}^{N} w^i_N(\bar\gamma) \, \delta_{\xi^i},$
where δ ξ denotes the Dirac probability distribution which places point mass at ξ. In this section,
we prove under mild assumptions that the weighted empirical measure $\hat{\mathbb{P}}^N_{\bar\gamma}$ concentrates quickly to $\mathbb{P}_{\bar\gamma}$ with respect to the 1-Wasserstein metric. We introduce the following assumptions on the underlying joint probability distribution:
Assumption 2 (Conditional Subgaussianity). There exists a parameter σ > 0 such that
$\mathbb{P}\left( \left\| \xi - \mathbb{E}[\xi \mid \gamma = \bar\gamma] \right\| > t \;\middle|\; \gamma = \bar\gamma \right) \le \exp\left( -\frac{t^2}{2\sigma^2} \right) \qquad \forall t > 0, \; \bar\gamma \in \Gamma.$
Assumption 3 (Lipschitz Continuity). There exists 0 < L < ∞ such that
$d_1(\mathbb{P}_{\bar\gamma}, \mathbb{P}_{\bar\gamma'}) \le L \|\bar\gamma - \bar\gamma'\|, \qquad \forall \bar\gamma, \bar\gamma' \in \Gamma.$
Assumption 4 (Smoothness of Auxiliary Covariates). The set Γ is compact, and there exists g > 0 such that
$\mathbb{P}\left( \|\gamma - \bar\gamma\| \le \epsilon \right) \ge g \epsilon^{d_\gamma}, \qquad \forall \epsilon > 0, \; \bar\gamma \in \Gamma.$
With these assumptions, we are ready to state the concentration result, which we prove using a novel technique that relies on the dual form of the Wasserstein metric and a discrete approximation of the space of 1-Lipschitz functions.
Theorem 2. Suppose the weight function and uncertainty sets satisfy Assumption 1 and the joint probability distribution of (γ, ξ) satisfies Assumptions 2-4. Then, for everyγ ∈ Γ,
$\mathbb{P}^\infty\left[ \left\{ d_1(\mathbb{P}_{\bar\gamma}, \hat{\mathbb{P}}^N_{\bar\gamma}) > \epsilon_N \right\} \text{ i.o.} \right] = 0.$
Proof. Without loss of generality, we assume throughout the proof that all norms $\|\cdot\|$ refer to the $\ell_\infty$ norm.¹ Fix any $\bar\gamma \in \Gamma$. It follows from Assumption 1 that
$\{w^i_N(\bar\gamma)\}$ are not functions of $\xi^1, \ldots, \xi^N$; (4)
$\sum_{i=1}^{N} w^i_N(\bar\gamma) = 1$ and $w^1_N(\bar\gamma), \ldots, w^N_N(\bar\gamma) \ge 0, \quad \forall N \in \mathbb{N}$; (5)
$\epsilon_N = k_1 / N^p, \quad \forall N \in \mathbb{N}$, (6)
for constants $k_1, p > 0$. Moreover, Assumption 1 also implies that there exist constants $k_2 > 0$ and $\eta > p(2 + d_\xi)$ such that
$\lim_{N \to \infty} \frac{1}{\epsilon_N} \sum_{i=1}^{N} w^i_N(\bar\gamma) \|\gamma^i - \bar\gamma\| = 0$, $\mathbb{P}^\infty$-almost surely; (7)
$\mathbb{E}_{\mathbb{P}^N}\left[ \exp\left( -\theta \Big/ \sum_{i=1}^{N} \left( w^i_N(\bar\gamma) \right)^2 \right) \right] \le \exp(-k_2 \theta N^\eta), \quad \forall \theta \in (0,1), \; N \in \mathbb{N}$. (8)
The proof of the above statements under Assumption 1 is found in Appendix EC.1. Now,
choose any fixed $q \in (0, \eta/(2 + d_\xi) - p)$, and let $b_N := N^q$, $B_N := \left\{ \zeta \in \mathbb{R}^{d_\xi} : \|\zeta\| \le b_N \right\}$, and $I_N := \mathbb{1}\left\{ \xi^1, \ldots, \xi^N \in B_N \right\}$.
Finally, we define the following intermediary probability distributions:
$\mathbb{Q}^N_{\bar\gamma} := \sum_{i=1}^{N} w^i_N(\bar\gamma) \mathbb{P}_{\gamma^i}, \qquad \mathbb{Q}^N_{\bar\gamma | B_N} := \sum_{i=1}^{N} w^i_N(\bar\gamma) \mathbb{P}_{\gamma^i | B_N},$ where $\mathbb{P}_{\gamma^i | B_N}(\cdot)$ is shorthand for $\mathbb{P}(\cdot \mid \gamma = \gamma^i, \xi \in B_N)$.
Applying the triangle inequality for the 1-Wasserstein metric and the union bound,
$\mathbb{P}^\infty\left[ \left\{ d_1(\mathbb{P}_{\bar\gamma}, \hat{\mathbb{P}}^N_{\bar\gamma}) > \epsilon_N \right\} \text{ i.o.} \right] \le \mathbb{P}^\infty\left[ \left\{ d_1(\mathbb{P}_{\bar\gamma}, \mathbb{Q}^N_{\bar\gamma}) > \tfrac{\epsilon_N}{3} \right\} \text{ i.o.} \right] + \mathbb{P}^\infty\left[ \left\{ d_1(\mathbb{Q}^N_{\bar\gamma}, \mathbb{Q}^N_{\bar\gamma | B_N}) > \tfrac{\epsilon_N}{3} \right\} \text{ i.o.} \right] + \mathbb{P}^\infty\left[ \left\{ d_1(\mathbb{Q}^N_{\bar\gamma | B_N}, \hat{\mathbb{P}}^N_{\bar\gamma}) > \tfrac{\epsilon_N}{3} \right\} \text{ i.o.} \right].$
We now proceed to bound each of the above terms.
¹ To see why this is without loss of generality, consider any other $\ell_p$ norm where $p \ge 1$. In this case, $\|\xi - \xi'\|_p \le d_\xi^{1/p} \|\xi - \xi'\|_\infty$. By the definition of the 1-Wasserstein metric, this implies $d_1^p(\mathbb{P}_{\bar\gamma}, \hat{\mathbb{P}}^N_{\bar\gamma}) \le d_\xi^{1/p} \, d_1^\infty(\mathbb{P}_{\bar\gamma}, \hat{\mathbb{P}}^N_{\bar\gamma})$, where $d_1^p$ refers to the 1-Wasserstein metric with the $\ell_p$ norm. If $\epsilon_N$ satisfies Assumption 1, then $\epsilon_N / d_\xi^{1/p}$ also satisfies Assumption 1, so the result for all other choices of $\ell_p$ norms follows from the result with the $\ell_\infty$ norm.
Term 1: d 1 (Pγ,Q N γ ): By the dual form of the 1-Wasserstein metric,
d 1 (Pγ,Q N γ ) = sup Lip(h)≤1 E[h(ξ)|γ =γ] − N i=1 w i N (γ)E[h(ξ)|γ = γ i ] ,
where the supremum is taken over all 1-Lipschitz functions. By (5) and Jensen's inequality, we can upper bound this by
d 1 (Pγ,Q N γ ) ≤ N i=1 w i N (γ) sup Lip(h)≤1 E[h(ξ)|γ =γ] − E[h(ξ)|γ = γ i ] = N i=1 w i N (γ)d 1 Pγ, P γ i ≤ L N i=1 w i N (γ) γ − γ i ,
where the final inequality follows from Assumption 3. Therefore, it follows from (7) that
P ∞ d 1 (Pγ,Q N γ ) > N 3 i.o. = 0. (9) Term 2: d 1 (Q N γ ,Q N γ|B N ): Consider any Lipschitz function Lip(h) ≤ 1 for which h(0) = 0, and let N ∈ N satisfy bN ≥ σ + supγ ∈Γ E[ ξ |γ =γ] (which is finite because of Assumption 4). Then, for all N ≥N , and allγ ∈ Γ, E[h(ξ)|γ =γ ] − E[h(ξ) | γ =γ , ξ ∈ B N ] = E[h(ξ)1{ξ / ∈ B N } | γ =γ ] + E[h(ξ)1{ξ ∈ B N } | γ =γ ] − E[h(ξ) | γ =γ , ξ ∈ B N ] = E[h(ξ)1{ξ / ∈ B N } | γ =γ ] + E[h(ξ) | γ =γ , ξ ∈ B N ]P (ξ ∈ B N | γ =γ ) − E[h(ξ) | γ =γ , ξ ∈ B N ] = E[h(ξ)1{ξ / ∈ B N } | γ =γ ] − E[h(ξ) | γ =γ , ξ ∈ B N ]P(ξ / ∈ B N | γ =γ ) ≤ E[ ξ 1{ξ / ∈ B N } | γ =γ ] + b N P(ξ / ∈ B N | γ =γ ) = ∞ b N P ( ξ > t | γ =γ ) dt + b N P ( ξ ≥ b N | γ =γ ) ≤ (σ + b N ) exp − 1 2σ 2 b N − sup γ ∈Γ E[ ξ |γ =γ ] 2 .
The first inequality follows because |h(ξ)| ≤ b N for all ξ ∈ B N and |h(ξ)| ≤ ξ otherwise. For the second inequality, we used the Gaussian tail inequality ∞ x e −t 2 /2 dt ≤ e −x 2 /2 for x ≥ 1 (Vershynin 2018) along with Assumption 2. Because this bound holds uniformly over all h, and allγ ∈ Γ, it follows that
d 1 (Q N γ ,Q N γ|B N ) = sup Lip(h)≤1,h(0)=0 N i=1 w i N (γ) E[h(ξ) | γ = γ i ] − E[h(ξ) | γ = γ i , ξ ∈ B N ] ≤ N i=1 w i N (γ) sup Lip(h)≤1,h(0)=0 E[h(ξ) | γ = γ i ] − E[h(ξ) | γ = γ i , ξ ∈ B N ] ≤ sup γ ∈Γ sup Lip(h)≤1,h(0)=0 |E[h(ξ) | γ =γ ] − E[h(ξ) | γ =γ , ξ ∈ B N ]| ≤ (σ + b N ) exp − 1 2σ 2 b N − sup γ ∈Γ E[ ξ |γ =γ ] 2 ,
for all N ≥N . It is easy to see that the right hand side above divided by N /3 goes to 0 as N goes to infinity, so
P ∞ d 1 (Q N γ ,Q N γ|B N ) > N 3 i.o. = 0. Term 3: d 1 (Q N γ|B N ,P N γ )
: By the law of total probability,
P N d 1 (Q N γ|B N ,P N γ ) > N 3 ≤ P N (I N = 0) + P N d 1 (Q N γ|B N ,P N γ ) > N 3 I N = 1 .
We now show that each of the above terms have finite summations. First,
∞ N =1 P N (I N = 0) ≤ ∞ N =1 N sup γ ∈Γ P(ξ / ∈ B N | γ =γ ) ≤ ∞ N =1 N sup γ ∈Γ exp − (b N − E [ ξ | γ =γ ]) 2 2σ 2 < ∞.
The first inequality follows from the union bound, the second inequality follows from Assumption 2, and the final inequality follows because supγ ∈Γ E[ ξ |γ =γ ] < ∞ and the definition of b N .
Second, for each $l\in\mathbb{N}$, we define several quantities. Let $\mathcal{P}_l$ be the partitioning of $B_N = [-b_N, b_N]^{d_\xi}$ into $2^{l d_\xi}$ translations of $(-b_N 2^{-l},\, b_N 2^{-l}]^{d_\xi}$.
Let $\mathcal{H}_l$ be the set of piecewise constant functions which are constant on each region of the partition $\mathcal{P}_l$, taking values in $\{k\, b_N 2^{-l} : k \in \{0, \pm1, \pm2, \pm3, \ldots, \pm 2^l\}\}$. Note that $|\mathcal{H}_l| = (2^{l+1}+1)^{2^{l d_\xi}}$. Then, we observe that for all Lipschitz functions $\mathrm{Lip}(h)\le1$ which satisfy $h(0)=0$, there exists an $\hat h \in \mathcal{H}_l$ such that
$$\sup_{\zeta\in B_N} |h(\zeta) - \hat h(\zeta)| \le b_N 2^{-l+1}.$$
Indeed, within each region of the partition, $h$ can vary by no more than $b_N 2^{-l+1}$. The possible function values for $\hat h$ are separated by $b_N 2^{-l}$. Because $h$ is bounded by $\pm b_N$, this implies the existence of $\hat h \in \mathcal{H}_l$ such that $\hat h$ has a value within $b_N 2^{-l+1}$ of $h$ everywhere within that region. The identical reasoning holds for all other regions of the partition.
Therefore, for every l ∈ N,
\begin{align*}
&P^N\Big( d_1(Q^N_{\bar\gamma|B_N}, \hat P^N_{\bar\gamma}) > \tfrac{\varepsilon_N}{3} \,\Big|\, I_N=1 \Big)\\
&\qquad= P^N\Big( \sup_{\substack{\mathrm{Lip}(h)\le1\\ h(0)=0}} \Big| \sum_{i=1}^N w^i_N(\bar\gamma)\big( h(\xi^i) - E[h(\xi)\mid\gamma=\gamma^i,\ \xi\in B_N] \big) \Big| > \tfrac{\varepsilon_N}{3} \,\Big|\, I_N=1 \Big)\\
&\qquad\le P^N\Big( \sup_{\hat h\in\mathcal H_l} \Big| \sum_{i=1}^N w^i_N(\bar\gamma)\big( \hat h(\xi^i) - E[\hat h(\xi)\mid\gamma=\gamma^i,\ \xi\in B_N] \big) \Big| > \tfrac{\varepsilon_N}{3} - 2\, b_N 2^{-l+1} \,\Big|\, I_N=1 \Big)\\
&\qquad\le |\mathcal H_l|\ \sup_{\hat h\in\mathcal H_l}\ P^N\Big( \Big| \sum_{i=1}^N w^i_N(\bar\gamma)\big( \hat h(\xi^i) - E[\hat h(\xi)\mid\gamma=\gamma^i,\ \xi\in B_N] \big) \Big| > \tfrac{\varepsilon_N}{3} - b_N 2^{-l+2} \,\Big|\, I_N=1 \Big),
\end{align*}
where the final inequality follows from the union bound. We choose $l = 2 + \log_2(6 b_N/\varepsilon_N)$, in which case $\tfrac{\varepsilon_N}{3} - b_N 2^{-l+2} \ge \tfrac{\varepsilon_N}{6}$.
Furthermore, for all sufficiently large $N$,
$$|\mathcal{H}_l| = (2^{l+1}+1)^{2^{l d_\xi}} \le \Big( \frac{96\, b_N}{\varepsilon_N} \Big)^{24^{d_\xi} (b_N/\varepsilon_N)^{d_\xi}} = \exp\Big( 24^{d_\xi} \Big(\frac{b_N}{\varepsilon_N}\Big)^{d_\xi} \log \frac{96\, b_N}{\varepsilon_N} \Big).$$
Applying Hoeffding's inequality, and noting $|\hat h(\xi^i)|$ is bounded by $b_N$ when $\xi^i \in B_N$, we have the following for all $\hat h \in \mathcal{H}_l$:
\begin{align*}
&P^N\Big( \Big| \sum_{i=1}^N w^i_N(\bar\gamma)\big( \hat h(\xi^i) - E[\hat h(\xi)\mid \xi\in B_N,\ \gamma=\gamma^i] \big) \Big| > \tfrac{\varepsilon_N}{6} \,\Big|\, I_N=1 \Big)\\
&\quad= E\Big[ P^N\Big( \Big| \sum_{i=1}^N w^i_N(\bar\gamma)\big( \hat h(\xi^i) - E[\hat h(\xi)\mid \xi\in B_N,\ \gamma=\gamma^i] \big) \Big| > \tfrac{\varepsilon_N}{6} \,\Big|\, I_N=1,\ \gamma^1,\ldots,\gamma^N \Big) \,\Big|\, I_N=1 \Big]\\
&\quad\le E\Big[ \exp\Big( -\frac{\varepsilon_N^2}{72 \sum_{i=1}^N (w^i_N(\bar\gamma))^2\, b_N^2} \Big) \,\Big|\, I_N=1 \Big] = E\Big[ \exp\Big( -\frac{\varepsilon_N^2}{72 \sum_{i=1}^N (w^i_N(\bar\gamma))^2\, b_N^2} \Big) I_N \Big] \frac{1}{P^N(I_N=1)}\\
&\quad\le 2\, E\Big[ \exp\Big( -\frac{\varepsilon_N^2}{72 \sum_{i=1}^N (w^i_N(\bar\gamma))^2\, b_N^2} \Big) \Big] \le 2\exp\Big( -\frac{k_2\, \varepsilon_N^2\, N^\eta}{72\, b_N^2} \Big),
\end{align*}
for $N$ sufficiently large that $P(I_N=1) \ge 1/2$ and $\varepsilon_N^2/(72\, b_N^2) < 1$. Note that (8) was used for the final inequality. Combining these results, we have
$$P^N\Big( d_1(\hat P^N_{\bar\gamma}, Q^N_{\bar\gamma|B_N}) > \varepsilon_N/3 \,\Big|\, I_N=1 \Big) \le 2\exp\Big( 24^{d_\xi}\Big(\frac{b_N}{\varepsilon_N}\Big)^{d_\xi} \log\frac{96\, b_N}{\varepsilon_N} - \frac{k_2\, \varepsilon_N^2\, N^\eta}{72\, b_N^2} \Big),$$
for $N$ sufficiently large. For some constants $c_1, c_2 > 0$, and sufficiently large $N$, this is upper bounded by $2 \exp\big( -c_1 N^{\eta-2(p+q)} + c_2 N^{d_\xi(q+p)} \log N \big)$.
Since $0 < d_\xi(p+q) < \eta - 2(p+q)$, we can conduct a limit comparison test with $1/N^2$ to see that this term has a finite sum over $N$, which completes the proof.
Proof of main result
Theorem 2 provides the key ingredient for the proof of the main consistency result. We state one final assumption, which requires that the objective function of (1) is upper semicontinuous and bounded by linear functions of the uncertainty.
Assumption 5. For all $\pi \in \Pi$, $c_\pi(\zeta_1, \ldots, \zeta_T)$ is upper semicontinuous in $\zeta$, and $|c(\zeta, x)| \le C(1 + \|\zeta\|)$ for all $\zeta \in \Xi$ and some $C > 0$.
Under this assumption, the proof of Theorem 1 follows from Theorem 2 via arguments similar to those used by Esfahani and Kuhn (2018) and Bertsimas et al. (2018a). We state it fully in Appendix EC.2.
Tractable Approximations
In the previous sections, we presented the new framework of sample robust optimization with covariates and established its asymptotic optimality without any significant structural restrictions on the space of decision rules. In this section, we focus on tractable methods for approximately solving the robust optimization problems that result from this proposed framework. Specifically, we develop a formulation which uses auxiliary decision rules to approximate the cost function.
In combination with linear decision rules, this approach enables us to find high-quality decisions for real-world problems with more than ten stages in less than one minute, as we demonstrate in Section 6.
We focus in this section on dynamic optimization problems with cost functions of the form
$$c(\xi_1,\ldots,\xi_T,x_1,\ldots,x_T) = \sum_{t=1}^T \bigg( f_t^\top x_t + g_t^\top \xi_t + \min_{y_t \in \mathbb{R}^{d_t^y}} \Big\{ h_t^\top y_t \ :\ \sum_{s=1}^t A_{t,s} x_s + \sum_{s=1}^t B_{t,s}\xi_s + C_t y_t \le d_t \Big\} \bigg). \tag{10}$$
Such cost functions appear frequently in applications such as inventory management and supply chain networks. Unfortunately, it is well known that these cost functions are convex in the uncertainty ξ 1 , . . . , ξ T . Thus, even evaluating the worst-case cost over a convex uncertainty set is computationally demanding in general, as it requires the maximization of a convex function.
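To make the structure of (10) concrete, the following sketch evaluates a single stage of the cost for fixed decisions and a fixed uncertainty realization by solving the inner linear program. It is purely illustrative: the nested-list data layout and the use of scipy's linprog are our own assumptions and are not prescribed by the formulation above.

```python
import numpy as np
from scipy.optimize import linprog

def stage_cost(t, f, g, h, A, B, C, d, xs, xis):
    """Stage-t term of (10): f_t'x_t + g_t'xi_t plus the inner minimization over y_t.

    f, g, h, d: lists of stage vectors; A[t][s], B[t][s], C[t]: stage matrices
    xs, xis:    lists of realized decisions x_s and uncertainties xi_s
    """
    rhs = d[t].astype(float).copy()
    for s in range(t + 1):  # constraints couple all decisions/uncertainties up to stage t
        rhs -= A[t][s] @ xs[s] + B[t][s] @ xis[s]
    res = linprog(h[t], A_ub=C[t], b_ub=rhs, bounds=(None, None))
    # For simplicity, treat any non-optimal termination of the inner LP as +infinity.
    inner = res.fun if res.success else np.inf
    return f[t] @ xs[t] + g[t] @ xis[t] + inner
```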
As an intermediary step towards developing an approximation scheme for (3) with the above cost function, we consider the following optimization problem:
$$\tilde v_N(\bar\gamma) := \begin{aligned}[t]
\underset{\pi\in\Pi,\ y_t^i\in\mathcal{R}_t\ \forall i,t}{\text{minimize}}\quad & \sum_{i=1}^N w^i_N(\bar\gamma) \sup_{\zeta\in\mathcal{U}_N^i}\ \sum_{t=1}^T \Big( f_t^\top \pi_t(\zeta_1,\ldots,\zeta_{t-1}) + g_t^\top\zeta_t + h_t^\top y_t^i(\zeta_1,\ldots,\zeta_t) \Big)\\
\text{subject to}\quad & \sum_{s=1}^t A_{t,s}\pi_s(\zeta_1,\ldots,\zeta_{s-1}) + \sum_{s=1}^t B_{t,s}\zeta_s + C_t y_t^i(\zeta_1,\ldots,\zeta_t) \le d_t \qquad \forall \zeta\in\mathcal{U}_N^i,\ i\in\{1,\ldots,N\},\ t\in\{1,\ldots,T\},
\end{aligned} \tag{11}$$
where $\mathcal{R}_t$ is the set of all functions $y:\Xi_1\times\cdots\times\Xi_t \to \mathbb{R}^{d_t^y}$. In this problem, we have introduced auxiliary decision rules which capture the minimization portion of (10) in each stage. We refer to (11) as a multi-policy approach, as it involves different auxiliary decision rules for each uncertainty set. The following theorem shows that (11) is equivalent to (3).
Theorem 3. For cost functions of the form (10), $\tilde v_N(\bar\gamma) = \hat v_N(\bar\gamma)$.
Proof. See Appendix EC.3.
We observe that (11) involves optimizing over decision rules, and thus is computationally challenging to solve in general. Nonetheless, we can obtain a tractable approximation of (11) by further restricting the space of primary and auxiliary decision rules. For instance, we can restrict all primary and auxiliary decision rules as linear decision rules of the form
$$\pi_t(\zeta_1,\ldots,\zeta_{t-1}) = x_{t,0} + \sum_{s=1}^{t-1} X_{t,s}\,\zeta_s, \qquad y_t^i(\zeta_1,\ldots,\zeta_t) = y^i_{t,0} + \sum_{s=1}^{t} Y^i_{t,s}\,\zeta_s.$$
One can alternatively elect to use a richer class of decision rules, such as lifted linear decision rules (Chen and Zhang 2009, Georghiou et al. 2015). In all cases, feasible approximations that restrict the space of decision rules of (11) provide an upper bound on the cost $\tilde v_N(\bar\gamma)$ and produce decision rules that are feasible for (11).
The key benefit of the multi-policy approximation scheme is that it offers many degrees of freedom in approximating the nonlinear cost function. Specifically, in (11), a separate auxiliary decision rule y i t captures the value of the cost function for each uncertainty set in each stage. We approximate each y i t with a linear decision rule, which only needs to be locally accurate, i.e., accurate for realizations in the corresponding uncertainty set. As a result, (11) with linear decision rules results in significantly tighter approximations of (3) compared to using a single linear decision rule, y t , for all uncertainty sets in each stage. Moreover, these additional degrees of freedom come with only a mild increase in computation cost, and we substantiate these claims via computational experiments in Section 6.2. In Appendix EC.4, we provide the reformulation of the multi-policy approximation scheme with linear decision rules into a deterministic optimization problem using standard techniques from robust optimization.
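As a concrete illustration of the decision-rule parameterization above, the following sketch evaluates a primary linear decision rule along a realized trajectory. The nested-list layout of the coefficients is our own assumption for the example; the auxiliary rules $y_t^i$ would be evaluated in exactly the same way, with one coefficient set per uncertainty set $i$.

```python
import numpy as np

def evaluate_linear_decision_rule(x0, X, zeta):
    """Evaluate stage decisions x_t = x_{t,0} + sum_{s<t} X_{t,s} zeta_s.

    x0:   list of length T with the intercepts x_{t,0}
    X:    nested list, X[t][s] is the matrix applied to zeta_s in stage t (s < t)
    zeta: list of length T with the realized uncertainty in each stage
    """
    T = len(x0)
    decisions = []
    for t in range(T):
        x_t = x0[t].copy()
        for s in range(t):  # non-anticipativity: only past observations enter
            x_t += X[t][s] @ zeta[s]
        decisions.append(x_t)
    return decisions

# Hypothetical example: T = 3 stages with scalar decisions and uncertainties.
x0 = [np.array([1.0]), np.array([0.5]), np.array([0.0])]
X = [[], [np.array([[0.2]])], [np.array([[0.1]]), np.array([[0.3]])]]
zeta = [np.array([2.0]), np.array([1.0]), np.array([3.0])]
print(evaluate_linear_decision_rule(x0, X, zeta))
```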
Computational Experiments
We perform computational experiments to assess the out-of-sample performance and computational tractability of the proposed methodologies across several applications. These examples are two-stage shipment planning (Section 6.1), dynamic inventory management (Section 6.2), and portfolio optimization (Section 6.3).
Table 1. Relationship of four methods.
We compare several methods using different machine learning models. These methods include the proposed sample robust optimization with covariates, sample average approximation (SAA), the predictions to prescriptions (PtP) approach of Bertsimas and Kallus (2014), and sample robust optimization without covariates (SRO). In Table 1, we show that each of the above methods is a particular instance of (3) from Section 3. The methods in the left column ignore covariates by assigning equal weights to each uncertainty set, and the methods in the right column incorporate covariates by choosing the weights based on predictive machine learning. The methods in the top row do not incorporate any robustness, and the methods in the bottom row incorporate robustness via a positive $\varepsilon_N$ in the uncertainty sets. In addition, for the dynamic inventory management example, we also implement and compare to the residual tree algorithm described in Ban et al.
(2018). In each experiment, the relevant methods are applied to the same training datasets, and their solutions are evaluated against a common testing dataset. Further details are provided in each of the following sections.
Shipment planning
We first consider a two-stage shipment planning problem in which a decision maker seeks to satisfy demand in several locations from several production facilities while minimizing production and transportation costs. Our problem setting closely follows Bertsimas and Kallus (2014), in which the decision maker has access to auxiliary covariates (promotions, social media, market trends), which may be predictive of future sales in each retail location. In the first stage, before demand is realized, the decision maker chooses a production quantity $x_f \ge 0$ at each facility $f \in F$ at a per-unit cost of $p_1$. Additionally, after observing demand, the decision maker has the opportunity to produce additional units $y_f \ge 0$ in each facility at a cost of $p_2 > p_1$ per unit, and ships units from facilities to demand locations at a per-unit cost of $c_{f\ell}$. The fulfillment of each unit of demand generates $r > 0$ in revenue. Given the above notation and dynamics, the cost incurred by the decision maker is
$$c(\xi,x) = \sum_{f\in F} p_1 x_f - \sum_{\ell\in L} r\,\xi_\ell + \underset{s\in\mathbb{R}^{L\times F}_+,\ y\in\mathbb{R}^F_+}{\text{minimize}}\ \bigg\{ \sum_{f\in F} p_2 y_f + \sum_{f\in F}\sum_{\ell\in L} c_{f\ell}\, s_{f\ell} \ :\ \sum_{f\in F} s_{f\ell} \ge \xi_\ell\ \ \forall \ell\in L,\quad \sum_{\ell\in L} s_{f\ell} \le x_f + y_f\ \ \forall f\in F \bigg\}.$$
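The inner minimization in $c(\xi, x)$ is a small linear program that can be evaluated for a fixed production plan and demand realization. The sketch below is illustrative only: the facility/location sizes and cost values are hypothetical, and the use of scipy's linprog is our own choice rather than the implementation used in our experiments.

```python
import numpy as np
from scipy.optimize import linprog

def second_stage_cost(x, xi, c_ship, p2):
    """Solve the recourse LP: choose shipments s[f, l] and extra production y[f]."""
    F, L = c_ship.shape
    # Decision vector: [s_11, ..., s_FL, y_1, ..., y_F] with s flattened facility-by-facility.
    c = np.concatenate([c_ship.ravel(), np.full(F, p2)])
    A_ub, b_ub = [], []
    # Demand satisfaction: -sum_f s[f, l] <= -xi[l] for each location l.
    for l in range(L):
        row = np.zeros(F * L + F)
        for f in range(F):
            row[f * L + l] = -1.0
        A_ub.append(row); b_ub.append(-xi[l])
    # Capacity: sum_l s[f, l] - y[f] <= x[f] for each facility f.
    for f in range(F):
        row = np.zeros(F * L + F)
        row[f * L:(f + 1) * L] = 1.0
        row[F * L + f] = -1.0
        A_ub.append(row); b_ub.append(x[f])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
    return res.fun

# Hypothetical small instance: 2 facilities, 3 locations.
c_ship = np.array([[1.0, 2.0, 3.0], [2.5, 1.5, 1.0]])
print(second_stage_cost(x=np.array([10.0, 10.0]), xi=np.array([8.0, 6.0, 9.0]),
                        c_ship=c_ship, p2=100.0))
```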
Experiments. We perform computational experiments using the same parameters and data generation procedure as Bertsimas and Kallus (2014). Specifically, we consider an instance with |F| = 4, |L| = 12, p 1 = 5, p 2 = 100, and r = 90. The network topology, transportation costs, and the joint distribution of the covariates γ ∈ R 3 and demands ξ ∈ R 12 are the same as Bertsimas and Kallus (2014), with the exception that we generate the covariates as i.i.d. samples as opposed to an ARMA process (but with the same marginal distribution).
In our experiments, we compare sample robust optimization with covariates, sample average approximation, sample robust optimization, and predictions to prescriptions. For the robust approaches (bottom row of Table 1), we construct the uncertainty sets from Section 3 using the $\ell_1$ norm and $\Xi = \mathbb{R}^{12}_+$, solve these problems using the multi-policy approximation with linear decision rules described in Section 5, and consider uncertainty sets with radius $\varepsilon_N \in \{100, 500\}$. For the approaches using covariates (right column of Table 1), we used $k_N$-nearest neighbors with parameter $k_N = 2N/5$. All solutions were evaluated on a test set of size 100 and the results were averaged over 100 independent training sets.
Results. In Figure 1, we present the average out-of-sample profits of the various methods. The results show that the best out-of-sample average profit is attained when using the proposed sample robust optimization with covariates. Interestingly, we observe no discernible differences between sample average approximation and sample robust optimization in Figure 1, which suggests that the improvement comes from incorporating the covariates in this example. Compared to the approach of Bertsimas and Kallus (2014), sample robust optimization with covariates achieves a better out-of-sample average performance for each choice of $\varepsilon_N$. Table 2 shows that these differences are statistically significant.
This example demonstrates that, in addition to enjoying asymptotic optimality guarantees, sample robust optimization with covariates provides meaningful value across various values of N .
Dynamic inventory management
We next consider a dynamic inventory control problem over the first T = 12 weeks of a new product. In each week, a retailer observes demand for the product and can replenish inventory by placing procurement orders with its suppliers.
Figure 1. Out-of-sample profit for the shipment planning example.
Table 2. The p-values from the Wilcoxon signed rank test for comparison with the predictive to prescriptive analytics method (PtP-kNN) and sample robust optimization with covariates (SRO-kNN). After adjusting for multiple hypothesis testing, all results are significant at the α = 0.05 significance level because all p-values are less than the adjusted threshold.
Problem Description. In each stage $t \in \{1, \ldots, T\}$, the retailer procures inventory from multiple suppliers to satisfy demand for a single product. The demands for the product across stages are denoted by $\xi_1, \ldots, \xi_T \ge 0$. In each stage $t$, and before the demand $\xi_t$ is observed, the retailer places procurement orders at various suppliers indexed by $J = \{1, \ldots, |J|\}$. Each supplier $j \in J$ has a per-unit order cost of $c_{tj} \ge 0$ and a lead time of $\ell_j$ stages. At the end of each stage, the firm incurs a per-unit holding cost of $h_t$ and a backorder cost of $b_t$. Inventory is fully backlogged and the firm starts with zero initial inventory. The cost incurred by the firm over the time horizon is captured by
$$c(\xi_1,\ldots,\xi_T, x_1,\ldots,x_T) = \sum_{t=1}^T \Bigg( \sum_{j\in J} c_{tj}\, x_{tj} + \min_{y_t\in\mathbb{R}} \bigg\{ y_t \ :\ y_t \ge h_t\Big( \sum_{j\in J}\sum_{s=1}^{t-\ell_j} x_{sj} - \sum_{s=1}^t \xi_s \Big),\ \ y_t \ge -b_t\Big( \sum_{j\in J}\sum_{s=1}^{t-\ell_j} x_{sj} - \sum_{s=1}^t \xi_s \Big) \bigg\} \Bigg).$$
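Because the inner minimization in each stage is attained at $y_t = \max\{h_t \cdot \mathrm{net}_t,\ -b_t \cdot \mathrm{net}_t\}$, where $\mathrm{net}_t$ is the net inventory position, the cost of a fixed order plan on a given demand path can be evaluated directly. The following sketch assumes a simple array layout of our own choosing:

```python
import numpy as np

def inventory_cost(orders, demands, c, lead, h, b):
    """Cost of an order plan on one demand path for the dynamic inventory problem.

    orders:  (T, J) array of order quantities x[t, j]
    demands: (T,) array of demands xi_t
    c:       (T, J) per-unit order costs; lead: (J,) integer lead times
    h, b:    (T,) holding and backorder cost rates
    """
    T, J = orders.shape
    total = float(np.sum(c * orders))
    for t in range(T):
        arrived = sum(orders[:max(t + 1 - lead[j], 0), j].sum() for j in range(J))
        net = arrived - demands[:t + 1].sum()
        total += max(h[t] * net, -b[t] * net)  # the inner minimum is attained here
    return total

# Hypothetical instance matching the parameters described below (two suppliers, T = 12).
T, J = 12, 2
print(inventory_cost(orders=np.full((T, J), 5.0), demands=np.full(T, 9.0),
                     c=np.tile([1.0, 0.5], (T, 1)), lead=np.array([0, 1]),
                     h=np.full(T, 0.25), b=np.full(T, 11.0)))
```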
Experiments. The parameters of the procurement problem were chosen based on Ban et al. (2018). Specifically, we consider the case of two suppliers where $c_{t1} = 1.0$, $c_{t2} = 0.5$, $h_t = 0.25$, and $b_t = 11$ for each stage. The first supplier has no lead time and the second supplier has a lead time of one stage. We generate training and test data from the same distribution as the shipment planning problem in Section 6.1. In this case, the demands produced by this process are interpreted as the demands over the T = 12 stages. We perform computational experiments comparing the proposed sample robust optimization with covariates and the residual tree algorithm proposed by Ban et al. (2018). In particular, we compare sample robust optimization with covariates with the multi-policy approximation as well as without the multi-policy approximation (in which we use a single auxiliary linear decision rule for $y_t$ for all uncertainty sets in each stage). The uncertainty sets from Section 3 are defined with the $\ell_2$ norm and $\Xi = \mathbb{R}^{12}_+$. The out-of-sample costs resulting from the decision rules were averaged over 100 training sets of size N = 40 and 100 testing points, and sample robust optimization with covariates used k-nearest neighbors with varying choices of $k$ and radius $\varepsilon_N \ge 0$ of the uncertainty sets.
Results. In Table 3, we show the average out-of-sample cost resulting from sample robust optimization with covariates using linear decision rules, with and without the multi-policy approximation from Section 5. In both settings, we used k-nearest neighbors as the machine learning method and evaluated the out-of-sample performance by applying the linear decision rules for the ordering quantities. The results of these computational experiments in Table 3 demonstrate that significant improvements in average out-of-sample performance are found when combining the multi-policy approximation with covariates via k-nearest neighbors. We show in Table 4 that these results are statistically significant. For comparison, we also implemented the residual tree algorithm from Ban et al. (2018). When using their algorithm with a binning of B = 2 in each stage, their approach resulted in an average out-of-sample cost of 27142. We were unable to run their algorithm with a binning of B = 3 in each stage.
Table 3. Average out-of-sample cost for the dynamic procurement problem using sample robust optimization with N = 40. For each uncertainty set radius and parameter k, the average was taken over 100 training sets and 100 test points. Optimal is indicated in bold. The residual tree algorithm with a binning of B = 2 in each stage gave an average out-of-sample cost of 27142.
Table 4. Statistical significance for the dynamic procurement problem. The p-values of the Wilcoxon signed rank test for comparison with sample robust optimization using linear decision rules with multi-policy, k = 20, and $\varepsilon_N = 400$. An asterisk denotes that the p-value was less than $10^{-8}$. After adjusting for multiple hypothesis testing, each result is significant at the α = 0.05 significance level if its p-value is less than $0.05/63 \approx 7.9 \times 10^{-4}$.

Method | k | $\varepsilon_N$ = 0 | 100 | 200 | 300 | 400 | 500 | 600 | 700
Linear decision rules, no covariates | -- | * | * | * | * | * | * | * | *
Linear decision rules, k-nearest neighbors | 26 | * | * | * | * | * | * | * | *
Linear decision rules, k-nearest neighbors | 20 | * | * | * | * | * | * | * | *
Linear decision rules, k-nearest neighbors | 13 | * | * | * | * | * | * | * | *
Linear decision rules with multi-policy, no covariates | -- | * | * | * | * | * | * | * | *
Linear decision rules with multi-policy, k-nearest neighbors | 26 | * | * | * | * | 1.4e-5 | * | * | *
Linear decision rules with multi-policy, k-nearest neighbors | 20 | * | * | * | * | -- | * | * | *
Linear decision rules with multi-policy, k-nearest neighbors | 13 | * | * | * | * | 5.8e-3 | 1e-3 | * | *
Portfolio optimization
Finally, we consider a single-stage portfolio optimization problem in which we wish to find an allocation of a fixed budget to n assets. Our goal is to simultaneously maximize the expected return while minimizing the conditional value at risk (cVaR) of the portfolio. Before selecting our portfolio, we observe auxiliary covariates which include general market indicators such as index performance as well as macroeconomic numbers released by the US Bureau of Labor Statistics.
Problem Description. We denote the portfolio allocation among the assets by $x \in \mathcal{X} := \{x \in \mathbb{R}^n_+ : \sum_{j=1}^n x_j = 1\}$, and the returns of the assets by the random variables $\xi \in \mathbb{R}^n$. The conditional value at risk at the $\alpha \in (0,1)$ level measures the expected loss of the portfolio, conditional on losses being above the $1-\alpha$ quantile of the loss distribution. Rockafellar and Uryasev (2000) showed that the cVaR of a portfolio can be computed as the optimal objective value of a convex minimization problem. Therefore, our portfolio optimization problem can be expressed as a convex optimization problem with an auxiliary decision variable, $\beta \in \mathbb{R}$. Thus, given an observation $\bar\gamma$ of the auxiliary covariates, our goal is to solve
$$\underset{x\in\mathcal{X},\ \beta\in\mathbb{R}}{\text{minimize}}\quad E\Big[\, \beta + \tfrac{1}{\alpha}\max(0,\ -x^\top\xi - \beta) - \lambda x^\top \xi \ \Big|\ \gamma=\bar\gamma \,\Big], \tag{12}$$
where $\lambda \in \mathbb{R}_+$ is a trade-off parameter that balances the risk and return objectives.
Experiments. For the robust approaches (bottom row of Table 1), we construct the uncertainty sets from Section 3 using the $\ell_1$ norm. For each training sample size, we compute the out-of-sample objective on a test set of size 1000, and we average the results over 100 instances of training data.
In order to select $\varepsilon_N$ and other tuning parameters associated with the machine learning weight functions, we first split the data into a training and validation set. We then train the weight functions using the training set, compute decisions for each of the instances in the validation set, and compute the out-of-sample cost on the validation set. We repeat this for a variety of parameter values and select the combination that achieves the best cost on the validation set.
Figure 2. Out-of-sample objective for the portfolio optimization example.
Following a similar reformulation approach as Esfahani and Kuhn (2018), we solve the robust approaches exactly by observing that
\begin{align*}
\underset{x\in\mathcal X,\,\beta\in\mathbb R}{\text{minimize}}\ \sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in\mathcal U_N^i}\Big( \beta + \tfrac1\alpha\max\{0,\ -x^\top\zeta-\beta\} - \lambda x^\top\zeta \Big)
&= \underset{x\in\mathcal X,\,\beta\in\mathbb R}{\text{minimize}}\ \sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in\mathcal U_N^i}\max\Big\{ \beta - \lambda x^\top\zeta,\ \big(1-\tfrac1\alpha\big)\beta - \big(\tfrac1\alpha+\lambda\big)x^\top\zeta \Big\}\\
&= \underset{x\in\mathcal X,\,\beta\in\mathbb R}{\text{minimize}}\ \sum_{i=1}^N w^i_N(\bar\gamma)\max\Big\{ \sup_{\zeta\in\mathcal U_N^i}\big(\beta - \lambda x^\top\zeta\big),\ \sup_{\zeta\in\mathcal U_N^i}\big(\big(1-\tfrac1\alpha\big)\beta - \big(\tfrac1\alpha+\lambda\big)x^\top\zeta\big) \Big\}\\
&= \underset{x\in\mathcal X,\,\beta\in\mathbb R,\, v\in\mathbb R^N}{\text{minimize}}\ \bigg\{ \sum_{i=1}^N w^i_N(\bar\gamma)\, v_i \ :\ v_i \ge \beta - \lambda x^\top\zeta,\ \ v_i \ge \big(1-\tfrac1\alpha\big)\beta - \big(\tfrac1\alpha+\lambda\big)x^\top\zeta \quad \forall\zeta\in\mathcal U_N^i,\ i\in\{1,\ldots,N\} \bigg\}.
\end{align*}
The final expression can be converted into a deterministic optimization problem by reformulating each robust constraint using the corresponding dual norm.
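Once the dual-norm terms are written out, the final formulation above can be solved directly; for the $\ell_1$-norm uncertainty sets used in this example, the dual norm is the $\ell_\infty$ norm. The sketch below is illustrative only: it assumes the cvxpy package, the values of $\alpha$, $\lambda$, and the radius are placeholders rather than the tuned values from our experiments, and the weights argument can be produced by any of the weight functions from Section 3.

```python
import cvxpy as cp
import numpy as np

def sro_cvar_portfolio(returns, weights, eps, alpha=0.05, lam=1.0):
    """Sample robust CVaR portfolio with covariate-based weights.

    returns: (N, n) array of historical return observations xi^i
    weights: (N,) array of machine-learning weights w_N^i(gamma_bar), summing to one
    eps:     radius of the l1-norm uncertainty sets
    """
    N, n = returns.shape
    x = cp.Variable(n, nonneg=True)
    beta = cp.Variable()
    v = cp.Variable(N)
    # Worst case of each affine piece over {zeta : ||zeta - xi_i||_1 <= eps};
    # the dual norm of the l1 norm is the infinity norm.
    cons = [cp.sum(x) == 1]
    for i in range(N):
        ret_i = returns[i] @ x
        cons.append(v[i] >= beta - lam * ret_i + eps * lam * cp.norm(x, "inf"))
        cons.append(v[i] >= (1 - 1 / alpha) * beta - (1 / alpha + lam) * ret_i
                    + eps * (1 / alpha + lam) * cp.norm(x, "inf"))
    prob = cp.Problem(cp.Minimize(weights @ v), cons)
    prob.solve()
    return x.value, prob.value
```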
Results. In Figure 2, we show the average out-of-sample objective values using the various methods. Consistent with the computational results of Esfahani and Kuhn (2018) and Bertsimas and Van Parys (2017), the results underscore the importance of robustness in preventing overfitting and achieving good out-of-sample performance in the small data regime. Indeed, we observe that the sample average approximation, which ignores the auxiliary data, outperforms PtP-kNN and PtP-CART when the amount of training data is limited. We believe this is due to the fact that the latter methods both throw out training examples, so they overfit when the training data is limited, leading to poor out-of-sample performance. In contrast, our methods (SRO-kNN and SRO-CART) typically achieve the strongest out-of-sample performance, even when the amount of training data is limited.
Conclusion
In this paper, we introduced sample robust optimization with covariates, a new framework for solving dynamic optimization problems with side information. Through three computational examples, we demonstrated that our method achieves significantly better out-of-sample performance than scenario-based alternatives. We complemented these empirical observations with theoretical analysis, showing our nonparametric method is asymptotically optimal via a new measure concentration result for local learning methods. Finally, we showed our approach inherits the tractability of robust optimization, scaling to problems with many stages via the multi-policy approximation scheme.
Xin Chen and Yuhan Zhang. Uncertain linear programs: extended affinely adjustable robust counterparts. Operations Research, 57(6), 2009.
$$\sum_{i=1}^N w^i_N(\bar\gamma) = 1 \quad\text{and}\quad w^1_N(\bar\gamma),\ldots,w^N_N(\bar\gamma)\ge 0, \quad \forall N\in\mathbb{N}. \tag{4}$$
Moreover, there exist constants $k_2 > 0$ and $\eta > p(2 + d_\xi)$ such that
$$\lim_{N\to\infty} \frac{1}{\varepsilon_N}\sum_{i=1}^N w^i_N(\bar\gamma)\,\|\gamma^i - \bar\gamma\| = 0,\quad P^\infty\text{-almost surely}; \tag{7}$$
$$E_{P^N}\bigg[ \exp\bigg( -\frac{\theta}{\sum_{i=1}^N (w^i_N(\bar\gamma))^2} \bigg) \bigg] \le \exp(-k_2\theta N^\eta), \quad \forall \theta\in(0,1),\ N\in\mathbb{N}. \tag{8}$$
Proof. We observe that (4) and (5) follow directly from the definitions of the weight functions.
The proofs of (7) and (8) are split into two parts, one for the k-nearest neighbor weights and one for kernel regression weights.
k-Nearest Neighbors: For the proof of (7), we note
$$\sum_{i=1}^N w^i_N(\bar\gamma)\,\|\gamma^i-\bar\gamma\| \le \|\gamma^{(k_N)}(\bar\gamma) - \bar\gamma\|,$$
where $\gamma^{(k_N)}(\bar\gamma)$ denotes the $k_N$th nearest neighbor of $\bar\gamma$ out of $\gamma^1, \ldots, \gamma^N$. Therefore, for any $\lambda > 0$,
$$P^N\bigg( \sum_{i=1}^N w^i_N(\bar\gamma)\,\|\gamma^i-\bar\gamma\| > \lambda\varepsilon_N \bigg) \le P^N\big( \|\gamma^{(k_N)}(\bar\gamma)-\bar\gamma\| > \lambda\varepsilon_N \big) \le P^N\big( \big|\{i : \|\gamma^i-\bar\gamma\|\le \lambda\varepsilon_N\}\big| \le k_N - 1 \big).$$
By Assumption 4, this probability is upper bounded by $P(\beta \le k_N - 1)$, where $\beta \sim \mathrm{Binom}(N,\, g(\lambda\varepsilon_N)^{d_\gamma})$. By Hoeffding's inequality,
$$P^N\bigg( \sum_{i=1}^N w^i_N(\bar\gamma)\,\|\gamma^i-\bar\gamma\| > \lambda\varepsilon_N \bigg) \le \exp\bigg( \frac{-2\big(N g(\lambda k_1/N^p)^{d_\gamma} - k_N + 1\big)^2}{N} \bigg),$$
for $k_N \le N g(\lambda k_1/N^p)^{d_\gamma} + 1$. We note that this condition on $k_N$ is satisfied for $N$ sufficiently large because $\delta + p d_\gamma < 1$ by Assumption 1. Because the right-hand side in the above inequality has a finite sum over $N$, (7) follows by the Borel–Cantelli lemma.
For the proof of (8), it follows from Assumption 1 that
$$\sum_{i=1}^N (w^i_N(\bar\gamma))^2 \le k_3 N^{1-2\delta}$$
deterministically (for all sufficiently large $N$ such that $k_3 N^\delta \le N - 1$), and $2\delta - 1 > p(2 + d_\xi)$.
Thus, (8) follows with $\eta = 2\delta - 1$.
Kernel regression: Assumption 1 stipulates that the kernel function K(·) is Gaussian, triangular, or Epanechnikov, which are defined in Section 3. It is easy to verify that these kernel functions satisfy the following:
1. K is nonnegative, finite valued, and monotonically decreasing (for nonnegative inputs).
2. u α K(u) → 0 as u → ∞ for any α ∈ R.
3. ∃u * > 0 such that K(u * ) > 0.
For the proof of (7), define $q > 0$ such that $p < q < \delta$. Letting $D$ be the diameter of $\Gamma$ and $g_N(\bar\gamma) = \sum_{i=1}^N K(\|\gamma^i - \bar\gamma\|/h_N)$, we have
\begin{align*}
\sum_{i=1}^N w^i_N(\bar\gamma)\,\|\gamma^i-\bar\gamma\| &= \sum_{i=1}^N w^i_N(\bar\gamma)\mathbf{1}\{\|\gamma^i-\bar\gamma\|\le N^{-q}\}\,\|\gamma^i-\bar\gamma\| + \frac{1}{g_N(\bar\gamma)}\sum_{i=1}^N K\Big(\frac{\|\gamma^i-\bar\gamma\|}{h_N}\Big)\mathbf{1}\{\|\gamma^i-\bar\gamma\| > N^{-q}\}\,\|\gamma^i-\bar\gamma\| \\
&\le N^{-q} + \frac{N D\, K(N^{-q}/h_N)}{g_N(\bar\gamma)},
\end{align*}
where the inequality follows from the monotonicity of $K$. By construction, $N^{-q}/\varepsilon_N \to 0$, so we just need to handle the second term. We note, for any $\lambda > 0$,
$$P^N\bigg( \frac{N D\, K(N^{-q}/h_N)}{g_N(\bar\gamma)} > \lambda\varepsilon_N \bigg) \le P^N\bigg( \sum_{i=1}^N Z_i^N K(u^*) < \frac{N D\, K(N^{-q}/h_N)}{\lambda\varepsilon_N} \bigg), \qquad\text{where } Z_i^N = \mathbf{1}\{\|\gamma^i-\bar\gamma\|\le u^* h_N\}.$$
To achieve this inequality, we lower bounded each term in $g_N(\bar\gamma)$ by $K(u^*)$ or $0$, because of the monotonicity of $K$. By Hoeffding's inequality,
$$P^N\bigg( \sum_{i=1}^N Z_i^N K(u^*) < \frac{N D\, K(N^{-q}/h_N)}{\lambda\varepsilon_N} \bigg) \le \exp\Bigg( \frac{-2\Big( N\, E Z_i^N - \frac{N D}{\lambda\varepsilon_N}\cdot\frac{K(N^{-q}/h_N)}{K(u^*)} \Big)_+^2}{N} \Bigg) \le \exp\Bigg( \frac{-2\Big( N g(u^* h_N)^{d_\gamma} - \frac{N D}{\lambda\varepsilon_N}\cdot\frac{K(N^{-q}/h_N)}{K(u^*)} \Big)_+^2}{N} \Bigg) = \exp\Big( -\big( k_5 N^{1/2-\delta d_\gamma} - k_6 N^{1/2+p} K(k_4 N^{\delta-q}) \big)_+^2 \Big),$$
for some constants $k_5, k_6 > 0$ that do not depend on $N$. We used Assumption 4 for the second inequality. Because $\delta > q$, the second kernel property implies $N^{1/2+p} K(k_4 N^{\delta-q})$ goes to $0$ as $N$ goes to infinity, so that term is irrelevant. Because $1/2 - \delta d_\gamma > 0$ by Assumption 1, the right-hand side of the inequality has a finite sum over $N$, and thus (7) follows from the Borel–Cantelli lemma.
For the proof of (8), define
$$v^N = \big( K(\|\gamma^1-\bar\gamma\|/h_N),\ \ldots,\ K(\|\gamma^N-\bar\gamma\|/h_N) \big).$$
We note that
$$\sum_{i=1}^N (w^i_N(\bar\gamma))^2 = \frac{\|v^N\|_2^2}{\|v^N\|_1^2} \le \frac{\|v^N\|_\infty}{\|v^N\|_1} \le \frac{K(0)}{K(u^*)\sum_{i=1}^N Z_i^N},$$
where $Z_i^N$ is defined above. The first inequality follows from Hölder's inequality, and the second inequality follows from the monotonicity of $K$. Next, we define $\bar Z_i^N$ to be a Bernoulli random variable with parameter $g(u^* h_N)^{d_\gamma}$ for each $i$. For any $\theta \in (0, 1)$,
\begin{align*}
E_{P^N}\bigg[ \exp\bigg( -\frac{\theta}{\sum_{i=1}^N (w^i_N(\bar\gamma))^2} \bigg) \bigg] &\le E_{P^N}\bigg[ \exp\bigg( -\frac{\theta K(u^*)\sum_{i=1}^N \bar Z_i^N}{K(0)} \bigg) \bigg] = \Big( 1 - g(u^* h_N)^{d_\gamma} + g(u^* h_N)^{d_\gamma}\exp\big(-\theta K(u^*)/K(0)\big) \Big)^N\\
&\le \exp\Big( -N g(u^* h_N)^{d_\gamma}\big( 1 - \exp(-\theta K(u^*)/K(0)) \big) \Big) \le \exp\Big( -N g(u^* h_N)^{d_\gamma}\,\frac{\theta K(u^*)}{2K(0)} \Big) = \exp\bigg( -\frac{\theta K(u^*)\, g(k_4 u^*)^{d_\gamma}\, N^{1-\delta d_\gamma}}{2K(0)} \bigg).
\end{align*}
The first inequality follows because $g(u^* h_N)^{d_\gamma}$ is a lower bound on $P(\|\gamma^i-\bar\gamma\| \le u^* h_N)$ by Assumption 4. The first equality follows from the definition of the moment generating function for a binomial random variable. The next line follows from the inequality $e^x \ge 1 + x$ and the following from the inequality $1 - e^{-x} \ge x/2$ for $0 \le x \le 1$. Because $1 - \delta d_\gamma > p(2 + d_\xi)$, this completes the proof of (8) with $\eta = 1 - \delta d_\gamma$ and $k_2 = K(u^*)\, g(k_4 u^*)^{d_\gamma} / 2K(0)$.
EC.2. Proof of Theorem 1
In this section, we present our proof of Theorem 1. First, we must introduce some necessary terminology. To connect Theorem 2 to sample robust optimization, we consider the ∞-Wasserstein metric, which is given by:
$$d_\infty(Q, Q') \equiv \inf_\Pi \Big\{ \Pi\text{-}\mathrm{ess\,sup}_{\Xi\times\Xi}\,\|\xi - \xi'\| \ :\ \Pi \text{ is a joint distribution of } \xi \text{ and } \xi' \text{ with marginals } Q \text{ and } Q', \text{ respectively} \Big\},$$
where the essential supremum of the joint distribution is defined as
$$\Pi\text{-}\mathrm{ess\,sup}_{\Xi\times\Xi}\,\|\xi-\xi'\| = \inf\big\{ M : \Pi(\|\xi-\xi'\| > M) = 0 \big\}.$$
We make use of the following result from Bertsimas et al. (2018a):
Lemma EC.1. For any measurable $f:\Xi\to\mathbb{R}$,
$$\sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in\mathcal{U}_N^i} f(\zeta) = \sup_{Q\in\mathcal{P}(\Xi):\ d_\infty(\hat P^N_{\bar\gamma},\, Q)\le\varepsilon_N} E_{\xi\sim Q}[f(\xi)].$$
The proof of Lemma EC.1 follows identical reasoning as in Bertsimas et al. (2018a) and is thus omitted.
Next, we state a result from Bertsimas et al. (2018a) (their Theorem EC.1), which bounds the difference in worst case objective values between 1-Wasserstein and ∞-Wasserstein distributionally robust optimization problems. We note that Bertsimas et al. (2018a) proved the following result for the case that Q is the unweighted empirical measure, but their proof carries through for the case here in which Q is a weighted empirical measure.
Lemma EC.2. Let $\mathcal{Z}\subseteq\mathbb{R}^d$, $f:\mathcal{Z}\to\mathbb{R}$ be measurable, and $\zeta^1,\ldots,\zeta^N\in\mathcal{Z}$. Suppose that $Q = \sum_{i=1}^N w_i\, \delta_{\zeta^i}$ for given weights $w_1,\ldots,w_N\ge0$ that sum to one. If $\theta_2 \ge 2\theta_1 \ge 0$, then
$$\sup_{Q'\in\mathcal{P}(\mathcal{Z}):\ d_1(Q',Q)\le\theta_1} E_{\xi\sim Q'}[f(\xi)] \le \sup_{Q'\in\mathcal{P}(\mathcal{Z}):\ d_\infty(Q',Q)\le\theta_2} E_{\xi\sim Q'}[f(\xi)] + \frac{4\theta_1}{\theta_2}\sup_{\zeta\in\mathcal{Z}}|f(\zeta)|.$$
We now restate and prove the main result, which combines the new measure concentration result from this paper with similar proof techniques as Bertsimas et al. (2018a) and Esfahani and Kuhn (2018).
Theorem 1. Suppose the weight function and uncertainty sets satisfy Assumption 1, the joint probability distribution of $(\gamma, \xi)$ satisfies Assumptions 2-4 from Section 4.3, and the cost function satisfies Assumption 5 from Section 4.4. Then, for every $\bar\gamma \in \Gamma$,
$$\lim_{N\to\infty}\hat v_N(\bar\gamma) = v^*(\bar\gamma), \quad P^\infty\text{-almost surely}.$$
Proof. We break the limit into upper and lower parts. The proof of the lower part follows from an argument similar to that used by Bertsimas et al. (2018a). The proof of the upper part follows from the argument used by Esfahani and Kuhn (2018).
Lower bound. We first prove that
$$\liminf_{N\to\infty}\hat v_N(\bar\gamma) \ge v^*(\bar\gamma), \quad P^\infty\text{-almost surely}. \tag{EC.1}$$
To begin, we define
$$D_N := \{\zeta : \|\zeta\| \le \log N\},$$
and let $P_{\bar\gamma|D_N}(\cdot)$ be shorthand for $P(\cdot \mid \gamma=\bar\gamma,\ \xi\in D_N)$. Then, applying Assumption 2,
\begin{align*}
P^N\big( \cup_{i=1}^N \mathcal{U}_N^i \not\subseteq D_N \big) &\le P^N\Big( \max_{i\le N}\|\xi^i\| + \varepsilon_N > \log N \Big) \le N\, P(\|\xi\| > \log N - \varepsilon_N)\\
&= N\, E\big[ P\big(\|\xi\| - E[\|\xi\|\mid\gamma] > \log N - \varepsilon_N - E[\|\xi\|\mid\gamma] \,\big|\, \gamma\big) \big]\\
&\le N\, E\Big[ P\Big(\|\xi\| - E[\|\xi\|\mid\gamma] > \log N - \varepsilon_N - \sup_{\gamma'\in\Gamma}E[\|\xi\|\mid\gamma=\gamma'] \,\Big|\, \gamma\Big) \Big]\\
&\le N\, E\Big[ 2\exp\Big( -\frac{(\log N - \varepsilon_N - \sup_{\gamma'\in\Gamma}E[\|\xi\|\mid\gamma=\gamma'])^2}{2\sigma^2} \Big) \Big]\\
&= 2\exp\Big( \log N - \frac{(\log N - \varepsilon_N - \sup_{\gamma'\in\Gamma}E[\|\xi\|\mid\gamma=\gamma'])^2}{2\sigma^2} \Big), \tag{EC.2}
\end{align*}
which has a finite sum over $N \in \mathbb{N}$. Therefore, by the Borel–Cantelli lemma, there exists $N_0 \in \mathbb{N}$, $P^\infty$-almost surely, such that $\cup_{i=1}^N \mathcal{U}_N^i \subseteq D_N$ for all $N \ge N_0$.
We now choose any $r > 0$ such that $\varepsilon_N N^{-r}$ satisfies Assumption 1, and define $N_1 := \max\{N_0,\ 2^{1/r}\}$.
Then, the following holds for all N ≥ N 1 and π ∈ Π:
\begin{align*}
\sup_{\substack{Q\in\mathcal{P}(D_N\cap\Xi):\\ d_1(Q,\hat P^N_{\bar\gamma})\le \varepsilon_N/N^r}} E_{\xi\sim Q}[c_\pi(\xi_1,\ldots,\xi_T)] &\le \sup_{\substack{Q\in\mathcal{P}(D_N\cap\Xi):\\ d_\infty(Q,\hat P^N_{\bar\gamma})\le\varepsilon_N}} E_{\xi\sim Q}[c_\pi(\xi_1,\ldots,\xi_T)] + \frac{4}{N^r}\sup_{\zeta\in D_N\cap\Xi}|c_\pi(\zeta_1,\ldots,\zeta_T)|\\
&= \sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in\mathcal{U}_N^i} c_\pi(\zeta_1,\ldots,\zeta_T) + \frac{4}{N^r}\sup_{\zeta\in D_N\cap\Xi}|c_\pi(\zeta_1,\ldots,\zeta_T)|\\
&\le \sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in\mathcal{U}_N^i} c_\pi(\zeta_1,\ldots,\zeta_T) + \frac{4C}{N^r}(1+\log N). \tag{EC.3}
\end{align*}
Indeed, the first supremum satisfies the conditions of Lemma EC.2 since $N \ge N_0$ and $N \ge 2^{1/r}$, and the equality follows from Lemma EC.1 since $N \ge N_0$. The final inequality follows from Assumption 5 and the construction of $D_N$. We observe that the second term in (EC.3) converges to zero as $N\to\infty$. Next, we observe that
$$E[c_\pi(\xi_1,\ldots,\xi_T)\mid\gamma=\bar\gamma] = E_{\xi\sim P_{\bar\gamma}}[c_\pi(\xi_1,\ldots,\xi_T)] = E_{\xi\sim P_{\bar\gamma}}[c_\pi(\xi_1,\ldots,\xi_T)\mathbf{1}\{\xi\notin D_N\}] + E_{\xi\sim P_{\bar\gamma}}[c_\pi(\xi_1,\ldots,\xi_T)\mathbf{1}\{\xi\in D_N\}].$$
We handle the first term with the Cauchy–Schwarz inequality,
$$E_{\xi\sim P_{\bar\gamma}}[c_\pi(\xi_1,\ldots,\xi_T)\mathbf{1}\{\xi\notin D_N\}] \le \sqrt{ E_{\xi\sim P_{\bar\gamma}}[c_\pi(\xi_1,\ldots,\xi_T)^2]\; P_{\bar\gamma}(\xi\notin D_N) }.$$
By Assumptions 2 and 5, the above bound is finite and converges to zero as $N \to \infty$ uniformly over $\pi \in \Pi$. We handle the second term by the new measure concentration result from this paper. Specifically, it follows from Theorem 2 that there exists an $N_2 \ge N_1$, $P^\infty$-almost surely, such that
$$d_1(P_{\bar\gamma}, \hat P^N_{\bar\gamma}) \le \frac{\varepsilon_N}{N^r} \qquad \forall N \ge N_2.$$
Therefore, for all N ≥ N 2 and decision rules π ∈ Π:
\begin{align*}
E_{\xi\sim P_{\bar\gamma}}[c_\pi(\xi_1,\ldots,\xi_T)\mathbf{1}\{\xi\in D_N\}] &= E_{\xi\sim P_{\bar\gamma}}\Big[ \Big( c_\pi(\xi_1,\ldots,\xi_T) - \inf_{\zeta\in D_N\cap\Xi}c_\pi(\zeta_1,\ldots,\zeta_T) \Big)\mathbf{1}\{\xi\in D_N\} \Big] + \underbrace{P_{\bar\gamma}(\xi\in D_N)\inf_{\zeta\in D_N\cap\Xi}c_\pi(\zeta_1,\ldots,\zeta_T)}_{\alpha_N}\\
&\le \sup_{\substack{Q\in\mathcal{P}(\Xi):\\ d_1(Q,\hat P^N_{\bar\gamma})\le\varepsilon_N/N^r}} E_{\xi\sim Q}\Big[ \Big( c_\pi(\xi_1,\ldots,\xi_T) - \inf_{\zeta\in D_N\cap\Xi}c_\pi(\zeta_1,\ldots,\zeta_T) \Big)\mathbf{1}\{\xi\in D_N\} \Big] + \alpha_N\\
&= \sup_{\substack{Q\in\mathcal{P}(\Xi\cap D_N):\\ d_1(Q,\hat P^N_{\bar\gamma})\le\varepsilon_N/N^r}} E_{\xi\sim Q}\Big[ c_\pi(\xi_1,\ldots,\xi_T) - \inf_{\zeta\in D_N\cap\Xi}c_\pi(\zeta_1,\ldots,\zeta_T) \Big] + \alpha_N\\
&= \sup_{\substack{Q\in\mathcal{P}(\Xi\cap D_N):\\ d_1(Q,\hat P^N_{\bar\gamma})\le\varepsilon_N/N^r}} E_{\xi\sim Q}[c_\pi(\xi_1,\ldots,\xi_T)] - P_{\bar\gamma}(\xi\notin D_N)\inf_{\zeta\in D_N\cap\Xi}c_\pi(\zeta_1,\ldots,\zeta_T).
\end{align*}
Indeed, the inequality follows because $N \ge N_2$. It follows from Assumption 5 and (EC.2) that the second term in the final equality converges to zero as $N\to\infty$, uniformly over $\pi\in\Pi$. Combining the above, we conclude that
$$\liminf_{N\to\infty}\hat v_N(\bar\gamma) = \liminf_{N\to\infty}\ \inf_{\pi\in\Pi}\sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in\mathcal{U}_N^i}c_\pi(\zeta_1,\ldots,\zeta_T) \ge \inf_{\pi\in\Pi} E[c_\pi(\xi_1,\ldots,\xi_T)\mid\gamma=\bar\gamma] = v^*(\bar\gamma),$$
where the inequality holds P ∞ -almost surely. This completes the proof of (EC.1).
Upper bound. We now prove that
$$\limsup_{N\to\infty}\hat v_N(\bar\gamma) \le v^*(\bar\gamma), \quad P^\infty\text{-almost surely}. \tag{EC.4}$$
Indeed, for any arbitrary $\delta > 0$, let $x_\delta \in \mathcal{X}$ be a $\delta$-optimal solution for (1). By Esfahani and Kuhn (2018, Lemma A.1) and Assumption 5, there exists a non-increasing sequence of functions $f_j(\zeta_1,\ldots,\zeta_T)$, $j\in\mathbb{N}$, such that $\lim_{j\to\infty}f_j(\zeta_1,\ldots,\zeta_T) = c_{x_\delta}(\zeta_1,\ldots,\zeta_T)$ for all $\zeta\in\Xi$, and each $f_j$ is $L_j$-Lipschitz continuous. Furthermore, for each $N\in\mathbb{N}$, choose any probability distribution $Q_N \in \mathcal{P}(\Xi)$ such that $d_1(Q_N,\hat P^N_{\bar\gamma}) \le \varepsilon_N$ and
$$\sup_{Q\in\mathcal{P}(\Xi):\ d_1(Q,\hat P^N_{\bar\gamma})\le\varepsilon_N} E_{\xi\sim Q}[c_{x_\delta}(\xi_1,\ldots,\xi_T)] \le E_{\xi\sim Q_N}[c_{x_\delta}(\xi_1,\ldots,\xi_T)] + \delta.$$
For any j ∈ N,
\begin{align*}
\limsup_{N\to\infty}\hat v_N(\bar\gamma) &\le \limsup_{N\to\infty}\ \sup_{Q\in\mathcal{P}(\Xi):\ d_\infty(Q,\hat P^N_{\bar\gamma})\le\varepsilon_N} E_{\xi\sim Q}[c_{x_\delta}(\xi_1,\ldots,\xi_T)] \le \limsup_{N\to\infty}\ \sup_{Q\in\mathcal{P}(\Xi):\ d_1(Q,\hat P^N_{\bar\gamma})\le\varepsilon_N} E_{\xi\sim Q}[c_{x_\delta}(\xi_1,\ldots,\xi_T)]\\
&\le \limsup_{N\to\infty}\ E_{\xi\sim Q_N}[c_{x_\delta}(\xi_1,\ldots,\xi_T)] + \delta \le \limsup_{N\to\infty}\ E_{\xi\sim Q_N}[f_j(\xi_1,\ldots,\xi_T)] + \delta\\
&\le \limsup_{N\to\infty}\ E_{\xi\sim P_{\bar\gamma}}[f_j(\xi_1,\ldots,\xi_T)] + L_j\, d_1(P_{\bar\gamma}, Q_N) + \delta \le \limsup_{N\to\infty}\ E_{\xi\sim P_{\bar\gamma}}[f_j(\xi_1,\ldots,\xi_T)] + L_j\big( d_1(P_{\bar\gamma},\hat P^N_{\bar\gamma}) + d_1(Q_N,\hat P^N_{\bar\gamma}) \big) + \delta\\
&\le \limsup_{N\to\infty}\ E_{\xi\sim P_{\bar\gamma}}[f_j(\xi_1,\ldots,\xi_T)] + L_j\big( d_1(P_{\bar\gamma},\hat P^N_{\bar\gamma}) + \varepsilon_N \big) + \delta = E_{\xi\sim P_{\bar\gamma}}[f_j(\xi_1,\ldots,\xi_T)] + \delta, \quad P^\infty\text{-almost surely},
\end{align*}
where we have used the fact $d_1(P, Q) \le d_\infty(P, Q)$ for the second inequality, the dual form of the 1-Wasserstein metric for the fifth inequality (because $f_j$ is $L_j$-Lipschitz), and Theorem 2 for the equality. Taking the limit as $j\to\infty$, and applying the monotone convergence theorem (which is allowed because $E_{\xi\sim P_{\bar\gamma}}|f_1(\xi_1,\ldots,\xi_T)| \le L_1 E_{\xi\sim P_{\bar\gamma}}\|\xi\| + |f_1(0)| < \infty$ by Assumption 4), gives
$$\limsup_{N\to\infty}\hat v_N(\bar\gamma) \le E_{\xi\sim P_{\bar\gamma}}[c_{x_\delta}(\xi_1,\ldots,\xi_T)] + \delta \le v^*(\bar\gamma) + 2\delta, \quad P^\infty\text{-almost surely}.$$
Since δ > 0 was chosen arbitrarily, the proof of (EC.4) is complete.
EC.3. Proof of Theorem 3
In this section, we present our proof of Theorem 3 from Section 5. We restate the theorem here for convenience.
Theorem 3. For cost functions of the form (10), $\tilde v_N(\bar\gamma) = \hat v_N(\bar\gamma)$.
Proof. We first show that $\tilde v_N(\bar\gamma) \ge \hat v_N(\bar\gamma)$. Indeed, consider any primary decision rule $\bar\pi$ and auxiliary decision rules $\bar y_1^i, \ldots, \bar y_T^i$ for each $i \in \{1,\ldots,N\}$ which are optimal for (11). Then, it follows from feasibility to (11) that
$$\hat v_N(\bar\gamma) = \min_{\pi\in\Pi}\sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in\mathcal{U}_N^i} c_\pi(\zeta_1,\ldots,\zeta_T) \le \sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in\mathcal{U}_N^i} c_{\bar\pi}(\zeta_1,\ldots,\zeta_T) \le \sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in\mathcal{U}_N^i}\sum_{t=1}^T\Big( f_t^\top\bar\pi_t(\zeta_1,\ldots,\zeta_{t-1}) + g_t^\top\zeta_t + h_t^\top\bar y_t^i(\zeta_1,\ldots,\zeta_t) \Big) = \tilde v_N(\bar\gamma).$$
The other side of the inequality follows from similar reasoning. Indeed, let $\hat\pi$ be an optimal solution to (3). For each $i\in\{1,\ldots,N\}$ and $t\in\{1,\ldots,T\}$, define $\bar y_t^i\in\mathcal{R}_t$ as any decision rule that satisfies
$$\bar y_t^i(\zeta_1,\ldots,\zeta_t) \in \underset{y_t\in\mathbb{R}^{d_t^y}}{\arg\min}\Big\{ h_t^\top y_t \ :\ \sum_{s=1}^t A_{t,s}\hat\pi_s(\zeta_1,\ldots,\zeta_{s-1}) + \sum_{s=1}^t B_{t,s}\zeta_s + C_t y_t \le d_t \Big\}$$
for every $\zeta\in\mathcal{U}_N^i$. With this choice, the expression inside the supremum in (11) equals $c_{\hat\pi}(\zeta_1,\ldots,\zeta_T)$ for every $\zeta\in\mathcal{U}_N^i$, so $(\hat\pi, \bar y)$ is feasible for (11) and attains an objective value of $\sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in\mathcal{U}_N^i} c_{\hat\pi}(\zeta_1,\ldots,\zeta_T) = \hat v_N(\bar\gamma)$, which implies $\tilde v_N(\bar\gamma) \le \hat v_N(\bar\gamma)$. Combining the above inequalities, the proof is complete.
EC.4. Tractable Reformulation of the Multi-Policy Approximation
For completeness, we now show how to reformulate the multi-policy approximation scheme with linear decision rules from Section 5 into a deterministic optimization problem using standard techniques from robust optimization.
We begin by transforming (11) with linear decision rules into a more compact representation.
First, we combine the primary linear decision rules across stages as
$$x_0 = \begin{pmatrix} x_{1,0} \\ \vdots \\ x_{T,0}\end{pmatrix}\in\mathbb{R}^{d_x}, \qquad X = \begin{pmatrix} 0 & 0 & \cdots & 0 & 0\\ X_{2,1} & 0 & \cdots & 0 & 0\\ X_{3,1} & X_{3,2} & \cdots & 0 & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots \\ X_{T,1} & X_{T,2} & \cdots & X_{T,T-1} & 0 \end{pmatrix}\in\mathbb{R}^{d_x\times d_\xi}.$$
We note that the zero entries in the above matrix are necessary to ensure that the linear decision rules are non-anticipative. Similarly, for each $i\in\{1,\ldots,N\}$, we represent the auxiliary linear decision rules as
$$y_0^i = \begin{pmatrix} y_{1,0}^i\\ \vdots\\ y_{T,0}^i\end{pmatrix}\in\mathbb{R}^{d_y}, \qquad Y^i = \begin{pmatrix} Y_{1,1}^i & 0 & \cdots & 0\\ Y_{2,1}^i & Y_{2,2}^i & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ Y_{T,1}^i & Y_{T,2}^i & \cdots & Y_{T,T}^i\end{pmatrix}\in\mathbb{R}^{d_y\times d_\xi}.$$
We now combine the problem parameters. Let $d = (d_1,\ldots,d_T)\in\mathbb{R}^m$ and
$$f = \begin{pmatrix} f_1\\ \vdots\\ f_T\end{pmatrix}\in\mathbb{R}^{d_x},\qquad g = \begin{pmatrix} g_1\\ \vdots\\ g_T\end{pmatrix}\in\mathbb{R}^{d_\xi},\qquad h = \begin{pmatrix} h_1\\ \vdots\\ h_T\end{pmatrix}\in\mathbb{R}^{d_y},$$
$$A = \begin{pmatrix} A_{1,1} & 0 & \cdots & 0\\ A_{2,1} & A_{2,2} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ A_{T,1} & A_{T,2} & \cdots & A_{T,T}\end{pmatrix}\in\mathbb{R}^{m\times d_x},\qquad B = \begin{pmatrix} B_{1,1} & 0 & \cdots & 0\\ B_{2,1} & B_{2,2} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ B_{T,1} & B_{T,2} & \cdots & B_{T,T}\end{pmatrix}\in\mathbb{R}^{m\times d_\xi},\qquad C = \begin{pmatrix} C_1 & 0 & \cdots & 0\\ 0 & C_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & C_T\end{pmatrix}\in\mathbb{R}^{m\times d_y}.$$
Therefore, using the above compact notation, we can rewrite the multi-policy approximation with linear decision rules as
$$\begin{aligned}
\underset{\substack{x_0\in\mathbb{R}^{d_x},\ X\in\mathbb{R}^{d_x\times d_\xi}\\ y_0^i\in\mathbb{R}^{d_y},\ Y^i\in\mathbb{R}^{d_y\times d_\xi}}}{\text{minimize}}\quad & \sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in\mathcal{U}_N^i}\Big( f^\top(x_0 + X\zeta) + g^\top\zeta + h^\top(y_0^i + Y^i\zeta) \Big)\\
\text{subject to}\quad & A(x_0 + X\zeta) + B\zeta + C(y_0^i + Y^i\zeta) \le d\\
& x_0 + X\zeta\in\mathcal{X} \qquad\qquad \forall\zeta\in\mathcal{U}_N^i,\ i\in\{1,\ldots,N\},
\end{aligned} \tag{EC.5}$$
where $\mathcal{X} := \mathcal{X}_1\times\cdots\times\mathcal{X}_T$ and the matrices $X$ and $Y^i$ are non-anticipative. Note that the linear decision rules in the above optimization problem are represented using $O(d_\xi \max\{d_x,\ N d_y\})$ decision variables, where $d_x := d_x^1 + \cdots + d_x^T$ and $d_y := d_y^1 + \cdots + d_y^T$. Thus, the complexity of representing the primary and auxiliary linear decision rules scales efficiently both in the size of the dataset and the number of stages. For simplicity, we present the reformulation for the case in which there are no constraints on the decision variables and nonnegativity constraints on the random variables.
Theorem EC.2. Suppose $\Xi = \mathbb{R}^{d_\xi}_+$ and $\mathcal{X} = \mathbb{R}^{d_x}$. Then, (EC.5) is equivalent to
$$\begin{aligned}
\underset{\substack{x_0\in\mathbb{R}^{d_x},\ X\in\mathbb{R}^{d_x\times d_\xi},\ y_0^i\in\mathbb{R}^{d_y},\ Y^i\in\mathbb{R}^{d_y\times d_\xi}\\ \Lambda^i\in\mathbb{R}^{m\times d_\xi}_+,\ s^i\in\mathbb{R}^{d_\xi}_+}}{\text{minimize}}\quad & \sum_{i=1}^N w^i_N(\bar\gamma)\Big( f^\top(x_0 + X\xi^i) + g^\top\xi^i + h^\top(y_0^i + Y^i\xi^i) + (s^i)^\top\xi^i + \varepsilon_N\big\| X^\top f + g + (Y^i)^\top h + s^i \big\|_* \Big)\\
\text{subject to}\quad & A(x_0 + X\xi^i) + B\xi^i + C(y_0^i + Y^i\xi^i) + \Lambda^i\xi^i + \varepsilon_N\big\| AX + B + CY^i + \Lambda^i\big\|_* \le d \qquad \forall i\in\{1,\ldots,N\},
\end{aligned}$$
where $\|Z\|_* := (\|z_1\|_*, \ldots, \|z_r\|_*)^\top \in \mathbb{R}^r$ for any matrix $Z \in \mathbb{R}^{r\times n}$ with rows $z_1, \ldots, z_r$.
Proof. For any $c\in\mathbb{R}^{d_\xi}$ and $\xi\in\Xi$, it follows directly from strong duality for conic optimization that
$$\max_{\zeta\ge0}\big\{ c^\top\zeta \ :\ \|\zeta-\xi\|\le\varepsilon \big\} = \min_{\lambda\ge0}\big\{ (c+\lambda)^\top\xi + \varepsilon\,\|c+\lambda\|_* \big\}.$$
We use this result to reformulate the objective and constraints of (EC.5). First, let the $j$-th rows of $A$, $B$, $C$ and the $j$-th element of $d$ be denoted by $a_j\in\mathbb{R}^{d_x}$, $b_j\in\mathbb{R}^{d_\xi}$, $c_j\in\mathbb{R}^{d_y}$, and $d_j\in\mathbb{R}$. Then, each robust constraint has the form
$$a_j^\top(x_0 + X\zeta) + b_j^\top\zeta + c_j^\top(y_0^i + Y^i\zeta) \le d_j \qquad \forall\zeta\in\mathcal{U}_N^i.$$
Rearranging terms,
$$\big(a_j^\top X + b_j^\top + c_j^\top Y^i\big)\zeta \le d_j - a_j^\top x_0 - c_j^\top y_0^i \qquad \forall\zeta\in\mathcal{U}_N^i,$$
which, applying duality, becomes
$$\exists\,\lambda_j^i\ge0:\quad \big( X^\top a_j + b_j + (Y^i)^\top c_j + \lambda_j^i \big)^\top\xi^i + \varepsilon_N\big\| X^\top a_j + b_j + (Y^i)^\top c_j + \lambda_j^i \big\|_* \le d_j - a_j^\top x_0 - c_j^\top y_0^i.$$
Rearranging terms, the robust constraints for each $i\in\{1,\ldots,N\}$ are satisfied if and only if
$$\exists\,\Lambda^i\ge0:\quad A(x_0 + X\xi^i) + B\xi^i + C(y_0^i + Y^i\xi^i) + \Lambda^i\xi^i + \varepsilon_N\big\| AX + B + CY^i + \Lambda^i \big\|_* \le d,$$
where the dual norm for a matrix is applied separately for each row. Similarly, the objective function takes the form
\begin{align*}
\sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in\mathcal{U}_N^i}\Big( f^\top(x_0 + X\zeta) + g^\top\zeta + h^\top(y_0^i + Y^i\zeta) \Big) &= \sum_{i=1}^N w^i_N(\bar\gamma)\Big( f^\top x_0 + h^\top y_0^i + \sup_{\zeta\in\mathcal{U}_N^i}\big( f^\top X + g^\top + h^\top Y^i \big)\zeta \Big)\\
&= \sum_{i=1}^N w^i_N(\bar\gamma)\Big( f^\top x_0 + h^\top y_0^i + \inf_{s^i\ge0}\Big\{ \big( X^\top f + g + (Y^i)^\top h + s^i \big)^\top\xi^i + \varepsilon_N\big\| X^\top f + g + (Y^i)^\top h + s^i \big\|_* \Big\} \Big)\\
&= \sum_{i=1}^N w^i_N(\bar\gamma)\Big( f^\top(x_0 + X\xi^i) + g^\top\xi^i + h^\top(y_0^i + Y^i\xi^i) + \inf_{s^i\ge0}\Big\{ (s^i)^\top\xi^i + \varepsilon_N\big\| X^\top f + g + (Y^i)^\top h + s^i \big\|_* \Big\} \Big).
\end{align*}
Combining the reformulations above, we obtain the desired reformulation. | 14,319 |
1907.07307 | 2961578905 | We present a data-driven framework for incorporating side information in dynamic optimization under uncertainty. Specifically, our approach uses predictive machine learning methods (such as k-nearest neighbors, kernel regression, and random forests) to weight the relative importance of various data-driven uncertainty sets in a robust optimization formulation. Through a novel measure concentration result for local machine learning methods, we prove that the proposed framework is asymptotically optimal for stochastic dynamic optimization with covariates. We also describe a general-purpose approximation for the proposed framework, based on overlapping linear decision rules, which is computationally tractable and produces high-quality solutions for dynamic problems with many stages. Across a variety of examples in shipment planning, inventory management, and finance, our method achieves improvements of up to 15 over alternatives and requires less than one minute of computation time on problems with twelve stages. | Several recent papers have focused on tractable approximations of two- and multi-stage and robust optimization. Many approaches are based around policy approximation schemes, including lifted linear decision rules , @math -adaptivity , and finite adaptability . Alternative approaches include tractable approximations of copositive formulations . Closest related to the approximation scheme in this paper are @cite_1 and @cite_13 , which address two-stage problems via overlapping decision rules. @cite_1 propose a modeling approach that leads to novel approximations of various distributionally robust applications, including two-stage distributionally robust optimization using Wasserstein ambiguity sets and expectations of piecewise convex objective functions in single-stage problems. Independently, @cite_13 investigate a of two-stage sample robust optimization by optimizing a separate linear decision rule for each uncertainty set and prove that this approximation gap converges to zero as the amount of data goes to infinity. In of this paper, we show how to extend similar techniques to dynamic problems with many stages for the first time. | {
"abstract": [
"We investigate a data-driven approach to two-stage stochastic linear optimization in which an uncertainty set is constructed around each data point. We propose an approximation algorithm for these sample robust optimization problems by optimizing a separate linear decision rule for each uncertainty set. We show that the proposed algorithm combines the asymptotic optimality and scalability of the sample average approximation while simultaneously offering improved out-of-sample performance guarantees. The practical value of our method is demonstrated in network inventory management and hospital scheduling.",
"Stochastic programming provides a versatile framework for decision-making under uncertainty, but the resulting optimization problems can be computationally demanding. It has recently been shown that primal and dual linear decision rule approximations can yield tractable upper and lower bounds on the optimal value of a stochastic program. Unfortunately, linear decision rules often provide crude approximations that result in loose bounds. To address this problem, we propose a lifting technique that maps a given stochastic program to an equivalent problem on a higher-dimensional probability space. We prove that solving the lifted problem in primal and dual linear decision rules provides tighter bounds than those obtained from applying linear decision rules to the original problem. We also show that there is a one-to-one correspondence between linear decision rules in the lifted problem and families of nonlinear decision rules in the original problem. Finally, we identify structured liftings that give rise to highly flexible piecewise linear and nonlinear decision rules, and we assess their performance in the context of a dynamic production planning problem."
],
"cite_N": [
"@cite_13",
"@cite_1"
],
"mid": [
"2960111905",
"2000508521"
]
} | Dynamic Optimization with Side Information | Dynamic decision making under uncertainty forms the foundation for numerous fundamental problems in operations research and management science. In these problems, a decision maker attempts to minimize an uncertain objective over time, as information incrementally becomes available. For example, consider a retailer with the goal of managing the inventory of a new short life cycle product. Each week, the retailer must decide an ordering quantity to replenish its inventory. Future demand for the product is unknown, but the retailer can base its ordering decisions on the remaining inventory level, which depends on the realized demands in previous weeks. A risk-averse investor faces a similar problem when constructing and adjusting a portfolio of assets in order to achieve a desirable risk-return tradeoff over a horizon of many months. Additional examples abound in energy planning, airline routing, and ride sharing, as well as in many other areas.
To make high quality decisions in dynamic environments, the decision maker must accurately model future uncertainty. Often, practitioners have access to side information or auxiliary covariates, which can help predict that uncertainty. For a retailer, although the future demand for a newly introduced clothing item is unknown, data on the brand, style, and color of the item, as well as data on market trends and social media, can help predict it. For a risk-averse investor, while the returns of the assets in future stages are uncertain, recent asset returns and prices of relevant options can provide crucial insight into upcoming volatility. Consequently, organizations across many industries are continuing to prioritize the use of predictive analytics in order to leverage vast quantities of data to understand future uncertainty and make better operational decisions.
A recent body of work has aimed to leverage predictive analytics in decision making under uncertainty. For example, Hannah et al. (2010), Ban and Rudin (2018), Bertsimas and Kallus (2014) and Ho and Hanasusanto (2019) investigate prescriptive approaches, based on sample average approximation, that use local machine learning to assign weights to the historical data based on covariates. Bertsimas and Van Parys (2017) propose adding robustness to those weights to achieve optimal asymptotic budget guarantees. Elmachtoub and Grigas (2017) develop an approach for linear optimization problems in which a machine learning model is trained to minimize the decision cost. All of these approaches are specialized for single-stage or two-stage optimization problems, and do not readily generalize to problems with many stages. For a class of dynamic inventory problems, Ban et al. (2018) propose a data-driven approach by fitting the stochastic process and covariates to a parametric regression model, which is asymptotically optimal when the model is correctly specified. Bertsimas and McCord (2019) propose a different approach based on dynamic programming that uses nonparametric machine learning methods to handle auxiliary covariates.
However, these dynamic approaches require scenario tree enumeration and suffer from the curse of dimensionality. To the best of our knowledge, no previous work leverages machine learning in a computationally tractable, data-driven framework for decision making in dynamic environments with covariates.
Recently, Bertsimas et al. (2018a) developed a data-driven approach for dynamic optimization under uncertainty that they call sample robust optimization (SRO). Their SRO framework solves a robust optimization problem in which an uncertainty set is constructed around each historical sample path. They show this data-driven framework enjoys nonparametric out-of-sample performance guarantees for a class of dynamic linear optimization problems without covariates and show that this framework can be approximated using decision rule techniques from robust optimization.
Contributions
In this paper, we present a new framework for leveraging side information in dynamic optimization. Specifically, we propose combining local machine learning methods with the sample robust optimization framework. Through a new measure concentration result, we show that the proposed sample robust optimization with covariates framework is asymptotically optimal, providing the assurance that the resulting decisions are nearly optimal in the presence of big data. We also demonstrate the tractability of the approach via an approximation algorithm based on overlapping linear decision rules. To the best of our knowledge, our method is the first nonparametric approach for tractably solving dynamic optimization problems with covariates, offering practitioners a general-purpose tool for better decision making with predictive analytics. We summarize our main contributions as follows:
• We present a general-purpose framework for leveraging machine learning in data-driven dynamic optimization with covariates. Our approach extends the sample robust optimization framework by assigning weights to the uncertainty sets based on covariates. The weights are computed using machine learning methods such as k-nearest neighbor regression, kernel regression, and random forest regression.
• We provide theoretical justification for the proposed framework in the big data setting. First, we develop a new measure concentration result for local machine learning methods (Theorem 2), which shows that the weighted empirical distribution produced by local predictors converges quickly to the true conditional distribution. To the best of our knowledge, such a result for local machine learning is the first of its kind. We use Theorem 2 to establish that the proposed framework is asymptotically optimal for dynamic optimization with covariates without any parametric assumptions (Theorem 1).
• To find high quality solutions for problems with many stages in practical computation times, we present an approximation scheme based on overlapping linear decision rules. Specifically, we propose using separate linear decision rules for each uncertainty set to approximate the costs incurred in each stage. We show that the approximation is computationally tractable, both with respect to the number of stages and size of the historical dataset.
• By using all available data, we show that our method produces decisions that achieve improved out-of-sample performance. Specifically, in a variety of examples (shipment planning, inventory management, and finance), across a variety of time horizons, our proposed method outperforms alternatives, in a statistically significant manner, achieving up to 15% improvement in average out-of-sample cost. Moreover, our algorithm is practical and scalable, requiring less than one minute on examples with up to twelve stages.
The paper is organized as follows. Section 2 introduces the problem setting and notation. Section 3 proposes the new framework for incorporating machine learning into dynamic optimization. Section 4 develops theoretical guarantees on the proposed framework. Section 5 presents the general multi-policy approximation scheme for dynamic optimization with covariates. Section 6 presents a detailed investigation and computational simulations of the proposed methodology in shipment planning, inventory management, and finance. We conclude in Section 7.
Problem Setting
We consider finite-horizon discrete-time stochastic dynamic optimization problems. The uncertain quantities observed in each stage are denoted by random variables
ξ 1 ∈ Ξ 1 ⊆ R d 1 ξ , . . . , ξ T ∈ Ξ T ⊆ R d T ξ . The decisions made in each stage are denoted by x 1 ∈ X 1 ⊆ R d 1 x , . . . , x T ∈ X T ⊆ R d T x .
Given realizations of the uncertain quantities and decisions, we incur a cost of
c (ξ 1 , . . . , ξ T , x 1 , . . . , x T ) ∈ R.
A decision rule $\pi = (\pi_1, \ldots, \pi_T)$ is a collection of measurable functions $\pi_t : \Xi_1 \times \cdots \times \Xi_{t-1} \to \mathcal{X}_t$ which specify what decision to make in stage $t$ based on the information observed up to that point.
Given realizations of the uncertain quantities and a choice of decision rules, the resulting cost is
$$c_\pi(\xi_1, \ldots, \xi_T) := c\big(\xi_1, \ldots, \xi_T,\ \pi_1, \ldots, \pi_T(\xi_1, \ldots, \xi_{T-1})\big).$$
Before selecting the decision rules, we observe auxiliary covariates $\gamma \in \Gamma \subseteq \mathbb{R}^{d_\gamma}$. For example, in the aforementioned fashion setting, the auxiliary covariates may include information on the brand, style, and color of a new clothing item, with the remaining uncertainties representing the demand for the product in each week of the lifecycle. Given a realization of the covariates $\gamma = \bar\gamma$, our goal is to find decision rules which minimize the conditional expected cost:
$$v^*(\bar\gamma) := \underset{\pi\in\Pi}{\text{minimize}}\quad E\big[\, c_\pi(\xi_1,\ldots,\xi_T)\ \big|\ \gamma=\bar\gamma \,\big]. \tag{1}$$
We refer to (1) as dynamic optimization with covariates. The optimization takes place over a collection Π which is any subset of the space of all non-anticipative decision rules.
In this paper, we assume that the joint distribution of the covariates and uncertain quantities (γ, ξ 1 , . . . , ξ T ) is unknown, and our knowledge consists of historical data of the form
(γ 1 , ξ 1 1 , . . . , ξ 1 T ), . . . , (γ N , ξ N 1 , . . . , ξ N T ),
where each of these tuples consists of a realization of the auxiliary covariates and the following realization of the random variables over the stages. For example, in the aforementioned fashion setting, each tuple corresponds to the covariates of a past fashion item as well as its demand over its lifecycle. We will not assume any parametric structure on the relationship between the covariates and future uncertainty.
The goal of this paper is a general-purpose, computationally tractable, data-driven approach for approximately solving dynamic optimization with covariates. In the following sections, we propose and analyze a new framework which leverages nonparametric machine learning, trained from historical data, to predict future uncertainty from covariates in a way that leads to near-optimal decision rules to (1).
Notation
The joint probability distribution of the covariates γ and uncertain quantities ξ = (ξ 1 , . . . , ξ T ) is denoted by P. For the purpose of proving theorems, we assume throughout this paper that the historical data are independent and identically distributed (i.i.d.) samples from this distribution P. In other words, we assume that the historical data satisfies
((γ 1 , ξ 1 ), . . . , (γ N , ξ N )) ∼ P N ,
where P N := P × · · · × P is the product measure. The set of all probability distributions supported on Ξ := Ξ 1 × · · · × Ξ T ⊆ R d ξ is denoted by P(Ξ). For each of the covariatesγ ∈ Γ, we assume that its conditional probability distribution satisfies Pγ ∈ P(Ξ), where Pγ(·) is shorthand for P(· | γ = γ). We sometimes use subscript notation for expectations to specify the underlying probability distribution; for example, the following two expressions are equivalent:
E ξ∼Pγ [f (ξ 1 , . . . , ξ T )] ≡ E [f (ξ 1 , . . . , ξ T ) | γ =γ] .
Finally, we say that the cost function resulting from a policy π is upper semicontinuous if lim sup ζ→ζ c π (ζ 1 , . . . , ζ T ) ≤ c π (ζ 1 , . . . ,ζ T ) for allζ ∈ Ξ.
Sample Robust Optimization with Covariates
In this section, we present our approach for incorporating machine learning in dynamic optimization. We first review sample robust optimization, and then we introduce our new sample robust optimization with covariates framework.
Preliminary: sample robust optimization
Consider a stochastic dynamic optimization problem of the form (1) in which there are no auxiliary covariates. The underlying joint distribution of the random variables ξ ≡ (ξ 1 , . . . , ξ T ) is unknown, but we have data consisting of sample paths, ξ 1 ≡ (ξ 1 1 , . . . , ξ 1 T ), . . . , ξ N ≡ (ξ N 1 , . . . , ξ N T ). For this setting, sample robust optimization can be used to find approximate solutions in stochastic dynamic optimization. To apply the framework, one constructs an uncertainty set around each sample path in the training data and then chooses the decision rules that optimize the average of the worstcase realizations of the cost. Formally, this framework results in the following robust optimization problem:
minimize π∈Π N i=1 1 N sup ζ∈U i N c π (ζ 1 , . . . , ζ T ),(2)
where U i N ⊆ Ξ is an uncertainty set around ξ i . Intuitively speaking, (2) chooses the decision rules by averaging over the historical sample paths which are adversarially perturbed. Under mild probabilistic assumptions on the underlying joint distribution and appropriately constructed uncertainty sets, Bertsimas et al. (2018a) show that sample robust optimization converges asymptotically to the underlying stochastic problem and that (2) is amenable to approximations similar to dynamic robust optimization.
Incorporating covariates into sample robust optimization
We now present our new framework, based on sample robust optimization, for solving dynamic optimization with covariates. In the proposed framework, we first train a machine learning algorithm on the historical data to predict future uncertainty (ξ 1 , . . . , ξ T ) as a function of the covariates.
From the trained learner, we obtain weight functions w i N (γ), for i = 1, . . . , N , each of which captures the relevance of the ith training sample to the new covariates,γ. We incorporate the weights into sample robust optimization by multiplying the cost associated with each training example by the corresponding weight function. The resulting sample robust optimization with covariates framework is as follows:
$$\hat v_N(\bar\gamma) := \underset{\pi\in\Pi}{\text{minimize}}\quad \sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in\mathcal{U}_N^i} c_\pi(\zeta_1,\ldots,\zeta_T), \tag{3}$$
where the uncertainty sets are defined
$$\mathcal{U}_N^i := \big\{ \zeta \in \Xi \ :\ \|\zeta - \xi^i\| \le \varepsilon_N \big\},$$
and $\|\cdot\|$ is some $\ell_p$ norm with $p \ge 1$.
The above framework provides the flexibility for the practitioner to construct weights from a variety of machine learning algorithms. We focus in this paper on weight functions which come from nonparametric machine learning methods. Examples of viable predictive models include k-nearest neighbors (kNN), kernel regression, classification and regression trees (CART), and random forests (RF). We describe these four classes of weight functions.
Definition 1. The k-nearest neighbor weight functions are given by:
w^i_{N,kNN}(γ̄) := 1/k_N if γ^i is a k_N-nearest neighbor of γ̄, and 0 otherwise. Formally, γ^i is a k_N-nearest neighbor of γ̄ if |{ j ∈ {1, ..., N} \ {i} : ‖γ^j − γ̄‖ < ‖γ^i − γ̄‖ }| < k_N.
For more technical details, we refer the reader to Biau and Devroye (2015).
Definition 2. The kernel regression weight functions are given by:
w^i_{N,KR}(γ̄) := K(‖γ^i − γ̄‖ / h_N) / Σ_{j=1}^N K(‖γ^j − γ̄‖ / h_N),
where K(·) is the kernel function and h_N is the bandwidth parameter. Examples of kernel functions include the Gaussian kernel, K(u) = (1/√(2π)) e^{−u²/2}, the triangular kernel, K(u) = (1 − u) 1{u ≤ 1}, and the Epanechnikov kernel, K(u) = (3/4)(1 − u²) 1{u ≤ 1}. For more information on kernel regression, see Friedman et al. (2001, Chapter 6).
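As a rough illustration (not the authors' code), the k-nearest-neighbor and kernel regression weights of Definitions 1 and 2 can be computed with a few lines of NumPy. Ties among neighbors are broken by sort order here, which differs slightly from the formal tie-free definition above, and the kernel weights assume that at least one training covariate receives positive kernel mass.

```python
import numpy as np

def knn_weights(gamma_train, gamma_new, k):
    """k-nearest-neighbor weights: 1/k on the k closest covariate samples, 0 elsewhere."""
    dist = np.linalg.norm(gamma_train - gamma_new, axis=1)
    nearest = np.argsort(dist)[:k]
    w = np.zeros(len(gamma_train))
    w[nearest] = 1.0 / k
    return w

def kernel_weights(gamma_train, gamma_new, h, kernel="gaussian"):
    """Kernel regression weights K(||gamma_i - gamma_new|| / h), normalized to sum to one."""
    u = np.linalg.norm(gamma_train - gamma_new, axis=1) / h
    if kernel == "gaussian":
        k = np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)
    elif kernel == "triangular":
        k = np.maximum(1 - u, 0.0)
    else:  # Epanechnikov
        k = 0.75 * np.maximum(1 - u**2, 0.0)
    return k / k.sum()   # assumes k.sum() > 0, i.e. some sample is within the bandwidth
```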
The next two types of weight functions we present are based on classification and regression trees (Breiman et al. 1984) and random forests (Breiman 2001). We refer the reader to Bertsimas and Kallus (2014) for technical implementation details.
Definition 3. The classification and regression tree weight functions are given by:
w^i_{N,CART}(γ̄) := 1/|l_N(γ̄)| if i ∈ l_N(γ̄), and 0 otherwise,
where l_N(γ̄) is the set of indices i such that γ^i is contained in the same leaf of the tree as γ̄.
Definition 4. The random forest weight functions are given by:
w^i_{N,RF}(γ̄) := (1/B) Σ_{b=1}^B w^{i,b}_{N,CART}(γ̄),
where B is the number of trees in the ensemble and w^{i,b}_{N,CART}(γ̄) is the weight function of the bth tree in the ensemble.
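The tree-based weights can be computed from the leaf memberships exposed by standard implementations. The sketch below uses scikit-learn's RandomForestRegressor and follows Definition 4 directly; implementation details (for example, how bootstrap samples are handled) vary across the references cited above, so this is one simple variant rather than the canonical construction.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def forest_weights(model, gamma_train, gamma_new):
    """Random-forest weights: average over trees of 1/|leaf| for training points that
    fall in the same leaf as the query covariate (Definition 4)."""
    train_leaves = model.apply(gamma_train)                    # (N, B): leaf index per sample, per tree
    query_leaves = model.apply(gamma_new.reshape(1, -1))[0]    # (B,)
    N, B = train_leaves.shape
    w = np.zeros(N)
    for b in range(B):
        in_leaf = train_leaves[:, b] == query_leaves[b]
        if in_leaf.any():
            w[in_leaf] += 1.0 / in_leaf.sum()
    return w / B

# toy usage: the response is the uncertain quantity; here a scalar for simplicity
rng = np.random.default_rng(1)
gamma_train = rng.normal(size=(200, 3))
xi_train = gamma_train @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=200)
rf = RandomForestRegressor(n_estimators=100, min_samples_leaf=10, random_state=0)
rf.fit(gamma_train, xi_train)
w = forest_weights(rf, gamma_train, rng.normal(size=3))
assert abs(w.sum() - 1.0) < 1e-9
```

Setting n_estimators=1 in the same code recovers the CART weights of Definition 3.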
All of the above weight functions come from nonparametric machine learning methods. They are highly effective as predictive methods because they can learn complex relationships between the covariates and the response variable without requiring the practitioner to state an explicit parametric form. Similarly, as we prove in Section 4, solutions to (3) with these weight functions are asymptotically optimal for (1) without any parametric restrictions on the relationship between γ and ξ. In other words, incorporating covariates into sample robust optimization via (3) leads to better decisions asymptotically, even without specific knowledge of how the covariates affect the uncertainty.
Asymptotic Optimality
In this section, we establish asymptotic optimality guarantees for sample robust optimization with auxiliary covariates. We prove that, under mild conditions, (3) converges to (1) as the number of training samples goes to infinity. Thus, as the amount of data grows, sample robust optimization with covariates becomes an optimal approximation of the underlying stochastic dynamic optimization problem. Crucially, our convergence guarantee does not require parametric restrictions on the space of decision rules (e.g., linearity) or parametric restrictions on the joint distribution of the covariates and uncertain quantities. These theoretical results are consistent with empirical experiments in Section 6.
Main result
We begin by presenting our main result. The proof of the result depends on some technical assumptions and concepts from distributionally robust optimization. For simplicity, we defer the statement and discussion of the technical assumptions regarding the underlying probability distribution and cost until Sections 4.3 and 4.4, and first discuss what is needed to apply the method in practice. The practitioner needs to select a weight function, the parameters associated with that weight function, and the radius ε_N of the uncertainty sets. While these may be selected by cross validation, we show that the method will in general converge if the parameters are selected to satisfy the following:
Assumption 1. The weight functions and uncertainty set radius satisfy one of the following:
1. {w^i_N(·)} are k-nearest neighbor weight functions with k_N = min(⌈k_3 N^δ⌉, N − 1) for constants k_3 > 0 and δ ∈ (1/2, 1), and ε_N = k_1 N^{−p} for constants k_1 > 0 and 0 < p < min{ (1 − δ)/d_γ, (2δ − 1)/(d_ξ + 2) }.
2. {w^i_N(·)} are kernel regression weight functions with the Gaussian, triangular, or Epanechnikov kernel function and h_N = k_4 N^{−δ} for constants k_4 > 0 and δ ∈ (0, 1/(2d_γ)), and ε_N = k_1 N^{−p} for constants k_1 > 0 and 0 < p < min{ δ, (1 − δ d_γ)/(2 + d_ξ) }.
Given Assumption 1, our main result is the following.
Theorem 1. Suppose the weight function and uncertainty sets satisfy Assumption 1, the joint probability distribution of (γ, ξ) satisfies Assumptions 2-4 from Section 4.3, and the cost function satisfies Assumption 5 from Section 4.4. Then, for every γ̄ ∈ Γ,
lim_{N→∞} v̂_N(γ̄) = v*(γ̄),   P^∞-almost surely.
The theorem says that the objective value of (3) converges almost surely to the optimal value of the full-information problem (1) as N goes to infinity. The assumptions of the theorem require that the joint distribution and the feasible decision rules are well behaved. We discuss these technical assumptions in more detail in the following sections.
In order to prove the asymptotic optimality of sample robust optimization with covariates, we view (3) through the more general lens of Wasserstein-based distributionally robust optimization.
We first review some properties of the Wasserstein metric and then prove a key intermediary result, from which our main result follows.
Review of the Wasserstein metric
The Wasserstein metric provides a distance function between probability distributions. In particular, given two probability distributions Q, Q ∈ P(Ξ), the type-1 Wasserstein distance is defined as the optimal objective value of a minimization problem:
d_1(Q, Q′) := inf { E_{(ξ,ξ′)∼Π} ‖ξ − ξ′‖ : Π is a joint distribution of ξ and ξ′ with marginals Q and Q′, respectively }.
The Wasserstein metric is particularly appealing because a distribution with finite support can have a finite distance to a continuous distribution. This allows us to construct a Wasserstein ball around an empirical distribution that includes continuous distributions, which cannot be done with other popular measures such as the Kullback-Leibler divergence (Kullback and Leibler 1951).
We remark that the 1-Wasserstein metric satisfies the axioms of a metric, including the triangle inequality (Clement and Desch 2008):
d 1 (Q 1 , Q 2 ) ≤ d 1 (Q 1 , Q 3 ) + d 1 (Q 3 , Q 2 ), ∀Q 1 , Q 2 , Q 3 ∈ P(Ξ).
Important to this paper, the 1-Wasserstein metric admits a dual form, as shown by Kantorovich and Rubinstein (1958),
d_1(Q, Q′) = sup_{Lip(h)≤1} | E_{ξ∼Q}[h(ξ)] − E_{ξ∼Q′}[h(ξ)] |,
where the supremum is taken over all 1-Lipschitz functions. Note that the absolute value is optional in the dual form of the metric, and the space of Lipschitz functions can be restricted to those which satisfy h(0) = 0 without loss of generality. Finally, we remark that Fournier and Guillin (2015) prove under a light-tailed assumption that the 1-Wasserstein distance between the empirical distribution and its underlying distribution concentrates around zero with high probability. Theorem 2 in the following section extends this concentration result to the setting with auxiliary covariates.
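For intuition, the primal definition above can be evaluated for two discrete distributions by solving a small transportation linear program. The sketch below uses scipy and is intended only as an illustration of the metric, not as part of the proposed method.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_1(x, p, y, q):
    """Type-1 Wasserstein distance between sum_i p_i δ_{x_i} and sum_j q_j δ_{y_j}
    via the transport LP over joint plans with the given marginals."""
    n, m = len(p), len(q)
    cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2).ravel()  # c_{ij} = ||x_i - y_j||
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0      # rows of the plan sum to p_i
    for j in range(m):
        A_eq[n + j, j::m] = 1.0               # columns of the plan sum to q_j
    b_eq = np.concatenate([p, q])
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun

x = np.array([[0.0], [1.0], [2.0]]); p = np.array([0.5, 0.3, 0.2])
y = np.array([[0.5], [2.5]]);        q = np.array([0.6, 0.4])
print(wasserstein_1(x, p, y, q))
```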
Concentration of the weighted empirical measure
Given a local predictive method, let the corresponding weighted empirical measure be defined as
P̂^N_γ̄ := Σ_{i=1}^N w^i_N(γ̄) δ_{ξ^i},
where δ_ξ denotes the Dirac probability distribution which places point mass at ξ. In this section, we prove under mild assumptions that the weighted empirical measure P̂^N_γ̄ concentrates quickly to P_γ̄ with respect to the 1-Wasserstein metric. We introduce the following assumptions on the underlying joint probability distribution:
Assumption 2 (Conditional Subgaussianity). There exists a parameter σ > 0 such that
P( ‖ξ‖ − E[‖ξ‖ | γ = γ̄] > t | γ = γ̄ ) ≤ exp(−t²/(2σ²))   for all t > 0 and γ̄ ∈ Γ.
Assumption 3 (Lipschitz Continuity). There exists 0 < L < ∞ such that
d_1(P_γ̄, P_γ̄′) ≤ L ‖γ̄ − γ̄′‖   for all γ̄, γ̄′ ∈ Γ.
Assumption 4 (Smoothness of Auxiliary Covariates). The set Γ is compact, and there exists g > 0 such that
P( ‖γ − γ̄‖ ≤ ε ) ≥ g ε^{d_γ}   for all ε > 0 and γ̄ ∈ Γ.
With these assumptions, we are ready to prove the concentration result, which is proved using a novel technique that relies on the dual form of the Wasserstein metric and a discrete approximation of the space of 1-Lipschitz functions.
Theorem 2. Suppose the weight function and uncertainty sets satisfy Assumption 1 and the joint probability distribution of (γ, ξ) satisfies Assumptions 2-4. Then, for every γ̄ ∈ Γ,
P^∞( { d_1(P_γ̄, P̂^N_γ̄) > ε_N }  infinitely often ) = 0.
Proof. Without loss of generality, we assume throughout the proof that all norms ‖·‖ refer to the ℓ∞ norm.¹ Fix any γ̄ ∈ Γ. It follows from Assumption 1 that
{w^i_N(γ̄)} are not functions of ξ^1, ..., ξ^N;    (4)
Σ_{i=1}^N w^i_N(γ̄) = 1 and w^1_N(γ̄), ..., w^N_N(γ̄) ≥ 0, ∀N ∈ N;    (5)
ε_N = k_1 N^{−p}, ∀N ∈ N,    (6)
for constants k_1, p > 0. Moreover, Assumption 1 also implies that there exist constants k_2 > 0 and η > p(2 + d_ξ) such that
lim_{N→∞} (1/ε_N) Σ_{i=1}^N w^i_N(γ̄) ‖γ^i − γ̄‖ = 0,   P^∞-almost surely;    (7)
E_{P^N}[ exp( −θ / Σ_{i=1}^N (w^i_N(γ̄))² ) ] ≤ exp(−k_2 θ N^η),   ∀θ ∈ (0, 1), N ∈ N.    (8)
The proof of the above statements under Assumption 1 is found in Appendix EC.1. Now,
choose any fixed q ∈ (0, η/(2 + d_ξ) − p), and let b_N := N^q, B_N := { ζ ∈ R^{d_ξ} : ‖ζ‖ ≤ b_N }, and I_N := 1{ ξ^1, ..., ξ^N ∈ B_N }.
Finally, we define the following intermediary probability distributions:
Q^N_γ̄ := Σ_{i=1}^N w^i_N(γ̄) P_{γ^i},      Q^N_{γ̄|B_N} := Σ_{i=1}^N w^i_N(γ̄) P_{γ^i | B_N},   where P_{γ^i | B_N}(·) is shorthand for P(· | γ = γ^i, ξ ∈ B_N).
Applying the triangle inequality for the 1-Wasserstein metric and the union bound,
P ∞ {d 1 (Pγ,P N γ ) > N } i.o. ≤ P ∞ d 1 (Pγ,Q N γ ) > N 3 i.o. + P ∞ d 1 (Q N γ ,Q N γ|B N ) > N 3 i.o. + P ∞ d 1 (Q N γ|B N ,P N γ ) > N 3 i.o. .
We now proceed to bound each of the above terms.
¹ To see why this is without loss of generality, consider any other ℓ_p norm with p ≥ 1. In this case,
‖ξ − ξ′‖_p ≤ d_ξ^{1/p} ‖ξ − ξ′‖_∞.
By the definition of the 1-Wasserstein metric, this implies
d_1^p(P_γ̄, P̂^N_γ̄) ≤ d_ξ^{1/p} d_1^∞(P_γ̄, P̂^N_γ̄),
where d_1^p refers to the 1-Wasserstein metric with the ℓ_p norm. If ε_N satisfies Assumption 1, then ε_N / d_ξ^{1/p} also satisfies Assumption 1, so the result for all other choices of ℓ_p norms follows from the result with the ℓ∞ norm.
Term 1: d 1 (Pγ,Q N γ ): By the dual form of the 1-Wasserstein metric,
d 1 (Pγ,Q N γ ) = sup Lip(h)≤1 E[h(ξ)|γ =γ] − N i=1 w i N (γ)E[h(ξ)|γ = γ i ] ,
where the supremum is taken over all 1-Lipschitz functions. By (5) and Jensen's inequality, we can upper bound this by
d 1 (Pγ,Q N γ ) ≤ N i=1 w i N (γ) sup Lip(h)≤1 E[h(ξ)|γ =γ] − E[h(ξ)|γ = γ i ] = N i=1 w i N (γ)d 1 Pγ, P γ i ≤ L N i=1 w i N (γ) γ − γ i ,
where the final inequality follows from Assumption 3. Therefore, it follows from (7) that
P ∞ d 1 (Pγ,Q N γ ) > N 3 i.o. = 0. (9) Term 2: d 1 (Q N γ ,Q N γ|B N ): Consider any Lipschitz function Lip(h) ≤ 1 for which h(0) = 0, and let N ∈ N satisfy bN ≥ σ + supγ ∈Γ E[ ξ |γ =γ] (which is finite because of Assumption 4). Then, for all N ≥N , and allγ ∈ Γ, E[h(ξ)|γ =γ ] − E[h(ξ) | γ =γ , ξ ∈ B N ] = E[h(ξ)1{ξ / ∈ B N } | γ =γ ] + E[h(ξ)1{ξ ∈ B N } | γ =γ ] − E[h(ξ) | γ =γ , ξ ∈ B N ] = E[h(ξ)1{ξ / ∈ B N } | γ =γ ] + E[h(ξ) | γ =γ , ξ ∈ B N ]P (ξ ∈ B N | γ =γ ) − E[h(ξ) | γ =γ , ξ ∈ B N ] = E[h(ξ)1{ξ / ∈ B N } | γ =γ ] − E[h(ξ) | γ =γ , ξ ∈ B N ]P(ξ / ∈ B N | γ =γ ) ≤ E[ ξ 1{ξ / ∈ B N } | γ =γ ] + b N P(ξ / ∈ B N | γ =γ ) = ∞ b N P ( ξ > t | γ =γ ) dt + b N P ( ξ ≥ b N | γ =γ ) ≤ (σ + b N ) exp − 1 2σ 2 b N − sup γ ∈Γ E[ ξ |γ =γ ] 2 .
The first inequality follows because |h(ξ)| ≤ b N for all ξ ∈ B N and |h(ξ)| ≤ ξ otherwise. For the second inequality, we used the Gaussian tail inequality ∞ x e −t 2 /2 dt ≤ e −x 2 /2 for x ≥ 1 (Vershynin 2018) along with Assumption 2. Because this bound holds uniformly over all h, and allγ ∈ Γ, it follows that
d 1 (Q N γ ,Q N γ|B N ) = sup Lip(h)≤1,h(0)=0 N i=1 w i N (γ) E[h(ξ) | γ = γ i ] − E[h(ξ) | γ = γ i , ξ ∈ B N ] ≤ N i=1 w i N (γ) sup Lip(h)≤1,h(0)=0 E[h(ξ) | γ = γ i ] − E[h(ξ) | γ = γ i , ξ ∈ B N ] ≤ sup γ ∈Γ sup Lip(h)≤1,h(0)=0 |E[h(ξ) | γ =γ ] − E[h(ξ) | γ =γ , ξ ∈ B N ]| ≤ (σ + b N ) exp − 1 2σ 2 b N − sup γ ∈Γ E[ ξ |γ =γ ] 2 ,
for all N ≥N . It is easy to see that the right hand side above divided by N /3 goes to 0 as N goes to infinity, so
P ∞ d 1 (Q N γ ,Q N γ|B N ) > N 3 i.o. = 0. Term 3: d 1 (Q N γ|B N ,P N γ )
: By the law of total probability,
P N d 1 (Q N γ|B N ,P N γ ) > N 3 ≤ P N (I N = 0) + P N d 1 (Q N γ|B N ,P N γ ) > N 3 I N = 1 .
We now show that each of the above terms have finite summations. First,
∞ N =1 P N (I N = 0) ≤ ∞ N =1 N sup γ ∈Γ P(ξ / ∈ B N | γ =γ ) ≤ ∞ N =1 N sup γ ∈Γ exp − (b N − E [ ξ | γ =γ ]) 2 2σ 2 < ∞.
The first inequality follows from the union bound, the second inequality follows from Assumption 2, and the final inequality follows because supγ ∈Γ E[ ξ |γ =γ ] < ∞ and the definition of b N .
Second, for each l ∈ N, we define several quantities. Let P l be the partitioning of
B N = [−b N , b N ] d ξ into 2 ld ξ translations of (−b N 2 −l , b N 2 −l ] d ξ .
Let H l be the set of piecewise constant functions which are constant on each region of the partition P l , taking values on {kb N 2 −l : k ∈ {0, ±1, ±2, ±3, . . . , ±2 l }}. Note that |H l | = (2 l+1 + 1) 2 ld ξ . Then, we observe that for all Lipschitz functions Lip(h) ≤ 1 which satisfy h(0) = 0, there exists aĥ ∈ H l such that
sup ζ∈B N |h(ζ) −ĥ(ζ)| ≤ b N 2 −l+1 .
Indeed, within each region of the partition, h can vary by no more than b N 2 −l+1 . The possible function values forĥ are separated by b N 2 −l . Because h is bounded by ±b N , this implies the existence ofĥ ∈ H l such thatĥ has a value within b N 2 −l+1 of h everywhere within that region. The identical reasoning holds for all other regions of the partition.
Therefore, for every l ∈ N,
P N d 1 (Q N γ|B N ,P N γ ) > N 3 I N = 1 = P N sup Lip(h)≤1 h(0)=0 N i=1 w i N (γ) h(ξ i ) − E[h(ξ) | γ = γ i , ξ ∈ B N ] > N 3 I N = 1 ≤ P N sup h∈H l N i=1 w i N (γ) ĥ (ξ i ) − E ĥ (ξ) | γ = γ i , ξ ∈ B N > N 3 − 2 · b N 2 −l+1 I N = 1 ≤ |H l | sup h∈H l P N N i=1 w i N (γ) ĥ (ξ i ) − E ĥ (ξ) | γ = γ i , ξ ∈ B N > N 3 − b N 2 −l+2 I N = 1 ,
where the final inequality follows from the union bound. We choose l = 2 + log 2
6b N N , in which case N 3 − b N 2 −l+2 ≥ N 6 .
Furthermore, for all sufficiently large N ,
|H l | = (2 l+1 + 1) 2 ld ξ ≤ 96 b N N 24 d ξ (b N / N ) d ξ = exp 24 d ξ b N N d ξ log 96b N N .
Applying Hoeffding's inequality, and noting |ĥ(ξ i )| is bounded by b N when ξ i ∈ B N , we have the following for allĥ ∈ H l :
P N N i=1 w i N (γ) ĥ (ξ i ) − E[ĥ(ξ)|ξ ∈ B N , γ = γ i ] > N 6 I N = 1 = E P N N i=1 w i N (γ) ĥ (ξ i ) − E[ĥ(ξ)|ξ ∈ B N , γ = γ i ] > N 6 I N = 1, γ 1 , . . . , γ N I N = 1 ≤ E exp − 2 N 72 N i=1 (w i N (γ)) 2 b 2 N I N = 1 = E exp − 2 N 72 N i=1 (w i N (γ)) 2 b 2 N I N 1 P N (I N = 1) ≤ 2E exp − 2 N 72 N i=1 (w i N (γ)) 2 b 2 N ≤ 2 exp − k 2 N η 2 N 72b 2 N ,
for N sufficiently large that P(I N = 1) ≥ 1/2 and 2 N /72b 2 N < 1. Note that (8) was used for the final inequality. Combining these results, we have
P N d 1 (P N γ ,Q N γ|B N ) > N /3 I N = 1 ≤ 2 exp 24 d ξ b N N d ξ log 96b N N − k 2 2 N N η 72N b 2 N ,
for N sufficiently large. For some constants c 1 , c 2 > 0, and sufficiently large N , this is upper bounded by 2 exp −c 1 N η−2(p+q) + c 2 N d ξ (q+p) log N .
Since 0 < d ξ (p + q) < η − 2(p + q), we can conduct a limit comparison test with 1/N 2 to see that this term has a finite sum over N , which completes the proof.
Proof of main result
Theorem 2 provides the key ingredient for the proof of the main consistency result. We state one final assumption, which requires that the objective function of (1) is upper semicontinuous and bounded by linear functions of the uncertainty.
Assumption 5. For all π ∈ Π, c^π(ζ_1, ..., ζ_T) is upper semicontinuous in ζ, and |c(ζ, x)| ≤ C(1 + ‖ζ‖) for all ζ ∈ Ξ and some C > 0.
Under this assumption, the proof of Theorem 1 follows from Theorem 2 via arguments similar to those used by Esfahani and Kuhn (2018) and Bertsimas et al. (2018a). We state it fully in Appendix EC.2.
Tractable Approximations
In the previous sections, we presented the new framework of sample robust optimization with covariates and established its asymptotic optimality without any significant structural restrictions on the space of decision rules. In this section, we focus on tractable methods for approximately solving the robust optimization problems that result from this proposed framework. Specifically, we develop a formulation which uses auxiliary decision rules to approximate the cost function.
In combination with linear decision rules, this approach enables us to find high-quality decisions for real-world problems with more than ten stages in less than one minute, as we demonstrate in Section 6.
We focus in this section on dynamic optimization problems with cost functions of the form
c(ξ_1, ..., ξ_T, x_1, ..., x_T) = Σ_{t=1}^T [ f_t' x_t + g_t' ξ_t + min_{y_t ∈ R^{d^t_y}} { h_t' y_t :  Σ_{s=1}^t A_{t,s} x_s + Σ_{s=1}^t B_{t,s} ξ_s + C_t y_t ≤ d_t } ].    (10)
Such cost functions appear frequently in applications such as inventory management and supply chain networks. Unfortunately, it is well known that these cost functions are convex in the uncertainty ξ 1 , . . . , ξ T . Thus, even evaluating the worst-case cost over a convex uncertainty set is computationally demanding in general, as it requires the maximization of a convex function.
As an intermediary step towards developing an approximation scheme for (3) with the above cost function, we consider the following optimization problem:
ṽ_N(γ̄) := minimize_{π∈Π, y^i_t ∈ R_t ∀i,t}  Σ_{i=1}^N w^i_N(γ̄) sup_{ζ∈U^i_N} Σ_{t=1}^T [ f_t' π_t(ζ_1, ..., ζ_{t−1}) + g_t' ζ_t + h_t' y^i_t(ζ_1, ..., ζ_t) ]
subject to   Σ_{s=1}^t A_{t,s} π_s(ζ_1, ..., ζ_{s−1}) + Σ_{s=1}^t B_{t,s} ζ_s + C_t y^i_t(ζ_1, ..., ζ_t) ≤ d_t    ∀ζ ∈ U^i_N, i ∈ {1, ..., N}, t ∈ {1, ..., T},    (11)
where R t is the set of all functions y : Ξ 1 × · · · × Ξ t → R d t y . In this problem, we have introduced auxiliary decision rules which capture the minimization portion of (10) in each stage. We refer to (11) as a multi-policy approach, as it involves different auxiliary decision rules for each uncertainty set. The following theorem shows that (11) is equivalent to (3).
Theorem 3. For cost functions of the form (10), ṽ_N(γ̄) = v̂_N(γ̄).
Proof. See Appendix EC.3.
We observe that (11) involves optimizing over decision rules and thus is computationally challenging to solve in general. Nonetheless, we can obtain a tractable approximation of (11) by further restricting the space of primary and auxiliary decision rules. For instance, we can restrict all primary and auxiliary decision rules to be linear decision rules of the form
π_t(ζ_1, ..., ζ_{t−1}) = x_{t,0} + Σ_{s=1}^{t−1} X_{t,s} ζ_s,       y^i_t(ζ_1, ..., ζ_t) = y^i_{t,0} + Σ_{s=1}^t Y^i_{t,s} ζ_s.
One can alternatively elect to use a richer class of decision rules, such as lifted linear decision rules (Chen and Zhang 2009, Georghiou et al. 2015). In all cases, feasible approximations that restrict the space of decision rules in (11) provide an upper bound on ṽ_N(γ̄) = v̂_N(γ̄) and produce decision rules that are feasible for (11).
The key benefit of the multi-policy approximation scheme is that it offers many degrees of freedom in approximating the nonlinear cost function. Specifically, in (11), a separate auxiliary decision rule y i t captures the value of the cost function for each uncertainty set in each stage. We approximate each y i t with a linear decision rule, which only needs to be locally accurate, i.e., accurate for realizations in the corresponding uncertainty set. As a result, (11) with linear decision rules results in significantly tighter approximations of (3) compared to using a single linear decision rule, y t , for all uncertainty sets in each stage. Moreover, these additional degrees of freedom come with only a mild increase in computation cost, and we substantiate these claims via computational experiments in Section 6.2. In Appendix EC.4, we provide the reformulation of the multi-policy approximation scheme with linear decision rules into a deterministic optimization problem using standard techniques from robust optimization.
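The following toy sketch illustrates the multi-policy idea on a hypothetical two-stage problem with scalar uncertainty, a setting simple enough that the robust constraints can be enforced at the endpoints of each interval uncertainty set, which is exact for affine decision rules. It assumes the cvxpy modeling package, and it is not the dual reformulation of Appendix EC.4, which avoids vertex enumeration and scales to higher-dimensional uncertainty.

```python
import numpy as np
import cvxpy as cp

# Hypothetical two-stage problem: order x now at unit cost c; after demand zeta is
# revealed, buy a recourse quantity y >= zeta - x at unit cost b > c.  Each training
# sample xi_i gets its own affine recourse rule y_i(zeta) = y0_i + y1_i * zeta
# ("multi-policy"), and robust constraints over [xi_i - eps, xi_i + eps] are enforced
# at the interval's two endpoints, which is exact for affine functions of zeta.
rng = np.random.default_rng(0)
xi = rng.gamma(5.0, 4.0, size=20)          # historical demand sample paths (toy data)
w = np.full(20, 1.0 / 20)                  # weights (could come from k-NN, kernels, trees)
c, b, eps = 1.0, 3.0, 2.0

x = cp.Variable(nonneg=True)
y0, y1 = cp.Variable(20), cp.Variable(20)
t = cp.Variable(20)                        # epigraph of the worst case per uncertainty set
cons = []
for i in range(20):
    for z in (max(xi[i] - eps, 0.0), xi[i] + eps):   # vertices of the interval
        y_iz = y0[i] + y1[i] * z
        cons += [y_iz >= z - x, y_iz >= 0, t[i] >= c * x + b * y_iz]
prob = cp.Problem(cp.Minimize(w @ t), cons)
prob.solve()
print(f"robust first-stage order: {x.value:.2f}")
```

Using a single recourse rule (y0, y1) shared by all twenty uncertainty sets in the same code illustrates the coarser single-policy alternative discussed above.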
Computational Experiments
We perform computational experiments to assess the out-of-sample performance and computational tractability of the proposed methodologies across several applications. These examples are two-stage shipment planning (Section 6.1), dynamic inventory management (Section 6.2), and portfolio optimization (Section 6.3).
Table 1: Relationship of four methods.
We compare several methods using different machine learning models. These methods include the proposed sample robust optimization with covariates, sample average approximation (SAA), the predictions to prescriptions (PtP) approach of Bertsimas and Kallus (2014), and sample robust optimization without covariates (SRO). In Table 1, we show that each of the above methods is a particular instance of (3) from Section 3. The methods in the left column ignore covariates by assigning equal weights to each uncertainty set, and the methods in the right column incorporate covariates by choosing the weights based on predictive machine learning. The methods in the top row do not incorporate any robustness, and the methods in the bottom row incorporate robustness via a positive ε_N in the uncertainty sets. In addition, for the dynamic inventory management example, we also implement and compare to the residual tree algorithm described in Ban et al. (2018). In each experiment, the relevant methods are applied to the same training datasets, and their solutions are evaluated against a common testing dataset. Further details are provided in each of the following sections.
Shipment planning
We first consider a two-stage shipment planning problem in which a decision maker seeks to satisfy demand in several locations from several production facilities while minimizing production and transportation costs. Our problem setting closely follows Bertsimas and Kallus (2014), in which the decision maker has access to auxiliary covariates (promotions, social media, market trends), which may be predictive of future sales in each retail location. In the first stage, before the demand ξ_ℓ in each location ℓ ∈ L is observed, the decision maker produces x_f ≥ 0 units in each facility f ∈ F at a cost of p_1 per unit. Additionally, after observing demand, the decision maker has the opportunity to produce additional units y_f ≥ 0 in each facility at a cost of p_2 > p_1 per unit, and ships units from facilities to locations at a per-unit transportation cost of c_{fℓ} to satisfy demand. The fulfillment of each unit of demand generates r > 0 in revenue. Given the above notation and dynamics, the cost incurred by the decision maker is
c(ξ, x) = Σ_{f∈F} p_1 x_f − Σ_{ℓ∈L} r ξ_ℓ + min_{s ∈ R^{L×F}_+, y ∈ R^F_+} { Σ_{f∈F} p_2 y_f + Σ_{f∈F} Σ_{ℓ∈L} c_{fℓ} s_{fℓ} :  Σ_{f∈F} s_{fℓ} ≥ ξ_ℓ  ∀ℓ ∈ L,   Σ_{ℓ∈L} s_{fℓ} ≤ x_f + y_f  ∀f ∈ F }.
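As a sanity check of this cost definition, the sketch below evaluates c(ξ, x) for a given first-stage production plan and realized demand by solving the second-stage linear program with scipy. The network size and transportation costs in the usage example are made-up placeholders rather than the instance of Bertsimas and Kallus (2014).

```python
import numpy as np
from scipy.optimize import linprog

def shipment_cost(x, xi, C, p1=5.0, p2=100.0, r=90.0):
    """Two-stage cost c(xi, x): first-stage production x (|F|,), demand xi (|L|,),
    transportation costs C (|F|, |L|).  The second stage chooses extra production
    y_f and shipments s_{fl} by a linear program."""
    F, L = C.shape
    # decision vector: [y_1..y_F, s_11..s_1L, ..., s_F1..s_FL]
    c_vec = np.concatenate([np.full(F, p2), C.ravel()])
    # demand satisfaction: -sum_f s_{fl} <= -xi_l
    A_dem = np.zeros((L, F + F * L))
    for l in range(L):
        A_dem[l, F + l::L] = -1.0
    # capacity: sum_l s_{fl} - y_f <= x_f
    A_cap = np.zeros((F, F + F * L))
    for f in range(F):
        A_cap[f, f] = -1.0
        A_cap[f, F + f * L:F + (f + 1) * L] = 1.0
    res = linprog(c_vec,
                  A_ub=np.vstack([A_dem, A_cap]),
                  b_ub=np.concatenate([-xi, x]),
                  bounds=(0, None), method="highs")
    return p1 * x.sum() - r * xi.sum() + res.fun

C = np.array([[1.0, 2.0, 3.0], [2.5, 1.0, 2.0]])   # hypothetical 2 facilities, 3 locations
print(shipment_cost(x=np.array([40.0, 30.0]), xi=np.array([20.0, 25.0, 15.0]), C=C))
```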
Experiments. We perform computational experiments using the same parameters and data generation procedure as Bertsimas and Kallus (2014). Specifically, we consider an instance with |F| = 4, |L| = 12, p 1 = 5, p 2 = 100, and r = 90. The network topology, transportation costs, and the joint distribution of the covariates γ ∈ R 3 and demands ξ ∈ R 12 are the same as Bertsimas and Kallus (2014), with the exception that we generate the covariates as i.i.d. samples as opposed to an ARMA process (but with the same marginal distribution).
In our experiments, we compare sample robust optimization with covariates, sample average approximation, sample robust optimization, and predictions to prescriptions. For the robust approaches (bottom row of Table 1), we construct the uncertainty sets from Section 3 using the ℓ1 norm and Ξ = R^12_+, solve these problems using the multi-policy approximation with linear decision rules described in Section 5, and consider uncertainty sets with radius ε ∈ {100, 500}. For the approaches using covariates (right column of Table 1), we used k_N-nearest neighbors with parameter k_N = 2N/5. All solutions were evaluated on a test set of size 100, and the results were averaged over 100 independent training sets.
Results. In Figure 1, we present the average out-of-sample profits of the various methods. The results show that the best out-of-sample average profit is attained when using the proposed sample robust optimization with covariates. Interestingly, we observe no discernible difference between sample average approximation and sample robust optimization in Figure 1, which suggests that the value gained in this example comes from incorporating covariates. Compared to the approach of Bertsimas and Kallus (2014), sample robust optimization with covariates achieves a better out-of-sample average performance for each choice of ε. Table 2 shows that these differences are statistically significant. This example demonstrates that, in addition to enjoying asymptotic optimality guarantees, sample robust optimization with covariates provides meaningful value across various values of N.
Figure 1: Out-of-sample profit for the shipment planning example.
Table 2: p-values from the Wilcoxon signed rank test for comparisons with the predictive to prescriptive analytics method (PtP-kNN) and sample robust optimization with covariates (SRO-kNN); after adjusting for multiple hypothesis testing, all results are significant at the α = 0.05 significance level.
Dynamic inventory management
We next consider a dynamic inventory control problem over the first T = 12 weeks of a new product. In each week, a retailer observes demand for the product and can replenish inventory by placing procurement orders with its suppliers.
Problem Description. In each stage t ∈ {1, ..., T}, the retailer procures inventory from multiple suppliers to satisfy demand for a single product. The demands for the product across stages are denoted by ξ_1, ..., ξ_T ≥ 0. In each stage t, and before the demand ξ_t is observed, the retailer places procurement orders at various suppliers indexed by J = {1, ..., |J|}. Each supplier j ∈ J has a per-unit order cost of c_{tj} ≥ 0 and a lead time of ℓ_j stages. At the end of each stage, the firm incurs a per-unit holding cost of h_t and a per-unit backorder cost of b_t. Inventory is fully backlogged and the firm starts with zero initial inventory. The cost incurred by the firm over the time horizon is captured by
c(ξ_1, ..., ξ_T, x_1, ..., x_T) = Σ_{t=1}^T [ Σ_{j∈J} c_{tj} x_{tj} + min_{y_t ∈ R} { y_t :  y_t ≥ h_t ( Σ_{j∈J} Σ_{s=1}^{t−ℓ_j} x_{sj} − Σ_{s=1}^t ξ_s ),   y_t ≥ −b_t ( Σ_{j∈J} Σ_{s=1}^{t−ℓ_j} x_{sj} − Σ_{s=1}^t ξ_s ) } ].
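Since the inner minimization above has the closed-form solution y_t = max{ h_t I_t, −b_t I_t } for the net inventory position I_t, the cost of a fixed ordering plan can be evaluated without an optimization solver. The sketch below assumes this interpretation; the parameter values in the usage example are placeholders.

```python
import numpy as np

def inventory_cost(orders, demand, lead, c, h, b):
    """Cost of an ordering plan: orders[t, j] placed with supplier j in week t (arriving
    after lead[j] weeks), demand[t] realized in week t.  The per-stage penalty is
    max(h_t * I_t, -b_t * I_t) for the net inventory position I_t."""
    T, J = orders.shape
    total = float((orders * c).sum())
    for t in range(T):
        arrived = sum(orders[: t + 1 - lead[j], j].sum() for j in range(J) if t + 1 - lead[j] > 0)
        net = arrived - demand[: t + 1].sum()
        total += max(h[t] * net, -b[t] * net)
    return total

T = 12
orders = np.full((T, 2), 10.0)                 # a naive constant ordering plan (illustration)
demand = np.full(T, 18.0)
print(inventory_cost(orders, demand, lead=[0, 1],
                     c=np.array([[1.0, 0.5]] * T), h=np.full(T, 0.25), b=np.full(T, 11.0)))
```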
Experiments. The parameters of the procurement problem were chosen based on Ban et al. (2018). Specifically, we consider the case of two suppliers where c_{t1} = 1.0, c_{t2} = 0.5, h_t = 0.25, and b_t = 11 for each stage. The first supplier has no lead time and the second supplier has a lead time of one stage. We generate training and test data from the same distribution as the shipment planning problem in Section 6.1; in this case, the demands produced by this process are interpreted as the demands over the T = 12 stages. We perform computational experiments comparing the proposed sample robust optimization with covariates and the residual tree algorithm proposed by Ban et al. (2018). In particular, we compare sample robust optimization with covariates with the multi-policy approximation as well as without the multi-policy approximation (in which we use a single auxiliary linear decision rule for y_t for all uncertainty sets in each stage). The uncertainty sets from Section 3 are defined with the ℓ2 norm and Ξ = R^12_+. The out-of-sample cost resulting from the decision rules was averaged over 100 training sets of size N = 40 and 100 testing points, and sample robust optimization with covariates used k-nearest neighbors with varying choices of k and radius ε ≥ 0 of the uncertainty sets.
Results. In Table 3, we show the average out-of-sample cost resulting from sample robust optimization with covariates using linear decision rules, with and without the multi-policy approximation from Section 5. In both settings, we used k-nearest neighbors as the machine learning method and evaluated the out-of-sample performance by applying the linear decision rules for the ordering quantities. The results of these computational experiments in Table 3 demonstrate that significant improvements in average out-of-sample performance are found when combining the multi-policy approximation with covariates via k-nearest neighbors. We show in Table 4 that these results are statistically significant. For comparison, we also implemented the residual tree algorithm from Ban et al. (2018). When using their algorithm with a binning of B = 2 in each stage, their approach resulted in an average out-of-sample cost of 27142. We were unable to run their algorithm with a binning of B = 3 in each stage.
Table 3: Average out-of-sample cost for the dynamic procurement problem using sample robust optimization with N = 40. For each uncertainty set radius ε and parameter k, the average was taken over 100 training sets and 100 test points. Optimal is indicated in bold. The residual tree algorithm with a binning of B = 2 in each stage gave an average out-of-sample cost of 27142.
Table 4: Statistical significance for the dynamic procurement problem. The entries are p-values of the Wilcoxon signed rank test for comparison with sample robust optimization using linear decision rules with multi-policy, k = 20, and ε = 400; columns correspond to ε ∈ {0, 100, 200, 300, 400, 500, 600, 700}.
Linear decision rules, no covariates:          *, *, *, *, *, *, *, *
Linear decision rules, k-NN (k = 26):          *, *, *, *, *, *, *, *
Linear decision rules, k-NN (k = 20):          *, *, *, *, *, *, *, *
Linear decision rules, k-NN (k = 13):          *, *, *, *, *, *, *, *
Multi-policy, no covariates:                   *, *, *, *, *, *, *, *
Multi-policy, k-NN (k = 26):                   *, *, *, *, 1.4 × 10^{−5}, *, *, *
Multi-policy, k-NN (k = 20):                   *, *, *, *, - (baseline), *, *, *
Multi-policy, k-NN (k = 13):                   *, *, *, *, 5.8 × 10^{−3}, 1 × 10^{−3}, *, *
An asterisk denotes that the p-value was less than 10^{−8}. After adjusting for multiple hypothesis testing, each result is significant at the α = 0.05 significance level if its p-value is less than 0.05/63 ≈ 7.9 × 10^{−4}.
Portfolio optimization
Finally, we consider a single-stage portfolio optimization problem in which we wish to find an allocation of a fixed budget to n assets. Our goal is to simultaneously maximize the expected return while minimizing the conditional value at risk (cVaR) of the portfolio. Before selecting our portfolio, we observe auxiliary covariates which include general market indicators, such as index performance, as well as macroeconomic numbers released by the US Bureau of Labor Statistics.
Problem Description. We denote the portfolio allocation among the assets by x ∈ X := { x ∈ R^n_+ : Σ_{j=1}^n x_j = 1 }, and the returns of the assets by the random variables ξ ∈ R^n. The conditional value at risk at the α ∈ (0, 1) level measures the expected loss of the portfolio, conditional on losses being above the 1 − α quantile of the loss distribution. Rockafellar and Uryasev (2000) showed that the cVaR of a portfolio can be computed as the optimal objective value of a convex minimization problem. Therefore, our portfolio optimization problem can be expressed as a convex optimization problem with an auxiliary decision variable β ∈ R. Thus, given an observation γ̄ of the auxiliary covariates, our goal is to solve
minimize_{x∈X, β∈R}  E[ β + (1/α) max(0, −x'ξ − β) − λ x'ξ | γ = γ̄ ],    (12)
where λ ∈ R_+ is a trade-off parameter that balances the risk and return objectives.
Experiments. For the robust approaches (bottom row of Table 1), we construct the uncertainty sets from Section 3 using the ℓ1 norm. For each training sample size, we compute the out-of-sample objective on a test set of size 1000, and we average the results over 100 instances of training data.
In order to select ε_N and other tuning parameters associated with the machine learning weight functions, we first split the data into a training set and a validation set. We then train the weight functions using the training set, compute decisions for each of the instances in the validation set, and compute the out-of-sample cost on the validation set. We repeat this for a variety of parameter values and select the combination that achieves the best cost on the validation set.
Figure 2: Out-of-sample objective for the portfolio optimization example.
Following a similar reformulation approach as Esfahani and Kuhn (2018), we solve the robust approaches exactly by observing that
minimize_{x∈X, β∈R}  Σ_{i=1}^N w^i_N(γ̄) sup_{ζ∈U^i_N} [ β + (1/α) max{0, −x'ζ − β} − λ x'ζ ]
= minimize_{x∈X, β∈R}  Σ_{i=1}^N w^i_N(γ̄) sup_{ζ∈U^i_N} max{ β − λ x'ζ,  (1 − 1/α)β − (1/α + λ) x'ζ }
= minimize_{x∈X, β∈R}  Σ_{i=1}^N w^i_N(γ̄) max{ sup_{ζ∈U^i_N} { β − λ x'ζ },  sup_{ζ∈U^i_N} { (1 − 1/α)β − (1/α + λ) x'ζ } }
= minimize_{x∈X, β∈R, v∈R^N}  Σ_{i=1}^N w^i_N(γ̄) v_i
  subject to   v_i ≥ β − λ x'ζ                        ∀ζ ∈ U^i_N, i ∈ {1, ..., N}
               v_i ≥ (1 − 1/α)β − (1/α + λ) x'ζ       ∀ζ ∈ U^i_N, i ∈ {1, ..., N}.
The final expression can be reformulated as a deterministic optimization problem by reformulating the robust constraints.
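A minimal sketch of this reformulation is given below using cvxpy, under the assumption that Ξ = R^n so that sup over the ℓ1 ball has the closed form sup_{‖ζ−ξ^i‖_1 ≤ ε} a'ζ = a'ξ^i + ε‖a‖_∞; the return data, α, and λ in the usage example are illustrative placeholders rather than the instance used in the experiments.

```python
import numpy as np
import cvxpy as cp

def robust_cvar_portfolio(returns, weights, eps, alpha=0.1, lam=1.0):
    """Sample robust cVaR/return tradeoff with l1-norm uncertainty sets around each
    return sample.  With Xi = R^n, each robust constraint reduces to a single
    deterministic constraint via the dual-norm (l-infinity) term."""
    N, n = returns.shape
    x = cp.Variable(n, nonneg=True)
    beta = cp.Variable()
    v = cp.Variable(N)
    cons = [cp.sum(x) == 1]
    for i in range(N):
        cons += [
            v[i] >= beta - lam * returns[i] @ x + eps * lam * cp.norm(x, "inf"),
            v[i] >= (1 - 1 / alpha) * beta
                    - (1 / alpha + lam) * returns[i] @ x
                    + eps * (1 / alpha + lam) * cp.norm(x, "inf"),
        ]
    prob = cp.Problem(cp.Minimize(weights @ v), cons)
    prob.solve()
    return x.value

rng = np.random.default_rng(0)
R = rng.normal(0.01, 0.05, size=(50, 5))            # hypothetical return samples
print(robust_cvar_portfolio(R, np.full(50, 0.02), eps=0.01).round(3))
```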
Results. In Figure 2, we show the average out-of-sample objective values of the various methods. Consistent with the computational results of Esfahani and Kuhn (2018) and Bertsimas and Van Parys (2017), the results underscore the importance of robustness in preventing overfitting and achieving good out-of-sample performance in the small-data regime. Indeed, we observe that the sample average approximation, which ignores the auxiliary data, outperforms PtP-kNN and PtP-CART when the amount of training data is limited. We believe this is because the latter methods both discard training examples, so they overfit when the training data is limited, leading to poor out-of-sample performance. In contrast, our methods (SRO-kNN and SRO-CART) typically achieve the strongest out-of-sample performance, even when the amount of training data is limited.
Conclusion
In this paper, we introduced sample robust optimization with covariates, a new framework for solving dynamic optimization problems with side information. Through three computational examples, we demonstrated that our method achieves significantly better out-of-sample performance than scenario-based alternatives. We complemented these empirical observations with theoretical analysis, showing that our nonparametric method is asymptotically optimal via a new measure concentration result for local learning methods. Finally, we showed that our approach inherits the tractability of robust optimization, scaling to problems with many stages via the multi-policy approximation scheme.
Xin Chen and Yuhan Zhang. Uncertain linear programs: extended affinely adjustable robust counterparts. Operations Research, 57(6), 2009.
N i=1 w i N (γ) = 1 and w 1 N (γ), . . . , w N N (γ) ≥ 0, ∀N ∈ N.(4)
Moreover, there exists constants k 2 > 0 and η > p(2 + d ξ ) such that
lim N →∞ 1 N N i=1 w i N (γ) γ i −γ = 0, P ∞ -almost surely;(7)E P N exp −θ N i=1 w i N (γ) 2 ≤ exp(−k 2 θN η ), ∀θ ∈ (0, 1), N ∈ N.(8)
Proof. We observe that (4) and (5) follow directly from the definitions of the weight functions.
The proofs of (7) and (8) are split into two parts, one for the k-nearest neighbor weights and one for kernel regression weights.
k-Nearest Neighbors: For the proof of (7), we note
N i=1 w i N (γ) γ i −γ ≤ γ (k N ) (γ) −γ ,
where γ (k N ) (γ) denotes the k N th nearest neighbor ofγ out of γ 1 , . . . , γ N . Therefore, for any λ > 0,
P N N i=1 w i N (γ) γ i −γ > λ N ≤ P N γ (k N ) (γ) −γ > λ N ≤ P N i : γ i −γ ≤ λ N ≤ k N − 1 .
By Assumption 4, this probability is upper bounded by P(β ≤ k − 1), where β ∼ Binom(N, g(λ N ) dγ ). By Hoeffding's inequality,
P^N( Σ_{i=1}^N w^i_N(γ̄) ‖γ^i − γ̄‖ > λ ε_N ) ≤ exp( −2 ( N g(λ k_1 / N^p)^{d_γ} − k_N + 1 )² / N ),
for k_N ≤ N g(λ k_1 / N^p)^{d_γ} + 1. We note that this condition on k_N is satisfied for N sufficiently large because δ + p d_γ < 1 by Assumption 1. Because the right-hand side in the above inequality has a finite sum over N, (7) follows by the Borel-Cantelli lemma.
For the proof of (8), it follows from Assumption 1 that
N i=1 w i N (γ) 2 ≤ k 3 N 1−2δ
deterministically (for all sufficiently large N such that k_3 N^δ ≤ N − 1), and 2δ − 1 > p(2 + d_ξ) by Assumption 1.
Thus, (8) follows with η = 2δ − 1.
Kernel regression: Assumption 1 stipulates that the kernel function K(·) is Gaussian, triangular, or Epanechnikov, which are defined in Section 3. It is easy to verify that these kernel functions satisfy the following:
1. K is nonnegative, finite valued, and monotonically decreasing (for nonnegative inputs).
2. u α K(u) → 0 as u → ∞ for any α ∈ R.
3. ∃u * > 0 such that K(u * ) > 0.
For the proof of (7), define q > 0 such that p < q < δ. Letting D be the diameter of Γ and g N (γ) =
N i=1 K( γ i −γ /h N ), we have N i=1 w i N (γ) γ i −γ = N i=1 w i N (γ)1{ γ i −γ ≤ N −q } γ i −γ + 1 g N (γ) N i=1 K γ i −γ h N 1{ γ i −γ > N −q } γ i −γ ≤ N −q + N DK(N −q /h N ) g N (γ) ,
where the inequality follows from the monotonicity of K. By construction, N −q / N → 0, so we just need to handle the second term. We note, for any λ > 0,
P N N DK(N −q /h N ) g N (γ) > λ N ≤ P N N i=1 Z N i K(u * ) < N DK(N −q /h N ) λ N , where Z N i = 1{ γ i −γ ≤ u * h N }.
To achieve this inequality, we lower bounded each term in g N (γ) by K(u * ) or 0, because of the monotonicity of K. By Hoeffding's inequality, for some constants k 5 , k 6 > 0 that do not depend on N . We used Assumption 4 for the second inequality. Because δ > q, the second kernel property implies N 1/2+p K(k 4 N −q+δ ) goes to 0 as N goes to infinity, so that term is irrelevant. Because 1/2 − δd γ > 0 by Assumption 1, the right hand side of the inequality has a finite sum over N , and thus (7) follows from the Borel Cantelli lemma.
P N N i=1 Z N i K(u * ) < N DK(N −q /h N ) λ N ≤ exp − 2 N EZ N i − N D λ N K(u * ) K(N −q /h N ) 2 + N ≤ exp − 2 N g(u * h N ) dγ − N D λ N K(u * ) K(N −q /h N ) 2 + N = exp − k 5 N 1/2−δdγ − k 6 N 1/2+p K(k 4 N −q+δ ) 2 + ,
For the proof of (8), define
v N = K( γ 1 −γ /h N ) . . . K( γ N −γ /h N ) .
We note that
N i=1 w i N (γ) 2 = v N 2 2 v N 2 1 ≤ v N ∞ v N 1 ≤ K(0) K(u * ) N i=1 Z N i ,
where Z N i is defined above. The first inequality follows from Holder's inequality, and the second inequality follows from the monotonicity of K. Next, we defineZ N i to be a Bernoulli random variable with parameter g(u * h N ) dγ for each i. For any θ ∈ (0, 1),
E P N exp −θ N i=1 w i N (γ) 2 ≤ E P N exp −θK(u * ) N i=1Z N i K(0) = 1 − g(u * h N ) dγ + g(u * h N ) dγ exp(−θK(u * )/K(0)) N ≤ exp −N g(u * h N ) dγ (1 − exp(−θK(u * )/K(0))) ≤ exp −N g(u * h N ) dγ θK(u * ) 2K(0) = exp − θK(u * )g(k 4 u * ) dγ N 1−δdγ 2K(0) .
The first inequality follows because g(u * h N ) dγ is an upper bound on P( γ i −γ ≤ u * h N ) by Assumption 4. The first equality follows from the definition of the moment generating function for a binomial random variable. The next line follows from the inequality e x ≥ 1 + x and the following from the inequality 1 − e −x ≥ x/2 for 0 ≤ x ≤ 1. Because 1 − δd γ > p(2 + d ξ ), this completes the proof of (8) with η = 1 − δd γ and k 2 = K(u * )g(k 4 u * ) dγ /2K(0).
EC.2. Proof of Theorem 1
In this section, we present our proof of Theorem 1. First, we must introduce some necessary terminology. To connect Theorem 2 to sample robust optimization, we consider the ∞-Wasserstein metric, which is given by:
d_∞(Q, Q′) ≡ inf { Π-ess sup_{Ξ×Ξ} ‖ξ − ξ′‖ : Π is a joint distribution of ξ and ξ′ with marginals Q and Q′, respectively },
where the essential supremum of the joint distribution is defined as
Π-ess sup_{Ξ×Ξ} ‖ξ − ξ′‖ = inf { M : Π( ‖ξ − ξ′‖ > M ) = 0 }.
We make use of the following result from Bertsimas et al. (2018a):
Lemma EC.1. For any measurable f : Ξ → R,
Σ_{i=1}^N w^i_N(γ̄) sup_{ζ∈U^i_N} f(ζ) = sup_{Q∈P(Ξ): d_∞(P̂^N_γ̄, Q) ≤ ε_N} E_{ξ∼Q}[f(ξ)].
The proof of Lemma EC.1 follows identical reasoning as in Bertsimas et al. (2018a) and is thus omitted.
Next, we state a result from Bertsimas et al. (2018a) (their Theorem EC.1), which bounds the difference in worst case objective values between 1-Wasserstein and ∞-Wasserstein distributionally robust optimization problems. We note that Bertsimas et al. (2018a) proved the following result for the case that Q is the unweighted empirical measure, but their proof carries through for the case here in which Q is a weighted empirical measure.
Lemma EC.2. Let Z ⊆ R^d, let f : Z → R be measurable, and let ζ^1, ..., ζ^N ∈ Z. Suppose that Q = Σ_{i=1}^N w_i δ_{ζ^i} for given weights w_1, ..., w_N ≥ 0 that sum to one. If θ_2 ≥ 2θ_1 ≥ 0, then
sup_{Q′∈P(Z): d_1(Q′,Q)≤θ_1} E_{ξ∼Q′}[f(ξ)]  ≤  sup_{Q′∈P(Z): d_∞(Q′,Q)≤θ_2} E_{ξ∼Q′}[f(ξ)] + (4θ_1/θ_2) sup_{ζ∈Z} |f(ζ)|.
We now restate and prove the main result, which combines the new measure concentration result from this paper with similar proof techniques as Bertsimas et al. (2018a) and Esfahani and Kuhn (2018).
Theorem 1. Suppose the weight function and uncertainty sets satisfy Assumption 1, the joint probability distribution of (γ, ξ) satisfies Assumptions 2-4 from Section 4.3, and the cost function satisfies Assumption 5 from Section 4.4. Then, for every γ̄ ∈ Γ,
lim_{N→∞} v̂_N(γ̄) = v*(γ̄),   P^∞-almost surely.
Proof. We break the limit into upper and lower parts. The proof of the lower part follows from an argument similar to that used by Bertsimas et al. (2018a). The proof of the upper part follows from the argument used by Esfahani and Kuhn (2018). To begin, we define
D N := {ζ : ζ ≤ log N },
and let Pγ |D N (·) be shorthand for P(· | γ =γ, ξ ∈ D N ). Then, applying Assumption 2,
P N ∪ N i=1 U i N ⊆ D N ≤ P max i≤N ξ i + N > log N ≤ N P( ξ > log N − N ) = N E [P( ξ − E[ ξ | γ] > log N − N − E[ ξ | γ] | γ)] ≤ N E P ξ − E[ ξ | γ] > log N − N − sup γ ∈Γ E[ ξ | γ = γ ] | γ ≤ N E 2 exp − (log N − N − sup γ ∈Γ E[ ξ | γ = γ ]) 2 2σ 2 = 2 exp log N − (log N − N − sup γ ∈Γ E[ ξ | γ = γ ]) 2 2σ 2 , (EC.2)
which has a finite sum over N ∈ N. Therefore, by the Borel-Cantelli lemma, there exists N 0 ∈ N,
P ∞ -almost surely, such that ∪ N i=1 U i N ⊆ D N ∀N ≥ N 0 .
We now choose any r > 0 such that N N −r satisfies Assumption 1, and define N 1 := max{N 0 , 2 1 r }.
Then, the following holds for all N ≥ N 1 and π ∈ Π:
sup Q∈P(D N ∩Ξ): d 1( Q,P N γ )≤ N N r E ξ∼Q [c π (ξ 1 , . . . , ξ T )] ≤ sup Q∈P(D N ∩Ξ): d∞(Q,P N γ )≤ N E ξ∼Q [c π (ξ 1 , . . . , ξ T )] + 4 N r sup ζ∈D N ∩Ξ |c π (ζ 1 , . . . , ζ T )| = N i=1 w N i (γ) sup ζ∈U i N c π (ζ 1 , . . . , ζ T ) + 4 N r sup ζ∈D N ∩Ξ |c π (ζ 1 , . . . , ζ T )| ≤ N i=1 w N i (γ) sup ζ∈U i N c π (ζ 1 , . . . , ζ T ) + 4C N r (1 + log N ). (EC.3)
Indeed, the first supremum satisfies the conditions of Lemma EC.2 since N ≥ N 0 and N ≥ 2 1 r , and the equality follows from Lemma EC.1 since N ≥ N 0 . The final inequality follows from Assumption 5 and the construction of D N . We observe that the second term on (EC.3) converges to zero as N → ∞. Next, we observe that We handle the first term with the Cauchy-Schwartz inequality,
E[c π (ξ 1 , . . . , ξ T ) | γ =γ] E ξ∼Pγ [c π (ξ 1 , . . . , ξ T )] = E ξ∼Pγ [c π (ξ 1 , . . . , ξ T )1{ξ / ∈ D N }] + E ξ∼Pγ [c π (ξ 1 , . . . , ξ T )1{ξ / ∈ D N }].E ξ∼Pγ [c π (ξ 1 , . . . , ξ T )1{ξ / ∈ D N }] ≤ E ξ∼Pγ [c π (ξ 1 , . . . , ξ T ) 2 ]Pγ(ξ / ∈ D N ).
By Assumptions 2 and 5, the above bound is finite and converges to zero as N → ∞ uniformly over π ∈ Π. We handle the second termby the new concentration measure from this paper. Specifically, it follows from Theorem 2 that there exists an N 2 ≥ N 1 , P ∞ -almost surely, such that
d 1 (Pγ,P N γ ) ≤ N N r ∀N ≥ N 2 .
Therefore, for all N ≥ N 2 and decision rules π ∈ Π:
E ξ∼Pγ [c π (ξ 1 , . . . , ξ T )1{ξ ∈ D N }] = E ξ∼Pγ c π (ξ 1 , . . . , ξ T ) − inf ζ∈D N ∩Ξ c π (ζ 1 , . . . , ζ T ) 1{ξ ∈ D N } + Pγ(ξ ∈ D N ) inf ζ∈D N ∩Ξ c π (ζ 1 , . . . , ζ T ) α N ≤ sup Q∈P(Ξ): d 1( Q,P N γ )≤ N N r E ξ∼Q c π (ξ 1 , . . . , ξ T ) − inf ζ∈D N ∩Ξ c π (ζ 1 , . . . , ζ T ) 1{ξ ∈ D N } + α N = sup Q∈P(Ξ∩D N ): d 1( Q,P N γ )≤ N N r E ξ∼Q c π (ξ 1 , . . . , ξ T ) − inf ζ∈D N ∩Ξ c π (ζ 1 , . . . , ζ T ) + α N = sup Q∈P(Ξ∩D N ): d 1( Q,P N γ )≤ N N r E ξ∼Q [c π (ξ 1 , . . . , ξ T )] − Pγ(ξ / ∈ D N ) inf ζ∈D N ∩Ξ c π (ζ 1 , . . . , ζ T ).
Indeed, the inequality follows because N ≥ N 2 . It follows from Assumption 5 and (EC.2) that the second term in the final equality converges to zero as N → ∞ uniformly over π ∈ Π. Combining the above, we conclude that lim inf N →∞v
N (γ) = lim inf N →∞ inf π∈Π N i=1 w N i (γ) sup ζ∈U i N c π (ζ 1 , . . . , ζ T ) ≥ inf π∈Π E[c π (ξ 1 , . . . , ξ T ) | γ =γ] = v * (γ),
where the inequality holds P ∞ -almost surely. This completes the proof of (EC.1).
Upper bound. We now prove that
lim sup_{N→∞} v̂_N(γ̄) ≤ v*(γ̄),   P^∞-almost surely.    (EC.4)
Indeed, for any arbitrary δ > 0, let x_δ ∈ X be a δ-optimal solution for (1). By Esfahani and Kuhn (2018, Lemma A.1) and Assumption 5, there exists a non-increasing sequence of functions f_j(ζ_1, ..., ζ_T), j ∈ N, such that lim_{j→∞} f_j(ζ_1, ..., ζ_T) = c^{x_δ}(ζ_1, ..., ζ_T) for all ζ ∈ Ξ, and each f_j is L_j-Lipschitz continuous. Furthermore, for each N ∈ N, choose any probability distribution
Q N ∈ P(Ξ) such that d 1 (Q N ,P N γ ) ≤ N and sup Q∈P(Ξ): d 1 (Q,P N γ )≤ N E ξ∼Q [c x δ (ξ 1 , . . . , ξ T )] ≤ E ξ∼Q N [c x δ (ξ 1 , . . . , ξ T )] + δ.
For any j ∈ N,
lim sup N →∞v N (γ) ≤ lim sup N →∞ sup Q∈P(Ξ): d∞(Q,P N γ )≤ N E ξ∼Q [c x δ (ξ 1 , . . . , ξ T )] ≤ lim sup N →∞ sup Q∈P(Ξ): d 1 (Q,P N γ )≤ N E ξ∼Q [c x δ (ξ 1 , . . . , ξ T )] ≤ lim sup N →∞ E ξ∼Q N [c x δ (ξ 1 , . . . , ξ T )] + δ ≤ lim sup N →∞ E ξ∼Q N [f j (ξ 1 , . . . , ξ T )] + δ ≤ lim sup N →∞ E ξ∼Pγ [f j (ξ 1 , . . . , ξ T )] + L j d 1 (Pγ,Q N ) + δ ≤ lim sup N →∞ E ξ∼Pγ [f j (ξ 1 , . . . , ξ T )] + L j (d 1 (Pγ,P N γ ) + d 1 (Q N ,P N γ )) + δ ≤ lim sup N →∞ E ξ∼Pγ [f j (ξ 1 , . . . , ξ T )] + L j (d 1 (Pγ,P N γ ) + N ) + δ = E Pγ [f j (ξ 1 , . . . , ξ T )] + δ, P ∞ -almost surely,
where we have used the fact d 1 (P, Q) ≤ d ∞ (P, Q) for the second inequality, the dual form of the 1-Wasserstein metric for the fifth inequality (because f j is L j -Lipschitz), and Theorem 2 for the equality. Taking the limit as j → ∞, and applying the monotone convergence theorem (which is allowed because E ξ∼Pγ |f 1 (ξ 1 , . . . , ξ T )| ≤ L 1 E ξ∼Pγ ξ + |f 1 (0)| < ∞ by Assumption 4), gives lim sup N →∞v
N (γ) ≤ E ξ∼Pγ [c x δ (ξ 1 , . . . , ξ T )] + δ ≤ v * (γ) + 2δ, P ∞ -almost surely.
Since δ > 0 was chosen arbitrarily, the proof of (EC.4) is complete.
EC.3. Proof of Theorem 3
In this section, we present our proof of Theorem 3 from Section 5. We restate the theorem here for convenience.
Theorem 3. For cost functions of the form (10),ṽ N (γ) =v N (γ).
Proof. We first show thatṽ N (γ) ≥v N (γ). Indeed, consider any primary decision ruleπ and auxiliary decision rulesȳ i 1 , . . . ,ȳ i T for each i ∈ {1, . . . , N } which are optimal for (11). 2 Then, it follows from feasibility to (11) that w i N (γ)c π (ζ 1 , . . . , ζ T )
≤ N i=1 w i N (γ)cπ(ζ 1 , . . . , ζ T ) ≤ N i=1 w i N (γ) sup ζ∈U i N T t=1
f tπt (ζ 1 , . . . , ζ t−1 ) + g t ζ t + h tȳ i t (ζ 1 , . . . , ζ t ) =ṽ N (γ).
The other side of the inequality follows from similar reasoning. Indeed, letπ be an optimal solution to (3). For each i ∈ {1, . . . , N } and t ∈ {1, . . . , T }, defineȳ i t ∈ R t as any decision rule that satisfies y i t (ζ 1 , . . . , ζ t ) ∈ arg min y t ∈R Combining the above inequalities, the proof is complete.
EC.4. Tractable Reformulation of the Multi-Policy Approximation
For completeness, we now show how to reformulate the multi-policy approximation scheme with linear decision rules from Section 5 into a deterministic optimization problem using standard techniques from robust optimization.
We begin by transforming (11) with linear decision rules into a more compact representation.
First, we combine the primary linear decision rules across stages as X T −2,1 X T −2,2 X T −2,3 · · · 0 0 0 X T −1,1 X T −1,2 X T −1,3 · · · X T −1,T −2 0 0 X T,1 X T,2 X T,3 · · · X T,T −2 X T,T −1 0 We note that the zero entries in the above matrix are necessary to ensure that the linear decision rules are non-anticipative. Similarly, for each i ∈ {1, . . . , N }, we represent the auxiliary linear decision rules as
x 0 = x 1,0 . . . x T,0 ∈ R dx , X = 0 0 0 · · · 0 0 0 X 2,1 0 0 · · · 0 0 0 X 3,1 X 3,2 0 · · · 0 0 0 . . .. ∈ R dx×d ξ .y i 0 = y i 1,0 . . . y i T,0 ∈ R dy , Y i = Y i 1,1 0 · · · 0 0 Y i 2,1 Y i 2,2 · · · 0 0 . . . . . . . . . . . . . . . Y i T −1,1 Y i T −1,2 · · · Y i T −1,T −1 0 Y i T,1 Y i T,2 · · · Y i t,t−1 Y i T,T ∈ R dy ×d ξ .
We now combine the problem parameters. Let d = (d 1 , . . . , d T ) ∈ R m and
f = f 1 . . . f T ∈ R dx , A = A 1,1 0 · · · 0 0 A 2,1 A 2,2 · · · 0 0 . . . . . . . . . . . . . . . A T −1,1 A T −1,2 · · · A T −1,T −1 0 A T,1 A T,2 · · · A t,t−1 A T,T ∈ R m×dx , g = g 1 . . . g T ∈ R d ξ , B = B 1,1 0 · · · 0 0 B 2,1 B 2,2 · · · 0 0 . . . . . . . . . . . . . . . B T −1,1 B T −1,2 · · · B T −1,T −1 0 B T,1 B T,2 · · · B t,t−1 B T,T ∈ R m×dx , h = h 1 . . . h T ∈ R dy , C =
C 1,1 0 · · · 0 0 0 C 2,2 · · · 0 0 . . . Therefore, using the above compact notation, we can rewrite the multi-policy approximation with linear decision rules as
minimize x 0 ∈R dx ,X∈R dx×d ξ y i 0 ∈R dy , Y i ∈R dy ×d ξ N i=1 w i N (γ) sup ζ∈U i N f (x 0 + Xζ) + g ζ + h y i 0 + Y i ζ subject to A(x 0 + Xζ) + Bζ + C y i 0 + Y i ζ ≤ d x 0 + Xζ ∈ X ∀ζ ∈ U i N , i ∈ {1, . . . , N },(EC.5)
where X := X 1 × · · · × X T and the matrices X and Y are non-anticipative. Note that the linear decision rules in the above optimization problem are represented using O(d ξ max{d x , N d y }) decision variables, where d x := d 1 x + · · · + d T x and d y := d 1 y + · · · + d T y . Thus, the complexity of representing the primary and auxiliary linear decision rules scales efficiently both in the size of the dataset and the number of stages. For simplicity, we present the reformulation for the case in which there are no constraints on the decision variables and nonnegativity constraints on the random variables.
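When building the matrices X and Y^i in a modeling layer, the non-anticipativity pattern can be enforced by masking an unconstrained parameter matrix, as in the small helper below. This is a convenience sketch for constructing the block-triangular structure described above, not part of the reformulation itself.

```python
import numpy as np

def nonanticipative_mask(T, row_dims, col_dims, strict=True):
    """0/1 mask for a block (lower-)triangular decision-rule matrix: the block in
    block-row t and block-column s is free iff s < t (strict, primary rules X) or
    s <= t (auxiliary rules Y^i).  Elementwise-multiplying a parameter matrix by
    this mask enforces non-anticipativity."""
    mask = np.zeros((sum(row_dims), sum(col_dims)))
    r0 = 0
    for t in range(T):
        c0 = 0
        for s in range(T):
            if (s < t) if strict else (s <= t):
                mask[r0:r0 + row_dims[t], c0:c0 + col_dims[s]] = 1.0
            c0 += col_dims[s]
        r0 += row_dims[t]
    return mask

# e.g. T = 3 stages with 2-dimensional decisions and uncertainty per stage
print(nonanticipative_mask(3, [2, 2, 2], [2, 2, 2], strict=True))
```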
Theorem EC.2. Suppose Ξ = R^{d_ξ}_+ and X = R^{d_x}. Then, (EC.5) is equivalent to
minimize_{x_0, X, y^i_0, Y^i, Λ^i ∈ R^{m×d_ξ}_+, s^i ∈ R^{d_ξ}_+}   Σ_{i=1}^N w^i_N(γ̄) [ f'(x_0 + Xξ^i) + g'ξ^i + h'(y^i_0 + Y^iξ^i) + (s^i)'ξ^i + ε_N ‖X'f + g + (Y^i)'h + s^i‖_* ]
subject to   A(x_0 + Xξ^i) + Bξ^i + C(y^i_0 + Y^iξ^i) + Λ^iξ^i + ε_N ‖AX + B + CY^i + Λ^i‖_* ≤ d    ∀i ∈ {1, ..., N},
where ‖Z‖_* := ( ‖z_1‖_*, ..., ‖z_r‖_* )' ∈ R^r denotes the row-wise dual norm of any matrix Z ∈ R^{r×n} with rows z_1, ..., z_r.
Proof. For any c ∈ R^{d_ξ} and ξ ∈ Ξ, it follows directly from strong duality for conic optimization that
max_{ζ≥0} { c'ζ : ‖ζ − ξ‖ ≤ ε } = min_{λ≥0} { (c + λ)'ξ + ε ‖c + λ‖_* }.
We use this result to reformulate the objective and constraints of (EC.5). First, let the j-th rows of A, B, C and the j-th element of d be denoted by a j ∈ R dx , b j ∈ R ξ , c j ∈ R dy , and d j ∈ R. Then, each robust constraint has the form
a j (x 0 + Xζ) + b j ζ + c j (y i 0 + Y i ζ) ≤ d j ∀ζ ∈ U i N .
Rearranging terms,
(a j X + b j + c j Y i )ζ ≤ d j − a j x 0 − c j y i 0 ∀ζ ∈ U i N ,
which applying duality becomes ∃λ i j ≥ 0 : X a j + b j + (Y i ) c j + λ i j ξ i + N X a j + b j + (Y i ) c j + λ i j * ≤ d j − a j x 0 − c j y i 0 .
Rearranging terms, the robust constraints for each i ∈ {1, . . . , N } are satisfied if and only if
∃Λ i ≥ 0 : A x 0 + Xξ i + Bξ i + C y i 0 + Y i ξ i + Λ i ξ i + N AX + B + CY i + Λ i * ≤ d,
where the dual norm for a matrix is applied separately for each row. Similarly, the objective function takes the form
N i=1 w i N (γ) sup ζ∈U i N f (x 0 + Xζ) + g ζ + h y i 0 + Y i ζ = N i=1 w i N (γ) f x 0 + h y i 0 + sup ζ∈U i N f X + g + h Y i ζ = N i=1 w i N (γ) f x 0 + h y i 0 + inf s i ≥0 X f + g + (Y i ) h + s i ξ i + N X f + g + (Y i ) h + s i * = N i=1 w i N (γ) f x 0 + Xξ i + g ξ i + h y i 0 + Y i ξ i + inf s i ≥0 (s i ) ξ i + N X f + g + (Y i ) h + s i * .
Combining the reformulations above, we obtain the desired reformulation. | 14,319 |
1907.07307 | 2961578905 | We present a data-driven framework for incorporating side information in dynamic optimization under uncertainty. Specifically, our approach uses predictive machine learning methods (such as k-nearest neighbors, kernel regression, and random forests) to weight the relative importance of various data-driven uncertainty sets in a robust optimization formulation. Through a novel measure concentration result for local machine learning methods, we prove that the proposed framework is asymptotically optimal for stochastic dynamic optimization with covariates. We also describe a general-purpose approximation for the proposed framework, based on overlapping linear decision rules, which is computationally tractable and produces high-quality solutions for dynamic problems with many stages. Across a variety of examples in shipment planning, inventory management, and finance, our method achieves improvements of up to 15 over alternatives and requires less than one minute of computation time on problems with twelve stages. | As discussed previously, the methodology in this paper also follows recent work on incorporating covariates in optimization under uncertainty using local predictive methods (such as @math -nearest neighbor regression, kernel regression, and random forests). In particular, the asymptotic optimality justification of @cite_11 in single-stage settings relies on the strong universal consistency for local predictive models (, @cite_6 ). Our proof of asymptotic optimality instead relies on convergence guarantees rooted in distributionally robust optimization. The reason we use a different approach is that the arguments for the convergence for local predictive models from @cite_11 require finite dimensional decision variables. In contrast, the convergence guarantees in this paper apply for dynamic optimization over general spaces of policies. | {
"abstract": [
"Elementary approaches to classic strong laws of large numbers use a monotonicity argument or a Tauberian argument of summability theory. Together with results on variance of sums of dependent random variables they allow to establish various strong laws of large numbers in case of dependence, especially under mixing conditions. Strong consistency of nonparametric regression estimates of local averaging type (kernel and nearest neighbor estimates), pointwise as well as in L 2, can be considered as a generalization of strong laws of large numbers. Both approaches can be used to establish strong universal consistency in the case of independence and, mostly by sharpened integrability assumptions, consistency under ρ-mixing or α-mixing. In a similar way Rosenblatt-Parzen kernel density estimates are treated.",
"In this paper, we combine ideas from machine learning (ML) and operations research and management science (OR MS) in developing a framework, along with specific methods, for using data to prescribe decisions in OR MS problems. In a departure from other work on data-driven optimization and reflecting our practical experience with the data available in applications of OR MS, we consider data consisting, not only of observations of quantities with direct effect on costs revenues, such as demand or returns, but predominantly of observations of associated auxiliary quantities. The main problem of interest is a conditional stochastic optimization problem, given imperfect observations, where the joint probability distributions that specify the problem are unknown. We demonstrate that our proposed solution methods are generally applicable to a wide range of decision problems. We prove that they are computationally tractable and asymptotically optimal under mild conditions even when data is not independent and identically distributed (iid) and even for censored observations. As an analogue to the coefficient of determination @math , we develop a metric @math termed the coefficient of prescriptiveness to measure the prescriptive content of data and the efficacy of a policy from an operations perspective. To demonstrate the power of our approach in a real-world setting we study an inventory management problem faced by the distribution arm of an international media conglomerate, which ships an average of 1 billion units per year. We leverage both internal data and public online data harvested from IMDb, Rotten Tomatoes, and Google to prescribe operational decisions that outperform baseline measures. Specifically, the data we collect, leveraged by our methods, accounts for an 88 improvement as measured by our coefficient of prescriptiveness."
],
"cite_N": [
"@cite_6",
"@cite_11"
],
"mid": [
"2211648829",
"1507350577"
]
} | Dynamic Optimization with Side Information | Dynamic decision making under uncertainty forms the foundation for numerous fundamental problems in operations research and management science. In these problems, a decision maker attempts to minimize an uncertain objective over time, as information incrementally becomes available. For example, consider a retailer with the goal of managing the inventory of a new short life cycle product. Each week, the retailer must decide an ordering quantity to replenish its inventory. Future demand for the product is unknown, but the retailer can base its ordering decisions on the remaining inventory level, which depends on the realized demands in previous weeks. A risk-averse investor faces a similar problem when constructing and adjusting a portfolio of assets in order to achieve a desirable risk-return tradeoff over a horizon of many months. Additional examples abound in energy planning, airline routing, and ride sharing, as well as in many other areas.
To make high quality decisions in dynamic environments, the decision maker must accurately model future uncertainty. Often, practitioners have access to side information or auxiliary covariates, which can help predict that uncertainty. For a retailer, although the future demand for a newly introduced clothing item is unknown, data on the brand, style, and color of the item, as well as data on market trends and social media, can help predict it. For a risk-averse investor, while the returns of the assets in future stages are uncertain, recent asset returns and prices of relevant options can provide crucial insight into upcoming volatility. Consequently, organizations across many industries are continuing to prioritize the use of predictive analytics in order to leverage vast quantities of data to understand future uncertainty and make better operational decisions.
A recent body of work has aimed to leverage predictive analytics in decision making under uncertainty. For example, Hannah et al. (2010), Ban and Rudin (2018), Bertsimas and Kallus (2014) and Ho and Hanasusanto (2019) investigate prescriptive approaches, based on sample average approximation, that use local machine learning to assign weights to the historical data based on covariates. Bertsimas and Van Parys (2017) propose adding robustness to those weights to achieve optimal asymptotic budget guarantees. Elmachtoub and Grigas (2017) develop an approach for linear optimization problems in which a machine learning model is trained to minimize the decision cost. All of these approaches are specialized for single-stage or two-stage optimization problems, and do not readily generalize to problems with many stages. For a class of dynamic inventory problems, a data-driven approach has been proposed that fits the stochastic process and covariates to a parametric regression model and is asymptotically optimal when the model is correctly specified. Bertsimas and McCord (2019) propose a different approach based on dynamic programming that uses nonparametric machine learning methods to handle auxiliary covariates.
However, these dynamic approaches require scenario tree enumeration and suffer from the curse of dimensionality. To the best of our knowledge, no previous work leverages machine learning in a computationally tractable, data-driven framework for decision making in dynamic environments with covariates.
Recently, Bertsimas et al. (2018a) developed a data-driven approach for dynamic optimization under uncertainty that they call sample robust optimization (SRO). Their SRO framework solves a robust optimization problem in which an uncertainty set is constructed around each historical sample path. They show this data-driven framework enjoys nonparametric out-of-sample performance guarantees for a class of dynamic linear optimization problems without covariates and show that this framework can be approximated using decision rule techniques from robust optimization.
Contributions
In this paper, we present a new framework for leveraging side information in dynamic optimization. Specifically, we propose combining local machine learning methods with the sample robust optimization framework. Through a new measure concentration result, we show that the proposed sample robust optimization with covariates framework is asymptotically optimal, providing the assurance that the resulting decisions are nearly optimal in the presence of big data. We also demonstrate the tractability of the approach via an approximation algorithm based on overlapping linear decision rules. To the best of our knowledge, our method is the first nonparametric approach for tractably solving dynamic optimization problems with covariates, offering practitioners a general-purpose tool for better decision making with predictive analytics. We summarize our main contributions as follows:
• We present a general-purpose framework for leveraging machine learning in data-driven dynamic optimization with covariates. Our approach extends the sample robust optimization framework by assigning weights to the uncertainty sets based on covariates. The weights are computed using machine learning methods such as k-nearest neighbor regression, kernel regression, and random forest regression.
• We provide theoretical justification for the proposed framework in the big data setting. First, we develop a new measure concentration result for local machine learning methods (Theorem 2), which shows that the weighted empirical distribution produced by local predictors converges quickly to the true conditional distribution. To the best of our knowledge, such a result for local machine learning is the first of its kind. We use Theorem 2 to establish that the proposed framework is asymptotically optimal for dynamic optimization with covariates without any parametric assumptions (Theorem 1).
• To find high quality solutions for problems with many stages in practical computation times, we present an approximation scheme based on overlapping linear decision rules. Specifically, we propose using separate linear decision rules for each uncertainty set to approximate the costs incurred in each stage. We show that the approximation is computationally tractable, both with respect to the number of stages and size of the historical dataset.
• By using all available data, we show that our method produces decisions that achieve improved out-of-sample performance. Specifically, in a variety of examples (shipment planning, inventory management, and finance), across a variety of time horizons, our proposed method outperforms alternatives, in a statistically significant manner, achieving up to 15% improvement in average out-of-sample cost. Moreover, our algorithm is practical and scalable, requiring less than one minute on examples with up to twelve stages.
The paper is organized as follows. Section 2 introduces the problem setting and notation. Section 3 proposes the new framework for incorporating machine learning into dynamic optimization. Section 4 develops theoretical guarantees on the proposed framework. Section 5 presents the general multi-policy approximation scheme for dynamic optimization with covariates. Section 6 presents a detailed investigation and computational simulations of the proposed methodology in shipment planning, inventory management, and finance. We conclude in Section 7.
Problem Setting
We consider finite-horizon discrete-time stochastic dynamic optimization problems. The uncertain quantities observed in each stage are denoted by random variables
$\xi_1 \in \Xi_1 \subseteq \mathbb{R}^{d^\xi_1}, \ldots, \xi_T \in \Xi_T \subseteq \mathbb{R}^{d^\xi_T}$. The decisions made in each stage are denoted by $x_1 \in \mathcal{X}_1 \subseteq \mathbb{R}^{d^x_1}, \ldots, x_T \in \mathcal{X}_T \subseteq \mathbb{R}^{d^x_T}$. Given realizations of the uncertain quantities and decisions, we incur a cost of $c(\xi_1, \ldots, \xi_T, x_1, \ldots, x_T) \in \mathbb{R}$.
A decision rule $\pi = (\pi_1, \ldots, \pi_T)$ is a collection of measurable functions $\pi_t : \Xi_1 \times \cdots \times \Xi_{t-1} \to \mathcal{X}_t$ which specify what decision to make in stage $t$ based on the information observed up to that point.
Given realizations of the uncertain quantities and a choice of decision rules, the resulting cost is
$$c^\pi(\xi_1, \ldots, \xi_T) := c\big(\xi_1, \ldots, \xi_T, \pi_1, \pi_2(\xi_1), \ldots, \pi_T(\xi_1, \ldots, \xi_{T-1})\big).$$
Before selecting the decision rules, we observe auxiliary covariates $\gamma \in \Gamma \subseteq \mathbb{R}^{d_\gamma}$. For example, in the aforementioned fashion setting, the auxiliary covariates may include information on the brand, style, and color of a new clothing item, with the remaining uncertainties representing the demand for the product in each week of its life cycle. Given a realization of the covariates $\gamma = \bar\gamma$, our goal is to find decision rules which minimize the conditional expected cost:
$$v^*(\bar\gamma) := \min_{\pi \in \Pi}\ \mathbb{E}\big[c^\pi(\xi_1, \ldots, \xi_T) \,\big|\, \gamma = \bar\gamma\big]. \tag{1}$$
We refer to (1) as dynamic optimization with covariates. The optimization takes place over a collection Π which is any subset of the space of all non-anticipative decision rules.
In this paper, we assume that the joint distribution of the covariates and uncertain quantities (γ, ξ 1 , . . . , ξ T ) is unknown, and our knowledge consists of historical data of the form
$$(\gamma^1, \xi^1_1, \ldots, \xi^1_T),\ \ldots,\ (\gamma^N, \xi^N_1, \ldots, \xi^N_T),$$
where each of these tuples consists of a realization of the auxiliary covariates and the following realization of the random variables over the stages. For example, in the aforementioned fashion setting, each tuple corresponds to the covariates of a past fashion item as well as its demand over its lifecycle. We will not assume any parametric structure on the relationship between the covariates and future uncertainty.
The goal of this paper is to develop a general-purpose, computationally tractable, data-driven approach for approximately solving dynamic optimization with covariates. In the following sections, we propose and analyze a new framework which leverages nonparametric machine learning, trained from historical data, to predict future uncertainty from covariates in a way that leads to near-optimal decision rules for (1).
Notation
The joint probability distribution of the covariates γ and uncertain quantities ξ = (ξ 1 , . . . , ξ T ) is denoted by P. For the purpose of proving theorems, we assume throughout this paper that the historical data are independent and identically distributed (i.i.d.) samples from this distribution P. In other words, we assume that the historical data satisfies
$$\big((\gamma^1, \xi^1), \ldots, (\gamma^N, \xi^N)\big) \sim \mathbb{P}^N,$$
where $\mathbb{P}^N := \mathbb{P} \times \cdots \times \mathbb{P}$ is the product measure. The set of all probability distributions supported on $\Xi := \Xi_1 \times \cdots \times \Xi_T \subseteq \mathbb{R}^{d_\xi}$ is denoted by $\mathcal{P}(\Xi)$. For each covariate realization $\bar\gamma \in \Gamma$, we assume that its conditional probability distribution satisfies $\mathbb{P}_{\bar\gamma} \in \mathcal{P}(\Xi)$, where $\mathbb{P}_{\bar\gamma}(\cdot)$ is shorthand for $\mathbb{P}(\cdot \mid \gamma = \bar\gamma)$. We sometimes use subscript notation for expectations to specify the underlying probability distribution; for example, the following two expressions are equivalent:
$$\mathbb{E}_{\xi \sim \mathbb{P}_{\bar\gamma}}[f(\xi_1, \ldots, \xi_T)] \equiv \mathbb{E}[f(\xi_1, \ldots, \xi_T) \mid \gamma = \bar\gamma].$$
Finally, we say that the cost function resulting from a policy $\pi$ is upper semicontinuous if $\limsup_{\zeta \to \bar\zeta} c^\pi(\zeta_1, \ldots, \zeta_T) \le c^\pi(\bar\zeta_1, \ldots, \bar\zeta_T)$ for all $\bar\zeta \in \Xi$.
Sample Robust Optimization with Covariates
In this section, we present our approach for incorporating machine learning in dynamic optimization. We first review sample robust optimization, and then we introduce our new sample robust optimization with covariates framework.
Preliminary: sample robust optimization
Consider a stochastic dynamic optimization problem of the form (1) in which there are no auxiliary covariates. The underlying joint distribution of the random variables $\xi \equiv (\xi_1, \ldots, \xi_T)$ is unknown, but we have data consisting of sample paths $\xi^1 \equiv (\xi^1_1, \ldots, \xi^1_T), \ldots, \xi^N \equiv (\xi^N_1, \ldots, \xi^N_T)$. For this setting, sample robust optimization can be used to find approximate solutions to stochastic dynamic optimization. To apply the framework, one constructs an uncertainty set around each sample path in the training data and then chooses the decision rules that optimize the average of the worst-case realizations of the cost. Formally, this framework results in the following robust optimization problem:
$$\min_{\pi \in \Pi}\ \sum_{i=1}^N \frac{1}{N} \sup_{\zeta \in \mathcal{U}^i_N} c^\pi(\zeta_1, \ldots, \zeta_T), \tag{2}$$
where $\mathcal{U}^i_N \subseteq \Xi$ is an uncertainty set around $\xi^i$. Intuitively speaking, (2) chooses the decision rules by averaging over the historical sample paths, which are adversarially perturbed. Under mild probabilistic assumptions on the underlying joint distribution and appropriately constructed uncertainty sets, Bertsimas et al. (2018a) show that sample robust optimization converges asymptotically to the underlying stochastic problem and that (2) is amenable to approximations similar to dynamic robust optimization.
Incorporating covariates into sample robust optimization
We now present our new framework, based on sample robust optimization, for solving dynamic optimization with covariates. In the proposed framework, we first train a machine learning algorithm on the historical data to predict future uncertainty (ξ 1 , . . . , ξ T ) as a function of the covariates.
From the trained learner, we obtain weight functions $w^i_N(\bar\gamma)$, for $i = 1, \ldots, N$, each of which captures the relevance of the $i$th training sample to the new covariates $\bar\gamma$. We incorporate the weights into sample robust optimization by multiplying the cost associated with each training example by the corresponding weight function. The resulting sample robust optimization with covariates framework is as follows:
$$\hat v_N(\bar\gamma) := \min_{\pi \in \Pi}\ \sum_{i=1}^N w^i_N(\bar\gamma) \sup_{\zeta \in \mathcal{U}^i_N} c^\pi(\zeta_1, \ldots, \zeta_T), \tag{3}$$
where the uncertainty sets are defined as
$$\mathcal{U}^i_N := \big\{\zeta \in \Xi : \|\zeta - \xi^i\| \le \epsilon_N\big\},$$
and $\|\cdot\|$ is an $\ell_p$ norm with $p \ge 1$.
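To make the inner supremum in (3) concrete, the following minimal sketch (ours, not the paper's implementation) evaluates the weighted worst-case objective for a cost that happens to be linear in the uncertainty, in which case the supremum over an $\ell_p$ ball has the closed form $c^\top \xi^i + \epsilon_N \|c\|_q$ with $q$ the dual exponent; the data and cost vector below are hypothetical.

```python
# Minimal sketch: weighted sample robust objective for a cost linear in zeta.
# For general convex costs (as in Section 5) the supremum is not this simple.
import numpy as np

def worst_case_linear_cost(c, xi_i, eps, p=1.0):
    """sup over ||zeta - xi_i||_p <= eps of c'zeta, computed via the dual norm."""
    if p == 1.0:
        q = np.inf
    elif np.isinf(p):
        q = 1.0
    else:
        q = p / (p - 1.0)
    return c @ xi_i + eps * np.linalg.norm(c, ord=q)

rng = np.random.default_rng(0)
xi = rng.exponential(size=(30, 5))        # 30 historical sample paths xi^i
w = np.full(30, 1 / 30)                   # uniform weights recover plain SRO (2)
c = rng.uniform(size=5)                   # hypothetical linear cost vector
objective = sum(wi * worst_case_linear_cost(c, xii, eps=0.1) for wi, xii in zip(w, xi))
print(objective)
```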
The above framework provides the flexibility for the practitioner to construct weights from a variety of machine learning algorithms. We focus in this paper on weight functions which come from nonparametric machine learning methods. Examples of viable predictive models include k-nearest neighbors (kNN), kernel regression, classification and regression trees (CART), and random forests (RF). We describe these four classes of weight functions.
Definition 1. The k-nearest neighbor weight functions are given by:
$$w^i_{N,\mathrm{kNN}}(\bar\gamma) := \begin{cases} \dfrac{1}{k_N}, & \text{if } \gamma^i \text{ is a } k_N\text{-nearest neighbor of } \bar\gamma, \\ 0, & \text{otherwise.} \end{cases}$$
Formally, $\gamma^i$ is a $k_N$-nearest neighbor of $\bar\gamma$ if $\big|\{j \in \{1, \ldots, N\} \setminus \{i\} : \|\gamma^j - \bar\gamma\| < \|\gamma^i - \bar\gamma\|\}\big| < k_N$.
For more technical details, we refer the reader to Biau and Devroye (2015).
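A minimal sketch (ours) of the k-nearest-neighbor weights, assuming a Euclidean distance on the covariates and breaking ties arbitrarily; the function and variable names are not from the paper.

```python
import numpy as np

def knn_weights(gammas, gamma_new, k):
    """Weight 1/k on the k historical covariates closest to gamma_new, 0 elsewhere."""
    dists = np.linalg.norm(gammas - gamma_new, axis=1)
    neighbors = np.argsort(dists)[:k]     # indices of the k nearest samples
    w = np.zeros(len(gammas))
    w[neighbors] = 1.0 / k
    return w
```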
Definition 2. The kernel regression weight functions are given by:
$$w^i_{N,\mathrm{KR}}(\bar\gamma) := \frac{K(\|\gamma^i - \bar\gamma\|/h_N)}{\sum_{j=1}^N K(\|\gamma^j - \bar\gamma\|/h_N)},$$
where $K(\cdot)$ is the kernel function and $h_N$ is the bandwidth parameter. Examples of kernel functions include the Gaussian kernel, $K(u) = \frac{1}{\sqrt{2\pi}} e^{-u^2/2}$, the triangular kernel, $K(u) = (1-u)\mathbb{1}\{u \le 1\}$, and the Epanechnikov kernel, $K(u) = \frac{3}{4}(1-u^2)\mathbb{1}\{u \le 1\}$. For more information on kernel regression, see Friedman et al. (2001, Chapter 6).
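A minimal sketch (ours) of the kernel regression weights for the three kernels listed above; it assumes at least one historical covariate falls in the kernel's support so that the normalization is well defined.

```python
import numpy as np

KERNELS = {
    "gaussian":     lambda u: np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi),
    "triangular":   lambda u: np.maximum(1.0 - u, 0.0),
    "epanechnikov": lambda u: 0.75 * np.maximum(1.0 - u ** 2, 0.0),
}

def kernel_weights(gammas, gamma_new, h, kernel="gaussian"):
    """w_i proportional to K(||gamma_i - gamma_new|| / h), normalized to sum to one."""
    u = np.linalg.norm(gammas - gamma_new, axis=1) / h
    k = KERNELS[kernel](u)
    return k / k.sum()
```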
The next two types of weight functions we present are based on classification and regression trees (Breiman et al. 1984) and random forests (Breiman 2001). We refer the reader to Bertsimas and Kallus (2014) for technical implementation details.
Definition 3. The classification and regression tree weight functions are given by:
$$w^i_{N,\mathrm{CART}}(\bar\gamma) := \begin{cases} \dfrac{1}{|l_N(\bar\gamma)|}, & i \in l_N(\bar\gamma), \\ 0, & \text{otherwise,} \end{cases}$$
where $l_N(\bar\gamma)$ is the set of indices $i$ such that $\gamma^i$ is contained in the same leaf of the tree as $\bar\gamma$.
Definition 4. The random forest weight functions are given by:
$$w^i_{N,\mathrm{RF}}(\bar\gamma) := \frac{1}{B} \sum_{b=1}^B w^{i,b}_{N,\mathrm{CART}}(\bar\gamma),$$
where $B$ is the number of trees in the ensemble, and $w^{i,b}_{N,\mathrm{CART}}(\bar\gamma)$ refers to the weight function of the $b$th tree in the ensemble.
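A minimal sketch (ours) of the random forest weights in Definition 4 (the single-tree weights of Definition 3 are the special case of one tree), using scikit-learn's leaf assignments; the hyperparameters and data are illustrative only, and we refer to Bertsimas and Kallus (2014) for the implementation actually used.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def random_forest_weights(forest, gammas, gamma_new):
    """w_i = average over trees of 1/|leaf| for training samples sharing gamma_new's leaf."""
    leaves_train = forest.apply(gammas)                  # (N, n_trees) leaf indices
    leaves_new = forest.apply(gamma_new.reshape(1, -1))[0]
    w = np.zeros(len(gammas))
    for b in range(leaves_train.shape[1]):
        in_leaf = leaves_train[:, b] == leaves_new[b]
        w[in_leaf] += 1.0 / (in_leaf.sum() * leaves_train.shape[1])
    return w

rng = np.random.default_rng(0)
G, xi = rng.normal(size=(200, 3)), rng.normal(size=200)  # hypothetical (covariate, response) data
forest = RandomForestRegressor(n_estimators=50, min_samples_leaf=10).fit(G, xi)
print(random_forest_weights(forest, G, rng.normal(size=3)).sum())   # weights sum to one
```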
All of the above weight functions come from nonparametric machine learning methods. They are highly effective as predictive methods because they can learn complex relationships between the covariates and the response variable without requiring the practitioner to state an explicit parametric form. Similarly, as we prove in Section 4, solutions to (3) with these weight functions are asymptotically optimal for (1) without any parametric restrictions on the relationship between γ and ξ. In other words, incorporating covariates into sample robust optimization via (3) leads to better decisions asymptotically, even without specific knowledge of how the covariates affect the uncertainty.
Asymptotic Optimality
In this section, we establish asymptotic optimality guarantees for sample robust optimization with auxiliary covariates. We prove that, under mild conditions, (3) converges to (1) as the number of training samples goes to infinity. Thus, as the amount of data grows, sample robust optimization with covariates becomes an optimal approximation of the underlying stochastic dynamic optimization problem. Crucially, our convergence guarantee does not require parametric restrictions on the space of decision rules (e.g., linearity) or parametric restrictions on the joint distribution of the covariates and uncertain quantities. These theoretical results are consistent with empirical experiments in Section 6.
Main result
We begin by presenting our main result. The proof of the result depends on some technical assumptions and concepts from distributionally robust optimization. For simplicity, we defer the statement and discussion of technical assumptions regarding the underlying probability distribution and cost until Sections 4.3 and 4.4, and first discuss what is needed to apply the method in practice. The practitioner needs to select a weight function, parameters associated with that weight function, and the radius $\epsilon_N$ of the uncertainty sets. While these may be selected by cross-validation, we show that the method will in general converge if the parameters are selected to satisfy the following:
Assumption 1. The weight functions and uncertainty set radius satisfy one of the following:
1. $\{w^i_N(\cdot)\}$ are k-nearest neighbor weight functions with $k_N = \min(k_3 N^\delta, N-1)$ for constants $k_3 > 0$ and $\delta \in (\tfrac{1}{2}, 1)$, and $\epsilon_N = k_1 N^{-p}$ for constants $k_1 > 0$ and $0 < p < \min\big(\tfrac{1-\delta}{d_\gamma}, \tfrac{2\delta-1}{d_\xi+2}\big)$.
2. $\{w^i_N(\cdot)\}$ are kernel regression weight functions with the Gaussian, triangular, or Epanechnikov kernel function and $h_N = k_4 N^{-\delta}$ for constants $k_4 > 0$ and $\delta \in \big(0, \tfrac{1}{2 d_\gamma}\big)$, and $\epsilon_N = k_1 N^{-p}$ for constants $k_1 > 0$ and $0 < p < \min\big(\delta, \tfrac{1 - \delta d_\gamma}{2 + d_\xi}\big)$.
Given Assumption 1, our main result is the following.
Theorem 1. Suppose the weight function and uncertainty sets satisfy Assumption 1, the joint probability distribution of $(\gamma, \xi)$ satisfies Assumptions 2-4 from Section 4.3, and the cost function satisfies Assumption 5 from Section 4.4. Then, for every $\bar\gamma \in \Gamma$,
$$\lim_{N \to \infty} \hat v_N(\bar\gamma) = v^*(\bar\gamma), \quad \mathbb{P}^\infty\text{-almost surely}.$$
The theorem says that the objective value of (3) converges almost surely to the optimal value of the full-information problem (1) as $N$ goes to infinity. The assumptions of the theorem require that the joint distribution and the feasible decision rules are well behaved. We will discuss these technical assumptions in more detail in the following sections.
In order to prove the asymptotic optimality of sample robust optimization with covariates, we view (3) through the more general lens of Wasserstein-based distributionally robust optimization.
We first review some properties of the Wasserstein metric and then prove a key intermediary result, from which our main result follows.
Review of the Wasserstein metric
The Wasserstein metric provides a distance function between probability distributions. In particular, given two probability distributions Q, Q ∈ P(Ξ), the type-1 Wasserstein distance is defined as the optimal objective value of a minimization problem:
$$d_1(\mathbb{Q}, \mathbb{Q}') := \inf\Big\{ \mathbb{E}_{(\xi, \xi') \sim \Pi} \|\xi - \xi'\| \ :\ \Pi \text{ is a joint distribution of } \xi \text{ and } \xi' \text{ with marginals } \mathbb{Q} \text{ and } \mathbb{Q}', \text{ respectively} \Big\}.$$
The Wasserstein metric is particularly appealing because a distribution with finite support can have a finite distance to a continuous distribution. This allows us to construct a Wasserstein ball around an empirical distribution that includes continuous distributions, which cannot be done with other popular measures such as the Kullback-Leibler divergence (Kullback and Leibler 1951).
We remark that the 1-Wasserstein metric satisfies the axioms of a metric, including the triangle inequality (Clement and Desch 2008):
$$d_1(\mathbb{Q}_1, \mathbb{Q}_2) \le d_1(\mathbb{Q}_1, \mathbb{Q}_3) + d_1(\mathbb{Q}_3, \mathbb{Q}_2), \quad \forall\, \mathbb{Q}_1, \mathbb{Q}_2, \mathbb{Q}_3 \in \mathcal{P}(\Xi).$$
Important to this paper, the 1-Wasserstein metric admits a dual form, as shown by Kantorovich and Rubinstein (1958),
$$d_1(\mathbb{Q}, \mathbb{Q}') = \sup_{\mathrm{Lip}(h) \le 1} \big|\mathbb{E}_{\xi \sim \mathbb{Q}}[h(\xi)] - \mathbb{E}_{\xi \sim \mathbb{Q}'}[h(\xi)]\big|,$$
where the supremum is taken over all 1-Lipschitz functions. Note that the absolute value is optional in the dual form of the metric, and the space of Lipschitz functions can be restricted to those which satisfy h(0) = 0 without loss of generality. Finally, we remark that Fournier and Guillin (2015) prove under a light-tailed assumption that the 1-Wasserstein distance between the empirical distribution and its underlying distribution concentrates around zero with high probability. Theorem 2 in the following section extends this concentration result to the setting with auxiliary covariates.
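As a small numerical illustration (ours, not from the paper), SciPy provides the 1-Wasserstein distance for distributions on the real line, which can be used to compare a weighted empirical measure with an unweighted one; the data below are hypothetical.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
xi = rng.normal(size=100)                      # common support points of both measures
w = rng.random(100); w /= w.sum()              # weights of a weighted empirical measure
d = wasserstein_distance(xi, xi, u_weights=w, v_weights=np.full(100, 1 / 100))
print(f"1-Wasserstein distance: {d:.4f}")
```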
Concentration of the weighted empirical measure
Given a local predictive method, let the corresponding weighted empirical measure be defined as
$$\hat{\mathbb{P}}^N_{\bar\gamma} := \sum_{i=1}^N w^i_N(\bar\gamma)\, \delta_{\xi^i},$$
where $\delta_\xi$ denotes the Dirac probability distribution which places point mass at $\xi$. In this section, we prove under mild assumptions that the weighted empirical measure $\hat{\mathbb{P}}^N_{\bar\gamma}$ concentrates quickly to $\mathbb{P}_{\bar\gamma}$ with respect to the 1-Wasserstein metric. We introduce the following assumptions on the underlying joint probability distribution:
Assumption 2 (Conditional Subgaussianity). There exists a parameter $\sigma > 0$ such that
$$\mathbb{P}\big(\|\xi - \mathbb{E}[\xi \mid \gamma = \bar\gamma]\| > t \,\big|\, \gamma = \bar\gamma\big) \le \exp\!\left(-\frac{t^2}{2\sigma^2}\right) \quad \forall t > 0,\ \bar\gamma \in \Gamma.$$
Assumption 3 (Lipschitz Continuity). There exists $0 < L < \infty$ such that
$$d_1(\mathbb{P}_{\bar\gamma}, \mathbb{P}_{\bar\gamma'}) \le L \|\bar\gamma - \bar\gamma'\|, \quad \forall \bar\gamma, \bar\gamma' \in \Gamma.$$
Assumption 4 (Smoothness of Auxiliary Covariates). The set $\Gamma$ is compact, and there exists $g > 0$ such that
$$\mathbb{P}(\|\gamma - \bar\gamma\| \le \epsilon) \ge g \epsilon^{d_\gamma}, \quad \forall \epsilon > 0,\ \bar\gamma \in \Gamma.$$
With these assumptions, we are ready to prove the concentration result, which is proved using a novel technique that relies on the dual form of the Wasserstein metric and a discrete approximation of the space of 1-Lipschitz functions.
Theorem 2. Suppose the weight function and uncertainty sets satisfy Assumption 1 and the joint probability distribution of $(\gamma, \xi)$ satisfies Assumptions 2-4. Then, for every $\bar\gamma \in \Gamma$,
$$\mathbb{P}^\infty\Big(\big\{d_1(\mathbb{P}_{\bar\gamma}, \hat{\mathbb{P}}^N_{\bar\gamma}) > \epsilon_N\big\}\ \text{i.o.}\Big) = 0.$$
Proof. Without loss of generality, we assume throughout the proof that all norms $\|\cdot\|$ refer to the $\ell_\infty$ norm.(1) Fix any $\bar\gamma \in \Gamma$. It follows from Assumption 1 that
$$\{w^i_N(\bar\gamma)\} \text{ are not functions of } \xi^1, \ldots, \xi^N; \tag{4}$$
$$\sum_{i=1}^N w^i_N(\bar\gamma) = 1 \ \text{ and } \ w^1_N(\bar\gamma), \ldots, w^N_N(\bar\gamma) \ge 0, \quad \forall N \in \mathbb{N}; \tag{5}$$
$$\epsilon_N = k_1 N^{-p}, \quad \forall N \in \mathbb{N}, \tag{6}$$
for constants $k_1, p > 0$. Moreover, Assumption 1 also implies that there exist constants $k_2 > 0$ and $\eta > p(2 + d_\xi)$ such that
$$\lim_{N \to \infty} \frac{1}{\epsilon_N} \sum_{i=1}^N w^i_N(\bar\gamma) \|\gamma^i - \bar\gamma\| = 0, \quad \mathbb{P}^\infty\text{-almost surely}; \tag{7}$$
$$\mathbb{E}_{\mathbb{P}^N}\left[\exp\!\left(-\frac{\theta}{\sum_{i=1}^N w^i_N(\bar\gamma)^2}\right)\right] \le \exp(-k_2 \theta N^\eta), \quad \forall \theta \in (0,1),\ N \in \mathbb{N}. \tag{8}$$
The proof of the above statements under Assumption 1 is found in Appendix EC.1. Now, choose any fixed $q \in (0,\ \eta/(2 + d_\xi) - p)$, and let
$$b_N := N^q, \qquad B_N := \{\zeta \in \mathbb{R}^{d_\xi} : \|\zeta\| \le b_N\}, \qquad I_N := \mathbb{1}\{\xi^1, \ldots, \xi^N \in B_N\}.$$
Finally, we define the following intermediary probability distributions:
$$\mathbb{Q}^N_{\bar\gamma} := \sum_{i=1}^N w^i_N(\bar\gamma)\, \mathbb{P}_{\gamma^i}, \qquad \mathbb{Q}^N_{\bar\gamma|B_N} := \sum_{i=1}^N w^i_N(\bar\gamma)\, \mathbb{P}_{\gamma^i|B_N},$$
where $\mathbb{P}_{\gamma^i|B_N}(\cdot)$ is shorthand for $\mathbb{P}(\cdot \mid \gamma = \gamma^i,\ \xi \in B_N)$.
Applying the triangle inequality for the 1-Wasserstein metric and the union bound,
$$\mathbb{P}^\infty\Big(\big\{d_1(\mathbb{P}_{\bar\gamma}, \hat{\mathbb{P}}^N_{\bar\gamma}) > \epsilon_N\big\}\ \text{i.o.}\Big) \le \mathbb{P}^\infty\Big(\big\{d_1(\mathbb{P}_{\bar\gamma}, \mathbb{Q}^N_{\bar\gamma}) > \tfrac{\epsilon_N}{3}\big\}\ \text{i.o.}\Big) + \mathbb{P}^\infty\Big(\big\{d_1(\mathbb{Q}^N_{\bar\gamma}, \mathbb{Q}^N_{\bar\gamma|B_N}) > \tfrac{\epsilon_N}{3}\big\}\ \text{i.o.}\Big) + \mathbb{P}^\infty\Big(\big\{d_1(\mathbb{Q}^N_{\bar\gamma|B_N}, \hat{\mathbb{P}}^N_{\bar\gamma}) > \tfrac{\epsilon_N}{3}\big\}\ \text{i.o.}\Big).$$
We now proceed to bound each of the above terms.
1. To see why this is without loss of generality, consider any other $\ell_p$ norm with $p \ge 1$. In this case, $\|\xi - \xi'\|_p \le d_\xi^{1/p} \|\xi - \xi'\|_\infty$. By the definition of the 1-Wasserstein metric, this implies $d_1^p(\mathbb{P}_{\bar\gamma}, \hat{\mathbb{P}}^N_{\bar\gamma}) \le d_\xi^{1/p}\, d_1^\infty(\mathbb{P}_{\bar\gamma}, \hat{\mathbb{P}}^N_{\bar\gamma})$, where $d_1^p$ refers to the 1-Wasserstein metric induced by the $\ell_p$ norm. If $\epsilon_N$ satisfies Assumption 1, then $\epsilon_N / d_\xi^{1/p}$ also satisfies Assumption 1, so the result for all other choices of $\ell_p$ norms follows from the result with the $\ell_\infty$ norm.
Term 1: $d_1(\mathbb{P}_{\bar\gamma}, \mathbb{Q}^N_{\bar\gamma})$. By the dual form of the 1-Wasserstein metric,
$$d_1(\mathbb{P}_{\bar\gamma}, \mathbb{Q}^N_{\bar\gamma}) = \sup_{\mathrm{Lip}(h) \le 1} \Big| \mathbb{E}[h(\xi) \mid \gamma = \bar\gamma] - \sum_{i=1}^N w^i_N(\bar\gamma)\, \mathbb{E}[h(\xi) \mid \gamma = \gamma^i] \Big|,$$
where the supremum is taken over all 1-Lipschitz functions. By (5) and Jensen's inequality, we can upper bound this by
$$d_1(\mathbb{P}_{\bar\gamma}, \mathbb{Q}^N_{\bar\gamma}) \le \sum_{i=1}^N w^i_N(\bar\gamma) \sup_{\mathrm{Lip}(h) \le 1} \big| \mathbb{E}[h(\xi) \mid \gamma = \bar\gamma] - \mathbb{E}[h(\xi) \mid \gamma = \gamma^i] \big| = \sum_{i=1}^N w^i_N(\bar\gamma)\, d_1(\mathbb{P}_{\bar\gamma}, \mathbb{P}_{\gamma^i}) \le L \sum_{i=1}^N w^i_N(\bar\gamma) \|\bar\gamma - \gamma^i\|,$$
where the final inequality follows from Assumption 3. Therefore, it follows from (7) that
P ∞ d 1 (Pγ,Q N γ ) > N 3 i.o. = 0. (9) Term 2: d 1 (Q N γ ,Q N γ|B N ): Consider any Lipschitz function Lip(h) ≤ 1 for which h(0) = 0, and let N ∈ N satisfy bN ≥ σ + supγ ∈Γ E[ ξ |γ =γ] (which is finite because of Assumption 4). Then, for all N ≥N , and allγ ∈ Γ, E[h(ξ)|γ =γ ] − E[h(ξ) | γ =γ , ξ ∈ B N ] = E[h(ξ)1{ξ / ∈ B N } | γ =γ ] + E[h(ξ)1{ξ ∈ B N } | γ =γ ] − E[h(ξ) | γ =γ , ξ ∈ B N ] = E[h(ξ)1{ξ / ∈ B N } | γ =γ ] + E[h(ξ) | γ =γ , ξ ∈ B N ]P (ξ ∈ B N | γ =γ ) − E[h(ξ) | γ =γ , ξ ∈ B N ] = E[h(ξ)1{ξ / ∈ B N } | γ =γ ] − E[h(ξ) | γ =γ , ξ ∈ B N ]P(ξ / ∈ B N | γ =γ ) ≤ E[ ξ 1{ξ / ∈ B N } | γ =γ ] + b N P(ξ / ∈ B N | γ =γ ) = ∞ b N P ( ξ > t | γ =γ ) dt + b N P ( ξ ≥ b N | γ =γ ) ≤ (σ + b N ) exp − 1 2σ 2 b N − sup γ ∈Γ E[ ξ |γ =γ ] 2 .
The first inequality follows because |h(ξ)| ≤ b N for all ξ ∈ B N and |h(ξ)| ≤ ξ otherwise. For the second inequality, we used the Gaussian tail inequality ∞ x e −t 2 /2 dt ≤ e −x 2 /2 for x ≥ 1 (Vershynin 2018) along with Assumption 2. Because this bound holds uniformly over all h, and allγ ∈ Γ, it follows that
d 1 (Q N γ ,Q N γ|B N ) = sup Lip(h)≤1,h(0)=0 N i=1 w i N (γ) E[h(ξ) | γ = γ i ] − E[h(ξ) | γ = γ i , ξ ∈ B N ] ≤ N i=1 w i N (γ) sup Lip(h)≤1,h(0)=0 E[h(ξ) | γ = γ i ] − E[h(ξ) | γ = γ i , ξ ∈ B N ] ≤ sup γ ∈Γ sup Lip(h)≤1,h(0)=0 |E[h(ξ) | γ =γ ] − E[h(ξ) | γ =γ , ξ ∈ B N ]| ≤ (σ + b N ) exp − 1 2σ 2 b N − sup γ ∈Γ E[ ξ |γ =γ ] 2 ,
for all N ≥N . It is easy to see that the right hand side above divided by N /3 goes to 0 as N goes to infinity, so
$$\mathbb{P}^\infty\Big(\big\{d_1(\mathbb{Q}^N_{\bar\gamma}, \mathbb{Q}^N_{\bar\gamma|B_N}) > \tfrac{\epsilon_N}{3}\big\}\ \text{i.o.}\Big) = 0.$$
Term 3: $d_1(\mathbb{Q}^N_{\bar\gamma|B_N}, \hat{\mathbb{P}}^N_{\bar\gamma})$. By the law of total probability,
$$\mathbb{P}^N\Big(d_1(\mathbb{Q}^N_{\bar\gamma|B_N}, \hat{\mathbb{P}}^N_{\bar\gamma}) > \tfrac{\epsilon_N}{3}\Big) \le \mathbb{P}^N(I_N = 0) + \mathbb{P}^N\Big(d_1(\mathbb{Q}^N_{\bar\gamma|B_N}, \hat{\mathbb{P}}^N_{\bar\gamma}) > \tfrac{\epsilon_N}{3}\ \Big|\ I_N = 1\Big).$$
We now show that each of the above terms has a finite sum over $N$. First,
∞ N =1 P N (I N = 0) ≤ ∞ N =1 N sup γ ∈Γ P(ξ / ∈ B N | γ =γ ) ≤ ∞ N =1 N sup γ ∈Γ exp − (b N − E [ ξ | γ =γ ]) 2 2σ 2 < ∞.
The first inequality follows from the union bound, the second inequality follows from Assumption 2, and the final inequality follows because supγ ∈Γ E[ ξ |γ =γ ] < ∞ and the definition of b N .
Second, for each l ∈ N, we define several quantities. Let P l be the partitioning of
B N = [−b N , b N ] d ξ into 2 ld ξ translations of (−b N 2 −l , b N 2 −l ] d ξ .
Let H l be the set of piecewise constant functions which are constant on each region of the partition P l , taking values on {kb N 2 −l : k ∈ {0, ±1, ±2, ±3, . . . , ±2 l }}. Note that |H l | = (2 l+1 + 1) 2 ld ξ . Then, we observe that for all Lipschitz functions Lip(h) ≤ 1 which satisfy h(0) = 0, there exists aĥ ∈ H l such that
sup ζ∈B N |h(ζ) −ĥ(ζ)| ≤ b N 2 −l+1 .
Indeed, within each region of the partition, h can vary by no more than b N 2 −l+1 . The possible function values forĥ are separated by b N 2 −l . Because h is bounded by ±b N , this implies the existence ofĥ ∈ H l such thatĥ has a value within b N 2 −l+1 of h everywhere within that region. The identical reasoning holds for all other regions of the partition.
Therefore, for every l ∈ N,
P N d 1 (Q N γ|B N ,P N γ ) > N 3 I N = 1 = P N sup Lip(h)≤1 h(0)=0 N i=1 w i N (γ) h(ξ i ) − E[h(ξ) | γ = γ i , ξ ∈ B N ] > N 3 I N = 1 ≤ P N sup h∈H l N i=1 w i N (γ) ĥ (ξ i ) − E ĥ (ξ) | γ = γ i , ξ ∈ B N > N 3 − 2 · b N 2 −l+1 I N = 1 ≤ |H l | sup h∈H l P N N i=1 w i N (γ) ĥ (ξ i ) − E ĥ (ξ) | γ = γ i , ξ ∈ B N > N 3 − b N 2 −l+2 I N = 1 ,
where the final inequality follows from the union bound. We choose l = 2 + log 2
6b N N , in which case N 3 − b N 2 −l+2 ≥ N 6 .
Furthermore, for all sufficiently large N ,
|H l | = (2 l+1 + 1) 2 ld ξ ≤ 96 b N N 24 d ξ (b N / N ) d ξ = exp 24 d ξ b N N d ξ log 96b N N .
Applying Hoeffding's inequality, and noting |ĥ(ξ i )| is bounded by b N when ξ i ∈ B N , we have the following for allĥ ∈ H l :
P N N i=1 w i N (γ) ĥ (ξ i ) − E[ĥ(ξ)|ξ ∈ B N , γ = γ i ] > N 6 I N = 1 = E P N N i=1 w i N (γ) ĥ (ξ i ) − E[ĥ(ξ)|ξ ∈ B N , γ = γ i ] > N 6 I N = 1, γ 1 , . . . , γ N I N = 1 ≤ E exp − 2 N 72 N i=1 (w i N (γ)) 2 b 2 N I N = 1 = E exp − 2 N 72 N i=1 (w i N (γ)) 2 b 2 N I N 1 P N (I N = 1) ≤ 2E exp − 2 N 72 N i=1 (w i N (γ)) 2 b 2 N ≤ 2 exp − k 2 N η 2 N 72b 2 N ,
for N sufficiently large that P(I N = 1) ≥ 1/2 and 2 N /72b 2 N < 1. Note that (8) was used for the final inequality. Combining these results, we have
P N d 1 (P N γ ,Q N γ|B N ) > N /3 I N = 1 ≤ 2 exp 24 d ξ b N N d ξ log 96b N N − k 2 2 N N η 72N b 2 N ,
for N sufficiently large. For some constants c 1 , c 2 > 0, and sufficiently large N , this is upper bounded by 2 exp −c 1 N η−2(p+q) + c 2 N d ξ (q+p) log N .
Since 0 < d ξ (p + q) < η − 2(p + q), we can conduct a limit comparison test with 1/N 2 to see that this term has a finite sum over N , which completes the proof.
Proof of main result
Theorem 2 provides the key ingredient for the proof of the main consistency result. We state one final assumption, which requires that the objective function of (1) is upper semicontinuous and bounded by linear functions of the uncertainty.
Assumption 5. For all $\pi \in \Pi$, $c^\pi(\zeta_1, \ldots, \zeta_T)$ is upper semicontinuous in $\zeta$ and $|c(\zeta, x)| \le C(1 + \|\zeta\|)$ for all $\zeta \in \Xi$ and some $C > 0$.
Under this assumption, the proof of Theorem 1 follows from Theorem 2 via arguments similar to those used by Esfahani and Kuhn (2018) and Bertsimas et al. (2018a). We state it fully in Appendix EC.2.
Tractable Approximations
In the previous sections, we presented the new framework of sample robust optimization with covariates and established its asymptotic optimality without any significant structural restrictions on the space of decision rules. In this section, we focus on tractable methods for approximately solving the robust optimization problems that result from this proposed framework. Specifically, we develop a formulation which uses auxiliary decision rules to approximate the cost function.
In combination with linear decision rules, this approach enables us to find high-quality decisions for real-world problems with more than ten stages in less than one minute, as we demonstrate in Section 6.
We focus in this section on dynamic optimization problems with cost functions of the form
$$c(\xi_1, \ldots, \xi_T, x_1, \ldots, x_T) = \sum_{t=1}^T \left( f_t^\top x_t + g_t^\top \xi_t + \min_{y_t \in \mathbb{R}^{d^y_t}} \left\{ h_t^\top y_t \ :\ \sum_{s=1}^t A_{t,s} x_s + \sum_{s=1}^t B_{t,s} \xi_s + C_t y_t \le d_t \right\} \right). \tag{10}$$
Such cost functions appear frequently in applications such as inventory management and supply chain networks. Unfortunately, it is well known that these cost functions are convex in the uncertainty ξ 1 , . . . , ξ T . Thus, even evaluating the worst-case cost over a convex uncertainty set is computationally demanding in general, as it requires the maximization of a convex function.
As an intermediary step towards developing an approximation scheme for (3) with the above cost function, we consider the following optimization problem:
$$\tilde v_N(\bar\gamma) := \min_{\substack{\pi \in \Pi,\\ y^i_t \in \mathcal{R}_t\ \forall i, t}}\ \sum_{i=1}^N w^i_N(\bar\gamma) \sup_{\zeta \in \mathcal{U}^i_N} \sum_{t=1}^T \Big( f_t^\top \pi_t(\zeta_1, \ldots, \zeta_{t-1}) + g_t^\top \zeta_t + h_t^\top y^i_t(\zeta_1, \ldots, \zeta_t) \Big)$$
$$\text{subject to} \quad \sum_{s=1}^t A_{t,s} \pi_s(\zeta_1, \ldots, \zeta_{s-1}) + \sum_{s=1}^t B_{t,s} \zeta_s + C_t y^i_t(\zeta_1, \ldots, \zeta_t) \le d_t \quad \forall \zeta \in \mathcal{U}^i_N,\ i \in \{1, \ldots, N\},\ t \in \{1, \ldots, T\}, \tag{11}$$
where $\mathcal{R}_t$ is the set of all functions $y : \Xi_1 \times \cdots \times \Xi_t \to \mathbb{R}^{d^y_t}$. In this problem, we have introduced auxiliary decision rules which capture the minimization portion of (10) in each stage. We refer to (11) as a multi-policy approach, as it involves different auxiliary decision rules for each uncertainty set. The following theorem shows that (11) is equivalent to (3).
Theorem 3. For cost functions of the form (10), $\tilde v_N(\bar\gamma) = \hat v_N(\bar\gamma)$.
Proof. See Appendix EC.3.
We observe that (11) involves optimizing over decision rules, and thus is computationally challenging to solve in general. Nonetheless, we can obtain a tractable approximation of (11) by further restricting the space of primary and auxiliary decision rules. For instance, we can restrict all primary and auxiliary decision rules as linear decision rules of the form
$$\pi_t(\zeta_1, \ldots, \zeta_{t-1}) = x_{t,0} + \sum_{s=1}^{t-1} X_{t,s} \zeta_s, \qquad y^i_t(\zeta_1, \ldots, \zeta_t) = y^i_{t,0} + \sum_{s=1}^{t} Y^i_{t,s} \zeta_s.$$
One can alternatively elect to use a richer class of decision rules, such as lifted linear decision rules (Chen and Zhang 2009, Georghiou et al. 2015). In all cases, feasible approximations that restrict the space of decision rules in (11) provide an upper bound on the cost $\tilde v_N(\bar\gamma)$ and produce decision rules that are feasible for (11).
The key benefit of the multi-policy approximation scheme is that it offers many degrees of freedom in approximating the nonlinear cost function. Specifically, in (11), a separate auxiliary decision rule y i t captures the value of the cost function for each uncertainty set in each stage. We approximate each y i t with a linear decision rule, which only needs to be locally accurate, i.e., accurate for realizations in the corresponding uncertainty set. As a result, (11) with linear decision rules results in significantly tighter approximations of (3) compared to using a single linear decision rule, y t , for all uncertainty sets in each stage. Moreover, these additional degrees of freedom come with only a mild increase in computation cost, and we substantiate these claims via computational experiments in Section 6.2. In Appendix EC.4, we provide the reformulation of the multi-policy approximation scheme with linear decision rules into a deterministic optimization problem using standard techniques from robust optimization.
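The following heavily simplified sketch (ours, not the paper's implementation) illustrates the multi-policy idea on a single-stage newsvendor with interval uncertainty sets $[\xi^i - \epsilon, \xi^i + \epsilon]$ intersected with the nonnegative orthant: a separate affine auxiliary rule $y^i(\zeta) = y^i_0 + y^i_1 \zeta$ bounds the holding/backorder cost for each uncertainty set, and because every expression is affine in $\zeta$, robustness over an interval reduces to its two endpoints. The data, cost parameters, and solver defaults are hypothetical.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
N, eps, c, h, b = 20, 5.0, 1.0, 0.25, 11.0
xi = rng.gamma(shape=20.0, scale=5.0, size=N)    # historical demands
w = np.full(N, 1 / N)                            # weights (uniform here)

x = cp.Variable(nonneg=True)                     # first-stage order quantity
y0, y1 = cp.Variable(N), cp.Variable(N)          # one affine auxiliary rule per uncertainty set
s = cp.Variable(N)                               # epigraph of the worst-case cost over U_i
cons = []
for i in range(N):
    for z in (max(xi[i] - eps, 0.0), xi[i] + eps):       # endpoints of U_i
        cons += [y0[i] + y1[i] * z >= h * (x - z),       # holding-cost epigraph
                 y0[i] + y1[i] * z >= b * (z - x),       # backorder-cost epigraph
                 s[i] >= c * x + y0[i] + y1[i] * z]      # worst case attained at an endpoint
prob = cp.Problem(cp.Minimize(w @ s), cons)
prob.solve()
print(f"order quantity: {float(x.value):.1f}, objective: {prob.value:.1f}")
```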
Computational Experiments
We perform computational experiments to assess the out-of-sample performance and computational tractability of the proposed methodologies across several applications. These examples are two-stage shipment planning (Section 6.1), dynamic inventory management (Section 6.2), and portfolio optimization (Section 6.3).
Table 1: Relationship of the four methods.
We compare several methods using different machine learning models. These methods include the proposed sample robust optimization with covariates, sample average approximation (SAA), the predictions to prescriptions (PtP) approach of Bertsimas and Kallus (2014), and sample robust optimization without covariates (SRO). In Table 1, we show that each of the above methods is a particular instance of (3) from Section 3. The methods in the left column ignore covariates by assigning equal weights to each uncertainty set, and the methods in the right column incorporate covariates by choosing the weights based on predictive machine learning. The methods in the top row do not incorporate any robustness, and the methods in the bottom row incorporate robustness via a positive $\epsilon_N$ in the uncertainty sets. In addition, for the dynamic inventory management example, we also implement and compare to the residual tree algorithm described in Ban et al.
(2018). In each experiment, the relevant methods are applied to the same training datasets, and their solutions are evaluated against a common testing dataset. Further details are provided in each of the following sections.
Shipment planning
We first consider a two-stage shipment planning problem in which a decision maker seeks to satisfy demand in several locations $\mathcal{L}$ from several production facilities $\mathcal{F}$ while minimizing production and transportation costs. Our problem setting closely follows Bertsimas and Kallus (2014), in which the decision maker has access to auxiliary covariates (promotions, social media, market trends), which may be predictive of future sales in each retail location. In the first stage, the decision maker produces $x_f \ge 0$ units in each facility $f \in \mathcal{F}$ at a cost of $p_1$ per unit. Additionally, after observing demand, the decision maker has the opportunity to produce additional units $y_f \ge 0$ in each facility at a cost of $p_2 > p_1$ per unit, and units are shipped from facilities to demand locations at a per-unit transportation cost of $c_{f\ell}$. The fulfillment of each unit of demand generates $r > 0$ in revenue. Given the above notation and dynamics, the cost incurred by the decision maker is
$$c(\xi, x) = \sum_{f \in \mathcal{F}} p_1 x_f - \sum_{\ell \in \mathcal{L}} r \xi_\ell + \min_{s \in \mathbb{R}^{\mathcal{F} \times \mathcal{L}}_+,\ y \in \mathbb{R}^{\mathcal{F}}_+} \left\{ \sum_{f \in \mathcal{F}} p_2 y_f + \sum_{f \in \mathcal{F}} \sum_{\ell \in \mathcal{L}} c_{f\ell} s_{f\ell} \ :\ \sum_{f \in \mathcal{F}} s_{f\ell} \ge \xi_\ell \ \ \forall \ell \in \mathcal{L}, \quad \sum_{\ell \in \mathcal{L}} s_{f\ell} \le x_f + y_f \ \ \forall f \in \mathcal{F} \right\}.$$
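A minimal sketch (ours) of evaluating the second-stage term of this cost for a given first-stage production vector x and realized demand ξ; the network data below are hypothetical placeholders rather than the instance used in the experiments.

```python
import cvxpy as cp
import numpy as np

def second_stage_cost(x, xi, trans_cost, p2=100.0):
    """min over s, y >= 0 of p2*sum(y) + sum(trans_cost * s), meeting demand from x + y."""
    F, L = trans_cost.shape
    s = cp.Variable((F, L), nonneg=True)         # shipments facility -> location
    y = cp.Variable(F, nonneg=True)              # additional production after demand is seen
    cons = [cp.sum(s, axis=0) >= xi,             # satisfy demand at every location
            cp.sum(s, axis=1) <= x + y]          # do not ship more than is available
    prob = cp.Problem(cp.Minimize(p2 * cp.sum(y) + cp.sum(cp.multiply(trans_cost, s))), cons)
    prob.solve()
    return prob.value

rng = np.random.default_rng(0)
trans_cost = rng.uniform(1.0, 10.0, size=(4, 12))        # hypothetical c_{f,l}
print(second_stage_cost(np.full(4, 300.0), rng.exponential(100.0, size=12), trans_cost))
```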
Experiments. We perform computational experiments using the same parameters and data generation procedure as Bertsimas and Kallus (2014). Specifically, we consider an instance with |F| = 4, |L| = 12, p 1 = 5, p 2 = 100, and r = 90. The network topology, transportation costs, and the joint distribution of the covariates γ ∈ R 3 and demands ξ ∈ R 12 are the same as Bertsimas and Kallus (2014), with the exception that we generate the covariates as i.i.d. samples as opposed to an ARMA process (but with the same marginal distribution).
In our experiments, we compare sample robust optimization with covariates, sample average approximation, sample robust optimization, and predictions to prescriptions. For the robust approaches (bottom row of Table 1), we construct the uncertainty sets from Section 3 using the $\ell_1$ norm and $\Xi = \mathbb{R}^{12}_+$, solve these problems using the multi-policy approximation with linear decision rules described in Section 5, and consider uncertainty sets with radius $\epsilon_N \in \{100, 500\}$. For the approaches using covariates (right column of Table 1), we used $k_N$-nearest neighbor weights with parameter $k_N = 2N/5$. All solutions were evaluated on a test set of size 100, and the results were averaged over 100 independent training sets.
Results. In Figure 1, we present the average out-of-sample profits of the various methods. The results show that the best out-of-sample average profit is attained when using the proposed sample robust optimization with covariates. Interestingly, we observe no discernible differences between sample average approximation and sample robust optimization in Figure 1, suggesting the value gained by incorporating covariates in this example. Compared to the approach of Bertsimas and Kallus (2014), sample robust optimization with covariates achieves a better out-of-sample average performance for each choice of . Table 2 shows that these differences are statistically significant.
This example demonstrates that, in addition to enjoying asymptotic optimality guarantees, sample robust optimization with covariates provides meaningful value across various values of N .
Dynamic inventory management
We next consider a dynamic inventory control problem over the first $T = 12$ weeks of a new product. In each week, a retailer observes demand for the product and can replenish inventory by procuring from multiple suppliers.
Figure 1: Out-of-sample profit for the shipment planning example.
Table 2: The p-values from the Wilcoxon signed rank test for comparison with the predictive to prescriptive analytics method (PtP-kNN) and sample robust optimization with covariates (SRO-kNN). After adjusting for multiple hypothesis testing, all results are significant at the α = 0.05 significance level.
Problem Description. In each stage $t \in \{1, \ldots, T\}$, the retailer procures inventory from multiple suppliers to satisfy demand for a single product. The demands for the product across stages are denoted by $\xi_1, \ldots, \xi_T \ge 0$. In each stage $t$, and before the demand $\xi_t$ is observed, the retailer places procurement orders at various suppliers indexed by $\mathcal{J} = \{1, \ldots, |\mathcal{J}|\}$. Each supplier $j \in \mathcal{J}$ has a per-unit order cost of $c_{tj} \ge 0$ and a lead time of $\ell_j$ stages. At the end of each stage, the firm incurs a per-unit holding cost of $h_t$ and a per-unit backorder cost of $b_t$. Unmet demand is fully backlogged, and the firm starts with zero initial inventory. The cost incurred by the firm over the time horizon is
$$c(\xi_1, \ldots, \xi_T, x_1, \ldots, x_T) = \sum_{t=1}^T \left( \sum_{j \in \mathcal{J}} c_{tj} x_{tj} + \min_{y_t \in \mathbb{R}} \left\{ y_t \ :\ y_t \ge h_t \Big( \sum_{j \in \mathcal{J}} \sum_{s=1}^{t - \ell_j} x_{sj} - \sum_{s=1}^t \xi_s \Big),\ \ y_t \ge -b_t \Big( \sum_{j \in \mathcal{J}} \sum_{s=1}^{t - \ell_j} x_{sj} - \sum_{s=1}^t \xi_s \Big) \right\} \right).$$
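A minimal sketch (ours) that evaluates this cost for fixed order quantities and a realized demand path, with two suppliers whose lead times are zero and one stage, mirroring the setup described in the experiments below; the order matrix used in the example is hypothetical.

```python
import numpy as np

def inventory_cost(x, xi, c=(1.0, 0.5), lead=(0, 1), h=0.25, b=11.0):
    """x[t, j]: order placed with supplier j in stage t; xi[t]: demand in stage t."""
    T = len(xi)
    total = float(np.sum(np.asarray(c) * x))                     # ordering costs
    for t in range(T):
        arrived = sum(x[s, j] for j in range(x.shape[1])         # orders delivered by stage t
                      for s in range(max(t - lead[j] + 1, 0)))
        net = arrived - float(np.sum(xi[: t + 1]))               # inventory position after demand
        total += h * max(net, 0.0) + b * max(-net, 0.0)          # holding / backorder cost
    return total

rng = np.random.default_rng(0)
print(inventory_cost(np.full((12, 2), 50.0), rng.exponential(100.0, size=12)))
```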
Experiments. The parameters of the procurement problem were chosen based on Ban et al.
(2018). Specifically, we consider the case of two suppliers where $c_{t1} = 1.0$, $c_{t2} = 0.5$, $h_t = 0.25$, and $b_t = 11$ for each stage. The first supplier has no lead time and the second supplier has a lead time of one stage. We generate training and test data from the same distribution as the shipment planning problem in Section 6.1; in this case, the demands produced by this process are interpreted as the demands over the $T = 12$ stages. We perform computational experiments comparing the proposed sample robust optimization with covariates and the residual tree algorithm of Ban et al. (2018). In particular, we compare sample robust optimization with covariates with the multi-policy approximation as well as without it (in which case we use a single auxiliary linear decision rule for $y_t$ across all uncertainty sets in each stage). The uncertainty sets from Section 3 are defined with the $\ell_2$ norm and $\Xi = \mathbb{R}^{12}_+$. The out-of-sample costs resulting from the decision rules were averaged over 100 training sets of size $N = 40$ and 100 testing points, and sample robust optimization with covariates used k-nearest neighbors with varying choices of $k$ and radius $\epsilon_N \ge 0$ of the uncertainty sets.
Results. In Table 3, we show the average out-of-sample cost resulting from sample robust optimization with covariates using linear decision rules, with and without the multi-policy approximation from Section 5. In both settings, we used k-nearest neighbors as the machine learning method and evaluated the out-of-sample performance by applying the linear decision rules for the ordering quantities. The results of these computational experiments in Table 3 demonstrate that significant improvements in average out-of-sample performance are found when combining the multi-policy approximation with covariates via k-nearest neighbors. We show in Table 4 that these results are statistically significant. For comparison, we also implemented the residual tree algorithm from Ban et al. (2018). When using their algorithm with a binning of $B = 2$ in each stage, their approach resulted in an average out-of-sample cost of 27142. We were unable to run their algorithm with a binning of $B = 3$.
Table 3: Average out-of-sample cost for the dynamic procurement problem, using sample robust optimization with $N = 40$. For each uncertainty set radius and parameter $k$, the average was taken over 100 training sets and 100 test points. The optimal value is indicated in bold. The residual tree algorithm with a binning of $B = 2$ in each stage gave an average out-of-sample cost of 27142.
Table 4: Statistical significance for the dynamic procurement problem. The p-values of the Wilcoxon signed rank test for comparison with sample robust optimization using linear decision rules with multi-policy, $k = 20$, and $\epsilon_N = 400$. An asterisk denotes that the p-value was less than $10^{-8}$. After adjusting for multiple hypothesis testing, each result is significant at the α = 0.05 significance level if its p-value is less than $0.05/63 \approx 7.9 \times 10^{-4}$.

Method                                                 k     eps=0   100   200   300   400         500       600   700
Linear decision rules, no covariates                   -     *       *     *     *     *           *         *     *
Linear decision rules, k-nearest neighbors             26    *       *     *     *     *           *         *     *
Linear decision rules, k-nearest neighbors             20    *       *     *     *     *           *         *     *
Linear decision rules, k-nearest neighbors             13    *       *     *     *     *           *         *     *
Linear decision rules with multi-policy, no covariates -     *       *     *     *     *           *         *     *
Linear decision rules with multi-policy, kNN           26    *       *     *     *     1.4 x 10^-5 *         *     *
Linear decision rules with multi-policy, kNN           20    *       *     *     *     -           *         *     *
Linear decision rules with multi-policy, kNN           13    *       *     *     *     5.8 x 10^-3 1 x 10^-3 *     *
Portfolio optimization
Finally, we consider a single-stage portfolio optimization problem in which we wish to find an allocation of a fixed budget to n assets. Our goal is to simultaneously maximize the expected return while minimizing the conditional value at risk (cVaR) of the portfolio. Before selecting our portfolio, we observe auxiliary covariates which include general market indicators such as index performance as well as macroeconomic numbers released by the US Bureau of Labor Statistics.
Problem Description. We denote the portfolio allocation among the assets by $x \in \mathcal{X} := \{x \in \mathbb{R}^n_+ : \sum_{j=1}^n x_j = 1\}$, and the returns of the assets by the random variables $\xi \in \mathbb{R}^n$. The conditional value at risk at the $\alpha \in (0,1)$ level measures the expected loss of the portfolio, conditional on losses being above the $1-\alpha$ quantile of the loss distribution. Rockafellar and Uryasev (2000) showed that the cVaR of a portfolio can be computed as the optimal objective value of a convex minimization problem. Therefore, our portfolio optimization problem can be expressed as a convex optimization problem with an auxiliary decision variable $\beta \in \mathbb{R}$. Thus, given an observation $\bar\gamma$ of the auxiliary covariates, our goal is to solve
$$\min_{x \in \mathcal{X},\ \beta \in \mathbb{R}}\ \mathbb{E}\left[\beta + \frac{1}{\alpha} \max(0, -x^\top \xi - \beta) - \lambda x^\top \xi \,\Big|\, \gamma = \bar\gamma\right], \tag{12}$$
where $\lambda \in \mathbb{R}_+$ is a trade-off parameter that balances the risk and return objectives. For the robust approaches (bottom row of Table 1), we construct the uncertainty sets from Section 3 using the $\ell_1$ norm. For each training sample size, we compute the out-of-sample objective on a test set of size 1000, and we average the results over 100 instances of training data.
In order to select $\epsilon_N$ and other tuning parameters associated with the machine learning weight functions, we first split the data into a training set and a validation set. We then train the weight functions using the training set, compute decisions for each of the instances in the validation set, and compute the out-of-sample cost on the validation set. We repeat this for a variety of parameter values and select the combination that achieves the best cost on the validation set.
Figure 2: Out-of-sample objective for the portfolio optimization example.
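A minimal sketch (ours) of this validation loop; solve_sro and evaluate_cost are hypothetical stand-ins for the problem-specific routines that compute a decision from the training split and score it on a validation pair.

```python
import numpy as np

def tune_radius(train, val, eps_grid, solve_sro, evaluate_cost):
    """Pick the uncertainty-set radius with the best average validation cost."""
    best_eps, best_cost = None, np.inf
    for eps in eps_grid:
        costs = [evaluate_cost(solve_sro(train, gamma_v, eps), xi_v)
                 for gamma_v, xi_v in val]          # out-of-sample cost on the validation set
        if np.mean(costs) < best_cost:
            best_eps, best_cost = eps, float(np.mean(costs))
    return best_eps
```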
Following a similar reformulation approach as Esfahani and Kuhn (2018), we solve the robust approaches exactly by observing that
$$\begin{aligned}
&\min_{x \in \mathcal{X},\ \beta \in \mathbb{R}}\ \sum_{i=1}^N w^i_N(\bar\gamma) \sup_{\zeta \in \mathcal{U}^i_N} \left( \beta + \frac{1}{\alpha} \max\{0, -x^\top \zeta - \beta\} - \lambda x^\top \zeta \right) \\
&= \min_{x \in \mathcal{X},\ \beta \in \mathbb{R}}\ \sum_{i=1}^N w^i_N(\bar\gamma) \sup_{\zeta \in \mathcal{U}^i_N} \max\left\{ \beta - \lambda x^\top \zeta,\ \Big(1 - \frac{1}{\alpha}\Big)\beta - \Big(\frac{1}{\alpha} + \lambda\Big) x^\top \zeta \right\} \\
&= \min_{x \in \mathcal{X},\ \beta \in \mathbb{R}}\ \sum_{i=1}^N w^i_N(\bar\gamma) \max\left\{ \sup_{\zeta \in \mathcal{U}^i_N} \big( \beta - \lambda x^\top \zeta \big),\ \sup_{\zeta \in \mathcal{U}^i_N} \Big( \Big(1 - \frac{1}{\alpha}\Big)\beta - \Big(\frac{1}{\alpha} + \lambda\Big) x^\top \zeta \Big) \right\} \\
&= \min_{\substack{x \in \mathcal{X},\ \beta \in \mathbb{R},\\ v \in \mathbb{R}^N}} \left\{ \sum_{i=1}^N w^i_N(\bar\gamma)\, v_i \ :\ v_i \ge \beta - \lambda x^\top \zeta,\ \ v_i \ge \Big(1 - \frac{1}{\alpha}\Big)\beta - \Big(\frac{1}{\alpha} + \lambda\Big) x^\top \zeta \quad \forall \zeta \in \mathcal{U}^i_N,\ i \in \{1, \ldots, N\} \right\}.
\end{aligned}$$
The final expression can be reformulated as a deterministic optimization problem by reformulating the robust constraints.
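A minimal sketch (ours, not necessarily the authors' implementation) of one way to carry out this reformulation for $\ell_1$ uncertainty sets: since the dual of the $\ell_1$ norm is the $\ell_\infty$ norm, $\sup_{\|\zeta - \xi^i\|_1 \le \epsilon} (-a\, x^\top \zeta) = -a\, x^\top \xi^i + \epsilon\, a \|x\|_\infty$ for $a \ge 0$. The data, weights, and parameter values are hypothetical.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
N, n, eps, alpha, lam = 100, 8, 0.02, 0.1, 1.0
xi = 0.01 * rng.normal(size=(N, n))              # historical asset returns xi^i
w = np.full(N, 1 / N)                            # weights from a learner (uniform here)

x = cp.Variable(n, nonneg=True)                  # portfolio allocation
beta, v = cp.Variable(), cp.Variable(N)          # cVaR auxiliary variable and epigraph terms
rob = eps * cp.norm(x, "inf")                    # worst-case correction over the l1 ball
cons = [cp.sum(x) == 1,
        v >= beta - lam * (xi @ x) + lam * rob,
        v >= (1 - 1 / alpha) * beta - (1 / alpha + lam) * (xi @ x) + (1 / alpha + lam) * rob]
prob = cp.Problem(cp.Minimize(w @ v), cons)
prob.solve()
print(prob.value)
```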
Results. In Figure 2, we show the average out-of-sample objective values using the various methods. Consistent with the computational results of Esfahani and Kuhn (2018) and Bertsimas and Van Parys (2017), the results underscore the importance of robustness in preventing overfitting and achieving good out-of-sample performance in the small-data regime. Indeed, we observe that the sample average approximation, which ignores the auxiliary data, outperforms PtP-kNN and PtP-CART when the amount of training data is limited. We believe this is due to the fact that the latter methods both throw out training examples, so the methods overfit when the training data is limited, leading to poor out-of-sample performance. In contrast, our methods (SRO-kNN and SRO-CART) typically achieve the strongest out-of-sample performance, even though the amount of training data is limited.
Conclusion
In this paper, we introduced sample robust optimization with covariates, a new framework for solving dynamic optimization problems with side information. Through three computational examples, we demonstrated that our method achieves significantly better out-of-sample performance than scenario-based alternatives. We complemented these empirical observations with theoretical analysis, showing our nonparametric method is asymptotically optimal via a new concentration measure result for local learning methods. Finally, we showed our approach inherits the tractability of robust optimization, scaling to problems with many stages via the multi-policy approximation scheme.
Xin Chen and Yuhan Zhang. Uncertain linear programs: extended affinely adjustable robust counterparts. Operations Research, 57(6), 2009.
$$\sum_{i=1}^N w^i_N(\bar\gamma) = 1 \ \text{ and } \ w^1_N(\bar\gamma), \ldots, w^N_N(\bar\gamma) \ge 0, \quad \forall N \in \mathbb{N}. \tag{5}$$
Moreover, there exist constants $k_2 > 0$ and $\eta > p(2 + d_\xi)$ such that
$$\lim_{N \to \infty} \frac{1}{\epsilon_N} \sum_{i=1}^N w^i_N(\bar\gamma) \|\gamma^i - \bar\gamma\| = 0, \quad \mathbb{P}^\infty\text{-almost surely}; \tag{7}$$
$$\mathbb{E}_{\mathbb{P}^N}\left[\exp\!\left(-\frac{\theta}{\sum_{i=1}^N w^i_N(\bar\gamma)^2}\right)\right] \le \exp(-k_2 \theta N^\eta), \quad \forall \theta \in (0,1),\ N \in \mathbb{N}. \tag{8}$$
Proof. We observe that (4) and (5) follow directly from the definitions of the weight functions.
The proofs of (7) and (8) are split into two parts, one for the k-nearest neighbor weights and one for kernel regression weights.
k-Nearest Neighbors: For the proof of (7), we note
$$\sum_{i=1}^N w^i_N(\bar\gamma) \|\gamma^i - \bar\gamma\| \le \|\gamma^{(k_N)}(\bar\gamma) - \bar\gamma\|,$$
where $\gamma^{(k_N)}(\bar\gamma)$ denotes the $k_N$th nearest neighbor of $\bar\gamma$ out of $\gamma^1, \ldots, \gamma^N$. Therefore, for any $\lambda > 0$,
$$\mathbb{P}^N\left( \sum_{i=1}^N w^i_N(\bar\gamma) \|\gamma^i - \bar\gamma\| > \lambda \epsilon_N \right) \le \mathbb{P}^N\left( \|\gamma^{(k_N)}(\bar\gamma) - \bar\gamma\| > \lambda \epsilon_N \right) \le \mathbb{P}^N\left( \big|\{i : \|\gamma^i - \bar\gamma\| \le \lambda \epsilon_N\}\big| \le k_N - 1 \right).$$
By Assumption 4, this probability is upper bounded by $\mathbb{P}(\beta \le k_N - 1)$, where $\beta \sim \mathrm{Binomial}(N, g(\lambda \epsilon_N)^{d_\gamma})$. By Hoeffding's inequality,
$$\mathbb{P}^N\left( \sum_{i=1}^N w^i_N(\bar\gamma) \|\gamma^i - \bar\gamma\| > \lambda \epsilon_N \right) \le \exp\left( -\frac{2\big(N g(\lambda k_1 / N^p)^{d_\gamma} - k_N + 1\big)^2}{N} \right),$$
for $k_N \le N g(\lambda k_1 / N^p)^{d_\gamma} + 1$. We note that this condition on $k_N$ is satisfied for $N$ sufficiently large because $\delta + p d_\gamma < 1$ by Assumption 1. Because the right-hand side of the above inequality has a finite sum over $N$, (7) follows by the Borel-Cantelli lemma.
For the proof of (8), it follows from Assumption 1 that
$$\sum_{i=1}^N w^i_N(\bar\gamma)^2 \le k_3 N^{1 - 2\delta}$$
deterministically (for all sufficiently large $N$ such that $k_3 N^\delta \le N - 1$), and $2\delta - 1 > p(2 + d_\xi)$ by Assumption 1.
Thus, (8) follows with $\eta = 2\delta - 1$.
Kernel regression: Assumption 1 stipulates that the kernel function K(·) is Gaussian, triangular, or Epanechnikov, which are defined in Section 3. It is easy to verify that these kernel functions satisfy the following:
1. K is nonnegative, finite valued, and monotonically decreasing (for nonnegative inputs).
2. u α K(u) → 0 as u → ∞ for any α ∈ R.
3. ∃u * > 0 such that K(u * ) > 0.
For the proof of (7), define q > 0 such that p < q < δ. Letting D be the diameter of Γ and g N (γ) =
N i=1 K( γ i −γ /h N ), we have N i=1 w i N (γ) γ i −γ = N i=1 w i N (γ)1{ γ i −γ ≤ N −q } γ i −γ + 1 g N (γ) N i=1 K γ i −γ h N 1{ γ i −γ > N −q } γ i −γ ≤ N −q + N DK(N −q /h N ) g N (γ) ,
where the inequality follows from the monotonicity of K. By construction, N −q / N → 0, so we just need to handle the second term. We note, for any λ > 0,
P N N DK(N −q /h N ) g N (γ) > λ N ≤ P N N i=1 Z N i K(u * ) < N DK(N −q /h N ) λ N , where Z N i = 1{ γ i −γ ≤ u * h N }.
To achieve this inequality, we lower bounded each term in g N (γ) by K(u * ) or 0, because of the monotonicity of K. By Hoeffding's inequality, for some constants k 5 , k 6 > 0 that do not depend on N . We used Assumption 4 for the second inequality. Because δ > q, the second kernel property implies N 1/2+p K(k 4 N −q+δ ) goes to 0 as N goes to infinity, so that term is irrelevant. Because 1/2 − δd γ > 0 by Assumption 1, the right hand side of the inequality has a finite sum over N , and thus (7) follows from the Borel Cantelli lemma.
P N N i=1 Z N i K(u * ) < N DK(N −q /h N ) λ N ≤ exp − 2 N EZ N i − N D λ N K(u * ) K(N −q /h N ) 2 + N ≤ exp − 2 N g(u * h N ) dγ − N D λ N K(u * ) K(N −q /h N ) 2 + N = exp − k 5 N 1/2−δdγ − k 6 N 1/2+p K(k 4 N −q+δ ) 2 + ,
For the proof of (8), define
v N = K( γ 1 −γ /h N ) . . . K( γ N −γ /h N ) .
We note that
N i=1 w i N (γ) 2 = v N 2 2 v N 2 1 ≤ v N ∞ v N 1 ≤ K(0) K(u * ) N i=1 Z N i ,
where Z N i is defined above. The first inequality follows from Holder's inequality, and the second inequality follows from the monotonicity of K. Next, we defineZ N i to be a Bernoulli random variable with parameter g(u * h N ) dγ for each i. For any θ ∈ (0, 1),
E P N exp −θ N i=1 w i N (γ) 2 ≤ E P N exp −θK(u * ) N i=1Z N i K(0) = 1 − g(u * h N ) dγ + g(u * h N ) dγ exp(−θK(u * )/K(0)) N ≤ exp −N g(u * h N ) dγ (1 − exp(−θK(u * )/K(0))) ≤ exp −N g(u * h N ) dγ θK(u * ) 2K(0) = exp − θK(u * )g(k 4 u * ) dγ N 1−δdγ 2K(0) .
The first inequality follows because g(u * h N ) dγ is an upper bound on P( γ i −γ ≤ u * h N ) by Assumption 4. The first equality follows from the definition of the moment generating function for a binomial random variable. The next line follows from the inequality e x ≥ 1 + x and the following from the inequality 1 − e −x ≥ x/2 for 0 ≤ x ≤ 1. Because 1 − δd γ > p(2 + d ξ ), this completes the proof of (8) with η = 1 − δd γ and k 2 = K(u * )g(k 4 u * ) dγ /2K(0).
EC.2. Proof of Theorem 1
In this section, we present our proof of Theorem 1. First, we must introduce some necessary terminology. To connect Theorem 2 to sample robust optimization, we consider the ∞-Wasserstein metric, which is given by:
$$d_\infty(\mathbb{Q}, \mathbb{Q}') := \inf\Big\{ \Pi\text{-}\operatorname*{ess\,sup}_{\Xi \times \Xi} \|\xi - \xi'\| \ :\ \Pi \text{ is a joint distribution of } \xi \text{ and } \xi' \text{ with marginals } \mathbb{Q} \text{ and } \mathbb{Q}', \text{ respectively} \Big\},$$
where the essential supremum of the joint distribution is defined as
$$\Pi\text{-}\operatorname*{ess\,sup}_{\Xi \times \Xi} \|\xi - \xi'\| = \inf\big\{ M : \Pi\big( \|\xi - \xi'\| > M \big) = 0 \big\}.$$
We make use of the following result from Bertsimas et al. (2018a):
Lemma EC.1. For any measurable $f : \Xi \to \mathbb{R}$,
$$\sum_{i=1}^N w^i_N(\bar\gamma) \sup_{\zeta \in \mathcal{U}^i_N} f(\zeta) = \sup_{\mathbb{Q} \in \mathcal{P}(\Xi):\ d_\infty(\hat{\mathbb{P}}^N_{\bar\gamma}, \mathbb{Q}) \le \epsilon_N} \mathbb{E}_{\xi \sim \mathbb{Q}}[f(\xi)].$$
The proof of Lemma EC.1 follows identical reasoning as in Bertsimas et al. (2018a) and is thus omitted.
Next, we state a result from Bertsimas et al. (2018a) (their Theorem EC.1), which bounds the difference in worst case objective values between 1-Wasserstein and ∞-Wasserstein distributionally robust optimization problems. We note that Bertsimas et al. (2018a) proved the following result for the case that Q is the unweighted empirical measure, but their proof carries through for the case here in which Q is a weighted empirical measure.
Lemma EC.2. Let $\mathcal{Z} \subseteq \mathbb{R}^d$, $f : \mathcal{Z} \to \mathbb{R}$ be measurable, and $\zeta^1, \ldots, \zeta^N \in \mathcal{Z}$. Suppose that $\mathbb{Q} = \sum_{i=1}^N w_i \delta_{\zeta^i}$ for given weights $w_1, \ldots, w_N \ge 0$ that sum to one. If $\theta_2 \ge 2\theta_1 \ge 0$, then
$$\sup_{\mathbb{Q}' \in \mathcal{P}(\mathcal{Z}):\ d_1(\mathbb{Q}', \mathbb{Q}) \le \theta_1} \mathbb{E}_{\xi \sim \mathbb{Q}'}[f(\xi)] \le \sup_{\mathbb{Q}' \in \mathcal{P}(\mathcal{Z}):\ d_\infty(\mathbb{Q}', \mathbb{Q}) \le \theta_2} \mathbb{E}_{\xi \sim \mathbb{Q}'}[f(\xi)] + \frac{4\theta_1}{\theta_2} \sup_{\zeta \in \mathcal{Z}} |f(\zeta)|.$$
We now restate and prove the main result, which combines the new measure concentration result from this paper with similar proof techniques as Bertsimas et al. (2018a) and Esfahani and Kuhn (2018).
Theorem 1. Suppose the weight function and uncertainty sets satisfy Assumption 1, the joint probability distribution of $(\gamma, \xi)$ satisfies Assumptions 2-4 from Section 4.3, and the cost function satisfies Assumption 5 from Section 4.4. Then, for every $\bar\gamma \in \Gamma$,
$$\lim_{N \to \infty} \hat v_N(\bar\gamma) = v^*(\bar\gamma), \quad \mathbb{P}^\infty\text{-almost surely}.$$
Proof. We break the limit into upper and lower parts. The proof of the lower part follows from an argument similar to that used by Bertsimas et al. (2018a). The proof of the upper part follows from the argument used by Esfahani and Kuhn (2018). To begin, we define
$$D_N := \{\zeta : \|\zeta\| \le \log N\},$$
and let $\mathbb{P}_{\bar\gamma|D_N}(\cdot)$ be shorthand for $\mathbb{P}(\cdot \mid \gamma = \bar\gamma,\ \xi \in D_N)$. Then, applying Assumption 2,
P N ∪ N i=1 U i N ⊆ D N ≤ P max i≤N ξ i + N > log N ≤ N P( ξ > log N − N ) = N E [P( ξ − E[ ξ | γ] > log N − N − E[ ξ | γ] | γ)] ≤ N E P ξ − E[ ξ | γ] > log N − N − sup γ ∈Γ E[ ξ | γ = γ ] | γ ≤ N E 2 exp − (log N − N − sup γ ∈Γ E[ ξ | γ = γ ]) 2 2σ 2 = 2 exp log N − (log N − N − sup γ ∈Γ E[ ξ | γ = γ ]) 2 2σ 2 , (EC.2)
which has a finite sum over N ∈ N. Therefore, by the Borel-Cantelli lemma, there exists N 0 ∈ N,
P ∞ -almost surely, such that ∪ N i=1 U i N ⊆ D N ∀N ≥ N 0 .
We now choose any r > 0 such that N N −r satisfies Assumption 1, and define N 1 := max{N 0 , 2 1 r }.
Then, the following holds for all N ≥ N 1 and π ∈ Π:
sup Q∈P(D N ∩Ξ): d 1( Q,P N γ )≤ N N r E ξ∼Q [c π (ξ 1 , . . . , ξ T )] ≤ sup Q∈P(D N ∩Ξ): d∞(Q,P N γ )≤ N E ξ∼Q [c π (ξ 1 , . . . , ξ T )] + 4 N r sup ζ∈D N ∩Ξ |c π (ζ 1 , . . . , ζ T )| = N i=1 w N i (γ) sup ζ∈U i N c π (ζ 1 , . . . , ζ T ) + 4 N r sup ζ∈D N ∩Ξ |c π (ζ 1 , . . . , ζ T )| ≤ N i=1 w N i (γ) sup ζ∈U i N c π (ζ 1 , . . . , ζ T ) + 4C N r (1 + log N ). (EC.3)
Indeed, the first supremum satisfies the conditions of Lemma EC.2 since N ≥ N 0 and N ≥ 2 1 r , and the equality follows from Lemma EC.1 since N ≥ N 0 . The final inequality follows from Assumption 5 and the construction of D N . We observe that the second term on (EC.3) converges to zero as N → ∞. Next, we observe that We handle the first term with the Cauchy-Schwartz inequality,
E[c π (ξ 1 , . . . , ξ T ) | γ =γ] E ξ∼Pγ [c π (ξ 1 , . . . , ξ T )] = E ξ∼Pγ [c π (ξ 1 , . . . , ξ T )1{ξ / ∈ D N }] + E ξ∼Pγ [c π (ξ 1 , . . . , ξ T )1{ξ / ∈ D N }].E ξ∼Pγ [c π (ξ 1 , . . . , ξ T )1{ξ / ∈ D N }] ≤ E ξ∼Pγ [c π (ξ 1 , . . . , ξ T ) 2 ]Pγ(ξ / ∈ D N ).
By Assumptions 2 and 5, the above bound is finite and converges to zero as N → ∞ uniformly over π ∈ Π. We handle the second termby the new concentration measure from this paper. Specifically, it follows from Theorem 2 that there exists an N 2 ≥ N 1 , P ∞ -almost surely, such that
d 1 (Pγ,P N γ ) ≤ N N r ∀N ≥ N 2 .
Therefore, for all N ≥ N 2 and decision rules π ∈ Π:
E ξ∼Pγ [c π (ξ 1 , . . . , ξ T )1{ξ ∈ D N }] = E ξ∼Pγ c π (ξ 1 , . . . , ξ T ) − inf ζ∈D N ∩Ξ c π (ζ 1 , . . . , ζ T ) 1{ξ ∈ D N } + Pγ(ξ ∈ D N ) inf ζ∈D N ∩Ξ c π (ζ 1 , . . . , ζ T ) α N ≤ sup Q∈P(Ξ): d 1( Q,P N γ )≤ N N r E ξ∼Q c π (ξ 1 , . . . , ξ T ) − inf ζ∈D N ∩Ξ c π (ζ 1 , . . . , ζ T ) 1{ξ ∈ D N } + α N = sup Q∈P(Ξ∩D N ): d 1( Q,P N γ )≤ N N r E ξ∼Q c π (ξ 1 , . . . , ξ T ) − inf ζ∈D N ∩Ξ c π (ζ 1 , . . . , ζ T ) + α N = sup Q∈P(Ξ∩D N ): d 1( Q,P N γ )≤ N N r E ξ∼Q [c π (ξ 1 , . . . , ξ T )] − Pγ(ξ / ∈ D N ) inf ζ∈D N ∩Ξ c π (ζ 1 , . . . , ζ T ).
Indeed, the inequality follows because $N \ge N_2$. It follows from Assumption 5 and (EC.2) that the second term in the final equality converges to zero as $N \to \infty$ uniformly over $\pi \in \Pi$. Combining the above, we conclude that
$$\liminf_{N\to\infty} \hat{v}_N(\bar\gamma) = \liminf_{N\to\infty} \inf_{\pi\in\Pi} \sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in U^i_N} c_\pi(\zeta_1,\ldots,\zeta_T) \ge \inf_{\pi\in\Pi}\mathbb{E}[c_\pi(\xi_1,\ldots,\xi_T)\mid\gamma=\bar\gamma] = v^*(\bar\gamma),$$
where the inequality holds $\mathbb{P}^\infty$-almost surely. This completes the proof of (EC.1).
Upper bound. We now prove that
$$\limsup_{N\to\infty} \hat{v}_N(\bar\gamma) \le v^*(\bar\gamma), \quad \mathbb{P}^\infty\text{-almost surely.} \qquad \text{(EC.4)}$$
Indeed, for arbitrary $\delta > 0$, let $x_\delta \in \mathcal{X}$ be a $\delta$-optimal solution for (1). By Esfahani and Kuhn (2018, Lemma A.1) and Assumption 5, there exists a non-increasing sequence of functions $f_j(\zeta_1,\ldots,\zeta_T)$, $j \in \mathbb{N}$, such that $\lim_{j\to\infty} f_j(\zeta_1,\ldots,\zeta_T) = c_{x_\delta}(\zeta_1,\ldots,\zeta_T)$ for all $\zeta \in \Xi$, and each $f_j$ is $L_j$-Lipschitz continuous. Furthermore, for each $N \in \mathbb{N}$, choose any probability distribution $Q_N \in \mathcal{P}(\Xi)$ such that $d_1(Q_N, \hat{\mathbb{P}}^N_{\bar\gamma}) \le \epsilon_N$ and
$$\sup_{\substack{Q\in\mathcal{P}(\Xi):\\ d_1(Q,\hat{\mathbb{P}}^N_{\bar\gamma})\le \epsilon_N}} \mathbb{E}_{\xi\sim Q}[c_{x_\delta}(\xi_1,\ldots,\xi_T)] \le \mathbb{E}_{\xi\sim Q_N}[c_{x_\delta}(\xi_1,\ldots,\xi_T)] + \delta.$$
For any j ∈ N,
$$\begin{aligned}
\limsup_{N\to\infty} \hat{v}_N(\bar\gamma)
&\le \limsup_{N\to\infty} \sup_{\substack{Q\in\mathcal{P}(\Xi):\\ d_\infty(Q,\hat{\mathbb{P}}^N_{\bar\gamma})\le \epsilon_N}} \mathbb{E}_{\xi\sim Q}[c_{x_\delta}(\xi_1,\ldots,\xi_T)]
\le \limsup_{N\to\infty} \sup_{\substack{Q\in\mathcal{P}(\Xi):\\ d_1(Q,\hat{\mathbb{P}}^N_{\bar\gamma})\le \epsilon_N}} \mathbb{E}_{\xi\sim Q}[c_{x_\delta}(\xi_1,\ldots,\xi_T)] \\
&\le \limsup_{N\to\infty} \mathbb{E}_{\xi\sim Q_N}[c_{x_\delta}(\xi_1,\ldots,\xi_T)] + \delta
\le \limsup_{N\to\infty} \mathbb{E}_{\xi\sim Q_N}[f_j(\xi_1,\ldots,\xi_T)] + \delta \\
&\le \limsup_{N\to\infty} \mathbb{E}_{\xi\sim\mathbb{P}_{\bar\gamma}}[f_j(\xi_1,\ldots,\xi_T)] + L_j\, d_1(\mathbb{P}_{\bar\gamma}, Q_N) + \delta \\
&\le \limsup_{N\to\infty} \mathbb{E}_{\xi\sim\mathbb{P}_{\bar\gamma}}[f_j(\xi_1,\ldots,\xi_T)] + L_j\big(d_1(\mathbb{P}_{\bar\gamma}, \hat{\mathbb{P}}^N_{\bar\gamma}) + d_1(Q_N, \hat{\mathbb{P}}^N_{\bar\gamma})\big) + \delta \\
&\le \limsup_{N\to\infty} \mathbb{E}_{\xi\sim\mathbb{P}_{\bar\gamma}}[f_j(\xi_1,\ldots,\xi_T)] + L_j\big(d_1(\mathbb{P}_{\bar\gamma}, \hat{\mathbb{P}}^N_{\bar\gamma}) + \epsilon_N\big) + \delta \\
&= \mathbb{E}_{\xi\sim\mathbb{P}_{\bar\gamma}}[f_j(\xi_1,\ldots,\xi_T)] + \delta, \quad \mathbb{P}^\infty\text{-almost surely,}
\end{aligned}$$
where we have used the fact that $d_1(P, Q) \le d_\infty(P, Q)$ for the second inequality, the dual form of the 1-Wasserstein metric for the fifth inequality (because $f_j$ is $L_j$-Lipschitz), and Theorem 2 for the equality. Taking the limit as $j \to \infty$, and applying the monotone convergence theorem (which is allowed because $\mathbb{E}_{\xi\sim\mathbb{P}_{\bar\gamma}}|f_1(\xi_1,\ldots,\xi_T)| \le L_1 \mathbb{E}_{\xi\sim\mathbb{P}_{\bar\gamma}}\|\xi\| + |f_1(0)| < \infty$ by Assumption 4), gives
$$\limsup_{N\to\infty} \hat{v}_N(\bar\gamma) \le \mathbb{E}_{\xi\sim\mathbb{P}_{\bar\gamma}}[c_{x_\delta}(\xi_1,\ldots,\xi_T)] + \delta \le v^*(\bar\gamma) + 2\delta, \quad \mathbb{P}^\infty\text{-almost surely.}$$
Since δ > 0 was chosen arbitrarily, the proof of (EC.4) is complete.
EC.3. Proof of Theorem 3
In this section, we present our proof of Theorem 3 from Section 5. We restate the theorem here for convenience.
Theorem 3. For cost functions of the form (10), $\tilde{v}_N(\bar\gamma) = \hat{v}_N(\bar\gamma)$.
Proof. We first show that $\tilde{v}_N(\bar\gamma) \ge \hat{v}_N(\bar\gamma)$. Indeed, consider any primary decision rule $\bar\pi$ and auxiliary decision rules $\bar{y}^i_1, \ldots, \bar{y}^i_T$ for each $i \in \{1,\ldots,N\}$ which are optimal for (11). Then, it follows from feasibility to (11) that
$$\hat{v}_N(\bar\gamma) = \inf_{\pi\in\Pi}\sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in U^i_N} c_\pi(\zeta_1,\ldots,\zeta_T) \le \sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in U^i_N} c_{\bar\pi}(\zeta_1,\ldots,\zeta_T) \le \sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in U^i_N}\sum_{t=1}^T\Big(f_t^\top \bar\pi_t(\zeta_1,\ldots,\zeta_{t-1}) + g_t^\top \zeta_t + h_t^\top \bar{y}^i_t(\zeta_1,\ldots,\zeta_t)\Big) = \tilde{v}_N(\bar\gamma).$$
The other side of the inequality follows from similar reasoning. Indeed, let $\bar\pi$ be an optimal solution to (3). For each $i \in \{1,\ldots,N\}$ and $t \in \{1,\ldots,T\}$, define $\bar{y}^i_t \in \mathcal{R}_t$ as any decision rule that satisfies $\bar{y}^i_t(\zeta_1,\ldots,\zeta_t) \in \arg\min_{y_t}\{\cdots\}$. Combining the above inequalities, the proof is complete.
EC.4. Tractable Reformulation of the Multi-Policy Approximation
For completeness, we now show how to reformulate the multi-policy approximation scheme with linear decision rules from Section 5 into a deterministic optimization problem using standard techniques from robust optimization.
We begin by transforming (11) with linear decision rules into a more compact representation.
First, we combine the primary linear decision rules across stages as
$$x_0 = \begin{pmatrix} x_{1,0} \\ \vdots \\ x_{T,0} \end{pmatrix} \in \mathbb{R}^{d_x}, \qquad
X = \begin{pmatrix}
0 & 0 & 0 & \cdots & 0 & 0 & 0 \\
X_{2,1} & 0 & 0 & \cdots & 0 & 0 & 0 \\
X_{3,1} & X_{3,2} & 0 & \cdots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
X_{T-2,1} & X_{T-2,2} & X_{T-2,3} & \cdots & 0 & 0 & 0 \\
X_{T-1,1} & X_{T-1,2} & X_{T-1,3} & \cdots & X_{T-1,T-2} & 0 & 0 \\
X_{T,1} & X_{T,2} & X_{T,3} & \cdots & X_{T,T-2} & X_{T,T-1} & 0
\end{pmatrix} \in \mathbb{R}^{d_x \times d_\xi}.$$
We note that the zero entries in the above matrix are necessary to ensure that the linear decision rules are non-anticipative. Similarly, for each $i \in \{1,\ldots,N\}$, we represent the auxiliary linear decision rules as
$$y^i_0 = \begin{pmatrix} y^i_{1,0} \\ \vdots \\ y^i_{T,0} \end{pmatrix} \in \mathbb{R}^{d_y}, \qquad
Y^i = \begin{pmatrix}
Y^i_{1,1} & 0 & \cdots & 0 & 0 \\
Y^i_{2,1} & Y^i_{2,2} & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
Y^i_{T-1,1} & Y^i_{T-1,2} & \cdots & Y^i_{T-1,T-1} & 0 \\
Y^i_{T,1} & Y^i_{T,2} & \cdots & Y^i_{T,T-1} & Y^i_{T,T}
\end{pmatrix} \in \mathbb{R}^{d_y \times d_\xi}.$$
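To make the non-anticipativity structure concrete, the following small NumPy sketch builds the sparsity pattern of $X$ for a toy horizon. It is illustrative code only (not from the paper), and the per-stage dimensions are arbitrary choices.

```python
# Build the strictly lower block-triangular mask of X for a small horizon,
# showing that x_t may depend only on zeta_1, ..., zeta_{t-1}.
import numpy as np

T = 4
d_x_t = [2, 2, 2, 2]    # assumed per-stage decision dimensions
d_xi_t = [1, 1, 1, 1]   # assumed per-stage uncertainty dimensions

d_x, d_xi = sum(d_x_t), sum(d_xi_t)
X_mask = np.zeros((d_x, d_xi), dtype=bool)

row = 0
for t in range(T):                      # decision stage t+1 (1-indexed)
    col = 0
    for s in range(T):                  # uncertainty stage s+1
        if s < t:                       # allowed: s+1 <= t, i.e. zeta_1..zeta_t for x_{t+1}... strictly earlier stages
            X_mask[row:row + d_x_t[t], col:col + d_xi_t[s]] = True
        col += d_xi_t[s]
    row += d_x_t[t]

print(X_mask.astype(int))  # first block row is all zero; pattern is strictly lower block-triangular
```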
We now combine the problem parameters. Let $d = (d_1, \ldots, d_T) \in \mathbb{R}^m$ and
$$f = \begin{pmatrix} f_1 \\ \vdots \\ f_T \end{pmatrix} \in \mathbb{R}^{d_x}, \qquad
A = \begin{pmatrix}
A_{1,1} & 0 & \cdots & 0 & 0 \\
A_{2,1} & A_{2,2} & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
A_{T-1,1} & A_{T-1,2} & \cdots & A_{T-1,T-1} & 0 \\
A_{T,1} & A_{T,2} & \cdots & A_{T,T-1} & A_{T,T}
\end{pmatrix} \in \mathbb{R}^{m \times d_x},$$
$$g = \begin{pmatrix} g_1 \\ \vdots \\ g_T \end{pmatrix} \in \mathbb{R}^{d_\xi}, \qquad
B = \begin{pmatrix}
B_{1,1} & 0 & \cdots & 0 & 0 \\
B_{2,1} & B_{2,2} & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
B_{T-1,1} & B_{T-1,2} & \cdots & B_{T-1,T-1} & 0 \\
B_{T,1} & B_{T,2} & \cdots & B_{T,T-1} & B_{T,T}
\end{pmatrix} \in \mathbb{R}^{m \times d_\xi},$$
$$h = \begin{pmatrix} h_1 \\ \vdots \\ h_T \end{pmatrix} \in \mathbb{R}^{d_y}, \qquad
C = \begin{pmatrix}
C_{1,1} & 0 & \cdots & 0 \\
0 & C_{2,2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & C_{T,T}
\end{pmatrix} \in \mathbb{R}^{m \times d_y}.$$
Therefore, using the above compact notation, we can rewrite the multi-policy approximation with linear decision rules as
$$\begin{aligned}
\underset{\substack{x_0\in\mathbb{R}^{d_x},\,X\in\mathbb{R}^{d_x\times d_\xi}\\ y^i_0\in\mathbb{R}^{d_y},\,Y^i\in\mathbb{R}^{d_y\times d_\xi}}}{\text{minimize}} \quad & \sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in U^i_N}\Big\{ f^\top(x_0 + X\zeta) + g^\top\zeta + h^\top\big(y^i_0 + Y^i\zeta\big) \Big\} \\
\text{subject to} \quad & A(x_0 + X\zeta) + B\zeta + C\big(y^i_0 + Y^i\zeta\big) \le d \\
& x_0 + X\zeta \in \mathcal{X} \qquad \forall \zeta \in U^i_N,\ i \in \{1,\ldots,N\},
\end{aligned} \qquad \text{(EC.5)}$$
where $\mathcal{X} := \mathcal{X}_1 \times \cdots \times \mathcal{X}_T$ and the matrices $X$ and $Y^i$ are non-anticipative. Note that the linear decision rules in the above optimization problem are represented using $O(d_\xi \max\{d_x, N d_y\})$ decision variables, where $d_x := d_x^1 + \cdots + d_x^T$ and $d_y := d_y^1 + \cdots + d_y^T$. Thus, the complexity of representing the primary and auxiliary linear decision rules scales efficiently both in the size of the dataset and the number of stages. For simplicity, we present the reformulation for the case in which there are no constraints on the decision variables and nonnegativity constraints on the random variables.
Theorem EC.2. Suppose $\Xi = \mathbb{R}^{d_\xi}_+$ and $\mathcal{X} = \mathbb{R}^{d_x}$. Then, (EC.5) is equivalent to
$$\begin{aligned}
\underset{\substack{x_0\in\mathbb{R}^{d_x},\,X\in\mathbb{R}^{d_x\times d_\xi}\\ y^i_0\in\mathbb{R}^{d_y},\,Y^i\in\mathbb{R}^{d_y\times d_\xi}\\ \Lambda^i\in\mathbb{R}^{m\times d_\xi}_+,\,s^i\in\mathbb{R}^{d_\xi}_+}}{\text{minimize}} \quad & \sum_{i=1}^N w^i_N(\bar\gamma)\Big[ f^\top\big(x_0 + X\xi^i\big) + g^\top\xi^i + h^\top\big(y^i_0 + Y^i\xi^i\big) + (s^i)^\top\xi^i + \epsilon_N\big\| X^\top f + g + (Y^i)^\top h + s^i \big\|_* \Big] \\
\text{subject to} \quad & A\big(x_0 + X\xi^i\big) + B\xi^i + C\big(y^i_0 + Y^i\xi^i\big) + \Lambda^i\xi^i + \epsilon_N\big\| AX + B + CY^i + \Lambda^i \big\|_* \le d \quad \forall i \in \{1,\ldots,N\},
\end{aligned}$$
where $\|Z\|_* := (\|z_1\|_*, \ldots, \|z_r\|_*)^\top \in \mathbb{R}^r$ for any matrix $Z \in \mathbb{R}^{r \times n}$ with rows $z_1, \ldots, z_r$.
Proof. For any $c \in \mathbb{R}^{d_\xi}$ and $\xi \in \Xi$, it follows directly from strong duality for conic optimization that
$$\max_{\zeta \ge 0}\big\{ c^\top\zeta : \|\zeta - \xi\| \le \epsilon \big\} = \min_{\lambda \ge 0}\big\{ (c + \lambda)^\top\xi + \epsilon\|c + \lambda\|_* \big\}.$$
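As an aside (not part of the proof), the identity can be checked numerically for one concrete norm choice. The sketch below uses the $\infty$-norm on the uncertainty set, whose dual norm is the 1-norm; the data are random and the solver choices are ours.

```python
# Numerically compare the primal max and its dual min for the infinity-norm ball
# {zeta >= 0, ||zeta - xi||_inf <= eps}; both values should coincide.
import numpy as np
from scipy.optimize import linprog, minimize_scalar

rng = np.random.default_rng(0)
d, eps = 5, 0.3
c = rng.normal(size=d)
xi = rng.uniform(0.0, 1.0, size=d)

# Primal: max c^T zeta subject to zeta >= 0 and |zeta_j - xi_j| <= eps (an LP).
bounds = [(max(0.0, x - eps), x + eps) for x in xi]
primal = -linprog(-c, bounds=bounds, method="highs").fun

# Dual: min_{lambda >= 0} (c + lambda)^T xi + eps * ||c + lambda||_1.
# The objective is separable, so minimize coordinate-wise over lambda_j >= 0.
def coord_obj(lam, j):
    u = c[j] + lam
    return u * xi[j] + eps * abs(u)

dual = sum(minimize_scalar(coord_obj, bounds=(0.0, 50.0), args=(j,), method="bounded").fun
           for j in range(d))

print(primal, dual)  # agree up to numerical tolerance
```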
We use this result to reformulate the objective and constraints of (EC.5). First, let the $j$-th rows of $A$, $B$, $C$ and the $j$-th element of $d$ be denoted by $a_j \in \mathbb{R}^{d_x}$, $b_j \in \mathbb{R}^{d_\xi}$, $c_j \in \mathbb{R}^{d_y}$, and $d_j \in \mathbb{R}$. Then, each robust constraint has the form
$$a_j^\top(x_0 + X\zeta) + b_j^\top\zeta + c_j^\top\big(y^i_0 + Y^i\zeta\big) \le d_j \quad \forall \zeta \in U^i_N.$$
Rearranging terms,
$$\big(a_j^\top X + b_j^\top + c_j^\top Y^i\big)\zeta \le d_j - a_j^\top x_0 - c_j^\top y^i_0 \quad \forall \zeta \in U^i_N,$$
which, applying duality, becomes
$$\exists \lambda^i_j \ge 0: \quad \big(X^\top a_j + b_j + (Y^i)^\top c_j + \lambda^i_j\big)^\top\xi^i + \epsilon_N\big\| X^\top a_j + b_j + (Y^i)^\top c_j + \lambda^i_j \big\|_* \le d_j - a_j^\top x_0 - c_j^\top y^i_0.$$
Rearranging terms, the robust constraints for each $i \in \{1,\ldots,N\}$ are satisfied if and only if
$$\exists \Lambda^i \ge 0: \quad A\big(x_0 + X\xi^i\big) + B\xi^i + C\big(y^i_0 + Y^i\xi^i\big) + \Lambda^i\xi^i + \epsilon_N\big\| AX + B + CY^i + \Lambda^i \big\|_* \le d,$$
where the dual norm for a matrix is applied separately for each row. Similarly, the objective function takes the form
$$\begin{aligned}
\sum_{i=1}^N w^i_N(\bar\gamma)\sup_{\zeta\in U^i_N}\Big\{ f^\top(x_0 + X\zeta) + g^\top\zeta + h^\top\big(y^i_0 + Y^i\zeta\big) \Big\}
&= \sum_{i=1}^N w^i_N(\bar\gamma)\Big[ f^\top x_0 + h^\top y^i_0 + \sup_{\zeta\in U^i_N}\big(f^\top X + g^\top + h^\top Y^i\big)\zeta \Big] \\
&= \sum_{i=1}^N w^i_N(\bar\gamma)\Big[ f^\top x_0 + h^\top y^i_0 + \inf_{s^i \ge 0}\Big\{ \big(X^\top f + g + (Y^i)^\top h + s^i\big)^\top\xi^i + \epsilon_N\big\| X^\top f + g + (Y^i)^\top h + s^i \big\|_* \Big\} \Big] \\
&= \sum_{i=1}^N w^i_N(\bar\gamma)\Big[ f^\top\big(x_0 + X\xi^i\big) + g^\top\xi^i + h^\top\big(y^i_0 + Y^i\xi^i\big) + \inf_{s^i \ge 0}\Big\{ (s^i)^\top\xi^i + \epsilon_N\big\| X^\top f + g + (Y^i)^\top h + s^i \big\|_* \Big\} \Big].
\end{aligned}$$
Combining the reformulations above, we obtain the desired reformulation. | 14,319 |
1901.05573 | 2911027578 | A key property underlying the success of evolutionary algorithms (EAs) is their global search behavior, which allows the algorithms to jump' from a current state to other parts of the search space, thereby avoiding to get stuck in local optima. This property is obtained through a random choice of the radius at which offspring are sampled from previously evaluated solutions. It is well known that, thanks to this global search behavior, the probability that an EA using standard bit mutation finds a global optimum of an arbitrary function @math tends to one as the number of function evaluations grows. This advantage over heuristics using a fixed search radius, however, comes at the cost of using non-optimal step sizes also in those regimes in which the optimal rate is stable for a long time. This downside results in significant performance losses for many standard benchmark problems. We introduce in this work a simple way to interpolate between the random global search of EAs and their deterministic counterparts which sample from a fixed radius only. To this end, we introduce , in which the binomial choice of the search radius is replaced by a normal distribution. Normalized standard bit mutation allows a straightforward way to control its variance, and hence the degree of randomness involved. We experiment with a self-adjusting choice of this variance, and demonstrate its effectiveness for the two classic benchmark problems LeadingOnes and OneMax. Our work thereby also touches a largely ignored question in discrete evolutionary computation: multi-dimensional parameter control. | As reasoned above, normalized standard bit mutation offers an elegant way to interpolate between deterministic mutation strengths and regular standard bit mutation, thus showing that Randomized Local Search (RLS) variants with their deterministic search radii and the (1+1) EA with mutation rate @math are essentially just different instantiations of the same meta-algorithm. Similar results also extend to population-based @math EAs. Note that normalized standard bit mutation also allows other degrees of randomization, thereby offering a wide range for further experimentation. In this context we note that for the special case of standard RLS (i.e., the greedy (1+1) hill climber that flips in each iteration exactly one uniformly chosen bit) a similar meta-model allowing to interpolate between the (1+1) EA and RLS is the (1+1) EA @math introduced in @cite_2 @cite_5 . This model, however, is much less flexible, and does not allow, for example, deterministic search radii greater than one. | {
"abstract": [
"The idea to recombine two or more search points into a new solution is one of the main design principles of evolutionary computation (EC). Its usefulness in the combinatorial optimization context, however, is subject to a highly controversial discussion between EC practitioners and the broader Computer Science research community. While the former, naturally, report significant speedups procured by crossover, the belief that sexual reproduction cannot advance the search for high-quality solutions seems common, for example, amongst theoretical computer scientists. Examples that help understand the role of crossover in combinatorial optimization are needed to promote an intensified discussion on this subject.",
"Analyzing the computational complexity of evolutionary algorithms has become an accepted and important branch in evolutionary computation theory. This is usually done by analyzing the (expected) optimization time measured by means of the number of function evaluations and describing its growth as a function of a measure for the size of the search space. Most often asymptotic results describing only the order of growth are derived. This corresponds to classical analysis of (randomized) algorithms in algorithmics. Recently, the emerging field of algorithm engineering has demonstrated that for practical purposes this analysis can be too coarse and more details of the algorithm and its implementation have to be taken into account in order to obtain results that are valid in practice. Using a very recent analysis of a simple evolutionary algorithm as starting point it is shown that the same holds for evolutionary algorithms. Considering this example it is demonstrated that counting function evaluations more precisely can lead to results contradicting actual run times. Motivated by these limitations of computational complexity analysis an algorithm engineering-like approach is presented."
],
"cite_N": [
"@cite_5",
"@cite_2"
],
"mid": [
"2888223476",
"2012697403"
]
} | Interpolating Local and Global Search by Controlling the Variance of Standard Bit Mutation * | Among the most successfully applied iterative optimization heuristics are local search variants and evolutionary algorithms (EAs). While the former sample at a fixed radius around previously evaluated solutions, most evolutionary algorithms classify as global search algorithms which can escape local optima by creating offspring at larger distances. In the context of optimizing pseudo-Boolean functions f : {0, 1}^n → R, for example, the most commonly found variation operator in EAs is standard bit mutation. Standard bit mutation creates a new solution y by flipping each bit of the parent individual x ∈ {0, 1}^n with some probability 0 < p < 1, independently for each position. The probability to sample a specific offspring y at distance 0 ≤ d ≤ n from x thus equals p^{H(x,y)}(1 − p)^{n−H(x,y)}, where H(x, y) = |{1 ≤ i ≤ n | x_i ≠ y_i}| denotes the Hamming distance of x and y. This probability is strictly positive for all y, thus showing that the probability that an EA using standard bit mutation will have sampled a global optimum of f converges to one as the number of iterations increases. In contrast to pure random search, however, the distance at which the offspring y is sampled follows a binomial distribution, Bin(n, p), and is thus concentrated around its mean np.
The ability to escape local optima comes at the price of frequent uses of non-optimal search radii even in those regimes in which the latter are stable for a long time. The incapability of standard bit mutation to adjust to such situations results in important performance losses on almost all classical benchmark functions, which often exhibit large parts of the optimization process in which flipping a certain number of bits is required. A convenient way to control the degree of randomness in the choice of the search radius would therefore be highly desirable.
In this work we introduce such an interpolation. It allows to calibrate between deterministic and pure random search, while encompassing standard bit mutation as one specification. More precisely, we investigate normalized standard bit mutation, in which the mutation strength (i.e., the search radius) is sampled from a normal distribution N (µ, σ 2 ). By choosing σ = 0 one obtains a deterministic choice, and the "degree of randomness" increases with increasing σ. By the central limit theorem, we recover a distribution that is very similar to that of standard bit mutation by setting µ = np and σ 2 = np(1 − p).
Apart from conceptual advantages, normalized standard bit mutation offers the advantage of separating the variance from the mean, which makes it easy to control both parameters independently during the optimization process. While multi-dimensional parameter control for discrete EAs is still in its infancy, cf. comments in [KHE15, DD19], we demonstrate in this work a simple, yet efficient way to control mean and variance of normalized standard bit mutation. As test case to investigate the benefits of normalized standard bit mutation we have chosen the 2-rate (1 + λ) EA r/2,2r from [DGWY17]. The choice of this reference algorithm is based on our previous work [DYvR + 18] in which we observed, via a detailed fixed-target analysis of several (1 + λ) EAs, that for the two benchmark problems OneMax and LeadingOnes this algorithm performs significantly better than the plain (1 + λ) EA for a large range of initial target values. For both functions flipping one bit is optimal for a large fraction of the optimization process, cf. Figure 2. In these regimes the 2-rate (1 + λ) EA r/2,2r drastically looses performance due to sampling half the offspring with a mutation rate that is four times as large as the optimal one. Controlling the variance of this distribution seems therefore promising.
On the way towards a (1 + λ) EA r/2,2r variant with self-adjusting choice of mean and variance we discover that already replacing the 2-rate sampling strategy of this algorithm by a normalized choice of the mutation strength significantly improves its performance. Controlling the variance then yields additional performance gains on the tested OneMax instances (we consider problem dimensions up to 10 000). On LeadingOnes, the variance control improves performance for small values of λ. Unlike one might first expect, for this test function the average optimization time (i.e., number of search points evaluated until an optimal solution is evaluated for the first time) of the (1 + 50) variants of the (1 + λ) EA r/2,2r is better than that of their (1 + 2) counterparts, which is an observation of independent interest.
Experimental Setup
Unless stated otherwise, all numbers reported in this work are based on 100 independent runs of the respective algorithms. To ease readability, we only display average values. All raw data as well as detailed summaries with quantiles, standard deviations, etc. are available at https://github.com/FurongYe/Fixed-Target-Results. Selected statistical results can be found in Tables 1 and 2, respectively. These summaries have been created with IOHprofiler, our recently announced benchmarking and data analysis tool [DWY + 18].
2 Previous Observations for the Two-Rate (1 + λ) EA and the Two Benchmark Problems
The starting point of our work is the set of results presented in [DYvR + 18]. In this work we observed that the evolutionary algorithm with success-based self-adjusting mutation rate proposed in [DGWY17] outperforms the (1 + λ) EA for a large range of sub-optimal targets. It then drastically loses performance in the later parts of the optimization process, which results in an overall poor optimization time on OneMax and LeadingOnes functions of moderate problem dimensions n ≤ 10 000. The optimal asymptotic behavior on OneMax proven in [DGWY17] can thus not be observed for these dimensions. We briefly summarize in this section the algorithm from [DGWY17] and the results presented in [DYvR + 18]. We also discuss a few basic properties of the two benchmark problems, which explain the choices made in subsequent sections.
The Two-Rate EA
The algorithm introduced in [DGWY17], which we named (1 + λ) EA r/2,2r in [DYvR + 18], is a (1 + λ) EA which applies in each iteration two different mutation rates. Half of the offspring population is generated with mutation rate r/(2n), the other half with mutation rate 2r/n. The parameter r is the current best mutation strength, which is updated after each iteration, with a bias towards the rate by which the best of the λ offspring has been sampled, cf. Algorithm 1 for details.
Algorithm 1: The 2-rate (1 + λ) EA r/2,2r with adaptive mutation rates proposed in [DGWY17]
1 Initialization: Sample x ∈ {0, 1}^n uniformly at random and evaluate f(x);
2 Initialize r ← r_init; // Following [DGWY17] we use r_init = 2;
3 Optimization: for t = 1, 2, 3, . . . do
4   for i = 1, . . . , λ/2 do
5     Sample ℓ^(i) ∼ Bin_{>0}(n, r/(2n)), create y^(i) ← flip_{ℓ^(i)}(x), and evaluate f(y^(i));
6   for i = λ/2 + 1, . . . , λ do
7     Sample ℓ^(i) ∼ Bin_{>0}(n, 2r/n), create y^(i) ← flip_{ℓ^(i)}(x), and evaluate f(y^(i));
8   x* ← arg max{f(y^(1)), . . . , f(y^(λ))} (ties broken u.a.r.);
9   if f(x*) ≥ f(x) then x ← x*;
10  if x* has been created with mutation rate r/2 then s ← 3/4 else s ← 1/4;
11  Sample q ∈ [0, 1] u.a.r.;
12  if q ≤ s then r ← max{r/2, 2} else r ← min{2r, n/4};
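For concreteness, a minimal Python sketch of Algorithm 1 is given below. The helper names and the fitness callback `f` are our own choices; this is illustrative code, not the authors' implementation.

```python
import random

def bin_gt0(n, p):
    """Sample from the conditional binomial Bin_{>0}(n, p) by resampling."""
    while True:
        k = sum(random.random() < p for _ in range(n))
        if k > 0:
            return k

def flip(x, k):
    """Flip k pairwise different, uniformly chosen bits of x."""
    y = list(x)
    for i in random.sample(range(len(x)), k):
        y[i] = 1 - y[i]
    return y

def two_rate_ea(f, n, lam, budget, r_init=2):
    x = [random.randint(0, 1) for _ in range(n)]
    fx, r, evals = f(x), r_init, 1
    while evals < budget:
        offspring = []
        for i in range(lam):
            rate = r / (2 * n) if i < lam // 2 else 2 * r / n
            y = flip(x, bin_gt0(n, rate))
            offspring.append((f(y), i < lam // 2, y))
            evals += 1
        fy, from_low_rate, y = max(offspring, key=lambda o: o[0])
        if fy >= fx:
            x, fx = y, fy
        s = 3 / 4 if from_low_rate else 1 / 4          # bias towards the winning rate
        r = max(r / 2, 2) if random.random() <= s else min(2 * r, n / 4)
    return x, fx
```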
Note that here and in the following we make use of the fact that standard bit mutation, which is traditionally defined by flipping each bit in a length-n bit string with some probability p (independently of all other decisions), can be equivalently described by first sampling a radius ℓ from the binomial distribution Bin(n, p) and then applying the flip_ℓ operator, which flips ℓ pairwise different bits that are chosen from the index set [n] = {1, 2, . . . , n} uniformly at random.
Following the discussions and the notation introduced in [CD18, DW18, DYvR + 18] we enforce in this work that all offspring differ from their parents by at least one bit. We therefore require in lines 4 and 6 that the mutation strength is at least one. This is achieved by resampling if needed, or, equivalently, by sampling from the conditional binomial distribution Bin_{>0}(n, p), which assigns to each value k ∈ [n] a probability of Bin(n, p)(k)/(1 − (1 − p)^n) = \binom{n}{k} p^k (1 − p)^{n−k}/(1 − (1 − p)^n).
(Figure 2 caption: For OneMax, RLS spends around 94% of the total optimization time in the regime in which k_drift = 1; for LeadingOnes this fraction is still 50%. For the drift-maximizing/optimal RLS variants flipping in each iteration k_drift and k_opt bits, respectively, these fractions are around 96% for OneMax and 64% for LeadingOnes.)
In [DYvR + 18] we compared the fixed-target performance of the (1 + 50) EA >0 (i.e., the (1 + λ) EA using the conditional sampling rule introduced above) and the (1 + 50) EA r/2,2r on OneMax and LeadingOnes. These two classic optimization problems ask to maximize the functions {0, 1}^n → {0} ∪ [n] which are defined via OneMax(x) = \sum_{i=1}^{n} x_i and LeadingOnes(x) = max{i ∈ [0..n] | ∀j ≤ i : x_j = 1}, respectively. In Figure 1 we report similar empirical results for n = 10 000 (OneMax) and n = 2 000 (LeadingOnes) (the other results in the two figures will be addressed below). We observed in [DYvR + 18] that for both functions the (1 + 50) EA r/2,2r from [DGWY17] performs well for small target values, but drastically loses performance in the later stages of the optimization process.
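For reference, the two benchmark functions can be written in a few lines of Python; this is a direct transcription of the definitions above.

```python
def one_max(x):
    """OneMax(x) = number of ones in the bit string x."""
    return sum(x)

def leading_ones(x):
    """LeadingOnes(x) = length of the longest prefix of ones in x."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count
```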
Properties of the Benchmark Problems
Both OneMax and LeadingOnes have a long period during the optimization run in which flipping one bit is optimal.
For OneMax flipping one bit is widely assumed to be optimal as soon as f(x) ≥ 2n/3. Quite interestingly, however, this conjecture has not been rigorously proven to date. It is only known that drift-maximizing mutation strengths are almost optimal [DDY16], in the sense that the overall expected optimization time of the elitist (1+1) algorithm using these rates in each step cannot be worse than that of the best-possible unary unbiased algorithm for OneMax by more than an additive o(n) lower order term [DDY16]. But even for the drift maximizer the statement that flipping one bit is optimal when f(x) ≥ 2n/3 has only been shown for an approximation, not for the actual drift maximizer. Numerical evaluations for problem dimensions up to 10 000 nevertheless confirm that 1-bit flips are optimal when the OneMax-value exceeds 2n/3.
For LeadingOnes, on the other hand, it is well known that flipping one bit is optimal as soon as f (x) ≥ n/2 [Doe18a].
We display in Figure 2, which is adjusted from [Doe18b], the optimal and drift-maximizing mutation strength for LeadingOnes and OneMax, respectively. We also display in the same figure the expected time needed by RLS opt and RLS drift , the elitist (1+1) algorithm using in each step these mutation rates. We see that these algorithms spend around 96% (for OneMax) and 64% (for LeadingOnes), respectively, of their time in the regime where flipping one bit is (almost) optimal. These numbers are based on an exact computation for LeadingOnes and on an empirical evaluation of 500 independent runs for OneMax.
Implications for the (1 + 50) EA r/2,2r
Assume that in the regime of optimal one-bit flips the (1 + 50) EA r/2,2r has correctly identified that flipping one bit is optimal. It will hence use the smallest possible value for r, which is 2. In this case, half the offspring are sampled with mutation rate 1/n (which is optimal for this algorithm), while the other half of the offspring population is sampled with mutation rate 4/n, thus flipping on average more than four times the optimal number of bits. It is therefore not surprising that in this regime (and already before) the gradients of the average fixed-target running time curves in Figure 1 are much worse for the (1 + 50) EA r/2,2r than for the (1 + 50) EA >0.
Creating Half the Offspring with Optimal Mutation Rate
The observations made in the last section inspire our first algorithm, the (1 + λ) EA r,U(0,σr/n) defined via Algorithm 2. This algorithm samples half the offspring using as deterministic mutation strength the best mutation strength of the last iteration. The other offspring are sampled with a mutation rate that is sampled uniformly at random from the interval (0, σr/n).
Algorithm 2: The (1 + λ) EA r,U(0,σr/n). In line 6 we denote by U(a, b) the uniform distribution in the interval (a, b). For σ = 2 we call this algorithm the (1 + λ) EA half.
1 Initialization: Sample x ∈ {0, 1}^n uniformly at random and evaluate f(x);
2 Initialize r ← r_init; // we use r_init = 2;
3 Optimization: for t = 1, 2, 3, . . . do
4   for i = 1, . . . , λ/2 do
5     Set ℓ^(i) ← r, create y^(i) ← flip_{ℓ^(i)}(x), and evaluate f(y^(i));
6   for i = λ/2 + 1, . . . , λ do
7     Sample p^(i) ∼ min{U(0, σr/n), 1}, ℓ^(i) ∼ Bin_{>0}(n, p^(i)), create y^(i) ← flip_{ℓ^(i)}(x), and evaluate f(y^(i));
8   i ← min{ j | f(y^(j)) = max{f(y^(k)) | k ∈ [λ]} };
9   r ← ℓ^(i);
10  if f(y^(i)) ≥ f(x) then x ← y^(i);
As we can see in Figure 1 this algorithm significantly improves the performance in those later parts of the optimization process. Normalized total optimization times for various problem dimensions are provided in Figures 3 and 4, respectively. We display data for σ = 2 only, and call this (1 + λ) EA r,U (0,σr/n) variant (1 + λ) EA half . We note that smaller values of σ, e.g., σ = 1.5 would give better results. The same effect would be observable when replacing the factor two in the (1 + λ) EA r/(2n),2r , i.e., when using a (1 + λ) EA r/(σn),σr rule with σ = 2. A detailed discussion of this effect is omitted here for reasons of space.
It is remarkable that on LeadingOnes the (1 + λ) EA half performs better than Randomized Local Search (RLS), the elitist (1+1) algorithm flipping in each iteration exactly one uniformly chosen bit. The slightly worse gradients for target values v > n/2 (which are a consequence of randomly sampling the mutation rate instead of using mutation strength one deterministically) are compensated for by the gains made in the initial phase of the optimization process, where the EA variants benefit from larger mutation rates. On OneMax the performance of the (1 + λ) EA half is better than that of the plain (1 + λ) EA >0 for both tested values λ = 50 and λ = 2.
We recall that it is well known that, both for OneMax and LeadingOnes, the optimal offspring population size in the regular (1 + λ) EA is λ = 1 [JDW05]. A monotonic dependence of the average optimization time on λ is conjectured (and empirically observed) but not formally proven. While for OneMax the impact of λ is significant, the dependency on λ is much less pronounced for LeadingOnes. Empirical results for both functions and a theoretical running time analysis for LeadingOnes can be found in [DYvR + 18]. For OneMax [GW17] offers a precise running time analysis of the (1 + λ) EA for broad ranges of offspring population sizes λ and mutation rates p = c/n. In light of the fact that the theoretical considerations in [DGWY17] required λ = ω(1), it is worthwhile to note that for all tested problem dimensions the (1+2) EA r/2,2r performs better on OneMax than the (1+50) EA r/2,2r . Note, however, that the inverse holds for LeadingOnes, cf. Figure 4. For this function it seems to be important that the number of offspring allows a better estimation of the better mutation rate. We will observe the same phenomenon for all other algorithms introduced below.
Normalized Standard Bit Mutation
In light of the results presented in the previous section, one may wonder if splitting the population into two halves is needed after all. We investigate this question by introducing the (1 + λ) EA norm., which in each iteration and for each i ∈ [λ] samples the mutation strength ℓ^(i) from the normal distribution N(r, r(1 − r/n)) around the best mutation strength r of the previous iteration and rounds the sampled value to the closest integer. The reasons to replace the uniform distribution U(r/n − σ, r/n + σ) will be addressed below. As before we enforce ℓ^(i) ≥ 1 by re-sampling if needed, thus effectively sampling the mutation strength from the conditional distribution N_{>0}(r, r(1 − r/n)). Algorithm 3 summarizes this algorithm.
Algorithm 3: The (1 + λ) EA norm. with normalized standard bit mutation
1 Initialization: Sample x ∈ {0, 1}^n uniformly at random and evaluate f(x);
2 Initialize r ← r_init; // we use r_init = 2;
3 Optimization: for t = 1, 2, 3, . . . do
4   for i = 1, . . . , λ do
5     Sample ℓ^(i) ∼ min{N_{>0}(r, r(1 − r/n)), n}, create y^(i) ← flip_{ℓ^(i)}(x), and evaluate f(y^(i));
6   i ← min{ j | f(y^(j)) = max{f(y^(k)) | k ∈ [λ]} };
7   r ← ℓ^(i);
8   if f(y^(i)) ≥ f(x) then x ← y^(i);
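The sampling step of Algorithm 3 can be sketched as follows. This is our own illustrative code; in particular, whether rounding happens before or after the conditioning on ℓ ≥ 1 is an implementation detail we assume here.

```python
import random

def sample_strength_normalized(r, n):
    """Draw a mutation strength from round(N(r, r(1 - r/n))), resampled until >= 1 and capped at n."""
    while True:
        ell = round(random.gauss(r, (r * (1 - r / n)) ** 0.5))
        if ell >= 1:
            return min(ell, n)
```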
Note that the variance r(1 − r/n) of the unconditional normal distribution N (r, r(1 − r/n)) is identical to that of the unconditional binomial distribution Bin(n, r/n). We use the normal distribution here for reasons that will be explained in the next section. Note, however, that very similar results would be obtained when replacing in line 4 of Algorithm 3 the normal distribution N >0 (r, r(1 − r/n)) by the binomial one Bin >0 (n, r/n). We briefly recall that, by the central limit theorem, the (unconditional) binomial distribution converges to the (unconditional) normal distribution.
The empirical performance of the (1+50) EA norm. is comparable to that of the (1+50) EA half for both problems and all tested problem dimensions, cf. Figures 3 and 4. Note, however, that for λ = 2 the (1 + 2) EA norm. performs worse than the (1 + 2) EA half .
Interpolating Local and Global Search
As discussed above, all EA variants mentioned so far suffer from the variance of the random selection of the mutation rate, in particular in the long final part of the optimization process in which the optimal mutation strength is one. We therefore analyze a simple way to reduce this variance on the fly. To this end, we build upon the (1 + λ) EA norm. and introduce a counter c, which is initialized at zero. In each iteration, we check if the value of r changes. If so, the counter is re-set to zero. It is increased by one otherwise, i.e., if the value of r remains the same. We use this counter to self-adjust the variance of the normal distribution. To this end, we replace in line 4 of Algorithm 3 the conditional normal distribution N >0 (r, r(1 − r/n)) by the conditional normal distribution N >0 (r, F c r(1 − r/n)), where F < 1 is a constant discount factor. Algorithm 4 summarizes this (1 + λ) EA variant with normalized standard bit mutation and a self-adjusting choice of mean and variance.
Choice of F : We use F = 0.98 in all reported experiments. Preliminary tests suggest that values F < 0.95 are not advisable, since the algorithm may get stuck with sub-optimal mutation rates. This could be avoided by introducing a lower bound for the variance and/or by mechanisms taking into account whether or not an iteration has been successful, i.e., whether it has produced a strictly better offspring.
Algorithm 4: The (1 + λ) EA var. with normalized standard bit mutation and a self-adjusting choice of mean and variance
1 Initialization: Sample x ∈ {0, 1}^n uniformly at random and evaluate f(x);
2 Initialize r ← r_init; // we use r_init = 2;
3 Initialize c ← 0;
4 Optimization: for t = 1, 2, 3, . . . do
5   for i = 1, . . . , λ do
6     Sample ℓ^(i) ∼ min{N_{>0}(r, F^c r(1 − r/n)), n}, create y^(i) ← flip_{ℓ^(i)}(x), and evaluate f(y^(i));
7   i ← min{ j | f(y^(j)) = max{f(y^(k)) | k ∈ [λ]} };
8   if r = ℓ^(i) then c ← c + 1; else c ← 0;
9   r ← ℓ^(i);
10  if f(y^(i)) ≥ f(x) then x ← y^(i);

The empirical comparison suggests that the self-adjusting choice of the variance in the (1 + λ) EA var. improves the performance on OneMax further, cf. also Figure 5 for average fixed-target results for n = 10 000. For λ = 2 the average performance is comparable to, but slightly worse than, that of RLS. For LeadingOnes, the (1 + 50) EA var. is comparable in performance to the (1 + 50) EA norm., but we observe that for λ = 2 the (1 + λ) EA var. performs better. It is the only one among all tested EAs for which decreasing λ from 50 to 2 does not result in a significantly increased running time.
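The core of the variance control in Algorithm 4 above can be sketched as follows; this is illustrative code, and details such as the rounding order and the default F are our assumptions.

```python
import random

def sample_strength_var(r, n, c, F=0.98):
    """Mutation strength from N(r, F^c * r(1 - r/n)): the variance shrinks while r stays unchanged."""
    var = (F ** c) * r * (1 - r / n)
    while True:
        ell = round(random.gauss(r, var ** 0.5))
        if ell >= 1:
            return min(ell, n)

def update_counter(r_prev, r_new, c):
    """Increase the counter if the best mutation strength did not change, reset it otherwise."""
    return c + 1 if r_new == r_prev else 0
```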
A Meta-Algorithm with Normalized Standard Bit Mutation
In the (1 + λ) EA var. we make use of the fact that a small variance in line 5 of Algorithm 4 results in a more concentrated distribution. The variance adjustment is thus an efficient way to steer the degree of randomness in the selection of the mutation rate. It allows to interpolate between deterministic and random mutation rates. In our experimentation we do not go beyond the variance of the binomial distribution, but in principle there is no reason to not regard larger variance as well. The question of how to best determine the degree of randomness in the choice of the mutation rate has, to the best of our knowledge, not previously been addressed in the EC literature. We believe that this idea carries good potential, since it demonstrates that local search with its deterministic search radius and evolutionary algorithms with their global search radii are merely two different configurations of the same meta-algorithm, and not two different algorithms as the general perception might indicate. To make this point very explicit, we introduce with Algorithm 5 a general meta-algorithm, of which local search with deterministic mutation strengths and EAs are special instantiations. Note that in this meta-model we use static parameter values, variants with adaptive mutation rates can be obtained by applying the usual parameter control techniques, as demonstrated above. Of course, the same normalization can be done for similar EAs, the technique is not restricted to elitist (1 + λ)-type algorithms. Likewise, the condition to flip at least one bit can be omitted, i.e., one can replace the conditional normal distribution N >0 (r, σ 2 ) in line 3 by the unconditional N (r, σ 2 ).
Discussion and Outlook
We have introduced in this work normalized standard bit mutation, which replaces the binomial choice of the mutation strength in standard bit mutation by a normal distribution. This normalization allows a straightforward way to control the variance of the distribution, which can now be adjusted independently of the mean. We have demonstrated that such an approach can be beneficial when optimizing classic benchmark problems such as LeadingOnes and OneMax. In future work, we plan to validate our approach for the fast-GA proposed in [DLMN17]. We are confident that variance control should be beneficial for that algorithm as well.
Our work has concentrated on OneMax and LeadingOnes, as two examples where the optimal mutation rate is stable for a long time. When applied in practice, where abrupt changes of the optimal mutation strengths may occur, our variance control mechanism needs to be modified so that the variance is increased if no strict progress has been observed for a sufficiently long period. We plan to investigate this question by studying concatenated jump functions, i.e., functions for which one mutation strength is optimal for some significant number of iterations, followed by a situation in which a much larger number of bits needs to be flipped in order to make progress.
Algorithm 5: The (1+λ) Meta-Algorithm with (static) normalized standard bit mutation. The RLS variant with deterministic search radius r and the (1 + λ) EA using standard bit mutation with mutation rate r/n are identical to this algorithm with σ² = 0 and σ² = r(1 − r/n), respectively.

Related to the point made in the last paragraph, we also note that the parameter control technique which we applied to adjust the mean of the sampling distribution for the mutation strength has an extremely short learning period, since we simply use the best mutation strength of the last iteration as mean for the sampling distribution of the next iteration. For more rugged fitness landscapes a proper learning, which takes into account several iterations, should be preferable.
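Since only the caption of Algorithm 5 appears above, the following Python sketch gives our reading of the meta-algorithm it describes: a (1 + λ) EA with mutation strength drawn from N_{>0}(r, σ²) for static r and σ². All names and the handling of out-of-range samples are our assumptions.

```python
import random

def meta_one_plus_lambda(f, n, lam, budget, r, sigma2):
    """With sigma2 = 0 every offspring flips exactly r bits (an RLS variant);
    with sigma2 = r * (1 - r/n) the radius distribution mimics standard bit mutation with rate r/n."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx, evals = f(x), 1
    while evals < budget:
        best_y, best_fy = None, None
        for _ in range(lam):
            while True:
                ell = round(random.gauss(r, sigma2 ** 0.5)) if sigma2 > 0 else r
                if 1 <= ell <= n:
                    break
            y = list(x)
            for i in random.sample(range(n), ell):
                y[i] = 1 - y[i]
            fy = f(y)
            evals += 1
            if best_fy is None or fy > best_fy:
                best_y, best_fy = y, fy
        if best_fy >= fx:
            x, fx = best_y, best_fy
    return x, fx
```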
We recall that multi-dimensional parameter control has not received much attention in the EC literature for discrete optimization problems [KHE15, DD19]. Our work falls into this category, and we have demonstrated a simple way to separate the control of the mean from that of the variance of the mutation strength distribution. We hope that our work inspires more research in this direction, since practical EAs tend to have many different parameters that need to be adjusted during the optimization process.
Finally, another avenue for further work is provided by the meta-algorithm presented in Section 5, which demonstrates that Randomized Local Search and evolutionary algorithms can be seen as two configurations of the meta-algorithm. Parameter control, or, in this context possibly more suitably referred to as online algorithm configuration, offers the possibility to interpolate between these algorithms (and even more drastically randomized heuristics). Given the significant advances in the context of algorithm configuration witnessed by the EC and machine learning communities, we believe that such meta-models carry significant potential to exploit and profit from advantages of different heuristics. Note here that the configuration of meta-algorithms offers much more flexibility than the algorithm selection approach classically taken in EC, e.g., in most works on hyper-heuristics.
(Table caption: LeadingOnes problem. AHT = average first hitting time, rsd = relative standard deviation. We have chosen to display target value n/2 = 1 000 because this is the point after which flipping one bit becomes optimal, i.e., advantages over RLS must result from the phase before reaching this target point.) | 4,609 |
1907.06901 | 2960433490 | Recently, neural networks trained as optimizers under the "learning to learn" or meta-learning framework have been shown to be effective for a broad range of optimization tasks including derivative-free black-box function optimization. Recurrent neural networks (RNNs) trained to optimize a diverse set of synthetic non-convex differentiable functions via gradient descent have been effective at optimizing derivative-free black-box functions. In this work, we propose RNN-Opt: an approach for learning RNN-based optimizers for optimizing real-parameter single-objective continuous functions under limited budget constraints. Existing approaches utilize an observed improvement based meta-learning loss function for training such models. We propose training RNN-Opt by using synthetic non-convex functions with known (approximate) optimal values by directly using discounted regret as our meta-learning loss function. We hypothesize that a regret-based loss function mimics typical testing scenarios, and would therefore lead to better optimizers compared to optimizers trained only to propose queries that improve over previous queries. Further, RNN-Opt incorporates simple yet effective enhancements during training and inference procedures to deal with the following practical challenges: i) Unknown range of possible values for the black-box function to be optimized, and ii) Practical and domain-knowledge based constraints on the input parameters. We demonstrate the efficacy of RNN-Opt in comparison to existing methods on several synthetic as well as standard benchmark black-box functions along with an anonymized industrial constrained optimization problem. | Our work falls under the category of real-parameter black-box global optimization @cite_12 . Traditional approaches for black-box optimization like covariance matrix adaptation evolution strategy (CMA-ES) @cite_24 , Nelder-Mead @cite_3 , and Particle Swarm Optimization (PSO) @cite_20 hand-design rules using heuristics (e.g. using nature-inspired genetic algorithms) to decide the next query point(s) given the observations made so far. Another category of approaches for global optimization of black-box functions include Bayesian optimization techniques @cite_6 @cite_9 @cite_5 . These approaches use observations (query and response) made thus far to approximate the black-box function via a surrogate (meta-) model, e.g. using a Gaussian Process @cite_10 , and then use this model to construct an acquisition function to decide the next query point. The acquisition function updates needed at each step are known to be costly @cite_14 . | {
"abstract": [
"We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to tradeoff exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.",
"The use of machine learning algorithms frequently involves careful tuning of learning parameters and model hyperparameters. Unfortunately, this tuning is often a \"black art\" requiring expert experience, rules of thumb, or sometimes brute-force search. There is therefore great appeal for automatic approaches that can optimize the performance of any given learning algorithm to the problem at hand. In this work, we consider this problem through the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP). We show that certain choices for the nature of the GP, such as the type of kernel and the treatment of its hyperparameters, can play a crucial role in obtaining a good optimizer that can achieve expertlevel performance. We describe new algorithms that take into account the variable cost (duration) of learning algorithm experiments and that can leverage the presence of multiple cores for parallel experimentation. We show that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms including latent Dirichlet allocation, structured SVMs and convolutional neural networks.",
"A method is described for the minimization of a function of n variables, which depends on the comparison of function values at the (n 41) vertices of a general simplex, followed by the replacement of the vertex with the highest value by another point. The simplex adapts itself to the local landscape, and contracts on to the final minimum. The method is shown to be effective and computationally compact. A procedure is given for the estimation of the Hessian matrix in the neighbourhood of the minimum, needed in statistical estimation problems.",
"",
"A new formulation for coordinate system independent adaptation of arbitrary normal mutation distributions with zero mean is presented. This enables the evolution strategy (ES) to adapt the correct scaling of a given problem and also ensures invariance with respect to any rotation of the fitness function (or the coordinate system). Especially rotation invariance, here resulting directly from the coordinate system independent adaptation of the mutation distribution, is an essential feature of the ES with regard to its general applicability to complex fitness functions. Compared to previous work on this subject, the introduced formulation facilitates an interpretation of the resulting mutation distribution, making sensible manipulation by the user possible (if desired). Furthermore it enables a more effective control of the overall mutation variance (expected step length).",
"Big Data applications are typically associated with systems involving large numbers of users, massive complex software systems, and large-scale heterogeneous computing and storage architectures. The construction of such systems involves many distributed design choices. The end products (e.g., recommendation systems, medical analysis tools, real-time game engines, speech recognizers) thus involve many tunable configuration parameters. These parameters are often specified and hard-coded into the software by various developers or teams. If optimized jointly, these parameters can result in significant improvements. Bayesian optimization is a powerful tool for the joint optimization of design choices that is gaining great popularity in recent years. It promises greater automation so as to increase both product quality and human productivity. This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications.",
"This paper proposes a new method that extends the efficient global optimization to address stochastic black-box systems. The method is based on a kriging meta-model that provides a global prediction of the objective values and a measure of prediction uncertainty at every point. The criterion for the infill sample selection is an augmented expected improvement function with desirable properties for stochastic responses. The method is empirically compared with the revised simplex search, the simultaneous perturbation stochastic approximation, and the DIRECT methods using six test problems from the literature. An application case study on an inventory system is also documented. The results suggest that the proposed method has excellent consistency and efficiency in finding global optimal solutions, and is particularly useful for expensive systems.",
"This paper addresses the solution of bound-constrained optimization problems using algorithms that require only the availability of objective function values but no derivative information. We refer to these algorithms as derivative-free algorithms. Fueled by a growing number of applications in science and engineering, the development of derivative-free optimization algorithms has long been studied, and it has found renewed interest in recent time. Along with many derivative-free algorithms, many software implementations have also appeared. The paper presents a review of derivative-free algorithms, followed by a systematic comparison of 22 related implementations using a test set of 502 problems. The test bed includes convex and nonconvex problems, smooth as well as nonsmooth problems. The algorithms were tested under the same conditions and ranked under several criteria, including their ability to find near-global solutions for nonconvex problems, improve a given starting point, and refine a near-optimal solution. A total of 112,448 problem instances were solved. We find that the ability of all these solvers to obtain good solutions diminishes with increasing problem size. For the problems used in this study, TOMLAB MULTIMIN, TOMLAB GLCCLUSTER, MCS and TOMLAB LGO are better, on average, than other derivative-free solvers in terms of solution quality within 2,500 function evaluations. These global solvers outperform local solvers even for convex problems. Finally, TOMLAB OQNLP, NEWUOA, and TOMLAB MULTIMIN show superior performance in terms of refining a near-optimal solution.",
"A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described."
],
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_3",
"@cite_6",
"@cite_24",
"@cite_5",
"@cite_10",
"@cite_12",
"@cite_20"
],
"mid": [
"2624086852",
"2131241448",
"2171074980",
"2950338507",
"2101677491",
"2192203593",
"2063180182",
"2160960847",
"2152195021"
]
} | Meta-Learning for Black-box Optimization | Several practical optimization problems such as process black-box optimization for complex dynamical systems pose a unique challenge owing to the restriction on the number of possible function evaluations. Such black-box functions do not have a simple closed form but can be evaluated (queried) at any arbitrary query point in the domain. However, evaluation of real-world complex processes is expensive and time consuming, therefore the optimization algorithm must optimize while employing as few real-world function evaluations as possible.
Most practical optimization problems are constrained in nature, i.e. have one or more constraints on the values of input parameters. In this work, we focus on real-parameter single-objective black-box optimization (BBO) where the goal is to obtain a value as close to the maximum value of the objective function as possible by adjusting the values of the real-valued continuous input parameters while ensuring domain constraints are not violated. We further assume a limited budget, i.e. assume that querying the black-box function is expensive and thus only a small number of queries can be made.
Efficient global optimization of expensive black-box functions [14] requires proposing the next query (input parameter values) to the black-box function based on past queries and the corresponding responses (function evaluations). BBO can be mapped to the problem of proposing the next query given past queries and the corresponding responses such that the expected improvement in the function value is maximized, as in Bayesian Optimization approaches [4]. While most research in optimization has focused on engineering algorithms catering to specific classes of problems, recent meta-learning [24] approaches, e.g. [2,18,5,27,7], cast design of an optimization algorithm as a learning problem rather than the traditional hand-engineering approach, and then, propose approaches to train neural networks that learn to optimize. In contrast to a traditional machine learning approach involving training of a neural network on a single task using training data samples so that it can generalize to unseen data samples from the same data distribution, here the neural network is trained on a distribution of similar tasks (in our case optimization tasks) so as to learn a strategy that generalizes to related but unseen tasks from a similar task distribution. The meta-learning approaches attempt to train a single network to optimize several functions at once such that the network can effectively generalize to optimize unseen functions.
Recently, [5] proposed a meta-learning approach wherein a recurrent neural network (RNN with gated units such as Long Short Term Memory (LSTM) [9]) learns to optimize a large number of diverse synthetic non-convex functions to yield a learned task-independent optimizer. The RNN iteratively uses the sequence of past queries and corresponding responses to propose the next query in order to maximize the observed improvement (OI) in the response value. We refer to this approach as RNN-OI in this work. Once the RNN is trained to optimize a diverse set of synthetic functions by using gradient descent, it is able to generalize well to solve unseen derivative-free black-box optimization problems [5,29]. Such learned optimizers are shown to be faster in terms of the time taken to propose the next query compared to Bayesian optimizers as they do not require any matrix inversion or optimization of acquisition functions, and also have lower regret values within the training horizon, i.e. the number of steps of the optimization process for which the RNN is trained to generate queries.
Key contributions of this work and the challenges addressed can be summarized as follows:
1. Regret-based loss function: We hypothesize that training an RNN optimizer using a loss function that minimizes the regret observed for a given number of queries more closely resembles the performance measure of an optimizer. So it is better than a loss function based on OI such as the one used in [5,29].
To this end, we propose a simple yet highly effective loss function that yields superior results than the existing OI loss for black-box optimization. Regret of the optimizer is the difference between the optimal value (maximum of the black-box function) and the realized maximum value.
2. Deal with lack of prior knowledge on range of the black-box function: In many practical optimization problems, it may be difficult to ascertain the possible range of values the function can take, and the range of values would vary across applications. On the other hand, neural networks are known to work well only on normalized inputs, and can be numerically unstable and difficult to train on very large or very small values as typical non-linear activation functions like sigmoid activation function tend to saturate for large inputs and will then adjust slowly during training. RNNs are most easily trained when their inputs are well conditioned, and have a similar scale as their latent state, and suitable scaling often accelerates training [27]. We, therefore, propose incremental normalization that dynamically normalizes the output (response) from the black-box function using the response values observed so far before the value is passed as an input to the RNN, and observe significant improvements in terms of regret by doing so.
3. Incorporate domain-constraints: Any practical optimization problem has a set of constraints on the input parameters. It is important that the RNN optimizer is penalized when it proposes query points outside the desired limits. We introduce a mechanism to achieve this by giving an additional feedback to the RNN whenever it proposes a query that violates domain constraints. In addition to regret-based loss, RNN is also trained to simultaneously minimize domain constraint violations. We show that an RNN optimizer trained in this manner attains lower regret values in fewer steps when subjected to domain constraints compared to an RNN optimizer not explicitly trained to utilize feedback.
We refer to the proposed approach as RNN-Opt. As a result of the above considerations, RNN-Opt can deal with an unknown range of function values and also incorporate domain constraints. We demonstrate that RNN-Opt works well on optimizing unseen benchmark black-box functions and outperforms RNN-OI in terms of the optimal value attained under a limited budget for 2-dimensional and 6-dimensional input spaces. We also perform extensive ablation experiments demonstrating the importance of each of the above-stated features in RNN-Opt.
The rest of the paper is organized as follows: We contrast our work to existing literature in Section 2, followed by defining the problem in Section 3. We present the details of our approach in Section 4, followed by experimental evaluation in Section 5, and conclude in Section 6.
Problem Overview
We consider learning an optimizer that can optimize (e.g., maximize) a black-box function f_b : Θ → R, where Θ ⊆ R^d is the domain of the input parameters.
We assume that the function f b does not have a closed-form representation, is costly to evaluate, and does not allow the computation of gradients. In other words, the optimizer can query the function f b at a point x to obtain a response y = f b (x), but it does not obtain any gradient information, and in particular it cannot make any assumptions on the analytical form of f b . The goal is to find x opt = arg max x∈Θ f b (x) within a limited budget, i.e. within a limited number of queries T that can be made to the black-box.
We consider training an optimizer f opt with parameters θ opt such that, given the queries x 1...t = x 1 , x 2 , . . . , x t and the corresponding responses y 1...t = y 1 , y 2 , . . . , y t from f b where y t = f b (x t ), f opt proposes the next query point x t+1 under a budget constraint of T queries, i.e. t ≤ T − 1:
x_{t+1} = f_opt(x_{1...t}, y_{1...t}; θ_opt).    (1)
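A minimal interface sketch for such an optimizer is shown below; the class and method names are hypothetical and not from the paper.

```python
from typing import List, Sequence

class BlackBoxOptimizer:
    """Propose the next query x_{t+1} from the history (x_1..x_t, y_1..y_t), as in Eq. (1)."""

    def __init__(self, dim: int, budget: int):
        self.dim, self.budget = dim, budget   # d and T

    def propose(self, xs: Sequence[Sequence[float]], ys: Sequence[float]) -> List[float]:
        """Return x_{t+1} given past queries and responses, with t <= T - 1."""
        raise NotImplementedError
```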
RNN-Opt
We model f opt using an LSTM-based RNN. (For implementation, we use a variant of LSTMs as described in [28].) Recurrent Neural Networks (RNNs) with gated units such as Long Short Term Memory (LSTM) [9] units are a popular choice for sequence modeling to make predictions about future values given the past. They do so by maintaining a memory of all the relevant information from the sequence of inputs observed so far. In the meta-learning or training phase, a diverse set of synthetically-generated differentiable non-convex functions (refer Appendix A) with known global optima are used to train the RNN (using gradient descent). The RNN is then used to predict the next query in order to intelligently explore the search space given the sequence of previous queries and the function responses. The RNN is expected to learn to retain any information about previous queries and responses that is relevant to proposing the next query to minimize the regret as shown in Fig. 1.
RNN-Opt without Domain Constraints
Given a trained RNN-based optimizer and a differentiable function f_g, inference in RNN-Opt proceeds as the following iterative process for t = 1, . . . , T − 1: at each step t, the output of the final recurrent hidden layer of the RNN is passed through an affine transformation to obtain x_{t+1}:
$h_{t+1} = f_o(h_t, x_t, y_t; \theta)$  (2)
$\mu^x_{t+1}, \Sigma^x_{t+1} = W_{2m,d}(h_{t+1})$  (3)
$x_{t+1} \sim \mathcal{N}(\mu^x_{t+1}, \Sigma^x_{t+1})$  (4)
$y_{t+1} = f_g(x_{t+1})$  (5)
where f o represents the RNN with parameters θ, f g is the function to be optimized, W 2m,d defines the affine transformation of the final output (hidden state) h t+1 of the RNN. The parameters θ and W 2m,d together constitute θ opt . Instead of directly training f o to propose the next query x t+1 as in [5], we use a stochastic RNN to estimate µ x t+1 ∈ R d and Σ x t+1 ∈ R d×d as in Equation 3, then sample x t+1 from a multivariate Gaussian distribution N (µ x t+1 , Σ x t+1 ). Introducing randomness in the query generation process leads to better exploration compared to a deterministic model [29]. The first query x 1 is sampled from a uniform distribution over the domain of the function f g to be optimized. Once the network is trained, f g can be replaced by any black-box function f b that takes d-dimensional input.
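To make the inference loop of Eqs. (2)-(5) concrete, the following is a minimal Python sketch. It is not the paper's implementation: the LSTM f_o is replaced by a single tanh recurrence, the covariance is restricted to a diagonal matrix parameterized by predicted log-variances, and the weight matrices, dimensions, and toy objective are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 2, 8                                          # input and hidden sizes (illustrative)
W_h = rng.normal(scale=0.1, size=(m, m + d + 1))     # placeholder recurrent weights
W_out = rng.normal(scale=0.1, size=(2 * d, m))       # affine map to (mu, log-variance)

def rnn_cell(h_t, x_t, y_t):
    """Stand-in for f_o in Eq. (2): one recurrent update from (h_t, x_t, y_t)."""
    z = np.concatenate([h_t, x_t, [y_t]])
    return np.tanh(W_h @ z)

def propose_next_query(h_t, x_t, y_t):
    """Eqs. (2)-(4): update the hidden state, then sample the next query."""
    h_next = rnn_cell(h_t, x_t, y_t)
    out = W_out @ h_next                             # Eq. (3): affine transformation
    mu, log_var = out[:d], out[d:]
    cov = np.diag(np.exp(log_var))                   # diagonal Sigma for simplicity
    x_next = rng.multivariate_normal(mu, cov)        # Eq. (4): stochastic query
    return h_next, x_next

f_g = lambda x: -np.sum(x ** 2)                      # toy stand-in for the function f_g
h, x = np.zeros(m), rng.uniform(-4.0, 4.0, size=d)   # x_1 is sampled uniformly
y = f_g(x)
for _ in range(5):
    h, x = propose_next_query(h, x, y)               # propose x_{t+1}
    y = f_g(x)                                       # Eq. (5): query the function
```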
For any synthetically generated function f g ∈ F, we assume x opt (approximate) can be found, e.g. using gradient-descent, since the closed form of the function is known. Hence, we assume that y opt of f g given by y opt = f g (x opt ) is known. Therefore, it is easy to determine the regret y opt − max i≤t y i after t iterations (queries) to the function f g . We can then define a regret-based loss function as follows:
$L_R = \sum_{f_g \in \mathcal{F}} \sum_{t=2}^{T} \frac{1}{\gamma^t}\,\mathrm{ReLU}\big(y_{opt} - \max_{i \leq t} y_i\big)$  (6)
where ReLU(x) = max(x, 0). Since the regret is expected to be high during initial iterations because of random initialization of x but desired to be low close to T , we give exponentially increasing importance to regret terms via a discount factor 0 < γ ≤ 1. In contrast to regret loss, OI loss used in RNN-OI is given by [5,29]:
$L_{OI} = \sum_{f_g \in \mathcal{F}} \sum_{t=2}^{T} \frac{1}{\gamma^t}\,\mathrm{ReLU}\big(y_t - \max_{i < t} y_i\big)$  (7)
It is to be noted that using L R as the loss function mimics a supervised scenario where the target y opt for each optimization task is known and explicitly used to guide the learning process. On the other hand, L OI mimics an unsupervised scenario where the target y opt is unknown and the learning process solely relies on the feedback about whether it is able to improve y t over iterations. It is important to note that once trained, the model requires neither y opt nor x opt during inference.
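As a concrete illustration of the difference between the two losses, the hedged sketch below evaluates Eq. (6) and Eq. (7) on a single rollout of responses; the toy response sequence and the value of y_opt are illustrative, and the outer sum over the function set F is omitted.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def regret_loss(y, y_opt, gamma=0.98):
    """Eq. (6) for one rollout: sum_{t=2}^{T} (1 / gamma^t) * ReLU(y_opt - max_{i<=t} y_i)."""
    y = np.asarray(y, dtype=float)
    best_so_far = np.maximum.accumulate(y)        # max_{i<=t} y_i
    t = np.arange(1, len(y) + 1)
    terms = relu(y_opt - best_so_far) / gamma ** t
    return terms[1:].sum()                        # the sum starts at t = 2

def oi_loss(y, gamma=1.0):
    """Eq. (7) for one rollout: only looks at improvement over the previous best."""
    y = np.asarray(y, dtype=float)
    prev_best = np.maximum.accumulate(y)[:-1]     # max_{i<t} y_i for t = 2..T
    t = np.arange(2, len(y) + 1)
    return (relu(y[1:] - prev_best) / gamma ** t).sum()

y_rollout = [-3.0, -1.5, -2.0, -0.4, -0.1]        # toy responses from one optimization run
print(regret_loss(y_rollout, y_opt=0.0), oi_loss(y_rollout))
```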
Incremental Normalization
We do not assume any constraint on the range of values the functions f_g and f_b can take. Although this flexibility is critical for most practical applications, it poses a challenge for the training and inference procedures using an RNN: neural networks are known to work well only on normalized inputs, and can be numerically unstable and difficult to train on very large or very small values, as typical non-linear activation functions like the sigmoid tend to saturate for large inputs and will then adjust slowly during training. RNNs are most easily trained when their inputs are well conditioned and have a similar scale as their latent state, and suitable scaling often accelerates training [12,27]. This poses a challenge during both training and inference if we directly use y_t as an input to the RNN, especially when incremental normalization of function values is not used during inference (cf. Fig. 2). This behavior at inference time was noted in [5], however, it was not considered while training RNN-OI. In order to deal with any range of values that f_g can take during training or that f_b can take during inference, we consider incremental normalization while training such that y_t in Eq. 2 is replaced by $\tilde{y}_t = \frac{y_t - \mu_t}{\sqrt{\sigma_t^2 + \epsilon}}$, so that $h_{t+1} = f_o(h_t, x_t, \tilde{y}_t; \theta)$, where $\mu_t = \frac{1}{t}\sum_{i=1}^{t} y_i$, $\sigma_t^2 = \frac{1}{t}\sum_{i=1}^{t} (y_i - \mu_t)^2$, and $\epsilon$ is a small constant.
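A minimal sketch of this incremental normalization is given below; the epsilon constant and the example response values are assumptions for illustration, and a real implementation would typically maintain running statistics instead of recomputing them at every step.

```python
import numpy as np

def incremental_normalize(ys, eps=1e-8):
    """Replace each y_t by (y_t - mu_t) / sqrt(sigma_t^2 + eps), where mu_t and sigma_t^2
    are the mean and variance of all responses observed up to and including step t."""
    ys = np.asarray(ys, dtype=float)
    out = []
    for t in range(1, len(ys) + 1):
        mu_t = ys[:t].mean()
        var_t = ys[:t].var()                      # (1/t) * sum_{i<=t} (y_i - mu_t)^2
        out.append((ys[t - 1] - mu_t) / np.sqrt(var_t + eps))
    return np.array(out)

# Responses with a very large range become well-conditioned inputs for the RNN:
print(incremental_normalize([2.4e5, 9.1e4, 3.0e3, 7.5e2, 1.2e1]))
```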
RNN-Opt with Domain Constraints (RNN-Opt-DC)
Consider a constrained optimization problem of finding arg max x f b (x) subject to constraints given by c j (x) ≤ 0, j = 1, . . . , C, where C is the number of constraints. To ensure that the optimizer proposes queries that satisfy the domain constraints, or is at least able to receive feedback when it proposes a query that violates any domain constraints, we consider the following enhancements in RNN-Opt, as depicted in Fig. 3:
Fig. 3 (caption; the figure shows the loss L_R): Here f_g is the function to be optimized, and f_p is used to compute the penalty p_t. Further, if p_t = 0, the actual value of f_g, i.e. y_t, is passed to the loss function and the RNN; else y_t is set to y_{t-1}.
1. Input an explicit feedback p t via a penalty function s.t. p t = f p (x t ) to the RNN that captures the extent to which a proposed query x t violates any of the C domain constraints. We consider the following instantiation of penalty function:
$f_p(x_t) = \sum_{j=1}^{C} \mathrm{ReLU}(c_j(x_t))$, i.e. for any j for which c_j(x_t) > 0 a penalty equal to c_j(x_t) is considered, while for any j with c_j(x_t) ≤ 0 the contribution to the penalty is 0. The real-valued penalty captures the cumulative extent of violation as well. Further, similar to normalizing y_t, we also normalize p_t incrementally and use p̃_t as an additional input to the RNN, such that:
$h_{t+1} = f_o(h_t, x_t, \tilde{y}_t, \tilde{p}_t; \theta)$  (8)
Further, whenever p_t > 0, i.e. when one or more of the domain constraints are violated by the proposed query, we set y_t = y_{t-1} rather than actually getting a response from the black-box. This is useful in practice: for example, when trying to optimize a complex dynamical system, getting a response from the system for such a query is not possible as it can be catastrophic.
2. During training, an additional domain constraint loss L_D is considered that penalizes the optimizer if it proposes a query that does not satisfy one or more of the domain constraints.
$L_D = \frac{1}{C} \sum_{f_g \in \mathcal{F}} \sum_{t=2}^{T} p_t$  (9)
The overall loss is then given by:
$L = L_R + \lambda L_D$  (10)
where λ controls how strictly the constraints on the domain of parameters should be enforced; higher λ implies stricter adherence to constraints. It is worth noting that the above formulation of incorporating domain constraints does not put any restriction on the number of constraints C nor on the nature of constraints in the sense that the constraints can be linear or non-linear in nature. Further, complex non-linear constraints based on domain knowledge can also be incorporated in a similar fashion during training, e.g. as used in [13,19]. Apart from optimizing (in our case, maximizing) f g , the optimizer is also being simultaneously trained to minimize f p .
Example of penalty function. Consider simple limit constraints on the input parameters such that the domain of the function f g is given by Θ = [x min , x max ], then we have:
$f_p(x_t) = \sum_{j=1}^{d} \big[ \mathrm{ReLU}(x_t^j - x_{max}^j) + \mathrm{ReLU}(x_{min}^j - x_t^j) \big]$  (11)
where x_t^j denotes the j-th dimension of x_t, and x_min^j and x_max^j are the j-th elements of x_min and x_max, respectively.
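A small sketch of the limit-constraint penalty of Eq. (11); the box limits and the example query are illustrative.

```python
import numpy as np

def limit_penalty(x, x_min, x_max):
    """f_p(x_t) of Eq. (11): cumulative violation of per-dimension limit constraints."""
    x, x_min, x_max = map(np.asarray, (x, x_min, x_max))
    return float(np.sum(np.maximum(x - x_max, 0.0) + np.maximum(x_min - x, 0.0)))

# A query inside the box incurs zero penalty; violations accumulate across dimensions.
print(limit_penalty([0.5, -3.2], x_min=[-2.0, -2.0], x_max=[2.0, 2.0]))   # ~1.2
```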
Experimental Evaluation
We conduct experiments to evaluate the following: i. regret loss (L R ) versus OI loss (L OI ), ii. effect of including incremental normalization during training, and iii. ability of RNN-Opt trained with domain constraints using L (Eq. 10) to generate more feasible queries and leverage feedback to quickly adapt in case it proposes queries violating domain constraints.
For the unconstrained setting, we test RNN-Opt on i) standard benchmark functions for d = 2 and d = 6, and ii) 1280 synthetically generated GMM-DF functions (refer Appendix A) not seen during training. We choose the benchmark functions such as Goldstein, Rosenbrock, and Rastrigin (and the simple spherical function) that are known to be challenging for standard optimization methods. None of these functions were used for training any of the optimizers.
We use regret r t = y opt − max i≤t y i to measure the performance of any optimizer after t iterations, i.e. after proposing t queries. Lower values of r t indicate superior optimizer performance. We test all the optimizers under limited budget setting such that T = 10 × d. For each test function, the first query is randomly sampled from U (−4.0, 4.0), and we report average regret r t over 1280 random initializations. For synthetically generated GMM-DF functions, we report average regret over 1280 functions with one random initialization for each.
All RNN-based optimizers (refer Table 1) were trained for 8000 iterations using the Adam optimizer [16] with an initial learning rate of 0.005. The network consists of two hidden layers, with the number of LSTM units in each layer chosen from {80, 120, 160} using a hold-out validation set of 1280 GMM-DF functions. Another set of 1280 randomly generated functions constitutes the GMM-DF test set. An initial code base developed using Tensorflow [1] was adapted to implement our algorithm. We used a batch size of 128, i.e. 128 randomly-sampled functions (refer Equation 12) are processed in one mini-batch for updating the parameters of the LSTM.
Table 1: RNN-based optimizers considered (columns: loss, discount factor γ, incremental normalization during training, incremental normalization during inference, domain-constraint loss L_D during training, penalty feedback during inference).
RNN-OI: L_OI, γ = 1.0, N, Y, N, N
RNN-Opt-Basic: L_R, γ = 0.98, N, Y, N, N
RNN-Opt: L_R, γ = 0.98, Y, Y, N, N
RNN-Opt-P: L_R, γ = 0.98, Y, Y, N, Y
RNN-Opt-DC: L_R + λL_D, γ = 0.98, Y, Y, Y, Y
Observations
We make the following key observations for the unconstrained optimization setting: 1. RNN-Opt is able to optimize black-box functions not seen during training, and hence, generalize. We compare RNN-Opt with RNN-OI and two standard black-box optimization algorithms, CMA-ES [8] and Nelder-Mead [20]. RNN-OI uses x_t, y_t, and h_t to get the next hidden state h_{t+1}, which is then used to get x_{t+1} (as in Eq. 4), such that h_{t+1} = f_o(h_t, x_t, y_t; θ), with the OI loss as given in Eq. 7. From Fig. 4 (a)-(i), we observe that RNN-Opt outperforms all the baselines on most of the functions considered, while being at least as good as the baselines in the few remaining cases. Except for the simple convex spherical function, RNN-based optimizers outperform CMA-ES and Nelder-Mead under a limited budget, i.e. with T = 20 for d = 2 and T = 60 for d = 6. We observe that the trained optimizers outperform CMA-ES and Nelder-Mead for higher-dimensional cases (d = 6 here), as also observed in [5,29].
2. Regret-based loss is better than the OI loss. We compare RNN-Opt-Basic with RNN-OI (refer Table 1), where RNN-Opt-Basic differs from RNN-OI only in the loss function (and the discount factor, as discussed in the next point). For a fair comparison with RNN-OI, RNN-Opt-Basic does not include incremental normalization during training. From Fig. 4 (j)-(k), we observe that RNN-Opt-Basic (with γ = 0.98) performs better than RNN-OI during initial steps for d = 2 (while being comparable eventually) and across all steps for d = 6, demonstrating the advantage of using the regret loss over the OI loss.
3. Significance of the discount factor when using regret-based loss versus OI loss. From Fig. 4 (j)-(k), we also observe that the results of RNN-Opt and RNN-OI are sensitive to the discount factor γ (refer Eqs. 6 and 7). γ < 1 works better for RNN-Opt while γ = 1 (i.e. no discount) works better for RNN-OI. This can be explained as follows: the queries proposed initially (small t) are expected to be far from y_opt due to random initialization, and therefore, have high initial regret. Hence, components of the loss term for smaller t should be given lower weightage in the regret-based loss. On the other hand, during later steps (close to T), we would like the regret to be as low as possible, and hence a higher importance should be given to the corresponding terms in the regret-based loss. In contrast, RNN-OI is trained to keep improving irrespective of y_opt, and hence giving equal importance to the contribution of each step to the OI loss works best.
4. Incremental normalization during training and inference to optimize functions with diverse range of values. We compare RNN-Opt-Basic and RNN-Opt, where RNN-Opt uses incremental normalization of inputs during training as well as testing (as described in Section 4.1) while RNN-Opt-Basic uses incremental normalization only during testing (refer Table 1). From Fig. 5, we observe that RNN-Opt performs significantly better than RNN-Opt-Basic proving the advantage of incorporating incremental normalization during training. Note that since most of the functions considered have large range of values, incremental normalization is by-default enabled for all RNN-based optimizers during testing to obtain meaningful results, as illustrated earlier in Fig. 2, especially for functions with large range, e.g. Rosenbrock.
RNN-Opt with Domain Constraints
To train RNN-Opt-DC, we generate synthetic functions with random limit constraints as explained in Section 4.2. The limits of the search space are set as [x opt − ∆x, x opt + ∆x] where ∆x j (j-th component of ∆x) is sampled from U (τ 1 , τ 2 ) (we use τ 1 = 1.0, τ 2 = 2.0 during training).
We use λ = 0.2 for RNN-Opt-DC. As a baseline, we use RNN-Opt with a minor variation during inference time (with no change in the training procedure) where, instead of passing ỹ_t as input to the RNN, we pass ỹ_t − p̃_t so as to capture penalty feedback. We call this baseline approach RNN-Opt-P (refer Table 1). While RNN-Opt-DC is explicitly trained to minimize the penalty p_t, RNN-Opt-P captures the requirement of trying to maximize y_t under a soft constraint of minimizing p_t only during inference time.
We use the standard quadratic (disk) constraint used to evaluate constrained optimization approaches, i.e. ||x||_2^2 ≤ τ × d (we use τ = {0.5, 1.0, 2.0}) for the Rosenbrock function. For GMM-DF, we generate random limit constraints on each dimension around the global optima, s.t. the optimal solution is still the same as the one without constraints, while the feasible search space varies randomly across functions. The limits of the domain are [x_opt − ∆x, x_opt + ∆x], where ∆x^j (the j-th component of ∆x) is sampled from U(τ_1, τ_2) (we use τ_1 = {0.5, 1.0, 1.5}, τ_2 = {1.5, 2.0, 2.5}). We also consider two instances of an (anonymized) non-linear surrogate model for a real-world industrial process built by subject-matter experts with six controllable input parameters (d = 6) as black-box functions, referred to as Industrial-1 and Industrial-2 in Fig. 6. This process imposes limit constraints on all six parameters guided by domain knowledge. The ground-truth optimal value for these functions was obtained by querying the surrogate model 200k times via grid search. The regret results are averaged over runs assuming diverse environmental conditions. RNN-Opt-DC and RNN-Opt-P are not guaranteed to propose feasible queries at all steps because of the soft constraints during training and/or inference. Therefore, despite training the optimizers for T steps, we unroll the RNNs up to a maximum of 5T steps and take the first T proposed queries that are feasible, i.e. satisfy the domain constraints. For functions where an optimizer is not able to propose T feasible queries in 5T steps, we replicate the regret corresponding to the best solution for the remaining steps. As shown in Fig. 6, we observe that RNN-Opt with domain constraints, namely RNN-Opt-DC, is able to effectively use explicit penalty feedback, and is at least as good as RNN-Opt-P in all cases. As expected, we also observe that the performance of both optimizers degrades with increasing values of τ or τ_2 − τ_1, as the search space to be explored by the optimizer increases.
Conclusion and Future Work
Learning optimization algorithms under the meta-learning paradigm is an area of active research. In this work, we have shown that using regret directly as a loss for training optimizers based on recurrent neural networks is possible, and that it yields better optimizers than those obtained using the observed-improvement based loss. We have proposed extensions of practical importance to black-box optimization algorithms that allow dealing with a diverse range of function values and handling domain constraints more effectively. One shortcoming of this approach is that a different optimizer needs to be trained for each number of input parameters. In the future, we plan to extend this work to train optimizers that can ingest inputs with a varying and high number of parameters, e.g. by first proposing a change in a latent space and then estimating changes in the actual input space as in [22,27]. Further, training optimizers for multi-objective optimization can be a useful extension.
A Generating Diverse Non-Convex Synthetic Functions
We generate synthetic non-convex continuous functions f g defined over Θ ⊆ R d via a Gaussian Mixture Model density function (GMM-DF, similar to [29]):
$f_g(x_t) = \sum_{i=1}^{N} \frac{c_i}{(2\pi)^{k/2} |\Sigma_i|^{1/2}} \exp\!\big(-\tfrac{1}{2}(x_t - \mu_i)^T \Sigma_i^{-1} (x_t - \mu_i)\big)$  (12)
In this work, we used GMM-DF instead of Gaussian Processes used in [5] for ease of implementation and faster response time to queries: Functions obtained in this manner are often non-convex and have multiple local minima/maxima. Sample plots for functions obtained over 2-D input space are shown in Fig. 7. We use c i ∼ N (0, 0.2), µ i ∼ U (−2.0, 2.0) and Σ i ∼ T runcatedN (0.9, 0.9/5) for d = 2, µ i ∼ U (−3.0, 3.0) and Σ i ∼ T runcatedN (3.0, 3.0/5) for d = 6 in our experiments (all covariance matrices are diagonal). For any function f g , we use an estimated valueŷ opt = max i f g (µ i ) (i = 1, 2, . . . , N ) instead of y opt . This assumes that the global maximum of the function is at the mean of one of the N Gaussian components. We validate this assumption by obtaining better estimates of the ground truth for y opt via grid search over randomly sampled 0.2M query points over the domain of f g . For 10k randomly sampled GMM-DF functions, we obtained an average error of 0.03 with standard deviation of 0.02 in estimating y opt , suggesting that the assumption is reasonable, and in practice, approximate values of y opt suffice to estimate the regret values for supervision. However, in general, y opt can also be obtained using gradient descent on f g . | 4,792 |
1907.06901 | 2960433490 | Recently, neural networks trained as optimizers under the "learning to learn" or meta-learning framework have been shown to be effective for a broad range of optimization tasks including derivative-free black-box function optimization. Recurrent neural networks (RNNs) trained to optimize a diverse set of synthetic non-convex differentiable functions via gradient descent have been effective at optimizing derivative-free black-box functions. In this work, we propose RNN-Opt: an approach for learning RNN-based optimizers for optimizing real-parameter single-objective continuous functions under limited budget constraints. Existing approaches utilize an observed improvement based meta-learning loss function for training such models. We propose training RNN-Opt by using synthetic non-convex functions with known (approximate) optimal values by directly using discounted regret as our meta-learning loss function. We hypothesize that a regret-based loss function mimics typical testing scenarios, and would therefore lead to better optimizers compared to optimizers trained only to propose queries that improve over previous queries. Further, RNN-Opt incorporates simple yet effective enhancements during training and inference procedures to deal with the following practical challenges: i) Unknown range of possible values for the black-box function to be optimized, and ii) Practical and domain-knowledge based constraints on the input parameters. We demonstrate the efficacy of RNN-Opt in comparison to existing methods on several synthetic as well as standard benchmark black-box functions along with an anonymized industrial constrained optimization problem. | Recent work on Physics-guided deep learning @cite_22 @cite_16 incorporates domain knowledge in the learning process via additional loss terms. Such approaches can be useful in our setting if the optimizer network is to be trained from scratch for a given application. However, the purpose of building a generic optimizer that can be transferred to new applications requires incorporating domain constraints in a posterior manner during inference time when the optimizer is suggesting query points. This is not only useful to adapt the same optimizer to a new application but also useful in another practical scenario of adapting to a new set of domain constraints for a given application. ThermalNet @cite_0 uses a deep Q-network as an optimizer and uses an LSTM predictor for combustion optimization of a boiler in a power plant but does not handle domain constraints. Similar to our approach, ChemOpt @cite_23 uses an RNN based optimizer for chemical reaction optimization but does not address aspects related to handling an unknown range for the function being optimized and incorporating domain constraints. | {
"abstract": [
"Abstract This paper presents a combustion optimization system for coal-fired boilers that includes a trade-off between emissions control and boiler efficiency. Designing an optimizer for this nonlinear, multiple-input multiple-output problem is challenging. This paper describes the development of an integrated combustion optimization system called ThermalNet, which is based on a deep Q-network (DQN) and a long short-term memory (LSTM) module. ThermalNet is a highly automated system consisting of an LSTM–ConvNet predictor and a DQN optimizer. The LSTM–ConvNet extracts the features of boiler behavior from the distributed control system (DCS) operational data of a supercritical thermal plant. The DQN reinforcement learning optimizer contributes to the online development of policies based on static and dynamic states. ThermalNet establishes a sequence of control actions that both reduce emissions and simultaneously enhance fuel utilization. The internal structure of the DQN optimizer demonstrates a greater representation capacity than does the shallow multilayer optimizer. The presented experiments indicate the effectiveness of the proposed optimization system.",
"In recent years, the large amount of labeled data available has also helped tend research toward using minimal domain knowledge, e.g., in deep neural network research. However, in many situations, data is limited and of poor quality. Can domain knowledge be useful in such a setting? In this paper, we propose domain adapted neural networks (DANN) to explore how domain knowledge can be integrated into model training for deep networks. In particular, we incorporate loss terms for knowledge available as monotonicity constraints and approximation constraints. We evaluate our model on both synthetic data generated using the popular Bohachevsky function and a real-world dataset for predicting oxygen solubility in water. In both situations, we find that our DANN model outperforms its domain-agnostic counterpart yielding an overall mean performance improvement of 19.5 with a worst- and best-case performance improvement of 4 and 42.7 , respectively.",
"This paper proposes a physics-guided recurrent neural network model (PGRNN) that combines RNNs and physics-based models to leverage their complementary strengths and improve the modeling of physical processes. Specifically, we show that a PGRNN can improve prediction accuracy over that of physical models, while generating outputs consistent with physical laws, and achieving good generalizability. Standard RNNs, even when producing superior prediction accuracy, often produce physically inconsistent results and lack generalizability. We further enhance this approach by using a pre-training method that leverages the simulated data from a physics-based model to address the scarcity of observed data. The PGRNN has the flexibility to incorporate additional physical constraints and we incorporate a density-depth relationship. Both enhancements further improve PGRNN performance. Although we present and evaluate this methodology in the context of modeling the dynamics of temperature in lakes, it is applicable more widely to a range of scientific and engineering disciplines where mechanistic (also known as process-based) models are used, e.g., power engineering, climate science, materials science, computational chemistry, and biomedicine.",
"Deep reinforcement learning was employed to optimize chemical reactions. Our model iteratively records the results of a chemical reaction and chooses new experimental conditions to improve the reaction outcome. This model outperformed a state-of-the-art blackbox optimization algorithm by using 71 fewer steps on both simulations and real reactions. Furthermore, we introduced an efficient exploration strategy by drawing the reaction conditions from certain probability distributions, which resulted in an improvement on regret from 0.062 to 0.039 compared with a deterministic policy. Combining the efficient exploration policy with accelerated microdroplet reactions, optimal reaction conditions were determined in 30 min for the four reactions considered, and a better understanding of the factors that control microdroplet reactions was reached. Moreover, our model showed a better performance after training on reactions with similar or even dissimilar underlying mechanisms, which demonstrates its learning ability."
],
"cite_N": [
"@cite_0",
"@cite_16",
"@cite_22",
"@cite_23"
],
"mid": [
"2884274441",
"2913159621",
"2899017598",
"2774977638"
]
} | Meta-Learning for Black-box Optimization | Several practical optimization problems such as process black-box optimization for complex dynamical systems pose a unique challenge owing to the restriction on the number of possible function evaluations. Such black-box functions do not have a simple closed form but can be evaluated (queried) at any arbitrary query point in the domain. However, evaluation of real-world complex processes is expensive and time consuming, therefore the optimization algorithm must optimize while employing as few real-world function evaluations as possible.
Most practical optimization problems are constrained in nature, i.e. have one or more constraints on the values of input parameters. In this work, we focus on real-parameter single-objective black-box optimization (BBO) where the goal is to obtain a value as close to the maximum value of the objective function as possible by adjusting the values of the real-valued continuous input parameters while ensuring domain constraints are not violated. We further assume a limited budget, i.e. assume that querying the black-box function is expensive and thus only a small number of queries can be made.
Efficient global optimization of expensive black-box functions [14] requires proposing the next query (input parameter values) to the black-box function based on past queries and the corresponding responses (function evaluations). BBO can be mapped to the problem of proposing the next query given past queries and the corresponding responses such that the expected improvement in the function value is maximized, as in Bayesian Optimization approaches [4]. While most research in optimization has focused on engineering algorithms catering to specific classes of problems, recent meta-learning [24] approaches, e.g. [2,18,5,27,7], cast design of an optimization algorithm as a learning problem rather than the traditional hand-engineering approach, and then, propose approaches to train neural networks that learn to optimize. In contrast to a traditional machine learning approach involving training of a neural network on a single task using training data samples so that it can generalize to unseen data samples from the same data distribution, here the neural network is trained on a distribution of similar tasks (in our case optimization tasks) so as to learn a strategy that generalizes to related but unseen tasks from a similar task distribution. The meta-learning approaches attempt to train a single network to optimize several functions at once such that the network can effectively generalize to optimize unseen functions.
Recently, [5] proposed a meta-learning approach wherein a recurrent neural network (RNN with gated units such as Long Short Term Memory (LSTM) [9]) learns to optimize a large number of diverse synthetic non-convex functions to yield a learned task-independent optimizer. The RNN iteratively uses the sequence of past queries and corresponding responses to propose the next query in order to maximize the observed improvement (OI) in the response value. We refer to this approach as RNN-OI in this work. Once the RNN is trained to optimize a diverse set of synthetic functions by using gradient descent, it is able to generalize well to solve unseen derivative-free black-box optimization problems [5,29]. Such learned optimizers are shown to be faster in terms of the time taken to propose the next query compared to Bayesian optimizers as they do not require any matrix inversion or optimization of acquisition functions, and also have lower regret values within the training horizon, i.e. the number of steps of the optimization process for which the RNN is trained to generate queries.
Key contributions of this work and the challenges addressed can be summarized as follows:
1. Regret-based loss function: We hypothesize that training an RNN optimizer using a loss function that minimizes the regret observed for a given number of queries more closely resembles the performance measure of an optimizer, and is therefore better suited than a loss function based on OI such as the one used in [5,29]. To this end, we propose a simple yet highly effective loss function that yields superior results compared to the existing OI loss for black-box optimization. Regret of the optimizer is the difference between the optimal value (maximum of the black-box function) and the realized maximum value.
2. Deal with lack of prior knowledge on range of the black-box function: In many practical optimization problems, it may be difficult to ascertain the possible range of values the function can take, and the range of values would vary across applications. On the other hand, neural networks are known to work well only on normalized inputs, and can be numerically unstable and difficult to train on very large or very small values as typical non-linear activation functions like sigmoid activation function tend to saturate for large inputs and will then adjust slowly during training. RNNs are most easily trained when their inputs are well conditioned, and have a similar scale as their latent state, and suitable scaling often accelerates training [27]. We, therefore, propose incremental normalization that dynamically normalizes the output (response) from the black-box function using the response values observed so far before the value is passed as an input to the RNN, and observe significant improvements in terms of regret by doing so.
3. Incorporate domain-constraints: Any practical optimization problem has a set of constraints on the input parameters. It is important that the RNN optimizer is penalized when it proposes query points outside the desired limits. We introduce a mechanism to achieve this by giving an additional feedback to the RNN whenever it proposes a query that violates domain constraints. In addition to regret-based loss, RNN is also trained to simultaneously minimize domain constraint violations. We show that an RNN optimizer trained in this manner attains lower regret values in fewer steps when subjected to domain constraints compared to an RNN optimizer not explicitly trained to utilize feedback.
We refer to the proposed approach as RNN-Opt. As a result of the above considerations, RNN-Opt can deal with an unknown range of function values and also incorporate domain constraints. We demonstrate that RNN-Opt works well on optimizing unseen benchmark black-box functions and outperforms RNN-OI in terms of the optimal value attained under a limited budget for 2-dimensional and 6-dimensional input spaces. We also perform extensive ablation experiments demonstrating the importance of each of the above-stated features in RNN-Opt.
The rest of the paper is organized as follows: We contrast our work to existing literature in Section 2, followed by defining the problem in Section 3. We present the details of our approach in Section 4, followed by experimental evaluation in Section 5, and conclude in Section 6.
Problem Overview
We consider learning an optimizer that can optimize (e.g., maximize) a blackbox function
f b : Θ → R, where Θ ⊆ R d is the domain of the input parameters.
We assume that the function f b does not have a closed-form representation, is costly to evaluate, and does not allow the computation of gradients. In other words, the optimizer can query the function f b at a point x to obtain a response y = f b (x), but it does not obtain any gradient information, and in particular it cannot make any assumptions on the analytical form of f b . The goal is to find x opt = arg max x∈Θ f b (x) within a limited budget, i.e. within a limited number of queries T that can be made to the black-box.
We consider training an optimizer f opt with parameters θ opt such that, given the queries x 1...t = x 1 , x 2 , . . . , x t and the corresponding responses y 1...t = y 1 , y 2 , . . . , y t from f b where y t = f b (x t ), f opt proposes the next query point x t+1 under a budget constraint of T queries, i.e. t ≤ T − 1:
$x_{t+1} = f_{opt}(x_{1 \ldots t}, y_{1 \ldots t}; \theta_{opt})$  (1)
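To illustrate the interface implied by Eq. (1), here is a hedged sketch of a budgeted query loop; the sampling range, budget, toy objective, and the random-search proposal rule are placeholders, and in the actual method the proposal rule would be the trained RNN optimizer f_opt.

```python
import numpy as np

def optimize_black_box(f_b, propose_next, d, T, seed=0):
    """Budgeted loop: the optimizer sees only past queries and responses, as in Eq. (1)."""
    rng = np.random.default_rng(seed)
    xs = [rng.uniform(-4.0, 4.0, size=d)]        # the first query is drawn uniformly
    ys = [f_b(xs[0])]
    for _ in range(T - 1):
        x_next = propose_next(xs, ys)            # x_{t+1} = f_opt(x_1..t, y_1..t; theta_opt)
        xs.append(x_next)
        ys.append(f_b(x_next))                   # only function responses, no gradients
    return xs, ys, max(ys)

# Any proposal rule with this signature can be plugged in, e.g. plain random search:
def random_search(xs, ys, rng=np.random.default_rng(1)):
    return rng.uniform(-4.0, 4.0, size=len(xs[0]))

_, _, best = optimize_black_box(lambda x: -np.sum((x - 1.0) ** 2), random_search, d=2, T=20)
print(best)
```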
RNN-Opt
We model f opt using an LSTM-based RNN. (For implementation, we use a variant of LSTMs as described in [28].) Recurrent Neural Networks (RNNs) with gated units such as Long Short Term Memory (LSTM) [9] units are a popular choice for sequence modeling to make predictions about future values given the past. They do so by maintaining a memory of all the relevant information from the sequence of inputs observed so far. In the meta-learning or training phase, a diverse set of synthetically-generated differentiable non-convex functions (refer Appendix A) with known global optima are used to train the RNN (using gradient descent). The RNN is then used to predict the next query in order to intelligently explore the search space given the sequence of previous queries and the function responses. The RNN is expected to learn to retain any information about previous queries and responses that is relevant to proposing the next query to minimize the regret as shown in Fig. 1.
RNN-Opt without Domain Constraints
Given a trained RNN-based optimizer and a differentiable function f_g, inference in RNN-Opt proceeds as the following iterative process for t = 1, . . . , T − 1: at each step t, the output of the final recurrent hidden layer of the RNN is passed through an affine transformation to obtain x_{t+1}:
$h_{t+1} = f_o(h_t, x_t, y_t; \theta)$  (2)
$\mu^x_{t+1}, \Sigma^x_{t+1} = W_{2m,d}(h_{t+1})$  (3)
$x_{t+1} \sim \mathcal{N}(\mu^x_{t+1}, \Sigma^x_{t+1})$  (4)
$y_{t+1} = f_g(x_{t+1})$  (5)
where f o represents the RNN with parameters θ, f g is the function to be optimized, W 2m,d defines the affine transformation of the final output (hidden state) h t+1 of the RNN. The parameters θ and W 2m,d together constitute θ opt . Instead of directly training f o to propose the next query x t+1 as in [5], we use a stochastic RNN to estimate µ x t+1 ∈ R d and Σ x t+1 ∈ R d×d as in Equation 3, then sample x t+1 from a multivariate Gaussian distribution N (µ x t+1 , Σ x t+1 ). Introducing randomness in the query generation process leads to better exploration compared to a deterministic model [29]. The first query x 1 is sampled from a uniform distribution over the domain of the function f g to be optimized. Once the network is trained, f g can be replaced by any black-box function f b that takes d-dimensional input.
For any synthetically generated function f g ∈ F, we assume x opt (approximate) can be found, e.g. using gradient-descent, since the closed form of the function is known. Hence, we assume that y opt of f g given by y opt = f g (x opt ) is known. Therefore, it is easy to determine the regret y opt − max i≤t y i after t iterations (queries) to the function f g . We can then define a regret-based loss function as follows:
$L_R = \sum_{f_g \in \mathcal{F}} \sum_{t=2}^{T} \frac{1}{\gamma^t}\,\mathrm{ReLU}\big(y_{opt} - \max_{i \leq t} y_i\big)$  (6)
where ReLU(x) = max(x, 0). Since the regret is expected to be high during initial iterations because of random initialization of x but desired to be low close to T , we give exponentially increasing importance to regret terms via a discount factor 0 < γ ≤ 1. In contrast to regret loss, OI loss used in RNN-OI is given by [5,29]:
$L_{OI} = \sum_{f_g \in \mathcal{F}} \sum_{t=2}^{T} \frac{1}{\gamma^t}\,\mathrm{ReLU}\big(y_t - \max_{i < t} y_i\big)$  (7)
It is to be noted that using L R as the loss function mimics a supervised scenario where the target y opt for each optimization task is known and explicitly used to guide the learning process. On the other hand, L OI mimics an unsupervised scenario where the target y opt is unknown and the learning process solely relies on the feedback about whether it is able to improve y t over iterations. It is important to note that once trained, the model requires neither y opt nor x opt during inference.
Incremental Normalization
We do not assume any constraint on the range of values the functions f_g and f_b can take. Although this flexibility is critical for most practical applications, it poses a challenge for the training and inference procedures using an RNN: neural networks are known to work well only on normalized inputs, and can be numerically unstable and difficult to train on very large or very small values, as typical non-linear activation functions like the sigmoid tend to saturate for large inputs and will then adjust slowly during training. RNNs are most easily trained when their inputs are well conditioned and have a similar scale as their latent state, and suitable scaling often accelerates training [12,27]. This poses a challenge during both training and inference if we directly use y_t as an input to the RNN, especially when incremental normalization of function values is not used during inference (cf. Fig. 2). This behavior at inference time was noted in [5], however, it was not considered while training RNN-OI. In order to deal with any range of values that f_g can take during training or that f_b can take during inference, we consider incremental normalization while training such that y_t in Eq. 2 is replaced by $\tilde{y}_t = \frac{y_t - \mu_t}{\sqrt{\sigma_t^2 + \epsilon}}$, so that $h_{t+1} = f_o(h_t, x_t, \tilde{y}_t; \theta)$, where $\mu_t = \frac{1}{t}\sum_{i=1}^{t} y_i$, $\sigma_t^2 = \frac{1}{t}\sum_{i=1}^{t} (y_i - \mu_t)^2$, and $\epsilon$ is a small constant.
RNN-Opt with Domain Constraints (RNN-Opt-DC)
Consider a constrained optimization problem of finding arg max x f b (x) subject to constraints given by c j (x) ≤ 0, j = 1, . . . , C, where C is the number of constraints. To ensure that the optimizer proposes queries that satisfy the domain constraints, or is at least able to receive feedback when it proposes a query that violates any domain constraints, we consider the following enhancements in RNN-Opt, as depicted in Fig. 3:
Fig. 3 (caption; the figure shows the loss L_R): Here f_g is the function to be optimized, and f_p is used to compute the penalty p_t. Further, if p_t = 0, the actual value of f_g, i.e. y_t, is passed to the loss function and the RNN; else y_t is set to y_{t-1}.
1. Input an explicit feedback p t via a penalty function s.t. p t = f p (x t ) to the RNN that captures the extent to which a proposed query x t violates any of the C domain constraints. We consider the following instantiation of penalty function:
$f_p(x_t) = \sum_{j=1}^{C} \mathrm{ReLU}(c_j(x_t))$, i.e. for any j for which c_j(x_t) > 0 a penalty equal to c_j(x_t) is considered, while for any j with c_j(x_t) ≤ 0 the contribution to the penalty is 0. The real-valued penalty captures the cumulative extent of violation as well. Further, similar to normalizing y_t, we also normalize p_t incrementally and use p̃_t as an additional input to the RNN, such that:
$h_{t+1} = f_o(h_t, x_t, \tilde{y}_t, \tilde{p}_t; \theta)$  (8)
Further, whenever p_t > 0, i.e. when one or more of the domain constraints are violated by the proposed query, we set y_t = y_{t-1} rather than actually getting a response from the black-box. This is useful in practice: for example, when trying to optimize a complex dynamical system, getting a response from the system for such a query is not possible as it can be catastrophic.
2. During training, an additional domain constraint loss L_D is considered that penalizes the optimizer if it proposes a query that does not satisfy one or more of the domain constraints.
$L_D = \frac{1}{C} \sum_{f_g \in \mathcal{F}} \sum_{t=2}^{T} p_t$  (9)
The overall loss is then given by:
$L = L_R + \lambda L_D$  (10)
where λ controls how strictly the constraints on the domain of parameters should be enforced; higher λ implies stricter adherence to constraints. It is worth noting that the above formulation of incorporating domain constraints does not put any restriction on the number of constraints C nor on the nature of constraints in the sense that the constraints can be linear or non-linear in nature. Further, complex non-linear constraints based on domain knowledge can also be incorporated in a similar fashion during training, e.g. as used in [13,19]. Apart from optimizing (in our case, maximizing) f g , the optimizer is also being simultaneously trained to minimize f p .
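The hedged sketch below puts Eqs. (6), (9) and (10) together for a single training rollout; the rollout values are illustrative, the constraints are aggregated into a single penalty sequence (C = 1 by default), and the rule that carries y_{t-1} forward whenever a penalty is incurred mirrors the description above rather than the actual training code.

```python
import numpy as np

def constrained_rollout_loss(ys, penalties, y_opt, gamma=0.98, lam=0.2, C=1):
    """L = L_R + lambda * L_D (Eq. (10)) for one rollout, with penalties[t] = f_p(x_t)."""
    y_eff = [ys[0]]
    for t in range(1, len(ys)):
        # When p_t > 0 the black-box is not queried, so y_t is carried over from y_{t-1}.
        y_eff.append(y_eff[-1] if penalties[t] > 0 else ys[t])
    best = np.maximum.accumulate(np.asarray(y_eff, dtype=float))
    t_idx = np.arange(2, len(ys) + 1)
    L_R = np.sum(np.maximum(y_opt - best[1:], 0.0) / gamma ** t_idx)   # Eq. (6)
    L_D = np.sum(penalties[1:]) / C                                    # Eq. (9)
    return L_R + lam * L_D                                             # Eq. (10)

print(constrained_rollout_loss(
    ys=[-2.0, -1.0, -0.5, -0.2], penalties=[0.0, 0.0, 0.7, 0.0], y_opt=0.0))
```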
Example of penalty function. Consider simple limit constraints on the input parameters such that the domain of the function f g is given by Θ = [x min , x max ], then we have:
$f_p(x_t) = \sum_{j=1}^{d} \big[ \mathrm{ReLU}(x_t^j - x_{max}^j) + \mathrm{ReLU}(x_{min}^j - x_t^j) \big]$  (11)
where x_t^j denotes the j-th dimension of x_t, and x_min^j and x_max^j are the j-th elements of x_min and x_max, respectively.
Experimental Evaluation
We conduct experiments to evaluate the following: i. regret loss (L R ) versus OI loss (L OI ), ii. effect of including incremental normalization during training, and iii. ability of RNN-Opt trained with domain constraints using L (Eq. 10) to generate more feasible queries and leverage feedback to quickly adapt in case it proposes queries violating domain constraints.
For the unconstrained setting, we test RNN-Opt on i) standard benchmark functions for d = 2 and d = 6, and ii) 1280 synthetically generated GMM-DF functions (refer Appendix A) not seen during training. We choose the benchmark functions such as Goldstein, Rosenbrock, and Rastrigin (and the simple spherical function) that are known to be challenging for standard optimization methods. None of these functions were used for training any of the optimizers.
We use regret r t = y opt − max i≤t y i to measure the performance of any optimizer after t iterations, i.e. after proposing t queries. Lower values of r t indicate superior optimizer performance. We test all the optimizers under limited budget setting such that T = 10 × d. For each test function, the first query is randomly sampled from U (−4.0, 4.0), and we report average regret r t over 1280 random initializations. For synthetically generated GMM-DF functions, we report average regret over 1280 functions with one random initialization for each.
All RNN-based optimizers (refer Table 1) were trained for 8000 iterations using the Adam optimizer [16] with an initial learning rate of 0.005. The network consists of two hidden layers, with the number of LSTM units in each layer chosen from {80, 120, 160} using a hold-out validation set of 1280 GMM-DF functions. Another set of 1280 randomly generated functions constitutes the GMM-DF test set. An initial code base developed using Tensorflow [1] was adapted to implement our algorithm. We used a batch size of 128, i.e. 128 randomly-sampled functions (refer Equation 12) are processed in one mini-batch for updating the parameters of the LSTM.
Table 1: RNN-based optimizers considered (columns: loss, discount factor γ, incremental normalization during training, incremental normalization during inference, domain-constraint loss L_D during training, penalty feedback during inference).
RNN-OI: L_OI, γ = 1.0, N, Y, N, N
RNN-Opt-Basic: L_R, γ = 0.98, N, Y, N, N
RNN-Opt: L_R, γ = 0.98, Y, Y, N, N
RNN-Opt-P: L_R, γ = 0.98, Y, Y, N, Y
RNN-Opt-DC: L_R + λL_D, γ = 0.98, Y, Y, Y, Y
Observations
We make the following key observations for the unconstrained optimization setting: 1. RNN-Opt is able to optimize black-box functions not seen during training, and hence, generalize. We compare RNN-Opt with RNN-OI and two standard black-box optimization algorithms, CMA-ES [8] and Nelder-Mead [20]. RNN-OI uses x_t, y_t, and h_t to get the next hidden state h_{t+1}, which is then used to get x_{t+1} (as in Eq. 4), such that h_{t+1} = f_o(h_t, x_t, y_t; θ), with the OI loss as given in Eq. 7. From Fig. 4 (a)-(i), we observe that RNN-Opt outperforms all the baselines on most of the functions considered, while being at least as good as the baselines in the few remaining cases. Except for the simple convex spherical function, RNN-based optimizers outperform CMA-ES and Nelder-Mead under a limited budget, i.e. with T = 20 for d = 2 and T = 60 for d = 6. We observe that the trained optimizers outperform CMA-ES and Nelder-Mead for higher-dimensional cases (d = 6 here), as also observed in [5,29].
2. Regret-based loss is better than the OI loss. We compare RNN-Opt-Basic with RNN-OI (refer Table 1), where RNN-Opt-Basic differs from RNN-OI only in the loss function (and the discount factor, as discussed in the next point). For a fair comparison with RNN-OI, RNN-Opt-Basic does not include incremental normalization during training. From Fig. 4 (j)-(k), we observe that RNN-Opt-Basic (with γ = 0.98) performs better than RNN-OI during initial steps for d = 2 (while being comparable eventually) and across all steps for d = 6, demonstrating the advantage of using the regret loss over the OI loss.
3. Significance of the discount factor when using regret-based loss versus OI loss. From Fig. 4 (j)-(k), we also observe that the results of RNN-Opt and RNN-OI are sensitive to the discount factor γ (refer Eqs. 6 and 7). γ < 1 works better for RNN-Opt while γ = 1 (i.e. no discount) works better for RNN-OI. This can be explained as follows: the queries proposed initially (small t) are expected to be far from y_opt due to random initialization, and therefore, have high initial regret. Hence, components of the loss term for smaller t should be given lower weightage in the regret-based loss. On the other hand, during later steps (close to T), we would like the regret to be as low as possible, and hence a higher importance should be given to the corresponding terms in the regret-based loss. In contrast, RNN-OI is trained to keep improving irrespective of y_opt, and hence giving equal importance to the contribution of each step to the OI loss works best.
4. Incremental normalization during training and inference to optimize functions with diverse range of values. We compare RNN-Opt-Basic and RNN-Opt, where RNN-Opt uses incremental normalization of inputs during training as well as testing (as described in Section 4.1) while RNN-Opt-Basic uses incremental normalization only during testing (refer Table 1). From Fig. 5, we observe that RNN-Opt performs significantly better than RNN-Opt-Basic proving the advantage of incorporating incremental normalization during training. Note that since most of the functions considered have large range of values, incremental normalization is by-default enabled for all RNN-based optimizers during testing to obtain meaningful results, as illustrated earlier in Fig. 2, especially for functions with large range, e.g. Rosenbrock.
RNN-Opt with Domain Constraints
To train RNN-Opt-DC, we generate synthetic functions with random limit constraints as explained in Section 4.2. The limits of the search space are set as [x opt − ∆x, x opt + ∆x] where ∆x j (j-th component of ∆x) is sampled from U (τ 1 , τ 2 ) (we use τ 1 = 1.0, τ 2 = 2.0 during training).
We use λ = 0.2 for RNN-Opt-DC. As a baseline, we use RNN-Opt with a minor variation during inference time (with no change in the training procedure) where, instead of passing ỹ_t as input to the RNN, we pass ỹ_t − p̃_t so as to capture penalty feedback. We call this baseline approach RNN-Opt-P (refer Table 1). While RNN-Opt-DC is explicitly trained to minimize the penalty p_t, RNN-Opt-P captures the requirement of trying to maximize y_t under a soft constraint of minimizing p_t only during inference time.
We use the standard quadratic (disk) constraint used to evaluate constrained optimization approaches, i.e. ||x||_2^2 ≤ τ × d (we use τ = {0.5, 1.0, 2.0}) for the Rosenbrock function. For GMM-DF, we generate random limit constraints on each dimension around the global optima, s.t. the optimal solution is still the same as the one without constraints, while the feasible search space varies randomly across functions. The limits of the domain are [x_opt − ∆x, x_opt + ∆x], where ∆x^j (the j-th component of ∆x) is sampled from U(τ_1, τ_2) (we use τ_1 = {0.5, 1.0, 1.5}, τ_2 = {1.5, 2.0, 2.5}). We also consider two instances of an (anonymized) non-linear surrogate model for a real-world industrial process built by subject-matter experts with six controllable input parameters (d = 6) as black-box functions, referred to as Industrial-1 and Industrial-2 in Fig. 6. This process imposes limit constraints on all six parameters guided by domain knowledge. The ground-truth optimal value for these functions was obtained by querying the surrogate model 200k times via grid search. The regret results are averaged over runs assuming diverse environmental conditions. RNN-Opt-DC and RNN-Opt-P are not guaranteed to propose feasible queries at all steps because of the soft constraints during training and/or inference. Therefore, despite training the optimizers for T steps, we unroll the RNNs up to a maximum of 5T steps and take the first T proposed queries that are feasible, i.e. satisfy the domain constraints. For functions where an optimizer is not able to propose T feasible queries in 5T steps, we replicate the regret corresponding to the best solution for the remaining steps. As shown in Fig. 6, we observe that RNN-Opt with domain constraints, namely RNN-Opt-DC, is able to effectively use explicit penalty feedback, and is at least as good as RNN-Opt-P in all cases. As expected, we also observe that the performance of both optimizers degrades with increasing values of τ or τ_2 − τ_1, as the search space to be explored by the optimizer increases.
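As an illustration of how the constraints used in these experiments can be written in the c_j(x) ≤ 0 form consumed by the penalty f_p, here is a small hedged sketch; the specific τ, box limits, and query are examples only.

```python
import numpy as np

def disk_constraint(tau, d):
    """||x||_2^2 <= tau * d, expressed in the c_j(x) <= 0 form."""
    return lambda x: float(np.sum(np.square(x)) - tau * d)

def limit_constraints(x_min, x_max):
    """Per-dimension box limits as a list of c_j(x) <= 0 constraints."""
    cons = []
    for j in range(len(x_min)):
        cons.append(lambda x, j=j: float(x[j] - x_max[j]))
        cons.append(lambda x, j=j: float(x_min[j] - x[j]))
    return cons

def f_p(x, constraints):
    """Cumulative violation sum_j ReLU(c_j(x))."""
    return sum(max(c(x), 0.0) for c in constraints)

cons = [disk_constraint(tau=1.0, d=2)] + limit_constraints([-2.0, -2.0], [2.0, 2.0])
print(f_p(np.array([1.5, 1.5]), cons))   # only the disk constraint is violated: 4.5 - 2.0 = 2.5
```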
Conclusion and Future Work
Learning optimization algorithms under the meta-learning paradigm is an area of active research. In this work, we have shown that using regret directly as a loss for training optimizers based on recurrent neural networks is possible, and that it yields better optimizers than those obtained using the observed-improvement based loss. We have proposed extensions of practical importance to black-box optimization algorithms that allow dealing with a diverse range of function values and handling domain constraints more effectively. One shortcoming of this approach is that a different optimizer needs to be trained for each number of input parameters. In the future, we plan to extend this work to train optimizers that can ingest inputs with a varying and high number of parameters, e.g. by first proposing a change in a latent space and then estimating changes in the actual input space as in [22,27]. Further, training optimizers for multi-objective optimization can be a useful extension.
A Generating Diverse Non-Convex Synthetic Functions
We generate synthetic non-convex continuous functions f g defined over Θ ⊆ R d via a Gaussian Mixture Model density function (GMM-DF, similar to [29]):
$f_g(x_t) = \sum_{i=1}^{N} \frac{c_i}{(2\pi)^{k/2} |\Sigma_i|^{1/2}} \exp\!\big(-\tfrac{1}{2}(x_t - \mu_i)^T \Sigma_i^{-1} (x_t - \mu_i)\big)$  (12)
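A hedged sketch of sampling and evaluating one GMM-DF objective as in Eq. (12) with diagonal covariances is given below; the absolute value of a normal draw stands in for the truncated normal, the exponent of (2π) uses the input dimension d, and the parameter ranges follow the d = 2 setting described in the text.

```python
import numpy as np

def make_gmm_df(d=2, N=3, seed=0):
    """Sample one synthetic GMM-DF objective (Eq. (12)) with diagonal covariances."""
    rng = np.random.default_rng(seed)
    c = rng.normal(0.0, 0.2, size=N)
    mu = rng.uniform(-2.0, 2.0, size=(N, d))
    sigma2 = np.abs(rng.normal(0.9, 0.9 / 5, size=(N, d)))   # simple stand-in for TruncatedN

    def f_g(x):
        diff = np.asarray(x, dtype=float) - mu                # shape (N, d)
        norm = (2 * np.pi) ** (d / 2) * np.sqrt(np.prod(sigma2, axis=1))
        expo = np.exp(-0.5 * np.sum(diff ** 2 / sigma2, axis=1))
        return float(np.sum(c / norm * expo))

    y_opt_hat = max(f_g(m) for m in mu)    # estimated optimum at one of the component means
    return f_g, y_opt_hat

f_g, y_opt_hat = make_gmm_df()
print(f_g(np.zeros(2)), y_opt_hat)
```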
In this work, we used GMM-DF instead of Gaussian Processes used in [5] for ease of implementation and faster response time to queries: Functions obtained in this manner are often non-convex and have multiple local minima/maxima. Sample plots for functions obtained over 2-D input space are shown in Fig. 7. We use c i ∼ N (0, 0.2), µ i ∼ U (−2.0, 2.0) and Σ i ∼ T runcatedN (0.9, 0.9/5) for d = 2, µ i ∼ U (−3.0, 3.0) and Σ i ∼ T runcatedN (3.0, 3.0/5) for d = 6 in our experiments (all covariance matrices are diagonal). For any function f g , we use an estimated valueŷ opt = max i f g (µ i ) (i = 1, 2, . . . , N ) instead of y opt . This assumes that the global maximum of the function is at the mean of one of the N Gaussian components. We validate this assumption by obtaining better estimates of the ground truth for y opt via grid search over randomly sampled 0.2M query points over the domain of f g . For 10k randomly sampled GMM-DF functions, we obtained an average error of 0.03 with standard deviation of 0.02 in estimating y opt , suggesting that the assumption is reasonable, and in practice, approximate values of y opt suffice to estimate the regret values for supervision. However, in general, y opt can also be obtained using gradient descent on f g . | 4,792 |
1901.05506 | 2911003065 | MAPF is the problem of finding paths for multiple agents such that every agent reaches its goal and the agents do not collide. Most prior work on MAPF was done on grids, assumed all actions cost the same, assumed agents do not have a volume, and considered discrete time steps. In this work we propose a MAPF algorithm that does not rely on any of these assumptions, is complete, and provides provably optimal solutions. This algorithm is based on a novel combination of SIPP, a continuous-time single-agent planning algorithm, and CBS, a state-of-the-art multi-agent pathfinding algorithm. We analyze this algorithm, discuss its pros and cons, and evaluate it experimentally on several standard benchmarks. | dRRT* is a MAPF algorithm designed for continuous spaces @cite_25. It is a sample-based technique that is asymptotically complete and optimal. CCBS, in contrast, is optimal and complete, and is designed to run over a discrete graph. ORCA @cite_19 @cite_27 and ALAN @cite_18 are also MAPF algorithms designed for continuous space. They are fast and distributed, but do not provide optimality or completeness guarantees. | {
"abstract": [
"In this paper we address the problem of motion planning for multiple robots. We introduce a prioritized method, based on a powerful method for motion planning in dynamic environments, recently developed by the authors. Our approach is generically applicable: there is no limitation on the number of degrees of freedom of each of the robots, and robots of various types - for instance free-flying robots and articulated robots - can be used simultaneously. Results show that high-quality paths can be produced in less than a second of computation time, even in confined environments involving many robots. We examine three issues in particular in this paper: the assignment of priorities to the robots, the performance of prioritized planning versus coordinated planning, and the influence of the extent by which the robot motions are constrained on the performance of the method. Results are reported in terms of both running time and the quality of the paths produced.",
"We present the hybrid reciprocal velocity obstacle for collision-free and oscillation-free navigation of multiple mobile robots or virtual agents. Each robot senses its surroundings and acts independently without central coordination or communication with other robots. Our approach uses both the current position and the velocity of other robots to compute their future trajectories in order to avoid collisions. Moreover, our approach is reciprocal and avoids oscillations by explicitly taking into account that the other robots sense their surroundings as well and change their trajectories accordingly. We apply hybrid reciprocal velocity obstacles to iRobot Create mobile robots and demonstrate direct, collision-free, and oscillation-free navigation.",
"",
"Finding asymptotically-optimal paths in multi-robot motion planning problems could be achieved, in principle, using sampling-based planners in the composite configuration space of all of the robots in the space. The dimensionality of this space increases with the number of robots, rendering this approach impractical. This work focuses on a scalable sampling-based planner for coupled multi-robot problems that provides asymptotic optimality. It extends the dRRT approach, which proposed building roadmaps for each robot and searching an implicit roadmap in the composite configuration space. This work presents a new method, dRRT* , and develops theory for scalable convergence to optimal paths in multi-robot problems. Simulated experiments indicate dRRT* converges to high-quality paths while scaling to higher numbers of robots where the naive approach fails. Furthermore, dRRT* is applicable to high-dimensional problems, such as planning for robot manipulators"
],
"cite_N": [
"@cite_19",
"@cite_27",
"@cite_18",
"@cite_25"
],
"mid": [
"2007193228",
"2097639646",
"",
"2729017031"
]
} | Multi-Agent Pathfinding (MAPF) with Continuous Time | MAPF is the problem of finding paths for multiple agents such that every agent reaches its goal and the agents do not collide. MAPF has topical applications in warehouse management (Wurman, D'Andrea, and Mountz 2008), airport towing (Morris et al. 2016), autonomous vehicles, robotics (Veloso et al. 2015), and digital entertainment . While finding a solution to MAPF can be done in polynomial time (Kornhauser, Miller, and Spirakis 1984), solving MAPF optimally is NP Hard under several common assumptions (Surynek 2010;Yu and LaValle 2013).
Nevertheless, AI researchers in the past years have made substantial progress in finding optimal solutions for a growing number of agents and scenarios (Sharon et al. 2013; Wagner and Choset 2015; Standley 2010; Barták et al. 2017; Yu and LaValle 2012). While it seems that research on MAPF has matured enough to provide real industry value, most prior work has made several simplifying assumptions that preclude its widespread application. Indeed, most prior work on optimal MAPF assumed that (1) time is discretized into time steps, (2) the duration of move actions and wait actions is one time step, and (3) in every time step each agent occupies exactly a single location. In fact, most prior work performed empirical evaluation only on 4-connected grids.
We propose the first MAPF algorithm that does not rely on any of these assumptions and is sound, complete, and provides provably optimal solutions. This algorithm is based on a novel combination of SIPP (Phillips and Likhachev 2011), a continuous-time single-agent pathfinding algorithm, and CBS, a state-of-the-art multi-agent pathfinding algorithm. We call the resulting algorithm CCBS.
We are not the first to study MAPF variants that are more general than basic MAPF. Indeed, several recent works adapted existing MAPF algorithms such as ICTS (Sharon et al. 2013) and CBS (Sharon et al. 2015) to richer MAPF settings (Walker, Sturtevant, and Felner 2018; Li et al. 2019). Table 1 provides an overview of such prior works and their relation to CCBS. See a more detailed discussion of Table 1 in the related work section.
We analyze CCBS, discuss its pros and cons, and evaluate it experimentally on several standard benchmarks. The results show that CCBS is able to solve MAPF problems optimally in practice. As expected, CCBS is slower than CBS, since the latter ignores agents' geometry, discretizes time, and considers a smaller set of actions. For the same reasons, CCBS finds significantly better solutions in practice. Since CCBS considers agents' geometric shapes and continuous time, the cost of collision detection in CCBS is significantly higher than in CBS. To mitigate this, we propose a history-based heuristic that attempts to avoid some collision detection checks by guessing which pairs of agents are likely to have a conflict. We discuss the relation between this heuristic and the concept of cardinal conflicts, and propose a simple hybrid heuristic that combines these methods and works well.
Problem Definition
The problem we address in this work is the MAPF variant called MAPF R , introduced by Walker et al. (2018). A MAPF R problem is defined by a weighted directed graph G = (V, E) and a set of agents indexed by 1, . . . , k. Each agent i has a geometric shape, an initial location s i , and a goal location g i . While the algorithm we propose in this work can handle agents of any geometric shape, we will assume that all agents are open disks of some radii, to avoid the need to reason about agents' orientation.
The set of vertices V represents a set of locations that the agents can occupy. We assume that any two agents, when standing still, can occupy any two vertices in the graph without colliding, i.e., their bodies will not overlap. For graphs based on grids and disk-shaped agents, this means the radius of every agent is less than or equal to half of the cell size. When an agent is at a location v ∈ V, it can either perform a move action or a wait action. A move action moves the agent along an edge (v, v′) ∈ E, and a wait action means the agent stays in v. Every action has a duration. The duration of a move action is the weight of the edge the agent is traversing. The duration of a wait action can be any positive real value. Thus, every agent has an infinite number of possible wait actions in every location.
A plan for an agent i is a sequence of actions π i such that if i executes this sequence of actions then it will reach its goal. A set of plans, one for each agent, is called a joint plan. A solution to a MAPF R problem is a joint plan such that if all agents start to execute their respective plans at the same time, then all agents will reach their goal locations without colliding with each other. In this work we focus on finding cost-optimal solutions. To define cost-optimality of a MAPF R solution, we first define the cost of a plan π i to be the sum of the durations of its constituent actions. Several forms of solution cost-optimality have been discussed in MAPF research. Most notable are makespan and SOC, where the makespan is the maximum over the costs of the constituent plans and SOC is their sum. The problem we address in this work is to find a solution to a given MAPF R problem that is optimal w.r.t. its SOC, that is, no other solution has a lower SOC.
Conflict-Based Search with Continuous Times
In this section, we introduce CCBS. Since CCBS is based on the CBS algorithm, we first provide relevant background on CBS.
Conflict Based Search (CBS)
CBS (Sharon et al. 2015) is a complete and optimal MAPF solver, designed for standard MAPF, i.e., where time is discretized and all actions have the same duration. It solves a given MAPF problem by finding plans for each agent separately, detecting conflicts between these plans, and resolving them by replanning for the individual agents subject to specific constraints.
The typical CBS implementation considers two types of conflicts: a vertex conflict and an edge conflict. A vertex conflict between plans π i and π j is defined by a tuple ⟨i, j, v, t⟩ and means that according to these plans agents i and j plan to occupy v at the same time t. An edge conflict is defined similarly by a tuple ⟨i, j, e, t⟩, and means that according to π i and π j both agents plan to traverse the edge e ∈ E at the same time, from opposite directions. A CBS vertex constraint is defined by a tuple ⟨i, v, t⟩ and means that agent i is prohibited from occupying vertex v at time t. A CBS edge constraint is defined similarly by a tuple ⟨i, e, t⟩, where e ∈ E. To guarantee completeness and optimality, CBS runs two search algorithms: a low-level search algorithm that finds paths for individual agents, and a high-level search algorithm that chooses which constraints to add.
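For concreteness, the illustrative Python sketch below (our own naming, not taken from the paper) encodes these two conflict types and the corresponding CBS constraints as plain records:

```python
from dataclasses import dataclass
from typing import Hashable, Tuple

@dataclass(frozen=True)
class VertexConflict:
    i: int                         # first agent
    j: int                         # second agent
    v: Hashable                    # contested vertex
    t: int                         # discrete time step

@dataclass(frozen=True)
class EdgeConflict:
    i: int
    j: int
    e: Tuple[Hashable, Hashable]   # edge traversed from opposite directions
    t: int

@dataclass(frozen=True)
class VertexConstraint:
    agent: int
    v: Hashable
    t: int                         # the agent may not occupy v at time t

@dataclass(frozen=True)
class EdgeConstraint:
    agent: int
    e: Tuple[Hashable, Hashable]
    t: int                         # the agent may not traverse e at time t
```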
CBS: Low-Level Search
The low-level search in CBS can be any pathfinding algorithm that can find an optimal plan for an agent that is consistent with a given set of CBS constraints. To adapt single-agent pathfinding algorithms such as A* to consider CBS constraints, the search space must also consider the time dimension, since a CBS constraint ⟨i, v, t⟩ blocks location v only at a specific time t. For MAPF problems, where time is discretized, this means that a state in this single-agent search space is a pair (v, t), representing that the agent is in location v at time t. Expanding such a state generates states of the form (v′, t+1), where v′ is either equal to v, representing a wait action, or equal to one of the locations adjacent to v. States generated by actions that violate the given set of CBS constraints are pruned. Running A* on this search space returns the lowest-cost path to the agent's goal that is consistent with the given set of CBS constraints, as required. This adaptation of textbook A* is very simple, and indeed most papers on CBS do not report it and just say that the low-level search of CBS is A*.
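A minimal sketch of such a constraint-aware, time-expanded A* is given below. It assumes unit-duration actions, an admissible heuristic, and a set-based constraint representation; the function and parameter names (including the `horizon` cut-off and the convention that an edge constraint is keyed by the arrival time) are our own illustrative choices rather than the paper's implementation.

```python
import heapq
import itertools

def low_level_search(graph, start, goal, constraints, heuristic, horizon=200):
    """Time-expanded A* over (vertex, time) states that respects CBS constraints.

    graph[v] lists the neighbours of v; constraints is a set containing
    (v, t) vertex constraints and ((u, v), t) edge constraints for this agent."""
    counter = itertools.count()                    # tie-breaker for the heap
    open_list = [(heuristic(start), 0, next(counter), start, [start])]
    closed = set()
    while open_list:
        f, t, _, v, path = heapq.heappop(open_list)
        if v == goal:
            return path                            # sequence of vertices, one per time step
        if (v, t) in closed or t > horizon:
            continue
        closed.add((v, t))
        for nxt in list(graph[v]) + [v]:           # move actions plus a unit-duration wait
            if (nxt, t + 1) in constraints:        # vertex constraint at the arrival time
                continue
            if ((v, nxt), t + 1) in constraints:   # edge constraint
                continue
            g = t + 1
            heapq.heappush(open_list,
                           (g + heuristic(nxt), g, next(counter), nxt, path + [nxt]))
    return None
```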
CBS: High-Level Search
The high-level search algorithm in CBS works on a constraint tree (CT). The CT is a binary tree, in which each node N contains:
1. N.constraints: A set of CBS constraints imposed on the agents.
2. N.Π: A joint plan consistent with these CBS constraints.
3. N.cost: The SOC of N.Π.
Generating a node N in the CT means finding N.Π for N.constraints and setting N.cost to be the SOC of N.Π. If the joint plan does not contain any conflict, then N is a goal. Expanding a non-goal node N in the CT means choosing a CBS conflict ⟨i, j, x, t⟩ that exists in N.Π (where x is either a vertex or an edge), and generating two nodes N i and N j . Both nodes have the same set of constraints as N, plus a new constraint that is added to resolve the conflict: N i adds the constraint ⟨i, x, t⟩ and N j adds the constraint ⟨j, x, t⟩. CBS searches the CT in a best-first manner, expanding in every iteration the CT node N with the lowest N.cost.
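The sketch below outlines this high-level best-first search over the CT in Python. The callbacks `plan_for`, `find_conflict`, and `constraints_for` are hypothetical helpers standing in for the low-level search, conflict detection, and constraint generation; the SOC is computed assuming plans are vertex sequences with unit-cost actions.

```python
import heapq
import itertools

def cbs(agents, plan_for, find_conflict, constraints_for):
    """High-level CBS best-first search over the constraint tree (CT)."""
    tie = itertools.count()
    root_cons = {a: set() for a in agents}
    root_plan = {a: plan_for(a, root_cons[a]) for a in agents}
    root_cost = sum(len(p) - 1 for p in root_plan.values())   # SOC for unit-cost actions
    open_list = [(root_cost, next(tie), root_cons, root_plan)]
    while open_list:
        cost, _, cons, plan = heapq.heappop(open_list)
        conflict = find_conflict(plan)
        if conflict is None:
            return plan                                        # goal CT node: conflict-free
        # Resolve the conflict by branching: one child per involved agent.
        for agent, constraint in constraints_for(conflict):
            child_cons = {a: set(c) for a, c in cons.items()}
            child_cons[agent].add(constraint)
            new_plan = plan_for(agent, child_cons[agent])
            if new_plan is None:
                continue                                       # no plan under these constraints
            child_plan = dict(plan)
            child_plan[agent] = new_plan
            child_cost = sum(len(p) - 1 for p in child_plan.values())
            heapq.heappush(open_list, (child_cost, next(tie), child_cons, child_plan))
    return None
```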
From CBS to CCBS
CCBS follows the CBS framework: it has a low-level search algorithm that finds plans for individual agents, and a high-level search algorithm that imposes constraints on the low-level search. The main differences between CCBS and CBS are:
• For conflict detection, CCBS uses a geometry-aware collision detection mechanism.
• To resolve conflicts, CCBS imposes constraints over action-time pairs instead of location-time pairs.
• For the low-level search, a pathfinding algorithm is used that considers continuous time and agents' shape.
Next, we explain these differences in details.
Conflict Detection in CCBS Since actions in the standard CBS implementation have unit duration, identifying conflicts is relatively straightforward: iterate over every time step t and check if there is a vertex (or an edge) that more than one agent is planning to occupy at time t. By contrast, in CCBS actions can have any duration and thus iterating over time steps is meaningless. Also, CCBS considers the shapes of agents. This means that agents may conflict even if they do not occupy the same vertex/edge at the same time. For example, consider the graph depicted in Figure 1. Agents i and j occupy locations A and C. If at the same time i moves along the edge AD and j moves along the edge CB, then a collision will occur. Such a "criss-cross" conflict is not considered in standard CBS.
CCBS addresses all the above by defining CCBS conflicts as conflicts between actions.
Definition 1 (CCBS Conflict). A CCBS conflict w.r.t. a pair of plans π i and π j is defined by a tuple ⟨a i , t i , a j , t j ⟩, representing that if agent i executes a i at time t i and agent j executes a j at time t j then they will collide.
When the timing of a i and a j is clear from context, we omit t i and t j and define a conflict as a pair ⟨a i , a j ⟩. There are various ways to detect collisions between agents with volume in a continuous space, e.g., standard methods that analyze the geometric properties of the agents' movement and shape (Guy and Karamouzas 2015). CCBS is agnostic to the particular collision detection mechanism that is used.
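As one concrete and standard geometric test, the sketch below checks whether two constant-velocity disk agents collide during a given time window by minimizing their relative distance in closed form. It is an illustrative stand-in; the paper's actual collision-detection mechanism may differ.

```python
import numpy as np

def disks_collide(p_i, v_i, p_j, v_j, t_start, t_end, r_i, r_j):
    """True if two disk agents moving with constant velocities collide at some
    time in [t_start, t_end].

    p_i, p_j: 2D positions (np.ndarray) at time t_start; v_i, v_j: constant
    velocities over the interval; r_i, r_j: disk radii."""
    dp = p_i - p_j                      # relative position at t_start
    dv = v_i - v_j                      # relative velocity
    radius = r_i + r_j
    a = float(np.dot(dv, dv))
    if a == 0.0:                        # no relative motion: distance is constant
        return float(np.linalg.norm(dp)) < radius
    # |dp + dv*(t - t_start)|^2 is a parabola in t; its minimum is at:
    t_min = t_start - float(np.dot(dp, dv)) / a
    t_min = min(max(t_min, t_start), t_end)   # clamp to the motion interval
    closest = dp + dv * (t_min - t_start)
    return float(np.linalg.norm(closest)) < radius
```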
Resolving Conflicts in CCBS The high-level search in CCBS runs a best-first search like regular CBS, selecting in every iteration a leaf node N in the CT that has the smallest N.cost. If N is not a goal node, it has at least one CCBS conflict. The high-level search expands N by choosing one of the CCBS conflicts ⟨a i , a j ⟩ and generating two new CT nodes, N i and N j : N i adds a constraint to agent i and applies the low-level search to find a new plan for i, and N j adds a constraint to agent j and applies the low-level search to find a new plan for j.
A constraint in CCBS is defined by a tuple ⟨i, a i , [t 1 , t 2 )⟩, and represents that agent i cannot start to perform action a i in the time range [t 1 , t 2 ). Note that a i can be either a move action or a wait action. Next, we describe which constraints to add to N i and to N j . Let t i,1 be the point in time when a i starts according to π i . CCBS then computes the first point in time after t i,1 at which i can perform a i without conflicting with a j . We denote this point by t i,2 . Computing t i,2 can be done by analyzing the kinematics and geometry of the agents. For simplicity, we computed t i,2 by applying the conflict detection mechanism multiple times, for different candidate values of t i,2 , starting from t i,1 and incrementing by some small ∆ > 0. Let t j,1 and t j,2 denote the corresponding time points for agent j and action a j . CCBS adds the constraint
⟨i, a i , [t i,1 , t i,2 )⟩ to N i and the constraint ⟨j, a j , [t j,1 , t j,2 )⟩ to N j .
For example, assume that we are running CCBS and the high-level search chooses to expand a CT node by resolving the conflict depicted in Figure 1. Agent i plans to start moving along AD at time 5 and agent j plans to start moving along CB at time 5.5. Thus, t i,1 = 5 and t j,1 = 5.5. Assume that the duration required to traverse AD and to traverse CB is the same. Therefore, t j,2 will be smaller than t i,2 . This is because agent j needs to wait a smaller amount of time to avoid agent i starting at t i,1 than the amount of time agent i needs to wait in order to avoid agent j starting at time t j,1 , since i starts earlier and their respective move actions have the same duration. For our example, assume that t i,2 = 8 and t j,2 = 7.5. Using these values of t i,1 , t i,2 , t j,1 , and t j,2 , CCBS will generate two new CT nodes: one with the additional constraint ⟨i, AD, [5, 8)⟩ and the other with the additional constraint ⟨j, CB, [5.5, 7.5)⟩.
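A minimal sketch of the ∆-stepping computation of t i,2 described above follows; `check_conflict` is a hypothetical callback wrapping the collision-detection mechanism (with a_j fixed to its planned start time), and the default step size and cut-off are our own choices.

```python
def unsafe_interval_end(t_i_start, check_conflict, delta=0.01, max_wait=100.0):
    """Return t_{i,2}: the first start time after t_{i,1} at which agent i can
    perform a_i without colliding with a_j performed at its planned time.

    check_conflict(t) should return True if starting a_i at time t still
    conflicts with a_j."""
    t = t_i_start
    while check_conflict(t):
        t += delta                         # step forward by a small increment
        if t - t_i_start > max_wait:       # safety cut-off for the sketch
            raise RuntimeError("no conflict-free start time found")
    return t
```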
The CCBS Low-Level Search The main challenge when implementing the low-level search of CCBS is that time is continuous, and thus adding the time dimension results in a continuous search space. That is, the number of wait actions in every location is infinite. Moreover, CCBS constraints may specify that a wait action cannot be performed in some given time interval, which may require reducing the duration of a wait action.
To resolve this, we use the SIPP algorithm (Phillips and Likhachev 2011) as the low-level search algorithm. SIPP is an algorithm for single-agent path finding with dynamic moving obstacles. The core idea of SIPP is to identify collision-free time intervals for every location v ∈ V . Using time intervals instead of specific time points allows using a discrete search algorithm.
Specifically, SIPP applies an A * -based algorithm, searching in the space of (location, time interval) pairs. The output of SIPP is a plan, i.e., a sequence of actions, that move the agent from its initial location to its goal. SIPP is complete and is guaranteed to find a time-minimal solution.
We chose SIPP as the low-level search algorithm because it is already designed to consider time intervals, and in CCBS the constraints are also defined over time intervals.
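The sketch below shows the core bookkeeping SIPP relies on: turning the unsafe (collision) intervals of a location into its complementary safe intervals, so the search can enumerate (location, safe interval) pairs. Interval endpoint conventions are simplified for illustration.

```python
def safe_intervals(unsafe, horizon=float("inf")):
    """Turn a list of unsafe (collision) intervals for one location into the
    complementary list of safe intervals."""
    intervals = []
    t = 0.0
    for start, end in sorted(unsafe):
        if start > t:
            intervals.append((t, start))   # gap before the next unsafe interval
        t = max(t, end)
    if t < horizon:
        intervals.append((t, horizon))     # trailing safe interval
    return intervals

# Example:
# safe_intervals([(2.0, 3.5), (5.0, 6.0)])
# -> [(0.0, 2.0), (3.5, 5.0), (6.0, inf)]
```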
Theoretical Properties
Next, we prove that CCBS is sound, complete, and optimal. To do so, we define the notion of a sound pair of constraints in a similar way to Atzmon et al. (2018).
Definition 2 (Sound Pair of Constraints). For a given MAPF R problem, a pair of constraints is sound iff in every optimal solution at least one of these constraints holds.
Lemma 1. For any CCBS conflict ⟨a i , a j ⟩, the pair of CCBS constraints ⟨i, a i , [t i,1 , t i,2 )⟩ and ⟨j, a j , [t j,1 , t j,2 )⟩ is a sound pair of constraints.
Proof. By contradiction, assume that there exist ∆ i ∈ (0, t i,2 − t i,1 ] and ∆ j ∈ (0, t j,2 − t j,1 ] such that performing a i at t i,1 + ∆ i and a j at t j,1 + ∆ j does not create a conflict. That is, ⟨a i , t i,1 + ∆ i , a j , t j,1 + ∆ j ⟩ is not a conflict (Def. 1).
By definition of t j,2 :
∀t ∈ [t j,1 , t j,2 ) : ⟨a i , t i,1 , a j , t⟩ is a conflict, and
∀t ∈ [t j,1 + ∆ j , t j,2 ) : ⟨a i , t i,1 + ∆ j , a j , t⟩ is a conflict.
By the definition of ∆ i and ∆ j , ⟨a i , t i,1 + ∆ i , a j , t j,1 + ∆ j ⟩ is not a conflict. Therefore, ∆ i < ∆ j . Similarly, by definition of t i,2 :
∀t ∈ [t i,1 , t i,2 ) : ⟨a i , t, a j , t j,1 ⟩ is a conflict, and
∀t ∈ [t i,1 + ∆ i , t i,2 ) : ⟨a i , t, a j , t j,1 + ∆ i ⟩ is a conflict.
Therefore, by the definition of ∆ i and ∆ j we have that ∆ j < ∆ i , which leads to a contradiction.
Theorem 1. CCBS is sound, complete, and guaranteed to return an optimal solution.
The proof of Theorem 1 relies on Lemma 1 and directly follows Atzmon et al.'s proof for k-robust CBS (Atzmon et al. 2018).
Conflict Detection and Selection Heuristics
As noted above, conflict detection in CCBS is more complex than in regular CBS. Indeed, in our experiments we observed that conflict detection took a significant portion of the running time. To speed up conflict detection, we only checked for conflicts between actions that overlap in time and may overlap geometrically. In addition, we implemented two heuristics for speeding up the detection process. We emphasize that these heuristics do not compromise our guarantees of soundness, completeness, and optimality.
The first heuristic we used, which we refer to as the history heuristic, keeps track of the number of times conflicts have been found between agents i and j, for every pair of agents (i, j). It checks first for conflicts between pairs of agents with a high number of past conflicts, and when a conflict is found the search for conflicts is immediately halted. The found conflict is then stored in the CT node, and if that CT node is later expanded, it generates CT nodes aimed at resolving this conflict. This implements the intuition that pairs of agents that have conflicted in the past are more likely to also conflict in the future.
We have found this history heuristic to be very effective in practice for reducing the time spent on conflict detection. Using this heuristic, however, has some limitations. Prior work has established that intelligently choosing which conflict to resolve when expanding a CT node can have a huge impact on the size of the CT and on the overall runtime. Specifically, prior work introduced the notion of cardinal conflicts, which are conflicts such that any way of resolving them increases the SOC. Semi-cardinal conflicts are conflicts for which replanning for one of the involved agents increases the solution cost, but replanning for the other involved agent does not.
For CBS, choosing to resolve cardinal conflicts first, and then semi-cardinal ones, yielded significant speedups. However, to detect cardinal and semi-cardinal conflicts, one needs to identify all conflicts, while the advantage of the history heuristic is that we can halt the search for conflicts before identifying all of them.
To this end, we proposed a second, hybrid heuristic approach. Initially, we detect all conflicts and choose cardinal conflicts. However, if a node N does not contain any cardinal or semi-cardinal conflict, then for all nodes in the CT subtree beneath it we switch to using the history heuristic. This hybrid approach worked well in our experiments, but fully exploring this tradeoff between fast conflict detection and smart conflict selection is a topic for future work.
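The sketch below illustrates one way to organize this selection logic; the class and callback names are hypothetical, and the cardinal/semi-cardinal classification is assumed to be provided externally.

```python
from collections import defaultdict

class HybridConflictSelector:
    """Conflict selection combining the history heuristic with a preference
    for cardinal conflicts; the caller decides (per CT subtree) whether to
    fall back to the history-only mode via use_history."""

    def __init__(self, detect_between, classify):
        self.detect_between = detect_between   # (plan_i, plan_j) -> conflict or None
        self.classify = classify               # conflict -> "cardinal" | "semi" | "non"
        self.history = defaultdict(int)        # (i, j) -> number of past conflicts

    def select(self, plans, use_history):
        pairs = [(i, j) for i in plans for j in plans if i < j]
        pairs.sort(key=lambda p: -self.history[p])     # most conflicting pairs first
        best = None
        for i, j in pairs:
            conflict = self.detect_between(plans[i], plans[j])
            if conflict is None:
                continue
            self.history[(i, j)] += 1
            if use_history:
                return conflict                        # halt at the first conflict found
            kind = self.classify(conflict)
            if kind == "cardinal":
                return conflict                        # resolve cardinal conflicts first
            if best is None or kind == "semi":
                best = conflict                        # otherwise prefer semi-cardinals
        return best
```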
Experimental Results
Following most prior work on MAPF, we have conducted experiments on grids. Agents can move from the center of one grid cell to the center of another grid cell. The size of every cell is 1 × 1, and the shape of every agent is an open disk whose radius equals √2/4. This specific value was chosen to allow comparison with CBS, since it is the maximal radius that allows agents to safely perform moves in which agents follow each other. To allow non-unit edge costs, we allowed the agents to move in a single move action to every cell located in their 2^k neighborhood, where k is a parameter (Rivera, Hernández, and Baier 2017). Moving from one cell to the other is only allowed if the agent can move safely to the target cell without colliding with other agents or obstacles, where the geometry of the agents and obstacles is considered. The cost of a move corresponds to the Euclidean distance between the grid centers. Increasing k means a search space with a higher branching factor, but also makes lower-cost paths possible (Figure 2 illustrates the 2^k neighborhoods for k = 2, 3, 4, and 5). As a heuristic, we pre-computed the all-pairs shortest-path distances between every pair of locations in the map, which is a perfect heuristic for the single-agent search.
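One way to generate these 2^k neighborhoods, following our reading of the mediant-style construction of Rivera, Hernández, and Baier (2017), is sketched below; treat it as illustrative rather than the exact construction used in the experiments.

```python
def two_power_k_neighborhood(k):
    """Move offsets of the 2^k neighborhood on a grid (k >= 2).
    k=2 gives the 4 cardinal moves, k=3 adds the diagonals (8 moves),
    k=4 gives 16 moves, k=5 gives 32, and so on."""
    moves = [(1, 0), (0, 1), (-1, 0), (0, -1)]          # 2^2 neighborhood
    for _ in range(k - 2):
        refined = []
        for a, b in zip(moves, moves[1:] + moves[:1]):  # adjacent move pairs, cyclically
            refined.append(a)
            refined.append((a[0] + b[0], a[1] + b[1]))  # insert their vector sum
        moves = refined
    return moves

# len(two_power_k_neighborhood(4)) == 16; the cost of a move is the Euclidean
# length of its offset, e.g. sqrt(5) for the move (2, 1).
```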
Open Grids
For the first set of experiments we used a 10 × 10 open grid, placing agents' start and goal locations randomly. We ran experiments with 4, 5, . . . , 20 agents, where for each number of agents we created 250 different problems. We then solved each of these problems with CCBS having k = 2, 3, 4, and 5. The file CCBS.mp4 in the supplementary material shows an animation of the solution found by CCBS for a problem with 13 agents and different values of k. Table 2 shows the results of this set of experiments. Every row shows results for a different number of agents, as indicated in the left-most column. The four right-most columns show the success rate, i.e., the ratio of problems solved by CCBS under a timeout of 60 seconds, out of a total of 250 problems. Data points marked by "-" indicate settings where the success rate was lower than 0.4. The next four columns show the average SOC, averaged over the problems solved by all CCBS instances that had a success rate larger than 0.4. The results clearly show that increasing k indeed yields solutions with lower SOC, as expected. The absolute difference in SOC when moving from k = 2 to k = 3 is the largest, and it grows as we add more agents. For example, for problems with 16 agents, moving from k = 2 to k = 3 yields an improvement of 17.2 SOC, and for problems with 17 agents the gain of moving to k = 3 is 17.9 SOC. Increasing k further exhibits a diminishing-returns effect, where the largest average SOC gain when moving from k = 4 to k = 5 is 0.5.
Increasing k, however, also has the effect of increasing the branching factor, which in turn means that path-finding becomes harder. Indeed, the success rate of k = 5 is significantly lower compared to k = 4. An exception to this is the transition from k = 2 to k = 3, where we observed a slight advantage in success rate for k = 3 for problems with a small number of agents. For example, with 6 agents the success rate of k = 2 is 0.99 while it is 1.00 for k = 3. An explanation for this is that increasing k also means that plans for each agent can be shorter, which helps to speed up the search. Thus, increasing k introduces a tradeoff w.r.t. the problem-solving difficulty: the resulting search space for the low-level search is shallower but wider. For denser problems, i.e., with more agents, k = 2 is again better in terms of success rate, as more involved paths must be found by the low-level search. Figure 3 shows the tradeoff of increasing k by showing the average gain, in terms of SOC, of using CCBS with different values of k over CCBS with k = 2. The x-axis is the number of agents, and the y-axis is the gain, in percentage. We only provide data points for configurations with a success rate of at least 40%. As can be seen, increasing k increases the gain over CBS, where for k = 4 and k = 5 the gain was over 20%. Increasing k also decreases the success rate, and thus the data series for larger k values "disappears" after a smaller number of agents.
We also compared the performance of CCBS with k = 2 and a standard CBS implementation. Naturally, standard CBS was faster than CCBS, as its underlying solver is A * on a 4-connected grid, detecting collisions is trivial, and it has only unit-time wait actions. However, even for k = 2, CCBS is able to find better solutions, i.e., solutions of lower SOC. This is because in some cases, an agent can start to move after waiting less than a unit time step. To see this phenomenon, consider the example in Figure 4. There are three agents, 1,2, and 3 in an open 2×4 grid. The left-most grid shows the initial locations of the agents, and the right-most grid shows their goal locations. The small arrows in the agents indicate the direction each agent is about to move to. Consider first the plan created by CBS, which is shown on the top row of Figure 4. In CBS, every action takes unit duration. Since agent 3 cannot move upwards at time t = 0 without colliding with agent 1, it will have to wait for time t = 1 before starting to move. By contrast, in CCBS a wait action can have an arbitrary duration, and thus agent 3 can start to move upwards safely earlier than in CBS, at time t = 0.707. See the file C-CBSvsCBS.gif in the supplementary material for an animation of this example. These cases, where CCBS with k = 2 finds a better solution compared to standard CBS, are not rare. However, the advantage in terms of SOC, in all our experiments, was very small.
Dragon Age Maps
Next, we experimented with a much larger grid, taken from the Dragon Age: Origins (DAO) game and made available in the movingai repository (Sturtevant 2012). Specifically, we used the den520d map, shown to the right of Table 3, which was also used by prior work on CBS. Start and goal states were chosen randomly, and we created 250 problems for every number of agents. Table 3 shows the results obtained for CCBS with k = 2, 3, and 4, in the same format as Table 2. The same overall trends are observed: increasing k reduces the SOC and decreases the success rate. Figure 5 shows the average runtime required to solve the instances solved by all values of k. Interestingly, here we observe that k = 3 was the fastest on average. Similar to the better success rate in the open-grid experiments, we explain this by the fact that increasing k also yields shorter paths to the goals, which helps decrease runtime.
Conflict Detection and Resolution Heuristics
In all the experiments so far we have used CCBS with the hybrid conflict detection and selection heuristic described earlier in the paper. Here, we evaluate the benefit of using this heuristic. We compared CCBS with this heuristic against the following: (1) Vanilla: CCBS that chooses arbitrarily which actions to check first for conflicts, (2) Cardinals: CCBS that identifies all conflicts and chooses cardinal conflicts, and (3) History: CCBS that uses the history heuristic to choose where to search for conflicts first, and resolves the first conflict it finds. Table 4 shows results for experiments run on the den520d DAO map. We explored the following points in the space of possible problem parameters: 20 agents with k=2, 3, and 4, and 25 agents with k=2 and k=3. For every configuration we created and ran CCBS on 1,000 instances. The table shows the success rate (the row labelled "Success"), the average runtime in seconds over instances solved by all algorithms ("Time"), and the average number of high-level nodes expanded by CCBS ("HL exp."). The results show that the proposed hybrid heuristic yields the best success rate. When comparing History to Cardinals, we see that History is faster but the number of high-level nodes expanded by Cardinals is smaller. This follows our motivation for the hybrid heuristic: the choice of which conflicts to resolve taken by Cardinals is important in minimizing the size of the CT, while detecting all conflicts can be too time-consuming. The proposed hybrid heuristic enjoys the complementary benefits of History and Cardinals, as can be seen by its fast runtime and small number of expanded high-level nodes. Thus, we used it in all our experiments.
Conclusion and Future Work
CCBS is an algorithm for solving MAPF problems that allows continuous time, actions with non-uniform duration, and agents and obstacles with a geometric shape. It follows the CBS framework, uses SIPP as a low-level solver, and uses unique types of conflicts and constraints. We prove that CCBS is sound, complete, and optimal. To the best of our knowledge, CCBS is the first MAPF algorithm to provide optimality guarantees for such a broad range of MAPF settings.
Our experimental results showed that CCBS can solve actual MAPF problems and indeed finds solutions significantly better than CBS. However, the current results were based on grid maps that are extended by considering 2^k neighborhoods. We chose grids as a domain to allow a natural comparison with existing solvers, but CCBS can work on arbitrary graphs. Indeed, this is a topic for future work.
This work also highlighted that conflict detection becomes a bottleneck when solving MAPF R problems. We suggested a hybrid heuristic for reducing this cost. However, we expect that future work can apply meta-reasoning techniques to decide when and how much to invest in conflict detection throughout the search. | 5,275
1901.05375 | 2963550527 | Recent research on face detection, which is focused primarily on improving the accuracy of detecting smaller faces, attempts to develop new anchor design strategies to facilitate increased overlap between anchor boxes and ground truth faces of smaller sizes. In this work, we approach the problem of small face detection with the motivation of enriching the feature maps using a density map estimation module. This module, inspired by recent crowd counting density estimation techniques, performs the task of estimating the per-pixel density of faces present in the image. The output of this module is employed to accentuate the feature maps from the backbone network using a feature enrichment module before being used for detecting smaller faces. The proposed approach can be used to complement recent anchor-design based methods to further improve their results. Experiments conducted on different datasets such as WIDER, FDDB and Pascal-Faces demonstrate the effectiveness of the proposed approach. | Zhang et al. @cite_0 proposed a single image-based method that involved a multi-column network to extract features at different scales. By utilizing filters with receptive fields of different sizes, the features learned by each column CNN are adaptive to variations in people's head size due to perspective effect or image resolution. Oñoro-Rubio and López-Sastre in @cite_62 addressed the scale issue by proposing a scale-aware counting model called Hydra CNN to estimate the object density maps. Sam et al. @cite_16 trained a Switching-CNN network to automatically choose the most optimal regressor among several independent regressors for a particular input patch. More recently, Sindagi and Patel @cite_3 proposed Contextual Pyramid CNN (CP-CNN), where they demonstrated significant improvements by fusing local and global context through classification networks. | {
"abstract": [
"This paper aims to develop a method than can accurately estimate the crowd count from an individual image with arbitrary crowd density and arbitrary perspective. To this end, we have proposed a simple but effective Multi-column Convolutional Neural Network (MCNN) architecture to map the image to its crowd density map. The proposed MCNN allows the input image to be of arbitrary size or resolution. By utilizing filters with receptive fields of different sizes, the features learned by each column CNN are adaptive to variations in people head size due to perspective effect or image resolution. Furthermore, the true density map is computed accurately based on geometry-adaptive kernels which do not need knowing the perspective map of the input image. Since exiting crowd counting datasets do not adequately cover all the challenging situations considered in our work, we have collected and labelled a large new dataset that includes 1198 images with about 330,000 heads annotated. On this challenging new dataset, as well as all existing datasets, we conduct extensive experiments to verify the effectiveness of the proposed model and method. In particular, with the proposed simple MCNN model, our method outperforms all existing methods. In addition, experiments show that our model, once trained on one dataset, can be readily transferred to a new dataset.",
"In this paper we address the problem of counting objects instances in images. Our models are able to precisely estimate the number of vehicles in a traffic congestion, or to count the humans in a very crowded scene. Our first contribution is the proposal of a novel convolutional neural network solution, named Counting CNN (CCNN). Essentially, the CCNN is formulated as a regression model where the network learns how to map the appearance of the image patches to their corresponding object density maps. Our second contribution consists in a scale-aware counting model, the Hydra CNN, able to estimate object densities in different very crowded scenarios where no geometric information of the scene can be provided. Hydra CNN learns a multiscale non-linear regression model which uses a pyramid of image patches extracted at multiple scales to perform the final density prediction. We report an extensive experimental evaluation, using up to three different object counting benchmarks, where we show how our solutions achieve a state-of-the-art performance.",
"We present a novel method called Contextual Pyramid CNN (CP-CNN) for generating high-quality crowd density and count estimation by explicitly incorporating global and local contextual information of crowd images. The proposed CP-CNN consists of four modules: Global Context Estimator (GCE), Local Context Estimator (LCE), Density Map Estimator (DME) and a Fusion-CNN (F-CNN). GCE is a VGG-16 based CNN that encodes global context and it is trained to classify input images into different density classes, whereas LCE is another CNN that encodes local context information and it is trained to perform patch-wise classification of input images into different density classes. DME is a multi-column architecture-based CNN that aims to generate high-dimensional feature maps from the input image which are fused with the contextual information estimated by GCE and LCE using F-CNN. To generate high resolution and high-quality density maps, F-CNN uses a set of convolutional and fractionally-strided convolutional layers and it is trained along with the DME in an end-to-end fashion using a combination of adversarial loss and pixellevel Euclidean loss. Extensive experiments on highly challenging datasets show that the proposed method achieves significant improvements over the state-of-the-art methods.",
"We propose a novel crowd counting model that maps a given crowd scene to its density. Crowd analysis is compounded by myriad of factors like inter-occlusion between people due to extreme crowding, high similarity of appearance between people and background elements, and large variability of camera view-points. Current state-of-the art approaches tackle these factors by using multi-scale CNN architectures, recurrent networks and late fusion of features from multi-column CNN with different receptive fields. We propose switching convolutional neural network that leverages variation of crowd density within an image to improve the accuracy and localization of the predicted crowd count. Patches from a grid within a crowd scene are relayed to independent CNN regressors based on crowd count prediction quality of the CNN established during training. The independent CNN regressors are designed to have different receptive fields and a switch classifier is trained to relay the crowd scene patch to the best CNN regressor. We perform extensive experiments on all major crowd counting datasets and evidence better performance compared to current state-of-the-art methods. We provide interpretable representations of the multichotomy of space of crowd scene patches inferred from the switch. It is observed that the switch relays an image patch to a particular CNN column based on density of crowd."
],
"cite_N": [
"@cite_0",
"@cite_62",
"@cite_3",
"@cite_16"
],
"mid": [
"2463631526",
"2519281173",
"2963035940",
"2741077351"
]
} | DAFE-FD: Density Aware Feature Enrichment for Face Detection | Face detection is an important step in many computer vision related tasks such as face alignment [39,56], face tracking [55], expression analysis [52], recognition and verification [51], and synthesis [8,54]. Several challenges are encountered in face detection such as variations in pose, illumination, scale etc. Earlier CNN-based methods [53,50,70,30], although mostly successful in handling variations in pose and illumination, performed poorly when detecting smaller faces. Recent methods [32,65,29,67], based on CNN-based object detection frameworks such as Faster-RCNN or SSD, have focused particularly on smaller faces and have demonstrated promising results. In order to detect a wide range of scales, these methods propose a two-pronged approach: (i) multi-scale detection and (ii) new anchor design strategies. In case of multi-scale detection, detectors are placed on different conv layers of the backbone network (VGG-16 [46] or ResNet [13]) to reduce the discrepancies between object sizes and receptive fields. Although this approach provided significant improvements over the earlier single-scale methods, it is not capable of detecting extremely small-sized faces (of the order 15 × 15). This stems from the fact that these methods are anchor-based approaches where detections are performed by classifying a pre-defined set of anchors generated by tiling a set of boxes with different scales and aspect ratios on the image. While such approaches are relatively more robust in complicated scenes and provide computational advantages since inference time is independent of the number of objects/faces (for single-shot methods), their performance degrades significantly when used on smaller-sized objects. The degradation is primarily due to a low overlap of ground truth boxes with the pre-defined anchor boxes and a mismatch between receptive fields of the feature maps and the smaller objects [65]. In order to overcome these drawbacks, recent methods have attempted to develop new anchor design strategies that involve intelligent selection of anchor scales and improved anchor matching strategies [32,65].
While these recent methods address the drawbacks of anchor design or perform multi-scale detection, they do not emphasize enhancing the feature maps for improving detection rates of small faces. To overcome this, we infuse information from crowd density maps to enrich the feature maps for addressing the problem of small face detection. Crowd density maps, originally used for counting in crowded scenarios, contain location information which can be exploited for improving detector performance. These density maps are especially helpful in the case of small faces, where the traditional anchor-based classification loss may not be sufficient. Hence, we use a density-map-based loss to provide additional supervision. Previous works [42,41] have demonstrated considerable improvements by incorporating crowd density maps for applications like tracking. In this work, we propose to improve the feature maps by employing a density estimator module that performs the task of estimating the per-pixel density of faces in the image. Fig. 1(a) illustrates sample density estimation results using [66] along with the corresponding ground-truth. Fig. 1(b) illustrates sample detection results obtained by incorporating the proposed density enrichment module into the detection network.
In the recent past, several CNN-based counting approaches [48,66,43,47,49] have demonstrated a dramatic improvement in error rate across various datasets [19,63,66]. It is important to note that these datasets consist of images with a wide range of scales of people, including extremely tiny faces/heads. Considering the success of density-estimation-based counting approaches, especially in images containing extremely small faces, we propose to leverage such techniques for the purpose of detecting smaller faces. Specifically, we incorporate a density estimator module whose output is used to enrich the features of the backbone network, particularly for detecting small faces. This is in part inspired by earlier works that use segmentation or attention for improving detection performance [14,3,67]. For fusing information from this module, we employ a feature enrichment module (FEM). In Section 3.3, we discuss why simple feature fusion techniques such as concatenation or addition do not suffice and explain the need for a specific fusion technique (FEM). Through various experiments on different datasets [60,20,57], we demonstrate the effectiveness of the proposed approach. Furthermore, we present the results of an ablation study to verify the improvements obtained using different modules. Note that the proposed method is complementary to the new anchor design strategies and hence it can be used in conjunction with improved anchor designs to further improve the performance.
Proposed method
The proposed network architecture, shown in Fig. 2, is a single-stage detector based on the VGG-16 architecture. The base network is built on the Region Proposal Network (RPN) [13], which is a fully convolutional single-stage network and takes an image of any size as input. However, unlike RPN, which uses a single detector on the conv5 layer, we use multiple detectors (D 1 , D 2 , D 3 and D 4 ) on multiple conv layers [5]. These detectors, owing to the different receptive fields of the different conv layers, are better suited to handle various scales of objects, thereby improving the robustness of the network to different scales of faces present in the input image. However, in contrast to [5], which places the detectors on the conv layers of the base network, we instead place the detectors on feature maps fused from multiple conv layers. In order to combine the feature maps, we employ a simple Feature Fusion Module (FFM) that effectively leverages semantic information present in different conv layers. Further, each detector consists of a Context Aggregation Module (CAM) followed by two sibling sub-networks: a classification layer and a bounding-box regression layer. The classification layer produces a score that represents the probability of finding a face defined by a specific anchor box at a particular location in the image (similar to [13]). The set of anchor boxes is formed similarly to [13]. The bounding-box regression layer computes the offsets with respect to the anchor boxes. These offsets are used to calculate the bounding-box coordinates of the predicted face.
Most importantly, the proposed network consists of a Density Estimator Module (DEM) that is the primary contribution of this work. This module predicts the density map associated with a particular input image and is incorporated into the detection network with the motivation of enriching the feature maps from the conv layers before they are used for small face detection. Recent methods [32,65] employ new anchor design strategies to improve the detection of smaller faces, and their feature maps are learned only through the classification and bounding-box regression losses; no specific emphasis is laid on enhancing the feature maps themselves. Considering this deficit, we propose to enrich the feature maps through an additional loss function from the density estimator module. This is also partly motivated by several earlier works [12,24] that have employed multi-task learning to improve detection or classification performance. DEM is inspired by the success of recent CNN-based methods [63,34,48,66,43] for crowd counting, which involve counting people in crowded images through density map regression. Furthermore, we propose a new fusion mechanism called the Feature Enrichment Module to seamlessly combine the feature maps from the conv layers of the base network with the output of DEM.
Feature Fusion Module (FFM)
Recent multi-scale object detection networks [26,5] use multiple detectors on different conv layers. Although this technique provides considerable robustness to different scales, the detectors do not have access to feature maps from higher conv layers, which carry important semantic information. In order to leverage this high-level information, we employ a feature fusion module which takes input from the i-th and (i+1)-th conv layers and combines them as shown in Fig. 3(a). First, the dimensionality of the feature maps of both conv layers is reduced to 128 channels using 1×1 convolutions. Since the dim-reduced feature maps from the (i+1)-th conv layer have a lower resolution, they are upsampled using bilinear interpolation and then added to the dim-reduced feature maps from the i-th conv layer. This is similar to [32]; however, we extend this idea by adding additional fusion modules to improve the performance. The proposed network has two fusion modules, FFM1 and FFM2. FFM1 fuses feature maps from conv3 and conv4, whereas FFM2 fuses feature maps from conv4 and conv5.
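A minimal PyTorch sketch of such a fusion module is given below; the absence of normalization or non-linearities after the 1×1 convolutions is our assumption, not a detail specified above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusionModule(nn.Module):
    """Fuses feature maps from conv_i and conv_{i+1}: both are reduced to 128
    channels with 1x1 convolutions, the coarser map is bilinearly upsampled to
    the finer resolution, and the two maps are added."""

    def __init__(self, in_ch_low, in_ch_high, out_ch=128):
        super().__init__()
        self.reduce_low = nn.Conv2d(in_ch_low, out_ch, kernel_size=1)
        self.reduce_high = nn.Conv2d(in_ch_high, out_ch, kernel_size=1)

    def forward(self, feat_low, feat_high):
        low = self.reduce_low(feat_low)            # finer map (conv_i)
        high = self.reduce_high(feat_high)         # coarser map (conv_{i+1})
        high = F.interpolate(high, size=low.shape[-2:],
                             mode="bilinear", align_corners=False)
        return low + high

# e.g. FFM1 fusing VGG-16 conv3 (256 channels) with conv4 (512 channels):
# ffm1 = FeatureFusionModule(in_ch_low=256, in_ch_high=512)
```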
Multi-scale detectors
Multi-scale detection approaches [26,5], which use multiple detectors on top of different conv layers, are known to introduce considerable robustness to scale variations and often perform as well as single-scale detectors based on a multi-image pyramid, thus providing the additional advantage of computational efficiency. By adding detectors on earlier conv layers, these methods are able to match the receptive field sizes of the layers with objects of smaller sizes, thereby increasing the overlap between the anchor boxes and ground-truth boxes. Based on this idea, we add detectors D 1 , D 2 , D 3 and D 4 . However, different from these earlier approaches that directly feed the output of conv layers to the detectors, we employ the strategy shown in Table 1. Each detector is constructed as shown in Fig. 3(b).
Additionally, each detector is equipped with a Context Aggregation Module (shown in Fig. 3(b)) that integrates context information surrounding candidate bounding boxes. Context information has been used in several earlier works [69,32] to improve the performance of detection systems. Zhu et al. [69] concatenated features pooled from larger windows and demonstrated significant improvement. Najibi et al. [32] used additional 5×5 and 7×7 convolutional filters to increase the receptive field size, in a way imitating the strategy of pooling features from larger windows. While they achieved appreciable improvements, the use of large filter sizes results in more computations. Hence, we replace these large filters with atrous convolutions of size 3×3 [18,35] and different dilation factors. With the help of atrous convolutions, we are able to enlarge the receptive field size with a minimal increase in computations.
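The sketch below shows one possible CAM built from parallel dilated 3×3 convolutions; the specific dilation rates, branch widths and ReLU placement are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ContextAggregationModule(nn.Module):
    """Enlarges the receptive field with parallel 3x3 atrous (dilated)
    convolutions and concatenates the branch outputs."""

    def __init__(self, in_ch, branch_ch=128, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding = dilation keeps the spatial size unchanged for 3x3 kernels
                nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=d, dilation=d),
                nn.ReLU(inplace=True))
            for d in dilations])

    def forward(self, x):
        return torch.cat([branch(x) for branch in self.branches], dim=1)
```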
Density Estimator Module
Recent crowd counting methods [63,34,48,66,43], which employ CNN-based density estimation techniques, have demonstrated promising results in complex scenarios. These techniques perform the task of counting people by estimating density maps which represent the per-pixel count of people in the image (as shown in Fig. 1). For training, the ground-truth density map (D) for an input image is calculated as
$$D(x) = \sum_{x_g \in S} \mathcal{N}(x - x_g, \sigma),$$
where σ is the scale parameter of a 2D Gaussian kernel and S is the set of all points at which people are located. Most crowd counting datasets provide the 2D locations of people in the input images as annotations. Fig. 1 illustrates a few sample input and ground-truth density map pairs along with the corresponding density maps estimated using a recent technique [48]. It can be observed that, in spite of heavy occlusions and the presence of extremely small scales, these recent techniques are able to estimate high-quality density maps and counts with reasonably low error.
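A common way to build such ground-truth density maps, consistent with the formula above but using an illustrative fixed σ rather than the geometry-adaptive kernels some works use, is sketched here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ground_truth_density(face_points, height, width, sigma=4.0):
    """Place a unit impulse at each annotated face location and convolve with
    a 2D Gaussian of scale sigma.  The impulse map sums to the face count, and
    gaussian_filter preserves that sum up to boundary effects."""
    density = np.zeros((height, width), dtype=np.float32)
    for x, y in face_points:                              # (column, row) face centres
        col = min(max(int(round(x)), 0), width - 1)
        row = min(max(int(round(y)), 0), height - 1)
        density[row, col] += 1.0
    return gaussian_filter(density, sigma)
```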
While the success of these methods is attributed mostly to the use of advanced CNN architectures, reformulating the problem of counting as density map regression also played an important role. As compared to the earlier detection-based counting approaches [15,21], these recent methods owe much of their success to this reformulation: by regressing density maps, they sidestep the problems of occlusion and tiny scales and let the network handle such variations. In this work, we explore the use of density estimation to incorporate robustness towards occlusion and tiny scales into the face detection network. In part, this contribution is also inspired by recent methods [12,24,38] that learn multiple related tasks using multi-task learning. These methods have demonstrated considerable gains in performance when they train their network to perform additional auxiliary tasks.
To incorporate the task of density estimation in the detection network, we include a density estimator module. Recent crowd counting and density estimation approaches [66,43,48,2] are based on multi-scale and multi-column networks, where the input image is processed by different CNN columns with varied receptive field sizes. The use of different columns results in increased robustness towards scale variations. Motivated by these approaches, we construct the density estimator module as shown in Fig. 2(b). Instead of processing the input images through different networks as in [66], we use feature maps from the base network, thereby minimizing the computations. Our strategy is to mimic the multi-column network structures [66] by considering feature maps from the conv1, conv2 and conv3 layers of VGG-16, which correspond to different receptive field sizes. DEM first downsamples the feature maps from the conv1 and conv2 layers using max-pooling to match the size of the feature maps from the conv3 layer. After resampling, the dimensionality of the feature maps is reduced to minimize computations and memory requirements, followed by additional convolutions and concatenation. The concatenated feature maps are processed by a 1×1 conv layer to produce the final density map. The following loss function is used to obtain the network weights:
$$L_{den} = \frac{1}{N}\sum_{i=1}^{N} \left\| F_d(X_i, \Theta) - D_i \right\|^2,$$
where N is the number of training samples, X_i is the i-th input image, F_d(X_i, Θ) is the estimated density map, D_i is the i-th ground-truth density map, and Θ denotes the network weights.
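A rough PyTorch sketch of a DEM along these lines follows; the channel widths, pooling factors (matching VGG-16 conv1/conv2/conv3 resolutions) and kernel sizes are our assumptions. Training would minimize the L_den loss above, e.g. an MSE between the predicted and ground-truth density maps.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DensityEstimatorModule(nn.Module):
    """DEM sketch: conv1/conv2 feature maps are max-pooled down to the conv3
    resolution, every stream is reduced with a 1x1 convolution, refined with a
    3x3 convolution, concatenated, and a final 1x1 convolution regresses the
    single-channel density map."""

    def __init__(self, ch1=64, ch2=128, ch3=256, mid=64):
        super().__init__()
        self.reduce = nn.ModuleList([nn.Conv2d(c, mid, kernel_size=1)
                                     for c in (ch1, ch2, ch3)])
        self.refine = nn.ModuleList([nn.Conv2d(mid, mid, kernel_size=3, padding=1)
                                     for _ in range(3)])
        self.predict = nn.Conv2d(3 * mid, 1, kernel_size=1)

    def forward(self, f1, f2, f3):
        f1 = F.max_pool2d(f1, kernel_size=4, stride=4)   # conv1 -> conv3 resolution
        f2 = F.max_pool2d(f2, kernel_size=2, stride=2)   # conv2 -> conv3 resolution
        feats = [refine(F.relu(reduce(f)))
                 for f, reduce, refine in zip((f1, f2, f3), self.reduce, self.refine)]
        return self.predict(torch.cat(feats, dim=1))     # per-pixel face density
```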
Feature Enrichment Module. We use the output of DEM to enhance the feature maps from the conv3 layer in order to improve detection rates of smaller faces. Since the detector on conv3 has the smallest scale and is responsible for detecting the smaller faces, we choose to fuse information from DEM into the conv3 feature maps. Various fusion techniques, such as feature concatenation, multiplication or addition, are available to incorporate information from DEM into the face detector network. However, these methods are not necessarily effective. Since the feature maps produced by DEM are used for density estimation, they have a largely different range as compared to feature maps corresponding to conv layers from the detection network and hence cannot be directly fused with feature maps from the conv3 layer through simple techniques such as addition or concatenation. As pointed out in [28], this problem is commonly encountered in networks that attempt to combine feature maps from different conv layers [27]. Liu et al. [28] introduce an L2-normalization-based scaling technique to overcome this problem. Although this method is successfully used in different works [69], it did not perform promisingly in our case for the following reasons. First, the range of the feature maps from DEM is vastly different from that of the conv3 feature maps, and this gap is significantly wider as compared to other problems [69] where [28] has worked successfully. Second, the intermediate feature maps from the DEM have a significantly lower number of channels and hence their dimensionality needs to be increased to match that of the feature maps from the conv3 layer in order to perform an addition- or multiplication-based fusion.
Based on these considerations, we propose a simple Feature Enrichment Module (FEM) that avoids the challenges discussed above. Instead of using intermediate feature maps from DEM, we directly employ its density map output. The feature maps (f 3 ) from conv3 of the base-network are modified using the estimated density map as follows:
$$\hat{f}_3 = f_3 + \alpha \tilde{f}_d,$$
where α is a learnable scaling factor and $\tilde{f}_d$ is the density output f d replicated 256 times to match the dimensionality of the conv3 feature maps. Fig. 4 illustrates feature maps from the conv3 layer before and after enrichment. It can be easily observed from this figure that the features at the locations of small faces get enhanced while those at other locations get suppressed.
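A minimal sketch of this enrichment step in PyTorch (the initialization of α is our assumption):

```python
import torch
import torch.nn as nn

class FeatureEnrichmentModule(nn.Module):
    """FEM sketch: the single-channel density map predicted by DEM is scaled
    by a learnable factor alpha, broadcast across the 256 channels of the
    conv3 feature maps, and added to them."""

    def __init__(self, init_alpha=1.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(init_alpha))

    def forward(self, conv3_feats, density):
        # conv3_feats: (N, 256, H, W); density: (N, 1, H, W).
        # expand_as replicates the density map over the channel dimension.
        return conv3_feats + self.alpha * density.expand_as(conv3_feats)
```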
Loss function
The weights of the proposed network are learned by minimizing the following multi-task loss function: $L = L_{cls} + \lambda_b L_{box} + \lambda_d L_{den}$, where $L_{cls}$ is the face classification loss, $L_{box}$ is the bounding-box regression loss and $L_{den}$ is the density estimation loss. $L_{cls}$ and $L_{box}$ are defined as follows:
$$L_{cls} = \sum_{m=1}^{4} \frac{1}{N^{c}_{m}} \sum_{i \in A_m} l_{ce}(p_i, p^{*}_{i}) \qquad (1)$$
$$L_{box} = \sum_{m=1}^{4} \frac{1}{N^{r}_{m}} \sum_{i \in A_m} p^{*}_{i}\, l_{reg}(t_i, t^{*}_{i}), \qquad (2)$$
where $l_{ce}$ is the standard cross-entropy loss, m indexes over the four detectors D 1 -D 4 , $A_m$ is the set of anchors in detector D m , $p^{*}_{i}$ and $p_i$ are the ground-truth and predicted labels, respectively, for the i-th anchor box, $N^{c}_{m}$ is the number of anchors selected in detector D m and is used to normalize the classification loss, and $l_{reg}$ is the bounding-box regression loss for each positively labelled anchor box. Similar to [13], the regression space is parametrized with a log-space shift and a scale-invariant translation, and the smooth L1 loss is used as $l_{reg}$. In this parametrized space, $t^{*}_{i}$ is the regression target and $t_i$ the predicted coordinates. $N^{r}_{m}$ is the number of positively labelled anchor boxes selected for computing the loss and is used to normalize the bounding-box loss. $\lambda_b$ and $\lambda_d$ are scaling factors that balance the loss function.
Training
Training details. The network is trained on a single GPU using stochastic gradient descent (momentum = 0.9 and weight decay = 0.0005) for 120k iterations. The learning rate is initially set to 0.001 and is dropped by a factor of 10 at 100k and 115k iterations. Anchor boxes are generated using the scales shown in Table 1 with a base anchor size of 16 pixels. Anchor boxes are labelled positively if their overlap (intersection over union) with ground truth boxes is greater than 0.5 and are labelled negatively if the overlap is below 0.3. A total of 256 anchor boxes per detector are selected for each image to compute the loss. The selection is performed using the online hard example mining (OHEM) technique [45], where negatively labelled anchors with the highest scores and positively labelled anchors with the lowest scores are selected. Such a selection procedure results in faster and more stable training as compared to random selection [45]. The ground-truth density maps for training DEM are obtained using the method described in Section 3.3. The face annotations provided by the datasets are used to compute the points where faces are located and hence no extra annotations are required. For inference, the 1000 best-scoring anchors from each detector are selected as detections, followed by non-maximum suppression (NMS) with a threshold of 0.3.
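The sketch below illustrates one way to implement such an OHEM-style anchor selection for a single detector; the equal positive/negative split is a simplification on our part, not the paper's stated ratio.

```python
import torch

def ohem_select(cls_scores, labels, num_samples=256):
    """Select hard anchors for the loss: negatively labelled anchors with the
    highest face scores and positively labelled anchors with the lowest face
    scores, up to num_samples anchors in total.

    cls_scores: 1D tensor of predicted face probabilities per anchor.
    labels: 1D tensor with 1 for positive anchors, 0 for negative anchors
            (ignored anchors can carry any other value)."""
    pos = (labels == 1).nonzero(as_tuple=True)[0]
    neg = (labels == 0).nonzero(as_tuple=True)[0]
    num_pos = min(pos.numel(), num_samples // 2)
    num_neg = min(neg.numel(), num_samples - num_pos)
    # hardest positives = lowest predicted face probability
    pos_sel = pos[torch.argsort(cls_scores[pos])[:num_pos]]
    # hardest negatives = highest predicted face probability
    neg_sel = neg[torch.argsort(cls_scores[neg], descending=True)[:num_neg]]
    return torch.cat([pos_sel, neg_sel])
```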
Dataset details. The network is trained using the WIDER dataset [60], which consists of 32,203 images with 393,703 annotated faces. The dataset presents a variety of challenges such as wide variations in scale and difficult occlusions. It is divided into training, validation and test sets using a 40:10:50 ratio. For evaluation purposes, the dataset has been further divided into three categories: Easy, Medium and Hard. The detector performance is measured using mean average precision (mAP) with an intersection over union (IoU) threshold of 0.5.
Experiments and Results
In this section, we discuss details of the experiments and results on different datasets. Additionally, we present the results of an ablative study on WIDER validation set to explain the effect of different modules present in the proposed network.
WIDER
As discussed earlier, the WIDER dataset consists of validation and test splits. We use the validation set to perform an ablative study to explain the effects of different modules in the proposed network. For this study, we use a single scale of the input image (no multi-image pyramid), similar to [32]. In addition, a comparison of results on the validation and test sets with recent methods is presented.
Ablation study. To understand the effects of different modules in the proposed network, we experimented with 3 broad configurations as shown in Table 2. The results of these configurations are analyzed below: (i) Baseline: This configuration uses VGG-16 as the base network along with the feature fusion modules and 4 detectors D 1 -D 4 . Results of this network are considered the baseline performance, and through the addition of different modules we demonstrate the improvements with respect to this baseline.
(ii) Baseline with context: Earlier works [32,69] have already demonstrated the importance of incorporating context in the detection network. Similar observations are made in our experiments. By using a context processing module similar to [32], an improvement of 1.2% in the mean average precision (mAP) score for hard faces is obtained. Further, the use of atrous-based context aggregation increased the mAP score by another 0.6%, resulting in an overall improvement of 1.8%.
(iii) Baseline with context and DEM. In this case, we analyze the effect of incorporating DEM into the detection network. First, we experimented with different ways of integrating the feature maps from DEM into the detection network through feature addition and concatenation, where the feature maps from the penultimate layer of DEM are expanded through 1×1 convolutions to match the dimensionality of the conv3 feature maps, followed by addition/multiplication of these two feature maps. It can be observed from Table 2 that these two configurations do not result in any improvement of the mAP scores. This is primarily due to the vast difference in the scales of the feature maps (as discussed in Section 3.3). Next, we added the feature enrichment module (FEM) to enhance the conv3 feature maps. This resulted in an overall improvement of 0.8% in mAP score for hard faces as compared to the baseline with context (CAM), thus demonstrating the significance of the proposed feature enrichment module and density estimator. Furthermore, in order to ensure that the improvements obtained are due to the density estimation loss, we conducted another experiment with λ d = 0, and no changes with respect to the baseline with CAM configuration were observed.
Comparison with other methods. We compare the results of the proposed method with recent state-of-the-art methods such as SSH [32], Face-MagNet [44], S3FD [65], HR [17], CMS-RCNN [69], MT-CNN [64], LDCF [33], Faceness [59] and Multiscale Cascaded CNN [60]. For the validation set, the results of the proposed method are obtained using single-scale inference as well as image-pyramid-based inference (as shown in Table 4). It can be observed that DAFE-FD using single-scale inference achieves superior results as compared to HR, which is based on an image pyramid. Furthermore, DAFE-FD (single-scale) achieves better results as compared to SSH-single-scale (a recent top-performing method) on all the subsets of the WIDER dataset. Specifically, an improvement of 1.8% in the case of the "hard" set is obtained. Further improvements are attained by using pyramid-based inference, and the proposed method is able to outperform SSH-pyramid and achieve comparable results with respect to S3FD. It is important to note that S3FD is based on a single-shot detection approach and involves extra detectors and feature maps from the conv6 and conv7 layers, in addition to the use of data augmentation based on multi-scale cropping and photometric distortion [16]. In spite of these additional factors in the case of S3FD, DAFE-FD achieves comparable performance with respect to S3FD on the validation set, while obtaining better results on the test set as described below.
The average precision scores of the proposed method on the test set of the WIDER dataset are shown in Table 4 and the corresponding precision-recall curves are shown in Fig. 5. It can be clearly observed that DAFE-FD outperforms existing state-of-the-art methods on the "hard" subset while achieving comparable or better performance on the other subsets. Detection results are shown in Fig. 6. More results are provided in the supplementary material. Fig. 7: (a) FDDB discrete score [20], (b) FDDB continuous score [20], (c) Pascal faces [57]. Note that HR/HR-ER [17] uses FDDB for training and evaluates using 10-fold cross-validation. S3FD [65] and Conv3D [25] generate ellipses to reduce localization error. Moreover, in the case of S3FD, the authors manually annotate many unlabelled faces in the FDDB dataset, which results in improved performance. In contrast to these methods, we use FDDB and Pascal faces for testing only and employ rectangular bounding boxes to evaluate the results.
FDDB
This dataset consists of 2,845 images with a total of 5,171 annotated faces. Fig. 7 (a) and (b) show comparisons of ROC curves for different methods (S3FD [65], HR/HR-ER [17], Faster RCNN, UnitBox [62], MT-CNN [64], D2MFD [37], Conv3D [25], Hyperface [38] and Headhunter [30]) with the proposed method in discrete and continuous mode respectively. We use rectangular bounding boxes for evaluation as opposed to HR, S3FD and Conv3D, which use elliptical regression to reduce localization error. Also, in contrast to HR, which is trained on FDDB, we do not use images in FDDB for training purposes. In spite of lacking these additional features, DAFE-FD achieves consistently better performance in the case of discrete scores and is comparable to other methods in the case of continuous scores. Although S3FD obtains slightly better performance, it is important to consider that the authors manually annotated several unlabelled faces in the FDDB dataset, which results in increased performance.
Pascal Faces
This dataset [57] consists of 851 images with a total of 1,355 labelled faces and it is a subset of the PASCAL person layout dataset [9]. Fig. 7(c) shows the comparison of precision-recall curves on this dataset for different methods with the proposed method. The proposed DAFE-FD method outperforms existing methods such as S3FD [65], Faceness [59], DPM [70], Headhunter [30] and many others.
Computational Time
Since the proposed method is a single stage detector, it performs nearly as fast as recent state-of-the-art detectors. The inference speed is measured using Titan X (Pascal) with cuDNN. In case of FDDB/PASCAL dataset, the average computational time required by DAFE-FD is 50 msec/image for a resolution of 400×800, thus achieving a real time processing frame rate. In case of WIDER dataset, the inference time is 190 msec/image and is measured for single-scale with a resolution of 1200×1600. In order to understand the computational overhead introduced by the density estimator module, we measured the inference speed of DAFE-FD without DEM (Baseline (ii) in Section 4.1 ) to be 178 msec/image. Thus, it can be noted that the use of density estimator modules results in minimal computational overhead while achieving increased performance.
Conclusions
We proposed a feature enrichment technique to improve the performance of small face detection. In contrast to existing methods that employ new strategies to improve anchor design, we instead focus on enriching the feature maps directly which is inspired by crowd counting/density estimation techniques that estimate the per pixel density of people/faces present in an image. Experiments conducted on different datasets, such as WIDER, Pascal-faces and FDDB, demonstrate considerable gains in performance due to the use of proposed density enrichment module. Additionally, the proposed method is complementary to recent improvements in anchor designs and hence, it can be used to obtain further improvements. | 4,434 |
1907.06565 | 2957156153 | We provide recovery guarantees for compressible signals that have been corrupted with noise and extend the framework introduced in [1] to defend neural networks against @math -norm and @math -norm attacks. Concretely, for a signal that is approximately sparse in some transform domain and has been perturbed with noise, we provide guarantees for accurately recovering the signal in the transform domain. We can then use the recovered signal to reconstruct the signal in its original domain while largely removing the noise. Our results are general as they can be directly applied to most unitary transforms used in practice and hold for both @math -norm bounded noise and @math -norm bounded noise. In the case of @math -norm bounded noise, we prove recovery guarantees for Iterative Hard Thresholding (IHT) and Basis Pursuit (BP). For the case of @math -norm bounded noise, we provide recovery guarantees for BP. These guarantees theoretically bolster the defense framework introduced in [1] for defending neural networks against adversarial inputs. Finally, we experimentally demonstrate this defense framework using both IHT and BP against the One Pixel Attack [21], Carlini-Wagner @math and @math attacks [3], Jacobian Saliency Based attack [18], and the DeepFool attack [17] on CIFAR-10 [12], MNIST [13], and Fashion-MNIST [27] datasets. This expands beyond the experimental demonstrations of [1]. | The authors of @cite_0 introduced the CRD framework which inspired this work. The main theorem (Theorem 2.2) of @cite_0 is an analog of our Theorem and provides a similar bound the approximation error for recovery via IHT. First note that the statement of the Theorem 2.2 of @cite_0 is missing the required hypothesis @math . This hypothesis appears in Lemma 3.6 of @cite_0 , which is used to prove Theorem 2.2, but it appears to have been accidentally dropped from the statement of Theorem 2.2. We note that, by making the constants explicit, the proof of Lemma 3.6 of @cite_0 gives the same restricted isometry property that we do in Theorem . Therefore, the guarantees we obtain for IHT are essentially the same as in @cite_0 . The main difference is that, to derive recovery guarantees for IHT from the restricted isometry property, we utilize Theorem below (which is a modified version of Theorem 6.18 of @cite_15 ) while the authors of @cite_0 utilize Theorem 3.4 in @cite_0 (which is taken from @cite_19 ). | {
"abstract": [
"We give a new algorithm for approximating the Discrete Fourier transform of an approximately sparse signal that is robust to worst-case L0 corruptions, namely that some coordinates of the signal can be corrupt arbitrarily. Our techniques generalize to a wide range of linear transformations that are used in data analysis such as the Discrete Cosine and Sine transforms, the Hadamard transform, and their high-dimensional analogs. We use our algorithm to successfully defend against worst-case L0 adversaries in the setting of image classification. We give experimental results on the Jacobian-based Saliency Map Attack (JSMA) and the CW L0 attack on the MNIST and Fashion-MNIST datasets as well as the Adversarial Patch on the ImageNet dataset.",
"",
"At the intersection of mathematics, engineering, and computer science sits the thriving field of compressive sensing. Based on the premise that data acquisition and compression can be performed simultaneously, compressive sensing finds applications in imaging, signal processing, and many other domains. In the areas of applied mathematics, electrical engineering, and theoretical computer science, an explosion of research activity has already followed the theoretical results that highlighted the efficiency of the basic principles. The elegant ideas behind these principles are also of independent interest to pure mathematicians.A Mathematical Introduction to Compressive Sensing gives a detailed account of the core theory upon which the field is build. With only moderate prerequisites, it is an excellent textbook for graduate courses in mathematics, engineering, and computer science. It also serves as a reliable resource for practitioners and researchers in these disciplines who want to acquire a careful understanding of the subject. A Mathematical Introduction to Compressive Sensing uses a mathematical perspective to present the core of the theory underlying compressive sensing."
],
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_15"
],
"mid": [
"2890343472",
"",
"143004564"
]
} | Recovery Guarantees for Compressible Signals with Adversarial Noise | Signal measurements are often corrupted due to measurement errors and can even be corrupted due to adversarial noise injection. Supposing some structure on the measurement mechanism, is it possible for us to retrieve the original signal from a corrupted measurement? Indeed, it is generally possible to do so using the theory of Compressive Sensing [3] if certain constraints on the measurement mechanism and the signal hold. In order to make the question more concrete, let us consider the class of machine learning problems where the inputs are compressible (i.e., approximately sparse) in some domain. For instance, images and audio signals are known to be compressible in their frequency domain and machine learning algorithms have been shown to perform exceedingly well on classification tasks that take such signals as input [12,23]. However, it was found in [25] that neural networks can be easily forced into making incorrect predictions with high-confidence by adding adversarial perturbations to their inputs; see also [24,9,19,4].
Further, the adversarial perturbations that led to incorrect predictions were shown to be very small (in either $\ell_0$-norm or $\ell_2$-norm) and often imperceptible to human beings. For this class of machine learning tasks, we show that it is possible to recover original inputs from adversarial inputs and defend the neural network.
In this paper, we first provide recovery guarantees for compressible signals that have been corrupted by noise bounded in either $\ell_0$-norm or $\ell_2$-norm. Then we extend the framework introduced in [1] to defend neural networks against $\ell_0$-norm and $\ell_2$-norm attacks. In the case of $\ell_0$-norm attacks on neural networks, the adversary can perturb a bounded number of elements in the input but has no restriction on how much each element is perturbed in absolute value. In the case of $\ell_2$-norm attacks, the adversary can perturb as many elements as they choose as long as the $\ell_2$-norm of the perturbation vector is bounded. Our recovery guarantees cover both cases and provide a partial theoretical explanation for the robustness of the defense framework against adversarial inputs. Our contributions can be summarized as follows:
1. We provide recovery guarantees for IHT and BP when the noise budget is bounded in $\ell_0$-norm.
2. We provide recovery guarantees for BP when the noise budget is bounded in the $\ell_2$-norm.
3. We extend the framework introduced in [1] to defend neural networks against $\ell_0$-norm bounded and $\ell_2$-norm bounded attacks.
The paper is organized as follows. We present the defense framework introduced in [1], which we call Compressive Recovery Defense (CRD), in Section 3.1. We present our main theoretical results (i.e. the recovery guarantees) in Section 3.2 and compare these results to related work in Section 3.3. We establish the Restricted Isometry Property (RIP) in Section 4 and provide the proofs of our main results in Sections 5 and 6. We show that CRD can be used to defend against $\ell_0$-norm and $\ell_2$-norm bounded attacks in Section 7 and conclude the paper in Section 8.
Notation
Let $x$ be a vector in $\mathbb{C}^N$ and let $S \subseteq \{1,\dots,N\}$ with $\bar{S} = \{1,\dots,N\}\setminus S$. The support of $x$, denoted by $\mathrm{supp}(x)$, is the set of indices of the non-zero entries of $x$, that is, $\mathrm{supp}(x) = \{i \in \{1,\dots,N\} : x_i \ne 0\}$. The $\ell_0$-norm of $x$, denoted $\|x\|_0$, is defined to be the number of non-zero entries of $x$, i.e. $\|x\|_0 = \mathrm{card}(\mathrm{supp}(x))$. We say that $x$ is $k$-sparse if $\|x\|_0 \le k$. We denote by $x_S$ either the sub-vector in $\mathbb{C}^S$ consisting of the entries indexed by $S$ or the vector in $\mathbb{C}^N$ that is formed by starting with $x$ and setting the entries with indices outside of $S$ to zero. For example, if $x = [4, 5, -9, 1]$ and $S = \{1, 3\}$, then $x_S$ is either $[4, -9]$ or $[4, 0, -9, 0]$. In the latter case, note $x_{\bar S} = x - x_S$. It will always be clear from the context which meaning is intended. If $A \in \mathbb{C}^{m\times N}$ is a matrix, we denote by $A_S$ the column sub-matrix of $A$ consisting of the columns indexed by $S$.
We use $x_{h(k)}$ to denote a $k$-sparse vector in $\mathbb{C}^N$ consisting of the $k$ largest (in absolute value) entries of $x$, with all other entries zero. For example, if $x = [4, 5, -9, 1]$ then $x_{h(2)} = [0, 5, -9, 0]$. Note that $x_{h(k)}$ may not be uniquely defined. In contexts where a unique meaning for $x_{h(k)}$ is needed, we can choose $x_{h(k)}$ out of all possible candidates according to a predefined rule (such as the lexicographic order). We also define $x_{t(k)} = x - x_{h(k)}$. Let $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \in \mathbb{C}^{2n}$ with $x_1, x_2 \in \mathbb{C}^n$; then $x$ is called $(k,t)$-sparse if $x_1$ is $k$-sparse and $x_2$ is $t$-sparse. We define $x_{h(k,t)} = \begin{bmatrix} (x_1)_{h(k)} \\ (x_2)_{h(t)} \end{bmatrix}$, which is a $(k,t)$-sparse vector in $\mathbb{C}^{2n}$. Again, $x_{h(k,t)}$ may not be uniquely defined, but when a unique meaning for $x_{h(k,t)}$ is needed (such as in Algorithm 1), we can choose $x_{h(k,t)}$ out of all possible candidates according to a predefined rule.
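To make the thresholding notation concrete, the following is a minimal NumPy sketch (our illustration, not code from the paper) of the operators $x_{h(k)}$, $x_{t(k)}$, and $x_{h(k,t)}$ defined above; the function names are ours.

import numpy as np

def hard_threshold(x, k):
    """Return x_h(k): keep the k largest-magnitude entries of x and zero the rest."""
    out = np.zeros_like(x)
    if k <= 0:
        return out
    idx = np.argpartition(np.abs(x), -k)[-k:]  # indices of the k largest |entries|
    out[idx] = x[idx]
    return out

def tail(x, k):
    """Return x_t(k) = x - x_h(k)."""
    return x - hard_threshold(x, k)

def hard_threshold_kt(x, k, t):
    """Return x_h(k,t) for x = [x1; x2] in C^(2n): threshold the two blocks separately."""
    n = x.shape[0] // 2
    return np.concatenate([hard_threshold(x[:n], k), hard_threshold(x[n:], t)])

Note that ties among equal-magnitude entries are broken by argpartition's internal ordering, which is one concrete instance of the "predefined rule" mentioned above.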
Main Results
In this section we outline the problem and the framework introduced in [1], state our main theorems, and compare our results to related work.
Compressive Recovery Defense (CRD)
Consider an image classification problem in which a machine learning classifier takes an image re-constructed from its largest Fourier co-efficients as input and outputs a classification decision. Let $x \in \mathbb{C}^n$ be the image vector (we can assume the image is of size $\sqrt{n}\times\sqrt{n}$ for instance). Then, letting $F \in \mathbb{C}^{n\times n}$ be the unitary Discrete Fourier Transform (DFT) matrix, we get the Fourier coefficients of $x$ as $\hat{x} = Fx$.
It is well known that natural images are approximately sparse in the frequency domain and therefore we can assume that $\hat{x}$ is $k$-sparse, that is, $\|\hat{x}\|_0 \le k$. In our example of the image classification problem, this means that our machine learning classifier can accept as input the image reconstructed from $\hat{x}_{h(k)}$, and still output the correct decision. That is, the machine learning classifier can accept $F^*\hat{x}_{h(k)}$ as input and still output the correct decision. Now, suppose an adversary corrupts the original image and we observe $y = x + e$. Noting that $y$ can also be written as $y = F^*\hat{x} + e$, we are interested in recovering an approximation $\hat{x}^{\#}$ to $\hat{x}_{h(k)}$ upon observing $y$, such that when we feed $F^*\hat{x}^{\#}$ as input to the classifier, it can still output the correct classification decision.
More generally, this basic framework can be used for adversarial inputs $u = v + d$ in any input domain, as long as there exists a matrix $A$ such that $u = A\hat{v} + d$, where $\hat{v}$ is approximately sparse and $\|d\|_p \le \eta$ for some $p, \eta \ge 0$. If we can recover an approximation $v^{\#}$ to $\hat{v}$ with bounds on the recovery error, then we can use $v^{\#}$ to reconstruct an approximation $Av^{\#}$ to $v$ with controlled error.
This general framework was proposed by [1]. Moving forward, we refer to this general framework as Compressive Recovery Defense (CRD) and utilize it to defend neural networks against $\ell_0$ and $\ell_2$-norm attacks. As observed in [1], $x^{[0]}$ in Algorithm 1 can be initialized randomly to defend against a reverse-engineering attack. In the case of Algorithm 2, the minimization problem can be posed as a Second Order Cone Programming (SOCP) problem and it appears non-trivial to create a reverse engineering attack that will retain the adversarial noise through the recovery and reconstruction process.
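To illustrate the CRD pipeline described above, here is a hedged sketch in Python/NumPy that uses the DCT in place of a generic unitary transform F. The routine recover_top_k stands for any recovery procedure with guarantees (such as Algorithms 1 or 2 below); all names are ours and this is not the authors' released implementation.

import numpy as np
from scipy.fft import dct, idct

def crd_defend(y, k, recover_top_k, classifier):
    """Compressive Recovery Defense sketch for a flattened, possibly adversarial image y.
    recover_top_k(y, k) should return an estimate of the k largest transform coefficients;
    classifier maps an image vector to a prediction."""
    x_hat_sharp = recover_top_k(y, k)           # approximation to x_hat_h(k)
    x_recon = idct(x_hat_sharp, norm='ortho')   # back to the image domain (F^* x_hat^#)
    return classifier(x_recon)

# A non-robust baseline recovery, for comparison only: threshold the DCT of y itself.
def naive_recover(y, k):
    y_hat = dct(y, norm='ortho')
    out = np.zeros_like(y_hat)
    idx = np.argpartition(np.abs(y_hat), -k)[-k:]
    out[idx] = y_hat[idx]
    return out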
Results
Our main results are stated below. Theorem 1 and Theorem 2 provide bounds on the recovery error with Algorithm 1 and Algorithm 2 respectively when the noise is bounded in $\ell_0$-norm. Theorem 3 covers the case when the noise is bounded in the $\ell_2$-norm. We start with providing bounds on the approximation error using IHT when the noise is bounded in $\ell_0$-norm.
Theorem 1. Let $A = [F\ I] \in \mathbb{C}^{n\times 2n}$, where $F \in \mathbb{C}^{n\times n}$ is a unitary matrix with $|F_{ij}|^2 \le \frac{c}{n}$ and $I \in \mathbb{C}^{n\times n}$ is the identity matrix. Let $y = F\hat{x} + e$, where $\hat{x}, e \in \mathbb{C}^n$, and $e$ is $t$-sparse. Let $1 \le k \le n$ be an integer and define
$$\rho = \sqrt{\frac{27ckt}{n}}, \qquad \tau(1-\rho) = \sqrt{3}\sqrt{1 + 2\sqrt{\frac{ckt}{n}}}.$$
Then for any solution $x^{[T]} = \mathrm{IHT}(y, A, k, t, T)$ of Algorithm 1 we have the error bound
$$\|\hat{x}^{[T]} - \hat{x}_{h(k)}\|_2 \le \rho^T\sqrt{\|\hat{x}_{h(k)}\|_2^2 + \|e\|_2^2} + \tau\|\hat{x}_{t(k)}\|_2, \qquad (1)$$
where we write $x^{[T]} = \begin{bmatrix}\hat{x}^{[T]} \\ e^{[T]}\end{bmatrix}$ with $\hat{x}^{[T]}, e^{[T]} \in \mathbb{C}^n$. Moreover, if $0 < \rho < 1$, then for any $0 < \epsilon < 1$ and any
$$T \ge \frac{\log(1/\epsilon) + \log\!\left(\sqrt{\|\hat{x}_{h(k)}\|_2^2 + \|e\|_2^2}\right)}{\log(1/\rho)} + 1$$
we have
$$\|\hat{x}^{[T]} - \hat{x}_{h(k)}\|_\infty \le \sqrt{\frac{2ct}{n}}\left(\tau\|\hat{x}_{t(k)}\|_2 + \epsilon\right), \qquad (2)$$
$$\|\hat{x}^{[T]} - \hat{x}_{h(k)}\|_2 \le \sqrt{\frac{4ckt}{n}}\left(\tau\|\hat{x}_{t(k)}\|_2 + \epsilon\right). \qquad (3)$$
The result above applies to unitary transformations such as the Fourier Transform, Cosine Transform, Sine Transform, Hadamard Transform, and other wavelet transforms. Since the constant $\epsilon$ in the above bound can be made arbitrarily small, the recovery error in equations (2) and (3) depends primarily on $\|\hat{x}_{t(k)}\|_2$, which is small for sparse signals.
Next, we consider the recovery error when using BP instead of IHT. Providing bounds for BP is useful as there are cases¹ when (i) BP provides recovery guarantees against a larger $\ell_0$ noise budget than IHT and (ii) BP leads to a better reconstruction than IHT.
Theorem 2. Let $A = [F\ I] \in \mathbb{C}^{n\times 2n}$, where $F \in \mathbb{C}^{n\times n}$ is a unitary matrix with $|F_{ij}|^2 \le \frac{c}{n}$ and $I \in \mathbb{C}^{n\times n}$ is the identity matrix. Let $y = F\hat{x} + e$, where $\hat{x}, e \in \mathbb{C}^n$ and $e$ is $t$-sparse, and let $1 \le k, t \le n$ be positive integers. Define
$$\delta_{k,t} = \sqrt{\frac{ckt}{n}}, \qquad \beta = \sqrt{\frac{\max\{k,t\}\,c}{n}}, \qquad \theta = \frac{\sqrt{k+t}}{1-\delta_{k,t}}\,\beta, \qquad \tau = \frac{\sqrt{1+\delta_{k,t}}}{1-\delta_{k,t}}.$$
If $0 < \delta_{k,t} < 1$ and $0 < \theta < 1$, then for a solution $x^{\#} = \mathrm{BP}(y, A, \|\hat{x}_{t(k)}\|_2)$ of Algorithm 2, we have the error bound
$$\|\hat{x}^{\#} - \hat{x}_{h(k)}\|_2 \le \left(\frac{2\tau\sqrt{k+t}}{1-\theta}\left(1 + \frac{\beta}{1-\delta_{k,t}}\right) + 2\tau\right)\|\hat{x}_{t(k)}\|_2, \qquad (4)$$
where we write $x^{\#} = \begin{bmatrix}\hat{x}^{\#} \\ e^{\#}\end{bmatrix}$ with $\hat{x}^{\#}, e^{\#} \in \mathbb{C}^n$.
Our final result covers the case when the noise is bounded in $\ell_2$-norm. Note that the result covers all unitary matrices and removes the restriction on the magnitude of their elements. We will utilize this result in defending against $\ell_2$-norm attacks.
Theorem 3. Let $F \in \mathbb{C}^{n\times n}$ be a unitary matrix and let $y = F\hat{x} + e$, where $\hat{x} \in \mathbb{C}^n$ is $k$-sparse and $e \in \mathbb{C}^n$. If $\|e\|_2 \le \eta$, then for a solution $x^{\#} = \mathrm{BP}(y, F, \eta)$ of Algorithm 2, we have the error bounds
$$\|x^{\#} - \hat{x}\|_1 \le 4\sqrt{k}\,\eta, \qquad (5)$$
$$\|x^{\#} - \hat{x}\|_2 \le 6\eta. \qquad (6)$$
1 As shown in Section 7.1.1 and Section 7.2.2
Restricted Isometry Property
All of our recovery guarantees are based on the following theorem which establishes a restricted isometry property for certain structured matrices. First, we give some definitions.
Definition 4. Let $M \subseteq \mathbb{C}^N$. A matrix $A \in \mathbb{C}^{m\times N}$ satisfies the $M$-restricted isometry property ($M$-RIP) with constant $\delta > 0$ if
$$(1-\delta)\|x\|_2^2 \le \|Ax\|_2^2 \le (1+\delta)\|x\|_2^2$$
for all $x \in M$.
Definition 5. We define $M_k$ to be the set of all $k$-sparse vectors in $\mathbb{C}^N$ and define $S_k$ to be the collection of subsets of $\{1,\dots,N\}$ of cardinality less than or equal to $k$. Note that $S_k$ is the collection of supports of vectors in $M_k$. Similarly, we define $M_{k,t}$ to be the set of $(k,t)$-sparse vectors in $\mathbb{C}^{2n}$. In other words, $M_{k,t}$ is the following subset of $\mathbb{C}^{2n}$:
$$M_{k,t} = \left\{ x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \in \mathbb{C}^{2n} : x_1, x_2 \in \mathbb{C}^n,\ \|x_1\|_0 \le k,\ \|x_2\|_0 \le t \right\}.$$
We define $S_{k,t}$ to be the following collection of subsets of $\{1,\dots,2n\}$:
$$S_{k,t} = \left\{ S_1 \cup S_2 : S_1 \subseteq \{1,\dots,n\},\ S_2 \subseteq \{n+1,\dots,2n\},\ \mathrm{card}(S_1) \le k,\ \mathrm{card}(S_2) \le t \right\}.$$
Note that $S_{k,t}$ is the collection of supports of vectors in $M_{k,t}$.
Theorem 6. Let $A = [F\ I] \in \mathbb{C}^{n\times 2n}$, where $F \in \mathbb{C}^{n\times n}$ is a unitary matrix with $|F_{ij}|^2 \le \frac{c}{n}$ and $I \in \mathbb{C}^{n\times n}$ is the identity matrix. Then
$$\left(1 - \sqrt{\tfrac{ckt}{n}}\right)\|x\|_2^2 \le \|Ax\|_2^2 \le \left(1 + \sqrt{\tfrac{ckt}{n}}\right)\|x\|_2^2 \qquad (7)$$
for all $x \in M_{k,t}$. In other words, $A$ satisfies the $M_{k,t}$-RIP property with constant $\sqrt{\frac{ckt}{n}}$.
Proof. In this proof, if $B$ denotes a matrix in $\mathbb{C}^{n\times n}$, then $\lambda_1(B),\dots,\lambda_n(B)$ denote the eigenvalues of $B$ ordered so that $|\lambda_1(B)| \le \cdots \le |\lambda_n(B)|$. It suffices to fix an $S = S_1 \cup S_2 \in S_{k,t}$ and prove (7) for all non-zero $x \in \mathbb{C}^S$. Since $A_S^* A_S$ is normal, there is an orthonormal basis of eigenvectors $u_1,\dots,u_n$ for $A_S^* A_S$, where $u_i$ corresponds to the eigenvalue $\lambda_i(A_S^* A_S)$. For any non-zero $x \in \mathbb{C}^S$, we have $x = \sum_{i=1}^n c_i u_i$ for some $c_i \in \mathbb{C}$, so
$$\frac{\|Ax\|_2^2}{\|x\|_2^2} = \frac{\langle A_S^* A_S x, x\rangle}{\langle x, x\rangle} = \frac{\sum_{i=1}^n \lambda_i(A_S^* A_S)\,|c_i|^2}{\sum_{i=1}^n |c_i|^2}. \qquad (8)$$
Thus it will suffice to prove that $|\lambda_i(A_S^* A_S) - 1| \le \sqrt{ckt/n}$ for all $i$. Moreover,
$$|\lambda_i(A_S^* A_S) - 1| = |\lambda_i(A_S^* A_S - I)| = \sqrt{\lambda_i\big((A_S^* A_S - I)^*(A_S^* A_S - I)\big)} \qquad (9)$$
where the last equality holds because $A_S^* A_S - I$ is normal. By combining (8) and (9), we see that (7) will hold upon showing that the eigenvalues of $(A_S^* A_S - I)^*(A_S^* A_S - I)$ are bounded by $ckt/n$. So far we have not used the structure of $A$, but now we must.
Observe that $(A_S^* A_S - I)^*(A_S^* A_S - I)$ is a block diagonal matrix with two diagonal blocks of the form $X^* X$ and $XX^*$. Therefore the three matrices $(A_S^* A_S - I)^*(A_S^* A_S - I)$, $X^* X$, and $XX^*$ have the same non-zero eigenvalues. Moreover, $X$ is simply the matrix $F_{S_1}$ with those rows not indexed by $S_2$ deleted. The hypotheses on $F$ imply that the entries of $X^* X$ satisfy $|(X^* X)_{ij}| \le \frac{ct}{n}$. So the Gershgorin disc theorem implies that each eigenvalue $\lambda$ of $X^* X$ and (hence) of $(A_S^* A_S - I)^*(A_S^* A_S - I)$ satisfies $|\lambda| \le \frac{ckt}{n}$.
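As an informal numerical sanity check of Theorem 6 (our experiment, not part of the paper), the sketch below builds A = [F I] with F an orthonormal DCT matrix (for which $|F_{ij}|^2 \le 2/n$, so c = 2), draws a random support in $S_{k,t}$, and compares the largest eigenvalue deviation of $A_S^* A_S$ from 1 against $\sqrt{ckt/n}$.

import numpy as np
from scipy.fft import dct

n, k, t = 256, 5, 5
c = 2.0                                    # for the orthonormal DCT, |F_ij|^2 <= 2/n
F = dct(np.eye(n), norm='ortho', axis=0)   # orthonormal DCT matrix acting on column vectors
A = np.hstack([F, np.eye(n)])

rng = np.random.default_rng(0)
S1 = rng.choice(n, size=k, replace=False)        # k columns from F
S2 = n + rng.choice(n, size=t, replace=False)    # t columns from I
A_S = A[:, np.concatenate([S1, S2])]

eigs = np.linalg.eigvalsh(A_S.T @ A_S)
deviation = np.max(np.abs(eigs - 1.0))
print(deviation, np.sqrt(c * k * t / n))   # the deviation should not exceed sqrt(ckt/n)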
Algorithm 1 $(k, t)$-Iterative Hard Thresholding
Input: The observed vector $y \in \mathbb{C}^n$, the measurement matrix $A \in \mathbb{C}^{n\times 2n}$, and positive integers $k, t, T \in \mathbb{Z}^+$
Output: $x^{[T]} \in M_{k,t}$
1: procedure IHT($y, A, k, t, T$)
2:   $x^{[0]} \leftarrow 0$
3:   for $i \in [0, \dots, T]$ do
4:     $x^{[i+1]} \leftarrow \left(x^{[i]} + A^*(y - Ax^{[i]})\right)_{h(k,t)}$
5:   return $x^{[T]}$
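A direct NumPy sketch of Algorithm 1 (our illustration, not the authors' released code), reusing the hard_threshold_kt helper from the notation sketch above:

import numpy as np

def iht(y, A, k, t, T):
    """(k, t)-Iterative Hard Thresholding for A = [F I] of shape (n, 2n)."""
    x = np.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(T):                       # run T iterations of the update rule
        x = hard_threshold_kt(x + A.conj().T @ (y - A @ x), k, t)
    return x

With A = [F I] as in Theorem 1, the first n entries of the output approximate $\hat{x}_{h(k)}$ and the last n entries approximate the sparse noise e.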
Iterative Hard Thresholding
Now we utilize the result of Theorem 6 to prove recovery guarantees for the following Iterative Hard Thresholding algorithm.
Theorem 7. Let $A \in \mathbb{C}^{n\times 2n}$ be a matrix. Let $1 \le k, t \le n$ be positive integers and suppose $\delta_3$ is a $M_{3k,3t}$-RIP constant for $A$ and that $\delta_2$ is a $M_{2k,2t}$-RIP constant for $A$. Let $x \in \mathbb{C}^{2n}$, $r \in \mathbb{C}^n$, $y = Ax + r$, and $S \in S_{k,t}$. Letting $x^{[T]} = \mathrm{IHT}(y, A, k, t, T)$, we have the approximation error bound
$$\|x^{[T]} - x_S\|_2 \le \rho^T\|x^{[0]} - x_S\|_2 + \tau\|Ax_{\bar S} + r\|_2,$$
where $\rho = \sqrt{3}\,\delta_3$ and $(1-\rho)\tau = \sqrt{3}\sqrt{1+\delta_2}$. In particular, if $\delta_3 < 1/\sqrt{3}$, then $(1-\rho)\tau \le \sqrt{3}\sqrt{1+\delta_3} < 2.18$ and $\rho < 1$; the latter implies that the first term on the right goes to zero as $T$ goes to $\infty$.
Theorem 7 is a modification of Theorem 6.18 of [8]. More specifically, Theorem 6.18 of [8] considers $M_{3k}$, $M_{2k}$, and $S_k$ in place of $M_{3k,3t}$, $M_{2k,2t}$, and $S_{k,t}$, and any dimension $N$ in place of $2n$. The proofs are very similar, so we omit the proof of Theorem 7.
Proof of Theorem 1. Theorem 6 implies that the statement of Theorem 7 holds with $\delta_3 = \sqrt{\frac{c\cdot 3k\cdot 3t}{n}}$ and $\delta_2 = \sqrt{\frac{c\cdot 2k\cdot 2t}{n}}$.
Noting that $y = A\begin{bmatrix}\hat{x}_{h(k)} \\ e\end{bmatrix} + F\hat{x}_{t(k)}$, where $\begin{bmatrix}\hat{x}_{h(k)} \\ e\end{bmatrix} \in M_{k,t}$, set $x^{[T]} = \mathrm{IHT}(y, A, k, t, T)$ and apply Theorem 7 with $x = \begin{bmatrix}\hat{x}_{h(k)} \\ e\end{bmatrix}$, $r = F\hat{x}_{t(k)}$, and $S = \mathrm{supp}(x)$. Letting $x^{[T]} = \begin{bmatrix}\hat{x}^{[T]} \\ e^{[T]}\end{bmatrix}$, use the facts that $\|\hat{x}^{[T]} - \hat{x}_{h(k)}\|_2 \le \|x^{[T]} - x_S\|_2$ and $\|F\hat{x}_{t(k)}\|_2 = \|\hat{x}_{t(k)}\|_2$. That will give (1). Now let
$$(T-1) = \left\lceil \frac{\log(1/\epsilon) + \log\!\left(\sqrt{\|\hat{x}_{h(k)}\|_2^2 + \|e\|_2^2}\right)}{\log(1/\rho)} \right\rceil,$$
which gives $\rho^{(T-1)}\sqrt{\|\hat{x}_{h(k)}\|_2^2 + \|e\|_2^2} \le \epsilon$. Noting that $\|e^{[T-1]} - e\|_2 \le \tau\|\hat{x}_{t(k)}\|_2 + \epsilon$, we can use the same reasoning as used in [1]. We first
define $z := F^*(y - e^{[T-1]})$, which means $\hat{x}^{[T]} = z_{h(k)}$, and since $F\hat{x} + e = Fz + e^{[T-1]}$, we have $\hat{x} - z = F^*(e^{[T-1]} - e)$.
Since the support of $(e^{[T-1]} - e)$ has cardinality at most $2t$ and since $|F_{ij}|^2 \le \frac{c}{n}$, we can use the fact that for a $2t$-sparse vector $v$, $\|v\|_1 \le \sqrt{2t}\,\|v\|_2$, to get the bound
$$\big|(F^*(e^{[T-1]} - e))_i\big| \le \sum_{j=1}^n |F^*_{ij}|\,\big|(e^{[T-1]} - e)_j\big| \le \sqrt{\tfrac{2ct}{n}}\,\|e^{[T-1]} - e\|_2 \le \sqrt{\tfrac{2ct}{n}}\left(\tau\|\hat{x}_{t(k)}\|_2 + \epsilon\right)$$
for any $i \in [n]$. Therefore, $\|\hat{x} - z\|_\infty \le \sqrt{\frac{2ct}{n}}\left(\tau\|\hat{x}_{t(k)}\|_2 + \epsilon\right)$ and consequently $\|\hat{x}_{h(k)} - z_{h(k)}\|_\infty \le \sqrt{\frac{2ct}{n}}\left(\tau\|\hat{x}_{t(k)}\|_2 + \epsilon\right)$, which is (2). We get (3) by noting that $\hat{x}_{h(k)} - z_{h(k)}$ is $2k$-sparse and therefore
$$\|\hat{x}_{h(k)} - z_{h(k)}\|_2 \le \sqrt{2k}\,\|\hat{x}_{h(k)} - z_{h(k)}\|_\infty \le \sqrt{\tfrac{4ckt}{n}}\left(\tau\|\hat{x}_{t(k)}\|_2 + \epsilon\right).$$
Basis Pursuit
Next we introduce the Basis Pursuit algorithm and prove its recovery guarantees for $\ell_0$-norm and $\ell_2$-norm noise.
Algorithm 2 Basis Pursuit
Input: The observed vector $y \in \mathbb{C}^n$, where $y = Ax + e$, the measurement matrix $A \in \mathbb{C}^{n\times N}$, and the norm of the error vector $\eta$ such that $\|e\|_2 \le \eta$
Output: $x^{\#} \in \mathbb{C}^N$
1: procedure BP($y, A, \eta$)
2:   $x^{\#} \leftarrow \arg\min_{z\in\mathbb{C}^N} \|z\|_1$ subject to $\|Az - y\|_2 \le \eta$
3:   return $x^{\#}$
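The paper's experiments solve this program with CVXPY; the following is a hedged sketch of one way to set it up (the variable names and reliance on the default solver are ours):

import cvxpy as cp

def basis_pursuit(y, A, eta):
    """Algorithm 2: minimize ||z||_1 subject to ||A z - y||_2 <= eta."""
    z = cp.Variable(A.shape[1])
    problem = cp.Problem(cp.Minimize(cp.norm1(z)),
                         [cp.norm(A @ z - y, 2) <= eta])
    problem.solve()   # the constraint makes this a second-order cone program
    return z.value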
We begin by stating some definitions that will be required in the proofs of the main theorems.
Definition 8. The matrix $A \in \mathbb{C}^{m\times N}$ satisfies the robust null space property with constants $0 < \rho < 1$, $\tau > 0$ and norm $\|\cdot\|$ if for every set $S \subseteq [N]$ with $\mathrm{card}(S) \le s$ and for every $v \in \mathbb{C}^N$ we have
$$\|v_S\|_1 \le \rho\|v_{\bar S}\|_1 + \tau\|Av\|.$$
Definition 9. The matrix $A \in \mathbb{C}^{m\times N}$ satisfies the $\ell_q$ robust null space property of order $s$ with constants $0 < \rho < 1$, $\tau > 0$ and norm $\|\cdot\|$ if for every set $S \subseteq [N]$ with $\mathrm{card}(S) \le s$ and for every $v \in \mathbb{C}^N$ we have
$$\|v_S\|_q \le \frac{\rho}{s^{1-1/q}}\|v_{\bar S}\|_1 + \tau\|Av\|.$$
Note that if $q = 1$ then this is simply the robust null space property.
The proof of Theorem 2 requires the following theorem (whose full proof is given in the cited work).
Theorem 10 (Theorem 4.33 in [8]). Let $a_1,\dots,a_N$ be the columns of $A \in \mathbb{C}^{m\times N}$, let $x \in \mathbb{C}^N$ with $s$ largest absolute entries supported on $S$, and let $y = Ax + e$ with $\|e\|_2 \le \eta$. For $\delta, \beta, \gamma, \theta, \tau \ge 0$ with $\delta < 1$, assume that
$$\|A_S^* A_S - I\|_{2\to 2} \le \delta, \qquad \max_{l\in\bar S}\|A_S^* a_l\|_2 \le \beta,$$
and that there exists a vector $u = A^* h \in \mathbb{C}^N$ with $h \in \mathbb{C}^m$ such that
$$\|u_S - \mathrm{sgn}(x_S)\|_2 \le \gamma, \qquad \|u_{\bar S}\|_\infty \le \theta, \qquad \|h\|_2 \le \tau\sqrt{s}.$$
If $\rho := \theta + \frac{\beta\gamma}{1-\delta} < 1$, then a minimizer $x^{\#}$ of $\|z\|_1$ subject to $\|Az - y\|_2 \le \eta$ satisfies
$$\|x^{\#} - x\|_2 \le \frac{2}{1-\rho}\left(1 + \frac{\beta}{1-\delta}\right)\|x_{\bar S}\|_1 + \left(\frac{2(\mu\gamma + \tau\sqrt{s})}{1-\rho}\left(1 + \frac{\beta}{1-\delta}\right) + 2\mu\right)\eta,$$
where $\mu := \frac{\sqrt{1+\delta}}{1-\delta}$ and $\mathrm{sgn}(x)_i = \begin{cases} 0, & x_i = 0 \\ 1, & x_i > 0 \\ -1, & x_i < 0. \end{cases}$
We will need another Lemma before proving Theorem 2.
Lemma 11. Let $A \in \mathbb{C}^{n\times 2n}$. If $\big|\|Ax\|_2^2 - \|x\|_2^2\big| \le \delta\|x\|_2^2$ for all $x \in M_{k,t}$, then $\|A_S^* A_S - I\|_{2\to 2} \le \delta$ for any $S \in S_{k,t}$.
Proof. Let $S \in S_{k,t}$ be given. Then for any $x \in \mathbb{C}^S$, we have
$$\big|\|A_S x\|_2^2 - \|x\|_2^2\big| \le \delta\|x\|_2^2.$$
We can re-write the left-hand side as
$$\|A_S x\|_2^2 - \|x\|_2^2 = \langle A_S x, A_S x\rangle - \langle x, x\rangle = \langle (A_S^* A_S - I)x, x\rangle.$$
Noting that $A_S^* A_S - I$ is Hermitian, we have
$$\|A_S^* A_S - I\|_{2\to 2} = \max_{x\in\mathbb{C}^S\setminus\{0\}} \frac{\big|\langle (A_S^* A_S - I)x, x\rangle\big|}{\|x\|_2^2} \le \delta.$$
Proof of Theorem 2. We will derive equation (4) by showing that the matrix $A$ satisfies all the hypotheses in Theorem 10 for every vector in $M_{k,t}$.
First note that by Theorem 6, $A$ satisfies the $M_{k,t}$-RIP property with constant $\delta_{k,t} := \sqrt{\frac{ckt}{n}}$. Therefore, by Lemma 11, for any $S \in S_{k,t}$, we have $\|A_S^* A_S - I\|_{2\to 2} \le \delta_{k,t}$. Since $A_S^* A_S$ is a positive semi-definite matrix, it has only non-negative eigenvalues that lie in the range $[1 - \delta_{k,t}, 1 + \delta_{k,t}]$. Since $\delta_{k,t} < 1$ by assumption, $A_S^* A_S$ is injective. Thus, we can set $h = A_S(A_S^* A_S)^{-1}\mathrm{sgn}(x_S)$ and get
$$\|h\|_2 = \|A_S(A_S^* A_S)^{-1}\mathrm{sgn}(x_S)\|_2 \le \|A_S\|_{2\to 2}\,\|(A_S^* A_S)^{-1}\|_{2\to 2}\,\|\mathrm{sgn}(x_S)\|_2 \le \tau\sqrt{k+t},$$
where $\tau = \frac{\sqrt{1+\delta_{k,t}}}{1-\delta_{k,t}}$ and we have used the following facts: since $\|A_S^* A_S - I\|_{2\to 2} \le \delta_{k,t} < 1$, we get that $\|(A_S^* A_S)^{-1}\|_{2\to 2} \le \frac{1}{1-\delta_{k,t}}$ and that the largest singular value of $A_S$ is less than $\sqrt{1+\delta_{k,t}}$. Now let $u = A^* h$; then $\|u_S - \mathrm{sgn}(x_S)\|_2 = 0$. Now we need to bound the value $\|u_{\bar S}\|_\infty$. Denoting row $j$ of $A_{\bar S}^* A_S$ by the vector $v_j$, we see that it has at most $\max\{k, t\}$ non-zero entries and that $|(v_j)_l|^2 \le \frac{c}{n}$ for $l = 1,\dots,(k+t)$. Therefore, for any element $(u_{\bar S})_j$, we have
$$\big|(u_{\bar S})_j\big| = \big|\big\langle (A_S^* A_S)^{-1}\mathrm{sgn}(x_S), (v_j)^*\big\rangle\big| \le \|(A_S^* A_S)^{-1}\|_{2\to 2}\,\|\mathrm{sgn}(x_S)\|_2\,\|v_j\|_2 \le \frac{\sqrt{k+t}}{1-\delta_{k,t}}\,\beta = \theta,$$
so we get $\|u_{\bar S}\|_\infty \le \theta < 1$, and we also observe that $\max_{l\in\bar S}\|A_S^* a_l\|_2 \le \beta$. Therefore, all the hypotheses of Theorem 10 have been satisfied. Note that
$$y = F\hat{x} + e = A\begin{bmatrix}\hat{x}_{h(k)} \\ e\end{bmatrix} + F\hat{x}_{t(k)}, \qquad \text{where } \begin{bmatrix}\hat{x}_{h(k)} \\ e\end{bmatrix} \in M_{k,t}.$$
Therefore, setting $x^{\#} = \mathrm{BP}(y, A, \|\hat{x}_{t(k)}\|_2)$, we use the fact that $\|F\hat{x}_{t(k)}\|_2 = \|\hat{x}_{t(k)}\|_2$ combined with the bound in Theorem 10 to get (4):
$$\|\hat{x}^{\#} - \hat{x}_{h(k)}\|_2 \le \left(\frac{2\tau\sqrt{k+t}}{1-\theta}\left(1 + \frac{\beta}{1-\delta_{k,t}}\right) + 2\tau\right)\|\hat{x}_{t(k)}\|_2,$$
where we write $x^{\#} = \begin{bmatrix}\hat{x}^{\#} \\ e^{\#}\end{bmatrix}$ with $\hat{x}^{\#}, e^{\#} \in \mathbb{C}^n$.
We note that since Algorithm 2 is not adapted to the structure of the matrix $A$ in the statement of Theorem 2, one can expect the guarantees to be weaker. We now focus on proving Theorem 3. In order to do so, we will need to state some lemmas that will be used in the main proof.
Lemma 12. If a matrix $A \in \mathbb{C}^{m\times N}$ satisfies the $\ell_2$ robust null space property for $S \subseteq [N]$ with $\mathrm{card}(S) = s$ and constants $0 < \rho < 1$, $\tau > 0$, then it satisfies the $\ell_1$ robust null space property for $S$ with constants $0 < \rho < 1$ and $\tau' := \tau\sqrt{s} > 0$.
Proof. For any $v \in \mathbb{C}^N$, $\|v_S\|_2 \le \frac{\rho}{\sqrt{s}}\|v_{\bar S}\|_1 + \tau\|Av\|$. Then, using the fact that $\|v_S\|_1 \le \sqrt{s}\,\|v_S\|_2$, we get $\|v_S\|_1 \le \rho\|v_{\bar S}\|_1 + \tau\sqrt{s}\,\|Av\|$.
Lemma 13 (Theorem 4.20 in [8]). If a matrix $A \in \mathbb{C}^{m\times N}$ satisfies the $\ell_1$ robust null space property (with respect to $\|\cdot\|$) with constants $0 < \rho < 1$ and $\tau > 0$ for $S \subseteq [N]$, then
$$\|z - x\|_1 \le \frac{1+\rho}{1-\rho}\left(\|z\|_1 - \|x\|_1 + 2\|x_{\bar S}\|_1\right) + \frac{2\tau}{1-\rho}\|A(z - x)\|$$
for all $z, x \in \mathbb{C}^N$.
Lemma 14 (Proposition 2.3 in [8]). For any $p > q > 0$ and $x \in \mathbb{C}^n$,
$$\inf_{z\in M_k}\|x - z\|_p \le \frac{1}{k^{\frac{1}{q}-\frac{1}{p}}}\|x\|_q.$$
Proof of Theorem 3. Let $0 < \rho < 1$ be arbitrary. Since $F$ is a unitary matrix, for any $S \subseteq [n]$ and $v \in \mathbb{C}^n$, we have
$$\|v_S\|_2 \le \frac{\rho}{\sqrt{k}}\|v_{\bar S}\|_1 + \tau\|v\|_2 = \frac{\rho}{\sqrt{k}}\|v_{\bar S}\|_1 + \tau\|Fv\|_2, \qquad (10)$$
where $\tau = 1$. Therefore, $F$ satisfies the $\ell_2$ robust null space property for all $S \subseteq [n]$ with $\mathrm{card}(S) \le k$. Next, using Lemma 12 we get $\|v_S\|_1 \le \rho\|v_{\bar S}\|_1 + \tau\sqrt{k}\,\|Fv\|_2$ for all $v \in \mathbb{C}^n$. Now let $x^{\#} = \mathrm{BP}(y, F, \eta)$; then we know $\|x^{\#}\|_1 \le \|\hat{x}\|_1$, where $\hat{x}$ is $k$-sparse. Then, letting $S \subseteq [n]$ be the support of $\hat{x}$ and using the fact that $\|\hat{x}_{\bar S}\|_2 = 0$ and Lemma 13, we get
$$\|x^{\#} - \hat{x}\|_1 \le \frac{1+\rho}{1-\rho}\left(\|x^{\#}\|_1 - \|\hat{x}\|_1 + 2\|\hat{x}_{\bar S}\|_1\right) + \frac{2\tau\sqrt{k}}{1-\rho}\|F(x^{\#} - \hat{x})\|_2 \le \frac{2\tau\sqrt{k}}{1-\rho}\|F(x^{\#} - \hat{x})\|_2 \le \frac{4\tau\sqrt{k}}{1-\rho}\|e\|_2 \le \frac{4\tau\sqrt{k}}{1-\rho}\eta.$$
Letting $\rho \to 0$ and recalling that $\tau = 1$ gives (5). Now let $S$ be the support of the $k$ largest entries in $x^{\#} - \hat{x}$. Note $\|(x^{\#} - \hat{x})_{\bar S}\|_2 = \inf_{z\in M_k}\|(x^{\#} - \hat{x}) - z\|_2$. Then, using Lemma 14 and (10), we see that
$$\|x^{\#} - \hat{x}\|_2 \le \|(x^{\#} - \hat{x})_{\bar S}\|_2 + \|(x^{\#} - \hat{x})_S\|_2 \le \frac{1}{\sqrt{k}}\|x^{\#} - \hat{x}\|_1 + \frac{\rho}{\sqrt{k}}\|(x^{\#} - \hat{x})_{\bar S}\|_1 + \tau\|F(x^{\#} - \hat{x})\|_2 \le \frac{1+\rho}{\sqrt{k}}\|x^{\#} - \hat{x}\|_1 + 2\tau\eta \le \frac{4\tau(1+\rho)}{1-\rho}\eta + 2\tau\eta = \left(\frac{4\tau(1+\rho)}{1-\rho} + 2\tau\right)\eta.$$
Recalling $\tau = 1$ and letting $\rho \to 0$ gives the desired result.
Table 2: Recovery performance of Algorithm 1 on $\ell_0$-norm bounded noise.
Experiments
We first analyze how our recovery guarantees perform in practice (Section 7.1) and then show that CRD can be used to defend neural networks against $\ell_0$-norm attacks (Section 7.2) as well as $\ell_2$-norm attacks (Section 7.3). All of our experiments are conducted on CIFAR-10 [13], MNIST [14], and Fashion-MNIST [28] datasets with pixel values of each image normalized to lie in [0, 1]. For every experiment, we use the Discrete Cosine Transform (DCT) and the Inverse Discrete Cosine Transform (IDCT) denoted by the matrices $F \in \mathbb{R}^{n\times n}$ and $F^T \in \mathbb{R}^{n\times n}$ respectively. That is, for an adversarial image $y \in \mathbb{R}^{\sqrt{n}\times\sqrt{n}}$ such that $y = x + e$, we let $\hat{x} = Fx$ and $x = F^T\hat{x}$, where $x, \hat{x} \in \mathbb{R}^n$ and $e \in \mathbb{R}^n$ is the noise vector (bounded either in $\ell_0$ or $\ell_2$-norm). For an adversarial image $y \in \mathbb{R}^{\sqrt{n}\times\sqrt{n}\times c}$ that contains $c$ channels, we perform recovery on each channel independently by considering $y_m = x_m + e_m$, where $\hat{x}_m = Fx_m$ and $x_m = F^T\hat{x}_m$ for $m = 1,\dots,c$. The value $k$ denotes the number of largest (in absolute value) DCT co-efficients used for reconstruction of each channel, and the value $t$ denotes the $\ell_0$ noise budget for each channel.
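A sketch of the per-channel setup just described (our code, reusing the iht routine sketched after Algorithm 1): the DCT matrix F is built explicitly so that it plays the role of $F \in \mathbb{R}^{n\times n}$ acting on a flattened channel.

import numpy as np
from scipy.fft import dct

def dct_matrix(n):
    """Orthonormal DCT matrix F in R^(n x n), so that F @ x equals dct(x, norm='ortho')."""
    return dct(np.eye(n), norm='ortho', axis=0)

def recover_image(y_img, k, t, T):
    """Recover each channel of an (H, W, C) adversarial image with (k, t)-IHT."""
    h, w, ch = y_img.shape
    n = h * w
    F = dct_matrix(n)
    A = np.hstack([F, np.eye(n)])
    out = np.zeros_like(y_img)
    for m in range(ch):
        y = y_img[:, :, m].reshape(n)
        z = iht(y, A, k, t, T)                        # estimate of [x_hat_h(k); e]
        out[:, :, m] = (F.T @ z[:n]).reshape(h, w)    # reconstruction F^T x_hat^[T]
    return out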
We now outline the neural network architectures used for the experiments in Sections 7.2 and 7.3. For CIFAR-10, we use the network architecture of [10], while the architecture used for the MNIST and Fashion-MNIST datasets is provided in Table 1. We train our networks using the Adam optimizer for CIFAR-10 and the AdaDelta optimizer for MNIST and Fashion-MNIST. In both cases, we use a cross-entropy loss function. We implement the following training procedure: for every training image $x$, we first generate $\hat{x}_{h(k)} = (Fx)_{h(k)}$, and then reconstruct the image $x' = F^T\hat{x}_{h(k)}$. We then use both $x$ and $x'$ to train the network. For instance, in MNIST we get 60000 original training images and 60000 reconstructed training images, for a total of 120000 training images. The code to reproduce our experiments is available here: https://github.com/jasjeetIM/recovering_compressible_signals.
Recovery Error
Since recovery guarantees for Algorithm 1 and Algorithm 2 have been proved theoretically, our aim is to examine how close the recovery error is to the upper bound in practice. Each experiment is conducted on a subset of 500 data points sampled uniformly at random from the respective dataset. We first provide the experimental results for the case of $\ell_0$-norm bounded noise in Section 7.1.1 and then for the case of $\ell_2$-norm bounded noise in Section 7.1.2.
$\ell_0$ noise
For each data point $x_i \in \mathbb{R}^n$, $i = 1, 2, \dots, 500$, we construct a noise vector $e_i \in \mathbb{R}^n$ as follows: we first sample an integer $t_i$ from a uniform distribution over the set $\{1,\dots,t\}$, where $t$ is the allowed $\ell_0$ noise budget. Next, we select an index set $S_{t_i} \subset [n]$ uniformly at random, such that $\mathrm{card}(S_{t_i}) = t_i$.
Table 5: Effectiveness of CRD against OPA. The first column lists the accuracy of the network on original images and the OPA Acc. column shows the network's accuracy on adversarial images. The Corr. Acc. column shows the accuracy of the network on images reconstructed using Algorithm 1. (Orig. Acc.: 77.4%, OPA Acc.: 0.0%, Corr. Acc.: 68.3%.)
Figure 2: Reconstruction quality of images using Algorithm 1. The first row shows the original images while the second row shows reconstruction from the largest 275 DCT co-efficients recovered using Algorithm 1.
Then for each $j \in S_{t_i}$, we set $(e_i)_j = c_j$, where $c_j$ is sampled from the uniform distribution on $[0, 1)$, and $(e_i)_l = 0$ for $l \notin S_{t_i}$. We then set $y_i = x_i + e_i$ as the observed noisy vector. The first metric we report is
$$\delta_p := \frac{1}{500}\sum_{i=1}^{500}\big\|(\hat{x}^{\#}_i)_{h(k)} - (\hat{x}_i)_{h(k)}\big\|_p,$$
where $\hat{x}^{\#}_i$² is the recovered vector for the noisy measurement $y_i$, $(\hat{x}_i)_{h(k)} = (Fx_i)_{h(k)}$, and the average is taken over the 500 points sampled from the dataset. This measures the average magnitude of the recovery error for the respective algorithm in $\ell_p$-norm. In order to relate this value to the upper bound on the recovery error, we also report
$$\Delta_p := \frac{1}{500}\sum_{i=1}^{500}\left(\Upsilon_i - \big\|(\hat{x}^{\#}_i)_{h(k)} - (\hat{x}_i)_{h(k)}\big\|_p\right),$$
where $\Upsilon_i$ is the guaranteed upper bound (as per our Theorems 1 and 2) for $y_i$. Using $\delta_p$ and $\Delta_p$, we aim to capture how much smaller the recovery error is than the upper bound for these datasets. Finally, we also report $t_{avg} := \frac{1}{500}\sum_{i=1}^{500} t_i$.
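A sketch of the $\ell_0$-noise construction and the $\delta_p$ metric just defined (our code; the theoretical bounds $\Upsilon_i$ would be supplied by the caller, and hard_threshold is the helper from the notation sketch above):

import numpy as np

def l0_noise(n, t, rng):
    """Sample t_i ~ Uniform{1,...,t}, pick t_i coordinates, fill them with Uniform[0,1) values."""
    t_i = rng.integers(1, t + 1)
    e = np.zeros(n)
    idx = rng.choice(n, size=t_i, replace=False)
    e[idx] = rng.random(t_i)
    return e

def delta_p(recovered, originals, k, p):
    """Average p-norm error between (x_i^#)_h(k) and (x_i)_h(k), where the inputs are
    lists of recovered and original transform-coefficient vectors."""
    errs = [np.linalg.norm(hard_threshold(r, k) - hard_threshold(o, k), ord=p)
            for r, o in zip(recovered, originals)]
    return float(np.mean(errs))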
Recovery with Algorithm 1
We set $k = 4$ for MNIST and Fashion-MNIST and are allowed an $\ell_0$ noise budget of $t = 3$. For CIFAR-10, we set $k = 5$ and are allowed a noise budget of $t = 3$. That is, the number $k$ of largest co-efficients used in each experiment is roughly equal to the $\ell_0$ noise budget used. We note that $k$ values have been chosen to meet our computational constraints. As such, any other values that fit the hypotheses of Theorem 1 would work just as well. The results in Table 2 show that on average, the recovery error is well below the upper bound for each dataset. This is quantified by $\Delta_\infty$ and $\Delta_2$ that show a large difference between the upper bound and the observed error for all three datasets. We will utilize this observation in Section 7.2 when we show that recovery works well even when $t$ is outside the theoretical constraints of Theorem 1.
Recovery with Algorithm 2
We implement Algorithm 2 using the open source library CVXPY [6]. We set $k = 8$ for MNIST and Fashion-MNIST and are allowed an $\ell_0$ noise budget of $t = 8$. For CIFAR-10, we set $k = 10$ and are allowed a noise budget of $t = 8$. We observe the results in Table 3 and note once again that the recovery error is well below the upper bound. This observation will also be useful in Section 7.2, where we will show that recovery error of Algorithm 2 is small for values of $t$ that are well outside the theoretical constraints of Theorem 2.
² Note that $\hat{x}^{\#}_i$ is $\hat{x}^{[T]}$ in the statement of Theorem 1 and $\hat{x}^{\#}$ in the statement of Theorem 2.
$\ell_2$ noise
Now we consider the case when the noise vector $e_i$, $i = 1, 2, \dots, 500$, is only bounded in $\ell_2$-norm. This case is covered by the guarantees provided in Theorem 3. First we describe the procedure used to construct each noise vector. For each $e_i$, $i = 1, 2, \dots, 500$, we set $(e_i)_j = c_j$, where $c_j$ is sampled from the uniform distribution on $[0, 1)$. Since there is no restriction on how small $k$ needs to be, we set $k = 75$ for CIFAR-10 and $k = 40$ for MNIST and Fashion-MNIST. We report $\delta_1$, $\delta_2$, $\Delta_1$, $\Delta_2$, and since the noise budget is in $\ell_2$-norm, we also report $\ell_{2,avg} := \frac{1}{500}\sum_{i=1}^{500}\|e_i\|_2$. The results are shown in Table 4. As was the case in Section 7.1.1, the recovery error is well below the upper bound here as well. This observation will be useful in Section 7.3 where we are able to create high quality reconstructions for $\ell_2$-norm bounded attacks.
Making note of the results from Section 7.1.1 and 7.1.2, we now show that CRD can be used to defend against $\ell_0$-norm and $\ell_2$-norm bounded adversarial inputs.
Defense against $\ell_0$-norm attacks
This section is organized as follows: first we examine CRD against the One Pixel Attack (OPA) [22] for CIFAR-10. We only test the attack on CIFAR-10 as it is most effective against natural images and does not work well on MNIST or Fashion-MNIST. We note that this attack satisfies the theoretical constraints for $t$ provided in our guarantees, hence allowing us to test how well CRD works within our guarantees. Once we establish the effectiveness of CRD against OPA, we then test it against two other $\ell_0$-norm bounded attacks: the Carlini and Wagner (CW) $\ell_0$-norm attack [4] and the Jacobian based Saliency Map Attack (JSMA) [19]. For the latter two attacks, we test CRD on all three datasets. Each experiment is conducted on a set of 1000 points sampled uniformly at random from the test set of the respective dataset.
One Pixel Attack
We first resize all CIFAR-10 images to 125 × 125 × 3 while maintaining aspect ratios to ensure that the data falls under the hypotheses in Theorem 1 even for large values of $k$. The OPA attack perturbs exactly one pixel of the image, leading to an $\ell_0$ noise budget of $t = 3$ per image. The $\ell_0$ noise budget of $t = 3$ allows us to use $k = 275$ for recovery with Algorithm 1. Even though OPA only perturbs one pixel per image, Table 5 shows that it is very effective against natural images and forces the network to misclassify all correctly classified inputs. Figure 1 shows that adversarial images created using OPA are visually almost indistinguishable from the original images. We test the performance of CRD in two ways: a) reconstruction quality b) network performance on reconstructed images.
In order to analyse the reconstruction quality of Algorithm 1, we do the following: for each test image, we use OPA to perturb the image and then use Algorithm 1 to approximate its largest (in absolute value) k = 275 DCT co-efficients. We then perform the IDCT on these recovered co-efficients to generate reconstructed images. The reconstructed images from Algorithm 1 can be seen in the second row of Figure 2. These reconstructions are then compared to the original images presented in the first row of the same figure.
Noting that Algorithm 1 leads to high quality reconstruction, we now test whether network accuracy improves on these reconstructed images. To do so, we feed these reconstructed images as Table 6: Network performance on the original inputs, adversarial inputs and the inputs corrected using CRD. Here the t avg column lists the average adversarial budget for each attack, Orig. Acc. column lists the accuracy of the network on the original inputs, the Acc. columns shows the accuracy on adversarial inputs, the IHT-Acc. and the BP-Acc. columns list the accuracy of the network on inputs that have been corrected using Algorithm 1 and Algorithm 2 respectively.
input to the network and report its accuracy in Table 5. We note that network performance does indeed improve as network accuracy goes from 0.0% to 68.3% using Algorithm 1. Therefore, we conclude that CRD provides a substantial improvement in accuracy in against OPA.
CW-$\ell_0$ Attack and JSMA
Having established the effectiveness of CRD against OPA, we move on to the CW $\ell_0$-norm attack and JSMA. Since these two attacks do not necessarily satisfy the required hypotheses on $t$ for Theorem 1 and Theorem 2, we call upon the results of Section 7.1.1 to test if CRD is still able to defend the network against these attacks. For instance, in the case of the CW-$\ell_0$ attack, there is no way to pre-specify a fixed adversarial noise budget since the attack iteratively reduces the number of perturbed pixels until it is no longer effective. For JSMA one can pre-specify an adversarial budget, but as noted in [1], JSMA is only effective with larger values of $t$. However, even when $t$ is much larger than the hypotheses of Theorem 1 and Theorem 2 allow, we find that CRD is still able to defend the network. We observe that this is related to the behaviour of the RIP of a matrix for "most"³ vectors as opposed to the RIP for all vectors, and leave a more rigorous analysis for a follow up work.
To begin our analysis, we show adversarial images for MNIST and Fashion-MNIST created by CW-$\ell_0$ and JSMA in Figure 3. The first row contains the original test images while the second and the third rows show the adversarial images. We show adversarial images for the CIFAR-10 dataset in Figure 4. Next, we follow the procedure described in Section 7.2.1 to analyze the quality of reconstructions for Algorithm 1 and Algorithm 2. For MNIST and Fashion-MNIST, we show the reconstructions of Algorithm 1 in Figure 5 and for Algorithm 2 in Figure 6. For CIFAR-10, we show the reconstructions for Algorithm 1 in Figure 7 and for Algorithm 2 in Figure 8. In each case it can be seen that both algorithms provide high quality reconstructions for values of $t$ that are well outside the hypotheses required by Theorem 1 and Theorem 2. We report these $t$ values and the improvement in network performance on reconstructed adversarial images using CRD in Table 6.
Note that while the network accuracy for all datasets improves substantially using CRD, for Algorithm 1, network accuracy on reconstructed images for CIFAR-10 remains considerably lower than accuracy on original images. We observe that a possible reason may be the difference in properties of DCT co-efficients of MNIST/Fashion-MNIST data versus data from CIFAR-10. Consider the definition of $(k, \epsilon)$-sparse adapted from [1]: a $(k, \epsilon)$-sparse vector $x \in \mathbb{C}^n$ follows the constraint $\|x_{t(k)}\|_2 \le \epsilon\|x_{h(k)}\|_2$. The point of this definition is that smaller values of $\epsilon$ mean the vector $x$ is closer to being $k$-sparse. We notice that the average value of $\epsilon$ for DCT co-efficients for CIFAR-10 is approximately 0.30 while that for MNIST is 1.06 and for Fashion-MNIST is approximately 0.89, where $k = \lceil 0.05n \rceil$. Based on our limited experimental results, it may be hypothesized that Algorithm 1 works well for larger values of $\epsilon$, when $k, t$ do not fit the constraints of Theorem 1. However, a deeper investigation is required to understand what makes Algorithm 1 perform poorly for CIFAR-10.
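A sketch of how the compressibility value $\epsilon$ could be estimated for a dataset, following the definition above with $k = \lceil 0.05n \rceil$ (our code, reusing hard_threshold from the notation sketch):

import numpy as np
from scipy.fft import dct

def avg_epsilon(images):
    """Average eps = ||x_t(k)||_2 / ||x_h(k)||_2 over the DCT coefficients of each image."""
    ratios = []
    for img in images:
        x_hat = dct(img.reshape(-1), norm='ortho')
        k = int(np.ceil(0.05 * x_hat.size))
        head = hard_threshold(x_hat, k)
        ratios.append(np.linalg.norm(x_hat - head) / np.linalg.norm(head))
    return float(np.mean(ratios))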
Defense against $\ell_2$-norm attacks
In the case of $\ell_2$-norm bounded attacks, we use the CW $\ell_2$-norm attack [4] and the Deepfool attack [18] as they have been shown to be the most powerful. We note that Theorem 3 does not impose any restrictions on $k$ or $t$ and therefore the guarantees of equations (5) and (6) are applicable for recovery in all experiments of this section. Figure 9 shows examples of each attack for the CIFAR-10 dataset while adversarial images for MNIST and Fashion-MNIST are presented in Figure 10.
The reconstruction quality for MNIST and Fashion-MNIST is shown in Figure 11 and for CIFAR-10 we show the reconstruction quality in Figure 12. It can be noted that reconstruction using Algorithm 2 is of high quality for all three datasets. In order to check whether this high quality reconstruction also leads to improved performance in network accuracy, we test each network on reconstructed images using Algorithm 2. We report the results in Table 7 and note that Algorithm 2 provides a substantial improvement in network accuracy for each dataset and each attack method used. We can conclude that CRD is able to defend neural networks against $\ell_2$-norm bounded attacks.
Table 7: Accuracy of our network on the original inputs, adversarial inputs and the inputs corrected using CRD. Here the $\ell_{2,avg}$ column lists the average $\ell_2$-norm of the attack vector, the Acc. columns list the accuracy of the network on the original and adversarial inputs, and the Corr. Acc. column lists the accuracy of the network once the inputs have been corrected using CRD.
Figure 10: Adversarial images for the MNIST and Fashion-MNIST datasets for $\ell_2$-norm bounded attacks. The first row lists the original images for the MNIST and Fashion-MNIST datasets. The second row shows adversarial images created using the CW $\ell_2$-norm attack and the third row shows adversarial images created using the Deepfool attack.
Figure 11: Reconstruction from adversarial images using Algorithm 2. The first row shows the original images while the second and the third rows show the reconstruction of the adversarial images after recovering the largest 40 co-efficients using Algorithm 2.
7.4 Which recovery algorithm to use for $\ell_0$-norm attacks
As shown in Section 7.1.1, Algorithm 1 and Algorithm 2 lead to high quality reconstructions for $\ell_0$-norm bounded attacks. Hence, it is conceivable that CRD using either algorithm should be able to provide a good defense. However, we found that reconstructions using Algorithm 2 led to better network accuracy for CIFAR-10 than Algorithm 1, while Algorithm 1 outperformed Algorithm 2 for MNIST and Fashion-MNIST. Therefore, the algorithm to use may be dependent on the dataset in question. The next question to examine is which algorithm is faster in practice. Since Algorithm 2 is not technically an algorithm, its runtime is dependent on the actual method used to solve the optimization problem. For instance, we use Second Order Cone Programming (SOCP) from CVXPY [6] for solving the minimization problem in Algorithm 2. In our experiments, we noticed that the runtime of Algorithm 2 slows considerably for larger values of $n$. However, Algorithm 1 does not face this issue (there is a slowdown but it is much smaller than for Algorithm 2). Therefore, if speed is important, it may be beneficial to use Algorithm 1 as opposed to Algorithm 2 for recovery in the case of $\ell_0$-norm attacks.
Conclusion
We provided recovery guarantees for corrupted signals in the case of $\ell_0$-norm bounded and $\ell_2$-norm bounded noise. We then experimentally verified these guarantees and showed that for the datasets used, recovery error was considerably lower than the upper bounds of our theorems. We were able to utilize these observations in CRD and improve the performance of neural networks substantially in the case of $\ell_0$-norm bounded noise as well as $\ell_2$-norm bounded noise. While $\ell_0$-norm attacks don't necessarily satisfy the constraints required for our guarantees, we showed that CRD is still able to provide a good defense for values of $t$ much larger than allowed by Theorems 1 and 2.
In the case of $\ell_2$-norm bounded adversaries, the guarantees of Theorem 3 were applicable in all experiments and CRD was shown to improve network performance for all attacks. | 8,526
1907.06565 | 2957156153 | We provide recovery guarantees for compressible signals that have been corrupted with noise and extend the framework introduced in [1] to defend neural networks against @math -norm and @math -norm attacks. Concretely, for a signal that is approximately sparse in some transform domain and has been perturbed with noise, we provide guarantees for accurately recovering the signal in the transform domain. We can then use the recovered signal to reconstruct the signal in its original domain while largely removing the noise. Our results are general as they can be directly applied to most unitary transforms used in practice and hold for both @math -norm bounded noise and @math -norm bounded noise. In the case of @math -norm bounded noise, we prove recovery guarantees for Iterative Hard Thresholding (IHT) and Basis Pursuit (BP). For the case of @math -norm bounded noise, we provide recovery guarantees for BP. These guarantees theoretically bolster the defense framework introduced in [1] for defending neural networks against adversarial inputs. Finally, we experimentally demonstrate this defense framework using both IHT and BP against the One Pixel Attack [21], Carlini-Wagner @math and @math attacks [3], Jacobian Saliency Based attack [18], and the DeepFool attack [17] on CIFAR-10 [12], MNIST [13], and Fashion-MNIST [27] datasets. This expands beyond the experimental demonstrations of [1]. | Other works that provide guarantees include @cite_1 and @cite_20 where the authors frame the problem as one of regularizing the Lipschitz constant of a network and provide a lower bound on the norm of the perturbation required to change the classifier decision. The authors of @cite_13 use robust optimization to perturb the training data and provide a training procedure that updates parameters based on worst case perturbations. A similar approach to @cite_13 is @cite_12 in which the authors use robust optimization to provide lower bounds on the norm of adversarial perturbations on the training data. In @cite_26 , the authors use techniques from Differential Privacy @cite_25 in order to augment the training procedure of the classifier to improve robustness to adversarial inputs. Another approach using randomization is @cite_10 in which the authors add i.i.d Gaussian noise to the input and provide guarantees of maintaining classifier predictions as long as the @math -norm of the attack vector is bounded by a function that depends on the output of the classifier. | {
"abstract": [
"Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses being developed in a tight back-and-forth. Most past defenses are best effort and have been shown to be vulnerable to sophisticated attacks. Recently a set of certified defenses have been introduced, which provide guarantees of robustness to norm-bounded attacks, but they either do not scale to large datasets or are limited in the types of models they can support. This paper presents the first certified defense that both scales to large networks and datasets (such as Google's Inception network for ImageNet) and applies broadly to arbitrary model types. Our defense, called PixelDP, is based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired formalism, that provides a rigorous, generic, and flexible foundation for defense.",
"We propose a powerful second-order attack method that outperforms existing attack methods on reducing the accuracy of state-of-the-art defense models based on adversarial training. The effectiveness of our attack method motivates an investigation of provable robustness of a defense model. To this end, we introduce a framework that allows one to obtain a certifiable lower bound on the prediction accuracy against adversarial examples. We conduct experiments to show the effectiveness of our attack method. At the same time, our defense models obtain higher accuracies compared to previous works under our proposed attack.",
"Recent work has shown that state-of-the-art classifiers are quite brittle, in the sense that a small adversarial change of an originally with high confidence correctly classified input leads to a wrong classification again with high confidence. This raises concerns that such classifiers are vulnerable to attacks and calls into question their usage in safety-critical systems. We show in this paper for the first time formal guarantees on the robustness of a classifier by giving instance-specific on the norm of the input manipulation required to change the classifier decision. Based on this analysis we propose the Cross-Lipschitz regularization functional. We show that using this form of regularization in kernel methods resp. neural networks improves the robustness of the classifier without any loss in prediction performance.",
"We introduce Parseval networks, a form of deep neural networks in which the Lipschitz constant of linear, convolutional and aggregation layers is constrained to be smaller than 1. Parseval networks are empirically and theoretically motivated by an analysis of the robustness of the predictions made by deep neural networks when their input is subject to an adversarial perturbation. The most important feature of Parseval networks is to maintain weight matrices of linear and convolutional layers to be (approximately) Parseval tight frames, which are extensions of orthogonal matrices to non-square matrices. We describe how these constraints can be maintained efficiently during SGD. We show that Parseval networks match the state-of-the-art in terms of accuracy on CIFAR-10 100 and Street View House Numbers (SVHN), while being more robust than their vanilla counterpart against adversarial examples. Incidentally, Parseval networks also tend to train faster and make a better usage of the full capacity of the networks.",
"Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms. We address this problem through the principled lens of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbing the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data. For smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization. Furthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss. For imperceptible perturbations, our method matches or outperforms heuristic approaches.",
"The problem of privacy-preserving data analysis has a long history spanning multiple disciplines. As electronic data about individuals becomes increasingly detailed, and as technology enables ever more powerful collection and curation of these data, the need increases for a robust, meaningful, and mathematically rigorous definition of privacy, together with a computationally rich class of algorithms that satisfy this definition. Differential Privacy is such a definition.After motivating and discussing the meaning of differential privacy, the preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example. A key point is that, by rethinking the computational goal, one can often obtain far better results than would be achieved by methodically replacing each step of a non-private computation with a differentially private implementation. Despite some astonishingly powerful computational results, there are still fundamental limitations — not just on what can be achieved with differential privacy but on what can be achieved with any method that protects against a complete breakdown in privacy. Virtually all the algorithms discussed herein maintain differential privacy against adversaries of arbitrary computational power. Certain algorithms are computationally intensive, others are efficient. Computational complexity for the adversary and the algorithm are both discussed.We then turn from fundamentals to applications other than queryrelease, discussing differentially private methods for mechanism design and machine learning. The vast majority of the literature on differentially private algorithms considers a single, static, database that is subject to many analyses. Differential privacy in other models, including distributed databases and computations on data streams is discussed.Finally, we note that this work is meant as a thorough introduction to the problems and techniques of differential privacy, but is not intended to be an exhaustive survey — there is by now a vast amount of work in differential privacy, and we can cover only a small portion of it.",
"We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations (on the training data; for previously unseen examples, the approach will be guaranteed to detect all adversarial examples, though it may flag some non-adversarial examples as well). The basic idea of the approach is to consider a convex outer approximation of the set of activations reachable through a norm-bounded perturbation, and we develop a robust optimization procedure that minimizes the worst case loss over this outer region (via a linear program). Crucially, we show that the dual problem to this linear program can be represented itself as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss. The end result is that by executing a few more forward and backward passes through a slightly modified version of the original network (though possibly with much larger batch sizes), we can learn a classifier that is provably robust to any norm-bounded adversarial attack. We illustrate the approach on a toy 2D robust classification task, and on a simple convolutional architecture applied to MNIST, where we produce a classifier that provably has less than 8.4 test error for any adversarial attack with bounded @math norm less than @math . This represents the largest verified network that we are aware of, and we discuss future challenges in scaling the approach to much larger domains."
],
"cite_N": [
"@cite_26",
"@cite_10",
"@cite_1",
"@cite_20",
"@cite_13",
"@cite_25",
"@cite_12"
],
"mid": [
"2883108656",
"2891262038",
"2963540169",
"2964294232",
"2767075075",
"2027595342",
"2766462876"
]
} | Recovery Guarantees for Compressible Signals with Adversarial Noise | Signal measurements are often corrupted due to measurement errors and can even be corrupted due to adversarial noise injection. Supposing some structure on the measurement mechanism, is it possible for us to retrieve the original signal from a corrupted measurement? Indeed, it is generally possible to do so using the theory of Compressive Sensing [3] if certain constraints on the measurement mechanism and the signal hold. In order to make the question more concrete, let us consider the class of machine learning problems where the inputs are compressible (i.e., approximately sparse) in some domain. For instance, images and audio signals are known to be compressible in their frequency domain and machine learning algorithms have been shown to perform exceedingly well on classification tasks that take such signals as input [12,23]. However, it was found in [25] that neural networks can be easily forced into making incorrect predictions with high-confidence by adding adversarial perturbations to their inputs; see also [24,9,19,4].
Further, the adversarial perturbations that led to incorrect predictions were shown to be very small (in either 0 -norm or 2 -norm) and often imperceptible to human beings. For this class of machine learning tasks, we show that it is possible to recover original inputs from adversarial inputs and defend the neural network.
In this paper, we first provide recovery guarantees for compressible signals that have been corrupted by noise bounded in either 0 -norm or 2 -norm. Then we extend the framework introduced in [1] to defend neural networks against 0 -norm and 2 -norm attacks. In the case of 0 -norm attacks on neural networks, the adversary can perturb a bounded number of elements in the input but has no restriction on how much each element is perturbed in absolute value. In the case of 2 -norm attacks, the adversary can perturb as many elements as they choose as long as the 2 -norm of the perturbation vector is bounded. Our recovery guarantees cover both cases and provide a partial theoretical explanation for the robustness of the defense framework against adversarial inputs. Our contributions can be summarized as follows:
1. We provide recovery guarantees for IHT and BP when the noise budget is bounded in 0 -norm.
2. We provide recovery guarantees for BP when the noise budget is bounded in the 2 -norm. 3. We extend the framework introduced in [1] to defend neural networks against 0 -norm bounded and 2 -norm bounded attacks.
The paper is organized as follows. We present the defense framework introduced in [1], which we call Compressive Recovery Defense (CRD), in Section 3.1. We present our main theoretical results (i.e. the recovery guarantees) in Section 3.2 and compare these results to related work in Section 3.3. We establish the Restricted Isometry Property (RIP) in Section 4 provide the proofs of our main results in Sections 5 and 6. We show that CRD can be used to defend against 0 -norm and 2 -norm bounded attacks in Section 7 and conclude the paper in Section 8.
Notation
Let x be a vector in C N and let S ⊆ {1, . . . , N } with S = {1, . . . , N } \ S. The support of x, denoted by supp(x), is set of indices of the non-zero entries of x, that is, supp(x) = {i ∈ {1, . . . , N } : x i = 0}. The 0 -norm of x, denoted x 0 , is defined to be the number of non-zero entries of x, i.e. x 0 = card(supp(x)). We say that x is k-sparse if x 0 ≤ k. We denote by x S either the sub-vector in C S consisting of the entries indexed by S or the vector in C N that is formed by starting with x and setting the entries with indices outside of S to zero. For example, if x = [4, 5, −9, 1] and S = {1, 3}, then x S is either [4, −9] or [4, 0, −9, 0]. In the latter case, note x S = x − x S . It will always be clear from the context which meaning is intended. If A ∈ C m×N is a matrix, we denote by A S the column sub-matrix of A consisting of the columns indexed by S.
We use x h(k) to denote a k-sparse vector in C N consisting of the k largest (in absolute value) entries of x with all other entries zero. For example, if x = [4, 5, −9, 1] then x h(2) = [0, 5, −9, 0]. Note that x h(k) may not be uniquely defined. In contexts where a unique meaning for x h(k) is needed, we can choose x h(k) out of all possible candidates according to a predefined rule (such as the lexicographic order). We also define
$x_{t(k)} = x - x_{h(k)}$. Let $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \in \mathbb{C}^{2n}$ with $x_1, x_2 \in \mathbb{C}^n$; then $x$ is called $(k,t)$-sparse if $x_1$ is $k$-sparse and $x_2$ is $t$-sparse. We define
$$x_{h(k,t)} = \begin{bmatrix} (x_1)_{h(k)} \\ (x_2)_{h(t)} \end{bmatrix},$$
which is a $(k,t)$-sparse vector in $\mathbb{C}^{2n}$. Again, $x_{h(k,t)}$ may not be uniquely defined, but when a unique meaning for $x_{h(k,t)}$ is needed (such as in Algorithm 1), we can choose $x_{h(k,t)}$ out of all possible candidates according to a predefined rule.
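As a concrete illustration, a minimal NumPy sketch of these thresholding operators (the helper names h and h_kt are ours and are used only in the illustrative snippets below; they are not from [1] or from the paper's released code):

import numpy as np

def h(x, k):
    # Keep the k largest-magnitude entries of x and zero out the rest.
    out = np.zeros_like(x)
    if k > 0:
        idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest |x_i|
        out[idx] = x[idx]
    return out

def h_kt(x, k, t):
    # (k, t)-sparse projection of x = [x1; x2] with x1, x2 of length n.
    n = x.shape[0] // 2
    return np.concatenate([h(x[:n], k), h(x[n:], t)])

print(h(np.array([4.0, 5.0, -9.0, 1.0]), 2))   # [ 0.  5. -9.  0.], matching the example above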
Main Results
In this section we outline the problem and the framework introduced in [1], state our main theorems, and compare our results to related work.
Compressive Recovery Defense (CRD)
Consider an image classification problem in which a machine learning classifier takes an image re-constructed from its largest Fourier co-efficients as input and outputs a classification decision. Let $x \in \mathbb{C}^n$ be the image vector (we can assume the image is of size $\sqrt{n} \times \sqrt{n}$ for instance). Then, letting $F \in \mathbb{C}^{n \times n}$ be the unitary Discrete Fourier Transform (DFT) matrix, we get the Fourier coefficients of $x$ as $\hat{x} = Fx$.
It is well known that natural images are approximately sparse in the frequency domain and therefore we can assume that $\hat{x}$ is $k$-sparse, that is, $\|\hat{x}\|_0 \le k$. In our example of the image classification problem, this means that our machine learning classifier can accept as input the image reconstructed from $\hat{x}_{h(k)}$, and still output the correct decision. That is, the machine learning classifier can accept $F^*\hat{x}_{h(k)}$ as input and still output the correct decision. Now, suppose an adversary corrupts the original image and we observe $y = x + e$. Noting that $y$ can also be written as $y = F^*\hat{x} + e$, we are interested in recovering an approximation $\hat{x}^{\#}$ to $\hat{x}_{h(k)}$ upon observing $y$, such that when we feed $F^*\hat{x}^{\#}$ as input to the classifier, it can still output the correct classification decision.
More generally, this basic framework can be used for adversarial inputs $u = v + d$ in any input domain, as long as there exists a matrix $A$ such that $u = A\hat{v} + d$, where $\hat{v}$ is approximately sparse and $\|d\|_p \le \eta$ for some $p, \eta \ge 0$. If we can recover an approximation $\hat{v}^{\#}$ to $\hat{v}$ with bounds on the recovery error, then we can use $\hat{v}^{\#}$ to reconstruct an approximation $A\hat{v}^{\#}$ to $v$ with controlled error.
This general framework was proposed by [1]. Moving forward, we refer to this general framework as Compressive Recovery Defense (CRD) and utilize it to defend neural networks against 0 and 2 -norm attacks. As observed in [1], x [0] in Algorithm 1, can be initialized randomly to defend against a reverse-engineering attack. In the case of Algorithm 2, the minimization problem can be posed as a Second Order Cone Programming (SOCP) problem and it appears non-trivial to create a reverse engineering attack that will retain the adversarial noise through the recovery and reconstruction process.
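To make the pipeline concrete, the following sketch (our own; the recover and classifier arguments are placeholders, with recover standing in for either Algorithm 1 or Algorithm 2 defined later) shows how a recovered coefficient vector is turned back into a cleaned input before classification:

import numpy as np

def crd_defend(y, F, recover, classifier):
    # y: possibly adversarial input; F: unitary sparsifying transform (e.g. the DFT or DCT matrix).
    # recover: routine returning an approximation to the largest transform coefficients of y.
    x_hat = recover(y, F)                # approximate coefficient vector, e.g. \hat{x}^{#}
    x_clean = F.conj().T @ x_hat         # reconstruct the input as F^* \hat{x}^{#}
    return classifier(np.real(x_clean))  # classify the cleaned input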
Results
Our main results are stated below. Theorem 1 and Theorem 2 provide bounds on the recovery error with Algorithm 1 and Algorithm 2 respectively when the noise is bounded in 0 -norm. Theorem 3 covers the case when the noise is bounded in the 2 -norm. We start with providing bounds on the approximation error using IHT when the noise is bounded in 0 -norm.
Theorem 1. Let $A = [F \; I] \in \mathbb{C}^{n \times 2n}$, where $F \in \mathbb{C}^{n \times n}$ is a unitary matrix with $|F_{ij}|^2 \le \frac{c}{n}$ and $I \in \mathbb{C}^{n \times n}$ is the identity matrix. Let $y = F\hat{x} + e$, where $\hat{x}, e \in \mathbb{C}^n$ and $e$ is $t$-sparse. Let $1 \le k \le n$ be an integer and define
$$\rho = \sqrt{\frac{27ckt}{n}}, \qquad \tau(1-\rho) = \sqrt{3}\,\sqrt{1 + 2\sqrt{\frac{ckt}{n}}}.$$
Then for any solution $x^{[T]} = \mathrm{IHT}(y, A, k, t, T)$ of Algorithm 1 we have the error bound
$$\|\hat{x}^{[T]} - \hat{x}_{h(k)}\|_2 \le \rho^T \sqrt{\|\hat{x}_{h(k)}\|_2^2 + \|e\|_2^2} + \tau\|\hat{x}_{t(k)}\|_2, \qquad (1)$$
where we write $x^{[T]} = \begin{bmatrix} \hat{x}^{[T]} \\ e^{[T]} \end{bmatrix}$ with $\hat{x}^{[T]}, e^{[T]} \in \mathbb{C}^n$. Moreover, if $0 < \rho < 1$, then for any $0 < \epsilon < 1$ and any
$$T \ge \frac{\log(1/\epsilon) + \log\!\left(\sqrt{\|\hat{x}_{h(k)}\|_2^2 + \|e\|_2^2}\right)}{\log(1/\rho)} + 1$$
we have
$$\|\hat{x}^{[T]} - \hat{x}_{h(k)}\|_\infty \le \sqrt{\frac{2ct}{n}}\left(\tau\|\hat{x}_{t(k)}\|_2 + \epsilon\right) \qquad (2)$$
$$\|\hat{x}^{[T]} - \hat{x}_{h(k)}\|_2 \le \sqrt{\frac{4ckt}{n}}\left(\tau\|\hat{x}_{t(k)}\|_2 + \epsilon\right) \qquad (3)$$
The result above applies to unitary transformations such as the Fourier Transform, Cosine Transform, Sine Transform, Hadamard Transform, and other wavelet transforms. Since the constant in the above bound can be made arbitrarily small, the recovery error in equations (2) and (3) depends primarily on x t(k) 2 which is small for sparse signals.
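The hypothesis $|F_{ij}|^2 \le c/n$ is easy to check numerically for a given transform; for the unitary DFT one can take $c = 1$, and for the orthonormal DCT-II one can take $c = 2$. A small check (our own illustration, using SciPy to build the transform matrices):

import numpy as np
from scipy.fft import dct

n = 64
F_dft = np.fft.fft(np.eye(n)) / np.sqrt(n)      # unitary DFT matrix
F_dct = dct(np.eye(n), norm='ortho', axis=0)    # orthonormal DCT-II matrix

print(np.max(np.abs(F_dft))**2 * n)   # = 1, so c = 1 works for the DFT
print(np.max(np.abs(F_dct))**2 * n)   # = 2, so c = 2 works for the DCT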
Next, we consider the recovery error when using BP instead of IHT. Providing bounds BP is useful as there are cases 1 when (i) BP provides recovery guarantees against a larger 0 noise budget than IHT and (ii) BP leads to a better reconstruction than IHT.
Theorem 2. Let $A = [F \; I] \in \mathbb{C}^{n \times 2n}$, where $F \in \mathbb{C}^{n \times n}$ is a unitary matrix with $|F_{ij}|^2 \le \frac{c}{n}$ and $I \in \mathbb{C}^{n \times n}$ is the identity matrix. Let $y = F\hat{x} + e$, and let $1 \le k, t \le n$ be positive integers. Define
$$\delta_{k,t} = \sqrt{\frac{ckt}{n}}, \quad \beta = \sqrt{\frac{\max\{k,t\}\,c}{n}}, \quad \theta = \frac{\sqrt{k+t}}{1-\delta_{k,t}}\,\beta, \quad \tau = \frac{\sqrt{1+\delta_{k,t}}}{1-\delta_{k,t}}.$$
If $0 < \delta_{k,t} < 1$ and $0 < \theta < 1$, then for a solution $x^{\#} = \mathrm{BP}(y, A, \|\hat{x}_{t(k)}\|_2)$ of Algorithm 2, we have the error bound
$$\|\hat{x}^{\#} - \hat{x}_{h(k)}\|_2 \le \left(\frac{2\tau\sqrt{k+t}}{1-\theta}\left(1 + \frac{\beta}{1-\delta_{k,t}}\right) + 2\tau\right)\|\hat{x}_{t(k)}\|_2, \qquad (4)$$
where we write $x^{\#} = \begin{bmatrix} \hat{x}^{\#} \\ e^{\#} \end{bmatrix}$ with $\hat{x}^{\#}, e^{\#} \in \mathbb{C}^n$.
Our final result covers the case when the noise is bounded in 2 -norm. Note that the result covers all unitary matrices and removes the restriction on the magnitude of their elements. We will utilize this result in defending against 2 -norm attacks.
Theorem 3. Let $F \in \mathbb{C}^{n \times n}$ be a unitary matrix and let $y = F\hat{x} + e$, where $\hat{x} \in \mathbb{C}^n$ is $k$-sparse and $e \in \mathbb{C}^n$. If $\|e\|_2 \le \eta$, then for a solution $x^{\#} = \mathrm{BP}(y, F, \eta)$ of Algorithm 2, we have the error bounds
$$\|x^{\#} - \hat{x}\|_1 \le 4\sqrt{k}\,\eta \qquad (5)$$
$$\|x^{\#} - \hat{x}\|_2 \le 6\eta \qquad (6)$$
1 As shown in Section 7.1.1 and Section 7.2.2
Restricted Isometry Property
All of our recovery guarantees are based on the following theorem which establishes a restricted isometry property for certain structured matrices. First, we give some definitions.
Definition 4. Let $M \subseteq \mathbb{C}^N$ and let $\delta \ge 0$. A matrix $A \in \mathbb{C}^{m \times N}$ is said to satisfy the $M$-RIP with constant $\delta$ if
$$(1-\delta)\|x\|_2^2 \le \|Ax\|_2^2 \le (1+\delta)\|x\|_2^2$$
for all $x \in M$.
Definition 5. We define M k to be the set of all k-sparse vectors in C N and define S k to be the collection of subsets of {1, . . . , N } of cardinality less than or equal to k. Note that S k is the collection of supports of vectors in M k . Similarly, we define M k,t to be the set of (k, t)-sparse vectors in C 2n . In other words, M k,t is the following subset of C 2n :
$$M_{k,t} = \left\{ x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \in \mathbb{C}^{2n} : x_1, x_2 \in \mathbb{C}^n, \ \|x_1\|_0 \le k, \ \|x_2\|_0 \le t \right\}$$
We define S k,t to be the following collection of subsets of {1, . . . , 2n}:
S k,t = {S 1 ∪ S 2 : S 1 ⊆ {1, . . . , n} , S 2 ⊆ {n + 1, . . . , 2n} , card(S 1 ) ≤ k, card(S 2 ) ≤ t}
Note that S k,t is the collection of supports of vectors in M k,t .
Theorem 6. Let $A = [F \; I] \in \mathbb{C}^{n \times 2n}$, where $F \in \mathbb{C}^{n \times n}$ is a unitary matrix with $|F_{ij}|^2 \le \frac{c}{n}$ and $I \in \mathbb{C}^{n \times n}$ is the identity matrix. Then
$$\left(1 - \sqrt{\frac{ckt}{n}}\right)\|x\|_2^2 \le \|Ax\|_2^2 \le \left(1 + \sqrt{\frac{ckt}{n}}\right)\|x\|_2^2 \qquad (7)$$
for all $x \in M_{k,t}$. In other words, $A$ satisfies the $M_{k,t}$-RIP property with constant $\sqrt{ckt/n}$.
Proof. In this proof, if $B$ denotes a matrix in $\mathbb{C}^{n \times n}$, then $\lambda_1(B), \dots, \lambda_n(B)$ denote the eigenvalues of $B$ ordered so that $|\lambda_1(B)| \le \cdots \le |\lambda_n(B)|$. It suffices to fix an $S = S_1 \cup S_2 \in S_{k,t}$ and prove (7) for all non-zero $x \in \mathbb{C}^S$. Since $A_S^* A_S$ is normal, there is an orthonormal basis of eigenvectors $u_1, \dots, u_n$ for $A_S^* A_S$, where $u_i$ corresponds to the eigenvalue $\lambda_i(A_S^* A_S)$. For any non-zero $x \in \mathbb{C}^S$, we have $x = \sum_{i=1}^{n} c_i u_i$ for some $c_i \in \mathbb{C}$, so
$$\frac{\|Ax\|_2^2}{\|x\|_2^2} = \frac{\langle A_S^* A_S x, x\rangle}{\langle x, x\rangle} = \frac{\sum_{i=1}^{n} \lambda_i(A_S^* A_S)\,|c_i|^2}{\sum_{i=1}^{n} |c_i|^2}. \qquad (8)$$
Thus it will suffice to prove that $|\lambda_i(A_S^* A_S) - 1| \le \sqrt{ckt/n}$ for all $i$. Moreover,
$$|\lambda_i(A_S^* A_S) - 1| = |\lambda_i(A_S^* A_S - I)| = \sqrt{\lambda_i\big((A_S^* A_S - I)^*(A_S^* A_S - I)\big)}, \qquad (9)$$
where the last equality holds because A * S A S − I is normal. By combining (8) and (9), we see that (7) will hold upon showing that the eigenvalues of (A * S A S − I) * (A * S A S − I) are bounded by ckt/n. So far we have not used the structure of A, but now we must.
Observe that (A * S A S − I) * (A * S A S − I)
is a block diagonal matrix with two diagonal blocks of the form X * X and XX * . Therefore the three matrices (A * S A S − I) * (A * S A S − I), X * X, and XX * have the same non-zero eigenvalues. Moreover, X is simply the matrix F S 1 with those rows not indexed by S 2 deleted. The hypotheses on F imply that the entries of X * X satisfy |(X * X) ij | ≤ ct n . So the Gershgorin disc theorem implies that each eigenvalue λ of X * X and (hence) of (A * S A S − I) * (A * S A S − I) satisfies |λ| ≤ ckt n .
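As a quick numerical sanity check of Theorem 6 (our own illustration, not part of the paper), the eigenvalues of $A_S^* A_S$ over random supports $S \in S_{k,t}$ should stay within $1 \pm \sqrt{ckt/n}$ when $A = [F \; I]$ and $F$ is the unitary DFT:

import numpy as np

n, k, t, c = 128, 4, 3, 1
F = np.fft.fft(np.eye(n)) / np.sqrt(n)     # unitary DFT, |F_ij|^2 = 1/n
A = np.hstack([F, np.eye(n)])
rng = np.random.default_rng(0)

worst = 0.0
for _ in range(200):
    S1 = rng.choice(n, size=k, replace=False)        # support in the F block
    S2 = n + rng.choice(n, size=t, replace=False)    # support in the I block
    A_S = A[:, np.concatenate([S1, S2])]
    eig = np.linalg.eigvalsh(A_S.conj().T @ A_S)
    worst = max(worst, np.max(np.abs(eig - 1.0)))

print(worst, np.sqrt(c * k * t / n))   # observed deviation vs. the bound sqrt(ckt/n)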
Algorithm 1 (k, t)-Iterative Hard Thresholding
Input: The observed vector $y \in \mathbb{C}^n$, the measurement matrix $A \in \mathbb{C}^{n \times 2n}$, and positive integers $k, t, T \in \mathbb{Z}^+$
Output: $x^{[T]} \in M_{k,t}$
1: procedure IHT($y, A, k, t, T$)
2:   $x^{[0]} \leftarrow 0$
3:   for $i \in [0, \dots, T]$ do
4:     $x^{[i+1]} \leftarrow \big(x^{[i]} + A^*(y - Ax^{[i]})\big)_{h(k,t)}$
5:   return $x^{[T]}$
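A direct NumPy transcription of Algorithm 1 (our reading of the pseudocode, not the authors' released implementation; the helper h implements the hard-thresholding operator from the Notation section):

import numpy as np

def h(x, k):
    # Keep the k largest-magnitude entries of x, zero out the rest.
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def iht(y, A, k, t, T):
    # (k, t)-Iterative Hard Thresholding for A = [F I] in C^{n x 2n}.
    n = A.shape[0]
    x = np.zeros(A.shape[1], dtype=complex)              # x^{[0]} = 0
    for _ in range(T):
        g = x + A.conj().T @ (y - A @ x)                  # gradient step
        x = np.concatenate([h(g[:n], k), h(g[n:], t)])    # (k, t)-sparse projection h_{(k,t)}
    return x                                              # approximates x^{[T]}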
Iterative Hard Thresholding
Now we utilize the result of Theorem 6 to prove recovery guarantees for the following Iterative Hard Thresholding algorithm.
Theorem 7. Let $A \in \mathbb{C}^{n \times 2n}$ be a matrix. Let $1 \le k, t \le n$ be positive integers and suppose $\delta_3$ is an $M_{3k,3t}$-RIP constant for $A$ and that $\delta_2$ is an $M_{2k,2t}$-RIP constant for $A$. Let $x \in \mathbb{C}^{2n}$, $r \in \mathbb{C}^n$, $y = Ax + r$, and $S \in S_{k,t}$. Letting $x^{[T]} = \mathrm{IHT}(y, A, k, t, T)$, we have the approximation error bound
$$\|x^{[T]} - x_S\|_2 \le \rho^T\|x^{[0]} - x_S\|_2 + \tau\|Ax_{\bar S} + r\|_2,$$
where $\rho = \sqrt{3}\,\delta_3$ and $(1-\rho)\tau = \sqrt{3}\sqrt{1+\delta_2}$. In particular, if $\delta_3 < 1/\sqrt{3}$, then $(1-\rho)\tau \le \sqrt{3}\sqrt{1+\delta_3} < 2.18$ and $\rho < 1$; the latter implies that the first term on the right goes to zero as $T$ goes to $\infty$.
Theorem 7 is a modification of Theorem 6.18 of [8]. More specifically, Theorem 6.18 of [8] considers M 3k , M 2k , and S k in place of M 3k,3t and M 2k,2t and S k,t and any dimension N in place of 2n. The proofs are very similar, so we omit the proof of Theorem 7.
Proof of Theorem 1. Theorem 6 implies that the statement of Theorem 7 holds with $\delta_3 = \sqrt{\frac{c \cdot 3k \cdot 3t}{n}}$ and $\delta_2 = \sqrt{\frac{c \cdot 2k \cdot 2t}{n}}$.
Noting that $y = A\begin{bmatrix}\hat{x}_{h(k)} \\ e\end{bmatrix} + F\hat{x}_{t(k)}$, where $\begin{bmatrix}\hat{x}_{h(k)} \\ e\end{bmatrix} \in M_{k,t}$, set $x^{[T]} = \mathrm{IHT}(y, A, k, t, T)$ and apply Theorem 7 with $x = \begin{bmatrix}\hat{x}_{h(k)} \\ e\end{bmatrix}$, $r = F\hat{x}_{t(k)}$, and $S = \mathrm{supp}(x)$. Letting $x^{[T]} = \begin{bmatrix}\hat{x}^{[T]} \\ e^{[T]}\end{bmatrix}$, use the facts that $\|\hat{x}^{[T]} - \hat{x}_{h(k)}\|_2 \le \|x^{[T]} - x_S\|_2$ and $\|F\hat{x}_{t(k)}\|_2 = \|\hat{x}_{t(k)}\|_2$. That gives (1). Now let
$$T - 1 = \frac{\log(1/\epsilon) + \log\!\left(\sqrt{\|\hat{x}_{h(k)}\|_2^2 + \|e\|_2^2}\right)}{\log(1/\rho)},$$
which gives $\rho^{T-1}\sqrt{\|\hat{x}_{h(k)}\|_2^2 + \|e\|_2^2} \le \epsilon$. Noting that $\|e^{[T-1]} - e\|_2 \le \tau\|\hat{x}_{t(k)}\|_2 + \epsilon$, we can use the same reasoning as used in [1]. We first define $z := F^*(y - e^{[T-1]})$, which means $\hat{x}^{[T]} = z_{h(k)}$, and since $F\hat{x} + e = Fz + e^{[T-1]}$, we have $\hat{x} - z = F^*(e^{[T-1]} - e)$.
Since the support of $e^{[T-1]} - e$ has cardinality at most $2t$ and since $|F_{ij}|^2 \le \frac{c}{n}$, we can use the fact that for a $2t$-sparse vector $v$, $\|v\|_1 \le \sqrt{2t}\,\|v\|_2$, to get, for any $i \in [n]$,
$$\big|(F^*(e^{[T-1]} - e))_i\big| \le \sum_{j=1}^{n} |F^*_{ij}|\,\big|(e^{[T-1]} - e)_j\big| \le \sqrt{\frac{2ct}{n}}\,\|e^{[T-1]} - e\|_2 \le \sqrt{\frac{2ct}{n}}\left(\tau\|\hat{x}_{t(k)}\|_2 + \epsilon\right).$$
Therefore $\|\hat{x} - z\|_\infty \le \sqrt{\frac{2ct}{n}}\left(\tau\|\hat{x}_{t(k)}\|_2 + \epsilon\right)$ and consequently $\|\hat{x}_{h(k)} - z_{h(k)}\|_\infty \le \sqrt{\frac{2ct}{n}}\left(\tau\|\hat{x}_{t(k)}\|_2 + \epsilon\right)$, which is (2). We get (3) by noting that $\hat{x}_{h(k)} - z_{h(k)}$ is $2k$-sparse and therefore
$$\|\hat{x}_{h(k)} - z_{h(k)}\|_2 \le \sqrt{2k}\,\|\hat{x}_{h(k)} - z_{h(k)}\|_\infty \le \sqrt{\frac{4ckt}{n}}\left(\tau\|\hat{x}_{t(k)}\|_2 + \epsilon\right).$$
Basis Pursuit
Next we introduce the Basis Pursuit algorithm and prove its recovery guarantees for 0 -norm and 2 -norm noise.
Algorithm 2 Basis Pursuit
Input: The observed vector $y \in \mathbb{C}^n$, where $y = Ax + e$, the measurement matrix $A \in \mathbb{C}^{n \times N}$, and the norm of the error vector $\eta$ such that $\|e\|_2 \le \eta$
Output: $x^{\#} \in \mathbb{C}^N$
1: procedure BP($y, A, \eta$)
2:   $x^{\#} \leftarrow \arg\min_{z \in \mathbb{C}^N} \|z\|_1$ subject to $\|Az - y\|_2 \le \eta$
3: return $x^{\#}$
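In practice the minimization in Algorithm 2 can be solved as a second-order cone program; the experiments below use CVXPY for this. A minimal sketch with the standard CVXPY interface (our own, not the paper's code):

import numpy as np
import cvxpy as cp

def bp(y, A, eta):
    # Basis Pursuit: minimize ||z||_1 subject to ||A z - y||_2 <= eta.
    z = cp.Variable(A.shape[1], complex=True)   # complex=False suffices for real transforms such as the DCT
    problem = cp.Problem(cp.Minimize(cp.norm1(z)),
                         [cp.norm(A @ z - y, 2) <= eta])
    problem.solve()
    return z.value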
We begin by stating some definitions that will be required in the proofs of the main theorems.
Definition 8. The matrix $A \in \mathbb{C}^{m \times N}$ satisfies the robust null space property with constants $0 < \rho < 1$, $\tau > 0$ and norm $\|\cdot\|$ if for every set $S \subseteq [N]$ with $\mathrm{card}(S) \le s$ and for every $v \in \mathbb{C}^N$ we have $\|v_S\|_1 \le \rho\|v_{\bar S}\|_1 + \tau\|Av\|$. Definition 9. The matrix $A \in \mathbb{C}^{m \times N}$ satisfies the $\ell_q$ robust null space property of order $s$ with constants $0 < \rho < 1$, $\tau > 0$ and norm $\|\cdot\|$ if for every set $S \subseteq [N]$ with $\mathrm{card}(S) \le s$ and for every $v \in \mathbb{C}^N$ we have $\|v_S\|_q \le \frac{\rho}{s^{1-1/q}}\|v_{\bar S}\|_1 + \tau\|Av\|$. Note that if $q = 1$ then this is simply the robust null space property.
The proof of Theorem 2 requires the following theorem (whose full proof is given in the cited work).
Theorem 10 (Theorem 4.33 in [8]). Let $a_1, \dots, a_N$ be the columns of $A \in \mathbb{C}^{m \times N}$, let $x \in \mathbb{C}^N$ with its $s$ largest absolute entries supported on $S$, and let $y = Ax + e$ with $\|e\|_2 \le \eta$. For $\delta, \beta, \gamma, \theta, \tau \ge 0$ with $\delta < 1$, assume that
$$\|A_S^* A_S - I\|_{2 \to 2} \le \delta, \qquad \max_{l \in \bar S}\|A_S^* a_l\|_2 \le \beta,$$
and that there exists a vector $u = A^*h \in \mathbb{C}^N$ with $h \in \mathbb{C}^m$ such that
$$\|u_S - \mathrm{sgn}(x_S)\|_2 \le \gamma, \qquad \|u_{\bar S}\|_\infty \le \theta, \qquad \|h\|_2 \le \tau\sqrt{s}.$$
If $\rho := \theta + \frac{\beta\gamma}{1-\delta} < 1$, then a minimizer $x^{\#}$ of $\|z\|_1$ subject to $\|Az - y\|_2 \le \eta$ satisfies
$$\|x^{\#} - x\|_2 \le \frac{2}{1-\rho}\left(1 + \frac{\beta}{1-\delta}\right)\|x_{\bar S}\|_1 + \left(\frac{2(\mu\gamma + \tau\sqrt{s})}{1-\rho}\left(1 + \frac{\beta}{1-\delta}\right) + 2\mu\right)\eta,$$
where $\mu := \frac{\sqrt{1+\delta}}{1-\delta}$ and $\mathrm{sgn}(x)_i = 0$ if $x_i = 0$, $1$ if $x_i > 0$, and $-1$ if $x_i < 0$.
We will need another Lemma before proving Theorem 2.
Lemma 11. Let $A \in \mathbb{C}^{n \times 2n}$. If $\|Ax\|_2^2 \le (1+\delta)\|x\|_2^2$ for all $x \in M_{k,t}$, then $\|A_S^* A_S - I\|_{2 \to 2} \le \delta$ for any $S \in S_{k,t}$.
Proof. Let $S \in S_{k,t}$ be given. Then for any $x \in \mathbb{C}^S$, we have
$$\|A_S x\|_2^2 - \|x\|_2^2 \le \delta\|x\|_2^2.$$
We can rewrite the left-hand side as
$$\|A_S x\|_2^2 - \|x\|_2^2 = \langle A_S x, A_S x\rangle - \langle x, x\rangle = \langle (A_S^* A_S - I)x, x\rangle.$$
Noting that $A_S^* A_S - I$ is Hermitian, we have
$$\|A_S^* A_S - I\|_{2 \to 2} = \max_{x \in \mathbb{C}^S \setminus \{0\}} \frac{\langle (A_S^* A_S - I)x, x\rangle}{\|x\|_2^2} \le \delta.$$
Proof of Theorem 2. We will derive equation (4) by showing that the matrix A satisfies all the hypotheses in Theorem 10 for every vector in M k,t .
First note that by Theorem 6, $A$ satisfies the $M_{k,t}$-RIP property with constant $\delta_{k,t} := \sqrt{ckt/n}$. Therefore, by Lemma 11, for any $S \in S_{k,t}$ we have $\|A_S^* A_S - I\|_{2 \to 2} \le \delta_{k,t}$. Since $A_S^* A_S$ is a positive semi-definite matrix, it has only non-negative eigenvalues, and they lie in the range $[1-\delta_{k,t}, 1+\delta_{k,t}]$. Since $\delta_{k,t} < 1$ by assumption, $A_S^* A_S$ is injective. Thus, we can set $h = A_S(A_S^* A_S)^{-1}\mathrm{sgn}(x_S)$ and get
$$\|h\|_2 = \|A_S(A_S^* A_S)^{-1}\mathrm{sgn}(x_S)\|_2 \le \|A_S\|_{2 \to 2}\,\|(A_S^* A_S)^{-1}\|_{2 \to 2}\,\|\mathrm{sgn}(x_S)\|_2 \le \tau\sqrt{k+t},$$
where $\tau = \frac{\sqrt{1+\delta_{k,t}}}{1-\delta_{k,t}}$ and we have used the following facts: since $\|A_S^* A_S - I\|_{2 \to 2} \le \delta_{k,t} < 1$, we get that $\|(A_S^* A_S)^{-1}\|_{2 \to 2} \le \frac{1}{1-\delta_{k,t}}$ and that the largest singular value of $A_S$ is at most $\sqrt{1+\delta_{k,t}}$. Now let $u = A^*h$; then $\|u_S - \mathrm{sgn}(x_S)\|_2 = 0$. Next we need to bound $\|u_{\bar S}\|_\infty$. Denoting row $j$ of $A_{\bar S}^* A_S$ by the vector $v_j$, we see that it has at most $\max\{k,t\}$ non-zero entries and that $|(v_j)_l|^2 \le \frac{c}{n}$ for $l = 1, \dots, k+t$. Therefore, for any element $(u_{\bar S})_j$, we have
$$|(u_{\bar S})_j| = \left|\big\langle (A_S^* A_S)^{-1}\mathrm{sgn}(x_S), (v_j)^*\big\rangle\right| \le \|(A_S^* A_S)^{-1}\|_{2 \to 2}\,\|\mathrm{sgn}(x_S)\|_2\,\|v_j\|_2 \le \frac{\sqrt{k+t}}{1-\delta_{k,t}}\,\beta.$$
Hence, with $\theta = \frac{\sqrt{k+t}}{1-\delta_{k,t}}\beta$, we get $\|u_{\bar S}\|_\infty \le \theta < 1$, and we also observe that $\max_{l \in \bar S}\|A_S^* a_l\|_2 \le \beta$. Therefore, all the hypotheses of Theorem 10 have been satisfied. Note that
$$y = F\hat{x} + e = A\begin{bmatrix}\hat{x}_{h(k)} \\ e\end{bmatrix} + F\hat{x}_{t(k)}, \qquad \text{where } \begin{bmatrix}\hat{x}_{h(k)} \\ e\end{bmatrix} \in M_{k,t}.$$
Therefore, setting $x^{\#} = \mathrm{BP}(y, A, \|\hat{x}_{t(k)}\|_2)$, we use the fact $\|F\hat{x}_{t(k)}\|_2 = \|\hat{x}_{t(k)}\|_2$ combined with the bound in Theorem 10 to get (4):
$$\|\hat{x}^{\#} - \hat{x}_{h(k)}\|_2 \le \left(\frac{2\tau\sqrt{k+t}}{1-\theta}\left(1 + \frac{\beta}{1-\delta_{k,t}}\right) + 2\tau\right)\|\hat{x}_{t(k)}\|_2,$$
where we write $x^{\#} = \begin{bmatrix}\hat{x}^{\#} \\ e^{\#}\end{bmatrix}$ with $\hat{x}^{\#}, e^{\#} \in \mathbb{C}^n$.
We note that since Algorithm 2 is not adapted to the structure of the matrix $A$ in the statement of Theorem 2, one can expect the guarantees to be weaker. We now focus on proving Theorem 3. In order to do so, we will need to state some lemmas that will be used in the main proof.
Lemma 12. If a matrix $A \in \mathbb{C}^{m \times N}$ satisfies the $\ell_2$ robust null space property for $S \subset [N]$ with $\mathrm{card}(S) = s$ and constants $0 < \rho < 1$, $\tau > 0$, then it satisfies the $\ell_1$ robust null space property for $S$ with constants $0 < \rho < 1$ and $\tau' := \tau\sqrt{s} > 0$.
Proof. For any $v \in \mathbb{C}^N$, $\|v_S\|_2 \le \frac{\rho}{\sqrt{s}}\|v_{\bar S}\|_1 + \tau\|Av\|$. Then, using the fact that $\|v_S\|_1 \le \sqrt{s}\,\|v_S\|_2$, we get $\|v_S\|_1 \le \rho\|v_{\bar S}\|_1 + \tau\sqrt{s}\,\|Av\|$.
Lemma 13 (Theorem 4.20 in [8]). If a matrix $A \in \mathbb{C}^{m \times N}$ satisfies the $\ell_1$ robust null space property (with respect to $\|\cdot\|$) with constants $0 < \rho < 1$ and $\tau > 0$ for $S \subset [N]$, then
$$\|z - x\|_1 \le \frac{1+\rho}{1-\rho}\left(\|z\|_1 - \|x\|_1 + 2\|x_{\bar S}\|_1\right) + \frac{2\tau}{1-\rho}\|A(z - x)\| \quad \text{for all } z, x \in \mathbb{C}^N.$$
Lemma 14 (Proposition 2.3 in [8]). For any $p > q > 0$ and $x \in \mathbb{C}^n$,
$$\inf_{z \in M_k}\|x - z\|_p \le \frac{1}{k^{1/q - 1/p}}\,\|x\|_q.$$
Proof of Theorem 3. Let $0 < \rho < 1$ be arbitrary. Since $F$ is a unitary matrix, for any $S \subseteq [n]$ and $v \in \mathbb{C}^n$ we have
$$\|v_S\|_2 \le \frac{\rho}{\sqrt{k}}\|v_{\bar S}\|_1 + \tau\|v\|_2 = \frac{\rho}{\sqrt{k}}\|v_{\bar S}\|_1 + \tau\|Fv\|_2, \qquad (10)$$
where $\tau = 1$. Therefore, $F$ satisfies the $\ell_2$ robust null space property for all $S \subseteq [n]$ with $\mathrm{card}(S) \le k$. Next, using Lemma 12 we get $\|v_S\|_1 \le \rho\|v_{\bar S}\|_1 + \tau\sqrt{k}\,\|Fv\|_2$ for all $v \in \mathbb{C}^n$. Now let $x^{\#} = \mathrm{BP}(y, F, \eta)$; then we know $\|x^{\#}\|_1 \le \|\hat{x}\|_1$, where $\hat{x}$ is $k$-sparse. Then, letting $S \subseteq [n]$ be the support of $\hat{x}$ and using the fact that $\|\hat{x}_{\bar S}\|_1 = 0$ together with Lemma 13, we get
$$\|x^{\#} - \hat{x}\|_1 \le \frac{1+\rho}{1-\rho}\left(\|x^{\#}\|_1 - \|\hat{x}\|_1 + 2\|\hat{x}_{\bar S}\|_1\right) + \frac{2\tau\sqrt{k}}{1-\rho}\|F(x^{\#} - \hat{x})\|_2 \le \frac{2\tau\sqrt{k}}{1-\rho}\|F(x^{\#} - \hat{x})\|_2 \le \frac{4\tau\sqrt{k}}{1-\rho}\|e\|_2 \le \frac{4\tau\sqrt{k}}{1-\rho}\eta.$$
Letting $\rho \to 0$ and recalling that $\tau = 1$ gives (5). Now let $S$ be the support of the $k$ largest entries of $x^{\#} - \hat{x}$. Note that $\|(x^{\#} - \hat{x})_{\bar S}\|_2 = \inf_{z \in M_k}\|(x^{\#} - \hat{x}) - z\|_2$. Then, using Lemma 14 and (10), we see that
$$\|x^{\#} - \hat{x}\|_2 \le \|(x^{\#} - \hat{x})_{\bar S}\|_2 + \|(x^{\#} - \hat{x})_S\|_2 \le \frac{1}{\sqrt{k}}\|x^{\#} - \hat{x}\|_1 + \frac{\rho}{\sqrt{k}}\|(x^{\#} - \hat{x})_{\bar S}\|_1 + \tau\|F(x^{\#} - \hat{x})\|_2 \le \frac{1+\rho}{\sqrt{k}}\|x^{\#} - \hat{x}\|_1 + 2\tau\eta \le \left(\frac{4\tau(1+\rho)}{1-\rho} + 2\tau\right)\eta.$$
Recalling $\tau = 1$ and letting $\rho \to 0$ gives the desired result.
Table 2: Recovery performance of Algorithm 1 on $\ell_0$-norm bounded noise.
Experiments
We first analyze how our recovery guarantees perform in practice (Section 7.1) and then show that CRD can be used to defend neural networks against $\ell_0$-norm attacks (Section 7.2) as well as $\ell_2$-norm attacks (Section 7.3). All of our experiments are conducted on the CIFAR-10 [13], MNIST [14], and Fashion-MNIST [28] datasets, with pixel values of each image normalized to lie in [0, 1]. For every experiment, we use the Discrete Cosine Transform (DCT) and the Inverse Discrete Cosine Transform (IDCT), denoted by the matrices $F \in \mathbb{R}^{n \times n}$ and $F^T \in \mathbb{R}^{n \times n}$ respectively. That is, for an adversarial image $y \in \mathbb{R}^{\sqrt{n} \times \sqrt{n}}$ such that $y = x + e$, we let $\hat{x} = Fx$ and $x = F^T\hat{x}$, where $x, \hat{x} \in \mathbb{R}^n$ and $e \in \mathbb{R}^n$ is the noise vector (bounded either in $\ell_0$- or $\ell_2$-norm). For an adversarial image $y \in \mathbb{R}^{\sqrt{n} \times \sqrt{n} \times c}$ that contains $c$ channels, we perform recovery on each channel independently by considering $y_m = x_m + e_m$, where $\hat{x}_m = Fx_m$ and $x_m = F^T\hat{x}_m$ for $m = 1, \dots, c$. The value $k$ denotes the number of largest (in absolute value) DCT co-efficients used for reconstruction of each channel, and the value $t$ denotes the $\ell_0$ noise budget for each channel.
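A sketch of the per-channel recovery step just described, with an explicit orthonormal DCT matrix as $F$; the recover argument is a placeholder for Algorithm 1 or Algorithm 2 and its signature is our assumption, not the paper's released code:

import numpy as np
from scipy.fft import dct

def dct_matrix(n):
    return dct(np.eye(n), norm='ortho', axis=0)   # F such that F @ x = DCT(x)

def reconstruct_channels(y, k, recover):
    # y: corrupted image of shape (sqrt(n), sqrt(n), c); recover returns approximate DCT coefficients.
    hgt, wid, c = y.shape
    F = dct_matrix(hgt * wid)
    out = np.zeros_like(y, dtype=float)
    for m in range(c):
        coeffs = recover(y[:, :, m].ravel(), F, k)       # approximate the k largest DCT coefficients
        out[:, :, m] = (F.T @ coeffs).reshape(hgt, wid)  # x = F^T \hat{x}_{h(k)} since F is orthogonal
    return np.clip(out, 0.0, 1.0)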
We now outline the neural network architectures used for the experiments in Sections 7.2 and 7.3. For CIFAR-10, we use the network architecture of [10], while the architecture used for the MNIST and Fashion-MNIST datasets is provided in Table 1. We train our networks using the Adam optimizer for CIFAR-10 and the AdaDelta optimizer for MNIST and Fashion-MNIST. In both cases, we use a cross-entropy loss function. We implement the following training procedure: for every training image $x$, we first generate $\hat{x}_{h(k)} = (Fx)_{h(k)}$ and then reconstruct the image $\bar{x} = F^T\hat{x}_{h(k)}$. We then use both $x$ and $\bar{x}$ to train the network. For instance, for MNIST we get 60000 original training images and 60000 reconstructed training images, for a total of 120000 training images. The code to reproduce our experiments is available here: https://github.com/jasjeetIM/recovering_compressible_signals.
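A sketch of this training-set augmentation step for flattened single-channel images (our own; keep_topk mirrors the $h(k)$ operator):

import numpy as np
from scipy.fft import dct

def keep_topk(v, k):
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def augment_with_reconstructions(X, k):
    # X: array of flattened training images, shape (N, n).
    n = X.shape[1]
    F = dct(np.eye(n), norm='ortho', axis=0)                    # orthonormal DCT matrix
    recon = np.stack([F.T @ keep_topk(F @ x, k) for x in X])    # x_bar = F^T (F x)_{h(k)}
    return np.concatenate([X, recon], axis=0)                   # e.g. 60000 -> 120000 images for MNIST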
Recovery Error
Since recovery guarantees for Algorithm 1 and Algorithm 2 have been proved theoretically, our aim is to examine how close the recovery error is to the upper bound in practice. Each experiment is conducted on a subset of 500 data points sampled uniformly at random from the respective dataset. We first provide the experimental results for the case of 0 -norm bounded noise in Section 7.1.1 and then for the case of 2 -norm bounded noise in Section 7.1.2.
0 noise
For each data point x i ∈ R n , i = 1, 2, . . . , 500, we construct a noise vector e i ∈ R n as follows: we first sample an integer t i from a uniform distribution over the set {1, . . . , t}, where t is the allowed 0 noise budget. Next, we select an index set S t i ⊂ [n] uniformly at random, such that card(S t i ) = t i .
Table 5: Effectiveness of CRD against OPA. Orig. Acc.: 77.4%; OPA Acc.: 0.0%; Corr. Acc.: 68.3%. The Orig. Acc. column lists the accuracy of the network on original images, the OPA Acc. column shows the network's accuracy on adversarial images, and the Corr. Acc. column shows the accuracy of the network on images reconstructed using Algorithm 1.
Figure 2: Reconstruction quality of images using Algorithm 1. The first row shows the original images while the second row shows reconstructions from the largest 275 DCT co-efficients recovered using Algorithm 1.
Then for each $j \in S_{t_i}$, we set $(e_i)_j = c_j$, where $c_j$ is sampled from the uniform distribution on $[0,1)$, and $(e_i)_l = 0$ for $l \notin S_{t_i}$. We then set $y_i = x_i + e_i$ as the observed noisy vector. The first metric we report is
$$\delta_p := \frac{1}{500}\sum_{i=1}^{500}\left\|(\hat{x}_i^{\#})_{h(k)} - (\hat{x}_i)_{h(k)}\right\|_p,$$
where $\hat{x}_i^{\#}$ is the recovered vector for the noisy measurement $y_i$ (that is, $\hat{x}^{[T]}$ in the statement of Theorem 1 and $\hat{x}^{\#}$ in the statement of Theorem 2), $(\hat{x}_i)_{h(k)} = (Fx_i)_{h(k)}$, and the average is taken over the 500 points sampled from the dataset. This measures the average magnitude of the recovery error for the respective algorithm in $\ell_p$-norm. In order to relate this value to the upper bound on the recovery error, we also report
$$\Delta_p := \frac{1}{500}\sum_{i=1}^{500}\left(\Upsilon_i - \left\|(\hat{x}_i^{\#})_{h(k)} - (\hat{x}_i)_{h(k)}\right\|_p\right),$$
where $\Upsilon_i$ is the guaranteed upper bound (as per our Theorems 1 and 2) for $y_i$. Using $\delta_p$ and $\Delta_p$, we aim to capture how much smaller the recovery error is than the upper bound for these datasets. Finally, we also report $t_{\mathrm{avg}} := \frac{1}{500}\sum_{i=1}^{500} t_i$.
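A sketch of the $\ell_0$ noise construction and of the $\delta_p$ statistic (variable and function names are ours):

import numpy as np
rng = np.random.default_rng(0)

def l0_noise(n, t):
    # At most t coordinates are perturbed, with t_i drawn uniformly from {1, ..., t}.
    t_i = int(rng.integers(1, t + 1))
    e = np.zeros(n)
    support = rng.choice(n, size=t_i, replace=False)   # the index set S_{t_i}
    e[support] = rng.random(t_i)                       # entries uniform on [0, 1)
    return e

def delta_p(recovered_topk, true_topk, p):
    # Average l_p recovery error over the sample, as in the definition of delta_p above.
    return float(np.mean([np.linalg.norm(a - b, ord=p)
                          for a, b in zip(recovered_topk, true_topk)]))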
Recovery with Algorithm 1
We set k = 4 for MNIST and Fashion-MNIST and are allowed an 0 noise budget of t = 3. For CIFAR-10, we set k = 5 and are allowed a noise budget of t = 3. That is, the number k of largest co-efficients used in each experiment is roughly equal to the 0 noise budget used. We note that k values have been chosen to meet our computational constraints. As such, any other values that fit the hypotheses of Theorem 1 would work just as well. The results in Table 2 show that on average, the recovery error is well below the upper bound for each dataset. This is quantified by ∆ ∞ and ∆ 2 that show a large difference between the upper bound and the observed error for all three datasets. We will utilize this observation in Section 7.2 when we show that recovery works well even when t is outside the theoretical constraints of Theorem 1.
Recovery with Algorithm 2
We implement Algorithm 2 using the open source library CVXPY [6]. We set $k = 8$ for MNIST and Fashion-MNIST and are allowed an $\ell_0$ noise budget of $t = 8$. For CIFAR-10, we set $k = 10$ and are allowed a noise budget of $t = 8$. We observe the results in Table 3 and note once again that the recovery error is well below the upper bound. This observation will also be useful in Section 7.2, where we will show that the recovery error of Algorithm 2 is small for values of $t$ that are well outside the theoretical constraints of Theorem 2.
2 noise
Now we consider the case when the noise vector $e_i$, $i = 1, 2, \dots, 500$, is only bounded in $\ell_2$-norm. This case is covered by the guarantees provided in Theorem 3. First we describe the procedure used to construct each noise vector. For each $e_i$, $i = 1, 2, \dots, 500$, we set $(e_i)_j = c_j$, where $c_j$ is sampled from the uniform distribution on $[0, 1)$. Since there is no restriction on how small $k$ needs to be, we set $k = 75$ for CIFAR-10 and $k = 40$ for MNIST and Fashion-MNIST. We report $\delta_1$, $\delta_2$, $\Delta_1$, $\Delta_2$, and, since the noise budget is in $\ell_2$-norm, we also report $\ell_{2,\mathrm{avg}} := \frac{1}{500}\sum_{i=1}^{500}\|e_i\|_2$. The results are shown in Table 4. As was the case in Section 7.1.1, the recovery error is well below the upper bound here as well. This observation will be useful in Section 7.3, where we are able to create high-quality reconstructions for $\ell_2$-norm bounded attacks.
Making note of the results from Section 7.1.1 and 7.1.2, we now show that CRD can be used to defend against 0 -norm and 2 -norm bounded adversarial inputs.
Defense against 0 -norm attacks
This section is organized as follows: first we examine CRD against the One Pixel Attack (OPA) [22] for CIFAR-10. We only test this attack on CIFAR-10 as it is most effective against natural images and does not work well on MNIST or Fashion-MNIST. We note that this attack satisfies the theoretical constraints for $t$ provided in our guarantees, hence allowing us to test how well CRD works within our guarantees. Once we establish the effectiveness of CRD against OPA, we then test it against two other $\ell_0$-norm bounded attacks: the Carlini and Wagner (CW) $\ell_0$-norm attack [4] and the Jacobian-based Saliency Map Attack (JSMA) [19]. For the latter two attacks, we test CRD on all three datasets. Each experiment is conducted on a set of 1000 points sampled uniformly at random from the test set of the respective dataset.
One Pixel Attack
We first resize all CIFAR-10 images to 125 × 125 × 3 while maintaining aspect ratios to ensure that the data falls under the hypotheses in Theorem 1 even for large values of k. The OPA attack perturbs exactly one pixel of the image, leading to an 0 noise budget of t = 3 per image. The 0 noise budget of t = 3 allows us to use k = 275 for recovery with Algorithm 1. Even though OPA only perturbs one pixel per image, Table 5 shows that it is very effective against natural images and forces the network to misclassify all correctly classified inputs. Figure 1 shows that adversarial images created using OPA are visually almost indistinguishable from the original images. We test the performance of CRD in two ways: a) reconstruction quality b) network performance on reconstructed images.
In order to analyse the reconstruction quality of Algorithm 1, we do the following: for each test image, we use OPA to perturb the image and then use Algorithm 1 to approximate its largest (in absolute value) k = 275 DCT co-efficients. We then perform the IDCT on these recovered co-efficients to generate reconstructed images. The reconstructed images from Algorithm 1 can be seen in the second row of Figure 2. These reconstructions are then compared to the original images presented in the first row of the same figure.
Noting that Algorithm 1 leads to high quality reconstruction, we now test whether network accuracy improves on these reconstructed images. To do so, we feed these reconstructed images as Table 6: Network performance on the original inputs, adversarial inputs and the inputs corrected using CRD. Here the t avg column lists the average adversarial budget for each attack, Orig. Acc. column lists the accuracy of the network on the original inputs, the Acc. columns shows the accuracy on adversarial inputs, the IHT-Acc. and the BP-Acc. columns list the accuracy of the network on inputs that have been corrected using Algorithm 1 and Algorithm 2 respectively.
input to the network and report its accuracy in Table 5. We note that network performance does indeed improve as network accuracy goes from 0.0% to 68.3% using Algorithm 1. Therefore, we conclude that CRD provides a substantial improvement in accuracy in against OPA.
CW-0 Attack and JSMA
Having established the effectiveness of CRD against OPA, we move onto the CW 0 -norm attack and JSMA. Since these two attacks do not necessarily satisfy the required hypotheses on t for Theorem 1 and Theorem 2, we call upon the results of Section 7.1.1 to test if CRD is still able to defend the network against these attacks. For instance, in the case of the CW-0 attack, there is no way to pre-specify a fixed adversarial noise budget since the attack iteratively reduces the number of perturbed pixels until it is no longer effective. For JSMA one can pre-specify an adversarial budget, but as noted in [1], JSMA is only effective with larger values of t. However, even when t is much larger than the hypotheses of Theorem 1 and Theorem 2, we find that CRD is still able to defend the network. We observe that this is related to the behaviour of the RIP of a matrix for "most" 3 vectors as opposed to the RIP for all vectors, and leave a more rigorous analysis for a follow up work.
To begin our analysis, we show adversarial images for MNIST and Fashion-MNIST created by CW-0 and JSMA in Figure 3. The first row contains the original test images while the second and the third rows show the adversarial images. We show adversarial images for the CIFAR-10 dataset in Figure 4. Next, we follow the procedure described in Section 7.2.1 to analyze the quality of reconstructions for Algorithm 1 and Algorithm 2. For MNIST and Fashion-MNIST, we show the reconstructions of Algorithm 1 in Figure 5 and for Algorithm 2 in Figure 6. For CIFAR-10, we show the reconstructions for Algorithm 1 in Figure 7 and for Algorithm 2 in Figure 8. In each case it can be seen that both algorithms provide high quality reconstructions for values of t that are well outside the hypotheses required by Theorem 1 and Theorem 2. We report these t values and the improvement in network performance on reconstructed adversarial images using CRD in Table 6.
Note that while the network accuracy for all datasets improves substantially using CRD, for Algorithm 1 the network accuracy on reconstructed images for CIFAR-10 remains considerably lower than the accuracy on original images. We observe that a possible reason may be the difference in properties of the DCT co-efficients of MNIST/Fashion-MNIST data versus data from CIFAR-10. Consider the definition of $(k, \epsilon)$-sparse adapted from [1]: a $(k, \epsilon)$-sparse vector $x \in \mathbb{C}^n$ satisfies the constraint $\|x_{t(k)}\|_2 \le \epsilon\|x_{h(k)}\|_2$. The point of this definition is that smaller values of $\epsilon$ mean the vector $x$ is closer to being $k$-sparse. We notice that the average value of $\epsilon$ for the DCT co-efficients of CIFAR-10 is approximately 0.30, while that for MNIST is approximately 1.06 and that for Fashion-MNIST is approximately 0.89, where $k = \lfloor 0.05n \rfloor$. Based on our limited experimental results, it may be hypothesized that Algorithm 1 works well for larger values of $\epsilon$ when $k, t$ do not fit the constraints of Theorem 1. However, a deeper investigation is required to understand what makes Algorithm 1 perform poorly for CIFAR-10.
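The compressibility measure $\epsilon$ used here is straightforward to compute; a sketch (our own) for a batch of flattened single-channel images:

import numpy as np
from scipy.fft import dct

def epsilon_k(x_hat, k):
    # epsilon with ||x_t(k)||_2 <= epsilon * ||x_h(k)||_2 for a coefficient vector x_hat.
    mags = np.sort(np.abs(x_hat))[::-1]
    return np.linalg.norm(mags[k:]) / np.linalg.norm(mags[:k])

def mean_epsilon(images, frac=0.05):
    # Average epsilon over flattened images, with k = floor(0.05 * n) as in the text.
    n = images.shape[1]
    k = int(frac * n)
    F = dct(np.eye(n), norm='ortho', axis=0)
    return float(np.mean([epsilon_k(F @ img, k) for img in images]))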
Defense against 2 -norm attacks
In the case of 2 -norm bounded attacks, we use the CW 2 -norm attack [4] and the Deepfool attack [18] as they have been shown to be the most powerful. We note that Theorem 3 does not impose any restrictions on k or t and therefore the guarantees of equations (5) and (6) are applicable for recovery in all experiments of this section. Figure 9 shows examples of each attack for the CIFAR-10 dataset while adversarial images for MNIST and Fashion-MNIST are presented in Figure 10.
The reconstruction quality for MNIST and Fashion-MNIST is shown in Figure 11 and for CIFAR-10 we show the reconstruction quality in Figure 12. It can be noted that reconstruction Table 7: Accuracy of our network on the original inputs, adversarial inputs and the inputs corrected using CRD. Here the 2avg column lists the average 2 -norm of the attack vector, Acc. columns list the accuracy of the network on the original and adversarial inputs, and the Corr. Acc. columns lists the accuracy of the network once the inputs have been corrected using CRD.
using Algorithm 2 is of high quality for all three datasets. In order to check whether this high quality reconstruction also leads to improved performance in network accuracy, we test each network on reconstructed images using Algorithm 2. We report the results in Table 7 and note that Algorithm 2 provides a substantial improvement in network accuracy for each dataset and each attack method used. We can conclude that CRD is able to defend neural networks against 2 -norm bounded attacks.
Figure 10: Adversarial images for the MNIST and Fashion-MNIST datasets under $\ell_2$-norm bounded attacks. The first row lists the original images for the MNIST and Fashion-MNIST datasets, the second row shows adversarial images created using the CW $\ell_2$-norm attack, and the third row shows adversarial images created using the Deepfool attack.
Figure 11: Reconstruction from adversarial images using Algorithm 2. The first row shows the original images while the second and third rows show the reconstruction of the adversarial images after recovering the largest 40 co-efficients using Algorithm 2.
Which recovery algorithm to use for $\ell_0$-norm attacks
As shown in Section 7.1.1, Algorithm 1 and Algorithm 2 lead to high-quality reconstructions for $\ell_0$-norm bounded attacks. Hence, it is conceivable that CRD using either algorithm should be able to provide a good defense. However, we found that reconstructions using Algorithm 2 led to better network accuracy for CIFAR-10 than Algorithm 1, while Algorithm 1 outperformed Algorithm 2 for MNIST and Fashion-MNIST. Therefore, the algorithm to use may depend on the dataset in question. The next question to examine is which algorithm is faster in practice. Since Algorithm 2 is not technically an algorithm, its runtime depends on the actual method used to solve the optimization problem; we use Second Order Cone Programming (SOCP) from CVXPY [6] to solve the minimization problem in Algorithm 2. In our experiments, we noticed that the runtime of Algorithm 2 slows considerably for larger values of $n$, whereas Algorithm 1 does not face this issue (there is a slowdown, but it is much smaller than for Algorithm 2). Therefore, if speed is important, it may be beneficial to use Algorithm 1 rather than Algorithm 2 for recovery in the case of $\ell_0$-norm attacks.
Conclusion
We provided recovery guarantees for corrupted signals in the case of 0 -norm bounded and 2 -norm bounded noise. We then experimentally verified these guarantees and showed that for the datasets used, recovery error was considerably lower than the upper bounds of our theorems. We were able to utilize these observations in CRD and improve the performance of neural networks substantially in the case of 0 -norm bounded noise as well as 2 -norm bounded noise. While 0 -norm attacks don't necessarily satisfy the constraints required for our guarantees, we showed that CRD is still able to provide a good defense for values of t much larger than allowed by Theorems 1 and 2.
In the case of 2 -norm bounded adversaries, the guarantees of Theorem 3 were applicable in all experiments and CRD was shown to improve network performance for all attacks. | 8,526 |
1907.06565 | 2957156153 | We provide recovery guarantees for compressible signals that have been corrupted with noise and extend the framework introduced in [1] to defend neural networks against @math -norm and @math -norm attacks. Concretely, for a signal that is approximately sparse in some transform domain and has been perturbed with noise, we provide guarantees for accurately recovering the signal in the transform domain. We can then use the recovered signal to reconstruct the signal in its original domain while largely removing the noise. Our results are general as they can be directly applied to most unitary transforms used in practice and hold for both @math -norm bounded noise and @math -norm bounded noise. In the case of @math -norm bounded noise, we prove recovery guarantees for Iterative Hard Thresholding (IHT) and Basis Pursuit (BP). For the case of @math -norm bounded noise, we provide recovery guarantees for BP. These guarantees theoretically bolster the defense framework introduced in [1] for defending neural networks against adversarial inputs. Finally, we experimentally demonstrate this defense framework using both IHT and BP against the One Pixel Attack [21], Carlini-Wagner @math and @math attacks [3], Jacobian Saliency Based attack [18], and the DeepFool attack [17] on CIFAR-10 [12], MNIST [13], and Fashion-MNIST [27] datasets. This expands beyond the experimental demonstrations of [1]. | Most defenses against adversarial inputs do not come with theoretical guarantees. Instead, a large body of research has focused on finding practical ways to improve robustness to adversarial inputs by either augmenting the training data @cite_22 , using adversarial inputs from various networks @cite_28 , or by reducing the dimensionality of the input @cite_14 . For instance, @cite_18 use robust optimization to make the network robust to worst case adversarial perturbations on the training data. However, the effectiveness of their approach is determined by the amount and quality of training data available and its similarity to the distribution of the test data. An approach similar to ours but without any theoretical guarantees is @cite_21 . In this work, the authors use Generative Adversarial Networks (GANs) to estimate the distribution of the training data and during inference, use a GAN to reconstruct an input that is most similar to a given test input and is not adversarial. | {
"abstract": [
"Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.",
"Feature squeezing is a recently-introduced framework for mitigating and detecting adversarial examples. In previous work, we showed that it is effective against several earlier methods for generating adversarial examples. In this short note, we report on recent results showing that simple feature squeezing techniques also make deep learning models significantly more robust against the Carlini Wagner attacks, which are the best known adversarial methods discovered to date.",
"Abstract: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.",
"Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with strong robustness to black-box attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks.",
"In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against such attacks. Defense-GAN is trained to model the distribution of unperturbed images. At inference time, it finds a close output to a given image which does not contain the adversarial changes. This output is then fed to the classifier. Our proposed method can be used with any classification model and does not modify the classifier structure or training procedure. It can also be used as a defense against any attack as it does not assume knowledge of the process for generating the adversarial examples. We empirically show that Defense-GAN is consistently effective against different attack methods and improves on existing defense strategies. Our code has been made publicly available at this https URL"
],
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_28",
"@cite_21"
],
"mid": [
"2640329709",
"2619203976",
"2963207607",
"2620038827",
"2787496614"
]
} | Recovery Guarantees for Compressible Signals with Adversarial Noise | Signal measurements are often corrupted due to measurement errors and can even be corrupted due to adversarial noise injection. Supposing some structure on the measurement mechanism, is it possible for us to retrieve the original signal from a corrupted measurement? Indeed, it is generally possible to do so using the theory of Compressive Sensing [3] if certain constraints on the measurement mechanism and the signal hold. In order to make the question more concrete, let us consider the class of machine learning problems where the inputs are compressible (i.e., approximately sparse) in some domain. For instance, images and audio signals are known to be compressible in their frequency domain and machine learning algorithms have been shown to perform exceedingly well on classification tasks that take such signals as input [12,23]. However, it was found in [25] that neural networks can be easily forced into making incorrect predictions with high-confidence by adding adversarial perturbations to their inputs; see also [24,9,19,4].
Further, the adversarial perturbations that led to incorrect predictions were shown to be very small (in either 0 -norm or 2 -norm) and often imperceptible to human beings. For this class of machine learning tasks, we show that it is possible to recover original inputs from adversarial inputs and defend the neural network.
In this paper, we first provide recovery guarantees for compressible signals that have been corrupted by noise bounded in either 0 -norm or 2 -norm. Then we extend the framework introduced in [1] to defend neural networks against 0 -norm and 2 -norm attacks. In the case of 0 -norm attacks on neural networks, the adversary can perturb a bounded number of elements in the input but has no restriction on how much each element is perturbed in absolute value. In the case of 2 -norm attacks, the adversary can perturb as many elements as they choose as long as the 2 -norm of the perturbation vector is bounded. Our recovery guarantees cover both cases and provide a partial theoretical explanation for the robustness of the defense framework against adversarial inputs. Our contributions can be summarized as follows:
1. We provide recovery guarantees for IHT and BP when the noise budget is bounded in 0 -norm.
2. We provide recovery guarantees for BP when the noise budget is bounded in the 2 -norm. 3. We extend the framework introduced in [1] to defend neural networks against 0 -norm bounded and 2 -norm bounded attacks.
The paper is organized as follows. We present the defense framework introduced in [1], which we call Compressive Recovery Defense (CRD), in Section 3.1. We present our main theoretical results (i.e. the recovery guarantees) in Section 3.2 and compare these results to related work in Section 3.3. We establish the Restricted Isometry Property (RIP) in Section 4 provide the proofs of our main results in Sections 5 and 6. We show that CRD can be used to defend against 0 -norm and 2 -norm bounded attacks in Section 7 and conclude the paper in Section 8.
Notation
Let x be a vector in C N and let S ⊆ {1, . . . , N } with S = {1, . . . , N } \ S. The support of x, denoted by supp(x), is set of indices of the non-zero entries of x, that is, supp(x) = {i ∈ {1, . . . , N } : x i = 0}. The 0 -norm of x, denoted x 0 , is defined to be the number of non-zero entries of x, i.e. x 0 = card(supp(x)). We say that x is k-sparse if x 0 ≤ k. We denote by x S either the sub-vector in C S consisting of the entries indexed by S or the vector in C N that is formed by starting with x and setting the entries with indices outside of S to zero. For example, if x = [4, 5, −9, 1] and S = {1, 3}, then x S is either [4, −9] or [4, 0, −9, 0]. In the latter case, note x S = x − x S . It will always be clear from the context which meaning is intended. If A ∈ C m×N is a matrix, we denote by A S the column sub-matrix of A consisting of the columns indexed by S.
We use x h(k) to denote a k-sparse vector in C N consisting of the k largest (in absolute value) entries of x with all other entries zero. For example, if x = [4, 5, −9, 1] then x h(2) = [0, 5, −9, 0]. Note that x h(k) may not be uniquely defined. In contexts where a unique meaning for x h(k) is needed, we can choose x h(k) out of all possible candidates according to a predefined rule (such as the lexicographic order). We also define
$x_{t(k)} = x - x_{h(k)}$. Let $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \in \mathbb{C}^{2n}$ with $x_1, x_2 \in \mathbb{C}^n$; then $x$ is called $(k,t)$-sparse if $x_1$ is $k$-sparse and $x_2$ is $t$-sparse. We define
$$x_{h(k,t)} = \begin{bmatrix} (x_1)_{h(k)} \\ (x_2)_{h(t)} \end{bmatrix},$$
which is a $(k,t)$-sparse vector in $\mathbb{C}^{2n}$. Again, $x_{h(k,t)}$ may not be uniquely defined, but when a unique meaning for $x_{h(k,t)}$ is needed (such as in Algorithm 1), we can choose $x_{h(k,t)}$ out of all possible candidates according to a predefined rule.
Main Results
In this section we outline the problem and the framework introduced in [1], state our main theorems, and compare our results to related work.
Compressive Recovery Defense (CRD)
Consider an image classification problem in which a machine learning classifier takes an image re-constructed from its largest Fourier co-efficients as input and outputs a classification decision. Let $x \in \mathbb{C}^n$ be the image vector (we can assume the image is of size $\sqrt{n} \times \sqrt{n}$ for instance). Then, letting $F \in \mathbb{C}^{n \times n}$ be the unitary Discrete Fourier Transform (DFT) matrix, we get the Fourier coefficients of $x$ as $\hat{x} = Fx$.
It is well known that natural images are approximately sparse in the frequency domain and therefore we can assume that $\hat{x}$ is $k$-sparse, that is, $\|\hat{x}\|_0 \le k$. In our example of the image classification problem, this means that our machine learning classifier can accept as input the image reconstructed from $\hat{x}_{h(k)}$, and still output the correct decision. That is, the machine learning classifier can accept $F^*\hat{x}_{h(k)}$ as input and still output the correct decision. Now, suppose an adversary corrupts the original image and we observe $y = x + e$. Noting that $y$ can also be written as $y = F^*\hat{x} + e$, we are interested in recovering an approximation $\hat{x}^{\#}$ to $\hat{x}_{h(k)}$ upon observing $y$, such that when we feed $F^*\hat{x}^{\#}$ as input to the classifier, it can still output the correct classification decision.
More generally, this basic framework can be used for adversarial inputs $u = v + d$ in any input domain, as long as there exists a matrix $A$ such that $u = A\hat{v} + d$, where $\hat{v}$ is approximately sparse and $\|d\|_p \le \eta$ for some $p, \eta \ge 0$. If we can recover an approximation $\hat{v}^{\#}$ to $\hat{v}$ with bounds on the recovery error, then we can use $\hat{v}^{\#}$ to reconstruct an approximation $A\hat{v}^{\#}$ to $v$ with controlled error.
This general framework was proposed by [1]. Moving forward, we refer to this general framework as Compressive Recovery Defense (CRD) and utilize it to defend neural networks against 0 and 2 -norm attacks. As observed in [1], x [0] in Algorithm 1, can be initialized randomly to defend against a reverse-engineering attack. In the case of Algorithm 2, the minimization problem can be posed as a Second Order Cone Programming (SOCP) problem and it appears non-trivial to create a reverse engineering attack that will retain the adversarial noise through the recovery and reconstruction process.
Results
Our main results are stated below. Theorem 1 and Theorem 2 provide bounds on the recovery error with Algorithm 1 and Algorithm 2 respectively when the noise is bounded in 0 -norm. Theorem 3 covers the case when the noise is bounded in the 2 -norm. We start with providing bounds on the approximation error using IHT when the noise is bounded in 0 -norm.
Theorem 1. Let $A = [F \; I] \in \mathbb{C}^{n \times 2n}$, where $F \in \mathbb{C}^{n \times n}$ is a unitary matrix with $|F_{ij}|^2 \le \frac{c}{n}$ and $I \in \mathbb{C}^{n \times n}$ is the identity matrix. Let $y = F\hat{x} + e$, where $\hat{x}, e \in \mathbb{C}^n$ and $e$ is $t$-sparse. Let $1 \le k \le n$ be an integer and define
$$\rho = \sqrt{\frac{27ckt}{n}}, \qquad \tau(1-\rho) = \sqrt{3}\,\sqrt{1 + 2\sqrt{\frac{ckt}{n}}}.$$
Then for any solution $x^{[T]} = \mathrm{IHT}(y, A, k, t, T)$ of Algorithm 1 we have the error bound
$$\|\hat{x}^{[T]} - \hat{x}_{h(k)}\|_2 \le \rho^T \sqrt{\|\hat{x}_{h(k)}\|_2^2 + \|e\|_2^2} + \tau\|\hat{x}_{t(k)}\|_2, \qquad (1)$$
where we write $x^{[T]} = \begin{bmatrix} \hat{x}^{[T]} \\ e^{[T]} \end{bmatrix}$ with $\hat{x}^{[T]}, e^{[T]} \in \mathbb{C}^n$. Moreover, if $0 < \rho < 1$, then for any $0 < \epsilon < 1$ and any
$$T \ge \frac{\log(1/\epsilon) + \log\!\left(\sqrt{\|\hat{x}_{h(k)}\|_2^2 + \|e\|_2^2}\right)}{\log(1/\rho)} + 1$$
we have
$$\|\hat{x}^{[T]} - \hat{x}_{h(k)}\|_\infty \le \sqrt{\frac{2ct}{n}}\left(\tau\|\hat{x}_{t(k)}\|_2 + \epsilon\right) \qquad (2)$$
$$\|\hat{x}^{[T]} - \hat{x}_{h(k)}\|_2 \le \sqrt{\frac{4ckt}{n}}\left(\tau\|\hat{x}_{t(k)}\|_2 + \epsilon\right) \qquad (3)$$
The result above applies to unitary transformations such as the Fourier Transform, Cosine Transform, Sine Transform, Hadamard Transform, and other wavelet transforms. Since the constant in the above bound can be made arbitrarily small, the recovery error in equations (2) and (3) depends primarily on x t(k) 2 which is small for sparse signals.
Next, we consider the recovery error when using BP instead of IHT. Providing bounds BP is useful as there are cases 1 when (i) BP provides recovery guarantees against a larger 0 noise budget than IHT and (ii) BP leads to a better reconstruction than IHT.
Theorem 2. Let $A = [F \; I] \in \mathbb{C}^{n \times 2n}$, where $F \in \mathbb{C}^{n \times n}$ is a unitary matrix with $|F_{ij}|^2 \le \frac{c}{n}$ and $I \in \mathbb{C}^{n \times n}$ is the identity matrix. Let $y = F\hat{x} + e$, and let $1 \le k, t \le n$ be positive integers. Define
$$\delta_{k,t} = \sqrt{\frac{ckt}{n}}, \quad \beta = \sqrt{\frac{\max\{k,t\}\,c}{n}}, \quad \theta = \frac{\sqrt{k+t}}{1-\delta_{k,t}}\,\beta, \quad \tau = \frac{\sqrt{1+\delta_{k,t}}}{1-\delta_{k,t}}.$$
If $0 < \delta_{k,t} < 1$ and $0 < \theta < 1$, then for a solution $x^{\#} = \mathrm{BP}(y, A, \|\hat{x}_{t(k)}\|_2)$ of Algorithm 2, we have the error bound
$$\|\hat{x}^{\#} - \hat{x}_{h(k)}\|_2 \le \left(\frac{2\tau\sqrt{k+t}}{1-\theta}\left(1 + \frac{\beta}{1-\delta_{k,t}}\right) + 2\tau\right)\|\hat{x}_{t(k)}\|_2, \qquad (4)$$
where we write $x^{\#} = \begin{bmatrix} \hat{x}^{\#} \\ e^{\#} \end{bmatrix}$ with $\hat{x}^{\#}, e^{\#} \in \mathbb{C}^n$.
Our final result covers the case when the noise is bounded in 2 -norm. Note that the result covers all unitary matrices and removes the restriction on the magnitude of their elements. We will utilize this result in defending against 2 -norm attacks.
Theorem 3. Let $F \in \mathbb{C}^{n \times n}$ be a unitary matrix and let $y = F\hat{x} + e$, where $\hat{x} \in \mathbb{C}^n$ is $k$-sparse and $e \in \mathbb{C}^n$. If $\|e\|_2 \le \eta$, then for a solution $x^{\#} = \mathrm{BP}(y, F, \eta)$ of Algorithm 2, we have the error bounds
$$\|x^{\#} - \hat{x}\|_1 \le 4\sqrt{k}\,\eta \qquad (5)$$
$$\|x^{\#} - \hat{x}\|_2 \le 6\eta \qquad (6)$$
1 As shown in Section 7.1.1 and Section 7.2.2
Restricted Isometry Property
All of our recovery guarantees are based on the following theorem, which establishes a restricted isometry property for certain structured matrices. First, we give some definitions.
Definition 4. Let $M \subseteq \mathbb{C}^N$. A matrix $A \in \mathbb{C}^{m\times N}$ satisfies the $M$-restricted isometry property ($M$-RIP) with constant $\delta > 0$ if
$$(1-\delta)\|x\|_2^2 \le \|Ax\|_2^2 \le (1+\delta)\|x\|_2^2$$
for all $x \in M$.
Definition 5. We define $M_k$ to be the set of all $k$-sparse vectors in $\mathbb{C}^N$ and define $S_k$ to be the collection of subsets of $\{1, \ldots, N\}$ of cardinality at most $k$. Note that $S_k$ is the collection of supports of vectors in $M_k$. Similarly, we define $M_{k,t}$ to be the set of $(k,t)$-sparse vectors in $\mathbb{C}^{2n}$. In other words, $M_{k,t}$ is the following subset of $\mathbb{C}^{2n}$:
$$M_{k,t} = \left\{ x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \in \mathbb{C}^{2n} : x_1, x_2 \in \mathbb{C}^n,\ \|x_1\|_0 \le k,\ \|x_2\|_0 \le t \right\}.$$
We define $S_{k,t}$ to be the following collection of subsets of $\{1, \ldots, 2n\}$:
$$S_{k,t} = \{S_1 \cup S_2 : S_1 \subseteq \{1,\ldots,n\},\ S_2 \subseteq \{n+1,\ldots,2n\},\ \mathrm{card}(S_1) \le k,\ \mathrm{card}(S_2) \le t\}.$$
Note that $S_{k,t}$ is the collection of supports of vectors in $M_{k,t}$.
Theorem 6. Let $A = [F\ I] \in \mathbb{C}^{n\times 2n}$, where $F \in \mathbb{C}^{n\times n}$ is a unitary matrix with $|F_{ij}|^2 \le \frac{c}{n}$ and $I \in \mathbb{C}^{n\times n}$ is the identity matrix. Then
$$\left(1 - \sqrt{\frac{ckt}{n}}\right)\|x\|_2^2 \le \|Ax\|_2^2 \le \left(1 + \sqrt{\frac{ckt}{n}}\right)\|x\|_2^2 \qquad (7)$$
for all $x \in M_{k,t}$. In other words, $A$ satisfies the $M_{k,t}$-RIP property with constant $\sqrt{ckt/n}$.
Proof. In this proof, if $B$ denotes a matrix in $\mathbb{C}^{n\times n}$, then $\lambda_1(B), \ldots, \lambda_n(B)$ denote the eigenvalues of $B$ ordered so that $|\lambda_1(B)| \le \cdots \le |\lambda_n(B)|$. It suffices to fix an $S = S_1 \cup S_2 \in S_{k,t}$ and prove (7) for all non-zero $x \in \mathbb{C}^S$. Since $A_S^* A_S$ is normal, there is an orthonormal basis of eigenvectors $u_1, \ldots, u_n$ for $A_S^* A_S$, where $u_i$ corresponds to the eigenvalue $\lambda_i(A_S^* A_S)$. For any non-zero $x \in \mathbb{C}^S$, we have $x = \sum_{i=1}^n c_i u_i$ for some $c_i \in \mathbb{C}$, so
$$\frac{\|Ax\|_2^2}{\|x\|_2^2} = \frac{\langle A_S^* A_S x, x\rangle}{\langle x, x\rangle} = \frac{\sum_{i=1}^n \lambda_i(A_S^* A_S)|c_i|^2}{\sum_{i=1}^n |c_i|^2}. \qquad (8)$$
Thus it will suffice to prove that $|\lambda_i(A_S^* A_S) - 1| \le \sqrt{ckt/n}$ for all $i$. Moreover,
$$|\lambda_i(A_S^* A_S) - 1| = |\lambda_i(A_S^* A_S - I)| = \sqrt{\lambda_i\big((A_S^* A_S - I)^*(A_S^* A_S - I)\big)}, \qquad (9)$$
where the last equality holds because $A_S^* A_S - I$ is normal. By combining (8) and (9), we see that (7) will hold upon showing that the eigenvalues of $(A_S^* A_S - I)^*(A_S^* A_S - I)$ are bounded by $ckt/n$. So far we have not used the structure of $A$, but now we must.
Observe that $(A_S^* A_S - I)^*(A_S^* A_S - I)$ is a block diagonal matrix with two diagonal blocks of the form $X^* X$ and $XX^*$. Therefore the three matrices $(A_S^* A_S - I)^*(A_S^* A_S - I)$, $X^* X$, and $XX^*$ have the same non-zero eigenvalues. Moreover, $X$ is simply the matrix $F_{S_1}$ with those rows not indexed by $S_2$ deleted. The hypotheses on $F$ imply that the entries of $X^* X$ satisfy $|(X^* X)_{ij}| \le \frac{ct}{n}$. So the Gershgorin disc theorem implies that each eigenvalue $\lambda$ of $X^* X$ and (hence) of $(A_S^* A_S - I)^*(A_S^* A_S - I)$ satisfies $|\lambda| \le \frac{ckt}{n}$.
Algorithm 1 (k, t)-Iterative Hard Thresholding
Input: the observed vector $y \in \mathbb{C}^n$, the measurement matrix $A \in \mathbb{C}^{n\times 2n}$, and positive integers $k, t, T \in \mathbb{Z}_+$.
Output: $x^{[T]} \in M_{k,t}$
1: procedure IHT($y, A, k, t, T$)
2:   $x^{[0]} \leftarrow 0$
3:   for $i \in [0, \ldots, T]$ do
4:     $x^{[i+1]} \leftarrow \big(x^{[i]} + A^*(y - Ax^{[i]})\big)_{h(k,t)}$
5:   return $x^{[T]}$
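For concreteness, the following is a minimal NumPy sketch of Algorithm 1, assuming the hard-thresholding operator $(\cdot)_{h(k,t)}$ keeps the $k$ largest-magnitude entries of the first block of length $n$ and the $t$ largest-magnitude entries of the second block; the function and variable names are illustrative and not taken from the paper's released code.

import numpy as np

def hard_threshold_kt(v, k, t):
    # Keep the k largest-magnitude entries of the first half of v and the
    # t largest-magnitude entries of the second half; zero everything else.
    n = v.size // 2
    out = np.zeros_like(v)
    idx_k = np.argsort(np.abs(v[:n]))[-k:]
    idx_t = np.argsort(np.abs(v[n:]))[-t:] + n
    out[idx_k] = v[idx_k]
    out[idx_t] = v[idx_t]
    return out

def iht_kt(y, A, k, t, T):
    # (k, t)-Iterative Hard Thresholding: gradient step on ||y - A x||_2^2
    # followed by projection onto M_{k,t}, starting from x^[0] = 0.
    x = np.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(T):
        x = hard_threshold_kt(x + A.conj().T @ (y - A @ x), k, t)
    return x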
Iterative Hard Thresholding
Now we utilize the result of Theorem 6 to prove recovery guarantees for the following Iterative Hard Thresholding algorithm.
Theorem 7. Let $A \in \mathbb{C}^{n\times 2n}$ be a matrix. Let $1 \le k, t \le n$ be positive integers and suppose $\delta_3$ is an $M_{3k,3t}$-RIP constant for $A$ and $\delta_2$ is an $M_{2k,2t}$-RIP constant for $A$. Let $x \in \mathbb{C}^{2n}$, $r \in \mathbb{C}^n$, $y = Ax + r$, and $S \in S_{k,t}$. Letting $x^{[T]} = \mathrm{IHT}(y, A, k, t, T)$, we have the approximation error bound
$$\|x^{[T]} - x_S\|_2 \le \rho^T \|x^{[0]} - x_S\|_2 + \tau \|Ax_{\bar{S}} + r\|_2,$$
where $\rho = \sqrt{3}\,\delta_3$ and $(1-\rho)\tau = \sqrt{3}\sqrt{1+\delta_2}$. In particular, if $\delta_3 < 1/\sqrt{3}$, then $(1-\rho)\tau \le \sqrt{3}\sqrt{1+\delta_3} < 2.18$ and $\rho < 1$; the latter implies that the first term on the right goes to zero as $T \to \infty$.
Theorem 7 is a modification of Theorem 6.18 of [8]. More specifically, Theorem 6.18 of [8] considers M 3k , M 2k , and S k in place of M 3k,3t and M 2k,2t and S k,t and any dimension N in place of 2n. The proofs are very similar, so we omit the proof of Theorem 7.
Proof of Theorem 1. Theorem 6 implies that the statement of Theorem 7 holds with $\delta_3 = \sqrt{\frac{c\cdot 3k\cdot 3t}{n}}$ and $\delta_2 = \sqrt{\frac{c\cdot 2k\cdot 2t}{n}}$.
Noting that $y = A\begin{bmatrix}\hat{x}_{h(k)} \\ e\end{bmatrix} + F\hat{x}_{t(k)}$, where $\begin{bmatrix}\hat{x}_{h(k)} \\ e\end{bmatrix} \in M_{k,t}$, set $x^{[T]} = \mathrm{IHT}(y, A, k, t, T)$ and apply Theorem 7 with $x = \begin{bmatrix}\hat{x}_{h(k)} \\ e\end{bmatrix}$, $r = F\hat{x}_{t(k)}$, and $S = \mathrm{supp}(x)$. Writing $x^{[T]} = \begin{bmatrix}\hat{x}^{[T]} \\ e^{[T]}\end{bmatrix}$, use the facts that $\|\hat{x}^{[T]} - \hat{x}_{h(k)}\|_2 \le \|x^{[T]} - x_S\|_2$ and $\|F\hat{x}_{t(k)}\|_2 = \|\hat{x}_{t(k)}\|_2$. That gives (1). Now let
$$T - 1 = \frac{\log(1/\epsilon) + \log\!\big(\sqrt{\|\hat{x}_{h(k)}\|_2^2 + \|e\|_2^2}\big)}{\log(1/\rho)},$$
which gives $\rho^{T-1}\sqrt{\|\hat{x}_{h(k)}\|_2^2 + \|e\|_2^2} \le \epsilon$. Noting that $\|e^{[T-1]} - e\|_2 \le \tau\|\hat{x}_{t(k)}\|_2 + \epsilon$, we can use the same reasoning as used in [1]. We first define $z := F^*(y - e^{[T-1]})$, which means $\hat{x}^{[T]} = z_{h(k)}$, and since $F\hat{x} + e = Fz + e^{[T-1]}$, we have $\hat{x} - z = F^*(e^{[T-1]} - e)$.
Since the support of $e^{[T-1]} - e$ has cardinality at most $2t$ and since $|F_{ij}|^2 \le \frac{c}{n}$, we can use the fact that for a $2t$-sparse vector $v$, $\|v\|_1 \le \sqrt{2t}\,\|v\|_2$, to get, for any $i \in [n]$,
$$|(F^*(e^{[T-1]} - e))_i| \le \sum_{j=1}^n |F^*_{ij}|\,|(e^{[T-1]} - e)_j| \le \sqrt{\frac{2ct}{n}}\,\|e^{[T-1]} - e\|_2 \le \sqrt{\frac{2ct}{n}}\big(\tau\|\hat{x}_{t(k)}\|_2 + \epsilon\big).$$
Therefore $\|\hat{x} - z\|_\infty \le \sqrt{\frac{2ct}{n}}(\tau\|\hat{x}_{t(k)}\|_2 + \epsilon)$ and consequently $\|\hat{x}_{h(k)} - z_{h(k)}\|_\infty \le \sqrt{\frac{2ct}{n}}(\tau\|\hat{x}_{t(k)}\|_2 + \epsilon)$, which is (2). We get (3) by noting that $\hat{x}_{h(k)} - z_{h(k)}$ is $2k$-sparse and therefore
$$\|\hat{x}_{h(k)} - z_{h(k)}\|_2 \le \sqrt{2k}\,\|\hat{x}_{h(k)} - z_{h(k)}\|_\infty \le \sqrt{\frac{4ckt}{n}}\big(\tau\|\hat{x}_{t(k)}\|_2 + \epsilon\big).$$
Basis Pursuit
Next we introduce the Basis Pursuit algorithm and prove its recovery guarantees for $\ell_0$-norm and $\ell_2$-norm noise.
Algorithm 2 Basis Pursuit
Input: the observed vector $y \in \mathbb{C}^n$, where $y = Ax + e$, the measurement matrix $A \in \mathbb{C}^{n\times N}$, and a bound $\eta$ on the norm of the error vector such that $\|e\|_2 \le \eta$.
Output: $x^{\#} \in \mathbb{C}^N$
1: procedure BP($y, A, \eta$)
2:   $x^{\#} \leftarrow \arg\min_{z \in \mathbb{C}^N} \|z\|_1$ subject to $\|Az - y\|_2 \le \eta$
3:   return $x^{\#}$
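A minimal CVXPY sketch of Algorithm 2 is shown below; it solves the same $\ell_1$ minimization as line 2 of the pseudocode. The helper name and solver defaults are illustrative assumptions rather than the authors' implementation.

import numpy as np
import cvxpy as cp

def basis_pursuit(y, A, eta):
    # Basis Pursuit: minimize ||z||_1 subject to ||A z - y||_2 <= eta.
    N = A.shape[1]
    z = cp.Variable(N, complex=np.iscomplexobj(A) or np.iscomplexobj(y))
    constraints = [cp.norm(A @ z - y, 2) <= eta]
    problem = cp.Problem(cp.Minimize(cp.norm1(z)), constraints)
    problem.solve()  # any SOCP-capable solver available to CVXPY will do
    return z.value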
We begin by stating some definitions that will be required in the proofs of the main theorems.
Definition 8. The matrix $A \in \mathbb{C}^{m\times N}$ satisfies the robust null space property with constants $0 < \rho < 1$, $\tau > 0$, and norm $\|\cdot\|$ if for every set $S \subseteq [N]$ with $\mathrm{card}(S) \le s$ and for every $v \in \mathbb{C}^N$ we have
$$\|v_S\|_1 \le \rho\|v_{\bar{S}}\|_1 + \tau\|Av\|.$$
Definition 9. The matrix $A \in \mathbb{C}^{m\times N}$ satisfies the $\ell_q$ robust null space property of order $s$ with constants $0 < \rho < 1$, $\tau > 0$, and norm $\|\cdot\|$ if for every set $S \subseteq [N]$ with $\mathrm{card}(S) \le s$ and for every $v \in \mathbb{C}^N$ we have
$$\|v_S\|_q \le \frac{\rho}{s^{1-1/q}}\|v_{\bar{S}}\|_1 + \tau\|Av\|.$$
Note that if $q = 1$ then this is simply the robust null space property.
The proof of Theorem 2 requires the following theorem (whose full proof is given in the cited work).
Theorem 10 (Theorem 4.33 in [8]). Let $a_1, \ldots, a_N$ be the columns of $A \in \mathbb{C}^{m\times N}$, let $x \in \mathbb{C}^N$ with its $s$ largest absolute entries supported on $S$, and let $y = Ax + e$ with $\|e\|_2 \le \eta$. For $\delta, \beta, \gamma, \theta, \tau \ge 0$ with $\delta < 1$, assume that
$$\|A_S^* A_S - I\|_{2\to 2} \le \delta, \qquad \max_{l \in \bar{S}} \|A_S^* a_l\|_2 \le \beta,$$
and that there exists a vector $u = A^* h \in \mathbb{C}^N$ with $h \in \mathbb{C}^m$ such that
$$\|u_S - \mathrm{sgn}(x_S)\|_2 \le \gamma, \qquad \|u_{\bar{S}}\|_\infty \le \theta, \qquad \|h\|_2 \le \tau\sqrt{s}.$$
If $\rho := \theta + \frac{\beta\gamma}{1-\delta} < 1$, then a minimizer $x^{\#}$ of $\|z\|_1$ subject to $\|Az - y\|_2 \le \eta$ satisfies
$$\|x^{\#} - x\|_2 \le \frac{2}{1-\rho}\left(1 + \frac{\beta}{1-\delta}\right)\|x_{\bar{S}}\|_1 + \left(\frac{2(\mu\gamma + \tau\sqrt{s})}{1-\rho}\left(1 + \frac{\beta}{1-\delta}\right) + 2\mu\right)\eta,$$
where $\mu := \frac{\sqrt{1+\delta}}{1-\delta}$ and
$$\mathrm{sgn}(x)_i = \begin{cases} 0, & x_i = 0 \\ 1, & x_i > 0 \\ -1, & x_i < 0. \end{cases}$$
We will need another Lemma before proving Theorem 2.
Lemma 11. Let $A \in \mathbb{C}^{n\times 2n}$. If $A$ satisfies the $M_{k,t}$-RIP property with constant $\delta$, i.e., $(1-\delta)\|x\|_2^2 \le \|Ax\|_2^2 \le (1+\delta)\|x\|_2^2$ for all $x \in M_{k,t}$, then $\|A_S^* A_S - I\|_{2\to 2} \le \delta$ for any $S \in S_{k,t}$.
Proof. Let $S \in S_{k,t}$ be given. Then for any $x \in \mathbb{C}^S$, we have
$$\big|\,\|A_S x\|_2^2 - \|x\|_2^2\,\big| \le \delta\|x\|_2^2.$$
We can rewrite the left-hand side as
$$\|A_S x\|_2^2 - \|x\|_2^2 = \langle A_S x, A_S x\rangle - \langle x, x\rangle = \langle (A_S^* A_S - I)x, x\rangle.$$
Noting that $A_S^* A_S - I$ is Hermitian, we have
$$\|A_S^* A_S - I\|_{2\to 2} = \max_{x \in \mathbb{C}^S \setminus \{0\}} \frac{|\langle (A_S^* A_S - I)x, x\rangle|}{\|x\|_2^2} \le \delta.$$
Proof of Theorem 2. We will derive equation (4) by showing that the matrix $A$ satisfies all the hypotheses of Theorem 10 for every vector in $M_{k,t}$.
First note that by Theorem 6, $A$ satisfies the $M_{k,t}$-RIP property with constant $\delta_{k,t} := \sqrt{ckt/n}$. Therefore, by Lemma 11, for any $S \in S_{k,t}$ we have $\|A_S^* A_S - I\|_{2\to 2} \le \delta_{k,t}$. Since $A_S^* A_S$ is a positive semi-definite matrix, it has only non-negative eigenvalues, which lie in the range $[1-\delta_{k,t}, 1+\delta_{k,t}]$. Since $\delta_{k,t} < 1$ by assumption, $A_S^* A_S$ is injective. Thus, we can set $h = A_S(A_S^* A_S)^{-1}\mathrm{sgn}(x_S)$ and get
$$\|h\|_2 = \|A_S(A_S^* A_S)^{-1}\mathrm{sgn}(x_S)\|_2 \le \|A_S\|_{2\to 2}\,\|(A_S^* A_S)^{-1}\|_{2\to 2}\,\|\mathrm{sgn}(x_S)\|_2 \le \tau\sqrt{k+t},$$
where $\tau = \frac{\sqrt{1+\delta_{k,t}}}{1-\delta_{k,t}}$ and we have used the following facts: since $\|A_S^* A_S - I\|_{2\to 2} \le \delta_{k,t} < 1$, we get $\|(A_S^* A_S)^{-1}\|_{2\to 2} \le \frac{1}{1-\delta_{k,t}}$, and the largest singular value of $A_S$ is at most $\sqrt{1+\delta_{k,t}}$. Now let $u = A^* h$; then $\|u_S - \mathrm{sgn}(x_S)\|_2 = 0$. Next we need to bound $\|u_{\bar{S}}\|_\infty$. Denoting row $j$ of $A_{\bar{S}}^* A_S$ by the vector $v_j$, we see that it has at most $\max\{k, t\}$ non-zero entries and that $|(v_j)_l|^2 \le \frac{c}{n}$ for $l = 1, \ldots, k+t$. Therefore, for any element $(u_{\bar{S}})_j$, we have
$$|(u_{\bar{S}})_j| = |\langle (A_S^* A_S)^{-1}\mathrm{sgn}(x_S), (v_j)^*\rangle| \le \|(A_S^* A_S)^{-1}\|_{2\to 2}\,\|\mathrm{sgn}(x_S)\|_2\,\|v_j\|_2 \le \frac{\sqrt{k+t}}{1-\delta_{k,t}}\,\beta.$$
Setting $\theta = \frac{\sqrt{k+t}}{1-\delta_{k,t}}\beta$, we get $\|u_{\bar{S}}\|_\infty \le \theta < 1$, and we also observe that $\max_{l \in \bar{S}}\|A_S^* a_l\|_2 \le \beta$. Therefore, all the hypotheses of Theorem 10 have been satisfied. Note that
$$y = F\hat{x} + e = A\begin{bmatrix}\hat{x}_{h(k)} \\ e\end{bmatrix} + F\hat{x}_{t(k)}, \qquad \begin{bmatrix}\hat{x}_{h(k)} \\ e\end{bmatrix} \in M_{k,t}.$$
Therefore, setting $x^{\#} = \mathrm{BP}(y, A, \|\hat{x}_{t(k)}\|_2)$, we use the fact that $\|F\hat{x}_{t(k)}\|_2 = \|\hat{x}_{t(k)}\|_2$ combined with the bound in Theorem 10 to get (4):
$$\|\hat{x}^{\#} - \hat{x}_{h(k)}\|_2 \le \left(\frac{2\tau\sqrt{k+t}}{1-\theta}\left(1 + \frac{\beta}{1-\delta_{k,t}}\right) + 2\tau\right)\|\hat{x}_{t(k)}\|_2,$$
where we write $x^{\#} = \begin{bmatrix}\hat{x}^{\#} \\ e^{\#}\end{bmatrix}$ with $\hat{x}^{\#}, e^{\#} \in \mathbb{C}^n$.
We note that since Algorithm 2 is not adapted to the structure of the matrix $A$ in the statement of Theorem 2, one can expect the guarantees to be weaker. We now focus on proving Theorem 3. In order to do so, we will need to state some lemmas that will be used in the main proof.
Lemma 12. If a matrix $A \in \mathbb{C}^{m\times N}$ satisfies the $\ell_2$ robust null space property for $S \subseteq [N]$ with $\mathrm{card}(S) = s$, then it satisfies the $\ell_1$ robust null space property for $S$ with constants $0 < \rho < 1$ and $\tau' := \tau\sqrt{s} > 0$.
Proof. For any $v \in \mathbb{C}^N$, $\|v_S\|_2 \le \frac{\rho}{\sqrt{s}}\|v_{\bar{S}}\|_1 + \tau\|Av\|$. Then, using the fact that $\|v_S\|_1 \le \sqrt{s}\|v_S\|_2$, we get $\|v_S\|_1 \le \rho\|v_{\bar{S}}\|_1 + \tau\sqrt{s}\|Av\|$.
Lemma 13 (Theorem 4.20 in [8]). If a matrix $A \in \mathbb{C}^{m\times N}$ satisfies the $\ell_1$ robust null space property (with respect to $\|\cdot\|$) with constants $0 < \rho < 1$ and $\tau > 0$ for $S \subseteq [N]$, then
$$\|z - x\|_1 \le \frac{1+\rho}{1-\rho}\big(\|z\|_1 - \|x\|_1 + 2\|x_{\bar{S}}\|_1\big) + \frac{2\tau}{1-\rho}\|A(z-x)\|$$
for all $z, x \in \mathbb{C}^N$.
Lemma 14 (Proposition 2.3 in [8]). For any $p > q > 0$ and $x \in \mathbb{C}^n$,
$$\inf_{z \in M_k} \|x - z\|_p \le \frac{1}{k^{1/q - 1/p}}\|x\|_q.$$
Proof of Theorem 3. Let $0 < \rho < 1$ be arbitrary. Since $F$ is a unitary matrix, for any $S \subseteq [n]$ and $v \in \mathbb{C}^n$, we have
$$\|v_S\|_2 \le \frac{\rho}{\sqrt{k}}\|v_{\bar{S}}\|_1 + \tau\|v\|_2 = \frac{\rho}{\sqrt{k}}\|v_{\bar{S}}\|_1 + \tau\|Fv\|_2, \qquad (10)$$
where $\tau = 1$. Therefore, $F$ satisfies the $\ell_2$ robust null space property for all $S \subseteq [n]$ with $\mathrm{card}(S) \le k$. Next, using Lemma 12 we get $\|v_S\|_1 \le \rho\|v_{\bar{S}}\|_1 + \tau\sqrt{k}\|Fv\|_2$ for all $v \in \mathbb{C}^n$. Now let $x^{\#} = \mathrm{BP}(y, F, \eta)$; then we know $\|x^{\#}\|_1 \le \|\hat{x}\|_1$, where $\hat{x}$ is $k$-sparse. Letting $S \subseteq [n]$ be the support of $\hat{x}$ and using the fact that $\|\hat{x}_{\bar{S}}\|_1 = 0$ together with Lemma 13, we get
$$\|x^{\#} - \hat{x}\|_1 \le \frac{1+\rho}{1-\rho}\big(\|x^{\#}\|_1 - \|\hat{x}\|_1 + 2\|\hat{x}_{\bar{S}}\|_1\big) + \frac{2\tau\sqrt{k}}{1-\rho}\|F(x^{\#} - \hat{x})\|_2 \le \frac{2\tau\sqrt{k}}{1-\rho}\|F(x^{\#} - \hat{x})\|_2 \le \frac{4\tau\sqrt{k}}{1-\rho}\|e\|_2 \le \frac{4\tau\sqrt{k}}{1-\rho}\eta.$$
Letting $\rho \to 0$ and recalling that $\tau = 1$ gives (5). Now let $S$ be the support of the $k$ largest entries of $x^{\#} - \hat{x}$. Note that $\|(x^{\#} - \hat{x})_{\bar{S}}\|_2 = \inf_{z \in M_k}\|(x^{\#} - \hat{x}) - z\|_2$. Then, using Lemma 14 and (10), we see that
$$\|x^{\#} - \hat{x}\|_2 \le \|(x^{\#} - \hat{x})_{\bar{S}}\|_2 + \|(x^{\#} - \hat{x})_S\|_2 \le \frac{1}{\sqrt{k}}\|x^{\#} - \hat{x}\|_1 + \frac{\rho}{\sqrt{k}}\|(x^{\#} - \hat{x})_{\bar{S}}\|_1 + \tau\|F(x^{\#} - \hat{x})\|_2 \le \frac{1+\rho}{\sqrt{k}}\|x^{\#} - \hat{x}\|_1 + 2\tau\eta \le \left(\frac{4\tau(1+\rho)}{1-\rho} + 2\tau\right)\eta.$$
Recalling $\tau = 1$ and letting $\rho \to 0$ gives the desired result.
Table 2: Recovery performance of Algorithm 1 on $\ell_0$-norm bounded noise.
Experiments
We first analyze how our recovery guarantees perform in practice (Section 7.1) and then show that CRD can be used to defend neural networks against $\ell_0$-norm attacks (Section 7.2) as well as $\ell_2$-norm attacks (Section 7.3). All of our experiments are conducted on the CIFAR-10 [13], MNIST [14], and Fashion-MNIST [28] datasets, with pixel values of each image normalized to lie in [0, 1]. For every experiment, we use the Discrete Cosine Transform (DCT) and the Inverse Discrete Cosine Transform (IDCT), denoted by the matrices $F \in \mathbb{R}^{n\times n}$ and $F^T \in \mathbb{R}^{n\times n}$ respectively. That is, for an adversarial image $y \in \mathbb{R}^{\sqrt{n}\times\sqrt{n}}$ such that $y = x + e$, we let $\hat{x} = Fx$ and $x = F^T\hat{x}$, where $x, \hat{x} \in \mathbb{R}^n$ and $e \in \mathbb{R}^n$ is the noise vector (bounded either in $\ell_0$- or $\ell_2$-norm). For an adversarial image $y \in \mathbb{R}^{\sqrt{n}\times\sqrt{n}\times c}$ that contains $c$ channels, we perform recovery on each channel independently by considering $y_m = x_m + e_m$, where $\hat{x}_m = Fx_m$ and $x_m = F^T\hat{x}_m$ for $m = 1, \ldots, c$. The value $k$ denotes the number of largest (in absolute value) DCT coefficients used for reconstruction of each channel, and the value $t$ denotes the $\ell_0$ noise budget for each channel.
We now outline the neural network architectures used for the experiments in Sections 7.2 and 7.3. For CIFAR-10, we use the network architecture of [10], while the architecture used for the MNIST and Fashion-MNIST datasets is provided in Table 1. We train our networks using the Adam optimizer for CIFAR-10 and the AdaDelta optimizer for MNIST and Fashion-MNIST. In both cases, we use a cross-entropy loss function. We implement the following training procedure: for every training image $x$, we first generate $\hat{x}_{h(k)} = (Fx)_{h(k)}$, and then reconstruct the image $\bar{x} = F^T\hat{x}_{h(k)}$. We then use both $x$ and $\bar{x}$ to train the network. For instance, in MNIST we get 60000 original training images and 60000 reconstructed training images, for a total of 120000 training images. The code to reproduce our experiments is available here: https://github.com/jasjeetIM/recovering_compressible_signals.
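The per-channel reconstruction step described above can be sketched as follows. This sketch assumes a 1-D orthonormal DCT acting on the flattened channel (mirroring $F \in \mathbb{R}^{n\times n}$ applied to the vectorized image); the function names are illustrative and not the authors' code.

import numpy as np
from scipy.fft import dct, idct  # orthonormal 1-D DCT / IDCT

def keep_top_k(coeffs, k):
    # (F x)_{h(k)}: keep the k largest-magnitude DCT coefficients, zero the rest.
    out = np.zeros_like(coeffs)
    idx = np.argsort(np.abs(coeffs))[-k:]
    out[idx] = coeffs[idx]
    return out

def reconstruct_image(img, k):
    # Apply x -> F^T (F x)_{h(k)} independently to each flattened channel
    # of an H x W (x C) image with pixel values in [0, 1].
    if img.ndim == 2:
        img = img[..., None]
    h, w, c = img.shape
    channels = []
    for m in range(c):
        coeffs = dct(img[..., m].ravel(), norm='ortho')
        channels.append(idct(keep_top_k(coeffs, k), norm='ortho').reshape(h, w))
    return np.clip(np.stack(channels, axis=-1).squeeze(), 0.0, 1.0)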
Recovery Error
Since recovery guarantees for Algorithm 1 and Algorithm 2 have been proved theoretically, our aim is to examine how close the recovery error is to the upper bound in practice. Each experiment is conducted on a subset of 500 data points sampled uniformly at random from the respective dataset. We first provide the experimental results for the case of $\ell_0$-norm bounded noise in Section 7.1.1 and then for the case of $\ell_2$-norm bounded noise in Section 7.1.2.
$\ell_0$ noise
For each data point x i ∈ R n , i = 1, 2, . . . , 500, we construct a noise vector e i ∈ R n as follows: we first sample an integer t i from a uniform distribution over the set {1, . . . , t}, where t is the allowed 0 noise budget. Next, we select an index set S t i ⊂ [n] uniformly at random, such that card(S t i ) = t i .
Orig. Acc. | OPA Acc. | Corr. Acc.
77.4% | 0.0% | 68.3%
Table 5: Effectiveness of CRD against OPA. The first column lists the accuracy of the network on original images, the OPA Acc. column shows the network's accuracy on adversarial images, and the Corr. Acc. column shows the accuracy of the network on images reconstructed using Algorithm 1.
Figure 2: Reconstruction quality of images using Algorithm 1. The first row shows the original images while the second row shows reconstructions from the largest 275 DCT coefficients recovered using Algorithm 1.
Then for each $j \in S_{t_i}$, we set $(e_i)_j = c_j$, where $c_j$ is sampled from the uniform distribution on $[0, 1)$, and $(e_i)_l = 0$ for $l \notin S_{t_i}$. We then set $y_i = x_i + e_i$ as the observed noisy vector. The first metric we report is
$$\delta_p := \frac{1}{500}\sum_{i=1}^{500}\|(x_i^{\#})_{h(k)} - (\hat{x}_i)_{h(k)}\|_p,$$
where $x_i^{\#}$ is the recovered vector for the noisy measurement $y_i$,² $(\hat{x}_i)_{h(k)} = (Fx_i)_{h(k)}$, and the average is taken over the 500 points sampled from the dataset. This measures the average magnitude of the recovery error for the respective algorithm in the $\ell_p$-norm. In order to relate this value to the upper bound on the recovery error, we also report
$$\Delta_p := \frac{1}{500}\sum_{i=1}^{500}\big(\Upsilon_i - \|(x_i^{\#})_{h(k)} - (\hat{x}_i)_{h(k)}\|_p\big),$$
where $\Upsilon_i$ is the guaranteed upper bound (as per Theorems 1 and 2) for $y_i$. Using $\delta_p$ and $\Delta_p$, we aim to capture how much smaller the recovery error is than the upper bound for these datasets. Finally, we also report $t_{avg} := \frac{1}{500}\sum_{i=1}^{500} t_i$.
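A short sketch of the noise construction and per-point error metric just described, under the stated sampling assumptions (the names here are illustrative):

import numpy as np

rng = np.random.default_rng(0)

def sample_l0_noise(n, t):
    # Draw t_i uniformly from {1, ..., t}, pick a random support of that size,
    # and fill it with entries sampled uniformly from [0, 1).
    t_i = int(rng.integers(1, t + 1))
    e = np.zeros(n)
    support = rng.choice(n, size=t_i, replace=False)
    e[support] = rng.random(t_i)
    return e, t_i

def recovery_error(x_rec_hk, x_hat_hk, p):
    # ||(x_i^#)_{h(k)} - (x_hat_i)_{h(k)}||_p for one data point; delta_p and
    # Delta_p average this quantity (and its gap to the theoretical bound)
    # over the 500 sampled points.
    return np.linalg.norm(x_rec_hk - x_hat_hk, ord=p)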
Recovery with Algorithm 1
We set k = 4 for MNIST and Fashion-MNIST and allow an $\ell_0$ noise budget of t = 3. For CIFAR-10, we set k = 5 and allow a noise budget of t = 3. That is, the number k of largest coefficients used in each experiment is roughly equal to the $\ell_0$ noise budget used. We note that the k values have been chosen to meet our computational constraints. As such, any other values that fit the hypotheses of Theorem 1 would work just as well. The results in Table 2 show that on average, the recovery error is well below the upper bound for each dataset. This is quantified by $\Delta_\infty$ and $\Delta_2$, which show a large difference between the upper bound and the observed error for all three datasets. We will utilize this observation in Section 7.2 when we show that recovery works well even when t is outside the theoretical constraints of Theorem 1.
Recovery with Algorithm 2
We implement Algorithm 2 using the open source library CVXPY [6]. We set k = 8 for MNIST and Fashion-MNIST and allow an $\ell_0$ noise budget of t = 8. For CIFAR-10, we set k = 10 and allow a noise budget of t = 8. We observe the results in Table 3 and note once again that the recovery error is well below the upper bound. This observation will also be useful in Section 7.2, where we will show that the recovery error of Algorithm 2 is small for values of t that are well outside the theoretical constraints of Theorem 2.
² Note that $x_i^{\#}$ is $\hat{x}^{[T]}$ in the statement of Theorem 1 and $\hat{x}^{\#}$ in the statement of Theorem 2.
$\ell_2$ noise
Now we consider the case when the noise vector $e_i$, $i = 1, \ldots, 500$, is only bounded in $\ell_2$-norm. This case is covered by the guarantees provided in Theorem 3. First we describe the procedure used to construct each noise vector. For each $e_i$, $i = 1, \ldots, 500$, we set $(e_i)_j = c_j$, where $c_j$ is sampled from the uniform distribution on $[0, 1)$. Since there is no restriction on how small k needs to be, we set k = 75 for CIFAR-10 and k = 40 for MNIST and Fashion-MNIST. We report $\delta_1$, $\delta_2$, $\Delta_1$, $\Delta_2$, and, since the noise budget is in $\ell_2$-norm, we also report $\ell_{2,avg} := \frac{1}{500}\sum_{i=1}^{500}\|e_i\|_2$. The results are shown in Table 4. As was the case in Section 7.1.1, the recovery error is well below the upper bound here as well. This observation will be useful in Section 7.3, where we are able to create high quality reconstructions for $\ell_2$-norm bounded attacks.
Making note of the results from Sections 7.1.1 and 7.1.2, we now show that CRD can be used to defend against $\ell_0$-norm and $\ell_2$-norm bounded adversarial inputs.
Defense against $\ell_0$-norm attacks
This section is organized as follows: first we examine CRD against the One Pixel Attack (OPA) [22] for CIFAR-10. We only test the attack on CIFAR-10 as it is most effective against natural images and does not work well on MNIST or Fashion-MNIST. We note that this attack satisfies the theoretical constraints on t provided in our guarantees, allowing us to test how well CRD works within our guarantees. Once we establish the effectiveness of CRD against OPA, we then test it against two other $\ell_0$-norm bounded attacks: the Carlini and Wagner (CW) $\ell_0$-norm attack [4] and the Jacobian-based Saliency Map Attack (JSMA) [19]. For the latter two attacks, we test CRD on all three datasets. Each experiment is conducted on a set of 1000 points sampled uniformly at random from the test set of the respective dataset.
One Pixel Attack
We first resize all CIFAR-10 images to 125 × 125 × 3 while maintaining aspect ratios to ensure that the data falls under the hypotheses of Theorem 1 even for large values of k. The OPA attack perturbs exactly one pixel of the image, leading to an $\ell_0$ noise budget of t = 3 per image. The $\ell_0$ noise budget of t = 3 allows us to use k = 275 for recovery with Algorithm 1. Even though OPA only perturbs one pixel per image, Table 5 shows that it is very effective against natural images and forces the network to misclassify all correctly classified inputs. Figure 1 shows that adversarial images created using OPA are visually almost indistinguishable from the original images. We test the performance of CRD in two ways: (a) reconstruction quality and (b) network performance on reconstructed images.
In order to analyse the reconstruction quality of Algorithm 1, we do the following: for each test image, we use OPA to perturb the image and then use Algorithm 1 to approximate its largest (in absolute value) k = 275 DCT coefficients. We then perform the IDCT on these recovered coefficients to generate reconstructed images. The reconstructed images from Algorithm 1 can be seen in the second row of Figure 2. These reconstructions are then compared to the original images presented in the first row of the same figure.
Noting that Algorithm 1 leads to high quality reconstructions, we now test whether network accuracy improves on these reconstructed images. To do so, we feed these reconstructed images as input to the network and report its accuracy in Table 5. We note that network performance does indeed improve, as network accuracy goes from 0.0% to 68.3% using Algorithm 1. Therefore, we conclude that CRD provides a substantial improvement in accuracy against OPA.
Table 6: Network performance on the original inputs, adversarial inputs, and the inputs corrected using CRD. Here the $t_{avg}$ column lists the average adversarial budget for each attack, the Orig. Acc. column lists the accuracy of the network on the original inputs, the Acc. columns show the accuracy on adversarial inputs, and the IHT-Acc. and BP-Acc. columns list the accuracy of the network on inputs that have been corrected using Algorithm 1 and Algorithm 2, respectively.
CW-$\ell_0$ Attack and JSMA
Having established the effectiveness of CRD against OPA, we move onto the CW 0 -norm attack and JSMA. Since these two attacks do not necessarily satisfy the required hypotheses on t for Theorem 1 and Theorem 2, we call upon the results of Section 7.1.1 to test if CRD is still able to defend the network against these attacks. For instance, in the case of the CW-0 attack, there is no way to pre-specify a fixed adversarial noise budget since the attack iteratively reduces the number of perturbed pixels until it is no longer effective. For JSMA one can pre-specify an adversarial budget, but as noted in [1], JSMA is only effective with larger values of t. However, even when t is much larger than the hypotheses of Theorem 1 and Theorem 2, we find that CRD is still able to defend the network. We observe that this is related to the behaviour of the RIP of a matrix for "most" 3 vectors as opposed to the RIP for all vectors, and leave a more rigorous analysis for a follow up work.
To begin our analysis, we show adversarial images for MNIST and Fashion-MNIST created by CW-0 and JSMA in Figure 3. The first row contains the original test images while the second and the third rows show the adversarial images. We show adversarial images for the CIFAR-10 dataset in Figure 4. Next, we follow the procedure described in Section 7.2.1 to analyze the quality of reconstructions for Algorithm 1 and Algorithm 2. For MNIST and Fashion-MNIST, we show the reconstructions of Algorithm 1 in Figure 5 and for Algorithm 2 in Figure 6. For CIFAR-10, we show the reconstructions for Algorithm 1 in Figure 7 and for Algorithm 2 in Figure 8. In each case it can be seen that both algorithms provide high quality reconstructions for values of t that are well outside the hypotheses required by Theorem 1 and Theorem 2. We report these t values and the improvement in network performance on reconstructed adversarial images using CRD in Table 6.
Note that while the network accuracy for all datasets improves substantially using CRD, for Algorithm 1 the network accuracy on reconstructed images for CIFAR-10 remains considerably lower than the accuracy on original images. We observe that a possible reason may be the difference in properties of the DCT coefficients of MNIST/Fashion-MNIST data versus data from CIFAR-10. Consider the definition of $(k, \epsilon)$-sparse adapted from [1]: a $(k, \epsilon)$-sparse vector $x \in \mathbb{C}^n$ satisfies the constraint $\|x_{t(k)}\|_2 \le \epsilon\|x_{h(k)}\|_2$. The point of this definition is that smaller values of $\epsilon$ mean the vector $x$ is closer to being $k$-sparse. We notice that the average value of $\epsilon$ for the DCT coefficients of CIFAR-10 is approximately 0.30, while that for MNIST is 1.06 and that for Fashion-MNIST is approximately 0.89, where k = 0.05n. Based on our limited experimental results, it may be hypothesized that Algorithm 1 works well for larger values of $\epsilon$ when k and t do not fit the constraints of Theorem 1. However, a deeper investigation is required to understand what makes Algorithm 1 perform poorly for CIFAR-10.
Defense against $\ell_2$-norm attacks
In the case of $\ell_2$-norm bounded attacks, we use the CW $\ell_2$-norm attack [4] and the Deepfool attack [18] as they have been shown to be the most powerful. We note that Theorem 3 does not impose any restrictions on k or t, and therefore the guarantees of equations (5) and (6) are applicable for recovery in all experiments of this section. Figure 9 shows examples of each attack for the CIFAR-10 dataset, while adversarial images for MNIST and Fashion-MNIST are presented in Figure 10.
The reconstruction quality for MNIST and Fashion-MNIST is shown in Figure 11 and for CIFAR-10 in Figure 12. It can be noted that reconstruction using Algorithm 2 is of high quality for all three datasets. In order to check whether this high quality reconstruction also leads to improved network accuracy, we test each network on images reconstructed using Algorithm 2. We report the results in Table 7 and note that Algorithm 2 provides a substantial improvement in network accuracy for each dataset and each attack method used. We can conclude that CRD is able to defend neural networks against $\ell_2$-norm bounded attacks.
Table 7: Accuracy of our network on the original inputs, adversarial inputs, and the inputs corrected using CRD. Here the $\ell_{2,avg}$ column lists the average $\ell_2$-norm of the attack vector, the Acc. columns list the accuracy of the network on the original and adversarial inputs, and the Corr. Acc. column lists the accuracy of the network once the inputs have been corrected using CRD.
Figure 10: Adversarial images for the MNIST and Fashion-MNIST datasets for $\ell_2$-norm bounded attacks. The first row lists the original images for the MNIST and Fashion-MNIST datasets. The second row shows adversarial images created using the CW $\ell_2$-norm attack and the third row shows adversarial images created using the Deepfool attack.
Figure 11: Reconstruction from adversarial images using Algorithm 2. The first row shows the original images while the second and third rows show the reconstruction of the adversarial images after recovering the largest 40 coefficients using Algorithm 2.
7.4 Which recovery algorithm to use for $\ell_0$-norm attacks
As shown in Section 7.1.1, Algorithm 1 and Algorithm 2 lead to high quality reconstructions for $\ell_0$-norm bounded attacks. Hence, it is conceivable that CRD using either algorithm should be able to provide a good defense. However, we found that reconstructions using Algorithm 2 led to better network accuracy for CIFAR-10 than Algorithm 1, while Algorithm 1 outperformed Algorithm 2 for MNIST and Fashion-MNIST. Therefore, the algorithm to use may depend on the dataset in question. The next question to examine is which algorithm is faster in practice. Since Algorithm 2 is not technically an algorithm, its runtime depends on the actual method used to solve the optimization problem. For instance, we use Second Order Cone Programming (SOCP) from CVXPY [6] for solving the minimization problem in Algorithm 2. In our experiments, we noticed that the runtime of Algorithm 2 slows considerably for larger values of n. Algorithm 1 does not face this issue (there is a slowdown, but it is much smaller than for Algorithm 2). Therefore, if speed is important, it may be beneficial to use Algorithm 1 as opposed to Algorithm 2 for recovery in the case of $\ell_0$-norm attacks.
Conclusion
We provided recovery guarantees for corrupted signals in the case of $\ell_0$-norm bounded and $\ell_2$-norm bounded noise. We then experimentally verified these guarantees and showed that, for the datasets used, the recovery error was considerably lower than the upper bounds of our theorems. We were able to utilize these observations in CRD and improve the performance of neural networks substantially in the case of $\ell_0$-norm bounded noise as well as $\ell_2$-norm bounded noise. While $\ell_0$-norm attacks don't necessarily satisfy the constraints required for our guarantees, we showed that CRD is still able to provide a good defense for values of t much larger than allowed by Theorems 1 and 2.
In the case of $\ell_2$-norm bounded adversaries, the guarantees of Theorem 3 were applicable in all experiments and CRD was shown to improve network performance for all attacks. | 8,526 |
1907.06553 | 2972864268 | Modeling error or external disturbances can severely degrade the performance of Model Predictive Control (MPC) in real-world scenarios. Robust MPC (RMPC) addresses this limitation by optimizing over feedback policies but at the expense of increased computational complexity. Tube MPC is an approximate solution strategy in which a robust controller, designed offline, keeps the system in an invariant tube around a desired nominal trajectory, generated online. Naturally, this decomposition is suboptimal, especially for systems with changing objectives or operating conditions. In addition, many tube MPC approaches are unable to capture state-dependent uncertainty due to the complexity of calculating invariant tubes, resulting in overly-conservative approximations. This work presents the Dynamic Tube MPC (DTMPC) framework for nonlinear systems where both the tube geometry and open-loop trajectory are optimized simultaneously. By using boundary layer sliding control, the tube geometry can be expressed as a simple relation between control parameters and uncertainty bound; enabling the tube geometry dynamics to be added to the nominal MPC optimization with minimal increase in computational complexity. In addition, DTMPC is able to leverage state-dependent uncertainty to reduce conservativeness and improve optimization feasibility. DTMPC is demonstrated to robustly perform obstacle avoidance and modify the tube geometry in response to obstacle proximity. | A number of works have been published on the stability, feasibility, and performance of linear tube MPC @cite_10 @cite_13 @cite_8 . While this is an effective strategy to achieve robustness, decoupling the nominal MPC problem and controller design is suboptimal. Rakovi @cite_17 showed that the region of attraction can be enlarged by parameterizing the problem with the open-loop trajectory and tube size. The authors presented the homothetic tube MPC (HTMPC) algorithm that treated the state and control tubes as homothetic copies of a fixed cross-section shape, enabling the problem to be parameterized by the tube’s centers (i.e., open-loop trajectory) and a cross-section scaling factor. The work was extended to tubes with varying shapes, known as elastic tube MPC (ETMPC), but at the expense of computational complexity @cite_12 . Both HTMPC and ETMPC possess strong theoretical properties and have the potential to significantly improve performance but a nonlinear extension has yet to be developed. | {
"abstract": [
"Abstract Motivated by requirements in the process industries, the largest user of model predictive control, we re-examine some features of recent research on this topic. We suggest that some proposals are too complex and computationally demanding for application in this area and make some tentative proposals for research on robust and stochastic model predictive control to aid applicability",
"This paper recalls a few past achievements in Model Predictive Control, gives an overview of some current developments and suggests a few avenues for future research.",
"",
"This paper introduces elastic tube model predictive control (MPC) synthesis. The proposed framework is a natural generalization of the rigid and homothetic tube MPC design methods. The cross-sections of the employed state and control tubes are allowed to change more elastically, while the local component of the tubes control policy is permitted to take a more general form. The related stabilizing terminal conditions are also adequately generalized in order to take advantage of more flexible tubes and tubes control policy parameterizations. These novel features result in an improved tube MPC at the cost of a manageable increase in computational complexity.",
"The robust model predictive control for constrained linear discrete time systems is solved through the development of a homothetic tube model predictive control synthesis method. The method employs several novel features including a more general parameterization of the state and control tubes based on homothety and invariance, a more flexible form of the terminal constraint set and a relaxation of the controlled dynamics of the sets that define the state and control tubes. Under natural assumptions, the proposed method is computationally efficient and it induces strong system theoretic properties."
],
"cite_N": [
"@cite_8",
"@cite_10",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2346460013",
"2028678875",
"",
"2485885111",
"2080256219"
]
} | Dynamic Tube MPC for Nonlinear Systems | Model predictive control (MPC) has become a core control strategy because of its natural ability to handle constraints and balance competing objectives. Heavy reliance on a model though makes MPC susceptible to modeling error and external disturbances, often leading to poor performance or instability. Robust MPC (RMPC) addresses this limitation (at the expense of additional computational complexity) by optimizing over control policies instead of open-loop control actions. Tube MPC is a tractable alternative that decomposes RMPC into an offline robust controller design and online open-loop MPC problem. However, this decoupled design strategy restricts the tube geometry (i.e., feedback controller) to be fixed for all operating conditions, which can lead to suboptimal performance. This article presents a framework for nonlinear systems where the tube geometry and openloop reference trajectory are designed simultaneously online, giving the optimization an additional degree of freedom to satisfy constraints or changing objectives.
Tube MPC for nonlinear systems has been an active area of research. For example, hierarchical MPC [1], reachability theory [2], sliding mode control [3], [4], sum-of-square optimization [5], and Control Contraction Metrics [6] been recently used in nonlinear tube MPC. These approaches try to maximize robustness by minimizing tube size given control constraints and bounds on uncertainty. However, minimizing tube size typically results in a high-bandwidth controller that responds aggressively to measurement noise or external disturbances. For mobile systems that use onboard sensing for estimation or perception, this type of response can severely degrade performance or cause a catastrophic failure. Further, the performance reduction often depends on the current operating environment so modifying the tube geometry online would be advantageous. While there is a precedent for optimizing tube geometry in linear MPC [7], [8], the relationship between tube geometry and control parameters for nonlinear systems is often too complex to put in a form suitable for real-time optimization. The approach described herein circumvents this issue by providing a simple and exact description of how the tube geometry, control parameters, and uncertainty are related, enabling the tube geometry to be optimized in real-time.
The primary contribution of this work is a tube MPC framework for nonlinear systems that simultaneously optimizes tube geometry and open-loop reference trajectories in the presence of uncertainty. The proposed framework leverages the simplicity and strong robustness properties of time-varying boundary layer sliding control [9] to establish a connection between tube geometry, control parameters, and uncertainty. Specifically, the tube geometry can be described by a simple first-order differential equation that is a function of control bandwidth and uncertainty bound. This allows the development of a framework with several desirable properties. First, the tube geometry can be easily optimized, with minimal increase in computational complexity, by treating the control bandwidth as a decision variable and augmenting the state vector with the tube geometry dynamics. Second, the uncertainty bound in the tube dynamics can be made state-dependent, allowing the optimizer to make smarter decisions about which states to avoid given the system's current state and proximity to constraints. And third, less conservative tubes can be constructed by combining the tube and tracking error dynamics. Simulation results demonstrate DTMPC's ability to optimize the tube geometry, via modulating control bandwidth and/or utilize knowledge of statedependent uncertainty, to robustly avoid obstacles.
III. PROBLEM FORMULATION
Consider a nonlinear, time-invariant, and control affine system given by (omitting the time argument)
$$\dot{x} = f(x) + b(x)u + d, \qquad (1)$$
where $x \in \mathbb{R}^n$ is the state of the system, $u \in \mathbb{R}^m$ is the control input, and $d \in \mathbb{R}^n$ is an external disturbance.
Assumption 1. The dynamics $f(x)$ are unknown, but the modeling error is bounded by a known, state-dependent function, i.e., $|f(x) - \hat{f}(x)| \le \Delta(x)$, where $\hat{f}(x)$ is the nominal model.
Note that the model error bound in Assumption 1 is state-dependent, which can be leveraged to construct less conservative tubes.
Assumption 2. The disturbance d belongs to a closed, bounded, and connected set D (i.e., D := {d ∈ R n : |d| ≤ D}) and is in the span of the control input matrix (i.e., d ∈ span (b(x))).
The standard RMPC formulation involves a minimax optimization to construct a feedback policy $\pi : X \times \mathbb{R} \to U$, where $x \in X$ and $u \in U$ are the allowable states and control inputs, respectively. However, optimizing over arbitrary functions is not tractable, and discretization suffers from the curse of dimensionality. The standard approach taken in tube MPC [10] is to change the decision variable from the control policy $\pi$ to the open-loop control input $u^*$. In order to achieve this re-parameterization, the following assumption is made about the structure of the control policy $\pi$.
Assumption 3. The control policy $\pi$ takes the form $\pi = u^* + \kappa(x, x^*)$, where $u^*$ and $x^*$ are the open-loop input and reference trajectory, respectively.
In the tube MPC literature, $\kappa$ is known as the ancillary controller and is typically designed offline. The role of the ancillary controller is to ensure the state $x$ remains in a robust control invariant (RCI) tube around the nominal trajectory $x^*$.
Definition 1. Let $X$ denote the set of allowable states and let $\tilde{x} := x - x^*$. The set $\Omega \subset X$ is a RCI tube if there exists an ancillary controller $\kappa(x, x^*)$ such that if $\tilde{x}(t_0) \in \Omega$, then, for all realizations of the disturbance and modeling error, $\tilde{x}(t) \in \Omega$ for all $t \ge t_0$.
Calculating a RCI tube for a given ancillary controller can be difficult for nonlinear systems. Unsurprisingly, the chosen methodology for synthesizing the ancillary controller can dramatically influence the complexity of calculating the tube geometry. Ideally, the controller and tube geometry could be parameterized such that an explicit relationship between the two can be derived, enabling the controller and tube geometry to be designed online within the optimization. Also, the control strategy should be able to capture state-dependent uncertainty and how it impacts the tube geometry to reduce conservativeness. While it may seem infeasible to find such a control synthesis strategy, Section IV will show that boundary layer sliding control possesses both properties.
IV. BOUNDARY LAYER SLIDING CONTROL
A. Overview
This section reviews time-varying boundary layer sliding control [9], [16], provides analysis supporting its use as an ancillary controller, and shows how the DTMPC framework leverages its properties. As reviewed in Section II, sliding mode control has been extensively used for nonlinear tube MPC because of its simplicity and strong robustness properties. Unlike other control strategies, sliding mode control completely cancels any bounded modeling error or external disturbance (reducing the RCI tube to zero). However, complete cancellation comes at the cost of high-frequency discontinuous control, making it impractical for many real systems; a number of versions that ensure continuity in the control signal have since been developed. Note that the boundary layer controller was originally developed in [9] and is only presented here for completeness. Before proceeding, the following assumption is made.
Assumption 4. The system given by (1) has the same number of outputs to be controlled as inputs. More precisely, the dynamics can be expressed as
$$x_i^{(n_i)} = f_i(x) + \sum_{j=1}^m b_{ij}(x)u_j + d_i, \qquad i = 1, \ldots, m. \qquad (2)$$
Note that assumption 4 requires system (1) to be either feedback linearizable or minimum phase. While many systems fall into one of these categories, future work will extend DTMPC to more general nonlinear systems.
B. Sliding Control
Let $\tilde{x}_i := x_i - x_i^*$ be the tracking error for output $x_i$. Then, for $\lambda_i > 0$, the sliding variable $s_i$ for output $x_i$ is defined as
$$s_i = \left(\frac{d}{dt} + \lambda_i\right)^{n_i - 1}\tilde{x}_i = \tilde{x}_i^{(n_i-1)} + \cdots + \lambda_i^{n_i-1}\tilde{x}_i = x_i^{(n_i-1)} - x_{r_i}^{(n_i-1)}, \qquad (3)$$
where
$$x_{r_i}^{(n_i-1)} = x_i^{*(n_i-1)} - \sum_{k=1}^{n_i-1}\binom{n_i-1}{k-1}\lambda_i^{n_i-k}\tilde{x}_i^{(k-1)}. \qquad (4)$$
In sliding mode control, a sliding manifold S i is defined such that s i = 0 for all time once the manifold is reached. This condition guarantees the tracking error goes to zero exponentially via (3). It can be shown that a discontinuous controller is required to ensure the manifold S i is reached in finite time and is invariant to uncertainty [16]. However, high-frequency discontinuous control can, among other things, excite unmodeled high-frequency dynamics and shorten actuator life span.
One strategy to smooth the control input is to introduce a boundary layer around the switching surface. Specifically, let the boundary layer be defined as $B_i := \{x : |s_i| \le \Phi_i\}$, where $\Phi_i$ is the boundary layer thickness. If $\Phi_i$ is time varying, then the boundary layer can be made attractive if the following differential inequality is satisfied:
$$\frac{1}{2}\frac{d}{dt}s_i^2 \le (\dot{\Phi}_i - \eta_i)|s_i|, \qquad (5)$$
where $\eta_i$ dictates the convergence rate to the sliding surface.
Differentiating (3),
$$\dot{s}_i = x_i^{(n_i)} - x_{r_i}^{(n_i)} = f_i(x) + \sum_{j=1}^m b_{ij}(x)u_j + d_i - x_{r_i}^{(n_i)}. \qquad (6)$$
Stacking (6) for each output, the vector form is obtained:
$$\dot{s} = F(x) + B(x)u + d - x_r^{(n)}. \qquad (7)$$
Note that $F$ and $B$ are stacked versions of the dynamics and input matrix, respectively, that correspond to the output variables. If the output variables are chosen to be the full state vector (i.e., state feedback linearization), then $F$ and $B$ simply become the dynamics and input matrix in (1). Let the controller take the form
$$u = B(x)^{-1}\left(x_r^{(n)} - \hat{F}(x) - K(x)\,\mathrm{sat}(s/\Phi)\right), \qquad (8)$$
where $\hat{F}(x)$ is the nominal model, $\mathrm{sat}(\cdot)$ is the saturation function, and the division is element-wise. Then, for $|s| > \Phi$, the boundary layer is attractive if
$$K(x) = \Delta(x) + D + \eta - \dot{\Phi}. \qquad (9)$$
Additional information can be inferred by considering the sliding variable dynamics inside the boundary layer. Again substituting (8) into (7), with $|s| \le \Phi$,
$$\dot{s} = -\frac{K(x)}{\Phi}s + F(x) - \hat{F}(x) + d, \qquad (10)$$
where again the division is element-wise. Alternatively, (10) can be written as
$$\dot{s} = -\frac{K(x^*)}{\Phi}s + F(x^*) - \hat{F}(x^*) + d + O(\tilde{x}), \qquad (11)$$
which is a first-order filter with cutoff frequency $K(x^*)/\Phi$. Let $\alpha$ be the desired cutoff frequency; then, leveraging (9), one obtains
$$\frac{\Delta(x^*) + D + \eta - \dot{\Phi}}{\Phi} = \alpha, \qquad (12)$$
or
$$\dot{\Phi} = -\alpha\Phi + \Delta(x^*) + D + \eta. \qquad (13)$$
Thus, the final control law is given by (8), (9), and (13).
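To make the roles of (8), (9), and (13) concrete, the following is a minimal simulation sketch for a scalar second-order system with a constant uncertainty bound; the dynamics, gains, and reference trajectory are illustrative assumptions, not the systems studied later in the paper.

import numpy as np

# Scalar system x_ddot = f(x, v) + u + d with |f - f_hat| <= Delta and |d| <= D.
lam, alpha, eta = 2.0, 5.0, 0.1        # sliding pole, bandwidth, reaching margin
Delta, D, dt = 0.3, 0.2, 1e-3

def f_true(x, v):   # "unknown" dynamics
    return -0.5 * v * abs(v) + Delta * np.sin(x)

def f_hat(x, v):    # nominal model
    return -0.5 * v * abs(v)

x, v, Phi = 0.0, 1.0, 0.5              # start on the reference; Phi converges via (13)
inside = []
for i in range(int(10.0 / dt)):
    t = i * dt
    xd, vd, ad = np.sin(t), np.cos(t), -np.sin(t)     # reference trajectory
    e, edot = x - xd, v - vd
    s = edot + lam * e                                # sliding variable (3)
    Phi_dot = -alpha * Phi + Delta + D + eta          # boundary layer dynamics (13)
    K = Delta + D + eta - Phi_dot                     # gain (9)
    u = ad - lam * edot - f_hat(x, v) - K * np.clip(s / Phi, -1.0, 1.0)  # control (8)
    d = D * np.sin(3.0 * t)                           # bounded disturbance
    x, v = x + v * dt, v + (f_true(x, v) + u + d) * dt
    Phi += Phi_dot * dt
    inside.append(abs(s) <= Phi + 1e-9)
print("fraction of time with |s| <= Phi:", np.mean(inside))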
C. Discussion
The boundary layer sliding controller in (8) allows us to establish several key properties at the core of DTMPC.
Theorem 1 (RCI Tube). Let $\tilde{z}_i = [\tilde{x}_i\ \dot{\tilde{x}}_i\ \cdots]^T$ be the error vector for output $\tilde{x}_i$. Boundary layer control induces a robust control invariant tube $\Omega_i$ where the tube geometry is given by
$$\Omega_i(t) \le e^{A_{c,i}(t-t_0)}\Omega_i(t_0) + \int_{t_0}^t e^{A_{c,i}(t-\tau)}B_{c,i}\Phi_i(\tau)\,d\tau, \qquad (14)$$
where $A_{c,i}$ and $B_{c,i}$ are found by putting (3) into controllable canonical form.
Proof. Recalling the definition of $s_i$ from (3), the error dynamics are given by the linear differential equation
$$\tilde{x}_i^{(n_i-1)} + \cdots + \lambda_i^{n_i-1}\tilde{x}_i = s_i. \qquad (15)$$
With the error vector $\tilde{z}_i = [\tilde{x}_i\ \dot{\tilde{x}}_i\ \cdots]^T$ and putting (15) into controllable canonical form, the solution to (15) is
$$\tilde{z}_i(t) = e^{A_{c,i}(t-t_0)}\tilde{z}_i(t_0) + \int_{t_0}^t e^{A_{c,i}(t-\tau)}B_{c,i}\,s_i(\tau)\,d\tau. \qquad (16)$$
Taking the element-wise absolute value $|\cdot|$, setting $\Omega_i(t) = |\tilde{z}_i(t)|$, and noting $|s_i| \le \Phi_i$, (14) is obtained. Thus, by Definition 1, $\Omega_i$ is a RCI tube since the error vector $\tilde{z}_i$ is bounded.
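As an illustration of (14) for the common case $n_i = 2$, where the error filter $\dot{\tilde{x}}_i = -\lambda_i\tilde{x}_i + s_i$ is first order, the tube cross-section can be propagated numerically alongside $\Phi_i$; the helper below is a sketch with illustrative names.

import numpy as np

def propagate_tube(phi, lam, omega0, dt):
    # Upper bound on |x_tilde(t)| from Theorem 1 with n_i = 2:
    # Omega_dot = -lam * Omega + Phi,  Omega(t0) = |x_tilde(t0)|.
    omega = np.empty_like(phi)
    omega[0] = omega0
    for i in range(1, len(phi)):
        omega[i] = omega[i - 1] + dt * (-lam * omega[i - 1] + phi[i - 1])
    return omega

# For a constant boundary layer thickness Phi, Omega converges to Phi / lam:
phi = np.full(5000, 0.12)
print(propagate_tube(phi, lam=2.0, omega0=0.0, dt=1e-3)[-1])   # ~0.06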
Theorem 1 proves that the geometry of the RCI tube Ω i is uniquely described by the boundary layer thickness Φ i . Using the terminology introduced by Rakovic et al., the tubes in our approach are both homothetic and elastic. For this reason, and the ability to capture state-dependent uncertainty, the approach developed here is called Dynamic Tube MPC. Further, as briefly discussed in [6], a tighter geometry can be obtained if the current (as opposed to the predicted) tracking error is used in (14).
The importance of (13) and (14) cannot be overstated. They give a precise description of how the tube geometry changes with the level of uncertainty (from the model or otherwise). This is an incredibly useful relation for constructing tubes that are not overly conservative since, in most cases, the model error bound is typically picked to be a large constant because of the difficulty/inability to establish a relation like (13). By letting the uncertainty be state-dependent, the controller and the MPC optimizer (to be discussed in Section V) can leverage all the available information to maximize performance. This further underlines the importance of acquiring a high-fidelity model to reduce uncertainty and make the tube as small as possible without using high-bandwidth control.
Another interesting aspect of (13) is the choice of the cutoff frequency α. In general, α and λ are picked based on control-bandwidth requirements, such as actuator limits or preventing excitation of higher-order dynamics. It is clear from (13) that a larger α produces a smaller boundary layer thickness (i.e., high-bandwidth control leads to compact tubes). However, from (11), increasing the bandwidth also increases the influence of the uncertainty. Hence, the bandwidth should change depending on the current objective and proximity to state/control constraints (see Section V).
V. DYNAMIC TUBE MPC
A. Overview
This section presents the DTMPC algorithm and discusses its properties. DTMPC is a unique algorithm because of its ability to change the tube geometry to meet changing objectives and to leverage state-dependent uncertainty to maximize performance. This section first presents a constraint tightening procedure necessary to prevent constraint violation due to uncertainty. Next, optimizing the tube geometry by adding the control bandwidth as a decision variable is discussed. Lastly, the non-convex formulation of DTMPC is presented. Before proceeding, the following assumption is made about the form of the state and actuator constraints.
Assumption 5. The state and actuator constraints take the form
$$\|P_x x + q_x\| \le c_x, \qquad \|P_u u + q_u\| \le c_u, \qquad (17)$$
where $\|\cdot\|$ is the 2-norm.
Many physical systems posses these type of constrains so the above assumption is not overly restrictive.
B. Constraint Tightening
State and actuator constraints must be modified to account for the nonzero tracking error and control input caused by model error and disturbances. The following corollary establishes the modified state constraint.
Corollary 1 (Tightened State Constraint). Assume the control law (8) is used as an ancillary controller with associated RCI tube $B$ and bounded tracking error $|\tilde{x}|$. Then, the following modified state constraint
$$\|P_x x^* + q_x\| \le c_x - \|P_x\tilde{x}\| \qquad (18)$$
guarantees, for all realizations of the uncertainty, that the true constraint is satisfied.
Proof. Recall that Theorem 1 established that the boundary layer controller induces a RCI tube with geometry given by (14). Then, the state is always upper bounded by $x \le x^* + |\tilde{x}|$. Substituting this bound into the state constraint (17) and using the triangle inequality, the result is obtained.
Tightening the actuator constraints is more complicated since the control law in (8) depends on the current state x. However, the tracking error bound can be used to obtain an upper bound on the control input that is only a function of the boundary layer thickness, desired state, and dynamics. It is helpful to put the controller into a more useful form for the following theorem
$$u = B(x)^{-1}\left(x^{*(n)} - \hat{F}(x) - \sum_{k=1}^{n-1}\binom{n-1}{k-1}\lambda^{n-k}\tilde{x}^{(k)} - K(x)\,\mathrm{sat}(s/\Phi)\right), \qquad (19)$$
where the first term is the feedforward (and hence the decision variable in the optimization) and the last three are the feedback terms.
Theorem 2 (Control Input Upper Bound). Assume that the control law is given by (19). Then, the control input is upper bounded, for all realizations of the uncertainty, by
$$u \le \bar{B}^{-1}\left(x^{*(n)} + \bar{F} + \sum_{k=1}^{n-1}\binom{n-1}{k-1}\lambda^{n-k}|\tilde{x}^{(k)}| + \bar{K}\right), \qquad (20)$$
where
$$\bar{B}^{-1} = \max\{B^{-1}(\underline{x}),\ B^{-1}(\bar{x})\}, \qquad (21)$$
$$\bar{F} = \max\{\hat{F}(\underline{x}),\ \hat{F}(\bar{x})\}, \qquad (22)$$
$$\bar{K} = \max\{K(\underline{x}),\ K(\bar{x})\}, \qquad (23)$$
with $\underline{x} := x^* - |\tilde{x}|$, $\bar{x} := x^* + |\tilde{x}|$, and $\max\{\cdot\}$ the element-wise maximum.
Proof. The tracking error bound can be leveraged to eliminate the state-dependency in (19). Specifically, the state is bounded by
$$x^* - |\tilde{x}| \le x \le x^* + |\tilde{x}|, \qquad (24)$$
where $|\tilde{x}|$ is the solution to (14) when equality is imposed. It is clear from (19) that, to upper bound $u$, the inverse of the input matrix $B^{-1}$ and the last three feedback terms should be maximized. Define $\underline{x} := x^* - |\tilde{x}|$ and $\bar{x} := x^* + |\tilde{x}|$; then, using (24), each term in (19) can be upper bounded by evaluating at $\underline{x}$ and $\bar{x}$ and taking the maximum, resulting in Eqs. (21) to (23) and hence (20).
The bound established by Theorem 2 can be put into a more concise form
$$u \le \bar{B}^{-1}[u^* + \bar{u}_{fb}], \qquad (25)$$
where $u^* := x^{*(n)}$ and $\bar{u}_{fb}$ is the sum of the last three terms in (20). Using Theorem 2, the following corollary establishes the tightened actuator constraint.
Corollary 2 (Tightened Actuator Constraint). Assume the control law (8) is used as an ancillary controller with associated RCI tube $B$ and upper bound $\bar{u}_{fb}$ on the input due to feedback. Then, the following modified actuator constraint
$$\|P_u\bar{B}^{-1}u^* + q_u\| \le c_u - \|P_u\bar{B}^{-1}\bar{u}_{fb}\| \qquad (26)$$
guarantees, for all realizations of the uncertainty, that the true constraint is satisfied.
Proof. Theorem 2 established the upper bound on the control input to be $u \le \bar{B}^{-1}[u^* + \bar{u}_{fb}]$. Substituting this bound into the actuator constraint (17) and using the triangle inequality, the result is obtained.
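The tightened right-hand sides in (18) and (26) can be evaluated numerically from the tube bound. The sketch below turns the element-wise error bound into a conservative norm bound via $\|P\tilde{x}\| \le \|\,|P|\,|\tilde{x}|\,\|$; this is an implementation choice layered on top of the corollaries rather than something specified in the text, and all names are illustrative.

import numpy as np

def tightened_state_rhs(P_x, c_x, x_err_bound):
    # c_x - ||P_x x_tilde||, with the worst case over the tube taken as
    # || |P_x| |x_tilde| || using the element-wise bound |x_tilde| from (14).
    return c_x - np.linalg.norm(np.abs(P_x) @ x_err_bound)

def tightened_input_rhs(P_u, c_u, B_bar_inv, u_fb_bound):
    # c_u - ||P_u B_bar^{-1} u_fb_bar|| as in (26), evaluated conservatively.
    return c_u - np.linalg.norm(np.abs(P_u @ B_bar_inv) @ u_fb_bound)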
C. Optimized Tube Geometry
For many autonomous systems, the ability to react to changing operating conditions is crucial for maximizing performance. For instance, a UAV performing obstacle avoidance should modify the aggressiveness of the controller based on the current obstacle density to minimize expended energy. Formally, the tube geometry must be added as a decision variable in the optimization to achieve this behavior. DTMPC is able to optimize the tube geometry because of the simple relationship between the tube geometry, control bandwidth, and level of uncertainty given by (13). This is one of the distinguishing features of DTMPC since other state-of-the-art nonlinear tube MPC algorithms are not able to establish an explicit relationship like (13).
In Section IV, it was shown that the control bandwidth $\alpha$ is responsible for how the uncertainty affects the sliding variable $s$. Subsequently, the choice of $\alpha$ influences the tube geometry (via (13)) and the control gain (via (9)). In order to maintain continuity in the control signal, the tube geometry dynamics are augmented such that $\alpha$ and $\Phi$ remain smooth. More precisely, the augmented tube dynamics are
$$\dot{\Phi} = -\alpha\Phi + \Delta(x^*) + D + \eta, \qquad \dot{\alpha} = v, \qquad (27)$$
where v ∈ V is an artificial input that will serve as an additional decision variable in the optimization. It is easy to show that the above set of differential equations is stable so long as α remains positive.
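In a direct-transcription implementation, (27) simply adds two states to the prediction model. The forward-Euler step below is a sketch of that augmentation; the callables f_nom and delta, and all names, are placeholders rather than the authors' implementation.

def augmented_step(x_nom, phi, alpha, u, v, dt, f_nom, delta, D_bound, eta):
    # One Euler step of the nominal dynamics augmented with the tube states (27):
    #   Phi_dot = -alpha * Phi + Delta(x*) + D + eta,   alpha_dot = v.
    x_next = x_nom + dt * f_nom(x_nom, u)
    phi_next = phi + dt * (-alpha * phi + delta(x_nom) + D_bound + eta)
    alpha_next = alpha + dt * v
    return x_next, phi_next, alpha_next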
D. Complete Formulation
With Corollaries 1 and 2 establishing the tightened state and actuator constraints, the Dynamic Tube MPC optimization can now be formulated as
Problem 1 – Dynamic Tube MPC
$$\min_{\check{u}(t),\,v(t)}\ J = h(\check{x}(t_f)) + \int_{t_0}^{t_f} \ell(\check{x}(t), \check{u}(t), \check{\alpha}(t), v(t))\,dt$$
subject to
$$\dot{\check{x}}(t) = \hat{f}(\check{x}(t)) + b(\check{x}(t))\check{u}(t), \qquad \dot{\check{\alpha}}(t) = v(t),$$
$$\dot{\check{\Phi}}(t) = -\check{\alpha}(t)\check{\Phi}(t) + \Delta(\check{x}(t)) + D + \eta, \qquad \dot{\check{\Omega}}(t) = A_c\check{\Omega}(t) + B_c\check{\Phi}(t),$$
$$\check{\Omega}(t_0) = |\tilde{x}(t_0)|, \qquad \check{x}(t_0) = x_0^*, \qquad \check{\Phi}(t_0) = \Phi_0, \qquad \check{x}(t_f) = x_f^*,$$
$$\check{x}(t) \in \bar{X}, \qquad \check{u}(t) \in \bar{U}, \qquad \check{\alpha}(t) \in A, \qquad v(t) \in V,$$
where $\check{\cdot}$ denotes the internal variables in the optimization; $\Omega$ is the tube geometry with matrices $A_c$ and $B_c$ given by putting (3) into controllable canonical form; $\bar{X}$ and $\bar{U}$ are the tightened state and actuator constraints; and $\ell$ and $h$ are the quadratic state and terminal costs. The output of DTMPC is an optimal open-loop (i.e., feedforward) control input $u^*$, trajectory $x^*$, and control bandwidth $\alpha^*$.
DTMPC is inherently a non-convex optimization problem because of the nonlinear dynamics. However, non-convexity is a fundamental characteristic of nonlinear tube MPC and a number of approximate solution procedures have been proposed. The key takeaway, though, is that Problem 1 is a nonlinear tube MPC algorithm that simultaneously optimizes the open-loop trajectory and tube geometry, eliminating the duality gap in standard tube MPC. Furthermore, conservativeness can be reduced since Problem 1 is able to leverage state-dependent uncertainty to select an open-loop trajectory based on the structure of the uncertainty and proximity to constraints. The benefits of these properties, in addition to combining the tube geometry and error dynamics, will be demonstrated in Section VIII.
VI. COLLISION AVOIDANCE MODEL
A. Overview
Collision avoidance is a fundamental capability for many autonomous systems, and is an ideal domain to test DTMPC for two reasons. First, enough safety margin must be allocated to prevent collisions when model error or disturbances are present. More precisely, the optimizer must leverage knowledge of the peak tracking error (given by the tube geometry) to prevent collisions. The robustness of DTMPC and ability to utilize knowledge of state dependent uncertainty can thus be demonstrated. Second, many real-world operating environments have variable obstacle densities so the tube geometry can be optimized in response to a changing environment. The rest of this section presents the model and formal optimal control problem.
B. Model
This work uses a double integrator model with nonlinear drag, which describes the dynamics of many mechanical systems. Let $r = [r_x\ r_y\ r_z]^T$ be the inertial position of the system that is to be tracked. The dynamics are
$$\ddot{r} = -C_d\,\dot{r}\,|\dot{r}| + g + u + d, \qquad (28)$$
where $g \in \mathbb{R}^3$ is the gravity vector, $C_d$ is the unknown but bounded drag coefficient ($0 \le C_d \le \bar{C}_d$), and $d$ is a bounded disturbance ($|d| \le D$). From (8), the control law is
$$u = \hat{C}_d\,\dot{r}\,|\dot{r}| + \ddot{r}^* - \lambda\dot{\tilde{r}} - K\,\mathrm{sat}(s/\Phi), \qquad (29)$$
where $\hat{C}_d$ is the best estimate of the drag coefficient, $s = \dot{\tilde{r}} + \lambda\tilde{r}$, and
$$K = \bar{C}_d\,\dot{r}\,|\dot{r}| - \bar{C}_d\,\dot{r}^*|\dot{r}^*| + \alpha\Phi, \qquad (30)$$
$$\dot{\Phi} = -\alpha^*\Phi + \bar{C}_d\,\dot{r}^*|\dot{r}^*| + D + \eta. \qquad (31)$$
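A sketch of the ancillary controller (29)–(31) for the drag model, applied element-wise to the 3-D position error, is shown below. The gravity-compensation term and the non-negativity guard on the gain are assumptions added for this sketch, and all names are illustrative.

import numpy as np

g = np.array([0.0, 0.0, -9.81])   # assumed gravity vector

def ancillary_control(r, rdot, r_des, rd_des, rdd_des, Phi, alpha,
                      lam, Cd_hat, Cd_bar):
    # Boundary layer controller for r_ddot = -C_d * rdot*|rdot| + g + u + d.
    e, edot = r - r_des, rdot - rd_des
    s = edot + lam * e                                       # s = e_dot + lam * e
    K = Cd_bar * np.abs(rdot) ** 2 - Cd_bar * np.abs(rd_des) ** 2 + alpha * Phi
    K = np.maximum(K, 0.0)                                   # guard (assumption)
    u = (Cd_hat * rdot * np.abs(rdot) + rdd_des - lam * edot
         - g                                                 # gravity feedforward (assumption)
         - K * np.clip(s / Phi, -1.0, 1.0))
    return u

def phi_dot(Phi, alpha, rd_des, Cd_bar, D, eta):
    # Tube dynamics (31) evaluated along the desired velocity profile.
    return -alpha * Phi + Cd_bar * np.abs(rd_des) ** 2 + D + eta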
C. Collision Avoidance DTMPC
Let H, p c , and r o denote the shape, location, and size of an obstacle. The minimum control effort DTMPC optimization with collision avoidance for system (28) is formulated as
Problem 2 – Collision Avoidance DTMPC
$$\min_{\check{u}(t),\,v(t)}\ J = \int_{t_0}^{t_f}\left(\check{u}(t)^T Q\,\check{u}(t) + \tilde{\alpha}(t)^T R\,\tilde{\alpha}(t)\right)dt$$
subject to
$$\ddot{\check{r}}(t) = -\hat{C}_d\,\dot{\check{r}}(t)\,|\dot{\check{r}}(t)| + g + \check{u}(t), \qquad \dot{\check{\alpha}}(t) = v(t),$$
$$\dot{\check{\Phi}}(t) = -\check{\alpha}(t)\check{\Phi}(t) + \bar{C}_d\,\dot{\check{r}}(t)\,|\dot{\check{r}}(t)| + D + \eta, \qquad \dot{\check{\Omega}}(t) = A_c\check{\Omega}(t) + B_c\check{\Phi}(t),$$
$$\check{\Omega}(t_0) = |\tilde{r}(t_0)|, \qquad \check{r}(t_0) = r_0^*, \qquad \check{\Phi}(t_0) = \Phi_0, \qquad \check{r}(t_f) = r_f^*,$$
$$\|H_i\check{r}(t) - p_{c,i}\| \ge r_{o,i} + \|H_i\tilde{r}(t)\|, \qquad i = 1, \ldots, N_o,$$
$$|\dot{\check{r}}(t)| \le \dot{r}_m - |\dot{\tilde{r}}|, \qquad \|u^*(t)\| \le u_m - \bar{u}_{fb}, \qquad |v(t)| \le v_m,$$
$$0 < \underline{\alpha} \le \check{\alpha}(t) \le \bar{\alpha}, \qquad \tilde{\alpha}(t) = \check{\alpha}(t) - \underline{\alpha},$$
where again $\check{\cdot}$ denotes the internal variables of the optimization, $|\cdot|$ is the element-wise absolute value, $\bar{\alpha}$ and $\underline{\alpha}$ are the upper and lower bounds on the control bandwidth, $\dot{r}_m$ is the peak desired speed, $v_m$ is the maximum artificial input, and $N_o$ is the number of obstacles.
VII. SIMULATION ENVIRONMENT
DTMPC was tested in simulation to demonstrate its ability to optimize tube geometry and utilize knowledge of state-dependent uncertainty through an environment with obstacles. The obstacles were placed non-uniformly to emulate a changing operating condition (i.e., dense/open environment). In order to emphasize both characteristics of DTMPC, three test cases were conducted. First, the bandwidth was optimized when both the model and obstacle locations were completely known. Second, the bandwidth was again optimized with a known model, but the obstacle locations were unknown, requiring a receding horizon implementation. Third, state-dependent uncertainty is considered but the control bandwidth is kept constant. Nothing about the formulation prevents optimizing bandwidth and leveraging state-dependent uncertainty simultaneously in a receding horizon fashion; this decoupling is only for clarity. The tracking error (14) is used to tighten the obstacle and velocity constraints.
Problem 2 is non-convex due to the nonlinear dynamics and non-convex obstacle constraints so sequential convex programming, similar to that in [17], was used to obtain a solution. The optimization was initialized with a naïve straight-line solution and solved using YALMIP [18] and MOSEK [19] in MATLAB. If large perturbations to the initial guess are required to find a feasible solution, then warm starting the optimization with a better initial guess (possibly provided by a global geometric planner) might be necessary. For the cases tested in this work, the optimization converged within three to four iterations -fast enough for real-time applications. The simulation parameters are summarized in Table I.
VIII. RESULTS AND ANALYSIS
A. Optimized Tube Geometry
The first test scenario for DTMPC highlights its ability to simultaneously optimize an open-loop trajectory and tube geometry in a known environment with obstacles placed non-uniformly. Fig. 2 shows the open-loop trajectory (multicolor), tube geometry (black), and obstacles (grey) when DTMPC optimizes both the trajectory and tube geometry. The color of the trajectory indicates the spatial variation of the control bandwidth, where low-and high-bandwidth are mapped to dark blue and yellow, respectively. It is clear that the bandwidth changes dramatically along the trajectory, especially in the vicinity of obstacles. The insets in Fig. 2 show that high-bandwidth (compact tube geometry) is used for the narrow gap and slalom and low-bandwidth (large tube geometry) for open space. Hence, high-bandwidth control is only used when the system is in close proximity to constraints (i.e., obstacles), consequently limiting aggressive control inputs to only when they are absolutely necessary. Thus, DTMPC can react to varying operating conditions by modifying the trajectory and tube geometry appropriately.
Since the tube geometry changes dramatically along the trajectory, it is important to verify that the tube remains invariant. This was tested by conducting 1000 simulations of the closed-loop system with a disturbance profile sampled uniformly from the disturbance set D. Fig. 3 shows the nominal trajectory (red), each closed-loop trial run (blue), tube geometry (black), and obstacles (grey). The inserts show that the state stays within the tube, even as the geometry changes, which verifies that the time-varying tube remains invariant.
B. Receding Horizon Optimized Tube Geometry
In many situations the operating environment is not completely known and requires a receding horizon implementation. The second test scenario for DTMPC highlights its ability to simultaneously optimize an open-loop trajectory and tube geometry in a unknown environment. Fig. 4 shows a receding horizon implementation of DTMPC where only a subset of obstacles are known (dark-grey) and the rest are unknown (light-grey). The bandwidth along the trajectory is visualized with the color map where low-and highbandwidth are mapped to dark blue and yellow. The first planned trajectory (Fig. 4a) uses high-bandwidth at the narrow gap and low-bandwidth in open space. When the second and third set of obstacles are observed, Fig. 4b and Fig. 4c respectively, DTMPC modifies the trajectory to again use high-bandwidth when in close-proximity to newly discovered obstacles. This further demonstrates DTMPC's ability to construct an optimized trajectory and tube geometry
C. State-Dependent Uncertainty
The third test scenario for DTMPC highlights its ability to leverage knowledge of state-dependent uncertainty, in this case arising from an unknown drag coefficient. From (31), the uncertainty scales with the square of the velocity so higher speeds increase uncertainty. Fig. 5 shows the openloop trajectory (multi-color), tube geometry (black), and obstacles (grey) when DTMPC leverages state-dependent uncertainty. The color of the trajectory is an indication of the instantaneous speed, where low and high speed are mapped to black and peach, respectively. It is clear that DTMPC generates a speed profile modulated by proximity to obstacles. For instance, using the insets in Fig. 5, the speed is lower (darker) when the trajectory goes through the narrow gap and around the other obstacles; reducing uncertainty and tightening the tube geometry. Further, the speed is higher (lighter) when in the open, subsequently increasing uncertainty causing the tube geometry to expand. If the state-dependent uncertainty is just assumed to be bounded, a simplification often made out of necessity in other tube MPC algorithms, the tube geometry is so large that, for this obstacle field, the optimization is infeasible with the same straight-line initialization as DTMPC. Hence, DTMPC is able to leverage knowledge of state-dependent uncertainty to reduce conservatism and improve feasibility.
IX. CONCLUSIONS
This work presented the Dynamic Tube MPC (DTMPC) algorithm that addresses a number of shortcomings of existing nonlinear tube MPC algorithms. First, the open-loop MPC optimization is augmented with the tube geometry dynamics enabling the trajectory and tube to be optimized simultaneously. Second, DTMPC is able to utilize statedependent uncertainty to reduce conservativeness and improve optimization feasibility. And third, the tube geometry and error dynamics can be combined to further reduce conservativeness. All three of these properties were made possible by leveraging the simplicity and robustness of boundary layer sliding control. Simulation results showed that DTMPC is able to control the tube geometry size, by changing control bandwidth or leveraging state-dependent uncertainty, in response to changing operating conditions. Future work includes expanding DTMPC to more general nonlinear systems. | 5,071 |
1901.04989 | 2909816737 | This work proposes an Application-Specific System Processor (ASSP) hardware for the Secure Hash Algorithm 1 (SHA-1) algorithm. The proposed hardware was implemented in a Field Programmable Gate Array (FPGA) Xilinx Virtex 6 xc6vlx240t-1ff1156. The throughput and the occupied area were analyzed for several implementations in parallel instances of the hash algorithm. The results showed that the hardware proposed for the SHA-1 achieved a throughput of 0.644 Gbps for a single instance and slightly more than 28 Gbps for 48 instances in a single FPGA. Various applications such as password recovery, password validation, and high volume data integrity checking can be performed efficiently and quickly with an ASSP for SHA1. | Works with SHA-1 implementation on other hardware platforms can be found in @cite_5 and @cite_6 in which comparisons between Graphics Processing Units (GPUs) and CPUs were performed. The GPUs NVIDIA Tesla M2050 with @math CUDA cores and AMD FirePro V7800 with @math stream processors could achieve throughput peaks of up to @math Gbps. | {
"abstract": [
"High performance computing is required in a number of data-intensive domains. CPU and GPU clusters are one of the most progressive branches in a field of parallel computing and data processing nowadays. Cloud computing has recently emerged as one of the buzzwords in the ICT industry. It offers suitable abstractions to manage the complexity of large data processing and analysis in various domains. This paper addresses issues associated with distributed computational system and the application of mixed GPU&CPU technology to data intensive computation. We describe a hybrid cluster formed by devices from different vendors (Intel, AMD, NVIDIA). Two variants of software environment that hides the heterogeneity of our hardware platform and provides tools for solving complex scientific and engineering problems are presented and discussed. The first solution (HGCC) is a software platform for data processing in heterogenous CPU GPU clusters. The second solution (HGCVC) is an extension version of the previous one. The cloud technology is incorporated to the HGCC framework. The results of numerical experiments performed for parallel implementations of password recovery algorithms are presented to illustrate the performance of our systems.",
"Today Graphics Processing Units (GPUs) are a largely underexploited resource on existing desktops and a possible cost-effective enhancement to high-performance systems. To date, most applications that exploit GPUs are specialized scientific applications. Little attention has been paid to harnessing these highly-parallel devices to support more generic functionality at the operating system or middleware level. This study starts from the hypothesis that generic middleware-level techniques that improve distributed system reliability or performance (such as content addressing, erasure coding, or data similarity detection) can be significantly accelerated using GPU support. We take a first step towards validating this hypothesis and we design StoreGPU, a library that accelerates a number of hashing-based middleware primitives popular in distributed storage system implementations. Our evaluation shows that StoreGPU enables up twenty five fold performance gains on synthetic benchmarks as well as on a high-level application: the online similarity detection between large data files."
],
"cite_N": [
"@cite_5",
"@cite_6"
],
"mid": [
"2051648511",
"2085059901"
]
} | Application-Specific System Processor for the SHA-1 Hash Algorithm | The Secure Hash Algorithm version one, SHA-1, is an algorithm used to verify the integrity of variable length data streams from an operation called hash.
A hash function outputs a fixed-length code C given a message of variable length K as input. It may be said that the output of the hash function, also called ASIC for IoT applications or used in the FPGA itself, aiming to accelerate hash code calculation in several applications such as password recovery, password validation and integrity checking in large volumes of data.
in which comparisons between Graphics Processing Units (GPUs) and
CPUs were performed. The GPUs NVIDIA Tesla M2050 with 448 CUDA cores and AMD FirePro V7800 with 1440 stream processors could achieve throughput peaks of up to 1.5 Gbps.
The proposal here developed used as target device a Virtex FPGA 6 xc6vlx240t-11156 FPGA and the results showed a throughput of 652 Mbps for a single SHA-1 module. The implementation used the Iterative Looping strategy which occupied less circuit area when compared to other strategy Michail et al. (2005) and Kakarountas et al. (2006) and unlike the results presented in the literature, it was possible to synthesize up to 48 SHA-1 modules in a single FPGA device yielding a throughput of 28.160 Gbps.
Secure Hash Algorithm 1 (SHA-1)
The SHA-1 is a hashing algorithm described by the Federal Information
m i = m 0 m 1 . . . m Ki−1 where m k ∈ {0, 1} ∀ k,(1)
the SHA-1 algorithm generates an output message, m i , called a hash code, of fixed size C = 160 bits, characterized as
h i = h 0 h 1 . . . h C−1 where h k ∈ {0, 1} ∀ k.(2)
4
The i-th incoming message, m i , of K i bits is extended by inserting two binary words. The first one, called here, p i , has P i bits and it is inserted by an operation called Append Padding. The second, called here v i , has T bits and it is inserted by an operation called Append Length. Thus, the calculation of the hash code, h i , for each i-th incoming message is carried out in an extended message, here called z i , which corresponds to a concatenation of the messages
m i , p i and v i , that is, z i = [m i , p i v i ]. Each i-th message z i has Z i = K i + P i + T
bits that can be divided into L i blocks of length M = 512 bits, that is,
L i = Z i M = K i + P i + T 512 .(3)
The pseudo-code presented in the Algorithm 1 displays the sequence of steps required to generate the hash code. These steps are going to be described in detail in the following subsections.
Padding Insertion
This step (lines 2 and 3 of the Algorithm 1) is performed before calculating the hash code and it makes the i-th message length, m i , divisible by M = 512
after the Append Length step. The padding message, p i , associated with the i-th incoming message is formed by a binary word of P i bits in which the most significant bit is 1 and the rest of the bits are 0. The generation of the padding message is performed by the function PaddingGeneration(K i ) shown in the line 2 of the algorithm 1.
The calculation of the P i value can be expressed by
P i = 448 − (K i mod 512) for (K i mod 512) < 448 512 − (K i mod 512) + 448 for (K i mod 512) ≥ 448 ,(4)
where the (a mod b) operation returns the modulo of the division between a and b. Thus, p i can be expressed as
p i = p 0 p 1 . . . p Pi−1 ,(5)
where, p 0 = 1 and p i = 0 for i = 1 . . . P i − 1.
Algorithm 1 SHA-1 for each i-th message W i 1: z i ← [m i ] 2: p i ← PaddingGeneration(K i ) 3: z i ← [m i p i ] 4: v i ← LenghtGeneration(K i ) 5: z i ← [m i p i v i ] 6: h i ← HashInitialization( ) 7: for j ← 0 until L i − 1 do 8: b j ← MessageSplit(z i )h i ← UpdateHash(H(n))
17: end for
Length Insertion
In this step (lines 4 and 5 of the Algorithm 1) the message v i is added, which is characterized by a binary word of T = 64 bits and expressed as
v i = v 0 v 1 . . . v T −1 where v k ∈ {0, 1} ∀ k.(6)
The generation of the message length is performed by the function LenghtGen-
eration (K i ) presented in the line 4 of the algorithm 1. The message v i stores the length value of the i-th incoming message m i , that is, v i = Binary(K, T )(7)
where Binary(a, b) is a function that returns a vector of size b with the binary representation of a decimal number a with b bits according to the big-endian standard.
The 180-4 FIPS norm NIST (2015), assumes that the size, K i , of most messages can be represented by 64 bits, that is,
K i < 2 T .
Finally, at the end of the second step the message, z i , which is an extension of the i-th original input message, m i , is generated (line 5 of the Algorithm 1).
In this work the message z i is identified as a Z i bits vector expressed as
z i = z 0 z 1 . . . z Z−1 where z k ∈ {0, 1} ∀ k.(8)
Hash Code Initialization
The hash code initialization (line 6 of the Algorithm 1) is standardized by the FIPS 180-4 NIST (2015) according to the following expressions:
ha = h 0 . . . h 31 = Binary(1732584193, 32),(9)hb = h 32 . . . h 63 = Binary(4023233417, 32),(10)hc = h 64 . . . h 95 = Binary(2562383102, 32),(11)hd = h 96 . . . h 127 = Binary(0271733878, 32),(12)
and he = h 128 . . . h 159 = Binary(3285377520, 32),
where h i = ha hb hc hd he .
Message Split
In this step, line 8 of the Algorithm 1, the message z i is split into L i blocks of M = 512 bits, that is,
z i = b 0 b 1 . . . b Li−1 ,(15)
where each j-th block associated with i-th message is expressed as
b j = b j,0 b j,1 . . . b j,M −1 where b j,k ∈ {0, 1} ∀ k.(16)
The j-th block, b j , can also be represented as
b j = u j [0] u j [1] . . . u j [15] ,(17)
where u j [k] is a 32 bits message, that is,
u j [k] = u j [k, 0] u j [k, 1] . . . b j [k, 31](18)
where u j [k, l] ∈ {0, 1} ∀ l.
H(n) Hash Variables Initialization
The SHA-1 algorithm has five 32 bits variables, called A(n), B(n), C(n), D(n) and E(n) that are updated during iterations of the algorithm. These variables are identified in this work as vectors:
X(n) = x 0 x 1 . . . x 31 where x k ∈ {0, 1} ∀ k,(19)
where, the combination of these five variables form a vector of 160 positions identified as
H(n) = A(n) B(n) C(n) D(n) E(n) .(20)
The initialization of these variables, in the instant n = −1, (line 10 of the
w(n) Variable Calculation
In SHA-1, it takes 80 iterations for a valid output, h i , associated with a i-th message be generated (Algorithm 1, line 11). In each n-th iteration a w(n)
variable is calculated, expressed as
w(n) = u j [n] for 0 ≤ n ≤ 15 sw[n] for 16 ≤ n ≤ 79 ,(21)
where
sw[n] = lr (u j [n − 3] ⊕ u j [n − 8] ⊕ u j [n − 14] ⊕ u j [n − 16], 1)(22)
where ⊕ is the exclusive or operation and lr(r, s) represents the leftrotate function that is expressed as
lr(r, s) = (r s) ∨ (r (32 − s)),(23)
where ∨, , and are the bitwise OR and left and right bitwise shift, respectively.
f (·) Function Calculation
In each n-th iteration of each j-th block, b j (n), a nonlinear function, f (·), is calculated from the information of the hash variables B(n), C(n) and D(n).
The output of the function, f (·) is stored in the vector f (n) (line 13 of the Algorithm 1), expressed as
f (n) = f (n, B, C, D) = α(n) for n = 0 . . .
where α(n) = (B(n − 1) ∧ C(n − 1)) ∨ (¬B(n − 1) ∧ D(n − 1)),
β(n) = B(n − 1) ⊕ C(n − 1) ⊕ D(n − 1),(25)
γ(n) = (B(n−1)∧C(n−1))∨(B(n−1)∧D(n−1))∨(C(n−1)∧D(n−1)) (27) and
δ(n) = B(n − 1) ⊕ C(n − 1) ⊕ D(n − 1),(28)
where ¬ and ∧ are negation operation and bitwise AND, respectively.
Hash Variables Update
Also, in each n-th iteration of each j-th block b j (n), the values of the variables A(n), B(n), C(n), D(n) and E(n) are updated after the calculation of f (n) (line 14 of the Algorithm 1). The update of these variables is represented by the following equations:
E(n) = D(n − 1),(29)D(n) = C(n − 1),(30)C(n) = lr(B(n − 1), 30),(31)B(n) = A(n − 1)(32)
and A(n) = V(n) + Z(n) + lr(A(n − 1), 5),
in which,
Z(n) = W(n) + E(n − 1)(34)
and V(n) = f (n) + k(n).
The SHA-1 has four 32 bits constants k(n), which are used in the n-th iteration of each j-th block b j n, as specified by
K(n) =
10
Hash Code Update
For each j-th block, b j , SHA-1 executes 80 iterations, and at the end of every j-th block the hash code is updated linearly following the expressions:
ha = ha + A(79),(37)hb = hb + B(79),(38)hc = hc + C(79),(39)hd = hd + D(79),(40)
and he = he + E(79).
So for every i-th message, m i , the value of the associated hash code, h i , is found
in N i = L i × 80(42)
iterations, where N i is defined in this work as the total number of interactions for the calculation of the hash associated with a message m i . The function type selection in the GF-MUX multiplexer is controlled by the GV module, through binary logic with comparators and logic gates corresponding to each interval, having the following outputs,
Proposed Implementation
GV = 0 for n = 0 . . .
Each one selecting a function f (n) based on the 7 bits counter of the CN module.
GW Module
The GW module consists of 16 messages u j [n] (with 32-bits ) in the input,
h i Hash Processing
After generating the signals w(n), k(n), f (n), in each n-th iteration, and the value E(n − 1), the signals Z(n) and V(n), both of 32 bits, are calculated through the sum modules S1 and S2, executed in parallel, subsequently S3 and S4. All the sum modules used in the implementation are 32-bit-specific circuits, which optimize the processing time and the space occupied by the total circuit. The CO module has the function of concatenating the 5 buses of the 32-bits formed by the signals ha, hb, hc, hd and he and generating a serial signal with the hash code h i .
Results
The Table 1 The proposal here presented, used several SHA-1 parallel modules, enabling the throughout acceleration which is especially useful in cases of brute force password recovery, in which there are a large number of hash codes to be generated. only an increment of less than 1 ns in T s , which represents an increase of almost 32× in hash throughput.
Based on the Algorithm 1 and the architecture presented in Figure 1, for every j-th M = 512 bits block b j , 80 iterations are executes (Equation 42), so the proposed hardware throughput can be calculated as
R s = M × NI 80 × T s = 512 × NI 80 × T s = 64 × NI 10 × T s .(44)
It is important to note that the values of throughput greater than 15 Gbps are unpublished in the literature (NI = 32 e NI = 48). A 28, 16Gbps throughput is equivalent to retrieve a totally unknown 6 digits numeric password (using the brute force method) in a maximum of 20 ms or a 6 digits alpha numeric password (each digit with 62 possibilities) from a hash code in a maximum of 17.4 minutes.
Conclusion
This work presented a SHA-1 hardware implementation proposal. The proposed structure, also called ASSP, was synthesized in an FPGA aiming to validate the implemented circuit. All implementation details of the project were presented and analyzed regarding occupation area and processing time. The results obtained are quite significant and point to new possibilities of using hash algorithms in dedicated hardware for real-time and high-volume applications.
Funding
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) -Finance Code 001. | 2,345 |
1901.05112 | 2910298035 | An @math -vector MDS code is a @math -linear subspace of @math (for some field @math ) of dimension @math , such that any @math (vector) symbols of the codeword suffice to determine the remaining @math (vector) symbols. The length @math of each codeword symbol is called the sub-packetization of the code. Such a code is called minimum storage regenerating (MSR), if any single symbol of a codeword can be recovered by downloading @math field elements (which is known to be the least possible) from each of the other symbols. MSR codes are attractive for use in distributed storage systems, and by now a variety of ingenious constructions of MSR codes are available. However, they all suffer from exponentially large sub-packetization @math . Our main result is an almost tight lower bound showing that for an MSR code, one must have @math . Previously, a lower bound of @math , and a tight lower bound for a restricted class of "optimal access" MSR codes, were known. Our work settles a central open question concerning MSR codes that has received much attention. Further our proof is really short, hinging on one key definition that is somewhat inspired by Galois theory. | We begin code constructions existence results. present an explicit construction of MSR codes with small sub-packetization @math when the code rate @math is at most @math @cite_1 . @cite_5 show the existence of high rate MSR codes when the sub-packetization approaches infinity. Motivated by this result, the problem of designing high-rate MSR codes with finite sub-packetization level is explored in @cite_11 @cite_27 @cite_26 @cite_23 @cite_14 @cite_28 @cite_2 @cite_13 @cite_4 and references therein. In particular, @cite_16 show the existence of MSR codes with the sub-packetization level @math . Such a result with similar sub-packetization levels for repair of only @math systematic nodes was obtained earlier in @cite_15 @cite_14 . In order to ensure the MDS property, these results relied on huge fields and randomized construction of the parity check matrices. | {
"abstract": [
"",
"",
"",
"",
"Regenerating codes are a class of distributed storage codes that allow for efficient repair of failed nodes, as compared to traditional erasure codes. An [n, k, d] regenerating code permits the data to be recovered by connecting to any k of the n nodes in the network, while requiring that a failed node be repaired by connecting to any d nodes. The amount of data downloaded for repair is typically much smaller than the size of the source data. Previous constructions of exact-regenerating codes have been confined to the case n=d+1 . In this paper, we present optimal, explicit constructions of (a) Minimum Bandwidth Regenerating (MBR) codes for all values of [n, k, d] and (b) Minimum Storage Regenerating (MSR) codes for all [n, k, d ≥ 2k-2], using a new product-matrix framework. The product-matrix framework is also shown to significantly simplify system operation. To the best of our knowledge, these are the first constructions of exact-regenerating codes that allow the number n of nodes in the network, to be chosen independent of the other parameters. The paper also contains a simpler description, in the product-matrix framework, of a previously constructed MSR code with [n=d+1, k, d ≥ 2k-1].",
"",
"",
"",
"The high repair bandwidth cost of (n,k) maximum distance separable (MDS) erasure codes has motivated a new class of codes that can reduce repair bandwidth over that of conventional MDS codes. In this paper, we address (n,k,d) exact repair MDS codes, which allow for any single failed node to be repaired exactly with access to any arbitrary set of d survivor nodes. We show the existence of exact repair MDS codes that achieve minimum repair bandwidth (matching the cut-set lower bound) for arbitrary admissible (n,k,d), i.e., k ≤ d ≤ n-1. Moreover, we extend our results to show the optimality of our codes for multiple-node failure scenarios in which an arbitrary set of r ≤ n-k failed nodes needs to repaired. Our approach is based on asymptotic interference alignment proposed by Cadambe and Jafar. As a byproduct, we also characterize the capacity of a class of multisource nonmulticast networks.",
"MDS codes are erasure-correcting codes that can correct the maximum number of erasures given the number of redundancy or parity symbols. If an MDS code has r parities and no more than r erasures occur, then by transmitting all the remaining data in the code one can recover the original information. However, it was shown that in order to recover a single symbol erasure, only a fraction of 1 r of the information needs to be transmitted. This fraction is called the repair bandwidth (fraction). Explicit code constructions were given in previous works. If we view each symbol in the code as a vector or a column, then the code forms a 2D array and such codes are especially widely used in storage systems. In this paper, we ask the following question: given the length of the column l, can we construct high-rate MDS array codes with optimal repair bandwidth of 1 r, whose code length is as long as possible? In this paper, we give code constructions such that the code length is (r + l)log r l.",
"We present a high-rate (n, k, d = n − 1)-MSR code with a sub-packetization level that is polynomial in the dimension k of the code. While polynomial sub-packetization level was achieved earlier for vector MDS codes that repair systematic nodes optimally, no such MSR code construction is known. In the low-rate regime (i. e., rates less than one-half), MSR code constructions with a linear sub-packetization level are available. But in the high-rate regime (i. e., rates greater than one-half), the known MSR code constructions required a sub-packetization level that is exponential in k. In the present paper, we construct an MSR code for d = n − 1 with a fixed rate equation, achieveing a sub-packetization level α = O(kt). The code allows help-by-transfer repair, i. e., no computations are needed at the helper nodes during repair of a failed node.",
"",
"In distributed storage systems that employ erasure coding, the issue of minimizing the total communication required to exactly rebuild a storage node after a failure arises. This repair bandwidth depends on the structure of the storage code and the repair strategies used to restore the lost data. Designing high-rate maximum-distance separable (MDS) codes that achieve the optimum repair communication has been a well-known open problem. Our work resolves, in part, this open problem. In this study, we use Hadamard matrices to construct the first explicit two-parity MDS storage code with optimal repair properties for all single node failures, including the parities. Our construction relies on a novel method of achieving perfect interference alignment over finite fields with a finite number of symbol extensions. We generalize this construction to design @math -parity MDS codes that achieve the optimum repair communication for single systematic node failures."
],
"cite_N": [
"@cite_26",
"@cite_14",
"@cite_4",
"@cite_28",
"@cite_1",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_11"
],
"mid": [
"",
"",
"",
"",
"2150777202",
"",
"",
"",
"1977073502",
"2158398747",
"1647846156",
"",
"2154063365"
]
} | An Exponential Lower Bound on the Sub-Packetization of Minimum Storage Regenerating Codes * | Traditional Maximum Distance Separable (MDS) codes such as Reed-Solomon codes provide the optimal trade-off between redundancy and number of worst-case erasures tolerated. When encoding k symbols of data into an n symbol codeword by an (n, k)-MDS code, the data can be recovered from any set of k out of n codeword symbols, which is clearly the best possible. MDS codes are thus a a naturally appealing choice to minimize storage overhead in distributed storage systems (DSS). One can encode data, broken into k pieces, by an (n, k)-MDS code, and distribute the n codeword symbols on n different storage nodes, each holding the symbol corresponding to one codeword position. In the sequel, we use the terms storage node and codeword symbol interchangeably.
A rather common scenario faced by modern large scale DSS is the failure or temporary unavailability of storage nodes. It is of great importance to promptly respond to such failures, by efficient repair/regeneration of the failed node using the content stored in some of other nodes (which are called "helper" nodes as they assist in the repair). This requirement has spurred a set of fundamentally new and exciting challenges concerning codes for recovery from erasures, with the goal of balancing worst-case fault tolerance from many erasures, with very efficient schemes to recover from the much more common scenario of single (or a few) erasures.
There are two measures of repair efficiency that have received a significant amount of attention in the last decade. One concerns locality, where we would like to repair a node locally based on the contents of a small number of other storage nodes. Such locality necessarily compromises the MDS property, and a rich body of work on locally repairable codes (LRCs) studies the best trade-offs possible in this model and constructions achieving those [8,14,20]. The other line of work, which is the subject of this paper, focuses on optimizing the amount of data downloaded from the other nodes. This model allows the helper node to respond with a fraction of its contents. The efficiency measure is the repair bandwidth, which is the total amount of data downloaded from all the helper nodes. Codes in this model are called regenerating codes, and were systematically introduced in the seminal work of Dimakis et al. [6], and have since witnessed an explosive amount of research.
Rather surprisingly, even for some MDS codes, by contacting more helper nodes but downloading fewer symbols from each, one can do much better than the "usual" scheme, which would download the contents of k nodes in full. In general an entire spectrum of trade-offs is possible between storage overhead and repair bandwidth. This includes minimum bandwidth regenerating (MBR) codes with the minimum repair bandwidth of ℓ [16]. At the other end of the spectrum, we have minimum storage regenerating (MSR) codes defined formally below) which retain the MDS property and thus have optimal redundancy. This work focuses on MSR codes.
Example. We quickly recap the classic example of the EVENODD code [3,7] to illustate regeneration of a lost symbol in an MDS code with non-trivial bandwidth. This is an (4, 2) MDS code with 4 storage nodes, each storing a vector of two symbols over the binary field. We denote by P 1 , P 2 the two parity nodes.
S 1 S 2 P 1 P 2 a 1 b 1 a 1 + b 1 a 2 + b 1 a 2 b 2 a 2 + b 2 a 1 + a 2 + b 2
The naive scheme to repair a node would contact any two of the remaining three nodes, and download both bits from each of them, for a total repair bandwidth of 4 bits. However, it turns out that one can get away with downloading just one bit from each of the three other nodes, for a repair bandwidth of 3 bits! If we were to repair the node S 1 , the remaining nodes (S 2 , P 1 , P 2 ) would send (b 1 , a 1 + b 1 , a 2 + b 1 ), respectively. If we were to repair the node S 2 , the remaining nodes (S 1 , P 1 , P 2 ) would send (a 2 , a 2 + b 2 , a 2 + b 1 ), respectively. If we were to repair the node P 1 , the remaining nodes (S 1 , S 2 , P 2 ) would send (a 1 , b 1 , a 1 + a 2 + b 2 ), respectively. If we were to repair the node P 2 , the remaining nodes (S 1 , S 2 , P 1 ) would send (a 2 , b 1 , (a 1 + b 1 ) + (a 2 + b 2 )), respectively. Note that in the last case, the helper node P 1 sends a linear combination of its symbols-this is in general a powerful ability that we allow in MSR codes.
Vector codes and sub-packetization. The above example shows that when the code is an (n, k) vector MDS code, where each codeword symbol itself is a vector, say in F ℓ for some field F, then one can hope to achieve repair bandwidth smaller than then naive kℓ. The length of the vector ℓ stored at each node is called the sub-packetization (since this is the granularity to which a single codeword symbol needs to be divided into).
MSR codes.
A natural question is how small a repair bandwidth one can achieve with MDS codes. The so-called cutset bound [6] dictates that one must download at least (n − 1)ℓ/(n − k) symbols of F from the remaining nodes to recover any single node. Further, in order to attain this optimal repair bandwidth bound, each of the (n − 1) nodes must respond with ℓ/(n − k) field elements. Vector MDS codes which admit repair schemes meeting the cutset bound (for repair of every node) are called minimum storage regenerating (MSR) codes (for the formal description, see Definition 1). MSR codes, and specifically their sub-packetization, are the focus of this paper.
Large sub-packetization: problematic and inherent. While there are many constructions of MSR codes by now, they all have large sub-packetization, which is at least r k/r . For the setting of most interest, when we incur a small redundancy r in exchange for repair of information, this is very large, and in particular exp(Ω(k)) when r = O(1). A small sub-packetization is important for a number of reasons, as explained in some detail in the introduction of [17]. A large subpacketization limits the number of storage nodes (for example if ℓ exp(Ω(n)), then n = O(log ℓ) where ℓ is the storage capacity of each node), and in general leads to a reduced design space in terms of various systems parameters. A larger sub-packetization also makes management of meta-data, such as description of the code and the repair mechanisms for different nodes, more difficult. For a given storage capacity, a smaller sub-packetization allows one to distribute codewords corresponding to independently coded files among multiple nodes, which allows for distributing the load of providing information for the repair of a failed node among a larger number of nodes.
It has been known that somewhat large sub-packetization is inherent for MSR codes (we will describe the relevant prior results in the next section). In this work, we improve this lower bound to exponential, showing that unfortunately the exponential sub-packetization of known constructions is inherent. Our main result is the following. Theorem 1. Suppose an (n, k)-vector MDS code with redundancy r = n − k 2 is minimum storage regenerating (MSR). Then its sub-packetization ℓ must satisfy 1 ℓ r 2 r 2 − r + 1
(k−1)/2 e (k−1)(r−1)/(2r 2 ) .
Our lower bound almost matches the sub-packetization of r O(k/r) achieved by the best known constructions. Improving the base of the exponent in our lower bound to r will make it even closer to the upper bounds. Though when r is small, which is the primary setting of interest in codes for distributed storage, this difference is not that substantial. We remark that our theorem leaves out the case when r = 1, which is known to have a sub-packetization of ℓ = 1 [9].
A few words about our proof. Previous work [22] has shown that an (n, k) MSR code with sub-packetization ℓ implies a family of (k − 1) ℓ/r-dimensional subspaces H i of F ℓ each of which has an associated collection of (r − 1) linear maps obeying some strong properties. For instance, in the case r = 2, there is an invertible map φ i associated with H i for each i which leaves all subspaces H j , j = i, invariant, and maps H i itself to a disjoint space (i.e., φ i (H i ) ∩ H i = {0}). The task of showing a lower bound on ℓ then reduces to the linear-algebraic challenge of showing an upper bound on the size of such a family of subspaces and linear transformations, which we call an MSR subspace family (Definition 2). The authors of [10] showed an upper bound O(r log 2 ℓ) on the size of MSR subspace families via a nifty partitioning and linear independence argument.
We follow a different approach by showing that the number of linear maps that fix all subspaces in an MSR family decreases sharply as the number of subspaces increases. Specifically, we show that dimension of the linear space of such linear maps decreases exponentially in the number of subspaces in the MSR family. This enables us to prove an O(r log ℓ) upper bound. This bound is asymptotically tight (up to a O(log r) factor), as there is a construction of an MSR subspace family of size (r + 1) log r ℓ [24]. We also present an alternate construction in Section ??, which works for all fields with more than 2 elements, compared to the large field size (of at least ≈ r r ℓ) required in [24].
We now proceed to situate our work in the context of prior work, both constructions and lower bounds, for MSR codes.
Preliminaries
We will now define MSR codes more formally. We begin by defining vector codes. Let F be a field, and n, ℓ be positive integers. For a positive integer b, we denote [b] = {1, 2, . . . , b}. A vector code C of block length n and sub-packetization ℓ is an F-linear subspace of (F ℓ ) n . We can express a codeword of C as c = (c 1 , c 2 , . . . , c n ), where for i ∈ [n], the block c i = (c i,1 , . . . , c i,ℓ ) ∈ F ℓ denotes the length ℓ vector corresponding to the i'th code symbol c i .
Let k be an integer, with 1 k n. If the dimension of C, as an F-vector space, is kℓ, we say that C is an (n, k, ℓ) F -vector code. The codewords of an (n, k, ℓ) F -vector code are in one-to-one correspondence with vectors in (F ℓ ) k , consisting of k blocks of ℓ field elements each.
Such a code is said to be Maximum Distance Separable (MDS), and called an (n, k, ℓ)-MDS code (over the field F), if every subset of k code symbols c i 1 , c i 2 , . . . , c i k is an information set for the code, i.e., knowing these symbols determines the remaining n − k code symbols and thus the full codeword. An MDS code thus offers the optimal erasure correction propertythe information can be recovered from any set of k code symbols, thus tolerating the maximum possible number n − k of worst-case erasures.
An (n, k, ℓ)-MDS code can be used in distributed storage systems as follows. Data viewed as kℓ symbols over F is encoded using the code resulting in n vectors in F ℓ , which are stored in n storage nodes. Downloading the full contents from any subset of these k nodes (a total of kℓ symbols from F) suffices to reconstruct the original data in entirety. Motivated by the challenge of efficient regeneration of a failed storage node, which is a fairly typical occurrence in large scale distributed storage systems, the repair problem aims to recover any single code symbol c i by downloading fewer than kℓ field elements. This is impossible if one only downloads contents from k nodes, but becomes feasible if one is allowed to contact h > k helper nodes and receive fewer than ℓ field elements from each.
Here we focus our attention to only repairing the first k code symbols, which we view as the information symbols. This is called "systematic node repair" as opposed to the more general "all node repair" where the goal is to repair all n codeword symbols. We will also only consider the case h = n − 1, when all the remaining nodes are available as helper nodes. Since our focus is on a lower bound on the sub-packetization ℓ, this only makes our result stronger, and keeps the description somewhat simpler. We note that the currently best known constructions allow for all-node repair with optimal bandwidth from any subset of h helper nodes.
Suppose we want to repair the m'th code symbol for some m ∈ [k]. We download from the i'th code symbol, i = m, a function h i,m (c i ) of its contents, where h i,m : F ℓ → F β i,m is the repair function. If we consider the linear nature of C, then we should expect from h i,m to utilize it. Therefore, throughout this paper, we shall assume linear repair of the failed node. That is, h i,m is an F-linear function. Thus, we download from each node certain linear combinations of the ℓ symbols stored at that node. The total repair bandwidth to recover c m is defined to be i =m β i,m . By the cutset bound for repair of MDS codes [6], this quantity is lower bounded by (n − 1)ℓ/r, where r = n − k is the redundancy of the code. Further, equality can be attained only if β i,m = ℓ/r for all i. That is, we download ℓ/r field elements from each of the remaining nodes. MDS codes achieving such an optimal repair bandwidth are called Minimum Storage Regenerating (MSR) codes, as precisely defined below. Let C ⊆ (F ℓ ) n be an (n, k, ℓ)-MSR code, with redundancy r = n − k. The MDS property implies that any subset of k codeword symbols determine the whole codeword. We view the first k symbols as the "systematic" ones, with r parity check symbols computed from them, where we remind that when we say code symbol we mean a vector in F ℓ . So we can assume that there are invertible matrices C i,j ∈ F ℓ×ℓ for i ∈ [r] and j ∈ [k] such that for c = (c 1 , c 2 , . . . , c n ) ∈ C, we have
c k+i = k j=1 C i,j c j .
Suppose we want to repair a systematic node c m for m ∈ [k] with optimal repair bandwidth, by receiving from each of the remaining n − 1 nodes, ℓ/r F-linear combinations of the information they stored. This means that there are repair matrices S 1,m , . . . , S r,m ∈ F ℓ/r×ℓ , such that parity node k + i sends the linear combination
S i,m c k+i = S i,m k j=1 C i,j c j(2)
Therefore, the information about c m that is sent to it by c k+i is S i,m C i,m c m . Since the k systematic nodes are independent of each other, then the only way to recover c m is by taking a linear combination of S i,m C i,m c m for i ∈ [r] such that the linear combination equals c m for any c m ∈ F ℓ . Therefore, to ensure full regeneration of c m , we must satisfy
rank S 1,m C 1,m S 2,m C 2,m . . . S r,m C r,m = ℓ
Since each S i,m C i,m has ℓ/r rows, the above happens if and only if
r i=1 R(S i,m C i,m ) = F ℓ(3)
where R(M ) denotes the row-span of a matrix M .
Cancelling interference of other systematic symbols
Now, for every other systematic node m ′ ∈ [k] \ {m}, the parity nodes send the following information linear combinations of
c m ′ S 1,m C 1,m ′ S 2,m C 2,m ′ . . . S r,m C r,m ′ c m ′(4)
In order to cancel this from the linear combinations (2) received from the parity nodes, the systematic node m ′ has to send the linear combinations (4) about its contents. To achieve optimal repair bandwidth of at most ℓ/r symbols from every node, this imposes the requirement
rank S 1,m C 1,m ′ S 2,m C 2,m ′ . . . S r,m C r,m ′ ℓ r
However since C i,m ′ is invertible, and S i,m has full row rank, rank(S i,m C i,m ′ ) = ℓ/r for all i ∈ [r]. Combining this fact with the rank inequality above, this implies
R(S 1,m C 1,m ′ ) = · · · = R(S r,m C r,m ′ )(5)
for every m = m ′ ∈ [k], where R(M ) is the row-span of a matrix M .
Constant repair matrices and casting the problem in terms of subspaces
We now make an important simplification, which allows us to assume that the matrices S i,m above depend only on the node m being repaired, but not on the helping parity node i. That is, S m = S i,m for all i ∈ [r]. We call repair with this restriction as possessing constant repair matrices. It turns out that one can impose this restriction with essentially no loss in parameters -by Theorem 2 of [22], if there is a (n, k, ℓ)-MSR code then there is also a (n − 1, k − 1, ℓ)-MSR code with constant repair matrices.
This allows us to cast the requirements (3) and (5) in terms of a nice property about subspaces and associated invertible maps, which we abstract below. This property was shown to be intimately tied to MSR codes in [24,22]. Definition 2 (MSR subspace family). For integers ℓ, r with r|ℓ and a field F, a collection of subspaces H 1 , . . . , H k of F ℓ of dimension ℓ/r each is said to be an (ℓ, r) F -MSR subspace family if there exist invertible linear maps Φ i,j on F ℓ , i ∈ {1, 2, . . . , k} and j ∈ {1, 2, . . . , r − 1} such that for every i ∈ [k], the following holds:
H i ⊕ r−1 j=1 Φ i,j (H i ) = F ℓ (6) Φ i ′ ,j (H i ) = H i for every j ∈ [r − 1], and i ′ = i(7)
Now, we recall the argument that if we have an (n, k, ℓ)-MSR code with constant repair matrices, then that also yields a family of subspaces and maps with the above properties. Indeed, we can take H m , m ∈ [k], to be R(S m ), and Φ m,j , j ∈ [r − 1], is the invertible linear transformation mapping x ∈ F ℓ , viewed as a row vector, to xC j+1,m C −1 1,m . It is clear that Property (6) follows from (3), and Property (7) follows from (5). Together with the loss of one dimension in the transformation [22] to an MSR code with constant repair subspaces, we can conclude the following connection between MSR codes and the very structured set of subspaces and maps of Definition 2. For the reverse direction, the MSR subspace family can take care of the node repair, but one still needs to ensure the MDS property. This approach was taken in [24], based on a construction of an (ℓ, r) F -MSR subspace family of size (r + 1) log r ℓ. For completeness, we present another construction of an MSR subspace family in Section ??. The subspaces in our construction are identical to [24] but we pick the linear maps differently, using just two distinct eigenvalues. As a result, our construction works over any field with more than two elements. In comparison, the approach in [24] used k r−1 ℓ/r distinct eigenvalues, and thus required a field that is bigger than this bound. It is an interesting question to see if the MDS property can be incorporated into our construction to give MSR codes with sub-packetization r k/(r+1) over smaller fields.
Limitation of MSR subspace families
In this section, we state and prove the following strong upper bound on the size of an MSR family of subspaces, showing that the construction claimed in Theorem 8 is not too far from the best possible. This upper bound together with Proposition 2 immediately implies our main result, Theorem 1. In the rest of the section, we prove the above theorem. Let H 1 , H 2 , . . . , H k be the subspaces in an (ℓ, r) F -MSR subspace family with associated invertible linear maps Φ i,j where i ∈ [k] and j ∈ [r − 1]. Note that these linear maps are in some sense statements about the structure of the spaces H 1 , H 2 , . . . , H k . They dictate the way the subspaces can interact with each other, thereby giving rigidity to the way they are structured.
The major insight and crux of the proof is the following definition on collections of subspaces. This definition is somewhat inspired by Galois Theory, in that we are looking at the space of linear maps on the vector space F ℓ that fix all the subspaces in question.
Definition 3.
In the vector space L(F ℓ , F ℓ ) of all linear maps from F ℓ to F ℓ , define the subspace
F(A 1 → B 1 , . . . , A s → B s ) := {ψ ∈ L(F ℓ , F ℓ ) | ψ(A i ) ⊆ B i ∀i ∈ {1, . . . , s}} for arbitrary subspaces A i , B i of F ℓ . Define the value I(A 1 → B 1 , . . . , A s → B s ) := dim(F(A 1 → B 1 , . . . , A s → B s ))
When A i = B i for each i, we adopt the shorthand notation F(A 1 , . . . , A s ) and I(A 1 , . . . , A s ) to denote the above quantities. We will also use the mixed notation F(A 1 , . . . , A s−1 , A s → B s ) to denote F(A 1 → A 1 , . . . , A s → B s ) and likewise for I(A 1 , . . . , A s−1 , A s → B s ).
Thus I(A 1 , . . . , A s ) is the dimension of the space of linear maps that map each A i within itself. We use the notation I() to suggest such an invariance. The key idea will be to cleverly exploit the invertible maps Φ i,j associated with each H i to argue that the dimension I(H 1 , H 2 , . . . , H t ) shrinks by a constant factor whenever we add in an H t+1 into the collection. Specifically, we will show that the dimension shrinks at least by a factor of r 2 −r+1 r 2 for each newly added H t+1 . Because the identity map is always in F (H 1 , H 2 , . . . , H k ), the dimension I (H 1 , H 2 , . . . , H k ) is at least 1. As the ambient space of linear maps from F ℓ → F ℓ has dimension ℓ 2 , this leads to an O(r log ℓ) upper bound on k. We begin with the following lemma.
Lemma 4. Let U 1 , U 2 , . . . , U s F p , s 2 be arbitrary subspaces such that s i=1 U i = {0}. Then following inequality holds:
s i=1 dim(U i ) (s − 1) dim (U 1 + . . . + U s ) .
Proof. We proceed by inducting on s. Indeed, when s = 2, we have from the Principle of Inclusion and Exclusion (PIE)
dim(U 1 ) + dim(U 2 ) = dim(U 1 + U 2 ) + dim(U 1 ∩ U 2 ) = dim(U 1 + U 2 )
And thus the base case holds. Now, if the inequality holds when s = p, then we have via the Principle of Inclusion and Exclusion
p+1 i=1 dim(U i ) = dim(U 1 + U 2 ) + dim(U 1 ∩ U 2 ) + p+1 i=3 dim(U i )(8)
By the induction hypothesis, we deduce that Equation (8) is at most
dim(U 1 + U 2 ) + (p − 1) dim((U 1 ∩ U 2 ) + · · · + U p+1 )(9)
And Equation (9) is at most p dim(U 1 + U 2 + · · · + U p+1 )
By combining Equations (8), (9), and (10), we deduce that the inequality also holds when s = p + 1. Since the base case s = 2 holds, we therefore conclude that the inequality holds for all integers s 2.
Next, we prove an identity for MSR subspace families that will come in handy. For the sake of brevity, we use the shorthands H a := {H 1 , . . . , H a } and Φ a,0 to denote the identity map.
I(H t−1 , Φ t,i (H t ) → Φ t,j (H t )) sI(H t−1 , H t → 0) + I(H t−1 , Φ t,i (H t ) → ⊕ s j=0 Φ t,j (H t ))(11)
Proof. We proceed by inducting on s. The base case when s = 0 is clear as the right hand side simplifies to the left hand side. Now, if Equation (11) holds when s = p and p < r − 1, then we have via the Principle of Inclusion and Exclusion (PIE) and Equation (6) p+1 j=0
I(H t−1 , Φ t,i (H t ) → Φ t,j (H t ))(12)
By the induction hypothesis, we deduce that Equation (12) is at most
pI(H t−1 , H t → 0) + I(H t−1 , Φ t,i (H t ) → ⊕ p j=0 Φ t,j (H t )) + I(H t−1 , Φ t,i (H t ) → Φ t,p+1 (H t )) (13)
By applying the Principle of Inclusion and Exclusion and Equation 6, we deduce that Equation (13) is at most
pI(H t−1 , H t → 0) + I(H t−1 , Φ t,i (H t ) → 0) + I(H t−1 , Φ t,i (H t ) → ⊕ p+1 j=0 Φ t,j (H t ))(14)
And Equation (14) is equal to
(p + 1)I(H t−1 , H t → 0) + I(H t−1 , Φ t,i (H t ) → ⊕ p+1 j=0 Φ t,j (H t ))(15)
And so combining Equations (12), (13), (14), and (15), we deduce that Equation (11) also holds when s = p + 1. Since the base case s = 0 holds, we therefore conclude that the inequality holds for all s ∈ {0, 1, . . . , r − 1}.
Following Lemma 5 and Equation (6), we deduce when s = r − 1 the following corollary.
I(H t−1 , Φ t,i (H t ) → Φ t,j (H t )) (r − 1)I(H t−1 , H t → 0) + I(H t−1 )
We are now ready to establish the key iterative step, showing geometric decay of the dimension I(H 1 , . . . , H t ) in t.
Proof. Recall that by the property of an (ℓ, r) F -MSR subspace family, the maps Φ t,j , j ∈ {0, 1, . . . , r − 1}, leave H 1 , . . . , H t−1 invariant. Using this it follows that
I(H t−1 , H t ) = I(H t−1 , Φ t,i (H t ) → Φ t,j (H t )) for each i, j ∈ {0, 1, . . . , r−1}, since we have an isomorphism F(H t−1 , H t ) → F(H t−1 , Φ t,i (H t ) → Φ t,j (H t )) given by ψ → Φ t,j • ψ • Φ −1 t,i . Thus we have r 2 · I(H t−1 , H t ) = r−1 i=0 r−1 j=0 I(H t−1 , Φ t,i (H t ) → Φ t,j (H t )) .(17)
Notice the the inner sum is the same as the left hand side in Corollary 6. Thus we are able to apply Corollary 6 on Equation (17) to find that
r−1 i=0 r−1 j=0 I(H t−1 , Φ t,i (H t ) → Φ t,j (H t )) r−1 i=0 [(r − 1)I(H t−1 , Φ t,i (H t ) → 0) + I(H t−1 )] = rI(H t−1 ) + (r − 1) r−1 i=0 I(H t−1 , Φ t,i (H t ) → 0) .(18)
Now we observe that the only linear transformation of F ℓ that maps Φ t,i (H t ) → 0 for all i ∈ {0, 1, . . . , r − 1} simultaneously is the identically 0 map. This is because r−1 j=0 Φ t,j (H t ) = F ℓ from Equation 6. Thus we are in a situation where Lemma 4 applies, and we have
rI(H t−1 ) + (r − 1) r−1 i=0 I(H t−1 , Φ t,i (H t ) → 0) rI(H t−1 ) + (r − 1) · (r − 1)I(H t−1 ) = (r 2 − r + 1)I(H t−1 )(19)
Combining Equations (17), (18), and (19), we conclude Equation (16) as desired.
We are now ready to finish off the proof of our claimed upper bound on the size k of an (ℓ, r) F -MSR family.
Proof of Theorem 3. Since the identity map belongs to the space of I(H 1 , . . . , H k ), by applying Lemma 7 inductively on H 1 , H 2 , . . . , H k , we obtain the inequality
1 I(H 1 , . . . , H k ) r 2 − r + 1 r 2 k · ℓ 2 ,
from which we find that
k 2 ln ℓ ln r 2 r 2 −r+1 2 ln ℓ r−1 r 2 = 2r 2 r − 1 ln ℓ
where the second inequality follows because ln(1 + x)
x 1+x for all x > −1. We thus have the claimed upper bound.
A Proof of Theorem 8
In this section, we state and prove an alternate construction of an MSR subspace family of size (r + 1) log r ℓ. The first construction of an (ℓ, r) F -MSR subspace family of size (r + 1) log r ℓ that also satisfied the MDS property was shown in [24] for fields of size more than k r−1 ℓ/r elements. Without the MDS property, the field size needed to be more than r elements to show that the construction satisfied the node repair property.
Our construction uses subspaces that are identical to the ones in [24], but we choose different linear maps that required only two distinct eigenvalues. As a result, our construction works over all fields with more than two elements. It remains a very interesting question whether the MDS property can be additionally incorporated into our construction to yield MSR codes with sub-packetization r k/(r+1) over smaller fields.
Theorem 8. For |F| > 2 and r 2, there exists an (ℓ = r m , r) F -MSR subspace family of (r + 1)m = (r + 1) log r (ℓ) subspaces.
In the rest of the section, we will prove the theorem above.
To give a general view of our construction, we first shift our view of the ambient space F ℓ = F r m to (F r ) ⊗m , vectors that consist of m tensored vectors in F r . We then consider a collection of vectors T := {v 1 , v 2 , . . . , v r , v r+1 }, situated in F r , such that any r of them form a basis in F r . The subspace A k,i will be all vectors in (F r ) ⊗m whose k'th position in the m tensored vectors is the vector v i .
The r − 1 associated linear maps Φ (k,i),1 , . . . , Φ (k,i),r−1 of the subspace A k,i will simply focus on transforming the k'th position of each vector while retaining all remaining positions. Specifically, on the k'th position, it will scale all vectors in T \ {v i }. The linear map Φ (k,i),t will scale v i+t by a factor λ = 1 while all other vectors in T \ {v i } will be identically mapped, where the indices are taken modulo r + 1. That way, everything in T \ {v i } will stay almost the same while v i along with the r − 1 images of v i will form a basis for F r in the k'th position.
Proof. Let ℓ = r m , and let V = (F r ) ⊗m ≃ F ℓ be the ambient space. Consider a set of vectors {v 1 , v 2 , . . . , v r , v r+1 } ⊂ F r for which the first r form a basis in F r and satisfy the equation
v 1 + v 2 + . . . + v r + v r+1 = 0
For k ∈ [m] and i ∈ [r + 1], we define our (r + 1)m subspaces to be
A k,i := span(v i 1 ⊗ . . . ⊗ v im | i j ∈ [r + 1], i k = i)
which is a subspace of V . Observe that while the k'th position is fixated for any vector in A k,i , the remaining m − 1 positions are free to choose from any r vectors in F r . Through this observation, we see that dim(A k,i ) = r m−1 = ℓ/r.
To properly define the associated linear maps of the subspace family, it suffices to show their mapping for the basis
S i := {v i 1 ⊗ . . . ⊗ v im | i j ∈ [r + 1] \ {i}} of V .
Since |F| > 2, then we can fix a constant λ ∈ F with λ / ∈ {0, 1}, which we will use as an eigenvalue across all (r − 1)(r + 1)m linear maps. For each t ∈ [r − 1], the linear map Φ (k,i),t will scale all vectors in S i whose k'th position is v i+t by a factor λ and identically all remaining vectors in S i , where indices are taken modulo r + 1. Namely, for
i k = i + t, v i 1 ⊗ . . . ⊗ v i k ⊗ . . . ⊗ v im Φ (k,i),t − −−− → v i 1 ⊗ . . . ⊗ (λv i k ) ⊗ . . . ⊗ v im And for i k ∈ [r + 1] \ {i + t, i}, v i 1 ⊗ . . . v i k ⊗ . . . ⊗ v im Φ (k,i),t − −−− → v i 1 ⊗ . . . ⊗ v i k ⊗ . . . ⊗ v im
Observe that all the vectors in the basis S i are scaled by either 1 or λ, which means that the image Φ (k,i),t (S i ) is also a basis for V . This tells us that Φ (k,i),t is an invertible linear map. It now remains to show Properties 6 and 7 hold for our given subspaces and linear maps.
To show Property 6, we can use Equation (A) to rewrite v i as v i = − j∈[r+1]\{i} v i . This shows us that when the k'th position of a vector is v i , then Φ (k,i),t will map it as
Φ_{(k,i),t} : v_{i_1} ⊗ · · · ⊗ v_i ⊗ · · · ⊗ v_{i_m} ↦ v_{i_1} ⊗ · · · ⊗ (v_i − (λ − 1) v_{i+t}) ⊗ · · · ⊗ v_{i_m}.
Since λ ≠ 1, the set {v_i, v_i − (λ − 1)v_{i+1}, . . . , v_i − (λ − 1)v_{i+r−1}} forms a basis for F^r. Thus for the vector v = v_{i_1} ⊗ · · · ⊗ v_{i_{k−1}} ⊗ v_i ⊗ v_{i_{k+1}} ⊗ · · · ⊗ v_{i_m}, the vectors {v, Φ_{(k,i),1}(v), . . . , Φ_{(k,i),r−1}(v)} span all of F^r in the k'th position. Because we are free to choose any vector in all remaining positions, such vectors v together with their images span all of V. That is, we find that A_{k,i} ⊕ ⊕_{t=1}^{r−1} Φ_{(k,i),t}(A_{k,i}) = F^ℓ, which shows Property (6).
To show (7), we start by breaking the subspace A k ′ ,i ′ into two possibilities:
1. For the case when k ′ = k, the subspace A k ′ ,i ′ remains invariant under each Φ (k,i),t as they only linearly transform the k'th position while retaining all other positions.
2. For the case when k′ = k and i′ ≠ i, the subspace A_{k,i′} is an eigenspace for Φ_{(k,i),t}. Namely, when i′ ≠ i + t, A_{k,i′} is an eigenspace of eigenvalue 1. When i′ = i + t, the eigenvalue is instead λ.
This shows that (7) also holds.
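The construction above is small enough to verify mechanically for tiny parameters. The following sketch is ours, not part of the paper: it checks Properties (6) and (7) for r = 2, m = 2 over GF(3) with λ = 2; the helper names (rank_mod_p, inv2_mod_p, phi_factor, big_map, subspace) and the parameter choices are our own assumptions.

# Hypothetical sanity check (ours) of the Theorem 8 construction
# for r = 2, m = 2 over GF(3), with eigenvalue lam = 2.
import numpy as np
from itertools import product

P, r, m, lam = 3, 2, 2, 2
ell = r ** m                                   # sub-packetization 4

def rank_mod_p(M, p=P):
    """Rank over GF(p) via Gaussian elimination on an integer matrix."""
    A = np.array(M, dtype=int) % p
    rk = 0
    for c in range(A.shape[1]):
        piv = next((i for i in range(rk, A.shape[0]) if A[i, c] % p), None)
        if piv is None:
            continue
        A[[rk, piv]] = A[[piv, rk]]
        A[rk] = (A[rk] * pow(int(A[rk, c]), p - 2, p)) % p
        for i in range(A.shape[0]):
            if i != rk:
                A[i] = (A[i] - A[i, c] * A[rk]) % p
        rk += 1
    return rk

def inv2_mod_p(B, p=P):
    """Inverse of a 2x2 integer matrix over GF(p)."""
    det = int(B[0, 0] * B[1, 1] - B[0, 1] * B[1, 0]) % p
    adj = np.array([[B[1, 1], -B[0, 1]], [-B[1, 0], B[0, 0]]])
    return (pow(det, p - 2, p) * adj) % p

v = {1: np.array([1, 0]), 2: np.array([0, 1]), 3: np.array([2, 2])}  # v1 + v2 + v3 = 0 over GF(3)

def phi_factor(i):
    """2x2 map on F^2: scales v_{i+1} by lam, fixes v_{i+2} (indices mod 3 in {1,2,3})."""
    a, b = (i % 3) + 1, ((i + 1) % 3) + 1
    B = np.column_stack([v[a], v[b]])
    return (B @ np.diag([lam, 1]) @ inv2_mod_p(B)) % P

def big_map(k, i):
    """Phi_{(k,i),1} on (F^2) tensor (F^2): acts by phi_factor(i) on the k'th slot."""
    F, I = phi_factor(i), np.eye(2, dtype=int)
    return (np.kron(F, I) if k == 1 else np.kron(I, F)) % P

def subspace(k, i):
    """Spanning rows of A_{k,i}: tensors whose k'th factor is v_i."""
    rows = [np.kron(v[i], v[j]) if k == 1 else np.kron(v[j], v[i]) for j in (1, 2, 3)]
    return np.array(rows) % P

for (k, i) in product((1, 2), (1, 2, 3)):
    S, M = subspace(k, i), big_map(k, i)
    assert rank_mod_p(np.vstack([S, S @ M.T])) == ell               # Property (6)
    for (k2, i2) in product((1, 2), (1, 2, 3)):
        if (k2, i2) != (k, i):
            S2 = subspace(k2, i2)
            assert rank_mod_p(np.vstack([S2, S2 @ M.T])) == rank_mod_p(S2)  # Property (7)
print("Properties (6) and (7) verified for r = 2, m = 2 over GF(3).")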
B Proof of the Cutset bound
Proof. Consider an (n, k, ℓ)-MDS vector code that stores a file M of size kℓ in storage nodes s_1, s_2, . . . , s_n. The MDS vector code will repair a storage node s_h by making every other storage node s_i communicate β_{i,h} bits to s_h. From the MDS property, we know that any collection C ⊆ [n] \ {h} of k − 1 nodes {s_i}_{i∈C}, together with s_h, is able to reconstruct our original file M.
Thus the collective information of these k storage nodes is at least |M| = kℓ, implying the inequality
∑_{i∈C} |s_i| + ∑_{i∈[n]\(C∪{h})} β_{i,h} ≥ kℓ.  (20)
Since every storage node stores ℓ bits (|s_i| = ℓ), (20) reduces down to
∑_{i∈[n]\(C∪{h})} β_{i,h} ≥ ℓ.  (21)
Hence (21) implies that any n − k helper storage nodes collectively communicate at least ℓ bits. Summing (21) over all possible collections of n − k helper storage nodes (each β_{i,h} appears in the same number of collections), we find that
∑_{i∈[n]\{h}} β_{i,h} ≥ (n − 1)/(n − k) · ℓ.  (22)
This is the claimed cutset bound. Moreover, to achieve equality for (22), equality must be achieved for (21) over all collections of n − k helper storage nodes. That is possible only when β_{i,h} = ℓ/(n − k) for all i ∈ [n] \ {h}. Hence, under optimal repair bandwidth, the total information communicated is ∑_{i∈[n]\{h}} β_{i,h} = (n − 1)ℓ/(n − k), and this is achieved only when every helper storage node communicates exactly ℓ/(n − k) bits to storage node s_h. | 6,904
1901.05112 | 2910298035 | An @math -vector MDS code is a @math -linear subspace of @math (for some field @math ) of dimension @math , such that any @math (vector) symbols of the codeword suffice to determine the remaining @math (vector) symbols. The length @math of each codeword symbol is called the sub-packetization of the code. Such a code is called minimum storage regenerating (MSR), if any single symbol of a codeword can be recovered by downloading @math field elements (which is known to be the least possible) from each of the other symbols. MSR codes are attractive for use in distributed storage systems, and by now a variety of ingenious constructions of MSR codes are available. However, they all suffer from exponentially large sub-packetization @math . Our main result is an almost tight lower bound showing that for an MSR code, one must have @math . Previously, a lower bound of @math , and a tight lower bound for a restricted class of "optimal access" MSR codes, were known. Our work settles a central open question concerning MSR codes that has received much attention. Further our proof is really short, hinging on one key definition that is somewhat inspired by Galois theory. | In two fascinating (independent) works, Ye and Barg @cite_22 and Sasidharan, Vajha, and Kumar @cite_25 give a fully explicit construction of MSR codes over small fields with sub-packetization level @math . These constructions also have the so-called or property, which means that the helper nodes do not have to perform any linear combinations on their data, and can simply transfer a suitable subset of @math coordinates of the vector in @math that they store. Thus the number of symbols at a node equals the number of symbols it transmits over the network to aid the repair (recall that the repair-bandwidth measures the latter amount). | {
"abstract": [
"This paper presents an explicit construction for an @math regenerating code over a field @math operating at the Minimum Storage Regeneration (MSR) point. The MSR code can be constructed to have rate @math as close to @math as desired, sub-packetization given by @math , for @math , field size no larger than @math and where all code symbols can be repaired with the same minimum data download. The construction modifies a prior construction by Sasidharan et. al. which required far larger field-size. A building block appearing in the construction is a scalar MDS code of block length @math . The code has a simple layered structure with coupling across layers, that allows both node repair and data recovery to be carried out by making multiple calls to a decoder for the scalar MDS code. While this work was carried out independently, there is considerable overlap with a prior construction by Ye and Barg. It is shown here that essentially the same architecture can be employed to construct MSR codes using vector binary MDS codes as building blocks in place of scalar MDS codes. The advantage here is that computations can now be carried out over a field of smaller size potentially even over the binary field as we demonstrate in an example. Further, we show how the construction can be extended to handle the case of @math under a mild restriction on the choice of helper nodes.",
"An @math maximum distance separable (MDS) array code of length @math , dimension @math , and sub-packetization @math is formed of @math matrices over a finite field @math , with every column of the matrix stored on a separate node in the distributed storage system and viewed as a coordinate of the codeword. Repair of a failed node (recovery of one erased column) can be performed by accessing a set of @math surviving (helper) nodes. The code is said to have the optimal access property if the amount of data accessed at each of the helper nodes meets a lower bound on this quantity. For optimal-access MDS codes with @math , the sub-packetization @math satisfies the bound @math . In our previous work (IEEE Trans. Inf. Theory, vol. 63, no. 4, 2017), for any @math and @math , we presented an explicit construction of optimal-access MDS codes with sub-packetization @math . In this paper, we take up the question of reducing the sub-packetization value @math to make it to approach the lower bound. We construct an explicit family of optimal-access codes with @math , which differs from the optimal value by at most a factor of @math . These codes can be constructed over any finite field @math as long as @math , and afford low-complexity encoding and decoding procedures. We also define a version of the repair problem that bridges the context of regenerating codes and codes with locality constraints (LRC codes), which we call group repair with optimal access . In this variation, we assume that the set of @math nodes is partitioned into @math repair groups of size @math , and require that the amount of accessed data for repair is the smallest possible whenever the @math helper nodes include all the other @math nodes from the same group as the failed node. For this problem, we construct a family of codes with the group optimal access property. These codes can be constructed over any field @math of size @math , and also afford low-complexity encoding and decoding procedures."
],
"cite_N": [
"@cite_25",
"@cite_22"
],
"mid": [
"2492240243",
"2963781977"
]
} | An Exponential Lower Bound on the Sub-Packetization of Minimum Storage Regenerating Codes * | Traditional Maximum Distance Separable (MDS) codes such as Reed-Solomon codes provide the optimal trade-off between redundancy and number of worst-case erasures tolerated. When encoding k symbols of data into an n symbol codeword by an (n, k)-MDS code, the data can be recovered from any set of k out of n codeword symbols, which is clearly the best possible. MDS codes are thus a naturally appealing choice to minimize storage overhead in distributed storage systems (DSS). One can encode data, broken into k pieces, by an (n, k)-MDS code, and distribute the n codeword symbols on n different storage nodes, each holding the symbol corresponding to one codeword position. In the sequel, we use the terms storage node and codeword symbol interchangeably.
A rather common scenario faced by modern large scale DSS is the failure or temporary unavailability of storage nodes. It is of great importance to promptly respond to such failures, by efficient repair/regeneration of the failed node using the content stored in some of the other nodes (which are called "helper" nodes as they assist in the repair). This requirement has spurred a set of fundamentally new and exciting challenges concerning codes for recovery from erasures, with the goal of balancing worst-case fault tolerance from many erasures, with very efficient schemes to recover from the much more common scenario of single (or a few) erasures.
There are two measures of repair efficiency that have received a significant amount of attention in the last decade. One concerns locality, where we would like to repair a node locally based on the contents of a small number of other storage nodes. Such locality necessarily compromises the MDS property, and a rich body of work on locally repairable codes (LRCs) studies the best trade-offs possible in this model and constructions achieving those [8,14,20]. The other line of work, which is the subject of this paper, focuses on optimizing the amount of data downloaded from the other nodes. This model allows the helper node to respond with a fraction of its contents. The efficiency measure is the repair bandwidth, which is the total amount of data downloaded from all the helper nodes. Codes in this model are called regenerating codes, and were systematically introduced in the seminal work of Dimakis et al. [6], and have since witnessed an explosive amount of research.
Rather surprisingly, even for some MDS codes, by contacting more helper nodes but downloading fewer symbols from each, one can do much better than the "usual" scheme, which would download the contents of k nodes in full. In general an entire spectrum of trade-offs is possible between storage overhead and repair bandwidth. This includes minimum bandwidth regenerating (MBR) codes with the minimum repair bandwidth of ℓ [16]. At the other end of the spectrum, we have minimum storage regenerating (MSR) codes (defined formally below) which retain the MDS property and thus have optimal redundancy. This work focuses on MSR codes.
Example. We quickly recap the classic example of the EVENODD code [3,7] to illustrate regeneration of a lost symbol in an MDS code with non-trivial bandwidth. This is a (4, 2) MDS code with 4 storage nodes, each storing a vector of two symbols over the binary field. We denote by P_1, P_2 the two parity nodes.
Node:       S_1    S_2    P_1         P_2
Symbol 1:   a_1    b_1    a_1 + b_1   a_2 + b_1
Symbol 2:   a_2    b_2    a_2 + b_2   a_1 + a_2 + b_2
The naive scheme to repair a node would contact any two of the remaining three nodes, and download both bits from each of them, for a total repair bandwidth of 4 bits. However, it turns out that one can get away with downloading just one bit from each of the three other nodes, for a repair bandwidth of 3 bits! If we were to repair the node S 1 , the remaining nodes (S 2 , P 1 , P 2 ) would send (b 1 , a 1 + b 1 , a 2 + b 1 ), respectively. If we were to repair the node S 2 , the remaining nodes (S 1 , P 1 , P 2 ) would send (a 2 , a 2 + b 2 , a 2 + b 1 ), respectively. If we were to repair the node P 1 , the remaining nodes (S 1 , S 2 , P 2 ) would send (a 1 , b 1 , a 1 + a 2 + b 2 ), respectively. If we were to repair the node P 2 , the remaining nodes (S 1 , S 2 , P 1 ) would send (a 2 , b 1 , (a 1 + b 1 ) + (a 2 + b 2 )), respectively. Note that in the last case, the helper node P 1 sends a linear combination of its symbols-this is in general a powerful ability that we allow in MSR codes.
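The repair schemes just described are easy to replay by computer. Here is a quick self-contained check (ours, not from the paper) that runs over all 16 codewords of the (4, 2) EVENODD code above and confirms that each node is recovered from exactly 3 downloaded bits.

# Replay (ours) of the EVENODD (4, 2) repair schemes over GF(2).
from itertools import product

for a1, a2, b1, b2 in product((0, 1), repeat=4):
    S1, S2 = (a1, a2), (b1, b2)
    P1, P2 = (a1 ^ b1, a2 ^ b2), (a2 ^ b1, a1 ^ a2 ^ b2)

    # Repair S1 from (b1, a1+b1, a2+b1): one bit from each of S2, P1, P2.
    assert S1 == (b1 ^ P1[0], b1 ^ P2[0])
    # Repair S2 from (a2, a2+b2, a2+b1).
    assert S2 == (a2 ^ P2[0], a2 ^ P1[1])
    # Repair P1 from (a1, b1, a1+a2+b2).
    assert P1 == (a1 ^ b1, a1 ^ P2[1])
    # Repair P2 from (a2, b1, (a1+b1)+(a2+b2)).
    assert P2 == (a2 ^ b1, b1 ^ P1[0] ^ P1[1])
print("All four nodes repaired from 3 bits each, over all 16 codewords.")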
Vector codes and sub-packetization. The above example shows that when the code is an (n, k) vector MDS code, where each codeword symbol itself is a vector, say in F^ℓ for some field F, then one can hope to achieve repair bandwidth smaller than the naive kℓ. The length of the vector ℓ stored at each node is called the sub-packetization (since this is the granularity into which a single codeword symbol needs to be divided).
MSR codes.
A natural question is how small a repair bandwidth one can achieve with MDS codes. The so-called cutset bound [6] dictates that one must download at least (n − 1)ℓ/(n − k) symbols of F from the remaining nodes to recover any single node. Further, in order to attain this optimal repair bandwidth bound, each of the (n − 1) nodes must respond with ℓ/(n − k) field elements. Vector MDS codes which admit repair schemes meeting the cutset bound (for repair of every node) are called minimum storage regenerating (MSR) codes (for the formal description, see Definition 1). MSR codes, and specifically their sub-packetization, are the focus of this paper.
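For a feel of the savings promised by the cutset bound, here is a small back-of-the-envelope computation; the parameter values are our own illustration, not from the paper.

# Illustrative numbers (ours): cutset repair bandwidth vs. the naive scheme.
# Units are field symbols.
n, k, ell = 14, 10, 4096          # hypothetical parameters; r = n - k = 4
r = n - k
naive = k * ell                   # download k nodes in full
cutset = (n - 1) * ell // r       # (n - 1) * ell / (n - k)
print(naive, cutset, naive / cutset)   # 40960 13312 ~3.08x saving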
Large sub-packetization: problematic and inherent. While there are many constructions of MSR codes by now, they all have large sub-packetization, which is at least r^{k/r}. For the setting of most interest, when we incur a small redundancy r in exchange for repair of information, this is very large, and in particular exp(Ω(k)) when r = O(1). A small sub-packetization is important for a number of reasons, as explained in some detail in the introduction of [17]. A large subpacketization limits the number of storage nodes (for example if ℓ ≥ exp(Ω(n)), then n = O(log ℓ), where ℓ is the storage capacity of each node), and in general leads to a reduced design space in terms of various systems parameters. A larger sub-packetization also makes management of meta-data, such as description of the code and the repair mechanisms for different nodes, more difficult. For a given storage capacity, a smaller sub-packetization allows one to distribute codewords corresponding to independently coded files among multiple nodes, which allows for distributing the load of providing information for the repair of a failed node among a larger number of nodes.
It has been known that somewhat large sub-packetization is inherent for MSR codes (we will describe the relevant prior results in the next section). In this work, we improve this lower bound to exponential, showing that unfortunately the exponential sub-packetization of known constructions is inherent. Our main result is the following. Theorem 1. Suppose an (n, k)-vector MDS code with redundancy r = n − k ≥ 2 is minimum storage regenerating (MSR). Then its sub-packetization ℓ must satisfy
ℓ ≥ (r^2 / (r^2 − r + 1))^{(k−1)/2} ≥ e^{(k−1)(r−1)/(2r^2)}.
Our lower bound almost matches the sub-packetization of r O(k/r) achieved by the best known constructions. Improving the base of the exponent in our lower bound to r will make it even closer to the upper bounds. Though when r is small, which is the primary setting of interest in codes for distributed storage, this difference is not that substantial. We remark that our theorem leaves out the case when r = 1, which is known to have a sub-packetization of ℓ = 1 [9].
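To make the bound concrete, the short computation below (our own illustration; the sample values of k and r are assumptions) evaluates the Theorem 1 lower bound and compares it with the r^{⌈k/r⌉} sub-packetization of known constructions.

# Evaluating the Theorem 1 lower bound for sample (k, r) values (ours).
import math

for k, r in [(100, 2), (100, 3), (200, 4)]:
    bound = (r * r / (r * r - r + 1)) ** ((k - 1) / 2)
    known = r ** math.ceil(k / r)      # sub-packetization of known constructions
    print(f"k={k}, r={r}: lower bound ~ {bound:.3e}, constructions ~ {known:.3e}")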
A few words about our proof. Previous work [22] has shown that an (n, k) MSR code with sub-packetization ℓ implies a family of (k − 1) ℓ/r-dimensional subspaces H_i of F^ℓ each of which has an associated collection of (r − 1) linear maps obeying some strong properties. For instance, in the case r = 2, there is an invertible map φ_i associated with H_i for each i which leaves all subspaces H_j, j ≠ i, invariant, and maps H_i itself to a disjoint space (i.e., φ_i(H_i) ∩ H_i = {0}). The task of showing a lower bound on ℓ then reduces to the linear-algebraic challenge of showing an upper bound on the size of such a family of subspaces and linear transformations, which we call an MSR subspace family (Definition 2). The authors of [10] showed an upper bound O(r log^2 ℓ) on the size of MSR subspace families via a nifty partitioning and linear independence argument.
We follow a different approach by showing that the number of linear maps that fix all subspaces in an MSR family decreases sharply as the number of subspaces increases. Specifically, we show that the dimension of the linear space of such linear maps decreases exponentially in the number of subspaces in the MSR family. This enables us to prove an O(r log ℓ) upper bound. This bound is asymptotically tight (up to an O(log r) factor), as there is a construction of an MSR subspace family of size (r + 1) log_r ℓ [24]. We also present an alternate construction in Appendix A, which works for all fields with more than 2 elements, compared to the large field size (of at least ≈ r^r ℓ) required in [24].
We now proceed to situate our work in the context of prior work, both constructions and lower bounds, for MSR codes.
Preliminaries
We will now define MSR codes more formally. We begin by defining vector codes. Let F be a field, and n, ℓ be positive integers. For a positive integer b, we denote [b] = {1, 2, . . . , b}. A vector code C of block length n and sub-packetization ℓ is an F-linear subspace of (F ℓ ) n . We can express a codeword of C as c = (c 1 , c 2 , . . . , c n ), where for i ∈ [n], the block c i = (c i,1 , . . . , c i,ℓ ) ∈ F ℓ denotes the length ℓ vector corresponding to the i'th code symbol c i .
Let k be an integer, with 1 ≤ k ≤ n. If the dimension of C, as an F-vector space, is kℓ, we say that C is an (n, k, ℓ)_F-vector code. The codewords of an (n, k, ℓ)_F-vector code are in one-to-one correspondence with vectors in (F^ℓ)^k, consisting of k blocks of ℓ field elements each.
Such a code is said to be Maximum Distance Separable (MDS), and called an (n, k, ℓ)-MDS code (over the field F), if every subset of k code symbols c_{i_1}, c_{i_2}, . . . , c_{i_k} is an information set for the code, i.e., knowing these symbols determines the remaining n − k code symbols and thus the full codeword. An MDS code thus offers the optimal erasure correction property: the information can be recovered from any set of k code symbols, thus tolerating the maximum possible number n − k of worst-case erasures.
An (n, k, ℓ)-MDS code can be used in distributed storage systems as follows. Data viewed as kℓ symbols over F is encoded using the code resulting in n vectors in F ℓ , which are stored in n storage nodes. Downloading the full contents from any subset of these k nodes (a total of kℓ symbols from F) suffices to reconstruct the original data in entirety. Motivated by the challenge of efficient regeneration of a failed storage node, which is a fairly typical occurrence in large scale distributed storage systems, the repair problem aims to recover any single code symbol c i by downloading fewer than kℓ field elements. This is impossible if one only downloads contents from k nodes, but becomes feasible if one is allowed to contact h > k helper nodes and receive fewer than ℓ field elements from each.
Here we focus our attention to only repairing the first k code symbols, which we view as the information symbols. This is called "systematic node repair" as opposed to the more general "all node repair" where the goal is to repair all n codeword symbols. We will also only consider the case h = n − 1, when all the remaining nodes are available as helper nodes. Since our focus is on a lower bound on the sub-packetization ℓ, this only makes our result stronger, and keeps the description somewhat simpler. We note that the currently best known constructions allow for all-node repair with optimal bandwidth from any subset of h helper nodes.
Suppose we want to repair the m'th code symbol for some m ∈ [k]. We download from the i'th code symbol, i ≠ m, a function h_{i,m}(c_i) of its contents, where h_{i,m} : F^ℓ → F^{β_{i,m}} is the repair function. Since C is F-linear, it is natural for h_{i,m} to exploit this linearity. Therefore, throughout this paper, we shall assume linear repair of the failed node. That is, h_{i,m} is an F-linear function. Thus, we download from each node certain linear combinations of the ℓ symbols stored at that node. The total repair bandwidth to recover c_m is defined to be ∑_{i≠m} β_{i,m}. By the cutset bound for repair of MDS codes [6], this quantity is lower bounded by (n − 1)ℓ/r, where r = n − k is the redundancy of the code. Further, equality can be attained only if β_{i,m} = ℓ/r for all i. That is, we download ℓ/r field elements from each of the remaining nodes. MDS codes achieving such an optimal repair bandwidth are called Minimum Storage Regenerating (MSR) codes, as precisely defined below. Let C ⊆ (F^ℓ)^n be an (n, k, ℓ)-MSR code, with redundancy r = n − k. The MDS property implies that any subset of k codeword symbols determines the whole codeword. We view the first k symbols as the "systematic" ones, with r parity check symbols computed from them, where we remind that when we say code symbol we mean a vector in F^ℓ. So we can assume that there are invertible matrices C_{i,j} ∈ F^{ℓ×ℓ} for i ∈ [r] and j ∈ [k] such that for c = (c_1, c_2, . . . , c_n) ∈ C, we have
c_{k+i} = ∑_{j=1}^{k} C_{i,j} c_j.
Suppose we want to repair a systematic node c m for m ∈ [k] with optimal repair bandwidth, by receiving from each of the remaining n − 1 nodes, ℓ/r F-linear combinations of the information they stored. This means that there are repair matrices S 1,m , . . . , S r,m ∈ F ℓ/r×ℓ , such that parity node k + i sends the linear combination
S_{i,m} c_{k+i} = S_{i,m} ∑_{j=1}^{k} C_{i,j} c_j.  (2)
Therefore, the information about c m that is sent to it by c k+i is S i,m C i,m c m . Since the k systematic nodes are independent of each other, then the only way to recover c m is by taking a linear combination of S i,m C i,m c m for i ∈ [r] such that the linear combination equals c m for any c m ∈ F ℓ . Therefore, to ensure full regeneration of c m , we must satisfy
rank( [ S_{1,m} C_{1,m} ; S_{2,m} C_{2,m} ; · · · ; S_{r,m} C_{r,m} ] ) = ℓ
Since each S i,m C i,m has ℓ/r rows, the above happens if and only if
∑_{i=1}^{r} R(S_{i,m} C_{i,m}) = F^ℓ  (3)
where R(M ) denotes the row-span of a matrix M .
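Condition (3) can be tested numerically by stacking the ℓ/r × ℓ matrices S_{i,m} C_{i,m} and checking for full rank ℓ. The toy matrices below are our own stand-ins (over the rationals rather than a finite field), not taken from any actual code.

# Toy numeric check (ours) of the full-span condition (3).
import numpy as np

ell, r = 4, 2
blocks = [np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0]]),          # stands in for S_{1,m} C_{1,m}
          np.array([[0, 0, 1, 0],
                    [0, 0, 0, 1]])]          # stands in for S_{2,m} C_{2,m}
stacked = np.vstack(blocks)
print(np.linalg.matrix_rank(stacked) == ell)  # True: the row-spans sum to F^ell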
Cancelling interference of other systematic symbols
Now, for every other systematic node m′ ∈ [k] \ {m}, the parity nodes send the following linear combinations of c_{m′}:
[ S_{1,m} C_{1,m′} ; S_{2,m} C_{2,m′} ; · · · ; S_{r,m} C_{r,m′} ] c_{m′}  (4)
In order to cancel this from the linear combinations (2) received from the parity nodes, the systematic node m ′ has to send the linear combinations (4) about its contents. To achieve optimal repair bandwidth of at most ℓ/r symbols from every node, this imposes the requirement
rank( [ S_{1,m} C_{1,m′} ; S_{2,m} C_{2,m′} ; · · · ; S_{r,m} C_{r,m′} ] ) ≤ ℓ/r.
However since C i,m ′ is invertible, and S i,m has full row rank, rank(S i,m C i,m ′ ) = ℓ/r for all i ∈ [r]. Combining this fact with the rank inequality above, this implies
R(S 1,m C 1,m ′ ) = · · · = R(S r,m C r,m ′ )(5)
for every m = m ′ ∈ [k], where R(M ) is the row-span of a matrix M .
Constant repair matrices and casting the problem in terms of subspaces
We now make an important simplification, which allows us to assume that the matrices S i,m above depend only on the node m being repaired, but not on the helping parity node i. That is, S m = S i,m for all i ∈ [r]. We call repair with this restriction as possessing constant repair matrices. It turns out that one can impose this restriction with essentially no loss in parameters -by Theorem 2 of [22], if there is a (n, k, ℓ)-MSR code then there is also a (n − 1, k − 1, ℓ)-MSR code with constant repair matrices.
This allows us to cast the requirements (3) and (5) in terms of a nice property about subspaces and associated invertible maps, which we abstract below. This property was shown to be intimately tied to MSR codes in [24,22]. Definition 2 (MSR subspace family). For integers ℓ, r with r|ℓ and a field F, a collection of subspaces H 1 , . . . , H k of F ℓ of dimension ℓ/r each is said to be an (ℓ, r) F -MSR subspace family if there exist invertible linear maps Φ i,j on F ℓ , i ∈ {1, 2, . . . , k} and j ∈ {1, 2, . . . , r − 1} such that for every i ∈ [k], the following holds:
H_i ⊕ ⊕_{j=1}^{r−1} Φ_{i,j}(H_i) = F^ℓ  (6)
Φ_{i′,j}(H_i) = H_i for every j ∈ [r − 1] and i′ ≠ i  (7)
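For intuition, here is a minimal worked example of Definition 2 (ours, not from the paper) with r = 2, ℓ = 2 and k = 2, valid over any field F:

\[
H_1 = \mathrm{span}(e_1), \qquad H_2 = \mathrm{span}(e_2), \qquad
\Phi_{1,1} = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}, \qquad
\Phi_{2,1} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.
\]

Here Φ_{1,1} e_1 = (1, 1)^T ∉ H_1, so H_1 ⊕ Φ_{1,1}(H_1) = F^2, and symmetrically for Φ_{2,1} and H_2, giving Property (6); while Φ_{1,1} e_2 = e_2 and Φ_{2,1} e_1 = e_1, so each map leaves the other subspace invariant, giving Property (7).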
Now, we recall the argument that if we have an (n, k, ℓ)-MSR code with constant repair matrices, then that also yields a family of subspaces and maps with the above properties. Indeed, we can take H_m, m ∈ [k], to be R(S_m), and Φ_{m,j}, j ∈ [r − 1], is the invertible linear transformation mapping x ∈ F^ℓ, viewed as a row vector, to x C_{j+1,m} C_{1,m}^{−1}. It is clear that Property (6) follows from (3), and Property (7) follows from (5). Together with the loss of one dimension in the transformation [22] to an MSR code with constant repair subspaces, we can conclude the following connection between MSR codes and the very structured set of subspaces and maps of Definition 2. Proposition 2. If there exists an (n, k, ℓ)_F-MSR code with linear repair, then there exists an (ℓ, r)_F-MSR subspace family of size k − 1, where r = n − k. For the reverse direction, the MSR subspace family can take care of the node repair, but one still needs to ensure the MDS property. This approach was taken in [24], based on a construction of an (ℓ, r)_F-MSR subspace family of size (r + 1) log_r ℓ. For completeness, we present another construction of an MSR subspace family in Appendix A. The subspaces in our construction are identical to [24] but we pick the linear maps differently, using just two distinct eigenvalues. As a result, our construction works over any field with more than two elements. In comparison, the approach in [24] used k^{r−1} ℓ/r distinct eigenvalues, and thus required a field that is bigger than this bound. It is an interesting question to see if the MDS property can be incorporated into our construction to give MSR codes with sub-packetization r^{k/(r+1)} over smaller fields.
Limitation of MSR subspace families
In this section, we state and prove the following strong upper bound on the size of an MSR family of subspaces, showing that the construction claimed in Theorem 8 is not too far from the best possible. Theorem 3. Any (ℓ, r)_F-MSR subspace family has size at most (2r^2/(r − 1)) · ln ℓ. This upper bound together with Proposition 2 immediately implies our main result, Theorem 1. In the rest of the section, we prove the above theorem. Let H_1, H_2, . . . , H_k be the subspaces in an (ℓ, r)_F-MSR subspace family with associated invertible linear maps Φ_{i,j} where i ∈ [k] and j ∈ [r − 1]. Note that these linear maps are in some sense statements about the structure of the spaces H_1, H_2, . . . , H_k. They dictate the way the subspaces can interact with each other, thereby giving rigidity to the way they are structured.
The major insight and crux of the proof is the following definition on collections of subspaces. This definition is somewhat inspired by Galois Theory, in that we are looking at the space of linear maps on the vector space F ℓ that fix all the subspaces in question.
Definition 3.
In the vector space L(F ℓ , F ℓ ) of all linear maps from F ℓ to F ℓ , define the subspace
F(A 1 → B 1 , . . . , A s → B s ) := {ψ ∈ L(F ℓ , F ℓ ) | ψ(A i ) ⊆ B i ∀i ∈ {1, . . . , s}} for arbitrary subspaces A i , B i of F ℓ . Define the value I(A 1 → B 1 , . . . , A s → B s ) := dim(F(A 1 → B 1 , . . . , A s → B s ))
When A i = B i for each i, we adopt the shorthand notation F(A 1 , . . . , A s ) and I(A 1 , . . . , A s ) to denote the above quantities. We will also use the mixed notation F(A 1 , . . . , A s−1 , A s → B s ) to denote F(A 1 → A 1 , . . . , A s → B s ) and likewise for I(A 1 , . . . , A s−1 , A s → B s ).
Thus I(A_1, . . . , A_s) is the dimension of the space of linear maps that map each A_i within itself. We use the notation I() to suggest such an invariance. The key idea will be to cleverly exploit the invertible maps Φ_{i,j} associated with each H_i to argue that the dimension I(H_1, H_2, . . . , H_t) shrinks by a constant factor whenever we add in an H_{t+1} into the collection. Specifically, we will show that the dimension shrinks at least by a factor of (r^2 − r + 1)/r^2 for each newly added H_{t+1}. Because the identity map is always in F(H_1, H_2, . . . , H_k), the dimension I(H_1, H_2, . . . , H_k) is at least 1. As the ambient space of linear maps from F^ℓ → F^ℓ has dimension ℓ^2, this leads to an O(r log ℓ) upper bound on k. We begin with the following lemma.
Lemma 4. Let U_1, U_2, . . . , U_s ⊆ F^p, s ≥ 2, be arbitrary subspaces such that ∩_{i=1}^{s} U_i = {0}. Then the following inequality holds:
∑_{i=1}^{s} dim(U_i) ≤ (s − 1) dim(U_1 + . . . + U_s).
Proof. We proceed by inducting on s. Indeed, when s = 2, we have from the Principle of Inclusion and Exclusion (PIE)
dim(U 1 ) + dim(U 2 ) = dim(U 1 + U 2 ) + dim(U 1 ∩ U 2 ) = dim(U 1 + U 2 )
And thus the base case holds. Now, if the inequality holds when s = p, then we have via the Principle of Inclusion and Exclusion
∑_{i=1}^{p+1} dim(U_i) = dim(U_1 + U_2) + dim(U_1 ∩ U_2) + ∑_{i=3}^{p+1} dim(U_i).  (8)
By the induction hypothesis, we deduce that Equation (8) is at most
dim(U_1 + U_2) + (p − 1) dim((U_1 ∩ U_2) + · · · + U_{p+1}).  (9)
And Equation (9) is at most
p · dim(U_1 + U_2 + · · · + U_{p+1}).  (10)
By combining Equations (8), (9), and (10), we deduce that the inequality also holds when s = p + 1. Since the base case s = 2 holds, we therefore conclude that the inequality holds for all integers s ≥ 2.
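As a quick sanity check of Lemma 4 (our own example), take s = 3 distinct lines U_1, U_2, U_3 through the origin in F^2; their common intersection is {0}, and indeed

\[
\sum_{i=1}^{3} \dim(U_i) = 3 \;\le\; (3-1)\cdot\dim(U_1+U_2+U_3) = 2\cdot 2 = 4 .
\]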
Next, we prove an identity for MSR subspace families that will come in handy. For the sake of brevity, we use the shorthands H a := {H 1 , . . . , H a } and Φ a,0 to denote the identity map.
Lemma 5. For every t ∈ [k], i ∈ {0, 1, . . . , r − 1}, and s ∈ {0, 1, . . . , r − 1}, we have
∑_{j=0}^{s} I(H_{t−1}, Φ_{t,i}(H_t) → Φ_{t,j}(H_t)) ≤ s · I(H_{t−1}, H_t → 0) + I(H_{t−1}, Φ_{t,i}(H_t) → ⊕_{j=0}^{s} Φ_{t,j}(H_t)).  (11)
Proof. We proceed by inducting on s. The base case when s = 0 is clear as the right hand side simplifies to the left hand side. Now, if Equation (11) holds when s = p and p < r − 1, then we have via the Principle of Inclusion and Exclusion (PIE) and Equation (6)
∑_{j=0}^{p+1} I(H_{t−1}, Φ_{t,i}(H_t) → Φ_{t,j}(H_t)).  (12)
By the induction hypothesis, we deduce that Equation (12) is at most
p · I(H_{t−1}, H_t → 0) + I(H_{t−1}, Φ_{t,i}(H_t) → ⊕_{j=0}^{p} Φ_{t,j}(H_t)) + I(H_{t−1}, Φ_{t,i}(H_t) → Φ_{t,p+1}(H_t)).  (13)
By applying the Principle of Inclusion and Exclusion and Equation 6, we deduce that Equation (13) is at most
p · I(H_{t−1}, H_t → 0) + I(H_{t−1}, Φ_{t,i}(H_t) → 0) + I(H_{t−1}, Φ_{t,i}(H_t) → ⊕_{j=0}^{p+1} Φ_{t,j}(H_t)).  (14)
And Equation (14) is equal to
(p + 1) I(H_{t−1}, H_t → 0) + I(H_{t−1}, Φ_{t,i}(H_t) → ⊕_{j=0}^{p+1} Φ_{t,j}(H_t)).  (15)
And so combining Equations (12), (13), (14), and (15), we deduce that Equation (11) also holds when s = p + 1. Since the base case s = 0 holds, we therefore conclude that the inequality holds for all s ∈ {0, 1, . . . , r − 1}.
Following Lemma 5 and Equation (6), we deduce when s = r − 1 the following corollary. Corollary 6. For every t ∈ [k] and i ∈ {0, 1, . . . , r − 1},
∑_{j=0}^{r−1} I(H_{t−1}, Φ_{t,i}(H_t) → Φ_{t,j}(H_t)) ≤ (r − 1) I(H_{t−1}, H_t → 0) + I(H_{t−1}).
We are now ready to establish the key iterative step, showing geometric decay of the dimension I(H_1, . . . , H_t) in t. Lemma 7. For every t ∈ [k] (with the convention I(H_0) := dim L(F^ℓ, F^ℓ) = ℓ^2),
I(H_t) ≤ ((r^2 − r + 1)/r^2) · I(H_{t−1}).  (16)
Proof. Recall that by the property of an (ℓ, r) F -MSR subspace family, the maps Φ t,j , j ∈ {0, 1, . . . , r − 1}, leave H 1 , . . . , H t−1 invariant. Using this it follows that
I(H_{t−1}, H_t) = I(H_{t−1}, Φ_{t,i}(H_t) → Φ_{t,j}(H_t)) for each i, j ∈ {0, 1, . . . , r − 1}, since we have an isomorphism F(H_{t−1}, H_t) → F(H_{t−1}, Φ_{t,i}(H_t) → Φ_{t,j}(H_t)) given by ψ ↦ Φ_{t,j} ∘ ψ ∘ Φ_{t,i}^{−1}. Thus we have
r^2 · I(H_{t−1}, H_t) = ∑_{i=0}^{r−1} ∑_{j=0}^{r−1} I(H_{t−1}, Φ_{t,i}(H_t) → Φ_{t,j}(H_t)).  (17)
Notice that the inner sum is the same as the left hand side in Corollary 6. Thus we are able to apply Corollary 6 on Equation (17) to find that
∑_{i=0}^{r−1} ∑_{j=0}^{r−1} I(H_{t−1}, Φ_{t,i}(H_t) → Φ_{t,j}(H_t)) ≤ ∑_{i=0}^{r−1} [(r − 1) I(H_{t−1}, Φ_{t,i}(H_t) → 0) + I(H_{t−1})] = r · I(H_{t−1}) + (r − 1) ∑_{i=0}^{r−1} I(H_{t−1}, Φ_{t,i}(H_t) → 0).  (18)
Now we observe that the only linear transformation of F^ℓ that maps Φ_{t,i}(H_t) → 0 for all i ∈ {0, 1, . . . , r − 1} simultaneously is the identically 0 map. This is because ∑_{j=0}^{r−1} Φ_{t,j}(H_t) = F^ℓ from Equation (6). Thus we are in a situation where Lemma 4 applies, and we have
r · I(H_{t−1}) + (r − 1) ∑_{i=0}^{r−1} I(H_{t−1}, Φ_{t,i}(H_t) → 0) ≤ r · I(H_{t−1}) + (r − 1) · (r − 1) I(H_{t−1}) = (r^2 − r + 1) I(H_{t−1}).  (19)
Combining Equations (17), (18), and (19), we conclude Equation (16) as desired.
We are now ready to finish off the proof of our claimed upper bound on the size k of an (ℓ, r) F -MSR family.
Proof of Theorem 3. Since the identity map belongs to F(H_1, . . . , H_k), by applying Lemma 7 inductively on H_1, H_2, . . . , H_k, we obtain the inequality
1 ≤ I(H_1, . . . , H_k) ≤ ((r^2 − r + 1)/r^2)^k · ℓ^2,
from which we find that
k ≤ (2 ln ℓ) / ln(r^2/(r^2 − r + 1)) ≤ (2 ln ℓ) / ((r − 1)/r^2) = (2r^2/(r − 1)) ln ℓ,
where the second inequality follows because ln(1 + x) ≥ x/(1 + x) for all x > −1. We thus have the claimed upper bound.
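The short computation below (our own numeric illustration; the sample values of r and ℓ are assumptions) plugs numbers into this k ≤ (2r^2/(r − 1)) ln ℓ bound, and into the equivalent form ℓ ≥ exp(k(r − 1)/(2r^2)).

# Numeric illustration (ours) of the Theorem 3 bound and its inverse form.
import math

for r, ell in [(2, 2 ** 20), (3, 2 ** 20), (4, 2 ** 30)]:
    k_max = 2 * r * r / (r - 1) * math.log(ell)
    print(f"r={r}, ell=2^{int(math.log2(ell))}: at most {k_max:.0f} subspaces")
# Conversely, an MSR subspace family with k = 200 and r = 2 forces
print(f"ell >= {math.exp(200 * (2 - 1) / (2 * 2 ** 2)):.2e}")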
A Proof of Theorem 8
In this section, we state and prove an alternate construction of an MSR subspace family of size (r + 1) log r ℓ. The first construction of an (ℓ, r) F -MSR subspace family of size (r + 1) log r ℓ that also satisfied the MDS property was shown in [24] for fields of size more than k r−1 ℓ/r elements. Without the MDS property, the field size needed to be more than r elements to show that the construction satisfied the node repair property.
Our construction uses subspaces that are identical to the ones in [24], but we choose different linear maps that require only two distinct eigenvalues. As a result, our construction works over all fields with more than two elements. It remains a very interesting question whether the MDS property can be additionally incorporated into our construction to yield MSR codes with sub-packetization r^{k/(r+1)} over smaller fields.
Theorem 8. For |F| > 2 and r ≥ 2, there exists an (ℓ = r^m, r)_F-MSR subspace family of (r + 1)m = (r + 1) log_r(ℓ) subspaces.
In the rest of the section, we will prove the theorem above.
To give a general view of our construction, we first shift our view of the ambient space F ℓ = F r m to (F r ) ⊗m , vectors that consist of m tensored vectors in F r . We then consider a collection of vectors T := {v 1 , v 2 , . . . , v r , v r+1 }, situated in F r , such that any r of them form a basis in F r . The subspace A k,i will be all vectors in (F r ) ⊗m whose k'th position in the m tensored vectors is the vector v i .
The r − 1 associated linear maps Φ_{(k,i),1}, . . . , Φ_{(k,i),r−1} of the subspace A_{k,i} will simply focus on transforming the k'th position of each vector while retaining all remaining positions. Specifically, on the k'th position, each map will scale one vector in T \ {v_i}. The linear map Φ_{(k,i),t} will scale v_{i+t} by a factor λ ≠ 1 while all other vectors in T \ {v_i} will be mapped identically, where the indices are taken modulo r + 1. That way, everything in T \ {v_i} will stay almost the same, while v_i along with the r − 1 images of v_i will form a basis for F^r in the k'th position.
Proof. Let ℓ = r m , and let V = (F r ) ⊗m ≃ F ℓ be the ambient space. Consider a set of vectors {v 1 , v 2 , . . . , v r , v r+1 } ⊂ F r for which the first r form a basis in F r and satisfy the equation
v 1 + v 2 + . . . + v r + v r+1 = 0
For k ∈ [m] and i ∈ [r + 1], we define our (r + 1)m subspaces to be
A k,i := span(v i 1 ⊗ . . . ⊗ v im | i j ∈ [r + 1], i k = i)
which is a subspace of V. Observe that while the k'th tensor position is fixed to v_i for any vector in A_{k,i}, the remaining m − 1 positions are free to range over the vectors v_1, . . . , v_{r+1}, which span all of F^r. Through this observation, we see that dim(A_{k,i}) = r^{m−1} = ℓ/r.
To properly define the associated linear maps of the subspace family, it suffices to show their mapping for the basis
S i := {v i 1 ⊗ . . . ⊗ v im | i j ∈ [r + 1] \ {i}} of V .
Since |F| > 2, we can fix a constant λ ∈ F with λ ∉ {0, 1}, which we will use as an eigenvalue across all (r − 1)(r + 1)m linear maps. For each t ∈ [r − 1], the linear map Φ_{(k,i),t} will scale all vectors in S_i whose k'th position is v_{i+t} by a factor λ, and map all remaining vectors in S_i identically, where indices are taken modulo r + 1. Namely, for
i_k = i + t,
Φ_{(k,i),t} : v_{i_1} ⊗ · · · ⊗ v_{i_k} ⊗ · · · ⊗ v_{i_m} ↦ v_{i_1} ⊗ · · · ⊗ (λ v_{i_k}) ⊗ · · · ⊗ v_{i_m},
and for i_k ∈ [r + 1] \ {i + t, i},
Φ_{(k,i),t} : v_{i_1} ⊗ · · · ⊗ v_{i_k} ⊗ · · · ⊗ v_{i_m} ↦ v_{i_1} ⊗ · · · ⊗ v_{i_k} ⊗ · · · ⊗ v_{i_m}.
Observe that all the vectors in the basis S i are scaled by either 1 or λ, which means that the image Φ (k,i),t (S i ) is also a basis for V . This tells us that Φ (k,i),t is an invertible linear map. It now remains to show Properties 6 and 7 hold for our given subspaces and linear maps.
To show Property (6), we can use the relation v_1 + v_2 + · · · + v_{r+1} = 0 to rewrite v_i as v_i = −∑_{j∈[r+1]\{i}} v_j. This shows us that when the k'th position of a vector is v_i, then Φ_{(k,i),t} will map it as
Φ_{(k,i),t} : v_{i_1} ⊗ · · · ⊗ v_i ⊗ · · · ⊗ v_{i_m} ↦ v_{i_1} ⊗ · · · ⊗ (v_i − (λ − 1) v_{i+t}) ⊗ · · · ⊗ v_{i_m}.
Since λ ≠ 1, the set {v_i, v_i − (λ − 1)v_{i+1}, . . . , v_i − (λ − 1)v_{i+r−1}} forms a basis for F^r. Thus for the vector v = v_{i_1} ⊗ · · · ⊗ v_{i_{k−1}} ⊗ v_i ⊗ v_{i_{k+1}} ⊗ · · · ⊗ v_{i_m}, the vectors {v, Φ_{(k,i),1}(v), . . . , Φ_{(k,i),r−1}(v)} span all of F^r in the k'th position. Because we are free to choose any vector in all remaining positions, such vectors v together with their images span all of V. That is, we find that A_{k,i} ⊕ ⊕_{t=1}^{r−1} Φ_{(k,i),t}(A_{k,i}) = F^ℓ, which shows Property (6).
To show (7), we start by breaking the subspace A k ′ ,i ′ into two possibilities:
1. For the case when k ′ = k, the subspace A k ′ ,i ′ remains invariant under each Φ (k,i),t as they only linearly transform the k'th position while retaining all other positions.
2. For the case when k′ = k and i′ ≠ i, the subspace A_{k,i′} is an eigenspace for Φ_{(k,i),t}. Namely, when i′ ≠ i + t, A_{k,i′} is an eigenspace of eigenvalue 1. When i′ = i + t, the eigenvalue is instead λ.
This shows that (7) also holds.
B Proof of the Cutset bound
Proof. Consider an (n, k, ℓ)-MDS vector code that stores a file M of size kℓ in storage nodes s_1, s_2, . . . , s_n. The MDS vector code will repair a storage node s_h by making every other storage node s_i communicate β_{i,h} bits to s_h. From the MDS property, we know that any collection C ⊆ [n] \ {h} of k − 1 nodes {s_i}_{i∈C}, together with s_h, is able to reconstruct our original file M.
Thus the collective information of these k storage nodes is at least |M| = kℓ, implying the inequality
∑_{i∈C} |s_i| + ∑_{i∈[n]\(C∪{h})} β_{i,h} ≥ kℓ.  (20)
Since every storage node stores ℓ bits (|s_i| = ℓ), (20) reduces down to
∑_{i∈[n]\(C∪{h})} β_{i,h} ≥ ℓ.  (21)
Hence (21) implies that any n − k helper storage nodes collectively communicate at least ℓ bits. Summing (21) over all possible collections of n − k helper storage nodes (each β_{i,h} appears in the same number of collections), we find that
∑_{i∈[n]\{h}} β_{i,h} ≥ (n − 1)/(n − k) · ℓ.  (22)
This is the claimed cutset bound. Moreover, to achieve equality for (22), equality must be achieved for (21) over all collections of n − k helper storage nodes. That is possible only when β_{i,h} = ℓ/(n − k) for all i ∈ [n] \ {h}. Hence, under optimal repair bandwidth, the total information communicated is ∑_{i∈[n]\{h}} β_{i,h} = (n − 1)ℓ/(n − k), and this is achieved only when every helper storage node communicates exactly ℓ/(n − k) bits to storage node s_h. | 6,904
1901.05112 | 2910298035 | An @math -vector MDS code is a @math -linear subspace of @math (for some field @math ) of dimension @math , such that any @math (vector) symbols of the codeword suffice to determine the remaining @math (vector) symbols. The length @math of each codeword symbol is called the sub-packetization of the code. Such a code is called minimum storage regenerating (MSR), if any single symbol of a codeword can be recovered by downloading @math field elements (which is known to be the least possible) from each of the other symbols. MSR codes are attractive for use in distributed storage systems, and by now a variety of ingenious constructions of MSR codes are available. However, they all suffer from exponentially large sub-packetization @math . Our main result is an almost tight lower bound showing that for an MSR code, one must have @math . Previously, a lower bound of @math , and a tight lower bound for a restricted class of "optimal access" MSR codes, were known. Our work settles a central open question concerning MSR codes that has received much attention. Further our proof is really short, hinging on one key definition that is somewhat inspired by Galois theory. | In summary, while there are several constructions of high rate MSR codes, they all incur large sub-packetization, which is undesirable as briefly explained earlier. This has been partially explained by lower bounds on @math in a few previous works. For the special case of optimal-access MSR codes, a lower bound of @math was shown in @cite_0 , and this was improved (when all-node repair is desired) to @math recently @cite_8 . Together with the above-mentioned constructions, we thus have matching upper and lower bounds on @math for the optimal-access case. This help-by-transfer setting is primarily combinatorial in nature, which is exploited heavily in these lower bounds. | {
"abstract": [
"Maximum distance separable (MDS) codes are widely used in storage systems to protect against disk (node) failures. A node is said to have capacity l over some field F, if it can store that amount of symbols of the field. An (n, k, l) MDS code uses n nodes of capacity l to store k information nodes. The MDS property guarantees the resiliency to any n-k node failures. An optimal bandwidth (respectively, optimal access) MDS code communicates (respectively, accesses) the minimum amount of data during the repair process of a single failed node. It was shown that this amount equals a fraction of 1 (n - k) of data stored in each node. In previous optimal bandwidth constructions, l scaled polynomially with k in codes when the asymptotic rate is less than 1. Moreover, in constructions with a constant number of parities, i.e., when the rate approaches 1, l is scaled exponentially with k. In this paper, we focus on the case of linear codes with linear repair operations and constant number of parities n - k = r, and ask the following question: given the capacity of a node l what is the largest number of information disks k in an optimal bandwidth (respectively, access) (k + r, k, l) MDS code? We give an upper bound for the general case, and two tight bounds in the special cases of two important families of codes. The first is a family of codes with optimal update property, and the second is a family with optimal access property. Moreover, the bounds show that in some cases optimal-bandwidth codes have larger k than optimal-access codes, and therefore these two measures are not equivalent.",
"The first focus of the present paper, is on lower bounds on the sub-packetization level @math of an MSR code that is capable of carrying out repair in help-by-transfer fashion (also called optimal-access property). We prove here a lower bound on @math which is shown to be tight for the case @math by comparing with recent code constructions in the literature. We also extend our results to an @math MDS code over the vector alphabet. Our objective even here, is on lower bounds on the sub-packetization level @math of an MDS code that can carry out repair of any node in a subset of @math nodes, @math where each node is repaired (linear repair) by help-by-transfer with minimum repair bandwidth. We prove a lower bound on @math for the case of @math . This bound holds for any @math and is shown to be tight, again by comparing with recent code constructions in the literature. Also provided, are bounds for the case @math Q @math $ . It turns out interestingly, that such a code must necessarily have a coupled-layer structure, similar to that of the Ye-Barg code."
],
"cite_N": [
"@cite_0",
"@cite_8"
],
"mid": [
"2207060061",
"2963333438"
]
} | An Exponential Lower Bound on the Sub-Packetization of Minimum Storage Regenerating Codes * | Traditional Maximum Distance Separable (MDS) codes such as Reed-Solomon codes provide the optimal trade-off between redundancy and number of worst-case erasures tolerated. When encoding k symbols of data into an n symbol codeword by an (n, k)-MDS code, the data can be recovered from any set of k out of n codeword symbols, which is clearly the best possible. MDS codes are thus a naturally appealing choice to minimize storage overhead in distributed storage systems (DSS). One can encode data, broken into k pieces, by an (n, k)-MDS code, and distribute the n codeword symbols on n different storage nodes, each holding the symbol corresponding to one codeword position. In the sequel, we use the terms storage node and codeword symbol interchangeably.
A rather common scenario faced by modern large scale DSS is the failure or temporary unavailability of storage nodes. It is of great importance to promptly respond to such failures, by efficient repair/regeneration of the failed node using the content stored in some of the other nodes (which are called "helper" nodes as they assist in the repair). This requirement has spurred a set of fundamentally new and exciting challenges concerning codes for recovery from erasures, with the goal of balancing worst-case fault tolerance from many erasures, with very efficient schemes to recover from the much more common scenario of single (or a few) erasures.
There are two measures of repair efficiency that have received a significant amount of attention in the last decade. One concerns locality, where we would like to repair a node locally based on the contents of a small number of other storage nodes. Such locality necessarily compromises the MDS property, and a rich body of work on locally repairable codes (LRCs) studies the best trade-offs possible in this model and constructions achieving those [8,14,20]. The other line of work, which is the subject of this paper, focuses on optimizing the amount of data downloaded from the other nodes. This model allows the helper node to respond with a fraction of its contents. The efficiency measure is the repair bandwidth, which is the total amount of data downloaded from all the helper nodes. Codes in this model are called regenerating codes, and were systematically introduced in the seminal work of Dimakis et al. [6], and have since witnessed an explosive amount of research.
Rather surprisingly, even for some MDS codes, by contacting more helper nodes but downloading fewer symbols from each, one can do much better than the "usual" scheme, which would download the contents of k nodes in full. In general an entire spectrum of trade-offs is possible between storage overhead and repair bandwidth. This includes minimum bandwidth regenerating (MBR) codes with the minimum repair bandwidth of ℓ [16]. At the other end of the spectrum, we have minimum storage regenerating (MSR) codes (defined formally below) which retain the MDS property and thus have optimal redundancy. This work focuses on MSR codes.
Example. We quickly recap the classic example of the EVENODD code [3,7] to illustrate regeneration of a lost symbol in an MDS code with non-trivial bandwidth. This is a (4, 2) MDS code with 4 storage nodes, each storing a vector of two symbols over the binary field. We denote by P_1, P_2 the two parity nodes.
Node:       S_1    S_2    P_1         P_2
Symbol 1:   a_1    b_1    a_1 + b_1   a_2 + b_1
Symbol 2:   a_2    b_2    a_2 + b_2   a_1 + a_2 + b_2
The naive scheme to repair a node would contact any two of the remaining three nodes, and download both bits from each of them, for a total repair bandwidth of 4 bits. However, it turns out that one can get away with downloading just one bit from each of the three other nodes, for a repair bandwidth of 3 bits! If we were to repair the node S 1 , the remaining nodes (S 2 , P 1 , P 2 ) would send (b 1 , a 1 + b 1 , a 2 + b 1 ), respectively. If we were to repair the node S 2 , the remaining nodes (S 1 , P 1 , P 2 ) would send (a 2 , a 2 + b 2 , a 2 + b 1 ), respectively. If we were to repair the node P 1 , the remaining nodes (S 1 , S 2 , P 2 ) would send (a 1 , b 1 , a 1 + a 2 + b 2 ), respectively. If we were to repair the node P 2 , the remaining nodes (S 1 , S 2 , P 1 ) would send (a 2 , b 1 , (a 1 + b 1 ) + (a 2 + b 2 )), respectively. Note that in the last case, the helper node P 1 sends a linear combination of its symbols-this is in general a powerful ability that we allow in MSR codes.
Vector codes and sub-packetization. The above example shows that when the code is an (n, k) vector MDS code, where each codeword symbol itself is a vector, say in F^ℓ for some field F, then one can hope to achieve repair bandwidth smaller than the naive kℓ. The length of the vector ℓ stored at each node is called the sub-packetization (since this is the granularity into which a single codeword symbol needs to be divided).
MSR codes.
A natural question is how small a repair bandwidth one can achieve with MDS codes. The so-called cutset bound [6] dictates that one must download at least (n − 1)ℓ/(n − k) symbols of F from the remaining nodes to recover any single node. Further, in order to attain this optimal repair bandwidth bound, each of the (n − 1) nodes must respond with ℓ/(n − k) field elements. Vector MDS codes which admit repair schemes meeting the cutset bound (for repair of every node) are called minimum storage regenerating (MSR) codes (for the formal description, see Definition 1). MSR codes, and specifically their sub-packetization, are the focus of this paper.
Large sub-packetization: problematic and inherent. While there are many constructions of MSR codes by now, they all have large sub-packetization, which is at least r^{k/r}. For the setting of most interest, when we incur a small redundancy r in exchange for repair of information, this is very large, and in particular exp(Ω(k)) when r = O(1). A small sub-packetization is important for a number of reasons, as explained in some detail in the introduction of [17]. A large subpacketization limits the number of storage nodes (for example if ℓ ≥ exp(Ω(n)), then n = O(log ℓ), where ℓ is the storage capacity of each node), and in general leads to a reduced design space in terms of various systems parameters. A larger sub-packetization also makes management of meta-data, such as description of the code and the repair mechanisms for different nodes, more difficult. For a given storage capacity, a smaller sub-packetization allows one to distribute codewords corresponding to independently coded files among multiple nodes, which allows for distributing the load of providing information for the repair of a failed node among a larger number of nodes.
It has been known that somewhat large sub-packetization is inherent for MSR codes (we will describe the relevant prior results in the next section). In this work, we improve this lower bound to exponential, showing that unfortunately the exponential sub-packetization of known constructions is inherent. Our main result is the following. Theorem 1. Suppose an (n, k)-vector MDS code with redundancy r = n − k ≥ 2 is minimum storage regenerating (MSR). Then its sub-packetization ℓ must satisfy
ℓ ≥ (r^2 / (r^2 − r + 1))^{(k−1)/2} ≥ e^{(k−1)(r−1)/(2r^2)}.
Our lower bound almost matches the sub-packetization of r O(k/r) achieved by the best known constructions. Improving the base of the exponent in our lower bound to r will make it even closer to the upper bounds. Though when r is small, which is the primary setting of interest in codes for distributed storage, this difference is not that substantial. We remark that our theorem leaves out the case when r = 1, which is known to have a sub-packetization of ℓ = 1 [9].
A few words about our proof. Previous work [22] has shown that an (n, k) MSR code with sub-packetization ℓ implies a family of (k − 1) ℓ/r-dimensional subspaces H_i of F^ℓ each of which has an associated collection of (r − 1) linear maps obeying some strong properties. For instance, in the case r = 2, there is an invertible map φ_i associated with H_i for each i which leaves all subspaces H_j, j ≠ i, invariant, and maps H_i itself to a disjoint space (i.e., φ_i(H_i) ∩ H_i = {0}). The task of showing a lower bound on ℓ then reduces to the linear-algebraic challenge of showing an upper bound on the size of such a family of subspaces and linear transformations, which we call an MSR subspace family (Definition 2). The authors of [10] showed an upper bound O(r log^2 ℓ) on the size of MSR subspace families via a nifty partitioning and linear independence argument.
We follow a different approach by showing that the number of linear maps that fix all subspaces in an MSR family decreases sharply as the number of subspaces increases. Specifically, we show that the dimension of the linear space of such linear maps decreases exponentially in the number of subspaces in the MSR family. This enables us to prove an O(r log ℓ) upper bound. This bound is asymptotically tight (up to an O(log r) factor), as there is a construction of an MSR subspace family of size (r + 1) log_r ℓ [24]. We also present an alternate construction in Appendix A, which works for all fields with more than 2 elements, compared to the large field size (of at least ≈ r^r ℓ) required in [24].
We now proceed to situate our work in the context of prior work, both constructions and lower bounds, for MSR codes.
Preliminaries
We will now define MSR codes more formally. We begin by defining vector codes. Let F be a field, and n, ℓ be positive integers. For a positive integer b, we denote [b] = {1, 2, . . . , b}. A vector code C of block length n and sub-packetization ℓ is an F-linear subspace of (F ℓ ) n . We can express a codeword of C as c = (c 1 , c 2 , . . . , c n ), where for i ∈ [n], the block c i = (c i,1 , . . . , c i,ℓ ) ∈ F ℓ denotes the length ℓ vector corresponding to the i'th code symbol c i .
Let k be an integer, with 1 ≤ k ≤ n. If the dimension of C, as an F-vector space, is kℓ, we say that C is an (n, k, ℓ)_F-vector code. The codewords of an (n, k, ℓ)_F-vector code are in one-to-one correspondence with vectors in (F^ℓ)^k, consisting of k blocks of ℓ field elements each.
Such a code is said to be Maximum Distance Separable (MDS), and called an (n, k, ℓ)-MDS code (over the field F), if every subset of k code symbols c_{i_1}, c_{i_2}, . . . , c_{i_k} is an information set for the code, i.e., knowing these symbols determines the remaining n − k code symbols and thus the full codeword. An MDS code thus offers the optimal erasure correction property: the information can be recovered from any set of k code symbols, thus tolerating the maximum possible number n − k of worst-case erasures.
An (n, k, ℓ)-MDS code can be used in distributed storage systems as follows. Data viewed as kℓ symbols over F is encoded using the code resulting in n vectors in F ℓ , which are stored in n storage nodes. Downloading the full contents from any subset of these k nodes (a total of kℓ symbols from F) suffices to reconstruct the original data in entirety. Motivated by the challenge of efficient regeneration of a failed storage node, which is a fairly typical occurrence in large scale distributed storage systems, the repair problem aims to recover any single code symbol c i by downloading fewer than kℓ field elements. This is impossible if one only downloads contents from k nodes, but becomes feasible if one is allowed to contact h > k helper nodes and receive fewer than ℓ field elements from each.
Here we focus our attention to only repairing the first k code symbols, which we view as the information symbols. This is called "systematic node repair" as opposed to the more general "all node repair" where the goal is to repair all n codeword symbols. We will also only consider the case h = n − 1, when all the remaining nodes are available as helper nodes. Since our focus is on a lower bound on the sub-packetization ℓ, this only makes our result stronger, and keeps the description somewhat simpler. We note that the currently best known constructions allow for all-node repair with optimal bandwidth from any subset of h helper nodes.
Suppose we want to repair the m'th code symbol for some m ∈ [k]. We download from the i'th code symbol, i ≠ m, a function h_{i,m}(c_i) of its contents, where h_{i,m} : F^ℓ → F^{β_{i,m}} is the repair function. Since C is F-linear, it is natural for h_{i,m} to exploit this linearity. Therefore, throughout this paper, we shall assume linear repair of the failed node. That is, h_{i,m} is an F-linear function. Thus, we download from each node certain linear combinations of the ℓ symbols stored at that node. The total repair bandwidth to recover c_m is defined to be ∑_{i≠m} β_{i,m}. By the cutset bound for repair of MDS codes [6], this quantity is lower bounded by (n − 1)ℓ/r, where r = n − k is the redundancy of the code. Further, equality can be attained only if β_{i,m} = ℓ/r for all i. That is, we download ℓ/r field elements from each of the remaining nodes. MDS codes achieving such an optimal repair bandwidth are called Minimum Storage Regenerating (MSR) codes, as precisely defined below. Let C ⊆ (F^ℓ)^n be an (n, k, ℓ)-MSR code, with redundancy r = n − k. The MDS property implies that any subset of k codeword symbols determines the whole codeword. We view the first k symbols as the "systematic" ones, with r parity check symbols computed from them, where we remind that when we say code symbol we mean a vector in F^ℓ. So we can assume that there are invertible matrices C_{i,j} ∈ F^{ℓ×ℓ} for i ∈ [r] and j ∈ [k] such that for c = (c_1, c_2, . . . , c_n) ∈ C, we have
$c_{k+i} = \sum_{j=1}^{k} C_{i,j}\, c_j .$
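Before turning to the repair equations, a quick numerical illustration of these bandwidth figures may help; the parameters below (n = 14, k = 10, ℓ = 256) are hypothetical, chosen only for this sketch and not taken from the paper.

```python
# Hypothetical parameters, chosen only for illustration (not from the paper).
n, k, ell = 14, 10, 256      # code length, dimension, sub-packetization
r = n - k                    # redundancy

naive = k * ell              # naive repair: download k full nodes
cutset = (n - 1) * ell // r  # cutset bound: (n - 1) * ell / r field elements
per_helper = ell // r        # an MSR code downloads ell / r from each helper

print(f"naive repair downloads      {naive} field elements")      # 2560
print(f"cutset bound (MSR repair)   {cutset} field elements")     # 832
print(f"downloaded from each helper {per_helper} field elements") # 64
```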
Suppose we want to repair a systematic node c m for m ∈ [k] with optimal repair bandwidth, by receiving from each of the remaining n − 1 nodes, ℓ/r F-linear combinations of the information they stored. This means that there are repair matrices S 1,m , . . . , S r,m ∈ F ℓ/r×ℓ , such that parity node k + i sends the linear combination
$S_{i,m}\, c_{k+i} = S_{i,m} \sum_{j=1}^{k} C_{i,j}\, c_j \qquad (2)$
Therefore, the information about c m that is sent to it by c k+i is S i,m C i,m c m . Since the k systematic nodes are independent of each other, then the only way to recover c m is by taking a linear combination of S i,m C i,m c m for i ∈ [r] such that the linear combination equals c m for any c m ∈ F ℓ . Therefore, to ensure full regeneration of c m , we must satisfy
$\operatorname{rank}\begin{pmatrix} S_{1,m} C_{1,m} \\ S_{2,m} C_{2,m} \\ \vdots \\ S_{r,m} C_{r,m} \end{pmatrix} = \ell$
Since each S i,m C i,m has ℓ/r rows, the above happens if and only if
$\sum_{i=1}^{r} R(S_{i,m} C_{i,m}) = F^{\ell} \qquad (3)$
where R(M ) denotes the row-span of a matrix M .
Cancelling interference of other systematic symbols
Now, for every other systematic node m′ ∈ [k] \ {m}, the parity nodes send the following information, namely linear combinations of c_{m′}:
$\begin{pmatrix} S_{1,m} C_{1,m'} \\ S_{2,m} C_{2,m'} \\ \vdots \\ S_{r,m} C_{r,m'} \end{pmatrix} c_{m'} \qquad (4)$
In order to cancel this from the linear combinations (2) received from the parity nodes, the systematic node m ′ has to send the linear combinations (4) about its contents. To achieve optimal repair bandwidth of at most ℓ/r symbols from every node, this imposes the requirement
$\operatorname{rank}\begin{pmatrix} S_{1,m} C_{1,m'} \\ S_{2,m} C_{2,m'} \\ \vdots \\ S_{r,m} C_{r,m'} \end{pmatrix} \le \frac{\ell}{r}$
However since C i,m ′ is invertible, and S i,m has full row rank, rank(S i,m C i,m ′ ) = ℓ/r for all i ∈ [r]. Combining this fact with the rank inequality above, this implies
$R(S_{1,m} C_{1,m'}) = \cdots = R(S_{r,m} C_{r,m'}) \qquad (5)$
for every m ≠ m′ ∈ [k], where R(M) is the row-span of a matrix M.
Constant repair matrices and casting the problem in terms of subspaces
We now make an important simplification, which allows us to assume that the matrices S i,m above depend only on the node m being repaired, but not on the helping parity node i. That is, S m = S i,m for all i ∈ [r]. We call repair with this restriction as possessing constant repair matrices. It turns out that one can impose this restriction with essentially no loss in parameters -by Theorem 2 of [22], if there is a (n, k, ℓ)-MSR code then there is also a (n − 1, k − 1, ℓ)-MSR code with constant repair matrices.
This allows us to cast the requirements (3) and (5) in terms of a nice property about subspaces and associated invertible maps, which we abstract below. This property was shown to be intimately tied to MSR codes in [24,22]. Definition 2 (MSR subspace family). For integers ℓ, r with r|ℓ and a field F, a collection of subspaces H 1 , . . . , H k of F ℓ of dimension ℓ/r each is said to be an (ℓ, r) F -MSR subspace family if there exist invertible linear maps Φ i,j on F ℓ , i ∈ {1, 2, . . . , k} and j ∈ {1, 2, . . . , r − 1} such that for every i ∈ [k], the following holds:
$H_i \oplus \bigoplus_{j=1}^{r-1} \Phi_{i,j}(H_i) = F^{\ell} \qquad (6)$
$\Phi_{i',j}(H_i) = H_i \quad \text{for every } j \in [r-1] \text{ and } i' \neq i \qquad (7)$
Now, we recall the argument that if we have an (n, k, ℓ)-MSR code with constant repair matrices, then that also yields a family of subspaces and maps with the above properties. Indeed, we can take H_m, m ∈ [k], to be R(S_m), and Φ_{m,j}, j ∈ [r − 1], to be the invertible linear transformation mapping x ∈ F^ℓ, viewed as a row vector, to $x\, C_{j+1,m}\, C_{1,m}^{-1}$. It is clear that Property (6) follows from (3), and Property (7) follows from (5). Together with the loss of one dimension in the transformation [22] to an MSR code with constant repair subspaces, we can conclude the following connection between MSR codes and the very structured set of subspaces and maps of Definition 2. Proposition 2. If there is an (n, k, ℓ)-MSR code with redundancy r = n − k, then there is an (ℓ, r)_F-MSR subspace family of size k − 1. For the reverse direction, the MSR subspace family can take care of the node repair, but one still needs to ensure the MDS property. This approach was taken in [24], based on a construction of an (ℓ, r)_F-MSR subspace family of size (r + 1) log_r ℓ. For completeness, we present another construction of an MSR subspace family in Appendix A. The subspaces in our construction are identical to [24] but we pick the linear maps differently, using just two distinct eigenvalues. As a result, our construction works over any field with more than two elements. In comparison, the approach in [24] used k^{r−1} ℓ/r distinct eigenvalues, and thus required a field that is bigger than this bound. It is an interesting question to see if the MDS property can be incorporated into our construction to give MSR codes with sub-packetization r^{k/(r+1)} over smaller fields.
Limitation of MSR subspace families
In this section, we state and prove the following strong upper bound on the size of an MSR family of subspaces, showing that the construction claimed in Theorem 8 is not too far from the best possible. Theorem 3. If H_1, H_2, . . . , H_k is an (ℓ, r)_F-MSR subspace family, then k ≤ (2r²/(r − 1)) ln ℓ. This upper bound together with Proposition 2 immediately implies our main result, Theorem 1. In the rest of the section, we prove the above theorem. Let H_1, H_2, . . . , H_k be the subspaces in an (ℓ, r)_F-MSR subspace family with associated invertible linear maps Φ_{i,j} where i ∈ [k] and j ∈ [r − 1]. Note that these linear maps are in some sense statements about the structure of the spaces H_1, H_2, . . . , H_k. They dictate the way the subspaces can interact with each other, thereby giving rigidity to the way they are structured.
The major insight and crux of the proof is the following definition on collections of subspaces. This definition is somewhat inspired by Galois Theory, in that we are looking at the space of linear maps on the vector space F ℓ that fix all the subspaces in question.
Definition 3.
In the vector space L(F ℓ , F ℓ ) of all linear maps from F ℓ to F ℓ , define the subspace
F(A 1 → B 1 , . . . , A s → B s ) := {ψ ∈ L(F ℓ , F ℓ ) | ψ(A i ) ⊆ B i ∀i ∈ {1, . . . , s}} for arbitrary subspaces A i , B i of F ℓ . Define the value I(A 1 → B 1 , . . . , A s → B s ) := dim(F(A 1 → B 1 , . . . , A s → B s ))
When A i = B i for each i, we adopt the shorthand notation F(A 1 , . . . , A s ) and I(A 1 , . . . , A s ) to denote the above quantities. We will also use the mixed notation F(A 1 , . . . , A s−1 , A s → B s ) to denote F(A 1 → A 1 , . . . , A s → B s ) and likewise for I(A 1 , . . . , A s−1 , A s → B s ).
Thus I(A_1, . . . , A_s) is the dimension of the space of linear maps that map each A_i within itself. We use the notation I() to suggest such an invariance. The key idea will be to cleverly exploit the invertible maps Φ_{i,j} associated with each H_i to argue that the dimension I(H_1, H_2, . . . , H_t) shrinks by a constant factor whenever we add in an H_{t+1} into the collection. Specifically, we will show that the dimension shrinks at least by a factor of (r² − r + 1)/r² for each newly added H_{t+1}. Because the identity map is always in F(H_1, H_2, . . . , H_k), the dimension I(H_1, H_2, . . . , H_k) is at least 1. As the ambient space of linear maps from F^ℓ to F^ℓ has dimension ℓ², this leads to an O(r log ℓ) upper bound on k. We begin with the following lemma.
Lemma 4. Let $U_1, U_2, \ldots, U_s \subseteq F^p$, $s \ge 2$, be arbitrary subspaces such that $\bigcap_{i=1}^{s} U_i = \{0\}$. Then the following inequality holds:
$\sum_{i=1}^{s} \dim(U_i) \le (s-1) \dim(U_1 + \cdots + U_s).$
Proof. We proceed by inducting on s. Indeed, when s = 2, we have from the Principle of Inclusion and Exclusion (PIE)
dim(U 1 ) + dim(U 2 ) = dim(U 1 + U 2 ) + dim(U 1 ∩ U 2 ) = dim(U 1 + U 2 )
And thus the base case holds. Now, if the inequality holds when s = p, then we have via the Principle of Inclusion and Exclusion
$\sum_{i=1}^{p+1} \dim(U_i) = \dim(U_1 + U_2) + \dim(U_1 \cap U_2) + \sum_{i=3}^{p+1} \dim(U_i) \qquad (8)$
By the induction hypothesis, we deduce that Equation (8) is at most
$\dim(U_1 + U_2) + (p-1)\, \dim\big((U_1 \cap U_2) + U_3 + \cdots + U_{p+1}\big) \qquad (9)$
And Equation (9) is at most
$p\, \dim(U_1 + U_2 + \cdots + U_{p+1}). \qquad (10)$
By combining Equations (8), (9), and (10), we deduce that the inequality also holds when s = p + 1. Since the base case s = 2 holds, we therefore conclude that the inequality holds for all integers s 2.
Next, we prove an identity for MSR subspace families that will come in handy. For the sake of brevity, we use the shorthand $\mathcal{H}_a := \{H_1, \ldots, H_a\}$, and we let $\Phi_{a,0}$ denote the identity map.
Lemma 5. For every t ∈ [k], i ∈ {0, 1, . . . , r − 1}, and s ∈ {0, 1, . . . , r − 1}, we have
$\sum_{j=0}^{s} I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)) \;\le\; s\, I(\mathcal{H}_{t-1}, H_t \to 0) + I\Big(\mathcal{H}_{t-1},\, \Phi_{t,i}(H_t) \to \bigoplus_{j=0}^{s} \Phi_{t,j}(H_t)\Big) \qquad (11)$
Proof. We proceed by inducting on s. The base case when s = 0 is clear as the right hand side simplifies to the left hand side. Now, if Equation (11) holds when s = p and p < r − 1, then we have, via the Principle of Inclusion and Exclusion (PIE) and Equation (6), the following chain of bounds on the quantity
$\sum_{j=0}^{p+1} I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)). \qquad (12)$
By the induction hypothesis, we deduce that Equation (12) is at most
$p\, I(\mathcal{H}_{t-1}, H_t \to 0) + I\Big(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \bigoplus_{j=0}^{p} \Phi_{t,j}(H_t)\Big) + I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,p+1}(H_t)) \qquad (13)$
By applying the Principle of Inclusion and Exclusion and Equation 6, we deduce that Equation (13) is at most
$p\, I(\mathcal{H}_{t-1}, H_t \to 0) + I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to 0) + I\Big(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \bigoplus_{j=0}^{p+1} \Phi_{t,j}(H_t)\Big) \qquad (14)$
And Equation (14) is equal to
$(p+1)\, I(\mathcal{H}_{t-1}, H_t \to 0) + I\Big(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \bigoplus_{j=0}^{p+1} \Phi_{t,j}(H_t)\Big) \qquad (15)$
And so combining Equations (12), (13), (14), and (15), we deduce that Equation (11) also holds when s = p + 1. Since the base case s = 0 holds, we therefore conclude that the inequality holds for all s ∈ {0, 1, . . . , r − 1}.
Following Lemma 5 and Equation (6), we deduce when s = r − 1 the following corollary.
Corollary 6. $\sum_{j=0}^{r-1} I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)) \;\le\; (r-1)\, I(\mathcal{H}_{t-1}, H_t \to 0) + I(\mathcal{H}_{t-1})$
We are now ready to establish the key iterative step, showing geometric decay of the dimension I(H_1, . . . , H_t) in t. Lemma 7. For every t ∈ [k],
$I(\mathcal{H}_{t-1}, H_t) \;\le\; \frac{r^2 - r + 1}{r^2}\, I(\mathcal{H}_{t-1}). \qquad (16)$
Proof. Recall that by the property of an (ℓ, r) F -MSR subspace family, the maps Φ t,j , j ∈ {0, 1, . . . , r − 1}, leave H 1 , . . . , H t−1 invariant. Using this it follows that
$I(\mathcal{H}_{t-1}, H_t) = I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t))$ for each i, j ∈ {0, 1, . . . , r − 1}, since we have an isomorphism $F(\mathcal{H}_{t-1}, H_t) \to F(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t))$ given by $\psi \mapsto \Phi_{t,j} \circ \psi \circ \Phi_{t,i}^{-1}$. Thus we have
$r^2 \cdot I(\mathcal{H}_{t-1}, H_t) = \sum_{i=0}^{r-1} \sum_{j=0}^{r-1} I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)). \qquad (17)$
Notice that the inner sum is the same as the left hand side in Corollary 6. Thus we are able to apply Corollary 6 on Equation (17) to find that
$\sum_{i=0}^{r-1}\sum_{j=0}^{r-1} I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)) \;\le\; \sum_{i=0}^{r-1} \big[(r-1)\, I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to 0) + I(\mathcal{H}_{t-1})\big] = r\, I(\mathcal{H}_{t-1}) + (r-1) \sum_{i=0}^{r-1} I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to 0). \qquad (18)$
Now we observe that the only linear transformation of F^ℓ that maps Φ_{t,i}(H_t) → 0 for all i ∈ {0, 1, . . . , r − 1} simultaneously is the identically 0 map. This is because $\sum_{j=0}^{r-1} \Phi_{t,j}(H_t) = F^{\ell}$ from Equation (6). Thus we are in a situation where Lemma 4 applies, and we have
$r\, I(\mathcal{H}_{t-1}) + (r-1) \sum_{i=0}^{r-1} I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to 0) \;\le\; r\, I(\mathcal{H}_{t-1}) + (r-1)\cdot(r-1)\, I(\mathcal{H}_{t-1}) = (r^2 - r + 1)\, I(\mathcal{H}_{t-1}) \qquad (19)$
Combining Equations (17), (18), and (19), we conclude Equation (16) as desired.
We are now ready to finish off the proof of our claimed upper bound on the size k of an (ℓ, r) F -MSR family.
Proof of Theorem 3. Since the identity map belongs to the space F(H_1, . . . , H_k), by applying Lemma 7 inductively on H_1, H_2, . . . , H_k, we obtain the inequality
$1 \;\le\; I(H_1, \ldots, H_k) \;\le\; \left(\frac{r^2 - r + 1}{r^2}\right)^{k} \ell^2,$
from which we find that
$k \;\le\; \frac{2 \ln \ell}{\ln\!\big(\frac{r^2}{r^2 - r + 1}\big)} \;\le\; \frac{2 \ln \ell}{\frac{r-1}{r^2}} \;=\; \frac{2 r^2}{r-1} \ln \ell,$
where the second inequality follows because $\ln(1+x) \ge \frac{x}{1+x}$ for all x > −1. We thus have the claimed upper bound.
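As a numerical sanity check of this argument (an illustration, not code from the paper), one can iterate the decay factor (r² − r + 1)/r² starting from the ambient dimension ℓ² and compare the largest admissible k with the closed-form bound just derived.

```python
import math

def max_k_by_decay(ell: int, r: int) -> int:
    """Largest t with ell^2 * ((r^2 - r + 1) / r^2)^t >= 1, i.e. the number of
    subspaces the geometric decay of I(H_1, ..., H_t) can accommodate."""
    decay = (r * r - r + 1) / (r * r)
    dim, t = float(ell) ** 2, 0
    while dim * decay >= 1.0:
        dim *= decay
        t += 1
    return t

# Compare with the closed form 2 ln(ell) / ln(r^2 / (r^2 - r + 1)).
for ell, r in [(2 ** 10, 2), (2 ** 20, 2), (3 ** 12, 3)]:
    closed_form = 2 * math.log(ell) / math.log(r * r / (r * r - r + 1))
    print(ell, r, max_k_by_decay(ell, r), round(closed_form, 1))
```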
A Proof of Theorem 8
In this section, we state and prove an alternate construction of an MSR subspace family of size (r + 1) log r ℓ. The first construction of an (ℓ, r) F -MSR subspace family of size (r + 1) log r ℓ that also satisfied the MDS property was shown in [24] for fields of size more than k r−1 ℓ/r elements. Without the MDS property, the field size needed to be more than r elements to show that the construction satisfied the node repair property.
Our construction uses subspaces that are identical to the ones in [24], but we choose different linear maps that required only two distinct eigenvalues. As a result, our construction works over all fields with more than two elements. It remains a very interesting question whether the MDS property can be additionally incorporated into our construction to yield MSR codes with sub-packetization r k/(r+1) over smaller fields.
Theorem 8. For |F| > 2 and r ≥ 2, there exists an (ℓ = r^m, r)_F-MSR subspace family of (r + 1)m = (r + 1) log_r(ℓ) subspaces.
In the rest of the section, we will prove the theorem above.
To give a general view of our construction, we first shift our view of the ambient space F ℓ = F r m to (F r ) ⊗m , vectors that consist of m tensored vectors in F r . We then consider a collection of vectors T := {v 1 , v 2 , . . . , v r , v r+1 }, situated in F r , such that any r of them form a basis in F r . The subspace A k,i will be all vectors in (F r ) ⊗m whose k'th position in the m tensored vectors is the vector v i .
The r − 1 associated linear maps Φ_{(k,i),1}, . . . , Φ_{(k,i),r−1} of the subspace A_{k,i} will simply focus on transforming the k'th position of each vector while retaining all remaining positions. Specifically, on the k'th position, each map acts by scaling the vectors in T \ {v_i}: the linear map Φ_{(k,i),t} will scale v_{i+t} by a factor λ ≠ 1 while all other vectors in T \ {v_i} will be identically mapped, where the indices are taken modulo r + 1. That way, everything in T \ {v_i} will stay almost the same while v_i along with the r − 1 images of v_i will form a basis for F^r in the k'th position.
Proof. Let ℓ = r m , and let V = (F r ) ⊗m ≃ F ℓ be the ambient space. Consider a set of vectors {v 1 , v 2 , . . . , v r , v r+1 } ⊂ F r for which the first r form a basis in F r and satisfy the equation
v 1 + v 2 + . . . + v r + v r+1 = 0
For k ∈ [m] and i ∈ [r + 1], we define our (r + 1)m subspaces to be
$A_{k,i} := \operatorname{span}\left(v_{i_1} \otimes \cdots \otimes v_{i_m} \mid i_j \in [r+1],\ i_k = i\right)$
which is a subspace of V . Observe that while the k'th position is fixated for any vector in A k,i , the remaining m − 1 positions are free to choose from any r vectors in F r . Through this observation, we see that dim(A k,i ) = r m−1 = ℓ/r.
To properly define the associated linear maps of the subspace family, it suffices to show their mapping for the basis
$S_i := \{\, v_{i_1} \otimes \cdots \otimes v_{i_m} \mid i_j \in [r+1] \setminus \{i\} \,\}$ of V.
Since |F| > 2, then we can fix a constant λ ∈ F with λ / ∈ {0, 1}, which we will use as an eigenvalue across all (r − 1)(r + 1)m linear maps. For each t ∈ [r − 1], the linear map Φ (k,i),t will scale all vectors in S i whose k'th position is v i+t by a factor λ and identically all remaining vectors in S i , where indices are taken modulo r + 1. Namely, for
$i_k = i + t$,
$v_{i_1} \otimes \cdots \otimes v_{i_k} \otimes \cdots \otimes v_{i_m} \;\xrightarrow{\;\Phi_{(k,i),t}\;}\; v_{i_1} \otimes \cdots \otimes (\lambda v_{i_k}) \otimes \cdots \otimes v_{i_m}$
And for $i_k \in [r+1] \setminus \{i+t,\, i\}$,
$v_{i_1} \otimes \cdots \otimes v_{i_k} \otimes \cdots \otimes v_{i_m} \;\xrightarrow{\;\Phi_{(k,i),t}\;}\; v_{i_1} \otimes \cdots \otimes v_{i_k} \otimes \cdots \otimes v_{i_m}$
Observe that all the vectors in the basis S i are scaled by either 1 or λ, which means that the image Φ (k,i),t (S i ) is also a basis for V . This tells us that Φ (k,i),t is an invertible linear map. It now remains to show Properties 6 and 7 hold for our given subspaces and linear maps.
To show Property (6), we can use the relation $v_1 + v_2 + \cdots + v_{r+1} = 0$ to rewrite v_i as $v_i = -\sum_{j \in [r+1] \setminus \{i\}} v_j$. This shows us that when the k'th position of a vector is v_i, then Φ_{(k,i),t} will map it as
$v_{i_1} \otimes \cdots \otimes v_i \otimes \cdots \otimes v_{i_m} \;\xrightarrow{\;\Phi_{(k,i),t}\;}\; v_{i_1} \otimes \cdots \otimes \big(v_i - (\lambda - 1) v_{i+t}\big) \otimes \cdots \otimes v_{i_m}$
Since λ ≠ 1, the set $\{v_i,\ v_i - (\lambda-1)v_{i+1},\ \ldots,\ v_i - (\lambda-1)v_{i+r-1}\}$ forms a basis for F^r. Thus for a vector $v = v_{i_1} \otimes \cdots \otimes v_{i_{k-1}} \otimes v_i \otimes v_{i_{k+1}} \otimes \cdots \otimes v_{i_m}$, the vectors $\{v, \Phi_{(k,i),1}(v), \ldots, \Phi_{(k,i),r-1}(v)\}$ span all of F^r in the k'th position. Because we are free to choose any vector in all remaining positions, such vectors taken together span all of V. That is, we find that
$A_{k,i} \oplus \bigoplus_{t=1}^{r-1} \Phi_{(k,i),t}(A_{k,i}) = F^{\ell},$
which shows Property (6).
To show (7), we start by breaking the subspace A k ′ ,i ′ into two possibilities:
1. For the case when k′ ≠ k, the subspace A_{k′,i′} remains invariant under each Φ_{(k,i),t}, as these maps only linearly transform the k'th position while retaining all other positions.
2. For the case when k′ = k and i′ ≠ i, the subspace A_{k,i′} is an eigenspace for Φ_{(k,i),t}. Namely, when i′ ≠ i + t, A_{k,i′} is the eigenspace of eigenvalue 1. When i′ = i + t, the eigenvalue is instead λ.
This shows that (7) also holds.
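For small parameters the construction can also be checked numerically. The sketch below is an illustration (not code accompanying the paper): it works over the rationals rather than a general field with more than two elements, fixes r = 2, m = 2, λ = 2, and verifies Properties (6) and (7) by rank computations.

```python
# Numerical sanity check of the subspace family above for r = 2, m = 2
# (so ell = 4), over the rationals/reals with lambda = 2.  Illustration only.
import itertools
import numpy as np

r, m = 2, 2
ell = r ** m
lam = 2.0

# v_1, ..., v_{r+1} in F^r: the first r are a basis, and they sum to zero.
vs = [np.eye(r)[:, j] for j in range(r)]
vs.append(-sum(vs))                          # v_{r+1} = -(v_1 + ... + v_r)

def tensor(idx):
    """The elementary tensor v_{idx[0]} (x) ... (x) v_{idx[m-1]} in R^ell."""
    out = np.array([1.0])
    for j in idx:
        out = np.kron(out, vs[j])
    return out

def rank_of(*mats):
    return np.linalg.matrix_rank(np.column_stack(mats))

def A(k, i):
    """Spanning set of A_{k,i}: all elementary tensors whose k-th factor is v_i."""
    cols = [tensor(idx) for idx in itertools.product(range(r + 1), repeat=m)
            if idx[k] == i]
    return np.column_stack(cols)

def Phi(k, i, t):
    """Phi_{(k,i),t}: on the basis S_i, scale by lam the tensors whose k-th
    factor is v_{i+t} (indices mod r+1) and fix the rest."""
    basis = list(itertools.product([j for j in range(r + 1) if j != i], repeat=m))
    P = np.column_stack([tensor(idx) for idx in basis])
    D = np.diag([lam if idx[k] == (i + t) % (r + 1) else 1.0 for idx in basis])
    return P @ D @ np.linalg.inv(P)

subspaces = list(itertools.product(range(m), range(r + 1)))
for k, i in subspaces:
    Aki = A(k, i)
    images = [Aki] + [Phi(k, i, t) @ Aki for t in range(1, r)]
    # Property (6): A_{k,i} plus its r-1 images spans F^ell, each of dim ell/r.
    assert rank_of(Aki) == ell // r and rank_of(*images) == ell
    # Property (7): every other A_{k',i'} is invariant under Phi_{(k,i),t}.
    for (k2, i2), t in itertools.product(subspaces, range(1, r)):
        if (k2, i2) != (k, i):
            A2 = A(k2, i2)
            assert rank_of(A2, Phi(k, i, t) @ A2) == rank_of(A2)
print("Properties (6) and (7) verified for all", (r + 1) * m, "subspaces")
```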
B Proof of the Cutset bound
Proof. Consider an (n, k, ℓ)-MDS vector code that stores a file M of size kℓ in storage nodes s 1 , s 2 , . . . , s n . The MDS vector code will repair a storage node s h by making every other storage node s i communicate β i,h bits to s h . From the MDS property, we know that any collection C ⊆ [n] \ {h} of k − 1 of nodes {s i } i∈C along with s h is able to construct our original file M.
Thus the collective information of these k storage nodes is at least |M| = kℓ, implying the inequality
$\sum_{i \in C} |s_i| + \sum_{i \in [n] \setminus (C \cup \{h\})} \beta_{i,h} \;\ge\; k\ell. \qquad (20)$
Since every storage node stores ℓ bits (|s_i| = ℓ), (20) reduces down to
$\sum_{i \in [n] \setminus (C \cup \{h\})} \beta_{i,h} \;\ge\; \ell. \qquad (21)$
Hence (21) implies that any n − k helper storage nodes collectively communicate at least ℓ bits. Summing (21) over all $\binom{n-1}{n-k}$ possible collections of n − k helper storage nodes, and noting that each helper node i ∈ [n] \ {h} appears in exactly $\binom{n-2}{n-k-1}$ of these collections, we find
$\sum_{i \in [n] \setminus \{h\}} \beta_{i,h} \;\ge\; \frac{n-1}{n-k} \cdot \ell. \qquad (22)$
This is the claimed cutset bound. Moreover, to achieve equality in (22), equality must be achieved in (21) for every collection of n − k helper storage nodes. That is possible only when β_{i,h} = ℓ/(n − k) for all i ∈ [n] \ {h}. Hence, under optimal repair bandwidth, the total information communicated is $\sum_{i \in [n] \setminus \{h\}} \beta_{i,h} = (n-1)\ell/(n-k)$, and this is achieved only when every helper storage node communicates exactly ℓ/(n − k) bits to storage node s_h. | 6,904 |
1901.05112 | 2910298035 | An @math -vector MDS code is a @math -linear subspace of @math (for some field @math ) of dimension @math , such that any @math (vector) symbols of the codeword suffice to determine the remaining @math (vector) symbols. The length @math of each codeword symbol is called the sub-packetization of the code. Such a code is called minimum storage regenerating (MSR), if any single symbol of a codeword can be recovered by downloading @math field elements (which is known to be the least possible) from each of the other symbols. MSR codes are attractive for use in distributed storage systems, and by now a variety of ingenious constructions of MSR codes are available. However, they all suffer from exponentially large sub-packetization @math . Our main result is an almost tight lower bound showing that for an MSR code, one must have @math . Previously, a lower bound of @math , and a tight lower bound for a restricted class of "optimal access" MSR codes, were known. Our work settles a central open question concerning MSR codes that has received much attention. Further our proof is really short, hinging on one key definition that is somewhat inspired by Galois theory. | However, lower bounds for general MSR codes, that allow helper nodes to transmit linear combinations of their comments, are harder to obtain. Such a lower bound must rule out a much broader range of possible repair schemes, and must work in an inherently linear-algebraic rather than combinatorial setting. Note that the simple example presented above also used linear combinations in repairing one of the nodes. An MSR code construction with sub-packetization @math , which beats the above lower bound for optimal-access codes and thus shows a separation between these models, was given in @cite_15 . | {
"abstract": [
"MDS codes are erasure-correcting codes that can correct the maximum number of erasures given the number of redundancy or parity symbols. If an MDS code has r parities and no more than r erasures occur, then by transmitting all the remaining data in the code one can recover the original information. However, it was shown that in order to recover a single symbol erasure, only a fraction of 1 r of the information needs to be transmitted. This fraction is called the repair bandwidth (fraction). Explicit code constructions were given in previous works. If we view each symbol in the code as a vector or a column, then the code forms a 2D array and such codes are especially widely used in storage systems. In this paper, we ask the following question: given the length of the column l, can we construct high-rate MDS array codes with optimal repair bandwidth of 1 r, whose code length is as long as possible? In this paper, we give code constructions such that the code length is (r + l)log r l."
],
"cite_N": [
"@cite_15"
],
"mid": [
"2158398747"
]
} | An Exponential Lower Bound on the Sub-Packetization of Minimum Storage Regenerating Codes * | Traditional Maximum Distance Separable (MDS) codes such as Reed-Solomon codes provide the optimal trade-off between redundancy and number of worst-case erasures tolerated. When encoding k symbols of data into an n symbol codeword by an (n, k)-MDS code, the data can be recovered from any set of k out of n codeword symbols, which is clearly the best possible. MDS codes are thus a a naturally appealing choice to minimize storage overhead in distributed storage systems (DSS). One can encode data, broken into k pieces, by an (n, k)-MDS code, and distribute the n codeword symbols on n different storage nodes, each holding the symbol corresponding to one codeword position. In the sequel, we use the terms storage node and codeword symbol interchangeably.
A rather common scenario faced by modern large scale DSS is the failure or temporary unavailability of storage nodes. It is of great importance to promptly respond to such failures, by efficient repair/regeneration of the failed node using the content stored in some of other nodes (which are called "helper" nodes as they assist in the repair). This requirement has spurred a set of fundamentally new and exciting challenges concerning codes for recovery from erasures, with the goal of balancing worst-case fault tolerance from many erasures, with very efficient schemes to recover from the much more common scenario of single (or a few) erasures.
There are two measures of repair efficiency that have received a significant amount of attention in the last decade. One concerns locality, where we would like to repair a node locally based on the contents of a small number of other storage nodes. Such locality necessarily compromises the MDS property, and a rich body of work on locally repairable codes (LRCs) studies the best trade-offs possible in this model and constructions achieving those [8,14,20]. The other line of work, which is the subject of this paper, focuses on optimizing the amount of data downloaded from the other nodes. This model allows the helper node to respond with a fraction of its contents. The efficiency measure is the repair bandwidth, which is the total amount of data downloaded from all the helper nodes. Codes in this model are called regenerating codes, and were systematically introduced in the seminal work of Dimakis et al. [6], and have since witnessed an explosive amount of research.
Rather surprisingly, even for some MDS codes, by contacting more helper nodes but downloading fewer symbols from each, one can do much better than the "usual" scheme, which would download the contents of k nodes in full. In general an entire spectrum of trade-offs is possible between storage overhead and repair bandwidth. This includes minimum bandwidth regenerating (MBR) codes with the minimum repair bandwidth of ℓ [16]. At the other end of the spectrum, we have minimum storage regenerating (MSR) codes (defined formally below), which retain the MDS property and thus have optimal redundancy. This work focuses on MSR codes.
Example. We quickly recap the classic example of the EVENODD code [3,7] to illustrate regeneration of a lost symbol in an MDS code with non-trivial bandwidth. This is a (4, 2) MDS code with 4 storage nodes, each storing a vector of two symbols over the binary field. We denote by P_1, P_2 the two parity nodes.
S_1 stores (a_1, a_2), S_2 stores (b_1, b_2), P_1 stores (a_1 + b_1, a_2 + b_2), and P_2 stores (a_2 + b_1, a_1 + a_2 + b_2).
The naive scheme to repair a node would contact any two of the remaining three nodes, and download both bits from each of them, for a total repair bandwidth of 4 bits. However, it turns out that one can get away with downloading just one bit from each of the three other nodes, for a repair bandwidth of 3 bits! If we were to repair the node S 1 , the remaining nodes (S 2 , P 1 , P 2 ) would send (b 1 , a 1 + b 1 , a 2 + b 1 ), respectively. If we were to repair the node S 2 , the remaining nodes (S 1 , P 1 , P 2 ) would send (a 2 , a 2 + b 2 , a 2 + b 1 ), respectively. If we were to repair the node P 1 , the remaining nodes (S 1 , S 2 , P 2 ) would send (a 1 , b 1 , a 1 + a 2 + b 2 ), respectively. If we were to repair the node P 2 , the remaining nodes (S 1 , S 2 , P 1 ) would send (a 2 , b 1 , (a 1 + b 1 ) + (a 2 + b 2 )), respectively. Note that in the last case, the helper node P 1 sends a linear combination of its symbols-this is in general a powerful ability that we allow in MSR codes.
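These repair schemes are easy to verify by brute force over GF(2); the short script below is an illustration (not part of the paper) that checks all sixteen possible data values.

```python
# Brute-force check over GF(2) of the four repair schemes described above.
from itertools import product

for a1, a2, b1, b2 in product([0, 1], repeat=4):
    S1, S2 = (a1, a2), (b1, b2)
    P1 = ((a1 + b1) % 2, (a2 + b2) % 2)
    P2 = ((a2 + b1) % 2, (a1 + a2 + b2) % 2)

    # Repair S1 from the downloaded bits b1, a1+b1, a2+b1.
    assert S1 == ((S2[0] + P1[0]) % 2, (S2[0] + P2[0]) % 2)
    # Repair S2 from a2, a2+b2, a2+b1.
    assert S2 == ((S1[1] + P2[0]) % 2, (S1[1] + P1[1]) % 2)
    # Repair P1 from a1, b1, a1+a2+b2.
    assert P1 == ((S1[0] + S2[0]) % 2, (S1[0] + P2[1]) % 2)
    # Repair P2 from a2, b1, (a1+b1)+(a2+b2).
    assert P2 == ((S1[1] + S2[0]) % 2, (S2[0] + P1[0] + P1[1]) % 2)
print("each node is repairable from 3 downloaded bits (vs. 4 for naive repair)")
```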
Vector codes and sub-packetization. The above example shows that when the code is an (n, k) vector MDS code, where each codeword symbol itself is a vector, say in F^ℓ for some field F, then one can hope to achieve repair bandwidth smaller than the naive kℓ. The length of the vector ℓ stored at each node is called the sub-packetization (since this is the granularity into which a single codeword symbol needs to be divided).
MSR codes.
A natural question is how small a repair bandwidth one can achieve with MDS codes. The so-called cutset bound [6] dictates that one must download at least (n − 1)ℓ/(n − k) symbols of F from the remaining nodes to recover any single node. Further, in order to attain this optimal repair bandwidth bound, each of the (n − 1) nodes must respond with ℓ/(n − k) field elements. Vector MDS codes which admit repair schemes meeting the cutset bound (for repair of every node) are called minimum storage regenerating (MSR) codes (for the formal description, see Definition 1). MSR codes, and specifically their sub-packetization, are the focus of this paper.
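To make the cutset figure concrete, here is a small illustrative calculation; the parameters n = 14, k = 10, ℓ = 256 are hypothetical, chosen only for this sketch and not taken from the paper.

```python
# Hypothetical parameters, chosen only for illustration (not from the paper).
n, k, ell = 14, 10, 256      # code length, dimension, sub-packetization
r = n - k                    # redundancy

naive = k * ell              # naive repair: download k full nodes
cutset = (n - 1) * ell // r  # cutset bound: (n - 1) * ell / r field elements
per_helper = ell // r        # an MSR code downloads ell / r from each helper

print(f"naive repair downloads      {naive} field elements")      # 2560
print(f"cutset bound (MSR repair)   {cutset} field elements")     # 832
print(f"downloaded from each helper {per_helper} field elements") # 64
```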
Large sub-packetization: problematic and inherent. While there are many constructions of MSR codes by now, they all have large sub-packetization, which is at least r k/r . For the setting of most interest, when we incur a small redundancy r in exchange for repair of information, this is very large, and in particular exp(Ω(k)) when r = O(1). A small sub-packetization is important for a number of reasons, as explained in some detail in the introduction of [17]. A large subpacketization limits the number of storage nodes (for example if ℓ exp(Ω(n)), then n = O(log ℓ) where ℓ is the storage capacity of each node), and in general leads to a reduced design space in terms of various systems parameters. A larger sub-packetization also makes management of meta-data, such as description of the code and the repair mechanisms for different nodes, more difficult. For a given storage capacity, a smaller sub-packetization allows one to distribute codewords corresponding to independently coded files among multiple nodes, which allows for distributing the load of providing information for the repair of a failed node among a larger number of nodes.
It has been known that somewhat large sub-packetization is inherent for MSR codes (we will describe the relevant prior results in the next section). In this work, we improve this lower bound to exponential, showing that unfortunately the exponential sub-packetization of known constructions is inherent. Our main result is the following. Theorem 1. Suppose an (n, k)-vector MDS code with redundancy r = n − k ≥ 2 is minimum storage regenerating (MSR). Then its sub-packetization ℓ must satisfy
$\ell \;\ge\; \left(\frac{r^2}{r^2 - r + 1}\right)^{(k-1)/2} \;\ge\; e^{(k-1)(r-1)/(2r^2)}.$
Our lower bound almost matches the sub-packetization of r O(k/r) achieved by the best known constructions. Improving the base of the exponent in our lower bound to r will make it even closer to the upper bounds. Though when r is small, which is the primary setting of interest in codes for distributed storage, this difference is not that substantial. We remark that our theorem leaves out the case when r = 1, which is known to have a sub-packetization of ℓ = 1 [9].
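For a rough sense of the remaining gap, the snippet below evaluates the lower bound of Theorem 1 against r^(k/r), used here only as a stand-in for the sub-packetization scale of known constructions (an illustration, not from the paper).

```python
import math

def theorem1_lower_bound(k: int, r: int) -> float:
    # (r^2 / (r^2 - r + 1))^((k - 1) / 2), the bound of Theorem 1.
    return (r * r / (r * r - r + 1)) ** ((k - 1) / 2)

def known_scale(k: int, r: int) -> float:
    # r^(k/r): rough sub-packetization scale of known MSR constructions,
    # used only as an order-of-magnitude comparison point.
    return r ** (k / r)

for k, r in [(20, 2), (50, 2), (100, 4)]:
    print(f"k={k}, r={r}: lower bound ~ {theorem1_lower_bound(k, r):.3g}, "
          f"known constructions ~ {known_scale(k, r):.3g}")
```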
A few words about our proof. Previous work [22] has shown that an (n, k) MSR code with sub-packetization ℓ implies a family of (k − 1) ℓ/r-dimensional subspaces H_i of F^ℓ, each of which has an associated collection of (r − 1) linear maps obeying some strong properties. For instance, in the case r = 2, there is an invertible map φ_i associated with H_i for each i which leaves all subspaces H_j, j ≠ i, invariant, and maps H_i itself to a disjoint space (i.e., φ_i(H_i) ∩ H_i = {0}). The task of showing a lower bound on ℓ then reduces to the linear-algebraic challenge of showing an upper bound on the size of such a family of subspaces and linear transformations, which we call an MSR subspace family (Definition 2). The authors of [10] showed an upper bound O(r log² ℓ) on the size of MSR subspace families via a nifty partitioning and linear independence argument.
We follow a different approach by showing that the number of linear maps that fix all subspaces in an MSR family decreases sharply as the number of subspaces increases. Specifically, we show that the dimension of the linear space of such linear maps decreases exponentially in the number of subspaces in the MSR family. This enables us to prove an O(r log ℓ) upper bound. This bound is asymptotically tight (up to an O(log r) factor), as there is a construction of an MSR subspace family of size (r + 1) log_r ℓ [24]. We also present an alternate construction in Appendix A, which works for all fields with more than 2 elements, compared to the large field size (of at least ≈ r^r ℓ) required in [24].
We now proceed to situate our work in the context of prior work, both constructions and lower bounds, for MSR codes.
Preliminaries
We will now define MSR codes more formally. We begin by defining vector codes. Let F be a field, and n, ℓ be positive integers. For a positive integer b, we denote [b] = {1, 2, . . . , b}. A vector code C of block length n and sub-packetization ℓ is an F-linear subspace of (F ℓ ) n . We can express a codeword of C as c = (c 1 , c 2 , . . . , c n ), where for i ∈ [n], the block c i = (c i,1 , . . . , c i,ℓ ) ∈ F ℓ denotes the length ℓ vector corresponding to the i'th code symbol c i .
Let k be an integer, with 1 ≤ k ≤ n. If the dimension of C, as an F-vector space, is kℓ, we say that C is an (n, k, ℓ)_F-vector code. The codewords of an (n, k, ℓ)_F-vector code are in one-to-one correspondence with vectors in (F^ℓ)^k, consisting of k blocks of ℓ field elements each.
Such a code is said to be Maximum Distance Separable (MDS), and called an (n, k, ℓ)-MDS code (over the field F), if every subset of k code symbols c_{i_1}, c_{i_2}, . . . , c_{i_k} is an information set for the code, i.e., knowing these symbols determines the remaining n − k code symbols and thus the full codeword. An MDS code thus offers the optimal erasure correction property: the information can be recovered from any set of k code symbols, thus tolerating the maximum possible number n − k of worst-case erasures.
An (n, k, ℓ)-MDS code can be used in distributed storage systems as follows. Data viewed as kℓ symbols over F is encoded using the code resulting in n vectors in F ℓ , which are stored in n storage nodes. Downloading the full contents from any subset of these k nodes (a total of kℓ symbols from F) suffices to reconstruct the original data in entirety. Motivated by the challenge of efficient regeneration of a failed storage node, which is a fairly typical occurrence in large scale distributed storage systems, the repair problem aims to recover any single code symbol c i by downloading fewer than kℓ field elements. This is impossible if one only downloads contents from k nodes, but becomes feasible if one is allowed to contact h > k helper nodes and receive fewer than ℓ field elements from each.
Here we focus our attention to only repairing the first k code symbols, which we view as the information symbols. This is called "systematic node repair" as opposed to the more general "all node repair" where the goal is to repair all n codeword symbols. We will also only consider the case h = n − 1, when all the remaining nodes are available as helper nodes. Since our focus is on a lower bound on the sub-packetization ℓ, this only makes our result stronger, and keeps the description somewhat simpler. We note that the currently best known constructions allow for all-node repair with optimal bandwidth from any subset of h helper nodes.
Suppose we want to repair the m'th code symbol for some m ∈ [k]. We download from the i'th code symbol, i ≠ m, a function h_{i,m}(c_i) of its contents, where h_{i,m} : F^ℓ → F^{β_{i,m}} is the repair function. Since the code C is F-linear, it is natural to expect the repair functions to exploit this linearity; therefore, throughout this paper, we shall assume linear repair of the failed node, that is, h_{i,m} is an F-linear function. Thus, we download from each node certain linear combinations of the ℓ symbols stored at that node. The total repair bandwidth to recover c_m is defined to be $\sum_{i \neq m} \beta_{i,m}$. By the cutset bound for repair of MDS codes [6], this quantity is lower bounded by (n − 1)ℓ/r, where r = n − k is the redundancy of the code. Further, equality can be attained only if β_{i,m} = ℓ/r for all i. That is, we download ℓ/r field elements from each of the remaining nodes. MDS codes achieving such an optimal repair bandwidth are called Minimum Storage Regenerating (MSR) codes, as precisely defined below. Let C ⊆ (F^ℓ)^n be an (n, k, ℓ)-MSR code, with redundancy r = n − k. The MDS property implies that any subset of k codeword symbols determines the whole codeword. We view the first k symbols as the "systematic" ones, with r parity check symbols computed from them, where we remind that when we say code symbol we mean a vector in F^ℓ. So we can assume that there are invertible matrices C_{i,j} ∈ F^{ℓ×ℓ} for i ∈ [r] and j ∈ [k] such that for c = (c_1, c_2, . . . , c_n) ∈ C, we have
$c_{k+i} = \sum_{j=1}^{k} C_{i,j}\, c_j .$
Suppose we want to repair a systematic node c m for m ∈ [k] with optimal repair bandwidth, by receiving from each of the remaining n − 1 nodes, ℓ/r F-linear combinations of the information they stored. This means that there are repair matrices S 1,m , . . . , S r,m ∈ F ℓ/r×ℓ , such that parity node k + i sends the linear combination
$S_{i,m}\, c_{k+i} = S_{i,m} \sum_{j=1}^{k} C_{i,j}\, c_j \qquad (2)$
Therefore, the information about c m that is sent to it by c k+i is S i,m C i,m c m . Since the k systematic nodes are independent of each other, then the only way to recover c m is by taking a linear combination of S i,m C i,m c m for i ∈ [r] such that the linear combination equals c m for any c m ∈ F ℓ . Therefore, to ensure full regeneration of c m , we must satisfy
$\operatorname{rank}\begin{pmatrix} S_{1,m} C_{1,m} \\ S_{2,m} C_{2,m} \\ \vdots \\ S_{r,m} C_{r,m} \end{pmatrix} = \ell$
Since each S i,m C i,m has ℓ/r rows, the above happens if and only if
$\sum_{i=1}^{r} R(S_{i,m} C_{i,m}) = F^{\ell} \qquad (3)$
where R(M ) denotes the row-span of a matrix M .
Cancelling interference of other systematic symbols
Now, for every other systematic node m′ ∈ [k] \ {m}, the parity nodes send the following information, namely linear combinations of c_{m′}:
$\begin{pmatrix} S_{1,m} C_{1,m'} \\ S_{2,m} C_{2,m'} \\ \vdots \\ S_{r,m} C_{r,m'} \end{pmatrix} c_{m'} \qquad (4)$
In order to cancel this from the linear combinations (2) received from the parity nodes, the systematic node m ′ has to send the linear combinations (4) about its contents. To achieve optimal repair bandwidth of at most ℓ/r symbols from every node, this imposes the requirement
$\operatorname{rank}\begin{pmatrix} S_{1,m} C_{1,m'} \\ S_{2,m} C_{2,m'} \\ \vdots \\ S_{r,m} C_{r,m'} \end{pmatrix} \le \frac{\ell}{r}$
However since C i,m ′ is invertible, and S i,m has full row rank, rank(S i,m C i,m ′ ) = ℓ/r for all i ∈ [r]. Combining this fact with the rank inequality above, this implies
$R(S_{1,m} C_{1,m'}) = \cdots = R(S_{r,m} C_{r,m'}) \qquad (5)$
for every m ≠ m′ ∈ [k], where R(M) is the row-span of a matrix M.
Constant repair matrices and casting the problem in terms of subspaces
We now make an important simplification, which allows us to assume that the matrices S i,m above depend only on the node m being repaired, but not on the helping parity node i. That is, S m = S i,m for all i ∈ [r]. We call repair with this restriction as possessing constant repair matrices. It turns out that one can impose this restriction with essentially no loss in parameters -by Theorem 2 of [22], if there is a (n, k, ℓ)-MSR code then there is also a (n − 1, k − 1, ℓ)-MSR code with constant repair matrices.
This allows us to cast the requirements (3) and (5) in terms of a nice property about subspaces and associated invertible maps, which we abstract below. This property was shown to be intimately tied to MSR codes in [24,22]. Definition 2 (MSR subspace family). For integers ℓ, r with r|ℓ and a field F, a collection of subspaces H 1 , . . . , H k of F ℓ of dimension ℓ/r each is said to be an (ℓ, r) F -MSR subspace family if there exist invertible linear maps Φ i,j on F ℓ , i ∈ {1, 2, . . . , k} and j ∈ {1, 2, . . . , r − 1} such that for every i ∈ [k], the following holds:
$H_i \oplus \bigoplus_{j=1}^{r-1} \Phi_{i,j}(H_i) = F^{\ell} \qquad (6)$
$\Phi_{i',j}(H_i) = H_i \quad \text{for every } j \in [r-1] \text{ and } i' \neq i \qquad (7)$
Now, we recall the argument that if we have an (n, k, ℓ)-MSR code with constant repair matrices, then that also yields a family of subspaces and maps with the above properties. Indeed, we can take H_m, m ∈ [k], to be R(S_m), and Φ_{m,j}, j ∈ [r − 1], to be the invertible linear transformation mapping x ∈ F^ℓ, viewed as a row vector, to $x\, C_{j+1,m}\, C_{1,m}^{-1}$. It is clear that Property (6) follows from (3), and Property (7) follows from (5). Together with the loss of one dimension in the transformation [22] to an MSR code with constant repair subspaces, we can conclude the following connection between MSR codes and the very structured set of subspaces and maps of Definition 2. Proposition 2. If there is an (n, k, ℓ)-MSR code with redundancy r = n − k, then there is an (ℓ, r)_F-MSR subspace family of size k − 1. For the reverse direction, the MSR subspace family can take care of the node repair, but one still needs to ensure the MDS property. This approach was taken in [24], based on a construction of an (ℓ, r)_F-MSR subspace family of size (r + 1) log_r ℓ. For completeness, we present another construction of an MSR subspace family in Appendix A. The subspaces in our construction are identical to [24] but we pick the linear maps differently, using just two distinct eigenvalues. As a result, our construction works over any field with more than two elements. In comparison, the approach in [24] used k^{r−1} ℓ/r distinct eigenvalues, and thus required a field that is bigger than this bound. It is an interesting question to see if the MDS property can be incorporated into our construction to give MSR codes with sub-packetization r^{k/(r+1)} over smaller fields.
Limitation of MSR subspace families
In this section, we state and prove the following strong upper bound on the size of an MSR family of subspaces, showing that the construction claimed in Theorem 8 is not too far from the best possible. Theorem 3. If H_1, H_2, . . . , H_k is an (ℓ, r)_F-MSR subspace family, then k ≤ (2r²/(r − 1)) ln ℓ. This upper bound together with Proposition 2 immediately implies our main result, Theorem 1. In the rest of the section, we prove the above theorem. Let H_1, H_2, . . . , H_k be the subspaces in an (ℓ, r)_F-MSR subspace family with associated invertible linear maps Φ_{i,j} where i ∈ [k] and j ∈ [r − 1]. Note that these linear maps are in some sense statements about the structure of the spaces H_1, H_2, . . . , H_k. They dictate the way the subspaces can interact with each other, thereby giving rigidity to the way they are structured.
The major insight and crux of the proof is the following definition on collections of subspaces. This definition is somewhat inspired by Galois Theory, in that we are looking at the space of linear maps on the vector space F ℓ that fix all the subspaces in question.
Definition 3.
In the vector space L(F ℓ , F ℓ ) of all linear maps from F ℓ to F ℓ , define the subspace
F(A 1 → B 1 , . . . , A s → B s ) := {ψ ∈ L(F ℓ , F ℓ ) | ψ(A i ) ⊆ B i ∀i ∈ {1, . . . , s}} for arbitrary subspaces A i , B i of F ℓ . Define the value I(A 1 → B 1 , . . . , A s → B s ) := dim(F(A 1 → B 1 , . . . , A s → B s ))
When A i = B i for each i, we adopt the shorthand notation F(A 1 , . . . , A s ) and I(A 1 , . . . , A s ) to denote the above quantities. We will also use the mixed notation F(A 1 , . . . , A s−1 , A s → B s ) to denote F(A 1 → A 1 , . . . , A s → B s ) and likewise for I(A 1 , . . . , A s−1 , A s → B s ).
Thus I(A_1, . . . , A_s) is the dimension of the space of linear maps that map each A_i within itself. We use the notation I() to suggest such an invariance. The key idea will be to cleverly exploit the invertible maps Φ_{i,j} associated with each H_i to argue that the dimension I(H_1, H_2, . . . , H_t) shrinks by a constant factor whenever we add in an H_{t+1} into the collection. Specifically, we will show that the dimension shrinks at least by a factor of (r² − r + 1)/r² for each newly added H_{t+1}. Because the identity map is always in F(H_1, H_2, . . . , H_k), the dimension I(H_1, H_2, . . . , H_k) is at least 1. As the ambient space of linear maps from F^ℓ to F^ℓ has dimension ℓ², this leads to an O(r log ℓ) upper bound on k. We begin with the following lemma.
Lemma 4. Let $U_1, U_2, \ldots, U_s \subseteq F^p$, $s \ge 2$, be arbitrary subspaces such that $\bigcap_{i=1}^{s} U_i = \{0\}$. Then the following inequality holds:
$\sum_{i=1}^{s} \dim(U_i) \le (s-1) \dim(U_1 + \cdots + U_s).$
Proof. We proceed by inducting on s. Indeed, when s = 2, we have from the Principle of Inclusion and Exclusion (PIE)
dim(U 1 ) + dim(U 2 ) = dim(U 1 + U 2 ) + dim(U 1 ∩ U 2 ) = dim(U 1 + U 2 )
And thus the base case holds. Now, if the inequality holds when s = p, then we have via the Principle of Inclusion and Exclusion
$\sum_{i=1}^{p+1} \dim(U_i) = \dim(U_1 + U_2) + \dim(U_1 \cap U_2) + \sum_{i=3}^{p+1} \dim(U_i) \qquad (8)$
By the induction hypothesis, we deduce that Equation (8) is at most
$\dim(U_1 + U_2) + (p-1)\, \dim\big((U_1 \cap U_2) + U_3 + \cdots + U_{p+1}\big) \qquad (9)$
And Equation (9) is at most
$p\, \dim(U_1 + U_2 + \cdots + U_{p+1}). \qquad (10)$
By combining Equations (8), (9), and (10), we deduce that the inequality also holds when s = p + 1. Since the base case s = 2 holds, we therefore conclude that the inequality holds for all integers s 2.
Next, we prove an identity for MSR subspace families that will come in handy. For the sake of brevity, we use the shorthand $\mathcal{H}_a := \{H_1, \ldots, H_a\}$, and we let $\Phi_{a,0}$ denote the identity map.
Lemma 5. For every t ∈ [k], i ∈ {0, 1, . . . , r − 1}, and s ∈ {0, 1, . . . , r − 1}, we have
$\sum_{j=0}^{s} I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)) \;\le\; s\, I(\mathcal{H}_{t-1}, H_t \to 0) + I\Big(\mathcal{H}_{t-1},\, \Phi_{t,i}(H_t) \to \bigoplus_{j=0}^{s} \Phi_{t,j}(H_t)\Big) \qquad (11)$
Proof. We proceed by inducting on s. The base case when s = 0 is clear as the right hand side simplifies to the left hand side. Now, if Equation (11) holds when s = p and p < r − 1, then we have, via the Principle of Inclusion and Exclusion (PIE) and Equation (6), the following chain of bounds on the quantity
$\sum_{j=0}^{p+1} I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)). \qquad (12)$
By the induction hypothesis, we deduce that Equation (12) is at most
$p\, I(\mathcal{H}_{t-1}, H_t \to 0) + I\Big(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \bigoplus_{j=0}^{p} \Phi_{t,j}(H_t)\Big) + I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,p+1}(H_t)) \qquad (13)$
By applying the Principle of Inclusion and Exclusion and Equation 6, we deduce that Equation (13) is at most
$p\, I(\mathcal{H}_{t-1}, H_t \to 0) + I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to 0) + I\Big(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \bigoplus_{j=0}^{p+1} \Phi_{t,j}(H_t)\Big) \qquad (14)$
And Equation (14) is equal to
$(p+1)\, I(\mathcal{H}_{t-1}, H_t \to 0) + I\Big(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \bigoplus_{j=0}^{p+1} \Phi_{t,j}(H_t)\Big) \qquad (15)$
And so combining Equations (12), (13), (14), and (15), we deduce that Equation (11) also holds when s = p + 1. Since the base case s = 0 holds, we therefore conclude that the inequality holds for all s ∈ {0, 1, . . . , r − 1}.
Following Lemma 5 and Equation (6), we deduce when s = r − 1 the following corollary.
Corollary 6. $\sum_{j=0}^{r-1} I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)) \;\le\; (r-1)\, I(\mathcal{H}_{t-1}, H_t \to 0) + I(\mathcal{H}_{t-1})$
We are now ready to establish the key iterative step, showing geometric decay of the dimension I(H_1, . . . , H_t) in t. Lemma 7. For every t ∈ [k],
$I(\mathcal{H}_{t-1}, H_t) \;\le\; \frac{r^2 - r + 1}{r^2}\, I(\mathcal{H}_{t-1}). \qquad (16)$
Proof. Recall that by the property of an (ℓ, r) F -MSR subspace family, the maps Φ t,j , j ∈ {0, 1, . . . , r − 1}, leave H 1 , . . . , H t−1 invariant. Using this it follows that
$I(\mathcal{H}_{t-1}, H_t) = I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t))$ for each i, j ∈ {0, 1, . . . , r − 1}, since we have an isomorphism $F(\mathcal{H}_{t-1}, H_t) \to F(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t))$ given by $\psi \mapsto \Phi_{t,j} \circ \psi \circ \Phi_{t,i}^{-1}$. Thus we have
$r^2 \cdot I(\mathcal{H}_{t-1}, H_t) = \sum_{i=0}^{r-1} \sum_{j=0}^{r-1} I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)). \qquad (17)$
Notice that the inner sum is the same as the left hand side in Corollary 6. Thus we are able to apply Corollary 6 on Equation (17) to find that
$\sum_{i=0}^{r-1}\sum_{j=0}^{r-1} I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)) \;\le\; \sum_{i=0}^{r-1} \big[(r-1)\, I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to 0) + I(\mathcal{H}_{t-1})\big] = r\, I(\mathcal{H}_{t-1}) + (r-1) \sum_{i=0}^{r-1} I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to 0). \qquad (18)$
Now we observe that the only linear transformation of F^ℓ that maps Φ_{t,i}(H_t) → 0 for all i ∈ {0, 1, . . . , r − 1} simultaneously is the identically 0 map. This is because $\sum_{j=0}^{r-1} \Phi_{t,j}(H_t) = F^{\ell}$ from Equation (6). Thus we are in a situation where Lemma 4 applies, and we have
$r\, I(\mathcal{H}_{t-1}) + (r-1) \sum_{i=0}^{r-1} I(\mathcal{H}_{t-1}, \Phi_{t,i}(H_t) \to 0) \;\le\; r\, I(\mathcal{H}_{t-1}) + (r-1)\cdot(r-1)\, I(\mathcal{H}_{t-1}) = (r^2 - r + 1)\, I(\mathcal{H}_{t-1}) \qquad (19)$
Combining Equations (17), (18), and (19), we conclude Equation (16) as desired.
We are now ready to finish off the proof of our claimed upper bound on the size k of an (ℓ, r) F -MSR family.
Proof of Theorem 3. Since the identity map belongs to the space F(H_1, . . . , H_k), by applying Lemma 7 inductively on H_1, H_2, . . . , H_k, we obtain the inequality
$1 \;\le\; I(H_1, \ldots, H_k) \;\le\; \left(\frac{r^2 - r + 1}{r^2}\right)^{k} \ell^2,$
from which we find that
$k \;\le\; \frac{2 \ln \ell}{\ln\!\big(\frac{r^2}{r^2 - r + 1}\big)} \;\le\; \frac{2 \ln \ell}{\frac{r-1}{r^2}} \;=\; \frac{2 r^2}{r-1} \ln \ell,$
where the second inequality follows because $\ln(1+x) \ge \frac{x}{1+x}$ for all x > −1. We thus have the claimed upper bound.
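As a numerical sanity check of this argument (an illustration, not code from the paper), one can iterate the decay factor (r² − r + 1)/r² starting from the ambient dimension ℓ² and compare the largest admissible k with the closed-form bound just derived.

```python
import math

def max_k_by_decay(ell: int, r: int) -> int:
    """Largest t with ell^2 * ((r^2 - r + 1) / r^2)^t >= 1, i.e. the number of
    subspaces the geometric decay of I(H_1, ..., H_t) can accommodate."""
    decay = (r * r - r + 1) / (r * r)
    dim, t = float(ell) ** 2, 0
    while dim * decay >= 1.0:
        dim *= decay
        t += 1
    return t

# Compare with the closed form 2 ln(ell) / ln(r^2 / (r^2 - r + 1)).
for ell, r in [(2 ** 10, 2), (2 ** 20, 2), (3 ** 12, 3)]:
    closed_form = 2 * math.log(ell) / math.log(r * r / (r * r - r + 1))
    print(ell, r, max_k_by_decay(ell, r), round(closed_form, 1))
```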
A Proof of Theorem 8
In this section, we state and prove an alternate construction of an MSR subspace family of size (r + 1) log r ℓ. The first construction of an (ℓ, r) F -MSR subspace family of size (r + 1) log r ℓ that also satisfied the MDS property was shown in [24] for fields of size more than k r−1 ℓ/r elements. Without the MDS property, the field size needed to be more than r elements to show that the construction satisfied the node repair property.
Our construction uses subspaces that are identical to the ones in [24], but we choose different linear maps that required only two distinct eigenvalues. As a result, our construction works over all fields with more than two elements. It remains a very interesting question whether the MDS property can be additionally incorporated into our construction to yield MSR codes with sub-packetization r k/(r+1) over smaller fields.
Theorem 8. For |F| > 2 and r ≥ 2, there exists an (ℓ = r^m, r)_F-MSR subspace family of (r + 1)m = (r + 1) log_r(ℓ) subspaces.
In the rest of the section, we will prove the theorem above.
To give a general view of our construction, we first shift our view of the ambient space F ℓ = F r m to (F r ) ⊗m , vectors that consist of m tensored vectors in F r . We then consider a collection of vectors T := {v 1 , v 2 , . . . , v r , v r+1 }, situated in F r , such that any r of them form a basis in F r . The subspace A k,i will be all vectors in (F r ) ⊗m whose k'th position in the m tensored vectors is the vector v i .
The r − 1 associated linear maps Φ_{(k,i),1}, . . . , Φ_{(k,i),r−1} of the subspace A_{k,i} will simply focus on transforming the k'th position of each vector while retaining all remaining positions. Specifically, on the k'th position, each map acts by scaling the vectors in T \ {v_i}: the linear map Φ_{(k,i),t} will scale v_{i+t} by a factor λ ≠ 1 while all other vectors in T \ {v_i} will be identically mapped, where the indices are taken modulo r + 1. That way, everything in T \ {v_i} will stay almost the same while v_i along with the r − 1 images of v_i will form a basis for F^r in the k'th position.
Proof. Let ℓ = r m , and let V = (F r ) ⊗m ≃ F ℓ be the ambient space. Consider a set of vectors {v 1 , v 2 , . . . , v r , v r+1 } ⊂ F r for which the first r form a basis in F r and satisfy the equation
v 1 + v 2 + . . . + v r + v r+1 = 0
For k ∈ [m] and i ∈ [r + 1], we define our (r + 1)m subspaces to be
$A_{k,i} := \operatorname{span}\left(v_{i_1} \otimes \cdots \otimes v_{i_m} \mid i_j \in [r+1],\ i_k = i\right)$
which is a subspace of V . Observe that while the k'th position is fixated for any vector in A k,i , the remaining m − 1 positions are free to choose from any r vectors in F r . Through this observation, we see that dim(A k,i ) = r m−1 = ℓ/r.
To properly define the associated linear maps of the subspace family, it suffices to show their mapping for the basis
$S_i := \{\, v_{i_1} \otimes \cdots \otimes v_{i_m} \mid i_j \in [r+1] \setminus \{i\} \,\}$ of V.
Since |F| > 2, then we can fix a constant λ ∈ F with λ / ∈ {0, 1}, which we will use as an eigenvalue across all (r − 1)(r + 1)m linear maps. For each t ∈ [r − 1], the linear map Φ (k,i),t will scale all vectors in S i whose k'th position is v i+t by a factor λ and identically all remaining vectors in S i , where indices are taken modulo r + 1. Namely, for
$i_k = i + t$,
$v_{i_1} \otimes \cdots \otimes v_{i_k} \otimes \cdots \otimes v_{i_m} \;\xrightarrow{\;\Phi_{(k,i),t}\;}\; v_{i_1} \otimes \cdots \otimes (\lambda v_{i_k}) \otimes \cdots \otimes v_{i_m}$
And for $i_k \in [r+1] \setminus \{i+t,\, i\}$,
$v_{i_1} \otimes \cdots \otimes v_{i_k} \otimes \cdots \otimes v_{i_m} \;\xrightarrow{\;\Phi_{(k,i),t}\;}\; v_{i_1} \otimes \cdots \otimes v_{i_k} \otimes \cdots \otimes v_{i_m}$
Observe that all the vectors in the basis S i are scaled by either 1 or λ, which means that the image Φ (k,i),t (S i ) is also a basis for V . This tells us that Φ (k,i),t is an invertible linear map. It now remains to show Properties 6 and 7 hold for our given subspaces and linear maps.
To show Property (6), we can use the relation $v_1 + v_2 + \cdots + v_{r+1} = 0$ to rewrite v_i as $v_i = -\sum_{j \in [r+1] \setminus \{i\}} v_j$. This shows us that when the k'th position of a vector is v_i, then Φ_{(k,i),t} will map it as
$v_{i_1} \otimes \cdots \otimes v_i \otimes \cdots \otimes v_{i_m} \;\xrightarrow{\;\Phi_{(k,i),t}\;}\; v_{i_1} \otimes \cdots \otimes \big(v_i - (\lambda - 1) v_{i+t}\big) \otimes \cdots \otimes v_{i_m}$
Since λ ≠ 1, the set $\{v_i,\ v_i - (\lambda-1)v_{i+1},\ \ldots,\ v_i - (\lambda-1)v_{i+r-1}\}$ forms a basis for F^r. Thus for a vector $v = v_{i_1} \otimes \cdots \otimes v_{i_{k-1}} \otimes v_i \otimes v_{i_{k+1}} \otimes \cdots \otimes v_{i_m}$, the vectors $\{v, \Phi_{(k,i),1}(v), \ldots, \Phi_{(k,i),r-1}(v)\}$ span all of F^r in the k'th position. Because we are free to choose any vector in all remaining positions, such vectors taken together span all of V. That is, we find that
$A_{k,i} \oplus \bigoplus_{t=1}^{r-1} \Phi_{(k,i),t}(A_{k,i}) = F^{\ell},$
which shows Property (6).
To show (7), we start by breaking the subspace A k ′ ,i ′ into two possibilities:
1. For the case when k′ ≠ k, the subspace A_{k′,i′} remains invariant under each Φ_{(k,i),t}, as these maps only linearly transform the k'th position while retaining all other positions.
2. For the case when k′ = k and i′ ≠ i, the subspace A_{k,i′} is an eigenspace for Φ_{(k,i),t}. Namely, when i′ ≠ i + t, A_{k,i′} is the eigenspace of eigenvalue 1. When i′ = i + t, the eigenvalue is instead λ.
This shows that (7) also holds.
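For small parameters the construction can also be checked numerically. The sketch below is an illustration (not code accompanying the paper): it works over the rationals rather than a general field with more than two elements, fixes r = 2, m = 2, λ = 2, and verifies Properties (6) and (7) by rank computations.

```python
# Numerical sanity check of the subspace family above for r = 2, m = 2
# (so ell = 4), over the rationals/reals with lambda = 2.  Illustration only.
import itertools
import numpy as np

r, m = 2, 2
ell = r ** m
lam = 2.0

# v_1, ..., v_{r+1} in F^r: the first r are a basis, and they sum to zero.
vs = [np.eye(r)[:, j] for j in range(r)]
vs.append(-sum(vs))                          # v_{r+1} = -(v_1 + ... + v_r)

def tensor(idx):
    """The elementary tensor v_{idx[0]} (x) ... (x) v_{idx[m-1]} in R^ell."""
    out = np.array([1.0])
    for j in idx:
        out = np.kron(out, vs[j])
    return out

def rank_of(*mats):
    return np.linalg.matrix_rank(np.column_stack(mats))

def A(k, i):
    """Spanning set of A_{k,i}: all elementary tensors whose k-th factor is v_i."""
    cols = [tensor(idx) for idx in itertools.product(range(r + 1), repeat=m)
            if idx[k] == i]
    return np.column_stack(cols)

def Phi(k, i, t):
    """Phi_{(k,i),t}: on the basis S_i, scale by lam the tensors whose k-th
    factor is v_{i+t} (indices mod r+1) and fix the rest."""
    basis = list(itertools.product([j for j in range(r + 1) if j != i], repeat=m))
    P = np.column_stack([tensor(idx) for idx in basis])
    D = np.diag([lam if idx[k] == (i + t) % (r + 1) else 1.0 for idx in basis])
    return P @ D @ np.linalg.inv(P)

subspaces = list(itertools.product(range(m), range(r + 1)))
for k, i in subspaces:
    Aki = A(k, i)
    images = [Aki] + [Phi(k, i, t) @ Aki for t in range(1, r)]
    # Property (6): A_{k,i} plus its r-1 images spans F^ell, each of dim ell/r.
    assert rank_of(Aki) == ell // r and rank_of(*images) == ell
    # Property (7): every other A_{k',i'} is invariant under Phi_{(k,i),t}.
    for (k2, i2), t in itertools.product(subspaces, range(1, r)):
        if (k2, i2) != (k, i):
            A2 = A(k2, i2)
            assert rank_of(A2, Phi(k, i, t) @ A2) == rank_of(A2)
print("Properties (6) and (7) verified for all", (r + 1) * m, "subspaces")
```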
B Proof of the Cutset bound
Proof. Consider an (n, k, ℓ)-MDS vector code that stores a file M of size kℓ in storage nodes s 1 , s 2 , . . . , s n . The MDS vector code will repair a storage node s h by making every other storage node s i communicate β i,h bits to s h . From the MDS property, we know that any collection C ⊆ [n] \ {h} of k − 1 of nodes {s i } i∈C along with s h is able to construct our original file M.
Thus the collective information of these k storage nodes is at least |M| = kℓ, implying the inequality
$\sum_{i \in C} |s_i| + \sum_{i \in [n] \setminus (C \cup \{h\})} \beta_{i,h} \;\ge\; k\ell. \qquad (20)$
Since every storage node stores ℓ bits (|s_i| = ℓ), (20) reduces down to
$\sum_{i \in [n] \setminus (C \cup \{h\})} \beta_{i,h} \;\ge\; \ell. \qquad (21)$
Hence (21) implies that any n − k helper storage nodes collectively communicate at least ℓ bits. Summing (21) over all $\binom{n-1}{n-k}$ possible collections of n − k helper storage nodes, and noting that each helper node i ∈ [n] \ {h} appears in exactly $\binom{n-2}{n-k-1}$ of these collections, we find
$\sum_{i \in [n] \setminus \{h\}} \beta_{i,h} \;\ge\; \frac{n-1}{n-k} \cdot \ell. \qquad (22)$
This is the claimed cutset bound. Moreover, to achieve equality in (22), equality must be achieved in (21) for every collection of n − k helper storage nodes. That is possible only when β_{i,h} = ℓ/(n − k) for all i ∈ [n] \ {h}. Hence, under optimal repair bandwidth, the total information communicated is $\sum_{i \in [n] \setminus \{h\}} \beta_{i,h} = (n-1)\ell/(n-k)$, and this is achieved only when every helper storage node communicates exactly ℓ/(n − k) bits to storage node s_h. | 6,904 |
1901.05112 | 2910298035 | An @math -vector MDS code is a @math -linear subspace of @math (for some field @math ) of dimension @math , such that any @math (vector) symbols of the codeword suffice to determine the remaining @math (vector) symbols. The length @math of each codeword symbol is called the sub-packetization of the code. Such a code is called minimum storage regenerating (MSR), if any single symbol of a codeword can be recovered by downloading @math field elements (which is known to be the least possible) from each of the other symbols. MSR codes are attractive for use in distributed storage systems, and by now a variety of ingenious constructions of MSR codes are available. However, they all suffer from exponentially large sub-packetization @math . Our main result is an almost tight lower bound showing that for an MSR code, one must have @math . Previously, a lower bound of @math , and a tight lower bound for a restricted class of "optimal access" MSR codes, were known. Our work settles a central open question concerning MSR codes that has received much attention. Further our proof is really short, hinging on one key definition that is somewhat inspired by Galois theory. | Turning to known lower bounds on @math , a weak bound of @math was shown via a combinatorial argument in @cite_0 . Using an elegant linear independence and partitioning argument, the following bound is proven in @cite_24 : (This was slightly improved in @cite_10 , but the improvement is tiny for the case when @math which is our focus.) The bound implies a lower bound on sub-packetization of @math . Even for the case @math , it was not known if one can achieve sub-packetization smaller than @math . Our Theorem now rules out this possibility. We conjecture that our bound can be improved to @math which will show that the construction in @cite_15 is exactly tight. | {
"abstract": [
"Maximum distance separable (MDS) codes are widely used in storage systems to protect against disk (node) failures. A node is said to have capacity l over some field F, if it can store that amount of symbols of the field. An (n, k, l) MDS code uses n nodes of capacity l to store k information nodes. The MDS property guarantees the resiliency to any n-k node failures. An optimal bandwidth (respectively, optimal access) MDS code communicates (respectively, accesses) the minimum amount of data during the repair process of a single failed node. It was shown that this amount equals a fraction of 1 (n - k) of data stored in each node. In previous optimal bandwidth constructions, l scaled polynomially with k in codes when the asymptotic rate is less than 1. Moreover, in constructions with a constant number of parities, i.e., when the rate approaches 1, l is scaled exponentially with k. In this paper, we focus on the case of linear codes with linear repair operations and constant number of parities n - k = r, and ask the following question: given the capacity of a node l what is the largest number of information disks k in an optimal bandwidth (respectively, access) (k + r, k, l) MDS code? We give an upper bound for the general case, and two tight bounds in the special cases of two important families of codes. The first is a family of codes with optimal update property, and the second is a family with optimal access property. Moreover, the bounds show that in some cases optimal-bandwidth codes have larger k than optimal-access codes, and therefore these two measures are not equivalent.",
"MDS codes are erasure-correcting codes that can correct the maximum number of erasures given the number of redundancy or parity symbols. If an MDS code has r parities and no more than r erasures occur, then by transmitting all the remaining data in the code one can recover the original information. However, it was shown that in order to recover a single symbol erasure, only a fraction of 1 r of the information needs to be transmitted. This fraction is called the repair bandwidth (fraction). Explicit code constructions were given in previous works. If we view each symbol in the code as a vector or a column, then the code forms a 2D array and such codes are especially widely used in storage systems. In this paper, we ask the following question: given the length of the column l, can we construct high-rate MDS array codes with optimal repair bandwidth of 1 r, whose code length is as long as possible? In this paper, we give code constructions such that the code length is (r + l)log r l.",
"In this paper, we revisit the problem of finding the longest systematic-length @math for a linear minimum storage regenerating (MSR) code with optimal repair of only systematic part, for a given per-node storage capacity @math and an arbitrary number of parity nodes @math . We study the problem by following the geometric analysis of linear subspaces and operators. First, a simple quadratic bound is given, which implies that @math is the largest number of systematic nodes in the scenario. Second, an @math -based-log bound is derived, which is superior to the upper bound on log-base @math in the prior work. Finally, an explicit upper bound depending on the value of @math is introduced, which further extends the corresponding result in the literature.",
"Distributed storage systems employ codes to provide resilience to failure of multiple storage disks. In particular, an (n, k) maximum distance separable (MDS) code stores k symbols in n disks such that the overall system is tolerant to a failure of up to n - k disks. However, access to at least k disks is still required to repair a single erasure. To reduce repair bandwidth, array codes are used where the stored symbols or packets are vectors of length l. The MDS array codes have the potential to repair a single erasure using a fraction 1 (n - k) of data stored in the remaining disks. We introduce new methods of analysis, which capitalize on the translation of the storage system problem into a geometric problem on a set of operators and subspaces. In particular, we ask the following question: for a given (n, k), what is the minimum vector-length or subpacketization factor l required to achieve this optimal fraction? For exact recovery of systematic disks in an MDS code of low redundancy, i.e., k n > 1 2, the best known explicit codes have a subpacketization factor l, which is exponential in k. It has been conjectured that for a fixed number of parity nodes, it is in fact necessary for l to be exponential in k. In this paper, we provide a new log-squared converse bound on k for a given l, and prove that k ≤ 2 log2 I(logδ l + 1), for an arbitrary number of parity nodes r = n - k, where δ = r (r - 1)."
],
"cite_N": [
"@cite_0",
"@cite_15",
"@cite_10",
"@cite_24"
],
"mid": [
"2207060061",
"2158398747",
"2541263990",
"2078872683"
]
} | An Exponential Lower Bound on the Sub-Packetization of Minimum Storage Regenerating Codes * | Traditional Maximum Distance Separable (MDS) codes such as Reed-Solomon codes provide the optimal trade-off between redundancy and number of worst-case erasures tolerated. When encoding k symbols of data into an n symbol codeword by an (n, k)-MDS code, the data can be recovered from any set of k out of n codeword symbols, which is clearly the best possible. MDS codes are thus a naturally appealing choice to minimize storage overhead in distributed storage systems (DSS). One can encode data, broken into k pieces, by an (n, k)-MDS code, and distribute the n codeword symbols on n different storage nodes, each holding the symbol corresponding to one codeword position. In the sequel, we use the terms storage node and codeword symbol interchangeably.
A rather common scenario faced by modern large scale DSS is the failure or temporary unavailability of storage nodes. It is of great importance to promptly respond to such failures, by efficient repair/regeneration of the failed node using the content stored in some of the other nodes (which are called "helper" nodes as they assist in the repair). This requirement has spurred a set of fundamentally new and exciting challenges concerning codes for recovery from erasures, with the goal of balancing worst-case fault tolerance against many erasures with very efficient schemes to recover from the much more common scenario of single (or a few) erasures.
There are two measures of repair efficiency that have received a significant amount of attention in the last decade. One concerns locality, where we would like to repair a node locally based on the contents of a small number of other storage nodes. Such locality necessarily compromises the MDS property, and a rich body of work on locally repairable codes (LRCs) studies the best trade-offs possible in this model and constructions achieving those [8,14,20]. The other line of work, which is the subject of this paper, focuses on optimizing the amount of data downloaded from the other nodes. This model allows the helper node to respond with a fraction of its contents. The efficiency measure is the repair bandwidth, which is the total amount of data downloaded from all the helper nodes. Codes in this model are called regenerating codes, and were systematically introduced in the seminal work of Dimakis et al. [6], and have since witnessed an explosive amount of research.
Rather surprisingly, even for some MDS codes, by contacting more helper nodes but downloading fewer symbols from each, one can do much better than the "usual" scheme, which would download the contents of k nodes in full. In general an entire spectrum of trade-offs is possible between storage overhead and repair bandwidth. This includes minimum bandwidth regenerating (MBR) codes with the minimum repair bandwidth of ℓ [16]. At the other end of the spectrum, we have minimum storage regenerating (MSR) codes (defined formally below) which retain the MDS property and thus have optimal redundancy. This work focuses on MSR codes.
Example. We quickly recap the classic example of the EVENODD code [3,7] to illustrate regeneration of a lost symbol in an MDS code with non-trivial bandwidth. This is a (4, 2) MDS code with 4 storage nodes, each storing a vector of two symbols over the binary field. We denote by P 1 , P 2 the two parity nodes.
S_1            S_2            P_1                 P_2
a_1            b_1            a_1 + b_1           a_2 + b_1
a_2            b_2            a_2 + b_2           a_1 + a_2 + b_2
The naive scheme to repair a node would contact any two of the remaining three nodes, and download both bits from each of them, for a total repair bandwidth of 4 bits. However, it turns out that one can get away with downloading just one bit from each of the three other nodes, for a repair bandwidth of 3 bits! If we were to repair the node S 1 , the remaining nodes (S 2 , P 1 , P 2 ) would send (b 1 , a 1 + b 1 , a 2 + b 1 ), respectively. If we were to repair the node S 2 , the remaining nodes (S 1 , P 1 , P 2 ) would send (a 2 , a 2 + b 2 , a 2 + b 1 ), respectively. If we were to repair the node P 1 , the remaining nodes (S 1 , S 2 , P 2 ) would send (a 1 , b 1 , a 1 + a 2 + b 2 ), respectively. If we were to repair the node P 2 , the remaining nodes (S 1 , S 2 , P 1 ) would send (a 2 , b 1 , (a 1 + b 1 ) + (a 2 + b 2 )), respectively. Note that in the last case, the helper node P 1 sends a linear combination of its symbols-this is in general a powerful ability that we allow in MSR codes.
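To make this saving concrete, here is a minimal Python sanity check (an illustrative sketch written for this example, not code from the paper): it encodes the (4, 2) example above over GF(2) and verifies, over all 16 messages, that the three downloaded bits listed for S 1 always determine S 1 .

```python
from itertools import product

# Systematic (4, 2) vector code over GF(2) with sub-packetization 2:
# S1 = (a1, a2), S2 = (b1, b2),
# P1 = (a1 + b1, a2 + b2), P2 = (a2 + b1, a1 + a2 + b2).
def encode(a1, a2, b1, b2):
    S1 = (a1, a2)
    S2 = (b1, b2)
    P1 = ((a1 + b1) % 2, (a2 + b2) % 2)
    P2 = ((a2 + b1) % 2, (a1 + a2 + b2) % 2)
    return S1, S2, P1, P2

# Repair S1 from one bit of each helper: b1, a1 + b1, a2 + b1.
def repair_S1(S2, P1, P2):
    d1, d2, d3 = S2[0], P1[0], P2[0]       # downloaded bits
    return ((d2 + d1) % 2, (d3 + d1) % 2)  # recovers (a1, a2)

for a1, a2, b1, b2 in product((0, 1), repeat=4):
    S1, S2, P1, P2 = encode(a1, a2, b1, b2)
    assert repair_S1(S2, P1, P2) == S1     # 3 bits suffice instead of 4
print("S1 repaired from 3 downloaded bits in all 16 cases")
```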
Vector codes and sub-packetization. The above example shows that when the code is an (n, k) vector MDS code, where each codeword symbol itself is a vector, say in F ℓ for some field F, then one can hope to achieve repair bandwidth smaller than the naive kℓ. The length of the vector ℓ stored at each node is called the sub-packetization (since this is the granularity into which a single codeword symbol needs to be divided).
MSR codes.
A natural question is how small a repair bandwidth one can achieve with MDS codes. The so-called cutset bound [6] dictates that one must download at least (n − 1)ℓ/(n − k) symbols of F from the remaining nodes to recover any single node. Further, in order to attain this optimal repair bandwidth bound, each of the (n − 1) nodes must respond with ℓ/(n − k) field elements. Vector MDS codes which admit repair schemes meeting the cutset bound (for repair of every node) are called minimum storage regenerating (MSR) codes (for the formal description, see Definition 1). MSR codes, and specifically their sub-packetization, are the focus of this paper.
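For a quick numerical feel for what meeting the cutset bound buys, the following tiny sketch compares the naive and cutset repair bandwidths (the parameters are illustrative, not taken from the paper):

```python
# Repair bandwidth comparison for an (n, k) MDS vector code with
# sub-packetization l (illustrative parameters).
n, k, l = 14, 10, 4096
r = n - k

naive = k * l                 # download k full nodes
cutset = (n - 1) * l // r     # cutset bound: l/r symbols from each of n-1 helpers

print(f"naive repair:  {naive} symbols")
print(f"cutset bound:  {cutset} symbols ({cutset / naive:.2%} of naive)")
```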
Large sub-packetization: problematic and inherent. While there are many constructions of MSR codes by now, they all have large sub-packetization, which is at least r^{k/r}. For the setting of most interest, when we incur a small redundancy r in exchange for repair of information, this is very large, and in particular exp(Ω(k)) when r = O(1). A small sub-packetization is important for a number of reasons, as explained in some detail in the introduction of [17]. A large subpacketization limits the number of storage nodes (for example if ℓ ≥ exp(Ω(n)), then n = O(log ℓ), where ℓ is the storage capacity of each node), and in general leads to a reduced design space in terms of various systems parameters. A larger sub-packetization also makes management of meta-data, such as description of the code and the repair mechanisms for different nodes, more difficult. For a given storage capacity, a smaller sub-packetization allows one to distribute codewords corresponding to independently coded files among multiple nodes, which allows for distributing the load of providing information for the repair of a failed node among a larger number of nodes.
It has been known that somewhat large sub-packetization is inherent for MSR codes (we will describe the relevant prior results in the next section). In this work, we improve this lower bound to exponential, showing that unfortunately the exponential sub-packetization of known constructions is inherent. Our main result is the following.
Theorem 1. Suppose an (n, k)-vector MDS code with redundancy r = n − k ≥ 2 is minimum storage regenerating (MSR). Then its sub-packetization ℓ must satisfy
ℓ ≥ ( r^2 / (r^2 − r + 1) )^{(k−1)/2} ≥ e^{(k−1)(r−1)/(2r^2)} .
Our lower bound almost matches the sub-packetization of r^{O(k/r)} achieved by the best known constructions. Improving the base of the exponent in our lower bound to r would make it even closer to the upper bounds, though when r is small, which is the primary setting of interest in codes for distributed storage, this difference is not that substantial. We remark that our theorem leaves out the case when r = 1, which is known to have a sub-packetization of ℓ = 1 [9].
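The short sketch below (ours, with illustrative parameters) puts the lower bound of Theorem 1 next to the r^{k/r}-type sub-packetization of known constructions; both grow exponentially in k, and the remaining gap is only in the base of the exponent.

```python
import math

# Theorem 1 lower bound vs. the r^(k/r)-type sub-packetization of known
# constructions (illustrative parameters chosen for this sketch).
def lower_bound(k, r):
    return (r * r / (r * r - r + 1)) ** ((k - 1) / 2)

def known_construction(k, r):
    return r ** math.ceil(k / r)

for k, r in [(20, 2), (50, 3), (100, 4)]:
    lb = lower_bound(k, r)
    ub = known_construction(k, r)
    print(f"k={k:3d}, r={r}:  l >= {lb:10.2f}   known l = {ub}")
```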
A few words about our proof. Previous work [22] has shown that an (n, k) MSR code with sub-packetization ℓ implies a family of (k − 1) ℓ/r-dimensional subspaces H i of F ℓ each of which has an associated collection of (r − 1) linear maps obeying some strong properties. For instance, in the case r = 2, there is an invertible map φ i associated with H i for each i which leaves all subspaces H j , j ≠ i, invariant, and maps H i itself to a disjoint space (i.e., φ i (H i ) ∩ H i = {0}). The task of showing a lower bound on ℓ then reduces to the linear-algebraic challenge of showing an upper bound on the size of such a family of subspaces and linear transformations, which we call an MSR subspace family (Definition 2). The authors of [10] showed an upper bound O(r log^2 ℓ) on the size of MSR subspace families via a nifty partitioning and linear independence argument.
We follow a different approach by showing that the number of linear maps that fix all subspaces in an MSR family decreases sharply as the number of subspaces increases. Specifically, we show that the dimension of the linear space of such linear maps decreases exponentially in the number of subspaces in the MSR family. This enables us to prove an O(r log ℓ) upper bound. This bound is asymptotically tight (up to an O(log r) factor), as there is a construction of an MSR subspace family of size (r + 1) log_r ℓ [24]. We also present an alternate construction in Appendix A, which works for all fields with more than 2 elements, compared to the large field size (of at least ≈ r r ℓ) required in [24].
We now proceed to situate our work in the context of prior work, both constructions and lower bounds, for MSR codes.
Preliminaries
We will now define MSR codes more formally. We begin by defining vector codes. Let F be a field, and n, ℓ be positive integers. For a positive integer b, we denote [b] = {1, 2, . . . , b}. A vector code C of block length n and sub-packetization ℓ is an F-linear subspace of (F ℓ ) n . We can express a codeword of C as c = (c 1 , c 2 , . . . , c n ), where for i ∈ [n], the block c i = (c i,1 , . . . , c i,ℓ ) ∈ F ℓ denotes the length ℓ vector corresponding to the i'th code symbol c i .
Let k be an integer, with 1 ≤ k ≤ n. If the dimension of C, as an F-vector space, is kℓ, we say that C is an (n, k, ℓ) F -vector code. The codewords of an (n, k, ℓ) F -vector code are in one-to-one correspondence with vectors in (F ℓ ) k , consisting of k blocks of ℓ field elements each.
Such a code is said to be Maximum Distance Separable (MDS), and called an (n, k, ℓ)-MDS code (over the field F), if every subset of k code symbols c i 1 , c i 2 , . . . , c i k is an information set for the code, i.e., knowing these symbols determines the remaining n − k code symbols and thus the full codeword. An MDS code thus offers the optimal erasure correction property: the information can be recovered from any set of k code symbols, thus tolerating the maximum possible number n − k of worst-case erasures.
An (n, k, ℓ)-MDS code can be used in distributed storage systems as follows. Data viewed as kℓ symbols over F is encoded using the code resulting in n vectors in F ℓ , which are stored in n storage nodes. Downloading the full contents from any subset of these k nodes (a total of kℓ symbols from F) suffices to reconstruct the original data in entirety. Motivated by the challenge of efficient regeneration of a failed storage node, which is a fairly typical occurrence in large scale distributed storage systems, the repair problem aims to recover any single code symbol c i by downloading fewer than kℓ field elements. This is impossible if one only downloads contents from k nodes, but becomes feasible if one is allowed to contact h > k helper nodes and receive fewer than ℓ field elements from each.
Here we focus our attention on repairing only the first k code symbols, which we view as the information symbols. This is called "systematic node repair" as opposed to the more general "all node repair" where the goal is to repair all n codeword symbols. We will also only consider the case h = n − 1, when all the remaining nodes are available as helper nodes. Since our focus is on a lower bound on the sub-packetization ℓ, this only makes our result stronger, and keeps the description somewhat simpler. We note that the currently best known constructions allow for all-node repair with optimal bandwidth from any subset of h helper nodes.
Suppose we want to repair the m'th code symbol for some m ∈ [k]. We download from the i'th code symbol, i ≠ m, a function h i,m (c i ) of its contents, where h i,m : F^ℓ → F^{β_{i,m}} is the repair function. Since C is linear, it is natural for h i,m to exploit this linearity; therefore, throughout this paper, we shall assume linear repair of the failed node. That is, h i,m is an F-linear function. Thus, we download from each node certain linear combinations of the ℓ symbols stored at that node. The total repair bandwidth to recover c m is defined to be ∑_{i≠m} β i,m . By the cutset bound for repair of MDS codes [6], this quantity is lower bounded by (n − 1)ℓ/r, where r = n − k is the redundancy of the code. Further, equality can be attained only if β i,m = ℓ/r for all i. That is, we download ℓ/r field elements from each of the remaining nodes. MDS codes achieving such an optimal repair bandwidth are called Minimum Storage Regenerating (MSR) codes, as precisely defined below. Definition 1 (MSR code). An (n, k, ℓ)-MDS code is said to be minimum storage regenerating (MSR) if every code symbol c m , m ∈ [n], can be repaired with linear repair functions that download ℓ/(n − k) field elements from each of the other n − 1 code symbols. Let C ⊆ (F ℓ ) n be an (n, k, ℓ)-MSR code, with redundancy r = n − k. The MDS property implies that any subset of k codeword symbols determines the whole codeword. We view the first k symbols as the "systematic" ones, with r parity check symbols computed from them, where we remind that when we say code symbol we mean a vector in F ℓ . So we can assume that there are invertible matrices C i,j ∈ F^{ℓ×ℓ} for i ∈ [r] and j ∈ [k] such that for c = (c 1 , c 2 , . . . , c n ) ∈ C, we have
c_{k+i} = ∑_{j=1}^{k} C_{i,j} c_j .
Suppose we want to repair a systematic node c m for m ∈ [k] with optimal repair bandwidth, by receiving from each of the remaining n − 1 nodes, ℓ/r F-linear combinations of the information they stored. This means that there are repair matrices S 1,m , . . . , S r,m ∈ F ℓ/r×ℓ , such that parity node k + i sends the linear combination
S_{i,m} c_{k+i} = S_{i,m} ∑_{j=1}^{k} C_{i,j} c_j    (2)
Therefore, the information about c m that is sent to it by c k+i is S i,m C i,m c m . Since the k systematic nodes are independent of each other, the only way to recover c m is by taking a linear combination of S i,m C i,m c m for i ∈ [r] such that the linear combination equals c m for any c m ∈ F ℓ . Therefore, to ensure full regeneration of c m , we must satisfy
rank ( [ S_{1,m} C_{1,m} ; S_{2,m} C_{2,m} ; ⋯ ; S_{r,m} C_{r,m} ] ) = ℓ
Since each S i,m C i,m has ℓ/r rows, the above happens if and only if
∑_{i=1}^{r} R(S_{i,m} C_{i,m}) = F^ℓ    (3)
where R(M ) denotes the row-span of a matrix M .
Cancelling interference of other systematic symbols
Now, for every other systematic node m′ ∈ [k] \ {m}, the parity nodes send the following linear combinations of c_{m′} :
[ S_{1,m} C_{1,m′} ; S_{2,m} C_{2,m′} ; ⋯ ; S_{r,m} C_{r,m′} ] c_{m′}    (4)
In order to cancel this from the linear combinations (2) received from the parity nodes, the systematic node m ′ has to send the linear combinations (4) about its contents. To achieve optimal repair bandwidth of at most ℓ/r symbols from every node, this imposes the requirement
rank ( [ S_{1,m} C_{1,m′} ; S_{2,m} C_{2,m′} ; ⋯ ; S_{r,m} C_{r,m′} ] ) ≤ ℓ/r
However since C i,m ′ is invertible, and S i,m has full row rank, rank(S i,m C i,m ′ ) = ℓ/r for all i ∈ [r]. Combining this fact with the rank inequality above, this implies
R(S_{1,m} C_{1,m′}) = · · · = R(S_{r,m} C_{r,m′})    (5)
for every m ≠ m′ ∈ [k], where R(M ) is the row-span of a matrix M .
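To see requirements (3) and (5) in a concrete instance, the sketch below casts the earlier EVENODD-style example into this notation (ℓ = 2, r = 2, over GF(2)); the matrices are read off from that example's repair scheme, so this is an illustration of ours rather than a construction from the paper.

```python
import itertools

def matmul(A, B):  # matrix product over GF(2)
    return tuple(tuple(sum(A[i][t] * B[t][j] for t in range(len(B))) % 2
                       for j in range(len(B[0])))
                 for i in range(len(A)))

def row_span(M):   # all GF(2) linear combinations of the rows of M
    span = set()
    for coeffs in itertools.product((0, 1), repeat=len(M)):
        v = tuple(sum(c * M[i][j] for i, c in enumerate(coeffs)) % 2
                  for j in range(len(M[0])))
        span.add(v)
    return span

I2 = ((1, 0), (0, 1))
# Parity equations of the example: c3 = c1 + c2 and c4 = C[2,1] c1 + c2.
C = {(1, 1): I2, (1, 2): I2,
     (2, 1): ((0, 1), (1, 1)), (2, 2): I2}
# S[i, m]: the row that parity node k + i applies when repairing node m.
S = {(1, 1): ((1, 0),), (2, 1): ((1, 0),),
     (1, 2): ((0, 1),), (2, 2): ((1, 0),)}

for m, other in [(1, 2), (2, 1)]:
    stacked = matmul(S[1, m], C[1, m]) + matmul(S[2, m], C[2, m])
    assert len(row_span(stacked)) == 4                        # condition (3)
    assert row_span(matmul(S[1, m], C[1, other])) == \
           row_span(matmul(S[2, m], C[2, other]))             # condition (5)
print("conditions (3) and (5) hold in this toy example")
```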
Constant repair matrices and casting the problem in terms of subspaces
We now make an important simplification, which allows us to assume that the matrices S i,m above depend only on the node m being repaired, but not on the helping parity node i. That is, S m = S i,m for all i ∈ [r]. We say that repair with this restriction possesses constant repair matrices. It turns out that one can impose this restriction with essentially no loss in parameters -by Theorem 2 of [22], if there is an (n, k, ℓ)-MSR code then there is also an (n − 1, k − 1, ℓ)-MSR code with constant repair matrices.
This allows us to cast the requirements (3) and (5) in terms of a nice property about subspaces and associated invertible maps, which we abstract below. This property was shown to be intimately tied to MSR codes in [24,22]. Definition 2 (MSR subspace family). For integers ℓ, r with r|ℓ and a field F, a collection of subspaces H 1 , . . . , H k of F ℓ of dimension ℓ/r each is said to be an (ℓ, r) F -MSR subspace family if there exist invertible linear maps Φ i,j on F ℓ , i ∈ {1, 2, . . . , k} and j ∈ {1, 2, . . . , r − 1} such that for every i ∈ [k], the following holds:
H_i ⊕ ( ⊕_{j=1}^{r−1} Φ_{i,j}(H_i) ) = F^ℓ    (6)
Φ_{i′,j}(H_i) = H_i for every j ∈ [r − 1] and i′ ≠ i    (7)
Now, we recall the argument that if we have an (n, k, ℓ)-MSR code with constant repair matrices, then that also yields a family of subspaces and maps with the above properties. Indeed, we can take H m , m ∈ [k], to be R(S m ), and Φ m,j , j ∈ [r − 1], is the invertible linear transformation mapping x ∈ F ℓ , viewed as a row vector, to x C_{j+1,m} C_{1,m}^{−1} . It is clear that Property (6) follows from (3), and Property (7) follows from (5). Together with the loss of one dimension in the transformation [22] to an MSR code with constant repair subspaces, we can conclude the following connection between MSR codes and the very structured set of subspaces and maps of Definition 2. Proposition 2. If there is an (n, k, ℓ)-MSR code with redundancy r = n − k, then there is an (ℓ, r) F -MSR subspace family of size k − 1. For the reverse direction, the MSR subspace family can take care of the node repair, but one still needs to ensure the MDS property. This approach was taken in [24], based on a construction of an (ℓ, r) F -MSR subspace family of size (r + 1) log_r ℓ. For completeness, we present another construction of an MSR subspace family in Appendix A. The subspaces in our construction are identical to [24] but we pick the linear maps differently, using just two distinct eigenvalues. As a result, our construction works over any field with more than two elements. In comparison, the approach in [24] used k r−1 ℓ/r distinct eigenvalues, and thus required a field that is bigger than this bound. It is an interesting question to see if the MDS property can be incorporated into our construction to give MSR codes with sub-packetization r^{k/(r+1)} over smaller fields.
Limitation of MSR subspace families
In this section, we state and prove the following strong upper bound on the size of an MSR family of subspaces, showing that the construction claimed in Theorem 8 is not too far from the best possible. Theorem 3. Let H 1 , H 2 , . . . , H k be an (ℓ, r) F -MSR subspace family. Then k ≤ (2r^2/(r − 1)) ln ℓ. This upper bound together with Proposition 2 immediately implies our main result, Theorem 1. In the rest of the section, we prove the above theorem. Let H 1 , H 2 , . . . , H k be the subspaces in an (ℓ, r) F -MSR subspace family with associated invertible linear maps Φ i,j where i ∈ [k] and j ∈ [r − 1]. Note that these linear maps are in some sense statements about the structure of the spaces H 1 , H 2 , . . . , H k . They dictate the way the subspaces can interact with each other, thereby giving rigidity to the way they are structured.
The major insight and crux of the proof is the following definition on collections of subspaces. This definition is somewhat inspired by Galois Theory, in that we are looking at the space of linear maps on the vector space F ℓ that fix all the subspaces in question.
Definition 3.
In the vector space L(F ℓ , F ℓ ) of all linear maps from F ℓ to F ℓ , define the subspace
F(A 1 → B 1 , . . . , A s → B s ) := {ψ ∈ L(F ℓ , F ℓ ) | ψ(A i ) ⊆ B i ∀i ∈ {1, . . . , s}} for arbitrary subspaces A i , B i of F ℓ . Define the value I(A 1 → B 1 , . . . , A s → B s ) := dim(F(A 1 → B 1 , . . . , A s → B s ))
When A i = B i for each i, we adopt the shorthand notation F(A 1 , . . . , A s ) and I(A 1 , . . . , A s ) to denote the above quantities. We will also use the mixed notation F(A 1 , . . . , A s−1 , A s → B s ) to denote F(A 1 → A 1 , . . . , A s → B s ) and likewise for I(A 1 , . . . , A s−1 , A s → B s ).
Thus I(A 1 , . . . , A s ) is the dimension of the space of linear maps that map each A i within itself. We use the notation I() to suggest such an invariance. The key idea will be to cleverly exploit the invertible maps Φ i,j associated with each H i to argue that the dimension I(H 1 , H 2 , . . . , H t ) shrinks by a constant factor whenever we add in an H t+1 into the collection. Specifically, we will show that the dimension shrinks at least by a factor of (r^2 − r + 1)/r^2 for each newly added H t+1 . Because the identity map is always in F (H 1 , H 2 , . . . , H k ), the dimension I (H 1 , H 2 , . . . , H k ) is at least 1. As the ambient space of linear maps from F^ℓ → F^ℓ has dimension ℓ^2 , this leads to an O(r log ℓ) upper bound on k. We begin with the following lemma.
Lemma 4. Let U_1 , U_2 , . . . , U_s ⊆ F^p , s ≥ 2, be arbitrary subspaces such that ∩_{i=1}^{s} U_i = {0}. Then the following inequality holds:
∑_{i=1}^{s} dim(U_i) ≤ (s − 1) dim (U_1 + . . . + U_s ) .
Proof. We proceed by inducting on s. Indeed, when s = 2, we have from the Principle of Inclusion and Exclusion (PIE)
dim(U 1 ) + dim(U 2 ) = dim(U 1 + U 2 ) + dim(U 1 ∩ U 2 ) = dim(U 1 + U 2 )
And thus the base case holds. Now, if the inequality holds when s = p, then we have via the Principle of Inclusion and Exclusion
∑_{i=1}^{p+1} dim(U_i) = dim(U_1 + U_2) + dim(U_1 ∩ U_2) + ∑_{i=3}^{p+1} dim(U_i)    (8)
By the induction hypothesis, we deduce that Equation (8) is at most
dim(U_1 + U_2) + (p − 1) dim((U_1 ∩ U_2) + · · · + U_{p+1})    (9)
And Equation (9) is at most p dim(U_1 + U_2 + · · · + U_{p+1})    (10)
By combining Equations (8), (9), and (10), we deduce that the inequality also holds when s = p + 1. Since the base case s = 2 holds, we therefore conclude that the inequality holds for all integers s ≥ 2.
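As a quick sanity check of Lemma 4 (an illustration of ours, over the reals rather than a finite field), one can pick random subspaces with trivial common intersection and compare dimensions via matrix ranks:

```python
import numpy as np

# Random subspaces U_i of R^p, given by column spans, whose common
# intersection is generically {0}; check the inequality of Lemma 4.
rng = np.random.default_rng(0)
p, dims = 10, [4, 5, 6]

bases = [rng.standard_normal((p, d)) for d in dims]
dim_each = [np.linalg.matrix_rank(B) for B in bases]
dim_sum = np.linalg.matrix_rank(np.hstack(bases))   # dim(U_1 + ... + U_s)

s = len(dims)
assert sum(dim_each) <= (s - 1) * dim_sum
print(sum(dim_each), "<=", (s - 1) * dim_sum)
```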
Next, we prove an identity for MSR subspace families that will come in handy. For the sake of brevity, we use the shorthands H a := {H 1 , . . . , H a } and Φ a,0 to denote the identity map.
Lemma 5. For every t ∈ [k], i ∈ {0, 1, . . . , r − 1}, and s ∈ {0, 1, . . . , r − 1}, we have
∑_{j=0}^{s} I(H_{t−1} , Φ_{t,i}(H_t) → Φ_{t,j}(H_t)) ≤ s · I(H_{t−1} , H_t → 0) + I(H_{t−1} , Φ_{t,i}(H_t) → ⊕_{j=0}^{s} Φ_{t,j}(H_t))    (11)
Proof. We proceed by inducting on s. The base case when s = 0 is clear as the right hand side simplifies to the left hand side. Now, if Equation (11) holds when s = p and p < r − 1, then we have via the Principle of Inclusion and Exclusion (PIE) and Equation (6)
∑_{j=0}^{p+1} I(H_{t−1} , Φ_{t,i}(H_t) → Φ_{t,j}(H_t))    (12)
By the induction hypothesis, we deduce that Equation (12) is at most
p · I(H_{t−1} , H_t → 0) + I(H_{t−1} , Φ_{t,i}(H_t) → ⊕_{j=0}^{p} Φ_{t,j}(H_t)) + I(H_{t−1} , Φ_{t,i}(H_t) → Φ_{t,p+1}(H_t))    (13)
By applying the Principle of Inclusion and Exclusion and Equation 6, we deduce that Equation (13) is at most
p · I(H_{t−1} , H_t → 0) + I(H_{t−1} , Φ_{t,i}(H_t) → 0) + I(H_{t−1} , Φ_{t,i}(H_t) → ⊕_{j=0}^{p+1} Φ_{t,j}(H_t))    (14)
And Equation (14) is equal to
(p + 1) · I(H_{t−1} , H_t → 0) + I(H_{t−1} , Φ_{t,i}(H_t) → ⊕_{j=0}^{p+1} Φ_{t,j}(H_t))    (15)
And so combining Equations (12), (13), (14), and (15), we deduce that Equation (11) also holds when s = p + 1. Since the base case s = 0 holds, we therefore conclude that the inequality holds for all s ∈ {0, 1, . . . , r − 1}.
Following Lemma 5 and Equation (6), we deduce when s = r − 1 the following corollary.
Corollary 6. For every t ∈ [k] and i ∈ {0, 1, . . . , r − 1},
∑_{j=0}^{r−1} I(H_{t−1} , Φ_{t,i}(H_t) → Φ_{t,j}(H_t)) ≤ (r − 1) · I(H_{t−1} , H_t → 0) + I(H_{t−1})
We are now ready to establish the key iterative step, showing geometric decay of the dimension I(H 1 , . . . , H t ) in t.
Lemma 7. For every t ∈ [k], we have
I(H_{t−1} , H_t) ≤ ((r^2 − r + 1)/r^2) · I(H_{t−1}) .    (16)
Proof. Recall that by the property of an (ℓ, r) F -MSR subspace family, the maps Φ t,j , j ∈ {0, 1, . . . , r − 1}, leave H 1 , . . . , H t−1 invariant. Using this it follows that
I(H_{t−1} , H_t) = I(H_{t−1} , Φ_{t,i}(H_t) → Φ_{t,j}(H_t)) for each i, j ∈ {0, 1, . . . , r − 1}, since we have an isomorphism F(H_{t−1} , H_t) → F(H_{t−1} , Φ_{t,i}(H_t) → Φ_{t,j}(H_t)) given by ψ ↦ Φ_{t,j} ∘ ψ ∘ Φ_{t,i}^{−1} . Thus we have
r^2 · I(H_{t−1} , H_t) = ∑_{i=0}^{r−1} ∑_{j=0}^{r−1} I(H_{t−1} , Φ_{t,i}(H_t) → Φ_{t,j}(H_t)) .    (17)
Notice that the inner sum is the same as the left hand side in Corollary 6. Thus we are able to apply Corollary 6 on Equation (17) to find that
∑_{i=0}^{r−1} ∑_{j=0}^{r−1} I(H_{t−1} , Φ_{t,i}(H_t) → Φ_{t,j}(H_t)) ≤ ∑_{i=0}^{r−1} [ (r − 1) · I(H_{t−1} , Φ_{t,i}(H_t) → 0) + I(H_{t−1}) ] = r · I(H_{t−1}) + (r − 1) ∑_{i=0}^{r−1} I(H_{t−1} , Φ_{t,i}(H_t) → 0) .    (18)
Now we observe that the only linear transformation of F ℓ that maps Φ t,i (H t ) → 0 for all i ∈ {0, 1, . . . , r − 1} simultaneously is the identically 0 map. This is because ∑_{j=0}^{r−1} Φ_{t,j}(H_t) = F^ℓ from Equation (6). Thus we are in a situation where Lemma 4 applies, and we have
r · I(H_{t−1}) + (r − 1) ∑_{i=0}^{r−1} I(H_{t−1} , Φ_{t,i}(H_t) → 0) ≤ r · I(H_{t−1}) + (r − 1) · (r − 1) · I(H_{t−1}) = (r^2 − r + 1) · I(H_{t−1})    (19)
Combining Equations (17), (18), and (19), we conclude Equation (16) as desired.
We are now ready to finish off the proof of our claimed upper bound on the size k of an (ℓ, r) F -MSR family.
Proof of Theorem 3. Since the identity map belongs to the space F(H 1 , . . . , H k ), by applying Lemma 7 inductively on H 1 , H 2 , . . . , H k , we obtain the inequality
1 ≤ I(H_1 , . . . , H_k) ≤ ( (r^2 − r + 1)/r^2 )^k · ℓ^2 ,
from which we find that
k ≤ 2 ln ℓ / ln( r^2/(r^2 − r + 1) ) ≤ 2 ln ℓ / ( (r − 1)/r^2 ) = (2r^2/(r − 1)) ln ℓ ,
where the second inequality follows because ln(1 + x) ≥ x/(1 + x) for all x > −1. We thus have the claimed upper bound.
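In numbers (an illustrative sketch of ours), the bound of Theorem 3 sits not far above the size (r + 1) log_r ℓ of the construction of Theorem 8 / [24]:

```python
import math

# Maximum size k of an (l, r)-MSR subspace family allowed by Theorem 3,
# next to the size (r + 1) * log_r(l) achieved by the construction.
def k_upper(l, r):
    return 2 * math.log(l) / math.log(r * r / (r * r - r + 1))

for r, m in [(2, 10), (3, 8), (4, 6)]:
    l = r ** m
    print(f"r={r}, l=r^{m}:  bound k <= {k_upper(l, r):7.1f}   "
          f"construction k = {(r + 1) * m}")
```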
A Proof of Theorem 8
In this section, we state and prove an alternate construction of an MSR subspace family of size (r + 1) log_r ℓ. The first construction of an (ℓ, r) F -MSR subspace family of size (r + 1) log_r ℓ that also satisfied the MDS property was shown in [24] for fields of size more than k r−1 ℓ/r elements. Without the MDS property, the field size needed to be more than r elements to show that the construction satisfied the node repair property.
Our construction uses subspaces that are identical to the ones in [24], but we choose different linear maps that require only two distinct eigenvalues. As a result, our construction works over all fields with more than two elements. It remains a very interesting question whether the MDS property can be additionally incorporated into our construction to yield MSR codes with sub-packetization r^{k/(r+1)} over smaller fields.
Theorem 8. For |F| > 2 and r ≥ 2, there exists an (ℓ = r^m , r) F -MSR subspace family of (r + 1)m = (r + 1) log_r(ℓ) subspaces.
In the rest of the section, we will prove the theorem above.
To give a general view of our construction, we first shift our view of the ambient space F^ℓ = F^{r^m} to (F^r)^{⊗m} , vectors that consist of m tensored vectors in F^r . We then consider a collection of vectors T := {v_1 , v_2 , . . . , v_r , v_{r+1}}, situated in F^r , such that any r of them form a basis of F^r . The subspace A_{k,i} will be the span of all tensored vectors whose k'th position among the m tensored vectors is the vector v_i .
The r − 1 associated linear maps Φ_{(k,i),1} , . . . , Φ_{(k,i),r−1} of the subspace A_{k,i} will simply focus on transforming the k'th position of each vector while retaining all remaining positions. Specifically, on the k'th position, they act on the vectors in T \ {v_i}. The linear map Φ_{(k,i),t} will scale v_{i+t} by a factor λ ≠ 1 while all other vectors in T \ {v_i} will be fixed, where the indices are taken modulo r + 1. That way, everything in T \ {v_i} will stay almost the same, while v_i along with its r − 1 images under these maps will form a basis for F^r in the k'th position.
Proof. Let ℓ = r^m , and let V = (F^r)^{⊗m} ≃ F^ℓ be the ambient space. Consider a set of vectors {v_1 , v_2 , . . . , v_r , v_{r+1}} ⊂ F^r for which the first r form a basis of F^r and satisfy the equation
v_1 + v_2 + . . . + v_r + v_{r+1} = 0    (A)
For k ∈ [m] and i ∈ [r + 1], we define our (r + 1)m subspaces to be
A_{k,i} := span( v_{i_1} ⊗ · · · ⊗ v_{i_m} | i_j ∈ [r + 1], i_k = i )
which is a subspace of V . Observe that while the k'th position is fixed for any vector in A_{k,i} , the remaining m − 1 positions each range over all of F^r . Through this observation, we see that dim(A_{k,i}) = r^{m−1} = ℓ/r.
To properly define the associated linear maps of the subspace family, it suffices to specify their action on the basis
S_i := { v_{i_1} ⊗ · · · ⊗ v_{i_m} | i_j ∈ [r + 1] \ {i} } of V .
Since |F| > 2, we can fix a constant λ ∈ F with λ ∉ {0, 1}, which we will use as an eigenvalue across all (r − 1)(r + 1)m linear maps. For each t ∈ [r − 1], the linear map Φ_{(k,i),t} will scale all vectors in S_i whose k'th position is v_{i+t} by a factor λ and fix all remaining vectors in S_i , where indices are taken modulo r + 1. Namely, for i_k = i + t,
Φ_{(k,i),t} : v_{i_1} ⊗ · · · ⊗ v_{i_k} ⊗ · · · ⊗ v_{i_m} ↦ v_{i_1} ⊗ · · · ⊗ (λ v_{i_k}) ⊗ · · · ⊗ v_{i_m} ,
and for i_k ∈ [r + 1] \ {i + t, i},
Φ_{(k,i),t} : v_{i_1} ⊗ · · · ⊗ v_{i_k} ⊗ · · · ⊗ v_{i_m} ↦ v_{i_1} ⊗ · · · ⊗ v_{i_k} ⊗ · · · ⊗ v_{i_m} .
Observe that all the vectors in the basis S i are scaled by either 1 or λ, which means that the image Φ (k,i),t (S i ) is also a basis for V . This tells us that Φ (k,i),t is an invertible linear map. It now remains to show Properties 6 and 7 hold for our given subspaces and linear maps.
To show Property (6), we can use Equation (A) to rewrite v_i as v_i = − ∑_{j∈[r+1]\{i}} v_j . This shows us that when the k'th position of a vector is v_i , then Φ_{(k,i),t} will map it as
v_{i_1} ⊗ · · · ⊗ v_i ⊗ · · · ⊗ v_{i_m} ↦ v_{i_1} ⊗ · · · ⊗ (v_i − (λ − 1) v_{i+t}) ⊗ · · · ⊗ v_{i_m} .
Since λ ≠ 1, the set {v_i , v_i − (λ − 1)v_{i+1} , . . . , v_i − (λ − 1)v_{i+r−1}} forms a basis for F^r . Thus for a vector v = v_{i_1} ⊗ · · · ⊗ v_{i_{k−1}} ⊗ v_i ⊗ v_{i_{k+1}} ⊗ · · · ⊗ v_{i_m} , the vectors {v, Φ_{(k,i),1}(v), . . . , Φ_{(k,i),r−1}(v)} span all of F^r in the k'th position. Because we are free to choose any vector in all remaining positions, these vectors, taken over all such v, together span all of V . That is, we find that
A_{k,i} ⊕ ( ⊕_{t=1}^{r−1} Φ_{(k,i),t}(A_{k,i}) ) = F^ℓ ,
which shows Property (6).
To show (7), we start by breaking the subspace A k ′ ,i ′ into two possibilities:
1. For the case when k′ ≠ k, the subspace A_{k′,i′} remains invariant under each Φ_{(k,i),t} as they only linearly transform the k'th position while retaining all other positions.
2. For the case when k′ = k and i′ ≠ i, the subspace A_{k,i′} is an eigenspace for Φ_{(k,i),t} . Namely, when i′ ≠ i + t, A_{k,i′} is the eigenspace of eigenvalue 1. When i′ = i + t, the eigenvalue is instead λ.
This shows that (7) also holds.
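As a further check on this construction, the numerical sketch below (ours, run over the reals with λ = 2 for the toy case r = 2, m = 2, so ℓ = 4) builds the subspaces A_{k,i} and the maps Φ_{(k,i),t} as Kronecker products and verifies Properties (6) and (7) by rank computations.

```python
import itertools, functools
import numpy as np

r, m, lam = 2, 2, 2.0                  # toy parameters; any lam not in {0, 1} works
l = r ** m
v = {j + 1: np.eye(r)[j] for j in range(r)}          # v_1, ..., v_r = standard basis
v[r + 1] = -sum(v[j] for j in range(1, r + 1))        # so v_1 + ... + v_{r+1} = 0

def idx(i, t):                         # i + t, with indices taken modulo r + 1
    return (i - 1 + t) % (r + 1) + 1

def span_A(k, i):                      # spanning vectors of A_{k,i}, as columns
    cols = []
    for choice in itertools.product(range(1, r + 2), repeat=m):
        if choice[k - 1] != i:
            continue
        cols.append(functools.reduce(np.kron, [v[c] for c in choice]))
    return np.column_stack(cols)

def Phi(k, i, t):                      # scale v_{i+t} by lam in position k, fix the rest
    others = [j for j in range(1, r + 2) if j not in (i, idx(i, t))]
    B = np.column_stack([v[idx(i, t)]] + [v[j] for j in others])
    Bimg = np.column_stack([lam * v[idx(i, t)]] + [v[j] for j in others])
    M = Bimg @ np.linalg.inv(B)
    factors = [np.eye(r)] * m
    factors[k - 1] = M
    return functools.reduce(np.kron, factors)

pairs = [(k, i) for k in range(1, m + 1) for i in range(1, r + 2)]
rank = np.linalg.matrix_rank
for (k, i) in pairs:
    A = span_A(k, i)
    assert rank(A) == l // r
    # Property (6): A_{k,i} together with its r-1 images spans all of F^l.
    images = [Phi(k, i, t) @ A for t in range(1, r)]
    assert rank(np.hstack([A] + images)) == l
    # Property (7): every other subspace is left invariant by Phi_{(k,i),t}.
    for (k2, i2) in pairs:
        if (k2, i2) == (k, i):
            continue
        A2 = span_A(k2, i2)
        for t in range(1, r):
            assert rank(np.hstack([A2, Phi(k, i, t) @ A2])) == l // r
print(f"verified an (l={l}, r={r}) MSR subspace family of size {len(pairs)}")
```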
B Proof of the Cutset bound
Proof. Consider an (n, k, ℓ)-MDS vector code that stores a file M of size kℓ in storage nodes s 1 , s 2 , . . . , s n . The MDS vector code will repair a storage node s h by making every other storage node s i communicate β i,h bits to s h . From the MDS property, we know that any collection C ⊆ [n] \ {h} of k − 1 nodes {s i } i∈C along with s h is able to reconstruct our original file M.
Thus the collective information of these k storage nodes is at least |M| = kℓ, implying the inequality
∑_{i∈C} |s_i| + ∑_{i∈[n]\(C∪{h})} β_{i,h} ≥ kℓ .    (20)
Since every storage node stores ℓ bits (|s_i| = ℓ), (20) reduces down to
∑_{i∈[n]\(C∪{h})} β_{i,h} ≥ ℓ .    (21)
Hence (21) implies that any n − k helper storage nodes collectively communicate at least ℓ bits. Thus we find from (21), by summing over all possible collections of n − k helper storage nodes, that
∑_{i∈[n]\{h}} β_{i,h} ≥ ((n − 1)/(n − k)) · ℓ .    (22)
This is the claimed cutset bound. Moreover, to achieve equality in (22), equality must be achieved in (21) for all collections of n − k helper storage nodes. That is possible only when β i,h = ℓ/(n−k) for all i ∈ [n]\{h}. Hence, under optimal repair bandwidth, the total information communicated is ∑_{i∈[n]\{h}} β_{i,h} = (n − 1)ℓ/(n − k) and is only achieved when every helper storage node communicates exactly ℓ/(n − k) bits to storage node s h . | 6,904
1901.05112 | 2910298035 | An @math -vector MDS code is a @math -linear subspace of @math (for some field @math ) of dimension @math , such that any @math (vector) symbols of the codeword suffice to determine the remaining @math (vector) symbols. The length @math of each codeword symbol is called the sub-packetization of the code. Such a code is called minimum storage regenerating (MSR), if any single symbol of a codeword can be recovered by downloading @math field elements (which is known to be the least possible) from each of the other symbols. MSR codes are attractive for use in distributed storage systems, and by now a variety of ingenious constructions of MSR codes are available. However, they all suffer from exponentially large sub-packetization @math . Our main result is an almost tight lower bound showing that for an MSR code, one must have @math . Previously, a lower bound of @math , and a tight lower bound for a restricted class of "optimal access" MSR codes, were known. Our work settles a central open question concerning MSR codes that has received much attention. Further our proof is really short, hinging on one key definition that is somewhat inspired by Galois theory. | A slight relaxation of MSR codes, called @math -MSR codes, where the helper nodes are allowed to transmit a factor @math more than the cutset bound, i.e., @math symbols, was put forth in @cite_19 . They showed that one can construct @math -MSR codes with sub-packetization @math , and that roughly logarithmic sub-packetization is also necessary. | {
"abstract": [
"This paper addresses the problem of constructing maximum distance separable (MDS) codes that enable exact reconstruction (repair) of each code block by downloading a small amount of information from the remaining code blocks. The total amount of information flow from the remaining code blocks during this reconstruction process is referred to as repair bandwidth of the underlying code. Existing constructions of exact-repairable MDS codes with optimal repair bandwidth require working with large subpacketization levels, which restrict their applicability in practice. This paper presents two general approaches to construct exact-repairable MDS codes that aim at significantly reducing the required subpacketization level at the cost of slightly suboptimal repair bandwidth. The first approach provides MDS codes that have repair bandwidth at most twice the optimal repair bandwidth. In addition, these codes also have the smallest possible subpacketization level @math , where @math denotes the number of parity blocks. This approach is then generalized to design codes that have their repair bandwidth approaching the optimal repair bandwidth at the cost of graceful increment in the required subpacketization level. The second approach transforms an MDS code with optimal repair bandwidth and large subpacketization level into a longer MDS code with small subpacketization level and near-optimal repair bandwidth. For a given @math , the codes constructed using this approach have their subpacketization level scaling logarithmically with the code length. In addition, the obtained codes require field size only linear in the code length and ensure load balancing among the intact code blocks in terms of the information downloaded from these blocks during the exact reconstruction of a code block."
],
"cite_N": [
"@cite_19"
],
"mid": [
"2789573885"
]
} | An Exponential Lower Bound on the Sub-Packetization of Minimum Storage Regenerating Codes * | Traditional Maximum Distance Separable (MDS) codes such as Reed-Solomon codes provide the optimal trade-off between redundancy and number of worst-case erasures tolerated. When encoding k symbols of data into an n symbol codeword by an (n, k)-MDS code, the data can be recovered from any set of k out of n codeword symbols, which is clearly the best possible. MDS codes are thus a naturally appealing choice to minimize storage overhead in distributed storage systems (DSS). One can encode data, broken into k pieces, by an (n, k)-MDS code, and distribute the n codeword symbols on n different storage nodes, each holding the symbol corresponding to one codeword position. In the sequel, we use the terms storage node and codeword symbol interchangeably.
A rather common scenario faced by modern large scale DSS is the failure or temporary unavailability of storage nodes. It is of great importance to promptly respond to such failures, by efficient repair/regeneration of the failed node using the content stored in some of the other nodes (which are called "helper" nodes as they assist in the repair). This requirement has spurred a set of fundamentally new and exciting challenges concerning codes for recovery from erasures, with the goal of balancing worst-case fault tolerance against many erasures with very efficient schemes to recover from the much more common scenario of single (or a few) erasures.
There are two measures of repair efficiency that have received a significant amount of attention in the last decade. One concerns locality, where we would like to repair a node locally based on the contents of a small number of other storage nodes. Such locality necessarily compromises the MDS property, and a rich body of work on locally repairable codes (LRCs) studies the best trade-offs possible in this model and constructions achieving those [8,14,20]. The other line of work, which is the subject of this paper, focuses on optimizing the amount of data downloaded from the other nodes. This model allows the helper node to respond with a fraction of its contents. The efficiency measure is the repair bandwidth, which is the total amount of data downloaded from all the helper nodes. Codes in this model are called regenerating codes, and were systematically introduced in the seminal work of Dimakis et al. [6], and have since witnessed an explosive amount of research.
Rather surprisingly, even for some MDS codes, by contacting more helper nodes but downloading fewer symbols from each, one can do much better than the "usual" scheme, which would download the contents of k nodes in full. In general an entire spectrum of trade-offs is possible between storage overhead and repair bandwidth. This includes minimum bandwidth regenerating (MBR) codes with the minimum repair bandwidth of ℓ [16]. At the other end of the spectrum, we have minimum storage regenerating (MSR) codes (defined formally below) which retain the MDS property and thus have optimal redundancy. This work focuses on MSR codes.
Example. We quickly recap the classic example of the EVENODD code [3,7] to illustrate regeneration of a lost symbol in an MDS code with non-trivial bandwidth. This is a (4, 2) MDS code with 4 storage nodes, each storing a vector of two symbols over the binary field. We denote by P 1 , P 2 the two parity nodes.
S_1            S_2            P_1                 P_2
a_1            b_1            a_1 + b_1           a_2 + b_1
a_2            b_2            a_2 + b_2           a_1 + a_2 + b_2
The naive scheme to repair a node would contact any two of the remaining three nodes, and download both bits from each of them, for a total repair bandwidth of 4 bits. However, it turns out that one can get away with downloading just one bit from each of the three other nodes, for a repair bandwidth of 3 bits! If we were to repair the node S 1 , the remaining nodes (S 2 , P 1 , P 2 ) would send (b 1 , a 1 + b 1 , a 2 + b 1 ), respectively. If we were to repair the node S 2 , the remaining nodes (S 1 , P 1 , P 2 ) would send (a 2 , a 2 + b 2 , a 2 + b 1 ), respectively. If we were to repair the node P 1 , the remaining nodes (S 1 , S 2 , P 2 ) would send (a 1 , b 1 , a 1 + a 2 + b 2 ), respectively. If we were to repair the node P 2 , the remaining nodes (S 1 , S 2 , P 1 ) would send (a 2 , b 1 , (a 1 + b 1 ) + (a 2 + b 2 )), respectively. Note that in the last case, the helper node P 1 sends a linear combination of its symbols-this is in general a powerful ability that we allow in MSR codes.
Vector codes and sub-packetization. The above example shows that when the code is an (n, k) vector MDS code, where each codeword symbol itself is a vector, say in F ℓ for some field F, then one can hope to achieve repair bandwidth smaller than the naive kℓ. The length of the vector ℓ stored at each node is called the sub-packetization (since this is the granularity into which a single codeword symbol needs to be divided).
MSR codes.
A natural question is how small a repair bandwidth one can achieve with MDS codes. The so-called cutset bound [6] dictates that one must download at least (n − 1)ℓ/(n − k) symbols of F from the remaining nodes to recover any single node. Further, in order to attain this optimal repair bandwidth bound, each of the (n − 1) nodes must respond with ℓ/(n − k) field elements. Vector MDS codes which admit repair schemes meeting the cutset bound (for repair of every node) are called minimum storage regenerating (MSR) codes (for the formal description, see Definition 1). MSR codes, and specifically their sub-packetization, are the focus of this paper.
Large sub-packetization: problematic and inherent. While there are many constructions of MSR codes by now, they all have large sub-packetization, which is at least r^{k/r}. For the setting of most interest, when we incur a small redundancy r in exchange for repair of information, this is very large, and in particular exp(Ω(k)) when r = O(1). A small sub-packetization is important for a number of reasons, as explained in some detail in the introduction of [17]. A large subpacketization limits the number of storage nodes (for example if ℓ ≥ exp(Ω(n)), then n = O(log ℓ), where ℓ is the storage capacity of each node), and in general leads to a reduced design space in terms of various systems parameters. A larger sub-packetization also makes management of meta-data, such as description of the code and the repair mechanisms for different nodes, more difficult. For a given storage capacity, a smaller sub-packetization allows one to distribute codewords corresponding to independently coded files among multiple nodes, which allows for distributing the load of providing information for the repair of a failed node among a larger number of nodes.
It has been known that somewhat large sub-packetization is inherent for MSR codes (we will describe the relevant prior results in the next section). In this work, we improve this lower bound to exponential, showing that unfortunately the exponential sub-packetization of known constructions is inherent. Our main result is the following.
Theorem 1. Suppose an (n, k)-vector MDS code with redundancy r = n − k ≥ 2 is minimum storage regenerating (MSR). Then its sub-packetization ℓ must satisfy
ℓ ≥ ( r^2 / (r^2 − r + 1) )^{(k−1)/2} ≥ e^{(k−1)(r−1)/(2r^2)} .
Our lower bound almost matches the sub-packetization of r^{O(k/r)} achieved by the best known constructions. Improving the base of the exponent in our lower bound to r would make it even closer to the upper bounds, though when r is small, which is the primary setting of interest in codes for distributed storage, this difference is not that substantial. We remark that our theorem leaves out the case when r = 1, which is known to have a sub-packetization of ℓ = 1 [9].
A few words about our proof. Previous work [22] has shown that an (n, k) MSR code with sub-packetization ℓ implies a family of (k − 1) ℓ/r-dimensional subspaces H i of F ℓ each of which has an associated collection of (r − 1) linear maps obeying some strong properties. For instance, in the case r = 2, there is an invertible map φ i associated with H i for each i which leaves all subspaces H j , j ≠ i, invariant, and maps H i itself to a disjoint space (i.e., φ i (H i ) ∩ H i = {0}). The task of showing a lower bound on ℓ then reduces to the linear-algebraic challenge of showing an upper bound on the size of such a family of subspaces and linear transformations, which we call an MSR subspace family (Definition 2). The authors of [10] showed an upper bound O(r log^2 ℓ) on the size of MSR subspace families via a nifty partitioning and linear independence argument.
We follow a different approach by showing that the number of linear maps that fix all subspaces in an MSR family decreases sharply as the number of subspaces increases. Specifically, we show that the dimension of the linear space of such linear maps decreases exponentially in the number of subspaces in the MSR family. This enables us to prove an O(r log ℓ) upper bound. This bound is asymptotically tight (up to an O(log r) factor), as there is a construction of an MSR subspace family of size (r + 1) log_r ℓ [24]. We also present an alternate construction in Appendix A, which works for all fields with more than 2 elements, compared to the large field size (of at least ≈ r r ℓ) required in [24].
We now proceed to situate our work in the context of prior work, both constructions and lower bounds, for MSR codes.
Preliminaries
We will now define MSR codes more formally. We begin by defining vector codes. Let F be a field, and n, ℓ be positive integers. For a positive integer b, we denote [b] = {1, 2, . . . , b}. A vector code C of block length n and sub-packetization ℓ is an F-linear subspace of (F ℓ ) n . We can express a codeword of C as c = (c 1 , c 2 , . . . , c n ), where for i ∈ [n], the block c i = (c i,1 , . . . , c i,ℓ ) ∈ F ℓ denotes the length ℓ vector corresponding to the i'th code symbol c i .
Let k be an integer, with 1 ≤ k ≤ n. If the dimension of C, as an F-vector space, is kℓ, we say that C is an (n, k, ℓ) F -vector code. The codewords of an (n, k, ℓ) F -vector code are in one-to-one correspondence with vectors in (F ℓ ) k , consisting of k blocks of ℓ field elements each.
Such a code is said to be Maximum Distance Separable (MDS), and called an (n, k, ℓ)-MDS code (over the field F), if every subset of k code symbols c i 1 , c i 2 , . . . , c i k is an information set for the code, i.e., knowing these symbols determines the remaining n − k code symbols and thus the full codeword. An MDS code thus offers the optimal erasure correction property: the information can be recovered from any set of k code symbols, thus tolerating the maximum possible number n − k of worst-case erasures.
An (n, k, ℓ)-MDS code can be used in distributed storage systems as follows. Data viewed as kℓ symbols over F is encoded using the code resulting in n vectors in F ℓ , which are stored in n storage nodes. Downloading the full contents from any subset of these k nodes (a total of kℓ symbols from F) suffices to reconstruct the original data in entirety. Motivated by the challenge of efficient regeneration of a failed storage node, which is a fairly typical occurrence in large scale distributed storage systems, the repair problem aims to recover any single code symbol c i by downloading fewer than kℓ field elements. This is impossible if one only downloads contents from k nodes, but becomes feasible if one is allowed to contact h > k helper nodes and receive fewer than ℓ field elements from each.
Here we focus our attention on repairing only the first k code symbols, which we view as the information symbols. This is called "systematic node repair" as opposed to the more general "all node repair" where the goal is to repair all n codeword symbols. We will also only consider the case h = n − 1, when all the remaining nodes are available as helper nodes. Since our focus is on a lower bound on the sub-packetization ℓ, this only makes our result stronger, and keeps the description somewhat simpler. We note that the currently best known constructions allow for all-node repair with optimal bandwidth from any subset of h helper nodes.
Suppose we want to repair the m'th code symbol for some m ∈ [k]. We download from the i'th code symbol, i ≠ m, a function h i,m (c i ) of its contents, where h i,m : F^ℓ → F^{β_{i,m}} is the repair function. Since C is linear, it is natural for h i,m to exploit this linearity; therefore, throughout this paper, we shall assume linear repair of the failed node. That is, h i,m is an F-linear function. Thus, we download from each node certain linear combinations of the ℓ symbols stored at that node. The total repair bandwidth to recover c m is defined to be ∑_{i≠m} β i,m . By the cutset bound for repair of MDS codes [6], this quantity is lower bounded by (n − 1)ℓ/r, where r = n − k is the redundancy of the code. Further, equality can be attained only if β i,m = ℓ/r for all i. That is, we download ℓ/r field elements from each of the remaining nodes. MDS codes achieving such an optimal repair bandwidth are called Minimum Storage Regenerating (MSR) codes, as precisely defined below. Definition 1 (MSR code). An (n, k, ℓ)-MDS code is said to be minimum storage regenerating (MSR) if every code symbol c m , m ∈ [n], can be repaired with linear repair functions that download ℓ/(n − k) field elements from each of the other n − 1 code symbols. Let C ⊆ (F ℓ ) n be an (n, k, ℓ)-MSR code, with redundancy r = n − k. The MDS property implies that any subset of k codeword symbols determines the whole codeword. We view the first k symbols as the "systematic" ones, with r parity check symbols computed from them, where we remind that when we say code symbol we mean a vector in F ℓ . So we can assume that there are invertible matrices C i,j ∈ F^{ℓ×ℓ} for i ∈ [r] and j ∈ [k] such that for c = (c 1 , c 2 , . . . , c n ) ∈ C, we have
c_{k+i} = ∑_{j=1}^{k} C_{i,j} c_j .
Suppose we want to repair a systematic node c m for m ∈ [k] with optimal repair bandwidth, by receiving from each of the remaining n − 1 nodes, ℓ/r F-linear combinations of the information they stored. This means that there are repair matrices S 1,m , . . . , S r,m ∈ F ℓ/r×ℓ , such that parity node k + i sends the linear combination
S_{i,m} c_{k+i} = S_{i,m} ∑_{j=1}^{k} C_{i,j} c_j    (2)
Therefore, the information about c m that is sent to it by c k+i is S i,m C i,m c m . Since the k systematic nodes are independent of each other, the only way to recover c m is by taking a linear combination of S i,m C i,m c m for i ∈ [r] such that the linear combination equals c m for any c m ∈ F ℓ . Therefore, to ensure full regeneration of c m , we must satisfy
rank ( [ S_{1,m} C_{1,m} ; S_{2,m} C_{2,m} ; ⋯ ; S_{r,m} C_{r,m} ] ) = ℓ
Since each S i,m C i,m has ℓ/r rows, the above happens if and only if
∑_{i=1}^{r} R(S_{i,m} C_{i,m}) = F^ℓ    (3)
where R(M ) denotes the row-span of a matrix M .
Cancelling interference of other systematic symbols
Now, for every other systematic node m′ ∈ [k] \ {m}, the parity nodes send the following linear combinations of c_{m′} :
[ S_{1,m} C_{1,m′} ; S_{2,m} C_{2,m′} ; ⋯ ; S_{r,m} C_{r,m′} ] c_{m′}    (4)
In order to cancel this from the linear combinations (2) received from the parity nodes, the systematic node m ′ has to send the linear combinations (4) about its contents. To achieve optimal repair bandwidth of at most ℓ/r symbols from every node, this imposes the requirement
rank ( [ S_{1,m} C_{1,m′} ; S_{2,m} C_{2,m′} ; ⋯ ; S_{r,m} C_{r,m′} ] ) ≤ ℓ/r
However since C i,m ′ is invertible, and S i,m has full row rank, rank(S i,m C i,m ′ ) = ℓ/r for all i ∈ [r]. Combining this fact with the rank inequality above, this implies
R(S_{1,m} C_{1,m′}) = · · · = R(S_{r,m} C_{r,m′})    (5)
for every m ≠ m′ ∈ [k], where R(M ) is the row-span of a matrix M .
Constant repair matrices and casting the problem in terms of subspaces
We now make an important simplification, which allows us to assume that the matrices S i,m above depend only on the node m being repaired, but not on the helping parity node i. That is, S m = S i,m for all i ∈ [r]. We say that repair with this restriction possesses constant repair matrices. It turns out that one can impose this restriction with essentially no loss in parameters -by Theorem 2 of [22], if there is an (n, k, ℓ)-MSR code then there is also an (n − 1, k − 1, ℓ)-MSR code with constant repair matrices.
This allows us to cast the requirements (3) and (5) in terms of a nice property about subspaces and associated invertible maps, which we abstract below. This property was shown to be intimately tied to MSR codes in [24,22]. Definition 2 (MSR subspace family). For integers ℓ, r with r|ℓ and a field F, a collection of subspaces H 1 , . . . , H k of F ℓ of dimension ℓ/r each is said to be an (ℓ, r) F -MSR subspace family if there exist invertible linear maps Φ i,j on F ℓ , i ∈ {1, 2, . . . , k} and j ∈ {1, 2, . . . , r − 1} such that for every i ∈ [k], the following holds:
H_i ⊕ ( ⊕_{j=1}^{r−1} Φ_{i,j}(H_i) ) = F^ℓ    (6)
Φ_{i′,j}(H_i) = H_i for every j ∈ [r − 1] and i′ ≠ i    (7)
Now, we recall the argument that if we have an (n, k, ℓ)-MSR code with constant repair matrices, then that also yields a family of subspaces and maps with the above properties. Indeed, we can take H m , m ∈ [k], to be R(S m ), and Φ m,j , j ∈ [r − 1], is the invertible linear transformation mapping x ∈ F ℓ , viewed as a row vector, to x C_{j+1,m} C_{1,m}^{−1} . It is clear that Property (6) follows from (3), and Property (7) follows from (5). Together with the loss of one dimension in the transformation [22] to an MSR code with constant repair subspaces, we can conclude the following connection between MSR codes and the very structured set of subspaces and maps of Definition 2. Proposition 2. If there is an (n, k, ℓ)-MSR code with redundancy r = n − k, then there is an (ℓ, r) F -MSR subspace family of size k − 1. For the reverse direction, the MSR subspace family can take care of the node repair, but one still needs to ensure the MDS property. This approach was taken in [24], based on a construction of an (ℓ, r) F -MSR subspace family of size (r + 1) log_r ℓ. For completeness, we present another construction of an MSR subspace family in Appendix A. The subspaces in our construction are identical to [24] but we pick the linear maps differently, using just two distinct eigenvalues. As a result, our construction works over any field with more than two elements. In comparison, the approach in [24] used k r−1 ℓ/r distinct eigenvalues, and thus required a field that is bigger than this bound. It is an interesting question to see if the MDS property can be incorporated into our construction to give MSR codes with sub-packetization r^{k/(r+1)} over smaller fields.
Limitation of MSR subspace families
In this section, we state and prove the following strong upper bound on the size of an MSR family of subspaces, showing that the construction claimed in Theorem 8 is not too far from the best possible. Theorem 3. Let H 1 , H 2 , . . . , H k be an (ℓ, r) F -MSR subspace family. Then k ≤ (2r^2/(r − 1)) ln ℓ. This upper bound together with Proposition 2 immediately implies our main result, Theorem 1. In the rest of the section, we prove the above theorem. Let H 1 , H 2 , . . . , H k be the subspaces in an (ℓ, r) F -MSR subspace family with associated invertible linear maps Φ i,j where i ∈ [k] and j ∈ [r − 1]. Note that these linear maps are in some sense statements about the structure of the spaces H 1 , H 2 , . . . , H k . They dictate the way the subspaces can interact with each other, thereby giving rigidity to the way they are structured.
The major insight and crux of the proof is the following definition on collections of subspaces. This definition is somewhat inspired by Galois Theory, in that we are looking at the space of linear maps on the vector space F ℓ that fix all the subspaces in question.
Definition 3.
In the vector space L(F ℓ , F ℓ ) of all linear maps from F ℓ to F ℓ , define the subspace
F(A 1 → B 1 , . . . , A s → B s ) := {ψ ∈ L(F ℓ , F ℓ ) | ψ(A i ) ⊆ B i ∀i ∈ {1, . . . , s}} for arbitrary subspaces A i , B i of F ℓ . Define the value I(A 1 → B 1 , . . . , A s → B s ) := dim(F(A 1 → B 1 , . . . , A s → B s ))
When A i = B i for each i, we adopt the shorthand notation F(A 1 , . . . , A s ) and I(A 1 , . . . , A s ) to denote the above quantities. We will also use the mixed notation F(A 1 , . . . , A s−1 , A s → B s ) to denote F(A 1 → A 1 , . . . , A s → B s ) and likewise for I(A 1 , . . . , A s−1 , A s → B s ).
Thus I(A_1, . . . , A_s) is the dimension of the space of linear maps that map each A_i within itself. We use the notation I(·) to suggest such an invariance. The key idea will be to cleverly exploit the invertible maps Φ_{i,j} associated with each H_i to argue that the dimension I(H_1, H_2, . . . , H_t) shrinks by a constant factor whenever we add an H_{t+1} into the collection. Specifically, we will show that the dimension shrinks at least by a factor of (r² − r + 1)/r² for each newly added H_{t+1}. Because the identity map is always in F(H_1, H_2, . . . , H_k), the dimension I(H_1, H_2, . . . , H_k) is at least 1. As the ambient space of linear maps from F^ℓ to F^ℓ has dimension ℓ², this leads to an O(r log ℓ) upper bound on k. We begin with the following lemma.
Lemma 4. Let U_1, U_2, . . . , U_s ⊆ F^p, with s ≥ 2, be arbitrary subspaces such that ∩_{i=1}^{s} U_i = {0}. Then the following inequality holds:
$$\sum_{i=1}^{s} \dim(U_i) \;\le\; (s-1)\, \dim(U_1 + \cdots + U_s).$$
Proof. We proceed by inducting on s. Indeed, when s = 2, we have from the Principle of Inclusion and Exclusion (PIE)
dim(U 1 ) + dim(U 2 ) = dim(U 1 + U 2 ) + dim(U 1 ∩ U 2 ) = dim(U 1 + U 2 )
And thus the base case holds. Now, if the inequality holds when s = p, then we have via the Principle of Inclusion and Exclusion
p+1 i=1 dim(U i ) = dim(U 1 + U 2 ) + dim(U 1 ∩ U 2 ) + p+1 i=3 dim(U i )(8)
By the induction hypothesis, we deduce that Equation (8) is at most
dim(U 1 + U 2 ) + (p − 1) dim((U 1 ∩ U 2 ) + · · · + U p+1 )(9)
And Equation (9) is at most p dim(U 1 + U 2 + · · · + U p+1 )
By combining Equations (8), (9), and (10), we deduce that the inequality also holds when s = p + 1. Since the base case s = 2 holds, we conclude that the inequality holds for all integers s ≥ 2.
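As a quick sanity check of Lemma 4 (our own illustration, not part of the paper), the following numpy sketch verifies the inequality for three 2-dimensional subspaces of R^4 whose common intersection is trivial; the dimension of a sum of subspaces is computed as the rank of the concatenated basis matrices.

```python
import numpy as np

def dim_sum(subspaces):
    # dimension of U_1 + ... + U_s = rank of the concatenated bases
    return np.linalg.matrix_rank(np.hstack(subspaces))

e = np.eye(4)
U1 = e[:, [0, 1]]                                   # span(e1, e2)
U2 = e[:, [2, 3]]                                   # span(e3, e4)
U3 = np.stack([e[:, 0] + e[:, 2], e[:, 1] + e[:, 3]], axis=1)
subspaces = [U1, U2, U3]

# U1 and U2 already intersect trivially, so the common intersection is {0}
assert dim_sum([U1, U2]) == U1.shape[1] + U2.shape[1]

lhs = sum(U.shape[1] for U in subspaces)            # sum of dimensions = 6
rhs = (len(subspaces) - 1) * dim_sum(subspaces)     # (s - 1) * dim of the sum = 8
assert lhs <= rhs
```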
Next, we prove an inequality for MSR subspace families that will come in handy. For the sake of brevity, we use the shorthand H_a := {H_1, . . . , H_a}, and we let Φ_{a,0} denote the identity map.
Lemma 5. For every t ∈ [k], every i ∈ {0, 1, . . . , r − 1}, and every s ∈ {0, 1, . . . , r − 1},
$$\sum_{j=0}^{s} I\big(H_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)\big) \;\le\; s\, I(H_{t-1}, H_t \to 0) + I\Big(H_{t-1}, \Phi_{t,i}(H_t) \to \bigoplus_{j=0}^{s} \Phi_{t,j}(H_t)\Big). \qquad (11)$$
Proof. We proceed by inducting on s. The base case s = 0 is clear, as the right hand side simplifies to the left hand side. Now suppose Equation (11) holds when s = p with p < r − 1, and consider
$$\sum_{j=0}^{p+1} I\big(H_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)\big). \qquad (12)$$
By the induction hypothesis, we deduce that Equation (12) is at most
$$p\, I(H_{t-1}, H_t \to 0) + I\Big(H_{t-1}, \Phi_{t,i}(H_t) \to \bigoplus_{j=0}^{p} \Phi_{t,j}(H_t)\Big) + I\big(H_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,p+1}(H_t)\big). \qquad (13)$$
By applying the Principle of Inclusion and Exclusion (PIE) together with Equation (6), we deduce that Equation (13) is at most
$$p\, I(H_{t-1}, H_t \to 0) + I\big(H_{t-1}, \Phi_{t,i}(H_t) \to 0\big) + I\Big(H_{t-1}, \Phi_{t,i}(H_t) \to \bigoplus_{j=0}^{p+1} \Phi_{t,j}(H_t)\Big), \qquad (14)$$
and Equation (14) is equal to
$$(p+1)\, I(H_{t-1}, H_t \to 0) + I\Big(H_{t-1}, \Phi_{t,i}(H_t) \to \bigoplus_{j=0}^{p+1} \Phi_{t,j}(H_t)\Big). \qquad (15)$$
And so combining Equations (12), (13), (14), and (15), we deduce that Equation (11) also holds when s = p + 1. Since the base case s = 0 holds, we therefore conclude that the inequality holds for all s ∈ {0, 1, . . . , r − 1}.
Applying Lemma 5 with s = r − 1 and using Equation (6), we deduce the following corollary.
Corollary 6. For every t ∈ [k] and every i ∈ {0, 1, . . . , r − 1},
$$\sum_{j=0}^{r-1} I\big(H_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)\big) \;\le\; (r-1)\, I(H_{t-1}, H_t \to 0) + I(H_{t-1}).$$
We are now ready to establish the key iterative step, showing geometric decay of the dimension I(H_1, . . . , H_t) in t.
Lemma 7. For every t ∈ [k],
$$I(H_{t-1}, H_t) \;\le\; \frac{r^2 - r + 1}{r^2}\, I(H_{t-1}). \qquad (16)$$
Proof. Recall that by the property of an (ℓ, r) F -MSR subspace family, the maps Φ t,j , j ∈ {0, 1, . . . , r − 1}, leave H 1 , . . . , H t−1 invariant. Using this it follows that
I(H_{t-1}, H_t) = I(H_{t-1}, Φ_{t,i}(H_t) → Φ_{t,j}(H_t)) for each i, j ∈ {0, 1, . . . , r − 1}, since we have an isomorphism F(H_{t-1}, H_t) → F(H_{t-1}, Φ_{t,i}(H_t) → Φ_{t,j}(H_t)) given by ψ ↦ Φ_{t,j} ∘ ψ ∘ Φ_{t,i}^{-1}. Thus we have
$$r^2 \cdot I(H_{t-1}, H_t) = \sum_{i=0}^{r-1} \sum_{j=0}^{r-1} I\big(H_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)\big). \qquad (17)$$
Notice that the inner sum is the same as the left hand side in Corollary 6. We can therefore apply Corollary 6 to Equation (17) to find that
$$\sum_{i=0}^{r-1} \sum_{j=0}^{r-1} I\big(H_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)\big) \;\le\; \sum_{i=0}^{r-1} \Big[(r-1)\, I\big(H_{t-1}, \Phi_{t,i}(H_t) \to 0\big) + I(H_{t-1})\Big] = r\, I(H_{t-1}) + (r-1) \sum_{i=0}^{r-1} I\big(H_{t-1}, \Phi_{t,i}(H_t) \to 0\big). \qquad (18)$$
Now we observe that the only linear transformation of F^ℓ that maps Φ_{t,i}(H_t) to 0 for all i ∈ {0, 1, . . . , r − 1} simultaneously is the identically 0 map. This is because ⊕_{j=0}^{r-1} Φ_{t,j}(H_t) = F^ℓ by Equation (6). Thus we are in a situation where Lemma 4 applies, and we have
$$r\, I(H_{t-1}) + (r-1) \sum_{i=0}^{r-1} I\big(H_{t-1}, \Phi_{t,i}(H_t) \to 0\big) \;\le\; r\, I(H_{t-1}) + (r-1)\cdot(r-1)\, I(H_{t-1}) = (r^2 - r + 1)\, I(H_{t-1}). \qquad (19)$$
Combining Equations (17), (18), and (19), we conclude Equation (16) as desired.
We are now ready to finish off the proof of our claimed upper bound on the size k of an (ℓ, r) F -MSR family.
Proof of Theorem 3. Since the identity map belongs to the space F(H_1, . . . , H_k), by applying Lemma 7 inductively on H_1, H_2, . . . , H_k we obtain the inequality
$$1 \;\le\; I(H_1, \ldots, H_k) \;\le\; \left(\frac{r^2 - r + 1}{r^2}\right)^{k} \cdot \ell^2,$$
from which we find that
$$k \;\le\; \frac{2 \ln \ell}{\ln\!\left(\frac{r^2}{r^2 - r + 1}\right)} \;\le\; \frac{2 \ln \ell}{\frac{r-1}{r^2}} = \frac{2r^2}{r-1} \ln \ell,$$
where the second inequality follows because ln(1 + x) ≥ x/(1 + x) for all x > −1. We thus have the claimed upper bound.
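To get a feel for the numbers (an illustration of ours, not a computation from the paper), the sketch below compares the size (r + 1) log_r ℓ of the construction of Theorem 8 with the upper bound 2 ln ℓ / ln(r²/(r² − r + 1)) just proved.

```python
import math

def family_upper_bound(ell, r):
    # k <= 2 ln(ell) / ln(r^2 / (r^2 - r + 1)), from the proof of Theorem 3
    return 2 * math.log(ell) / math.log(r * r / (r * r - r + 1.0))

def construction_size(ell, r):
    # (r + 1) log_r(ell) subspaces from Theorem 8 (ell a power of r)
    return (r + 1) * math.log(ell, r)

for r in (2, 3, 4):
    ell = r ** 10
    print(r, round(construction_size(ell, r), 1), round(family_upper_bound(ell, r), 1))
    # e.g. r = 2: construction 30.0 vs upper bound ~48.2
```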
A Proof of Theorem 8
In this section, we state and prove an alternate construction of an MSR subspace family of size (r + 1) log r ℓ. The first construction of an (ℓ, r) F -MSR subspace family of size (r + 1) log r ℓ that also satisfied the MDS property was shown in [24] for fields of size more than k r−1 ℓ/r elements. Without the MDS property, the field size needed to be more than r elements to show that the construction satisfied the node repair property.
Our construction uses subspaces that are identical to the ones in [24], but we choose different linear maps that required only two distinct eigenvalues. As a result, our construction works over all fields with more than two elements. It remains a very interesting question whether the MDS property can be additionally incorporated into our construction to yield MSR codes with sub-packetization r k/(r+1) over smaller fields.
Theorem 8. For |F| > 2 and r ≥ 2, there exists an (ℓ = r^m, r)_F-MSR subspace family of (r + 1)m = (r + 1) log_r(ℓ) subspaces.
In the rest of the section, we will prove the theorem above.
To give a general view of our construction, we first shift our view of the ambient space F ℓ = F r m to (F r ) ⊗m , vectors that consist of m tensored vectors in F r . We then consider a collection of vectors T := {v 1 , v 2 , . . . , v r , v r+1 }, situated in F r , such that any r of them form a basis in F r . The subspace A k,i will be all vectors in (F r ) ⊗m whose k'th position in the m tensored vectors is the vector v i .
The r − 1 associated linear maps Φ_{(k,i),1}, . . . , Φ_{(k,i),r−1} of the subspace A_{k,i} act only on the k'th position of each vector while retaining all remaining positions. Specifically, on the k'th position, each map rescales exactly one vector of T \ {v_i}: the linear map Φ_{(k,i),t} scales v_{i+t} by a factor λ ≠ 1, while all other vectors in T \ {v_i} are mapped identically, where the indices are taken modulo r + 1. That way, everything in T \ {v_i} stays almost the same, while v_i together with the r − 1 images of v_i forms a basis for F^r in the k'th position.
Proof. Let ℓ = r m , and let V = (F r ) ⊗m ≃ F ℓ be the ambient space. Consider a set of vectors {v 1 , v 2 , . . . , v r , v r+1 } ⊂ F r for which the first r form a basis in F r and satisfy the equation
v 1 + v 2 + . . . + v r + v r+1 = 0
For k ∈ [m] and i ∈ [r + 1], we define our (r + 1)m subspaces to be
A k,i := span(v i 1 ⊗ . . . ⊗ v im | i j ∈ [r + 1], i k = i)
which is a subspace of V . Observe that while the k'th position is fixated for any vector in A k,i , the remaining m − 1 positions are free to choose from any r vectors in F r . Through this observation, we see that dim(A k,i ) = r m−1 = ℓ/r.
To properly define the associated linear maps of the subspace family, it suffices to show their mapping for the basis
S i := {v i 1 ⊗ . . . ⊗ v im | i j ∈ [r + 1] \ {i}} of V .
Since |F| > 2, we can fix a constant λ ∈ F with λ ∉ {0, 1}, which we will use as an eigenvalue across all (r − 1)(r + 1)m linear maps. For each t ∈ [r − 1], the linear map Φ_{(k,i),t} scales all vectors in S_i whose k'th position is v_{i+t} by a factor λ and maps all remaining vectors in S_i identically, where indices are taken modulo r + 1. Namely, for i_k = i + t,
$$v_{i_1} \otimes \cdots \otimes v_{i_k} \otimes \cdots \otimes v_{i_m} \;\xrightarrow{\;\Phi_{(k,i),t}\;}\; v_{i_1} \otimes \cdots \otimes (\lambda v_{i_k}) \otimes \cdots \otimes v_{i_m},$$
and for i_k ∈ [r + 1] \ {i + t, i},
$$v_{i_1} \otimes \cdots \otimes v_{i_k} \otimes \cdots \otimes v_{i_m} \;\xrightarrow{\;\Phi_{(k,i),t}\;}\; v_{i_1} \otimes \cdots \otimes v_{i_k} \otimes \cdots \otimes v_{i_m}.$$
Observe that all the vectors in the basis S i are scaled by either 1 or λ, which means that the image Φ (k,i),t (S i ) is also a basis for V . This tells us that Φ (k,i),t is an invertible linear map. It now remains to show Properties 6 and 7 hold for our given subspaces and linear maps.
To show Property (6), we can use the relation v_1 + v_2 + · · · + v_{r+1} = 0 to rewrite v_i as v_i = −Σ_{j ∈ [r+1]\{i}} v_j. This shows us that when the k'th position of a vector is v_i, then Φ_{(k,i),t} maps it as
$$v_{i_1} \otimes \cdots \otimes v_i \otimes \cdots \otimes v_{i_m} \;\xrightarrow{\;\Phi_{(k,i),t}\;}\; v_{i_1} \otimes \cdots \otimes \big(v_i - (\lambda - 1) v_{i+t}\big) \otimes \cdots \otimes v_{i_m}.$$
Since λ ≠ 1, the set {v_i, v_i − (λ − 1)v_{i+1}, . . . , v_i − (λ − 1)v_{i+r−1}} forms a basis for F^r. Thus, for a vector v = v_{i_1} ⊗ · · · ⊗ v_{i_{k−1}} ⊗ v_i ⊗ v_{i_{k+1}} ⊗ · · · ⊗ v_{i_m}, the vectors {v, Φ_{(k,i),1}(v), . . . , Φ_{(k,i),r−1}(v)} span all of F^r in the k'th position. Because we are free to choose any vector in all remaining positions, such vectors taken over all choices of v together span all of V. That is, we find that
$$A_{k,i} \oplus \bigoplus_{t=1}^{r-1} \Phi_{(k,i),t}(A_{k,i}) = \mathbb{F}^{\ell},$$
which shows Property (6).
To show (7), we start by breaking the subspace A k ′ ,i ′ into two possibilities:
1. For the case when k′ ≠ k, the subspace A_{k′,i′} remains invariant under each Φ_{(k,i),t}, as these maps only transform the k'th position while retaining all other positions.
2. For the case when k′ = k and i′ ≠ i, the subspace A_{k,i′} is an eigenspace of Φ_{(k,i),t}. Namely, when i′ ≠ i + t, A_{k,i′} is an eigenspace with eigenvalue 1; when i′ = i + t, the eigenvalue is instead λ.
This shows that (7) also holds.
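The following numpy sketch (our own illustration, over the reals with λ = 2 rather than a general field) instantiates the construction for r = 2 and m = 2, i.e., ℓ = 4, and checks Properties (6) and (7) for all (r + 1)m = 6 subspaces numerically. The same check with exact field arithmetic would apply over any field with more than two elements.

```python
import numpy as np

l, lam = 4, 2.0                           # ell = r^m = 2^2 = 4, work over R, lambda = 2
e1, e2 = np.eye(2)
v = {1: e1, 2: e2, 3: -(e1 + e2)}         # v1 + v2 + v3 = 0; any two form a basis of R^2

def A(k, i):
    # basis (as columns) of A_{k,i}: the k'th tensor position is fixed to v_i
    cols = [np.kron(v[i], b) if k == 1 else np.kron(b, v[i]) for b in (e1, e2)]
    return np.stack(cols, axis=1)

def Phi(k, i):
    # the single (r - 1 = 1) map of A_{k,i}: scale v_{i+1} by lambda on
    # position k and fix the remaining vector of T \ {v_i}
    nxt, other = i % 3 + 1, (i + 1) % 3 + 1
    B = np.stack([v[nxt], v[other]], axis=1)
    M = B @ np.diag([lam, 1.0]) @ np.linalg.inv(B)
    return np.kron(M, np.eye(2)) if k == 1 else np.kron(np.eye(2), M)

def rank(X):
    return np.linalg.matrix_rank(X)

for k in (1, 2):
    for i in (1, 2, 3):
        P, Aki = Phi(k, i), A(k, i)
        assert rank(np.hstack([Aki, P @ Aki])) == l                       # Property (6)
        for k2 in (1, 2):
            for i2 in (1, 2, 3):
                if (k2, i2) != (k, i):
                    A2 = A(k2, i2)
                    assert rank(np.hstack([A2, P @ A2])) == A2.shape[1]   # Property (7)
```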
B Proof of the Cutset bound
Proof. Consider an (n, k, ℓ)-MDS vector code that stores a file M of size kℓ in storage nodes s 1 , s 2 , . . . , s n . The MDS vector code will repair a storage node s h by making every other storage node s i communicate β i,h bits to s h . From the MDS property, we know that any collection C ⊆ [n] \ {h} of k − 1 of nodes {s i } i∈C along with s h is able to construct our original file M.
Thus the collective information of these k storage nodes must be at least |M| = kℓ, implying the inequality
$$\sum_{i \in C} |s_i| + \sum_{i \in [n] \setminus (C \cup \{h\})} \beta_{i,h} \;\ge\; k\ell. \qquad (20)$$
Since every storage node stores ℓ bits (|s_i| = ℓ), inequality (20) reduces to
$$\sum_{i \in [n] \setminus (C \cup \{h\})} \beta_{i,h} \;\ge\; \ell. \qquad (21)$$
Hence (21) says that any n − k helper storage nodes collectively communicate at least ℓ bits. Summing (21) over all possible collections of n − k helper storage nodes, we find
$$\sum_{i \in [n] \setminus \{h\}} \beta_{i,h} \;\ge\; \frac{n-1}{n-k} \cdot \ell, \qquad (22)$$
which is the claimed cutset bound. Moreover, to achieve equality in (22), equality must be achieved in (21) for all collections of n − k helper storage nodes. That is possible only when β_{i,h} = ℓ/(n − k) for all i ∈ [n] \ {h}. Hence, under optimal repair bandwidth, the total information communicated is Σ_{i ∈ [n]\{h}} β_{i,h} = (n − 1)ℓ/(n − k), and it is achieved only when every helper storage node communicates exactly ℓ/(n − k) bits to storage node s_h. | 6,904 |
1901.05112 | 2910298035 | An @math -vector MDS code is a @math -linear subspace of @math (for some field @math ) of dimension @math , such that any @math (vector) symbols of the codeword suffice to determine the remaining @math (vector) symbols. The length @math of each codeword symbol is called the sub-packetization of the code. Such a code is called minimum storage regenerating (MSR), if any single symbol of a codeword can be recovered by downloading @math field elements (which is known to be the least possible) from each of the other symbols. MSR codes are attractive for use in distributed storage systems, and by now a variety of ingenious constructions of MSR codes are available. However, they all suffer from exponentially large sub-packetization @math . Our main result is an almost tight lower bound showing that for an MSR code, one must have @math . Previously, a lower bound of @math , and a tight lower bound for a restricted class of "optimal access" MSR codes, were known. Our work settles a central open question concerning MSR codes that has received much attention. Further our proof is really short, hinging on one key definition that is somewhat inspired by Galois theory. | Regenerating and MSR codes have close connections to communication-efficient secret sharing schemes, which were studied and developed in @cite_12 . In this context, the sub-packetization corresponds to the size of the shares that the parties must hold. | {
"abstract": [
"A secret sharing scheme is a method to store information securely and reliably. Particularly, in a threshold secret sharing scheme , a secret is encoded into @math shares, such that any set of at least @math shares suffice to decode the secret, and any set of at most @math d @math t_ 1 d n$ . The scheme is based on a generalization of Shamir’s secret sharing scheme and preserves its simplicity and efficiency. In addition, we consider the setting of secure distributed storage where the proposed communication efficient secret sharing schemes not only improve decoding bandwidth but further improve disk access complexity during decoding."
],
"cite_N": [
"@cite_12"
],
"mid": [
"1641218312"
]
} | An Exponential Lower Bound on the Sub-Packetization of Minimum Storage Regenerating Codes * | Traditional Maximum Distance Separable (MDS) codes such as Reed-Solomon codes provide the optimal trade-off between redundancy and number of worst-case erasures tolerated. When encoding k symbols of data into an n symbol codeword by an (n, k)-MDS code, the data can be recovered from any set of k out of n codeword symbols, which is clearly the best possible. MDS codes are thus a naturally appealing choice to minimize storage overhead in distributed storage systems (DSS). One can encode data, broken into k pieces, by an (n, k)-MDS code, and distribute the n codeword symbols on n different storage nodes, each holding the symbol corresponding to one codeword position. In the sequel, we use the terms storage node and codeword symbol interchangeably.
A rather common scenario faced by modern large scale DSS is the failure or temporary unavailability of storage nodes. It is of great importance to promptly respond to such failures, by efficient repair/regeneration of the failed node using the content stored in some of other nodes (which are called "helper" nodes as they assist in the repair). This requirement has spurred a set of fundamentally new and exciting challenges concerning codes for recovery from erasures, with the goal of balancing worst-case fault tolerance from many erasures, with very efficient schemes to recover from the much more common scenario of single (or a few) erasures.
There are two measures of repair efficiency that have received a significant amount of attention in the last decade. One concerns locality, where we would like to repair a node locally based on the contents of a small number of other storage nodes. Such locality necessarily compromises the MDS property, and a rich body of work on locally repairable codes (LRCs) studies the best trade-offs possible in this model and constructions achieving those [8,14,20]. The other line of work, which is the subject of this paper, focuses on optimizing the amount of data downloaded from the other nodes. This model allows the helper node to respond with a fraction of its contents. The efficiency measure is the repair bandwidth, which is the total amount of data downloaded from all the helper nodes. Codes in this model are called regenerating codes, and were systematically introduced in the seminal work of Dimakis et al. [6], and have since witnessed an explosive amount of research.
Rather surprisingly, even for some MDS codes, by contacting more helper nodes but downloading fewer symbols from each, one can do much better than the "usual" scheme, which would download the contents of k nodes in full. In general, an entire spectrum of trade-offs is possible between storage overhead and repair bandwidth. This includes minimum bandwidth regenerating (MBR) codes with the minimum repair bandwidth of ℓ [16]. At the other end of the spectrum, we have minimum storage regenerating (MSR) codes (defined formally below), which retain the MDS property and thus have optimal redundancy. This work focuses on MSR codes.
Example. We quickly recap the classic example of the EVENODD code [3,7] to illustrate regeneration of a lost symbol in an MDS code with non-trivial repair bandwidth. This is a (4, 2) MDS code with 4 storage nodes, each storing a vector of two symbols over the binary field. We denote by P_1, P_2 the two parity nodes.
S_1 stores (a_1, a_2); S_2 stores (b_1, b_2); P_1 stores (a_1 + b_1, a_2 + b_2); P_2 stores (a_2 + b_1, a_1 + a_2 + b_2).
The naive scheme to repair a node would contact any two of the remaining three nodes, and download both bits from each of them, for a total repair bandwidth of 4 bits. However, it turns out that one can get away with downloading just one bit from each of the three other nodes, for a repair bandwidth of 3 bits! If we were to repair the node S 1 , the remaining nodes (S 2 , P 1 , P 2 ) would send (b 1 , a 1 + b 1 , a 2 + b 1 ), respectively. If we were to repair the node S 2 , the remaining nodes (S 1 , P 1 , P 2 ) would send (a 2 , a 2 + b 2 , a 2 + b 1 ), respectively. If we were to repair the node P 1 , the remaining nodes (S 1 , S 2 , P 2 ) would send (a 1 , b 1 , a 1 + a 2 + b 2 ), respectively. If we were to repair the node P 2 , the remaining nodes (S 1 , S 2 , P 1 ) would send (a 2 , b 1 , (a 1 + b 1 ) + (a 2 + b 2 )), respectively. Note that in the last case, the helper node P 1 sends a linear combination of its symbols-this is in general a powerful ability that we allow in MSR codes.
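The repair schedules above are easy to check mechanically. The short script below is an illustration we added (variable names are ours); it verifies two of the four repairs over all 16 possible data bits, using XOR for addition over the binary field.

```python
from itertools import product

def nodes(a1, a2, b1, b2):
    # contents of the four storage nodes of the (4, 2) example code
    S1, S2 = (a1, a2), (b1, b2)
    P1 = (a1 ^ b1, a2 ^ b2)
    P2 = (a2 ^ b1, a1 ^ a2 ^ b2)
    return S1, S2, P1, P2

def repair_S1(S2, P1, P2):
    # helpers send b1, a1+b1, a2+b1 -- one bit each (3 bits total, not 4)
    b1, x, y = S2[0], P1[0], P2[0]
    return (x ^ b1, y ^ b1)                 # recovers (a1, a2)

def repair_P2(S1, S2, P1):
    # helpers send a2, b1 and the single combined bit (a1+b1)+(a2+b2)
    a2, b1, z = S1[1], S2[0], P1[0] ^ P1[1]
    return (a2 ^ b1, z ^ b1)                # recovers (a2+b1, a1+a2+b2)

for bits in product((0, 1), repeat=4):
    S1, S2, P1, P2 = nodes(*bits)
    assert repair_S1(S2, P1, P2) == S1
    assert repair_P2(S1, S2, P1) == P2
```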
Vector codes and sub-packetization. The above example shows that when the code is an (n, k) vector MDS code, where each codeword symbol itself is a vector, say in F^ℓ for some field F, then one can hope to achieve repair bandwidth smaller than the naive kℓ. The length ℓ of the vector stored at each node is called the sub-packetization (since this is the granularity to which a single codeword symbol needs to be divided).
MSR codes.
A natural question is how small a repair bandwidth one can achieve with MDS codes. The so-called cutset bound [6] dictates that one must download at least (n − 1)ℓ/(n − k) symbols of F from the remaining nodes to recover any single node. Further, in order to attain this optimal repair bandwidth bound, each of the (n − 1) nodes must respond with ℓ/(n − k) field elements. Vector MDS codes which admit repair schemes meeting the cutset bound (for repair of every node) are called minimum storage regenerating (MSR) codes (for the formal description, see Definition 1). MSR codes, and specifically their sub-packetization, are the focus of this paper.
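As a concrete data point (our own arithmetic, with illustrative parameters), the snippet below compares the cutset-optimal repair download with the naive download of k full nodes for a (14, 10) code.

```python
def repair_downloads(n, k, ell):
    # cutset-optimal total download vs. the naive scheme (k full nodes)
    return (n - 1) * ell / (n - k), k * ell

opt, naive = repair_downloads(n=14, k=10, ell=4096)
print(opt, naive, naive / opt)   # 13312.0 symbols vs 40960, roughly a 3x saving
```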
Large sub-packetization: problematic and inherent. While there are many constructions of MSR codes by now, they all have large sub-packetization, which is at least r^{k/r}. For the setting of most interest, when we incur a small redundancy r in exchange for repair of information, this is very large, and in particular exp(Ω(k)) when r = O(1). A small sub-packetization is important for a number of reasons, as explained in some detail in the introduction of [17]. A large sub-packetization limits the number of storage nodes (for example, if ℓ ≥ exp(Ω(n)), then n = O(log ℓ), where ℓ is the storage capacity of each node), and in general leads to a reduced design space in terms of various systems parameters. A larger sub-packetization also makes management of meta-data, such as the description of the code and the repair mechanisms for different nodes, more difficult. For a given storage capacity, a smaller sub-packetization allows one to distribute codewords corresponding to independently coded files among multiple nodes, which allows for distributing the load of providing information for the repair of a failed node among a larger number of nodes.
It has been known that somewhat large sub-packetization is inherent for MSR codes (we will describe the relevant prior results in the next section). In this work, we improve this lower bound to exponential, showing that unfortunately the exponential sub-packetization of known constructions is inherent. Our main result is the following.
Theorem 1. Suppose an (n, k)-vector MDS code with redundancy r = n − k ≥ 2 is minimum storage regenerating (MSR). Then its sub-packetization ℓ must satisfy
$$\ell \;\ge\; \left(\frac{r^2}{r^2 - r + 1}\right)^{(k-1)/2} \;\ge\; e^{(k-1)(r-1)/(2r^2)}.$$
Our lower bound almost matches the sub-packetization of r O(k/r) achieved by the best known constructions. Improving the base of the exponent in our lower bound to r will make it even closer to the upper bounds. Though when r is small, which is the primary setting of interest in codes for distributed storage, this difference is not that substantial. We remark that our theorem leaves out the case when r = 1, which is known to have a sub-packetization of ℓ = 1 [9].
A few words about our proof. Previous work [22] has shown that an (n, k) MSR code with sub-packetization ℓ implies a family of (k − 1) ℓ/r-dimensional subspaces H i of F ℓ each of which has an associated collection of (r − 1) linear maps obeying some strong properties. For instance, in the case r = 2, there is an invertible map φ i associated with H i for each i which leaves all subspaces H j , j = i, invariant, and maps H i itself to a disjoint space (i.e., φ i (H i ) ∩ H i = {0}). The task of showing a lower bound on ℓ then reduces to the linear-algebraic challenge of showing an upper bound on the size of such a family of subspaces and linear transformations, which we call an MSR subspace family (Definition 2). The authors of [10] showed an upper bound O(r log 2 ℓ) on the size of MSR subspace families via a nifty partitioning and linear independence argument.
We follow a different approach by showing that the number of linear maps that fix all subspaces in an MSR family decreases sharply as the number of subspaces increases. Specifically, we show that the dimension of the linear space of such linear maps decreases exponentially in the number of subspaces in the MSR family. This enables us to prove an O(r log ℓ) upper bound. This bound is asymptotically tight (up to an O(log r) factor), as there is a construction of an MSR subspace family of size (r + 1) log_r ℓ [24]. We also present an alternate construction in Appendix A, which works for all fields with more than 2 elements, compared to the large field size (of at least ≈ r^r ℓ) required in [24].
We now proceed to situate our work in the context of prior work, both constructions and lower bounds, for MSR codes.
Preliminaries
We will now define MSR codes more formally. We begin by defining vector codes. Let F be a field, and n, ℓ be positive integers. For a positive integer b, we denote [b] = {1, 2, . . . , b}. A vector code C of block length n and sub-packetization ℓ is an F-linear subspace of (F ℓ ) n . We can express a codeword of C as c = (c 1 , c 2 , . . . , c n ), where for i ∈ [n], the block c i = (c i,1 , . . . , c i,ℓ ) ∈ F ℓ denotes the length ℓ vector corresponding to the i'th code symbol c i .
Let k be an integer, with 1 k n. If the dimension of C, as an F-vector space, is kℓ, we say that C is an (n, k, ℓ) F -vector code. The codewords of an (n, k, ℓ) F -vector code are in one-to-one correspondence with vectors in (F ℓ ) k , consisting of k blocks of ℓ field elements each.
Such a code is said to be Maximum Distance Separable (MDS), and called an (n, k, ℓ)-MDS code (over the field F), if every subset of k code symbols c_{i_1}, c_{i_2}, . . . , c_{i_k} is an information set for the code, i.e., knowing these symbols determines the remaining n − k code symbols and thus the full codeword. An MDS code thus offers the optimal erasure correction property: the information can be recovered from any set of k code symbols, thus tolerating the maximum possible number n − k of worst-case erasures.
An (n, k, ℓ)-MDS code can be used in distributed storage systems as follows. Data viewed as kℓ symbols over F is encoded using the code resulting in n vectors in F ℓ , which are stored in n storage nodes. Downloading the full contents from any subset of these k nodes (a total of kℓ symbols from F) suffices to reconstruct the original data in entirety. Motivated by the challenge of efficient regeneration of a failed storage node, which is a fairly typical occurrence in large scale distributed storage systems, the repair problem aims to recover any single code symbol c i by downloading fewer than kℓ field elements. This is impossible if one only downloads contents from k nodes, but becomes feasible if one is allowed to contact h > k helper nodes and receive fewer than ℓ field elements from each.
Here we focus our attention to only repairing the first k code symbols, which we view as the information symbols. This is called "systematic node repair" as opposed to the more general "all node repair" where the goal is to repair all n codeword symbols. We will also only consider the case h = n − 1, when all the remaining nodes are available as helper nodes. Since our focus is on a lower bound on the sub-packetization ℓ, this only makes our result stronger, and keeps the description somewhat simpler. We note that the currently best known constructions allow for all-node repair with optimal bandwidth from any subset of h helper nodes.
Suppose we want to repair the m'th code symbol for some m ∈ [k]. We download from the i'th code symbol, i = m, a function h i,m (c i ) of its contents, where h i,m : F ℓ → F β i,m is the repair function. If we consider the linear nature of C, then we should expect from h i,m to utilize it. Therefore, throughout this paper, we shall assume linear repair of the failed node. That is, h i,m is an F-linear function. Thus, we download from each node certain linear combinations of the ℓ symbols stored at that node. The total repair bandwidth to recover c m is defined to be i =m β i,m . By the cutset bound for repair of MDS codes [6], this quantity is lower bounded by (n − 1)ℓ/r, where r = n − k is the redundancy of the code. Further, equality can be attained only if β i,m = ℓ/r for all i. That is, we download ℓ/r field elements from each of the remaining nodes. MDS codes achieving such an optimal repair bandwidth are called Minimum Storage Regenerating (MSR) codes, as precisely defined below. Let C ⊆ (F ℓ ) n be an (n, k, ℓ)-MSR code, with redundancy r = n − k. The MDS property implies that any subset of k codeword symbols determine the whole codeword. We view the first k symbols as the "systematic" ones, with r parity check symbols computed from them, where we remind that when we say code symbol we mean a vector in F ℓ . So we can assume that there are invertible matrices C i,j ∈ F ℓ×ℓ for i ∈ [r] and j ∈ [k] such that for c = (c 1 , c 2 , . . . , c n ) ∈ C, we have
$$c_{k+i} = \sum_{j=1}^{k} C_{i,j}\, c_j.$$
Suppose we want to repair a systematic node c m for m ∈ [k] with optimal repair bandwidth, by receiving from each of the remaining n − 1 nodes, ℓ/r F-linear combinations of the information they stored. This means that there are repair matrices S 1,m , . . . , S r,m ∈ F ℓ/r×ℓ , such that parity node k + i sends the linear combination
$$S_{i,m}\, c_{k+i} = S_{i,m} \sum_{j=1}^{k} C_{i,j}\, c_j \qquad (2)$$
Therefore, the information about c m that is sent to it by c k+i is S i,m C i,m c m . Since the k systematic nodes are independent of each other, then the only way to recover c m is by taking a linear combination of S i,m C i,m c m for i ∈ [r] such that the linear combination equals c m for any c m ∈ F ℓ . Therefore, to ensure full regeneration of c m , we must satisfy
$$\operatorname{rank}\begin{bmatrix} S_{1,m} C_{1,m} \\ S_{2,m} C_{2,m} \\ \vdots \\ S_{r,m} C_{r,m} \end{bmatrix} = \ell$$
Since each S i,m C i,m has ℓ/r rows, the above happens if and only if
$$\bigoplus_{i=1}^{r} R(S_{i,m} C_{i,m}) = \mathbb{F}^{\ell} \qquad (3)$$
where R(M ) denotes the row-span of a matrix M .
Cancelling interference of other systematic symbols
Now, for every other systematic node m′ ∈ [k] \ {m}, the transmissions of the parity nodes contain the following linear combinations of c_{m′}:
$$\begin{bmatrix} S_{1,m} C_{1,m'} \\ S_{2,m} C_{2,m'} \\ \vdots \\ S_{r,m} C_{r,m'} \end{bmatrix} c_{m'} \qquad (4)$$
In order to cancel this from the linear combinations (2) received from the parity nodes, the systematic node m ′ has to send the linear combinations (4) about its contents. To achieve optimal repair bandwidth of at most ℓ/r symbols from every node, this imposes the requirement
$$\operatorname{rank}\begin{bmatrix} S_{1,m} C_{1,m'} \\ S_{2,m} C_{2,m'} \\ \vdots \\ S_{r,m} C_{r,m'} \end{bmatrix} \le \frac{\ell}{r}.$$
However since C i,m ′ is invertible, and S i,m has full row rank, rank(S i,m C i,m ′ ) = ℓ/r for all i ∈ [r]. Combining this fact with the rank inequality above, this implies
R(S 1,m C 1,m ′ ) = · · · = R(S r,m C r,m ′ )(5)
for every m ≠ m′ ∈ [k], where R(M) is the row-span of a matrix M.
Constant repair matrices and casting the problem in terms of subspaces
We now make an important simplification, which allows us to assume that the matrices S i,m above depend only on the node m being repaired, but not on the helping parity node i. That is, S m = S i,m for all i ∈ [r]. We call repair with this restriction as possessing constant repair matrices. It turns out that one can impose this restriction with essentially no loss in parameters -by Theorem 2 of [22], if there is a (n, k, ℓ)-MSR code then there is also a (n − 1, k − 1, ℓ)-MSR code with constant repair matrices.
This allows us to cast the requirements (3) and (5) in terms of a nice property about subspaces and associated invertible maps, which we abstract below. This property was shown to be intimately tied to MSR codes in [24,22]. Definition 2 (MSR subspace family). For integers ℓ, r with r|ℓ and a field F, a collection of subspaces H 1 , . . . , H k of F ℓ of dimension ℓ/r each is said to be an (ℓ, r) F -MSR subspace family if there exist invertible linear maps Φ i,j on F ℓ , i ∈ {1, 2, . . . , k} and j ∈ {1, 2, . . . , r − 1} such that for every i ∈ [k], the following holds:
$$H_i \oplus \bigoplus_{j=1}^{r-1} \Phi_{i,j}(H_i) = \mathbb{F}^{\ell} \qquad (6)$$
$$\Phi_{i',j}(H_i) = H_i \quad \text{for every } j \in [r-1] \text{ and every } i' \neq i \qquad (7)$$
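To make the definition concrete, here is a tiny example of ours (not taken from the paper): for ℓ = 2 and r = 2 over the reals, two one-dimensional subspaces together with one invertible map each already satisfy Properties (6) and (7). For these parameters the construction of Theorem 8 would in fact give three subspaces.

```python
import numpy as np

l = 2                                          # sub-packetization 2, redundancy r = 2
H = [np.array([[1.0], [0.0]]),                 # H_1 = span(e1)
     np.array([[0.0], [1.0]])]                 # H_2 = span(e2)
Phi = [np.array([[1.0, 0.0], [1.0, 1.0]]),     # Phi_{1,1}: e1 -> e1 + e2, fixes e2
       np.array([[1.0, 1.0], [0.0, 1.0]])]     # Phi_{2,1}: e2 -> e1 + e2, fixes e1

def spans_everything(A, B):                    # H_i + Phi(H_i) = F^2
    return np.linalg.matrix_rank(np.hstack([A, B])) == l

def invariant(P, A):                           # P maps span(A) into itself
    return np.linalg.matrix_rank(np.hstack([A, P @ A])) == A.shape[1]

for i in range(2):
    assert spans_everything(H[i], Phi[i] @ H[i])   # Property (6)
    assert invariant(Phi[1 - i], H[i])             # Property (7)
```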
Now, we recall the argument that if we have an (n, k, ℓ)-MSR code with constant repair matrices, then it also yields a family of subspaces and maps with the above properties. Indeed, we can take H_m, for m ∈ [k], to be R(S_m), and Φ_{m,j}, for j ∈ [r − 1], to be the invertible linear transformation mapping x ∈ F^ℓ, viewed as a row vector, to x C_{j+1,m} C_{1,m}^{-1}. It is clear that Property (6) follows from (3), and Property (7) follows from (5). Together with the loss of one in the code dimension incurred by the transformation of [22] to an MSR code with constant repair matrices, we can conclude the following connection between MSR codes and the very structured set of subspaces and maps of Definition 2.
Proposition 2. If there exists an (n, k, ℓ)-MSR code with redundancy r = n − k, then there exists an (ℓ, r)_F-MSR subspace family of size k − 1.
For the reverse direction, the MSR subspace family can take care of the node repair, but one still needs to ensure the MDS property. This approach was taken in [24], based on a construction of an (ℓ, r)_F-MSR subspace family of size (r + 1) log_r ℓ. For completeness, we present another construction of an MSR subspace family in Appendix A. The subspaces in our construction are identical to [24], but we pick the linear maps differently, using just two distinct eigenvalues. As a result, our construction works over any field with more than two elements. In comparison, the approach in [24] used k^{r−1}ℓ/r distinct eigenvalues, and thus required a field that is bigger than this bound. It is an interesting question to see whether the MDS property can be incorporated into our construction to give MSR codes with sub-packetization r^{k/(r+1)} over smaller fields.
Limitation of MSR subspace families
In this section, we state and prove the following strong upper bound on the size of an MSR family of subspaces, showing that the construction claimed in Theorem 8 is not too far from the best possible.
Theorem 3. Any (ℓ, r)_F-MSR subspace family has size
$$k \;\le\; \frac{2 \ln \ell}{\ln\!\left(\frac{r^2}{r^2 - r + 1}\right)} \;\le\; \frac{2r^2}{r-1} \ln \ell.$$
This upper bound together with Proposition 2 immediately implies our main result, Theorem 1. In the rest of the section, we prove the above theorem. Let H_1, H_2, . . . , H_k be the subspaces in an (ℓ, r)_F-MSR subspace family with associated invertible linear maps Φ_{i,j}, where i ∈ [k] and j ∈ [r − 1]. Note that these linear maps are in some sense statements about the structure of the spaces H_1, H_2, . . . , H_k: they dictate the way the subspaces can interact with each other, thereby giving rigidity to the way they are structured.
The major insight and crux of the proof is the following definition on collections of subspaces. This definition is somewhat inspired by Galois Theory, in that we are looking at the space of linear maps on the vector space F ℓ that fix all the subspaces in question.
Definition 3.
In the vector space L(F ℓ , F ℓ ) of all linear maps from F ℓ to F ℓ , define the subspace
F(A 1 → B 1 , . . . , A s → B s ) := {ψ ∈ L(F ℓ , F ℓ ) | ψ(A i ) ⊆ B i ∀i ∈ {1, . . . , s}} for arbitrary subspaces A i , B i of F ℓ . Define the value I(A 1 → B 1 , . . . , A s → B s ) := dim(F(A 1 → B 1 , . . . , A s → B s ))
When A i = B i for each i, we adopt the shorthand notation F(A 1 , . . . , A s ) and I(A 1 , . . . , A s ) to denote the above quantities. We will also use the mixed notation F(A 1 , . . . , A s−1 , A s → B s ) to denote F(A 1 → A 1 , . . . , A s → B s ) and likewise for I(A 1 , . . . , A s−1 , A s → B s ).
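Since each condition ψ(A_i) ⊆ B_i is linear in the entries of ψ, the dimension I(A_1 → B_1, . . . , A_s → B_s) can be computed directly as the nullity of a constraint matrix. The sketch below (our illustration, over the reals) does exactly that.

```python
import numpy as np

def invariance_dim(pairs, l):
    # dim of {psi : R^l -> R^l linear with psi(A_i) contained in B_i for all i};
    # each A_i, B_i is given by a matrix whose columns span the subspace
    rows = []
    for A, B in pairs:
        Q, _ = np.linalg.qr(B)                # orthonormal basis of B
        P_perp = np.eye(l) - Q @ Q.T          # projector onto B's complement
        for a in A.T:                         # constraint: P_perp @ psi @ a = 0
            rows.append(np.kron(a, P_perp))   # linear in vec(psi)
    M = np.vstack(rows)
    return l * l - np.linalg.matrix_rank(M)

e = np.eye(4)
U = e[:, :2]                                  # span(e1, e2) inside R^4
print(invariance_dim([(U, U)], 4))            # 12: the block upper-triangular maps
```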
Thus I(A_1, . . . , A_s) is the dimension of the space of linear maps that map each A_i within itself. We use the notation I(·) to suggest such an invariance. The key idea will be to cleverly exploit the invertible maps Φ_{i,j} associated with each H_i to argue that the dimension I(H_1, H_2, . . . , H_t) shrinks by a constant factor whenever we add an H_{t+1} into the collection. Specifically, we will show that the dimension shrinks at least by a factor of (r² − r + 1)/r² for each newly added H_{t+1}. Because the identity map is always in F(H_1, H_2, . . . , H_k), the dimension I(H_1, H_2, . . . , H_k) is at least 1. As the ambient space of linear maps from F^ℓ to F^ℓ has dimension ℓ², this leads to an O(r log ℓ) upper bound on k. We begin with the following lemma.
Lemma 4. Let U_1, U_2, . . . , U_s ⊆ F^p, with s ≥ 2, be arbitrary subspaces such that ∩_{i=1}^{s} U_i = {0}. Then the following inequality holds:
$$\sum_{i=1}^{s} \dim(U_i) \;\le\; (s-1)\, \dim(U_1 + \cdots + U_s).$$
Proof. We proceed by inducting on s. Indeed, when s = 2, we have from the Principle of Inclusion and Exclusion (PIE)
dim(U 1 ) + dim(U 2 ) = dim(U 1 + U 2 ) + dim(U 1 ∩ U 2 ) = dim(U 1 + U 2 )
And thus the base case holds. Now, if the inequality holds when s = p, then we have via the Principle of Inclusion and Exclusion
p+1 i=1 dim(U i ) = dim(U 1 + U 2 ) + dim(U 1 ∩ U 2 ) + p+1 i=3 dim(U i )(8)
By the induction hypothesis, we deduce that Equation (8) is at most
dim(U 1 + U 2 ) + (p − 1) dim((U 1 ∩ U 2 ) + · · · + U p+1 )(9)
And Equation (9) is at most p dim(U 1 + U 2 + · · · + U p+1 )
By combining Equations (8), (9), and (10), we deduce that the inequality also holds when s = p + 1. Since the base case s = 2 holds, we conclude that the inequality holds for all integers s ≥ 2.
Next, we prove an inequality for MSR subspace families that will come in handy. For the sake of brevity, we use the shorthand H_a := {H_1, . . . , H_a}, and we let Φ_{a,0} denote the identity map.
Lemma 5. For every t ∈ [k], every i ∈ {0, 1, . . . , r − 1}, and every s ∈ {0, 1, . . . , r − 1},
$$\sum_{j=0}^{s} I\big(H_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)\big) \;\le\; s\, I(H_{t-1}, H_t \to 0) + I\Big(H_{t-1}, \Phi_{t,i}(H_t) \to \bigoplus_{j=0}^{s} \Phi_{t,j}(H_t)\Big). \qquad (11)$$
Proof. We proceed by inducting on s. The base case s = 0 is clear, as the right hand side simplifies to the left hand side. Now suppose Equation (11) holds when s = p with p < r − 1, and consider
$$\sum_{j=0}^{p+1} I\big(H_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)\big). \qquad (12)$$
By the induction hypothesis, we deduce that Equation (12) is at most
$$p\, I(H_{t-1}, H_t \to 0) + I\Big(H_{t-1}, \Phi_{t,i}(H_t) \to \bigoplus_{j=0}^{p} \Phi_{t,j}(H_t)\Big) + I\big(H_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,p+1}(H_t)\big). \qquad (13)$$
By applying the Principle of Inclusion and Exclusion (PIE) together with Equation (6), we deduce that Equation (13) is at most
$$p\, I(H_{t-1}, H_t \to 0) + I\big(H_{t-1}, \Phi_{t,i}(H_t) \to 0\big) + I\Big(H_{t-1}, \Phi_{t,i}(H_t) \to \bigoplus_{j=0}^{p+1} \Phi_{t,j}(H_t)\Big), \qquad (14)$$
and Equation (14) is equal to
$$(p+1)\, I(H_{t-1}, H_t \to 0) + I\Big(H_{t-1}, \Phi_{t,i}(H_t) \to \bigoplus_{j=0}^{p+1} \Phi_{t,j}(H_t)\Big). \qquad (15)$$
And so combining Equations (12), (13), (14), and (15), we deduce that Equation (11) also holds when s = p + 1. Since the base case s = 0 holds, we therefore conclude that the inequality holds for all s ∈ {0, 1, . . . , r − 1}.
Applying Lemma 5 with s = r − 1 and using Equation (6), we deduce the following corollary.
Corollary 6. For every t ∈ [k] and every i ∈ {0, 1, . . . , r − 1},
$$\sum_{j=0}^{r-1} I\big(H_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)\big) \;\le\; (r-1)\, I(H_{t-1}, H_t \to 0) + I(H_{t-1}).$$
We are now ready to establish the key iterative step, showing geometric decay of the dimension I(H_1, . . . , H_t) in t.
Lemma 7. For every t ∈ [k],
$$I(H_{t-1}, H_t) \;\le\; \frac{r^2 - r + 1}{r^2}\, I(H_{t-1}). \qquad (16)$$
Proof. Recall that by the property of an (ℓ, r) F -MSR subspace family, the maps Φ t,j , j ∈ {0, 1, . . . , r − 1}, leave H 1 , . . . , H t−1 invariant. Using this it follows that
I(H_{t-1}, H_t) = I(H_{t-1}, Φ_{t,i}(H_t) → Φ_{t,j}(H_t)) for each i, j ∈ {0, 1, . . . , r − 1}, since we have an isomorphism F(H_{t-1}, H_t) → F(H_{t-1}, Φ_{t,i}(H_t) → Φ_{t,j}(H_t)) given by ψ ↦ Φ_{t,j} ∘ ψ ∘ Φ_{t,i}^{-1}. Thus we have
$$r^2 \cdot I(H_{t-1}, H_t) = \sum_{i=0}^{r-1} \sum_{j=0}^{r-1} I\big(H_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)\big). \qquad (17)$$
Notice that the inner sum is the same as the left hand side in Corollary 6. We can therefore apply Corollary 6 to Equation (17) to find that
$$\sum_{i=0}^{r-1} \sum_{j=0}^{r-1} I\big(H_{t-1}, \Phi_{t,i}(H_t) \to \Phi_{t,j}(H_t)\big) \;\le\; \sum_{i=0}^{r-1} \Big[(r-1)\, I\big(H_{t-1}, \Phi_{t,i}(H_t) \to 0\big) + I(H_{t-1})\Big] = r\, I(H_{t-1}) + (r-1) \sum_{i=0}^{r-1} I\big(H_{t-1}, \Phi_{t,i}(H_t) \to 0\big). \qquad (18)$$
Now we observe that the only linear transformation of F^ℓ that maps Φ_{t,i}(H_t) to 0 for all i ∈ {0, 1, . . . , r − 1} simultaneously is the identically 0 map. This is because ⊕_{j=0}^{r-1} Φ_{t,j}(H_t) = F^ℓ by Equation (6). Thus we are in a situation where Lemma 4 applies, and we have
$$r\, I(H_{t-1}) + (r-1) \sum_{i=0}^{r-1} I\big(H_{t-1}, \Phi_{t,i}(H_t) \to 0\big) \;\le\; r\, I(H_{t-1}) + (r-1)\cdot(r-1)\, I(H_{t-1}) = (r^2 - r + 1)\, I(H_{t-1}). \qquad (19)$$
Combining Equations (17), (18), and (19), we conclude Equation (16) as desired.
We are now ready to finish off the proof of our claimed upper bound on the size k of an (ℓ, r) F -MSR family.
Proof of Theorem 3. Since the identity map belongs to the space F(H_1, . . . , H_k), by applying Lemma 7 inductively on H_1, H_2, . . . , H_k we obtain the inequality
$$1 \;\le\; I(H_1, \ldots, H_k) \;\le\; \left(\frac{r^2 - r + 1}{r^2}\right)^{k} \cdot \ell^2,$$
from which we find that
$$k \;\le\; \frac{2 \ln \ell}{\ln\!\left(\frac{r^2}{r^2 - r + 1}\right)} \;\le\; \frac{2 \ln \ell}{\frac{r-1}{r^2}} = \frac{2r^2}{r-1} \ln \ell,$$
where the second inequality follows because ln(1 + x) ≥ x/(1 + x) for all x > −1. We thus have the claimed upper bound.
A Proof of Theorem 8
In this section, we state and prove an alternate construction of an MSR subspace family of size (r + 1) log r ℓ. The first construction of an (ℓ, r) F -MSR subspace family of size (r + 1) log r ℓ that also satisfied the MDS property was shown in [24] for fields of size more than k r−1 ℓ/r elements. Without the MDS property, the field size needed to be more than r elements to show that the construction satisfied the node repair property.
Our construction uses subspaces that are identical to the ones in [24], but we choose different linear maps that required only two distinct eigenvalues. As a result, our construction works over all fields with more than two elements. It remains a very interesting question whether the MDS property can be additionally incorporated into our construction to yield MSR codes with sub-packetization r k/(r+1) over smaller fields.
Theorem 8. For |F| > 2 and r ≥ 2, there exists an (ℓ = r^m, r)_F-MSR subspace family of (r + 1)m = (r + 1) log_r(ℓ) subspaces.
In the rest of the section, we will prove the theorem above.
To give a general view of our construction, we first shift our view of the ambient space F ℓ = F r m to (F r ) ⊗m , vectors that consist of m tensored vectors in F r . We then consider a collection of vectors T := {v 1 , v 2 , . . . , v r , v r+1 }, situated in F r , such that any r of them form a basis in F r . The subspace A k,i will be all vectors in (F r ) ⊗m whose k'th position in the m tensored vectors is the vector v i .
The r − 1 associated linear maps Φ_{(k,i),1}, . . . , Φ_{(k,i),r−1} of the subspace A_{k,i} act only on the k'th position of each vector while retaining all remaining positions. Specifically, on the k'th position, each map rescales exactly one vector of T \ {v_i}: the linear map Φ_{(k,i),t} scales v_{i+t} by a factor λ ≠ 1, while all other vectors in T \ {v_i} are mapped identically, where the indices are taken modulo r + 1. That way, everything in T \ {v_i} stays almost the same, while v_i together with the r − 1 images of v_i forms a basis for F^r in the k'th position.
Proof. Let ℓ = r m , and let V = (F r ) ⊗m ≃ F ℓ be the ambient space. Consider a set of vectors {v 1 , v 2 , . . . , v r , v r+1 } ⊂ F r for which the first r form a basis in F r and satisfy the equation
v 1 + v 2 + . . . + v r + v r+1 = 0
For k ∈ [m] and i ∈ [r + 1], we define our (r + 1)m subspaces to be
A k,i := span(v i 1 ⊗ . . . ⊗ v im | i j ∈ [r + 1], i k = i)
which is a subspace of V . Observe that while the k'th position is fixated for any vector in A k,i , the remaining m − 1 positions are free to choose from any r vectors in F r . Through this observation, we see that dim(A k,i ) = r m−1 = ℓ/r.
To properly define the associated linear maps of the subspace family, it suffices to show their mapping for the basis
S i := {v i 1 ⊗ . . . ⊗ v im | i j ∈ [r + 1] \ {i}} of V .
Since |F| > 2, we can fix a constant λ ∈ F with λ ∉ {0, 1}, which we will use as an eigenvalue across all (r − 1)(r + 1)m linear maps. For each t ∈ [r − 1], the linear map Φ_{(k,i),t} scales all vectors in S_i whose k'th position is v_{i+t} by a factor λ and maps all remaining vectors in S_i identically, where indices are taken modulo r + 1. Namely, for i_k = i + t,
$$v_{i_1} \otimes \cdots \otimes v_{i_k} \otimes \cdots \otimes v_{i_m} \;\xrightarrow{\;\Phi_{(k,i),t}\;}\; v_{i_1} \otimes \cdots \otimes (\lambda v_{i_k}) \otimes \cdots \otimes v_{i_m},$$
and for i_k ∈ [r + 1] \ {i + t, i},
$$v_{i_1} \otimes \cdots \otimes v_{i_k} \otimes \cdots \otimes v_{i_m} \;\xrightarrow{\;\Phi_{(k,i),t}\;}\; v_{i_1} \otimes \cdots \otimes v_{i_k} \otimes \cdots \otimes v_{i_m}.$$
Observe that all the vectors in the basis S i are scaled by either 1 or λ, which means that the image Φ (k,i),t (S i ) is also a basis for V . This tells us that Φ (k,i),t is an invertible linear map. It now remains to show Properties 6 and 7 hold for our given subspaces and linear maps.
To show Property (6), we can use the relation v_1 + v_2 + · · · + v_{r+1} = 0 to rewrite v_i as v_i = −Σ_{j ∈ [r+1]\{i}} v_j. This shows us that when the k'th position of a vector is v_i, then Φ_{(k,i),t} maps it as
$$v_{i_1} \otimes \cdots \otimes v_i \otimes \cdots \otimes v_{i_m} \;\xrightarrow{\;\Phi_{(k,i),t}\;}\; v_{i_1} \otimes \cdots \otimes \big(v_i - (\lambda - 1) v_{i+t}\big) \otimes \cdots \otimes v_{i_m}.$$
Since λ ≠ 1, the set {v_i, v_i − (λ − 1)v_{i+1}, . . . , v_i − (λ − 1)v_{i+r−1}} forms a basis for F^r. Thus, for a vector v = v_{i_1} ⊗ · · · ⊗ v_{i_{k−1}} ⊗ v_i ⊗ v_{i_{k+1}} ⊗ · · · ⊗ v_{i_m}, the vectors {v, Φ_{(k,i),1}(v), . . . , Φ_{(k,i),r−1}(v)} span all of F^r in the k'th position. Because we are free to choose any vector in all remaining positions, such vectors taken over all choices of v together span all of V. That is, we find that
$$A_{k,i} \oplus \bigoplus_{t=1}^{r-1} \Phi_{(k,i),t}(A_{k,i}) = \mathbb{F}^{\ell},$$
which shows Property (6).
To show (7), we start by breaking the subspace A k ′ ,i ′ into two possibilities:
1. For the case when k′ ≠ k, the subspace A_{k′,i′} remains invariant under each Φ_{(k,i),t}, as these maps only transform the k'th position while retaining all other positions.
2. For the case when k′ = k and i′ ≠ i, the subspace A_{k,i′} is an eigenspace of Φ_{(k,i),t}. Namely, when i′ ≠ i + t, A_{k,i′} is an eigenspace with eigenvalue 1; when i′ = i + t, the eigenvalue is instead λ.
This shows that (7) also holds.
B Proof of the Cutset bound
Proof. Consider an (n, k, ℓ)-MDS vector code that stores a file M of size kℓ in storage nodes s 1 , s 2 , . . . , s n . The MDS vector code will repair a storage node s h by making every other storage node s i communicate β i,h bits to s h . From the MDS property, we know that any collection C ⊆ [n] \ {h} of k − 1 of nodes {s i } i∈C along with s h is able to construct our original file M.
Thus the collective information of these k storage nodes must be at least |M| = kℓ, implying the inequality
$$\sum_{i \in C} |s_i| + \sum_{i \in [n] \setminus (C \cup \{h\})} \beta_{i,h} \;\ge\; k\ell. \qquad (20)$$
Since every storage node stores ℓ bits (|s_i| = ℓ), inequality (20) reduces to
$$\sum_{i \in [n] \setminus (C \cup \{h\})} \beta_{i,h} \;\ge\; \ell. \qquad (21)$$
Hence (21) says that any n − k helper storage nodes collectively communicate at least ℓ bits. Summing (21) over all possible collections of n − k helper storage nodes, we find
$$\sum_{i \in [n] \setminus \{h\}} \beta_{i,h} \;\ge\; \frac{n-1}{n-k} \cdot \ell, \qquad (22)$$
which is the claimed cutset bound. Moreover, to achieve equality in (22), equality must be achieved in (21) for all collections of n − k helper storage nodes. That is possible only when β_{i,h} = ℓ/(n − k) for all i ∈ [n] \ {h}. Hence, under optimal repair bandwidth, the total information communicated is Σ_{i ∈ [n]\{h}} β_{i,h} = (n − 1)ℓ/(n − k), and it is achieved only when every helper storage node communicates exactly ℓ/(n − k) bits to storage node s_h. | 6,904 |
1901.04969 | 2910420318 | We report FPGA implementation results of low precision CNN convolution layers optimized for sparse and constant parameters. We describe techniques that amortizes the cost of common factor multiplication and automatically leverage dense hand tuned LUT structures. We apply this method to corner case residual blocks of Resnet on a sparse Resnet50 model to assess achievable utilization and frequency and demonstrate an effective performance of 131 and 23 TOP chip for the corner case blocks. The projected performance on a multichip persistent implementation of all Resnet50 convolution layers is 10k im s chip at batch size 2. This is 1.37x higher than V100 GPU upper bound at the same batch size after normalizing for sparsity. | In recent years, Convolutional Neural Networks(CNN) have demonstrated great efficacy on computer vision tasks such as classification @cite_8 , localization @cite_10 , and SRGAN @cite_6 . Together with Recurrent Neural Networks(RNN), it has motivated the development of custom silicon for Deep Learning(DL). For example, GPU Tensor Core @cite_1 , TPU @cite_2 and Graphcore @cite_4 . | {
"abstract": [
"",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"",
"Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.",
"Many architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. This paper evaluates a custom ASIC---called a Tensor Processing Unit (TPU) --- deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95 of our datacenters' NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X -- 30X faster than its contemporary GPU or CPU, with TOPS Watt about 30X -- 80X higher. Moreover, using the CPU's GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS Watt to nearly 70X the GPU and 200X the CPU.",
"We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork."
],
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_2",
"@cite_10"
],
"mid": [
"",
"2194775991",
"",
"2963470893",
"2606722458",
"2963037989"
]
} | Low Precision Constant Parameter CNN on FPGA | II. COMPILED CNN IMPLEMENTATION A. Resnet50 Model Sparsity and Precision
An advantage of Compiled CNNs is their ability to exploit fine grained parameter sparsity without overhead. Multiply-Accumulates (MAC) associated with constant zeros are simply dropped. AMC [8] showed an 80% sparse Resnet50 with no accuracy loss. We use an 80% sparse model from Movidius [9] as a proxy. We received a pre-quantized model, obtained using a modified version of TRN [10], as our starting point. In this version of TRN, each output channel has one independent scaling factor, and 6 residual terms (equivalent to INT7) are used to obtain an accuracy loss of just 0.22% vs FP32. Table I shows that the key design parameters of conv3 x and conv4 x are bounded by those of conv2 x and conv5 x. The design corners are therefore represented by conv2 x and conv5 x. We focus our efforts on these 2 layers as a way to quickly assess the potential of Low Precision Compiled CNNs. The basic unit of design is the Resnet Residual Block (Fig 1). We require that this fits on a single chip to keep residual shortcut data on chip. The top level is divided into 2 modules. The Kernel implements the CNN Multiply-Accumulates (MAC). Everything else is part of the Non Kernel (NK). The benefits of constant parameter optimization are found mostly in the Kernel, and we focus our efforts there. The NK is, at present, modestly optimized. Where needed, we fill the GX280 with several slightly modified Residual Blocks to evaluate resource use, frequency and routability at high utilization.
B. Selection of Resnet Layers for Implementation
Compiling post-training parameters into FPGAs yields a persistent network. Therefore, we expect a multichip implementation with suitable interchip links, such as Ethernet or PCIE. To minimize the number of chips needed, we use bit serial math in the Kernel. While bit serial operations are slower, they are also smaller and consequently more numerous. In theory, bit serial math does not reduce performance per Logic Element (LE). In practice, performance is lower because typical implementations are unable to use the hard adders and carry chains built into modern FPGAs. We will present a solution to overcome this inefficiency.
D. Non Kernel Design
The NK module includes everything that is not part of a single convolutional step. This includes buffers for intermediate feature maps, data movement between Kernels and other operations such as bias add, activation functions, normalization and rounding. It also performs the partial result accumulation across convolutional steps for filter sizes greater than 1x1.
We intend to automatically generate the NK RTL in the future but it is currently hand coded. We have also set aside resources for implementing the interchip Ethernet/PCIE channels but have not yet done so.
1) Buffers: The Buffers are constructed using FPGA block RAMs as either (a) streaming FIFOs or (b) double buffers. Buffer type is selected based on layer dataflow, FPGA resource allocation and time division multiplexing (TDM) of the kernel operation. In all cases, interchip buffers are double buffers.
The residual shortcut shown on top of the Fig 1 may have an additional buffer and 1x1 kernel if needed. The buffers at the input and output boundary of a Residual Block is shared with the prior and next Residual Block respectively if they are on the same chip. Otherwise, the buffers also serve as the chip level input and output buffer.
2) Feeder: The feeder fetches parallel data stored in the Buffer and serializes it to feed the bit serial Kernel as well as the residual shortcuts. Deserialization is not needed because the bit serial Kernel generates a parallel output.
3) Accumulator: The Accumulator block adds the partial sums across multiple convolutional steps and is not required for layers with 1x1 filters. When the final sum is ready, it streams the sum to the Collector block.
4) Collector:
The collector performs miscellaneous operations such as bias addition, scaling (normalization) and ReLU. The last collector within each Residual Block also adds the shortcut data. The activations are also saturated and rounded to 8 bits here. The design uses as many bits as necessary at other stages to avoid rounding/saturation elsewhere, and we make use of constant parameter and ReLU properties to minimize the number of bits needed at every stage. DSP blocks used for scaling are shared by multiple Output Feature Maps (OFM). This is possible because bit serial math requires multiple clocks per operation while DSPs require only one.
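Functionally, the collector's datapath can be summarized as follows. This is a behavioural sketch with an assumed ordering of operations (bias, scale, shortcut add, ReLU, then INT8 saturation); the exact hardware ordering and rounding mode are not specified here and are our assumption.

```python
import numpy as np

def collector(acc, bias, scale, shortcut=None, relu=True):
    # acc: wide integer partial sums, one row per output channel
    x = acc.astype(np.int64) + bias[:, None]          # bias add
    x = np.rint(x * scale[:, None])                   # per-channel scaling (normalization)
    if shortcut is not None:
        x = x + shortcut                              # residual add (last collector only)
    if relu:
        x = np.maximum(x, 0)                          # activation
    return np.clip(x, -128, 127).astype(np.int8)      # saturate and round to 8 bits
```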
E. Kernel Design Compilation and Flow
We developed a tool to automatically convert the constant parameters of a single convolution layer stored in a caffemodel into the Kernel RTL optimized for Intel Stratix 10. The NK consumes the RTL as a blackbox and is unaffected by changes in parameter values and retraining. The Kernel design shown in Fig 2 is composed of the module inputs and outputs (Xm and Yn), the Common Factor Mass Multiplication (CFMM) blocks, the bit serial adder tree (Add Yn) and the shift right accumulator (Shr Acc Yn). We reduce the cost of multiplication by amortizing it across multiple operations sharing the same Common Factor (CF). To enable this, we refactor the computation to take a set of inputs from an input feature map (IFM) and perform all computations for it in a single pass. This turns the input values into a CF. If the multiplier values are well distributed and numerous relative to the number of unique products, then every product is likely needed and the optimal CFMM design is trivial. We will show that the number of unique multiplications required for an INT7 parameter is 32 for Compiled CNNs. This is small relative to the typical number of CF multiplications in CNNs even at 80% sparsity. We optimize the design for this case and allow the vendor tools to remove unused output products during synthesis, should they exist.
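The following sketch (an illustration we added, with made-up weights) expresses the refactoring in software terms: one IFM value acts as the common factor, each required product is formed once, and the results are distributed to the per-OFM accumulators.

def cfmm_step(x, weights_per_ofm, acc):
    # x: one input feature map value (the common factor, CF)
    # weights_per_ofm: the constant weight applied to x for each OFM (known at compile time)
    # acc: running accumulator per OFM
    products = {}                                          # each distinct product is built once
    for w in set(abs(w) for w in weights_per_ofm if w != 0):
        products[w] = x * w                                # shared across all OFMs needing |w|
    for ofm, w in enumerate(weights_per_ofm):
        if w != 0:
            acc[ofm] += products[abs(w)] if w > 0 else -products[abs(w)]
    return acc

acc = cfmm_step(x=7, weights_per_ofm=[3, -3, 0, 5], acc=[0, 0, 0, 0])
print(acc)   # [21, -21, 0, 35] -- the product 7*3 is computed once and reused with its sign applied later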
To minimize the number of unique products to compute, we move the sign bit of the parameter into the adder tree. This places positive and negative values in the same equivalence class and reduces the number of unique INT7 products to 64. Also, even products can be produced with a shift left of an odd product, and constant shift lefts are free on FPGAs (costing 1 flop for bit serial math). Thus an INT7 CFMM block only has 32 unique products, and multiplication by 0 and 1 is free. Now note that (a) when generating all products, each incremental product can be generated using one add/sub and (b) the cost of a bit serial adder is about 1 ALM. Therefore, the first order cost of a CFMM block is only about 30 ALMs plus flops. This renders the ALM cost of multiplication trivial. However, each product must still be routed to an adder tree. This makes FPGA routability a fundamental limiter on the efficiency of CFMM based multiplication.
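The 32-product claim can be checked with a few lines of Python: after folding the sign into the adder tree and deriving even products from odd ones by constant shifts, only the odd magnitudes remain.

def odd_part(m):
    # Strip trailing zero bits: an even product is a free shift-left of an odd one.
    while m % 2 == 0:
        m //= 2
    return m

int7_magnitudes = {abs(v) for v in range(-64, 64) if v != 0}   # sign handled in the adder tree
unique_products = {odd_part(m) for m in int7_magnitudes}
print(len(unique_products))   # 32 -- the odd values 1..63; multiplication by 0 and 1 is free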
2) Adder Tree and Shift Right Accumulator: Each CFMM block computes a product for all OFMs for one IFM. However, each OFM output is a sum of the products from multiple IFMs. Therefore, the design requires one CFMM per IFM and one adder tree for each OFM to sum the products from all CFMM blocks. A simple example is shown in the accompanying figure. With cheap CFMM multipliers, adder trees become the main consumer of ALMs. However, bit serial math is poorly supported by current FPGA architecture and tools. We find that vendor tools implement 3:2 reduction and bit serial math at half the efficiency of parallel adds. To resolve this, we introduce (a) a hand tuned WYSIWYG design for Intel Stratix 10 with (b) carry hiding. It performs a 6:3 reduction in one ALM stage and asymptotically uses only 3 ALMs. This is double the efficiency of vendor tools and brings the efficiency of bit serial adders back in line with parallel adders. Fig 5 describes a 5 ALM variant of this structure for adding 12 bits. Several variants of this design were built as hand coded WYSIWYG modules which are automatically instantiated by the tools converting caffemodels to RTL. This leverages the efficiency of hand tuned designs while hiding its complexity.
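As a purely functional reference for the 6:3 reduction (ignoring the ALM-level mapping and carry hiding, which are hardware details not captured here), a compressor simply re-encodes six input bits as a 3-bit count of the ones:

from itertools import product

def compress_6_3(bits):
    # Six 1-bit inputs are reduced to a 3-bit binary count of the ones.
    s = sum(bits)
    return (s & 1, (s >> 1) & 1, (s >> 2) & 1)

# Exhaustive check that the 3 output bits always encode the number of ones.
assert all(
    sum(b << i for i, b in enumerate(compress_6_3(bits))) == sum(bits)
    for bits in product((0, 1), repeat=6)
)
print("6:3 compression verified for all 64 input patterns")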
The largest variant uses 10 ALMs and adds up to 27 bits. While denser variants use fewer ALMs, they may negatively impact routability and frequency. To determine the optimal variant, we sweep the variants over parameters such as pipelining and accumulator reset strategy. We find that the variant in Fig 5 with one pipeline stage for every 2nd adder stage best meets our frequency and routability requirements.
Also, the adder tree adds a single bit in a bit serial value every clock. To get the final sum, a shift right accumulator(SRA) is added to the end of the adder tree. The SRA simply shifts right the sum of bit N and accumulates it with the sum for bit N+1. For efficiency, the adder trees perform 1's complement math. A constant modifier in the NK module's bias adder converts the final sum into 2's complement for free.
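A software model of the adder tree plus shift right accumulator is sketched below; each cycle the tree supplies the column sum for one bit position (LSB first) and the SRA emits one result bit while carrying the remainder forward. The word width and operands are arbitrary choices for the example.

def bit_serial_sum(operands, width=16):
    # operands: non-negative integers added bit-serially, LSB first.
    result, carry = 0, 0
    for t in range(width):
        column = sum((x >> t) & 1 for x in operands)   # adder tree output for bit position t
        total = column + carry
        result |= (total & 1) << t                     # SRA emits the result bit for position t
        carry = total >> 1                             # remainder is shifted into the next cycle
    return result + (carry << width)                   # flush any remaining carry

vals = [13, 200, 77, 9, 1023]
assert bit_serial_sum(vals) == sum(vals)
print(bit_serial_sum(vals))   # 1322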
3) Multi Instance Kernels and Folding: Compiled CNNs require that every parameter be hardened into the FPGA. Table I shows that the amount of compute per parameter differs by up to 64x between conv2 x and conv5 x. To maintain the same throughput at every layer we may need to fold (TDM) the kernels or use multiple instances as appropriate.
Kernel folding is implemented using muxes. As a result, the almost free CFMM multiplications now each require a mux. In multi instance kernels, each instance corresponds to one convolutional step. For efficiency, the multi instance Kernels directly sum the partial products across multiple convolutional steps. Fig 6 illustrates this for a 4 instance kernel with a 3x3 filter summed into a 3x6 output slice.
1) Residual Block on GX280:
We implement conv2 2 and conv5 2 using the techniques described above. As noted, we require that Residual Blocks be fully contained within one chip. Initial experiments show that conv5 2 must be folded by 4x to fit on the GX280. Also, 8 instances of conv2 2 kernels at 2x the frequency of the conv5 2 layer are required to match their inference throughput. This is implemented as 2 conv2 2 Kernel modules with 4 instances arranged as in Fig 6. The design targets and implementation results are summarized in Table II. To simulate high chip utilization we duplicated the conv2 2 Kernels 5 times. This is not needed for conv5 2. However, the conv5 2 Kernels contain duplicates of each CFMM block to alleviate routing congestion. The effective TOP/Chip reports the number of effective TOPs, which includes the benefits of sparsity.
2) Projections and Comparisons: TOP/chip of the GX280 is an inaccurate measure of the fundamental FPGA capability. Compiled CNNs are DSP light and use <20% of the DSPs on the DSP heavy GX280. At the same performance density, the DSP light GX550 would yield 131 and 23 TOP/chip for conv2 2 and conv5 2 respectively. Finally, we use the demonstrated implementation results to estimate the resource requirements for the remaining convolution layers. This was used to create a reasonable multichip partitioning of Resnet50 (Fig 7). It is throughput balanced and requires at most 75Gbps links. At the demonstrated frequencies, its throughput is 53061 images/second at batch 2. This corresponds to 5896 and 10612 im/s/chip on GX280 and GX550 respectively. The throughput of a V100 in a DGX-1 system [11] at batch 2 is 1544 im/s/chip. If the V100 can extract a 5x efficiency gain from the 80% unstructured sparsity in the model, its upper bound performance would be 7720 im/s/chip. Our implementation is 1.37x faster than that bound. At submission time, the full system performance is an estimate and has not been validated. Additionally, it does not include the FC layer, which we intend to offload to the CPU. However, we feel that the optimistic V100 assumptions more than make up for it, and the ability of Compiled CNNs to naturally exploit unstructured sparsity without overhead is a fundamental benefit of this approach. Similarly, the ability to use INT7 with accuracy similar to INT8 and FP32 is a legitimate strength of FPGAs in general.
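The headline comparison can be reproduced from the numbers quoted above; the implied chip counts (about 9 GX280s or 5 GX550s for the partitioning) are an inference from the per-chip figures rather than values stated in the text, so treat them as assumptions.

total_im_s = 53061
gx280_per_chip, gx550_per_chip = 5896, 10612
print(round(total_im_s / gx280_per_chip))            # ~9 GX280 chips (inferred)
print(round(total_im_s / gx550_per_chip))            # ~5 GX550 chips (inferred)

v100_dense = 1544                                    # im/s/chip at batch 2 (DGX-1, [11])
v100_sparse_bound = v100_dense * 5                   # optimistic 5x credit for 80% sparsity
print(v100_sparse_bound)                             # 7720
print(round(gx550_per_chip / v100_sparse_bound, 2))  # 1.37x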
IV. FUTURE WORK
Future work may include full network implementation plus power and latency measurements which should account for inter chip link power and latency. We note that the low clock frequency is a positive for power. A more complete comparison of our work against sparse persistent GPU and FPGA implementations would also be useful. Finally, additional improvements to Compiled CNNs performance through tools, IP design or FPGA architecture may be explored.
V. CONCLUSION
We proposed the use of Compiled CNNs to improve FPGA efficiency and exploit parameter sparsity during CNN inferencing. We introduced techniques to amortize multiplication cost and automated tools that exploit the efficiency of hand tuned designs. We then demonstrated these techniques on sample corner case residual blocks of Resnet50 and use the results to estimate performance for all Resnet50 convolution layers. The projected performance on GX550 is 10612 im/s/chip which is 1.37x higher than the V100 upper bound at the same batch size after normalizing for sparsity. | 2,172 |
1901.04969 | 2910420318 | We report FPGA implementation results of low precision CNN convolution layers optimized for sparse and constant parameters. We describe techniques that amortize the cost of common factor multiplication and automatically leverage dense hand tuned LUT structures. We apply this method to corner case residual blocks of Resnet on a sparse Resnet50 model to assess achievable utilization and frequency and demonstrate an effective performance of 131 and 23 TOP/chip for the corner case blocks. The projected performance on a multichip persistent implementation of all Resnet50 convolution layers is 10k im/s/chip at batch size 2. This is 1.37x higher than the V100 GPU upper bound at the same batch size after normalizing for sparsity. | There has also been work to optimize DL on programmable logic. Notably, Song @cite_3 proposed software-hardware co-design. While silicon implementations must customize for a range of DL applications, an FPGA can customize to a single DL application. This enables application specific customization of precision, sparsity and network structure. However, this is not the limit of FPGA customization. FPGAs can be further customized to a specific instance of a DL application by implementing post training parameters as constants. We call this a Compiled CNN or RNN. | {
"abstract": [
"Long Short-Term Memory (LSTM) is widely used in speech recognition. In order to achieve higher prediction accuracy, machine learning scientists have built increasingly larger models. Such large model is both computation intensive and memory intensive. Deploying such bulky model results in high power consumption and leads to a high total cost of ownership (TCO) of a data center. To speedup the prediction and make it energy efficient, we first propose a load-balance-aware pruning method that can compress the LSTM model size by 20x (10x from pruning and 2x from quantization) with negligible loss of the prediction accuracy. The pruned model is friendly for parallel processing. Next, we propose a scheduler that encodes and partitions the compressed model to multiple PEs for parallelism and schedule the complicated LSTM data flow. Finally, we design the hardware architecture, named Efficient Speech Recognition Engine (ESE) that works directly on the sparse LSTM model. Implemented on Xilinx KU060 FPGA running at 200MHz, ESE has a performance of 282 GOPS working directly on the sparse LSTM network, corresponding to 2.52 TOPS on the dense one, and processes a full LSTM for speech recognition with a power dissipation of 41 Watts. Evaluated on the LSTM for speech recognition benchmark, ESE is 43x and 3x faster than Core i7 5930k CPU and Pascal Titan X GPU implementations. It achieves 40x and 11.5x higher energy efficiency compared with the CPU and GPU respectively."
],
"cite_N": [
"@cite_3"
],
"mid": [
"2585720638"
]
} | Low Precision Constant Parameter CNN on FPGA | II. COMPILED CNN IMPLEMENTATION A. Resnet50 Model Sparsity and Precision
An advantage of Compiled CNNs is their ability to exploit fine grained parameter sparsity without overhead. Multiply-Accumulates (MAC) associated with constant zeros are simply dropped. AMC [8] showed an 80% sparse Resnet50 with no accuracy loss. We use an 80% sparse model from Movidius [9] as a proxy. We received a pre-quantized model obtained using a modified version of TRN [10] as our starting point. In this version of TRN, each output channel has one independent scaling factor and 6 residual terms (equivalent to INT7) are used to obtain an accuracy loss of just 0.22% vs FP32. Table I shows that the key design parameters of conv3 x and conv4 x are bounded by those of conv2 x and conv5 x. The design corners are therefore represented by conv2 x and conv5 x. We focus our efforts on these 2 layers as a way to quickly assess the potential of Low Precision Compiled CNNs. The basic unit of design is the Resnet Residual Block (Fig 1). We require that this fits on a single chip to keep residual shortcut data on chip. The top level is divided into 2 modules. The Kernel implements the CNN Multiply-Accumulates (MAC). Everything else is part of the Non Kernel (NK). The benefits of constant parameter optimization are found mostly in the Kernel and we focus our efforts there. The NK is, at present, modestly optimized. Where needed we fill the GX280 with several slightly modified Residual Blocks to evaluate resource use, frequency and routability at high utilization.
B. Selection of Resnet Layers for Implementation
Compiling post training parameters into FPGAs yields a persistent network. Therefore, we expect a multichip implementation with suitable interchip links such as Ethernet or PCIE. To minimize the number of chips needed, we use bit serial math in the Kernel. While bit serial operations are slower, they are also smaller and consequently more numerous. In theory, bit serial math does not reduce performance per Logic Element (LE). In practice, it is lower because typical implementations are unable to use the hard adders and carry chains built into modern FPGAs. We will present a solution to overcome this inefficiency.
D. Non Kernel Design
The NK module includes everything that is not part of a single convolutional step. This includes buffers for intermediate feature maps, data movement between Kernels and other operations such as bias add, activation functions, normalization and rounding. It also performs the partial result accumulation across convolutional steps for filter sizes greater than 1x1.
We intend to automatically generate the NK RTL in the future but it is currently hand coded. We have also set aside resources for implementing the interchip Ethernet/PCIE channels but have not yet done so.
1) Buffers: The Buffers are constructed using FPGA block RAMs as either (a) streaming FIFOs or (b) double buffers. Buffer type is selected based on layer dataflow, FPGA resource allocation and time division multiplexing (TDM) of the kernel operation. In all cases, interchip buffers are double buffers.
The residual shortcut shown at the top of Fig 1 may have an additional buffer and 1x1 kernel if needed. The buffers at the input and output boundaries of a Residual Block are shared with the prior and next Residual Block respectively if they are on the same chip. Otherwise, the buffers also serve as the chip level input and output buffers.
2) Feeder: The feeder fetches parallel data stored in the Buffer and serializes it to feed the bit serial Kernel as well as the residual shortcuts. Deserialization is not needed because the bit serial Kernel generates a parallel output.
3) Accumulator: The Accumulator block adds the partial sums across multiple convolutional steps and is not required for layers with 1x1 filters. When the final sum is ready, it streams the sum to the Collector block.
4) Collector:
The collector performs miscellaneous operations such as bias addition, scaling (normalization) and ReLU. The last collector within each Residual Block also adds the shortcut data. The activations are also saturated and rounded to 8 bits here. The design uses as many bits as necessary at other stages to avoid rounding/saturation elsewhere and we make use of constant parameter and ReLU properties to minimize the number of bits needed at every stage. DSP blocks used for scaling are shared by multiple Output Feature Maps (OFM). This is possible because bit serial math requires multiple clocks per operation while DSPs require only one.
E. Kernel Design Compilation and Flow
We developed a tool to automatically convert the constant parameters of a single convolution layer stored in a caffemodel into the Kernel RTL optimized for Intel Stratix 10. The NK consumes the RTL as a blackbox and is unaffected by changes in parameter values and retraining. The Kernel design shown in Fig 2 is composed of the module inputs and outputs (Xm and Yn), the Common Factor Mass Multiplication (CFMM) blocks, the bit serial adder tree (Add Yn) and the shift right accumulator (Shr Acc Yn). We reduce the cost of multiplication by amortizing it across multiple operations sharing the same Common Factor (CF). To enable this, we refactor the computation to take a set of inputs from an input feature map (IFM) and perform all computations for it in a single pass. This turns the input values into a CF. If the multiplier values are well distributed and numerous relative to the number of unique products, then every product is likely needed and the optimal CFMM design is trivial. We will show that the number of unique multiplications required for an INT7 parameter is 32 for Compiled CNNs. This is small relative to the typical number of CF multiplications in CNNs even at 80% sparsity. We optimize the design for this case and allow the vendor tools to remove unused output products during synthesis, should they exist.
To minimize the number of unique products to compute, we move the sign bit of the parameter into the adder tree. This places positive and negative values in the same equivalence class and reduces the number of unique INT7 products to 64. Also, even products can be produced with a shift left of an odd product, and constant shift lefts are free on FPGAs (costing 1 flop for bit serial math). Thus an INT7 CFMM block only has 32 unique products, and multiplication by 0 and 1 is free. Now note that (a) when generating all products, each incremental product can be generated using one add/sub and (b) the cost of a bit serial adder is about 1 ALM. Therefore, the first order cost of a CFMM block is only about 30 ALMs plus flops. This renders the ALM cost of multiplication trivial. However, each product must still be routed to an adder tree. This makes FPGA routability a fundamental limiter on the efficiency of CFMM based multiplication.
2) Adder Tree and Shift Right Accumulator: Each CFMM block computes a product for all OFMs for one IFM. However, each OFM output is a sum of the products from multiple IFMs. Therefore, the design requires one CFMM per IFM and one adder tree for each OFM to sum the products from all CFMM blocks. A simple example is shown in the accompanying figure. With cheap CFMM multipliers, adder trees become the main consumer of ALMs. However, bit serial math is poorly supported by current FPGA architecture and tools. We find that vendor tools implement 3:2 reduction and bit serial math at half the efficiency of parallel adds. To resolve this, we introduce (a) a hand tuned WYSIWYG design for Intel Stratix 10 with (b) carry hiding. It performs a 6:3 reduction in one ALM stage and asymptotically uses only 3 ALMs. This is double the efficiency of vendor tools and brings the efficiency of bit serial adders back in line with parallel adders. Fig 5 describes a 5 ALM variant of this structure for adding 12 bits. Several variants of this design were built as hand coded WYSIWYG modules which are automatically instantiated by the tools converting caffemodels to RTL. This leverages the efficiency of hand tuned designs while hiding its complexity.
The largest variant uses 10 ALMs and adds up to 27 bits. While denser variants use fewer ALMs, they may negatively impact routability and frequency. To determine the optimal variant, we sweep the variants over parameters such as pipelining and accumulator reset strategy. We find that the variant in Fig 5 with one pipeline stage for every 2nd adder stage best meets our frequency and routability requirements.
Also, the adder tree adds a single bit in a bit serial value every clock. To get the final sum, a shift right accumulator(SRA) is added to the end of the adder tree. The SRA simply shifts right the sum of bit N and accumulates it with the sum for bit N+1. For efficiency, the adder trees perform 1's complement math. A constant modifier in the NK module's bias adder converts the final sum into 2's complement for free.
3) Multi Instance Kernels and Folding: Compiled CNNs require that every parameter be hardened into the FPGA. Table I shows that the amount of compute per parameter differs by up to 64x between conv2 x and conv5 x. To maintain the same throughput at every layer we may need to fold (TDM) the kernels or use multiple instances as appropriate.
Kernel folding is implemented using muxes. As a result, the almost free CFMM multiplications now each require a mux. In multi instance kernels, each instance corresponds to one convolutional step. For efficiency, the multi instance Kernels directly sum the partial products across multiple convolutional steps. Fig 6 illustrates this for a 4 instance kernel with a 3x3 filter summed into a 3x6 output slice.
1) Residual Block on GX280:
We implement conv2 2 and conv5 2 using the techniques described above. As noted, we require that Residual Blocks be fully contained within one chip. Initial experiments show that conv5 2 must be folded by 4x to fit on the GX280. Also, 8 instances of conv2 2 kernels at 2x the frequency of the conv5 2 layer are required to match their inference throughput. This is implemented as 2 conv2 2 Kernel modules with 4 instances arranged as in Fig 6. The design targets and implementation results are summarized in Table II. To simulate high chip utilization we duplicated the conv2 2 Kernels 5 times. This is not needed for conv5 2. However, the conv5 2 Kernels contain duplicates of each CFMM block to alleviate routing congestion. The effective TOP/Chip reports the number of effective TOPs, which includes the benefits of sparsity.
2) Projections and Comparisons: TOP/chip of the GX280 is an inaccurate measure of the fundamental FPGA capability. Compiled CNNs are DSP light and use <20% of the DSPs on the DSP heavy GX280. At the same performance density, the DSP light GX550 would yield 131 and 23 TOP/chip for conv2 2 and conv5 2 respectively. Finally, we use the demonstrated implementation results to estimate the resource requirements for the remaining convolution layers. This was used to create a reasonable multichip partitioning of Resnet50 (Fig 7). It is throughput balanced and requires at most 75Gbps links. At the demonstrated frequencies, its throughput is 53061 images/second at batch 2. This corresponds to 5896 and 10612 im/s/chip on GX280 and GX550 respectively. The throughput of a V100 in a DGX-1 system [11] at batch 2 is 1544 im/s/chip. If the V100 can extract a 5x efficiency gain from the 80% unstructured sparsity in the model, its upper bound performance would be 7720 im/s/chip. Our implementation is 1.37x faster than that bound. At submission time, the full system performance is an estimate and has not been validated. Additionally, it does not include the FC layer, which we intend to offload to the CPU. However, we feel that the optimistic V100 assumptions more than make up for it, and the ability of Compiled CNNs to naturally exploit unstructured sparsity without overhead is a fundamental benefit of this approach. Similarly, the ability to use INT7 with accuracy similar to INT8 and FP32 is a legitimate strength of FPGAs in general.
IV. FUTURE WORK
Future work may include full network implementation plus power and latency measurements which should account for inter chip link power and latency. We note that the low clock frequency is a positive for power. A more complete comparison of our work against sparse persistent GPU and FPGA implementations would also be useful. Finally, additional improvements to Compiled CNNs performance through tools, IP design or FPGA architecture may be explored.
V. CONCLUSION
We proposed the use of Compiled CNNs to improve FPGA efficiency and exploit parameter sparsity during CNN inferencing. We introduced techniques to amortize multiplication cost and automated tools that exploit the efficiency of hand tuned designs. We then demonstrated these techniques on sample corner case residual blocks of Resnet50 and use the results to estimate performance for all Resnet50 convolution layers. The projected performance on GX550 is 10612 im/s/chip which is 1.37x higher than the V100 upper bound at the same batch size after normalizing for sparsity. | 2,172 |
1901.04805 | 2964234547 | The widespread adoption of Internet of Things has led to many security issues. Recently, there have been malware attacks on IoT devices, the most prominent one being that of Mirai. IoT devices such as IP cameras, DVRs and routers were compromised by the Mirai malware and later large-scale DDoS attacks were propagated using those infected devices (bots) in October 2016. In this research, we develop a network-based algorithm which can be used to detect IoT bots infected by Mirai or similar malware in large-scale networks (e.g. ISP network). The algorithm particularly targets bots scanning the network for vulnerable devices since the typical scanning phase for botnets lasts for months and the bots can be detected much before they are involved in an actual attack. We analyze the unique signatures of the Mirai malware to identify its presence in an IoT device. Further, to optimize the usage of computational resources, we use a two-dimensional (2D) packet sampling approach, wherein we sample the packets transmitted by IoT devices both across time and across the devices. Leveraging the Mirai signatures identified and the 2D packet sampling approach, a bot detection algorithm is proposed. We use testbed measurements and simulations to study the relationship between bot detection delays and the sampling frequencies for device packets. Subsequently, we derive insights from the obtained results and use them to design our proposed bot detection algorithm. Finally, we discuss the deployment of our bot detection algorithm and the countermeasures which can be taken post detection. | There has also been some research on intrusion detection and anomaly detection systems for IoT. A whitelist-based intrusion detection system for IoT devices (Heimdall) has been presented in @cite_1 . Heimdall is based on dynamic profile learning and is designed to work on routers acting as gateways for IoT devices. The authors in @cite_2 propose an intrusion detection model for IoT backbone networks leveraging two-layer dimension reduction and two-tier classification techniques to detect U2R (User-to-Root) and R2L (Remote-to-Local) attacks. In a recently published paper @cite_11 , deep-autoencoders based anomaly detection has been used to detect attacks launched from IoT botnets. The method consists of extraction of statistical features from behavioral snapshots of normal IoT device traffic captures, training of a deep learning-based autoencoder (for each IoT device) on the extracted features and comparison of the reconstruction error for traffic observations with a threshold for normal-anomalous classification. The proposed detection method was evaluated on Mirai and BASHLITE botnets formed using commercial IoT devices. | {
"abstract": [
"The proliferation of IoT devices that can be more easily compromised than desktop computers has led to an increase in IoT-based botnet attacks. To mitigate this threat, there is a need for new methods that detect attacks launched from compromised IoT devices and that differentiate between hours- and milliseconds-long IoT-based attacks. In this article, we propose a novel network-based anomaly detection method for the IoT called N-BaIoT that extracts behavior snapshots of the network and uses deep autoencoders to detect anomalous network traffic from compromised IoT devices. To evaluate our method, we infected nine commercial IoT devices in our lab with two widely known IoT-based botnets, Mirai and BASHLITE. The evaluation results demonstrated our proposed methods ability to accurately and instantly detect the attacks as they were being launched from the compromised IoT devices that were part of a botnet.",
"The Internet of Things (IoT) is built of many small smart objects continuously connected to the Internet. This makes these devices an easy target for attacks exploiting vulnerabilities at the network, application, and mobile level. With that it comes as no surprise that distributed denial of service attacks leveraging these vulnerable devices have become a new standard for effective botnets. In this paper, we propose Heimdall, a whitelist-based intrusion detection technique tailored to IoT devices. Heimdall operates on routers acting as gateways for IoT as a homogeneous defense for all devices behind the router. Our experimental results show that our defense mechanism is effective and has minimal overhead.",
"With increasing reliance on Internet of Things (IoT) devices and services, the capability to detect intrusions and malicious activities within IoT networks is critical for resilience of the network infrastructure. In this paper, we present a novel model for intrusion detection based on two-layer dimension reduction and two-tier classification module, designed to detect malicious activities such as User to Root (U2R) and Remote to Local (R2L) attacks. The proposed model is using component analysis and linear discriminate analysis of dimension reduction module to spate the high dimensional dataset to a lower one with lesser features. We then apply a two-tier classification module utilizing Naive Bayes and Certainty Factor version of K-Nearest Neighbor to identify suspicious behaviors. The experiment results using NSL-KDD dataset shows that our model outperforms previous models designed to detect U2R and R2L attacks."
],
"cite_N": [
"@cite_11",
"@cite_1",
"@cite_2"
],
"mid": [
"2799758613",
"2614230424",
"2557450880"
]
} | Early Detection Of Mirai-Like IoT Bots In Large-Scale Networks Through Sub-Sampled Packet Traffic Analysis | The number of devices has been increasing steadily (albeit at a slower rate than some earlier generous predictions), and this trend is expected to hold in the future.
IoT devices are being increasingly targeted by hackers using malware (malicious software) as they are easier to infect than conventional computers for the following reasons [3][4][5]:
-There are many legacy IoT devices connected to the Internet with no security updates.
-Security is given a low priority within the development cycle of IoT devices.
-Implementing conventional cryptography in IoT devices is computationally expensive due to processing power and memory constraints.
In a widely publicized attack, the IoT malware Mirai was used to propagate the biggest DDoS (Distributed Denial-of-Service) attack on record on October 21, 2016. The attack targeted the Dyn DNS (Domain Name Service) servers [6] and generated an attack throughput of the order of 1.2 Tbps. It disabled major internet services such as Amazon, Twitter and Netflix. The attackers had infected IoT devices such as IP cameras and DVR recorders with Mirai, thereby creating an army of bots (botnet) to take part in the DDoS attack. Apart from Mirai, there are other IoT malware which operate using a similar brute force technique of scanning random IP addresses for open ports and attempting to login using a built-in dictionary of commonly used credentials. BASHLITE [7], Remaiten [8], Hajime [9] are some examples of these IoT malware. Bots compromised by Mirai or similar IoT malware can be used for DDoS attacks, phishing and spamming [10]. These attacks can cause network downtime for long periods which may lead to financial loss to network companies, and leak users' confidential data. McAfee reported in April 2017 [11] that about 2.5 million IoT devices were infected by Mirai in late 2016. Bitdefender mentioned in its blog in September 2017 [12] that researchers had estimated at least 100,000 devices infected by Mirai or similar malware revealed daily through telnet scanning telemetry data. Further, many of the infected devices are expected to remain infected for a long time. Therefore, there is a substantial motivation for detecting these IoT bots and taking appropriate action against them so that they are unable to cause any further damage.
As pointed out in [13], attempting to ensure that all IoT devices are secure-by-construction is futile as there will always be insecure devices (with patched and unpatched vulnerabilities) connected to the Internet due to the scale and diversity of IoT devices and vendors. Moreover, considering the lack of full-fledged operating systems, low power requirements, resource constraints and the presence of legacy devices, it is practically unfeasible to deploy traditional host-based detection and prevention mechanisms such as antivirus software and firewalls on IoT devices. Therefore, it becomes imperative that the security mechanisms for the IoT ecosystem are designed to be network-based rather than host-based.
In this research, we propose a network-based algorithm which can be used to detect IoT bots infected by Mirai-like malware (which use port-based scanning) in large-scale networks. Bots scanning the network for vulnerable devices are targeted in particular by our algorithm. This is because the scanning and propagation phase of the botnet life-cycle stretches over many months and we can detect and isolate the bots before they can participate in an actual attack such as DDoS. If the DDoS attack has already occurred (due to a botnet), detecting the attack itself is not that difficult and there are already existing methods both in literature and industry to defend against such attacks. Moreover, our algorithm is practical in terms of utilization of computational resources (such as CPU processing power, memory). For example, ISP (Internet Service Provider) network operators can use the proposed algorithm to identify infected IoT devices in their network. The operators can then take suitable countermeasures such as blocking the traffic originating from IoT bots and notifying the local network administrators. Actions that can be taken post bot detection are further discussed in a later section. The major contributions of this paper are listed below:
1. We have analyzed the traffic signatures produced by Mirai malware infecting IoT devices through testbed experiments. Further, we have identified specific signatures which can be used to positively detect the presence of Mirai and similar malware in IoT devices. These signatures are similar to the observations reported in [14] based on their analysis of the Mirai source code.
2. We have proposed an algorithm to detect Mirai-like IoT malware bots in large-scale networks. The algorithm is based on a novel two-dimensional sampling approach where the device packets are sampled across time as well as across the devices.
The rest of the contents of this paper are organized as follows. In Section 2, we review a few prominent works on detecting botnets exploiting CnC communication features and intrusion detection systems for IoT. Subsequently, in Section 3, we explain the operation of Mirai, extract important features from the traffic generated by Mirai bots in a testbed and present a detailed analysis of those features towards detecting Mirai-like bots. Section 4 presents the network deployment of our bot detection solution. It also includes the formulation of the optimization problem resulting from the detection of IoT bots along with the constraints imposed by limited computational resources, followed by the proposed bot detection algorithm. Finally, the algorithm is numerically evaluated and the results are presented in Section 5.
Mirai Traffic Analysis
Detecting IoT devices compromised by Mirai-like malware requires us to analyze the packet traffic generated by those devices and extract some features to aid us in detection. In this section, we begin with a brief description the operation of Mirai to make the readers familiar with some of the related terms. Later, we present a testbed that we use to emulate IoT devices, infect them with Mirai and capture the packet traffic generated from them. Finally, we present the extracted features and analyze them in detail with respect to identifying Mirai bots.
Mirai Operation
The Mirai [25] setup consists of three major components: the bot, the scanListen/loading server, and the CnC (Command-and-Control) server. The CnC server also functions as a MySQL [26] database server. User accounts can be created in this database for customers who wish to hire DDoS-as-a-service. The operation of Mirai is illustrated in Fig. 1. Once an IoT device is infected with Mirai (and becomes a bot), it first attempts to connect to the listening CnC server by resolving its domain name and opening a socket connection. Thereafter, it starts scanning the network by sending SYN packets to random IP addresses and waiting for them to respond. This process may take a long time since the bot has to go through a large number of IP addresses. Once it finds a vulnerable device with a TELNET port open, it attempts to open a socket connection to that device and emulates the TELNET protocol. Then it attempts to log in using a list of default credentials and, if a working credential is found, it reports the IP address of the discovered device and the working TELNET login credentials to the listening scanListen server.
The scanListen server sends that information to the loader which again logs in to the discovered device using the details received from the scanListen server. Once logged in, the loader downloads the Mirai bot binary to that device and the new bot connects to the CnC server and starts scanning the network.
Testbed Description
The testbed shown in Fig. 2 was configured on an isolated computing cluster. Each cluster node has two Intel Xeon E5-2620 processors, 64 GB of DDR4 ECC memory and runs the Ubuntu 14.04 LTS standard image. The testbed consists of a local authoritative DNS server, a CnC (Command-and-Control) server and a server for the scanListen and loading utility, all connected to a single LAN. The IoT gateways are connected to the above LAN through routers and behind the gateways are QEMU [28]-emulated IoT devices (Raspberry Pi). We chose this gateway-IoT device topology since it is used in a number of IoT deployments (such as IP cameras, smart lighting devices, wearables etc.). The testbed also includes a few non-IoT devices (PCs) to reflect real-world networks. As per our information, this is the first controlled testbed to emulate the true behavior of the Mirai malware. It can be modified to add more nodes, study a different network topology and test more advanced versions or derivatives of the Mirai malware.
Mirai Traffic Features
We infected the emulated IoT devices in our testbed with Mirai and captured a total of 1,583,623 packets transmitted by the devices. An analysis of the captured packets reveals the following features/signatures: Both ports 23 and 2323 are assigned for TELNET applications [29,30]. The TELNET [31] protocol is used for bidirectional byte-oriented communication. In the most widely used implementation of TELNET, a user with a terminal and running a TELNET client program accesses a remote host running a TELNET server by requesting a connection to the remote host and logging in by providing its credentials. The most common application of TELNET is for configuring network devices such as routers. Now, IoT devices operate by continuously transmitting sensed data to and receiving commands from cloud servers through a gateway over a secure communication channel without external human input [32]. We claim that an IoT device is unlikely to be used to access or configure another device using TELNET, and therefore in the absence of malware infection, IoT devices should not open TELNET connections to any other device.
To verify our claim that uninfected IoT devices are not expected to open TELNET connections, the following experiment was conducted. We configured a Raspberry Pi 3 (Model B+) to act as a gateway and connected it to several real-world IoT devices such as IP cameras (D-Link), motion sensors (D-Link), smart bulbs (Philips Hue), smart switches (WeMo) and smart plugs (TPLink). We left the devices connected for a long time and for each device type mentioned above, we captured around 10,000 packets per device at the gateway interface. Later, the captured packets were analysed using Wireshark [33] and no SYN packets with destination ports 23 or 2323 were found. Thus, if a SYN packet from an IoT device with destination port number 23 or 2323 is received, it is sufficient evidence to conclude with certainty that the IoT device is infected with a Mirai-like malware. The above experiment also helps us to rule out false positives, if any at all, if we use the identified scanning traffic signatures, which is a substantial advantage when it comes to practical intrusion detection.
The third Mirai signature related to keep-alive messages is not required since the port-scanning signatures are sufficient for detection with certainty. We may require the third signature to detect more advanced malware which do not use TELNET port-based scanning. It needs to be emphasized here that the TELNET port-scanning signatures can be used to identify not only bots infected by Mirai but also bots infected by other Mirai-like malware such as BASHLITE, Remaiten, Hajime etc. which employ a similar TELNET port brute forcing technique.
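As a minimal sketch of how this scanning signature could be checked in software (assuming the Scapy library is available and using hypothetical device addresses; this is not the implementation from the paper), the snippet below flags any known IoT source that emits a TCP SYN towards port 23 or 2323:

from scapy.all import sniff, IP, TCP   # Scapy is assumed to be installed

TELNET_PORTS = {23, 2323}
IOT_DEVICES = {"192.168.1.10", "192.168.1.11"}   # hypothetical known IoT device IPs
flagged = set()

def inspect(pkt):
    # A TCP SYN from an IoT device towards port 23/2323 matches the Mirai-like
    # scanning signature discussed above and is treated as a positive detection.
    if pkt.haslayer(IP) and pkt.haslayer(TCP):
        flags = int(pkt[TCP].flags)
        syn_only = bool(flags & 0x02) and not (flags & 0x10)   # SYN set, ACK clear
        if syn_only and pkt[TCP].dport in TELNET_PORTS and pkt[IP].src in IOT_DEVICES:
            flagged.add(pkt[IP].src)
            print(f"Suspected bot: {pkt[IP].src} -> {pkt[IP].dst}:{pkt[TCP].dport}")

sniff(filter="tcp", prn=inspect, store=False)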
Mirai-like IoT Malware Bot Detection
The bot scanning traffic analyzed in the previous section can be detected using simple firewalls. However, since IoT devices are usually resource-constrained, they do not have firewalls installed on them. Moreover, network-level firewalls (protecting computers in a LAN/WAN/intranet) are usually not configured to block TELNET traffic. In this section, we discuss the network deployment of our bot detection solution. Further, we formulate the optimization problem arising out of detecting IoT bots with the accompanying computational resource constraints. Further, we propose an algorithm for bot detection based on our analysis.
Bot Detection Solution Deployment
It is proposed that our Mirai-like bot detection solution be run on the edge gateway connected to IoT devices or the aggregation router connected directly to the gateway. Assuming that we run the bot detection solution on the router, a prospective network deployment for our solution is shown in Fig. 3. The incoming packets at the router are arranged and stored in buffers according to their source devices.
We process only IoT device packets at the router, whereas the aggregate router traffic consists of IoT as well as non-IoT traffic (PCs, smartphones etc.). The authors in [34] distinguish between traffic generated by IoT and non-IoT devices from a single TCP session by analyzing the HTTP user-agent property for smartphones and using single-session binary classifiers for PCs. A classification accuracy of 100% for smartphones and false positive and false negative rates of 0.003 each for PCs were claimed to be achieved. We can use their methods to distinguish between IoT and non-IoT device packets using a single session worth of packets. Further, once we identify a device as belonging to the IoT or non-IoT type, we can continue to use this information in the future as the device type is not expected to change.
It is assumed that the ISPs already have access to the information regarding vulnerable and non-vulnerable devices. As explained in Section 4.2, IoT devices installed in home environments can be regarded as vulnerable while the devices installed in enterprise/industrial/government networks can be deemed non-vulnerable. We expect the firmware running on bot detection routers to be upgradeable so that in the future, if more advanced bot detection algorithms are designed (e.g. for IoT malware which do not just rely on port based scanning), the corresponding software updates can be easily pushed to those routers.
Once the bots are detected by our proposed algorithm, the next step is to take mitigating actions to prevent the bots from spreading further damage. The network administrator can block the entire traffic originating from bots and bring them back online only after it is confirmed that the malware has been removed from those IoT devices. The concerned ISP can inform the device owners and ask them to secure their device (by using strong usernames/passwords, placing the device behind a firewall etc.). Another defense mechanism is that instead of blocking all the traffic, the bot can be allowed communications with a few secure domains for remediation of malware infection. This strategy has been mentioned as part of the bot remediation techniques [35] recommended for ISPs by IETF (Internet Engineering Task Force). The bot can also be placed under continuous monitoring and all other communication except that required for the underlying IoT device to function can be denied. Finally, security personnel can exploit bugs in the bot binary to disinfect them remotely.
Formulation of Optimization Problem
Processing all the incoming packets at the bot detection router to check if they originated from an IoT device and subsequently matching those packets against the Mirai traffic signature would require a lot of memory. To give an example, considering 10,000 IoT devices connected to the aggregation router with each device transmitting at 10% of the peak data rate of 250 kbits/s (according to the IEEE 802.15.4 standard for low rate personal wireless devices), we need ≈ 9.36 GB of storage for a 5-minute traffic session. However, WAN aggregation routers typically have only 1-4 GB of RAM and a few GB of external storage, of which a major part is used in packet routing. Therefore, for our bot detection solution, we propose to sample only a fraction of the IoT devices per unit time. However, this approach has the drawback that we may miss the scanning packets due to the sub-sampling operation. This leads to the formulation of the following optimization problem to detect infected devices.
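The ≈ 9.36 GB figure follows directly from the stated assumptions, as the short calculation below shows (using decimal gigabytes):

num_devices = 10_000
rate_bps = 0.10 * 250_000          # 10% of the 250 kbit/s IEEE 802.15.4 peak rate
session_s = 5 * 60                 # a 5-minute traffic session

total_gb = num_devices * rate_bps * session_s / 8 / 1e9
print(round(total_gb, 2))          # ~9.38 GB, in line with the ≈9.36 GB estimate above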
Our objective in this optimization problem is to minimize the cost associated with the delay in detecting a compromised device. We define average detection delay (T_D) as the average time between the first occurrence of a scanning packet and the positive conclusion that the originating device is infected. Now, some IoT devices in a network are easier to infect with malware than others. Therefore, we split the IoT devices into two categories: vulnerable and non-vulnerable devices. Vulnerable devices are those which are more easily infected with Mirai-like malware and added to the botnet. The devices other than vulnerable ones are non-vulnerable devices.
For example, personal IoT devices installed at homes can be deemed as vulnerable since they are less likely to be behind a firewall (host-level firewalls not feasible on IoT devices due to resource constraints) and more likely to have their TELNET ports open (often owners buy cheap devices in which the manufacturer has left TELNET port open for remote configuration etc.). IoT devices installed in enterprise/industrial/government networks can be categorized as non-vulnerable since most likely, they would be behind a network-level firewall (blocking access to insecure TELNET connections) and they are much less likely to have to have their TELNET ports open (due to organizational IT security policies).
We define the sampling frequency for an IoT device as the fraction of the time when that device is selected for monitoring for possible infection. We also define the sampling matrix, Σ as a matrix with columns representing devices and rows representing the packets transmitted by those devices. An element of Σ is equal to 1 when the corresponding packet has been sampled and equal to 0 when the corresponding packet has not been sampled.
Further, our optimization problem imposes the following constraints that need to be satisfied:
-The sampling frequency for a vulnerable device (f_n^v) should be greater than the sampling frequency for a non-vulnerable device (f_n^nv). This is because vulnerable devices are more likely to be attacked than non-vulnerable devices and hence they need to be more frequently monitored.
-The total number of vulnerable and non-vulnerable devices selected within a certain time period (ρ_v f_n^v T + ρ_nv f_n^nv T) should not exceed a maximum number (f_n^max T), where ρ_v and ρ_nv are the fractions of the total number of devices that are vulnerable and non-vulnerable respectively. This is to limit the utilization of computational resources, for if the total number of selected devices is more than an upper bound, it may require significant amounts of processing power, defeating the purpose of packet sub-sampling.
-The maximum number of vulnerable devices selected at any time should have an upper bound (N_v^max). Similarly, the maximum number of non-vulnerable devices selected at any time should have an upper bound (N_nv^max). This is again to place a bound on computational resource utilization.
-After a certain number of sampling time units (T), every device (in the set of all devices, Ω_N) should be covered by the sampling process. This is to ensure that every device is checked for malware infection within a certain time duration, or else some infected devices may be missed by the sampling process.
We propose to minimize the cost associated with the average detection delay while satisfying the above constraints as follows:
minimize over Σ, f_n^v, f_n^nv:   α T_D(f_n^v, f_n^nv, Y_v, Y_nv)
subject to:
  f_n^v > f_n^nv
  ρ_v f_n^v + ρ_nv f_n^nv < f_n^max
  max[N_v{Σ}] < N_v^max
  max[N_nv{Σ}] < N_nv^max
  ⋃_{t=t_start}^{t_start+T} dev_set(Σ, t) = Ω_N
where α is defined as the cost incurred by the bot detection algorithm due to a unit average detection delay, N_v{·} and N_nv{·} denote the number of vulnerable and non-vulnerable devices selected in Σ at any point of time, Y_v is the set of vulnerable devices, Y_nv is the set of non-vulnerable devices, and dev_set(·) is a function that outputs the set of devices sampled in Σ at a time t. It is to be noted that the above optimization problem is a combinatorial one and it is computationally hard to find an optimal solution [36]. Hence, we devise a method to numerically solve the optimization problem. The results obtained from the numerical analysis are explained in Section 5. Based on our findings through the formulation of the optimization problem, we have proposed an algorithm for detecting IoT bots (shown in Algorithm 1) which is practical in terms of the low number of packets that need to be monitored for infected device detection. The values for f_n^v and f_n^nv to be used while designing our algorithm will be discussed in our numerical analysis.
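A small helper like the one below (illustrative only; the parameter values are hypothetical) captures the frequency-related constraints of the problem and can be used to screen candidate sampling plans before any simulation:

def sampling_plan_is_feasible(f_v, f_nv, rho_v, rho_nv, f_max):
    # f_v, f_nv   : sampling frequencies for vulnerable / non-vulnerable devices
    # rho_v, rho_nv : fractions of devices that are vulnerable / non-vulnerable
    # f_max       : upper bound on the aggregate sampling rate
    return f_v > f_nv and (rho_v * f_v + rho_nv * f_nv) < f_max

print(sampling_plan_is_feasible(f_v=0.2, f_nv=0.025, rho_v=0.3, rho_nv=0.7, f_max=0.1))  # True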
Evaluation of Proposed Algorithm
In this section, we analyze the behavior of the average detection delay for vulnerable and non-vulnerable devices with varying sampling rates. A few important background details are presented below.
Algorithm 1:
3: if src_dev(recv_pkt) ∉ list_dev then
4:   add_dev_to_list(src_dev(recv_pkt), list_dev)
5: end if
6: add_pkt_to_buf(recv_pkt, dev_buf(src_dev(recv_pkt)))
7: pktcnt = pktcnt + 1
8: end for
9: while TRUE do
10:   sel_dev_set = dev_set(Σ, t)
11:   for i = 1 to length(sel_dev_set) do
12:     sampled_pkts(t,:) = dev_buf(sel_dev_set(i), CURRENT_PKT)
13:   end for
14:   for j = 1 to length(sampled_pkts(t,:)) do
15:     if Check_TCP_flag(sampled_pkts(t,j)) = SYN & Check_dst_port(sampled_pkts(t,j)) = 23 OR 2323 then
16:       Bot_detected(src_dev(sampled_pkts(t,j))) = TRUE
17:     end if
18:   end for
19:   t = t + 1
20: end while
-The set of attacked devices, Φ, is selected based on the assumed probability model for malware attacks on vulnerable and non-vulnerable devices. For example, we can assume the probability of attack on vulnerable devices within a given time duration (the transmission of N_p packets) as p_1 and that on non-vulnerable devices as p_2.
-The sampling matrix Σ used in our evaluation has a staggered structure and may be visualized as in Fig. 4. Since the sampling frequency for vulnerable devices is greater than that for non-vulnerable devices, the portion of Σ containing packets transmitted by vulnerable devices has a denser distribution of 1s than that for non-vulnerable devices. The structure of the matrix also ensures that every device is sampled after a certain number of sampling time units, as required by one of the constraints in the optimization problem presented in Section 4.2.
-We form a scanning matrix of size (number of IoT devices) × (number of packets transmitted). The matrix uses 0 to represent a normal IoT device packet and 1 to represent a malware scanning packet. Only the devices in Φ have 1s in their corresponding rows of the scanning matrix.
-The elements where the scanning and the sampling matrices are both 1 represent detected scanning packets. This is because matching elements are only present where a scanning packet transmitted by an attacked device has been selected by the sampling process.
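The evaluation procedure described above can be prototyped in a few lines; in this sketch (our own illustration, not the paper's code) devices are placed on rows for convenience, the staggered structure is a simple periodic offset, and the sizes and Poisson-process parameters are arbitrary choices:

import numpy as np

rng = np.random.default_rng(0)
n_dev, n_pkt, period = 20, 1000, 5          # per-device sampling frequency = 1/period

# Staggered sampling matrix: each device is sampled every `period` slots,
# with offsets spread out so all devices are covered within one period.
sample = np.zeros((n_dev, n_pkt), dtype=int)
for d in range(n_dev):
    sample[d, d % period::period] = 1

# Scanning matrix: infected devices emit scanning packets with exponentially
# distributed inter-arrival times (Poisson process); all other entries stay 0.
scan = np.zeros((n_dev, n_pkt), dtype=int)
infected = rng.choice(n_dev, size=5, replace=False)
for d in infected:
    t = 0
    while True:
        t += int(rng.exponential(scale=30)) + 1
        if t >= n_pkt:
            break
        scan[d, t] = 1

# A scanning packet is detected where both matrices are 1; the detection delay
# for a device is the index of its first such packet.
hits = scan & sample
delays = {int(d): int(np.flatnonzero(hits[d])[0]) if hits[d].any() else None
          for d in infected}
print(delays)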
Moreover, we need to form a statistical model for scanning packet arrivals in the scanning matrix. Towards this, we used one of our emulated IoT devices and established a video streaming server to emulate the operation of an IP camera (an IoT device used in the Mirai attack on Dyn). Another emulated IoT device acted as a client connected to the video streaming server. Based on the above empirical observations, we model the scanning packet arrivals as a Poisson process, i.e., the inter-packet arrival times for scanning packets are exponentially distributed with the average packet arrival rate calculated from the testbed measurements. At all other times, we assume that normal IoT traffic is transmitted, again based on the above observations. The values assumed for the various parameters in our analysis are shown in Table 1. The plot for average detection delay vs sampling frequency for different values of attack probability on vulnerable devices (p_1) is shown in Fig. 6. The detection delay values are averaged over all the detected devices as well as over a number of trial runs (1000). The units of average detection delay are in number of packets elapsed, while the units of sampling frequency are in per packet elapsed. It can be observed that the average detection delay decreases almost exponentially with increasing sampling frequency. This behavior can be intuitively explained as follows. Increasing the sampling frequency means that the vulnerable devices are sampled much more frequently, which in turn increases the likelihood of sampling the scanning packets transmitted by infected vulnerable devices. Once a scanning packet is sampled, it can be positively concluded that the corresponding source device is infected, as discussed in Section 3.3. Hence, an increase in the likelihood of sampling scanning packets should lead to a decrease in the average detection delay as defined in Section 4.2. Further, it can also be noted from the plot that increasing the sampling frequency beyond a certain value (e.g. 0.33 for p_1 = 0.5) leads to a slower reduction in average detection delay. This suggests that while designing the proposed Algorithm 1, the sampling frequency for vulnerable devices should be selected towards the upper half of the range of available values but not too high, since higher sampling frequencies will not result in more benefit in terms of decrease in average detection delay. Instead, sampling frequencies which are too high may lead to greater consumption of computational resources.
One may observe that the average detection delay values decrease slightly as the attack probability increases. This is expected since an increase in attack probability means that a greater number of vulnerable devices are likely to be infected, thus increasing the likelihood of sampling the scanning packets transmitted by those infected devices, resulting in a decrease in average detection delay. Lastly, the plots for the three attack probabilities, p_1 = 0.5, 0.7, 0.9, are quite close to each other, suggesting that changes in attack probability do not affect the average detection delay vs sampling frequency behavior significantly.
In Fig. 7, we have illustrated the distribution of average detection delays for vulnerable devices for a sampling frequency of 0.2 and attack probability of 0.6 using a histogram. The distribution closely fits an exponential distribution with a mean of ≈ 52, suggesting that the probability of achieving higher and higher average detection delays for vulnerable devices decreases almost exponentially. Vulnerable devices are sampled at a relatively higher frequency and also have a higher probability of being infected than non-vulnerable devices. Therefore, scanning packets can be detected with lower delays in most trials, resulting in higher probabilities for lower values and lower probabilities for higher values of average detection delays. In Fig. 8, we have presented the plot for average detection delay vs sampling frequency for different values of attack probability on non-vulnerable devices (p_2). The plot behavior is somewhat irregular near lower sampling frequencies. For higher sampling frequencies, the average detection delay can be observed to decrease almost linearly with increasing sampling frequency. The intuitive explanation for the decreasing behavior is similar to the one given above for vulnerable devices. While designing the proposed Algorithm 1, a sampling frequency for non-vulnerable devices which is too high may lead to a lower average detection delay, but the corresponding increase in processing power and memory requirements may not be desirable since non-vulnerable devices are not expected to be compromised easily. A sampling frequency which is too low, on the other hand, may increase the average detection delay significantly in the unexpected scenario when some of the non-vulnerable devices are compromised. Therefore, the algorithm designers may have to settle for a sampling frequency which falls in the middle of the range of available values. Fig. 9 shows the distribution of average detection delays for non-vulnerable devices for a sampling frequency of 0.025 and attack probability of 0.2 using a histogram. The distribution assumes the highest values for average detection delays between 0-10,000. Thereafter, the values taken by the distribution decrease slowly with increasing average detection delays. We are developing a software prototype of the proposed bot detection algorithm [37] which will be evaluated on a testbed consisting of physical IoT and non-IoT devices, gateways, and routers. The network behavior of Mirai will be emulated by replaying Mirai traffic captured from our virtualized testbed. In the future, we would like to develop solutions for detecting IoT bots infected with malware which exploit software vulnerabilities to hack the devices and add them to the botnet. For instance, the Linux.Darlloz, Reaper and Amnesia malware [38][39][40] use HTTP (Hyper Text Transfer Protocol)-based exploits to perform code injection and arbitrarily execute code on remote devices, bypassing authentication. It should be noted here that the packet sub-sampling approach proposed in this paper is likely to be a part of the bot detection solution devised for such advanced malware. Finally, some malware may try to evade detection, e.g. by attempting to hide their scanning activity. It would be an interesting problem to detect such evasive IoT malware.
Conclusion
In this paper, we proposed an algorithm for detecting IoT devices infected by Mirai or similar malware. The bot detection algorithm uses Mirai traffic signatures and a two-dimensional sub-sampling approach. The deployment of our algorithm within a real-world network was discussed, and prospective actions which can be taken after bot detection were also mentioned. Leveraging measurements taken from a testbed constructed to emulate the behavior of Mirai, we studied the relationship between average detection delays and sampling frequencies for vulnerable and non-vulnerable devices. Based on our analysis of the plots, we made suggestions regarding the selection of sampling frequencies while designing our proposed algorithm. Finally, we identified a few interesting problems stemming from this research which we would like to work on in the future. | 4,858 |
1901.04805 | 2964234547 | The widespread adoption of Internet of Things has led to many security issues. Recently, there have been malware attacks on IoT devices, the most prominent one being that of Mirai. IoT devices such as IP cameras, DVRs and routers were compromised by the Mirai malware and later large-scale DDoS attacks were propagated using those infected devices (bots) in October 2016. In this research, we develop a network-based algorithm which can be used to detect IoT bots infected by Mirai or similar malware in large-scale networks (e.g. ISP network). The algorithm particularly targets bots scanning the network for vulnerable devices since the typical scanning phase for botnets lasts for months and the bots can be detected much before they are involved in an actual attack. We analyze the unique signatures of the Mirai malware to identify its presence in an IoT device. Further, to optimize the usage of computational resources, we use a two-dimensional (2D) packet sampling approach, wherein we sample the packets transmitted by IoT devices both across time and across the devices. Leveraging the Mirai signatures identified and the 2D packet sampling approach, a bot detection algorithm is proposed. We use testbed measurements and simulations to study the relationship between bot detection delays and the sampling frequencies for device packets. Subsequently, we derive insights from the obtained results and use them to design our proposed bot detection algorithm. Finally, we discuss the deployment of our bot detection algorithm and the countermeasures which can be taken post detection. | Fourth, we do not extract CnC communication features and use them to identify bot-CnC communications as done in @cite_10 @cite_24 @cite_29 . This is because we aim to detect bots infected by Mirai-like IoT malware, towards which much simpler features can be used as discussed in Section . Fifth, unlike @cite_11 , we aim to detect IoT bots much before the actual attack, during the scanning phase itself as explained in Section . Finally, most of the above cited works use quantifiers such as detection rate and false positive rates to evaluate the performance of their proposed botnet detection solutions. Instead, we use a quantity called average detection delay (defined in Section ) for the performance evaluation of our proposed bot detection solution since the features used by our solution eliminate the possibility of inaccurate detections or false positives. To the best of our knowledge, there are no existing papers on detecting IoT bots compromised by Mirai or its variants which exhibit port-based SYN scanning behavior. | {
"abstract": [
"Botnets are now the key platform for many Internet attacks, such as spam, distributed denial-of-service (DDoS), identity theft, and phishing. Most of the current botnet detection approaches work only on specific botnet command and control (C&C) protocols (e.g., IRC) and structures (e.g., centralized), and can become ineffective as botnets change their C&C techniques. In this paper, we present a general detection framework that is independent of botnet C&C protocol and structure, and requires no a priori knowledge of botnets (such as captured bot binaries and hence the botnet signatures, and C&C server names addresses). We start from the definition and essential properties of botnets. We define a botnet as a coordinated group of malware instances that are controlled via C&C communication channels. The essential properties of a botnet are that the bots communicate with some C&C servers peers, perform malicious activities, and do so in a similar or correlated way. Accordingly, our detection framework clusters similar communication traffic and similar malicious traffic, and performs cross cluster correlation to identify the hosts that share both similar communication patterns and similar malicious activity patterns. These hosts are thus bots in the monitored network. We have implemented our BotMiner prototype system and evaluated it using many real network traces. The results show that it can detect real-world botnets (IRC-based, HTTP-based, and P2P botnets including Nugache and Storm worm), and has a very low false positive rate.",
"Peer-to-peer (P2P) botnets have recently been adopted by botmasters for their resiliency against take-down efforts. Besides being harder to take down, modern botnets tend to be stealthier in the way they perform malicious activities, making current detection approaches ineffective. In addition, the rapidly growing volume of network traffic calls for high scalability of detection systems. In this paper, we propose a novel scalable botnet detection system capable of detecting stealthy P2P botnets. Our system first identifies all hosts that are likely engaged in P2P communications. It then derives statistical fingerprints to profile P2P traffic and further distinguish between P2P botnet traffic and legitimate P2P traffic. The parallelized computation with bounded complexity makes scalability a built-in feature of our system. Extensive evaluation has demonstrated both high detection accuracy and great scalability of the proposed system.",
"Botnets are now recognized as one of the most serious security threats. In contrast to previous malware, botnets have the characteristic of a command and control (C&C) channel. Botnets also often use existing common protocols, e.g., IRC, HTTP, and in protocol-conforming manners. This makes the detection of botnet C&C a challenging problem. In this paper, we propose an approach that uses network-based anomaly detection to identify botnet C&C channels in a local area network without any prior knowledge of signatures or C&C server addresses. This detection approach can identify both the C&C servers and infected hosts in the network. Our approach is based on the observation that, because of the pre-programmed activities related to C&C, bots within the same botnet will likely demonstrate spatial-temporal correlation and similarity. For example, they engage in coordinated communication, propagation, and attack and fraudulent activities. Our prototype system, BotSniffer, can capture this spatial-temporal correlation in network traffic and utilize statistical algorithms to detect botnets with theoretical bounds on the false positive and false negative rates. We evaluated BotSniffer using many real-world network traces. The results show that BotSniffer can detect real-world botnets with high accuracy and has a very low false positive rate.",
"The proliferation of IoT devices that can be more easily compromised than desktop computers has led to an increase in IoT-based botnet attacks. To mitigate this threat, there is a need for new methods that detect attacks launched from compromised IoT devices and that differentiate between hours- and milliseconds-long IoT-based attacks. In this article, we propose a novel network-based anomaly detection method for the IoT called N-BaIoT that extracts behavior snapshots of the network and uses deep autoencoders to detect anomalous network traffic from compromised IoT devices. To evaluate our method, we infected nine commercial IoT devices in our lab with two widely known IoT-based botnets, Mirai and BASHLITE. The evaluation results demonstrated our proposed methods ability to accurately and instantly detect the attacks as they were being launched from the compromised IoT devices that were part of a botnet."
],
"cite_N": [
"@cite_24",
"@cite_29",
"@cite_10",
"@cite_11"
],
"mid": [
"1775772884",
"2058314598",
"1583098994",
"2799758613"
]
} | Early Detection Of Mirai-Like IoT Bots In Large-Scale Networks Through Sub-Sampled Packet Traffic Analysis | The number of IoT devices has been increasing steadily (albeit at a slower rate than some earlier generous predictions), and this trend is expected to hold in the future.
IoT devices are being increasingly targeted by hackers using malware (malicious software) as they are easier to infect than conventional computers for the following reasons [3][4][5]:
-There are many legacy IoT devices connected to the Internet with no security updates.
-Security is given a low priority within the development cycle of IoT devices.
-Implementing conventional cryptography in IoT devices is computationally expensive due to processing power and memory constraints.
In a widely publicized attack, the IoT malware Mirai was used to propagate the biggest DDoS (Distributed Denial-of-Service) attack on record on October 21, 2016. The attack targeted the Dyn DNS (Domain Name Service) servers [6] and generated an attack throughput of the order of 1.2 Tbps. It disabled major internet services such as Amazon, Twitter and Netflix. The attackers had infected IoT devices such as IP cameras and DVR recorders with Mirai, thereby creating an army of bots (botnet) to take part in the DDoS attack. Apart from Mirai, there are other IoT malware which operate using a similar brute force technique of scanning random IP addresses for open ports and attempting to login using a built-in dictionary of commonly used credentials. BASHLITE [7], Remaiten [8], Hajime [9] are some examples of these IoT malware. Bots compromised by Mirai or similar IoT malware can be used for DDoS attacks, phishing and spamming [10]. These attacks can cause network downtime for long periods which may lead to financial loss to network companies, and leak users' confidential data. McAfee reported in April 2017 [11] that about 2.5 million IoT devices were infected by Mirai in late 2016. Bitdefender mentioned in its blog in September 2017 [12] that researchers had estimated at least 100,000 devices infected by Mirai or similar malware revealed daily through telnet scanning telemetry data. Further, many of the infected devices are expected to remain infected for a long time. Therefore, there is a substantial motivation for detecting these IoT bots and taking appropriate action against them so that they are unable to cause any further damage.
As pointed out in [13], attempting to ensure that all IoT devices are secure-by-construction is futile as there will always be insecure devices (with patched and unpatched vulnerabilities) connected to the Internet due to the scale and diversity of IoT devices and vendors. Moreover, considering the lack of full-fledged operating systems, low power requirements, resource constraints and the presence of legacy devices, it is practically unfeasible to deploy traditional host-based detection and prevention mechanisms such as antivirus software and firewalls on IoT devices. Therefore, it becomes imperative that the security mechanisms for the IoT ecosystem are designed to be network-based rather than host-based.
In this research, we propose a network-based algorithm which can be used to detect IoT bots infected by Mirai-like malware (which use port-based scanning) in large-scale networks. Bots scanning the network for vulnerable devices are targeted in particular by our algorithm. This is because the scanning and propagation phase of the botnet life-cycle stretches over many months and we can detect and isolate the bots before they can participate in an actual attack such as DDoS. If the DDoS attack has already occurred (due to a botnet), detecting the attack itself is not that difficult and there are already existing methods both in literature and industry to defend against such attacks. Moreover, our algorithm is practical in terms of utilization of computational resources (such as CPU processing power, memory). For example, ISP (Internet Service Provider) network operators can use the proposed algorithm to identify infected IoT devices in their network. The operators can then take suitable countermeasures such as blocking the traffic originating from IoT bots and notifying the local network administrators. Actions that can be taken post bot detection are further discussed in a later section. The major contributions of this paper are listed below:
1. We have analyzed the traffic signatures produced by Mirai malware infecting IoT devices through testbed experiments. Further, we have identified specific signatures which can be used to positively detect the presence of Mirai and similar malware in IoT devices. These signatures are similar to the observations reported in [14] based on their analysis of the Mirai source code.
2. We have proposed an algorithm to detect Mirai-like IoT malware bots in large-scale networks. The algorithm is based on a novel two-dimensional sampling approach where the device packets are sampled across time as well as across the devices.
The rest of this paper is organized as follows. In Section 2, we review a few prominent works on detecting botnets exploiting CnC communication features and intrusion detection systems for IoT. Subsequently, in section 3, we explain the operation of Mirai, extract important features from the traffic generated by Mirai bots in a testbed and present a detailed analysis of those features towards detecting Mirai-like bots. Section 4 presents the network deployment of our bot detection solution. It also includes the formulation of the optimization problem resulting from the detection of IoT bots along with the constraints imposed by limited computational resources, followed by the proposed bot detection algorithm. Finally, the algorithm is numerically evaluated and the results are presented in section 5.
Mirai Traffic Analysis
Detecting IoT devices compromised by Mirai-like malware requires us to analyze the packet traffic generated by those devices and extract some features to aid us in detection. In this section, we begin with a brief description of the operation of Mirai to make the readers familiar with some of the related terms. Later, we present a testbed that we use to emulate IoT devices, infect them with Mirai and capture the packet traffic generated from them. Finally, we present the extracted features and analyze them in detail with respect to identifying Mirai bots.
Mirai Operation
The Mirai [25] setup consists of three major components: bot, scanListen/loading server, and the CnC (Command-and-Control) server. The CnC server also functions as a MySQL [26] database server. User accounts can be created in this database for customers who wish to hire DDoS-as-a-service. The operation of Mirai is illustrated in Fig. 1. Once an IoT device is infected with Mirai (and becomes a bot), it first attempts to connect to the listening CnC server by resolving its domain name and opening a socket connection. Thereafter, it starts scanning the network by sending SYN packets to random IP addresses and waiting for them to respond. This process may take a long time since the bot has to go through a large number of IP addresses. Once it finds a vulnerable device with a TELNET port open, it attempts to open a socket connection to that device and emulates the TELNET protocol. Then it attempts to login using a list of default credentials and if working credential is found, it reports the IP address of the discovered device and the working TELNET login credentials to the listening scanListen server.
The scanListen server sends that information to the loader which again logs in to the discovered device using the details received from the scanListen server. Once logged in, the loader downloads the Mirai bot binary to that device and the new bot connects to the CnC server and starts scanning the network.
Testbed Description
The testbed shown in Fig. 2 was configured on an isolated computing cluster. Each cluster node has two Intel Xeon E5-2620 processors, 64 GB DDR4 ECC memory and runs the Ubuntu 14.04 LTS standard image. The testbed consists of a local authoritative DNS server, a CnC (Command-and-Control) server and a server for the scanListen and loading utility, all connected to a single LAN. The IoT gateways are connected to the above LAN through routers and behind the gateways are QEMU [28]-emulated IoT devices (Raspberry Pi). We chose this gateway-IoT device topology since it is used in a number of IoT deployments (such as IP cameras, smart lighting devices, wearables etc.). The testbed also includes a few non-IoT devices (PCs) to reflect real-world networks. As per our information, this is the first controlled testbed to emulate the true behavior of Mirai malware. It can be modified to add more nodes, study a different network topology and test more advanced versions or derivatives of Mirai malware.
Mirai Traffic Features
We infected the emulated IoT devices in our testbed with Mirai and captured a total of 1,583,623 packets transmitted by the devices. An analysis of the captured packets reveals the following features/signatures: Both ports 23 and 2323 are assigned for TELNET applications [29,30]. The TELNET [31] protocol is used for bidirectional byte-oriented communication. In the most widely used implementation of TELNET, a user with a terminal, running a TELNET client program, accesses a remote host running a TELNET server by requesting a connection to the remote host and logging in by providing their credentials. The most common application of TELNET is for configuring network devices such as routers. Now, IoT devices operate by continuously transmitting sensed data to and receiving commands from cloud servers through a gateway over a secure communication channel without external human input [32]. We claim that an IoT device is unlikely to be used to access or configure another device using TELNET, and therefore, in the absence of malware infection, IoT devices should not open TELNET connections to any other device.
To verify our claim that uninfected IoT devices are not expected to open TELNET connections, the following experiment was conducted. We configured a Raspberry Pi 3 (Model B+) to act as a gateway and connected it to several real-world IoT devices such as IP cameras (D-Link), motion sensors (D-Link), smart bulbs (Philips Hue), smart switches (WeMo) and smart plugs (TPLink). We left the devices connected for a long time and, for each device type mentioned above, we captured around 10,000 packets per device at the gateway interface. Later, the captured packets were analysed using Wireshark [33] and no SYN packets with destination ports 23 or 2323 were found. Thus, if a SYN packet from an IoT device with destination port number 23 or 2323 is received, it is sufficient evidence to conclude with certainty that the IoT device is infected with Mirai-like malware. The above experiment also helps us to rule out false positives, if any at all, when we use the identified scanning traffic signatures, which is a substantial advantage when it comes to practical intrusion detection.
The third Mirai signature, related to keep-alive messages, is not required since the port-scanning signatures are sufficient for detection with certainty. We may require the third signature to detect more advanced malware which do not use TELNET port-based scanning. It needs to be emphasized here that the TELNET port-scanning signatures can be used to identify not only bots infected by Mirai but also those infected by other Mirai-like malware such as BASHLITE, Remaiten, Hajime etc., which employ a similar TELNET brute-forcing technique.
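As an illustration only (and not part of our detection solution), the port-scanning signature above can be checked over a packet capture with a few lines of Python using the Scapy library; the function name and the offline-pcap setting are assumptions made for this sketch:

    # Illustrative sketch: flag a source device as a suspected Mirai-like bot if it
    # emits a TCP SYN (without ACK) towards TELNET ports 23 or 2323.
    from scapy.all import rdpcap, IP, TCP

    def find_scanning_sources(pcap_path):
        suspected = set()
        for pkt in rdpcap(pcap_path):                    # offline capture, for illustration
            if pkt.haslayer(IP) and pkt.haslayer(TCP):
                tcp = pkt[TCP]
                syn_no_ack = bool(tcp.flags & 0x02) and not (tcp.flags & 0x10)
                if syn_no_ack and tcp.dport in (23, 2323):
                    suspected.add(pkt[IP].src)           # certain detection per the signature above
        return suspected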
Mirai-like IoT Malware Bot Detection
The bot scanning traffic analyzed in the previous section can be detected using simple firewalls. However, since IoT devices are usually resource-constrained, they do not have firewalls installed on them. Moreover, network-level firewalls (protecting computers in a LAN/WAN/intranet) are usually not configured to block TELNET traffic. In this section, we discuss the network deployment of our bot detection solution. We then formulate the optimization problem arising out of detecting IoT bots under the accompanying computational resource constraints. Finally, we propose an algorithm for bot detection based on our analysis.
Bot Detection Solution Deployment
It is proposed that our Mirai-like bot detection solution be run on the edge gateway connected to IoT devices or the aggregation router connected directly to the gateway. Assuming that we run the bot detection solution on the router, a prospective network deployment for our solution is shown in Fig. 3. The incoming packets at the router are arranged and stored in buffers according to their source devices.
We process only IoT device packets at the router, whereas the aggregate router traffic consists of IoT as well as non-IoT traffic (PCs, smartphones etc.). The authors in [34] distinguish between traffic generated by IoT and non-IoT devices from a single TCP session by analyzing the user-agent HTTP property for smartphones and single-session binary classifiers for PCs. A classification accuracy of 100% for smartphones, and false positive and false negative rates of 0.003 each for PCs, were claimed to be achieved. We can use their methods to distinguish between IoT and non-IoT device packets using a single session's worth of packets. Further, once we identify a device as belonging to the IoT or non-IoT type, we can continue to use this information in the future as the device type is not expected to change.
It is assumed that the ISPs already have access to the information regarding vulnerable and non-vulnerable devices. As explained earlier in Section 4.2, IoT devices installed in home environments can be regarded as vulnerable while the devices installed in enterprise/industrial/government networks can be deemed as non-vulnerable. We expect the firmware running on bot detection routers to be upgradeable so that in future, if more advanced bot detection algorithms are designed (e.g. for IoT malware which do not just rely on port based scanning), the corresponding software updates can be easily pushed to those routers.
Once the bots are detected by our proposed algorithm, the next step is to take mitigating actions to prevent the bots from spreading further damage. The network administrator can block the entire traffic originating from bots and bring them back online only after it is confirmed that the malware has been removed from those IoT devices. The concerned ISP can inform the device owners and ask them to secure their device (by using strong usernames/passwords, placing the device behind a firewall etc.). Another defense mechanism is that instead of blocking all the traffic, the bot can be allowed communications with a few secure domains for remediation of malware infection. This strategy has been mentioned as part of the bot remediation techniques [35] recommended for ISPs by IETF (Internet Engineering Task Force). The bot can also be placed under continuous monitoring and all other communication except that required for the underlying IoT device to function can be denied. Finally, security personnel can exploit bugs in the bot binary to disinfect them remotely.
Formulation of Optimization Problem
Processing all the incoming packets at the bot detection router to check if they originated from an IoT device and subsequently matching those packets against the Mirai traffic signature would require a lot of memory. To give an example, considering 10,000 IoT devices connected to the aggregation router with each device transmitting at 10% of the peak data rate of 250 kbits/s (according to the IEEE 802.15.4 standard for low-rate wireless personal area networks), we need ≈ 9.36 GB of storage for a 5-minute traffic session (a quick check of this figure is sketched below). However, typically WAN aggregation routers have 1-4 GB RAM and only a few GBs of external storage, of which a major part is used in packet routing. Therefore, for our bot detection solution, we propose to sample only a fraction of the IoT devices per unit time. However, this approach has the drawback that we may miss the scanning packets due to the sub-sampling operation.
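As a quick check of the storage figure quoted above, the arithmetic can be reproduced as follows (a back-of-the-envelope sketch; 1 GB is taken as 10^9 bytes here, so the result differs marginally from 9.36 GB depending on the rounding constants used):

    # Back-of-the-envelope check of the per-session buffering requirement.
    devices   = 10_000
    rate_bps  = 0.10 * 250_000       # 10% of the 250 kbit/s peak rate, in bit/s
    session_s = 5 * 60               # 5-minute traffic session
    total_gb  = devices * rate_bps * session_s / 8 / 1e9   # bits -> bytes -> GB
    print(round(total_gb, 2))        # ~9.38 GB, in line with the estimate above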
This leads to the formulation of the following optimization problem to detect infected devices. Our objective in this optimization problem is to minimize the cost associated with the delay in detecting a compromised device. We define the average detection delay (T_D) as the average time between the first occurrence of a scanning packet and the positive conclusion that the originating device is infected. Now, some IoT devices in a network are easier to infect with malware than others. Therefore, we split the IoT devices into two categories: vulnerable and non-vulnerable devices. Vulnerable devices are those which are more easily infected with Mirai-like malware and added to the botnet. All devices other than the vulnerable ones are non-vulnerable devices.
For example, personal IoT devices installed at homes can be deemed vulnerable since they are less likely to be behind a firewall (host-level firewalls are not feasible on IoT devices due to resource constraints) and more likely to have their TELNET ports open (often owners buy cheap devices in which the manufacturer has left the TELNET port open for remote configuration etc.). IoT devices installed in enterprise/industrial/government networks can be categorized as non-vulnerable since most likely they would be behind a network-level firewall (blocking access to insecure TELNET connections) and they are much less likely to have their TELNET ports open (due to organizational IT security policies).
We define the sampling frequency for an IoT device as the fraction of the time when that device is selected for monitoring for possible infection. We also define the sampling matrix, Σ as a matrix with columns representing devices and rows representing the packets transmitted by those devices. An element of Σ is equal to 1 when the corresponding packet has been sampled and equal to 0 when the corresponding packet has not been sampled.
Further, our optimization problem imposes the following constraints that need to be satisfied:
-The sampling frequency for a vulnerable device (f_n^v) should be greater than the sampling frequency for a non-vulnerable device (f_n^{nv}). This is because vulnerable devices are more likely to be attacked than non-vulnerable devices and hence they need to be monitored more frequently.
-The total number of vulnerable and non-vulnerable devices selected within a certain time period (ρ_v f_n^v T + ρ_{nv} f_n^{nv} T) should not exceed a maximum number (f_n^{max} T), where ρ_v and ρ_{nv} are the fractions of the total number of devices that are vulnerable and non-vulnerable, respectively. This is to limit the utilization of computational resources, for if the total number of selected devices is more than an upper bound, it may require significant amounts of processing power, defeating the purpose of packet sub-sampling.
-The maximum number of vulnerable devices selected at any time should have an upper bound (N_v^{max}). Similarly, the maximum number of non-vulnerable devices selected at any time should have an upper bound (N_{nv}^{max}). This is again to place a bound on computational resource utilization.
-After a certain number of sampling time units (T), every device (in the set of all devices, Ω_N) should be covered by the sampling process. This is to ensure that every device is checked for malware infection within a certain time duration, or else a few devices which are infected may be missed by the sampling process.
We propose to minimize the cost associated with the average detection delay while satisfying the above constraints as follows:
\begin{aligned}
\underset{\Sigma,\ f_n^v,\ f_n^{nv}}{\text{minimize}}\quad & \alpha\, T_D(f_n^v, f_n^{nv}, Y_v, Y_{nv}) \\
\text{subject to}\quad & f_n^v > f_n^{nv} \\
& \rho_v f_n^v + \rho_{nv} f_n^{nv} < f_n^{max} \\
& \max[N_v\{\Sigma\}] < N_v^{max} \\
& \max[N_{nv}\{\Sigma\}] < N_{nv}^{max} \\
& \bigcup_{t=t_{start}}^{t_{start}+T} \mathrm{dev\_set}(\Sigma, t) = \Omega_N
\end{aligned}
where α is defined as the cost incurred by the bot detection algorithm due to a unit average detection delay, N_v{.} and N_{nv}{.} denote the number of vulnerable and non-vulnerable devices selected in Σ at any point of time, Y_v is the set of vulnerable devices, Y_{nv} is the set of non-vulnerable devices, and dev_set(.) is a function that outputs the set of devices sampled in Σ at time t. It is to be noted that the above optimization problem is a combinatorial one and it is computationally hard to find an optimal solution [36]. Hence, we devise a method to numerically solve the optimization problem. The results obtained from the numerical analysis are explained in Section 5. Based on our findings through the formulation of the optimization problem, we have proposed an algorithm for detecting IoT bots (shown in Algorithm 1) which is practical in terms of the low number of packets that need to be monitored for infected device detection. The values for f_n^v and f_n^{nv} to be used while designing our algorithm will be discussed in our numerical analysis.
Evaluation of Proposed Algorithm
In this section, we analyze the behavior of the average detection delay for vulnerable and non-vulnerable devices with varying sampling rates. For reference, the sub-sampled detection procedure of Algorithm 1 can be summarized as follows:

Algorithm 1 (listing): IoT bot detection using Mirai-like traffic signatures and 2D packet sub-sampling
    for each incoming packet recv_pkt do
        if src_dev(recv_pkt) ∉ list_dev then
            add_dev_to_list(src_dev(recv_pkt), list_dev)
        end if
        add_pkt_to_buf(recv_pkt, dev_buf(src_dev(recv_pkt)))
        pktcnt = pktcnt + 1
    end for
    while TRUE do
        sel_dev_set = dev_set(Σ, t)
        for i = 1 to length(sel_dev_set) do
            sampled_pkts(t, :) = dev_buf(sel_dev_set(i), CURRENT_PKT)
        end for
        for j = 1 to length(sampled_pkts(t, :)) do
            if Check_TCP_flag(sampled_pkts(t, j)) = SYN and Check_dst_port(sampled_pkts(t, j)) ∈ {23, 2323} then
                Bot_detected(src_dev(sampled_pkts(t, j))) = TRUE
            end if
        end for
        t = t + 1
    end while

A few important background details of the evaluation are presented below:
-The set of attacked devices, Φ, is selected based on the assumed probability model for malware attacks on vulnerable and non-vulnerable devices. For example, we can assume the probability of attack on vulnerable devices within a given time duration (the transmission of N_p packets) to be p_1 and that on non-vulnerable devices to be p_2.
-The sampling matrix, Σ, used in our evaluation has a staggered structure and may be visualized as in Fig. 4. Since the sampling frequency for vulnerable devices is greater than that for non-vulnerable devices, the portion of Σ containing packets transmitted by vulnerable devices has a denser distribution of 1s than that for non-vulnerable devices. The structure of the matrix also ensures that every device is sampled after a certain number of sampling time units, as required by one of the constraints of the optimization problem presented in Section 4.2 (a construction sketch is given after this list).
-We form a scanning matrix of size (number of IoT devices) × (number of packets transmitted). The matrix uses 0 to represent a normal IoT device packet and 1 to represent a malware scanning packet. Only the devices in Φ have 1s in their corresponding rows of the scanning matrix.
-The elements where the scanning and the sampling matrices are both 1 represent detected scanning packets. This is because matching elements are only present where a scanning packet transmitted by an attacked device has been selected by the sampling process.
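The staggered sampling matrix mentioned in the list above can be generated, for example, with the following round-robin construction (an illustrative sketch; the exact staggering used to produce Fig. 4 may differ):

    import numpy as np

    def staggered_sampling_matrix(n_slots, n_vul, n_nonvul, f_v, f_nv):
        """Rows are sampling time slots, columns are devices (vulnerable devices
        first); an entry of 1 means the device's current packet is inspected.
        Round-robin staggering ensures every device is covered periodically."""
        sigma = np.zeros((n_slots, n_vul + n_nonvul), dtype=int)
        k_v  = max(1, round(f_v * n_vul))      # vulnerable devices sampled per slot
        k_nv = max(1, round(f_nv * n_nonvul))  # non-vulnerable devices sampled per slot
        for t in range(n_slots):
            vul_cols = [(t * k_v + i) % n_vul for i in range(k_v)]
            nv_cols  = [n_vul + (t * k_nv + i) % n_nonvul for i in range(k_nv)]
            sigma[t, vul_cols] = 1
            sigma[t, nv_cols] = 1
        return sigma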
Moreover, we need to form a statistical model for scanning packet arrivals in the scanning matrix. Towards this, we used one of our emulated IoT devices and established a video streaming server to emulate the operation of an IP camera (an IoT device type used in the Mirai attack on Dyn). Another emulated IoT device acted as a client connected to the video streaming server. Based on the above empirical observations, we model the scanning packet arrivals as a Poisson process, i.e., the inter-packet arrival times for scanning packets are exponentially distributed with the average packet arrival rate calculated from the testbed measurements. At all other times, we assume that normal IoT traffic is transmitted, again based on the above observations. The values assumed for the various parameters in our analysis are shown in Table 1.
The plot for average detection delay vs sampling frequency for different values of attack probability on vulnerable devices (p_1) is shown in Fig. 6. The detection delay values are averaged over all the detected devices as well as over a number of trial runs (1000). The average detection delay is measured in the number of packets elapsed, while the sampling frequency is measured per packet elapsed. It can be observed that the average detection delay decreases almost exponentially with increasing sampling frequency. This behavior can be intuitively explained as follows. Increasing the sampling frequency means that the vulnerable devices are sampled much more frequently, which in turn increases the likelihood of sampling the scanning packets transmitted by infected vulnerable devices. Once a scanning packet is sampled, it can be positively concluded that the corresponding source device is infected, as discussed in section 3.3. Hence, an increase in the likelihood of sampling scanning packets should lead to a decrease in the average detection delay as defined in section 4.2. Further, it can also be noted from the plot that increasing the sampling frequency beyond a certain value (e.g. 0.33 for p_1 = 0.5) leads to a slower reduction in the average detection delay. This suggests that while designing the proposed Algorithm 1, the sampling frequency for vulnerable devices should be selected towards the upper half of the range of available values, but not too high since higher sampling frequencies will not yield much additional benefit in terms of a decrease in the average detection delay. Instead, sampling frequencies which are too high may lead to greater consumption of computational resources.
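The shape of these average detection delay curves can be reproduced in miniature with a Monte Carlo loop such as the one below (purely illustrative assumptions: a single infected device, Bernoulli per-packet sampling at frequency f_sample, and Bernoulli per-packet scanning as a discrete stand-in for the Poisson arrivals measured on the testbed):

    import numpy as np

    def avg_detection_delay(f_sample, scan_prob=0.02, n_pkts=20_000, trials=1_000, seed=0):
        """Average gap (in packets) between the first scanning packet and the
        first scanning packet that is actually sampled, for one infected device."""
        rng = np.random.default_rng(seed)
        delays = []
        for _ in range(trials):
            scans   = rng.random(n_pkts) < scan_prob    # scanning packets
            sampled = rng.random(n_pkts) < f_sample     # packets inspected by the router
            first_scan = np.flatnonzero(scans)
            first_hit  = np.flatnonzero(scans & sampled)
            if first_scan.size and first_hit.size:
                delays.append(first_hit[0] - first_scan[0])
        return float(np.mean(delays)) if delays else float("inf")

    # e.g. avg_detection_delay(0.33) is much smaller than avg_detection_delay(0.05)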
One may observe that the average detection delay values decrease slightly as the attack probability increases. This is expected since an increase in attack probability means that more vulnerable devices are likely to be infected, thus increasing the likelihood of sampling the scanning packets transmitted by those infected devices, which results in a decrease in the average detection delay. Lastly, the plots for the three attack probabilities, p_1 = 0.5, 0.7, 0.9, are quite close to each other, suggesting that changes in attack probability do not affect the average detection delay vs sampling frequency behavior significantly.
In Fig. 7, we have illustrated the distribution of average detection delays for vulnerable devices for a sampling frequency of 0.2 and attack probability of 0.6 using a histogram. The distribution closely fits an exponential distribution with a mean of ≈ 52, suggesting that the probability of achieving higher and higher average detection delays for vulnerable devices decreases almost exponentially. Vulnerable devices are sampled at a relatively higher frequency and also have a higher probability of being infected than non-vulnerable devices. Therefore, scanning packets can be detected with lower delays in most trials, resulting in a higher probability for lower values and a lower probability for higher values of average detection delays.
In Fig. 8, we have presented the plot for average detection delay vs sampling frequency for different values of attack probability on non-vulnerable devices (p_2). The plot behavior is somewhat irregular near lower sampling frequencies. For higher sampling frequencies, the average detection delay can be observed to decrease almost linearly with increasing sampling frequency. The intuitive explanation for the decreasing behavior is similar to the one given above for vulnerable devices. While designing the proposed Algorithm 1, a sampling frequency for non-vulnerable devices which is too high may lead to lower average detection delay, but the corresponding increase in processing power and memory requirements may not be desirable since non-vulnerable devices are not expected to be compromised easily. A sampling frequency which is too low, on the other hand, may increase the average detection delay significantly in the unexpected scenario when some of the non-vulnerable devices are compromised. Therefore, the algorithm designers may have to settle for a sampling frequency which falls in the middle of the range of available values.
Fig. 9 shows the distribution of average detection delays for non-vulnerable devices for a sampling frequency of 0.025 and attack probability of 0.2 using a histogram. The distribution assumes the highest values for average detection delays between 0 and 10,000. Thereafter, the values taken by the distribution decrease slowly with increasing average detection delays.
We are developing a software prototype of the proposed bot detection algorithm [37] which will be evaluated on a testbed consisting of physical IoT and non-IoT devices, gateways, and routers. The network behavior of Mirai will be emulated by replaying Mirai traffic captured from our virtualized testbed. In the future, we would like to develop solutions for detecting IoT bots infected with malware that exploits software vulnerabilities to hack the devices and add them to the botnet. For instance, Linux.Darlloz, Reaper and Amnesia malware [38][39][40] use HTTP (Hyper Text Transfer Protocol)-based exploits to perform code injection and arbitrarily execute code on remote devices, bypassing authentication. It should be noted here that the packet sub-sampling approach proposed in this paper is likely to be a part of the bot detection solution devised for such advanced malware. Finally, some malware may try to evade detection, e.g. by attempting to hide their scanning activity. It would be an interesting problem to detect such evasive IoT malware.
Conclusion
In this paper, we proposed an algorithm for detecting IoT devices infected by Mirai or similar malware. The bot detection algorithm uses Mirai traffic signatures and a two-dimensional sub-sampling approach. The deployment of our algorithm within a real-world network was discussed, and prospective actions which can be taken after bot detection were also mentioned. Leveraging measurements taken from a testbed constructed to emulate the behavior of Mirai, we studied the relationship between average detection delays and sampling frequencies for vulnerable and non-vulnerable devices. Based on our analysis of the plots, we made suggestions regarding the selection of sampling frequencies while designing our proposed algorithm. Finally, we identified a few interesting problems stemming from this research which we would like to work on in the future. | 4,858 |
1901.04626 | 2909698198 | We compare a novel Knowledge-based Reinforcement Learning (KB-RL) approach with the traditional Neural Network (NN) method in solving a classical task of the Artificial Intelligence (AI) field. Neural networks became very prominent in recent years and, combined with Reinforcement Learning, proved to be very effective for one of the frontier challenges in AI - playing the game of Go. Our experiment shows that a KB-RL system is able to outperform a NN in a task typical for NN, such as optimizing a regression problem. Furthermore, KB-RL offers a range of advantages in comparison to the traditional Machine Learning methods. Particularly, there is no need for a large dataset to start and succeed with this approach, its learning process takes considerably less effort, and its decisions are fully controllable, explicit and predictable. | As a result of NNs' popularity, little attention has been paid to other AI approaches, such as symbolism, evolutionarism, or Bayesian statistics @cite_0 . More recently, though, new studies have emerged that show successful results in applying alternative approaches to AI tasks. For example, Dennis G. Wilson et al. show that their evolutionary algorithm can outperform the deep neural network approach in playing Atari games @cite_2 . Other studies target General AI, for instance the CYC project @cite_19 . Some researchers advocate that the combination of different techniques into one powerful AI system is the way to go @cite_8 . | {
"abstract": [
"\"Wonderfully erudite, humorous, and easy to read.\" --KDNuggets In the world's top research labs and universities, the race is on to invent the ultimate learning algorithm: one capable of discovering any knowledge from data, and doing anything we want, before we even ask. In The Master Algorithm, Pedro Domingos lifts the veil to give us a peek inside the learning machines that power Google, Amazon, and your smartphone. He assembles a blueprint for the future universal learner--the Master Algorithm--and discusses what it will mean for business, science, and society. If data-ism is today's philosophy, this book is its bible.",
"Cyc is a bold attempt to assemble a massive knowledge base (on the order of 10 8 axioms) spanning human consensus knowledge. This article examines the need for such an undertaking and reviews the authos' efforts over the past five years to begin its construction. The methodology and history of the project are briefly discussed, followed by a more developed treatment of the current state of the representation language used (epistemological level), techniques for efficient inferencing and default reasoning (heuristic level), and the content and organization of the knowledge base.",
"During the 60s and 70s, AI researchers explored intuitions about intelligence by writing programs that displayed intelligent behavior. Many good ideas came out from this work but programs written by hand were not robust or general. After the 80s, research increasingly shifted to the development of learners capable of inferring behavior and functions from experience and data, and solvers capable of tackling well-defined but intractable models like SAT, classical planning, Bayesian networks, and POMDPs. The learning approach has achieved considerable success but results in black boxes that do not have the flexibility, transparency, and generality of their model-based counterparts. Model-based approaches, on the other hand, require models and scalable algorithms. Model-free learners and model-based solvers have close parallels with Systems 1 and 2 in current theories of the human mind: the first, a fast, opaque, and inflexible intuitive mind; the second, a slow, transparent, and flexible analytical mind. In this paper, I review developments in AI and draw on these theories to discuss the gap between model-free learners and model-based solvers, a gap that needs to be bridged in order to have intelligent systems that are robust and general.",
"Cartesian Genetic Programming (CGP) has previously shown capabilities in image processing tasks by evolving programs with a function set specialized for computer vision. A similar approach can be applied to Atari playing. Programs are evolved using mixed type CGP with a function set suited for matrix operations, including image processing, but allowing for controller behavior to emerge. While the programs are relatively small, many controllers are competitive with state of the art methods for the Atari benchmark set and require less training time. By evaluating the programs of the best evolved individuals, simple but effective strategies can be found."
],
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_8",
"@cite_2"
],
"mid": [
"2275841169",
"2071718761",
"2806422867",
"2807874935"
]
} | Comparing Knowledge-based Reinforcement Learning to Neural Networks in a Strategy Game | The core difference between the hype and reality of AI is that machines do not have a human-like brain. Actually, machines do not understand, do not reason, and do not infer. Nevertheless, people have been seeking for decades to create a humanlike intelligent machine by trying to simulate and inherit the way the human brain operates. One of the most recent trends in AI is Machine Learning (ML). To a high extend, ML owns its popularity to the development of Neural Networks -the technology that was inspired by the biological neurons in the human brain. The fact that NNs can solve insolvable before tasks drew a lot of inspiration to the field. Thus, it is no surprise that nowadays the majority of research in AI is focused mostly on NNs, while little attention is paid to other methods.
However, NNs have their own limitations and difficulties. Firstly, they are data greedy. NNs require hundreds of thousands of times more data to learn than a human, for example. While in such domains as speech or image recognition the Internet has created an abundance of data, in some areas acquiring vast amounts of data is challenging [Ng, 2015]. The lack of data in such cases makes NNs ineffective and ill-fated. Secondly, handling huge amounts of data in NNs requires extensive computational power. Giant companies such as Amazon, Google or Facebook have access to sufficient hardware resources and can train NNs for weeks within their budget. Yet, for smaller businesses and projects, the availability of CPUs and RAM becomes the overwhelming constraint. Another major drawback of NNs is that they are one-task models. Once trained, an NN model can be incredibly effective at a specific task, such as detecting objects or playing a game. However, NNs cannot operate like a human brain, solving various tasks, generalizing concepts, and transferring knowledge between different domains. Combining NNs with Reinforcement Learning into Deep Reinforcement Learning (DRL) opened new possibilities for AI. However, DRL also inherited the aforementioned disadvantages of Neural Networks.
The method discussed in this paper is based on the idea that human knowledge can be leveraged in automating problem solving. Rather than collecting tons of data to feed a neural network, teaching explicit rules to the machine can significantly shorten the time for finding the optimal solution. In many cases, people possess a lot of knowledge about the problem, and learn from each other when the knowledge is missing. Similarly, humans could share their knowledge with the machine. Then, instead of a blank start, computers can begin problem solving like a human expert by reasoning about the available knowledge, iterating through it and optimizing.
This idea motivated the Arago company to develop its Knowledge-based Reinforcement Learning approach. To demonstrate the capability of this approach, Arago decided to pick a problem that is sufficiently challenging and closely related to real-world tasks. Thereby, the strategy game CIVILIZATION was chosen as a benchmark. The motivation for this choice can be summarized in the following reasons:
• Historically, games have been considered an excellent test-bed for AI research [Laird and van Lent, 2000]. Games such as Maze, Chess, and Checkers have been a universal benchmark for AI studies since the origin of AI. More recently, Atari console games [Mnih et al., 2013], Mario [Dann et al., 2017] and StarCraft [Hu et al., 2018] gained increasing interest due to their higher complexity and the availability of well-developed APIs. Most recently, AlphaGo managed to win against the World Champion in Go [Silver et al., 2016].
• The complexity of the CIVILIZATION game. This game is considerably more complex than, for example, the Go game. The complexity of Go is estimated at approximately 10^761 possible games [Stanek, 2017]. The game is played in a deterministic environment with static rules. In contrast, a player in the CIVILIZATION game has to manage numerous agents with a much bigger action space and in a non-deterministic environment. This brings the estimated complexity of this game to over 10^15000 possible games.
• The paradigm of the CIVILIZATION game is close to the real world and real business. It means that playing the game can be easily translated into solving real world tasks. In the game, as well as in the business world, it is all about management within restricted resources and competing goals.
Particularly, the game's implementation FreeCiv was taken as a benchmark due to its open-source availability and well-defined API. The KB-RL system was set up to play FreeCiv and the HIRO FreeCiv Challenge [Arago, 2016] was called to demonstrate the concept. To illustrate the difference between the KB-RL approach and the NN framework, it was decided to conduct an experiment comparing them both on a game subtask. The subtask was chosen in such a way that (1) it can be implemented with both approaches, and (2) it would be a typical task for the NN. After careful consideration, we decided on the task of optimizing a regression problem.
Naturally, we also considered Deep Reinforcement Learning as an opponent for the KB-RL. However, early results showed that DRL could not bring much advantage to the results of our experiment. The reasons for this are explained in section 3.
The results show that the KB-RL system is able to outperform the NN in the selected subtask of the FreeCiv game. Moreover, in contrast to the NN, the KB-RL system provides a number of advantages, such as no need for a large dataset to start and succeed with the solution. Its learning process takes considerably less effort, and its decisions are transparent and controllable.
Experimental setup
Task definition
Image and speech recognition are the areas where NNs demonstrate the most exceptional performance. On the other hand, it is hardly possible to explicitly encode a solution for such tasks with classical programming. Therefore, for our experiment we chose an element of FreeCiv that involves perception of the map image. In particular, we picked the task of evaluating map tiles for building cities so as to lead the game to the maximum of generated resources. In other words, we would like to maximize the game's generated resources by optimizing the cities' locations on the game map. As shown below, the amount of natural resources generated in one game is represented by points of different types that are added every game turn. We call the amount of natural resources generated by all cities in one game the total game output (TGO).
For a human player, it is a matter of one look at the map to understand its multiple features and to estimate a tile's value with regard to the future city output and strategic position. Estimating all map features and their possible values in one script of traditional computer programming would result in dozens of 'if-else' blocks and endless code repetition, which is deficient and error prone. With a sufficient amount of data, NNs can solve such a task highly efficiently by analyzing the image of the map chunks and predicting their quality for the given task. Yet, solving the given task with the KB-RL approach appears to be even more effective.
In FreeCiv, the settlement mechanism is implemented by means of building settler units, founding cities, and developing them. A wise choice of city location is a guarantee of rapid growth, rich resources and, consequently, the player's success.
Cities generate natural resources from the terrain tiles within the city borders. City borders may reach terrain within the 5x5 region centered on the city, minus its corners. To extract resources from a tile, the player must have a citizen working there. Each working tile generates a number of food, production and trade points per turn. Trade points can be turned into gold, luxury or science points. These six types of points - food, production, trade, gold, luxury, and science - constitute the city output.
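For reference, the 5x5 region minus its corners contains 21 tiles (the city tile plus 20 workable neighbours); a small helper enumerating these offsets could look as follows (an illustrative sketch, not code from the experiment):

    # Offsets (dx, dy) of the tiles inside a city's working radius: the 5x5
    # square centred on the city with its four corner tiles removed.
    def city_radius_offsets():
        return [(dx, dy)
                for dx in range(-2, 3)
                for dy in range(-2, 3)
                if not (abs(dx) == 2 and abs(dy) == 2)]

    assert len(city_radius_offsets()) == 21   # city tile included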
In this way, we calculate the city output as a sum of all points that are collected with every turn, and we double the production points as they can be used as half a gold point when buying the current city project. Thus, the formula for the city output is given by equation 1
OUTPUT_T = \sum_{t=1}^{T} (gold_t + luxury_t + science_t + food_t + production_t * 2 + trade_t)    (1)
where t and T refer to the turn number.
Consequently, the total game output at turn T is the sum of the outputs of all cities owned by the player up to the T-th turn:
TGO = \sum_{n=1}^{N} OUTPUT_{n,T}    (2)
where N is the number of the player's cities. As previously mentioned, the goal of the experiment is to maximize the TGO by optimizing city positions on the game map. Let's take a look at the parameters related to the map tiles that are relevant for the city output.
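For concreteness, equations (1) and (2) translate directly into code; the sketch below assumes the per-turn points are available as dictionaries with illustrative field names:

    def city_output(per_turn_points):
        """Equation (1): sum a city's per-turn points, counting production twice.
        per_turn_points: one dict per elapsed turn with keys 'gold', 'luxury',
        'science', 'food', 'production', 'trade'."""
        return sum(p["gold"] + p["luxury"] + p["science"]
                   + p["food"] + 2 * p["production"] + p["trade"]
                   for p in per_turn_points)

    def total_game_output(cities):
        """Equation (2): the TGO is the sum of the outputs of all the player's cities."""
        return sum(city_output(c) for c in cities)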
Selected parameters
The output of each tile is affected by the terrain, the presence of special resources, and improvements such as roads, irrigation, or mines. The total city output can also be affected by the city economy, the city governor, and the government type. In addition, trade routes are a powerful mechanism to boost trade points.
For the purpose of this experiment, we considered only those parameters that are relevant to the map qualities:
• (TERR) Terrain of the tile and terrain of the surrounding 5x5 tiles with cut corners. There are 9 possible terrain types in the game suitable for building a city: Desert, Forest, Grassland, Hills, Jungle, Mountains, Plains, Swamp, and Tundra.
• (RES) Resources on the tile and the surrounding 5x5 tiles with cut corners. Every type of terrain has a chance of an additional special resource that boosts one or two of the products. A special resource can be one of 17 types, with at most one per tile.
• (WATER) Availability of water resources. The presence of Ocean or Deep Ocean terrain in the city has special significance due to their rich resources and strategic advantages. Therefore, we consider them as an extra parameter, separately from the other terrain types.
• (RIVERS) Availability of rivers. Rivers enable improvements of the terrain and enhance trade for some types of terrain.
FreeCiv is a very complex game and there might be more parameters that affect the city output. Including each of them in the experiment was not our objective. Firstly, we aimed to include the most relevant features, and secondly, we set up equal conditions for the neural network and for KB-RL, and comparing their performance against each other was our objective. The only two attributes included in the dataset that are unrelated to the map qualities were those that characterize neighboring cities: the number of the player's cities in the 9x9 region (with cut corners) centered on the city, and the number of enemy cities in this region. We mark them 'NEIGHB'.
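To make the feature set concrete, one plausible way of flattening a candidate tile's neighbourhood into a vector is sketched below; this is purely illustrative and an assumption on our editorial part - the exact 83-dimensional encoding used for the NN setup is not reproduced here:

    # Illustrative encoding of a candidate city spot (all dimensions are assumptions).
    TERRAIN_TYPES = ["Desert", "Forest", "Grassland", "Hills", "Jungle",
                     "Mountains", "Plains", "Swamp", "Tundra"]

    def encode_cluster(tiles, n_own_cities, n_enemy_cities):
        """tiles: the 21 tiles of the 5x5-minus-corners region, each a dict with
        'terrain', 'resource' (or None) and 'has_river' fields."""
        terr   = [sum(t["terrain"] == name for t in tiles) for name in TERRAIN_TYPES]  # TERR
        res    = sum(t["resource"] is not None for t in tiles)                         # RES
        water  = sum(t["terrain"] in ("Ocean", "Deep Ocean") for t in tiles)           # WATER
        rivers = sum(bool(t["has_river"]) for t in tiles)                              # RIVERS
        neighb = [n_own_cities, n_enemy_cities]                                        # NEIGHB
        return terr + [res, water, rivers] + neighb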
In FreeCiv, settling happens in the initial phase of the game. After cities are built, the player mostly focuses on developing the economy, technologies and warfare. For the purpose of the experiment, we did not need to play the game until it finished. Stopping the game ahead of time gave us the advantage of significantly shorter episode durations: such episodes took about 10%-20% of the whole game time. Analyzing the HIRO FreeCiv Challenge games [Arago, 2016], we chose to play only the first 120 turns of the game, as this seemed to be a good trade-off between the amount of generated data, the game state, and the playing time.
Regression problem
In fact, we treated the total game output as a regression problem that determines the relationship between the aforementioned parameters and the total game output value:
f : (TERR, RES, WATER, RIVERS, NEIGHB) → TGO    (3)
In other words, given the parameters of a 5x5 map cluster, we aimed to predict a continuous integer value reflecting the future output of a city built in the cluster center. When a new Settler is completed, the player evaluates each map tile and chooses the most suitable location to send the unit there for founding a city. This evaluation ultimately determines the future city output. Therefore, we set up our experiment to optimize the tile evaluation with two different approaches, KB-RL and NN, and compared the outcomes.
Experiment structure
Firstly, playing the FreeCiv game was implemented with the knowledge-based approach, without any optimization that would involve RL. At that stage, the system could play fairly well against the embedded AI, and mostly won. After that, Arago announced the HIRO FreeCiv Challenge [Arago, 2016], asking human players for their expertise in playing the game. The ten best strategies were then implemented as expert knowledge for ten separate knowledge pools. The next stage was to mix all knowledge pools into one and find the best strategy via RL.
Figure 1: The diagram for the KB-RL and NN setups. The task of tile evaluation was implemented in two different ways. On the left, knowledge-based rules were used within the KB-RL approach to evaluate the tile scores. On the right, the Neural Network model was used to predict the tile scores.
For the selected task, we designed two setups: one would evaluate the tile based on the rules derived from human players expertise, and another one would use a neural network. Figure 1 illustrates the difference in the two setups. It is important to note that both setups were exposed to reinforcement learning and used the same knowledge pool except the outlined tile evaluation. Designed this way, the difference in output of the two setups would be the result of different approaches for the tile evaluation, and thus, would become a point of comparison for these two methods.
Initially, we also considered a setup where learning would be performed first, and the NN would then be plugged in to observe its performance against the pure KB-RL approach. However, in this case the NN would lead the game through states that had not been sufficiently experienced and learned during the KB-RL run, which would put the NN setup at a disadvantage. Thus, we decided to run learning for both setups from scratch.
Neural Network setup
To create a dataset, we had 1100 fully played games acquired from the HIRO FreeCiv Challenge. Realistically, this is very scarce data for training neural networks. Given our limited resources, it took more than a month to collect these data, and spending more resources on obtaining additional data was unreasonably expensive. Therefore, we did our best to exploit the available data to their full potential.
Our goal was to create a dataset where data entries represent the map tile parameters as discussed above, and the value would be the output that the city could generate in the first 100 turns of its existence. We collected all tiles on which cities were built from 1100 games and determined their map parameters according to our design. To estimate the city output on these tiles, we faced a few challenges. Firstly, cities built on the same land in different games would differ significantly due to the different game strategies and player's progress.
Secondly, cities were built in different turns, but we had to estimate each tile independently from the turn built. Therefore, we could not use the formula 1 to set the value against our dataset entries. By analyzing the data and experimenting with hyperparameters for training the neural network, we chose to calculate city output as in formula 4
OUTPUT_c = \sum_{i=1}^{100} (gold_i + luxury_i + science_i + food_i + 2 \cdot production_i + trade_i)    (4)
where c refers to the city index, and i represents the age of the city in terms of turns. For example, i = 1 relates to the first turn after a city was built, and i = 100 is the 100th turn of city existence on the map. We replaced duplicate entries with one entry defining the city output as average output of these entries. By keeping only unique entries, we aimed to minimize the possible data imbalance [Kołcz et al., 2003].
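To make the construction of the regression target concrete, the following Python sketch computes the value of formula 4 for one city from hypothetical per-turn records and averages duplicate entries; field and function names are our own illustration, not the code used in the experiment.

from collections import defaultdict

def city_output(turn_records):
    # formula (4): sum of products over the first 100 turns of the city's existence,
    # with production counted twice; 'turn_records' is a hypothetical list of dicts
    total = 0
    for rec in turn_records[:100]:
        total += (rec['gold'] + rec['luxury'] + rec['science']
                  + rec['food'] + 2 * rec['production'] + rec['trade'])
    return total

def deduplicate(entries):
    # replace duplicate feature vectors with a single entry whose target is the average
    buckets = defaultdict(list)
    for features, target in entries:      # 'features' is a hashable tuple of tile attributes
        buckets[features].append(target)
    return [(f, sum(ts) / len(ts)) for f, ts in buckets.items()]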
As a result, we collected more than 2700 unique entries for training our NN model. The input dataset was normalized with a min-max strategy, and the trained model had the following structure:
• The input layer accepts an 83-dimensional feature vector.
• One hidden layer with 95 neurons and ReLU activation.
• Weights are initialized from a truncated normal distribution with zero mean and a standard deviation of 0.0005.
• To avoid overfitting, dropout with probability 0.5 is applied to the hidden layer.
• The output is a single neuron representing a continuous variable.
• The mean tile error is used as the loss function.
• The ADAM optimizer showed the best performance among the optimization algorithms we tried.
• The batch size is 30 and the learning rate is 0.002.
In order to find optimal hyperparameters, including the number of hidden layers, grid search was applied to the model. For model assessment, we chose K-fold cross-validation with 10 splits and shuffling. After training, the mean tile error on the test set reached 0.00637.
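A minimal PyTorch sketch of a network with the listed structure is given below, assuming the hyperparameters above; the squared-error criterion stands in for the paper's tile error measure, and all names are ours.

import torch
import torch.nn as nn

class CityOutputNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(83, 95)        # 83-dimensional input, 95 hidden neurons
        self.drop = nn.Dropout(p=0.5)          # dropout on the hidden layer
        self.out = nn.Linear(95, 1)            # single continuous output
        for layer in (self.hidden, self.out):  # truncated normal init, std 0.0005
            nn.init.trunc_normal_(layer.weight, mean=0.0, std=0.0005)
            nn.init.zeros_(layer.bias)

    def forward(self, x):
        return self.out(self.drop(torch.relu(self.hidden(x))))

model = CityOutputNet()
criterion = nn.MSELoss()                       # stand-in for the paper's tile error
optimizer = torch.optim.Adam(model.parameters(), lr=0.002)

x, y = torch.rand(30, 83), torch.rand(30, 1)   # one random mini-batch of size 30
loss = criterion(model(x), y)
optimizer.zero_grad(); loss.backward(); optimizer.step()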
Having NN in place, we examined the possibility to set it up for DRL. As it turned out, each of the episodes could contribute almost no new entries to the dataset. Firstly, because the city had to exist at least 100 turns to calculate its output as in equation 4. Secondly, the game started each time at the same place and very few tiles were opened to the player at the beginning. Thus, the first two or three cities were built on a very narrow patch of land, and their data entries repeated from game to game. As such, 1000 games could contribute only 7 unique entries to the dataset.
Nevertheless, we decided not to change the setup. Reducing the number of turns from 100 to a smaller count would degrade the data quality, as city output develops in a non-linear manner, and we could not afford such harm to the prediction accuracy given how little data we had. On the other hand, playing the game for more turns would result in very long episodes, and even longer games did not look promising for delivering sufficient data for DRL. Moreover, the KB-RL system was set up under equal conditions and its performance was not diminished by this arrangement, which itself points to one of the advantages of KB-RL.
The dataset and the games database are publicly available at [Nechepurenko and Voss, 2019].
Figure 2: TGO averaged over a number of games. Both setups show improvement with more learning. The difference in output derives from the different methods of evaluating the tiles.
KB-RL setup
Our KB-RL system follows three core principles. (1) A semantic map that maps the processes to a semantic data graph so that the system has a contextual representation of the problem world. (2) Knowledge about the solution: as opposed to recording it as a sequence of steps (like a script), the knowledge is recorded as discrete rules, which allows the engine to reuse them for automating similar but different tasks without recording repeated knowledge. (3) A decision-making engine that applies the available knowledge to the problem's context from the semantic map; critically, due to the integrated AI approaches, the engine is able to dynamically handle incomplete or ambiguous information.
Knowledge about FreeCiv arrived in the KB-RL system from the human experts. During the HIRO FreeCiv Challenge, we collected experience from top players about their settling strategies and their evaluation of the map for building cities. Their knowledge was recorded in form of the discrete rules that we call knowledge items. We used a scoring system to estimate the degree of tile suitability to deliver high city output. Each knowledge item contributed to the score of a particular tile independently of others.
Knowledge items addressed the same parameter set as it was outlined for training the neural network: terrain, resources, and water resources on the central tile and surrounding tiles. There were 14 features covered by knowledge items: 9 for different terrain types, and 5 for other features: (1) resources on the central tile, (2) resources on the surrounding tiles, (3) availability of water resources, (4) access to deep ocean, and (5) presence of whale resources. As whale is a rare resource that boosts two products (food and production) at the same time, many players favor it more than other resources. Thus, we treated it with additional rules.
Players have different strategies, and they value features differently. For instance, some prefer Grassland to Plains and Forest, while others put most value on special resources. Therefore, for each feature we implemented redundant knowledge items carrying alternative amounts of points added to the score. As such, for each of the 14 features we created a group of knowledge items of which only one had to be selected for a particular tile in the given state. In this way, all human experts' strategies were encoded into knowledge items and put together into one big knowledge pool. However, we did not specify an algorithm for how to combine these knowledge pieces into the optimal strategy; this task lay upon the KB-RL system's intelligence. The KB-RL system employs reasoning to combine the knowledge items into one solution, and reinforcement learning to handle redundant knowledge. Whenever the system works on a task, it selects the best matching knowledge within the current context and executes it. The executed command may change the context of the problem, and then the next best matching knowledge can be applied. Hence, the KB-RL system solves the problem step by step by reacting to the problem situation with suitable knowledge. When it needs to choose between alternative knowledge items, it relies on reinforcement learning to rank them against the predefined goal.
Figure 3: Total game output for each single game over the run of episodes. As the game is full of random events, the output varies greatly from game to game. In the beginning, the variation is greater due to the high exploration factor. Later, the agent learns to avoid bad decisions, and the variation declines for both setups.
In terms of reinforcement learning, total game output (equation 1) is the total cumulative return R that the agent collects in the environment defined as a Markov Decision Process. The state space S is defined by the clustering over the all tasks and their contexts in the system. The action space A is defined by all knowledge items that are known to it. We refer to the action-value (Q) or Q-value as the expected long-term return with discount factor λ taking action a under policy π, and to E as the expectation on the return. We use an onpolicy, model-free algorithm similar to Monte Carlo methods [Sutton and Barto, 1998] but adapted to the specifics of our problem to learn the Q-value based on the action performed by the current policy. The policy is represented by the normal distribution.
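As an illustration only, the following sketch shows the kind of every-visit Monte-Carlo value update described above for ranking alternative knowledge items; the state clusters, knowledge-item identifiers and credit assignment are hypothetical simplifications of the actual engine.

from collections import defaultdict

q_values = defaultdict(float)    # (state_cluster, knowledge_item) -> estimated return
visits = defaultdict(int)

def update_from_episode(trajectory, total_game_output, gamma=1.0):
    # trajectory: ordered list of (state_cluster, knowledge_item) decisions in one game;
    # total_game_output: the return R (TGO, equation 1) collected at the end of the episode
    for steps_from_end, (s, a) in enumerate(reversed(trajectory)):
        g = (gamma ** steps_from_end) * total_game_output   # discounted credit for this decision
        visits[(s, a)] += 1
        q_values[(s, a)] += (g - q_values[(s, a)]) / visits[(s, a)]   # incremental mean of returns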
Results
To measure the performance of both setups, we chose the metric of total game output averaged over a number of games that was calculated after every game. Figure 2 visualizes the averaged total game output for KB-RL and neural network setups. Additionally, figure 3 shows the output of each single game in the run of episodes.
In the beginning, the game outputs varied widely, with an average TGO just under 15 000. As learning proceeded, the TGO steadily climbed and the variation declined. We ran the experiment for 1000+ episodes, by which time the game play had stabilized, with average total rewards of 20 500 and 22 400 points for the NN and KB-RL setups, respectively. The average total game output of the last hundred games reached 21 400 points for the NN and 24 000 points for the KB-RL setup.
Notwithstanding the difference, both setups considerably improved the total game output. For the KB-RL setup, the improvement constitutes 49% relative to the starting value, while for the NN it is a 36% increase. While the total game output improved for both setups due to reinforcement learning over the overall game, the difference between the KB-RL and NN results stems from the settling strategies.
Figure 4: TGO for both setups in contrast to human players, tournament games, and embedded AI players. Tournament games are games that were played by expert knowledge pools without RL optimization during the FreeCiv Challenge.
The contrast between the two setups is in the terrain chosen for founding a city. The KB-RL setup favored plains the most, then grassland and forest, while the NN setup built the majority of its cities on grassland.
Figure 6: Distribution of the terrain by type within city borders. While both setups preferred plains and grassland the most, the KB-RL setup occupied almost twice as many ocean tiles. In contrast, the NN setup occupied more tiles of types such as forest, desert and hills.
To understand the achieved results, we compared them to the performance of human players and FreeCiv's own computer players. Figure 4 illustrates the average TGO for KB-RL, NN, human players and embedded AI. The human players output was acquired from human experts during the HIRO FreeCiv Challenge. For comparison, we show the TGO of the top 3 players. They are definitely great experts in playing the game as their play was quick and efficient, and they won against embedded AI with a big advantage.
Comparing the two setups against each other, it can be seen that the fundamental difference in settling cities lay in the choice of terrain type for the central tile, with a smaller but still significant asymmetry in the terrain types of the surrounding tiles. While both setups built a comparable number of cities, with similar amounts of resources and rivers within city borders, the terrain of the city tiles differs significantly (figures 5 and 6). In the KB-RL setup, the majority of cities were built on one of three terrain types: plains (above 40%), grassland (just under 40%) and hills (around 20%). In contrast, most of the cities in the NN setup were built on grassland (above 60%), with a surprisingly large share of cities built on desert terrain (above 10%). Most likely, this is a consequence of the data deficit during training of the NN model, as desert terrain is an obvious disadvantage for city development. Hence, the NN model cannot generalize well to game tiles with this terrain feature.
Cities of both setups occupied the terrain of type grassland and plains to a similar extent (figure 6). However, the KB-RL approach tended to build cities mostly on the coast with a high number of ocean tiles belonging to the city. At the same time, the NN setup shows more preference to the forest terrain, while coastal terrain takes almost 50% less than forest. Furthermore, cities in the NN setup occupied more terrain of types hills and desert in comparison to KB-RL setup.
Discussion
The goal of this article was to compare two approaches, knowledge-based reinforcement learning and the neural network, in solving a typical artificial intelligence task. The evaluation of map tiles for city sites was chosen because it relies on the perception of the image and is one of the most critical aspects of the game. The results show that both setups perform well in comparison to human performance and to the embedded AI players. With all other conditions equal, the KB-RL setup outperformed the NN by 13% on average.
Our experiment shows that leveraging experts' knowledge helps to overcome one of the biggest drawbacks of NNs: their demand for an extensive amount of data in order to achieve good results. Starting with no previous experience, KB-RL played the initial phase of the experiment as well as the NN that had been trained on 1100 previously played games.
Based on human knowledge and empowered by reinforcement learning, KB-RL demonstrates the ability to optimize the complex policy for the high-dimensional action space with relatively small number of iterations. Meanwhile, the neural network could not deliver such optimization and became a bottleneck for city output improvement.
Moreover, in contrast to NN, KB-RL solutions are absolutely transparent and controllable. The ability to explain the system decisions can be imperative in many cases, especially when it comes to human health, security and well-being. | 4,750 |
1907.06143 | 2962163524 | In common real-world robotic operations, action and state spaces can be vast and sometimes unknown, and observations are often relatively sparse. How do we learn the full topology of action and state spaces when given only few and sparse observations? Inspired by the properties of grid cells in mammalian brains, we build a generative model that enforces a normalized pairwise distance constraint between the latent space and output space to achieve data-efficient discovery of output spaces. This method achieves substantially better results than prior generative models, such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs). Prior models have the common issue of mode collapse and thus fail to explore the full topology of output space. We demonstrate the effectiveness of our model on various datasets both qualitatively and quantitatively. | Recent works have made substantial progress in imposing diversity constraint on the latent space of a generative model. In particular, Liu et. al. @cite_49 proposes the normalized diversification technique that effectively solves the problem of mode collapsing. Building on top of their prior work, we use a similar technique to learn an accurate encoding of action and state spaces in physical manipulation tasks. To our knowledge, our model is the first to use normalized diversification in these applications. | {
"abstract": [
"Generating diverse yet specific data is the goal of the generative adversarial network (GAN), but it suffers from the problem of mode collapse. We introduce the concept of normalized diversity which force the model to preserve the normalized pairwise distance between the sparse samples from a latent parametric distribution and their corresponding high-dimensional outputs. The normalized diversification aims to unfold the manifold of unknown topology and non-uniform distribution, which leads to safe interpolation between valid latent variables. By alternating the maximization over the pairwise distance and updating the total distance (normalizer), we encourage the model to actively explore in the high-dimensional output space. We demonstrate that by combining the normalized diversity loss and the adversarial loss, we generate diverse data without suffering from mode collapsing. Experimental results show that our method achieves consistent improvement on unsupervised image generation, conditional image generation and hand pose estimation over strong baselines."
],
"cite_N": [
"@cite_49"
],
"mid": [
"2931659475"
]
} | Neural Embedding for Physical Manipulations | Grid cells, the grid-like neural circuit in mammalian brains, is known to dynamically map the external environment as the animal navigates the world [1]. Remarkably, this encoding preserves metric distance relationships, such that objects close in the real-world are close in the brain's intrinsic map [2]. Moreover, with a few observations and actions, the grid cells can rescale the mapping according to changes in the size and shape of the environment [3]. Such mental model allows quick adaptation to new surroundings, efficient localization and path-planning, and imagination of unseen events.
Inspired by the properties of grid cells, we propose a novel constraint on the latent space of a generative model that achieves data-efficient discovery of output spaces. Similar to the grid cell's distance-preserving encoding of the world, our proposed model preserves the normalized pairwise distance of samples between the parametric low-dimensional latent space and high dimensional output space. Intuitively, this approach encourages the neural network to actively explore the action space and "stretches" the latent space outward, thus making the learned embedding as diverse as possible. Such property enables the model to interpolate and decode to unseen states, mimicking the brain's ability to make accurate interpolations given sparse examples.
In real-world robotics operations, common tasks like predicting ball collision or rope manipulations can involve extremely vast action and state spaces. But often we only see relatively sparse observations. A common approach to encode such high-dimensional spaces is to use deep generative models such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs). However, these generative models suffer from mode collapsing and mode dropping, where the models can only capture a partial real data distribution. These symptoms are especially problematic for our application, where the goal is to explore and represent the full unknown topological structure of the action and state spaces.
We propose a method that effectively solves the problem of mode collapse by learning a distance-preserving mapping from the latent space to the output space. Moreover, our model is trained with adversarial learning, which enforces that the generated samples are plausible. These properties enable the model to learn a latent encoding of a given task from a few samples and to interpolate within the learned encoding to predict future events.
Our model has several practical applications: first, during many robotic operations, the constraints on the action space, such as safety or geometric limits specific to the task, are dynamic and largely unknown, whereas a wide range of possible maneuvers remains uncharted. Our proposed method can help the robot efficiently explore the action space and predict the future states. Second, the latent space in our model supports safe interpolation in the output space, thus enabling robots to output reliable and plausible action proposals. Third, our model can accurately predict future states given current states and sampled actions. Therefore, this method can help a robot build a physical mental mapping of the task at hand.
The main contributions of this work can be summarized as follows:
• We propose a generative model that mimics the idea of grid cells and can approximate the unknown action/state space with only sparse observations.
• We generate a synthetic dataset and two simulation datasets, and demonstrate that our model outperforms strong baseline generative models on these datasets.
Methodology
In most real-world scenarios, a robot can perform many stochastic actions given a current state, and reaches a deterministic future state given a current state and an action. Thus, we consider the mapping from the current state to an action as a variational process, and the mapping from a pair of current state and action to the future state as a deterministic process. We learn a multimodal generative model to predict diverse actions given a current state, and a deterministic forward kinematics model to predict the future state given a pair of current state and action. Our model is shown in figure 2.
Figure 2 (caption excerpt): Conditioned on the input state, the discriminator takes actions as inputs and predicts the probability of whether the action comes from the training data or from the generator. Bottom Right: the forward kinematics model takes the input state and the action as input, and predicts an output state in a deterministic way.
Generative Model to Unfold Action Space
The first part of our method is a generative model that approximates an action space with unknown topology. Using sparse observations as input, we train an auto-encoder that guides the network to learn a meaningful feature embedding encoding important state information. Such a feature embedding enables the network to share knowledge across sparse observations of similar input states. Then, we concatenate the state embedding with a random latent variable sampled from a uniform distribution, and decode it to a predicted action through an action decoder. To encourage active exploration of the action space, a normalized diversity loss is imposed to preserve the normalized pairwise distance between latent variables and sampled actions, as shown in figure 3. A discriminator is co-trained to predict the probability of the actions coming from either the training data or the generator, which enforces that the sampled actions are plausible.
Active Exploration Via Normalized Diversification
When mapping random variables from the latent space to the action space, our generative model preserves the normalized pairwise distance of different generated samples between the latent space and the action space. The distance metric d_z(·, ·) between any two samples is simply the Euclidean distance. We denote z as latent variables, a as actions, and i, j as sample indices.
D^z_{ij} = \frac{d_z(z_i, z_j)}{\sum_j d_z(z_i, z_j)}, \qquad D^{a_t}_{ij} = \frac{d_{a_t}(a_{t_i}, a_{t_j})}{\sum_j d_{a_t}(a_{t_i}, a_{t_j})}    (2)
During training, we treat the normalizer in (2) as a constant when back-propagating the gradient to the generator network. This ensures that we optimize the absolute pairwise distance for a sample, rather than adjusting normalizer to satisfy the loss constraint. The normalized diversity loss function is defined as follows,
L_{ndiv}(s_t, a_t, z) = \frac{1}{N^2 - N} \sum_{i=1}^{N} \sum_{j \neq i}^{N} \max(0, \alpha D^z_{ij} - D^{a_t}_{ij})    (3)
where α is a hyperparameter. We do not consider the diagonal elements of the distance matrix, which are all zeros.
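A PyTorch sketch of the loss in equations (2)–(3) is given below; treating the normalizers as detached constants follows the description above, while the tensor names, the small epsilon, and the default value of alpha are purely illustrative assumptions.

import torch

def normalized_diversity_loss(z, a, alpha=0.8, eps=1e-8):
    # z: (N, dz) latent samples for one state; a: (N, da) corresponding generated actions
    dz = torch.cdist(z, z, p=2)                               # pairwise Euclidean distances in latent space
    da = torch.cdist(a, a, p=2)                               # pairwise distances between sampled actions
    Dz = dz / (dz.sum(dim=1, keepdim=True).detach() + eps)    # normalizers treated as constants
    Da = da / (da.sum(dim=1, keepdim=True).detach() + eps)
    hinge = torch.clamp(alpha * Dz - Da, min=0.0)             # equation (3), elementwise
    n = z.size(0)
    off_diag = 1.0 - torch.eye(n, device=z.device)            # exclude the zero diagonal
    return (hinge * off_diag).sum() / (n * n - n)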
Unlike GANs and VAEs, our generative model parameterizes the latent space as a uniform distribution U (0, 1) instead of Gaussian distribution. There are two reasons. First, the uniform distribution is bounded so that the sampled latent variables will never be too far away from each other. Sampling latent variables too far away from each other might induce extremely large pairwise distance and thus might lead to exploding gradients when optimizing the loss. Second, the Gaussian distribution makes a strong assumption that the data has a mode to fit the distribution, while uniform distribution has the flexibility to map to diverse modes in the data distribution. A simple way to think of this is to cut the uniform distribution space into many different pieces and learn mapping for each piece to fit each mode in the data distribution.
Safe Mapping Via Adversarial Training
While the normalized diversity loss encourages the model to actively explore in the action space, the adversarial loss puts a constraint during exploration so that the predicted actions are plausible. Our adversarial training framework is based on conditional GAN [41]. The action decoder takes an input state encoding, concatenates the encoding with a random variable sampled from a U (0, 1) latent space, and finally decodes to a predicted action. The discriminator takes both real and generated actions as inputs and predicts whether the action is real or fake conditioned on the input state. During implementation, we use the concatenation of the action and input state embedding as the input of the discriminator, and we use hinge loss [42] [43] to train the generator and discriminator,
L_D(s_t, a_t, z) = \mathbb{E}_{a_t \sim q_{data}(a_t)}[\min(0, 1 - D(a_t|s_t))] + \mathbb{E}_{z \sim p(z)}[\min(0, 1 + D(G(s_t, z)|s_t))]    (4)
L_G(s_t, z) = -\mathbb{E}_{z \sim p(z)}[D(G(s_t, z)|s_t)], \qquad L_{adv} = L_D + L_G    (5)
To stabilize training, spectral normalization [44] is applied to scale down the weight matrices in discriminator by their largest singular values, which effectively restricts the Lipschitz constant of the network. The generator and discriminator are updated alternatively in each iteration. After training converges to an equilibrium, the generator is able to sample diverse and plausible actions given a current state.
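For reference, the hinge objectives of (4)–(5) can be written as below in the standard form in which both losses are minimized; the commented usage shows the conditioning by concatenation described in the text, with G, D, the state embedding and the latent dimension left as placeholders.

import torch
import torch.nn.functional as F

def d_hinge_loss(d_real, d_fake):
    # discriminator hinge loss (to be minimized) on raw scores for real and generated actions
    return F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()

def g_hinge_loss(d_fake):
    # generator part of the adversarial objective (to be minimized)
    return -d_fake.mean()

# Illustrative alternating step, assuming modules G and D, a state embedding s_emb,
# real actions a_real and a latent dimension z_dim are defined elsewhere:
#   z = torch.rand(a_real.size(0), z_dim)                     # uniform latent, as in the text
#   a_fake = G(torch.cat([s_emb, z], dim=1))
#   loss_d = d_hinge_loss(D(torch.cat([a_real, s_emb], dim=1)),
#                         D(torch.cat([a_fake.detach(), s_emb], dim=1)))
#   loss_g = g_hinge_loss(D(torch.cat([a_fake, s_emb], dim=1)))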
Forward Kinematics Model to Predict the Future State
In the second part of our method, we use a forward kinematics model to predict future states by inputting a pair of current state and action. We consider predicting a future state as a deterministic process, and thus we train a standard network to regress the predicted future state towards the ground truth future state with a Euclidean reconstruction loss function as follows,
L_{recon}(s_{t+1}, s^*_{t+1}) = \| s_{t+1} - s^*_{t+1} \|    (6)
where s_{t+1} and s^*_{t+1} are the ground truth and the predicted future states, respectively. The states could be high-dimensional images or low-dimensional parameterizations, depending on the application.
Experimental Results
Preliminaries
We conduct experiments on one synthetic dataset and two simulation datasets. Both simulation datasets are generated on the Unity game engine.
The first simulation dataset (shown in the left-hand-side of Figure 4) contains an orange capsule and a ground plane both with fixed friction coefficients. For each data, a point on the capsule's waist is sampled, and an impulse with random direction and magnitude is applied onto the sampled point on the capsule. The dataset contains the images of the capsule before being hit, and 2 seconds after being hit. The wait time is selected empirically to ensure that the capsule does not disappear from the view of the camera when the second picture is taken.
The second simulation dataset (shown on the right-hand side of Figure 4) contains a deformable rope object, a cylinder for pushing the rope, and a ground plane, all with fixed friction coefficients. For each sample, a node n on the rope and a point p are randomly sampled such that |p − n| ∈ {2r, 1}, where r is the radius of the cylinder. Then a magnitude and direction are randomly sampled, and the cylinder moves along the sampled displacement with a fixed velocity. Evaluation Metric. To evaluate whether the sampled actions are plausible and realistic, we use evaluation metrics that quantify the similarity between the generated and real action distributions, including Fréchet Distance [45] and Jensen-Shannon Divergence (JS Divergence) [46].
Baseline Models. We conduct experiments in two settings. One is with a fixed initial state, and another is with various initial states. We use GAN [35] and VAE [34] as the baseline models for the first case, and conditional GAN [47] and conditional VAE [48] as baseline models for second case. Specifically, we use spectral normalization [44] to stabilize GAN training.
Evaluation of Unfolding Action Space
In the synthetic experiment, we model pushing a ball away from the center of a surface where the action space and state space are all unknown. The initial state is considered to be fixed at the center. We design the action space to be a star-like space that models the geometric constraint in the surface environment, and the state space to be a non-linear transformation of the action space that models irregular frictions or slopes on the surface. As shown in figure 9, the actions are denoted as blue dots and the states are denoted as green dots. The dots in the action space represents the orientation and magnitude of pushing force and the dots in the state space represent the location of objects after action applied.
Model | Fréchet Distance ↓ | JS Divergence ↓
VAE [34] | 21.55 ± 2.210 | 0.05 ± 0.002
GAN [35] | 26.83 ± 19.40 | 0.16 ± 0.031
Ours | 3.48 ± 0.748 | 0.02 ± 0.001
Figure 6: JS Divergence between the approximate and real action distributions versus the number of training samples.
First, we conduct a comparison study on the synthetic dataset: we train the VAE [34] by encoding and decoding the action to learn a compact latent space, and train the GAN [35] by directly mapping the latent space to the action space. As shown in figure 9, the VAE is not able to capture the complex structure of the real action space with a Gaussian prior on the action space. In addition, the GAN encounters the problem of mode collapse, which means many latent variables are mapped to the same point in the action space. Superior to both baseline methods, our model is able to actively explore and safely interpolate the full action space and approximates its topological structure. For all models, we train the same forward kinematics model to map the action space to the state space. In the experiments, we use 600 points for training and sample 10,000 points to approximate the action space after training. The quantitative evaluation also indicates that our model better approximates the unknown action space, as shown in table 2.
Second, we conduct evaluation on the rope and roller datasets. In figure 9, we demonstrate that our model can sample diverse and plausible actions given an input state. In table 2, we show our model can outperform VAE [34] and GAN [35] on both datasets using Fréchet Distance [45] and Jensen-Shannon Divergence [46].
Evaluation of Future State Prediction
The state space could consist of low-dimensional vectors or high-dimensional images, depending on the application. We evaluate the performance of future state prediction both qualitatively and quantitatively, as shown in the figure and table below. With a simple MSE reconstruction loss, the forward kinematics model produces very good predictions of future states.
Model | Pixel MSE Error
Rope | 5.8908
Roller | 54.7298
Visualization of Network Feature Embedding
In the many robotic operations, the observations of action and states are often very few and sparse. Thus, we want to ensure that the network has a nice property to share knowledge across sparse observations. Similar current states should correspond to similar action space, so it is crucial that the current state feature embedding encodes the important spatial and shape information of the target object in order to cluster them. We train an auto-encoder on top of the action prediction network to achieve this property. To evaluate the state feature embedding of the learned encoder, we use t-SNE [49] over image features to visualize the rope and roller images. Images with similar configuration appear near each other, which indicates our state encoder learns meaningful information to capture variations of the target objects.
Conclusion
In this work, we propose a generative model that can approximate vast and unknown action and state spaces using only sparse observations. Current generative models suffer from mode collapsing and mode dropping issues, and so we propose a method that solves these issues by learning a distance preserving mapping from latent space and produces plausible action solutions. We generate a synthetic dataset and two simulation datasets, and have demonstrated that our model can outperform strong baseline generative models on all of these datasets. Our work proves useful for applications in robotic operations where observations of action and state spaces are limited, and in cases where full exploration of the topology is needed. Future work includes building a better exploration strategy on top of our method for reinforcement learning. | 2,604 |
1907.06358 | 2963432486 | Due to the high resolution of pathological images, automated semantic segmentation of medical pathological images poses greater challenges than segmentation of natural images. The Sliding Window method has proven effective at handling the high resolution of whole slide images (WSI). However, owing to its locality, the Sliding Window method also suffers from a lack of global information. In this paper, a dual-input semantic segmentation network based on attention is proposed, in which one input provides small-scale fine information and the other input provides large-scale coarse information. Compared with single-input methods, our method based on dual inputs and attention, DA-RefineNet, exhibits a dramatic performance improvement on the ICIAR2018 breast cancer segmentation task. | In this paper we adopt RefineNet @cite_14 as the baseline. The main difference between RefineNet and Unet @cite_26 lies in its unique "Refine Block". The Refine Block is a feature fusion block that can be divided into three parts. (1) Residual Convolution Unit (RCU). This is a convolutional module based on the residual connection design. Compared with the original Resnet @cite_6, the BN layer is removed and the parameter count is reduced, so that it can be used as a feature extractor. (2) Multi-size fusion. Our task is semantic segmentation, with the output and the input of the same size; all blocks except Block No.4 take dual inputs, and the two inputs are at different scales. Thus multi-size fusion is applied for upsampling and feature fusion. (3) Chain residual pooling (CRP). This module efficiently fuses features through convolution and pooling operations with different window sizes. Through this chained pooling operation, the receptive field is expanded. At the same time, multi-scale information is merged through short jump connections, which let gradients flow directly from one module to another. | {
"abstract": [
"Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net ."
],
"cite_N": [
"@cite_14",
"@cite_6",
"@cite_26"
],
"mid": [
"2563705555",
"2194775991",
"1901129140"
]
} | DA-RefineNet: A Dual Input Whole Slide Image Segmentation Algorithm Based on Attention | Breast cancer is one of the most common cancers among women. In 2012, breast cancer caused more than 500,000 deaths, and 2 million new cases were added [1]. At present, the detection and diagnosis of breast cancer mainly depend on observation and analysis of pathological sections stained with hematoxylin and eosin by pathologists. This method is subjective, qualitative, and heavily dependent on the professional skill of pathologists. With the development of efficient and stable slicing, staining and imaging techniques, efficient, accurate and quantitative computer-aided diagnostic algorithms can effectively compensate for the shortage of skilled pathologists and greatly increase the average diagnostic accuracy. In recent years, from LeNet [2] and GoogLeNet [3] to Inception [4], with the improving performance of deep feature extractors, automatic analysis of medical images based on deep convolutional neural networks has been booming. From observing the features learned by deep neural networks, it is generally believed that neurons in shallow layers learn low-level edge or texture features while neurons in deeper layers learn high-level semantic features. Ronneberger et al. first proposed an Encoder-Decoder model named Unet [5] for medical image segmentation. Later works based on Unet, such as H-DenseNet [6] and GP-Unet [7], also retained the Encoder-Decoder architecture. This class of models usually contains two parts, named the Encoder and the Decoder. The Encoder is used to extract high-level semantic features, while the Decoder is used to decode segmentation information from the learned features output by the Encoder through up-sampling and convolution operations. Moreover, the Encoder and the Decoder can be connected for feature fusion through an operation named a "jump connection". These Unet-based methods are widely used in natural image segmentation tasks; however, when it comes to segmentation tasks on whole slide images, single-input Unet-based methods face the problem of a small receptive field due to the large size and high dimension of WSI images. We propose a dual-input encoder-decoder structure named DA-RefineNet, which proves able to tackle the above-mentioned problem efficiently without high memory consumption.
Through observation of the original images and their corresponding segmentation masks (Fig.1), we found that the same texture structure was labeled differently at different locations. Intuitively, we think this is caused by differences among the surrounding tissues in which they are located. To make the input image contain as much surrounding environment information as possible, we could increase the size of the input image; however, this would raise memory consumption considerably and thus was not considered here. Based on this, we propose a dual-input attention network, denoted DA-RefineNet, which combines fine texture features and coarse semantic features to give the network a large enough receptive field at acceptable cost. In our method, since the large-range image only provides semantic information, there is barely any need to pay attention to its texture information, so it is down-sampled to match the dimension of the other input. Hence we can obtain a sufficient receptive field under limited memory. Besides, our method is applicable to, but not limited to, semantic segmentation; the idea carries over to other WSI processing problems.
The main contributions of this paper can be summarized as follows:
(1) Firstly, we propose a new feature extraction method for WSI image segmentation. The proposed method can extract rough global features and fine local features simultaneously and thus obtains a much larger receptive field. It achieves better accuracy on the WSI segmentation task than methods that rely on a single input. (2) Secondly, we explore the interaction between rough and fine features, based on the intuition that rough features can assist the reorganization of fine features. Consequently, we propose an attention-based feature fusion mechanism built on this intuition.
(3) Thirdly, we propose a lightweight feature expression module based on the refine block and the residual connection, which maintains accuracy while greatly reducing the number of parameters.
The remainder of the paper is organized as follows. In Section II, we give a general introduction to related work on semantic segmentation of natural and whole slide images. In Section III, we give detailed information on our proposed methods, including the model architecture, working scheme and implementation details. Experimental results are demonstrated and analyzed in Section IV, which also contains a brief introduction to the dataset used in these experiments. Finally, we summarize this work in Section V.
III. METHODOLOGY
A. Main Frame
Inspired by Unet, we also adopt the encoder-decoder structure in our work. In order to combine the coarse global semantic information with the fine local detail information and increase the receptive field of the network, we use two independent feature extractors to obtain the high-level semantic features of the fine image and the coarse image, respectively. Each feature extractor uses four stages of Resnet, denoted by subscripts 1, 2, 3, and 4. Each stage ends with a downsampling operation, so that each feature map is half the size of the feature map of the previous stage, which helps to quickly expand the receptive field. The extracted features are recombined by the Attention-Refine (Attn-Refine) Block and then gradually restored to the original image size. The feature fusion at each step is a recombination of fine image features performed under the guidance of the coarse image semantics. We denote the fine partial image and the rough global image as A and B respectively, where A is part of B. M and L represent the corresponding segmentation result and label mask of image A, respectively. Resnet1 and Resnet2 are the feature extractors for the fine small-scale image and the rough larger-size image, respectively. Hence, the data flow of DA-RefineNet (Figure 2) can be formulated as:
X^A_1 = Resnet1_1(A)    (1)
X^B_1 = Resnet2_1(B)    (2)
for i=2,3,4:
X^A_i = Resnet1_i(X^A_{i-1})    (3)
X^B_i = Resnet2_i(X^B_{i-1})    (4)
X_3 = Attn_Refine_1(X^A_4, X^B_4)    (5)
X_2 = Attn_Refine_2(X^A_3, X^B_3, X_3)    (6)
X_1 = Attn_Refine_3(X^A_2, X^B_2, X_2)    (7)
M = Attn_Refine_4(X^A_1, X^B_1, X_1)    (8)
LOSS = NLLLoss2d(M, L)    (9)
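The cascade in equations (1)–(9) can be sketched in PyTorch as below; the encoder stages, Attn-Refine blocks and final classifier are passed in as placeholder modules rather than being the authors' implementation.

import torch.nn as nn

class DARefineNet(nn.Module):
    def __init__(self, enc_fine, enc_coarse, refine_blocks, classifier):
        super().__init__()
        self.enc_fine = nn.ModuleList(enc_fine)        # four stages of Resnet1 for the fine input A
        self.enc_coarse = nn.ModuleList(enc_coarse)    # four stages of Resnet2 for the coarse input B
        self.refine = nn.ModuleList(refine_blocks)     # Attn_Refine_1 ... Attn_Refine_4
        self.classifier = classifier                   # maps the final feature map to log-probabilities

    def forward(self, a, b):
        xa, xb = [], []
        for i in range(4):                             # equations (1)-(4): two independent encoders
            a = self.enc_fine[i](a)
            b = self.enc_coarse[i](b)
            xa.append(a)
            xb.append(b)
        x = self.refine[0](xa[3], xb[3])               # equation (5): deepest block takes two inputs
        x = self.refine[1](xa[2], xb[2], x)            # equation (6)
        x = self.refine[2](xa[1], xb[1], x)            # equation (7)
        x = self.refine[3](xa[0], xb[0], x)            # equation (8)
        return self.classifier(x)                      # fed to NLLLoss2d against the mask L (eq. 9)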
B. Attn-Refine Block
Based on the intuition that coarse images guide the reorganization of fine image features, we add attention to the network and propose the Attention-Refine Block (Attn-Refine Block) (Fig. 3). For a fair comparison, we keep the other parts, such as RCU and CRP, unchanged and identical to those in RefineNet.
The Attention Block is designed for feature fusion. For a single-input encoder-decoder structure, the features are X^A and X, where X^A provides structural information to assist the decoding of X. For dual inputs, in addition to X^A and X, we also have a rough large-size feature X^B. We use the semantic information of X^B and X to weight the structural information of X^A and generate more accurate structural information, thereby increasing the expressive power of the features (Fig.4). The working scheme of the proposed Attention Block can be formulated as follows:
X_C = Concat(X^A, X^B, X)    (10)
X_W = Residual(X_C)    (11)
Y = X^A \cdot X_W + X    (12)
where X^A and X^B are feature vectors extracted by the feature extractors from images A and B, and X is the decoded feature generated by the previous Attn-Refine module. Here we use a 1*1 convolution to reduce the concatenated features back to the original number of channels, and then use Global Average Pooling (GAP) and a Sigmoid to generate a one-dimensional weight vector that applies weighted attention to X^A. Incorporating large-scale rough features with this scheme proves very effective. Moreover, the large-scale feature is only concatenated as auxiliary information in the feature fusion process, and this network structure allows the proposed model to be well fitted and easily optimized. In order to explore the relationship between the features, we also propose several feature fusion schemes for comparison, listed below (Fig.5). (1) Concat: This scheme simply fuses the different features by concatenation along the channel dimension, so an increased number of feature channels is obtained. Moreover, the concatenated features contribute equally, which differs from the Attention scheme. The computational complexity increases and thus optimization becomes more difficult.
(2) Add: The direct addition of the corresponding channels of different features has the lowest computational complexity; however, the relationship among the original channels is destroyed during the addition, so some feature information is lost.
(3) Attention: This method incorporates a certain degree of prior knowledge, which is consistent with treating the rough large-scale image mentioned above as auxiliary information to promote the reorganization of fine image features.
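A sketch of the attention-based fusion of equations (10)–(12) is given below, assuming the three feature maps have already been brought to the same spatial size and channel count; the channel width and module names are our assumptions.

import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.reduce = nn.Conv2d(3 * channels, channels, kernel_size=1)   # 1*1 conv back to `channels`
        self.pool = nn.AdaptiveAvgPool2d(1)                              # global average pooling

    def forward(self, x_a, x_b, x):
        x_c = torch.cat([x_a, x_b, x], dim=1)             # equation (10): concatenate the three features
        w = torch.sigmoid(self.pool(self.reduce(x_c)))    # equation (11): per-channel weight vector
        return x_a * w + x                                # equation (12): gated X^A plus residual X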
C. LW-Attn Block
From the experimental results demonstrated in a later section, we observed that the proposed method achieves comparably high performance. Furthermore, to illustrate the effectiveness of our proposed method, we remove some parameter-heavy parts and derive a lightweight version of the Attention Block (denoted the LW-Attn Block). The experimental results show that the model weights can be greatly reduced while the accuracy is almost unaffected.
The LW-Attn Block (Fig.6) is a lightweight feature fusion module based on attention. Its parameter count is about one third less than that of the Attn-Refine block, yet hardly any reduction in accuracy was observed. Compared with the Attn-Refine block, the CRP layer is removed. We also turn the RCU stack into a simple residual module, the SRB (Fig.7), which is inspired by the architecture of ResNet [17] [30]. The first and the last components are 1*1 convolution layers, which we use to unify the number of channels. The remaining part is a residual block, from which we deleted the BN and ReLU layers for simplicity. This residual connection not only allows the gradient to propagate quickly, but also allows multiple features of different scales to be fused directly, thereby increasing the expressive power of the segmentation network.
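Following the description of Fig. 7, the SRB can be sketched as below: 1*1 convolutions at the entry and exit to unify the channel count and two 3*3 convolutions on the residual branch, with BN and ReLU omitted as stated in the text; the channel arguments are our assumptions.

import torch.nn as nn

class SRB(nn.Module):
    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.entry = nn.Conv2d(in_ch, mid_ch, kernel_size=1)     # 1*1 conv to unify channels
        self.branch = nn.Sequential(                             # two 3*3 convs on the residual branch
            nn.Conv2d(mid_ch, mid_ch, kernel_size=3, padding=1),
            nn.Conv2d(mid_ch, mid_ch, kernel_size=3, padding=1),
        )
        self.exit = nn.Conv2d(mid_ch, out_ch, kernel_size=1)     # 1*1 conv at the output

    def forward(self, x):
        x = self.entry(x)
        x = x + self.branch(x)        # residual connection lets gradients pass directly
        return self.exit(x)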
D. Evaluation Metrics
In order to evaluate the performance of our proposed method, we follow previous works [27] [28] and choose MIOU, Accuracy and score as the evaluation metrics. The score is an indicator dedicated to this task.
h = \sum_{i=1}^{N} \max(|gt_i - 0|, |gt_i - 3|) \cdot [1 - (1 - pred_{i,bin})(1 - gt_{i,bin})]    (13)
score = 1 - \sum_{i=1}^{N} |pred_i - gt_i| / h    (14)
where pred denotes the output predictions over the categories (0, 1, 2, 3), gt denotes the ground truth, and the subscript bin indicates binarization: a label of 0 stays 0, and all other labels become 1. The score is based on accuracy but is designed to penalize predictions that are further from the real values more heavily. Note that, in the denominator, cases in which the prediction and the ground truth are both 0 (normal class) are not counted, since these can be seen as true negative cases.
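The score of equations (13)–(14) can be computed per image as in the following sketch; pred and gt are integer label maps over {0, 1, 2, 3}, and the implementation reflects our reading of the formula rather than the official challenge code.

import numpy as np

def challenge_score(pred, gt):
    pred = np.asarray(pred, dtype=np.int64).ravel()
    gt = np.asarray(gt, dtype=np.int64).ravel()
    pred_bin = (pred != 0).astype(np.int64)          # binarization: 0 stays 0, other labels become 1
    gt_bin = (gt != 0).astype(np.int64)
    relevant = 1 - (1 - pred_bin) * (1 - gt_bin)     # 0 only where prediction and ground truth are both normal
    h = np.sum(np.maximum(np.abs(gt - 0), np.abs(gt - 3)) * relevant)   # equation (13)
    if h == 0:
        return 1.0                                   # no relevant pixels: nothing to penalize
    return 1.0 - np.sum(np.abs(pred - gt)) / h       # equation (14)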
MIOU is the standard measure of accuracy in semantic segmentation. It calculates the IOU for each class and then averages the IOUs over all categories. The IOU is defined as:
IOU = \frac{DR \cap GT}{DR \cup GT}    (15)
where DR is the detection result and GT is the ground truth.
E. Implementation Details
In this work we use the Negative Log Likelihood as the loss criterion for the proposed model. The Negative Log Likelihood loss is also called the Cross-Entropy loss, and can be written as:
NLLLoss2d(t, y) = -\sum_{i} t_i \log y_i    (16)
where t is the one-hot vector of the labels (i = 0, 1, 2, 3) and y is the vector of softmax output probabilities for normal, benign, in situ and invasive. We use the SGD [31] optimizer to train our model. Since the encoder is pre-trained on ImageNet, we set different hyperparameters for the encoder and the decoder; the encoder parameters carry the suffix ENC and the decoder parameters the suffix DEC. We divide training into three steps of 25 epochs each. The initial learning rates of the steps are LR_ENC = [5e-4, 2.5e-4, 1e-4] and LR_DEC = [5e-3, 2.5e-3, 1e-3], the momentum is set to 0.9 and the weight decay (WD) to 1e-5. The batch size is 12. All code is written in PyTorch, and we use four GTX 1080 Ti GPUs for training.
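The staged schedule above can be expressed as SGD parameter groups with separate encoder and decoder learning rates that are reset at the start of each 25-epoch step; the model attributes and the per-epoch training routine are placeholders, not the authors' code.

import torch

LR_ENC = [5e-4, 2.5e-4, 1e-4]
LR_DEC = [5e-3, 2.5e-3, 1e-3]

def train_three_steps(model, loader, train_one_epoch):
    # model.encoder / model.decoder and train_one_epoch are placeholders for the actual components
    for step in range(3):
        optimizer = torch.optim.SGD(
            [{'params': model.encoder.parameters(), 'lr': LR_ENC[step]},
             {'params': model.decoder.parameters(), 'lr': LR_DEC[step]}],
            momentum=0.9, weight_decay=1e-5)
        for _ in range(25):                              # 25 epochs per step
            train_one_epoch(model, optimizer, loader)    # the DataLoader uses a batch size of 12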
We mainly conducted three experiments to evaluate the effectiveness of our proposed model. The first experiment explored the role of the proposed dual-input structure on improving segmentation performance. The second experiment was aimed at exploring the effect of several different feature fusion mechanisms. And the last experiment was meant to derive a lightweight model with segmentation accuracy not decreased.
The training set consists of 3000 patches selected from images 1, 2, 3, 4, 6, 7, 8, and 9. In order to ensure relative balance between the four categories, we select a total of 2000 patches containing benign or in situ regions. Since some normal and invasive samples also exist in these patches, we randomly choose another 1000 patches without benign or in situ regions. The validation set consists of 500 relatively balanced patches selected from image 10. The test set consists of all 3000 patches from image 5. More detailed information on the dataset can be found in [32].
Fig. 7: Simple residual block (SRB). This block is inspired by Resnet. The first and last convolution kernels are 1*1, and the two convolution kernels on the residual connection are 3*3.
We use data augmentation such as random flipping and random cropping, and we normalize all images with ImageNet statistics. Nowadays, for the WSI segmentation problem, strong morphological post-processing is often used to optimize the segmentation result; although this can increase segmentation accuracy, it cannot reflect the true performance of the network or reveal the shortcomings of the method. Therefore, in order to represent the merits of the method directly, we do not use any pre- or post-processing. Even without any post-processing, our method is still competitive compared to other methods.
IV. EXPERIMENTS AND RESULTS
A. ICIAR2018 Dataset
We used the dataset of the ICIAR (International Conference on Image Analysis and Recognition) 2018 challenge [32]. The dataset is composed of Hematoxylin and eosin (H&E) stained breast histology microscopy images and whole-slide images. It encompasses a total of 400 microscopy images, which were labelled as normal, benign, in situ carcinoma or invasive carcinoma according to the predominant cancer type in each image. The annotation was performed by two medical experts, and images with disagreement were discarded. The dataset also contains 10 whole-slide images. Whole-slide images are high-resolution images containing the entire sampled tissue; in this sense, the microscopy images can be seen as details of the whole-slide images. Because of that, each whole-slide image can have multiple normal, benign, in situ carcinoma and invasive carcinoma regions. The annotation of the whole-slide images was also performed by two medical experts, and images with disagreement were discarded. Each image has a corresponding list of labelled coordinates that enclose benign, in situ carcinoma and invasive carcinoma regions (the remaining tissue is considered normal and thus is not relevant for performance evaluation).
Another point worth mentioning is that in our work we did not use the microscopy images for pre-training; instead, we employed the whole slide images and the microscopy images to train the model simultaneously.
B. The effect of Dual-Input
In this experiment, to evaluate the effect of the proposed dual-input mechanism and to make the evaluation more convincing, we adopted several encoder and decoder structure variants. For the feature extractors of the two images (the fine small-scale image A and the coarse large-scale image B), we use three variants of ResNet (50, 101 and 152, respectively) for A and ResNet-50 for B, in order to keep the model parameter count under control. As for the feature fusion part, we used mode (2) in Fig. 5 for the sake of simplicity. The results are displayed in Table I. Among the encoders listed in Table I, Resnet50 means that we used the single-input method to extract the features of image A; Resnet50-50double means that we used the dual-input method, with two independent Resnet50 networks as the feature extractors for the fine image and the coarse image. For the decoder part, we used the two feature fusion modules described in Section III. The evaluation indicators IOU0, IOU1, IOU2, and IOU3 represent the IOU scores of the four categories normal, benign, in situ and invasive cancer, respectively, and MIOU represents the average IOU over the four categories. The loss curves of experiments 4 and 5 are also displayed (Fig.8).
From the results we observe the following:
(1) The segmentation accuracy of benign and in situ is relatively low, which may be related to the class imbalance of our dataset. Although we have adopted some techniques to balance the different classes, it is inevitable that there are more normal and invasive cancer samples than samples of the other two types.
(2) According to the four corresponding pairs of comparison experiments, our dual-input method brings a large improvement over the original method. In particular, the most important indicator, MIOU, increases by 28%, 10%, 18%, and 18%, respectively.
(3) Comparing the results of experiments 3, 5, 7, and 9, we found that dual input can reduce the dependence on the feature extractor. Even with a very shallow encoder and a very simple decoder, acceptable results can be obtained. Comparing experiments 3 and 7, we found that the results are very similar, which proves that our dual-input feature extraction method is efficient and that a shallow feature extractor can obtain results similar to those of a deep one. This greatly relieves the predicament of most existing methods, which merely focus on network depth, and indicates that a good feature extraction structure can make the network achieve better results.
(4) Our proposed lightweight LW-Attn block is very competitive. Observing experiments 7 and 9, we found that the results of the LW-Attn block and of the original A-Refine block are basically consistent, while the parameter size of our module is one third of the original.
(5) We also compared the number of parameters. Although our approach leads to an increase in the number of parameters, Experiments 2 and 5 show that our method can obtain better results with fewer parameters.
(6) It can be seen from Fig. 8 that our training loss declines more quickly, the model converges faster, and the fitting quality is much better than that of the original method. This is because the single-input method does not fit well the situation shown in the box in Fig. 1: what the single-input network sees is that similar textures receive different labels. As a result, the loss fluctuates as it decreases and the converged value is high.
C. Exploration on different feature fusion schemes
We intuitively believe that rough global features provide auxiliary information for fine local features; this can also be seen as feature recombination. For this reason, we designed this experiment to explore the effects of different feature recombination schemes (see Fig. 5).
Experiments were done with ResNet50-50 double and ResNet101-50 double as the encoder and the Refine block as the decoder. The experimental results are shown in Table II. "101 50 1" indicates that Resnet101 was used as the feature extractor for the fine small-scale images, Resnet50 was used as the feature extractor for the rough large-scale images, and scheme (1) in Figure 5 was used as the feature fusion module.
From the results we see that a good feature fusion structure can reduce the dependence of the network model on the feature extractor. Compared with resnet101, the feature extraction capacity of resnet50 is relatively weak. When we used feature fusion structures such as Add and Concat, a similar conclusion can be drawn from the corresponding results. But when we used the attention-based feature fusion structure, the result with resnet50 was better, which means that the coarse global information is indeed a prior for the fine local features and that, by using attention-based feature fusion, we can not only accelerate convergence but also obtain better-expressed features. This not only reduces our reliance on hardware, but also saves time and provides a practical idea for real-time segmentation.
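One plausible, hypothetical form of the attention-based fusion discussed above is to let the coarse global feature produce channel weights that re-weight the fine local feature. This is only a sketch of the general idea, not the A-Refine or LW-Attn block itself; the reduction ratio and residual path are invented for the example.

```python
import torch
import torch.nn as nn

class CoarseGuidedChannelAttention(nn.Module):
    """Re-weight fine local features using a gate computed from coarse global features."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global context of the coarse branch
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, fine: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        w = self.gate(coarse)          # (B, C, 1, 1) channel weights from the coarse prior
        return fine * w + fine         # attended features plus a residual path
```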
D. Additional Experiment
We verified that the performance improvement derives from the proposed dual-input scheme rather than from the increase in the number of parameters, which further demonstrates the superiority of our method. We compared our method with a multi-size dual input. Multi-size dual input refers to feeding two images of the same content but different sizes. We use the three encoding structures resnet50, 101, and 152, and the Refine Block as the decoder. The results are shown in Table III.
From the results we can see:
(1) Compared with single input, multi-size dual input still brings a large improvement in performance. This shows that multi-size input has a great effect on the semantic segmentation task. Because the target of our segmentation is irrelevant to the size of the images, we want the segmentation network to extract as many scale-invariant features as possible, and this multi-size dual input promotes the extraction of scale-invariant features by the network.
(2) The IOU0 and IOU3 of multi-size dual input are basically the same as ours, but IOU1 and IOU2 are lower. This means that the multi-size dual-input method has limitations for difficult segmentation cases and shows that, like the single-input method, it cannot use global information to optimize the results.
(3) Although we only used the Add feature fusion method, this is enough to show the superiority of our method. While being better than multi-size dual input overall, the IOU1 and IOU2 indicators are greatly improved, which shows that the advancement of our method is not due to the increase in the number of parameters.
E. Visualization Results
We also present visual segmentation results to better illustrate the effectiveness of the proposed model. As shown in Fig. 9, because we did not apply any post-processing techniques, some blur and noise remain. Although there are some unsatisfying results, they reflect shortcomings of the whole-slide image segmentation problem that may be addressed in future works. By observing the segmentation results in the third and fourth rows (single input) and in the last row (multi-size input), we can see that there is a lot of red noise on the black background, which means the network tends to classify normal tissue as benign. This is consistent with the phenomenon mentioned in the motivation at the beginning of the paper: due to the lack of global information, some parts of the training data with textures similar to normal areas are likely to be labelled as benign, which misleads the network, weakens its ability to separate benign from normal, and results in misclassification. The corresponding segmentation results of the dual-input network in rows 5 and 6 have a relatively clean background. We also compare the Add-based feature fusion in row 5 with the Attention-based feature fusion in row 6. Compared with the simple addition of two features, we found that using large-scale coarse semantic information to guide the reorganization of small-scale fine texture features achieves better results, especially for test image 5.
All methods predict some normal areas on the right side of image 5 as benign. We examined the original image in detail and found that the texture of this part was indeed very different from the texture of other normal areas, so we sought help from experts and were informed that there are likely errors in the labels of the corresponding original data, which further explains some of the noisy, unsatisfying results and the merit of our method. [Fig. 9. Visual segmentation results of image 5 (left) and image 1 (right). The first and second rows show the original image and its corresponding label image, respectively. The third and fourth rows give the segmentation results of the single-input network with ResNet-50 and ResNet-101, respectively. The fifth and sixth rows show the results of the dual-input ResNet101-50 with the "Add" and "Attention" feature fusion schemes, respectively. The last row shows the results of the multi-size dual input.]
V. CONCLUSION
In this paper, we propose a dual-input whole-slide breast image semantic segmentation framework based on attention, using coarse global features as auxiliary information to promote the recombination of fine local features. The idea of the proposed method, which includes the feature extraction and feature fusion schemes, is derived from human intuition for solving segmentation tasks. The proposed method can give insight and provide a general framework for future works on WSI segmentation. Moreover, we also propose a lightweight feature fusion module named the LW-Attn Block, which achieves comparable performance with much fewer model parameters: when the parameter quantity is reduced by one third, the segmentation accuracy is basically unchanged, which reflects the merit of our method.
At the same time, we also compare the influence of several different feature fusion methods on our network, showing that the coarse global information can be used as a prior for the fine local information to guide its feature reorganization, thus accelerating network convergence and improving the expressive ability of the network. This attention-based approach reduces the network's dependence on the feature extractor depth to a certain extent. We can use a shallower feature extraction network to get better results, which not only reduces the model size but also gives us a lot of inspiration: network performance does not depend only on deep feature extractors; correct prior knowledge and graceful feature fusion are key factors that determine network performance.
In future work, further studies can be done to explore the generality of the proposed method on other WSI-related tasks such as survival prediction, gastric cancer detection, and pancreas segmentation. | 4,442 |
1907.06385 | 2957959342 | We propose a method to learn unsupervised sentence representations in a non-compositional manner based on Generative Latent Optimization. Our approach does not impose any assumptions on how words are to be combined into a sentence representation. We discuss a simple Bag of Words model as well as a variant that models word positions. Both are trained to reconstruct the sentence based on a latent code and our model can be used to generate text. Experiments show large improvements over the related Paragraph Vectors. Compared to uSIF, we achieve a relative improvement of 5 when trained on the same data and our method performs competitively to Sent2vec while trained on 30 times less data. | Methods requiring labels generally use less training data as they can be more data efficient due to the better training signal that can obtained from labeled data. Examples include: InferSent which uses labelled entailment pairs, GenSen utilizing supervision from multiple tasks, and ParaNMT with paraphrase sentence pairs or conversational responses @cite_1 . | {
"abstract": [
"We present models for encoding sentences into embedding vectors that specifically target transfer learning to other NLP tasks. The models are efficient and result in accurate performance on diverse transfer tasks. Two variants of the encoding models allow for trade-offs between accuracy and compute resources. For both variants, we investigate and report the relationship between model complexity, resource consumption, the availability of transfer task training data, and task performance. Comparisons are made with baselines that use word level transfer learning via pretrained word embeddings as well as baselines do not use any transfer learning. We find that transfer learning using sentence embeddings tends to outperform word level transfer. With transfer learning via sentence embeddings, we observe surprisingly good performance with minimal amounts of supervised training data for a transfer task. We obtain encouraging results on Word Embedding Association Tests (WEAT) targeted at detecting model bias. Our pre-trained sentence encoding models are made freely available for download and on TF Hub."
],
"cite_N": [
"@cite_1"
],
"mid": [
"2794557536"
]
} | 0 |
||
1907.06212 | 2960316954 | In this paper, we propose an end to end joint radio and virtual network function (VNF) resource allocation for next-generation networks providing different types of services with different requirements in term of latency and data rate. We consider both the access and core parts of the network and formulate a novel optimization problem whose aim is to perform the radio resource allocation jointly with VNF embedding, scheduling, and resource allocation such that the network cost, defined as the consumed energy and the number of utilized network servers, is minimized. The proposed optimization problem is non-convex, NP-hard, and mathematically intractable, and hence, we use an alternative search method (ASM) to decouple the main problem into some sub-problems of lower complexity. We propose a novel heuristic algorithm for embedding and scheduling of VNFs by proposing a novel admission control (AC) algorithm. We compare the performance of the proposed algorithm with a greedy-based solution in terms of the acceptance ratio and the number of active servers. Our simulation results show that the proposed algorithm outperforms the conventional ones. | The fifth generation of wireless cellular networks (5G) provides a wide range of services with various requirements that should be guaranteed in the network @cite_3 @cite_7 . In the traditional network, dedicated and specific hardware equipment is required. Therefore, in order to provide a new service in these networks, it is necessary for each operator to purchase the hardware resources and install it on the network @cite_25 . | {
"abstract": [
"Software-Defined Networking and Network Functions Virtualization have initiated a new landscape within the telecom market landscape. Initial proof-of-concept prototypes for NFV-enabled solutions are being developed at the same time SDN models are identified as the futures solutions within the telecom realm. In this article, we provide a brief overview of the application and state-of-the-art of SDN and NFV technologies over optical networks. At the same time, we provide the first formalisation model for the VNF complex scheduling problem, using the complex job formalisation. The article aims at being used as starting point in order to optimally solve the scheduling problem of virtual network functions that compose network services to be provisioned within the SDN paradigm. Finally, we also provide an example of the virtualization of the routing function over an SDN-enabled domain.",
"",
"Network function virtualization has received attention from both academia and industry as an important shift in the deployment of telecommunication networks and services. It is being proposed as a path towards cost efficiency, reduced time-to-markets, and enhanced innovativeness in telecommunication service provisioning. However, efficiently running virtualized services is not trivial as, among other initialization steps, it requires first mapping virtual networks onto physical networks, and thereafter mapping and scheduling virtual functions onto the virtual networks. This paper formulates the online virtual function mapping and scheduling problem and proposes a set of algorithms for solving it. Our main objective is to propose simple algorithms that may be used as a basis for future work in this area. To this end, we propose three greedy algorithms and a tabu search-based heuristic. We carry out evaluations of these algorithms considering parameters such as successful service mappings, total service processing times, revenue, cost etc, under varying network conditions. Simulations show that the tabu search-based algorithm performs only slightly better than the best greedy algorithm."
],
"cite_N": [
"@cite_25",
"@cite_7",
"@cite_3"
],
"mid": [
"2044684891",
"",
"1823841943"
]
} | Energy Cost Minimization by Joint Radio and NFV Resource Allocation: E2E QoS Framework | B. Our Main Contributions
Obviously, in real networks and practical scenarios, QoS is an E2E concept and depends on both the radio access and the core network. In fact, guaranteeing QoS for different applications means ensuring all of the requirements, such as the data rate, in all parts of the network. These reasons motivate us to propose a framework in which radio and NFV RA are considered jointly for E2E service provisioning and a new AC mechanism is devised for the service requests. The main contributions of this paper can be summarized as follows:
• In this paper, in order to guarantee E2E QoS while utilizing resources efficiently, we propose a novel E2E QoS-aware framework that jointly considers radio and NFV RA, which has not been considered in the literature.
• More importantly, we introduce a new approach for VNF scheduling that takes the network service latency into account. We introduce a new scheduling variable with which the latency of each VNF is obtained by calculating the processing and waiting times of all VNFs scheduled before it. This means that the time at which each NS is finished can be calculated as the sum of the waiting and processing times elapsed from the packet's entrance until its reception by the destination [21]. On the other hand, we consider a maximum tolerable latency for each packet of the different services which should be ensured by the network. We propose a novel, efficient, and low complexity algorithm based on minimizing the number of active VMs (servers) while guaranteeing the requested service QoS requirements.
• We formulate a new optimization problem for radio and NFV RA with the aim of minimizing a cost function in terms of radio and NFV resources. In the proposed optimization problem, subcarrier assignment, power allocation, VNF embedding, scheduling, ordering, and server utilization are the optimization variables. Our main aim is to minimize the network cost in terms of the transmit power and the number of active nodes while guaranteeing the service QoS metrics.
• To overcome the infeasibility issue in the solution of the proposed optimization problem, we propose a new elastication method and a novel AC method that reject some users in order to guarantee the service requirements requested by the remaining users. Based on the proposed AC, the user which has the greatest effect on the infeasibility of the optimization problem, i.e., which needs the most resources to guarantee its QoS, is found and its service is rejected.
• We prove the convergence of the proposed algorithm and analyze its computational complexity.
• We provide numerical results for the performance evaluation of the proposed problem and algorithm under different network configurations and against a greedy-based algorithm. Our simulation results show that the proposed algorithm outperforms the greedy-based one by approximately 8% at the same computational complexity.
C. Paper Organization
The rest of the paper is organized as follows. In Section II, the system model is explained. In Section II-E, the problem formulation is presented. The problem solution is presented in Section III. In Section IV, the computational complexity and convergence of the proposed algorithm are investigated. The simulation results are presented in Section V. Finally, in Section VI, the paper is concluded.
Notations: Vectors and matrices are indicated by bold lower-case and upper-case characters, respectively. $|\cdot|$ and $\|\cdot\|_p$ represent the absolute value and the $p$-norm, respectively. $\mathcal{A}$ denotes the set $\{1, \dots, A\}$, $\mathcal{A}(i)$ is the $i$-th element of set $\mathcal{A}$, and $\mathbb{R}^n$ is the set of $n$-dimensional real vectors.
Moreover, $U_d[a, b]$ denotes the uniform distribution on the interval $[a, b]$.
II. SYSTEM MODEL AND PROBLEM FORMULATION
A. Radio RA Parameters
We consider a single-cell network with $U$ users whose set is $\mathcal{U} = \{1, \dots, U\}$ and $K$ subcarriers whose set is $\mathcal{K} = \{1, \dots, K\}$. We define the subcarrier assignment variable $\rho_u^k$, with $\rho_u^k = 1$ if subcarrier $k$ is allocated to user $u$ and $\rho_u^k = 0$ otherwise. We assume orthogonal frequency division multiple access (OFDMA) as the transmission technology, in which each subcarrier is assigned to at most one user. To consider this, the following constraint is imposed:
$$\sum_{u \in \mathcal{U}} \rho_u^k \le 1, \quad \forall k \in \mathcal{K}. \qquad (1)$$
Let $h_u^k$ be the channel coefficient between user $u$ and the BS on subcarrier $k$, $p_u^k$ be the transmit power from the BS to user $u$ on subcarrier $k$, and $\sigma_u^k$ be the power of additive white Gaussian noise (AWGN)$^3$ at user $u$ on subcarrier $k$. The received signal to noise ratio (SNR) of user $u$ on subcarrier $k$ is $\gamma_u^k = \frac{p_u^k h_u^k}{\sigma_u^k}$, and the achievable rate of user $u$ on subcarrier $k$ is given by
$$r_u^k = \rho_u^k \log(1 + \gamma_u^k), \quad \forall u \in \mathcal{U}, k \in \mathcal{K}. \qquad (2)$$
Hence, the total achievable rate of user $u$ is given by $R_u = \sum_{k \in \mathcal{K}} r_u^k, \ \forall u \in \mathcal{U}$. The following constraint states the power limitation of the BS:
$$\sum_{k \in \mathcal{K}} \sum_{u \in \mathcal{U}} \rho_u^k p_u^k \le P_{\max},$$
where $P_{\max}$ is the maximum transmit power of the BS.
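As a small numerical illustration of the rate model above (a toy script, not the paper's code), the per-user rates can be computed from an OFDMA assignment and a power matrix as follows; all numbers are invented and a base-2 logarithm is assumed.

```python
import numpy as np

U, K = 3, 4                                  # users, subcarriers (toy sizes)
rng = np.random.default_rng(0)
h = rng.exponential(1.0, size=(U, K))        # channel gains
sigma = 1e-2 * np.ones((U, K))               # noise powers
p = 0.1 * np.ones((U, K))                    # transmit powers p_u^k
rho = np.zeros((U, K))
rho[rng.integers(0, U, size=K), np.arange(K)] = 1   # OFDMA: one user per subcarrier

snr = p * h / sigma
R = (rho * np.log2(1 + snr)).sum(axis=1)     # total rate of each user
print(R, "total BS power:", (rho * p).sum())
```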
B. NFV RA Framework
In this subsection, we explain how the generated traffic of each user is handled in the network by performing the different NFs of the requested user's NS$^4$ on the different servers/physical nodes by leveraging NFV$^5$. In this regard, we consider NFV RA that consists of a new approach for the embedding and scheduling phases. In the embedding phase, we map each NF onto a server that is capable of running that NF. Note that we do not consider mapping virtual links onto physical links and leave it as an interesting future work, as in [1], [25]. [Footnote 3: In this paper, we assume that an AWGN interfering source (IS) interferes at the BS and all users on each subcarrier. We consider a single cell with a BS; in a scenario with many cells and no coordination between BSs, the inter-cell interference distribution converges to a Gaussian and can be integrated into the interference of other cells, which can be modeled by the IS [22].] [Footnote 4: Defined by the European Telecommunications Standards Institute (ETSI) as the composition of Network Function(s) and/or Network Service(s), defined by its functional and behavioral specification [23].]
We consider $S$ communication service (CS)$^6$ types whose set is $\mathcal{S} = \{1, 2, \dots, S\}$ and $M$ NFs whose set is $\mathcal{F} = \{f_m \mid m = 1, \dots, M\}$. The considered parameters of the paper are stated in the notation table. It is worth noting that some NFs have an association with and precedence over some others; for instance, the NF decryption is performed after encryption. We consider a set of VMs denoted by $\mathcal{N} = \{1, \dots, N\}$ in the network, each of which has computing and storage resources. We assume that each server can process at most one function at a time [1]. To improve energy efficiency (EE) in our proposed system, we introduce a new variable $\eta_n$ to determine the active nodes, which is defined as
$$\eta_n = \begin{cases} 1, & \text{Node } n \text{ is active}, \\ 0, & \text{Otherwise}. \end{cases}$$
[Footnote 5: Standardized by the ETSI organization for 5G and beyond in telecommunications [24].] [Footnote 6: In this paper, the NS and CS are paired together. That means each CS $s$ has an NS with corresponding NFs that is denoted by the set $\Omega_s$. Note that CS is defined by the 3rd generation partnership project (3GPP) technical specification 28.530 [26].]
We consider a generalized model for resource sharing of VMs that is introduced in [1]. Therefore, we introduce a binary variable $\beta_{u,n}^{f_m^s}$ which denotes that NF $f_m^s$ for user $u$ in NS $s$ is executed at node $n$, and is defined as
$$\beta_{u,n}^{f_m^s} = \begin{cases} 1, & \text{NF } f_m^s \text{ for user } u \text{ in NS } s \text{ is executed at server } n, \\ 0, & \text{Otherwise}. \end{cases}$$
When $\beta_{u,n}^{f_m^s}$ is set to 1, i.e., $f_m^s$ in the requested NS $s$ of user $u$ is mapped onto server $n$, this server should be active, i.e., $\eta_n = 1$. Therefore, we have the following constraint:
$$\beta_{u,n}^{f_m^s} \le \eta_n, \quad \forall n \in \mathcal{N}, \forall u \in \mathcal{U}, \forall f_m^s \in \Omega_s, \forall s \in \mathcal{S}.$$
Each NF of each NS is performed completely at only one server at a time. Therefore, we have
$$\sum_{n \in \mathcal{N}} \beta_{u,n}^{f_m^s} \le 1, \quad \forall u \in \mathcal{U}, f_m^s \in \Omega_s, s \in \mathcal{S}. \qquad (4)$$
Moreover, we assume that each NF needs a specific number of CPU cycles per bit, i.e., $\alpha^{f_m^s}$, to run on a mapped server. From the physical resources perspective, we assume that each server can provide at most $L_n$ CPU cycles per unit time, and hence we have the following constraint:
$$\sum_{u \in \mathcal{U}} \sum_{s \in \mathcal{S}} \sum_{f_m^s \in \Omega_s} y_u \alpha^{f_m^s} \beta_{u,n}^{f_m^s} \le L_n, \quad \forall n \in \mathcal{N}, \qquad (5)$$
where $y_u$ is the packet size of the service of user $u$ and is here assumed to be equal to the number of bits generated in a time unit, i.e., $R_s^{\min}$. Hence, the processing time of each function $f_m^s$ for each bit on server $n \in \mathcal{N}$, i.e., $\tilde{\tau}_n^{f_m^s}$, is as follows:
$$\tilde{\tau}_n^{f_m^s} = \frac{\alpha^{f_m^s}}{L_n}, \quad \forall n \in \mathcal{N}, f_m^s \in \Omega_s. \qquad (6)$$
Therefore, the total processing latency of each packet with packet size $y_u$ is obtained as
$$\tau_n^{f_m^s} = \tilde{\tau}_n^{f_m^s} y_u, \quad \forall n \in \mathcal{N}, f_m^s \in \Omega_s. \qquad (7)$$
Additionally, we assume that each NF needs a specific buffer size, i.e., $\psi^{f_m^s}$, when it is running on a server. Hence, from the storage resource perspective, we consider that each server has a limited buffer size, i.e., $\Upsilon_n$, which leads to the following constraint:
$$\sum_{u \in \mathcal{U}} \sum_{s \in \mathcal{S}} \sum_{\forall f_m^s \in \Omega_s} (\psi^{f_m^s} + y_u) \beta_{u,n}^{f_m^s} \le \Upsilon_n, \quad \forall n \in \mathcal{N}. \qquad (8)$$
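To illustrate equations (5)-(8) numerically (a toy check with invented numbers, not the paper's parameter values), the per-packet processing latencies and the CPU/buffer load of one server can be evaluated as follows.

```python
# Toy check of the processing-latency and capacity expressions (numbers are illustrative).
L_n = 3000.0            # CPU cycles per unit time offered by server n
alpha = [20.0, 40.0]    # CPU cycles per bit required by two NFs
y_u = 10.0              # packet size (bits) of the user's service
psi = [5.0, 5.0]        # buffer demand of each NF
Upsilon_n = 100.0       # buffer capacity of server n

tau_per_bit = [a / L_n for a in alpha]          # eq. (6)
tau_packet = [t * y_u for t in tau_per_bit]     # eq. (7)
cpu_load = sum(y_u * a for a in alpha)          # left-hand side of eq. (5)
buffer_load = sum(p + y_u for p in psi)         # left-hand side of eq. (8)

print(tau_packet, cpu_load <= L_n, buffer_load <= Upsilon_n)
```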
C. Latency Model
In NFV RA, our main aim is to guarantee the service requirements, including the maximum tolerable latency of each packet of size $y_u$ of the requested services, while minimizing the consumption of servers. The total latency that we consider in our system model results from executing the NFs and from the queuing (waiting) time. In the following, we calculate the total latency resulting from scheduling.
Remark 1. In this paper, our main aim is to model and investigate the effect of the processing and scheduling latency on the service acceptance and the network cost. Hence, we do not consider other latency factors, such as propagation and transmission latencies, in our model. In fact, our proposed scenario is focused on intra data centers and is not appropriate for nation-wide networks. It is worth noting that the aforementioned latency comes from the large distance between the source and the application servers. Therefore, these concerns can be addressed by exploiting mobile edge computing (MEC) technology to bring the application servers close to the clients. In this regard, we will generalize this work to MEC-enabled networks in future work.
1) Scheduling and Chaining: Each NF should wait until its preceding function is processed before its own processing can commence. The processing of NS $s$ ends when its last function is processed. Therefore, the total processing time is the summation of the processing times of the NFs at the various servers. For scheduling each NF on a server, we need to determine its start time. Therefore, we define $t_{u,n}^{f_m^s}$ as the start time of NF $f_m^s$ of user $u$ on server $n$, and the binary ordering indicator $x_{u,u'}^{f_m^s, f_{m'}^{s'}}$, whose value is 1 if NF $f_m^s$ of user $u$ is running after NF $f_{m'}^{s'}$ of user $u'$, and 0 otherwise. With these definitions, the starting time of each NF can be obtained as follows:
$$t_{u,n}^{f_m^s} \beta_{u,n}^{f_m^s} \ge \max\Big\{ \max_{\forall f_{m'}^{s'} \in \Omega_{s'},\, u' \in \mathcal{U}} x_{u,u'}^{f_m^s, f_{m'}^{s'}} \beta_{u',n}^{f_{m'}^{s'}} \big( t_{u',n}^{f_{m'}^{s'}} + \tau_n^{f_{m'}^{s'}} \big),\; \max_{\forall f_{m'}^{s} \in \{\Omega_s - f_m^s\},\, n' \in \{\mathcal{N} - n\}} x_{u,u}^{f_m^s, f_{m'}^{s}} \beta_{u,n'}^{f_{m'}^{s}} \big( t_{u,n'}^{f_{m'}^{s}} + \tau_{n'}^{f_{m'}^{s}} \big) \Big\}, \quad \forall f_m^s \in \Omega_s, f_{m'}^{s'} \in \Omega_{s'}, \forall s, s' \in \mathcal{S}, \forall n \in \mathcal{N}, \forall u \in \mathcal{U}. \qquad (9)$$
To clarify this further, we illustrate the proposed scheduling policy in Fig. 2. The total service chain latency of each user $u$ for the requested service is then inferred as follows:
$$D_u^{\mathrm{Total}} = \max_{\forall n \in \mathcal{N}, f_m^s \in \Omega_s, s \in \mathcal{S}} \big( t_{u,n}^{f_m^s} \beta_{u,n}^{f_m^s} + \tau_n^{f_m^s} \beta_{u,n}^{f_m^s} \big), \quad \forall u \in \mathcal{U}. \qquad (10)$$
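The start-time recursion in (9)-(10) is essentially list scheduling: an NF starts only after its predecessor in the chain and after the previous job on its server have finished. Below is a simplified sketch (not the paper's algorithm) that ignores the binary ordering variables and assumes the per-NF processing times and server assignments are already fixed.

```python
def chain_latency(chains, proc_time):
    """chains: list of NF chains, each a list of (nf_id, server_id), served in priority order;
    proc_time[(nf_id, server_id)]: packet processing time of that NF on that server.
    Returns the completion time (total latency) of each chain."""
    server_free = {}            # time at which each server becomes free
    completion = []
    for chain in chains:
        prev_finish = 0.0
        for nf, srv in chain:
            start = max(prev_finish, server_free.get(srv, 0.0))
            prev_finish = start + proc_time[(nf, srv)]
            server_free[srv] = prev_finish
        completion.append(prev_finish)
    return completion

# Toy example: two chains sharing server 0.
proc = {("f1", 0): 0.1, ("f2", 0): 0.1, ("g1", 0): 0.05, ("g2", 1): 0.2}
print(chain_latency([[("f1", 0), ("f2", 0)], [("g1", 0), ("g2", 1)]], proc))
```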
D. Cost Model: Objective Function
Our aim in this paper is to minimize the total cost of the network. In this regard, we define cost Ψ as the total amount of radio and NFV resources that are utilized in the network to provide services. In particular, the cost function is given as follows:
$$\Psi = \mu \sum_{u \in \mathcal{U}, k \in \mathcal{K}} p_u^k \rho_u^k + \nu \sum_{n \in \mathcal{N}} \eta_n, \qquad (11)$$
where µ and ν are constants for scaling and balancing the costs of different resource types.
[Fig. 2. Schematic illustration of the scheduling policy and of formulation (9) for example requests, considering 5 servers, 2 users, and 6 NFs.]
E. Problem Formulation
Based on these definitions, our aim is to solve the following optimization problem:
$$\min_{\mathbf{P}, \boldsymbol{\rho}, \mathbf{T}, \mathbf{X}, \boldsymbol{\beta}, \boldsymbol{\eta}} \ \Psi(\mathbf{P}, \boldsymbol{\rho}, \boldsymbol{\eta}) \qquad (12a)$$
S.T:
$R_u \ge R_u^{\min}, \ \forall u \in \mathcal{U}$, (12b)
$\sum_{u \in \mathcal{U}} \rho_u^k \le 1, \ \forall k \in \mathcal{K}$, (12c)
$\sum_{k \in \mathcal{K}} \sum_{u \in \mathcal{U}} \rho_u^k p_u^k \le P_{\max}$, (12d)
$\sum_{u \in \mathcal{U}} \sum_{s \in \mathcal{S}} \sum_{f_m^s \in \Omega_s} y_u \alpha^{f_m^s} \beta_{u,n}^{f_m^s} \le L_n, \ \forall n \in \mathcal{N}$, (12e)
$\sum_{u \in \mathcal{U}} \sum_{s \in \mathcal{S}} \big( \sum_{\forall f_m^s \in \Omega_s} \psi^{f_m^s} \beta_{u,n}^{f_m^s} + y_u \sum_{\forall f_m^s \in \Omega_s} \beta_{u,n}^{f_m^s} \big) \le \Upsilon_n, \ \forall n \in \mathcal{N}$, (12f)
the scheduling constraint (9), for all $f_m^s \in \Omega_s$, $f_{m'}^{s'} \in \Omega_{s'}$, $s, s' \in \mathcal{S}$, $n \in \mathcal{N}$, $u \in \mathcal{U}$, (12g)
$D_u^{\mathrm{Total}} \le D_s^{\max}, \ \forall u \in \mathcal{U}$, (12h)
$0 \le p_u^k, \ \forall u \in \mathcal{U}, k \in \mathcal{K}$, (12i)
$\beta_{u,n}^{f_m^s} \le \eta_n, \ \forall n \in \mathcal{N}, \forall u \in \mathcal{U}, f_m^s \in \Omega_s$, (12j)
$\sum_{n \in \mathcal{N}} \beta_{u,n}^{f_m^s} \le 1, \ \forall u \in \mathcal{U}, f_m^s \in \Omega_s, s \in \mathcal{S}$, (12k)
$\rho_u^k \in \{0, 1\}, \ \forall u \in \mathcal{U}, k \in \mathcal{K}$, (12l)
$\beta_{u,n}^{f_m^s} \in \{0, 1\}, \ \forall u \in \mathcal{U}, \forall f_m^s \in \Omega_s, \forall s \in \mathcal{S}$, (12m)
$x_{u,u'}^{f_m^s, f_{m'}^{s'}} \in \{0, 1\}, \ \forall u, u' \in \mathcal{U}, u \ne u', \forall f_m^s, \forall f_{m'}^{s'} \in \Omega_{s'}$, (12n)
$\eta_n \in \{0, 1\}, \ \forall n \in \mathcal{N}$, (12o)
where $\boldsymbol{\rho} = [\rho_u^k]$ and $\boldsymbol{\beta} = [\beta_{u,n}^{f_m^s}]$ collect the corresponding optimization variables.
III. SOLUTION OF THE PROPOSED PROBLEMS
Optimization problem (12) is non-convex and includes both mixed binary and continuous variables with non-linear and non-convex constraints. Hence, it is NP-hard and mathematically intractable, and obtaining an optimal solution is not trivial and leads to high computational complexity and algorithm run time. Therefore, we cannot apply common convex optimization methods to solve it.
Without considering NFV RA, the radio RA problem, taken separately over the power and subcarrier allocation variables, is a convex optimization problem, and hence each of them can be solved efficiently, while NFV RA is a non-linear mixed integer program with a large number of variables, i.e., $\mathbf{T}, \mathbf{X}, \boldsymbol{\eta}, \boldsymbol{\beta}$. This motivates us to develop a new low complexity heuristic algorithm to solve the NFV RA sub-problem, which is stated in detail in Algorithm 2. We also investigate our proposed algorithm from different aspects and compare it with other methods.
To solve optimization problem (12) in an efficient manner, we utilize the alternate search method (ASM). To use ASM, we need to calculate initial values of the optimization variables which satisfy the corresponding constraints of (12). Since optimization problem (12) may be infeasible, we propose a novel elasticizing approach by introducing a new elastic variable. Based on this method, the constraints that would make the optimization problem infeasible are changed as follows. Assume that we have the constraint $g(\mathbf{y}) \le 0$, where $\mathbf{y} \in \mathbb{R}^n$ is the optimization variable. We elasticize it as $g(\mathbf{y}) \le A$, where $A \ge 0$ is an additional optimization variable. By applying this method, we solve the following optimization problem:
$$\min_{\mathbf{P}, \boldsymbol{\rho}, \mathbf{T}, \mathbf{X}, \boldsymbol{\eta}, \boldsymbol{\beta}, A} \ \Psi(\mathbf{P}, \boldsymbol{\rho}, \boldsymbol{\eta}) + W A \qquad (13a)$$
S.T:
$R_u^{\min} - R_u \le A, \ \forall u \in \mathcal{U}$, (13b)
$\sum_{u \in \mathcal{U}} \sum_{s \in \mathcal{S}} \big( \sum_{\forall f_m^s \in \Omega_s} \psi^{f_m^s} \beta_{u,n}^{f_m^s} + y_u \sum_{\forall f_m^s \in \Omega_s} \beta_{u,n}^{f_m^s} \big) - \Upsilon_n \le A, \ \forall n \in \mathcal{N}$, (13c)
$\sum_{u \in \mathcal{U}} \sum_{s \in \mathcal{S}} \sum_{f_m^s \in \Omega_s} y_u \alpha^{f_m^s} \beta_{u,n}^{f_m^s} - L_n \le A, \ \forall n \in \mathcal{N}$, (13d)
$D_u^{\mathrm{Total}} - D_s^{\max} \le A, \ \forall u \in \mathcal{U}$, (13e)
$A \ge 0$, (13f)
(12c), (12d), (12j), (12k), (12g), (12i)-(12o),
where $A$ is the elastic optimization variable and $W$ is a large number, i.e., $W \gg 1$. Note that since $A$ can take any non-negative value, optimization problem (13) is feasible. By solving optimization problem (13), the infeasibility of the main optimization problem (12) is determined: if the elastic variable $A$ is positive, problem (12) is infeasible. To overcome the infeasibility of problem (12), we introduce a new AC method that rejects some services, providing room for the remaining ones. The block diagram illustrating the main steps of the proposed method to solve optimization problem (12), which is based on solving optimization problem (13) and AC, is shown in Fig. 3.
Proposition 1. Problem (13) is equivalent to problem (12) if we have $A = 0$. That means problem (12) is feasible and, based on the proposed AC method, all the requested services are accepted.
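A minimal sketch of the elasticization idea on a generic convex toy problem (using CVXPY purely for illustration; the paper uses CVX/MOSEK in MATLAB): the hard constraint is relaxed to allow a slack $A \ge 0$ that is heavily penalized, so a strictly positive optimal $A$ signals that the unrelaxed problem is infeasible. The toy objective and constraints below are invented.

```python
import cvxpy as cp

y = cp.Variable(2, nonneg=True)
A = cp.Variable(nonneg=True)          # elastic variable
W = 1e4                                # large penalty weight (W >> 1)

# Toy instance: the two constraints below are mutually infeasible without A.
constraints = [cp.sum(y) >= 5, cp.sum(y) <= 3 + A]
prob = cp.Problem(cp.Minimize(cp.sum_squares(y) + W * A), constraints)
prob.solve()
print(A.value)   # A > 0 here, flagging that the unrelaxed problem is infeasible
```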
The elasticated problem (13) is also non-convex and NP-hard. In this regard, we solve it by dividing it into three sub-problems by utilizing ASM. The first sub-problem is power allocation and elastication, the second one is subcarrier allocation, and the last one is NFV RA. In fact, the first and second sub-problems are the radio RA sub-problem and it is stated in Section III-A. In the NFV RA sub-problem, all the optimization variables are integer and the problem formulation and solution are presented in Section III-B. More details of the proposed iterative solution of optimization problem (13) is stated in Algorithm 6. In the next subsection, we explain the solution of the aforementioned sub-problems.
A. Radio RA
The radio RA problem is divided into two sub-problems as follows.
1) Power Allocation and Elasticated Subproblem:
The power allocation and elasticated sub-problem is presented as follows:
$$\min_{\mathbf{P}, A} \ \sum_{u \in \mathcal{U}} \sum_{k \in \mathcal{K}} \rho_u^k p_u^k + W A. \qquad (14a)$$
2) Subcarrier Allocation Subproblem: The subcarrier allocation sub-problem is as follows:
$$\min_{\boldsymbol{\rho}} \ \sum_{u \in \mathcal{U}} \sum_{k \in \mathcal{K}} \rho_u^k p_u^k, \qquad (15a)$$
S.T: (13b), (12c), (12d), (12l).
We solve sub-problem (15) by using MOSEK in MATLAB toolbox [28].
B. NFV RA
The NFV RA sub-problem is as follows:
$$\min_{\mathbf{T}, \mathbf{X}, \boldsymbol{\eta}, \boldsymbol{\beta}} \ \sum_{n \in \mathcal{N}} \eta_n \qquad (16a)$$
S.T: (13c)-(13e), (12g), (12j)-(12o). (16b)
To solve problem (16), we propose a new heuristic algorithm in which the functions are mapped and scheduled on the servers which have the minimum processing latency. Moreover, our proposed algorithm is based on minimizing the number of active servers. To this end, we sort the servers in ascending order by the total processing latency metric. After that, the server with the best rank, i.e., the highest available capacity in the sorted list, is turned on. Then, we activate another server only if the previously activated servers cannot satisfy the resource demands of the NFs or the QoS of the users is degraded. Based on the algorithm, we sort the users in ascending order according to their latency requirements and then start to map and schedule each of their NFs on the servers. The details of the proposed NFV RA are stated in Algorithm 2, whose outputs are $\mathbf{T}$, $\mathbf{X}$, $\boldsymbol{\eta}$, and $\boldsymbol{\beta}$.
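A highly simplified sketch of the flavor of Algorithm 2 (not the algorithm itself): users are sorted by latency requirement, servers are activated one by one, and a new server is opened only when the already-active ones cannot meet a user's deadline. The feasibility check here uses only processing/waiting time and ignores the buffer constraints and the radio side; all names and the data layout are invented for illustration.

```python
def heuristic_embed(users, servers):
    """users: list of dicts {"deadline": float, "demands": [cpu_cycles_per_packet, ...]};
       servers: list of CPU capacities (cycles per unit time), pre-sorted descending.
       Returns (placements, number of active servers); a user is skipped (rejected)
       if even opening all servers cannot meet its deadline."""
    active = 1                              # start with the best-ranked server switched on
    free_at = [0.0] * len(servers)          # time at which each server becomes free
    placements = {}
    for uid, u in sorted(enumerate(users), key=lambda x: x[1]["deadline"]):
        while True:
            # tentatively place the whole chain on the active servers, NF by NF
            finish, plan, t = 0.0, [], list(free_at[:active])
            for d in u["demands"]:
                n = min(range(active), key=lambda i: max(t[i], finish) + d / servers[i])
                start = max(t[n], finish)
                finish = start + d / servers[n]
                t[n] = finish
                plan.append(n)
            if finish <= u["deadline"]:
                placements[uid] = plan
                free_at[:active] = t
                break
            if active < len(servers):
                active += 1                 # open one more server and retry
            else:
                break                        # reject this user
    return placements, active

# Example matching the worked case in Section V-C: both chains fit on one server.
print(heuristic_embed(
    [{"deadline": 0.3, "demands": [200, 400]}, {"deadline": 0.7, "demands": [200, 400]}],
    sorted([1000, 2000, 1500, 3000, 1800], reverse=True)))
```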
C. Admission Control
Our proposed AC is based on the value of the elastic variable of problem (13): if $A$ is non-zero, the original problem (12) is infeasible. This means that one or more of the elasticated constraints, i.e., (13b)-(13e), are not satisfied. To ensure these constraints, we can either increase the network resources (e.g., the servers' capacities) or reject some of the users' service requests. Since the first option is not practical in most cases, we propose to reject some requested services by adopting the proposed AC. Behind AC, one of the major questions is which of the requested services should be rejected. The requested services have diverse characteristics and different effects on the utilization of the network resources, and consequently on the infeasibility of problem (12). To find the user which has the largest effect on the infeasibility and reject its service, we proceed as follows:
$$u^* = \arg\max_{u \in \mathcal{U}} \Big[ \kappa_1 \big( R_s^{\min} - R_u \big) + \kappa_2 \sum_{n \in \mathcal{N}} \Big( \sum_{s \in \mathcal{S}} \sum_{\forall f_m^s \in \Omega_s} \psi^{f_m^s} \beta_{u,n}^{f_m^s} + y_u \sum_{\forall f_m^s \in \Omega_s} \beta_{u,n}^{f_m^s} - \Upsilon_n \Big) + \kappa_3 \sum_{n \in \mathcal{N}} \Big( \sum_{s \in \mathcal{S}} \sum_{f_m^s \in \Omega_s} y_u \alpha^{f_m^s} \beta_{u,n}^{f_m^s} - L_n \Big) \Big], \qquad (17)$$
where $\kappa_1$, $\kappa_2$, and $\kappa_3$ are fitting parameters to balance the terms of $u^*$, and we emphasize that in (17) we use the values of the optimization variables of (13) obtained by Algorithm 6. Based on this, we reject user $u^*$ and then solve problem (13) with $\mathcal{U} = \mathcal{U} - \{u^*\}$. We repeat this procedure until we have $A = 0$ in the solution of problem (13).
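A sketch of the selection rule in (17): compute a weighted sum of each user's rate gap and of the capacity overshoots attributable to it, then reject the user with the largest score. The violation terms are taken as precomputed inputs here, and the function names and toy numbers are invented; this is only the argmax step, not the full AC loop.

```python
def pick_user_to_reject(rate_gap, buffer_excess, cpu_excess, k1=1.0, k2=1.0, k3=1.0):
    """rate_gap[u]      = R_min - R_u (positive => rate requirement not met)
       buffer_excess[u] = buffer overshoot over the servers attributable to user u
       cpu_excess[u]    = CPU overshoot over the servers attributable to user u
       Returns the index u* with the largest weighted violation, as in (17)."""
    scores = {u: k1 * rate_gap[u] + k2 * buffer_excess[u] + k3 * cpu_excess[u]
              for u in rate_gap}
    return max(scores, key=scores.get)

# Toy numbers: user 2 is the most "expensive" request and would be rejected first.
print(pick_user_to_reject({0: 0.5, 1: 0.1, 2: 2.0},
                          {0: 0.0, 1: 0.0, 2: 10.0},
                          {0: 0.0, 1: 5.0, 2: 30.0}))
```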
IV. CONVERGENCE AND COMPUTATIONAL COMPLEXITY
A. Convergence of the Solution Algorithm
Based on ASM, after each iteration the objective function of each sub-problem is improved and finally converges. Fig. 4 shows an example of the convergence of our proposed iterative algorithm. Clearly, our proposed solution converges after a few iterations. We have the following relations between iterations ($z$ is the iteration number):
$$\mathcal{O}\big(\mathbf{P}[z], \boldsymbol{\rho}[z], \boldsymbol{\eta}[z], A[z]\big) = \min_{\mathbf{P}, A} \mathcal{O}\big(\mathbf{P}, \boldsymbol{\rho}[z], \boldsymbol{\eta}[z], A\big) \le \mathcal{O}\big(\mathbf{P}[z-1], \boldsymbol{\rho}[z], \boldsymbol{\eta}[z], A[z-1]\big) = \min_{\boldsymbol{\rho}} \mathcal{O}\big(\mathbf{P}[z-1], \boldsymbol{\rho}, \boldsymbol{\eta}[z], A[z-1]\big) \le \mathcal{O}\big(\mathbf{P}[z-1], \boldsymbol{\rho}[z-1], \boldsymbol{\eta}[z], A[z-1]\big) = \min_{\boldsymbol{\eta}} \mathcal{O}\big(\mathbf{P}[z-1], \boldsymbol{\rho}[z-1], \boldsymbol{\eta}, A[z-1]\big) \le \mathcal{O}\big(\mathbf{P}[z-1], \boldsymbol{\rho}[z-1], \boldsymbol{\eta}[z-1], A[z-1]\big).$$
This means that the objective function of ASM decreases as the iteration number increases. In addition, while ensuring the QoS and resource demand constraints, i.e., (13b)-(13e), the ASM algorithm converges to a sub-optimal solution which corresponds to a sub-optimal solution of problem (13). Based on equation (7) and the server selection policy of Algorithm 2, $R_u$ is directly proportional to $\boldsymbol{\eta}$. Hence, if the value of $R_u$ is fixed or reduced at each iteration $z$, i.e., if $R_u^{(z)} \le R_u^{(z-1)}$, then we have $\boldsymbol{\eta}^{(z)} \le \boldsymbol{\eta}^{(z-1)}$. As a result, the proposed algorithm is monotonic.
[Table II. Computational complexity of the solution methods: the proposed heuristic, $O(U^2 \times F \times N)$; the greedy-based solution, $O(U^2 \times F \times N)$; power allocation (CVX), $\log(C_1/\xi)/\log(\varsigma)$; subcarrier allocation (CVX-MOSEK), $\log(C_2/\xi)/\log(\varsigma)$.]
B. Computational Complexity
By utilizing ASM, the overall complexity of the algorithm is a linear combination of the complexity of each sub-problem.
1) Radio RA: For the radio RA sub-problem, we utilize geometric programming (GP) and IPM via the CVX toolbox in MATLAB [27]. Based on this method, the computational complexity order of the power allocation sub-problem is given by $\frac{\log(C_1/\xi)}{\log(\varsigma)}$, where $C_1 = U + N + N \times U + U + 1$ is the total number of constraints of sub-problem (14), $\xi$ is the initial point for approximating the accuracy of IPM, $0 < \epsilon \ll 1$ is the stopping criterion for IPM, and $\varsigma$ is the accuracy of IPM [27]. Similarly, the complexity of sub-problem (15) is given by $\frac{\log(C_2/\xi)}{\log(\varsigma)}$, where $C_2 = U + K + 1$ is the total number of constraints of (15). The complexity results are summarized in Table II.
V. EXPERIMENTAL EVALUATION
A. Simulation Environment
In this section, the simulation results are presented to evaluate the performance of the proposed system model. We consider $U = 50$ users which are randomly distributed in the coverage area of a single BS.
B. Simulation Results
The investigation of the proposed algorithm under different network settings and parameters starts with the simulation results shown in Figures 5(a)-7(b). These results are obtained using the Monte-Carlo method with 500 iterations. We discuss these results in the following.
1) Acceptance Ratio:
The acceptance ratio is defined as the ratio of the number of services accepted by the network to the total number of services requested by the users, and is obtained by $\kappa = 1 - \hat{U}/U$, where $\hat{U}$ is the number of users whose services are rejected based on the proposed AC. It is a criterion to investigate the efficiency of the proposed algorithm in utilizing the total network resources to guarantee the requested QoS and accept the service demands.
As can be seen from Figures 5(a), 5(b), and 5(c), the value of the acceptance ratio depends on two main factors: i) the network resource capacities; and ii) the number of users (service demands) and the service QoS characteristics (latency and data rate). Therefore, we face challenges in supporting high data rate and low latency services. Clearly, by increasing the number of users (service requests) the acceptance ratio decreases, especially for low latency services, which have the main contribution to the acceptance ratio. We observe that increasing the number of low latency services leads to a reduced acceptance ratio. For a large number of users, the network guarantees the service requirements of some users and the other users are rejected. In these cases, based on the equation for $\kappa$, the value of $\hat{U}$ increases and $U - \hat{U}$ approximately reaches a fixed value. Therefore, the latency and buffering requirements are satisfied and the acceptance ratio of services is improved. From this figure, we conclude that the impact of the number of active servers on high data rate and low latency services, e.g., process automation [29], is larger than that on other services. Furthermore, comparing Fig. 5(c) and Fig. 5(a), we find that the effect of the server processing capacity is more considerable than that of the number of active servers for low latency services. That means low latency services are rejected by the network because their requirements need more resources in the network to reduce the waiting and processing times.
2) Network Cost: Fig. 6(a) illustrates the network cost versus the number of users for $R_s^{\min} = 10$ bps/Hz and a service deadline of 2 seconds. The network cost is comprised of both the radio and NFV resource costs, in terms of power and spectrum consumption and of utilizing servers in the network. It can be observed that by increasing the number of users the network cost increases, due to the increase in both the radio and NFV costs. It is clear that as the number of users increases, the NFV cost increases more rapidly than the radio cost. The utilization ratio is defined as $r_U / r_T$, where $r_U$ is the amount of resources utilized by the users and $r_T$ is the total amount of server resources. From this figure, we infer that not only the packet size has a direct effect on the utilization ratio, but the service deadline also has a major impact on it. This is because a large packet size needs more storage and processing capacity, and a low service deadline requires minimal waiting and processing times. Therefore, we should activate more servers and exploit their resources for low latency services. Obviously, increasing the number of users increases the utilization ratio approximately linearly. From the cost perspective, we can conclude that by increasing the utilization of network resources, the network cost also increases, especially in terms of power consumption.
3) Service Deadline: Figures 6(b) and 7(b) show the total cost of the network versus different values of the service deadline for various scenarios. Clearly, the requested service deadline has a major effect on the utilization of the processing and buffering resources of the servers. From Fig. 7(b), we observe that for tighter service deadlines more servers must be activated to process the NFVs of the corresponding services. That means that for providing low latency services, we should pay more cost in terms of radio and NFV resources. By increasing the number of servers, the waiting time of each NF of an NS that is in the queue is minimized, and hence the server availability and the probability of guaranteeing the QoS of the users are increased. For higher latency values, in some cases one (or two) active server(s) is sufficient. By comparing Fig. 6(b) and Fig. 6(a), we find that when the value of the latency is reduced, the network cost increases significantly compared to the case where the number of users (the number of requested services) increases.
C. Benchmark Algorithms
1) Performance Comparison: To the best of our knowledge, this is the first work (see the related works) tackling E2E RA while proposing a new AC and a new closed-form formulation of NFV scheduling which comprises both the waiting time and the SFC ordering. Moreover, we propose a new heuristic algorithm to solve the formulated optimization problem. It is worth noting that in the related works a greedy-based search is exploited [1], [8], [30]. We compare our proposed algorithm with a modified version of the greedy-based algorithm proposed in [1]. In a greedy-based search, different objectives can be considered, for example, minimizing the total flow time [1]. Greedy-based scheduling and embedding of the arriving service requests are performed sequentially based on the greedy criterion. Based on the modified greedy algorithm used to solve sub-problem (16), we first search for the servers that are appropriate for embedding and then find the best server by the greedy criterion, i.e., the shortest server queue [1]. The steps of the greedy-based algorithm with the minimum queue time criterion are stated in Algorithm 3.
In some cases, the greedy algorithm adds servers that are free and have more capacity, while it would be possible to satisfy the latency of the other functions without utilizing this server. In contrast, the proposed algorithm activates a server only when the previously added (activated) servers cannot satisfy the constraints of the problem and the users' QoS. More importantly, in the greedy algorithm the number of active servers remains fixed as the values of the service deadlines increase, which is a consequence of its server selection policy based on the queuing time, while in the proposed algorithm it is reduced.
To clarify this further, assume that we have five servers in the network with capacities [1000, 2000, 1500, 3000, 1800] and two service requests, each with 2 functions with capacity requirements 20 and 40, $R_s^{\min} = 10$ bps/Hz, and service deadlines 0.3 and 0.7, respectively. Based on Algorithm 2, the service finishing time of user 1 is $\frac{20 \times 10}{3000} + \frac{40 \times 10}{3000} = 0.2 < 0.3$ and the service finishing time of user 2 is $0.2 + \frac{20 \times 10}{3000} + \frac{40 \times 10}{3000} = 0.4 < 0.7$. That means one active server is sufficient for all users. In contrast, based on the greedy algorithm, the finishing service time of user 1 is $\frac{20 \times 10}{3000} + \frac{40 \times 10}{3000} = 0.2 < 0.3$ and that of user 2 is $\frac{20 \times 10}{2000} + \frac{40 \times 10}{2000} = 0.3 < 0.7$; since $0.3 < 0.4$, the greedy algorithm selects the server with capacity 2000 instead of the server with capacity 3000. As a result, the greedy algorithm utilizes two servers, while the proposed algorithm utilizes only one server in both cases. Clearly, the greedy algorithm utilizes the servers inefficiently, and hence the acceptance ratio is decreased, especially for a large number of users (see Fig. 8(a)).
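The arithmetic of this example can be checked directly with a toy script (capacities are CPU cycles per unit time, and the demands are the α·y_u cycles per packet):

```python
caps = [1000, 2000, 1500, 3000, 1800]
demands = [20 * 10, 40 * 10]            # two NFs, packet size y_u = 10

# Proposed policy: keep both users on the fastest active server (capacity 3000).
t1 = sum(d / 3000 for d in demands)              # user 1 finishes at 0.2  (< 0.3)
t2 = t1 + sum(d / 3000 for d in demands)         # user 2 finishes at 0.4  (< 0.7)

# Greedy (shortest-queue) policy: user 2 is pushed to an idle server (capacity 2000).
t2_greedy = sum(d / 2000 for d in demands)       # user 2 finishes at 0.3  (< 0.7)

print(t1, t2, t2_greedy)   # ~0.2, ~0.4, ~0.3 -> greedy uses two servers, proposed uses one
```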
2) Optimality Gap: Another metric for the investigation of the performance of the proposed algorithm is the optimality gap. In this regard, we adopt the exhaustive search method [31]. Since the complexity of the exhaustive search method is very high and grows exponentially with the size of the system parameters, we exploit it only for a small-scale network. The considered parameters and the values obtained by the corresponding solution methods are stated in Table V. The other parameters are based on Tables IV and III.
[Fig. 8(b). Number of active servers versus the service deadline, for the proposed and greedy-based algorithms with $R_u^{\min} \in \{5, 10\}$ and $U \in \{15, 30\}$; some of the curves coincide.]
VI. CONCLUSION
In this paper, we proposed a novel joint radio and NFV RA framework for heterogeneous services. Our aim was to minimize the utilization of the radio resources and servers. Therefore, we proposed a novel scheduling and energy efficient scheme, in the sense of minimizing the number of activated servers, based on a new heuristic algorithm. More importantly, our scheduling scheme includes queuing effects such as the queuing waiting time. To solve the proposed problem, we first divided it into three sub-problems and then efficiently solved each of them. To solve the NFV RA, we proposed a novel low complexity heuristic algorithm that is based on minimizing the number of active servers in the network. With this scheme, we significantly reduced the resource costs such as processing, buffering, and power consumption. Moreover, we proposed a novel AC scheme that determines which of the requested services have critical requirements and need more radio and NFV resources to ensure their QoS, and then rejects their services.
We evaluated the performance of the proposed scheme with different network parameters and variables such as the service demands, the service QoS, and the network resource capacities. Our simulations were carried out with different values of the network parameters and requested services, and with various metrics such as the service acceptance ratio, the number of active servers, and the predefined network cost. Moreover, to verify the performance of the proposed algorithm, we compared it with the conventional one from the performance perspective. Our simulation results demonstrate that the proposed algorithm outperforms the conventional one by approximately 8%. | 6,035 |
1907.06212 | 2960316954 | In this paper, we propose an end to end joint radio and virtual network function (VNF) resource allocation for next-generation networks providing different types of services with different requirements in term of latency and data rate. We consider both the access and core parts of the network and formulate a novel optimization problem whose aim is to perform the radio resource allocation jointly with VNF embedding, scheduling, and resource allocation such that the network cost, defined as the consumed energy and the number of utilized network servers, is minimized. The proposed optimization problem is non-convex, NP-hard, and mathematically intractable, and hence, we use an alternative search method (ASM) to decouple the main problem into some sub-problems of lower complexity. We propose a novel heuristic algorithm for embedding and scheduling of VNFs by proposing a novel admission control (AC) algorithm. We compare the performance of the proposed algorithm with a greedy-based solution in terms of the acceptance ratio and the number of active servers. Our simulation results show that the proposed algorithm outperforms the conventional ones. | A dynamic service function chain deployment is proposed in @cite_13 in which the authors consider a trade-off between resource consumption and operational overhead. In @cite_22 , NF placement in the network is studied. Moreover, its impact on network performance with the aim of minimizing the cost of having virtual machines (VMs) In this paper, VM, node, and server have the same meaning. and the cost of steering traffic into servers are investigated. Service function chain (SFC) placement in the cloud-based network with the aim of minimizing end-to-end (E2E) latency of SFCs and enhancing QoS is investigated in @cite_30 . An automated decentralized method for online placement and optimization of VMs in NFV-based network is proposed in 8501940 . In @cite_8 , VNF embedding with the aim of minimizing physical machine and taking into consideration users' SFC requests and factors such as basic resource consumption and time-varying workload is studied. | {
"abstract": [
"Network function virtualization (NFV) has been introduced by network service providers to overcome various challenges that hinder them from satisfying the growing demand for networking services with higher return-on-investment. The association of NFV with the leading technologies of information technology virtualization and software defined networking is paving the way for flexible and dynamic orchestration of the VNFs, but still, various challenges need to be addressed. The VNFs instantiation and placement problems on data center’s (DC) servers are key enablers to achieve the desired flexible and dynamic NFV applications. In this paper, we have addressed the VNF placement problem by providing a novel mixed integer linear programming (MILP) optimization model and a novel heuristic solution, Betweenness centrality Algorithm for Component Orchestration of NFV platform (BACON), for small- and large-scale DC networks. The proposed solution addresses the VNF placement while taking into consideration the carrier-grade nature of the NFV applications and at the same time, minimizing the intra- and end-to-end delays of the service function chain (SFC). Also, the proposed approach enhances the reliability and the quality of service (QoS) of the SFC by maximizing the count of the functional group members. To evaluate the performance of the proposed solution, this paper conducts a comparative analysis with an NFV-agnostic algorithm and a greedy-k-NFV approach, which is proposed in the literature work. Also, this paper defines the complexity and the order of magnitude of the MILP model and BACON. BACON outperforms the greedy algorithms especially the greedy-k-NFV solution and has a lower complexity, which is calculated as @math . The simulation results show that finding an optimized VNF placement can achieve minimal SFCs delays and enhance the QoS accordingly.",
"Network function virtualization (NFV) is a promising technology to decouple the network functions from dedicated hardware elements, leading to the significant cost reduction in network service provisioning. As more and more users are trying to access their services wherever and whenever, we expect the NFV-related service function chains (SFCs) to be dynamic and adaptive, i.e., they can be readjusted to adapt to the service requests’ dynamics for better user experience. In this paper, we study how to optimize SFC deployment and readjustment in the dynamic situation. Specifically, we try to jointly optimize the deployment of new users’ SFCs and the readjustment of in-service users’ SFCs while considering the trade-off between resource consumption and operational overhead. We first formulate an integer linear programming (ILP) model to solve the problem exactly. Then, to reduce the time complexity, we design a column generation (CG) model for the optimization. Simulation results show that the proposed CG-based algorithm can approximate the performance of the ILP and outperform an existing benchmark in terms of the profit from service provisioning.",
"Network Function Virtualization (NFV) is a new networking paradigm where network functions are executed on commodity servers located in small cloud nodes distributed across the network, and where software defined mechanisms are used to control the network flows. This paradigm is a major turning point in the evolution of networking, as it introduces high expectations for enhanced economical network services, as well as major technical challenges. In this paper, we address one of the main technical challenges in this domain: the actual placement of the virtual functions within the physical network. This placement has a critical impact on the performance of the network, as well as on its reliability and operation cost. We perform a thorough study of the NFV location problem, show that it introduces a new type of optimization problems, and provide near optimal approximation algorithms guaranteeing a placement with theoretically proven performance. The performance of the solution is evaluated with respect to two measures: the distance cost between the clients and the virtual functions by which they are served, as well as the setup costs of these functions. We provide bi-criteria solutions reaching constant approximation factors with respect to the overall performance, and adhering to the capacity constraints of the networking infrastructure by a constant factor as well. Finally, using extensive simulations, we show that the proposed algorithms perform well in many realistic scenarios.",
"Network function virtualization (NFV) brings great conveniences and benefits for the enterprises to outsource their network functions to the cloud datacenter. In this paper, we address the virtual network function (VNF) placement problem in cloud datacenter considering users’ service function chain requests (SFCRs). To optimize the resource utilization, we take two less-considered factors into consideration, which are the time-varying workloads, and the basic resource consumptions (BRCs) when instantiating VNFs in physical machines (PMs). Then the VNF placement problem is formulated as an integer linear programming (ILP) model with the aim of minimizing the number of used PMs. Afterwards, a Two-StAge heurisTic solution (T-SAT) is designed to solve the ILP. T-SAT consists of a correlation-based greedy algorithm for SFCR mapping (first stage) and a further adjustment algorithm for virtual network function requests (VNFRs) in each SFCR (second stage). Finally, we evaluate T-SAT with the artificial data we compose with Gaussian function and trace data derived from Google's datacenters. The simulation results demonstrate that the number of used PMs derived by T-SAT is near to the optimal results and much smaller than the benchmarks. Besides, it improves the network resource utilization significantly."
],
"cite_N": [
"@cite_30",
"@cite_13",
"@cite_22",
"@cite_8"
],
"mid": [
"2912731102",
"2623646697",
"1578960134",
"2792866491"
]
} | Energy Cost Minimization by Joint Radio and NFV Resource Allocation: E2E QoS Framework | B. Our Main Contributions
Obviously, in a real network and practical scenarios, QoS is the E2E concept and depends on radio access and core networks. In fact, guaranteeing QoS for different applications refers to ensure all of requirements, such as data rate by all parts of the network. These reasons, motivate us to propose a framework which radio and NFV RA is considered for E2E service provisioning where a new AC mechanism is devised for the service requests. The main contributions of this paper can be summarized as follows:
• In this paper, in order to guaranteeing E2E QoS by utilizing resources, efficiently, we propose a novel E2E QoS-aware framework by consideration of the radio and NFV RA that has not been considered in the literature.
• More importantly, we introduce a new approach for VNF scheduling that accounts for the network service latency. We introduce a new scheduling variable with which the latency of each VNF is obtained by calculating the processing and waiting times of all VNFs scheduled before it. This means that the completion time of each NS can be calculated as the sum of the waiting and processing times elapsed from the packet entering the network until it is received by the destination [21]. On the other hand, we consider a maximum tolerable latency for each packet of the different services, which should be ensured by the network. We propose an efficient, low-complexity algorithm based on minimizing the number of active VMs (servers) while guaranteeing the QoS requirements of the requested services.
• We formulate a new optimization problem for radio and NFV RA with the aim of minimizing a cost function defined over the radio and NFV resources. In the proposed optimization problem, subcarrier assignment, power allocation, VNF embedding, scheduling, ordering, and server utilization are the optimization variables. Our main aim is to minimize the network cost in terms of the transmit power and the number of active nodes while guaranteeing the service QoS metrics.
• To overcome the infeasibility issue in the solution of the proposed optimization problem, we propose a new elastication method and a novel AC method that rejects some users in order to guarantee the requested service requirements of the remaining ones. Based on the proposed AC, the user with the largest impact on the infeasibility of the optimization problem, i.e., the one that needs the most resources to guarantee its QoS, is found and its service is rejected.
• We prove the convergence of the proposed algorithm and analyze its computational complexity.
• We provide numerical results to evaluate the proposed problem and algorithm for different network configurations and against a greedy-based algorithm. Our simulation results show that the proposed algorithm outperforms the greedy-based one by approximately 8% at the same computational complexity.
C. Paper Organization
The rest of the paper is organized as follows. In Section II, the system model is explained. In Section II-E, the problem formulation is presented. The problem solution is presented in Section III. In Section IV, the computational complexity and convergence of the proposed algorithm are investigated. The simulation results are presented in Section V. Finally, in Section VI, the paper is concluded.
Notations: Vectors and matrices are indicated by bold lower-case and upper-case characters, respectively. $|\cdot|$ and $\|\cdot\|_p$ represent the absolute value and the $p$-norm, respectively. $\mathcal{A}$ denotes the set $\{1, \dots, A\}$, $\mathcal{A}(i)$ is the $i$-th element of set $\mathcal{A}$, and $\mathbb{R}^n$ is the set of $n$-dimensional real vectors. Moreover, $U_d[a, b]$ denotes the uniform distribution on the interval $[a, b]$.
II. SYSTEM MODEL AND PROBLEM FORMULATION
A. Radio RA Parameters
We consider a single-cell network with $U$ users, whose set is $\mathcal{U} = \{1, \dots, U\}$, and $K$ subcarriers, whose set is $\mathcal{K} = \{1, \dots, K\}$. We define the subcarrier assignment variable $\rho_u^k$ with $\rho_u^k = 1$ if subcarrier $k$ is allocated to user $u$ and $\rho_u^k = 0$ otherwise. We assume orthogonal frequency division multiple access (OFDMA) as the transmission technology, in which each subcarrier is assigned to at most one user. To enforce this, the following constraint is imposed:
$$\sum_{u\in\mathcal{U}} \rho_u^k \leq 1, \quad \forall k \in \mathcal{K}. \tag{1}$$
Let $h_u^k$ be the channel coefficient between user $u$ and the BS on subcarrier $k$, $p_u^k$ be the transmit power from the BS to user $u$ on subcarrier $k$, and $\sigma_u^k$ be the power of the additive white Gaussian noise (AWGN)$^3$ at user $u$ on subcarrier $k$. The received signal to noise ratio (SNR) of user $u$ on subcarrier $k$ is $\gamma_u^k = \frac{p_u^k h_u^k}{\sigma_u^k}$, and the achievable rate of user $u$ on subcarrier $k$ is given by
$$r_u^k = \rho_u^k \log\big(1 + \gamma_u^k\big), \quad \forall u \in \mathcal{U}, k \in \mathcal{K}. \tag{2}$$
Hence, the total achievable rate of user $u$ is given by $R_u = \sum_{k\in\mathcal{K}} r_u^k$, $\forall u \in \mathcal{U}$. The following constraint states the power limitation of the BS:
$$\sum_{k\in\mathcal{K}}\sum_{u\in\mathcal{U}} \rho_u^k p_u^k \leq P_{\max},$$
where $P_{\max}$ is the maximum transmit power of the BS.
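To make the radio-side notation concrete, the following short Python sketch builds a toy instance of (1) and (2) and the power budget above: it assigns each subcarrier to one user, splits the power budget uniformly, and evaluates the per-subcarrier SNRs and rates. The channel values, noise powers, and the best-user/uniform-power rule are illustrative assumptions, not the allocation produced by the proposed algorithm.

```python
import numpy as np

# Toy instance of the radio model; all values below are assumptions for illustration.
rng = np.random.default_rng(0)
U, K, P_max = 3, 8, 10.0           # users, subcarriers, BS power budget
h = rng.exponential(1.0, (U, K))    # channel coefficients h_u^k
sigma = 1e-2 * np.ones((U, K))      # noise powers sigma_u^k

# Give each subcarrier to its best user (satisfies (1)) and split the power uniformly.
rho = np.zeros((U, K), dtype=int)
rho[np.argmax(h, axis=0), np.arange(K)] = 1
p = np.full((U, K), P_max / K) * rho

gamma = p * h / sigma                      # SNR gamma_u^k
r = rho * np.log(1.0 + gamma)              # per-subcarrier rate (2), in nats/s/Hz
R = r.sum(axis=1)                          # total rate R_u

assert (rho.sum(axis=0) <= 1).all()        # OFDMA exclusivity (1)
assert (rho * p).sum() <= P_max + 1e-9     # BS power budget
print("per-user rates R_u:", np.round(R, 3))
```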
B. NFV RA Framework
In this subsection, we explain how the generated traffic of each user is handled in the network by performing the different NFs of the requested user's NS$^4$ on different servers/physical nodes by leveraging NFV$^5$. In this regard, we consider an NFV RA that consists of a new approach for the embedding and scheduling phases. In the embedding phase, we map each NF onto a server that is capable of running that NF. Note that we do not consider the mapping of virtual links onto physical links and leave it as interesting future work, as in [1], [25].
Footnote 3: In this paper, we assume that an AWGN interfering source (IS) interferes at the BS and at all users on each subcarrier. We consider a single cell with one BS; in a scenario with many cells and no coordination between BSs, the inter-cell interference distribution converges to a Gaussian and can be integrated into the interference of the other cells, which can be modeled by the IS [22].
Footnote 4: Defined by the European Telecommunications Standards Institute (ETSI) as the composition of Network Function(s) and/or Network Service(s), defined by its functional and behavioral specification [23].
(Table I, omitted here, summarizes the notation, including the server-mapping indicator between an NF $f_m^s$ of service $s$ and user $u$ and a node $n$, and the ordering indicator between NFs of different users and services.)
We consider $S$ communication service (CS)$^6$ types, whose set is $\mathcal{S} = \{1, 2, \dots, S\}$, and $M$ NFs, whose set is $\mathcal{F} = \{f_m \mid m = 1, \dots, M\}$. The considered parameters of the paper are stated in Table I. It is worth noting that some NFs have an association with and precedence over others; for instance, decryption is performed after encryption. We consider a set of VMs denoted by $\mathcal{N} = \{1, \dots, N\}$ in the network, each of which has computing and storage resources. We assume that each server can process at most one function at a time [1].
Footnote 5: Standardized by the ETSI organization for 5G and beyond in telecommunications [24].
Footnote 6: In this paper, the NS and CS are paired together; that means each CS $s$ has an NS with corresponding NFs, denoted by the set $\Omega_s$. Note that CS is defined by the 3rd generation partnership project (3GPP) technical specification 28.530 [26].
To improve energy efficiency (EE) in our proposed system, we introduce a new variable $\eta_n$ that indicates which nodes are active and is defined as
$$\eta_n = \begin{cases} 1, & \text{node } n \text{ is active}, \\ 0, & \text{otherwise}. \end{cases}$$
We consider a generalized model for resource sharing of VMs that is introduced in [1]. Therefore, we introduce a binary variable $\beta_{u,n}^{f_m^s}$ which denotes that NF $f_m^s$ for user $u$ in NS $s$ is executed at node $n$, defined as
$$\beta_{u,n}^{f_m^s} = \begin{cases} 1, & \text{NF } f_m^s \text{ for user } u \text{ in NS } s \text{ is executed at server } n, \\ 0, & \text{otherwise}. \end{cases}$$
When $\beta_{u,n}^{f_m^s}$ is set to 1, i.e., $f_m^s$ in the requested NS $s$ of user $u$ is mapped onto server $n$, this server should be active, i.e., $\eta_n = 1$. Therefore, we have the following constraint:
$$\beta_{u,n}^{f_m^s} \leq \eta_n, \quad \forall n \in \mathcal{N}, \forall u \in \mathcal{U}, \forall f_m^s \in \Omega_s, \forall s \in \mathcal{S}.$$
Each NF of each NS is performed completely at only one server at a time. Therefore, we have
$$\sum_{n\in\mathcal{N}} \beta_{u,n}^{f_m^s} \leq 1, \quad \forall u \in \mathcal{U}, f_m^s \in \Omega_s, s \in \mathcal{S}. \tag{4}$$
Moreover, we assume that each NF needs a specific number of CPU cycles per bit, i.e., $\alpha_{f_m^s}$, to run on a mapped server. From the physical resources perspective, we assume that each server can provide at most $L_n$ CPU cycles per unit time, and hence we have the following constraint:
$$\sum_{u\in\mathcal{U}}\sum_{s\in\mathcal{S}}\sum_{f_m^s\in\Omega_s} y_u\, \alpha_{f_m^s}\, \beta_{u,n}^{f_m^s} \leq L_n, \quad \forall n \in \mathcal{N}, \tag{5}$$
where $y_u$ is the packet size of the service of user $u$ and is assumed here to be equal to the number of bits generated in a time unit, i.e., $R_s^{\min}$. Hence, the processing time of each function $f_m^s$ per bit on server $n \in \mathcal{N}$ is
$$\bar{\tau}_n^{f_m^s} = \frac{\alpha_{f_m^s}}{L_n}, \quad \forall n \in \mathcal{N}, f_m^s \in \Omega_s. \tag{6}$$
Therefore, the total processing latency of each packet with packet size $y_u$ is obtained as
$$\tau_n^{f_m^s} = \bar{\tau}_n^{f_m^s}\, y_u, \quad \forall n \in \mathcal{N}, f_m^s \in \Omega_s. \tag{7}$$
Additionally, we assume that each NF needs a specific buffer size, i.e., $\psi_{f_m^s}$, when it is running on a server. Hence, from the storage resource perspective, we consider that each server has a limited buffer size, i.e., $\Upsilon_n$, which leads to the following constraint:
$$\sum_{u\in\mathcal{U}}\sum_{s\in\mathcal{S}}\sum_{\forall f_m^s\in\Omega_s} \big(\psi_{f_m^s} + y_u\big)\,\beta_{u,n}^{f_m^s} \leq \Upsilon_n, \quad \forall n \in \mathcal{N}. \tag{8}$$
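The per-server checks in (5) and (8) and the latencies (6)-(7) can be evaluated directly once a candidate embedding is fixed. The sketch below does this for a hypothetical two-user instance; all capacities, CPU-cycle counts, and buffer sizes are made-up placeholders.

```python
import numpy as np

# Minimal sketch of constraints (5) and (8) and latencies (6)-(7) for one candidate
# embedding; the numbers are assumptions, not parameters from the paper.
N = 3                                         # servers
L = np.array([3000.0, 2000.0, 1500.0])        # CPU cycles per unit time, L_n
Upsilon = np.array([400.0, 300.0, 300.0])     # buffer sizes
chains = {                                    # user -> list of (alpha, psi) per NF
    1: [(20.0, 50.0), (40.0, 60.0)],
    2: [(20.0, 50.0), (40.0, 60.0)],
}
y = {1: 10.0, 2: 10.0}                         # packet sizes y_u
beta = {(1, 0): 0, (1, 1): 0, (2, 0): 0, (2, 1): 0}  # (user, NF index) -> server

cpu_load = np.zeros(N)
buf_load = np.zeros(N)
for (u, m), n in beta.items():
    alpha, psi = chains[u][m]
    cpu_load[n] += y[u] * alpha                # left-hand side of (5)
    buf_load[n] += psi + y[u]                  # left-hand side of (8)
    tau_bit = alpha / L[n]                     # per-bit processing time, (6)
    tau_pkt = tau_bit * y[u]                   # per-packet processing latency, (7)
    print(f"user {u}, NF {m} on server {n}: tau = {tau_pkt:.3f}")

print("CPU feasible:   ", (cpu_load <= L).all())
print("buffer feasible:", (buf_load <= Upsilon).all())
```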
C. Latency Model
In NFV RA, our main aim is to guarantee the service requirements, including the maximum tolerable latency of each packet of size $y_u$ of the requested services, while minimizing the number of servers used. The total latency that we consider in our system model results from executing the NFs and from the queuing (waiting) time. In the following, we calculate the total latency resulting from scheduling.
Remark 1. In this paper, our main aim is to model and investigate the effect of processing and scheduling latency on the service acceptance and the network cost. Hence, we do not consider other latency factors, such as propagation and transmission latencies, in our model. In fact, our proposed scenario focuses on intra-data-center deployments and is not appropriate for nationwide networks. It is worth noting that the latter latencies come from the large distance between the source and the application servers. Therefore, these concerns can be addressed by exploiting mobile edge computing (MEC) technology to bring the application servers close to the clients. In this regard, we will generalize this work to MEC-enabled networks in future work.
1) Scheduling and Chaining: Each NF should wait until its preceding function is processed before its own processing can commence. The processing of NS $s$ ends when its last function is processed. Therefore, the total processing time is the summation of the processing times of the NFs at the various servers. For the scheduling of each NF on a server, we need to determine its start time. Therefore, we define $t_{u,n}^{f_m^s}$ as the start time of NF $f_m^s$ of user $u$ on server $n$, and the binary ordering indicator $x_{u,u'}^{f_m^s, f_{m'}^{s'}}$, whose value is 1 if NF $f_m^s$ of user $u$ is run after NF $f_{m'}^{s'}$ of user $u'$, and 0 otherwise. With these definitions, the starting time of each NF can be obtained as follows:
$$t_{u,n}^{f_m^s} \beta_{u,n}^{f_m^s} \geq \max\Bigg\{ \max_{\forall f_{m'}^{s'} \in \Omega_{s'},\, u' \in \mathcal{U}} x_{u,u'}^{f_m^s, f_{m'}^{s'}}\, \beta_{u',n}^{f_{m'}^{s'}} \Big(t_{u',n}^{f_{m'}^{s'}} + \tau_n^{f_{m'}^{s'}}\Big),\ \max_{\forall f_{m'}^{s} \in \{\Omega_s - f_m^s\},\, n' \in \{\mathcal{N}-n\}} x_{u,u}^{f_m^s, f_{m'}^{s}}\, \beta_{u,n'}^{f_{m'}^{s}} \Big(t_{u,n'}^{f_{m'}^{s}} + \tau_{n'}^{f_{m'}^{s}}\Big) \Bigg\},\quad \forall f_m^s \in \Omega_s, f_{m'}^{s'} \in \Omega_{s'}, \forall s, s' \in \mathcal{S}, \forall n \in \mathcal{N}, \forall u \in \mathcal{U}. \tag{9}$$
To clarify further, we illustrate the proposed scheduling policy in Fig. 2. The total service chain latency of each user $u$ for the requested service is then obtained as follows:
$$D_u^{\text{Total}} = \max_{\forall n \in \mathcal{N},\, f_m^s \in \Omega_s,\, s \in \mathcal{S}} \Big( t_{u,n}^{f_m^s} \beta_{u,n}^{f_m^s} + \tau_n^{f_m^s} \beta_{u,n}^{f_m^s} \Big), \quad \forall u \in \mathcal{U}. \tag{10}$$
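The start-time rule (9) couples two maxima: the release time of the chosen server (occupied by previously scheduled NFs, possibly of other users) and the completion time of the user's own preceding NF. The following sketch computes start times in that spirit for a fixed embedding and a fixed processing order, and then evaluates (10); the instance and the sequential order are assumptions for illustration.

```python
# Hedged sketch of the scheduling logic behind (9)-(10) for a fixed embedding;
# each server runs one NF at a time and NFs of a user follow chain order.
server_ids = [0, 1]
# (user, nf, server, processing latency tau) in the assumed scheduling order.
schedule_in = [(1, 0, 0, 200.0 / 3000.0), (1, 1, 0, 400.0 / 3000.0),
               (2, 0, 0, 200.0 / 3000.0), (2, 1, 0, 400.0 / 3000.0)]

server_free = {n: 0.0 for n in server_ids}   # when each server becomes idle
chain_done = {}                              # user -> finish time of its last NF
start_times = {}
for u, m, n, tau in schedule_in:
    start = max(server_free[n], chain_done.get(u, 0.0))  # both maxima in (9)
    start_times[(u, m)] = start
    server_free[n] = start + tau
    chain_done[u] = start + tau               # precedence within the user's NS

D_total = {u: chain_done[u] for u in {u for u, *_ in schedule_in}}  # eq. (10)
print("start times:", {k: round(v, 3) for k, v in start_times.items()})
print("D_total:    ", {u: round(v, 3) for u, v in D_total.items()})
```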
D. Cost Model: Objective Function
Our aim in this paper is to minimize the total cost of the network. In this regard, we define cost Ψ as the total amount of radio and NFV resources that are utilized in the network to provide services. In particular, the cost function is given as follows:
$$\Psi = \mu \sum_{u\in\mathcal{U}}\sum_{k\in\mathcal{K}} p_u^k \rho_u^k + \nu \sum_{n\in\mathcal{N}} \eta_n, \tag{11}$$
where µ and ν are constants for scaling and balancing the costs of different resource types.
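For a given allocation, (11) is a direct weighted sum of the transmit power actually used and the number of active servers, as in the minimal sketch below; the weights and numerical values are placeholders chosen only for illustration.

```python
import numpy as np

# Direct evaluation of the cost (11); mu, nu, and the example allocation are assumptions.
mu, nu = 1.0, 10.0
rho = np.array([[1, 1, 0, 0], [0, 0, 1, 1]])                   # subcarrier assignment
p = np.array([[1.2, 0.8, 0.0, 0.0], [0.0, 0.0, 0.5, 0.9]])     # transmit powers
eta = np.array([1, 0, 1])                                      # active-server indicators

Psi = mu * float(np.sum(p * rho)) + nu * int(eta.sum())
print("network cost Psi =", Psi)   # 1.0 * 3.4 + 10.0 * 2 = 23.4
```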
Fig. 2. Schematic illustration of the scheduling and of formulation (9) for example requests, considering 5 servers, 2 users, and 6 NFs (user 1 requests service $s=1$ and user 2 requests service $s=2$).
E. Problem Formulation
Based on these definitions, our aim is to solve the following optimization problem:
$$\min_{\mathbf{P},\boldsymbol{\rho},\mathbf{T},\mathbf{X},\boldsymbol{\beta},\boldsymbol{\eta}} \ \Psi(\mathbf{P},\boldsymbol{\rho},\boldsymbol{\eta}) \tag{12a}$$
S.T:
$$R_u \geq R_u^{\min}, \quad \forall u \in \mathcal{U}, \tag{12b}$$
$$\sum_{u\in\mathcal{U}} \rho_u^k \leq 1, \quad \forall k \in \mathcal{K}, \tag{12c}$$
$$\sum_{k\in\mathcal{K}}\sum_{u\in\mathcal{U}} \rho_u^k p_u^k \leq P_{\max}, \tag{12d}$$
$$\sum_{u\in\mathcal{U}}\sum_{s\in\mathcal{S}}\sum_{f_m^s\in\Omega_s} y_u\, \alpha_{f_m^s}\, \beta_{u,n}^{f_m^s} \leq L_n, \quad \forall n \in \mathcal{N}, \tag{12e}$$
$$\sum_{u\in\mathcal{U}}\sum_{s\in\mathcal{S}}\sum_{\forall f_m^s\in\Omega_s} \psi_{f_m^s}\beta_{u,n}^{f_m^s} + y_u \sum_{\forall f_m^s\in\Omega_s}\beta_{u,n}^{f_m^s} \leq \Upsilon_n, \quad \forall n \in \mathcal{N}, \tag{12f}$$
$$\text{the scheduling constraint (9) holds}, \tag{12g}$$
$$D_u^{\text{Total}} \leq D_s^{\max}, \quad \forall u \in \mathcal{U}, \tag{12h}$$
$$0 \leq p_u^k, \quad \forall u \in \mathcal{U}, k \in \mathcal{K}, \tag{12i}$$
$$\beta_{u,n}^{f_m^s} \leq \eta_n, \quad \forall n \in \mathcal{N}, \forall u \in \mathcal{U}, f_m^s \in \Omega_s, \tag{12j}$$
$$\sum_{n\in\mathcal{N}} \beta_{u,n}^{f_m^s} \leq 1, \quad \forall u \in \mathcal{U}, f_m^s \in \Omega_s, s \in \mathcal{S}, \tag{12k}$$
$$\rho_u^k \in \{0,1\}, \quad \forall u \in \mathcal{U}, k \in \mathcal{K}, \tag{12l}$$
$$\beta_{u,n}^{f_m^s} \in \{0,1\}, \quad \forall u \in \mathcal{U}, \forall f_m^s \in \Omega_s, \forall s \in \mathcal{S}, \tag{12m}$$
$$x_{u,u'}^{f_m^s, f_{m'}^{s'}} \in \{0,1\}, \quad \forall u, u' \in \mathcal{U}, u \neq u', \forall f_m^s \in \Omega_s, \forall f_{m'}^{s'} \in \Omega_{s'}, \tag{12n}$$
$$\eta_n \in \{0,1\}, \quad \forall n \in \mathcal{N}, \tag{12o}$$
where $\boldsymbol{\rho} = [\rho_u^k]$, $\boldsymbol{\beta} = [\beta_{u,n}^{f_m^s}]$, $\mathbf{P} = [p_u^k]$, $\mathbf{T} = [t_{u,n}^{f_m^s}]$, $\mathbf{X} = [x_{u,u'}^{f_m^s,f_{m'}^{s'}}]$, and $\boldsymbol{\eta} = [\eta_n]$ collect the optimization variables.
III. SOLUTION OF THE PROPOSED PROBLEMS
Optimization problem (12) is non-convex, includes both binary and continuous variables, and has non-linear, non-convex constraints. Hence, it is an NP-hard and mathematically intractable optimization problem: obtaining an optimal solution is not trivial and leads to high computational complexity and algorithm run time. Therefore, we cannot apply common convex optimization methods to solve it.
Without considering NFV RA, the radio RA problem, taken separately in the power and subcarrier allocation variables, is a convex optimization problem, and hence each part can be solved efficiently. In contrast, NFV RA is a non-linear mixed-integer program with a large number of variables, i.e., $\mathbf{T}, \mathbf{X}, \boldsymbol{\eta}, \boldsymbol{\beta}$. This motivates us to develop a new low-complexity heuristic algorithm to solve the NFV RA sub-problem, which is stated in detail in Algorithm 2. Moreover, we investigate our proposed algorithm from different aspects and compare it with other methods.
To solve the optimization problem (12) in an efficient manner, we utilize the alternate search method (ASM). To use ASM, we need initial values of the optimization variables that satisfy the corresponding constraints of (12). Since the optimization problem (12) may be infeasible, we propose a novel elasticizing approach by introducing a new elastic variable. Based on this method, the constraints that would make the optimization problem infeasible are changed as follows. Assume that we have the constraint $g(\mathbf{y}) \leq 0$, where $\mathbf{y} \in \mathbb{R}^n$ is the optimization variable. We elasticize it as $g(\mathbf{y}) \leq A$, where $A \geq 0$ is a new optimization variable that is penalized in the objective. By applying this method, we solve the following optimization problem:
$$\min_{\mathbf{P},\boldsymbol{\rho},\mathbf{T},\mathbf{X},\boldsymbol{\eta},\boldsymbol{\beta},A} \ \Psi(\mathbf{P},\boldsymbol{\rho},\boldsymbol{\eta}) + W A \tag{13a}$$
S.T:
$$R_u^{\min} - R_u \leq A, \quad \forall u \in \mathcal{U}, \tag{13b}$$
$$\sum_{u\in\mathcal{U}}\sum_{s\in\mathcal{S}}\sum_{\forall f_m^s\in\Omega_s} \psi_{f_m^s}\beta_{u,n}^{f_m^s} + y_u \sum_{\forall f_m^s\in\Omega_s}\beta_{u,n}^{f_m^s} - \Upsilon_n \leq A, \quad \forall n \in \mathcal{N}, \tag{13c}$$
$$\sum_{u\in\mathcal{U}}\sum_{s\in\mathcal{S}}\sum_{f_m^s\in\Omega_s} y_u\, \alpha_{f_m^s}\, \beta_{u,n}^{f_m^s} - L_n \leq A, \quad \forall n \in \mathcal{N}, \tag{13d}$$
$$D_u^{\text{Total}} - D_s^{\max} \leq A, \quad \forall u \in \mathcal{U}, \tag{13e}$$
$$A \geq 0, \tag{13f}$$
$$\text{(12c), (12d), (12j), (12k), (12g), (12i)-(12o)},$$
where $A$ is the elastic optimization variable and $W$ is a large number, i.e., $W \gg 1$. Note that since $A$ can be any non-negative value, the optimization problem (13) is feasible. By solving the optimization problem (13), the infeasibility of the main optimization problem (12) is determined: if the elastic variable $A$ is positive, problem (12) is infeasible. To overcome the infeasibility of problem (12), we introduce a new AC method that rejects some services to provide room for the remaining ones. The block diagram illustrating the main steps of the proposed method to solve the optimization problem (12), which is based on solving the optimization problem (13) and the AC, is shown in Fig. 3.
Proposition 1. Problem (13) is equivalent to problem (12) if we have $A = 0$. That means problem (12) is feasible and, based on the proposed AC method, all the requested services are accepted.
The elasticated problem (13) is also non-convex and NP-hard. In this regard, we solve it by dividing it into three sub-problems using ASM. The first sub-problem is power allocation and elastication, the second one is subcarrier allocation, and the last one is NFV RA. In fact, the first and second sub-problems constitute the radio RA sub-problem, which is stated in Section III-A. In the NFV RA sub-problem, all the optimization variables are integer, and the problem formulation and solution are presented in Section III-B. More details of the proposed iterative solution of optimization problem (13) are stated in Algorithm 6. In the next subsections, we explain the solutions of the aforementioned sub-problems.
A. Radio RA
The radio RA problem is divided into two sub-problems as follows.
1) Power Allocation and Elasticated Subproblem:
The power allocation and elasticated subproblem, for a fixed subcarrier assignment and fixed NFV variables, is presented as follows:
$$\min_{\mathbf{P},A} \ \sum_{u\in\mathcal{U}}\sum_{k\in\mathcal{K}} \rho_u^k p_u^k + W A \tag{14a}$$
S.T: the constraints of (13) that involve $\mathbf{P}$ and $A$.
2) Subcarrier Allocation Subproblem: The subcarrier allocation sub-problem is as follows:
$$\min_{\boldsymbol{\rho}} \ \sum_{u\in\mathcal{U}}\sum_{k\in\mathcal{K}} \rho_u^k p_u^k, \tag{15a}$$
S.T: (13b), (12c), (12d), (12l). (15b)
We solve sub-problem (15) by using the MOSEK toolbox in MATLAB [28].
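As a rough illustration of the elasticized power-allocation subproblem (14), the snippet below solves a small instance with CVXPY for a fixed subcarrier assignment. The paper itself uses GP/IPM via CVX and MOSEK in MATLAB, so this is only an analogous sketch, and the channel values, $W$, and $R^{\min}$ are assumptions. A strictly positive elastic variable $A$ at the optimum signals that the corresponding constraints of (12) cannot all be met.

```python
import cvxpy as cp
import numpy as np

# Illustrative CVXPY analogue of the elasticized power-allocation subproblem (14);
# all numerical values below are assumptions.
rng = np.random.default_rng(1)
U, K, P_max, W = 2, 4, 5.0, 1e3
R_min = np.array([2.0, 2.0])
h = rng.exponential(1.0, (U, K))
sigma = 1e-2
rho = np.zeros((U, K)); rho[0, :2] = 1; rho[1, 2:] = 1   # fixed subcarrier assignment

p = cp.Variable((U, K), nonneg=True)
A = cp.Variable(nonneg=True)                              # elastic variable
rates = [cp.sum(cp.multiply(rho[u], cp.log(1 + cp.multiply(h[u], p[u]) / sigma)))
         for u in range(U)]
constraints = [R_min[u] - rates[u] <= A for u in range(U)]   # elasticized rate constraint (13b)
constraints += [cp.sum(cp.multiply(rho, p)) <= P_max]        # BS power budget (12d)
prob = cp.Problem(cp.Minimize(cp.sum(cp.multiply(rho, p)) + W * A), constraints)
prob.solve()
print("total power:", round(float(cp.sum(cp.multiply(rho, p)).value), 3),
      " elastic A:", round(float(A.value), 4))   # A > 0 would flag infeasibility of (12)
```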
B. NFV RA
The NFV RA sub-problem is as follows:
$$\min_{\mathbf{T},\mathbf{X},\boldsymbol{\eta},\boldsymbol{\beta}} \ \sum_{n\in\mathcal{N}} \eta_n \tag{16a}$$
$$\text{S.T: (13c)-(13e), (12g), (12j)-(12o).} \tag{16b}$$
To solve problem (16), we propose a new heuristic algorithm in which the functions are mapped and scheduled on the servers that have the minimum processing latency. Moreover, our proposed algorithm is based on minimizing the number of active servers. To this end, we sort the servers by the total processing latency metric. After that, the server with the best rank, i.e., the highest available capacity in the sorted list, is turned on. Then, we activate another server only if the previously activated servers cannot satisfy the resource demands of the NFs or the QoS of the users would be degraded. Based on the algorithm, we sort the users in ascending order of their latency requirements and then start to map and schedule each of their NFs on the servers. The details of the proposed NFV RA are stated in Algorithm 2, whose output is $\mathbf{T}$, $\mathbf{X}$, $\boldsymbol{\eta}$, and $\boldsymbol{\beta}$.
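Algorithm 2 itself is not reproduced in this excerpt, so the following sketch only mirrors the activation logic described above: servers are ranked by capacity, users are served in ascending order of deadline, and a new server is activated only when the already-active ones cannot meet a user's deadline. The helper name and the instance are hypothetical.

```python
def heuristic_embed(servers, users):
    """servers: list of capacities L_n; users: list of (deadline, [(alpha, y_u), ...])."""
    order = sorted(range(len(servers)), key=lambda n: -servers[n])  # best capacity first
    active, server_free, placement = [], {}, {}
    for uid, (deadline, chain) in sorted(enumerate(users), key=lambda t: t[1][0]):
        candidates = list(active)                      # reuse active servers first
        if len(active) < len(servers):
            candidates.append(order[len(active)])      # next server that could be activated
        for n in candidates:
            start = server_free.get(n, 0.0)
            finish = start + sum(a * y / servers[n] for a, y in chain)  # latency (7)
            if finish <= deadline:                     # deadline respected
                placement[uid], server_free[n] = n, finish
                if n not in active:
                    active.append(n)                   # activate only when needed
                break
        else:
            placement[uid] = None                      # left to the AC step
    return placement, active

servers = [1000, 2000, 1500, 3000, 1800]
users = [(0.3, [(20, 10), (40, 10)]), (0.7, [(20, 10), (40, 10)])]
print(heuristic_embed(servers, users))                 # both users fit on the 3000 server
```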
C. Admission Control
Our proposed AC is based on the value of the elastic variable of problem (13): if $A$ is non-zero, the original problem (12) is infeasible. This means that one or more of the elasticated constraints, i.e., (13b)-(13e), are not satisfied. To ensure these constraints, we can either increase the network resources (e.g., the servers' capacities) or reject some of the users' service requests. Since the first option is not practical in most cases, we propose to reject some requested services by adopting the proposed AC. Behind AC, one of the major questions is which of the requested services should be rejected. In this case, the requested services have diverse characteristics and different effects on the utilization of the network resources, and consequently on the infeasibility of problem (12). To find the user that has the largest effect on the infeasibility and reject its service, we proceed as follows:
$$u^* = \arg\max_{u} \ \kappa_1 \big(R_s^{\min} - R_u\big) + \kappa_2 \sum_{n\in\mathcal{N}}\sum_{s\in\mathcal{S}} \Big(\sum_{\forall f_m^s\in\Omega_s} \psi_{f_m^s}\beta_{u,n}^{f_m^s} + y_u \sum_{\forall f_m^s\in\Omega_s}\beta_{u,n}^{f_m^s} - \Upsilon_n\Big) + \kappa_3 \sum_{n\in\mathcal{N}}\sum_{s\in\mathcal{S}} \Big(\sum_{f_m^s\in\Omega_s} y_u\, \alpha_{f_m^s}\, \beta_{u,n}^{f_m^s} - L_n\Big), \tag{17}$$
where $\kappa_1$, $\kappa_2$, and $\kappa_3$ are fitting parameters used to balance the terms in (17); we emphasize that in (17) we use the values of the optimization variables of (13) obtained by Algorithm 6. Based on this, we reject user $u^*$ and then solve problem (13) with $\mathcal{U} = \mathcal{U} - \{u^*\}$. We repeat this procedure until we have $A = 0$ in the solution of problem (13).
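The AC procedure is essentially a loop around problem (13): solve, check $A$, and if $A > 0$ drop the user maximizing (17) and repeat. The sketch below shows that control flow with a mocked solver standing in for the ASM solution of (13); the scoring weights and the mock outputs are assumptions.

```python
# Sketch of the admission-control loop built around (17). solve_elasticized() is a
# stand-in for solving problem (13); it is mocked here so the control flow is runnable.
def admission_control(users, solve_elasticized, score):
    admitted = list(users)
    while admitted:
        sol = solve_elasticized(admitted)            # returns 'A' and per-user terms
        if sol["A"] <= 1e-9:                         # Proposition 1: (12) is feasible
            return admitted, sol
        worst = max(admitted, key=lambda u: score(u, sol))   # u* in (17)
        admitted.remove(worst)                        # reject the most demanding user
    return [], None

# Toy stand-ins: three users; the mock solver reports infeasibility until user 2 is gone.
def mock_solver(admitted):
    return {"A": 1.0 if 2 in admitted else 0.0,
            "rate_gap": {u: (5.0 if u == 2 else 0.5) for u in admitted},
            "buf_excess": {u: 0.0 for u in admitted},
            "cpu_excess": {u: 0.0 for u in admitted}}

kappa1, kappa2, kappa3 = 1.0, 1.0, 1.0               # assumed fitting parameters
score = lambda u, s: (kappa1 * s["rate_gap"][u]
                      + kappa2 * s["buf_excess"][u]
                      + kappa3 * s["cpu_excess"][u])
print(admission_control([1, 2, 3], mock_solver, score))
```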
IV. CONVERGENCE AND COMPUTATIONAL COMPLEXITY
A. Convergence of the Solution Algorithm
Based on ASM, after each iteration the objective function of each sub-problem is improved, and the procedure finally converges. Fig. 4 shows an example of the convergence of our proposed iterative algorithm: it converges after a few iterations. The following relations hold between iterations ($z$ is the iteration number):
$$\mathcal{O}\big(\mathbf{P}[z], \boldsymbol{\rho}[z], \boldsymbol{\eta}[z], A[z]\big) = \min_{\mathbf{P},A} \mathcal{O}\big(\mathbf{P}[z], \boldsymbol{\rho}[z], \boldsymbol{\eta}[z], A[z]\big) \leq \mathcal{O}\big(\mathbf{P}[z-1], \boldsymbol{\rho}[z], \boldsymbol{\eta}[z], A[z-1]\big) = \min_{\boldsymbol{\rho}} \mathcal{O}\big(\mathbf{P}[z-1], \boldsymbol{\rho}[z], \boldsymbol{\eta}[z], A[z-1]\big) \leq \mathcal{O}\big(\mathbf{P}[z-1], \boldsymbol{\rho}[z-1], \boldsymbol{\eta}[z], A[z-1]\big) = \min_{\boldsymbol{\eta}} \mathcal{O}\big(\mathbf{P}[z-1], \boldsymbol{\rho}[z-1], \boldsymbol{\eta}[z], A[z-1]\big) \leq \mathcal{O}\big(\mathbf{P}[z-1], \boldsymbol{\rho}[z-1], \boldsymbol{\eta}[z-1], A[z-1]\big).$$
This means that the objective function of ASM decreases as the iteration number increases. In addition, under the QoS and resource-demand constraints, i.e., (13b)-(13e), the ASM algorithm converges to a sub-optimal solution of problem (13). Moreover, based on equation (7) and the server selection policy of Algorithm 2, $R_u$ is directly proportional to $\boldsymbol{\eta}$. Hence, if the value of $R_u$ is fixed or reduced at each iteration $z$, i.e., if $R_u^{(z)} \leq R_u^{(z-1)}$, then we have $\boldsymbol{\eta}^{(z)} \leq \boldsymbol{\eta}^{(z-1)}$. As a result, the proposed algorithm is monotonic.
TABLE II. Computational complexity of the different parts of the solution.
Heuristic: $O(U^2 \times F \times N)$
Greedy-based solution: $O(U^2 \times F \times N)$
Power allocation (CVX): $\log(C_1/\xi)/\log(\varsigma)$
Subcarrier allocation (CVX-MOSEK): $\log(C_2/t_0)/\log(\varsigma)$
B. Computational Complexity
By utilizing ASM, the overall complexity of the algorithm is a linear combination of the complexities of the individual sub-problems.
1) Radio RA: For the radio RA sub-problem, we utilize geometric programming (GP) and the interior-point method (IPM) via the CVX toolbox in MATLAB [27]. Based on this method, the computational complexity order of the power allocation sub-problem is given by $\frac{\log(C_1/\xi)}{\log(\varsigma)}$, where $C_1 = U + N + N\times U + U + 1$ is the total number of constraints of sub-problem (14), $\xi$ is the initial point for approximating the accuracy of the IPM, $0 < \epsilon \ll 1$ is the stopping criterion of the IPM, and $\varsigma$ is the accuracy of the IPM [27]. Similarly, the complexity of sub-problem (15) is given by $\frac{\log(C_2/\xi)}{\log(\varsigma)}$, where $C_2 = U + K + 1$ is the total number of constraints of (15). The complexities of all parts of the solution are summarized in Table II.
V. EXPERIMENTAL EVALUATION
A. Simulation Environment
In this section, simulation results are presented to evaluate the performance of the proposed system model. We consider $U = 50$ users randomly distributed in the coverage area of a single cell.
B. Simulation Results
We start the investigation of the proposed algorithm under different network settings and parameters with the simulation results shown in Figures 5(a)-7(b). These results are obtained by averaging over 500 Monte Carlo iterations. We discuss these results in the following.
1) Acceptance Ratio:
The acceptance ratio is defined as the ratio of the number of services accepted by the network to the total number of services requested by the users, and is obtained as $\kappa = 1 - \frac{\hat{U}}{U}$, where $\hat{U}$ is the number of users whose services are rejected based on the proposed AC. It is a criterion for investigating the efficiency of the proposed algorithm in utilizing the total network resources to guarantee the requested QoS and accept the service demands.
As can be seen from Figures 5(a), 5(b), and 5(c), the value of the acceptance ratio depends on two main factors: i) the network resource capacity; and ii) the number of users (service demands) and the service QoS characteristics (latency and data rate). Therefore, there are challenges in supporting high data rates and providing low-latency services. Clearly, increasing the number of users (service requests) decreases the acceptance ratio, especially for low-latency services, which have the main contribution to the acceptance ratio. We observe that increasing the number of low-latency services leads to a reduced acceptance ratio. For a large number of users, the network guarantees the service requirements of some users while the others are rejected. In these cases, based on the expression for $\kappa$, the value of $\hat{U}$ increases while $U - \hat{U}$ approximately reaches a fixed value, so that the latency and buffering requirements of the admitted services remain satisfied. From this figure, we conclude that the impact of the number of active servers on high-data-rate and low-latency services, e.g., process automation [29], is larger than that on the other services. Furthermore, comparing Fig. 5(c) and Fig. 5(a), we find that the effect of the server processing capacity on low-latency services is more considerable than that of the number of active servers. That means low-latency services are rejected by the network because their requirements need more network resources to reduce the waiting and processing times.
2) Network Cost: Fig. 6(a) illustrates the network cost versus the number of users for $R_s^{\min} = 10$ bps/Hz and a service deadline of 2 seconds. The network cost comprises both the radio and NFV resource costs, in terms of power and spectrum consumption and of utilizing servers in the network. It can be observed that increasing the number of users increases the network cost, due to the increase in both the radio and the NFV costs, and that the NFV cost grows more rapidly than the radio cost. We also consider the utilization ratio $r_U/r_T$, where $r_U$ is the amount of resources utilized by the users and $r_T$ is the total amount of server resources. From this figure, we infer that not only the packet size has a direct effect on the utilization ratio, but the service deadline also has a major impact on it. This is due to the fact that a large packet size needs more storage and processing capacity, and a low service deadline requires minimum waiting and processing times. Therefore, we should activate more servers and exploit their resources for low-latency services. Obviously, increasing the number of users increases the utilization ratio in an approximately linear manner. From the cost perspective, we can conclude that by increasing the utilization of the network resources, the network cost is also increased, especially in terms of power consumption.
3) Service Deadline: Figures 6(b) and 7(b) show the total cost of the network versus different values of the service deadline for various scenarios. Clearly, the requested service deadline has a major effect on the utilization of the processing and buffering resources of the servers. From Fig. 7, we observe that tighter service deadlines require more active servers to process the NFs of the corresponding services. That means that for providing low-latency services, we should pay more cost in terms of radio and NFV resources. By increasing the number of servers, the waiting time of each NF of an NS that is in the queue is minimized, and hence the server availability and the probability of guaranteeing the QoS of the users are increased. For higher latency values, in some cases one (or two) active server(s) is sufficient. By comparing Fig. 6(b) and Fig. 6(a), we find that reducing the value of the latency increases the network cost significantly compared to the case where the number of users (the number of requested services) increases.
C. Benchmark Algorithms
1) Performance Comparison: To the best of our knowledge, this is the first work (see the related works) tackling E2E RA while proposing a new AC and a new closed-form formulation of NFV scheduling that comprises both the waiting time and the SFC ordering. Moreover, we propose a new heuristic algorithm to solve the formulated optimization problem. It is worth noting that in the related works a greedy-based search is exploited [1], [8], [30]. We compare our proposed algorithm with a modified version of the greedy-based algorithm proposed in [1]. In a greedy-based search, different objectives can be considered, for example, minimizing the total flow time [1]. The greedy-based scheduling and embedding of the arriving service requests are performed sequentially based on the greedy criterion. In the modified greedy algorithm used to solve sub-problem (16), we first search for the servers that are appropriate for embedding and then find the best server by the greedy criterion, i.e., the shortest server queue [1]. The steps of the greedy-based algorithm with the minimum-queue-time criterion are stated in Algorithm 3.
In some cases, the greedy algorithm adds servers that are free and have more capacity, even though it would be possible to satisfy the latency of the other functions without utilizing this additional server. In contrast, the proposed algorithm activates a server only when the previously added (activated) servers cannot satisfy the constraints of the problem and the users' QoS. More importantly, in the greedy algorithm the number of active servers stays fixed as the service deadlines increase, which is a consequence of its server selection policy based on the queuing time, while in the proposed algorithm it is reduced.
To clarify this further, assume that we have five servers in the network with capacities [1000, 2000, 1500, 3000, 1800] and two service requests, each with 2 functions with capacity requirements 20 and 40, $R_s^{\min} = 10$ bps/Hz, and service deadlines 0.3 and 0.7, respectively. Based on Algorithm 2, the service finishing time of user 1 is $\frac{20\times 10}{3000} + \frac{40\times 10}{3000} = 0.2 < 0.3$ and the service finishing time of user 2 is $0.2 + \frac{20\times 10}{3000} + \frac{40\times 10}{3000} = 0.4 < 0.7$. That means one active server is sufficient for all users. In contrast, based on the greedy algorithm, the finishing time of user 1 is $\frac{20\times 10}{3000} + \frac{40\times 10}{3000} = 0.2 < 0.3$ and that of user 2 is $\frac{20\times 10}{2000} + \frac{40\times 10}{2000} = 0.3 < 0.7$; since $0.3 < 0.4$, the greedy algorithm selects the server with capacity 2000 instead of the server with capacity 3000. As a result, the greedy algorithm utilizes two servers while the proposed algorithm utilizes only one server in both cases. Clearly, the greedy algorithm utilizes servers inefficiently, and hence the acceptance ratio decreases, especially for a large number of users (see Fig. 8(a)).
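The arithmetic of this example can be checked with a few lines of code; the snippet below reproduces the finishing times under the two server-selection policies (values as in the example above).

```python
# Reproduces the example: finishing times under the proposed policy (keep reusing the
# fastest already-active server) versus the greedy shortest-queue choice.
chain = [(20, 10), (40, 10)]                  # (alpha, y_u) for the two NFs
deadlines = {1: 0.3, 2: 0.7}

def chain_time(start, cap):
    t = start
    for alpha, y in chain:
        t += alpha * y / cap                  # per-NF latency, eq. (7)
    return t

t1 = chain_time(0.0, 3000)                    # proposed: user 1 on the 3000 server -> 0.2
t2 = chain_time(t1, 3000)                     # proposed: user 2 queues on the same server -> 0.4
g2 = chain_time(0.0, 2000)                    # greedy: user 2 on an empty 2000 server -> 0.3
print(f"proposed: user1 {t1:.1f}, user2 {t2:.1f}, active servers = 1")
print(f"greedy:   user1 {t1:.1f}, user2 {g2:.1f}, active servers = 2")
assert t1 <= deadlines[1] and t2 <= deadlines[2] and g2 <= deadlines[2]
```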
2) Optimality Gap: Another metric for investigating the performance of the proposed algorithm is the optimality gap. In this regard, we adopt the exhaustive search method [31]. Since the complexity of the exhaustive search method is very high and grows exponentially with the size of the system parameters, we exploit it only for a small-scale network. The considered parameters and the values obtained by the corresponding solution methods are stated in Table V. The other parameters are based on Tables IV and III.
Fig. 8. Comparison of the proposed and greedy-based algorithms for different values of $R_u^{\min}$ (5, 10 bps/Hz) and $U$ (15, 30): (a) acceptance ratio; (b) number of active servers versus the service deadline.
VI. CONCLUSION
In this paper, we proposed a novel joint radio and NFV RA scheme for heterogeneous services. Our aim was to minimize the utilization of the radio resources and the servers. Therefore, we proposed a novel scheduling and energy-efficient scheme that minimizes the number of activated servers based on a new heuristic algorithm. More importantly, our scheduling scheme includes queuing effects such as the queuing waiting time. To solve the proposed problem, we first divided it into three sub-problems and then solved each of them efficiently. To solve the NFV RA, we proposed a novel low-complexity heuristic algorithm based on minimizing the number of active servers in the network. With this scheme, we significantly reduced resource costs such as processing, buffering, and power consumption. Moreover, we proposed a novel AC scheme that determines which of the requested services have critical requirements and need more radio and NFV resources to ensure their QoS requirements, and then rejects those services.
We evaluated the performance of the proposed scheme for different network parameters and variables such as the service demands, the service QoS, and the network resource capacities. Our simulations were carried out for different values of the network parameters and requested services, with various metrics such as the service acceptance ratio, the number of active servers, and the predefined network cost. Moreover, to verify the performance of the proposed algorithm, we compared it with the conventional one from the performance perspective. Our simulation results demonstrate that the proposed algorithm outperforms the conventional one by approximately 8%. | 6,035 |
1907.06212 | 2960316954 | In this paper, we propose an end-to-end joint radio and virtual network function (VNF) resource allocation for next-generation networks providing different types of services with different requirements in terms of latency and data rate. We consider both the access and core parts of the network and formulate a novel optimization problem whose aim is to perform the radio resource allocation jointly with VNF embedding, scheduling, and resource allocation such that the network cost, defined as the consumed energy and the number of utilized network servers, is minimized. The proposed optimization problem is non-convex, NP-hard, and mathematically intractable, and hence, we use an alternative search method (ASM) to decouple the main problem into some sub-problems of lower complexity. We propose a novel heuristic algorithm for the embedding and scheduling of VNFs, together with a novel admission control (AC) algorithm. We compare the performance of the proposed algorithm with a greedy-based solution in terms of the acceptance ratio and the number of active servers. Our simulation results show that the proposed algorithm outperforms the conventional ones. | In @cite_3 , an online scheduling and embedding algorithm for NFV is proposed that considers the capacity of the available buffers and the processing time of each VNF. The authors propose a set of greedy algorithms and a tabu-search algorithm for mapping and scheduling. Moreover, the cost, revenue, and acceptance ratio of these algorithms are compared. VNF placement in a network with several mobile virtual network operators (MVNOs) is investigated in @cite_16 , in which a slice scheduling mechanism is introduced in order to isolate the traffic flows of the MVNOs. In that work, the goal is to optimize the VNF placement based on the available radio resources. Joint VNF placement and admission control (AC) are studied in @cite_24 , where the aim is to maximize the network provider's revenue in terms of bandwidth and capacity. The authors in @cite_0 propose an RA algorithm which integrates the placement and scheduling of VNFs. In @cite_15 , the VNF scheduling problem is investigated and joint VNF scheduling and traffic steering is formulated as a mixed-integer linear program. In that optimization problem, both the processing latency of the VNFs and the service-chain transmission latency on the virtual links are considered. | {
"abstract": [
"Network function virtualization has received attention from both academia and industry as an important shift in the deployment of telecommunication networks and services. It is being proposed as a path towards cost efficiency, reduced time-to-markets, and enhanced innovativeness in telecommunication service provisioning. However, efficiently running virtualized services is not trivial as, among other initialization steps, it requires first mapping virtual networks onto physical networks, and thereafter mapping and scheduling virtual functions onto the virtual networks. This paper formulates the online virtual function mapping and scheduling problem and proposes a set of algorithms for solving it. Our main objective is to propose simple algorithms that may be used as a basis for future work in this area. To this end, we propose three greedy algorithms and a tabu search-based heuristic. We carry out evaluations of these algorithms considering parameters such as successful service mappings, total service processing times, revenue, cost etc, under varying network conditions. Simulations show that the tabu search-based algorithm performs only slightly better than the best greedy algorithm.",
"",
"The design, management, and operation of network infrastructure have evolved during the last few years, leveraging on innovative technologies and architectures. With such a huge trend, due to the flexibility and significant economic potential of these technologies, software-defined networking (SDN) and network functions virtualization (NFV) are emerging as the most indispensable key catalysers. SDN NFV enhancing the infrastructure agility, thus network operators and service providers are able to program their own network functions (e.g., gateways, routers, load balancers) on vendor-independent hardware substrate. One of the most important considerations in NFV deployment is how to allocate the virtual resources that are needed to provide flexible virtual network services in an NFV-based network infrastructure. Thus, the most important prerequisite for NFV deployment is achieved fast, scalable and dynamic composition and allocation of networks functions (NFs) to implement network services (NSs). We have proposed a revised RA algorithm that integrates embedding and scheduling of virtual network functions (VNFs) simultaneously. In this paper, performance evaluation of the revised RA algorithm is performed and the proposed algorithm is applied more effectively in NFV environment where the demand for resources is flexible and concentrated.",
"To accelerate the implementation of network functions middle boxes and reduce the deployment cost, recently, the concept of network function virtualization (NFV) has emerged and become a topic of much interest attracting the attention of researchers from both industry and academia. Unlike the traditional implementation of network functions, a software-oriented approach for virtual network functions (VNFs) creates more flexible and dynamic network services to meet a more diversified demand. Software-oriented network functions bring along a series of research challenges, such as VNF management and orchestration, service chaining, VNF scheduling for low latency and efficient virtual network resource allocation with NFV infrastructure, among others. In this paper, we study the VNF scheduling problem and the corresponding resource optimization solutions. Here, the VNF scheduling problem is defined as a series of scheduling decisions for network services on network functions and activating the various VNFs to process the arriving traffic. We consider VNF transmission and processing delays and formulate the joint problem of VNF scheduling and traffic steering as a mixed integer linear program. Our objective is to minimize the makespan latency of the overall VNFs’ schedule. Reducing the scheduling latency enables cloud operators to service (and admit) more customers, and cater to services with stringent delay requirements, thereby increasing operators’ revenues. Owing to the complexity of the problem, we develop a genetic algorithm-based method for solving the problem efficiently. Finally, the effectiveness of our heuristic algorithm is verified through numerical evaluation. We show that dynamically adjusting the bandwidths on virtual links connecting virtual machines, hosting the network functions, reduces the schedule makespan by 15 –20 in the simulated scenarios.",
"Network function virtualization (NFV) sits firmly on the networking evolutionary path. By migrating network functions from dedicated devices to general purpose computing platforms, NFV can help reduce the cost to deploy and operate large IT infrastructures. In particular, NFV is expected to play a pivotal role in mobile networks where significant cost reductions can be obtained by dynamically deploying and scaling virtual network functions (VNFs) in the core network. However, in order to achieve its full potential, NFV needs to extend its reach also to the radio access segment. Here, mobile virtual network operators shall be allowed to request radio access VNFs with custom resource allocation solutions. Such a requirement raises several challenges in terms of performance isolation and resource provisioning. In this work, we formalize the wireless VNF placement problem in the radio access network as an integer linear programming problem and we propose a VNF placement heuristic, named wireless network embedding (WiNE), to solve the problem. Moreover, we present a proof-of-concept implementation of an NFV management and orchestration framework for enterprise WLANs. The proposed architecture builds on a programmable network fabric where pure forwarding nodes are mixed with radio and packet processing capable nodes."
],
"cite_N": [
"@cite_3",
"@cite_24",
"@cite_0",
"@cite_15",
"@cite_16"
],
"mid": [
"1823841943",
"",
"2787140710",
"2430218455",
"2334600287"
]
} | Energy Cost Minimization by Joint Radio and NFV Resource Allocation: E2E QoS Framework | B. Our Main Contributions
Obviously, in a real network and practical scenarios, QoS is the E2E concept and depends on radio access and core networks. In fact, guaranteeing QoS for different applications refers to ensure all of requirements, such as data rate by all parts of the network. These reasons, motivate us to propose a framework which radio and NFV RA is considered for E2E service provisioning where a new AC mechanism is devised for the service requests. The main contributions of this paper can be summarized as follows:
• In this paper, in order to guaranteeing E2E QoS by utilizing resources, efficiently, we propose a novel E2E QoS-aware framework by consideration of the radio and NFV RA that has not been considered in the literature.
• More importantly, we introduce a new approach for VNF scheduling with considering the network service latency. We introduce a new scheduling variable using which the latency of each VNF is obtained by calculating the processing and waiting time of all VNFs scheduled before it. This means that the time each NS is finished can be calculated as sum of the waiting time and processing time elapsed from the packet entrance to the packet receiving by the destination [21]. On the other hand, we consider a maximum tolerable latency for each packet of different services which should be ensured by the network. We propose a novel efficient and low complexity algorithm based on minimizing the number of active VMs (servers) and guaranteeing the requested service QoS requirements. • We formulate a new optimization problem for radio and NFV RA with the aim of minimizing cost function in terms of radio and NFV resources. In the proposed optimization problem, subcarrier assignment, power allocation, VNF embedding, scheduling, ordering, and server utilization are optimization variables. Our main aim is to minimize the network cost in terms of the transmit power and the number of active nodes while guaranteeing the service QoS metrics.
• To overcome the infeasibility issue in the solution of the proposed optimization problem, we propose a new elastication method and a novel AC method to reject some users and guarantee the other users requested service requirements. Based on the proposed AC, the user which has the most effect on the infeasibility on optimization problem i.e., needs more resources to guarantee its QoS is found and its service is rejected. • We prove the convergence of the proposed algorithm and analyze its computational complexity.
• We provide numerical results for the performance evaluation of the proposed problem and algorithm for different network configurations and greedy-based algorithm. Our simulation results show our proposed algorithm outperforms greedy-based by approximately 8% for same computational complexity.
C. Paper Organization
The rest of the paper is organized as follows. In Section II, the system model is explained. In Section II-E, the problem formulation is presented. The problem solution is presented in Section III. In Section IV, the computational complexity and convergence of the proposed algorithm are investigated. The simulation results are presented in Section V. Finally, in Section VI, the paper is concluded.
Notations: Vector and matrices are indicated by bold lower-case and upper-case characters, respectively. |.| and . p represent the absolute value and p-norm, respectively. A denotes set {1, . . . , A}, A(i) is i-th element of set A, and R n is the set of n dimension real numbers.
Moreover, U d [a, b] denotes the uniform distribution in interval a and b.
II. SYSTEM MODEL AND PROBLEM FORMULATION
A. Radio RA Parameters
We consider a single-cell network with U users whose set is U = {1, . . . , U } and K subcarriers whose set is K = {1, . . . , K}. We define the subcarrier assignment variable ρ k u with ρ k u = 1 if subcarrier k is allocated to user u and otherwise ρ k u = 0. We assume orthogonal frequency division multiple access (OFDMA) as the transmission technology in which each subcarrier is assigned at most to one user. To consider this, the following constraint is considered:
u∈U ρ k u ≤ 1, ∀k ∈ K.(1)
Let h k u be the channel coefficient between user u and the BS on subcarrier k, p k u be the transmit power from the BS to user u on subcarrier k, and σ k u be the power of additive white Gaussian noise (AWGN) 3 at user u on subcarrier k. The received signal to noise ratio (SNR) of user u
on subcarrier k is γ k u = p k u h k u σ k u
, and the achievable rate of user u on subcarrier k is given by
r k u = ρ k u log(1 + γ k u ), ∀u ∈ U, k ∈ K.(2)
Hence, the total achievable rate of user u is given by R u = k∈K r k u , ∀u ∈ U. The following constraint states the power limitation of BS:
k∈K u∈U ρ k u p k u ≤ P max ,
where P max is the maximum transmit power of BS.
B. NFV RA Framework
In this subsection, we explain how the generated traffic of each user is handled in the network by performing different NFs in the requested user's NS 4 on the different servers/physical nodes 3 In this paper, we assume that an AWGN interfering source (IS) interferes at the BS and all users on each subcarrier. We consider a single cell with a BS, in a scenario with many cells and no coordination between BSs, the inter-cell interference distribution converges to a Gaussian and can be integrated into the interferences of other cells which can be modeled by the IS [22]. 4 Defined by European Telecommunications Standards Institute (ETSI) as the composition of Network Function(s) and/or Network Service(s), defined by its functional and behavioral specification [23]. Server mapping between NF f s m of service s, user u and node n t Ordering indicator between NF f s m of service s user u and f s m of service s user u by leveraging NFV 5 . In this regard, we consider NFV RA that consists of a new approach for the embedding and scheduling phases. In the embedding phase, we map each NF on the server that is capable to run that NF. Note that we do not consider mapping virtual links on the physical links and leave it as an interesting future work as [1], [25].
We consider S communication service (CS) 6 types whose set is S = {1, 2, ..., S} and M NFs whose set is F = {f m |m = 1, . . . , M }. The considered parameters of the paper are stated in It is worth noting that some of NFs have some association and precedence over some others, for instance, the NF decryption is performed after encryption. We consider a set of VMs denoted by 5 Standardized by ETSI organization for 5G and beyond in the telecommunication [24]. 6 In this paper, the NS and CS are paired together. That means each CS s has a NS with corresponding NFs that is denoted by set Ωs. Note that CS is defined by the 3rd generation partnership project (3GPP) technical specification 28.530 [26]. N = {1, ..., N } in the network each of which has computing and storage resources. We assume that each server can process at most one function at a time [1]. To improve energy efficiency (EE) in our proposed system, we introduce a new variable η n to determine the active nodes which is defined as
η n = 1, Node n is active, 0, Otherwise.
We consider a generalized model for resource sharing of VMs that is introduced in [1]. Therefore, we introduce a binary variable β f s m u,n which denotes that NF f s m for user u in NS s is executed at node n, and is defined as
β f s m u,n =
1, NF f s m for user u in NS s is executed at server n.
0, Otherwise.
When β f s m u,n is set to 1 i.e., f s m in the requested NS s for user u is mapped on server n, and this server should be active, i.e., η n = 1. Therefore, we have the following constraint:
β f s m u,n ≤ η n , ∀n ∈ N , ∀u ∈ U, ∀f s m ∈ Ω s , ∀s ∈ S.
Each NF of each NS is performed completely at only one server at a time. Therefore, we have
n∈N β f s m u,n ≤ 1, ∀u ∈ U, f s m ∈ Ω s , s ∈ S.(4)
Moreover, we assume that each NF needs a specific number of CPU cycles per bit i.e., α f s m to run on a mapped server. From the physical resources perspective, we assume that each server can provide at most L n CPU cycles per unit time and hence, we have the following constraint:
u∈U s∈S f s m ∈Ωs y u α f s m β f s m u,n ≤ L n , ∀n ∈ N ,(5)
where y u is the packet size of service user u and here is assumed to be equal to the number of bits generated in a time unit, i.e., R min s . Hence, the processing time of each function f s m for each bit on server n ∈ N , i.e., is as follows: Therefore, the total processing latency for each packet with packet size y u is obtained as
τ f s m n = α f s m L n , ∀n ∈ N , f s m ∈ Ω s .(6)τ f s m n =τ f s m n y u , ∀n ∈ N , f s m ∈ Ω s .(7)
Additionally, we assume that each NF needs specific buffer size, i.e., ψ f s m , when it is running on the server. Hence, from the storage resource perspective, we consider that each server has the limited buffer size i.e., Υ n , which leads to the following constraint:
u∈U s∈S ∀f s m ∈Ωs (ψ f s m + y u )β f s m u,n ≤ Υ n , ∀n ∈ N .(8)
C. Latency Model
In NFV RA, our main aim is to guarantee the service requirement includes maximum tolerable latency for each packet with size y u of the requested services with minimizing consumption of servers. The total latency that we consider in our system model results from executing NFs and queuing (waiting) time. In the following, we calculate the total latency resulting from scheduling.
Remark 1. In this paper, our main aim is to model and investigate the effect of processing and scheduling latency on the service acceptance and the network cost. Hence, we do not consider the other latency factors such as propagation and transmission latencies in our model. In fact, our proposed scenario is focused on the intra data centers and not appropriate for the nationalwide networks. It is worth noting that the aforementioned latency is coming from the high order distance from the source and application servers. Therefore, these concerned can be treated by exploiting the mobile edge computing (MEC) technology to bring the application servers close to clients. In this regard, we generalize this work for the MEC-enabled networks in future works.
1) Scheduling and Chaining: Each NF should wait until its preceding function is processed before its processing can commence. The processing of NS s ends when its last function is processed. Therefore, the total processing time is the summation of the processing times of the NFs at the various servers. For scheduling of each NF on a server, we need to determine the start time of it. Therefore, we define t if NF f s m of user u is running after NF f s m of user u , its value is 1, otherwise is 0. By these definitions, the starting time of each NF can be obtained as follows:
t f s m u,n β f s m u,n ≥ max max ∀f s m ∈Ω s ,u ∈U x f s m ,f s m u,u β f s m u ,n (t f s m u ,n + τ f s m n ) , max ∀f s m ∈{Ωs−f s m },n ∈{N −n} x f s m ,f s m u,u β f s m u,n (t f s m u,n + τ f s m n ) , ∀f s m ∈ Ω s , f s m ∈ Ω s , ∀s, s ∈ S, ∀n ∈ N , ∀u ∈ U.(9)
To more clarify, we illustrate the proposed scheduling policy in Fig. 2. The total service chain latency for each user u on the requested service is inferred as follows:
D Total u = max ∀n∈N ,f s m ∈Ωs,s∈S t f s m u,n β f s m u,n + τ f s m n β f s m u,n , ∀u ∈ U.(10)
D. Cost Model: Objective Function
Our aim in this paper is to minimize the total cost of the network. In this regard, we define cost Ψ as the total amount of radio and NFV resources that are utilized in the network to provide services. In particular, the cost function is given as follows:
Ψ = µ u∈U ,k∈K p k u ρ k u + ν n∈N η n ,(11)
where µ and ν are constants for scaling and balancing the costs of different resource types.
U1 Request S=1 U2 Request S=2 =1 =1 =1 =1 =1 =1 =1 =1 Time =1 Starting time =max(A, B) A B =1 { , , , } { , , , , } Fig. 2.
Schematic illustration of scheduling and formulation of (9) for the requests as an example with considering 5 servers, 2 users, and 6 NFs.
E. Problem Formulation
Based on these definitions, our aim is to solve the following optimization problem:
min P,ρ,T,X,β,η Ψ(P, ρ, η) (12a)
S.T:
R u ≥ R min u , ∀u ∈ U, (12b) u∈U ρ k u ≤ 1, ∀k ∈ K, (12c) k∈K u∈U ρ k u p k u ≤ P max , (12d) u∈U s∈S f s m ∈Ωs y u α f s m β f s m u,n ≤ L n , ∀n ∈ N ,(12e)
u∈U s∈S ∀f s m ∈Ωs
ψ f s m β f s m u,n + y u ∀f s m ∈Ωs β f s m u,n ≤ Υ n , ∀n ∈ N ,(12f)t f s m u,n β f s m u,n ≥ max max ∀f s m ∈Ω s ,u ∈U x f s m ,f s m u,u β f s m u ,n (t f s m u ,n + τ f s m n ) , max ∀f s m ∈{Ωs−f s m },n ∈{N −n} x f s m ,f s m u,u β f s m u,n (t f s m u,n + τ f s m n ) , ∀f s m ∈ Ω s , f s m ∈ Ω s , ∀s, s ∈ S, ∀n ∈ N , ∀u ∈ U, (12g) D Total u ≤ D max s , ∀u ∈ U, (12h) 0 ≤ p k u , ∀u ∈ U, k ∈ K, (12i) β f s m u,n ≤ η n , ∀n ∈ N , ∀u ∈ U, f s m ∈ Ω s , (12j) n∈N β f s m u,n ≤ 1, ∀u ∈ U, f s m ∈ Ω s , s ∈ S, (12k) ρ k u ∈ {0, 1}, ∀u ∈ U, k ∈ K, (12l) β f s m u,n ∈ {0, 1}, ∀u ∈ U, ∀f s m ∈ Ω s , ∀s ∈ S, (12m) x f s m ,f s m u,u ∈ {0, 1}, ∀u, u ∈ U, u = u , ∀f s m , ∀f s m ∈ Ω s ,(12n)η n ∈ {0, 1}, ∀n ∈ N ,(12o)where ρ = [ρ k u ], β = [β
III. SOLUTION OF THE PROPOSED PROBLEMS
Optimization problem (12) is non-convex including both mixed binary and continues variables with non-linear and non-convex constraints. Hence, it belongs to the NP-hard and mathematically intractable optimization problem and obtaining an optimal solution is not trivial and leads to high computational complexity and algorithm run time. Therefore, we cannot apply the common convex optimization methods for solving it.
Without considering NFV RA, the radio RA problem, separately, on the power and subcarrier allocation variables is convex optimization problem, and hence, each of them can be solved efficiently. While NFV RA is non-linear mixed integer programming with large number of variables, i.e., T, X, η, β. These motivate us to develop a new low complexity heuristic algorithm to solve NFV RA sub-problem that is stated with details in Algorithm 2. However, we investigate our proposed algorithm from the different aspects and compare it with other methods.
To solve the optimization problem (12) in an efficient manner, we utilize alternate search method (ASM). To use ASM, we need to calculate initial values of the optimization variables which should satisfy the corresponding constraints of (12). Since the optimization problem (12) would be infeasible, we propose a novel elasticizing approach by introducing a new elastic variable. Based on this method, the constraints that would make the optimization problem infeasible are changed as follows. Assume that we have constraint g(y) ≤ 0, where y ∈ R n is the objective variable. We elasticize it by g(y) ≤ A, where A ≥ 0 is the objective variable. By applying this method, we solve the following optimization problem:
min P,ρ,T,X,η,β,A Ψ(P, ρ, η) + W A (13a) S.T: R min u − R u ≤ A, ∀u ∈ U, (13b) u∈U s∈S ∀f s m ∈Ωs ψ f s m β f s m u,n + y u ∀f s m ∈Ωs β f s m u,n − Υ n ≤ A, ∀n ∈ N , (13c) u∈U s∈S f s m ∈Ωs y u α f s m β f s m u,n − L n ≤ A, ∀n ∈ N , (13d) D Total u − D max s ≤ A, ∀u ∈ U,(13e)A ≥ 0,(13f)(12c), (12d), (12j), (12k), (12g), (12i) − (12o),
where A is the elastic optimization variable and W is a large number, i.e., W 1. Note that since A can be any non-negative value, the optimization problem (13) is feasible. By solving the optimization problem (13), the infeasibility of the main optimization problem (12) is determined. Therefore, if the elastic variable A is positive, problem (12) is infeasible. To overcome the infeasibility of problem (12), we introduce a new AC method to reject some services providing rooms for the remaining ones. The block diagram illustrating the main steps of the proposed method to solve the optimization problem (12) which is based on solving the optimization problem (13) and AC is shown in Fig. 3. (12) is feasible and based on the proposed AC method, all the requested services are accepted.
Proposition 1. Problem (13) is equivalent to problem (12), if we have A = 0. That means problem
The elasticated problem (13) is also non-convex and NP-hard. In this regard, we solve it by dividing it into three sub-problems by utilizing ASM. The first sub-problem is power allocation and elastication, the second one is subcarrier allocation, and the last one is NFV RA. In fact, the first and second sub-problems are the radio RA sub-problem and it is stated in Section III-A. In the NFV RA sub-problem, all the optimization variables are integer and the problem formulation and solution are presented in Section III-B. More details of the proposed iterative solution of optimization problem (13) is stated in Algorithm 6. In the next subsection, we explain the solution of the aforementioned sub-problems.
A. Radio RA
The radio RA problem is divided into two sub-problems as follows.
1) Power Allocation and Elasticated Subproblem:
The power allocation and elasticated subproblem is presented as follows: 2) Subcarrier Allocation Subproblem: The subcarrier allocation sub-problem is as follows:
min P,A u∈U k∈K ρ k u p k u + W A(min ρ u∈U k∈K ρ k u p k u ,(15a)
S.T: (13b), (12c), (12d), (12l).
We solve sub-problem (15) by using MOSEK in MATLAB toolbox [28].
B. NFV RA
The NFV RA sub-problem is as follows:
min T,X,η,β n∈N η n (16a) S.T: (13c) − (13e), (12g), (12j) − (12o).(16b)
To solve problem (16), we propose a new heuristic algorithm in which the functions are mapped and scheduled on the servers that have the minimum processing latency. Moreover, our proposed algorithm is based on minimizing the number of active servers. To this end, we sort the servers in ascending order by the total processing latency metric. After that, the server with the best rank, i.e., the highest available capacity in the sorted list, is turned on. Then, we activate another server only if the previously activated servers cannot satisfy the resource demands of the NFs or the QoS of the users is degraded. In the algorithm, we also sort the users in ascending order according to their latency requirements and then map and schedule each of their NFs on the servers. The details of the proposed NFV RA are stated in Algorithm 2, whose outputs are T, X, η, and β.
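The following is a hedged, simplified sketch of the server-activation idea described above (sort users by deadline, reuse already-active servers whenever the accumulated processing time still meets the deadline, and activate a new server otherwise); the toy capacity/latency model does not reproduce the full mapping, scheduling, and queuing details of Algorithm 2.
# Simplified sketch of the proposed server-activation heuristic (illustrative only).
def nfv_heuristic(users, servers):
    # users: dicts with 'deadline' and per-NF 'demands'; servers: dicts with 'capacity'.
    users = sorted(users, key=lambda u: u["deadline"])          # most stringent deadline first
    servers = sorted(servers, key=lambda s: -s["capacity"])     # best-ranked (largest capacity) first
    active, finish, mapping = [], {}, {}
    for u_id, u in enumerate(users):
        placed = False
        for s_id in active + [a for a in range(len(servers)) if a not in active]:
            t_new = finish.get(s_id, 0.0) + sum(d / servers[s_id]["capacity"] for d in u["demands"])
            if t_new <= u["deadline"]:                          # deadline met on this server
                if s_id not in active:
                    active.append(s_id)                         # activate a new server only if needed
                finish[s_id], mapping[u_id] = t_new, s_id
                placed = True
                break
        if not placed:
            mapping[u_id] = None                                # candidate for admission control
    return mapping, active

# Toy usage mirroring the worked example later in the text.
users = [{"deadline": 0.3, "demands": [200, 400]}, {"deadline": 0.7, "demands": [200, 400]}]
servers = [{"capacity": c} for c in (1000, 2000, 1500, 3000, 1800)]
print(nfv_heuristic(users, servers))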
C. Admission Control
Our proposed AC is based on the value of the elastic variable of problem (13): if A is non-zero, the original problem (12) is infeasible. This means that one or more of the elasticated constraints, i.e., (13b)-(13e), are not satisfied. To satisfy these constraints, we can either increase the network resources (e.g., the servers' capacities) or reject some of the users' service requests. Since the first option is not practical in most cases, we propose to reject some requested services by adopting the proposed AC. A major question behind AC is which of the requested services should be rejected. The requested services have diverse characteristics and different effects on the utilization of the network resources, and consequently on the infeasibility of problem (12). To find the user that has the greatest effect on the infeasibility and reject its service, we proceed as follows:
u^{*} = \arg\max_{u\in\mathcal{U}} \Big\{ \kappa_{1}\big(R_{s}^{\min} - R_{u}\big) + \kappa_{2}\sum_{n\in\mathcal{N}}\Big[\sum_{s\in\mathcal{S}}\Big(\sum_{\forall f_{m}^{s}\in\Omega_{s}} \psi_{f_{m}^{s}}\,\beta_{u,n}^{f_{m}^{s}} + y_{u}\sum_{\forall f_{m}^{s}\in\Omega_{s}} \beta_{u,n}^{f_{m}^{s}}\Big) - \Upsilon_{n}\Big] + \kappa_{3}\sum_{n\in\mathcal{N}}\Big[\sum_{s\in\mathcal{S}}\sum_{f_{m}^{s}\in\Omega_{s}} y_{u}\,\alpha_{f_{m}^{s}}\,\beta_{u,n}^{f_{m}^{s}} - L_{n}\Big]\Big\}, \quad (17)
where κ_1, κ_2, and κ_3 are fitting parameters that balance the terms in (17), and we emphasize that in (17) we use the values of the optimization variables of (13) obtained by Algorithm 6. Based on this, we reject user u^*, and then solve problem (13) with U = U − {u^*}. We repeat this procedure until A = 0 in the solution of problem (13).
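A compact, illustrative version of this admission-control loop is sketched below; solve_13 and violation_score stand in for solving the elasticized problem (13) and evaluating the bracketed score in (17), and are assumptions rather than the paper's implementation.
# Illustrative admission-control loop driven by the elastic variable A.
def admission_control(users, solve_13, violation_score, tol=1e-6):
    # solve_13(users) -> (solution, A); violation_score(u, solution) -> score from (17).
    admitted = list(users)
    while admitted:
        solution, A = solve_13(admitted)            # solve the elasticized problem (13)
        if A <= tol:                                # A = 0: problem (12) is feasible
            return admitted, solution
        worst = max(admitted, key=lambda u: violation_score(u, solution))  # u* in (17)
        admitted.remove(worst)                      # reject the most demanding service and re-solve
    return [], None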
IV. CONVERGENCE AND COMPUTATIONAL COMPLEXITY
A. Convergence of the Solution Algorithm
Based on ASM, the objective function does not increase from one iteration to the next and finally converges. Fig. 4 shows an example of the convergence of our proposed iterative algorithm; clearly, the proposed solution converges after a few iterations. We have the following relations between iterations (z is the iteration number):
O\big(\mathbf{P}[z],\boldsymbol{\rho}[z],\boldsymbol{\eta}[z],A[z]\big) = \min_{\mathbf{P},A} O\big(\mathbf{P},\boldsymbol{\rho}[z],\boldsymbol{\eta}[z],A\big) \le O\big(\mathbf{P}[z-1],\boldsymbol{\rho}[z],\boldsymbol{\eta}[z],A[z-1]\big) = \min_{\boldsymbol{\rho}} O\big(\mathbf{P}[z-1],\boldsymbol{\rho},\boldsymbol{\eta}[z],A[z-1]\big) \le O\big(\mathbf{P}[z-1],\boldsymbol{\rho}[z-1],\boldsymbol{\eta}[z],A[z-1]\big) = \min_{\boldsymbol{\eta}} O\big(\mathbf{P}[z-1],\boldsymbol{\rho}[z-1],\boldsymbol{\eta},A[z-1]\big) \le O\big(\mathbf{P}[z-1],\boldsymbol{\rho}[z-1],\boldsymbol{\eta}[z-1],A[z-1]\big).
This means that the objective function of ASM does not increase as the iteration number grows. In addition, while guaranteeing the QoS and resource demand constraints, i.e., (13b)-(13e), the ASM algorithm converges to a sub-optimal solution of problem (13). Based on equation (7) and the server selection policy of Algorithm 2, R_u is directly proportional to η. Hence, if the value of R_u is fixed or reduced at each iteration z, i.e., if R_u^(z) ≤ R_u^(z−1), then we have η^(z) ≤ η^(z−1). As a result, the proposed algorithm is monotonic.
(Table II) Method and complexity: Heuristic: O(U^2 × F × N); Greedy-based solution: O(U^2 × F × N); Power Allocation (CVX): log(C_1/ξ)/log(ς); Subcarrier Allocation (CVX-MOSEK): log(C_2/t^0)/log(ς).
B. Computational Complexity
By utilizing ASM, the overall complexity of the algorithm is a linear combination of the complexity of each sub-problem.
1) Radio RA: For the radio RA sub-problem, we utilize geometric programming (GP) and the IPM via the CVX toolbox in MATLAB [27]. Based on this method, the computational complexity order of the power allocation sub-problem is given by log(C_1/ξ)/log(ς), where C_1 = U + N + N × U + U + 1 is the total number of constraints of sub-problem (14), ξ is the initial point for approximating the accuracy of the IPM, ϵ (0 < ϵ ≪ 1) is the stopping criterion of the IPM, and ς is the accuracy of the IPM [27]. Similarly, the complexity of sub-problem (15) is given by log(C_2/ξ)/log(ς), where C_2 = U + K + 1 is the total number of constraints of (15). The resulting complexities are summarized in Table II.
V. EXPERIMENTAL EVALUATION
A. Simulation Environment
In this section, the simulation results are presented to evaluate the performance of the proposed system model. We consider U = 50 users that are randomly distributed in the coverage of a
B. Simulation Results
We investigate the proposed algorithm under different network settings and parameters via the simulation results shown in Figures 5(a)-7(b). These results are obtained over 500 Monte-Carlo iterations. We discuss these results in the following.
1) Acceptance Ratio:
The acceptance ratio is defined as the ratio of the number of services accepted by the network to the total number of services requested by users and is obtained by κ = 1 − Û/U, where Û is the number of users whose services are rejected based on the proposed AC. It is a criterion for investigating the efficiency of the proposed algorithm in utilizing the total network resources to guarantee the requested QoS and accept the service demands.
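As a small worked illustration of this definition (with hypothetical counts, not values from the paper):
# Acceptance ratio kappa = 1 - U_hat / U (illustrative).
def acceptance_ratio(num_requested, num_rejected):
    return 1.0 - num_rejected / num_requested

print(acceptance_ratio(50, 4))   # e.g., 4 rejected out of U = 50 users -> kappa = 0.92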
As can be seen from Figures 5(a), 5(b), and 5(c), the value of the acceptance ratio depends on two main factors: i) the capacity of the network resources; and ii) the number of users (service demands) and the service QoS characteristics (latency and data rate). Therefore, serving high-data-rate and low-latency services is challenging. Clearly, increasing the number of users (service requests) decreases the acceptance ratio, especially for low-latency services, which have the main contribution to the acceptance ratio; increasing the number of low-latency services reduces the acceptance ratio. For a large number of users, the network guarantees the service requirements of only some users and rejects the others. In these cases, based on the equation for κ, the value of Û increases and U − Û approximately reaches a fixed value. Therefore, the latency and buffering requirements of the admitted services are satisfied and their acceptance is preserved. From this figure, we conclude that the impact of the number of active servers on high-data-rate and low-latency services, e.g., process automation [29], is greater than on other services. Furthermore, comparing Fig. 5(c) and Fig. 5(a), we observe that, for low-latency services, the effect of the server processing capacity is more considerable than that of the number of active servers. That is, low-latency services are rejected by the network because their requirements need more resources in the network to reduce the waiting and processing times.
2) Network Cost: Fig. 6(a) illustrates the network cost versus the number of users for R_s^min = 10 bps/Hz and a service deadline of 2 seconds. The network cost comprises both the radio and NFV resource costs, in terms of power and spectrum consumption and of utilizing servers in the network. It can be observed that increasing the number of users increases the network cost, due to the increase in both the radio and NFV costs; the NFV cost grows more rapidly than the radio cost. We also report the utilization ratio, defined in terms of r_U, the amount of resources utilized by the users, and r_T, the total server resources. From this figure, we infer that not only the packet size has a direct effect on the utilization ratio, but the service deadline also has a major impact on it. This is due to the fact that a large packet size needs more storage and processing capacity, and a low service deadline requires minimum waiting and processing times. Therefore, we should activate more servers and exploit their resources for low-latency services. Obviously, increasing the number of users increases the utilization ratio approximately linearly. From the cost perspective, we can conclude that by increasing the utilization of network resources, the network cost also increases, especially in terms of power consumption.
3) Service Deadline: Figures 6(b) and 7(b) show the total cost of the network versus different values of the service deadline for various scenarios. Clearly, the requested service deadline has a major effect on the utilization of the processing and buffering resources in the servers. From Fig. 7, reducing the service deadline requires more active servers to process the NFs of the corresponding services. That means that for providing low-latency services, we should pay more cost in terms of radio and NFV resources. By increasing the number of servers, the waiting time of each queued NF in an NS is minimized, and hence the server availability and the probability of guaranteeing the users' QoS are increased. For higher latency values, in some cases one (or two) active server(s) is sufficient. By comparing Fig. 6(b) and Fig. 6(a), we observe that reducing the latency increases the network cost significantly more than increasing the number of users (the number of requested services) does.
C. Benchmark Algorithms 1) Performance Comparison: To the best of our knowledge, this is the first work (refer to the related works) tackling the E2E RA while proposing a new AC and a new closed-form formulation of NFV scheduling that comprises both the waiting time and the SFC ordering. Moreover, we propose a new heuristic algorithm to solve the formulated optimization problem. It is worth noting that in the related works, a greedy-based search is exploited [1], [8], [30]. We compare our proposed algorithm with a modified version of the greedy-based algorithm proposed in [1]. In a greedy-based search, different objectives can be considered, for example, minimizing the total flow time [1]. The greedy-based scheduling and embedding of the arriving service requests are performed sequentially based on the greedy criterion. In the modified greedy algorithm used to solve sub-problem (16), we first search for servers that are appropriate for embedding and then find the best server by the greedy criterion, i.e., the shortest server queue [1]. The steps of the greedy-based algorithm with the minimum queue time criterion are stated in Algorithm 3.
In some cases, the greedy algorithm adds servers that are free (released) and have more capacity, while it would be possible to satisfy the latency of the other functions without utilizing these servers. In contrast, the proposed algorithm activates a server only when the previously activated servers cannot satisfy the constraints of the problem and the users' QoS. More importantly, in the greedy algorithm the number of active servers stays fixed as the service deadlines increase, which is a consequence of its server selection policy based on the queuing time, while in the proposed algorithm it is reduced.
To clarify this further, assume that we have five servers in the network with capacities [1000 2000 1500 3000 1800] and two service requests, each with 2 functions with capacity requirements 20 and 40, R_s^min = 10 bps/Hz, and service deadlines 0.3 and 0.7, respectively. Based on Algorithm 2, the service finishing time of user 1 is (20×10)/3000 + (40×10)/3000 = 0.2 < 0.3 and the service finishing time of user 2 is 0.2 + (20×10)/3000 + (40×10)/3000 = 0.4 < 0.7. That means one active server is sufficient for all users. In contrast, based on the greedy algorithm, the finishing time of user 1 is (20×10)/3000 + (40×10)/3000 = 0.2 < 0.3 and that of user 2 is (20×10)/2000 + (40×10)/2000 = 0.3 < 0.7; since 0.3 < 0.4, the greedy algorithm selects the server with capacity 2000 instead of the server with capacity 3000. As a result, the greedy algorithm utilizes two servers, while the proposed algorithm utilizes only one server in both cases. Clearly, the greedy algorithm utilizes servers inefficiently, and hence the acceptance ratio is decreased, especially for a large number of users (see Fig. 8(a)).
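The arithmetic of this example can be checked with a few lines of code (illustrative only; the per-function processing time is taken as demand × R_s^min / capacity, as in the example above):
# Reproduces the worked example: proposed vs. greedy server selection.
demands, rate = [20, 40], 10                      # two NFs per service, R_s^min = 10

def finish(capacity, start=0.0):
    return start + sum(d * rate / capacity for d in demands)

t1 = finish(3000)                                 # proposed: user 1 on the 3000-capacity server
t2_proposed = finish(3000, start=t1)              # proposed: user 2 queued on the same server
t2_greedy = finish(2000)                          # greedy: user 2 on the shortest-queue server (2000)
print(t1, t2_proposed, t2_greedy)                 # 0.2, 0.4, 0.3 -> one active server vs. two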
2) Optimality Gap: Another metric for investigating the performance of the proposed algorithm is the optimality gap. In this regard, we adopt the exhaustive search method [31]. Since the complexity of the exhaustive search method is very high and grows exponentially with the size of the system parameters, we exploit it only for a small-scale network. The considered parameters and the values obtained by the corresponding solution methods are stated in Table V. The other parameters are based on Tables IV and III.
((b) Number of active servers versus the service deadline, comparing the proposed and greedy-based algorithms for R_u^min ∈ {5, 10} and U ∈ {15, 30}; some of the curves coincide.)
VI. CONCLUSION
In this paper, we proposed a novel joint radio and NFV RA scheme for heterogeneous services. Our aim was to minimize the utilization of the radio resources and servers. To this end, we proposed a novel scheduling and energy-efficient scheme that minimizes the number of activated servers based on a new heuristic algorithm. More importantly, our scheduling scheme includes queuing effects such as the queuing waiting time. To solve the proposed problem, we first divided it into three sub-problems and then efficiently solved each of them. To solve the NFV RA, we proposed a novel low-complexity heuristic algorithm that is based on minimizing the number of active servers in the network. With this scheme, we significantly reduced the resource costs such as processing, buffering, and power consumption. Moreover, we proposed a novel AC scheme that determines which of the requested services have critical requirements and need more radio and NFV resources to ensure their QoS requirements, and then rejects those services.
We evaluated the performance of the proposed scheme with different network parameters and variables such as the service demands, the service QoS, and the network resource capacities. Our simulations were carried out for different values of the network parameters and requested services, using various metrics such as the service acceptance ratio, the number of active servers, and the predefined network cost. Moreover, to verify the performance of the proposed algorithm, we compared it with the conventional one from the performance perspective. Our simulation results demonstrate that the proposed algorithm outperforms the conventional one by approximately 8%. | 6,035 |
1907.06292 | 2957586579 | With social media becoming increasingly popular, on which lots of news and real-time events are reported, developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge. While previous datasets have concentrated on question answering (QA) for formal text like news and Wikipedia, we present the first large-scale dataset for QA over social media data. To ensure that the tweets we collected are useful, we only gather tweets used by journalists to write news articles. We then ask human annotators to write questions and answers upon these tweets. Unlike other QA datasets like SQuAD in which the answers are extractive, we allow the answers to be abstractive. We show that two recently proposed neural models that perform well on formal texts are limited in their performance when applied to our dataset. In addition, even the fine-tuned BERT model is still lagging behind human performance with a large margin. Our results thus point to the need of improved QA systems targeting social media text. | Traditional core NLP research typically focuses on English newswire datasets such as the Penn Treebank @cite_12 . In recent years, with the increasing usage of social media platforms, several NLP techniques and datasets for processing social media text have been proposed. For example, build a Twitter part-of-speech tagger based on 1,827 manually annotated tweets. annotated 800 tweets, and performed an empirical study for part-of-speech tagging and chunking on a new Twitter dataset. They also investigated the task of Twitter Named Entity Recognition, utilizing a dataset of 2,400 annotated tweets. annotated 929 tweets, and built the first dependency parser for tweets, whereas built the Chinese counterpart based on 1,000 annotated Weibo posts. To the best of our knowledge, question answering and reading comprehension over short and noisy social media data are rarely studied in NLP, and our annotated dataset is also an order of magnitude larger than the above public social-media datasets. | {
"abstract": [
"Abstract : As a result of this grant, the researchers have now published oil CDROM a corpus of over 4 million words of running text annotated with part-of- speech (POS) tags, with over 3 million words of that material assigned skelet al grammatical structure. This material now includes a fully hand-parsed version of the classic Brown corpus. About one half of the papers at the ACL Workshop on Using Large Text Corpora this past summer were based on the materials generated by this grant."
],
"cite_N": [
"@cite_12"
],
"mid": [
"1632114991"
]
} | TWEETQA: A Social Media Focused Question Answering Dataset | Social media is now becoming an important real-time information source, especially during natural disasters and emergencies. It is now very common for traditional news media to frequently probe users and resort to social media platforms to obtain real-time developments of events. According to a recent survey by Pew Research Center (http://www.journalism.org/2017/09/07/news-useacross-social-media-platforms-2017/), in 2017, more than two-thirds of Americans read some of their news on social media. Even for American people who are 50 or older, 55% of them report getting news from social media, which is 10 percentage points higher than the number in 2016. Among all major social media sites, Twitter is most frequently used as a news source, with 74% of its users obtaining their news from Twitter. All these statistical facts suggest that understanding user-generated noisy social media text from Twitter is a significant task. The Dataset can be found at https://tweetqa.github.io/.
Table 1: An example showing challenges of TWEETQA. Passage: Oh man just read about Paul Walkers death. So young. Ugggh makes me sick especially when it's caused by an accident. God bless his soul. -Jay Sean (@jaysean) December 1, 2013. Q: why is sean torn over the actor's death? A: walker was young. Note the highly informal nature of the text and the presence of social media specific text like usernames which need to be comprehended to accurately answer the question.
In recent years, while several tools for core natural language understanding tasks involving syntactic and semantic analysis have been developed for noisy social media text (Gimpel et al., 2011;Ritter et al., 2011;Wang et al., 2014), there is little work on question answering or reading comprehension over social media, with the primary bottleneck being the lack of available datasets. We observe that recently proposed QA datasets usually focus on formal domains, e.g. CNN/DAILYMAIL (Hermann et al., 2015) and NewsQA (Trischler et al., 2016) on news articles; SQuAD (Rajpurkar et al., 2016) and WIKI-MOVIES (Miller et al., 2016) that use Wikipedia.
In this paper, we propose the first large-scale dataset for QA over social media data. Rather than naively obtaining tweets from Twitter using the Twitter API 3 which can yield irrelevant tweets with no valuable information, we restrict ourselves only to tweets which have been used by journalists in news articles thus implicitly implying that such tweets contain useful and relevant information. To obtain such relevant tweets, we crawled thousands of news articles that include tweet quotations and then employed crowd-sourcing to elicit questions and answers based on these event-aligned tweets. Table 1 gives an example from our TWEETQA dataset. It shows that QA over tweets raises challenges not only because of the informal nature of oral-style texts (e.g. inferring the answer from multiple short sentences, like the phrase "so young" that forms an independent sentence in the example), but also from tweet-specific expressions (such as inferring that it is "Jay Sean" feeling sad about Paul's death because he posted the tweet).
Furthermore, we show the distinctive nature of TWEETQA by comparing the collected data with traditional QA datasets collected primarily from formal domains. In particular, we demonstrate empirically that three strong neural models which achieve good performance on formal data do not generalize well to social media data, bringing out challenges to developing QA systems that work well on social media domains.
In summary, our contributions are:
• We present the first question answering dataset, TWEETQA, that focuses on social media context;
• We conduct extensive analysis of questions and answer tuples derived from social media text and distinguish it from standard question answering datasets constructed from formaltext domains;
• Finally, we show the challenges of question answering on social media text by quantifying the performance gap between human readers and recently proposed neural models, and also provide insights on the difficulties by analyzing the decomposed performance over different question types.
TweetQA
In this section, we first describe the three-step data collection process of TWEETQA: tweet crawling, question-answer writing and answer validation. Next, we define the specific task of TWEETQA and discuss several evaluation metrics. To better understand the characteristics of the TWEETQA task, we also include our analysis on the answer and question characteristics using a subset of QA pairs from the development set.
Data Collection
Tweet Crawling One major challenge of building a QA dataset on tweets is the sparsity of informative tweets. Many users write tweets to express their feelings or emotions about their personal lives. These tweets are generally uninformative and also very difficult to ask questions about. Given the linguistic variance of tweets, it is generally hard to directly distinguish those tweets from informative ones. In light of this, rather than starting from Twitter API Search, we look into the archived snapshots of two major news websites (CNN, NBC), and then extract the tweet blocks that are embedded in the news articles. In order to get enough data, we first extract the URLs of all section pages (e.g. World, Politics, Money, Tech) from the snapshot of each home page and then crawl all articles with tweets from these section pages. Note that another possible way to collect informative tweets is to download the tweets that are posted by the official Twitter accounts of news media. However, these tweets are often just the summaries of news articles, which are written in formal text. As our focus is to develop a dataset for QA on informal social media text, we do not consider this approach. After we extracted tweets from archived news articles, we observed that there is still a portion of tweets that have very simple semantic structures and thus are very difficult to raise meaningful questions about. (Figure 1: An example we use to guide the crowdworkers when eliciting question-answer pairs. We elicit questions that are neither too specific nor too general and do not require background knowledge.) An example of such tweets can be like:
"Wanted to share this today -@IAmSteveHarvey". This tweet is actually talking about an image attached to this tweet. Some other tweets with simple text structures may talk about an inserted link or even videos. To filter out these tweets that heavily rely on attached media to convey information, we utilize a state-of-the-art semantic role labeling model trained on CoNLL-2005(He et al., 2017 to analyze the predicate-argument structure of the tweets collected from news articles and keep only the tweets with more than two labeled arguments. This filtering process also automatically filters out most of the short tweets. For the tweets collected from CNN, 22.8% of them were filtered via semantic role labeling. For tweets from NBC, 24.1% of the tweets were filtered.
Question-Answer Writing
We then use Amazon Mechanical Turk to collect question-answer pairs for the filtered tweets. For each Human Intelligence Task (HIT), we ask the worker to read three tweets and write two question-answer pairs for each tweet. To ensure the quality, we require the workers to be located in major English speaking countries (i.e. Canada, US, and UK) and have an acceptance rate larger than 95%. Since we use tweets as context, lots of important information are contained in hashtags or even emojis. Instead of only showing the text to the workers, we use javascript to directly embed the whole tweet into each HIT. This gives workers the same experience as reading tweets via web browsers and help them to better compose questions.
To avoid trivial questions that can be simply answered by superficial text matching methods, or overly challenging questions that require background knowledge, we explicitly state the following items in the HIT instructions for question writing:
• No Yes-no questions should be asked.
• The question should have at least five words.
• Videos, images or inserted links should not be considered.
• No background knowledge should be required to answer the question.
To help the workers better follow the instructions, we also include a representative example showing both good and bad questions or answers in our instructions. Figure 1 shows the example we use to guide the workers. As for the answers, since the context we consider is relatively shorter than the context of previous datasets, we do not restrict the answers to be in the tweet, otherwise, the task may potentially be simplified as a classification problem. The workers are allowed to write their answers in their own words. We just require the answers to be brief and can be directly inferred from the tweets.
After we retrieve the QA pairs from all HITs, we conduct further post-filtering to filter out the pairs from workers that obviously do not follow instructions. We remove QA pairs with yes/no answers. Questions with less than five words are also filtered out. This process filtered 13% of the QA pairs. The dataset now includes 10,898 articles, 17,794 tweets, and 13,757 crowdsourced question-answer pairs. The collected QA pairs will be directly available to the public, and we will provide a script to download the original tweets and detailed documentation on how we build our dataset. Also note that since we keep the original news article and news titles for each tweet, our dataset can also be used to explore more challenging generation tasks. Table 2 shows the statistics of our current collection, and the frequency of different types of questions is shown in Table 3. All QA pairs were written by 492 individual workers. Answer Validation For the purposes of human performance evaluation and inter-annotator agreement checking, we launch a different set of HITs to ask workers to answer questions in the test and development set. The workers are shown with the tweet blocks as well as the questions collected in the previous step. At this step, workers are allowed to label the questions as "NA" if they think the questions are not answerable. We find that 3.1% of the questions are labeled as unanswerable by the workers (for SQuAD, the ratio is 2.6%). Since the answers collected at this step and previous step are written by different workers, the answers can be written in different text forms even they are semantically equal to each other. For example, one answer can be "Hillary Clinton" while the other is "@HillaryClinton". As it is not straightforward to automatically calculate the overall agreement, we manually check the agreement on a subset of 200 random samples from the development set and ask an independent human moderator to verify the result. It turns out that 90% of the answers pairs are semantically equivalent, 2% of them are partially equivalent (one of them is incomplete) and 8% are totally inconsistent. The answers collected at this step are also used to measure the human performance. We have 59 individual workers participated in this process.
Task and Evaluation
As described in the question-answer writing process, the answers in our dataset are different from those in some existing extractive datasets. Thus we consider the task of answer generation for TWEETQA and we use several standard metrics for natural language generation to evaluate QA systems on our dataset, namely we consider BLEU-1 5 (Papineni et al., 2002), Meteor (Denkowski and Lavie, 2011) and Rouge-L (Lin, 2004) in this paper.
To evaluate machine systems, we compute the scores using both the original answer and validation answer as references. For human performance, we use the validation answers as generated ones and the original answers as references to calculate the scores.
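As an illustration of this multi-reference scoring setup, the snippet below computes BLEU-1 for a generated answer against both reference answers using NLTK; it is only a sketch of the evaluation protocol, and the official evaluation may differ in tokenization and in how METEOR and ROUGE-L are computed.
# Sketch of multi-reference BLEU-1 scoring for TWEETQA-style answers (illustrative).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu1(prediction, references):
    # BLEU-1 of a predicted answer against a list of reference answers (whitespace tokenization).
    hyp = prediction.lower().split()
    refs = [r.lower().split() for r in references]
    return sentence_bleu(refs, hyp, weights=(1.0, 0, 0, 0),
                         smoothing_function=SmoothingFunction().method1)

# Both the original and the validation answer serve as references.
print(bleu1("he was young", ["walker was young", "the actor was so young"]))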
Analysis
In this section, we analyze our dataset and outline the key properties that distinguish it from standard QA datasets like SQuAD (Rajpurkar et al., 2016). First, our dataset is derived from social media text which can be quite informal and user-centric as opposed to SQuAD which is derived from Wikipedia and hence more formal in nature. We observe that the shared vocabulary between SQuAD and TWEETQA is only 10.79%, suggesting a significant difference in their lexical content. Figure 2 shows the 1000 most distinctive words in each domain as extracted from SQuAD and TWEETQA. Note the stark differences in the words seen in the TWEETQA dataset, which include a large number of user accounts with a heavy tail. Examples include @realdonaldtrump, @jdsutter, @justinkirkland and #cnnworldcup, #goldenglobes. In contrast, the SQuAD dataset rarely has usernames or hashtags that are used to signify events or refer to the authors. It is also worth noting that the data collected from social media can not only capture events and developments in real-time but also capture individual opinions and thus requires reasoning related to the authorship of the content as is illustrated in Table 1. In addition, while SQuAD requires all answers to be spans from the given passage, we do not enforce any such restriction and answers can be free-form text. In fact, we observed that 43% of our QA pairs consists of answers which do not have an exact substring matching with their corresponding passages. All of the above distinguishing factors have implications to existing models 5 The answer phrases in our dataset are relatively short so we do not consider other BLEU scores in our experiments which we analyze in upcoming sections.
We conduct analysis on a subset of TWEETQA to get a better understanding of the kind of reasoning skills that are required to answer these questions. We sample 150 questions from the development set, then manually label their reasoning categories. Table 4 shows the analysis results. We use some of the categories in SQuAD (Rajpurkar et al., 2016) and also propose some tweet-specific reasoning types.
Our first observation is that almost half of the questions only require the ability to identify paraphrases. Although most of the "paraphrasing only" questions are considered as fairly easy questions, we find that a significant amount (about 3/4) of these questions are asked about event-related topics, such as information about "who did what to whom, when and where". This is actually consistent with our motivation to create TWEETQA, as we expect this dataset could be used to develop systems that automatically collect information about real-time events.
Apart from these questions, there are also a group of questions that require understanding common sense, deep semantics (i.e. the answers cannot be derived from the literal meanings of the tweets), and relations of sentences 6 (including coreference resolution), which are also appeared in other RC datasets (Rajpurkar et al., 2016). On the other hand, the TWEETQA also has its unique properties. Specifically, a significant amount of questions require certain reasoning skills that are specific to social media data:
• Understanding authorship: Since tweets are highly personal, it is critical to understand how questions/tweets relate to the authors.
• Oral English & Tweet English: Tweets are often oral and informal. QA over tweets requires the understanding of common oral English. Our TWEETQA also requires understanding some tweet-specific English, like conversation-style English.
• Understanding of user IDs & hashtags:
Tweets often contains user IDs and hashtags, which are single special tokens. Understanding these special tokens is important to answer person-or event-related questions. Table 4: Types of reasoning abilities required by TWEETQA. Underline indicates tweet-specific reasoning types, which are common in TWEETQA but are rarely observed in previous QA datasets. Note that the first type represents questions that only require the ability of paraphrasing, while the rest of the types require some other more salient abilities besides paraphrasing. Overlaps could exist between different reasoning types in the table. For example, the second example requires both the understanding of sentences relations and tweet language habits to answer the question; and the third example requires both the understanding of sentences relations and authorship.
Experiments
To show the challenge of TweetQA for existing approaches, we consider four representative methods as baselines. For data processing, we first remove the URLs in the tweets and then tokenize the QA pairs and tweets using NLTK. 7 This process is consistent for all baselines.
Query Matching Baseline
We first consider a simple query matching baseline similar to the IR baseline in Kociský et al. (2017). But instead of only considering several genres of spans as potential answers, we try to match the question with all possible spans in the tweet context and choose the span with the highest BLEU-1 score as the final answer, which follows the method and implementation 8 of answer span selection for open-domain QA . We include this baseline to show that TWEETQA is a nontrivial task which cannot be easily solved with superficial text matching.
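A minimal version of this span-matching idea is sketched below: enumerate all spans of the tweet up to a maximum length and return the one with the highest BLEU-1 overlap with the question; the actual baseline's implementation details (tokenization, span lengths) may differ.
# Minimal query-matching baseline: pick the tweet span most similar to the question (illustrative).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu1(candidate, reference):
    smooth = SmoothingFunction().method1
    return sentence_bleu([reference.lower().split()], candidate.lower().split(),
                         weights=(1.0, 0, 0, 0), smoothing_function=smooth)

def query_matching_answer(question, tweet, max_len=6):
    tokens = tweet.split()
    spans = [" ".join(tokens[i:i + n]) for n in range(1, max_len + 1)
             for i in range(len(tokens) - n + 1)]
    return max(spans, key=lambda s: bleu1(s, question))

tweet = "Oh man just read about Paul Walkers death. So young."
print(query_matching_answer("why is sean torn over the actor's death?", tweet))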
Neural Baselines
We then explore three typical neural models that perform well on existing formal-text datasets. One takes a generative perspective and learns to decode the answer conditioned on the question and context, while the others learn to extract a text span from the context that best answers the question.
Generative QA RNN-based encoder-decoder models have been widely used for natural language generation tasks. Here we consider a recently proposed generative model (Song et al., 2017) that first encodes the context and question into a multi-perspective memory via four different neural matching layers, then decodes the answer using an attention-based model equipped with both copy and coverage mechanisms. The model is trained on our dataset for 15 epochs and we choose the model parameters that achieve the best BLEU-1 score on the development set.
BiDAF Unlike the aforementioned generative model, the Bi-Directional Attention Flow (BiDAF) (Seo et al., 2016) network learns to directly predict the answer span in the context. BiDAF first utilizes multi-level embedding layers to encode both the question and context, then uses bi-directional attention flow to get a query-aware context representation, which is further modeled by an RNN layer to make the span predictions. Since our TWEETQA does not have labeled answer spans as in SQuAD, we need to use the human-written answers to retrieve the answerspan labels for training. To get the approximate answer spans, we consider the same matching approach as in the query matching baseline. But instead of using questions to do matching, we use the human-written answers to get the spans that achieve the best BLEU-1 scores.
Fine-Tuning BERT This is another extractive RC model that benefits from the recent advance in pretrained general language encoders (Peters et al., 2018;Devlin et al., 2018). In our work, we select the BERT model (Devlin et al., 2018) which has achieved the best performance on SQuAD.
In our experiments, we use the PyTorch reimplementation of the uncased base model. The batch size is set as 12 and we fine-tune the model for 2 epochs with learning rate 3e-5. (Table 5: Overall performance of baseline models. EXTRACT-UB refers to our estimation of the upper bound of extractive methods.)
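For illustration, a single training step of extractive QA fine-tuning with an uncased BERT base model is sketched below. This uses the Hugging Face transformers API rather than the reimplementation cited in the paper (an assumption), and the answer-span token positions are hypothetical.
# Hedged sketch of extractive QA fine-tuning with bert-base-uncased (one illustrative step).
import torch
from transformers import BertTokenizerFast, BertForQuestionAnswering

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)   # learning rate from the text

question, tweet = "why is sean torn over the actor's death?", "Paul Walker died so young."
inputs = tokenizer(question, tweet, return_tensors="pt")
start, end = torch.tensor([12]), torch.tensor([14])          # hypothetical answer-span token positions
outputs = model(**inputs, start_positions=start, end_positions=end)
outputs.loss.backward()                                      # the actual setup uses batch size 12 and 2 epochs
optimizer.step()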
Evaluation
Overall Performance
We test the performance of all baseline systems using the three generative metrics mentioned in Section 3.2. As shown in Table 5, there is a large performance gap between human performance and all baseline methods, including BERT, which has achieved superhuman performance on SQuAD. This confirms that TWEETQA is more challenging than formal-text RC tasks. We also show the upper bound of the extractive models (denoted as EXTRACT-UPPER). In the upper bound method, the answers are defined as n-grams from the tweets that maximize the BLEU-1/METEOR/ROUGE-L compared to the annotated groundtruth. From the results, we can see that the BERT model still lags behind the upper bound significantly, showing great potential for future research. It is also interesting to see that the HUMAN performance is slightly worse compared to the upper bound. This indicates (1) the difficulty of our problem also exists for human beings and (2) for the answer verification process, the workers tend to also extract texts from tweets as answers.
According to the comparison between the two non-pretraining baselines, our generative baseline yields better results than BiDAF. We believe this is largely due to the abstractive nature of our dataset, since the workers can sometimes write the answers using their own words. (Table 6: BiDAF's and the Generative model's performance on questions that require different types of reasoning. and † denote the three most difficult reasoning types for the Generative and the BERT models.)
Performance Analysis over Human-Labeled Question Types
To better understand the difficulty of the TWEETQA task for current neural models, we analyze the decomposed model performance on the different kinds of questions that require different types of reasoning (we tested on the subset which has been used for the analysis in Table 4). Table 6 shows the results of the best-performing non-pretraining and pretraining approaches, i.e., the generative QA baseline and the fine-tuned BERT. Our full comparison including the BiDAF performance and evaluation on more metrics can be found in Appendix A. Following previous RC research, we also include analysis on automatically-labeled question types in Appendix B.
As indicated by the results on METEOR and ROUGE-L (also indicated by a third metric, BLEU-1, as shown in Appendix A), both baselines perform worse on questions that require the understanding deep semantics and userID&hashtags. The former kind of questions also appear in other benchmarks and is known to be challenging for many current models. The second kind of questions is tweet-specific and is related to specific properties of social media data. Since both models are designed for formal-text passages and there is no special treatment for understanding user IDs and hashtags, the performance is severely limited on the questions requiring such reasoning abilities. We believe that good segmentation, disambiguation and linking tools developed by the social media community for processing the userIDs and hashtags will significantly help these question types.
On non-pretraining model Besides the easy questions requiring mainly paraphrasing skill, we also find that the questions requiring the understanding of authorship and oral/tweet English habits are not very difficult. We think this is due to the reason that, except for these tweet-specific tokens, the rest parts of the questions are rather simple, which may require only simple reasoning skill (e.g. paraphrasing).
On pretraining model Although BERT was demonstrated to be a powerful tool for reading comprehension, this is the first time a detailed analysis has been done on its reasoning skills. From the results, the huge improvement of BERT mainly comes from two types. The first is paraphrasing, which is not surprising because that a well pretrained language model is expected to be able to better encode sentences. Thus the derived embedding space could work better for sentence comparison. The second type is commonsense, which is consistent with the good performance of BERT (Devlin et al., 2018) on SWAG (Zellers et al., 2018). We believe that this provides further evidence about the connection between largescaled deep neural language model and certain kinds of commonsense.
Conclusion
We present the first dataset for QA on social media data by leveraging news media and crowdsourcing. The proposed dataset informs us of the distinctiveness of social media from formal domains in the context of QA. Specifically, we find that QA on social media requires systems to comprehend social media specific linguistic patterns like informality, hashtags, usernames, and authorship. These distinguishing linguistic factors bring up important problems for the research of QA that currently focuses on formal text. We see our dataset as a first step towards enabling not only a deeper understanding of natural language in social media but also rich applications that can extract essential real-time knowledge from social media. Table 7 gives our full evaluation on human annotated question types.
Compared with the BiDAF model, one interesting observation is that the generative baseline gets much worse results on ambiguous questions. We conjecture that although these questions are meaningless, they still have many words that overlapped with the contexts. This can give BiDAF potential advantage over the generative baseline.
B Performance Analysis over Automatically-Labeled Question Types
Besides the analysis on different reasoning types, we also look into the performance over questions with different first tokens in the development set, which provide us an automatic categorization of questions. According to the results in Table 8, the three neural baselines all perform the best on "Who" and "Where" questions, to which the answers are often named entities. Since the tweet contexts are short, there are only a small number of named entities to choose from, which could make the answer pattern easy to learn. On the other hand, the neural models fail to perform well on the "Why" questions, and the results of neural baselines are even worse than that of the matching baseline. We find that these questions generally have longer answer phrases than other types of questions, with the average answer length being 3.74 compared to 2.13 for any other types. Also, since all the answers are written by humans instead of just spans from the context, these abstractive answers can make it even harder for current models to handle. We also observe that when people write "Why" questions, they tend to copy word spans from the tweet, potentially making the task easier for the matching baseline. | 4,129 |
1907.06032 | 2956802405 | Subspace segmentation or subspace learning is a challenging and complicated task in machine learning. This paper builds a primary frame and solid theoretical bases for the minimal subspace segmentation (MSS) of finite samples. Existence and conditional uniqueness of MSS are discussed with conditions generally satisfied in applications. Utilizing weak prior information of MSS, the minimality inspection of segments is further simplified to the prior detection of partitions. The MSS problem is then modeled as a computable optimization problem via self-expressiveness of samples. A closed form of representation matrices is first given for the self-expressiveness, and the connection of diagonal blocks is then addressed. The MSS model uses a rank restriction on the sum of segment ranks. Theoretically, it can retrieve the minimal sample subspaces that could be heavily intersected. The optimization problem is solved via a basic manifold conjugate gradient algorithm, alternative optimization and hybrid optimization, taking into account of solving both the primal MSS problem and its pseudo-dual problem. The MSS model is further modified for handling noisy data, and solved by an ADMM algorithm. The reported experiments show the strong ability of the MSS method on retrieving minimal sample subspaces that are heavily intersected. | As shown in Theorem , the LRR gets the MSDR if and only if the subspaces are independent, that is, @math . This condition implies that each subspace does not intersect with the sum of other subspaces, or equivalently, @math , which is much stricter than that given in Theorem . It is proven by @cite_2 that the iPursuit can separate two subspaces ( @math ) with a high probability. This is one of the special cases shown in Corollary . The condition for LRSSC is similar with that of SSC in the same form and is stricter. We omit the comparison the condition with that of LRSSC, but a detailed comparison with SSC is given below. | {
"abstract": [
"In subspace clustering, a group of data points belonging to a union of subspaces are assigned membership to their respective subspaces. This paper presents a new approach dubbed Innovation Pursuit (iPursuit) to the problem of subspace clustering using a new geometrical idea whereby subspaces are identified based on their relative novelties. We present two frameworks in which the idea of innovation pursuit is used to distinguish the subspaces. Underlying the first framework is an iterative method that finds the subspaces consecutively by solving a series of simple linear optimization problems, each searching for a direction of innovation in the span of the data potentially orthogonal to all subspaces except for the one to be identified in one step of the algorithm. A detailed mathematical analysis is provided establishing sufficient conditions for iPursuit to correctly cluster the data. The proposed approach can provably yield exact clustering even when the subspaces have significant intersections. It is shown that the complexity of the iterative approach scales only linearly in the number of data points and subspaces, and quadratically in the dimension of the subspaces. The second framework integrates iPursuit with spectral clustering to yield a new variant of spectral-clustering-based algorithms. The numerical simulations with both real and synthetic data demonstrate that iPursuit can often outperform the state-of-the-art subspace clustering algorithms, more so for subspaces with significant intersections, and that it significantly improves the state-of-the-art result for subspace-segmentation-based face clustering."
],
"cite_N": [
"@cite_2"
],
"mid": [
"2185081213"
]
} | 0 |
||
1907.06032 | 2956802405 | Subspace segmentation or subspace learning is a challenging and complicated task in machine learning. This paper builds a primary frame and solid theoretical bases for the minimal subspace segmentation (MSS) of finite samples. Existence and conditional uniqueness of MSS are discussed with conditions generally satisfied in applications. Utilizing weak prior information of MSS, the minimality inspection of segments is further simplified to the prior detection of partitions. The MSS problem is then modeled as a computable optimization problem via self-expressiveness of samples. A closed form of representation matrices is first given for the self-expressiveness, and the connection of diagonal blocks is then addressed. The MSS model uses a rank restriction on the sum of segment ranks. Theoretically, it can retrieve the minimal sample subspaces that could be heavily intersected. The optimization problem is solved via a basic manifold conjugate gradient algorithm, alternative optimization and hybrid optimization, taking into account of solving both the primal MSS problem and its pseudo-dual problem. The MSS model is further modified for handling noisy data, and solved by an ADMM algorithm. The reported experiments show the strong ability of the MSS method on retrieving minimal sample subspaces that are heavily intersected. | For the SSC, @cite_15 showed that if the samples are uniformly distributed in a union of subspaces @math and for the basis matrices @math of @math , @math with where @math is a give parameter, then SSC could give a block-diagonal solution partitioned as the ideal subspace segmentation with a probability approximately equal to one, depending on @math , @math , @math , and @math . We remark this claim does not imply a connected solution as we have explained in the early discussion or mentioned by @cite_3 . | {
"abstract": [
"This paper considers the problem of clustering a collection of unlabeled data points assumed to lie near a union of lower dimensional planes. As is common in computer vision or unsupervised learning applications, we do not know in advance how many subspaces there are nor do we have any information about their dimensions. We develop a novel geometric analysis of an algorithm named sparse subspace clustering (SSC) [11], which signicantly broadens the range of problems where it is provably eective. For instance, we show that SSC can recover multiple subspaces, each of dimension comparable to the ambient dimension. We also prove that SSC can correctly cluster data points even when the subspaces of interest intersect. Further, we develop an extension of SSC that succeeds when the data set is corrupted with possibly overwhelmingly many outliers. Underlying our analysis are clear geometric insights, which may bear on other sparse recovery problems. A numerical study complements our theoretical analysis and demonstrates the eectiveness of these methods.",
"Sparse Subspace Clustering (SSC) is one of the recent approaches to subspace segmentation. In SSC a graph is constructed whose nodes are the data points and whose edges are inferred from the L1-sparse representation of each point by the others. It has been proved that if the points lie on a mixture of independent subspaces, the graphical structure of each subspace is disconnected from the others. However, the problem of connectivity within each subspace is still unanswered. This is important since the subspace segmentation in SSC is based on finding the connected components of the graph. Our analysis is built upon the connection between the sparse representation through L1-norm minimization and the geometry of convex poly-topes proposed by the compressed sensing community. After introduction of some assumptions to make the problem well-defined, it is proved that the connectivity within each subspace holds for 2- and 3-dimensional subspaces. The claim of connectivity for general d-dimensional case, even for generic configurations, is proved false by giving a counterexample in dimensions greater than 3."
],
"cite_N": [
"@cite_15",
"@cite_3"
],
"mid": [
"2139054653",
"2054199929"
]
} | 0 |
||
1907.06091 | 2959959234 | We present a novel method for motion segmentation called LAAV (Locally Affine Atom Voting). Our model's main novelty is using sets of features to segment motion for all features in the scene. LAAV acts as a pre-processing pipeline stage for features in the image, followed by a fine-tuned version of the state-of-the-art Random Voting (RV) method. Unlike standard approaches, LAAV segments motion using feature-set affinities instead of pair-wise affinities between all features; therefore, it significantly simplifies complex scenarios and reduces the computational cost without a loss of accuracy. We describe how the challenges encountered by using previously suggested approaches are addressed using our model. We then compare our algorithm with several state-of-the-art methods. Experiments shows that our approach achieves the most accurate motion segmentation results and, in the presence of measurement noise, achieves comparable results to the other algorithms. | The Two-view approach is derived merely from the relative camera poses from multiple views, called relative-pose constraints, without any additional assumptions of the scene. The epipolar constraint is such a constraint between two views @cite_18 . | {
"abstract": [
"In this paper we analyze in some detail the geometry of a pair of cameras, i.e., a stereo rig. Contrarily to what has been done in the past and is still done currently, for example in stereo or motion analysis, we do not assume that the intrinsic parameters of the cameras are known (coordinates of the principal points, pixels aspect ratio and focal lengths). This is important for two reasons. First, it is more realistic in applications where these parameters may vary according to the task (active vision). Second, the general case considered here, captures all the relevant information that is necessary for establishing correspondences between two pairs of images. This information is fundamentally projective and is hidden in a confusing manner in the commonly used formalism of the Essential matrix introduced by Longuet-Higgins (1981). This paper clarifies the projective nature of the correspondence problem in stereo and shows that the epipolar geometry can be summarized in one 3×3 matrix of rank 2 which we propose to call the Fundamental matrix."
],
"cite_N": [
"@cite_18"
],
"mid": [
"2145713909"
]
} | 0 |
||
1907.06091 | 2959959234 | We present a novel method for motion segmentation called LAAV (Locally Affine Atom Voting). Our model's main novelty is using sets of features to segment motion for all features in the scene. LAAV acts as a pre-processing pipeline stage for features in the image, followed by a fine-tuned version of the state-of-the-art Random Voting (RV) method. Unlike standard approaches, LAAV segments motion using feature-set affinities instead of pair-wise affinities between all features; therefore, it significantly simplifies complex scenarios and reduces the computational cost without a loss of accuracy. We describe how the challenges encountered by using previously suggested approaches are addressed using our model. We then compare our algorithm with several state-of-the-art methods. Experiments shows that our approach achieves the most accurate motion segmentation results and, in the presence of measurement noise, achieves comparable results to the other algorithms. | Random Voting (RV) @cite_31 , which is considered as the leading geometric method for motion segmentation partly because of its robustness to noise, has shown particularly successful results with a low computational cost. The algorithm, based on epipolar geometry, is an iterative process of randomized feature selection between two frames, estimating a fundamental matrix from the selected features and vote scores for the rest of the remaining features to be associated with a certain motion model. Since the method uses random initialization, it never loses any information even when the selected features do not represent a motion model. However, this approach only works well when the independent moving object is big enough, such that it consists of enough features to properly estimate the object's motion. In addition, objects in the scene need to be in a certain size so that the background object features ratio is not too high in order for the object's features to be selected in the randomized features selection. Finally, its accuracy rate results can vary due to the random initialization. | {
"abstract": [
"In this paper, we propose a novel rigid motion segmentation algorithm called randomized voting (RV). This algorithm is based on epipolar geometry, and computes a score using the distance between the feature point and the corresponding epipolar line. This score is accumulated and utilized for final grouping. Our algorithm basically deals with two frames, so it is also applicable to the two-view motion segmentation problem. For evaluation of our algorithm, Hopkins 155 dataset, which is a representative test set for rigid motion segmentation, is adopted, it consists of two and three rigid motions. Our algorithm has provided the most accurate motion segmentation results among all of the state-of-the-art algorithms. The average error rate is 0.77 . In addition, when there is measurement noise, our algorithm is comparable with other state-of-the-art algorithms."
],
"cite_N": [
"@cite_31"
],
"mid": [
"1996134027"
]
} | 0 |
||
1907.06091 | 2959959234 | We present a novel method for motion segmentation called LAAV (Locally Affine Atom Voting). Our model's main novelty is using sets of features to segment motion for all features in the scene. LAAV acts as a pre-processing pipeline stage for features in the image, followed by a fine-tuned version of the state-of-the-art Random Voting (RV) method. Unlike standard approaches, LAAV segments motion using feature-set affinities instead of pair-wise affinities between all features; therefore, it significantly simplifies complex scenarios and reduces the computational cost without a loss of accuracy. We describe how the challenges encountered by using previously suggested approaches are addressed using our model. We then compare our algorithm with several state-of-the-art methods. Experiments shows that our approach achieves the most accurate motion segmentation results and, in the presence of measurement noise, achieves comparable results to the other algorithms. | The Multiview approach utilizes the trajectory of the feature points. PAC @cite_23 and SSC @cite_17 methods have quite accurate results in multiple motion cases in a sequence and are also robust to noise. However, those algorithms are extremely slow. Latent low-rank representation-based method (LatLRR) @cite_20 is faster and more accurate, but this method becomes degraded in extremely noisy environments. The ICLM-based approach @cite_16 is very fast, but has lower accuracy than other state-of-the-art approaches. In addition, while Multiview approaches are more accurate than Two-view approaches, they do not have good performance when there are only a few frames. | {
"abstract": [
"The problem of rigid motion segmentation of trajectory data under orthography has been long solved for non-degenerate motions in the absence of noise. But because real trajectory data often incorporates noise, outliers, motion degeneracies and motion dependencies, recently proposed motion segmentation methods resort to non-trivial representations to achieve state of the art segmentation accuracies, at the expense of a large computational cost. This paper proposes a method that dramatically reduces this cost (by two or three orders of magnitude) with minimal accuracy loss (from 98.8 achieved by the state of the art, to 96.2 achieved by our method on the standard Hopkins 155 dataset). Computational efficiency comes from the use of a simple but powerful representation of motion that explicitly incorporates mechanisms to deal with noise, outliers and motion degeneracies. Subsets of motion models with the best balance between prediction accuracy and model complexity are chosen from a pool of candidates, which are then used for segmentation.",
"Low-Rank Representation (LRR) [16, 17] is an effective method for exploring the multiple subspace structures of data. Usually, the observed data matrix itself is chosen as the dictionary, which is a key aspect of LRR. However, such a strategy may depress the performance, especially when the observations are insufficient and or grossly corrupted. In this paper we therefore propose to construct the dictionary by using both observed and unobserved, hidden data. We show that the effects of the hidden data can be approximately recovered by solving a nuclear norm minimization problem, which is convex and can be solved efficiently. The formulation of the proposed method, called Latent Low-Rank Representation (LatLRR), seamlessly integrates subspace segmentation and feature extraction into a unified framework, and thus provides us with a solution for both subspace segmentation and feature extraction. As a subspace segmentation algorithm, LatLRR is an enhanced version of LRR and outperforms the state-of-the-art algorithms. Being an unsupervised feature extraction algorithm, LatLRR is able to robustly extract salient features from corrupted data, and thus can work much better than the benchmark that utilizes the original data vectors as features for classification. Compared to dimension reduction based methods, LatLRR is more robust to noise.",
"Many motion segmentation algorithms based on manifold clustering rely on a accurate rank estimation of the trajectory matrix and on a meaningful affinity measure between the estimated manifolds. While it is known that rank estimation is a difficult task, we also point out the problems that can be induced by an affinity measure that neglects the distribution of the principal angles. In this paper we suggest a new interpretation of the rank of the trajectory matrix and a new affinity measure. The rank estimation is performed by analysing which rank leads to a configuration where small and large angles are best separated. The affinity measure is a new function automatically parametrized so that it is able to adapt to the actual configuration of the principal angles. Our technique has one of lowest misclassification rates on the Hopkins155 database and has good performances also on synthetic sequences with up to 5 motions and variable noise level.",
"In many real-world problems, we are dealing with collections of high-dimensional data, such as images, videos, text and web documents, DNA microarray data, and more. Often, high-dimensional data lie close to low-dimensional structures corresponding to several classes or categories the data belongs to. In this paper, we propose and study an algorithm, called Sparse Subspace Clustering (SSC), to cluster data points that lie in a union of low-dimensional subspaces. The key idea is that, among infinitely many possible representations of a data point in terms of other points, a sparse representation corresponds to selecting a few points from the same subspace. This motivates solving a sparse optimization program whose solution is used in a spectral clustering framework to infer the clustering of data into subspaces. Since solving the sparse optimization program is in general NP-hard, we consider a convex relaxation and show that, under appropriate conditions on the arrangement of subspaces and the distribution of data, the proposed minimization program succeeds in recovering the desired sparse representations. The proposed algorithm can be solved efficiently and can handle data points near the intersections of subspaces. Another key advantage of the proposed algorithm with respect to the state of the art is that it can deal with data nuisances, such as noise, sparse outlying entries, and missing entries, directly by incorporating the model of the data into the sparse optimization program. We demonstrate the effectiveness of the proposed algorithm through experiments on synthetic data as well as the two real-world problems of motion segmentation and face clustering."
],
"cite_N": [
"@cite_16",
"@cite_20",
"@cite_23",
"@cite_17"
],
"mid": [
"1999925655",
"2145152441",
"1527474722",
"2952285266"
]
} | 0 |
||
1907.05944 | 2956358468 | We study various discrete nonlinear combinatorial optimization problems in an online learning framework. In the first part, we address the question of whether there are negative results showing that getting a vanishing (or even vanishing approximate) regret is computational hard. We provide a general reduction showing that many (min-max) polynomial time solvable problems not only do not have a vanishing regret, but also no vanishing approximation @math -regret, for some @math (unless @math ). Then, we focus on a particular min-max problem, the min-max version of the vertex cover problem which is solvable in polynomial time in the offline case. The previous reduction proves that there is no @math -regret online algorithm, unless Unique Game is in @math ; we prove a matching upper bound providing an online algorithm based on the online gradient descent method. Then, we turn our attention to online learning algorithms that are based on an offline optimization oracle that, given a set of instances of the problem, is able to compute the optimum static solution. We show that for different nonlinear discrete optimization problems, it is strongly @math -hard to solve the offline optimization oracle, even for problems that can be solved in polynomial time in the static case (e.g. min-max vertex cover, min-max perfect matching, etc.). On the positive side, we present an online algorithm with vanishing regret that is based on the follow the perturbed leader algorithm for a generalized knapsack problem. | Online Learning, or Online Convex Optimization, is an active research domain. In this section, we only summarize works which are directly related to ours. We refer the reader to comprehensive books @cite_15 @cite_1 and references therein for a more complete overview. The first no-regret algorithm has been given by . Subsequently, and gave improved algorithms with regret @math where @math is the size of the action space. However, these algorithms have running-time @math which is exponential in the size of the input for many applications, in particular for combinatorial optimization problems. An intriguing question is whether there exists a no-regret online algorithm with running-time polynomial in @math . proved that no such algorithm exists in general settings without any assumption on the structure. Designing online polynomial-time algorithms with approximation and vanishing regret guarantees for combinatorial optimization problems is a major research agenda. | {
"abstract": [
"Online learning is a well established learning paradigm which has both theoretical and practical appeals. The goal of online learning is to make a sequence of accurate predictions given knowledge of the correct answer to previous prediction tasks and possibly additional available information. Online learning has been studied in several research fields including game theory, information theory, and machine learning. It also became of great interest to practitioners due the recent emergence of large scale applications such as online advertisement placement and online web ranking. In this survey we provide a modern overview of online learning. Our goal is to give the reader a sense of some of the interesting ideas and in particular to underscore the centrality of convexity in deriving efficient online learning algorithms. We do not mean to be comprehensive but rather to give a high-level, rigorous yet easy to follow, survey.",
"This monograph portrays optimization as a process. In many practical applications the environment is so complex that it is infeasible to lay out a comprehensive theoretical model and use classical algorithmic theory and mathematical optimization. It is necessary as well as beneficial to take a robust approach, by applying an optimization method that learns as one goes along, learning from experience as more aspects of the problem are observed. This view of optimization as a process has become prominent in varied fields and has led to some spectacular success in modeling and systems that are now part of our daily lives."
],
"cite_N": [
"@cite_15",
"@cite_1"
],
"mid": [
"2077723394",
"2513180554"
]
} | Online-Learning for min-max discrete problems | Over the past years, online learning has become a very active research field. This is due to the widespread of applications with evolving or adversarial environments, e.g. routing schemes in networks [3], online marketplaces [5], spam filtering [11], etc. An online learning algorithm has to choose an action over a (possible infinite) set of feasible decisions. A loss/reward is associated to each decision which may be adversarially chosen. The losses/rewards are unknown to the algorithm beforehand. The goal is to minimize the regret, i.e. the difference between the total loss/reward of the online algorithm and that of the best single action in hindsight. A "good" online learning algorithm is an algorithm whose regret is sublinear as a function of the length of the time-horizon since then, on the average, the algorithm performs as well as the best single action in hindsight. Such an online algorithm is called an online learning algorithm with vanishing regret. For problems for which the offline version is N P -hard, the notions of regret and vanishing regret have been extended to the notions of α-regret and α-vanishing regret in order to take into account the existence of an α-approximation algorithm instead of an exact algorithm for solving the offline optimization problem.
While a lot of online learning problems can be modeled as the so-called "experts problem" by associating each feasible solution with an expert, there is clearly an efficiency challenge: since there are potentially an exponential number of solutions, such an approach is problematic to use in practice. Other methods have been used, such as online gradient descent [24], the follow the leader algorithm and its extensions: follow the perturbed leader [15] for linear objective functions and its generalization to submodular objective functions [12], or the generalized follow the perturbed leader [7] algorithm. Hazan and Koren [13] proved that a no-regret algorithm with running-time polynomial in the size of the problem does not exist in general settings without any assumption on the structure of the problem.
Our work takes into account the computational efficiency of the online learning algorithm in the same vein as the works in [1,15,12,22,6,7,14,9]. We study various discrete nonlinear combinatorial optimization problems in an online learning framework, focusing in particular on the family of min-max discrete optimization problems.
Our goal is to address the two following central questions:
(Q1) are there negative results showing that getting vanishing regret (or even vanishing approximate regret) is computationally hard?
(Q2) are there some notable differences in the efficiencies of follow the leader and gradient descent strategies for discrete problems?
Formally, an online learning problem consists of a decision space X, a state space Y and an objective function f : X × Y → R that can be either a cost or a reward function. Any problem of this class can be viewed as an iterative adversarial game with T rounds where the following procedure is repeated for t = 1, . . . , T: (a) decide an action x_t ∈ X, (b) observe a state y_t ∈ Y, (c) suffer loss or gain reward f_t(x_t) = f(x_t, y_t).
We use f t (x) as another way to refer to the objective function f after observing the state y t , i.e. the objective function at round t.
The objective of the player is to minimize/maximize the cumulative cost/reward of his decided actions, which is given by the aggregation ∑_{t=1}^T f(x_t, y_t). An online learning algorithm is any algorithm that decides the actions x_t at every round before observing y_t. We compare the decisions (x_1, . . . , x_T) of the algorithm with those of the best static action in hindsight, defined as x* = argmin_{x∈X} ∑_{t=1}^T f(x, y_t), or x* = argmax_{x∈X} ∑_{t=1}^T f(x, y_t), for minimization or maximization problems, respectively. This is the action that a (hypothetical) offline oracle would compute, if it had access to the entire sequence y_1, . . . , y_T. The typical measurement for the efficiency of an online learning algorithm is the regret, defined as:
R_T = | ∑_{t=1}^T f(x_t, y_t) − ∑_{t=1}^T f(x*, y_t) |.
A learning algorithm typically uses some kind of randomness, and the regret denotes the expectation of the above quantity. We are interested in online learning algorithms that have the "vanishing regret" property. This means that as the "game" progresses (T → ∞), the deviation between the algorithm's average cost/payoff and the average cost/payoff of the optimum action in hindsight tends to zero. Typically, a vanishing regret algorithm is an algorithm with regret R_T such that lim_{T→∞} R_T / T = 0. However, as we are interested in polynomial time algorithms, we consider only vanishing regret R_T = O(T^c) where 0 ≤ c < 1 (which guarantees convergence in polynomial time). Throughout the paper, whenever we mention vanishing regret, we mean regret
R_T = O(T^c) where 0 ≤ c < 1.
For many online learning problems, even their offline versions are NP-hard. Thus, it is not feasible to produce a vanishing regret sequence with an efficient algorithm. For such cases, the notion of α-regret has been defined as:
R^α_T = ∑_{t=1}^T f(x_t, y_t) − α · ∑_{t=1}^T f(x*, y_t).
Hence, we are interested in vanishing α-regret sequences for some α for which we know how to approximate the offline problem. The notion of vanishing α-regret is defined in the same way as that of vanishing regret. In this article we focus on computational issues. Efficiency for an online learning algorithm needs to capture both the computation of x t and the convergence speed. This is formalized in the following definition (where n denotes the size of the instance).
Definition 1.
A polynomial time vanishing α-regret algorithm is an online learning algorithm for which (1) the computation of x_t is polynomial in n and t, and (2) the expected α-regret is bounded by p(n)·T^c for some polynomial p and some constant 0 ≤ c < 1.
Note that in case α = 1, we simply use the term polynomial time vanishing regret algorithm.
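To make these definitions concrete, here is a minimal Python sketch (our own illustration; the function and variable names are ours, not from the paper) that computes the empirical α-regret of a played sequence for a minimization problem with a finite action set.

def alpha_regret(losses, played, alpha=1.0):
    # losses[t][x] = f(x, y_t): loss of action x at round t; played[t] = index of the action chosen at round t
    T = len(losses)
    num_actions = len(losses[0])
    algo_loss = sum(losses[t][played[t]] for t in range(T))                              # total loss of the online algorithm
    best_static = min(sum(losses[t][x] for t in range(T)) for x in range(num_actions))   # best fixed action in hindsight
    return algo_loss - alpha * best_static

# Toy example: 3 rounds, 2 actions.
losses = [[1.0, 2.0], [1.0, 0.5], [1.0, 2.0]]
print(alpha_regret(losses, played=[0, 1, 0]))   # 1-regret of this particular play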
Our contribution
In Section 2, we provide a general reduction showing that many (min-max) polynomial time solvable problems not only do not have a vanishing regret, but also no vanishing approximation α-regret, for some α (unless N P = BP P ). Then, we focus on a particular min-max problem, the min-max version of the vertex cover problem which is solvable in polynomial time in the offline case. The previous reduction proves that there is no (2 − ǫ)-regret online algorithm, unless Unique Game is in BP P ; we prove a matching upper bound providing an online algorithm based on the online gradient descent method.
In Section 3, we turn our attention to online learning algorithms that are based on an offline optimization oracle that, given a set of instances of the problem, is able to compute the optimum static solution. We show that for different nonlinear discrete optimization problems, it is strongly NP-hard to solve the offline optimization oracle, even for problems that can be solved in polynomial time in the static case (e.g. min-max vertex cover, min-max perfect matching, etc.). We also prove that the offline optimization oracle is strongly NP-hard for the problem of scheduling a set of jobs on m identical machines, where m is a fixed constant. To the best of our knowledge, up to now algorithms based on the follow the leader method for non-linear objective functions require an exact oracle or an FPTAS oracle in order to obtain vanishing regret. Thus, strong NP-hardness for the multiple instance version of the offline problem indicates that follow-the-leader-type strategies cannot be used for the online problem, at least with our current knowledge. On the positive side, we present an online algorithm with vanishing regret that is based on the follow the perturbed leader algorithm for a generalization of the knapsack problem [2].
Hardness of online learning for min-max problems
General reduction
As mentioned in the introduction, in this section we give some answers to question (Q1) by ruling out the existence of vanishing regret algorithms for a broad family of online min-max problems, even for ones that are polynomial-time solvable in the offline case. In fact, we provide a general reduction (see Theorem 1) showing that many min-max problems do not admit a vanishing α-regret algorithm for some α > 1, unless NP = BPP.
More precisely, we focus on a class of cardinality minimization problems where, given an n-element set U, a set of constraints C on the subsets of U (defining feasible solutions) and an integer k, the goal is to determine whether there exists a feasible solution of size at most k. This is a general class of problems, including for instance graph problems such as Vertex Cover, Dominating Set, Feedback Vertex Set, etc.
Given such a cardinality problem P, let min-max-P be the optimization problem where given nonnegative weights for all the elements of U , one has to compute a feasible solution (under the same set of constraints C as in problem P) such that the maximum weight of all its elements is minimized. The online min-max-P problem is the online learning variant of min-max-P , where the weights on the elements of U change over time.
Interestingly, the min-max versions of all the problems mentioned above are polynomially solvable. This is actually true as soon as, for problem P, every superset of a feasible solution is feasible. Then one just has to check, for each possible weight w, whether the set of all elements of weight at most w satisfies the constraints. For example, one can decide if there exists a vertex cover with maximum weight w as follows: remove all vertices of weight strictly larger than w, and check if the remaining vertices form a vertex cover.
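As an illustration of this observation, the following Python sketch (ours; a simplified, assumed implementation) solves offline min-max vertex cover exactly by scanning the candidate thresholds w in increasing order and checking whether the vertices of weight at most w already cover every edge.

def min_max_vertex_cover(weights, edges):
    # weights: dict vertex -> weight; edges: list of (u, v) pairs
    for w in sorted(set(weights.values())):
        cover = {v for v, wt in weights.items() if wt <= w}
        if all(u in cover or v in cover for (u, v) in edges):
            return w, cover        # smallest feasible maximum weight, with a witness cover
    return None

weights = {'a': 3, 'b': 1, 'c': 5, 'd': 2}
edges = [('a', 'b'), ('b', 'c'), ('c', 'd')]
print(min_max_vertex_cover(weights, edges))    # maximum weight 2, cover {'b', 'd'}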
We will show that, in contrast, if P is NP-complete then its online learning min-max version has no vanishing regret algorithm (unless NP = BPP), and that if P has an inapproximability gap r, then there is no vanishing (r − ǫ)-regret algorithm for its online learning min-max version. Let us first recall the notion of approximation gap, where x_opt denotes a minimum size feasible solution to the cardinality problem P.
Definition 2. Given two numbers 0 ≤ A < B ≤ 1, let [A,B]-Gap-P be the decision problem where, given an instance of P such that |x_opt| ≤ An or |x_opt| ≥ Bn, we need to decide whether |x_opt| < Bn.
Now we can state the main result of the section.
Theorem 1. Let P be a cardinality minimization problem and A, B be real numbers with 0 ≤ A < B ≤ 1. Assume that the problem [A,B]-Gap-P is NP-complete. Then, for every α ≤ B/A − ǫ, where ǫ is an arbitrarily small constant, there is no polynomial time vanishing α-regret algorithm for online min-max-P unless NP = BPP.
Proof. We prove this theorem by deriving a polynomial time algorithm for [A,B]-Gap-P that gives, under the assumption of a vanishing α-regret algorithm for online min-max-P, the correct answer with probability of error at most 1/3. This would imply that the [A,B]-Gap-P problem is in BPP and thus NP = BPP. Let O be a vanishing α-regret algorithm for online min-max-P for some α ≤ B/A − ǫ = (1 − ǫ′)·B/A, where ǫ > 0 is a constant and ǫ′ = (A/B)·ǫ.
Let T be a time horizon which will be fixed later. We construct the following (offline) algorithm for [A,B]-Gap-P using O as an oracle (subroutine). At every step 1 ≤ t ≤ T, use the oracle O to compute a solution x_t. Then, choose one element of x_t uniformly at random and assign weight 1 to that element; assign weight 0 to all other elements. Consequently, the cost incurred by O is 1 at every step. These weight assignments, though simple, are crucial. Intuitively, the assignments will be used to learn about the optimal solution of the [A,B]-Gap-P problem (given the performance of the learning algorithm O). The formal description is given in Algorithm 1.
Algorithm 1: Algorithm for the [A,B]-Gap-P problem
for t = 1, 2, . . . , T do
    Compute x_t ∈ X using algorithm O.
    if |x_t| < Bn then return Yes, i.e., |x_opt| ≤ An.
    Assign weight 1 to an element of x_t chosen uniformly at random and weight 0 to all other elements of U.
    Feed the weight vector and the cost f_t(x_t) = max_{u∈x_t} w_t(u) back to O.
end for
return No, i.e., |x_opt| ≥ Bn.
We now analyze Algorithm 1. If the algorithm outputs |x_opt| ≤ An, this means that at some step t the oracle O has found a feasible solution x_t with |x_t| < Bn. Since x_opt (the minimum cardinality feasible solution) is known to satisfy either |x_opt| ≤ An or |x_opt| ≥ Bn, the output is always correct.
If the algorithm outputs |x opt | ≥ Bn, then this means that every solution x t had a cardinality that was greater or equal to Bn. We bound the probability that Algorithm 1 returns a wrong answer in this case. Let R T be the α-regret achieved by the oracle (online learning algorithm) O on the set of instances produced in Algorithm 1. Let E denote the event that the algorithm returns a wrong answer. By Adam's Law, we have:
E[R_T] = E[R_T | E]·P[E] + E[R_T | ¬E]·P[¬E] ≥ E[R_T | E]·P[E]  ⇒  P[E] ≤ E[R_T] / E[R_T | E]
From Algorithm 1 it should be clear that at every step, the oracle O always suffers loss 1. By definition of α-regret, this means that:
E[R_T | E] = T − α · min_{x∈X} ∑_{t=1}^T E[f_t(x) | E].
Now, we consider a minimum cardinality feasible solution x_opt (for the initial instance of the cardinality minimization problem P). We have
min_{x∈X} ∑_{t=1}^T E[f_t(x) | E] ≤ ∑_{t=1}^T E[f_t(x_opt) | E].
As Algorithm 1 returns a wrong answer, |x opt | ≤ An and at every time t, x t has at least Bn elements. Furthermore, by the construction of the weights, there is only one element with weight 1. Thus, f t (x opt ) = 1 with probability at most |x opt |/|x t | ≤ A/B (and f t (x opt ) = 0 otherwise). Thus, we get:
min_{x∈X} ∑_{t=1}^T E[f_t(x) | E] ≤ (A/B)·T  ⇒  E[R_T | E] ≥ T·(1 − α·A/B) ≥ ǫ′·T,
since α ≤ (1 − ǫ′)·B/A. Hence, P[E] ≤ E[R_T] / (ǫ′·T). As O has vanishing α-regret, there exists a constant 0 ≤ c < 1 such that E[R_T] ≤ p(n)·T^c, where p(n) is a polynomial of the problem parameters. Therefore,
P[E] ≤ p(n)·T^c / (ǫ′·T) = p(n)·T^{c−1} / ((A/B)·ǫ).
Choosing the parameter T = (ǫA / (3·p(n)·B))^{1/(c−1)}, we get that P[E] ≤ 1/3. Besides, the running time of Algorithm 1 is polynomial since it consists of T (polynomial in the size of the problem) iterations and the running time of each iteration is polynomial (as O is a polynomial time algorithm).
In conclusion, if there exists a vanishing α-regret algorithm for online min-max-P, then the NP-complete problem [A,B]-Gap-P is in BPP, implying NP = BPP.
The inapproximability (gap) results for the aforementioned problems give lower bounds on the approximation ratio α of any vanishing α-regret algorithm for their online min-max version. For instance, the online min-max dominating set problem has no vanishing constant-regret algorithm, based on the approximation hardness in [19]. We state the lower bound explicitly for the online min-max vertex cover problem in the following corollary, as we refer to it later by showing a matching upper bound.
Corollary 2. For every constant ǫ > 0, there is no polynomial time vanishing (√2 − ǫ)-regret algorithm for online min-max vertex cover unless NP = BPP, and no polynomial time vanishing (2 − ǫ)-regret algorithm unless Unique Games is in BPP.
These bounds are based on the hardness results for vertex cover in [17] and [16] (NP-hardness and UGC-hardness, respectively). Now, consider NP-complete cardinality problems which have no known inapproximability gap (for instance Vertex Cover in planar graphs, which admits a PTAS). Then we can show the following impossibility result.
Corollary 3. If a cardinality problem P is NP-complete, then there is no vanishing regret algorithm for online min-max-P unless NP = BPP.
Proof. We note that the proof of Theorem 1 does not require A, B and α to be constant: they can be functions of the instance, and the result holds as soon as 1/(1 − α·A/B) is polynomially bounded (so that T remains polynomially bounded in n). Then, for a cardinality problem P, if A = k/n and B = (k+1)/n = A + 1/n, then deciding whether |x_opt| ≤ k is the same as deciding whether |x_opt| ≤ An or |x_opt| ≥ Bn. By setting α = 1, A = k/n and B = (k+1)/n in the proof of Theorem 1 we get the result.
Min-max Vertex Cover: matching upper bound with Gradient Descent
In this section we will present an online algorithm for the min-max vertex cover problem based on the classic Online Gradient Descent (OGD) algorithm. In the latter, at every step the solution is obtained by updating the previous one in the direction of the (sub-)gradient of the objective and projecting onto a feasible convex set. The particular nature of the min-max vertex cover problem is that the objective function is a weighted ℓ∞-type maximum and the set of feasible solutions is discrete (non-convex). In our algorithm, we consider the following standard relaxation of the problem:
min  max_{i∈V} w_i·x_i
s.t.  x ∈ Q := { x : x_i + x_j ≥ 1  ∀(i, j) ∈ E,  0 ≤ x_i ≤ 1  ∀i ∈ V }.
At time step t, we update the solution by a sub-gradient g_t(x^t) = [0, . . . , 0, w^t_{i_t}, 0, . . . , 0], which has w^t_{i_t} in coordinate i_t = i_t(x^t) = argmax_{1≤i≤n} w^t_i·x^t_i.
Algorithm 2: Online gradient descent for min-max vertex cover
for t = 1, 2, . . . , T do
    Play X^t ∈ {0, 1}^n. Observe w^t (weights of vertices) and incur the cost max_i w^t_i·X^t_i.
    Update y^{t+1} = x^t − (1/√t)·g_t(x^t).
    Project y^{t+1} onto Q w.r.t. the ℓ2-norm: x^{t+1} = Proj_Q(y^{t+1}) := argmin_{x∈Q} ‖y^{t+1} − x‖_2.
    Round x^{t+1} to X^{t+1}: X^{t+1}_i = 1 if x^{t+1}_i ≥ 1/2 and X^{t+1}_i = 0 otherwise.
end for
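Below is a minimal Python sketch of Algorithm 2 (our own illustration, not code from the paper); for simplicity, the Euclidean projection onto Q is delegated to SciPy's SLSQP solver, and the step sizes, sub-gradient and rounding follow the description above.

import numpy as np
from scipy.optimize import minimize

def project_onto_Q(y, edges, n):
    # Euclidean projection onto Q = {x in [0,1]^n : x_i + x_j >= 1 for all edges (i, j)}
    cons = [{'type': 'ineq', 'fun': (lambda x, i=i, j=j: x[i] + x[j] - 1.0)} for (i, j) in edges]
    res = minimize(lambda x: np.sum((x - y) ** 2), x0=np.clip(y, 0.0, 1.0),
                   bounds=[(0.0, 1.0)] * n, constraints=cons, method='SLSQP')
    return res.x

def ogd_minmax_vertex_cover(edges, weight_seq, n):
    x = np.ones(n)                                   # feasible fractional starting point
    total_cost = 0.0
    for t, w in enumerate(weight_seq, start=1):
        X = (x >= 0.5).astype(int)                   # rounded integral cover that is actually played
        total_cost += max((w[i] for i in range(n) if X[i] == 1), default=0.0)
        i_star = int(np.argmax(w * x))               # coordinate achieving max_i w_i x_i
        g = np.zeros(n)
        g[i_star] = w[i_star]                        # sub-gradient of the relaxed objective
        x = project_onto_Q(x - g / np.sqrt(t), edges, n)
    return total_cost

edges = [(0, 1), (1, 2)]
weights = [np.array([3.0, 1.0, 2.0]), np.array([1.0, 4.0, 1.0])]
print(ogd_minmax_vertex_cover(edges, weights, n=3))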
The following theorem, coupled with Corollary 2, shows the tight bound of 2 on the approximation ratio of polynomial-time online algorithms for Min-max Vertex Cover (assuming the Unique Games Conjecture).
Theorem 4.
Assume that W = max_{1≤t≤T} max_{1≤i≤n} w^t_i. Then, after T time steps, Algorithm 2 achieves
∑_{t=1}^T max_{1≤i≤n} w^t_i·X^t_i ≤ 2 · min_{X*} ∑_{t=1}^T max_{1≤i≤n} w^t_i·X*_i + 3W√(nT).
Proof. By the OGD algorithm (see [24] or [11, Chapter 3]), we have
∑_{t=1}^T max_{1≤i≤n} w^t_i·x^t_i ≤ min_{x*∈Q} ∑_{t=1}^T max_{1≤i≤n} w^t_i·x*_i + (3/2)·D·G·√T, where D = max_{x,x′∈Q} ‖x − x′‖_2 ≤ √n is the diameter of Q and G = max_{1≤t≤T} ‖g_t‖_2 ≤ W.
Moreover, by the rounding procedure, it always holds that
max_{i=1,...,n} X^t_i·w^t_i ≤ 2 · max_{i=1,...,n} x^t_i·w^t_i.
Combining these inequalities, the theorem follows.
Computational issues for Follow the Leader based methods
The most natural approach in online learning is for the player to always pick the leading action, i.e. the action x_t that is optimal for the observed history y_1, . . . , y_{t−1}. However, it can be proven [15] that any deterministic algorithm that always decides on the leading action can be "tricked" by the adversary into making decisions that are worse than the optimal action in hindsight, thus leading to large regret. In this regard, we need to add a regularization term containing randomness to the optimization oracle in order to make our algorithms less predictable and more stable. Thus, the Follow the Regularized Leader strategy, in a minimization problem, consists of deciding on an action x_t such that:
x_t = argmin_{x∈X} [ ∑_{τ=1}^{t−1} f(x, y_τ) + R(x) ]
where R(x) is the regularization term.
There are many variations of the Follow the Leader (FTL) algorithm, which differ in the objective functions they apply to and in the type of regularization term. For linear objectives, Kalai and Vempala [15] suggested the Follow the Perturbed Leader algorithm, where the regularization term is simply the cost/payoff of each action on a randomly generated instance of the problem. Dudik et al. [7] were able to generalize the FTPL algorithm of Kalai and Vempala [15] to non-linear objectives, by introducing the concept of shared randomness and a much more complex perturbation mechanism.
A common element of every Follow the Leader based method is the need for an optimization oracle over the observed history of the problem. This is a minimum requirement, since the regularization term can make determining the leader even harder; however, most algorithms are able to map the perturbations to the value of the objective function on a set of instances of the problem and thus eliminate this extra complexity. To the best of our knowledge, up to now FTL algorithms for non-linear objective functions require an exact or an FPTAS oracle in order to obtain vanishing regret. Thus, strong NP-hardness for the multiple instance version of the offline problem indicates that the FTL strategy cannot be used for the online problem, at least with our current knowledge.
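For intuition, here is a small Python sketch (our illustration, with an assumed toy oracle) of the classical follow the perturbed leader strategy of [15] for a linear minimization problem: the perturbation is simply an extra random cost vector handed to the offline oracle together with the cumulative observed costs.

import random

def ftpl_play(history, offline_oracle, dim, eta):
    # history: observed cost vectors y_1, ..., y_{t-1}; offline_oracle(c) is assumed to return
    # the feasible solution minimizing the linear cost c . x.
    cumulative = [sum(y[i] for y in history) for i in range(dim)]
    perturbation = [random.uniform(0.0, eta) for _ in range(dim)]
    return offline_oracle([cumulative[i] + perturbation[i] for i in range(dim)])

def cheapest_item(c):
    # Toy oracle: the decision space is the set of unit vectors (pick exactly one item).
    j = min(range(len(c)), key=lambda i: c[i])
    return [1 if i == j else 0 for i in range(len(c))]

print(ftpl_play(history=[[1.0, 3.0], [2.0, 0.5]], offline_oracle=cheapest_item, dim=2, eta=1.0))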
Computational hardness results
As we mentioned, algorithms that use the "Follow the Leader" strategy heavily rely on the existence of an optimization oracle for the multi-instance version of the offline problem. For linear objectives, it is easy to see [15] that optimization over a set of instances is equivalent to optimization over a single instance, and thus any algorithm for the offline problem can be transformed into an online learning algorithm. However, for non-linear problems this assumption is not always justified since, even when the offline problem is polynomial-time solvable, the corresponding variation with multiple instances can be strongly NP-hard.
In this section we present some problems where we can prove that the optimum solution over a set of instances is hard to approximate. More precisely, in the multi-instance version of a given problem, we are given an integer N > 0, a set of feasible solutions X, and N objective functions f_1, . . . , f_N over X. The goal is to minimize (over X) the sum ∑_{i=1}^N f_i(x). We will show computational hardness results for the multi-instance versions of:
• min-max vertex cover (already defined).
• min-max perfect matching, where we are given an undirected graph G(V, E) and a weight function w : E → R + on the edges and we need to determine a perfect matching such that the weight of the heaviest edge on the matching is minimized.
• min-max path, where we are given an undirected graph G(V, E), two vertices s and t, and a weight function w : E → R + on the edges and we need to determine an s − t path such that the weight of the heaviest edge in the path is minimized.
• P3||Cmax, where we are given 3 identical parallel machines, a set of n jobs J = {j_1, . . . , j_n} and processing times p : J → R+, and we need to determine a schedule of the jobs on the machines (without preemption) such that the makespan, i.e. the time that elapses until the last job is processed, is minimized.
Hence, in the multi-instance versions of these problems, we are given N weight functions over vertices (min-max vertex cover) or edges (min-max perfect matching, min-max path), or N processing time vectors (P 3||Cmax).
Theorem 5.
The multi-instance versions of min-max vertex cover, min-max perfect matching, min-max path and P3||Cmax are strongly NP-hard.
Proof. Here we present the proof for the multi-instance version of the min-max perfect matching and the min-max path problems, which use a similar reduction from the Max-3-DNF problem. The proofs for multi-instance min-max vertex cover and multi-instance P 3||Cmax can be found at appendices A.1 and A.2 respectively. In the Max-3-DNF problem, we are given a set of n boolean variables X = {x 1 , . . . , x n } and m clauses C 1 , . . . , C m that are conjunctions of three variables in X or their negations and we need to determine a truth assignment σ : X → {T, F } such that the number of satisfied clauses is maximized.
We start with the multi-instance min-max perfect matching problem. For every instance I of the Max-3-DNF problem we construct a graph G(V, E) and m weight functions defined as follows:
• To each variable x_i is associated a 4-cycle on vertices (u_i, u^t_i, ū_i, u^f_i). This 4-cycle has two perfect matchings: either u_i is matched with u^t_i and ū_i is matched with u^f_i, corresponding to setting the variable x_i to true, or vice-versa, corresponding to setting x_i to false. This specifies a one-to-one correspondence between the solutions of the two problems.
• Each weight function corresponds to one conjunction: w_j(u_i u^t_i) = 1 if ¬x_i ∈ C_j and w_j(u_i u^f_i) = 1 if x_i ∈ C_j; all other weights are 0 (in particular, edges incident to the vertices ū_i always get weight 0). The above construction can obviously be done in time polynomial in the size of the input. It remains to show the correlation between the objective values of these solutions. If a clause C_j is satisfied by a truth assignment σ then (since it is a conjunction) every literal in the clause must be satisfied. From the construction of the instance I′ of multi-instance min-max matching, the corresponding matching M_σ will have a maximum weight of 0 for the weight function w_j. If a clause C_j is not satisfied by a truth assignment, then the corresponding matching M_σ will have a maximum weight of 1 for the weight function w_j. Thus, from the reduction we get
val(I, σ) = m − val(I ′ , M σ )
where val stands for the value of a solution. This equation already proves the hardness result of Theorem 5. It actually also shows APX-hardness. Indeed, the optimal value OPT of Max-3-DNF verifies m/8 ≤ OPT ≤ m. Assuming the existence of a (1 + ǫ)-approximation algorithm for the multi-instance min-max perfect matching problem, we can get a (1 − 7ǫ)-approximation algorithm for Max-3-DNF. Since Max-3-DNF is APX-hard, multi-instance min-max perfect matching is also APX-hard.
A similar reduction leads to the same result for the min-max path problem: starting from an instance of 3-DNF, build a graph G where V = {v_0, v_1, . . . , v_n}. Vertex v_i corresponds to variable x_i. There are two parallel edges e^t_i and e^f_i between v_{i−1} and v_i. We are looking for v_0 − v_n paths. Taking edge e^t_i (resp. e^f_i) corresponds to setting x_i to true (resp. false). As previously, this gives a one-to-one correspondence between solutions. Each clause corresponds to one weight function: if x_i ∈ C_j then w_j(e^f_i) = 1, and if ¬x_i ∈ C_j then w_j(e^t_i) = 1. All other weights are 0. Then for a v_0 − v_n path P, w_j(P) = 0 if and only if C_j is satisfied by the corresponding truth assignment. The remainder of the proof is exactly the same as the one of min-max perfect matching.
Theorem 5 gives insight into the hardness of non-linear multi-instance problems compared to their single-instance counterparts. As we proved, the multi-instance P3||Cmax is strongly NP-hard while P3||Cmax is known to admit an FPTAS [20,23]. Also, the multi-instance versions of min-max perfect matching, min-max path and min-max vertex cover are proved to be APX-hard while their single-instance versions can be solved in polynomial time. We also note that these hardness results hold for the very specific case where weights/processing times are in {0, 1}, for which P||Cmax and the other problems become trivial.
We also note that the inapproximability bound we acquired for multi-instance min-max vertex cover under UGC is tight, since we can formulate the problem as an integer linear program, solve its continuous relaxation and then use a rounding algorithm to get a vertex cover of cost at most twice the optimum for the problem.
The results on the min-max vertex cover problem also provide some answer to question (Q2) addressed in the introduction. As we proved in Section 2.2, the online gradient descent method (paired with a rounding algorithm) suffices to give a vanishing 2-regret algorithm for online min-max vertex cover. However, since the multi-instance version of the problem is APX-hard, there is no indication that the follow the leader approach can be used in order to get the same result and match the lower bound of Corollary 2 for the problem.
Online generalized knapsack problem
In this section we present a vanishing regret algorithm for the online learning version of the following generalized knapsack problem. In the traditional knapsack problem, one has to select a set of items with total weight not exceeding a fixed "knapsack" capacity B and maximize the total profit of the set. Instead, we assume that the knapsack can be customized to fit more items. Specifically, there is a capacity B and, if the total weight of the items exceeds this capacity, then we have to pay c times the extra weight. Formally:
Definition 3 (Generalized Knapsack Problem (GKP)). Given a set of items i = 1, 2, ..., n with non-negative weights w_i and non-negative profits p_i, a knapsack capacity B ∈ R+ and a constant c ∈ R+, determine a set of items A ⊆ [n] that maximizes the total profit:
profit(A) = ∑_{i∈A} p_i − c · max{0, ∑_{i∈A} w_i − B}
This problem, as well as generalizations with other penalty costs for exceeding the capacity, has been studied for instance in [4,2] (see there for practical motivations). In an online learning setting, we assume that we have n items with static weights w_i and a static constant c. At each time step, we need to select a subset of those items and then we learn the capacity of the knapsack and the profit of every item, gaining some profit or even suffering a loss based on our decision.
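To fix ideas, the following Python sketch (ours, only for illustration) evaluates the GKP objective and solves tiny offline instances by brute force; it is not the FPTAS discussed below.

from itertools import combinations

def gkp_profit(A, p, w, B, c):
    total_w = sum(w[i] for i in A)
    return sum(p[i] for i in A) - c * max(0.0, total_w - B)

def gkp_brute_force(p, w, B, c):
    # Exhaustive search over all item sets: exponential in n, usable only for very small n.
    n = len(p)
    best = (0.0, ())
    for k in range(n + 1):
        for A in combinations(range(n), k):
            best = max(best, (gkp_profit(A, p, w, B, c), A))
    return best

p, w = [4.0, 3.0, 2.0], [2.0, 2.0, 3.0]
print(gkp_brute_force(p, w, B=4.0, c=1.5))   # (best profit, best item set) = (7.0, (0, 1))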
As we showed in Section 3.1, many non-linear problems do not have an efficient (polynomial) offline oracle and, as a direct consequence, the follow the leader strategy cannot directly be applied to get vanishing regret. While GKP is clearly not linear due to the maximum in the profit function, we will show that there exists an FPTAS for solving its multi-instance variation. We will use this result to get a vanishing regret algorithm for the online version of GKP (Theorem 6).
Since the problem is not linear, we use the generalized FTPL (GFTPL) framework of Dudik et al. [7], which does not rely on the assumption that the objective function is linear. While in the linear case it was sufficient to consider an "extra" random observation (FTPL), a much more complex perturbation mechanism is needed in order for the analysis to work if the objective function is not linear. The key idea of the GFTPL algorithm is to use common randomness for every feasible action but apply it in a different way. This concept was referred to by the authors of [7] as shared randomness, using the notion of a translation matrix. The method is presented in Appendix B.1.
Theorem 6.
There is a polynomial time vanishing regret algorithm for GKP.
Proof. (sketch) The proof is based on the three following steps:
• First we note that GFTPL works (gives vanishing regret) even if the oracle admits an FPTAS. This is necessary since our problem is clearly NP-hard.
• Second, we provide for GKP an ad hoc translation matrix. This shows that the GFTPL method can be applied to our problem. Moreover, this matrix is built in such a way that the oracle needed for GFTPL is precisely a multi-instance oracle.
• Third, we show that there exists an FPTAS multi-instance oracle.
The first two points are given in appendices B.1 and B.2 respectively. We only show the last point. To do this, we show that we can map a set of instances of the generalized knapsack problem to a single instance of the more general convex-generalized knapsack problem. Suppose that we have a set of m instances (p^t, B^t), t = 1, . . . , m, of GKP. Then, the total profit of every item set x ∈ X is:
profit(x) = ∑_{t=1}^m ( x·p^t − c·max{0, w·x − B^t} ) = x·p^s − c·k(x | B^1, ..., B^m)
where p^s = ∑_{t=1}^m p^t and k(x | B^1, ..., B^m) = ∑_{t=1}^m max{0, w·x − B^t}. Let W = w·x be the total weight of the item set and B̄_1, . . . , B̄_m a non-decreasing ordering of the knapsack capacities. Then:
k(x | B^1, ..., B^m) = k(W | B̄_1, ..., B̄_m) =
    0,                                   if W ≤ B̄_1
    W − B̄_1,                             if B̄_1 < W ≤ B̄_2
    2W − (B̄_1 + B̄_2),                    if B̄_2 < W ≤ B̄_3
    . . .
    mW − (B̄_1 + B̄_2 + · · · + B̄_m),      if B̄_m < W
Note that the above function is always convex. This means that at every time step t, we need an FPTAS for the maximization problem x·p − f(W) where f is a convex function. We know that such an FPTAS exists [2]. There, the authors give an FPTAS with time complexity O(n^3/ǫ^2), assuming that the convex function can be evaluated in constant time. In our case the convex function k is part of the input; with binary search we can compute it in logarithmic time.
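The aggregate penalty k can indeed be evaluated in O(log m) time once the capacities are sorted, as mentioned above; the following Python sketch (our own illustration) uses prefix sums and binary search and matches the piecewise-linear form just given.

from bisect import bisect_left
from itertools import accumulate

def make_penalty(capacities):
    # Returns a function W -> k(W | B_1, ..., B_m) = sum_t max(0, W - B_t), evaluated in O(log m).
    B = sorted(capacities)
    prefix = [0.0] + list(accumulate(B))         # prefix[j] = sum of the j smallest capacities
    def k(W):
        j = bisect_left(B, W)                    # number of capacities strictly smaller than W
        return j * W - prefix[j]
    return k

k = make_penalty([4.0, 1.0, 3.0])
print(k(0.5), k(2.0), k(10.0))                   # 0.0 1.0 22.0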
Conclusion
In this paper, we have presented a general framework showing the hardness of online learning algorithms for min-max problems. We have also shown a sharp separation between two widely-studied online learning algorithms, online gradient descent and follow-the-leader, from the approximation and computational complexity points of view. The paper gives rise to several interesting directions. A first one is to extend the reduction framework to objectives other than min-max. A second direction is to design online vanishing regret algorithms with approximation ratios matching the lower bound guarantees. Finally, the proof of Theorem 1 needs a non-oblivious adversary. An interesting direction would be to get the same lower bounds with an oblivious adversary, if possible.
Appendix
A Hardness of multi-instance problems (Theorem 5)
A.1 Hardness of multi-instance min-max vertex cover
We make a straightforward reduction from the vertex cover problem. Consider any instance G(V, E) of the vertex cover problem, with V = {v_1, . . . , v_n}. We construct n weight functions w_1, . . . , w_n : V → R+ such that in w_i vertex v_i has weight 1 and all other vertices have weight 0. If we consider the instance of multi-instance min-max vertex cover with graph G(V, E) and weight functions w_1, . . . , w_n, it is clear that any vertex cover has total cost equal to its size, since for any vertex v_i ∈ V there is exactly one weight function in which v_i has weight 1, and v_i has weight 0 in every other weight function.
Since vertex cover is strongly NP-hard, NP-hard to approximate within ratio √2 − ǫ and UGC-hard to approximate within ratio 2 − ǫ, the same negative results hold for the multi-instance min-max vertex cover problem.
A.2 Hardness of multi-instance P3||Cmax
We prove that the multi-instance P3||Cmax problem is strongly NP-hard even when the processing times are in {0, 1}, using a reduction from the NP-complete 3-coloring problem. In the 3-coloring (3C) problem, we are given a graph G(V, E) and we need to decide whether there exists a coloring of its vertices with 3 colors such that no two vertices connected by an edge have the same color.
For every instance G(V, E) of the 3C problem with |V| = n and |E| = m, we construct (in polynomial time) an instance of the multi-instance P3||Cmax with n jobs and N = m processing time vectors. Every edge (i, j) ∈ E corresponds to a processing time vector with jobs i and j having processing time 1 and every other job having processing time 0. It is easy to see that at each time step the makespan is either 1 or 2 and thus the total makespan is at least m and at most 2m.
If there exists a 3-coloring of G then, by assigning every color to a machine, at each time step there will not be two jobs with non-zero processing time on the same machine; thus the makespan will be 1 and the total solution will have cost m. Conversely, if the total solution has cost m, then this means that at every time step the makespan was 1, and by assigning to the jobs of every machine the same color we get a 3-coloring of G. Hence, the multi-instance variation of the P3||Cmax problem is strongly NP-hard.
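As a small illustration of this construction (our own sketch, not code from the paper), the processing-time vectors of the reduction can be generated directly from the edge list:

def reduction_vectors(n_jobs, edges):
    # One processing-time vector per edge (i, j): jobs i and j take time 1, all others time 0.
    return [[1 if k in (i, j) else 0 for k in range(n_jobs)] for (i, j) in edges]

print(reduction_vectors(4, [(0, 1), (1, 2), (2, 3)]))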
B A polynomial time vanishing regret algorithm for GKP (Theorem 6)
B.1 Generalized follow the perturbed leader
For the sake of completeness, we introduce the generalized FTPL (GFTPL) method of Dudik et al. [7], which can be used to achieve a vanishing regret for non-linear objective functions for some discrete problems. The key idea of the GFTPL algorithm is to use common randomness for every feasible action but apply it in a different way. This concept was referred to by the authors of [7] as shared randomness. In their algorithm, the regularization term R(x) of the FTPL algorithm is substituted by the inner product Γ_x · a, where a is a random vector and Γ_x is a vector corresponding to the action x. In FTPL it was sufficient to have Γ_x = x, but in this general setting, Γ_x must be the row of a translation matrix that corresponds to action x.
Definition 4 (Admissible Matrix [7]). A matrix Γ is admissible if its rows are distinct. It is (κ, δ)-admissible if it is admissible and also (i) the number of distinct elements within each column is at most κ and (ii) the distinct elements within each column differ by at least δ.
Definition 5 (Translation Matrix [7]). A translation matrix Γ is a (κ, δ)-admissible matrix with |X| rows and N columns. Since the number of rows is equal to the number of feasible actions, we denote by Γ_x the row corresponding to action x ∈ X. In the general case, Γ ∈ [γ_m, γ_M]^{X×N} and G_γ = γ_M − γ_m is used to denote the diameter of the translation matrix.
From the definition of the translation matrix it becomes clear that the action space X needs to be finite. Note that the number of feasible actions can be exponential in the input size, since we do not need to directly compute the translation matrix. The generalized FTPL algorithm for a maximization problem is presented in algorithmic box 3. At time t, the algorithm decides the perturbed leader as the action that maximizes the total payoff on the observed history plus some noise, given by the inner product of Γ_x and the perturbation vector a. Note that in [7] the algorithm only needs an oracle with an additive error ǫ. We will see later that it works also for a multiplicative error ǫ (more precisely, for an FPTAS).
Algorithm 3: Generalized FTPL algorithm
Data: A (κ, δ)-admissible translation matrix Γ ∈ [γ_m, γ_M]^{X×N}; a perturbation vector a drawn uniformly at random from [0, η]^N.
for t = 1, 2, . . . , T do
    Decide x_t such that ∀x ∈ X:
        ∑_{τ=1}^{t−1} f(x_t, y_τ) + a · Γ_{x_t} ≥ ∑_{τ=1}^{t−1} f(x, y_τ) + a · Γ_x − ǫ
    Observe y_t and gain payoff f(x_t, y_t).
end for
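The decision rule of Algorithm 3 can be mimicked as follows on a small explicit action set (our illustration; in the paper the maximization is of course delegated to the oracle rather than done by brute force):

import random

def gftpl_decide(actions, history, payoff, gamma_row, eta, N):
    # payoff(x, y) is the objective f; gamma_row(x) is the row of the translation matrix for action x.
    a = [random.uniform(0.0, eta) for _ in range(N)]
    def perturbed_total(x):
        return sum(payoff(x, y) for y in history) + sum(ai * gi for ai, gi in zip(a, gamma_row(x)))
    return max(actions, key=perturbed_total)

# Toy usage: two actions, scalar states, payoff x*y, Gamma_x = [x].
print(gftpl_decide([0, 1], history=[2.0, -1.0], payoff=lambda x, y: x * y,
                   gamma_row=lambda x: [float(x)], eta=0.5, N=1))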
Let us denote by G_f the diameter of the objective function, i.e., G_f = max_{x,x′∈X, y,y′∈Y} |f(x, y) − f(x′, y′)|.
Theorem 7 ([7]). By using an appropriate η to draw the random vector, the regret of the generalized FTPL algorithm is:
R_T ≤ N·√( κ·G_f·G_γ·(G_f + 2ǫ)·T / δ ) + ǫT
By setting ǫ = Θ(1/√T), this clearly gives a vanishing regret.
Let us point out two difficulties in using this algorithm. First, the oracle has to solve a problem where the objective function is the sum of a multi-instance version of the offline problem and the perturbation. We will see in Appendix B.2 how we can implement the perturbation mechanism Γ_x · a as the payoff of action x on a set of (random) observations of the problem.
Second, if the multi-instance version is NP-hard, having an efficient algorithm solving the oracle with an additive error ǫ is quite improbable. We remark that the assumption of an additive error ǫ can be replaced by the assumption of the existence of an FPTAS for the oracle. Namely, let us consider a modification of Algorithm 3 where at each time t we compute a solution x_t such that ∀x ∈ X:
∑_{τ=1}^{t−1} f(x_t, y_τ) + a · Γ_{x_t} ≥ (1 − ǫ′) · ( ∑_{τ=1}^{t−1} f(x, y_τ) + a · Γ_x )     (1)
Then, if we use F_M to denote the maximum payoff, i.e., F_M = max_{x∈X, y∈Y} f(x, y), by applying the same analysis as in [7], we can show that by fixing ǫ′ = ǫ / (T·F_M + N·η·γ_M) we are guaranteed to get an action that has at least the same total perturbed payoff as the decision x_t obtained if an additive optimization parameter ǫ were used. The computation is polynomial if we use an FPTAS. Then, we can still get a vanishing regret by using ǫ′ = O(T^{−3/2}) instead of ǫ = O(T^{−1/2}) (considering all parameters of the problem as constants).
As a corollary, we can achieve a vanishing regret for any online learning problem in our setting by assuming access to an oracle OPT that can compute (for any ǫ ′ ) in polynomial time a decision x t satisfying Equation (1).
B.2 Distinguisher sets and a translation matrix for GKP
As noted above, an important issue in the method arises from the perturbation. Until now, the translation matrix Γ could be any (κ, δ)-admissible matrix as long as it had one distinct row for every feasible action in X . However, this matrix has to be considered by the oracle in order to decide x t . In [7] the authors introduce the concept of implementability that overcomes this problem. We present a simplified version of this property. Definition 6 (Distinguisher Set). A distinguisher set for an offline problem P is a set of instances S = {y 1 , y 2 , . . . , y N } ∈ Y N such that for any feasible actions x, x ′ ∈ X :
x ≠ x′ ⇔ ∃j ∈ [N] : f(x, y_j) ≠ f(x′, y_j)
This means that S is a set of instances that "forces" any two different actions to differ in at least one of their payoffs over the instances in S. If we can determine such a set, then we can construct a translation matrix Γ that significantly simplifies our assumptions on the oracle.
Let S = {y 1 , y 2 , . . . , y N } be a distinguisher set for our problem. Then, for every feasible action x ∈ X we can construct the corresponding row of Γ such that: Γ x = [f (x, y 1 ), f (x, y 2 ), . . . , f (x, y N )]
Since S is a distinguisher set, the translation matrix Γ is guaranteed to be admissible. Furthermore, according to the set we can always determine some κ and δ parameters for the translation matrix. By implementing Γ using a distinguisher set, the expression we need to (approximately) maximize at each round can be written as:
∑_{τ=1}^{t−1} f(x, y_τ) + a · Γ_x = ∑_{τ=1}^{t−1} f(x, y_τ) + ∑_{i=1}^{N} a_i · f(x, y_i)
This shows that the perturbations transform into a set of weighted instances, where the weights a_i are randomly drawn from the uniform distribution on [0, η]. This is already a significant improvement, since now the oracle has to consider only weighted instances of the offline problem and not the arbitrary perturbation a · Γ_x we were assuming until now. Furthermore, for a variety of problems (including GKP), we can construct a distinguisher set y_1, . . . , y_N such that:
a·f(x, y_j) = f(x, a·y_j)   ∀a ∈ R, j ∈ [N]
If this is true, then we can shift the random weights of the oracle inside the instances:
∑_{τ=1}^{t−1} f(x, y_τ) + a · Γ_x = ∑_{τ=1}^{t−1} f(x, y_τ) + ∑_{i=1}^{N} f(x, a_i · y_i)
Thus, if we have a distinguisher set for a given problem, to apply GFTPL all we need is an FPTAS for optimizing the total payoff over a set of weighted instances.
We now provide a distinguisher set for the generalized knapsack problem. Consider a set of n instances (p^j, B_j) of the problem such that in instance (p^j, B_j) item j has profit P, all other items have profit 0 and the knapsack capacity is B_j = W_s, where W_s denotes the total weight of all items. Since the total weight of a set of items can never exceed W_s, it is easy to see that ∀x ∈ X:
f(x, p^j, B_j) = P if item j is selected in the set x, and 0 otherwise.
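Concretely, the vector of payoffs of an item set on these n instances, which by the construction above is exactly its row Γ_x, is just the indicator vector of the set scaled by P; a tiny Python sketch (ours):

def gamma_row(selected_items, n, P):
    # Row of the translation matrix for the item set `selected_items` under the distinguisher set:
    # the j-th distinguisher instance pays P exactly when item j is selected.
    return [P if j in selected_items else 0.0 for j in range(n)]

print(gamma_row({0, 2}, n=4, P=5.0))   # [5.0, 0.0, 5.0, 0.0]
print(gamma_row({1}, n=4, P=5.0))      # [0.0, 5.0, 0.0, 0.0]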
For any two different assignments x and x ′ , there is at least one item j ∈ [n] that they don't have in common. It is easy to see that in the corresponding instance (y j , B j ) one of the assignments will have total profit P and the other will have total profit 0. Thus, the proposed set of instances is indeed a distinguisher set for the generalized knapsack problem. We use this set of instances to implement the Γ matrix. Then, every column of Γ will have exactly 2 distinct values 0 and P , making the translation matrix (2, P )-admissible. As a result, in order to achieve a vanishing regret for online learning GKP, all we need is an FPTAS for the multi-instance generalized knapsack problem. | 8,388 |
1907.05944 | 2956358468 | We study various discrete nonlinear combinatorial optimization problems in an online learning framework. In the first part, we address the question of whether there are negative results showing that getting a vanishing (or even vanishing approximate) regret is computational hard. We provide a general reduction showing that many (min-max) polynomial time solvable problems not only do not have a vanishing regret, but also no vanishing approximation @math -regret, for some @math (unless @math ). Then, we focus on a particular min-max problem, the min-max version of the vertex cover problem which is solvable in polynomial time in the offline case. The previous reduction proves that there is no @math -regret online algorithm, unless Unique Game is in @math ; we prove a matching upper bound providing an online algorithm based on the online gradient descent method. Then, we turn our attention to online learning algorithms that are based on an offline optimization oracle that, given a set of instances of the problem, is able to compute the optimum static solution. We show that for different nonlinear discrete optimization problems, it is strongly @math -hard to solve the offline optimization oracle, even for problems that can be solved in polynomial time in the static case (e.g. min-max vertex cover, min-max perfect matching, etc.). On the positive side, we present an online algorithm with vanishing regret that is based on the follow the perturbed leader algorithm for a generalized knapsack problem. | In their breakthrough paper, presented the first efficient online algorithm, called (FTPL), for linear objective functions. The strategy consists of adding perturbation to the cumulative gain (payoff) of each action and then selecting the action with the highest perturbed gain. This strategy has been generalized and successfully applied to several settings @cite_4 @cite_9 @cite_7 @cite_20 . Specifically, FTPL and its generalized versions have been used to design efficient online no-regret algorithms with oracles beyond linear settings: to submodular settings @cite_4 and non-convex settings @cite_6 . However, all these approaches require best-response oracles, and as we show in this paper, for several problems such best-response oracles require exponential time computation. | {
"abstract": [
"We consider an online decision problem over a discrete space in which the loss function is submodular. We give algorithms which are computationally efficient and are Hannan-consistent in both the full information and partial feedback settings.",
"An extensive body of recent work studies the welfare guarantees of simple and prevalent combinatorial auction formats, such as selling m items via simultaneous second price auctions (SiSPAs) [1], [2], [3]. These guarantees hold even when the auctions are repeatedly executed and the players use no-regret learning algorithms to choose their actions. Unfortunately, off-the-shelf no-regret learning algorithms for these auctions are computationally inefficient as the number of actions available to the players becomes exponential. We show that this obstacle is inevitable: there are no polynomial-time no-regret learning algorithms for SiSPAs, unless RP ⊆ NP, even when the bidders are unit-demand. Our lower bound raises the question of how good outcomes polynomially-bounded bidders may discover in such auctions. To answer this question, we propose a novel concept of learning in auctions, termed \"no-envy learning.\" This notion is founded upon Walrasian equilibrium, and we show that it is both efficiently implementable and results in approximately optimal welfare, even when the bidders have valuations from the broad class of fractionally subadditive (XOS) valuations (assuming demand oracle access to the valuations) or coverage valuations (even without demand oracles). No-envy learning outcomes are a relaxation of no-regret learning outcomes, which maintain their approximate welfare optimality while endowing them with computational tractability. Our positive and negative results extend to several auction formats that have been studied in the literature via the smoothness paradigm. Our positive results for XOS valuations are enabled by a novel Follow-The-Perturbed-Leader algorithm for settings where the number of experts and states of nature are both infinite, and the payoff function of the learner is non-linear. We show that this algorithm has applications outside of auction settings, establishing significant gains in a recent application of no-regret learning in security games. Our efficient learning result for coverage valuations is based on a novel use of convex rounding schemes and a reduction to online convex optimization.",
"We provide the first oracle efficient sublinear regret algorithms for adversarial versions of the contextual bandit problem. In this problem, the learner repeatedly makes an action on the basis of a context and receives reward for the chosen action, with the goal of achieving reward competitive with a large class of policies. We analyze two settings: i) in the transductive setting the learner knows the set of contexts a priori, ii) in the small separator setting, there exists a small set of contexts such that any two policies behave differently on one of the contexts in the set. Our algorithms fall into the Follow-The-Perturbed-Leader family (Kalai & Vempala, 2005) and achieve regret O(T3 4√K log(N)) in the transductive setting and O(T2 3d3 4K√log(N)) in the separator setting, where T is the number of rounds, K is the number of actions, N is the number of base-line policies, and d is the size of the separator. We actually solve the more general adversarial contextual semi-bandit linear optimization problem, whilst in the full information setting we address the even more general contextual combinatorial optimization. We provide several extensions and implications of our algorithms, such as switching regret and efficient learning with predictable sequences.",
"",
"We consider the design of computationally efficient online learning algorithms in an adversarial setting in which the learner has access to an offline optimization oracle. We present an algorithm called Generalized Followthe- Perturbed-Leader and provide conditions under which it is oracle-efficient while achieving vanishing regret. Our results make significant progress on an open problem raised by Hazan and Koren [1], who showed that oracle-efficient algorithms do not exist in full generality and asked whether one can identify conditions under which oracle-efficient online learning may be possible. Our auction-design framework considers an auctioneer learning an optimal auction for a sequence of adversarially selected valuations with the goal of achieving revenue that is almost as good as the optimal auction in hindsight, among a class of auctions. We give oracle-efficient learning results for: (1) VCG auctions with bidder-specific reserves in singleparameter settings, (2) envy-free item-pricing auctions in multiitem settings, and (3) the level auctions of Morgenstern and Roughgarden [2] for single-item settings. The last result leads to an approximation of the overall optimal Myerson auction when bidders’ valuations are drawn according to a fast-mixing Markov process, extending prior work that only gave such guarantees for the i.i.d. setting.We also derive various extensions, including: (1) oracleefficient algorithms for the contextual learning setting in which the learner has access to side information (such as bidder demographics), (2) learning with approximate oracles such as those based on Maximal-in-Range algorithms, and (3) no-regret bidding algorithms in simultaneous auctions, which resolve an open problem of Daskalakis and Syrgkanis [3]."
],
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_6",
"@cite_20"
],
"mid": [
"169739242",
"2963146395",
"2256838191",
"2965154744",
"2964245654"
]
} | Online-Learning for min-max discrete problems | Over the past years, online learning has become a very active research field. This is due to the widespread of applications with evolving or adversarial environments, e.g. routing schemes in networks [3], online marketplaces [5], spam filtering [11], etc. An online learning algorithm has to choose an action over a (possible infinite) set of feasible decisions. A loss/reward is associated to each decision which may be adversarially chosen. The losses/rewards are unknown to the algorithm beforehand. The goal is to minimize the regret, i.e. the difference between the total loss/reward of the online algorithm and that of the best single action in hindsight. A "good" online learning algorithm is an algorithm whose regret is sublinear as a function of the length of the time-horizon since then, on the average, the algorithm performs as well as the best single action in hindsight. Such an online algorithm is called an online learning algorithm with vanishing regret. For problems for which the offline version is N P -hard, the notions of regret and vanishing regret have been extended to the notions of α-regret and α-vanishing regret in order to take into account the existence of an α-approximation algorithm instead of an exact algorithm for solving the offline optimization problem.
While many online learning problems can be modeled as the so-called "experts problem" by associating a feasible solution to each expert, there is clearly an efficiency challenge: since there are potentially exponentially many solutions, such an approach is problematic in practice. Other methods have therefore been used, such as online gradient descent [24], the follow the leader algorithm and its extensions: follow the perturbed leader [15] for linear objective functions, its generalization to submodular objective functions [12], and the generalized follow the perturbed leader [7] algorithm. Hazan and Koren [13] proved that a no-regret algorithm with running time polynomial in the size of the problem does not exist in general settings without any assumption on the structure of the problem.
Our work takes into account the computational efficiency of the online learning algorithm in the same vein as the works in [1,15,12,22,6,7,14,9]. We study various discrete nonlinear combinatorial optimization problems in an online learning framework, focusing in particular on the family of min-max discrete optimization problems.
Our goal is to address the two following central questions:
(Q1) are there negative results showing that getting vanishing regret (or even vanishing approximate regret) is computationally hard?
(Q2) are there some notable differences in the efficiencies of follow the leader and gradient descent strategies for discrete problems?
Formally. An online learning problem consists of a decision-space X , a state-space Y and an objective function f : X × Y → R that can be either a cost or a reward function. Any problem of this class can be viewed as an iterative adversarial game with T rounds where the following procedure is repeated for t = 1, . . . , T : (a) Decide an action x t ∈ X , (b) Observe a state y t ∈ Y, (c) Suffer loss or gain reward f t (x t ) = f (x t , y t ).
We use f t (x) as another way to refer to the objective function f after observing the state y t , i.e. the objective function at round t.
The objective of the player is to minimize/maximize the cumulative cost/reward of his decided actions, which is given by the aggregation $\sum_{t=1}^{T} f(x_t, y_t)$. An online learning algorithm is any algorithm that decides the action $x_t$ at every round before observing $y_t$. We compare the decisions $(x_1, \ldots, x_T)$ of the algorithm with those of the best static action in hindsight, defined as $x^* = \arg\min_{x \in \mathcal{X}} \sum_{t=1}^{T} f(x, y_t)$ or $x^* = \arg\max_{x \in \mathcal{X}} \sum_{t=1}^{T} f(x, y_t)$, for minimization or maximization problems, respectively. This is the action that a (hypothetical) offline oracle would compute, if it had access to the entire sequence $y_1, \ldots, y_T$. The typical measurement for the efficiency of an online learning algorithm is the regret, defined as:
$R_T = \left| \sum_{t=1}^{T} f(x_t, y_t) - \sum_{t=1}^{T} f(x^*, y_t) \right|.$
A learning algorithm typically uses some kind of randomness, and the regret denotes the expectation of the above quantity. We are interested in online learning algorithms that have the "vanishing regret" property. This means that as the "game" progresses ($T \to \infty$), the deviation between the algorithm's average cost/payoff and the average cost/payoff of the optimum action in hindsight tends to zero. Typically, a vanishing regret algorithm is an algorithm with regret $R_T$ such that $\lim_{T \to \infty} R_T / T = 0$. However, as we are interested in polynomial time algorithms, we consider only vanishing regret $R_T = O(T^c)$ where $0 \le c < 1$ (which guarantees convergence in polynomial time). Throughout the paper, whenever we mention vanishing regret, we mean regret $R_T = O(T^c)$ where $0 \le c < 1$.
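To make the protocol and the regret definition concrete, here is a small illustrative Python sketch; it is not taken from the paper, and the interfaces (algorithm.decide, adversary.reveal, the finite decision_space) are our own assumptions.

def regret(decisions, states, decision_space, f):
    # Empirical regret |sum_t f(x_t, y_t) - min_x sum_t f(x, y_t)| for a minimization problem.
    total = sum(f(x_t, y_t) for x_t, y_t in zip(decisions, states))
    best_static = min(sum(f(x, y_t) for y_t in states) for x in decision_space)
    return abs(total - best_static)

def online_game(algorithm, adversary, T):
    # Run T rounds: the learner commits to x_t before the state y_t is revealed.
    decisions, states = [], []
    for t in range(T):
        x_t = algorithm.decide(list(zip(decisions, states)))  # decision based on history only
        y_t = adversary.reveal(t, decisions)                   # possibly adaptive adversary
        decisions.append(x_t)
        states.append(y_t)
    return decisions, states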
For many online learning problems, even their offline versions are N P -hard. Thus, it is not feasible to produce a vanishing regret sequence with an efficient algorithm. For such cases, the notion of α-regret has been defined as:
$R^{\alpha}_T = \sum_{t=1}^{T} f(x_t, y_t) - \alpha \sum_{t=1}^{T} f(x^*, y_t).$
Hence, we are interested in vanishing α-regret sequences for some α for which we know how to approximate the offline problem. The notion of vanishing α-regret is defined in the same way as that of vanishing regret. In this article we focus on computational issues. Efficiency for an online learning algorithm needs to capture both the computation of x t and the convergence speed. This is formalized in the following definition (where n denotes the size of the instance).
Definition 1.
A polynomial time vanishing $\alpha$-regret algorithm is an online learning algorithm for which (1) the computation of $x_t$ is polynomial in $n$ and $t$, and (2) the expected $\alpha$-regret is bounded by $p(n)T^c$ for some polynomial $p$ and some constant $0 \le c < 1$.
Note that in case α = 1, we simply use the term polynomial time vanishing regret algorithm.
Our contribution
In Section 2, we provide a general reduction showing that many (min-max) polynomial time solvable problems not only do not have a vanishing regret, but also no vanishing approximation α-regret, for some α (unless N P = BP P ). Then, we focus on a particular min-max problem, the min-max version of the vertex cover problem which is solvable in polynomial time in the offline case. The previous reduction proves that there is no (2 − ǫ)-regret online algorithm, unless Unique Game is in BP P ; we prove a matching upper bound providing an online algorithm based on the online gradient descent method.
In Section 3, we turn our attention to online learning algorithms that are based on an offline optimization oracle that, given a set of instances of the problem, is able to compute the optimum static solution. We show that for different nonlinear discrete optimization problems, it is strongly N P -hard to solve the offline optimization oracle, even for problems that can be solved in polynomial time in the static case (e.g. min-max vertex cover, min-max perfect matching, etc.). We also prove that the offline optimization oracle is strongly N P -hard for the problem of scheduling a set of jobs on m identical machines, where m is a fixed constant. To the best of our knowledge, up to now algorithms based on the follow the leader method for non-linear objective functions require an exact oracle or a FPTAS oracle in order to obtain vanishing regret. Thus, strong N P -hardness for the multiple instance version of the offline problem indicates that follow-the-leader-type strategies can't be used for the online problem, at least with our current knowledge. On the positive side, we present an online algorithm with vanishing regret that is based on the follow the perturbed leader algorithm for a generalization of knapsack problem [2].
Hardness of online learning for min-max problems
General reduction
As mentioned in the introduction, in this section we give some answers to question (Q1) on ruling out the existence of vanishing regret algorithm for a broad family of online min-max problems, even for ones that are polynomial-time solvable in the offline case. In fact, we provide a general reduction (see Theorem 1) showing that many min-max problems do not admit vanishing α-regret for some α > 1 unless N P = BP P .
More precisely, we focus on a class of cardinality minimization problems where, given an n-elements set U , a set of constraints C on the subsets of U (defining feasible solutions) and an integer k, the goal is to determine whether there exists a feasible solution of size at most k. This is a general class of problems, including for instance graph problems such as Vertex Cover, Dominating Set, Feedback Vertex Set, etc.
Given such a cardinality problem P, let min-max-P be the optimization problem where given nonnegative weights for all the elements of U , one has to compute a feasible solution (under the same set of constraints C as in problem P) such that the maximum weight of all its elements is minimized. The online min-max-P problem is the online learning variant of min-max-P , where the weights on the elements of U change over time.
Interestingly, the min-max version of all the problems mentioned above are polynomially solvable. This is actually true as soon as, for problem P, every superset of a feasible solution is feasible. Then one just has to check for each possible weight w if the set of all elements of weight at most w agrees with the constraints. For example, one can decide if there exists a vertex cover with the maximum weight w as follows: remove all vertices of weight strictly larger than w, and check if the remaining vertices form a vertex cover.
We will show that, in contrast, if P is N P -complete then its online learning min-max version has no vanishing regret algorithm (unless N P = BP P ), and that if P has an inapproximability gap r, then there is no vanishing (r − ǫ)-regret for its online learning min-max version. Let us first recall the notion of approximation gap, where x opt denotes the minimum size of a feasible solution to the cardinality problem P.
Definition 2. Given two numbers $0 \le A < B \le 1$, let [A,B]-Gap-P be the decision problem where, given an instance of P such that $|x_{opt}| \le An$ or $|x_{opt}| \ge Bn$, we need to decide whether $|x_{opt}| < Bn$.
Now we can state the main result of the section.
Theorem 1. Let P be a cardinality minimization problem and $A$, $B$ be real numbers with $0 \le A < B \le 1$. Assume that the problem [A,B]-Gap-P is NP-complete. Then, for every $\alpha \le \frac{B}{A} - \epsilon$, where $\epsilon$ is an arbitrarily small constant, there is no polynomial time vanishing $\alpha$-regret algorithm for online min-max-P unless $NP = BPP$.
Proof. We prove this theorem by deriving a polynomial time algorithm for [A,B]-Gap-P that gives, under the assumption of a vanishing $\alpha$-regret algorithm for online min-max-P, the correct answer with probability of error at most $1/3$. This would imply that the [A,B]-Gap-P problem is in BPP and thus $NP = BPP$. Let O be a vanishing $\alpha$-regret algorithm for online min-max-P for some $\alpha \le \frac{B}{A} - \epsilon = (1 - \epsilon')\frac{B}{A}$, where $\epsilon > 0$ is a constant and $\epsilon' = \frac{A}{B}\epsilon$.
Let $T$ be a time horizon which will be fixed later. We construct the following (offline) algorithm for [A,B]-Gap-P using O as an oracle (subroutine). At every step $1 \le t \le T$, use the oracle O to compute a solution $x_t$. Then, choose one element of $x_t$ uniformly at random and assign weight 1 to that element; assign weight 0 to all other elements. Consequently, the cost incurred to O is 1 at every step. These weight assignments, though simple, are crucial. Intuitively, the assignments will be used to learn about the optimal solution of the [A,B]-Gap-P problem (given the performance of the learning algorithm O). The formal description is given in Algorithm 1.
Algorithm 1: Algorithm for the [A,B]-Gap-P problem
1 for t = 1, 2, ..., T do
2     Compute $x_t \in \mathcal{X}$ using algorithm O.
3     if $|x_t| < Bn$ then return Yes, i.e., $|x_{opt}| \le An$.
4     Assign weight 1 to an element of $x_t$ chosen uniformly at random and weight 0 to all other elements of $U$.
5     Feed the weight vector and the cost $f_t(x_t) = \max_{u \in x_t} w_t(u)$ back to O.
6 end
7 return No, i.e., $|x_{opt}| \ge Bn$.
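The reduction can be turned into a short Python sketch. Everything interface-related here is assumed rather than specified by the paper: the online learner O is an object with decide() and feed() methods, U is the ground set, and weight vectors are dictionaries over U.

import random

def gap_decision(oracle, U, B, n, T):
    # Decide the [A,B]-Gap instance: "yes" means |x_opt| <= A*n, "no" means |x_opt| >= B*n.
    for t in range(T):
        x_t = oracle.decide()                    # feasible solution proposed by O
        if len(x_t) < B * n:
            return "yes"                         # O exhibited a small feasible solution
        u = random.choice(list(x_t))             # one uniformly random element of x_t
        w_t = {v: 0 for v in U}
        w_t[u] = 1                               # only this element has weight 1
        cost = max(w_t[v] for v in x_t)          # always 1 by construction
        oracle.feed(w_t, cost)                   # reveal the state y_t = w_t to O
    return "no"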
We now analyze Algorithm 1. If the algorithm outputs $|x_{opt}| \le An$, this means that at some step $t$ the oracle O has found a feasible solution $x_t$ with $|x_t| < Bn$. Since $x_{opt}$ (the minimum cardinality feasible solution) is known to satisfy either $|x_{opt}| \le An$ or $|x_{opt}| \ge Bn$, the output is always correct.
If the algorithm outputs $|x_{opt}| \ge Bn$, then this means that every solution $x_t$ had cardinality greater than or equal to $Bn$. We bound the probability that Algorithm 1 returns a wrong answer in this case. Let $R_T$ be the $\alpha$-regret achieved by the oracle (online learning algorithm) O on the set of instances produced in Algorithm 1. Let $E$ denote the event that the algorithm returns a wrong answer. By the law of total expectation (Adam's Law), we have:
$E[R_T] = E[R_T \mid E]\,P[E] + E[R_T \mid \neg E]\,P[\neg E] \ge E[R_T \mid E]\,P[E] \;\Rightarrow\; P[E] \le \frac{E[R_T]}{E[R_T \mid E]}$
From Algorithm 1 it should be clear that at every step, the oracle O always suffers loss 1. By definition of α-regret, this means that:
$E[R_T \mid E] = T - \alpha \min_{x \in \mathcal{X}} \sum_{t=1}^{T} E[f_t(x) \mid E].$
Now, we consider a minimum cardinality feasible solution $x_{opt}$ (for the initial instance of the cardinality minimization problem P). We have $\min_{x \in \mathcal{X}} \sum_{t=1}^{T} E[f_t(x) \mid E] \le \sum_{t=1}^{T} E[f_t(x_{opt}) \mid E].$
As Algorithm 1 returns a wrong answer, $|x_{opt}| \le An$ and at every time $t$, $x_t$ has at least $Bn$ elements. Furthermore, by the construction of the weights, there is only one element with weight 1. Thus, $f_t(x_{opt}) = 1$ with probability at most $|x_{opt}|/|x_t| \le A/B$ (and $f_t(x_{opt}) = 0$ otherwise). Thus, we get:
$\min_{x \in \mathcal{X}} \sum_{t=1}^{T} E[f_t(x) \mid E] \le \frac{A}{B} T \;\Rightarrow\; E[R_T \mid E] \ge T\left(1 - \alpha \frac{A}{B}\right) \ge \epsilon' \cdot T,$ since $\alpha \le (1 - \epsilon')\frac{B}{A}$. Hence, $P[E] \le \frac{E[R_T]}{\epsilon' \cdot T}$. As O has vanishing $\alpha$-regret, there exists a constant $0 \le c < 1$ such that $E[R_T] \le p(n)T^c$, where $p(n)$ is a polynomial in the problem parameters. Therefore,
$P[E] \le \frac{p(n)T^c}{\epsilon' \cdot T} = \frac{p(n)T^{c-1}}{\frac{A}{B}\cdot\epsilon}.$
Choosing the parameter $T = \left(\frac{\epsilon A}{3p(n)B}\right)^{\frac{1}{c-1}}$, we get that $P[E] \le \frac{1}{3}$. Besides, the running time of Algorithm 1 is polynomial since it consists of $T$ (polynomial in the size of the problem) iterations and the running time of each iteration is polynomial (as O is a polynomial time algorithm).
In conclusion, if there exists a vanishing $\alpha$-regret algorithm for online min-max-P, then the NP-complete problem [A,B]-Gap-P is in BPP, implying $NP = BPP$.
The inapproximability (gap) results for the aforementioned problems give lower bounds on the approximation ratio $\alpha$ of any vanishing $\alpha$-regret algorithm for their online min-max version. For instance, the online min-max dominating set problem has no vanishing constant-regret algorithm, based on the approximation hardness in [19]. We state the lower bound explicitly for the online min-max vertex cover problem in the following corollary, as we refer to it later by showing a matching upper bound. The bounds are based on the hardness results for vertex cover in [17] and [16] (NP-hardness and UGC-hardness, respectively).
Corollary 2. There is no polynomial time vanishing $(\sqrt{2} - \epsilon)$-regret algorithm for online min-max vertex cover unless $NP = BPP$, and there is no polynomial time vanishing $(2 - \epsilon)$-regret algorithm unless Unique Games is in BPP.
Now, consider NP-complete cardinality problems which have no known inapproximability gap (for instance Vertex Cover in planar graphs, which admits a PTAS). Then we can show the following impossibility result. Corollary 3. If a cardinality problem P is NP-complete, then there is no vanishing regret algorithm for online min-max-P unless $NP = BPP$.
Proof. We note that the proof of Theorem 1 does not require $A$, $B$ and $\alpha$ to be constant: they can be functions of the instance, and the result holds as soon as $1/\left(1 - \alpha\frac{A}{B}\right)$ is polynomially bounded (so that $T$ remains polynomially bounded in $n$). Then, for a cardinality problem P, if $A = k/n$ and $B = \frac{k+1}{n} = A + \frac{1}{n}$, then deciding whether $|x_{opt}| \le k$ is the same as deciding whether $|x_{opt}| \le An$ or $|x_{opt}| \ge Bn$. By setting $\alpha = 1$, $A = k/n$ and $B = \frac{k+1}{n}$ in the proof of Theorem 1 we get the result.
Min-max Vertex Cover: matching upper bound with Gradient Descent
In this section we will present an online algorithm for the min-max vertex cover problem based on the classic Online Gradient Descent (OGD) algorithm. In the latter, at every step the solution is obtained by updating the previous one in the direction of the (sub-)gradient of the objective and projecting to a feasible convex set. The particular nature of the min-max vertex cover problem is that the objective function is a (weighted) $\ell_\infty$ norm and the set of feasible solutions is discrete (non-convex). In our algorithm, we consider the following standard relaxation of the problem:
$\min \max_{i \in V} w_i x_i \quad \text{s.t.} \quad x \in Q: \;\; x_i + x_j \ge 1 \;\; \forall (i,j) \in E, \qquad 0 \le x_i \le 1 \;\; \forall i \in V.$
At time step $t$, we update the solution by a sub-gradient $g_t(x^t) = [0, \ldots, 0, w^t_{i_t}, 0, \ldots, 0]$, which has its single non-zero entry $w^t_{i_t}$ in coordinate $i_t(x^t) = \arg\max_{1 \le i \le n} w^t_i x^t_i$.
Algorithm 2: Online gradient descent for min-max vertex cover
1 for t = 1, 2, ..., T do
2     Play $X^t \in \{0,1\}^n$. Observe $w^t$ (weights of vertices) and incur the cost $\max_i w^t_i X^t_i$.
3     Compute the sub-gradient $g_t(x^t)$.
4     Update $y^{t+1} = x^t - \frac{1}{\sqrt{t}}\, g_t(x^t)$.
5     Project $y^{t+1}$ onto $Q$ w.r.t. the $\ell_2$-norm: $x^{t+1} = \mathrm{Proj}_Q(y^{t+1}) := \arg\min_{x \in Q} \|y^{t+1} - x\|_2$.
6     Round $x^{t+1}$ to $X^{t+1}$: $X^{t+1}_i = 1$ if $x^{t+1}_i \ge 1/2$ and $X^{t+1}_i = 0$ otherwise.
7 end
The following theorem, coupled with Corollary 2, shows the tight bound of 2 on the approximation ratio of polynomial-time online algorithms for Min-max Vertex Cover (assuming the Unique Games Conjecture).
Theorem 4.
Assume that $W = \max_{1 \le t \le T} \max_{1 \le i \le n} w^t_i$. Then, after $T$ time steps, Algorithm 2 achieves
$\sum_{t=1}^{T} \max_{1 \le i \le n} w^t_i X^t_i \;\le\; 2 \cdot \min_{X^*} \sum_{t=1}^{T} \max_{1 \le i \le n} w^t_i X^*_i \;+\; 3W\sqrt{nT}.$
Proof. By the OGD algorithm (see [24] or [11, Chapter 3]), we have
$\sum_{t=1}^{T} \max_{1 \le i \le n} w^t_i x^t_i \;\le\; \min_{x^* \in Q} \sum_{t=1}^{T} \max_{1 \le i \le n} w^t_i x^*_i \;+\; \frac{3DG}{2}\sqrt{T},$ where $D = \max_{x, x' \in Q} \|x - x'\|_2 \le \sqrt{n}$ is the diameter of $Q$ and $G = \max_{1 \le t \le T} \|g_t\|_2 \le W$.
Moreover, by the rounding procedure, it always holds that
$\max_{i=1,\ldots,n} X^t_i w^t_i \;\le\; 2 \max_{i=1,\ldots,n} x^t_i w^t_i.$
Combining these inequalities, the theorem follows.
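The following Python sketch illustrates the update-project-round loop of Algorithm 2. The Euclidean projection onto Q is delegated to a general-purpose solver (SciPy's SLSQP) purely for illustration; the paper does not prescribe how the projection is computed, and weight vectors are assumed to arrive as NumPy arrays.

import numpy as np
from scipy.optimize import minimize

def project_onto_Q(y, edges, n):
    # arg min_{x in Q} ||y - x||_2, with Q = {x in [0,1]^n : x_i + x_j >= 1 for (i,j) in E}.
    cons = [{"type": "ineq", "fun": lambda x, i=i, j=j: x[i] + x[j] - 1.0} for (i, j) in edges]
    res = minimize(lambda x: np.sum((x - y) ** 2), x0=np.clip(y, 0, 1),
                   bounds=[(0.0, 1.0)] * n, constraints=cons, method="SLSQP")
    return res.x

def ogd_minmax_vertex_cover(edges, n, weight_stream):
    x = np.ones(n)                          # a trivially feasible fractional start
    total_cost = 0.0
    for t, w in enumerate(weight_stream, start=1):
        w = np.asarray(w, dtype=float)
        X = (x >= 0.5).astype(int)          # rounded integral cover actually played
        total_cost += max((w[i] for i in range(n) if X[i] == 1), default=0.0)
        i_star = int(np.argmax(w * x))      # coordinate achieving max_i w_i x_i
        g = np.zeros(n)
        g[i_star] = w[i_star]               # sub-gradient of x -> max_i w_i x_i
        x = project_onto_Q(x - g / np.sqrt(t), edges, n)
    return total_cost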
Computational issues for Follow the Leader based methods
The most natural approach in online learning is for the player to always pick the leading action, i.e. the action $x_t$ that is optimal for the observed history $y_1, \ldots, y_{t-1}$. However, it can be proven ([15]) that any deterministic algorithm that always decides on the leading action can be "tricked" by the adversary into making decisions that are worse than the optimal action in hindsight, thus leading to algorithms with large regret. In this regard, we need to add a regularization term containing randomness to the optimization oracle in order to make our algorithms less predictable and more stable. Thus, the Follow the Regularized Leader strategy in a minimization problem consists of deciding on an action $x_t$ such that:
$x_t = \arg\min_{x \in \mathcal{X}} \left\{ \sum_{\tau=1}^{t-1} f(x, y_\tau) + R(x) \right\}$
where R(x) is the regularization term.
There are many variations of the Follow the Leader (FTL) algorithm that differentiate on the applied objective functions and the type of regularization term. For linear objectives, Kalai and Vempala [15] suggested the Follow the Perturbed Leader algorithm where the regularization term is simply the cost/payoff of each action on a randomly generated instance of the problem. Dudik et al. [7] were able to generalize the FTPL algorithm of Kalai and Vempala [15] for non-linear objectives, by introducing the concept of shared randomness and a much more complex perturbation mechanism.
A common element of every Follow the Leader based method is the need for an optimization oracle over the observed history of the problem. This is a minimum requirement since the regularization term can make determining the leader even harder, but most algorithms are able to map the perturbations to the value of the objective function on a set of instances of the problem and thus eliminate this extra complexity. To the best of our knowledge, up to now FTL algorithms for non-linear objective functions require an exact or an FPTAS oracle in order to obtain vanishing regret. Thus, strong NP-hardness for the multiple instance version of the offline problem indicates that the FTL strategy cannot be used for the online problem, at least with our current knowledge.
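For intuition, here is a minimal Python sketch of follow the perturbed leader for linear payoffs in the spirit of [15]; the exact offline oracle offline_oracle(c), returning a maximizer of the perturbed cumulative payoff vector c over the action set, is an assumed black box.

import random

def ftpl_linear(offline_oracle, payoff_stream, n, eta):
    noise = [random.uniform(0.0, eta) for _ in range(n)]   # one perturbation, reused every round
    cumulative = [0.0] * n
    decisions = []
    for p_t in payoff_stream:                              # p_t: payoff vector of round t
        c = [cumulative[i] + noise[i] for i in range(n)]
        decisions.append(offline_oracle(c))                # leader on the perturbed history
        cumulative = [cumulative[i] + p_t[i] for i in range(n)]
    return decisions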
Computational hardness results
As we mentioned, algorithms that use the "Follow the Leader" strategy heavily rely on the existence of an optimization oracle for the multi-instance version of the offline problem. For linear objectives, it is easy to see ( [15]) that optimization over a set of instances is equivalent to optimization over a single instance and thus any algorithm for the offline problem can be transformed to an online learning algorithm. However, for non-linear problems this assumption is not always justified since even when the offline problem is polytime-solvable, the corresponding variation with multiple instances can be strongly N P -hard.
In this section we present some problems where we can prove that the optimum solution over a set of instances is hard to approximate. More precisely, in the multi-instance version of a given problem, we are given an integer $N > 0$, a set of feasible solutions $\mathcal{X}$, and $N$ objective functions $f_1, \ldots, f_N$ over $\mathcal{X}$. The goal is to minimize (over $\mathcal{X}$) $\sum_{i=1}^{N} f_i(x)$. We will show computational hardness results for the multi-instance versions of:
• min-max vertex cover (already defined).
• min-max perfect matching, where we are given an undirected graph G(V, E) and a weight function w : E → R + on the edges and we need to determine a perfect matching such that the weight of the heaviest edge on the matching is minimized.
• min-max path, where we are given an undirected graph G(V, E), two vertices s and t, and a weight function w : E → R + on the edges and we need to determine an s − t path such that the weight of the heaviest edge in the path is minimized.
• P 3||Cmax, where we are given 3 identical parallel machines, a set of n-jobs J = {j 1 , . . . , j n } and processing times p : J → R + and we need to determine a schedule of the jobs to the machines (without preemption) such that the makespan, i.e. the time that elapses until the last job is processed, is minimized.
Hence, in the multi-instance versions of these problems, we are given N weight functions over vertices (min-max vertex cover) or edges (min-max perfect matching, min-max path), or N processing time vectors (P 3||Cmax).
Theorem 5.
The multi-instance versions of min-max vertex cover, min-max perfect matching, min-max path and P 3||Cmax are strongly N P -hard.
Proof. Here we present the proof for the multi-instance version of the min-max perfect matching and the min-max path problems, which use a similar reduction from the Max-3-DNF problem. The proofs for multi-instance min-max vertex cover and multi-instance P 3||Cmax can be found at appendices A.1 and A.2 respectively. In the Max-3-DNF problem, we are given a set of n boolean variables X = {x 1 , . . . , x n } and m clauses C 1 , . . . , C m that are conjunctions of three variables in X or their negations and we need to determine a truth assignment σ : X → {T, F } such that the number of satisfied clauses is maximized.
We start with the multi-instance min-max perfect matching problem. For every instance I of the Max-3-DNF problem we construct a graph G(V, E) and m weight functions defined as follows:
• To each variable $x_i$ is associated a 4-cycle on vertices $(u_i, u^t_i, \bar{u}_i, u^f_i)$. This 4-cycle has two perfect matchings: either $u_i$ is matched with $u^t_i$ and $\bar{u}_i$ is matched with $u^f_i$, corresponding to setting the variable $x_i$ to true, or vice-versa, corresponding to setting $x_i$ to false. This specifies a one-to-one correspondence between the solutions of the two problems.
• Each weight function corresponds to one conjunction: $w_j(u_i u^t_i) = 1$ if $\neg x_i \in C_j$ and $w_j(u_i u^f_i) = 1$ if $x_i \in C_j$; all other edge weights under $w_j$ are 0 (in particular, edges incident to the vertices $\bar{u}_i$ always get weight 0). The above construction can obviously be done in time polynomial in the size of the input. It remains to show the correlation between the objective values of these solutions. If a clause $C_j$ is satisfied by a truth assignment $\sigma$ then (since it is a conjunction) every literal in the clause must be satisfied. From the construction of the instance $I'$ of multi-instance min-max matching, the corresponding matching $M_\sigma$ will have a maximum weight of 0 for the weight function $w_j$. If a clause $C_j$ is not satisfied by a truth assignment, then the corresponding matching $M_\sigma$ will have a maximum weight of 1 for the weight function $w_j$. Thus, from the reduction we get
$val(I, \sigma) = m - val(I', M_\sigma)$
where $val$ stands for the value of a solution. This equation already proves the hardness result of Theorem 5. It actually also shows APX-hardness. Indeed, the optimal value OPT of Max-3-DNF satisfies $m/8 \le OPT \le m$. Assuming the existence of a $(1+\epsilon)$-approximation algorithm for the multi-instance min-max perfect matching problem, we can get a $(1-7\epsilon)$-approximation algorithm for Max-3-DNF. Since Max-3-DNF is APX-hard, multi-instance min-max perfect matching is also APX-hard.
A similar reduction leads to the same result for the min-max path problem: starting from an instance of Max-3-DNF, build a graph $G$ with $V = \{v_0, v_1, \ldots, v_n\}$. Vertex $v_i$ corresponds to variable $x_i$. There are two parallel edges $e^t_i$ and $e^f_i$ between $v_{i-1}$ and $v_i$. We are looking for $v_0$-$v_n$ paths. Taking edge $e^t_i$ (resp. $e^f_i$) corresponds to setting $x_i$ to true (resp. false). As previously, this gives a one-to-one correspondence between solutions. Each clause corresponds to one weight function: if $x_i \in C_j$ then $w_j(e^f_i) = 1$, and if $\neg x_i \in C_j$ then $w_j(e^t_i) = 1$. All other weights are 0. Then, for a $v_0$-$v_n$ path $P$, $w_j(P) = 0$ if and only if $C_j$ is satisfied by the corresponding truth assignment. The remainder of the proof is exactly the same as the one for min-max perfect matching.
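The construction of the weight functions for the min-max path reduction can be sketched as follows (clauses are encoded as lists of signed variable indices, +i for x_i and -i for its negation; this encoding is our choice, not the paper's).

def build_path_weights(n_vars, clauses):
    # One 0/1 weight function per clause C_j on the 2n parallel edges e_i^t, e_i^f.
    weight_functions = []
    for clause in clauses:
        w = {("t", i): 0 for i in range(1, n_vars + 1)}
        w.update({("f", i): 0 for i in range(1, n_vars + 1)})
        for lit in clause:
            i = abs(lit)
            w[("f", i) if lit > 0 else ("t", i)] = 1   # penalize the edge falsifying the literal
        weight_functions.append(w)
    return weight_functions

# A v_0 - v_n path picks one edge per variable; its maximum weight under w_j is 0
# exactly when the corresponding truth assignment satisfies clause C_j.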
Theorem 5 gives insight into the hardness of non-linear multi-instance problems compared to their single-instance counterparts. As we proved, multi-instance P3||Cmax is strongly NP-hard while P3||Cmax is known to admit an FPTAS [20,23]. Also, the multi-instance versions of min-max perfect matching, min-max path and min-max vertex cover are proved to be APX-hard while their single-instance versions can be solved in polynomial time. We also note that these hardness results hold for the very specific case where weights/processing times are in {0, 1}, for which P||Cmax, as well as the other problems, become trivial.
We also note that the inapproximability bound we obtained for multi-instance min-max vertex cover under UGC is tight, since we can formulate the problem as a linear program, solve its continuous relaxation and then use a rounding algorithm to get a vertex cover of cost at most twice the optimum for the problem.
The results on the min-max vertex cover problem also provides some answer to question (Q2) addressed in the introduction. As we proved in Section 2.2, the online gradient descent method (paired with a rounding algorithm) suffices to give a vanishing 2-regret algorithm for online min-max vertex cover. However, since the multi-instance version of the problem is APX-hard there is no indication that the follow the leader approach can be used in order to get the same result and match the lower bound of Corollary 2 for the problem.
Online generalized knapsack problem
In this section we present a vanishing regret algorithm for the online learning version of the following generalized knapsack problem. In the traditional knapsack problem, one has to select a set of items with total weight not exceeding a fixed "knapsack" capacity B and maximize the total profit of the set. Instead, we assume that the knapsack can be customized to fit more items. Specifically, there is a capacity B and, if the total weight of the items exceeds this capacity, then we have to pay c times the extra weight. Formally:
Definition 3 (Generalized Knapsack Problem (GKP)). Given a set of items $i = 1, 2, \ldots, n$ with non-negative weights $w_i$ and non-negative profits $p_i$, a knapsack capacity $B \in \mathbb{R}_+$ and a constant $c \in \mathbb{R}_+$, determine a set of items $A \subseteq [n]$ that maximizes the total profit:
$\mathrm{profit}(A) = \sum_{i \in A} p_i - c \cdot \max\left\{0, \sum_{i \in A} w_i - B\right\}$
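As a direct transcription of this objective (with our own variable names), the profit of a selected set can be computed as:

def gkp_profit(items, selected, capacity, c):
    # items: list of (weight, profit) pairs; selected: set of chosen indices A; c: penalty rate.
    total_w = sum(items[i][0] for i in selected)
    total_p = sum(items[i][1] for i in selected)
    return total_p - c * max(0.0, total_w - capacity)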
This problem, as well as generalizations with other penalty costs for overweight, has been studied for instance in [4,2] (see there for practical motivations). In an online learning setting, we assume that we have $n$ items with static weights $w_i$ and a static constant $c$. At each time step, we need to select a subset of those items and then we learn the capacity of the knapsack and the profit of every item, gaining some profit or even suffering a loss based on our decision.
As we showed in Section 3.1, many non-linear problems do not have an efficient (polynomial) offline oracle and, as a direct consequence, the follow the leader strategy cannot directly be applied to get vanishing regret. While GKP is clearly not linear due to the maximum in the profit function, we will show that there exists an FPTAS for solving its multi-instance variant. We will use this result to get a vanishing regret algorithm for the online version of GKP (Theorem 6).
Since the problem is not linear, we use the generalized FTPL (GFTPL) framework of Dudik et al. [7], which does not rely on the assumption that the objective function is linear. While in the linear case it was sufficient to consider an "extra" random observation (FTPL), a much more complex perturbation mechanism is needed in order for the analysis to work if the objective function is not linear. The key idea of the GFTPL algorithm is to use common randomness for every feasible action but apply it in a different way. This concept was referred to by the authors of [7] as shared randomness, using the notion of translation matrix. The method is presented in Appendix B.1.
Theorem 6.
There is a polynomial time vanishing regret algorithm for GKP.
Proof. (sketch) The proof is based on the three following steps:
• First we note that GFTPL works (gives vanishing regret) even if the oracle admits an FPTAS. This is necessary since our problem is clearly NP-hard.
• Second, we provide for GKP an ad hoc translation matrix. This shows that the GFTPL method can be applied to our problem. Moreover, this matrix is built in such a way that the oracle needed for GFTPL is precisely a multi-instance oracle.
• Third, we show that there exists an FPTAS multi-instance oracle.
The first two points are given in Appendices B.1 and B.2 respectively. We only show the last point. To do this, we show that we can map a set of instances of the generalized knapsack problem to a single instance of the more general convex-generalized knapsack problem. Suppose that we have a set of $m$ instances $(p_t, B_t)$, $t = 1, \ldots, m$, of GKP. Then, the total profit of every item set $x \in \mathcal{X}$ is:
$\mathrm{profit}(x) = \sum_{t=1}^{m} \left( x \cdot p_t - c \max\{0, w \cdot x - B_t\} \right) = x \cdot p_s - c\, k(x \mid B_1, \ldots, B_m)$
where $p_s = \sum_{t=1}^{m} p_t$ and $k(x \mid B_1, \ldots, B_m) = \sum_{t=1}^{m} \max\{0, w \cdot x - B_t\}$. Let $W = w \cdot x$ be the total weight of the item set and $\bar{B}_1 \le \ldots \le \bar{B}_m$ a non-decreasing ordering of the knapsack capacities. Then:
$k(x \mid B_1, \ldots, B_m) = k(W \mid \bar{B}_1, \ldots, \bar{B}_m) = \begin{cases} 0 & W \le \bar{B}_1 \\ W - \bar{B}_1 & \bar{B}_1 < W \le \bar{B}_2 \\ 2W - (\bar{B}_1 + \bar{B}_2) & \bar{B}_2 < W \le \bar{B}_3 \\ \;\;\vdots & \\ mW - (\bar{B}_1 + \bar{B}_2 + \cdots + \bar{B}_m) & \bar{B}_m < W \end{cases}$
Note that the above function is always convex. This means that at every time step $t$, we need an FPTAS for the maximization problem $x \cdot p - f(W)$ where $f$ is a convex function. We know that such an FPTAS exists ([2]). In that paper, the authors give an FPTAS with time complexity $O(n^3/\epsilon^2)$, assuming that the convex function can be evaluated in constant time. In our case the convex function $k$ is part of the input; with binary search we can compute it in logarithmic time.
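The binary-search evaluation of the convex penalty mentioned at the end of the proof can be sketched as follows: after sorting the capacities once and precomputing prefix sums, each evaluation of k(W) takes O(log m) time.

from bisect import bisect_left
from itertools import accumulate

def make_penalty(capacities):
    B = sorted(capacities)
    prefix = [0] + list(accumulate(B))           # prefix[r] = B_1 + ... + B_r (sorted order)
    def k(W):
        r = bisect_left(B, W)                    # number of capacities strictly below W
        return r * W - prefix[r]                 # sum over those capacities of (W - B_i)
    return k

# Example: capacities [3, 1, 5]; k(4) = (4 - 1) + (4 - 3) = 4.
assert make_penalty([3, 1, 5])(4) == 4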
Conclusion
In this paper, we have presented a general framework showing the hardness of online learning algorithms for min-max problems. We have also shown a sharp separation between two widely studied online learning algorithms, online gradient descent and follow-the-leader, from the approximation and computational complexity aspects. The paper gives rise to several interesting directions. A first one is to extend the reduction framework to objectives other than min-max. A second direction is to design online vanishing regret algorithms with approximation ratio matching the lower bound guarantee. Finally, the proof of Theorem 1 needs a non-oblivious adversary. An interesting direction would be to obtain the same lower bounds with an oblivious adversary, if possible.
Appendix
A Hardness of multi-instance problems (Theorem 5)
A.1 Hardness of multi-instance min-max vertex cover
We make a straightforward reduction from the vertex cover problem. Consider any instance $G(V, E)$ of the vertex cover problem, with $V = \{v_1, \ldots, v_n\}$. We construct $n$ weight functions $w_1, \ldots, w_n : V \to \mathbb{R}_+$ such that under $w_i$ vertex $v_i$ has weight 1 and all other vertices have weight 0. If we consider the instance of multi-instance min-max vertex cover with graph $G(V, E)$ and weight functions $w_1, \ldots, w_n$, it is clear that any vertex cover has total cost equal to its size, since for any vertex $v_i \in V$ there is exactly one weight function under which $v_i$ has weight 1, and $v_i$ has weight 0 under every other weight function.
Since vertex cover is strongly NP-hard, NP-hard to approximate within ratio $\sqrt{2} - \epsilon$ and UGC-hard to approximate within ratio $2 - \epsilon$, the same negative results hold for the multi-instance min-max vertex cover problem.
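A sketch of this construction, with our own representation of weight functions as 0/1 vectors:

def indicator_weight_functions(n_vertices):
    # n weight functions: the i-th gives weight 1 to vertex i and 0 to every other vertex.
    return [[1 if j == i else 0 for j in range(n_vertices)] for i in range(n_vertices)]

def multi_instance_cost(cover, weight_functions):
    # Multi-instance min-max objective of a (non-empty) vertex cover; equals |cover| here.
    return sum(max(w[v] for v in cover) for w in weight_functions)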
A.2 Hardness of multi-instance P3||Cmax
We prove that the multi-instance P 3||Cmax problem is strongly N P -hard even when the processing times are in {0, 1}, using a reduction from the N P -complete 3-coloring problem. In the 3-coloring (3C) problem, we are given a graph G(V, E) and we need to decide whether there exists a coloring of its vertices with 3 colors such that if two vertices are connected by an edge, they cannot have the same color.
For every instance $G(V, E)$ of the 3C problem with $|V| = n$ and $|E| = m$, we construct (in polynomial time) an instance of multi-instance P3||Cmax with $n$ jobs and $N = m$ processing time vectors. Every edge $(i, j) \in E$ corresponds to a processing time vector in which jobs $i$ and $j$ have processing time 1 and every other job has processing time 0. It is easy to see that for each processing time vector the makespan is either 1 or 2, and thus the total cost is at least $m$ and at most $2m$.
If there exists a 3-coloring of $G$ then, by assigning every color to a machine, for each processing time vector there will not be two jobs with non-zero processing time on the same machine, and thus the makespan will be 1 and the total solution will have cost $m$. If the total solution has cost $m$, then this means that every makespan was 1, and by assigning to the jobs of every machine the same color we get a 3-coloring of $G$. Hence, the multi-instance variation of the P3||Cmax problem is strongly NP-hard.
B A polynomial time vanishing regret algorithm for GKP (Theorem 6)
B.1 Generalized follow the perturbed leader
For the sake of completeness, we introduce the generalized FTPL (GFTPL) method of Dudik et al. [7], which can be used to achieve a vanishing regret for non-linear objective functions for some discrete problems. The key idea of the GFTPL algorithm is to use common randomness for every feasible action but apply it in a different way. This concept was referred to by the authors of [7] as shared randomness. In their algorithm, the regularization term $R(x)$ of the FTPL algorithm is substituted by the inner product $\Gamma_x \cdot a$ where $a$ is a random vector and $\Gamma_x$ is a vector corresponding to the action $x$. In FTPL it was sufficient to have $\Gamma_x = x$, but in this general setting $\Gamma_x$ must be the row of a translation matrix that corresponds to action $x$.
Definition 4 (Admissible Matrix [7]). A matrix $\Gamma$ is admissible if its rows are distinct. It is $(\kappa, \delta)$-admissible if it is admissible and also (i) the number of distinct elements within each column is at most $\kappa$ and (ii) the distinct elements within each column differ by at least $\delta$.
Definition 5 (Translation Matrix [7]). A translation matrix $\Gamma$ is a $(\kappa, \delta)$-admissible matrix with $|\mathcal{X}|$ rows and $N$ columns. Since the number of rows is equal to the number of feasible actions, we denote by $\Gamma_x$ the row corresponding to action $x \in \mathcal{X}$. In the general case, $\Gamma \in [\gamma_m, \gamma_M]^{\mathcal{X} \times N}$ and $G_\gamma = \gamma_M - \gamma_m$ is used to denote the diameter of the translation matrix.
From the definition of the translation matrix it becomes clear that the action space $\mathcal{X}$ needs to be finite. Note that the number of feasible actions can be exponential in the input size, since we do not need to directly compute the translation matrix. The generalized FTPL algorithm for a maximization problem is presented in Algorithm 3. At time $t$, the algorithm decides the perturbed leader as the action that maximizes the total payoff on the observed history plus some noise that is given by the inner product of $\Gamma_x$ and the perturbation vector $a$. Note that in [7] the algorithm only needs an oracle with an additive error $\epsilon$. We will see later that it works also for a multiplicative error $\epsilon$ (more precisely, for an FPTAS).
Algorithm 3: Generalized FTPL algorithm
Data: A $(\kappa, \delta)$-admissible translation matrix $\Gamma \in [\gamma_m, \gamma_M]^{\mathcal{X} \times N}$, a noise parameter $\eta$ and an oracle accuracy $\epsilon$.
1 Draw a random perturbation vector $a$, each coordinate uniformly from $[0, \eta]$.
2 for t = 1, 2, ..., T do
3     Decide $x_t$ such that $\forall x \in \mathcal{X}$: $\sum_{\tau=1}^{t-1} f(x_t, y_\tau) + a \cdot \Gamma_{x_t} \ge \sum_{\tau=1}^{t-1} f(x, y_\tau) + a \cdot \Gamma_x - \epsilon$.
4     Observe $y_t$ and gain payoff $f(x_t, y_t)$.
5 end
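When the translation matrix is implemented through a distinguisher set as in Appendix B.2, the whole method reduces to repeatedly calling a multi-instance oracle on the observed history plus N randomly weighted extra instances. The following Python sketch makes this loop explicit; multi_instance_oracle (approximately maximizing the total payoff over a set of instances) and the scaling map scale(a, y) with f(x, scale(a, y)) = a * f(x, y) are assumed to be available, not given by the paper.

import random

def gftpl(multi_instance_oracle, distinguisher, state_stream, eta, scale):
    noise = [random.uniform(0.0, eta) for _ in distinguisher]
    perturbation = [scale(a, y) for a, y in zip(noise, distinguisher)]  # weighted extra instances
    history, decisions = [], []
    for y_t in state_stream:
        decisions.append(multi_instance_oracle(history + perturbation))
        history.append(y_t)                      # y_t is only revealed after deciding x_t
    return decisions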
Let us denote by $G_f$ the diameter of the objective function, i.e., $G_f = \max_{x, x' \in \mathcal{X},\, y, y' \in \mathcal{Y}} |f(x, y) - f(x', y')|$.
Theorem 7 ([7]). By using an appropriate $\eta$ to draw the random vector, the regret of the generalized FTPL algorithm is:
$R_T \le N \kappa G_f \sqrt{\frac{G_\gamma (G_f + 2\epsilon)}{\delta}\, T} + \epsilon T$
By setting ǫ = Θ(1/ √ T ), this clearly gives a vanishing regret.
Let us point out two difficulties in using this algorithm. First, the oracle has to solve a problem where the objective function is the sum of a multi-instance version of the offline problem and the perturbation. We will see in Appendix B.2 how we can implement the perturbation mechanism $a \cdot \Gamma_x$ as the payoff of action $x$ on a set of (random) observations of the problem.
Second, if the multi-instance version is NP-hard, having an efficient algorithm solving the oracle with an additive error $\epsilon$ is quite improbable. We remark that the assumption of an additive error $\epsilon$ can be replaced by the assumption of the existence of an FPTAS for the oracle. Namely, let us consider a modification of Algorithm 3 where at each time $t$ we compute a solution $x_t$ such that $\forall x \in \mathcal{X}$:
$\sum_{\tau=1}^{t-1} f(x_t, y_\tau) + a \cdot \Gamma_{x_t} \;\ge\; (1 - \epsilon') \left( \sum_{\tau=1}^{t-1} f(x, y_\tau) + a \cdot \Gamma_x \right) \qquad (1)$
Then, if we use $F_M$ to denote the maximum payoff, i.e., $F_M = \max_{x \in \mathcal{X},\, y \in \mathcal{Y}} f(x, y)$, by applying the same analysis as in [7], we can show that by fixing $\epsilon' = \frac{\epsilon}{T F_M + N \eta \gamma_M}$ we are guaranteed to get an action that has at least the same total perturbed payoff as the decision $x_t$ obtained with an additive optimization parameter $\epsilon$. The computation is polynomial if we use an FPTAS. Then, we can still get a vanishing regret by using $\epsilon' = O(T^{-3/2})$ instead of $\epsilon = O(T^{-1/2})$ (considering all parameters of the problem as constants).
As a corollary, we can achieve a vanishing regret for any online learning problem in our setting by assuming access to an oracle OPT that can compute (for any ǫ ′ ) in polynomial time a decision x t satisfying Equation (1).
B.2 Distinguisher sets and a translation matrix for GKP
As noted above, an important issue in the method arises from the perturbation. Until now, the translation matrix Γ could be any (κ, δ)-admissible matrix as long as it had one distinct row for every feasible action in X . However, this matrix has to be considered by the oracle in order to decide x t . In [7] the authors introduce the concept of implementability that overcomes this problem. We present a simplified version of this property. Definition 6 (Distinguisher Set). A distinguisher set for an offline problem P is a set of instances S = {y 1 , y 2 , . . . , y N } ∈ Y N such that for any feasible actions x, x ′ ∈ X :
$x \ne x' \iff \exists j \in [N]: f(x, y_j) \ne f(x', y_j)$
This means that S is a set of instances that "forces" any two different actions to differ in at least one of their payoffs over the instances in S. If we can determine such a set, then we can construct a translation matrix $\Gamma$ that significantly simplifies our assumptions on the oracle.
Let $S = \{y_1, y_2, \ldots, y_N\}$ be a distinguisher set for our problem. Then, for every feasible action $x \in \mathcal{X}$ we can construct the corresponding row of $\Gamma$ as: $\Gamma_x = [f(x, y_1), f(x, y_2), \ldots, f(x, y_N)]$
Since S is a distinguisher set, the translation matrix Γ is guaranteed to be admissible. Furthermore, according to the set we can always determine some κ and δ parameters for the translation matrix. By implementing Γ using a distinguisher set, the expression we need to (approximately) maximize at each round can be written as:
$\sum_{\tau=1}^{t-1} f(x, y_\tau) + a \cdot \Gamma_x = \sum_{\tau=1}^{t-1} f(x, y_\tau) + \sum_{i=1}^{N} a_i f(x, y_i)$
This shows that the perturbations transform into a set of weighted instances, where the weights $a_i$ are randomly drawn from the uniform distribution on $[0, \eta]$. This is already a significant improvement, since now the oracle has to consider only weighted instances of the offline problem and not the arbitrary perturbation $a \cdot \Gamma_x$ we were assuming until now. Furthermore, for a variety of problems (including GKP), we can construct a distinguisher set $y_1, \ldots, y_N$ such that:
$a \cdot f(x, y_j) = f(x, a \cdot y_j) \qquad \forall a \in \mathbb{R},\; j \in [N]$
If this is true, then we can shift the random weights of the oracle inside the instances:
$\sum_{\tau=1}^{t-1} f(x, y_\tau) + a \cdot \Gamma_x = \sum_{\tau=1}^{t-1} f(x, y_\tau) + \sum_{i=1}^{N} f(x, a_i y_i)$
Thus, if we have a distinguisher set for a given problem, to apply GFTPL all we need is an FPTAS for optimizing the total payoff over a set of weighted instances.
We now provide a distinguisher set for the generalized knapsack problem. Consider a set of $n$ instances $(p_j, B_j)$ of the problem such that in instance $(p_j, B_j)$ item $j$ has profit $P$, all other items have profit 0, and the knapsack capacity is $B_j = W_s$, the total weight of all items. Since the total weight of a set of items can never exceed $W_s$, it is easy to see that $\forall x \in \mathcal{X}$:
$f(x, p_j, B_j) = \begin{cases} P & \text{if item } j \text{ is selected in set } x \\ 0 & \text{otherwise} \end{cases}$
For any two different assignments $x$ and $x'$, there is at least one item $j \in [n]$ that they do not have in common. It is easy to see that in the corresponding instance $(p_j, B_j)$ one of the assignments will have total profit $P$ and the other will have total profit 0. Thus, the proposed set of instances is indeed a distinguisher set for the generalized knapsack problem. We use this set of instances to implement the $\Gamma$ matrix. Then, every column of $\Gamma$ will have exactly 2 distinct values, 0 and $P$, making the translation matrix $(2, P)$-admissible. As a result, in order to achieve a vanishing regret for the online learning version of GKP, all we need is an FPTAS for the multi-instance generalized knapsack problem. | 8388
1907.05944 | 2956358468 | We study various discrete nonlinear combinatorial optimization problems in an online learning framework. In the first part, we address the question of whether there are negative results showing that getting a vanishing (or even vanishing approximate) regret is computational hard. We provide a general reduction showing that many (min-max) polynomial time solvable problems not only do not have a vanishing regret, but also no vanishing approximation @math -regret, for some @math (unless @math ). Then, we focus on a particular min-max problem, the min-max version of the vertex cover problem which is solvable in polynomial time in the offline case. The previous reduction proves that there is no @math -regret online algorithm, unless Unique Game is in @math ; we prove a matching upper bound providing an online algorithm based on the online gradient descent method. Then, we turn our attention to online learning algorithms that are based on an offline optimization oracle that, given a set of instances of the problem, is able to compute the optimum static solution. We show that for different nonlinear discrete optimization problems, it is strongly @math -hard to solve the offline optimization oracle, even for problems that can be solved in polynomial time in the static case (e.g. min-max vertex cover, min-max perfect matching, etc.). On the positive side, we present an online algorithm with vanishing regret that is based on the follow the perturbed leader algorithm for a generalized knapsack problem. | Another direction is to design online learning algorithms using (offline polynomial-time) approximation algorithms as oracles. provided an algorithm which is inspired by Zinkevich's algorithm @cite_14 (gradient descent): at every step, the algorithm updates the current solution in the direction of the gradient and project back to the feasible set using an approximation algorithm. They showed that given an @math -approximation algorithm for a optimization problem, after @math prediction rounds (time steps) the online algorithm achieves an @math -regret bound of @math using @math calls to the approximation algorithm per round in average. Later on, gave an algorithm with @math -regret bound of @math using only @math calls to the approximation algorithm per round in average. These algorithms rely crucially on the linearity of the objective functions and it remains an interesting open question to design algorithms for online non-linear optimization problems. | {
"abstract": [
"Convex programming involves a convex set F ⊆ Rn and a convex cost function c : F → R. The goal of convex programming is to find a point in F which minimizes c. In online convex programming, the convex set is known in advance, but in each step of some repeated optimization problem, one must select a point in F before seeing the cost function for that step. This can be used to model factory production, farm production, and many other industrial optimization problems where one is unaware of the value of the items produced until they have already been constructed. We introduce an algorithm for this domain. We also apply this algorithm to repeated games, and show that it is really a generalization of infinitesimal gradient ascent, and the results here imply that generalized infinitesimal gradient ascent (GIGA) is universally consistent."
],
"cite_N": [
"@cite_14"
],
"mid": [
"2148825261"
]
} | Online-Learning for min-max discrete problems | Over the past years, online learning has become a very active research field. This is due to the widespread of applications with evolving or adversarial environments, e.g. routing schemes in networks [3], online marketplaces [5], spam filtering [11], etc. An online learning algorithm has to choose an action over a (possible infinite) set of feasible decisions. A loss/reward is associated to each decision which may be adversarially chosen. The losses/rewards are unknown to the algorithm beforehand. The goal is to minimize the regret, i.e. the difference between the total loss/reward of the online algorithm and that of the best single action in hindsight. A "good" online learning algorithm is an algorithm whose regret is sublinear as a function of the length of the time-horizon since then, on the average, the algorithm performs as well as the best single action in hindsight. Such an online algorithm is called an online learning algorithm with vanishing regret. For problems for which the offline version is N P -hard, the notions of regret and vanishing regret have been extended to the notions of α-regret and α-vanishing regret in order to take into account the existence of an α-approximation algorithm instead of an exact algorithm for solving the offline optimization problem.
While many online learning problems can be modeled as the so-called "experts problem" by associating a feasible solution to each expert, there is clearly an efficiency challenge: since there are potentially exponentially many solutions, such an approach is problematic in practice. Other methods have therefore been used, such as online gradient descent [24], the follow the leader algorithm and its extensions: follow the perturbed leader [15] for linear objective functions, its generalization to submodular objective functions [12], and the generalized follow the perturbed leader [7] algorithm. Hazan and Koren [13] proved that a no-regret algorithm with running time polynomial in the size of the problem does not exist in general settings without any assumption on the structure of the problem.
Our work takes into account the computational efficiency of the online learning algorithm in the same vein as the works in [1,15,12,22,6,7,14,9]. We study various discrete nonlinear combinatorial optimization problems in an online learning framework, focusing in particular on the family of min-max discrete optimization problems.
Our goal is to address the two following central questions:
(Q1) are there negative results showing that getting vanishing regret (or even vanishing approximate regret) is computationally hard?
(Q2) are there some notable differences in the efficiencies of follow the leader and gradient descent strategies for discrete problems?
Formally. An online learning problem consists of a decision-space X , a state-space Y and an objective function f : X × Y → R that can be either a cost or a reward function. Any problem of this class can be viewed as an iterative adversarial game with T rounds where the following procedure is repeated for t = 1, . . . , T : (a) Decide an action x t ∈ X , (b) Observe a state y t ∈ Y, (c) Suffer loss or gain reward f t (x t ) = f (x t , y t ).
We use f t (x) as another way to refer to the objective function f after observing the state y t , i.e. the objective function at round t.
The objective of the player is to minimize/maximize the cumulative cost/reward of his decided actions, which is given by the aggregation $\sum_{t=1}^{T} f(x_t, y_t)$. An online learning algorithm is any algorithm that decides the action $x_t$ at every round before observing $y_t$. We compare the decisions $(x_1, \ldots, x_T)$ of the algorithm with those of the best static action in hindsight, defined as $x^* = \arg\min_{x \in \mathcal{X}} \sum_{t=1}^{T} f(x, y_t)$ or $x^* = \arg\max_{x \in \mathcal{X}} \sum_{t=1}^{T} f(x, y_t)$, for minimization or maximization problems, respectively. This is the action that a (hypothetical) offline oracle would compute, if it had access to the entire sequence $y_1, \ldots, y_T$. The typical measurement for the efficiency of an online learning algorithm is the regret, defined as:
$R_T = \left| \sum_{t=1}^{T} f(x_t, y_t) - \sum_{t=1}^{T} f(x^*, y_t) \right|.$
A learning algorithm typically uses some kind of randomness, and the regret denotes the expectation of the above quantity. We are interested in online learning algorithms that have the "vanishing regret" property. This means that as the "game" progresses ($T \to \infty$), the deviation between the algorithm's average cost/payoff and the average cost/payoff of the optimum action in hindsight tends to zero. Typically, a vanishing regret algorithm is an algorithm with regret $R_T$ such that $\lim_{T \to \infty} R_T / T = 0$. However, as we are interested in polynomial time algorithms, we consider only vanishing regret $R_T = O(T^c)$ where $0 \le c < 1$ (which guarantees convergence in polynomial time). Throughout the paper, whenever we mention vanishing regret, we mean regret $R_T = O(T^c)$ where $0 \le c < 1$.
For many online learning problems, even their offline versions are N P -hard. Thus, it is not feasible to produce a vanishing regret sequence with an efficient algorithm. For such cases, the notion of α-regret has been defined as:
$R^{\alpha}_T = \sum_{t=1}^{T} f(x_t, y_t) - \alpha \sum_{t=1}^{T} f(x^*, y_t).$
Hence, we are interested in vanishing α-regret sequences for some α for which we know how to approximate the offline problem. The notion of vanishing α-regret is defined in the same way as that of vanishing regret. In this article we focus on computational issues. Efficiency for an online learning algorithm needs to capture both the computation of x t and the convergence speed. This is formalized in the following definition (where n denotes the size of the instance).
Definition 1.
A polynomial time vanishing $\alpha$-regret algorithm is an online learning algorithm for which (1) the computation of $x_t$ is polynomial in $n$ and $t$, and (2) the expected $\alpha$-regret is bounded by $p(n)T^c$ for some polynomial $p$ and some constant $0 \le c < 1$.
Note that in case α = 1, we simply use the term polynomial time vanishing regret algorithm.
Our contribution
In Section 2, we provide a general reduction showing that many (min-max) polynomial time solvable problems not only do not have a vanishing regret, but also no vanishing approximation α-regret, for some α (unless N P = BP P ). Then, we focus on a particular min-max problem, the min-max version of the vertex cover problem which is solvable in polynomial time in the offline case. The previous reduction proves that there is no (2 − ǫ)-regret online algorithm, unless Unique Game is in BP P ; we prove a matching upper bound providing an online algorithm based on the online gradient descent method.
In Section 3, we turn our attention to online learning algorithms that are based on an offline optimization oracle that, given a set of instances of the problem, is able to compute the optimum static solution. We show that for different nonlinear discrete optimization problems, it is strongly N P -hard to solve the offline optimization oracle, even for problems that can be solved in polynomial time in the static case (e.g. min-max vertex cover, min-max perfect matching, etc.). We also prove that the offline optimization oracle is strongly N P -hard for the problem of scheduling a set of jobs on m identical machines, where m is a fixed constant. To the best of our knowledge, up to now algorithms based on the follow the leader method for non-linear objective functions require an exact oracle or a FPTAS oracle in order to obtain vanishing regret. Thus, strong N P -hardness for the multiple instance version of the offline problem indicates that follow-the-leader-type strategies can't be used for the online problem, at least with our current knowledge. On the positive side, we present an online algorithm with vanishing regret that is based on the follow the perturbed leader algorithm for a generalization of knapsack problem [2].
Hardness of online learning for min-max problems
General reduction
As mentioned in the introduction, in this section we give some answers to question (Q1) on ruling out the existence of vanishing regret algorithm for a broad family of online min-max problems, even for ones that are polynomial-time solvable in the offline case. In fact, we provide a general reduction (see Theorem 1) showing that many min-max problems do not admit vanishing α-regret for some α > 1 unless N P = BP P .
More precisely, we focus on a class of cardinality minimization problems where, given an n-elements set U , a set of constraints C on the subsets of U (defining feasible solutions) and an integer k, the goal is to determine whether there exists a feasible solution of size at most k. This is a general class of problems, including for instance graph problems such as Vertex Cover, Dominating Set, Feedback Vertex Set, etc.
Given such a cardinality problem P, let min-max-P be the optimization problem where given nonnegative weights for all the elements of U , one has to compute a feasible solution (under the same set of constraints C as in problem P) such that the maximum weight of all its elements is minimized. The online min-max-P problem is the online learning variant of min-max-P , where the weights on the elements of U change over time.
Interestingly, the min-max version of all the problems mentioned above are polynomially solvable. This is actually true as soon as, for problem P, every superset of a feasible solution is feasible. Then one just has to check for each possible weight w if the set of all elements of weight at most w agrees with the constraints. For example, one can decide if there exists a vertex cover with the maximum weight w as follows: remove all vertices of weight strictly larger than w, and check if the remaining vertices form a vertex cover.
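The threshold idea just described can be written out in a few lines; the sketch below (illustrative only, with an assumed adjacency-list/weight-dictionary representation) scans candidate thresholds in increasing order and returns the smallest feasible one for min-max vertex cover.

def min_max_vertex_cover(vertices, edges, weight):
    # vertices: iterable of vertex ids; edges: list of (u, v) pairs; weight: dict vertex -> weight
    for w in sorted(set(weight[v] for v in vertices)):
        cover = {v for v in vertices if weight[v] <= w}
        if all(u in cover or v in cover for u, v in edges):
            return w  # smallest threshold whose low-weight vertices cover every edge

# Example: path a-b-c with weights 5, 1, 7 -> vertex b alone covers both edges.
print(min_max_vertex_cover(["a", "b", "c"], [("a", "b"), ("b", "c")],
                           {"a": 5, "b": 1, "c": 7}))  # 1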
We will show that, in contrast, if P is N P -complete then its online learning min-max version has no vanishing regret algorithm (unless N P = BP P ), and that if P has an inapproximability gap r, then there is no vanishing (r − ǫ)-regret for its online learning min-max version. Let us first recall the notion of approximation gap, where x opt denotes the minimum size of a feasible solution to the cardinality problem P.
Definition 2. Given two numbers $0 \leq A < B \leq 1$, let [A,B]-Gap-P be the decision problem where, given an instance of P such that $|x_{opt}| \leq An$ or $|x_{opt}| \geq Bn$, we need to decide whether $|x_{opt}| < Bn$.
Now we can state the main result of the section.
Theorem 1. Let P be a cardinality minimization problem and A, B be real numbers with $0 \leq A < B \leq 1$. Assume that the problem [A,B]-Gap-P is NP-complete. Then, for every $\alpha \leq \frac{B}{A} - \epsilon$, where $\epsilon$ is an arbitrarily small constant, there is no polynomial time vanishing α-regret algorithm for online min-max-P unless NP = BPP.
Proof. We prove this theorem by deriving a polytime algorithm for [A,B]-Gap-P that gives, under the assumption of a vanishing α-regret algorithm for online min-max-P, the correct answer with probability of error at most $\frac{1}{3}$. This would imply that the [A,B]-Gap-P problem is in BPP and thus NP = BPP. Let O be a vanishing α-regret algorithm for online min-max-P for some $\alpha \leq \frac{B}{A} - \epsilon = (1 - \epsilon')\frac{B}{A}$, where $\epsilon > 0$ is a constant and $\epsilon' = \frac{A}{B}\epsilon$.
Let T be a time horizon which will be fixed later. We construct the following (offline) algorithm for [A,B]-Gap-P using O as an oracle (subroutine). At every step $1 \leq t \leq T$, use the oracle O to compute a solution $x_t$. Then, choose one element of $x_t$ uniformly at random and assign weight 1 to that element; assign weight 0 to the other elements. Consequently, the cost incurred to O is 1 at every step. These weight assignments, though simple, are crucial. Intuitively, the assignments will be used to learn about the optimal solution of the [A,B]-Gap-P problem (given the performance of the learning algorithm O). The formal description is given in Algorithm 1.
Algorithm 1: Algorithm for the [A,B]-Gap-P problem
1 for t = 1, 2, . . . , T do
2    Compute $x_t \in \mathcal{X}$ using algorithm O.
3    if $|x_t| < Bn$ then return Yes, i.e., $|x_{opt}| \leq An$.
4    Assign weight 1 to an element of $x_t$ chosen uniformly at random and 0 to all other elements of U.
5    Feed the weight vector and the cost $f_t(x_t) = \max_{u \in x_t} w_t(u)$ back to O.
6 end
7 return No, i.e., $|x_{opt}| \geq Bn$.
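A minimal code sketch of this reduction (illustrative only; the oracle interface with play() and feed(), and the representation of solutions as sets of element indices from 0 to n-1, are assumptions, not part of the paper):

import random

def gap_decision(oracle, n, A, B, T):
    # oracle: assumed online learning algorithm O for online min-max-P
    for _ in range(T):
        x_t = oracle.play()                        # feasible solution, as a set of element indices
        if len(x_t) < B * n:
            return "Yes"                           # i.e., |x_opt| <= A*n
        weights = [0.0] * n
        weights[random.choice(sorted(x_t))] = 1.0  # weight 1 on one random element of x_t
        oracle.feed(weights, cost=1.0)             # O always incurs cost 1 at this step
    return "No"                                    # i.e., |x_opt| >= B*n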
We are now analyzing Algorithm 1. If the algorithm outputs |x opt | ≤ An, this means that at some step t the oracle O has figured out a feasible solution x t with |x t | < Bn. Since x opt (the minimum cardinality feasible solution) is known to be either |x opt | ≤ An or |x opt | ≥ Bn, the output is always correct.
If the algorithm outputs |x opt | ≥ Bn, then this means that every solution x t had a cardinality that was greater or equal to Bn. We bound the probability that Algorithm 1 returns a wrong answer in this case. Let R T be the α-regret achieved by the oracle (online learning algorithm) O on the set of instances produced in Algorithm 1. Let E denote the event that the algorithm returns a wrong answer. By Adam's Law, we have:
$$E[R_T] = E[R_T \mid E]\, P[E] + E[R_T \mid \neg E]\, P[\neg E] \geq E[R_T \mid E]\, P[E] \;\Rightarrow\; P[E] \leq \frac{E[R_T]}{E[R_T \mid E]}$$
From Algorithm 1 it should be clear that at every step, the oracle O always suffers loss 1. By definition of α-regret, this means that:
$$E[R_T \mid E] = T - \alpha \min_{x \in \mathcal{X}} \sum_{t=1}^{T} E[f_t(x) \mid E].$$
Now, we consider a minimum cardinality feasible solution $x_{opt}$ (for the initial instance of the cardinality minimization problem P). We have
$$\min_{x \in \mathcal{X}} \sum_{t=1}^{T} E[f_t(x) \mid E] \leq \sum_{t=1}^{T} E[f_t(x_{opt}) \mid E].$$
As Algorithm 1 returns a wrong answer, |x opt | ≤ An and at every time t, x t has at least Bn elements. Furthermore, by the construction of the weights, there is only one element with weight 1. Thus, f t (x opt ) = 1 with probability at most |x opt |/|x t | ≤ A/B (and f t (x opt ) = 0 otherwise). Thus, we get:
$$\min_{x \in \mathcal{X}} \sum_{t=1}^{T} E[f_t(x) \mid E] \leq \frac{A}{B} T \;\Rightarrow\; E[R_T \mid E] \geq T\Big(1 - \alpha \frac{A}{B}\Big) \geq \epsilon' \cdot T,$$
since $\alpha \leq (1 - \epsilon')\frac{B}{A}$. Hence, $P[E] \leq \frac{E[R_T]}{\epsilon' \cdot T}$. As O has vanishing α-regret, there exists a constant $0 \leq c < 1$ such that $E[R_T] \leq p(n)T^c$, where $p(n)$ is a polynomial of the problem parameters. Therefore,
$$P[E] \leq \frac{p(n)T^c}{\epsilon' \cdot T} = \frac{p(n)T^{c-1}}{\frac{A}{B} \cdot \epsilon}.$$
Choosing the parameter $T = \big(\frac{\epsilon A}{3p(n)B}\big)^{\frac{1}{c-1}}$, we get that $P[E] \leq \frac{1}{3}$. Besides, the running time of Algorithm 1 is polynomial since it consists of T (polynomial in the size of the problem) iterations and the running time of each iteration is polynomial (as O is a polynomial time algorithm).
In conclusion, if there exists a vanishing α-regret algorithm for online min-max-P, then the NP-complete problem [A,B]-Gap-P is in BPP, implying NP = BPP.
The inapproximability (gap) results for the aforementioned problems give lower bounds on the approximation ratio α of any vanishing α-regret algorithm for their online min-max version. For instance, the online min-max dominating set problem has no vanishing α-regret algorithm for any constant α, based on the approximation hardness in [19]. We state the lower bound explicitly for the online min-max vertex cover problem in the following corollary, as we refer to it later when showing a matching upper bound. The bounds are based on the hardness results for vertex cover in [17] and [16] (NP-hardness and UGC-hardness, respectively). Now, consider NP-complete cardinality problems which have no known inapproximability gap (for instance Vertex Cover in planar graphs, which admits a PTAS). Then we can show the following impossibility result. Corollary 3. If a cardinality problem P is NP-complete, then there is no vanishing regret algorithm for online min-max-P unless NP = BPP.
Proof. We note that the proof of Theorem 1 does not require A, B and α to be constant: they can be functions of the instance, and the result holds as soon as $1/\big(1 - \alpha\frac{A}{B}\big)$ is polynomially bounded (so that T remains polynomially bounded in n). Then, for a cardinality problem P, if $A = \frac{k}{n}$ and $B = \frac{k+1}{n} = A + \frac{1}{n}$, then deciding whether $|x_{opt}| \leq k$ is the same as deciding whether $|x_{opt}| \leq An$ or $|x_{opt}| \geq Bn$. By setting $\alpha = 1$, $A = \frac{k}{n}$ and $B = \frac{k+1}{n}$ in the proof of Theorem 1 we get the result.
Min-max Vertex Cover: matching upper bound with Gradient Descent
In this section we will present an online algorithm for the min-max vertex cover problem based on the classic Online Gradient Descent (OGD) algorithm. In the latter, at every step the solution is obtained by updating the previous one in the direction of the (sub-)gradient of the objective and projecting to a feasible convex set. The particular nature of the min-max vertex cover problem is that the objective function is the l ∞ norm and the set of feasible solutions is discrete (non-convex). In our algorithm, we consider the following standard relaxation of the problem:
$$\min \max_{i \in V} w_i x_i \quad \text{s.t.} \quad x \in Q: \;\; x_i + x_j \geq 1 \;\; \forall (i,j) \in E, \qquad 0 \leq x_i \leq 1 \;\; \forall i \in V.$$
At time step t, we update the solution by a sub-gradient $g_t(x^t) = [0, \ldots, 0, w^t_i, 0, \ldots, 0]$, with $w^t_i$ in coordinate $i_t(x^t) = \arg\max_{1 \leq i \leq n} w^t_i x^t_i$.
Algorithm 2: Online gradient descent for min-max vertex cover
1 for t = 1, 2, . . . , T do
2    Play $X^t \in \{0,1\}^n$. Observe $w^t$ (weights of vertices) and incur the cost $\max_i w^t_i X^t_i$.
3    Compute the sub-gradient $g_t(x^t)$.
4    Update $y^{t+1} = x^t - \frac{1}{\sqrt{t}} g_t(x^t)$.
5    Project $y^{t+1}$ to Q w.r.t. the $\ell_2$-norm: $x^{t+1} = \mathrm{Proj}_Q(y^{t+1}) := \arg\min_{x \in Q} \|y^{t+1} - x\|_2$.
6    Round $x^{t+1}$ to $X^{t+1}$: $X^{t+1}_i = 1$ if $x^{t+1}_i \geq 1/2$ and $X^{t+1}_i = 0$ otherwise.
7 end
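The following Python sketch mirrors the projected-subgradient loop above (illustrative, not the authors' code; delegating the Euclidean projection onto Q to the cvxpy solver is an assumption of this sketch):

import numpy as np
import cvxpy as cp

def project_to_Q(y, edges, n):
    # Euclidean projection onto Q = {x : x_i + x_j >= 1 for every edge, 0 <= x <= 1}
    x = cp.Variable(n)
    constraints = [x >= 0, x <= 1] + [x[i] + x[j] >= 1 for i, j in edges]
    cp.Problem(cp.Minimize(cp.sum_squares(x - y)), constraints).solve()
    return np.clip(x.value, 0.0, 1.0)

def ogd_minmax_vertex_cover(weight_stream, edges, n):
    x = np.ones(n)                                  # feasible fractional start
    for t, w in enumerate(weight_stream, start=1):
        X = (x >= 0.5).astype(int)                  # rounded integral cover that is played
        yield X, max((w[i] for i in range(n) if X[i]), default=0.0)
        i_t = int(np.argmax(w * x))                 # coordinate achieving max_i w_i x_i
        g = np.zeros(n)
        g[i_t] = w[i_t]                             # sub-gradient of the max at x
        x = project_to_Q(x - g / np.sqrt(t), edges, n)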
The following theorem, coupled with Corollary 2, shows the tight bound of 2 on the approximation ratio of polynomial-time online algorithms for Min-max Vertex Cover (assuming the UGC conjecture).
Theorem 4.
Assume that $W = \max_{1 \leq t \leq T} \max_{1 \leq i \leq n} w^t_i$. Then, after T time steps, Algorithm 2 achieves
$$\sum_{t=1}^{T} \max_{1 \leq i \leq n} w^t_i X^t_i \leq 2 \cdot \min_{X^*} \sum_{t=1}^{T} \max_{1 \leq i \leq n} w^t_i X^*_i + 3W\sqrt{nT}.$$
Proof. By the OGD algorithm (see [24] or [11, Chapter 3]), we have
$$\sum_{t=1}^{T} \max_{1 \leq i \leq n} w^t_i x^t_i \leq \min_{x^* \in Q} \sum_{t=1}^{T} \max_{1 \leq i \leq n} w^t_i x^*_i + \frac{3DG}{2}\sqrt{T},$$
where $D = \max_{x, x' \in Q} \|x - x'\|_2 \leq \sqrt{n}$ is the diameter of Q and $G = \max_{1 \leq t \leq T} \|g_t\|_2 \leq W$.
Moreover, by the rounding procedure, it always holds that
$$\max_{i=1,\ldots,n} X^t_i w^t_i \leq 2 \max_{i=1,\ldots,n} x^t_i w^t_i.$$
Combining these inequalities, the theorem follows.
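For completeness, the combination can be spelled out as follows (a short derivation consistent with the two displayed inequalities, not verbatim from the paper):
$$\sum_{t=1}^{T} \max_i w^t_i X^t_i \;\leq\; 2\sum_{t=1}^{T} \max_i w^t_i x^t_i \;\leq\; 2\Big(\min_{x^* \in Q} \sum_{t=1}^{T} \max_i w^t_i x^*_i + \frac{3DG}{2}\sqrt{T}\Big) \;\leq\; 2\min_{X^*} \sum_{t=1}^{T} \max_i w^t_i X^*_i + 3W\sqrt{nT},$$
where the last step uses that every integral vertex cover $X^*$ belongs to Q, together with $D \leq \sqrt{n}$ and $G \leq W$.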
Computational issues for Follow the Leader based methods
The most natural approach in online learning is for the player to always pick the leading action, i.e. the action $x_t$ that is optimal with respect to the observed history $y_1, \ldots, y_{t-1}$. However, it can be proven ([15]) that any deterministic algorithm that always decides on the leading action can be "tricked" by the adversary into making decisions that are worse than the optimal action in hindsight, thus leading to large-regret algorithms. In this regard, we need to add a regularization term containing randomness to the optimization oracle in order to make our algorithms less predictable and more stable. Thus, the Follow the Regularized Leader strategy in a minimization problem consists of deciding on an action $x_t$ such that:
$$x_t = \arg\min_{x \in \mathcal{X}} \sum_{\tau=1}^{t-1} f(x, y_\tau) + R(x)$$
where R(x) is the regularization term.
There are many variations of the Follow the Leader (FTL) algorithm that differentiate on the applied objective functions and the type of regularization term. For linear objectives, Kalai and Vempala [15] suggested the Follow the Perturbed Leader algorithm where the regularization term is simply the cost/payoff of each action on a randomly generated instance of the problem. Dudik et al. [7] were able to generalize the FTPL algorithm of Kalai and Vempala [15] for non-linear objectives, by introducing the concept of shared randomness and a much more complex perturbation mechanism.
A common element between every Follow the Leader based method, is the need for an optimization oracle over the observed history of the problem. This is a minimum requirement since the regularization term can make determining the leader even harder, but most algorithms are able to map the perturbations to the value of the objective function on a set of instances of the problem and thus eliminate this extra complexity. To the best of our knowledge, up to now FTL algorithms for non-linear objective functions require an exact or a FPTAS oracle in order to obtain vanishing regret. Thus, strong N P -hardness for the multiple instance version of the offline problem indicates that the FTL strategy cannot be used for the online problem, at least with our current knowledge.
Computational hardness results
As we mentioned, algorithms that use the "Follow the Leader" strategy heavily rely on the existence of an optimization oracle for the multi-instance version of the offline problem. For linear objectives, it is easy to see ( [15]) that optimization over a set of instances is equivalent to optimization over a single instance and thus any algorithm for the offline problem can be transformed to an online learning algorithm. However, for non-linear problems this assumption is not always justified since even when the offline problem is polytime-solvable, the corresponding variation with multiple instances can be strongly N P -hard.
In this section we present some problems where we can prove that the optimum solution over a set of instances is hard to approximate. More precisely, in the multi-instance version of a given problem, we are given an integer $N > 0$, a set of feasible solutions $\mathcal{X}$, and N objective functions $f_1, \ldots, f_N$ over $\mathcal{X}$. The goal is to minimize (over $\mathcal{X}$) $\sum_{i=1}^{N} f_i(x)$. We will show computational hardness results for the multi-instance versions of:
• min-max vertex cover (already defined).
• min-max perfect matching, where we are given an undirected graph G(V, E) and a weight function w : E → R + on the edges and we need to determine a perfect matching such that the weight of the heaviest edge on the matching is minimized.
• min-max path, where we are given an undirected graph G(V, E), two vertices s and t, and a weight function w : E → R + on the edges and we need to determine an s − t path such that the weight of the heaviest edge in the path is minimized.
• P 3||Cmax, where we are given 3 identical parallel machines, a set of n-jobs J = {j 1 , . . . , j n } and processing times p : J → R + and we need to determine a schedule of the jobs to the machines (without preemption) such that the makespan, i.e. the time that elapses until the last job is processed, is minimized.
Hence, in the multi-instance versions of these problems, we are given N weight functions over vertices (min-max vertex cover) or edges (min-max perfect matching, min-max path), or N processing time vectors (P 3||Cmax).
Theorem 5.
The multi-instance versions of min-max vertex cover, min-max perfect matching, min-max path and P 3||Cmax are strongly N P -hard.
Proof. Here we present the proof for the multi-instance version of the min-max perfect matching and the min-max path problems, which use a similar reduction from the Max-3-DNF problem. The proofs for multi-instance min-max vertex cover and multi-instance P 3||Cmax can be found at appendices A.1 and A.2 respectively. In the Max-3-DNF problem, we are given a set of n boolean variables X = {x 1 , . . . , x n } and m clauses C 1 , . . . , C m that are conjunctions of three variables in X or their negations and we need to determine a truth assignment σ : X → {T, F } such that the number of satisfied clauses is maximized.
We start with the multi-instance min-max perfect matching problem. For every instance I of the Max-3-DNF problem we construct a graph G(V, E) and m weight functions defined as follows:
• To each variable $x_i$ is associated a 4-cycle on vertices $(u_i, u_i^t, \bar{u}_i, u_i^f)$. This 4-cycle has two perfect matchings: either $u_i$ is matched with $u_i^t$ and $\bar{u}_i$ is matched with $u_i^f$, corresponding to setting the variable $x_i$ to true, or vice-versa, corresponding to setting $x_i$ to false. This specifies a one-to-one correspondence between the solutions of the two problems.
• Each weight function corresponds to one conjunction: $w_j(u_i u_i^t) = 1$ if $\neg x_i \in C_j$, and $w_j(u_i u_i^f) = 1$ if $x_i \in C_j$; all other weights are 0. In particular, edges incident to the vertices $\bar{u}_i$ always get weight 0. The above construction can obviously be done in polynomial time in the size of the input. It remains to show the correlation between the objective values of these solutions. If a clause $C_j$ is satisfied by a truth assignment σ then (since it is a conjunction) every literal of the clause must be satisfied. From the construction of the instance $I'$ of multi-instance min-max matching, the corresponding matching $M_\sigma$ will have a maximum weight of 0 for the weight function $w_j$. If a clause $C_j$ is not satisfied by a truth assignment, then the corresponding matching $M_\sigma$ will have a maximum weight of 1 for the weight function $w_j$. Thus, from the reduction we get
val(I, σ) = m − val(I ′ , M σ )
where val stands for the value of a solution. This equation already proves the hardness result of Theorem 5. It actually also shows APX-hardness. Indeed, the optimal value OPT of Max-3-DNF verifies $\frac{m}{8} \leq OPT \leq m$. Assuming the existence of a $(1 + \epsilon)$-approximation algorithm for the multi-instance min-max perfect matching problem, we can get a $(1 - 7\epsilon)$-approximation algorithm for Max-3-DNF. Since Max-3-DNF is APX-hard, multi-instance min-max perfect matching is also APX-hard.
A similar reduction leads to the same result for the min-max path problem: starting from an instance of 3-DNF, build a graph G where $V = \{v_0, v_1, \ldots, v_n\}$. Vertex $v_i$ corresponds to variable $x_i$. There are two parallel edges $e_i^t$ and $e_i^f$ between $v_{i-1}$ and $v_i$. We are looking for $v_0 - v_n$ paths. Taking edge $e_i^t$ (resp. $e_i^f$) corresponds to setting $x_i$ to true (resp. false). As previously, this gives a one-to-one correspondence between solutions. Each clause corresponds to one weight function: if $x_i \in C_j$ then $w_j(e_i^f) = 1$, if $\neg x_i \in C_j$ then $w_j(e_i^t) = 1$. All other weights are 0. Then for a $v_0 - v_n$ path P, $w_j(P) = 0$ if and only if $C_j$ is satisfied by the corresponding truth assignment. The remainder of the proof is exactly the same as the one of min-max perfect matching.
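To make the clause-to-weight mapping concrete, here is a small constructor sketch (the clause encoding as signed variable indices is an assumption of this illustration, not the paper's notation):

def path_weight_functions(n_vars, clauses):
    # clauses: list of clauses, each a list of signed indices, e.g. [1, -2, 3] for x1 AND NOT x2 AND x3
    weight_functions = []
    for clause in clauses:
        w_t = [0] * (n_vars + 1)     # w_t[i] = weight of edge e_i^t (index i runs from 1 to n_vars)
        w_f = [0] * (n_vars + 1)     # w_f[i] = weight of edge e_i^f
        for literal in clause:
            i = abs(literal)
            if literal > 0:          # x_i in C_j: penalize choosing e_i^f (setting x_i to false)
                w_f[i] = 1
            else:                    # NOT x_i in C_j: penalize choosing e_i^t (setting x_i to true)
                w_t[i] = 1
        weight_functions.append((w_t, w_f))
    return weight_functions
# A v_0-v_n path has maximum weight 0 under w_j exactly when clause C_j is satisfied
# by the corresponding truth assignment.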
Theorem 5 gives insight on the hardness of non-linear multi-instance problems compared to their single-instance counterparts. As we proved, multi-instance P3||Cmax is strongly NP-hard while P3||Cmax is known to admit an FPTAS [20,23]. Also, the multi-instance versions of min-max perfect matching, min-max path and min-max vertex cover are proved to be APX-hard while their single-instance versions can be solved in polynomial time. We also note that these hardness results hold for the very specific case where weights/processing times are in {0, 1}, for which P||Cmax, as well as the other problems, become trivial.
We also note that the inapproximability bound we acquired for the multi-instance min-max vertex cover under UGC is tight, since we can formulate the problem as a linear program, solve its continuous relaxation and then use a rounding algorithm to get a vertex cover of cost at most twice the optimum for the problem.
The results on the min-max vertex cover problem also provides some answer to question (Q2) addressed in the introduction. As we proved in Section 2.2, the online gradient descent method (paired with a rounding algorithm) suffices to give a vanishing 2-regret algorithm for online min-max vertex cover. However, since the multi-instance version of the problem is APX-hard there is no indication that the follow the leader approach can be used in order to get the same result and match the lower bound of Corollary 2 for the problem.
Online generalized knapsack problem
In this section we present a vanishing regret algorithm for the online learning version of the following generalized knapsack problem. In the traditional knapsack problem, one has to select a set of items with total weight not exceeding a fixed "knapsack" capacity B and maximizes the total profit of the set. Instead, we assume that the knapsack can be customized to fit more items. Specifically, there is a capacity B and if the total weight of the items exceeds this capacity, then we have to pay c-times the extra weight. Formally: Definition 3 (Generalized Knapsack Problem (GKP)). Given a set of items i = 1, 2, ..., n with nonnegative weights w i and non-negative profits p i , a knapsack capacity B ∈ R + and a constant c ∈ R + , determine a set of items A ⊆ [n] that maximizes the total profit:
$$\mathrm{profit}(A) = \sum_{i \in A} p_i - c \max\Big\{0, \sum_{i \in A} w_i - B\Big\}$$
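A direct transcription of this profit function (an illustrative helper; the list-based data layout is an assumption):

def gkp_profit(A, weights, profits, B, c):
    # A: selected item indices; weights/profits: per-item lists; B: capacity; c: overweight penalty rate
    total_weight = sum(weights[i] for i in A)
    total_profit = sum(profits[i] for i in A)
    return total_profit - c * max(0, total_weight - B)

# e.g. two items with weights [3, 4], profits [5, 6], B = 5, c = 2: 11 - 2*(7 - 5) = 7
print(gkp_profit([0, 1], weights=[3, 4], profits=[5, 6], B=5, c=2))  # 7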
This problem, as well as generalizations with other penalty costs for overweight, has been studied for instance in [4,2] (see there for practical motivations). In an online learning setting, we assume that we have n items with static weights $w_i$ and a static constant c. At each time step, we need to select a subset of those items and then we learn the capacity of the knapsack and the profit of every item, gaining some profit or even suffering a loss based on our decision.
As we showed in Section 3.1, many non-linear problems do not have an efficient (polynomial) offline oracle and as a direct consequence, the follow the leader strategy can not directly be applied to get vanishing regret. While GKP is clearly not linear due to the maximum in the profit function, we will show that there exists a FPTAS for solving its multiple instances variation. We will use this result to get a vanishing regret algorithm for the online version of GKP (Theorem 6).
Since the problem is not linear, we use the the generalized FTPL (GFTPL) framework of Dudik et al. [7], which does not rely on the assumption that the objective function is linear. While in the linear case it was sufficient to consider an "extra" random observation (FTPL), a much more complex perturbation mechanism is needed in order for the analysis to work if the objective function is not linear. The key idea of the GFTPL algorithm is to use common randomness for every feasible action but apply it in a different way. This concept was referred by the authors of [7] as shared randomness, using the notion of translation matrix. The method is presented in Appendix B.1.
Theorem 6.
There is a polynomial time vanishing regret algorithm for GKP.
Proof. (sketch) The proof is based on the three following steps:
• First we note that GFTPL works (gives vanishing regret) even if the oracle admits a FPTAS. This is necessary since our problem is clearly N P -hard.
• Second, we provide for GKP an ad hoc translation matrix. This shows that the GFTPL method can be applied to our problem. Moreover, this matrix is built in such a way that the oracle needed for GFTPL is precisely a multi-instance oracle.
• Third, we show that there exists an FPTAS multi-instance oracle.
The first two points are given in appendices B.1 and B.2 respectively. We only show the last point. To do this, we show that we can map a set of instances of the generalized knapsack problem to a single instance of the more general convex-generalized knapsack problem. Suppose that we have a set of m instances (p i , B i ) of GKP. Then, the total profit of every item set x ∈ X is:
$$\mathrm{profit}(x) = \sum_{t=1}^{m} \Big(x \cdot p^t - c \max\{0, w \cdot x - B^t\}\Big) = x \cdot p^s - c\, k(x \mid B^1, \ldots, B^m)$$
where $p^s = \sum_{t=1}^{m} p^t$ and $k(x \mid B^1, \ldots, B^m) = \sum_{t=1}^{m} \max\{0, w \cdot x - B^t\}$. Let $W = w \cdot x$ be the total weight of the item set and $\bar{B}_1, \ldots, \bar{B}_m$ a non-decreasing ordering of the knapsack capacities. Then:
$$k(x \mid B^1, \ldots, B^m) = k(W \mid \bar{B}_1, \ldots, \bar{B}_m) = \begin{cases} 0, & W \leq \bar{B}_1 \\ W - \bar{B}_1, & \bar{B}_1 < W \leq \bar{B}_2 \\ 2W - (\bar{B}_1 + \bar{B}_2), & \bar{B}_2 < W \leq \bar{B}_3 \\ \;\;\vdots & \\ mW - (\bar{B}_1 + \bar{B}_2 + \cdots + \bar{B}_m), & \bar{B}_m < W \end{cases}$$
Note that the above function is always convex. This means that at every time step t, we need an FPTAS for the maximization problem $x \cdot p - f(W)$ where $f$ is a convex function. We know that such an FPTAS exists ([2]). In that paper, the authors suggest an FPTAS with time complexity $O(n^3/\epsilon^2)$, assuming that the convex function can be computed in constant time. In our case the convex function $k$ is part of the input; with binary search we can compute it in logarithmic time.
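The convex penalty $k$ can be evaluated exactly as described, e.g. with sorted capacities, prefix sums and a binary search (an illustrative sketch, not the authors' implementation):

import bisect

def make_k(capacities):
    # k(W) = sum_t max(0, W - B_t); precompute sorted capacities and their prefix sums
    caps = sorted(capacities)
    prefix = [0]
    for b in caps:
        prefix.append(prefix[-1] + b)

    def k(W):
        j = bisect.bisect_left(caps, W)   # number of capacities strictly smaller than W
        return j * W - prefix[j]
    return k

k = make_k([4, 10, 7])
print(k(3), k(5), k(20))   # 0 1 39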
Conclusion
In this paper, we have presented a general framework showing the hardness of online learning algorithms for min-max problems. We have also shown a sharp separation between two widely-studied online learning algorithms, online gradient descent and follow-the-leader, from the approximation and computational complexity aspects. The paper gives rise to several interesting directions. A first one is to extend the reduction framework to objectives other than min-max. A second direction is to design online vanishing regret algorithms with approximation ratios matching the lower bound guarantees. Finally, the proof of Theorem 1 needs a non-oblivious adversary. An interesting direction would be to get the same lower bounds with an oblivious adversary if possible.
Appendix
A Hardness of multi-instance problems (Theorem 5)
A.1 Hardness of multi-instance min-max vertex cover
We make a straightforward reduction from the vertex cover problem. Consider any instance G(V, E) of the vertex cover problem, with $V = \{v_1, \ldots, v_n\}$. We construct n weight functions $w_1, \ldots, w_n : V \to \mathbb{R}^+$ such that in $w_i$ vertex $v_i$ has weight 1 and all other vertices have weight 0. If we consider the instance of multi-instance min-max vertex cover with graph G(V, E) and weight functions $w_1, \ldots, w_n$, it is clear that any vertex cover has total cost equal to its size, since for any vertex $v_i \in V$ there is exactly one weight function in which $v_i$ has weight 1, and it has weight 0 in every other weight function.
Since vertex cover is strongly NP-hard, NP-hard to approximate within ratio $\sqrt{2} - \epsilon$ and UGC-hard to approximate within ratio $2 - \epsilon$, the same negative results hold for the multi-instance min-max vertex cover problem.
A.2 Hardness of multi-instance P3||Cmax
We prove that the multi-instance P 3||Cmax problem is strongly N P -hard even when the processing times are in {0, 1}, using a reduction from the N P -complete 3-coloring problem. In the 3-coloring (3C) problem, we are given a graph G(V, E) and we need to decide whether there exists a coloring of its vertices with 3 colors such that if two vertices are connected by an edge, they cannot have the same color.
For every instance G(V, E) of the 3C problem with |V | = n and |E| = m, we construct (in polynomial time) an instance of the multi-instance P 3||Cmax with n-jobs and N = m processing time vectors. Every edge (i, j) ∈ E corresponds to a processing time vector with jobs i and j having processing time 1 and every other job having processing time 0. It is easy to see that at each time step the makespan is either 1 or 2 and thus the total makespan is at least m and at most 2m.
If there exists a 3-coloring on G then by assigning every color to a machine, at each time step there will not be two jobs with non-zero processing time in the same machine and thus the makespan will be 1 and the total solution will have cost m. If the total solution has cost m then this means that at every time step the makespan was 1 and by assigning to the jobs of every machine the same color we get a 3 coloring of G. Hence, the multi-instance variation of the P 3||Cmax problem is strongly N P -hard.
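The reduction can be written out in a few lines (illustrative sketch only; vertex/job indexing from 0 and the machine-assignment encoding are assumptions):

def instance_from_graph(n, edges):
    # one {0,1} processing-time vector per edge: jobs u and v take time 1, all others 0
    return [[1 if j in (u, v) else 0 for j in range(n)] for u, v in edges]

def total_makespan(vectors, assignment, machines=3):
    # assignment[j] in {0, 1, 2} gives the machine of job j
    total = 0
    for p in vectors:
        loads = [0] * machines
        for job, m in enumerate(assignment):
            loads[m] += p[job]
        total += max(loads)
    return total   # equals len(vectors) exactly when the assignment is a proper 3-coloring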
B A polynomial time vanishing regret algorithm for GKP (Theorem 6)
B.1 Generalized follow the perturbed leader
For the sake of completeness, we introduce the generalized FTPL (GFTPL) method of Dudik et al. [7], which can be used to achieve a vanishing regret for non-linear objective functions for some discrete problems. The key idea of the GFTPL algorithm is to use common randomness for every feasible action but apply it in a different way. This concept was referred to by the authors of [7] as shared randomness. In their algorithm, the regularization term R(x) of the FTPL algorithm is substituted by the inner product $\Gamma_x \cdot a$, where $a$ is a random vector and $\Gamma_x$ is a vector corresponding to the action x. In FTPL it was sufficient to have $\Gamma_x = x$, but in this general setting $\Gamma_x$ must be the row of a translation matrix that corresponds to action x.
Definition 4 (Admissible Matrix [7]). A matrix Γ is admissible if its rows are distinct. It is (κ, δ)-admissible if it is admissible and also (i) the number of distinct elements within each column is at most κ and (ii) the distinct elements within each column differ by at least δ.
Definition 5 (Translation Matrix [7]). A translation matrix Γ is a (κ, δ)-admissible matrix with $|\mathcal{X}|$ rows and N columns. Since the number of rows is equal to the number of feasible actions, we denote as $\Gamma_x$ the row corresponding to action $x \in \mathcal{X}$. In the general case, $\Gamma \in [\gamma_m, \gamma_M]^{\mathcal{X} \times N}$ and $G_\gamma = \gamma_M - \gamma_m$ is used to denote the diameter of the translation matrix.
From the definition of the translation matrix it becomes clear that the action space $\mathcal{X}$ needs to be finite. Note that the number of feasible actions can be exponential in the input size, since we do not need to directly compute the translation matrix. The generalized FTPL algorithm for a maximization problem is presented in algorithmic box 3. At time t, the algorithm decides the perturbed leader as the action that maximizes the total payoff on the observed history plus some noise given by the inner product of $\Gamma_x$ and the perturbation vector $a$. Note that in [7] the algorithm only needs an oracle with an additive error $\epsilon$. We will see later that it works also for a multiplicative error $\epsilon$ (more precisely, for an FPTAS).
Algorithm 3: Generalized FTPL algorithm
Data: A (κ, δ)-admissible translation matrix $\Gamma \in [\gamma_m, \gamma_M]^{\mathcal{X} \times N}$
1 Draw a random perturbation vector $a$ (with parameter η).
2 for t = 1, 2, . . . , T do
3    Decide $x_t$ such that $\forall x \in \mathcal{X}$: $\sum_{\tau=1}^{t-1} f(x_t, y_\tau) + a \cdot \Gamma_{x_t} \geq \sum_{\tau=1}^{t-1} f(x, y_\tau) + a \cdot \Gamma_x - \epsilon$
4    Observe $y_t$ and gain payoff $f(x_t, y_t)$.
5 end
Let us denote G f as the diameter of the objective function, i.e., G f = max x,x ′ ∈X , y,y ′ ∈Y |f (x, y) − f (x ′ , y ′ )|.
Theorem 7 ([7]). By using an appropriate η to draw the random vector, the regret of the generalized FTPL algorithm is:
$$R_T \leq N \kappa G_f G_\gamma \sqrt{\frac{(G_f + 2\epsilon)}{\delta} T} + \epsilon T$$
By setting ǫ = Θ(1/ √ T ), this clearly gives a vanishing regret.
Let us quote two difficulties to use this algorithm. First, the oracle has to solve a problem where the objective function is the sum of a multi-instance version of the offline problem and the perturbation. We will see in Appendix B.2 how we can implement the perturbation mechanism Γ x · α as the payoff of action x on a set of (random) observations of the problem.
Second, if the multi-instance version is NP-hard, having an efficient algorithm solving the oracle with an additive error $\epsilon$ is quite improbable. We remark that the assumption of an additive error $\epsilon$ can be replaced by the assumption of the existence of an FPTAS for the oracle. Namely, let us consider a modification of Algorithm 3 where at each time t we compute a solution $x_t$ such that $\forall x \in \mathcal{X}$:
$$\sum_{\tau=1}^{t-1} f(x_t, y_\tau) + a \cdot \Gamma_{x_t} \geq (1 - \epsilon') \Big( \sum_{\tau=1}^{t-1} f(x, y_\tau) + a \cdot \Gamma_x \Big) \qquad (1)$$
Then, if we use $F_M$ to denote the maximum payoff, i.e., $F_M = \max_{x \in \mathcal{X}, y \in \mathcal{Y}} f(x, y)$, by applying the same analysis as in [7], we can show that by fixing $\epsilon' = \frac{\epsilon}{T F_M + N \eta \gamma_M}$ we are guaranteed to get an action that has at least the same total perturbed payoff as the decision $x_t$ obtained if an additive optimization parameter $\epsilon$ was used. The computation is polynomial if we use an FPTAS. Then, we can still get a vanishing regret by using $\epsilon' = O(T^{-\frac{3}{2}})$ instead of $\epsilon = O(T^{-\frac{1}{2}})$ (considering all parameters of the problem as constants).
As a corollary, we can achieve a vanishing regret for any online learning problem in our setting by assuming access to an oracle OPT that can compute (for any ǫ ′ ) in polynomial time a decision x t satisfying Equation (1).
B.2 Distinguisher sets and a translation matrix for GKP
As noted above, an important issue in the method arises from the perturbation. Until now, the translation matrix Γ could be any (κ, δ)-admissible matrix as long as it had one distinct row for every feasible action in X . However, this matrix has to be considered by the oracle in order to decide x t . In [7] the authors introduce the concept of implementability that overcomes this problem. We present a simplified version of this property. Definition 6 (Distinguisher Set). A distinguisher set for an offline problem P is a set of instances S = {y 1 , y 2 , . . . , y N } ∈ Y N such that for any feasible actions x, x ′ ∈ X :
$$x \neq x' \;\Leftrightarrow\; \exists j \in [N] : f(x, y_j) \neq f(x', y_j)$$
This means that S is a set of instances that "forces" any two different actions to differ in at least one of their payoffs over the instances in S. If we can determine such a set, then we can construct a translation matrix Γ that significantly simplifies our assumptions on the oracle.
Let S = {y 1 , y 2 , . . . , y N } be a distinguisher set for our problem. Then, for every feasible action x ∈ X we can construct the corresponding row of Γ such that: Γ x = [f (x, y 1 ), f (x, y 2 ), . . . , f (x, y N )]
Since S is a distinguisher set, the translation matrix Γ is guaranteed to be admissible. Furthermore, according to the set we can always determine some κ and δ parameters for the translation matrix. By implementing Γ using a distinguisher set, the expression we need to (approximately) maximize at each round can be written as:
$$\sum_{\tau=1}^{t-1} f(x, y_\tau) + a \cdot \Gamma_x = \sum_{\tau=1}^{t-1} f(x, y_\tau) + \sum_{i=1}^{N} a_i f(x, y_i)$$
This shows that the perturbations transform into a set of weighted instances, where the weights $a_i$ are randomly drawn from the uniform distribution on [0, η]. This is already a significant improvement, since now the oracle has to consider only weighted instances of the offline problem and not the arbitrary perturbation $a \cdot \Gamma_x$ we were assuming until now. Furthermore, for a variety of problems (including GKP), we can construct a distinguisher set $y_1, \ldots, y_N$ such that:
af (x, y j ) = f (x, ay j ) ∀a ∈ R, j ∈ [N ]
If this is true, then we can shift the random weights of the oracle inside the instances:
$$\sum_{\tau=1}^{t-1} f(x, y_\tau) + a \cdot \Gamma_x = \sum_{\tau=1}^{t-1} f(x, y_\tau) + \sum_{i=1}^{N} f(x, a_i y_i)$$
Thus, if we have a distinguisher set for a given problem, to apply GFTPL all we need is an FPTAS for optimizing the total payoff over a set of weighted instances.
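Putting this together, the perturbed-leader step reduces to one call of a multi-instance oracle on the history plus N randomly scaled distinguisher instances. A minimal sketch (the scale_instance and fptas_oracle interfaces are assumptions of this illustration, not part of the paper):

import random

def gftpl_decide(history, distinguisher_set, eta, eps, fptas_oracle, scale_instance):
    # history: the observed instances y_1, ..., y_{t-1}
    # distinguisher_set: the instances y_1, ..., y_N discussed above
    # scale_instance(a, y): assumed helper realizing the scaled instance a * y
    # fptas_oracle(instances, eps): assumed (1 - eps)-approximate maximizer of the
    #   total payoff of an action over a list of weighted instances
    perturbation = [scale_instance(random.uniform(0.0, eta), y) for y in distinguisher_set]
    return fptas_oracle(list(history) + perturbation, eps)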
We now provide a distinguisher set for the generalized knapsack problem. Consider a set of n instances $(p^j, B^j)$ of the problem such that in instance $(p^j, B^j)$ item j has profit P, all other items have profit 0 and the knapsack capacity is $B^j = W^s$. Since the total weight of a set of items can never exceed $W^s$, it is easy to see that $\forall x \in \mathcal{X}$:
$$f(x, p^j, B^j) = \begin{cases} P & \text{if item } j \text{ is selected in set } x \\ 0 & \text{otherwise} \end{cases}$$
For any two different assignments x and x ′ , there is at least one item j ∈ [n] that they don't have in common. It is easy to see that in the corresponding instance (y j , B j ) one of the assignments will have total profit P and the other will have total profit 0. Thus, the proposed set of instances is indeed a distinguisher set for the generalized knapsack problem. We use this set of instances to implement the Γ matrix. Then, every column of Γ will have exactly 2 distinct values 0 and P , making the translation matrix (2, P )-admissible. As a result, in order to achieve a vanishing regret for online learning GKP, all we need is an FPTAS for the multi-instance generalized knapsack problem. | 8,388 |
1901.03270 | 2890654010 | Abstract Scheduling is essentially a decision-making process that enables resource sharing among a number of activities by determining their execution order on the set of available resources. The emergence of distributed systems brought new challenges on scheduling in computer systems, including clusters, grids, and more recently clouds. On the other hand, the plethora of research makes it hard for both newcomers researchers to understand the relationship among different scheduling problems and strategies proposed in the literature, which hampers the identification of new and relevant research avenues. In this paper we introduce a classification of the scheduling problem in distributed systems by presenting a taxonomy that incorporates recent developments, especially those in cloud computing. We review the scheduling literature to corroborate the taxonomy and analyze the interest in different branches of the proposed taxonomy. Finally, we identify relevant future directions in scheduling for distributed systems. | In computer science, with the constant networking and middleware development, scheduling in distributed processing systems is one of the topics which has gained attention in the last two decades. Casavant and Kuhl @cite_115 present a taxonomy of scheduling in general purpose distributed systems. The classification presented by the authors include local and global, static and dynamic, distributed and non-distributed, cooperative and non-cooperative scheduling, as well as some approaches to solve the problem, such as optimal and sub-optimal, heuristic, and approximate. This presented classification is complete in some sense, and it is still valid nowadays. However the current state of distributed systems indeed demands the addition of new branches in this taxonomy. | {
"abstract": [
"One measure of the usefulness of a general-purpose distributed computing system is the system's ability to provide a level of performance commensurate to the degree of multiplicity of resources present in the system. A taxonomy of approaches to the resource management problem is presented in an attempt to provide a common terminology and classification mechanism necessary in addressing this problem. The taxonomy, while presented and discussed in terms of distributed scheduling, is also applicable to most types of resource management. >"
],
"cite_N": [
"@cite_115"
],
"mid": [
"2152408640"
]
} | 0 |
||
1901.03270 | 2890654010 | Abstract Scheduling is essentially a decision-making process that enables resource sharing among a number of activities by determining their execution order on the set of available resources. The emergence of distributed systems brought new challenges on scheduling in computer systems, including clusters, grids, and more recently clouds. On the other hand, the plethora of research makes it hard for both newcomers researchers to understand the relationship among different scheduling problems and strategies proposed in the literature, which hampers the identification of new and relevant research avenues. In this paper we introduce a classification of the scheduling problem in distributed systems by presenting a taxonomy that incorporates recent developments, especially those in cloud computing. We review the scheduling literature to corroborate the taxonomy and analyze the interest in different branches of the proposed taxonomy. Finally, we identify relevant future directions in scheduling for distributed systems. | Kwok and Ahmad @cite_141 survey static scheduling algorithms for allocating tasks connected as directed task graphs (DAGs) into multiprocessors. The authors presented a simplified taxonomy for approaches to the problem, as well as the description and classification of @math scheduling algorithms. The DAG scheduling algorithms for multiprocessors have been adapted for scheduling in distributed systems, incorporating intrinsic characteristics of such systems for an enhanced performance. Therefore, Kwok and Ahmad presented static scheduling algorithms for multiprocessors, which are also applicable to distributed systems, and their classification. In this paper we review extensions of those algorithms as well as the their classification by including heterogeneous systems, dynamic scheduling algorithms, scheduling algorithms in modern distributed environments, and new scheduling techniques. | {
"abstract": [
"Static scheduling of a program represented by a directed task graph on a multiprocessor system to minimize the program completion time is a well-known problem in parallel processing. Since finding an optimal schedule is an NP-complete problem in general, researchers have resorted to devising efficient heuristics. A plethora of heuristics have been proposed based on a wide spectrum of techniques, including branch-and-bound, integer-programming, searching, graph-theory, randomization, genetic algorithms, and evolutionary methods. The objective of this survey is to describe various scheduling algorithms and their functionalities in a contrasting fashion as well as examine their relative merits in terms of performance and time-complexity. Since these algorithms are based on diverse assumptions, they differ in their functionalities, and hence are difficult to describe in a unified context. We propose a taxonomy that classifies these algorithms into different categories. We consider 27 scheduling algorithms, with each algorithm explained through an easy-to-understand description followed by an illustrative example to demonstrate its operation. We also outline some of the novel and promising optimization approaches and current research trends in the area. Finally, we give an overview of the software tools that provide scheduling mapping functionalities."
],
"cite_N": [
"@cite_141"
],
"mid": [
"2040466547"
]
} | 0 |
||
1901.03270 | 2890654010 | Abstract Scheduling is essentially a decision-making process that enables resource sharing among a number of activities by determining their execution order on the set of available resources. The emergence of distributed systems brought new challenges on scheduling in computer systems, including clusters, grids, and more recently clouds. On the other hand, the plethora of research makes it hard for both newcomers researchers to understand the relationship among different scheduling problems and strategies proposed in the literature, which hampers the identification of new and relevant research avenues. In this paper we introduce a classification of the scheduling problem in distributed systems by presenting a taxonomy that incorporates recent developments, especially those in cloud computing. We review the scheduling literature to corroborate the taxonomy and analyze the interest in different branches of the proposed taxonomy. Finally, we identify relevant future directions in scheduling for distributed systems. | In the last decades, after Kwok and Ahmad's work, other surveys and taxonomies for solutions to the scheduling problem for parallel systems have been developed. Most of these works focus on heterogeneous distributed systems @cite_159 , which Ahmad and Kwok considered one of the most challenging directions to follow @cite_141 . Job scheduling strategies for grid computing are evaluated in @cite_168 . The authors present common scheduling structures, such as centralized, decentralized, and hierarchical. Within each scheduling structure, they present and evaluate @math processor selection strategies and three scheduling algorithms, among them first come first serve (FCFS) and Backfill. After this work, many dynamic scheduling strategies were developed to tackle the grid dynamicity. | {
"abstract": [
"Static scheduling of a program represented by a directed task graph on a multiprocessor system to minimize the program completion time is a well-known problem in parallel processing. Since finding an optimal schedule is an NP-complete problem in general, researchers have resorted to devising efficient heuristics. A plethora of heuristics have been proposed based on a wide spectrum of techniques, including branch-and-bound, integer-programming, searching, graph-theory, randomization, genetic algorithms, and evolutionary methods. The objective of this survey is to describe various scheduling algorithms and their functionalities in a contrasting fashion as well as examine their relative merits in terms of performance and time-complexity. Since these algorithms are based on diverse assumptions, they differ in their functionalities, and hence are difficult to describe in a unified context. We propose a taxonomy that classifies these algorithms into different categories. We consider 27 scheduling algorithms, with each algorithm explained through an easy-to-understand description followed by an illustrative example to demonstrate its operation. We also outline some of the novel and promising optimization approaches and current research trends in the area. Finally, we give an overview of the software tools that provide scheduling mapping functionalities.",
"In this paper, we discuss typical scheduling structures that occur in computational grids. Scheduling algorithms and selection strategies applicable to these structures are introduced and classified. Simulations were used to evaluate these aspects considering combinations of different Job and Machine Models. Some of the results are presented in this paper and are discussed in qualitative and quantitative way. For hierarchical scheduling, a common scheduling structure, the simulation results confirmed the benefit of Backfill. Unexpected results were achieved as FCFS proves to perform better than Backfill when using a central job-pool.",
"The problem of optimally scheduling tasks onto heterogeneous resources in grids, minimizing the makespan of these tasks, has proved to be NP-complete. There is no best scheduling algorithm for all grid computing systems. An alternative is to select an appropriate scheduling algorithm to use in a given grid environment because of the characteristics of the tasks, machines and network connectivity. In this paper a survey is presented on the problem and the different aspects of job scheduling in grids such as (a) fault-tolerance; (b) security; and (c) simulation of grid job scheduling strategies are discussed. This paper also presents a discussion on the future research topics and the challenges of job scheduling in grids."
],
"cite_N": [
"@cite_141",
"@cite_168",
"@cite_159"
],
"mid": [
"2040466547",
"1822495682",
"1500711357"
]
} | 0 |
||
1901.03270 | 2890654010 | Abstract Scheduling is essentially a decision-making process that enables resource sharing among a number of activities by determining their execution order on the set of available resources. The emergence of distributed systems brought new challenges on scheduling in computer systems, including clusters, grids, and more recently clouds. On the other hand, the plethora of research makes it hard for both newcomers researchers to understand the relationship among different scheduling problems and strategies proposed in the literature, which hampers the identification of new and relevant research avenues. In this paper we introduce a classification of the scheduling problem in distributed systems by presenting a taxonomy that incorporates recent developments, especially those in cloud computing. We review the scheduling literature to corroborate the taxonomy and analyze the interest in different branches of the proposed taxonomy. Finally, we identify relevant future directions in scheduling for distributed systems. | As claimed in the scheduling review @cite_11 , parallel job scheduling reviews are needed on a regular basis. The purpose of that short review was to introduce clusters and grids into the parallel job scheduling literature. Indeed, the authors present an introduction to job scheduling in grids, highlighting differences between a parallel computer and the grid. They point out cross-domain load balancing and co-allocations as two main concerns when scheduling in grids. In our work, we introduce a classification of schedulers in distributed systems that comprises a more extensive view of grid computing algorithms. Moreover, we highlight new requirements of the emergent cloud computing paradigm as well as its differences from grid computing. | {
"abstract": [
"The popularity of research on the scheduling of parallel jobs demands a periodic review of the status of the field. Indeed, several surveys have been written on this topic in the context of parallel supercomputers [17, 20]. The purpose of the present paper is to update that material, and to extend it to include work concerning clusters and the grid."
],
"cite_N": [
"@cite_11"
],
"mid": [
"1493147916"
]
} | 0 |
||
1901.03270 | 2890654010 | Abstract Scheduling is essentially a decision-making process that enables resource sharing among a number of activities by determining their execution order on the set of available resources. The emergence of distributed systems brought new challenges on scheduling in computer systems, including clusters, grids, and more recently clouds. On the other hand, the plethora of research makes it hard for both newcomers researchers to understand the relationship among different scheduling problems and strategies proposed in the literature, which hampers the identification of new and relevant research avenues. In this paper we introduce a classification of the scheduling problem in distributed systems by presenting a taxonomy that incorporates recent developments, especially those in cloud computing. We review the scheduling literature to corroborate the taxonomy and analyze the interest in different branches of the proposed taxonomy. Finally, we identify relevant future directions in scheduling for distributed systems. | @cite_75 present a taxonomy of the scheduling problem for workflows considering multiple criteria optimization in grid computing environments. The authors separate the multi-criteria scheduling taxonomy into @math facets, namely scheduling process, scheduling criteria, resource model, task model, and workflow model, each facet describing the problem from a different point of view. These facets are expanded to classify existing works at a finer granularity, pointing out where current research can be expanded and the existing work in each facet. | {
"abstract": [
"The workflow scheduling problem which is considered difficult on the Grid becomes even more challenging when multiple scheduling criteria are used for optimization. The existing approaches can address only certain variants of the multi-criteria workflow scheduling problem, usually considering up to two contradicting criteria being scheduled in some specific Grid environments. A comprehensive description of the problem can be an important step towards more general scheduling approaches. Based on the related work and on our own experience, we propose several novel taxonomies of the multi-criteria workflow scheduling problem, considering five facets which may have a major impact on the selection of an appropriate scheduling strategy: scheduling process, scheduling criteria, resource model, task model, and workflow model. We analyze different existing workflow scheduling approaches for the Grid, and classify them according to the proposed taxonomies, identifying the most common use cases and the areas which have not been sufficiently explored yet."
],
"cite_N": [
"@cite_75"
],
"mid": [
"1895859803"
]
} | 0 |
||
1901.03270 | 2890654010 | Abstract Scheduling is essentially a decision-making process that enables resource sharing among a number of activities by determining their execution order on the set of available resources. The emergence of distributed systems brought new challenges on scheduling in computer systems, including clusters, grids, and more recently clouds. On the other hand, the plethora of research makes it hard for both newcomers researchers to understand the relationship among different scheduling problems and strategies proposed in the literature, which hampers the identification of new and relevant research avenues. In this paper we introduce a classification of the scheduling problem in distributed systems by presenting a taxonomy that incorporates recent developments, especially those in cloud computing. We review the scheduling literature to corroborate the taxonomy and analyze the interest in different branches of the proposed taxonomy. Finally, we identify relevant future directions in scheduling for distributed systems. | We highlight two conclusions achieved by the authors in @cite_75 which are touched by contributions given by our survey: (i) grid workflow scheduling problem is still not fully addressed by existing work''; and (ii) there are almost no workflow scheduling approaches which are based on an adaptive cost model for criteria''. As a contribution to (i), in this survey we expand the general distributed system scheduling to comprise it. As a contribution to (ii), we include scheduling taxonomies for utility grids and cloud computing environments. | {
"abstract": [
"The workflow scheduling problem which is considered difficult on the Grid becomes even more challenging when multiple scheduling criteria are used for optimization. The existing approaches can address only certain variants of the multi-criteria workflow scheduling problem, usually considering up to two contradicting criteria being scheduled in some specific Grid environments. A comprehensive description of the problem can be an important step towards more general scheduling approaches. Based on the related work and on our own experience, we propose several novel taxonomies of the multi-criteria workflow scheduling problem, considering five facets which may have a major impact on the selection of an appropriate scheduling strategy: scheduling process, scheduling criteria, resource model, task model, and workflow model. We analyze different existing workflow scheduling approaches for the Grid, and classify them according to the proposed taxonomies, identifying the most common use cases and the areas which have not been sufficiently explored yet."
],
"cite_N": [
"@cite_75"
],
"mid": [
"1895859803"
]
} | 0 |
||
1901.02936 | 2909346791 | Linear mixed models (LMMs) are widely used for heritability estimation in genome-wide association studies (GWAS). In standard approaches to heritability estimation with LMMs, a genetic relationship matrix (GRM) must be specified. In GWAS, the GRM is frequently a correlation matrix estimated from the study population's genotypes, which corresponds to a normalized Euclidean distance kernel. In this paper, we show that reliance on the Euclidean distance kernel contributes to several unresolved modeling inconsistencies in heritability estimation for GWAS. These inconsistencies can cause biased heritability estimates in the presence of linkage disequilibrium (LD), depending on the distribution of causal variants. We show that these biases can be resolved (at least at the modeling level) if one adopts a Mahalanobis distance-based GRM for LMM analysis. Additionally, we propose a new definition of partitioned heritability -- the heritability attributable to a subset of genes or single nucleotide polymorphisms (SNPs) -- using the Mahalanobis GRM, and show that it inherits many of the nice consistency properties identified in our original analysis. Partitioned heritability is a relatively new area for GWAS analysis, where inconsistency issues related to LD have previously been known to be especially pernicious. | Recently, in independent work, @cite_0 proposed using the Mahalanobis kernel in a similar way for heritability estimation with GWAS data. That paper primarily focuses on empirical analysis, using both simulated and real datasets to illustrate advantages of the Mahalanobis kernel. The present work contains more precise mathematical and statistical justification for much of the work in @cite_0 , and introduces statistical principles (e.g. @math -heritability in Section ) that can be extended to other targeted application areas in genetics (like partitioning heritability). | {
"abstract": [
"Single nucleotide polymorphism (SNP)-heritability estimation is an important topic in several research fields, including animal, plant and human genetics, as well as in ecology. Linear mixed model estimation of SNP-heritability uses the structures of genomic relationships between individuals, which is constructed from genome-wide sets of SNP-markers that are generally weighted equally in their contributions. Proposed methods to handle dependence between SNPs include, “thinning” the marker set by linkage disequilibrium (LD)-pruning, the use of haplotype-tagging of SNPs, and LD-weighting of the SNP-contributions. For improved estimation, we propose a new conceptual framework for genomic relationship matrix, in which Mahalanobis distance-based LD-correction is used in a linear mixed model estimation of SNP-heritability. The superiority of the presented method is illustrated and compared to mixed-model analyses using a VanRaden genomic relationship matrix, a matrix used by GCTA and a matrix employing LD-weighting (as implemented in the LDAK software) in simulated (using real human, rice and cattle genotypes) and real (maize, rice and mice) datasets. Despite of the computational difficulties, our results suggest that by using the proposed method one can improve the accuracy of SNP-heritability estimates in datasets with high LD."
],
"cite_N": [
"@cite_0"
],
"mid": [
"2773459529"
]
} | THE MAHALANOBIS KERNEL FOR HERITABILITY ESTIMATION IN GENOME-WIDE ASSOCIATION STUDIES: FIXED-EFFECTS AND RANDOM-EFFECTS METHODS * | 0 |
|
1901.03031 | 2909148692 | In the past decades, feature-learning-based 3D shape retrieval approaches have received widespread attention in the computer graphics community. These approaches usually explored hand-crafted distance metrics or conventional distance metric learning methods to compute the similarity of a single feature. A single feature always contains onefold geometric information, which cannot characterize 3D shapes well. Therefore, multiple features should be used for the retrieval task to overcome the limitation of a single feature and further improve the performance. However, most conventional distance metric learning methods fail to integrate the complementary information from multiple features to construct the distance metric. To address this issue, a novel multi-feature distance metric learning method for non-rigid 3D shape retrieval is presented in this study, which can make full use of the complementary geometric information from multiple shape features by utilizing KL-divergences. Minimizing the KL-divergence between the metric of each feature and a common metric acts as a consistency constraint, which leads to a consistent shared latent feature space for the multiple features. We apply the proposed method to 3D model retrieval, and test our method on a well-known benchmark database. The results show that our method substantially outperforms the state-of-the-art non-rigid 3D shape retrieval methods. | Appropriate similarities between samples can improve the performance of a retrieval system. During the past decade, several well-known distance metric learning methods have been proposed for various fields @cite_5 @cite_33 @cite_44 @cite_21 @cite_6 @cite_18 , such as ITML @cite_5 , LMNN @cite_33 , SVMs @cite_44 , PCA @cite_21 , LDA @cite_6 , etc. These algorithms have been used for many computer vision and computer graphics tasks, such as classification, retrieval, and correspondence. They address the problem that most features lie in complex high-dimensional spaces where the Euclidean metric is ineffective. However, most distance metric learning methods fail to integrate compatible and complementary information from multiple features to construct a distance metric. In order to exploit more useful information for various applications, many researchers have devised methods that incorporate the multi-view setting into distance metric learning algorithms. Kan @cite_7 proposed a multi-view discriminant analysis as an extension of LDA, which achieves excellent performance when dealing with multi-view features. Wu @cite_31 proposed an online multi-modal distance metric learning method, which has been successfully applied in image retrieval. | {
"abstract": [
"Along with the arrival of multimedia time, multimedia data has replaced textual data to transfer information in various fields. As an important form of multimedia data, images have been widely utilized by many applications, such as face recognition and image classification. Therefore, how to accurately annotate each image from a large set of images is of vital importance but challenging. To perform these tasks well, it is crucial to extract suitable features to character the visual contents of images and learn an appropriate distance metric to measure similarities between all images. Unfortunately, existing feature operators, such as histogram of gradient, local binary pattern, and color histogram, care more about the visual character of images and lack the ability to distinguish semantic information. Similarities between those features cannot reflect the real category correlations due to the well-known semantic gap. In order to solve this problem, this paper proposes a regularized distance metric framework called semantic discriminative metric learning (SDML). SDML combines geometric mean with normalized divergences and separates images from different classes simultaneously. The learned distance metric can treat all images from different classes equally. And distinctions between similar classes with entirely different semantic contents are emphasized by SDML. This procedure ensures the consistency between dissimilarities and semantic distinctions and avoids inaccuracy similarities incurred by unbalanced locations of samples. Various experiments on benchmark image datasets show the excellent performance of the novel method.",
"The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner.",
"",
"Abstract Principal component analysis of a data matrix extracts the dominant patterns in the matrix in terms of a complementary set of score and loading plots. It is the responsibility of the data analyst to formulate the scientific issue at hand in terms of PC projections, PLS regressions, etc. Ask yourself, or the investigator, why the data matrix was collected, and for what purpose the experiments and measurements were made. Specify before the analysis what kinds of patterns you would expect and what you would find exciting. The results of the analysis depend on the scaling of the matrix, which therefore must be specified. Variance scaling, where each variable is scaled to unit variance, can be recommended for general use, provided that almost constant variables are left unscaled. Combining different types of variables warrants blockscaling. In the initial analysis, look for outliers and strong groupings in the plots, indicating that the data matrix perhaps should be “polished” or whether disjoint modeling is the proper course. For plotting purposes, two or three principal components are usually sufficient, but for modeling purposes the number of significant components should be properly determined, e.g. by cross-validation. Use the resulting principal components to guide your continued investigation or chemical experimentation, not as an end in itself.",
"",
"In this letter we discuss a least squares version for support vector machine (SVM) classifiers. Due to equality type constraints in the formulation, the solution follows from solving a set of linear equations, instead of quadratic programming for classical SVM‘s. The approach is illustrated on a two-spiral benchmark classification problem.",
"",
"Distance metric learning (DML) is an important technique to improve similarity search in content-based image retrieval. Despite being studied extensively, most existing DML approaches typically adopt a single-modal learning framework that learns the distance metric on either a single feature type or a combined feature space where multiple types of features are simply concatenated. Such single-modal DML methods suffer from some critical limitations: (i) some type of features may significantly dominate the others in the DML task due to diverse feature representations; and (ii) learning a distance metric on the combined high-dimensional feature space can be extremely time-consuming using the naive feature concatenation approach. To address these limitations, in this paper, we investigate a novel scheme of online multi-modal distance metric learning (OMDML), which explores a unified two-level online learning scheme: (i) it learns to optimize a distance metric on each individual feature space; and (ii) then it learns to find the optimal combination of diverse types of features. To further reduce the expensive cost of DML on high-dimensional feature space, we propose a low-rank OMDML algorithm which not only significantly reduces the computational cost but also retains highly competing or even better learning accuracy. We conduct extensive experiments to evaluate the performance of the proposed algorithms for multi-modal image retrieval, in which encouraging results validate the effectiveness of the proposed technique."
],
"cite_N": [
"@cite_18",
"@cite_33",
"@cite_7",
"@cite_21",
"@cite_6",
"@cite_44",
"@cite_5",
"@cite_31"
],
"mid": [
"2408458745",
"2106053110",
"",
"2089468765",
"",
"1596717185",
"",
"2240204456"
]
} | Multi-feature Distance Metric Learning for Non-rigid 3D Shape Retrieval | known benchmark database. The results show that our method substantially outperforms the state-of-the-art non-rigid 3D shape retrieval methods.
Keywords Multi-view learning · Distance metric learning · Non-rigid 3D shape retrieval
Introduction
With the development of information technology [1,2], non-rigid 3D shape retrieval has been an active research topic for many years, driven by the explosive growth of 3D models [3][4][5][6][7][8]. The 3D shape retrieval problem is described as follows: given a set of 3D shapes and a query shape, develop an effective algorithm to measure the similarity of the query [9] to all shapes in the database. 3D models carry complicated geometric structure, which makes it difficult to construct discriminative global features for various applications. A single global feature usually cannot characterize 3D shapes well, i.e., onefold intrinsic geometric information is not enough to discriminate various 3D shapes for the non-rigid retrieval task. Meanwhile, the non-rigid deformations of the shapes introduce noise into the features, which affects the computation of 3D shape similarity. Therefore, how to effectively calculate the distance between non-rigid 3D shapes remains a challenging problem.
In recent years, various non-rigid 3D shape retrieval algorithms have been proposed. Most of them focus on extracting intrinsic features of the shapes from local or global geometric structure and measuring the similarity of these features. Such approaches usually first extract novel intrinsic features for the shapes; a hand-crafted distance metric or a conventional distance metric learning method is then used to compute the similarity of the features. In [3], bag-of-words features with spatial information are constructed by coding spectral signatures, and Similarity Sensitive Hashing (SSH) is used to improve retrieval performance. Litman [6] extracts global features with a sparse dictionary learning algorithm and uses the Euclidean metric to measure the similarity between features. These methods rely on a single intrinsic feature, which is not enough for discriminating various 3D shapes. Unlike single features, multiple features contain compatible and complementary geometric information that can improve the performance of the retrieval task. Chiotellis [10] uses weighted averaging directly on two spectral signatures to construct global features, and the similarity of the features is measured by the Large Margin Nearest Neighbor algorithm. Some approaches [11][12][13] use a weighted average of the distances between single features to measure shape similarity. These methods concatenate all features into one single feature to fit the hand-crafted distance metric or distance metric learning setting. However, this concatenation is not physically meaningful because each feature has its own statistical properties [14]; it therefore cannot exploit the complementary geometric information to discriminate 3D shapes well.
Meanwhile, many researchers focus on multi-view learning methods [15], which have made significant progress in machine learning [16][17][18]. In this view, a real-world object may have different descriptions from multiple observation spaces; these spaces usually look different from each other but are highly related. The multi-view setting is usually combined with single views based on either the consensus principle or the complementary principle to improve the performance of various tasks [14,[19][20][21][22]. Zhai [19] presented a multi-view metric learning method named Multiview Metric Learning with Global consistency and Local smoothness (MVML-GL) under a semi-supervised learning setting. This method first seeks a globally consistent shared latent feature space, and then explicit mapping functions between the input spaces and the shared latent space are learned via regularized locally linear regression; both steps can be solved by convex optimization in closed form. Canonical Correlation Analysis (CCA) [20] is a statistical method for correlating linear relationships between two or more sets of variables, and Kernel CCA (KCCA) uses the kernel framework to extend it to nonlinear settings. Kumar [21] proposed a co-regularized framework that advances co-training for multi-view spectral clustering, where an iterative optimization procedure updates the eigenvectors one after another. Xu [22] proposed Multi-view Intact Space Learning (MISL), which integrates the encoded complementary information in multiple views to discover a latent intact representation of the data. Intact space learning provides a new multi-view representation method and can be extended to supervised learning problems by adding a hinge loss or a multi-view loss to the objective. More comprehensive surveys of recent developments in multi-view learning, and of their coherence with early methods, are provided in [23][24][25][26]. In the computer graphics community, "multi-view" usually refers to projections of a 3D model from multiple angles; to avoid confusing the concepts, we use the term "multi-feature" in place of the machine learning community's "multi-view" [27].
Inspired by multi-view learning methods [28], we develop a novel multi-feature distance metric learning algorithm in this paper, which makes full use of the geometric information from multiple shape features. The algorithm constructs a common distance metric for all features. For each feature, the distance of every inner-class pair is required to be less than a smaller threshold and that of every extra-class pair to be higher than a larger threshold. Meanwhile, the algorithm minimizes the KL-divergence between the Gaussian distributions of different features under their respective distance metrics. Both constraints are adopted to obtain the common distance metric. Figure 1 shows the pipeline of the proposed framework.
The organization of this paper is as follows. Section 2 provides a brief overview of related work on local descriptors, shape features, and metric learning algorithms. Section 3 presents the details of the multi-feature metric learning algorithm for non-rigid 3D shape retrieval. Section 4 reports our experimental results, and Section 5 concludes the paper.
Fig. 1: The pipeline of the MfML-based non-rigid 3D shape retrieval.
Fig. 2: The HKS, WKS and SIHKS point signatures with different parameters
that remains at a point of the surface over a period of time. HKS is intrinsic, multi-scale, and robust, which makes it useful for non-rigid shape analysis. However, HKS is sensitive to changes of the shape scale. Bronstein [33] introduced a scale-invariant version of HKS using the Fourier transform, which moves from the time domain to the frequency domain. Then, Aubry [34] proposed the Wave Kernel Signature (WKS) based on the Schrödinger equation, which describes the average probability over time of locating a particle with a certain energy distribution at a point on the surface. WKS clearly separates the influences of different frequencies, treating all frequencies equally, and hence retains more high-frequency information than HKS. The surveys in [35,36] provide more details on spectral signatures. As mentioned above, although global shape descriptors can be used for shape retrieval directly, their lack of detail limits their performance on benchmarks in which the shapes contain many details. Therefore, making full use of point or local descriptors is important for the non-rigid shape retrieval task. Many approaches aggregate point descriptors, regions, or partial views to construct global intrinsic features. Among these algorithms, Bag of Words (BoW) is the most popular. BoW has been successfully applied in computer vision, natural language processing, etc., and has recently attracted attention in the shape retrieval field [2]. The geometric equivalents of words are local descriptors, which are quantized against a geometric dictionary to obtain bags of geometric words [6]. This algorithm codes the local descriptors into a global feature that contains rich details of the shape. Bronstein [4] exploited the BoW algorithm and added spatial relations to extract the Spatially Sensitive Bags of Features (SS-BoF).
The SS-BoF exhibited excellent performance on the SHREC10 ShapeGoogle benchmark. Litman [6] explored supervised dictionary learning with sparse coding to extract global features from point descriptors. Subsequently, the Fisher Vector (FV) and Super Vector (SV) algorithms were introduced to code point descriptors [37]. These two algorithms are similar to BoW: the dictionary is first built with a Gaussian Mixture Model, and the local descriptors are then coded against the Gaussian components. They capture multi-order information and are therefore more informative than BoW, yielding more comprehensive shape features. Unlike BoW-style coding of descriptors, Li [38] proposed an intrinsic spatial pyramid matching method for the retrieval task and also achieved good performance. Furthermore, some approaches focus more on the metric between the features [10]. Chiotellis [10,11] uses weighted averaging directly on siHKS and WKS to construct global features and then applies the Large Margin Nearest Neighbor algorithm to learn the metric between them. This method is concise, efficient, and effective, and its results outperform many methods on the SHREC14 benchmark. The success of this approach rests on the LMNN algorithm; the distance metric learning algorithm is therefore also very important in the retrieval task.
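As a rough illustration of the bag-of-words coding discussed above, the sketch below quantizes per-vertex spectral descriptors (standing in for WKS or siHKS signatures) against a k-means dictionary and pools them into a normalized histogram; the 64-word dictionary, the scikit-learn KMeans call, and the random toy descriptors are illustrative assumptions rather than the exact pipelines of the cited works.

```python
# Minimal sketch of BoW coding of per-vertex descriptors into a global
# shape feature. Shapes and the scikit-learn KMeans dictionary are
# illustrative assumptions, not the exact pipeline of the cited papers.
import numpy as np
from sklearn.cluster import KMeans

def build_dictionary(descriptor_sets, n_words=64, seed=0):
    """Learn a geometric dictionary from stacked per-vertex descriptors."""
    stacked = np.vstack(descriptor_sets)              # (total_vertices, descr_dim)
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(stacked)

def bow_feature(descriptors, dictionary):
    """Quantize one shape's descriptors and pool them into a global histogram."""
    words = dictionary.predict(descriptors)           # nearest word per vertex
    hist = np.bincount(words, minlength=dictionary.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-12)                # L1-normalized BoW feature

# toy usage: three "shapes", each with random 100-d point signatures
rng = np.random.default_rng(0)
shapes = [rng.normal(size=(500, 100)) for _ in range(3)]
dictionary = build_dictionary(shapes, n_words=64)
features = np.stack([bow_feature(s, dictionary) for s in shapes])  # (3, 64)
print(features.shape)
```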
Appropriate similarities between samples can improve the performance of a retrieval system. During the past decade, several well-known distance metric learning methods have been proposed for various fields [39][40][41][42][43][44], such as ITML [39], LMNN [40], SVMs [41], PCA [42], and LDA [43]. These algorithms have been used for many computer vision and computer graphics tasks, such as classification, retrieval, and correspondence. They address the problem that most features lie in complex high-dimensional spaces where the Euclidean metric is ineffective. However, most distance metric learning methods fail to integrate compatible and complementary information from multiple features when constructing a distance metric. To exploit more useful information for various applications, many researchers have combined the multi-view setting with distance metric learning. Kan [45] proposed multi-view discriminant analysis as an extension of LDA, which achieves excellent performance on multi-view features. Wu [46] proposed an online multi-modal distance metric learning method that has been successfully applied to image retrieval.
Proposed Approach
In this section, we introduce the proposed multi-feature metric learning algorithm (MfML) for non-rigid 3D shape retrieval in detail. We extract different types of intrinsic 3D features. Some are global intrinsic shape descriptors that describe the global structure of the shapes; others are obtained by using the BoW algorithm to code different types of point descriptors, capturing the geometric information of local points at various scales. These multiple intrinsic features are used to train a common metric that fully integrates their compatible and complementary information. We then describe the optimization of the algorithm.
The Structure of Multi-feature Metric Learning
Let $X^v = [x^v_1, x^v_2, \ldots, x^v_N] \in \mathbb{R}^{d_v \times N}$, $v = 1, 2, \ldots, m$, be the training set of the $v$-th intrinsic feature, where $x^v_i \in \mathbb{R}^{d_v}$ is the $i$-th sample and $N$ is the total number of samples. Mahalanobis distance metric learning seeks a square matrix as the metric matrix. For the $v$-th feature, the distance between any two samples $x^v_i$ and $x^v_j$ is computed as
$$d_v(x^v_i, x^v_j) = (x^v_i - x^v_j)^T A_v (x^v_i - x^v_j),$$
with $A_v$ decomposed as $A_v = L_v^T L_v$. Then $d_v(x^v_i, x^v_j)$ can also be written as
$$d_v(x^v_i, x^v_j) = (x^v_i - x^v_j)^T A_v (x^v_i - x^v_j) = \|L_v x^v_i - L_v x^v_j\|^2.$$
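A minimal numerical check of this identity, assuming an arbitrary random $L_v$ in place of a learned projection:

```python
# Sketch: the Mahalanobis quadratic form with A_v = L_v^T L_v equals the
# squared Euclidean distance after projecting with L_v. L_v is random here,
# standing in for a learned matrix.
import numpy as np

rng = np.random.default_rng(0)
d = 30                                   # feature dimension (e.g. after PCA)
L_v = rng.normal(size=(d, d))            # random stand-in for a learned projection
A_v = L_v.T @ L_v                        # induced metric matrix

x_i, x_j = rng.normal(size=d), rng.normal(size=d)
diff = x_i - x_j
mahalanobis = diff @ A_v @ diff                      # (x_i - x_j)^T A_v (x_i - x_j)
projected = np.sum((L_v @ x_i - L_v @ x_j) ** 2)     # ||L_v x_i - L_v x_j||^2
assert np.isclose(mahalanobis, projected)
print(mahalanobis)
```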
This identity shows that learning a Mahalanobis distance metric is equivalent to finding a linear projection onto a subspace, under which the Euclidean distance of two samples in the transformed space equals the Mahalanobis distance in the original space. We expect the Euclidean distances between positive pairs to be smaller than those between negative pairs in that subspace; Figure 3 shows the basic idea. To improve the discriminative ability, we adopt the following constraint [47]:
$$\delta_{ij}\left(\tau - d_v^2(x^v_i, x^v_j)\right) > 1. \tag{1}$$
We use $C$ to denote the set of sample pairs from the same class, and $M$ the set of sample pairs from different classes. Let $\delta_{ij} = -1$ if $(x^v_i, x^v_j) \in M$ and $\delta_{ij} = 1$ otherwise. The constraint in Eq. (1) is then adopted by our algorithm as follows:
$$\min_{L_v,\, v=1,\ldots,m} \; \sum_{v=1}^{m}\sum_{i,j} \frac{1}{2}\, g\!\left(1 - \delta_{ij}\left(\tau - d_v^2(x^v_i, x^v_j)\right)\right) + \sum_{v=1}^{m} \lambda_v \|L_v\|_F^2, \tag{2}$$
where $g(x) = \frac{1}{\rho}\log\left(1 + \exp(\rho x)\right)$ is a smoothed approximation of the hinge loss, $\|L_v\|_F^2$ is the regularization term, and $\lambda_v$ are regularization parameters. The optimal subspace projection matrices $L_v$, $v = 1, \ldots, m$, are found by minimizing Eq. (2).
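A small sketch of the per-feature objective in Eq. (2), assuming toy pair sets and placeholder values for $\tau$, $\rho$, and $\lambda_v$ (the stable log-sum-exp form of $g$ is an implementation choice):

```python
# Sketch of the smoothed hinge loss g and the per-feature term of Eq. (2).
# tau, rho, lambda and the toy pair labels are illustrative placeholders.
import numpy as np

def smoothed_hinge(x, rho=10.0):
    # g(x) = (1/rho) * log(1 + exp(rho * x)), computed stably via logaddexp
    return np.logaddexp(0.0, rho * x) / rho

def feature_loss(L_v, X_v, pairs, deltas, tau=1.0, lam=1e-3):
    """X_v: (d, N) samples of one feature; pairs: list of (i, j); deltas: +/-1 labels."""
    Z = L_v @ X_v                                    # project all samples once
    loss = 0.0
    for (i, j), delta in zip(pairs, deltas):
        d_sq = np.sum((Z[:, i] - Z[:, j]) ** 2)      # d_v^2 under the current metric
        loss += 0.5 * smoothed_hinge(1.0 - delta * (tau - d_sq))
    return loss + lam * np.sum(L_v ** 2)             # + lambda_v * ||L_v||_F^2

rng = np.random.default_rng(0)
X_v = rng.normal(size=(30, 10))                      # one feature, 10 toy samples
L_v = 0.1 * rng.normal(size=(30, 30))
print(feature_loss(L_v, X_v, pairs=[(0, 1), (0, 5)], deltas=[+1, -1]))
```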
Fig. 3: The distance metric is optimized so that differently labeled inputs lie outside this smaller radius by some finite margin.
However, minimizing Eq. (2) is simply a sum of the single-feature objectives under constraint (1), which exploits neither the consensus principle nor the complementary principle for improving learning performance. To combine the complementary information from multiple features, we adopt the hypothesis that each feature of a sample follows a Gaussian distribution with a Mahalanobis metric parameterized by $L_v^T L_v$, and that all these distributions are similar. Inspired by ITML [39] and CMSC [21], we formulate the following cost function to measure the disagreement between the metrics $A_v$ and the consensus metric $A^*$:
$$\min_{L_v} \; KL\!\left(p(x^v; A^*)\,\big\|\,p(x^v; L_v^T L_v)\right) \tag{3}$$
where $p$ is a multivariate Gaussian, $p(x; A) = \frac{1}{Z}\exp\!\left(-\frac{1}{2}(x - \mu)^T A (x - \mu)\right)$, with $Z$ a normalizing constant and $\mu$ the mean vector. The consensus metric $A^* \in \mathbb{R}^{n \times n}$ is defined as $A^* = I + \frac{1}{m}\left(L_1^T L_1 + L_2^T L_2 + \ldots + L_m^T L_m\right)$ and can be treated as the common distance metric for all features. Optimizing Eq. (3) makes all the Gaussian distributions similar, which drives every $A_v$ close to $A^*$. Hence, by adopting the two constraints, we formulate a new cost function to construct the new metric:
$$\min_{L_v,\, v=1,\ldots,m} \; \sum_{v=1}^{m}\sum_{i,j} \frac{1}{2}\, g\!\left(1 - \delta_{ij}\left(\tau - d_v^2(x^v_i, x^v_j)\right)\right) + \beta \sum_{v=1}^{m} KL\!\left(p(x^v; A^*)\,\big\|\,p(x^v; L_v^T L_v)\right) + \sum_{v=1}^{m} \lambda_v \|L_v\|_F^2 \tag{4}$$
where $\beta$ is a parameter that balances the trade-off between the two constraints. We can see from Eq. (4) that MfML separates samples from different classes by using information from multiple features. The consensus $A^*$ is constructed from all $A_v$ and thus fully integrates the complementary information from every feature. Meanwhile, as shown in the optimization process, the update of each $A_v$ is in turn affected by $A^*$.
Optimization Process of MfML
In this section, we provide the details of the optimization process. Computing the gradient directly from the definition of the KL divergence is difficult. Hence, following ITML [39], we simplify the second term as
$$KL\!\left(p(x^v; A^*)\,\big\|\,p(x^v; L_v^T L_v)\right) = \frac{1}{2} D_d\!\left(L_v^T L_v, A^*\right),$$
where $D_d(L_v^T L_v, A^*) = \mathrm{tr}\!\left(L_v^T L_v A^{*-1}\right) - \log\det\!\left(L_v^T L_v A^{*-1}\right) - n$. The quantity $D_d(A, B)$ is called the Burg matrix divergence (or LogDet divergence), which is a convex function defined over matrices. The cost function can then be reformulated as follows:
$$\min_{L_v,\, v=1,\ldots,m} \; \sum_{v=1}^{m}\sum_{i,j} \frac{1}{2}\, g\!\left(1 - \delta_{ij}\left(\tau - d_v^2(x^v_i, x^v_j)\right)\right) + \alpha \sum_{v=1}^{m} \left(\mathrm{tr}\!\left(L_v^T L_v A^{*-1}\right) - \log\det\!\left(L_v^T L_v A^{*-1}\right) - n\right) + \sum_{v=1}^{m} \lambda_v \|L_v\|_F^2 \tag{5}$$
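The consensus metric $A^*$ and the Burg (LogDet) term of Eq. (5) can be sketched as follows; the dimensions and the near-identity toy matrices $L_v$ are assumptions made only to keep the example well-conditioned:

```python
# Sketch: consensus metric A* = I + (1/m) sum_v L_v^T L_v and the Burg
# (LogDet) divergence D_d(L_v^T L_v, A*) appearing in Eq. (5).
import numpy as np

def consensus_metric(L_list):
    """A* = I + (1/m) * sum_v L_v^T L_v."""
    n = L_list[0].shape[1]
    return np.eye(n) + sum(L.T @ L for L in L_list) / len(L_list)

def burg_divergence(A_v, A_star):
    """D_d(A_v, A*) = tr(A_v A*^{-1}) - log det(A_v A*^{-1}) - n."""
    n = A_v.shape[0]
    M = A_v @ np.linalg.inv(A_star)
    _, logdet = np.linalg.slogdet(M)
    return np.trace(M) - logdet - n

rng = np.random.default_rng(0)
# near-identity toy projections keep L_v^T L_v positive definite
L_list = [np.eye(30) + 0.1 * rng.normal(size=(30, 30)) for _ in range(3)]
A_star = consensus_metric(L_list)
for v, L_v in enumerate(L_list):
    print(v, burg_divergence(L_v.T @ L_v, A_star))
```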
To solve Eq. (5), an alternating minimization is carried out. We optimize one $L_v$ at a time, with the other variables fixed, using gradient descent. The consensus metric $A^*$ is updated after every $L_v$ has been optimized, and the $L_v$ are then updated again based on the new $A^*$. The gradient descent (GD) update for $L_v$ is
$$L_v^{t+1} = L_v^t - \left( L_v^t \sum_{ij} \frac{\delta_{ij}(x_i - x_j)(x_i - x_j)^T}{1 + \exp(\beta z_{ij})} + 2 L_v^t A^{*-1} - 2\left(L_v^{t\,T} L_v^t\right)^{\dagger} L_v^t + 2\lambda_v L_v^t \right), \tag{6}$$
where $z_{ij} = 1 - \delta_{ij}\left(\tau - d_v^2(x^v_i, x^v_j)\right)$. Finally, we obtain the consensus metric matrix $A^*$ as the output of the MfML algorithm. $A^*$ can be used directly to measure the similarity between features of any type that have been preprocessed by PCA to unify their dimensionality. From the procedure of updating $L_v$ and $A^*$, we can see that the information from multiple features is integrated in a co-regularized framework.
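A compact sketch of the alternating scheme, applying the update written in Eq. (6) with an explicit step size eta added as an assumption (the clipping inside the exponential is purely for numerical safety in this toy setting):

```python
# Sketch of the alternating optimization: gradient-step each L_v following
# the terms written in Eq. (6), then refresh the consensus A*. Pair sets,
# beta, tau, lambda and the step size eta are illustrative placeholders.
import numpy as np

def grad_L(L, X, pairs, deltas, A_star_inv, beta=1.0, tau=1.0, lam=1e-3):
    d = X.shape[0]
    G = np.zeros((d, d))
    for (i, j), delta in zip(pairs, deltas):
        diff = X[:, i] - X[:, j]
        z = 1.0 - delta * (tau - np.sum((L @ diff) ** 2))
        G += delta * np.outer(diff, diff) / (1.0 + np.exp(np.clip(beta * z, -50, 50)))
    return (L @ G + 2.0 * L @ A_star_inv
            - 2.0 * np.linalg.pinv(L.T @ L) @ L + 2.0 * lam * L)

rng = np.random.default_rng(0)
m, d, N = 3, 30, 10
Xs = [rng.normal(size=(d, N)) for _ in range(m)]          # toy multi-feature data
Ls = [np.eye(d) + 0.01 * rng.normal(size=(d, d)) for _ in range(m)]
pairs, deltas = [(0, 1), (0, 5), (2, 3)], [+1, -1, +1]    # toy pair labels
eta = 1e-3                                                # step size (assumption)
for _ in range(20):
    A_star_inv = np.linalg.inv(np.eye(d) + sum(L.T @ L for L in Ls) / m)
    for v in range(m):
        Ls[v] -= eta * grad_L(Ls[v], Xs[v], pairs, deltas, A_star_inv)
A_star = np.eye(d) + sum(L.T @ L for L in Ls) / m         # consensus metric output
print(A_star.shape)
```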
Experiment
In this section, we report the results of non-rigid 3D shape retrieval based on MfML and compare it with state-of-the-art non-rigid 3D shape retrieval approaches on the SHREC'11 [13] and SHREC'15 [48,49] benchmark datasets. The experiments are conducted on a 3.0 GHz Core(TM) i7 computer with 16 GB memory.
Experiment Setting
For all 3D shape benchmark datasets, we use two different types of point signatures and one global descriptor to form the multiple shape features. The settings of the point signatures and the global descriptor used in our experiments are as follows:
1) WKS: The Wave Kernel Signature describes the average probability over time of locating a particle with a certain energy distribution at a point on the surface [34]. WKS clearly separates the influences of different frequencies, treating all frequencies equally, and organizes the intrinsic geometric information of a point in a multi-scale way.
2) siHKS: The scale-invariant Heat Kernel Signature is a scale-invariant version of the heat kernel descriptor [33]. Its construction is based on a logarithmically sampled scale space, and the absolute values of the Fourier transform are used to remove the scale factor by moving from the time domain to the frequency domain.
3) ShapeDNA: The ShapeDNA is constructed by truncating the normalized sequence of eigenvalues of the LBO [3]. Its main advantages are simple representation, easy comparison, and scale invariance; despite its simplicity, it performs well for non-rigid shape retrieval.
We use the first 100 eigenvectors of the LBO to construct the two point signatures: a 100-dimensional WKS with the variance set to 6, and a 50-dimensional siHKS with the same settings as in [6]. We then use the BoW algorithm to code the WKS and siHKS separately, obtaining 64-dimensional BoW-WKS and BoW-siHKS global features. We use the first 40 normalized eigenvalues of the LBO as the ShapeDNA feature. As pre-processing, PCA is used to project all features into a 30-dimensional subspace.
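The feature preparation step can be sketched as below; random arrays stand in for the real BoW-WKS, BoW-siHKS, and ShapeDNA features computed from the Laplace-Beltrami operator, and scikit-learn's PCA is an illustrative choice:

```python
# Sketch of the feature preparation: each global feature is projected to a
# common 30-d space with PCA. Random arrays replace the real descriptors.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_shapes = 200
features = {
    "bow_wks":   rng.random((n_shapes, 64)),   # 64-d BoW-coded WKS
    "bow_sihks": rng.random((n_shapes, 64)),   # 64-d BoW-coded siHKS
    "shape_dna": rng.random((n_shapes, 40)),   # first 40 normalized LBO eigenvalues
}
# project every feature type to a common 30-d space, stored as (d_v, N) = (30, N)
projected = {name: PCA(n_components=30).fit_transform(F).T
             for name, F in features.items()}
for name, X_v in projected.items():
    print(name, X_v.shape)
```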
Experiment on SHREC'11
In this section, we conduct two experiments on the SHREC'11 benchmark dataset. The database contains 600 watertight meshes derived from 30 original models; every class contains 1 null model and 19 deformed models based on it. First, we compare the MfML-based method with methods related to the LBO: (1) ShapeGoogle [4], (2) Modal Function Transformation (MFT) [30], (3) Supervised Dictionary Learning (SupDL) [6], as well as the three features used without being integrated by MfML. We randomly select 60% of the samples with labels from every class as the training set. In the test stage, we project all features into a 30-dimensional subspace and use MfML to compute the common metric A*. We compare with (1) ShapeDNA
Conclusion
In this paper, we proposed a novel multi-feature metric learning method for non-rigid 3D shape retrieval. MfML exploits compatible and complementary geometric information from multiple intrinsic features. For each feature, MfML constrains the distance of each inner-class pair to be less than a smaller threshold and that of each extra-class pair to be higher than a larger threshold. Meanwhile, by minimizing the KL-divergence between the Gaussian distributions of different features under their respective distance metrics, the multiple features are made to work together toward a consensus distance metric. Both constraints are adopted to obtain an effective common distance metric. Experiments on two benchmark datasets verify that MfML is a highly efficient multi-feature distance metric learning method.
1901.03067 | 2966078298 | Discovering social relations in images can make machines better interpret the behavior of human beings. However, automatically recognizing social relations in images is a challenging task due to the significant gap between the domains of visual content and social relation. Existing studies separately process various features such as faces expressions, body appearance, and contextual objects, thus they cannot comprehensively capture the multi-granularity semantics, such as scenes, regional cues of persons, and interactions among persons and objects. To bridge the domain gap, we propose a Multi-Granularity Reasoning framework for social relation recognition from images. The global knowledge and mid-level details are learned from the whole scene and the regions of persons and objects, respectively. Most importantly, we explore the fine-granularity pose keypoints of persons to discover the interactions among persons and objects. Specifically, the pose-guided Person-Object Graph and Person-Pose Graph are proposed to model the actions from persons to object and the interactions between paired persons, respectively. Based on the graphs, social relation reasoning is performed by graph convolutional networks. Finally, the global features and reasoned knowledge are integrated as a comprehensive representation for social relation recognition. Extensive experiments on two public datasets show the effectiveness of the proposed framework. | The interdisciplinary research of multimedia and sociology has been studied for many years @cite_5 @cite_0 . Popular topics include social networks discovery @cite_25 , key actors detection @cite_1 , group activity recognition @cite_17 , and so on. In recent years, social recognition from images has attracted attention from researchers @cite_20 @cite_3 @cite_27 @cite_30 . For example, Zhang proposed to learn social relation traits from face images by CNNs @cite_13 . Sun proposed a social relation dataset based on the social domain theory @cite_19 and exploited CNNs to recognize social relations from a set of attributes @cite_20 . Li proposed to an attention-based dual-glance model for social relation recognition, in which the first glance extracted features from persons and the second glance focused on contextual cues @cite_2 . Wang proposed to model persons and objects in an image as a graph and perform relation reasoning by a Gated Graph Neural Network @cite_16 . However, they only considered the co-existence of persons and objects in a scene but neglected global information and interactions among persons and objects that are important knowledge for social relation recognition. Therefore, we propose a Multi-Granularity Reasoning framework to explore complementary cues for social relation recognition. | {
"abstract": [
"",
"Multi-person event recognition is a challenging task, often with many people active in the scene but only a small subset contributing to an actual event. In this paper, we propose a model which learns to detect events in such videos while automatically \"attending\" to the people responsible for the event. Our model does not use explicit annotations regarding who or where those people are during training and testing. In particular, we track people in videos and use a recurrent neural network (RNN) to represent the track features. We learn time-varying attention weights to combine these features at each time-instant. The attended features are then processed using another RNN for event detection classification. Since most video datasets with multiple people are restricted to a small number of videos, we also collected a new basketball dataset comprising 257 basketball games with 14K event annotations corresponding to 11 event classes. Our model outperforms state-of-the-art methods for both event classification and detection on this new dataset. Additionally, we show that the attention mechanism is able to consistently localize the relevant players.",
"",
"",
"Proposing that the algorithms of social life are acquired as a domain-based process, the author offers distinctions between social domains preparing the individual for proximity-maintenance within a protective relationship (attachment domain), use and recognition of social dominance (hierarchical power domain), identification and maintenance of the lines dividing \"us\" and \"them\" (coalitional group domain), negotiation of matched benefits with functional equals (reciprocity domain), and selection and protection of access to sexual partners (mating domain). Flexibility in the implementation of domains occurs at 3 different levels: versatility at a bioecological level, variations in the cognitive representation of individual experience, and cultural and individual variations in the explicit management of social life. Empirical evidence for domain specificity was strongest for the attachment domain; supportive evidence was also found for the distinctiveness of the 4 other domains. Implications are considered at theoretical and applied levels.",
"",
"Since the beginning of early civilizations, social relationships derived from each individual fundamentally form the basis of social structure in our daily life. In the computer vision literature, much progress has been made in scene understanding, such as object detection and scene parsing. Recent research focuses on the relationship between objects based on its functionality and geometrical relations. In this work, we aim to study the problem of social relationship recognition, in still images. We have proposed a dualglance model for social relationship recognition, where the first glance fixates at the individual pair of interest and the second glance deploys attention mechanism to explore contextual cues. We have also collected a new large scale People in Social Context (PISC) dataset, which comprises of 22,670 images and 76,568 annotated samples from 9 types of social relationship. We provide benchmark results on the PISC dataset, and qualitatively demonstrate the efficacy of the proposed model.",
"The people in an image are generally not strangers, but instead often share social relationships such as husband-wife, siblings, grandparent-child, father-child, or mother-child. Further, the social relationship between a pair of people influences the relative position and appearance of the people in the image. This paper explores using familial social relationships as context for recognizing people and for recognizing the social relationships between pairs of people. We introduce a model for representing the interaction between social relationship, facial appearance, and identity. We show that the family relationship a pair of people share influences the relative pairwise features between them. The experiments on a set of personal collections show significant improvement in people recognition is achieved by modeling social relationships, even in a weak label setting that is attractive in practical applications. Furthermore, we show the social relationships are effectively recognized in images from a separate test image collection.",
"",
"Social relation defines the association, e.g., warm, friendliness, and dominance, between two or more people. Motivated by psychological studies, we investigate if such fine grained and high-level relation traits can be characterised and quantified from face images in the wild. To address this challenging problem we propose a deep model that learns a rich face representation to capture gender, expression, head pose, and age-related attributes, and then performs pairwise-face reasoning for relation prediction. To learn from heterogeneous attribute sources, we formulate a new network architecture with a bridging layer to leverage the inherent correspondences among these datasets. It can also cope with missing target attribute labels. Extensive experiments show that our approach is effective for fine-grained social relation learning in images and videos.",
"In this paper, we study the problem of social relational inference using visual concepts which serve as indicators of actors' social interactions. While social network analysis from videos has started to gain attention in the recent years, the existing work either uses proximity or co-occurrence statistics, or exploit a holistic model of the scene content where the relations are assumed to stay constant throughout the video. This work permits changing relations and argues that there exists a relationship between the visual concepts and the social relations among actors, which is a fundamentally new concept in computer vision. Specifically, we leverage the existing large-scale concept detectors to generate concept score vectors to represent the video content, and we further map them to grouping cues that are used to detect the social structure. In our framework, a probabilistic graphical model with temporal smoothing provides a means to analyze social relations among actors and detect communities. Experiments on Youtube videos and theatrical movies validate the proposed framework.",
"Social relations are the foundation of human daily life. Developing techniques to analyze such relations from visual data bears great potential to build machines that better understand us and are capable of interacting with us at a social level. Previous investigations have remained partial due to the overwhelming diversity and complexity of the topic and consequently have only focused on a handful of social relations. In this paper, we argue that the domain-based theory from social psychology is a great starting point to systematically approach this problem. The theory provides coverage of all aspects of social relations and equally is concrete and predictive about the visual attributes and behaviors defining the relations included in each domain. We provide the first dataset built on this holistic conceptualization of social life that is composed of a hierarchical label space of social domains and social relations. We also contribute the first models to recognize such domains and relations and find superior performance for attribute based features. Beyond the encouraging performance of the attribute based approach, we also find interpretable features that are in accordance with the predictions from social psychology literature. Beyond our findings, we believe that our contributions more tightly interleave visual recognition and social psychology theory that has the potential to complement the theoretical work in the area with empirical and data-driven models of social life.",
"We present a unified framework for understanding human social behaviors in raw image sequences. Our model jointly detects multiple individuals, infers their social actions, and estimates the collective actions with a single feed-forward pass through a neural network. We propose a single architecture that does not rely on external detection algorithms but rather is trained end-to-end to generate dense proposal maps that are refined via a novel inference scheme. The temporal consistency is handled via a person-level matching Recurrent Neural Network. The complete model takes as input a sequence of frames and outputs detections along with the estimates of individual actions and collective activities. We demonstrate state-of-the-art performance of our algorithm on multiple publicly available benchmarks."
],
"cite_N": [
"@cite_30",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_13",
"@cite_25",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"2963274633",
"",
"",
"2040658249",
"",
"2962794823",
"1745797027",
"2963175158",
"2962784841",
"2051481196",
"2609468337",
"2558630670"
]
} | 0 |
||
1907.05315 | 2961663897 | In this work, we present an end-to-end framework to settle data association in online Multiple-Object Tracking (MOT). Given detection responses, we formulate the frame-by-frame data association as Maximum Weighted Bipartite Matching problem, whose solution is learned using a neural network. The network incorporates an affinity learning module, wherein both appearance and motion cues are investigated to encode object feature representation and compute pairwise affinities. Employing the computed affinities as edge weights, the following matching problem on a bipartite graph is resolved by the optimization module, which leverages a graph neural network to adapt with the varying cardinalities of the association problem and solve the combinatorial hardness with favorable scalability and compatibility. To facilitate effective training of the proposed tracking network, we design a multi-level matrix loss in conjunction with the assembled supervision methodology. Being trained end-to-end, all modules in the tracker can co-adapt and co-operate collaboratively, resulting in improved model adaptiveness and less parameter-tuning efforts. Experiment results on the MOT benchmarks demonstrate the efficacy of the proposed approach. | Data association is the process of dividing a set of instances into different groups, such that to maximize the global cross-group similarities while maintaining one-to-one association constraint. This fundamental technique exists in various domains that involve correspondence matching @cite_65 , such as person re-identification @cite_102 @cite_26 @cite_20 , keypoint matching @cite_108 , 3D reconstruction @cite_59 , action recognition @cite_88 , and T-by-D based MOT @cite_15 . | {
"abstract": [
"",
"Offline training for object tracking has recently shown great potentials in balancing tracking accuracy and speed. However, it is still difficult to adapt an offline trained model to a target tracked online. This work presents a Residual Attentional Siamese Network (RASNet) for high performance object tracking. The RASNet model reformulates the correlation filter within a Siamese tracking framework, and introduces different kinds of the attention mechanisms to adapt the model without updating the model online. In particular, by exploiting the offline trained general attention, the target adapted residual attention, and the channel favored feature attention, the RASNet not only mitigates the over-fitting problem in deep network training, but also enhances its discriminative capacity and adaptability due to the separation of representation learning and discriminator learning. The proposed deep architecture is trained from end to end and takes full advantage of the rich spatial temporal information to achieve robust visual tracking. Experimental results on two latest benchmarks, OTB-2015 and VOT2017, show that the RASNet tracker has the state-of-the-art tracking accuracy while runs at more than 80 frames per second.",
"Multi-Target Multi-Camera Tracking (MTMCT) tracks many people through video taken from several cameras. Person Re-Identification (Re-ID) retrieves from a gallery images of people similar to a person query image. We learn good features for both MTMCT and Re-ID with a convolutional neural network. Our contributions include an adaptive weighted triplet loss for training and a new technique for hard-identity mining. Our method outperforms the state of the art both on the DukeMTMC benchmarks for tracking, and on the Market-1501 and DukeMTMC-ReID benchmarks for Re-ID. We examine the correlation between good Re-ID and good MTMCT scores, and perform ablation studies to elucidate the contributions of the main components of our system. Code is available.",
"The problem of graph matching under node and pairwise constraints is fundamental in areas as diverse as combinatorial optimization, machine learning or computer vision, where representing both the relations between nodes and their neighborhood structure is essential. We present an end-to-end model that makes it possible to learn all parameters of the graph matching process, including the unary and pairwise node neighborhoods, represented as deep feature extraction hierarchies. The challenge is in the formulation of the different matrix computation layers of the model in a way that enables the consistent, efficient propagation of gradients in the complete pipeline from the loss function, through the combinatorial optimization layer solving the matching problem, and the feature extraction hierarchy. Our computer vision experiments and ablation studies on challenging datasets like PASCAL VOC keypoints, Sintel and CUB show that matching models refined end-to-end are superior to counterparts based on feature hierarchies trained for other problems.",
"In this paper, we present a new framework for geo-locating an image utilizing a novel multiple nearest neighbor feature matching method using Generalized Minimum Clique Graphs (GMCP). First, we extract local features (e.g., SIFT) from the query image and retrieve a number of nearest neighbors for each query feature from the reference data set. Next, we apply our GMCP-based feature matching to select a single nearest neighbor for each query feature such that all matches are globally consistent. Our approach to feature matching is based on the proposition that the first nearest neighbors are not necessarily the best choices for finding correspondences in image matching. Therefore, the proposed method considers multiple reference nearest neighbors as potential matches and selects the correct ones by enforcing consistency among their global features (e.g., GIST) using GMCP. In this context, we argue that using a robust distance function for finding the similarity between the global features is essential for the cases where the query matches multiple reference images with dissimilar global features. Towards this end, we propose a robust distance function based on the Gaussian Radial Basis Function (G-RBF). We evaluated the proposed framework on a new data set of 102k street view images; the experiments show it outperforms the state of the art by 10 percent.",
"Complex human activities occurring in videos can be defined in terms of temporal configurations of primitive actions. Prior work typically hand-picks the primitives, their total number, and temporal relations (e.g., allow only followed-by), and then only estimates their relative significance for activity recognition. We advance prior work by learning what activity parts and their spatiotemporal relations should be captured to represent the activity, and how relevant they are for enabling efficient inference in realistic videos. We represent videos by spatiotemporal graphs, where nodes correspond to multiscale video segments, and edges capture their hierarchical, temporal, and spatial relationships. Access to video segments is provided by our new, multiscale segmenter. Given a set of training spatiotemporal graphs, we learn their archetype graph, and pdf's associated with model nodes and edges. The model adaptively learns from data relevant video segments and their relations, addressing the “what” and “how.” Inference and learning are formulated within the same framework - that of a robust, least-squares optimization - which is invariant to arbitrary permutations of nodes in spatiotemporal graphs. The model is used for parsing new videos in terms of detecting and localizing relevant activity parts. We out-perform the state of the art on benchmark Olympic and UT human-interaction datasets, under a favorable complexity-vs.-accuracy trade-off.",
"This paper introduces a novel approach to the task of data association within the context of pedestrian tracking, by introducing a two-stage learning scheme to match pairs of detections. First, a Siamese convolutional neural network (CNN) is trained to learn descriptors encoding local spatio-temporal structures between the two input image patches, aggregating pixel values and optical flow information. Second, a set of contextual features derived from the position and size of the compared input patches are combined with the CNN output by means of a gradient boosting classifier to generate the final matching probability. This learning approach is validated by using a linear programming based multi-person tracker showing that even a simple and efficient tracker may outperform much more complex models when fed with our learned matching probabilities. Results on publicly available sequences show that our method meets state-of-the-art standards in multiple people tracking.",
""
],
"cite_N": [
"@cite_26",
"@cite_65",
"@cite_102",
"@cite_108",
"@cite_59",
"@cite_88",
"@cite_15",
"@cite_20"
],
"mid": [
"",
"2797812763",
"2794497862",
"2799132636",
"2121765205",
"2137275576",
"2964015640",
""
]
} | Graph Neural Based End-to-end Data Association Framework for Online Multiple-Object Tracking | Given a video sequence, Multi-Object Tracking (MOT) algorithms generate consistent trajectories by localizing and identifying multiple targets in consecutive frames. Considering its spatial-temporal nature, the MOT task is intrinsically complicated, as it entails a formidable solution search space. Moreover, the complications of MOT further aggravate with the increasing number of targets, complex object behaviors, and intricate real-life tracking environments.
Aiming at decoupling the combinatorial complications, most trackers solve object localization and identification separately, leading to two categories of MOT algorithms. On one hand, Tracking-by-Prediction methods [93,34,12] prioritize object identification by deploying multiple Single Object Trackers (SOTs) on the basis of motion prediction. However, in the absence of detections, these methods struggle to adapt to the varying object number caused by object birth & death (i.e., objects entering or leaving the scene). On the other hand, Tracking-by-Detection methods first localize objects anonymously with detectors, then resolve object identification via data association [59,77,32].
In this work, we follow the Tracking-by-Detection strategy. Taking detections as given, the core of our proposed tracker is its data association module. To achieve online tracking capability, the tracker performs frame-by-frame data associations, which can be graphically formulated as Maximum Weighted Bipartite Matching problems. For each pair of consecutive frames, a weighted bipartite graph is constructed involving trajectories in the previous frame and detection responses in the current frame. The matching problem established thereupon is resolved by first generating pairwise affinities as edge weights, then solving the resulting optimization problem to generate the association output.
Accordingly, the data association module starts by generating pairwise affinities. In tracking scenes plagued by target appearance variations and similar distractors, the expressivity and discriminability of the computed affinities are determined by the adopted feature representation method, as well as the distance metric deployed to quantify the affinities. Earlier approaches leverage advanced handcrafted features [86,53,15] to achieve robust representations. More recently, CNN-based deep features are widely exploited instead [77,38,72,3]. Furthermore, a multi-cue strategy has also been practiced to supplement the appearance cue with others [42,53,57], among which motion is the most widely adopted [70,63,1,64,37]. On the basis of the encoded feature vectors, hand-engineered distance measures [86,73,60] are generally utilized to compute the affinity scores. In addition, attempts have also been made to learn metrics that co-adapt with the feature learning [8,64,79].
Given the computed affinities, the subsequent optimization problem defined on the weighted bipartite graph is normally cast as a linear assignment and solved with well-designed optimizers or heuristics [11,71]. However, these approaches suffer from tedious design efforts, prohibitive computational expense, and poor scalability. In particular, in the presence of frequent object birth & death, the combinatorial constraints of the linear assignment are violated, inducing erroneous optimization results which in turn lead to false associations. As a solution, we strive to resolve the optimization problem in a data-driven way, relying on the function approximation capacity of deep networks. Nevertheless, this approach is non-trivial to realize. First, although deep neural networks such as CNNs and RNNs have exceptional feature learning capability, their capacity to conduct relational reasoning for data association is limited. Second, the varying number of targets gives rise to a changing dimensionality of the association problem, demanding that the otherwise fixed model be adaptive. Moreover, the data available for tracking is limited and can hardly support the training of heavy models.
Inspired by the graphical formulation of the optimization problem, we observe that a Graph Neural Network (GNN) [67] is well-suited to solve it. By reasoning over non-Euclidean graph data in a message-passing manner, the proposed GNN optimization module is endowed with improved relational reasoning capacity and can cope with the varying cardinality problem via localized operations. Furthermore, the module is light-weight and converges well. By integrating the aforementioned affinity learning module end-to-end, all parameters in the data association pipeline can co-adapt and co-operate compactly, resulting in better model adaptiveness, scalability, and efficiency with acceptable model complexity. To better optimize the complicated network with its diverse modules, we design the multi-level matrix loss, which is assembled to enhance training performance. The main contributions of this work include:
• We propose an end-to-end framework incorporating affinity learning and optimization modules to solve the data association problem in online multiple-object tracking.
• We design the optimization module with a Graph Neural Network (GNN), which learns to solve the constructed Maximum Weighted Bipartite Matching problem in a data-driven way, avoiding excessive algorithm design and parameter-tuning efforts.
• We employ assembled supervision in conjunction with the proposed multi-level matrix loss to ensure the training performance of the end-to-end network composing diverse modules.
• We demonstrate experimentally that the GNN optimization module improves data association performance, and overall our method yields competitive results with other state-of-the-art trackers on the MOT benchmark.
End-to-end Data Association
Using affinity measures as edge weights, the data association of online multi-object tracking can take the graphical form of a Maximum Weighted Bipartite Matching problem. This matching problem can be formulated into a linear assignment and solved with well-defined algorithms [19,51]. In this section, we present our end-to-end data association model, wherein the affinity computation and optimization module are jointly trained to achieve co-adaptation and cooperation. More importantly, the varying cardinality of the optimization problem caused by object birth & death is settled in a data-driven way.
Problem Formulation
For online data association, one bipartite graph $G^t$ is constructed between every pair of consecutive frames, where the two disjoint sets contain, respectively, the existing trajectories $T^{t-1} = \{T^{t-1}_1, T^{t-1}_2, \ldots, T^{t-1}_i\}$ in the previous frame $t-1$ and the newly detected object observations $O^t = \{o^t_1, o^t_2, \ldots, o^t_j\}$ in the current frame $t$, with $i \in I$ and $j \in J$, where $I$ and $J$ define the cardinality of the association problem. In particular, trajectory $T^{t-1}_i$ is represented by its bounding box observation $t^{t-1}_i$ (i.e., the $(x, y, w, h)$ annotation) at frame $t-1$ and its short tracklet of coordinates. The graph is weighted by the affinity matrix $S \in \mathbb{R}^{I \times J}$, where the affinity score $s_{ij} \in \mathbb{R}$ is the weight on the edge between trajectory $i$ and observation $j$. Each edge is also associated with a binary indicator $x_{ij}$ in the association matrix $X$. The association result is the solution of a Maximum Weighted Bipartite Matching problem defined on $G^t$, i.e., the optimal association matrix $X^*$ of the corresponding linear assignment defined as follows:
$$X^* = \operatorname*{argmax}_{X} \; S^T \cdot X \tag{1}$$
$$\text{s.t.}\quad \forall j: \sum_i x_{ij} \le 1, \qquad \forall i: \sum_j x_{ij} \le 1, \qquad x_{ij} \in \{0, 1\}, \qquad \sum_{ij} x_{ij} = k,\; k \le \min(I, J) \tag{2}$$
The computation in (1) denotes the dot-product of the two vectorized matrices. The first two constraints in (2) ensure assignment feasibility, i.e., no two trajectories can claim the same observation in the same frame. The last constraint, the entrywise matrix norm of $X$, indicates that exactly $k$ one-to-one associations satisfy the equality constraint. In other words, $k$ one-to-one associations are established across the two frames, whereas the rest are birth & death associations. In cases where $I \neq J$, zero nodes and edges need to be augmented into the graph to maintain symmetry in the matrices $S$ and $X$, so that the Hungarian algorithm can be applied to solve the assignment. Nonetheless, the Hungarian algorithm and similar alternatives are not ideal for all tracking scenarios. For one, such algorithms scale poorly with increasing problem size and easily become intractable in real tracking scenes. More seriously, $k$ is not known a priori during tracking due to irregular object birth & death, so the last constraint in (2) does not always hold, i.e., the optimization formulated above cannot cover all association scenarios. In these cases, combinatorial optimization algorithms are no longer applicable.
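For reference, the classical baseline criticized above can be sketched with SciPy's Hungarian solver; the zero-padding value and the affinity threshold used to declare birth/death candidates are illustrative choices, not part of the proposed method:

```python
# Sketch of the classical baseline: solve the bipartite matching of
# Eqs. (1)-(2) with the Hungarian algorithm. Padding for I != J and the
# affinity threshold for birth/death are illustrative choices.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(S, min_affinity=0.3):
    """S: (I, J) affinity matrix between trajectories (rows) and detections."""
    I, J = S.shape
    n = max(I, J)
    padded = np.zeros((n, n))          # zero-affinity dummy nodes keep S square
    padded[:I, :J] = S
    rows, cols = linear_sum_assignment(-padded)       # maximize total affinity
    matches, unmatched_trk, unmatched_det = [], [], []
    for r, c in zip(rows, cols):
        if r < I and c < J and padded[r, c] >= min_affinity:
            matches.append((r, c))                    # one-to-one association
        else:
            if r < I: unmatched_trk.append(r)         # death candidate
            if c < J: unmatched_det.append(c)         # birth candidate
    return matches, unmatched_trk, unmatched_det

S = np.array([[0.9, 0.1, 0.2],
              [0.2, 0.8, 0.1]])        # 2 trajectories, 3 detections
print(associate(S))
```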
Aiming at an efficient optimization compatible with real tracking scenes, we establish a module that leverages a GNN to approximate the optimization solution via supervised learning. In addition, an affinity learning module is proposed to compute the affinity matrices. Jointly, they realize an end-to-end data association module in which both components co-adapt and cooperate.
Affinity Learning Module
Affinity is the quantification of similarity between observations, and affinity computation is the starting point of a variety of matching-based tasks [97,91,44,90,9]. In the context of online multi-object tracking, an affinity matrix $S \in \mathbb{R}^{I \times J}$ is computed to weight the bipartite graph. Each element $s_{ij}$ in $S$ indicates the similarity between trajectory $T_i$ and the newly detected observation $o_j$. In the proposed tracker, $s_{ij}$ is computed end-to-end, from input bounding box annotations to scalar affinity scores, through the proposed two-stream affinity learning module: appearance affinity is quantified with a Siamese Convolutional Neural Network, while motion affinity is investigated via proximity reasoning with an LSTM motion prediction component. A set of fully connected layers is deployed as the learnable metric to integrate the two.
Appearance Affinity
The appearance cue provides the most defining image evidence to recognize and discriminate an object. In the presence of appearance variations and similar distractors, a robust feature representation is vital to achieve reliable affinity. In our method, appearance affinity is computed with a Siamese CNN feature encoding architecture. As shown in Figure 1, the image patches $p_i$ and $p_j$ are encoded into feature vectors $F^A_i$ and $F^A_j$ of dimension $D_A$. The appearance affinity score is computed from $F^A_i$ and $F^A_j$ via the learnable metric described later in this section. Different from the early fusion strategy adopted in [44], where pairs of instances are stacked before being input to the Siamese network, we opt to fuse $F^A_i$ and $F^A_j$ later, as both of them are also needed in the GNN optimization module.
Motion Affinity
The motion cue offers generic and appearance-invariant information to characterize an object according to its dynamic behavior. It has been proven to be a beneficial complement that reinforces the appearance cue in cases of severe appearance variations and cluttered backgrounds [70,63,64]. As shown in Figure 1, motion affinity is computed on the basis of motion prediction with an LSTM motion model.
Metric Learning
Aiming at end-to-end module adaptation and cooperation, we avoid hand-engineered distance metrics and instead learn them from data. As shown in Figure 1, three metric learning components $\phi_A(\cdot)$, $\phi_M(\cdot)$, and $\phi_S(\cdot)$ are implemented in a distributed manner within the affinity learning module. In particular, $\phi_A(\cdot)$ ($\phi_M(\cdot)$) is formed by a sequence of fully connected layers interleaved with non-linearities, which first concatenates a pair of appearance (motion) feature vectors into a $2D_A$ ($2D_M$) dimensional vector and then transforms it into a scalar pairwise affinity score. The two-stream appearance and motion cues are integrated by $\phi_S(\cdot)$, which mimics a weighted summation of $a_{ij}$ and $m_{ij}$ and outputs $s_{ij}$ as the final affinity score. This distributed computation of affinities renders $a_{ij}$, $m_{ij}$, and $s_{ij}$ as intermediate outputs, upon which assembled supervision can be realized; details can be found in Section 4.1. The affinity matrices $A$, $M$, and $S$ are generated by packing $a_{ij}$, $m_{ij}$, and $s_{ij}$ into matrix form, and $S$ is fed into the following GNN optimization module. Additionally, the encoded appearance feature $F^A$ and motion feature $F^M$ of each trajectory and observation are concatenated into one unified feature vector $F \in \mathbb{R}^{D}$ with $D = D_A + D_M$. The vectors for trajectories and observations are packed into $F_M$ and $F_N$, respectively, which are also input to the GNN optimization module.
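As an illustration of how such a two-stream affinity module could be wired up, the following PyTorch sketch encodes 84x32 crops with a small shared CNN and computes pairwise scores with concatenation-based MLP metrics. The layer widths and feature dimensions are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class AppearanceEncoder(nn.Module):
    """Shared (Siamese) CNN mapping an 84x32 RGB crop to a D_A-dimensional vector."""
    def __init__(self, d_a=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU())
        self.fc = nn.Linear(64 * 6 * 2, d_a)      # 84x32 shrinks to 6x2 after four stride-2 convs

    def forward(self, x):                         # x: (B, 3, 84, 32)
        return self.fc(self.conv(x).flatten(1))   # F^A: (B, D_A)

class PairMetric(nn.Module):
    """phi_A / phi_M: concatenate a feature pair and output a scalar affinity."""
    def __init__(self, d):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, f_i, f_j):
        return self.mlp(torch.cat([f_i, f_j], dim=-1)).squeeze(-1)

# phi_S: mimics a weighted summation of the appearance and motion scores (a_ij, m_ij).
phi_s = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
```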
GNN Optimization Module
The main obstacle to applying deep neural networks to data association is the varying cardinality. To cope with such variations, tedious heuristics must be designed when adopting deep networks that involve fixed-size matrix multiplications, such as MLPs or LSTMs.
Motivated by the graphical nature of the optimization problem, we are thus inspired to overcome the varying cardinality problem by utilizing a Graph Neural Network. In comparison with the LSTM of [50], we believe a Graph Neural Network (GNN) is more suitable for solving the maximum weighted bipartite matching problem for three reasons. Firstly, a GNN operates on graphical data and therefore matches the graphical formulation of the optimization problem. Secondly, a GNN deploys local operations in a message-passing way and can thus cope with the varying cardinality. Thirdly, a GNN supports light-weight implementations, so the training data requirement is less intense. Following these motivations, a particular GNN architecture is established as shown in Figure 2. Given the computed affinity matrix $S$ as edge weights and the feature vectors $F_M$, $F_N$ as node features on a weighted incomplete bipartite graph $G^t$, the proposed GNN module is composed of a feature update layer and a relation update layer, and is expected to solve the optimization problem by outputting an optimal association matrix $X^*$, which denotes an optimal set of one-to-one associations together with accurate birth & death indications.
Feature update layer
Taking the input edge weights and node features, the feature update layer instantiates the message-passing functionality via matrix multiplications in the context of bipartite graphs, i.e., it updates the feature vector of each node in one set of the graph according to all nodes in the other set, weighted by the affinities in between. Since each node in a bipartite graph only has one-hop neighbors, a single pair of feature update layers defined as follows suffices to realize feature updates globally. After the message-passing step, the resulting features are further embedded into a higher dimension for enlarged model capacity.
$$F'_M = \rho(\mathrm{Softmax}(S)\, F_N W_\theta) \quad (3)$$
$$F'_N = \rho(\mathrm{Softmax}(S^T)\, F_M W_\theta) \quad (4)$$
On the left-hand side of the above equations, $F'_M \in \mathbb{R}^{I \times C}$ and $F'_N \in \mathbb{R}^{J \times C}$ denote the updated features of each trajectory in the previous frame and of each observation in the current frame, respectively. On the right-hand side, $S \in \mathbb{R}^{I \times J}$ represents the affinity matrix computed by the affinity learning module as discussed in Section 3.2. $\mathrm{Softmax}(S)$ indicates applying softmax normalization row-wise to the affinity matrix (in (4), to the transpose of $S$). $W \in \mathbb{R}^{D \times C}$ indicates a set of learnable weights and $\theta$ denotes their parameterization. $\rho(\cdot)$ is an element-wise non-linearity, for which we adopt ReLU in this paper.
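A minimal PyTorch sketch of the feature update layer in (3)-(4) follows; it assumes the weight matrix $W_\theta$ is shared between the two directions, which the text does not state explicitly.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureUpdate(nn.Module):
    """One bipartite message-passing step following (3)-(4)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W = nn.Linear(d_in, d_out, bias=False)   # learnable weights W_theta (assumed shared)

    def forward(self, S, F_M, F_N):
        # S: (I, J) affinities; F_M: (I, D) trajectory features; F_N: (J, D) detection features.
        F_M_new = F.relu(self.W(torch.softmax(S, dim=1) @ F_N))      # eq. (3)
        F_N_new = F.relu(self.W(torch.softmax(S.t(), dim=1) @ F_M))  # eq. (4)
        return F_M_new, F_N_new
```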
Relation update layer
The updated feature vectors are fed into the relation update layer, wherein the elements $x_{ij} \in \mathbb{R}$ of the association matrix $X$ are estimated for every edge by first aggregating the features of a pair of nodes into a feature on the edge connecting the two, and then applying a learnable transformation to compute the scalar output. This layer can be formalized as follows:
$$x_{ij} = \mathrm{MLP}_\theta(\sigma(F'_i, F'_j)) \quad (5)$$
Here $\sigma(\cdot)$ represents the feature aggregation function that aggregates a pair of node features into the feature of the edge in between. $\sigma(\cdot)$ can take many forms; in the scope of this work we realize it with a non-parameterized element-wise subtraction. Based on the aggregated edge feature, a Multilayer Perceptron parameterized by $\theta$ instantiates the transformation that produces the scalar value $x_{ij}$.
Figure 2. The pipeline of the proposed optimization module based on GNN. Given the affinity matrix and the features of the objects in the two frames, the module first updates the features in a message-passing way and then applies the relation update to output the optimal association matrix.
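A matching sketch of the relation update layer in (5), realizing $\sigma(\cdot)$ as element-wise subtraction broadcast over all trajectory-detection pairs; the hidden width is an assumption.

```python
import torch
import torch.nn as nn

class RelationUpdate(nn.Module):
    """Estimate x_ij from the updated node features, as in (5)."""
    def __init__(self, d, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, F_M, F_N):
        edge = F_M.unsqueeze(1) - F_N.unsqueeze(0)   # sigma(F'_i, F'_j): shape (I, J, d)
        return self.mlp(edge).squeeze(-1)            # association logits X: shape (I, J)
```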
End-to-end Supervision
It is non-trivial to supervise the training of the proposed end-to-end data association module, for two reasons. Firstly, the data association result is given by $Y = X^*$, a matrix whose dimensionality varies from frame to frame; it contains both one-to-one and birth & death associations, so different supervision needs to be imposed on its rows and columns. Secondly, the end-to-end framework is a network composed of components for various tasks, so training needs to be carefully designed to facilitate back-propagation and to avoid the vanishing gradient problem. To overcome these difficulties, we first clarify the ground truth matrix generation and then propose the multi-level matrix loss in conjunction with assembled supervision.
Ground Truth Generation
To realize matrix-wise supervision, we generate the ground truth association matrix $\hat{Y}$ with elements $\hat{y}_{ij} \in \{0, 1\}$. As defined in (2), there exists a sub-matrix $Y_{O2O} \in \mathbb{R}^{k \times k}$ in $Y$ that corresponds to the one-to-one associations. The corresponding rows and columns in $\hat{Y}$ are generated as one-hot vectors, with a 1 placed at position $(i, j)$ to indicate that row (trajectory) $i$ occupies column (detection) $j$. $Y_{B\&D}$ (with $Y_{B\&D} \cup Y_{O2O} = Y$) indicates the birth & death associations; the corresponding row and column vectors in $\hat{Y}$ are generated with all-zero elements.
Multi-level Matrix Loss
The loss computed on an estimated association matrix $Y \in \mathbb{R}^{m \times n}$ and its corresponding ground truth $\hat{Y}$ is defined as a combination of element-level and vector-level losses. On the element level, each element of the matrix is the estimate of a binary classification specifying whether this element denotes a match or a mismatch. Accordingly, a binary cross-entropy loss $L_e$, formulated as follows, is applied to each element to supervise the classification:
$$L_e = \sum_{i}^{I} \sum_{j}^{J} \big( -p\, \hat{y}_{ij} \log \sigma(y_{ij}) - (1 - \hat{y}_{ij}) \log(1 - \sigma(y_{ij})) \big), \quad (6)$$
where $y_{ij} \in Y$ and $\hat{y}_{ij} \in \hat{Y}$. $p$ is the weighting factor assigned to positive examples to alleviate the sample imbalance; in our experiments $p$ is set to 25.
On the vector level, we separate the one-to-one associations from the birth & death ones and denote the corresponding sub-matrices as $Y_{O2O}$ and $Y_{B\&D}$, respectively. For vectors $v_{O2O}$ within $Y_{O2O}$, a cross-entropy loss $L_{O2O}$ is adopted to supervise a multi-class classification between the estimated vectors and the one-hot ground truths:
$$L_{O2O} = -\sum_{k}^{O2O} \hat{v}_{O2O} \log(\mathrm{softmax}(v_{O2O})), \quad (7)$$
where $\hat{v}_{O2O}$ denotes the corresponding one-hot ground truth vector and $k$ denotes the number of one-to-one associations. For vectors in $Y_{B\&D}$, we apply a mean squared error (MSE) loss $L_{B\&D}$ to push the vector elements toward negative infinity, so that they can be easily recognized during tracking:
$$L_{B\&D} = \sum_{v_{B\&D}} \left\| \mathrm{sigmoid}(v_{B\&D}) \right\|^2, \quad (8)$$
where the number of such vectors is $v = m + n - 2k$. The multi-level matrix loss $L_{Matrix}$ is then computed as the sum of the losses on the different levels over the matrix:
$$L_{Matrix} = L_e + L_{O2O} + L_{B\&D} \quad (9)$$
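The following sketch assembles (6)-(9) for a single predicted matrix of logits. It is a simplified, row-wise reading of the vector-level terms (the paper supervises both rows and columns), with $p = 25$ as reported.

```python
import torch
import torch.nn.functional as F

def matrix_loss(Y, Y_hat, p=25.0):
    """Multi-level matrix loss. Y: predicted logits (I, J); Y_hat: {0,1} ground truth (as float)."""
    # Element level, eq. (6): weighted binary cross-entropy on every element.
    l_e = F.binary_cross_entropy_with_logits(
        Y, Y_hat, pos_weight=torch.tensor(p), reduction='sum')

    one2one = Y_hat.sum(dim=1) > 0                    # rows that hold a true one-to-one match
    # Vector level, one-to-one part, eq. (7): cross-entropy against the one-hot rows.
    l_o2o = (F.cross_entropy(Y[one2one], Y_hat[one2one].argmax(dim=1), reduction='sum')
             if one2one.any() else Y.new_zeros(()))
    # Vector level, birth & death part, eq. (8): push the remaining rows far negative.
    l_bd = torch.sigmoid(Y[~one2one]).pow(2).sum()
    return l_e + l_o2o + l_bd                         # eq. (9)
```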
Assembled Supervision
Instead of computing $L_e$ only on the final output matrix $Y$, we also assemble it computed on the affinity matrices $A$, $M$, and $S$. During back-propagation, gradient flows start at each matrix and flow backwards in a distributed manner. As such, the gradients on earlier network layers are a summation over these flows; the gradient is therefore enhanced and the vanishing problem is alleviated.
Association Result Interpretation
The optimal association matrix $X^*$ produced by the optimization module contains indications for both one-to-one and birth & death associations. The elements of $X^*$ are not binary indicators directly, but they can be easily interpreted. Conforming to our training setup detailed in Section 4.1, each element is the result of a binary classification in which a one-to-one association is marked as positive. As a result, a one-to-one association is indicated by the largest positive value in a row, and the trajectory corresponding to that row occupies the detection denoted by the column in which that largest value resides. Each one-to-one association is denoted by an indicator in $O \in \mathbb{R}^2$; a birth or death happens when a row or column contains only negative values, and each of them is marked with an indicator in $B \in \mathbb{R}$ or $D \in \mathbb{R}$. To efficiently extract $O$, $B$, and $D$ from $X^*$, we follow a straightforward greedy procedure that iteratively finds the largest element $x_{max} = x^*_{ij}$ in $X^*$. If $x_{max} > 0$, then $(i, j)$ is added to $O$, and row $i$ as well as column $j$ is marked unavailable. Once the largest remaining $x_{max}$ is negative, all available but un-associated rows and columns are added to $D$ and $B$, respectively. Accordingly, the final association result is given by the indicators in $O$, $B$, and $D$.
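A NumPy sketch of this greedy readout, treating positive entries as match evidence exactly as described above:

```python
import numpy as np

def interpret(X_star):
    """Greedy readout of X* into one-to-one matches O, births B, and deaths D."""
    X = X_star.astype(float).copy()
    I, J = X.shape
    O, free_rows, free_cols = [], set(range(I)), set(range(J))
    while free_rows and free_cols:
        i, j = np.unravel_index(np.argmax(X), X.shape)
        if X[i, j] <= 0:                        # largest remaining value is negative: stop
            break
        O.append((i, j))
        X[i, :], X[:, j] = -np.inf, -np.inf     # mark row i and column j unavailable
        free_rows.discard(i)
        free_cols.discard(j)
    D, B = sorted(free_rows), sorted(free_cols)  # unmatched trajectories / detections
    return O, B, D
```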
Implementation
The implementation of our method is separated into two parts: the training methodology and the tracking methodology.
Training Methodology
The proposed model is implemented as follows. For the network design, we construct the CNN in the Siamese network with only four convolutional layers interleaved with ReLU non-linearities. For the LSTM module, we set the tracklet length to $L = 5$. We use the ground-truth bounding boxes and target IDs in the MOT17 [49] training set to generate pairs of images and motion information. All cropped images are resized to 84 × 32 to maintain the aspect ratio of the targets. All modules are trained from scratch. The proposed model is trained end-to-end with the Adam [39] optimizer, where the weight decay is set to 0.0005 and the initial learning rate to $lr = 0.001$, divided by 10 every 10000 iterations, for 40000 iterations in total.
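A hedged sketch of the reported optimization schedule; `model`, `loader`, and `matrix_loss` are placeholders for the components introduced above, and the assembled supervision on A, M, and S is omitted for brevity.

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10000, gamma=0.1)

for it, (frames_pair, Y_hat) in enumerate(loader):   # placeholder data pipeline
    if it >= 40000:                                  # 40k iterations in total
        break
    Y = model(frames_pair)                           # predicted association logits
    loss = matrix_loss(Y, Y_hat)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                                 # per-iteration decay: /10 every 10k steps
```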
Tracking Methodology
To limit false births & deaths induced by noisy detections, we set up a time window of length $T_b$ to verify an object birth. Specifically, once a birth is indicated in $B$, the tracker postpones the trajectory initialization until this object appears in $O$ consecutively for the next $T_b$ frames. Similarly, $T_d$ is set up for confirming a death: once $D$ reports a death, we deploy a dummy observation with the object's last seen bounding box dimensions, propagated into future frames with a linear motion model calculated from the trajectory's short tracklet of coordinates. If the dummy fails to be associated within $T_d$ frames, the trajectory is terminated. We set $T_b = fps/2$ and $T_d = fps/6$ (where $fps$ denotes the frame rate), according to the model's performance on the training set.
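The confirmation windows and the dummy-observation propagation described above could look roughly like the sketch below; the tracklet layout and box format are assumptions.

```python
import numpy as np

def confirmation_windows(fps):
    """T_b = fps/2 frames to confirm a birth, T_d = fps/6 frames to confirm a death."""
    return fps // 2, fps // 6

def propagate_dummy(tracklet, steps):
    """Propagate a terminated trajectory's (x, y, w, h) box with a linear motion model
    estimated from its short tracklet of coordinates (most recent box last)."""
    boxes = np.asarray(tracklet, dtype=float)               # shape (L, 4)
    velocity = (boxes[-1] - boxes[0]) / max(len(boxes) - 1, 1)
    return [boxes[-1] + velocity * (s + 1) for s in range(steps)]
```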
Experimental Results
In this section, we present our experimental results on 2D MOT 2015 [45] and MOT17 [49] benchmark datasets with ablation studies and comparisons with selected baselines. More details for the MOT benchmark are available at https://motchallenge.net.
Evaluation Metrics
Evaluation on MOT Benchmark
The evaluation results on the MOT17 and MOT15 datasets are shown in Tables 1 and 2. The arrows in each column denote the favorable direction of change of the corresponding metric. Being the only fully end-to-end trained online tracker, we emphasize highlighting the benefits of the end-to-end training methodology as well as the GNN optimization module. To this end, we avoid excessive parameter tuning during testing, and we refrain from training data augmentation as well as post-tracking performance-boosting heuristics in the experiments. Nevertheless, we still demonstrate competitive results against other published online trackers on both datasets. In particular, on MOT15 we compare our results with the RNN LSTM tracker [50], which is claimed to be the first fully end-to-end multi-object tracking method and which inspired our work. As shown, we achieve favorable results on several important evaluation metrics, including 39.5%, 12.8%, and 3.5% improvements on IDF1, MOTA, and MT. These improvements result from the fact that we integrate the appearance affinity into the end-to-end framework and that the GNN optimization module copes with the varying cardinality problem better than RNNs and LSTMs.
Ablation Study
An ablation study is also conducted on the MOT17 benchmark. As shown in Table 3, we demonstrate the contributions of the assembled supervision and the GNN optimization module. Specifically, the first row of the table shows the performance of the whole network trained with a single supervision applied to the final association output $Y$, without assembled supervision. The second row illustrates the result of directly reasoning the association result from the affinity matrix $S$, without the subsequent GNN optimization module; training for this configuration is assembled on the affinity matrices $A$, $M$, and $S$. The last row reports the full network. As illustrated, the full network outperforms the first-row configuration on all three metrics, with 4.9%, 3.5%, and 7.7% improvements, demonstrating the merits of the assembled supervision. Compared to the second-row configuration, the full network enlarges the improvements to 15%, 122.9%, and 24.4%. Although this configuration cannot be trained with the full assembled supervision due to the absence of $Y$, the extra performance improvements still advocate the contribution of the GNN optimization module.
Conclusion and Future work
We propose an end-to-end data association model for online multi-object tracking. By jointly training the affinity learning module and the GNN optimization module, the two can co-adapt collaboratively, improving the adaptivity, scalability, and accuracy of the data association model. In particular, we introduce GNNs into the context of solving online data association with frequent birth and death for the first time, successfully settling the irregular linear assignment formulation in a data-driven way. In this paper, we emphasize demonstrating the efficacy of end-to-end data association and the GNN optimization module; therefore the affinity computation module is lightly designed and no performance-enhancing heuristics have been employed. The performance of the method can be further enhanced in future work. | 4,269
1901.02212 | 2909336075 | As we enter into the AI era, the proliferation of deep learning approaches, especially generative models, passed beyond research communities as it is being utilized for both good and bad intentions of the society. While generative models get stronger by creating more representative replicas, this strength begins to pose a threat on information integrity. We would like to present an approach to detect synthesized content in the domain of portrait videos, as a preventive solution for this threat. In other words, we would like to build a deep fake detector. Our approach exploits biological signals extracted from facial areas based on the observation that these signals are not well-preserved spatially and temporally in synthetic content. First, we exhibit several unary and binary signal transformations for the pairwise separation problem, achieving 99.39 accuracy to detect fake portrait videos. Second, we use those findings to formulate a generalized classifier of authentic and fake content, by analyzing the characteristics of proposed signal transformations and their corresponding feature sets. We evaluated FakeCatcher both on Face Forensics dataset [46] and on our newly introduced Deep Fakes dataset, performing with 82.55 and 77.33 accuracy respectively. Third, we are also releasing this mixed dataset of synthesized videos that we collected as a part of our evaluation process, containing fake portrait videos "in the wild", independent of a specific generative model, independent of the video compression, and independent of the context. We also analyzed the effects of different facial regions, video segment durations, and dimensionality reduction techniques and compared our detection rate to recent approaches. | Following the Generative Adversarial Networks proposed by @cite_28 , deep learning models have been advancing for generative tasks for inpainting @cite_8 , translation @cite_19 , and editing @cite_10 . GAN architecture can be simplified as the "game" between the generator network and the discriminator network. Generator adapts its parameters to create realistic images that mimic the distribution of the real data, and discriminator adapts its parameters to correctly differentiate real and fake images created by the generator. Inherently, all generative approaches suffer from the control over generation. In the context of GANs, this problem is mostly explored by Variational Autoencoders (VAE) and Conditional GANs to control the generation by putting constraints in the latent space @cite_41 @cite_46 . In addition to improving the control over GANs, other approaches improved the training efficiency, accuracy, and realism of GANs by deep convolutions @cite_54 , Wasserstein distances @cite_21 , least square @cite_53 , and progressive growing @cite_56 . All these advancements in the generative power, realism, and efficiency of GANs resulted in the development of "deep fake"s. | {
"abstract": [
"We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling-in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool the both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as in details. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with the patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete the images of objects with familiar and highly specific structures, such as faces.",
"We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.",
"The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables. We propose a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces. The proposed flow consists of a chain of invertible transformations, where each transformation is based on an autoregressive neural network. In experiments, we show that IAF significantly improves upon diagonal Gaussian approximate posteriors. In addition, we demonstrate that a novel type of variational autoencoder, coupled with IAF, is competitive with neural autoregressive models in terms of attained log-likelihood on natural images, while allowing significantly faster synthesis.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.",
"Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson X2 divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We evaluate LSGANs on LSUN and CIFAR-10 datasets and the experimental results show that the images generated by LSGANs are of better quality than the ones generated by regular GANs. We also conduct two comparison experiments between LSGANs and regular GANs to illustrate the stability of LSGANs.",
"We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"",
"The increasingly photorealistic sample quality of generative image models suggests their feasibility in applications beyond image generation. We present the Neural Photo Editor, an interface that leverages the power of generative neural networks to make large, semantically coherent changes to existing images. To tackle the challenge of achieving accurate reconstructions without loss of feature quality, we introduce the Introspective Adversarial Network, a novel hybridization of the VAE and GAN. Our model efficiently captures long-range dependencies through use of a computational block based on weight-shared dilated convolutions, and improves generalization performance with Orthogonal Regularization, a novel weight regularization method. We validate our contributions on CelebA, SVHN, and CIFAR-100, and produce samples and reconstructions with high visual fidelity."
],
"cite_N": [
"@cite_8",
"@cite_28",
"@cite_41",
"@cite_54",
"@cite_21",
"@cite_53",
"@cite_56",
"@cite_19",
"@cite_46",
"@cite_10"
],
"mid": [
"2738588019",
"2962760235",
"2587284713",
"2173520492",
"2962879692",
"2593414223",
"2962760235",
"2962793481",
"",
"2964144352"
]
} | FakeCatcher: Detection of Synthetic Portrait Videos using Biological Signals | The technological advancements in deep learning have started to revolutionize our perspective about how we solve many difficult problems in computer vision, robotics, and related areas. Common deep learning models for recognition, classification, and segmentation tasks tend to improve how we and machines perceive, learn, and analyze the world. On the other hand, the developments in generative models significantly increased how we and machines tend to mimic the world and create realistic data. Even though it is easy to speculate dystopian scenarios based on both analysis and synthesis approaches, the latter brought the immediate threat on information integrity by disabling our biological "detectors" of authentic content: we cannot simply look at an image to understand if it is fake or not.
Following the recent initiatives for the democratization of AI, generative models [22,45,62,56] are getting more popular and accessible. Although the widespread use of GANs is positively impacting some technologies (i.e., personalized avatars [40], animations [44], and image inpainting [37]), there are also uses of GANs with malicious intent, which impact society by introducing inauthentic content (i.e., celebrity porn [3], fake news [2], and AI art [1]). This lack of authenticity and increasing information obfuscation pose real threats to individuals, the criminal justice system, and information integrity. As every technology is built simultaneously with the counter-technology that neutralizes its negative effects, we believe it is the perfect time to develop a deep fake detector to prevent this threat before it has serious consequences.
We observed that, although GANs are powerful enough to learn and generate photorealistic visual and geometric signals beyond the discriminative capabilities of human eyes, biological signals hidden by nature are still not easily replicable. Biological signals are also intuitive and complimentary ingredients of facial videos, for which generative models are mostly applied in order to create deep fakes. Moreover, videos contain another layer of complexity for the synthesized content to satisfy the consistency in the time dimension, in addition to the signals' spatial coherence. To complete this narrative, our approach exploits hidden biological signals (such as heart rate) to detect inauthentic content on portrait videos, independent of the source of creation and transmission.
Our main contributions include,
• formulations of signal transformations to exploit spatial coherence and temporal consistency of biological signals for the pairwise separation task,
Figure 1. System Overview: We extract biological signals from three facial regions on authentic and fake portrait video pairs. We apply several transformations to compute the spatial coherence and temporal consistency, capture the signal characteristics in feature sets, and train a binary probabilistic SVM. Then we aggregate authenticity probabilities of all segments to decide whether the video is a deep fake.
• a generalized synthetic portrait video detector which can catch "deep fake"s based on biological signals,
• experimental validation of exploitability of spatial coherence and temporal consistency of biological signals for authenticity classification,
• a diverse and unconstrained dataset of synthesized portrait videos to create a test bed for inauthentic content detection in the wild.
Our system processes input videos by collecting video segments with facial parts, defining several regions of interest (ROIs), and extracting several biological signals from those regions. In the first part, those signals, their transformations to different domains (time, frequency, time-frequency), and their correlations in different domains are examined to formulate a solution to pairwise separation. In the second part, we combine the discoveries from the pairwise context with feature extractors in the literature to come up with a generalized authenticity classifier working in a high dimensional feature space. We also aggregate the class probabilities of the segments into a binary decision of "fake or authentic" for the video. The system is depicted in Figure 1.
To evaluate FakeCatcher, we collected over 100 videos, totaling up to 59 GB, from the internet. It is important to note that, unlike existing datasets, our Deep Fakes dataset includes "in the wild" videos, independent of the generative model, independent of the resolution, independent of the compression, and independent of the content and context. We detect the inauthentic content in our dataset with 77.33% accuracy. We also tested our approach on the Face Forensics dataset [46], reaching 82.55% accuracy in classifying inauthentic content. We analyzed the effects of segment durations, facial regions, and dimensionality reduction techniques on those datasets, on our feature sets, and on our signal transformations.
GAN Empowerment
Following the Generative Adversarial Networks proposed by Goodfellow et al. [22], deep learning models have been advancing for generative tasks for inpainting [27], translation [62], and editing [11]. GAN architecture can be simplified as the "game" between the generator network and the discriminator network. Generator adapts its parameters to create realistic images that mimic the distribution of the real data, and discriminator adapts its parameters to correctly differentiate real and fake images created by the generator. Inherently, all generative approaches suffer from the control over generation. In the context of GANs, this problem is mostly explored by Variational Autoencoders (VAE) and Conditional GANs to control the generation by putting constraints in the latent space [32,52]. In addition to improving the control over GANs, other approaches improved the training efficiency, accuracy, and realism of GANs by deep convolutions [45], Wasserstein distances [23], least square [38], and progressive growing [30]. All these advancements in the generative power, realism, and efficiency of GANs resulted in the development of "deep fake"s.
Synthetic Faces
Since Viola-Jones [55], computer vision community treasures the domain of facial images and videos as one of the primary application domains. Following the pattern, applications and explorations of GANs have been done for face completion [37], facial attribute manipulation [49,16,24], frontal view synthesis [26], facial reenactment [31,53,59], identity-preserving synthesis [8] and expression editing [19]. In particular, VAEs and GANs for facial reenactment and video synthesis resulted in the emergence of "deep fake" concept, which can be expressed as replacing the face of a target person with another face in a given video. Although the exact approach is not published, the mainstream deep fake generator is assumed to consist of two autoencoders trained on source and target videos, while keeping the encoder weights similar, so that same general features (illumination, motion, expression, etc.) can be embedded in the encoder and face-specific features can be integrated by the decoder. Another approach, Face2Face [53], reconstructs a target face from a training video and then warps it with the blend shapes obtained by the source video in real-time. Deep Video Portraits [31] and vid2vid [56] follow this approach and employ GANs instead of blend shapes. Although the results of all of these approaches can be very realistic, there are still skipped frames and face misalignments due to illumination changes, occlusions, video compressions, and sudden motions. Such imperfections are best caught by biological signals that we will propose in the next section.
Image Forensics
In par with the increasing number of inauthentic facial images and videos, methods for detecting authenticity of such content have also been proposed. Those are mostly based on finding inconsistencies in images, such as detecting distortions [10], finding compression artifacts [9], and assessing image quality [21]. However, for synthetic images in our context, the noise and distortions are harder to detect due to the non-linearity and complexity of the learning process [34]. It is possible to investigate the color and noise distributions of specific networks [35,48], or training CNNs for synthetic images [4,51], but catching synthetic images and videos in the wild is still an open research topic. The last approach we would like to introduce in this domain, which can be considered as the most similar to ours as the only fake detector on videos, exploits blinks to detect inauthentic facial videos [36] as another biological signal.
Biological Signals
The remote extraction of biological signals roots back to the medical communities to reduce the intrusion on the patients for specific measurements, such as heart rate. Observing subtle color and motion based signals from videos [58,15] enabled methodologies like remote photoplethysmography (rPPG or iPPG) [47,43] and head motion based ballistocardiogram (BCD) [7] for non-intrusive heart rate detection from facial RGB videos. We will mostly focus on PPG as it is more robust against dynamic scenes, while BCD needs static videos. Several approaches proposed improvements to PPG, using chrominance features [18], G channel components [61], optical properties [20], kalman filters [5], and different facial areas for signal extraction [61,54,20,47]. We believe all of these PPG variations contain valuable information in the context of inauthentic videos. In addition, their inter-consistency for real video segments are higher than those from fake videos. Multiple signals also help us regularize the environmental effects (illumination, occlusion, motion, etc.) for robustness. Thus we will work on the following six signals that are combinations of G channel-based [61] (robust against compression artifacts) and chrominance-based PPG [18] (robust against illumination artifacts) on left cheek, right cheek [20], and midregion [54], namely
$\{G_L, G_R, G_M, C_L, C_R, C_M\}$.
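To make these signal definitions concrete, a rough stand-in for the two PPG flavors could be computed per facial ROI as below. This is a simplification of [61] and [18] (normalization and band-pass filtering are omitted here), with the frame layout and ROI format as assumptions.

```python
import numpy as np

def g_ppg(frames, roi):
    """G channel-based PPG: spatial mean of the green channel over the ROI, per frame."""
    y0, y1, x0, x1 = roi
    return np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])   # frames: (T, H, W, 3) RGB

def chrom_ppg(frames, roi):
    """Chrominance-based PPG in the spirit of [18]: combine X = 3R - 2G and
    Y = 1.5R + G - 1.5B of the ROI means with an adaptive weight."""
    y0, y1, x0, x1 = roi
    rgb = np.array([f[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0) for f in frames])
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    x, y = 3 * r - 2 * g, 1.5 * r + g - 1.5 * b
    return x - (x.std() / (y.std() + 1e-8)) * y
```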
Characteristics of Biological Signals on Original-Synthetic Video Pairs
In order to understand the nature of biological signals in the context of synthetic content, we first compared signal responses on original and synthetic video pairs using traditional signal analysis methods, as shown in Figure 4. The motivation behind this analysis is to find the best error metric between pairs of original and synthetic facial videos with similar content. Concluding on an error metric guides which signals and which features to use for a generalized classifier. This analysis also gives grounds for understanding generative systems in terms of biological replicability.
Statistical Features
Setting up a sample toy subset (150 pairs of videos in the test set of Face Forensics [46]), we cut each video into $\omega$-frame windows (the effect of $\omega$ is extensively analyzed in Section 6.2). Our analysis starts by comparing simple metrics such as the mean, standard deviation, and min-max ranges of the PPG signals from original and synthetic video pairs, achieving a best accuracy of 65%. Next, we investigated the same metrics on the absolute values of the differences between consecutive frames in original and synthetic videos, achieving a best accuracy of 75.69%. Since these features aggregate the whole segment signals into single numbers that are not representative enough, we proceeded to pursue metrics in other domains.
Power Spectra
We continued by looking at the power spectral density (PSD) of those signals, $S_o$ and $S_s$, in linear and in log scale, achieving a best accuracy of 79.33% with $\mu_{P}(G_{L_o}) + \sigma_{P}(G_{L_o}) - \mu_{P}(G_{L_s}) - \sigma_{P}(G_{L_s})$. Fourth, we analyzed discrete cosine transforms (DCT) of the log scale of these signals, including their (I, II, III, and IV) variants (see Appendix B), obtaining an accuracy of 77.41%. This, however, inspired the next comparison: the zero-frequency (DC value) of the DCT led us to a significant improvement, to 91.33% accuracy. We ran the same evaluation on the entire Face Forensics dataset, reaching 84.89% accuracy, and on our Deep Fakes dataset, reaching 64.35% accuracy.
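A sketch of the two statistics discussed above, using SciPy; the pairwise decision rule at the end is only an illustration of comparing an original segment against its synthetic counterpart, not the paper's exact test.

```python
import numpy as np
from scipy.signal import welch
from scipy.fft import dct

def psd_stat(sig, fs=30):
    """mu_P + sigma_P of the Welch power spectral density of one PPG segment."""
    _, power = welch(sig, fs=fs)
    return power.mean() + power.std()

def dct_dc(sig):
    """Zero-frequency (DC) coefficient of the type-II DCT of the log-scaled signal."""
    return dct(np.log(np.abs(sig) + 1e-8), type=2)[0]

def pairwise_separates(sig_original, sig_synthetic, stat=dct_dc):
    """Toy pairwise rule: flag the pair as separable if the original scores higher."""
    return stat(sig_original) > stat(sig_synthetic)
```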
Spatio-temporal Coherence of Multiple Signals
We also ran some analysis to account for the coherence of biological signals within each frame. For robustness
Generalized Authentic Content Classifier
Achieving considerably high accuracy for detecting authentic content from pairs of original and synthetic videos, we would like to generalize this approach as a binary classifier for authentic content. In the pair-wise setting, comparison of aggregate features are representative enough. However as these signals are continuous and noisy, we need to extract representative features from the ensemble of those signals to build a generalized classifier. We experimented with several signal transformations in time and frequency domains to explore the artifacts of synthetic videos towards characteristic features (Table 1).
Table 1. Signal Definitions: We define the signals and transformation functions that will be used throughout the analysis.
S: $\{G_L, G_R, G_M, C_L, C_R, C_M\}$
D: $\{|C_L - G_L|, |C_R - G_R|, |C_M - G_M|\}$
A(S): autocorrelation
Â(S): spectral autocorrelation
P(S): power spectral density
W(S): Wavelet transform
L(S): Lyapunov function [39]
G(S): Gabor-Wigner transform [42]
S_C: $\{C_L, C_R, C_M\}$
D_C: $\{|C_L - C_M|, |C_L - C_R|, |C_R - C_M|\}$
A_p(S_C): pairwise cross spectral densities
Feature Sets
Following the signal transformations, we also explored the features to be extracted from those signals. Due to the fact that rPPG is mostly evaluated by the accuracy in heart rate, we researched other features for image authenticity [34], classification of EEG signals [28,41], statistical analysis [60,50,25], and emotion recognition [39,41]. The feature sets are enumerated in Table 7 together with the reference papers for biological signal classification, we refer the reader to the specific paper for the formulation and detailed explanation of all features.
Authenticity Classification
With the motivation of being able to detect deep fakes in the wild, we experimented with combinations of the transformed biological signals and feature sets to classify videos as authentic or fake. Approaching the problem without any assumptions while still obtaining interpretable results is the key motivation that led us to employ SVMs with an RBF kernel [17] for this binary classification task. Appendix A documents our experiments; however, we would like to highlight some of them in the main paper, as well as the parameters that affect the classification accuracy. We denote all experiments by $F_*(T(S))$, where $F_*$ is a feature extractor from Table 7 applied to the (transformed) signal $T(S)$ from Table 1. Note that both the signal transformation and the feature extraction can be applied separately to all elements of the inner set.
For our exploration, we randomly split the [46] dataset into a training set (1540 samples, 60%) and a validation set (1054 samples, 40%). We created feature vectors with the maximum and mean ($F_1$) of the cross power spectral densities of $C_M$ and $C_L$ ($A_p(C_M, C_L)$) of all videos in the training set, as this was the feature with the highest accuracy from Section 4.3. Unlike the pairwise results, the SVM accuracy with $f = F_1(A_p(C_M, C_L))$ was low (68.9394%), but this set a baseline for other methods. Next, we classified by $f = \mu_{P(S)}$ (six features per sample), achieving 68.8805% accuracy, and by $f = \mu_{A_p(D_C)} \cup \mu_{A(S)}$ (9 features per sample), achieving 69.6395% accuracy on the entire [46] dataset. Some of the experiments that led us to use a combination of features to improve the accuracy, both on the [46] dataset and on our Deep Fakes dataset, are listed in Table 2 (columns: $f$, $|f|$, accuracy):
$F_3(\hat{A}(S))$: $|f|$ = 4*6, accuracy 67.5522%
$F_6(L(S))$: $|f|$ = 600, accuracy 69.0408%
$F_4(\log(S))$: $|f|$ = 60, accuracy 69.0702%
$F_2(S)$: $|f|$ = 13*6, accuracy 69.26%
$F_5(P(W(S)))$: $|f|$ = 390, accuracy 69.6395%
$F_4(S) \cup F_3(\log(S)) \cup \mu_{A_p(D_C)}$: $|f|$ = 6*6+4*6+3, accuracy 71.3472%
$F_4(\log(S) \cup A_p(D_C)) \cup F_1(\log(D_C)) \cup F_3(\log(S) \cup A_p(D_C))$: $|f|$ = 6*9+6+4*9, accuracy 72.0114%
Based on this list of experiments (documented in Appendix A), we observed that "authenticity" (i) can be observed both in the time and frequency domains, (ii) is coupled with small motion changes, illumination effects, and compression artifacts in the video and is thus not separable from them, and (iii) can be discovered from the coherence of multiple biological signals. It is also important to note that the reason we exhaustively tried all possible feature extractors in the literature is robustness: we would like our system to be independent of any generative model, any compression/transmission artifact, and any content-related influence, i.e., a robust and generalized FakeCatcher based on the essence of consistency and coherence of biological signals in authentic videos. Our observations together with the experimental results led us to the following feature set for our authenticity classification:
$f = F_1(\log(D_C)) \cup F_3(\log(S) \cup A_p(D_C)) \cup F_4(\log(S) \cup A_p(D_C)) \cup \mu_{\hat{A}(S)} \cup \max(\hat{A}(S))$
The SVM classifier trained with these features on the [46] dataset resulted in 75% accuracy, and on our Deep Fakes dataset resulted in 77.33% accuracy.
Probabilistic Video Classification
As mentioned at the beginning of Section 4.1, we divide the videos into intervals of $\omega$ frames for authenticity classification. Considering that our end goal is to classify videos, not only segments, we need to aggregate the segment classification into a video classification. First, we converted segment classes into a video class by majority voting. The segment classification accuracy of 75% increased to 78.1879%, hinting that some hard-failure segments can be neglected, assuming that they contain significant motion or illumination changes. We computed the true threshold to be 0.447, confirming the assumption. Consequently, we converted our SVM to an SVR to return the probability that a video contains authentic or synthetic facial actors. Assigning the authenticity based on the mean of the segment class probabilities then increased the video classification accuracy to 82.5503%.
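A minimal scikit-learn sketch of a probabilistic segment classifier and the mean-probability video aggregation; `train_features`, `train_labels`, the class index mapping, and the use of 0.447 as the mean-probability threshold are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

clf = SVC(kernel='rbf', probability=True)     # probabilistic RBF-kernel SVM
clf.fit(train_features, train_labels)         # per-segment feature vectors f, labels {0: real, 1: fake}

def classify_video(segment_features, threshold=0.447):
    p_fake = clf.predict_proba(segment_features)[:, 1]   # per-segment fake probability
    return "fake" if p_fake.mean() >= threshold else "authentic"
```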
Results and Analysis
Our system utilizes Matlab for heavy signal processing, the OpenFace library [6] for face detection, libSVM [14] for the classification experiments, and Wavelab [12] for the Wavelet transformation and the $F_5$ feature set. We evaluated our detection accuracy on the Face Forensics dataset [46] and our Deep Fakes dataset. We have documented all experiments in Appendix A; the effects of normalization, filtering, frequency quantization, and DCT coefficients in Appendix B; and the ROI analysis on both the DF and FF datasets with five different region options in Appendix C. Below, we discuss our analysis of segment durations $\omega$ and of dimensionality reduction techniques on the feature set $f$. We also include a comparison section to establish our system as the first approach analyzing biological signals in inauthentic portrait videos.
Datasets
We analyzed our approach on the Face Forensics dataset [46], which has (i) original and synthetic pairs of videos with the same content and actor, and (ii) a compilation of original and synthetic videos, both of which are created using the same generative model. We also collected 114 fake portrait videos from various sources, totaling 37 minutes and 59 GB. We trimmed the non-portrait parts of the videos and coupled them with their original pairs. Figure 5 demonstrates a subset of the Deep Fakes dataset, with originals in the top half and fakes in the bottom half. The aspect ratio, size, frame rate ([25, 30] fps), resolution, compression, source, and generative model of the videos vary significantly within the dataset. The dataset is released on our project page for the community.
Table 3. Accuracy per Segment Duration: We document $\omega$, segment count, and the corresponding segment and video accuracies, on the toy (FF test), entire (FF) [46], and Deep Fakes (DF) datasets. Rows without video accuracy denote pairwise evaluations.
Analysis of Segment Duration
The pairwise decision task differs from the main motivation of FakeCatcher, so we analyzed $\omega$ per context. Table 3 documents all results on both the subset and the entire [46] dataset, and on the Deep Fakes dataset. The top half shows the effect of $\omega$ on the pairwise comparison. Accordingly, the choice of $\omega = 300$ (10 sec) was long enough to detect strong correlations without including too many content-based changes.
For the generalized classifier, we can be more flexible about hard-failure segments, because we can depend on the probabilistic video classifier to account for those cases. Thus, selecting a relatively smaller segment duration, $\omega = 180$ (6 sec), helped us increase the video classification accuracy while still keeping segments long enough to extract the biological signals. Note that the classifier gets less accurate at Deep Fakes video classification as $\omega$ increases; this is because some videos are too short to contain multiple segments of that duration and are therefore discarded.
Blind Source Separation
To better understand our features, feature sets, and their relative importance, we computed the Fisher criterion to see if we have any linearly separable features. No significantly high ratio was observed, guiding us towards kernel-based SVMs and more source separation trials. We also tried PCA (principal component analysis) and CSP (common spatial patterns) to reduce the dimensionality of our feature spaces. Figure 6 shows the 3D distribution of authentic (red) and synthetic (blue) samples along the three most significant components found by PCA and CSP, without clear class boundaries. We also tried to condense the feature vector that gave our best classification accuracy; however, we achieved 71% accuracy after PCA and 65.43% accuracy after CSP.
Comparison
Even though "deep fake"s are a relatively new problem, there are already a few papers working in this domain. [46] employs another generative model for detection, but their model is restricted to the results of their previous method. [4] also has a high detection rate if the synthetic content is generated by [53] or the VAE network used in the Fak-erApp. [36] also reaches a high accuracy, but they are dependent on eye detection and eye parameterization. All of these approaches employs neural networks blindly and does not make an effort to understand the generative noise that we have experimentally characterized by biological signals. Also it is not clear how they would perform on our Deep Fakes dataset, because it is less constrained and more diverse then their validation data.
Implementation Details
For each video segment, we apply a Butterworth filter [13] with a frequency band of [0.7, 14]. We quantize the signal using Welch's method [57]. Then we collect the frequencies between $[h_{low}, h_{high}]$, which correspond to the below, in, and above ranges for heart rate. There was no clear frequency interval that accumulated generative noise in the biological signals (see Appendix B), so we included all frequencies. In summary, we followed the PPG extraction methods of [61] for $G_L, G_M, G_R$ and of [18] for $C_L, C_M, C_R$. In heart rate extraction, the PPG signal goes through significant denoising and componentizing steps to fit the signal into expected ranges and periods. We observed that some signal frequencies and temporal changes that may be considered noise for heart rate extraction actually contain valuable information for our case. Moreover, as our main motivation is not finding accurate heart rates, we intentionally skipped some of the steps that clean the PPG signals, with the motivation of keeping the subtle generative noise.
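A sketch of the filtering and spectral quantization steps with SciPy; the filter order is an assumption, and the 14 Hz cut-off is clipped below the Nyquist frequency for the dataset's 25 to 30 fps videos.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def preprocess_ppg(sig, fs=30, band=(0.7, 14.0), order=4):
    """Band-pass the raw PPG segment with a Butterworth filter,
    then quantize its spectrum with Welch's method."""
    nyq = fs / 2.0
    high = min(band[1], 0.99 * nyq)                  # clip the upper cut-off below Nyquist
    b, a = butter(order, [band[0] / nyq, high / nyq], btype='band')
    filtered = filtfilt(b, a, sig)
    freqs, power = welch(filtered, fs=fs)
    return filtered, freqs, power
```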
Discussions and Future Work
One approach that stands out from Section 6.4 is to use CNNs for discovering the generative noise from original and synthetic image pairs [4]. Because our component analysis in Section 6.3 does not show strong signs of separability of the features (which is intuitive, as the generative models and all additive artifacts contribute to the noisy and non-linear nature of the output), we foresee that an autoencoder could learn a latent representation of the biological signal features, later to be used for authenticity classification. However, as this is the first comprehensive analysis, we wanted to incorporate interpretable features for our classifier, and we see VAEs as the next step for biological feature extraction.
Another future venue we want to explore is to employ our pairwise separation formulation in the loss function of a generative model, to create deeper fakes immune to biological signal analysis. In future, we would like to design a novel generative adversarial loss function to enable creating portrait videos enriched with biological signals, thus even more believable and photorealistic.
Conclusion
Table 7. Feature Sets (denotation, explanation, reference):
$F_1$: mean and maximum of cross spectral density (Section 4.3)
$F_2$: root mean square of differences, standard deviation, mean of absolute differences, ratio of negative differences, zero crossing rate, average prominence of peaks, standard deviation of prominence of peaks, average peak width, standard deviation of peak width, maximum derivative, minimum derivative, mean of derivative, mean of spectral centroid [50]
$F_3$: count of narrow pulses in the spectral autocorrelation, count of spectral lines in the spectral autocorrelation, average energy of narrow pulses, maximum value of the spectral autocorrelation [25]
$F_4$: standard deviation, standard deviation of mean values of 1 second windows, root mean square of 1 second differences, mean standard deviation of differences, standard deviation of differences, mean of autocorrelation, Shannon entropy [29]
$F_5$: first n Wavelet coefficients [28,39]
$F_6$: mean amplitude of high frequency signals, slope of PSD curves between high and low frequencies, variance of inter-peak distance [33]
We present FakeCatcher, an inauthentic portrait video detector based on biological signals. We experimentally validated that the spatial coherence and temporal consistency of such signals are not well preserved in GANerated content; thus, we can derive a high dimensional feature vector to aggregate those characteristics for authenticity classification. We evaluated our approach for pairwise separation and for generalized authenticity classification, of video segments and of entire videos, on the Face Forensics [46] and Deep Fakes datasets, achieving up to 99.39% (pairwise) and 82.55% (generalized) accuracy. Apart from FakeCatcher, we believe the second main contribution of this paper is to understand deep fakes in the wild. To our knowledge, generative models have not been explored through biological signals before, and we present the first experimental literature survey for understanding and explaining human signals in synthetic portrait videos (Appendix A). We expect the observations discussed in Section 5.2 to enlighten future analysis of both generative noise and deep fake detectors. We also encourage this line of research to continue by sharing our Deep Fakes dataset. | 4,260
1901.02212 | 2909336075 | As we enter into the AI era, the proliferation of deep learning approaches, especially generative models, passed beyond research communities as it is being utilized for both good and bad intentions of the society. While generative models get stronger by creating more representative replicas, this strength begins to pose a threat on information integrity. We would like to present an approach to detect synthesized content in the domain of portrait videos, as a preventive solution for this threat. In other words, we would like to build a deep fake detector. Our approach exploits biological signals extracted from facial areas based on the observation that these signals are not well-preserved spatially and temporally in synthetic content. First, we exhibit several unary and binary signal transformations for the pairwise separation problem, achieving 99.39 accuracy to detect fake portrait videos. Second, we use those findings to formulate a generalized classifier of authentic and fake content, by analyzing the characteristics of proposed signal transformations and their corresponding feature sets. We evaluated FakeCatcher both on Face Forensics dataset [46] and on our newly introduced Deep Fakes dataset, performing with 82.55 and 77.33 accuracy respectively. Third, we are also releasing this mixed dataset of synthesized videos that we collected as a part of our evaluation process, containing fake portrait videos "in the wild", independent of a specific generative model, independent of the video compression, and independent of the context. We also analyzed the effects of different facial regions, video segment durations, and dimensionality reduction techniques and compared our detection rate to recent approaches. | In par with the increasing number of inauthentic facial images and videos, methods for detecting authenticity of such content have also been proposed. Those are mostly based on finding inconsistencies in images, such as detecting distortions @cite_9 , finding compression artifacts @cite_30 , and assessing image quality @cite_20 . However, for synthetic images in our context, the noise and distortions are harder to detect due to the non-linearity and complexity of the learning process @cite_57 . It is possible to investigate the color and noise distributions of specific networks @cite_17 @cite_6 , or training CNNs for synthetic images @cite_48 @cite_38 , but catching synthetic images and videos in the wild is still an open research topic. The last approach we would like to introduce in this domain, which can be considered as the most similar to ours as the only fake detector on videos, exploits blinks to detect inauthentic facial videos @cite_18 as another biological signal. | {
"abstract": [
"In recent months a machine learning based free software tool has made it easy to create believable face swaps in videos that leaves few traces of manipulation, in what are known as \"deepfake\" videos. Scenarios where these realistic fake videos are used to create political distress, blackmail someone or fake terrorism events are easily envisioned. This paper proposes a temporal-aware pipeline to automatically detect deepfake videos. Our system uses a convolutional neural network (CNN) to extract frame-level features. These features are then used to train a recurrent neural network (RNN) that learns to classify if a video has been subject to manipulation or not. We evaluate our method against a large set of deepfake videos collected from multiple video websites. We show how our system can achieve competitive results in this task while using a simple architecture.",
"",
"The new developments in deep generative networks have significantly improve the quality and efficiency in generating realistically-looking fake face videos. In this work, we describe a new method to expose fake face videos generated with neural networks. Our method is based on detection of eye blinking in the videos, which is a physiological signal that is not well presented in the synthesized fake videos. Our method is tested over benchmarks of eye-blinking detection datasets and also show promising performance on detecting videos generated with DeepFake.",
"This paper presents a method to automatically and efficiently detect face tampering in videos, and particularly focuses on two recent techniques used to generate hyper-realistic forged videos: Deepfake and Face2Face. Traditional image forensics techniques are usually not well suited to videos due to the compression that strongly degrades the data. Thus, this paper follows a deep learning approach and presents two networks, both with a low number of layers to focus on the mesoscopic properties of images. We evaluate those fast networks on both an existing dataset and a dataset we have constituted from online videos. The tests demonstrate a very successful detection rate with more than 98 for Deepfake and 95 for Face2Face.",
"Research on non-intrusive software-based face spoofing detection schemes has been mainly focused on the analysis of the luminance information of the face images, hence discarding the chroma component, which can be very useful for discriminating fake faces from genuine ones. This paper introduces a novel and appealing approach for detecting face spoofing using a colour texture analysis. We exploit the joint colour-texture information from the luminance and the chrominance channels by extracting complementary low-level feature descriptions from different colour spaces. More specifically, the feature histograms are computed over each image band separately. Extensive experiments on the three most challenging benchmark data sets, namely, the CASIA face anti-spoofing database, the replay-attack database, and the MSU mobile face spoof database, showed excellent results compared with the state of the art. More importantly, unlike most of the methods proposed in the literature, our proposed approach is able to achieve stable performance across all the three benchmark data sets. The promising results of our cross-database evaluation suggest that the facial colour texture representation is more stable in unknown conditions compared with its gray-scale counterparts.",
"",
"Existing research in the field of face recognition with variations due to disguises focuses primarily on images captured in controlled settings. Limited research has been performed on images captured in unconstrained environments, primarily due to the lack of corresponding disguised face datasets. In order to overcome this limitation, this work presents a novel Disguised Faces in the Wild (DFW) dataset, consisting of over 11,000 images for understanding and pushing the current state-of-the-art for disguised face recognition. To the best of our knowledge, DFW is a first-of-a-kind dataset containing images pertaining to both obfuscation and impersonation for understanding the effect of disguise variations. A major portion of the dataset has been collected from the Internet, thereby encompassing a wide variety of disguise accessories and variations across other covariates. As part of CVPR2018, a competition and workshop are organized to facilitate research in this direction. This paper presents a description of the dataset, the baseline protocols and performance, along with the phase-I results of the competition.",
"A new face anti-spoofing method based on general image quality assessment is presented. The proposed approach presents a very low degree of complexity which makes it suitable for real-time applications, using 14 image quality features extracted from one image (i.e., the same acquired for face recognition purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on two publicly available datasets, show very competitive results compared to other state-of-the-art methods tested on the same benchmarks. The findings presented in the work clearly suggest that the analysis of the general image quality of real face samples reveals highly valuable information that may be very efficiently used to discriminate them from fake images.",
"With the powerful deep network architectures, such as generative adversarial networks and variational autoencoders, large amounts of photorealistic images can be generated. The generated images, already fooling human eyes successfully, are not initially targeted for deceiving image authentication systems. However, research communities as well as public media show great concerns on whether these images would lead to serious security issues. In this paper, we address the problem of detecting deep network generated (DNG) images by analyzing the disparities in color components between real scene images and DNG images. Existing deep networks generate images in RGB color space and have no explicit constrains on color correlations; therefore, DNG images have more obvious differences from real images in other color spaces, such as HSV and YCbCr, especially in the chrominance components. Besides, the DNG images are different from the real ones when considering red, green, and blue components together. Based on these observations, we propose a feature set to capture color image statistics for detecting the DNG images. Moreover, three different detection scenarios in practice are considered and the corresponding detection strategies are designed. Extensive experiments have been conducted on face image datasets to evaluate the effectiveness of the proposed method. The experimental results show that the proposed method is able to distinguish the DNG images from real ones with high accuracies."
],
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_18",
"@cite_48",
"@cite_9",
"@cite_6",
"@cite_57",
"@cite_20",
"@cite_17"
],
"mid": [
"2911424785",
"",
"2806757392",
"2891145043",
"2341318667",
"",
"2891524773",
"2163744678",
"2888519208"
]
} | FakeCatcher: Detection of Synthetic Portrait Videos using Biological Signals | The technological advancements in deep learning have started to revolutionize our perspective about how we solve many difficult problems in computer vision, robotics, and related areas. Common deep learning models for recognition, classification, and segmentation tasks tend to improve how we and machines perceive, learn, and analyze the world. On the other hand, the developments in generative models significantly increased how we and machines tend to mimic the world and create realistic data. Even though it is easy to speculate dystopian scenarios based on both analysis and synthesis approaches, the latter brought the immediate threat on information integrity by disabling our biological "detectors" of authentic content: we cannot simply look at an image to understand if it is fake or not.
Following the recent initiatives for the democratization of AI, generative models [22,45,62,56] are getting more popular and reachable. Although the widespread use of GANs is positively impacting some technologies (e.g., personalized avatars [40], animations [44], and image inpainting [37]), there are also uses of GANs with malicious intent, which impact society by introducing inauthentic content (e.g., celebrity porn [3], fake news [2], and AI art [1]). This lack of authenticity and increasing information obfuscation pose real threats to individuals, the criminal justice system, and information integrity. As every technology is built alongside the counter-technology that neutralizes its negative effects, we believe that this is the right time to develop a deep fake detector to prevent this threat before it has serious consequences.
We observed that, although GANs are powerful enough to learn and generate photorealistic visual and geometric signals beyond the discriminative capabilities of human eyes, biological signals hidden by nature are still not easily replicable. Biological signals are also intuitive and complementary ingredients of facial videos, to which generative models are mostly applied in order to create deep fakes. Moreover, videos contain another layer of complexity: the synthesized content must satisfy consistency in the time dimension, in addition to the signals' spatial coherence. To complete this narrative, our approach exploits hidden biological signals (such as heart rate) to detect inauthentic content in portrait videos, independent of the source of creation and transmission.
Our main contributions include,
• formulations of signal transformations to exploit spatial coherence and temporal consistency of biological signals for the pairwise separation task,
Figure 1. System Overview: We extract biological signals from three facial regions on authentic and fake portrait video pairs. We apply several transformations to compute the spatial coherence and temporal consistency, capture the signal characteristics in feature sets, and train a binary probabilistic SVM. Then we aggregate authenticity probabilities of all segments to decide whether the video is a deep fake.
• a generalized synthetic portrait video detector which can catch "deep fake"s based on biological signals,
• experimental validation of exploitability of spatial coherence and temporal consistency of biological signals for authenticity classification,
• a diverse and unconstrained dataset of synthesized portrait videos to create a test bed for inauthentic content detection in the wild.
Our system processes input videos by collecting video segments with facial parts, defining several regions of interest (ROIs), and extracting several biological signals from those regions. In the first part, those signals, their transformations to different domains (time, frequency, time-frequency), and their correlations in different domains are examined to formulate a solution to the pairwise separation problem. In the second part, we combine the discoveries from the pairwise context with feature extractors from the literature to build a generalized authenticity classifier working in a high dimensional feature space. We also aggregate the class probabilities of segments into a binary "fake or authentic" decision for the video. The system is depicted in Figure 1.
To evaluate FakeCatcher, we collected over 100 videos, totaling up to 59 GB, from the internet. It is important to note that, unlike existing datasets, our Deep Fakes dataset includes "in the wild" videos, independent of the generative model, resolution, compression, content, and context. We detect the inauthentic content in our dataset with 77.33% accuracy. We also tested our approach on the Face Forensics dataset [46], reaching 82.55% accuracy in classifying inauthentic content. We analyzed the effects of segment durations, facial regions, and dimensionality reduction techniques on those datasets, on our feature sets, and on our signal transformations.
GAN Empowerment
Following the Generative Adversarial Networks proposed by Goodfellow et al. [22], deep learning models have been advancing on generative tasks such as inpainting [27], translation [62], and editing [11]. The GAN architecture can be simplified as a "game" between the generator network and the discriminator network: the generator adapts its parameters to create realistic images that mimic the distribution of the real data, and the discriminator adapts its parameters to correctly differentiate real images from fake images created by the generator. Inherently, all generative approaches suffer from limited control over the generation. In the context of GANs, this problem is mostly explored by Variational Autoencoders (VAE) and Conditional GANs, which control the generation by putting constraints in the latent space [32,52]. In addition to improving the control over GANs, other approaches improved the training efficiency, accuracy, and realism of GANs by deep convolutions [45], Wasserstein distances [23], least squares [38], and progressive growing [30]. All these advancements in the generative power, realism, and efficiency of GANs resulted in the development of "deep fake"s.
Synthetic Faces
Since Viola-Jones [55], the computer vision community has treated facial images and videos as one of its primary application domains. Following this pattern, GANs have been applied to and explored for face completion [37], facial attribute manipulation [49,16,24], frontal view synthesis [26], facial reenactment [31,53,59], identity-preserving synthesis [8], and expression editing [19]. In particular, VAEs and GANs for facial reenactment and video synthesis resulted in the emergence of the "deep fake" concept, which can be described as replacing the face of a target person with another face in a given video. Although the exact approach is not published, the mainstream deep fake generator is assumed to consist of two autoencoders trained on source and target videos while keeping the encoder weights similar, so that the same general features (illumination, motion, expression, etc.) can be embedded in the encoder and face-specific features can be integrated by the decoder. Another approach, Face2Face [53], reconstructs a target face from a training video and then warps it with the blend shapes obtained from the source video in real time. Deep Video Portraits [31] and vid2vid [56] follow this approach and employ GANs instead of blend shapes. Although the results of all of these approaches can be very realistic, there are still skipped frames and face misalignments due to illumination changes, occlusions, video compression, and sudden motions. Such imperfections are best caught by the biological signals that we propose in the next section.
Image Forensics
Alongside the increasing number of inauthentic facial images and videos, methods for detecting the authenticity of such content have also been proposed. Those are mostly based on finding inconsistencies in images, such as detecting distortions [10], finding compression artifacts [9], and assessing image quality [21]. However, for synthetic images in our context, the noise and distortions are harder to detect due to the non-linearity and complexity of the learning process [34]. It is possible to investigate the color and noise distributions of specific networks [35,48], or to train CNNs on synthetic images [4,51], but catching synthetic images and videos in the wild is still an open research topic. The last approach we would like to introduce in this domain, which can be considered the most similar to ours as the only fake detector on videos, exploits blinks, another biological signal, to detect inauthentic facial videos [36].
Biological Signals
The remote extraction of biological signals goes back to the medical community's efforts to reduce the intrusion on patients for specific measurements, such as heart rate. Observing subtle color- and motion-based signals from videos [58,15] enabled methodologies like remote photoplethysmography (rPPG or iPPG) [47,43] and head-motion-based ballistocardiography (BCG) [7] for non-intrusive heart rate detection from facial RGB videos. We mostly focus on PPG as it is more robust against dynamic scenes, while BCG needs static videos. Several approaches proposed improvements to PPG, using chrominance features [18], G channel components [61], optical properties [20], Kalman filters [5], and different facial areas for signal extraction [61,54,20,47]. We believe all of these PPG variations contain valuable information in the context of inauthentic videos. In addition, their inter-consistency for real video segments is higher than for fake videos. Multiple signals also help us regularize the environmental effects (illumination, occlusion, motion, etc.) for robustness. Thus we work on the following six signals, combinations of G channel-based [61] (robust against compression artifacts) and chrominance-based PPG [18] (robust against illumination artifacts) on the left cheek, right cheek [20], and mid-region [54], namely
{G_L, G_R, G_M, C_L, C_R, C_M}.
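As an illustration of how such signals can be extracted per segment (this sketch is ours, not the paper's implementation), the Python/NumPy code below computes a G channel mean signal and a chrominance-based signal for each ROI; the 3R−2G and 1.5R+G−1.5B chrominance weights follow the commonly cited CHROM formulation and are an assumption here.

```python
import numpy as np

def g_channel_ppg(roi_frames):
    """roi_frames: (T, H, W, 3) uint8 RGB crops of one facial ROI.
    Returns the per-frame mean of the green channel (raw G-based PPG)."""
    return roi_frames[..., 1].reshape(len(roi_frames), -1).mean(axis=1)

def chrom_ppg(roi_frames):
    """Chrominance-based PPG in the spirit of the CHROM method.
    The channel weights below are an illustrative assumption."""
    rgb = roi_frames.reshape(len(roi_frames), -1, 3).mean(axis=1).astype(float)
    rgb /= rgb.mean(axis=0) + 1e-8                 # normalize each channel by its temporal mean
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    x, y = 3 * r - 2 * g, 1.5 * r + g - 1.5 * b
    alpha = x.std() / (y.std() + 1e-8)
    return x - alpha * y

def segment_signals(left, mid, right):
    """Six raw signals for one omega-frame segment, given three hypothetical ROI crops."""
    return {
        "G_L": g_channel_ppg(left), "G_M": g_channel_ppg(mid), "G_R": g_channel_ppg(right),
        "C_L": chrom_ppg(left),     "C_M": chrom_ppg(mid),     "C_R": chrom_ppg(right),
    }
```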
Characteristics of Biological Signals on Original-Synthetic Video Pairs
In order to understand the nature of biological signals in the context of synthetic content, we first compared signal responses on original and synthetic video pairs using traditional signal analysis methods, as shown in Figure 4. The motivation behind this analysis is to find the best error metric between pairs of original and synthetic facial videos with similar content. Settling on an error metric tells us which signals and which features to use for a generalized classifier. This analysis also lays the groundwork for understanding generative systems in terms of biological replicability.
Statistical Features
Setting up a toy subset (150 pairs of videos in the test set of Face Forensics [46]), we cut each video into ω-frame windows (the effect of ω is extensively analyzed in Section 6.2). Our analysis starts by comparing simple metrics such as the mean, standard deviation, and min-max ranges of the PPG signals from original and synthetic video pairs, achieving a best accuracy of 65%. Next, we investigated the same metrics on the absolute values of differences between consecutive frames in original and synthetic videos, achieving a best accuracy of 75.69%. Since these features aggregate the whole segment signals into single numbers that are not representative enough, we pursued metrics in other domains.
Power Spectra
We continued by looking at the power spectral density (PSD) of those signals, S_o and S_s, in linear and in log scale, achieving a best accuracy of 79.33% with the metric
µ_P(G_{L_o}) + σ_P(G_{L_o}) − µ_P(G_{L_s}) − σ_P(G_{L_s}).
Fourth, we analyzed discrete cosine transforms (DCT) of the log scale of these signals, including their I, II, III, and IV variants (see Appendix B), obtaining an accuracy of 77.41%. However, this gave us more inspiration for the next comparison: the zero-frequency (DC) value of the DCT led us to a significant improvement, to 91.33% accuracy. We ran the same evaluation, reaching 84.89% accuracy on the entire Face Forensics dataset and 64.35% accuracy on our Deep Fakes dataset.
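A rough sketch of the PSD- and DCT-based pairwise comparison described above, assuming two already-extracted PPG signals sampled at 30 fps; the Welch window length and the use of SciPy are illustrative choices, not the paper's exact pipeline.

```python
import numpy as np
from scipy.signal import welch
from scipy.fft import dct

def psd_stats(sig, fs=30.0):
    """Return mean and std of the Welch PSD, plus the DC term of DCT(log PSD)."""
    _, p = welch(sig, fs=fs, nperseg=min(256, len(sig)))
    return p.mean(), p.std(), dct(np.log(p + 1e-12), type=2)[0]

def pairwise_scores(sig_orig, sig_synth):
    mu_o, sd_o, dc_o = psd_stats(sig_orig)
    mu_s, sd_s, dc_s = psd_stats(sig_synth)
    psd_gap = (mu_o + sd_o) - (mu_s + sd_s)   # PSD-statistics comparison
    dct_dc_gap = dc_o - dc_s                  # zero-frequency (DC) DCT comparison
    return psd_gap, dct_dc_gap
```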
Spatio-temporal Coherence of Multiple Signals
We also ran some analysis to account for the coherence of biological signals within each frame, for robustness.
Generalized Authentic Content Classifier
Achieving considerably high accuracy in detecting authentic content from pairs of original and synthetic videos, we would like to generalize this approach into a binary classifier for authentic content. In the pairwise setting, a comparison of aggregate features is representative enough. However, as these signals are continuous and noisy, we need to extract representative features from the ensemble of those signals to build a generalized classifier. We experimented with several signal transformations in the time and frequency domains to explore the artifacts of synthetic videos and derive characteristic features (Table 1).
Table 1. Signal Definitions: We define the signals and transformation functions that will be used throughout the analysis.
S: {G_L, G_R, G_M, C_L, C_R, C_M}
D: {|C_L − G_L|, |C_R − G_R|, |C_M − G_M|}
A(S): autocorrelation
Â(S): spectral autocorrelation
P(S): power spectral density
W(S): Wavelet transform
L(S): Lyapunov function [39]
G(S): Gabor-Wigner transform [42]
S_C: {C_L, C_R, C_M}
D_C: {|C_L − C_M|, |C_L − C_R|, |C_R − C_M|}
A_p(S_C): pairwise cross spectral densities
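For instance, the pairwise cross spectral densities A_p(S_C) and the corresponding F_1 features (mean and maximum of the cross spectral density) could be computed as in the sketch below; SciPy's csd and a 30 fps sampling rate are assumptions for illustration.

```python
import numpy as np
from scipy.signal import csd

def f1_features(c_left, c_mid, c_right, fs=30.0):
    """Mean and maximum of |CSD| for each pair in A_p(S_C): six features in total."""
    feats = []
    for a, b in [(c_left, c_mid), (c_left, c_right), (c_right, c_mid)]:
        _, pxy = csd(a, b, fs=fs, nperseg=min(256, len(a)))
        mag = np.abs(pxy)
        feats += [mag.mean(), mag.max()]
    return np.array(feats)
```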
Feature Sets
Following the signal transformations, we also explored the features to be extracted from those signals. Since rPPG is mostly evaluated by the accuracy of the recovered heart rate, we surveyed other features used for image authenticity [34], classification of EEG signals [28,41], statistical analysis [60,50,25], and emotion recognition [39,41]. The feature sets are enumerated in Table 7 together with the reference papers for biological signal classification; we refer the reader to the specific papers for the formulation and detailed explanation of all features.
Authenticity Classification
With the motivation of being able to detect deep fakes in the wild, we experimented with combinations of transformed biological signals and feature sets to classify all videos into authentic and fake. Approaching the problem without any assumptions, while still obtaining interpretable results, is the key motivation that led us to employ SVMs with an RBF kernel [17] for this binary classification task. Appendix A documents our experiments; however, we would like to highlight some of them in the main paper, as well as the parameters that affect the classification accuracy. We denote all experiments with F_*(T(S)), where F_* is a feature extractor from Table 7 applied to a (transformed) signal T(S) from Table 1. Note that both the signal transformation and the feature extraction can be applied separately to all elements of the inner set.
For our exploration, we randomly split the [46] dataset into a training set (1540 samples, 60%) and a validation set (1054 samples, 40%). We created feature vectors with the maximum and mean (F_1) of the cross power spectral densities of C_M and C_L (A_p(C_M, C_L)) of all videos in the training set, as this was the feature with the highest accuracy from Section 4.3. Unlike the pairwise results, SVM accuracy with f = F_1(A_p(C_M, C_L)) was low (68.9394%), but this set a baseline for other methods. Next, we classified by f = µ_{P(S)} (six features per sample), achieving 68.8805% accuracy, and by f = µ_{A_p(D_C)} ∪ µ_{A(S)} (9 features per sample), achieving 69.6395% accuracy on the entire [46] dataset. Some of the experiments that led us to use a combination of features to improve the accuracy, both on the [46] dataset and on our Deep Fakes dataset, are listed in Table 2.
Table 2 (feature set f, dimensionality |f|, segment accuracy):
F_3(Â(S)): 4*6, 67.5522%
F_6(L(S)): 600, 69.0408%
F_4(log(S)): 60, 69.0702%
F_2(S): 13*6, 69.26%
F_5(P(W(S))): 390, 69.6395%
F_4(S) ∪ F_3(log(S)) ∪ µ_{A_p(D_C)}: 6*6+4*6+3, 71.3472%
F_4(log(S) ∪ A_p(D_C)) ∪ F_1(log(D_C)) ∪ F_3(log(S) ∪ A_p(D_C)): 6*9+6+4*9, 72.0114%
Based on the list of experiments (documented in Appendix A), we observed that the "authenticity" (i) can be observed both in the time and frequency domains, (ii) is coupled with small motion changes, illumination effects, and compression artifacts in the video, thus it is not separable, and (iii) can be discovered from the coherence of multiple biological signals. It is also important to note that the reason we exhaustively tried all possible feature extractors in the literature is robustness. We would like our system to be independent of any generative model, any compression/transmission artifact, and any content-related influence: a robust and generalized FakeCatcher, based on the consistency and coherence of biological signals in authentic videos. Our observations together with the experimental results led us to this feature set for our authenticity classification:
f = F_1(log(D_C)) ∪ F_3(log(S) ∪ A_p(D_C)) ∪ F_4(log(S) ∪ A_p(D_C)) ∪ µ_{Â(S)} ∪ max(Â(S))
The SVM classifier trained with these features on the [46] dataset resulted in 75% accuracy, and on our Deep Fakes dataset resulted in 77.33% accuracy.
Probabilistic Video Classification
As mentioned in the beginning of Section 4.1, we divide the videos into intervals of ω frames for authenticity classification. Considering that our end goal is to classify videos, not only segments, we need to aggregate the segment classification into a video classification. First, we converted segment classes into a video class by majority voting. The segment classification accuracy of 75% increased to 78.1879%, hinting that some hard-failure segments can be neglected, assuming that they contain significant motion or illumination changes. We computed the true threshold to be 0.447, confirming the assumption. Consequently, we converted our SVM to an SVR to return the probability of containing authentic or synthetic facial actors. Then, assigning the authenticity based on the mean of the segment class probabilities increased the video classification accuracy to 82.5503%.
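A minimal sketch of this segment-to-video aggregation, assuming per-segment feature vectors are already computed; scikit-learn's probabilistic SVM output stands in for the SVM-to-SVR conversion described above, and using 0.447 as the decision threshold on the mean fake probability is our interpretation of the stated true threshold.

```python
import numpy as np
from sklearn.svm import SVC

def train_segment_classifier(X_train, y_train):
    """RBF-kernel SVM with Platt-scaled probability outputs (labels: 0 = authentic, 1 = fake)."""
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X_train, y_train)
    return clf

def classify_video(clf, segment_features, threshold=0.447):
    """segment_features: (n_segments, n_features) for one video."""
    p_fake = clf.predict_proba(segment_features)[:, 1]   # column 1 assumed to be the "fake" class
    return "fake" if p_fake.mean() >= threshold else "authentic"
```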
Results and Analysis
Our system utilizes Matlab for heavy signal processing, the OpenFace library [6] for face detection, libSVM [14] for the classification experiments, and Wavelab [12] for the Wavelet transformation and the F_5 feature set. We evaluated our detection accuracy on the Face Forensics dataset [46] and our Deep Fakes dataset. We have documented all experiments in Appendix A, the effects of normalization, filtering, frequency quantization, and DCT coefficients in Appendix B, and an ROI analysis on both the DF and FF datasets with five different region options in Appendix C. Below, we discuss some analysis of segment durations ω and of dimensionality reduction techniques on the feature set f. We also include a comparison section to establish our system as the first approach analyzing biological signals in inauthentic portrait videos.
Datasets
We analyzed our approach on the Face Forensics dataset [46], which has (i) original and synthetic pairs of videos with the same content and actor, and (ii) a compilation of original and synthetic videos, both of which are created using the same generative model. We also collected 114 fake portrait videos from various sources, totaling up to 37 minutes and 59 GB. We trimmed the non-portrait parts of the videos and coupled them with their original pairs. Figure 5 demonstrates a subset of the Deep Fakes dataset, originals in the top half and fakes in the bottom half. The aspect ratio, size, frame rate ([25,30] fps), resolution, compression, resource, and generative source of the videos vary significantly within the dataset. The dataset is released on our project page for the community.
Table 3. Accuracy per Segment Duration: We document ω, segment count, and corresponding segment and video accuracies, on the toy (FF test), entire (FF) [46], and Deep Fakes (DF) datasets. Rows without video accuracy denote pairwise evaluations.
Analysis of Segment Duration
The pairwise decision task differs from the main motivation of FakeCatcher, so we analyzed ω per context. Table 3 documents all results on both the subset and the entire [46] dataset, and on the Deep Fakes dataset. The top half shows the effect of ω on the pairwise comparison. Accordingly, the choice of ω = 300 (10 sec) was long enough to detect strong correlations without including many content-based changes.
For the generalized classifier, we can be more flexible about hard-failure segments, because we can depend on the probabilistic video classifier to account for those cases. Thus, selecting a relatively smaller segment duration, ω = 180 (6 sec), helped us increase the video classification accuracy while still keeping segments long enough to extract the biological signals. Note that the classifier gets less accurate on Deep Fakes video classification as ω increases; this is because some videos are too short to contain multiple segments and are therefore discarded for those segment durations.
Blind Source Separation
To better understand our features, feature sets, and their relative importance, we computed the Fisher criterion to see if we have any linearly separable features. No significantly high ratio was observed, guiding us towards kernel-based SVMs and more source separation trials. We also tried PCA (principal component analysis) and CSP (common spatial patterns) to reduce the dimensionality of our feature spaces. Figure 6 shows the 3D distribution of authentic (red) and synthetic (blue) samples along the three most significant components found by PCA and CSP, without clear class boundaries. We also tried to condense the feature vector with our best classification accuracy; however, we achieved 71% accuracy after PCA and 65.43% accuracy after CSP.
Comparison
Even though "deep fake"s are a relatively new problem, there are already a few papers working in this domain. [46] employs another generative model for detection, but their model is restricted to the results of their previous method. [4] also has a high detection rate if the synthetic content is generated by [53] or the VAE network used in the Fak-erApp. [36] also reaches a high accuracy, but they are dependent on eye detection and eye parameterization. All of these approaches employs neural networks blindly and does not make an effort to understand the generative noise that we have experimentally characterized by biological signals. Also it is not clear how they would perform on our Deep Fakes dataset, because it is less constrained and more diverse then their validation data.
Implementation Details
For each video segment, we apply a Butterworth filter [13] with a frequency band of [0.7, 14]. We quantize the signal using Welch's method [57]. Then we collect the frequencies between [h_low, h_high], which correspond to the below, in, and above ranges for heart rate. There was no clear frequency interval that accumulated generative noise in biological signals (see Appendix B), so we included all frequencies. In summary, we followed the PPG extraction methods in [61] for G_L, G_M, G_R and [18] for C_L, C_M, C_R. In heart rate extraction, the PPG signal goes through significant denoising and componentizing steps to fit the signal into expected ranges and periods. We observed that some signal frequencies and temporal changes that may be considered noise for heart rate extraction actually contain valuable information for our case. Moreover, as our main motivation is not finding accurate heart rates, we intentionally skipped some steps of cleaning the PPG signals, with the motivation of keeping subtle generative noise.
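A sketch of this pre-processing with SciPy, assuming a 30 fps PPG signal; the filter order and Welch segment length are illustrative choices, while the [0.7, 14] band follows the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def preprocess_ppg(sig, fs=30.0, band=(0.7, 14.0)):
    """Band-pass the raw PPG and quantize it into a Welch PSD estimate."""
    # Clamp the upper cutoff below Nyquist in case the video runs at 25 fps.
    high = min(band[1], 0.99 * fs / 2)
    b, a = butter(4, [band[0], high], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, sig)
    freqs, psd = welch(filtered, fs=fs, nperseg=min(256, len(filtered)))
    return filtered, freqs, psd
```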
Discussions and Future Work
One approach that stands out from Section 6.4 is to use CNNs to discover the generative noise from original and synthetic image pairs [4]. Because our component analysis in Section 6.3 does not show strong signs of separability of the features (which is intuitive, as generative models and all additive artifacts contribute to the noisy and non-linear nature of the output), we foresee that an autoencoder could learn a latent representation of the biological signal features, later to be used for authenticity classification. However, as this is the first comprehensive analysis, we wanted to incorporate interpretable features in our classifier, and we see VAEs as the next step for a biological feature extractor.
Another future avenue we want to explore is to employ our pairwise separation formulation in the loss function of a generative model, to create deeper fakes immune to biological signal analysis. In the future, we would like to design a novel generative adversarial loss function to enable creating portrait videos enriched with biological signals, thus even more believable and photorealistic.
Conclusion
We present FakeCatcher, an inauthentic portrait video detector based on biological signals. We experimentally validated that spatial coherence and temporal consistency of such signals are not well preserved in GANerated content, thus, we can derive a high dimensional feature vector to aggregate those characteristics for authenticity classification. We evaluated our approach for pairwise separation and for generalized authenticity classification, of video segments and entire videos, on Face Forensics [46] and Deep Fakes datasets, achieving up to 99.39% (pairwise) and 82.55%
(generalized) accuracy. Apart from FakeCatcher, we believe the second main contribution of this paper is to understand deep fakes in the wild. To our knowledge, generative models have not been explored through biological signals before, and we present the first experimental literature survey for understanding and explaining human signals in synthetic portrait videos (Appendix A). We expect the observations discussed in Section 5.2 to enlighten future analysis of both generative noise and deep fake detectors. We also encourage this line of research to continue by sharing our Deep Fakes dataset.
Table 7. Feature Sets (denotation: explanation [reference]):
F1: mean and maximum of cross spectral density (Section 4.3)
F2: root mean square of differences, standard deviation, mean of absolute differences, ratio of negative differences, zero crossing rate, average prominence of peaks, standard deviation of prominence of peaks, average peak width, standard deviation of peak width, maximum derivative, minimum derivative, mean of derivative, mean of spectral centroid [50]
F3: count of narrow pulses in the spectral autocorrelation, count of spectral lines in the spectral autocorrelation, average energy of narrow pulses, maximum value of the spectral autocorrelation [25]
F4: standard deviation, standard deviation of mean values of 1 second windows, root mean square of 1 second differences, mean standard deviation of differences, standard deviation of differences, mean of autocorrelation, Shannon entropy [29]
F5: first n Wavelet coefficients [28,39]
F6: mean amplitude of high frequency signals, slope of PSD curves between high and low frequencies, variance of inter-peak distance [33] | 4,260
1901.01965 | 2907172645 | Convolution is the core operation for many deep neural networks. The Winograd convolution algorithms have been shown to accelerate the widely-used small convolution sizes. Quantized neural networks can effectively reduce model sizes and improve inference speed, which leads to a wide variety of kernels and hardware accelerators that work with integer data. The state-of-the-art Winograd algorithms pose challenges for efficient implementation and execution by the integer kernels and accelerators. We introduce a new class of Winograd algorithms by extending the construction to the field of complex and propose optimizations that reduce the number of general multiplications. The new algorithm achieves an arithmetic complexity reduction of @math x over the direct method and an efficiency gain up to @math over the rational algorithms. Furthermore, we design and implement an integer-based filter scaling scheme to effectively reduce the filter bit width by @math without any significant accuracy loss. | The Winograd convolution algorithm was first used to accelerate convnets by @cite_10 . The authors derived several small fixed-size algorithms over the field of rationals based on the minimal filtering algorithm proposed by Winograd @cite_21 , which achieve arithmetic complexity reductions ranging from @math x to @math x for the popular filter sizes. | {
"abstract": [
"Three examples General background Product of polynomials FIR filters Product of polynomials modulo a polynomial Cyclic convolution and discrete Fourier transform.",
"Deep convolutional neural networks take GPU days of compute time to train on large data sets. Pedestrian detection for self driving cars requires very low latency. Image recognition for mobile phones is constrained by limited processing resources. The success of convolutional neural networks in these situations is limited by how fast we can compute them. Conventional FFT based convolution is fast for large filters, but state of the art convolutional neural networks use small, 3x3 filters. We introduce a new class of fast algorithms for convolutional neural networks using Winograd's minimal filtering algorithms. The algorithms compute minimal complexity convolution over small tiles, which makes them fast with small filters and small batch sizes. We benchmark a GPU implementation of our algorithm with the VGG network and show state of the art throughput at batch sizes from 1 to 64."
],
"cite_N": [
"@cite_21",
"@cite_10"
],
"mid": [
"1487564550",
"2949245006"
]
} | EFFICIENT WINOGRAD CONVOLUTION VIA INTEGER ARITHMETIC | Quantized convolutional neural networks (convnet) have been shown to work for inference with integer weights and activations (Krishnamoorthi, 2018;Warden, 2015). By quantizing to 8-bit integers, model sizes can be reduced by a factor of four compared to the 32-bit floating-point models. Speedups of 2x-3x have been observed for quantized networks on CPUs compared to their floating-point counterparts. On hardware where optimized fixed-point capabilities are available, the speedup can reach up to 10x (Warden, 2017). Numerous efficient kernels with reduced-precision computation have achieved fast inference, such as ARM CMSIS (Lai et al., 2018), GEMMLOWP (GLP), Nvidia Tensor RT (Migacz, 2017). Custom hardware (Sze et al., 2017;Han et al., 2016;Nvidia) with reduced-precision has also been designed and built for fast inference.
Over 90% of the computation in convnets during inference and training is in convolutions (Krizhevsky et al., 2012;Szegedy et al., 2015). Different algorithmic methods have been devised to speed up this core operation. The methods include using the fast Fourier transform (FFT) (Mathieu et al., 2013;Vasilache et al., 2014), or the Winograd convolution algorithms (Winograd, 1980;Lavin, 2015). Particularly, the Winograd convolution has proved to work well for the typical small convolution sizes, such as 3 × 3 in popular convnets, due to its arithmetic complexity reduction. However, the best-known Winograd algorithms for convnets are derived over the field of rationals Q (Lavin, 2015) which exhibit undesirable overhead for full-precision implementation on custom inference accelerators with integer arithmetic.
These recent advances and limitations lead to the question: can we design efficient Winograd convolution algorithms and optimizations that use only integer arithmetic? This paper answers the question from both the algorithm perspective and implementation perspective with the main contributions as follows:
1. We derive new complex Winograd convolution algorithms by extending the construction field from rationals to complex for convnet acceleration (Section 3.3).
2. We propose optimization techniques that effectively reduce the number of general multiplications in the complex algorithms, achieving an arithmetic reduction of 3.13x over the direct method with the F (4 × 4, 3 × 3) example. We also provide a quantitative analysis of the efficiency gain, which ranges from 15.93% to 17.37% over the best-known Winograd algorithms (Section 3.3).
3. We design and implement a hardware-friendly precision scaling scheme for Winograd-domain filters using integer arithmetic.
FAST ALGORITHMS
In this section, we review the fast algorithms for integer and complex multiplication and for short convolutions, namely the Karatsuba algorithm and the Winograd convolution algorithm. We analyze the best-known Winograd algorithms in the rational field Q and expose the challenges for integer accelerators to adopt the more efficient algorithms. We derive new convolution algorithms by extending to the field of complex C and analyze their arithmetic complexity and efficiency gains.
Karatsuba Multiplication
The Karatsuba multiplication method is a classical divide-and-conquer algorithm that performs the multiplication of two n-digit numbers using at most n^{log_2 3} ≈ n^{1.585} single-digit multiplications in general.
Let X and Y be two n-digit numbers in some base B. The basic step of the Karatsuba algorithm computes the product of X and Y using three multiplications and some additions and shifts. Let m be any positive integer less than n; we write X and Y as
X = x_0 + x_1 B^m,  Y = y_0 + y_1 B^m,
where x_0 and y_0 are the remainders of X and Y modulo B^m, and x_1 and y_1 are the quotients, respectively. With this representation, the product of X and Y becomes
XY = x_0 y_0 + (x_1 y_0 + x_0 y_1) B^m + x_1 y_1 B^{2m}.
The Karatsuba algorithm computes the coefficient of B^m as
(x_1 y_0 + x_0 y_1) = (x_1 + x_0)(y_1 + y_0) − x_1 y_1 − x_0 y_0,
which reuses x_1 y_1 and x_0 y_0, leading to a multiplication of X and Y with three multiplications instead of four.
The algorithm can also be used for complex multiplication, where the base B is replaced with the imaginary unit i. The product of X = x_0 + x_1 · i and Y = y_0 + y_1 · i can be similarly computed with three multiplications as
(x_0 y_0 − x_1 y_1) + ((x_1 + x_0)(y_1 + y_0) − x_1 y_1 − x_0 y_0) · i,
where the three general multiplications are m_1 = x_0 y_0, m_2 = x_1 y_1, and m_3 = (x_1 + x_0)(y_1 + y_0).
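To make the three-multiplication complex product concrete, here is a minimal Python sketch; the helper name and the sanity check are illustrative additions, not part of the paper.

```python
def karatsuba_complex(x0, x1, y0, y1):
    """Return (real, imag) of (x0 + x1*i) * (y0 + y1*i) using 3 general multiplications."""
    m1 = x0 * y0
    m2 = x1 * y1
    m3 = (x0 + x1) * (y0 + y1)
    return m1 - m2, m3 - m1 - m2

# Quick check against Python's built-in complex arithmetic.
re, im = karatsuba_complex(3, 4, 5, 6)
assert complex(re, im) == (3 + 4j) * (5 + 6j)
```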
Winograd Convolution
The Winograd convolution algorithm generalizes the well-known method of the convolution theorem and fast Fourier transform (FFT) and outperforms it for short convolutions, as measured by the number of general multiplications.
Define a polynomial over a field F as a mathematical expression
f(x) = f_n x^n + f_{n−1} x^{n−1} + · · · + f_1 x + f_0,
where x is symbolic and f_0, . . . , f_n are elements of the field F known as the coefficients. Then convolutions can be formulated as polynomial products:
• Linear convolution can be written as s(x) = g(x)d(x);
• Cyclic convolution can be written as s(x) = g(x)d(x) (mod x^n − 1).
Fast convolution algorithms can be constructed with the Lagrange interpolation or the Chinese remainder theorem (CRT) for polynomials. The Winograd convolution algorithm computes s(
x) = g(x)d(x) (mod m(x)), where m(x), g(x)
and d(x) are polynomials in F . The linear and cyclic convolutions can be trivially cast to this format. For example, setting m(x) = x n − 1 yields the cyclic convolution. The algorithm breaks the problem into smaller pieces by factoring m(x) into pairwise coprime polynomials m (k) (x) over a subfield of F and constructs the solution using the CRT or interpolation.
As in (Lavin, 2015), let F (m, r) denote the computation of m outputs with an r-tap FIR filter. F (m, r) consumes m + r − 1 input values, which is also the number of general multiplications required when computed with the Winograd algorithm. We express the algorithms in matrix form as
Y = A^T [(Gg) ⊙ (B^T d)],
where ⊙ represents element-wise multiplication, also known as the Hadamard product.
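The matrix form can be made concrete with the well-known F(2, 3) instance; the sketch below uses the standard transform matrices for interpolation points [0, 1, −1], which follow the construction in (Lavin, 2015) and are reproduced here as an illustrative assumption rather than quoted from this paper, and checks the result against direct FIR filtering.

```python
import numpy as np

# Standard F(2, 3) transform matrices (interpolation points 0, 1, -1).
B_T = np.array([[1, 0, -1, 0],
                [0, 1,  1, 0],
                [0, -1, 1, 0],
                [0, 1,  0, -1]], dtype=float)
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0, 0.0, 1.0]])
A_T = np.array([[1, 1, 1, 0],
                [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """Two outputs of a 3-tap FIR filter over a 4-sample input tile."""
    return A_T @ ((G @ g) * (B_T @ d))

rng = np.random.default_rng(0)
d = rng.integers(-8, 8, 4).astype(float)
g = rng.integers(-8, 8, 3).astype(float)
direct = np.array([np.dot(g, d[i:i + 3]) for i in range(2)])
assert np.allclose(winograd_f23(d, g), direct)
```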
Higher dimensional algorithms F (m × n, r × s) can be constructed by nesting the corresponding 1D algorithms F (m, r) and F (n, s) along each dimension. Particularly in convnets, square-shaped filters and activation patches are common, and a 2D algorithm F (m × m, r × r) can be written as
Y = A^T [(GgG^T) ⊙ (B^T dB)] A,
whose arithmetic complexity reduction can be computed as
m^2 r^2 / (m + r − 1)^2.
Therefore, the commonly-used algorithms such as F (2 × 2, 3 × 3) and F (4 × 4, 3 × 3) achieve reductions of 2.25x and 4x, respectively.
In order to avoid additional general multiplications other than those in the Hadamard product ⊙, good interpolation points must be used in the derivation of Winograd algorithms (Blahut, 2010). For F (2, 3), [0, 1, −1] are used to generate the auxiliary matrices that involve only additions, subtractions, and shifts by 1.
For F (4 × 4, 3 × 3), the best-known algorithm is derived using the interpolation points at [0, 1, −1, 2, −2]. As introduced in (Lavin, 2015), the filter transform matrix is
G =
[ 1/4     0      0   ]
[ −1/6   −1/6   −1/6 ]
[ −1/6    1/6   −1/6 ]
[ 1/24    1/12   1/6 ]
[ 1/24   −1/12   1/6 ]
[ 0       0      1   ]
G and its transpose G^T cause significant performance overhead for accelerators designed with integer arithmetic for quantized neural networks. Both matrices contain the large denominator of 24 in their fractional values and have to be scaled up accordingly for full-precision integer arithmetic. This requires widening the w-bit spatial-domain filter by at least ⌈log_2(24^2)⌉ = 10 bits when it is transformed into the Winograd domain with G and G^T, resulting in a significant area increase for any custom integer multipliers that compute the element-wise multiplications in the Winograd domain.
To date, only the field of rationals Q has been used as the subfield of F in the derivation of Winograd algorithms for neural network acceleration. Due to the undesirable numerical properties, most integer-based accelerators designed with Winograd convolution are limited to using F (2 × 2, 3 × 3) with only 2.25x complexity reduction and its 1D variants.
Complex Winograd Convolution
We extend the subfield of F from Q to the complex field C to derive new complex Winograd algorithms. This may seem counter-intuitive, as each multiplication in C takes four multiplications if implemented naively or three multiplications if the Karatsuba algorithm is used. Two key insights behind the complex Winograd are: (1) the symmetry of interpolation points and (2) the redundancy of information in complex arithmetic. The symmetry leads to the extension to the field of complex numbers. The redundancy leads to the optimization that exploits the complex conjugates. We will use F (4 × 4, 3 × 3) as an example throughout this section for derivation and optimization.
B^T =
[ 1   0   0   0  −1   0 ]
[ 0   1   1   1   1   0 ]
[ 0  −1   1  −1   1   0 ]
[ 0  −i  −1   i   1   0 ]
[ 0   i  −1  −i   1   0 ]
[ 0  −1   0   0   0   1 ]
G =
[ 1     0     0   ]
[ 1/4   1/4   1/4 ]
[ 1/4  −1/4   1/4 ]
[ 1/4   i/4  −1/4 ]
[ 1/4  −i/4  −1/4 ]
[ 0     0     1   ]
A^T =
[ 1   1   1   1   1   0 ]
[ 0   1  −1   i  −i   0 ]
[ 0   1   1  −1  −1   0 ]
[ 0   1  −1  −i   i   1 ]
Recall that F (4 × 4, 3 × 3) requires five interpolation points. We replace the previously-known good points of
[0, 1, −1, 2, −2] in Q with [0, 1, −1, i, −i] in C, where i
is the imaginary unit. Using the same construction technique as in Q, the new transform matrices for complex F (4 × 4, 3 × 3) can be generated as above.
By extending to C and using symmetric interpolation points in the complex plane, the magnitudes of the elements in all three transform matrices have been reduced. B^T and A^T now involve only additions and subtractions, and the largest denominator in G has been reduced from 24 to 4.
Complexity Analysis
This section analyzes the arithmetic complexity reduction of the new complex Winograd algorithm and shows how it reduces area and improves efficiency for integer arithmetic.
First, we show an optimization technique that reduces the number of complex multiplications by exploiting the underlying complex conjugate pairs. The idea is simple: if we have calculated x = a + bi, then no additional multiplication is needed for its complex conjugate \bar{x} = a − bi.
We use B^T dB as an example.
Let d = [d_{i,j}] for i, j ∈ {0, 1, . . . , 5}, d′ = B^T d, and D = d′B. Then, for j = 0, 1, . . . , 5,
d′[0, j] = d_{0,j} − d_{4,j}
d′[1, j] = \sum_{k=1}^{4} d_{k,j}
d′[2, j] = −d_{1,j} + d_{2,j} − d_{3,j} + d_{4,j}
d′[3, j] = −d_{2,j} + d_{4,j} − (d_{1,j} − d_{3,j}) i
d′[4, j] = −d_{2,j} + d_{4,j} + (d_{1,j} − d_{3,j}) i
d′[5, j] = −d_{1,j} + d_{5,j}
The transformed activation D = d′B then becomes
D =
[ D_{0,0}  D_{0,1}  D_{0,2}  D_{0,3}  \bar{D}_{0,3}  D_{0,5} ]
[ D_{1,0}  D_{1,1}  D_{1,2}  D_{1,3}  \bar{D}_{1,3}  D_{1,5} ]
[ D_{2,0}  D_{2,1}  D_{2,2}  D_{2,3}  \bar{D}_{2,3}  D_{2,5} ]
[ D_{3,0}  D_{3,1}  D_{3,2}  D_{3,3}  D_{3,4}        D_{3,5} ]
[ \bar{D}_{3,0}  \bar{D}_{3,1}  \bar{D}_{3,2}  \bar{D}_{3,4}  \bar{D}_{3,3}  \bar{D}_{3,5} ]
[ D_{5,0}  D_{5,1}  D_{5,2}  D_{5,3}  \bar{D}_{5,3}  D_{5,5} ]
where \bar{D}_{i,j} denotes the complex conjugate of D_{i,j}.
That is, the 6 × 6 transformed activation contains 10 pairs of complex conjugates and the other 16 values in Q.
The same pattern can be found in the transformed filter W = GgG^T, by noticing that rows 3 and 4 of G are structurally the same as those of B^T in terms of producing complex conjugate pairs. Therefore, we have
W =
[ W_{0,0}  W_{0,1}  W_{0,2}  W_{0,3}  \bar{W}_{0,3}  W_{0,5} ]
[ W_{1,0}  W_{1,1}  W_{1,2}  W_{1,3}  \bar{W}_{1,3}  W_{1,5} ]
[ W_{2,0}  W_{2,1}  W_{2,2}  W_{2,3}  \bar{W}_{2,3}  W_{2,5} ]
[ W_{3,0}  W_{3,1}  W_{3,2}  W_{3,3}  W_{3,4}        W_{3,5} ]
[ \bar{W}_{3,0}  \bar{W}_{3,1}  \bar{W}_{3,2}  \bar{W}_{3,4}  \bar{W}_{3,3}  \bar{W}_{3,5} ]
[ W_{5,0}  W_{5,1}  W_{5,2}  W_{5,3}  \bar{W}_{5,3}  W_{5,5} ]
Rewrite the 2D Winograd algorithm in matrix form:
Y = A^T [(GgG^T) ⊙ (B^T dB)] A = A^T [W ⊙ D] A
Only the Hadamard product W ⊙ D contains general multiplications. Furthermore, the complex values and their conjugates are at matching positions in D and W. The 16 pairs of rational elements, such as {D_{0,0}, W_{0,0}}, require 16 general multiplications. The 20 complex multiplications can be grouped into 10 pairs of complex conjugate multiplications, such as {D_{0,3} · W_{0,3}, \bar{D}_{0,3} · \bar{W}_{0,3}}. Since \bar{x} · \bar{y} = \overline{x · y}, each such pair requires only one complex multiplication. Using the Karatsuba algorithm introduced in Section 3.1, each complex multiplication takes 3 real multiplications. Therefore, the complex F (4 × 4, 3 × 3) performs a total of 16 + 10 × 3 = 46 general multiplications, leading to an arithmetic complexity reduction of 144/46 = 3.13x as measured by the number of general multiplications.
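As a numerical sanity check (our addition, not part of the paper), the NumPy sketch below builds the complex transforms exactly as reconstructed above and verifies that Y = A^T[(GgG^T) ⊙ (B^T dB)]A reproduces direct 4 × 4 correlation on random integer tiles.

```python
import numpy as np

i = 1j
B_T = np.array([[1, 0, 0, 0, -1, 0],
                [0, 1, 1, 1,  1, 0],
                [0,-1, 1,-1,  1, 0],
                [0,-i,-1, i,  1, 0],
                [0, i,-1,-i,  1, 0],
                [0,-1, 0, 0,  0, 1]], dtype=complex)
G = np.array([[1, 0, 0],
              [0.25, 0.25, 0.25],
              [0.25,-0.25, 0.25],
              [0.25, 0.25j,-0.25],
              [0.25,-0.25j,-0.25],
              [0, 0, 1]], dtype=complex)
A_T = np.array([[1, 1, 1, 1, 1, 0],
                [0, 1,-1, i,-i, 0],
                [0, 1, 1,-1,-1, 0],
                [0, 1,-1,-i, i, 1]], dtype=complex)

def winograd_complex_f44_33(d, g):
    """Y = A^T [(G g G^T) . (B^T d B)] A with the complex F(4x4, 3x3) transforms."""
    return A_T @ ((G @ g @ G.T) * (B_T @ d @ B_T.T)) @ A_T.T

rng = np.random.default_rng(1)
d = rng.integers(-128, 128, (6, 6)).astype(float)
g = rng.integers(-128, 128, (3, 3)).astype(float)
# Direct 4x4 "valid" correlation as the reference.
direct = np.array([[np.sum(d[y:y + 3, x:x + 3] * g) for x in range(4)] for y in range(4)])
assert np.allclose(winograd_complex_f44_33(d, g).real, direct)
```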
Efficiency gain for hardware implementation. Recall that for the F (4 × 4, 3 × 3) in Q with 4x reduction, the bitwidth of the Winograd-domain filters has to be widened by ⌈log_2(24^2)⌉ = 10 bits. With the F (4 × 4, 3 × 3) in C, the widening is reduced to log_2(4^2) = 4 bits. Given the typical bitwidth of spatial filters in quantized neural networks as 8-bit, using the complex F (4 × 4, 3 × 3) instead of its rational counterpart reduces the bitwidth by 1 − (8 + 4)/(8 + 10) = 33.33% and achieves an efficiency gain of (3.13/12)/(4.0/18) − 1 = 17.37% with respect to the bitwidth. Comparing to the rational F (2 × 2, 3 × 3), the efficiency gain is (3.13/(8 + 4))/(2.25/(8 + 2)) − 1 = 15.93%.
Efficiency gain for software implementation. Software speedup by CPU/GPU benefits from improved SIMD vectorization. The complex F(4x4, 3x3) reduces the bitwidth from 18 to 12, enabling int-16 SIMD instructions where available and extending an n-way vectorization to 2n-way.
Additional optimizations include:
• Keeping the Hadamard product in the Karatsuba format if the products are summed across multiple channels.
• Skipping the calculations for the imaginary coefficients in the final results, as we know they will sum to 0 because of the original computation of convolving two integer tensors g and d.
The optimization techniques and analysis developed in this section extend to the derivation of larger Winograd algorithms that require more good interpolation points in addition to [0, 1, −1].
FILTER PRECISION SCALING
In this section, we propose an efficient precision scaling scheme for the Winograd-domain filters which further improves the efficiency of the Hadamard products without any significant accuracy loss. The scheme works in parallel with the complex Winograd algorithms introduced in Section 3.3.
Quantized Filters in Winograd Domain
For inference on mobile and edge devices, it has been shown that quantized neural network models can achieve accuracies comparable to the full-precision floating-point (fp32) models. Mainstream machine learning frameworks such as TensorFlow (Abadi et al., 2016) have also developed their quantization flows that convert fp32 models to int8 models. As summarized in (Krishnamoorthi, 2018), typical quantization methods include: (1) uniform affine quantization, (2) uniform symmetric quantization, and (3) stochastic quantization.
In this work, we assume the quantized filter weights are generated by the uniform affine quantization and represented in unsigned int8 (uint8) together with a dynamic range. Figure 1 illustrates an example of the quantized weights extracted from a fully-connected layer.
The bottom x-axis shows the uint8 weights ranging from 0 to 255; the top x-axis shows the dequantized fp32 weights.
The spatial filters are transformed into the Winograd domain using the transform matrices. We use F (2 × 2, 3 × 3) in Q as an example. The filter transform matrix is G′:
G′ =
[ 2   0   0 ]
[ 1   1   1 ]
[ 1  −1   1 ]
[ 0   0   2 ]
which is produced by scaling up the original filter transform matrix G (Lavin, 2015) by a factor of 2 element-wise for integer arithmetic.
Let g = [g_{i,j}] for i, j ∈ {0, 1, 2}; then G′g becomes
[ 2g_{0,0}                      2g_{0,1}                      2g_{0,2}                    ]
[ g_{0,0} + g_{1,0} + g_{2,0}   g_{0,1} + g_{1,1} + g_{2,1}   g_{0,2} + g_{1,2} + g_{2,2} ]
[ g_{0,0} − g_{1,0} + g_{2,0}   g_{0,1} − g_{1,1} + g_{2,1}   g_{0,2} − g_{1,2} + g_{2,2} ]
[ 2g_{2,0}                      2g_{2,1}                      2g_{2,2}                    ]
Denote the elements of G′g as p_i, i ∈ {0, 1, . . . , 11}, in row-major order for a simpler representation; then G′gG′^T becomes
[ 2p_0   p_0 + p_1 + p_2        p_0 − p_1 + p_2        2p_2    ]
[ 2p_3   p_3 + p_4 + p_5        p_3 − p_4 + p_5        2p_5    ]
[ 2p_6   p_6 + p_7 + p_8        p_6 − p_7 + p_8        2p_8    ]
[ 2p_9   p_9 + p_{10} + p_{11}  p_9 − p_{10} + p_{11}  2p_{11} ]
In order to adjust for the asymmetry introduced by the uniform affine quantization, the zero-point needs to be subtracted from the uint8 quantized weights, resulting in int9 weights in the range [−255, 255]. As a result, the worst-case magnitude of an element of G′gG′^T is 9 · 255 = 2295, which requires 13 bits including the sign. The same analysis can be applied to the activation transform B^T dB, whose results can be represented with 11 bits.
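The 13-bit worst case quoted above can be reproduced by brute force over extreme int9 weights; the short NumPy sketch below is our illustration and uses the scaled transform G′ defined above.

```python
import itertools
import numpy as np

G_prime = np.array([[2, 0, 0],
                    [1, 1, 1],
                    [1, -1, 1],
                    [0, 0, 2]], dtype=np.int64)

worst = 0
for signs in itertools.product([-255, 255], repeat=9):       # extreme int9 weights
    g = np.array(signs, dtype=np.int64).reshape(3, 3)
    worst = max(worst, int(np.abs(G_prime @ g @ G_prime.T).max()))

# Prints 2295 and 13 (12 magnitude bits plus a sign bit).
print(worst, int(np.ceil(np.log2(worst + 1))) + 1)
```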
Filter Precision Scaling
Targeting quantized filters in the Winograd domain, we propose an efficient lossy precision scaling scheme using only integer arithmetic. We continue to use F (2 × 2, 3 × 3) as the running example.
The precision scaling is applied to the filters used to generate one output feature map (OFM). The scheme computes the minimum downscale factor at each X-Y location across all channels using the maximum magnitude. The scale factors are computed to put the transformed weights back into the int9 range which are then consumed by the multipliers. The downscaled Hadamard products are accumulated over all channels. Finally, the scaling is inverted before the final transforms A T and A.
Since the maximum magnitude in G′gG′^T is 2295 = 9 · 255, the scale factors must cover the range from 1/9 to 1. For cost reasons, we implement the scaling by (1) multiplying with a 4-bit number n, and (2) shifting right by a variable amount p. That is, the scale factor has the form n/2^p. Recall that F (2 × 2, 3 × 3) transforms a 3 × 3 spatial filter into a 4 × 4 Winograd-domain filter. Since we share the same scale factor at each X-Y location across all channels, 16 scale factors are computed for a 4 × 4 × c Winograd-domain filter, where c is the number of channels.
Next we describe the steps to compute the scale factors.
1. Transform all weights for the current OFM.
2. Compute the 16 maximum magnitudes for the 4 × 4 × c Winograd-domain kernel.
3. If the maximum magnitude for a given X-Y location is ≤ 255, set n and p to 0, meaning no scaling is applied.
4. Otherwise, select the largest scale factor n/2^p that brings the scaled maximum magnitude back into the int9 range [−255, 255]; a sketch of this search follows below.
Note that p ranges from 4 to 7 with an offset of 4 and can therefore be represented with 2 bits. As a result, a total of 6 bits are used to specify each scale factor, with the value 0 meaning "no scaling". The scaling factors are summarized in Table 1, where the out-of-range and some duplicated scale factors are grayed out.
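The sketch referenced in step 4 is given below; it assumes NumPy and a (4, 4, C) Winograd-domain filter per OFM, and interpreting "minimum downscale" as picking the largest n/2^p that restores the int9 range is our reading of the text rather than the paper's exact implementation.

```python
import numpy as np

def pick_scale(max_mag):
    """Return (n, p) with scale n / 2**p, or (0, 0) meaning 'no scaling'."""
    if max_mag <= 255:
        return 0, 0
    best = (1, 7)                                   # fallback: smallest representable scale, 1/128
    for p in range(4, 8):                           # shifts 4..7 (2-bit field with offset 4)
        for n in range(1, 16):                      # 4-bit multiplier
            scale = n / 2 ** p
            if max_mag * scale <= 255 and scale > best[0] / 2 ** best[1]:
                best = (n, p)
    return best

def winograd_scales(w_filter):
    """w_filter: (4, 4, C) Winograd-domain weights of one OFM; one (n, p) per X-Y location."""
    max_mag = np.abs(w_filter).max(axis=-1)
    return [[pick_scale(m) for m in row] for row in max_mag]
```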
The reverse scaling before the final transforms is applied as a combination of an 8-bit multiply and a right shift between 4 and 7 bits, which constitutes a more precise approximation of the reciprocal of the corresponding scaling factor. During the application of A^T and A, note that the original matrix G has been scaled up by 2 to G′; thus a right shift by 1 must be taken after each final transform to cancel the scaling.
Efficiency and Error Analysis
The transform and downscaling of the filters are performed before inference time, and the Winograd-domain filters can be reused during inference. The downscaling step performs O(n) comparisons, 4-bit multiplications, and right shifts, where n is the number of weights. The reverse scaling performs only h × w 8-bit multiplications and h × w right shifts. Recall that subtracting the zero-point extends the range of quantized weights from uint8 to int9. In the example of F (2 × 2, 3 × 3), applying the Winograd transforms further extends the required range to 13-bit as calculated in Section 4.1. By using the proposed lossy filter precision scaling scheme, we reduce the range of Winograd weights back to 9-bit, leading to a filter bitwidth reduction of 30.77%.
The integer approximations of scaling factors introduce errors. We analyze the static errors here and measure the dynamic data-driven errors in terms of inference accuracy loss in Section 4.4.
For static scaling errors, Figure 3 uses dashed vertical lines to show the applicable boundaries and the proportional scaling errors of each unique n/2^p scaling factor. Figures 4 and 5 describe the numerical and proportional errors of all the scalable weights after being downscaled and then upscaled by the best integer-approximated scaling factors. The average numerical error is 1.12, and the average proportional error is 0.1%, indicating that the filter precision scaling scheme introduces a small positive-biased error overall.
Evaluation
The filter precision scaling scheme is tested on two popular convnet models, Inception V3 (Szegedy et al., 2015) and ResNet V2 50, with the benchmark dataset ILSVRC-12 (Russakovsky et al., 2014). To produce the quantized models, we first obtain the pre-trained fp32 models published by TensorFlow-Slim (Silberman & Guadarrama, 2016). Then we apply the standard quantization approach recommended by TensorFlow (TFQ).
The quantization method replaces the fp32 Conv2D nodes in the original pre-trained model with int8 QuantizedConv2D nodes (usually followed by Requantize), an example of which is illustrated in Figure 2 using TensorBoard (TB).
Our experiment captures the subset of QuantizedConv2D nodes in the quantized models whose filter height and width are both 3 and whose strides and dilations are both [1, 1, 1, 1]. Note that nodes with non-unit strides or dilations, or with 1D filter shapes (1 × 3 or 3 × 1), do not affect the accuracy comparison and are therefore skipped. The subsets of captured nodes, twelve for Inception V3 and sixteen for ResNet V2 50, are then edited dynamically using the Graph Editor library (GE); a sketch of the capture step is given after the list below. The editing takes place on two levels:
• On the graph level, the subgraph that contains the captured nodes is duplicated within the same graph, such that the same image will be processed by both the reference subgraph and the Winograd and scaling-enabled counterpart.
• On the node level, each captured QuantizedConv2D node in the duplicated subgraph is replaced with a custom-built F (2 × 2, 3 × 3) convolution scaled by the filter precision scaling method proposed in Section 4.2.
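As referenced above, a rough sketch of the node-capture step, assuming a frozen TensorFlow GraphDef of the quantized model is available (the file path, helper name, and simplified attribute checks are ours; the paper's actual pipeline uses the Graph Editor library as described):

```python
import tensorflow as tf

with tf.io.gfile.GFile("quantized_model.pb", "rb") as f:   # placeholder path
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

def capture_quantized_conv_nodes(graph_def):
    """Return names of QuantizedConv2D nodes with unit strides and dilations.
    (Checking for a 3 x 3 filter additionally requires inspecting the constant
    weight tensor feeding each node, which is omitted in this sketch.)"""
    captured = []
    for node in graph_def.node:
        if node.op != "QuantizedConv2D":
            continue
        strides = list(node.attr["strides"].list.i)
        dilations = (list(node.attr["dilations"].list.i)
                     if "dilations" in node.attr else [1, 1, 1, 1])
        if strides == [1, 1, 1, 1] and dilations == [1, 1, 1, 1]:
            captured.append(node.name)
    return captured

print(capture_quantized_conv_nodes(graph_def))
```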
Inception V3. The quantized model records a 73.91% top-1 accuracy and a 90.97% top-5 accuracy, while the precision-scaled Winograd model achieves a 73.69% top-1 accuracy (∆ = −0.22%) and a 90.3% top-5 accuracy (∆ = −0.67%).
ResNet V2 50. The precision-scaled Winograd model leads to a small loss of 0.13% in top-1 accuracy (73.34% → 73.21%) and the same top-5 accuracy of 90.83%, compared to the quantized counterpart.
Both experiments confirm that the proposed precision scaling scheme leads to very small accuracy loss for quantized models. Extensive experiments on more neural networks are planned as part of future work.
Conclusion
The Winograd convolution has proved its advantages for small convolution sizes through the reduction in arithmetic complexity, but it also poses challenges to efficient implementation on the emerging kernels and hardware accelerators that use integer arithmetic.
This paper is the first to extend the algorithm construction to the field of complex numbers and to derive new complex algorithms for convnets. As an example, the complex F(4 × 4, 3 × 3) achieves a complexity reduction of 3.13x over the direct method and an efficiency gain in the range of 15.93% to 17.37% over the best-known rational Winograd algorithms when the hardware bitwidth is considered. The derivation method and optimization techniques developed in this paper extend to the construction of larger Winograd convolutions. This paper also answers the challenges from the implementation perspective. We proposed a fast integer-based precision scaling scheme for Winograd-domain filters. The scheme has been analyzed to show a significant reduction in filter bitwidth with very small static errors. Furthermore, we have shown that the combination of Winograd convolution and the lossy scaling scheme achieves inference accuracy comparable to the reference model, without any significant loss.
For future work, we will extend the quantitative impact analysis of the additional optimizations listed for complex Winograd convolution, and perform more experiments on the impact of precision scaling across a wider range of neural networks.
1901.01860 | 2907178700 | In this paper, we propose a method for clustering image-caption pairs by simultaneously learning image representations and text representations that are constrained to exhibit similar distributions. These image-caption pairs arise frequently in high-value applications where structured training data is expensive to produce but free-text descriptions are common. MultiDEC initializes parameters with stacked autoencoders, then iteratively minimizes the Kullback-Leibler divergence between the distribution of the images (and text) to that of a combined joint target distribution. We regularize by penalizing non-uniform distributions across clusters. The representations that minimize this objective produce clusters that outperform both single-view and multi-view techniques on large benchmark image-caption datasets. | Joint embedding of image and text models have been increasingly popular in applications including image captioning @cite_0 @cite_39 @cite_24 , question answering @cite_4 , and information retrieval @cite_24 @cite_35 @cite_16 . DeVise @cite_5 is the first method to generate visual-semantic embeddings that linearly transform a visual embedding from a pre-trained deep neural network into the embedding space of textual representation. The method begins with a pre-trained language model, then optimizes the visual-semantic model with a combination of dot-product similarity and hinge rank loss as the loss function. After DeVise, several visual semantic models have been developed by optimizing bi-directional pairwise ranking loss @cite_23 @cite_27 and maximum mean discrepancy loss @cite_7 . Maximizing CCA (Canonical Correlation Analysis) @cite_2 is also a common way to acquire cross-modal representation. @cite_29 address the problem of matching images and text in a joint latent space learned with deep canonical correlation analysis. @cite_35 develop a canonical correlation analysis layer and then apply pairwise ranking loss to learn a common representation of image and text for information retrieval tasks. However, most image-text multi-modal studies focus on matching image and text. Few methods study the problem of unsupervised clustering of image-text pairs. | {
"abstract": [
"Cross-modality retrieval encompasses retrieval tasks where the fetched items are of a different type than the search query, e.g., retrieving pictures relevant to a given text query. The state-of-the-art approach to cross-modality retrieval relies on learning a joint embedding space of the two modalities, where items from either modality are retrieved using nearest-neighbor search. In this work, we introduce a neural network layer based on canonical correlation analysis (CCA) that learns better embedding spaces by analytically computing projections that maximize correlation. In contrast to previous approaches, the CCA layer allows us to combine existing objectives for embedding space learning, such as pairwise ranking losses, with the optimal projections of CCA. We show the effectiveness of our approach for cross-modality retrieval on three different scenarios (text-to-image, audio-sheet-music and zero-shot retrieval), surpassing both Deep CCA and a multi-view network using freely learned projections optimized by a pairwise ranking loss, especially when little training data is available (the code for all three methods is released at: https: github.com CPJKU cca_layer).",
"We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing 0.25M images, 0.76M questions, and 10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (this http URL).",
"Many of the existing methods for learning joint embedding of images and text use only supervised information from paired images and its textual attributes. Taking advantage of the recent success of unsupervised learning in deep neural networks, we propose an end-to-end learning framework that is able to extract more robust multi-modal representations across domains. The proposed method combines representation learning models (i.e., auto-encoders) together with cross-domain learning criteria (i.e., Maximum Mean Discrepancy loss) to learn joint embeddings for semantic and visual features. A novel technique of unsupervised-data adaptation inference is introduced to construct more comprehensive embeddings for both labeled and unlabeled data. We evaluate our method on Animals with Attributes and Caltech-UCSD Birds 200-2011 dataset with a wide range of applications, including zero and few-shot image recognition and retrieval, from inductive to transductive settings. Empirically, we show that our frame-work improves over the current state of the art on many of the considered tasks.",
"This paper addresses the problem of matching images and captions in a joint latent space learnt with deep canonical correlation analysis (DCCA). The image and caption data are represented by the outputs of the vision and text based deep neural networks. The high dimensionality of the features presents a great challenge in terms of memory and speed complexity when used in DCCA framework. We address these problems by a GPU implementation and propose methods to deal with overfitting. This makes it possible to evaluate DCCA approach on popular caption-image matching benchmarks. We compare our approach to other recently proposed techniques and present state of the art results on three datasets.",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"Abstract: In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu junhua.mao m-RNN.html .",
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.",
"This paper proposes a method for learning joint embeddings of images and text using a two-branch neural network with multiple layers of linear projections followed by nonlinearities. The network is trained using a large margin objective that combines cross-view ranking constraints with within-view neighborhood structure preservation constraints inspired by metric learning literature. Extensive experiments show that our approach gains significant improvements in accuracy for image-to-text and text-to-image retrieval. Our method achieves new state-of-the-art results on the Flickr30K and MSCOCO image-sentence datasets and shows promise on the new task of phrase localization on the Flickr30K Entities dataset.",
"",
"",
"Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18 across thousands of novel labels never seen by the visual model.",
"Designing powerful tools that support cooking activities has rapidly gained popularity due to the massive amounts of available data, as well as recent advances in machine learning that are capable of analyzing them. In this paper, we propose a cross-modal retrieval model aligning visual and textual data (like pictures of dishes and their recipes) in a shared representation space. We describe an effective learning scheme, capable of tackling large-scale problems, and validate it on the Recipe1M dataset containing nearly 1 million picture-recipe pairs. We show the effectiveness of our approach regarding previous state-of-the-art models and present qualitative results over computational cooking use cases."
],
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_7",
"@cite_29",
"@cite_39",
"@cite_0",
"@cite_24",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_16"
],
"mid": [
"2620341577",
"2950761309",
"2603705233",
"1949478088",
"2950178297",
"2962706528",
"2951805548",
"2287889828",
"",
"",
"2123024445",
"2798397499"
]
} | MultiDEC: Multi-Modal Clustering of Image-Caption Pairs | In many science and engineering applications, images are equipped with free-text descriptions, but structured training labels are difficult to acquire. For example, the figures in the scientific literature are an important source of information (Sethi et al. 2018;Lee et al. 2017), but no training data exists to help models learn to recognize particular types of figures. These figures are, however, equipped with a caption describing the content or purpose of the figure, and these captions can be used as a source of (noisy) supervision. Grechkin et al. used distant supervision and co-learning to jointly train an image classifier and a text classifier, and showed that this approach offered improved performance (Grechkin, Poon, and Howe 2018). However, this approach relied on an ontology as a source of class labels. No consensus on an ontology exists in specialized domains, and any ontology that does exist will change frequently, requiring re-training. Our goal is to perform unsupervised learning using only the image-text pairs as input.
A conventional approach is to cluster the images alone, ignoring the associated text. Unsupervised image clustering has received significant research attention in computer vision recently (Xie, Girshick, and Farhadi 2016;Yang, Parikh, and Batra 2016). However, as we will show, these single-view approaches fail to produce semantically meaningful clusters on benchmark datasets. Another conventional solution is to cluster the corresponding captions using NLP techniques, ignoring the content of the images. However, the free-text descriptions are not a reliable representation of the content of the image, resulting in incorrect assignments.
Current multi-modal image-text models focus on matching images and corresponding captions for information retrieval tasks (Karpathy and Fei-Fei 2015;Dorfer et al. 2018;Carvalho et al. 2018), but there is less work on unsupervised learning for both images and text. Jin et al. (Jin et al. 2015) solved a similar problem where they utilized Canonical Correlation Analysis (CCA) to characterize correlations between image and text. However, the textual information used by their model was explicit tags rather than long-form free-text descriptions. Unlike tags, free-text descriptions are extremely noisy: they always contain significant irrelevant information and may not even describe the content of the image.
We propose MultiDEC, a clustering algorithm for image-text pairs that considers both visual features and text features and simultaneously learns representations and cluster assignments for images. MultiDEC extends prior work on Deep Embedded Clustering (DEC) (Xie, Girshick, and Farhadi 2016), a method that simultaneously learns feature representations and cluster assignments using deep neural networks. DEC learns a mapping function from the data space to a lower-dimensional feature space in which it iteratively optimizes the Kullback-Leibler divergence between the embedded data distribution and a computed target distribution. DEC has shown success in clustering several benchmark datasets including both images and text (separately).
Despite its utility, in our experiments DEC may generate empty clusters or assign clusters to outlier data points, which is a common problem in clustering tasks (Dizaji et al. 2017;Caron et al. 2018). We address the problem of empty clusters by introducing a regularization term that forces the model to find a solution with a more balanced assignment.
We utilize a pair of DEC models to take data from the image and text views. Derived from the target distribution in (Xie, Girshick, and Farhadi 2016), we propose a joint distribution for both the embedded image features and the text features. MultiDEC simultaneously learns the image representation by iterating between computing the joint target distribution and minimizing the KL divergence between the embedded data distribution and the computed joint target distribution. We evaluate our method with four benchmark datasets and compare to both single-view and multi-view methods. Our method shows significant improvement over other algorithms on all large datasets.
In this paper, we make the following contributions:
• We propose a novel model, MultiDEC, that considers semantic features from corresponding captions to simultaneously learn feature representations and cluster centroids for images. MultiDEC iterates between computing a joint target distribution from image and text and minimizing the regularized KL divergence between the soft assignments and the joint target distribution.
• We run a battery of experiments to compare our method to multiple single-view and multi-view algorithms on four different datasets and demonstrate the superior performance of our model.
• We conduct a qualitative analysis to show that MultiDEC separates the semantically and visually similar data points and is robust to noisy and missing text.
Method
Parameter Initialization
We initialize the DNN parameters with two stacked autoencoders (SAE). A stacked autoencoder has shown success in generating semantically meaningful representations in several studies (c.f., (Vincent et al. 2010;Le 2013;Xie, Girshick, and Farhadi 2016)). We utilize a symmetric stacked autoencoder to learn the initial DNN parameters for each view by minimizing the mean square error loss between the output and the input. After training the autoencoders, we discard the decoders, pass the data X and T through the trained encoders, and apply K-means to the embeddings in Z and Z′ to obtain the initial centroids µ_j and µ′_j.
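A minimal sketch of this initialization for one view follows. It is an editorial simplification: a single symmetric autoencoder trained end-to-end on the full batch rather than layer-wise pretraining, with placeholder layer sizes, epochs, and learning rate.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def make_autoencoder(d_in, d_latent=10):
    enc = nn.Sequential(nn.Linear(d_in, 500), nn.ReLU(),
                        nn.Linear(500, 500), nn.ReLU(),
                        nn.Linear(500, d_latent))
    dec = nn.Sequential(nn.Linear(d_latent, 500), nn.ReLU(),
                        nn.Linear(500, 500), nn.ReLU(),
                        nn.Linear(500, d_in))
    return enc, dec

def init_view(X, n_clusters, epochs=200, lr=1e-3):
    """Train a symmetric autoencoder on one view, then K-means its embeddings."""
    enc, dec = make_autoencoder(X.shape[1])
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    for _ in range(epochs):                      # full-batch reconstruction (MSE loss)
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(X)), X)
        loss.backward()
        opt.step()
    with torch.no_grad():
        Z = enc(X)                               # discard the decoder, keep embeddings
    centroids = KMeans(n_clusters).fit(Z.numpy()).cluster_centers_
    return enc, torch.tensor(centroids, dtype=torch.float32)
```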
With the initialization of DNN parameters and centroids, MultiDEC updates the parameters and the centroids by iterating between computing a joint target distribution and minimizing a (regularized) KL divergence of both data views to it. In the first step, we compute soft assignments (i.e., a distribution over cluster assignments) for both views. The process stops when convergence is achieved.
Soft Assignment Following Xie et al. (Xie, Girshick, and Farhadi 2016), we model the probability of data point i being assigned to cluster j using the Student's t-distribution (Maaten and Hinton 2008), producing a distribution (q ij for images and r ij for text).
$$q_{ij} = \frac{(1 + \|z_i - \mu_j\|^2/\alpha)^{-\frac{\alpha+1}{2}}}{\sum_{j'}(1 + \|z_i - \mu_{j'}\|^2/\alpha)^{-\frac{\alpha+1}{2}}} \tag{1}$$
$$r_{ij} = \frac{(1 + \|z'_i - \mu'_j\|^2/\alpha)^{-\frac{\alpha+1}{2}}}{\sum_{j'}(1 + \|z'_i - \mu'_{j'}\|^2/\alpha)^{-\frac{\alpha+1}{2}}} \tag{2}$$
where q_{ij} and r_{ij} are the soft assignments for the image view and the text view, respectively, and α is the number of degrees of freedom of the Student's t-distribution. z_i is the embedding of data x_i in the latent space Z, i.e., z_i = f_{θ_X}(x_i), and z'_i is the embedding of data t_i in the latent space Z', i.e., z'_i = g_{θ_T}(t_i). Following Xie et al., we set α to 1 because we are not able to tune it in an unsupervised manner.
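The soft assignment of Eqs. (1)/(2) is a few lines in NumPy (an editorial sketch with α = 1 by default; the function and variable names are ours):

```python
import numpy as np

def soft_assign(Z, mu, alpha=1.0):
    """Student's-t soft assignment q_ij between embeddings Z (N x d) and centroids mu (k x d)."""
    d2 = ((Z[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # squared distances, N x k
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)                # normalize over clusters
```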
Aligning Image Clusters and Text Clusters After calculating the soft assignments for both views, we need to align the two sets of k clusters. This cluster alignment is obtained from the highest probability cluster (i.e., image i is assigned to cluster arg max j q ij ). Next, to align image clusters and text clusters, we use the Hungarian algorithm to find the minimum cost assignment. We create a k × k confusion matrix where an entry (m, n) represents the number of data points being assigned to m-th image cluster and n-th text cluster. We then subtract the maximum value of the matrix from the value of each cell to obtain the "cost." The Hungarian algorithm is then applied to the cost matrix.
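The alignment step can be written with SciPy's Hungarian solver (an editorial sketch; it maximizes the diagonal agreement by negating the confusion matrix rather than subtracting its maximum, which yields the same assignment):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_clusters(q, r):
    """Map text-cluster indices onto image-cluster indices."""
    img, txt = q.argmax(1), r.argmax(1)            # hard assignments per view
    k = q.shape[1]
    confusion = np.zeros((k, k), dtype=int)
    for i, t in zip(img, txt):
        confusion[i, t] += 1
    row, col = linear_sum_assignment(-confusion)   # maximize total agreement
    return dict(zip(col, row))                     # text cluster -> image cluster
```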
KL Divergence Minimization Xie et al. trained DEC by minimizing the KL divergence between the soft assignment q ij and a target distribution p ij (presented below in Eq. 8), with a goal of purifying the clusters by learning from high confidence assignments:
$$L = KL(p\|q) = \frac{1}{N}\sum_i^N \sum_j^k p_{ij} \log\frac{p_{ij}}{q_{ij}} \tag{3}$$
DEC fails to address the issue of trivial solutions and empty clusters, which happen frequently in clustering problems (Dizaji et al. 2017;Caron et al. 2018). Dizaji et al. (Dizaji et al. 2017) used a regularization term to penalize non-uniform cluster assignments. Following this concept, we define a target label distribution by averaging the joint target distribution over all data points.
$$h_j = P(y = j) = \frac{1}{N}\sum_i^N p_{ij} \tag{4}$$
where h_j can be interpreted as the prior frequency of clusters in the joint target distribution. To impose a preference for a balanced assignment, we add a term representing the KL divergence from a uniform distribution u. The regularized KL divergence is computed as
$$L = KL(p\|q) + KL(h\|u) \tag{5}$$
where the first term aims to minimize the dissimilarity between the soft assignment and the joint target distribution and the second term forces the model to prefer a balanced assignment. The uniform distribution can be replaced with another distribution if there is prior knowledge of the cluster frequencies.
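In code, the regularized objective of Eq. (5) might look as follows (a NumPy sketch with our own names; a small epsilon is added for numerical stability):

```python
import numpy as np

def regularized_kl(p, q, eps=1e-12):
    """KL(p || q) averaged over points, plus KL(h || u) on the cluster frequencies."""
    kl_pq = np.mean(np.sum(p * np.log((p + eps) / (q + eps)), axis=1))
    h = p.mean(axis=0)                     # prior cluster frequencies of the target
    u = np.full_like(h, 1.0 / len(h))      # uniform prior over the k clusters
    kl_hu = np.sum(h * np.log((h + eps) / u))
    return kl_pq + kl_hu
```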
MultiDEC is trained by matching the image distribution q ij to the joint distribution p ij , and similarly for the text distribution r ij .
$$L_{img} = KL(p\|q) + KL(h\|u) \tag{6}$$
$$L_{txt} = KL(p\|r) + KL(h\|u) \tag{7}$$
At this point, we have presented half of the iteration: how MultiDEC generates the soft assignments, and the objective function of the image and text models. Next, we compute a joint target distribution from both views. It is derived from the target distribution of DEC (Xie, Girshick, and Farhadi 2016), which aims to improve cluster purity and to emphasize data points with high assignment confidence:
$$p_{ij} = \frac{q_{ij}^2/\beta_j}{\sum_{j'} q_{ij'}^2/\beta_{j'}} \tag{8}$$
where $\beta_j = \sum_i q_{ij}$. To fit the multi-view setting, we propose a joint target distribution:
$$p_{ij} = \frac{q_{ij}^2/\beta_j}{2\sum_{j'} q_{ij'}^2/\beta_{j'}} + \frac{r_{ij}^2/\sigma_j}{2\sum_{j'} r_{ij'}^2/\sigma_{j'}} \tag{9}$$
where $\beta_j = \sum_i q_{ij}$ and $\sigma_j = \sum_i r_{ij}$ are the soft cluster frequencies for the image view and the text view, respectively. With this joint target distribution, MultiDEC is able to take both sources of information into account during training.
Some images do not have associated text; we want the model to be robust to this situation. Missing text causes the second term in Equation (9) to be 0, so data points with text have larger values of p_{ij} and contribute a larger gradient to the model. We will discuss this issue in more detail in Section 5.
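The joint target of Eq. (9), including this missing-text behaviour, can be sketched as follows (an editorial NumPy formulation; `has_text` is a hypothetical boolean mask, and rows of r for captionless images can be arbitrary since they are zeroed out):

```python
import numpy as np

def joint_target(q, r, has_text):
    """Average of the two DEC-style targets; rows without text use only the image term."""
    pq = q ** 2 / q.sum(axis=0, keepdims=True)        # q_ij^2 / beta_j
    pq = pq / (2.0 * pq.sum(axis=1, keepdims=True))
    pr = r ** 2 / r.sum(axis=0, keepdims=True)        # r_ij^2 / sigma_j
    pr = pr / (2.0 * pr.sum(axis=1, keepdims=True))
    pr[~has_text] = 0.0                               # missing captions: second term is 0
    return pq + pr
```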
Experiments
We evaluate our method with four datasets and compare to several single-view and multi-view algorithms. In these experiments, we aim to verify the effectiveness of MultiDEC on real datasets and to validate that MultiDEC outperforms single-view methods and state-of-the-art multi-view methods.
Datasets
To evaluate our method, we use datasets that have images with corresponding captions as well as ground-truth labels to define the clusters. Our proposed model is tested with four datasets from three different sources and compared against several single-view and multi-view algorithms. We summarize the datasets in Table 1. We use ResNet-50 (He et al. 2016), pretrained on the 1.2M-image ImageNet (Deng et al. 2009) corpus, to extract 2048-dimensional image features, and doc2vec (Le and Mikolov 2014), pre-trained on Wikipedia via skip-gram, to embed captions and obtain text features. Recent studies have shown that image features embedded by ImageNet-pretrained models improve general image clustering tasks and that ResNet-50 features are superior to representations extracted from other state-of-the-art CNN architectures (Guérin et al. 2017;Guérin and Boots 2018). Doc2vec has also been shown to produce effective representations for long text paragraphs (Lau and Baldwin 2016).
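For the image side, the 2048-dimensional features can be extracted with a pre-trained ResNet-50 in a few lines (a torchvision sketch; the paper does not specify its preprocessing, so the resize and normalization below are standard ImageNet defaults rather than the authors' settings):

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

resnet = models.resnet50(pretrained=True)
feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-1])  # drop the fc layer
feature_extractor.eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

def image_features(pil_images):
    batch = torch.stack([preprocess(img) for img in pil_images])
    with torch.no_grad():
        return feature_extractor(batch).flatten(1)   # N x 2048 feature matrix
```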
Competitive Methods
We compare our method to a variety of single-view and multi-view methods.
Single-view methods We run two single-view methods to serve as baseline comparisons. DEC (Xie, Girshick, and Farhadi 2016) simultaneously learns feature representations and cluster assignments of the data by minimizing the KL divergence between the data distribution and an auxiliary target distribution; we then apply K-means to its output. We show results for both text and image inputs.
Table 2: Clustering performance of several single-view and multi-view algorithms on four datasets. The results reported are the average of 10 iterations. MultiDEC outperforms the competing methods on three datasets by a large margin. The insufficient performance of the DNN models on the Pascal dataset might be caused by an insufficient amount of data.
Multi-view methods We evaluate three state-of-the-art multi-view methods. Current methods for matching image and text models are based on minimizing ranking loss or maximizing CCA between text and image.
Evaluation Metrics
All experiments are evaluated by three standard clustering metrics: clustering accuracy (ACC), normalized mutual information (NMI), and adjusted Rand index (ARI). For all metrics, higher numbers indicate better performance. We use the hyperparameter settings of Xie et al. (Xie, Girshick, and Farhadi 2016). For the baseline algorithms, we use the same settings as in their corresponding papers. All the results are the average of 10 trials. Table 2 displays the quantitative results for the different methods on the various datasets. MultiDEC outperforms the other tools on almost every dataset by a noticeable margin, with the exception of the Pascal dataset. All the DNN models suffer from poor performance on the Pascal dataset. Our interpretation is that there is not sufficient data (1,000 data points with 50 images per cluster) for the DNNs to converge. However, MultiDEC still surpasses the other DNN models on this dataset.
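For completeness, the three metrics above can be computed as follows (an editorial sketch; ACC uses a Hungarian matching between predicted clusters and true classes, while NMI and ARI come directly from scikit-learn):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def clustering_accuracy(y_true, y_pred):
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1
    row, col = linear_sum_assignment(-cost)        # best cluster-to-class matching
    return cost[row, col].sum() / len(y_true)

def evaluate(y_true, y_pred):
    return (clustering_accuracy(y_true, y_pred),
            normalized_mutual_info_score(y_true, y_pred),
            adjusted_rand_score(y_true, y_pred))
```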
Experimental Results
Discussion
In this section, we discuss additional experiments to expand on MultiDEC's effectiveness.
Qualitative Comparison
The cluster metrics are difficult to interpret, so we are interested in exploring a qualitative comparison between MultiDEC and the best single-view and multi-view competitors, DEC and CCA, respectively. Figure 2 is a visualization of the latent space of MultiDEC to illustrate its effectiveness in producing coherent clusters (color encoding is based on ground-truth labels). We use t-SNE to visualize the embeddings from the latent space with perplexity = 30. The positions and shapes of the clusters are not meaningful due to the operation of t-SNE. We can observe that MultiDEC successfully separates data points that overlap in the original latent space and generates semantically meaningful clusters. Both DEC and MultiDEC are able to generate distinct clusters, but DEC appears to have many more false assignments; for example, DEC has trouble separating kite from airplane and giraffe from pizza. CCA is able to gather semantically similar images together, but the cluster boundaries are much less distinct, which makes its latent space difficult for clustering analysis.
We further compare the three algorithms by inspecting examples of the clusters. Figure 3 shows the top five images with the highest confidence from each cluster of the Coco-cross dataset. The ten categories in Coco-cross are stop sign, airplane, suitcase, pizza, cell phone, person, giraffe, kite, toilet, and clock. The figure shows that DEC clusters are not always coherent. For example, cluster #1 and cluster #7 seem to include mostly airplane images, and cluster #0 and cluster #4 are clock clusters. Cluster #9 from DEC is a fusion of giraffe and pizza, which are semantically very different. Our guess is that both giraffe and pizza share similar colors (yellow) and patterns (spots on the giraffe body and toppings on the pizza). MultiDEC, on the other hand, is easily able to distinguish these objects, because the text descriptions expose their semantic differences. Surprisingly, MultiDEC is also able to distinguish airplane and kite, which are not only visually similar but also semantically related. However, we still observe some errors from MultiDEC, such as examples of suitcase and cell phone, which are visually similar, being assigned to the same cluster (cluster #1), and clock examples being separated into two clusters: clocks on towers (cluster #2) and indoor clocks (cluster #9). As we saw in Figure 2, the cluster boundaries are indistinct in the CCA latent space, and the qualitative results shown in Figure 3 corroborate this result: several different clusters include similar objects; for example, both cluster #1 and cluster #9 include airplanes, and cluster #0 and cluster #7 include giraffes.
Model Robustness
Model Sensitivity to Text Features We use learned embeddings for text as input to the model. To examine the model's sensitivity to the quality of the input text representations, we experiment with two other baseline text representations, TF-IDF and FastText (Bojanowski et al. 2017). TF-IDF ignores all co-occurrence relationships, and therefore has significantly less semantic content, so we expect performance to be worse. We produce a 2000-d vector of text features for each data point. FastText is a word embedding model; we aggregate its word vectors to represent paragraphs. Table 3 shows the results of MultiDEC trained using these different text features. MultiDEC produces similar results despite the different text features, which demonstrates that the performance is the result of the MultiDEC algorithm itself rather than the quality of the input text features.
Figure 4: Experimental results on model robustness to missing text. The clustering accuracy holds even with very little data, which verifies our hypothesis in the Method section.
Robustness to Missing Text Incomplete views are a very common problem in multi-view clustering (Xu, Tao, and Xu 2015). In realistic settings, not all images will be equipped with text descriptions, and we want MultiDEC to degrade gracefully in these situations. To analyze the robustness of MultiDEC when text descriptions are missing, we remove the text from a random set of images at varying rates. We expect that performance will degrade as we remove text labels; if it did not, then MultiDEC would not be making use of the information in the text labels. The results are shown in Figure 4, and we see that performance does indeed degrade, but not by a significant amount. This result shows that the joint target distribution can work with either or both sources of information (Equation 9). Images with missing text have smaller values of p_{ij} because the second term in Equation (9) is ignored, while images with captions have larger values of p_{ij} and contribute a larger gradient to the model. We also ran an experiment to test the robustness to noisy text by scrambling the image-text pairs, such that a given percentage of the images would be associated with the text of a different image. This change is more adversarial than missing text, as the incorrect labels could train the model to learn incorrect signals (Figure 5). The performance of MultiDEC remains steady until almost 60% of the text is perturbed, indicating that MultiDEC is robust to incorrect labels.
Figure 5: MultiDEC performance when swapping the text labels for a random portion of the image-text pairs. The model performance remains high until over 60% of the input text is scrambled.
Conclusion
We present MultiDEC, a method that learns representations from multi-modal image-text pairs for clustering analysis.
MultiDEC consists of a pair of DEC models that take data from image and text, and works by iteratively computing a proposed joint target distribution and minimizing the KL divergence between the embedded data distribution and the computed joint target distribution. We also address the issue of empty clusters by adding a regularization term to our KL divergence loss. MultiDEC demonstrates superior performance on various datasets and outperforms single-view algorithms and state-of-the-art multi-view models. We further examine the robustness of MultiDEC to input text features and to missing and noisy text. Our experimental results indicate that MultiDEC is a promising model for image-text pair clustering. | 3,209
1901.01860 | 2907178700 | In this paper, we propose a method for clustering image-caption pairs by simultaneously learning image representations and text representations that are constrained to exhibit similar distributions. These image-caption pairs arise frequently in high-value applications where structured training data is expensive to produce but free-text descriptions are common. MultiDEC initializes parameters with stacked autoencoders, then iteratively minimizes the Kullback-Leibler divergence between the distribution of the images (and text) to that of a combined joint target distribution. We regularize by penalizing non-uniform distributions across clusters. The representations that minimize this objective produce clusters that outperform both single-view and multi-view techniques on large benchmark image-caption datasets. | The authors of @cite_25 addressed a related problem where they aim to cluster images by integrating multimodal feature generation with Locality Linear Coding (LLC) and a co-occurrence association network, multimodal feature fusion with CCA, and accelerated hierarchical k-means clustering. However, the text data they handled are tags rather than the longer, noisy, and unreliable free-text descriptions that we handle in MultiDEC. The authors of @cite_14 proposed EZLearn, a co-training framework which takes image-text data and an ontology to classify images using labels from the ontology. This model requires prior knowledge of the data in order to derive an ontology; this prior knowledge is not always available, and can significantly bias the results toward the clusters implied by the ontology. | {
"abstract": [
"Many real-world applications require automated data annotation, such as identifying tissue origins based on gene expressions and classifying images into semantic categories. Annotation classes are often numerous and subject to changes over time, and annotating examples has become the major bottleneck for supervised learning methods. In science and other high-value domains, large repositories of data samples are often available, together with two sources of organic supervision: a lexicon for the annotation classes, and text descriptions that accompany some data samples. Distant supervision has emerged as a promising paradigm for exploiting such indirect supervision by automatically annotating examples where the text description contains a class mention in the lexicon. However, due to linguistic variations and ambiguities, such training data is inherently noisy, which limits the accuracy of this approach. In this paper, we introduce an auxiliary natural language processing system for the text modality, and incorporate co-training to reduce noise and augment signal in distant supervision. Without using any manually labeled data, our EZLearn system learned to accurately annotate data samples in functional genomics and scientific figure comprehension, substantially outperforming state-of-the-art supervised methods trained on tens of thousands of annotated examples.",
"A new algorithm via Canonical Correlation Analysis (CCA) is developed in this paper to support more effective cross-modal image clustering for large-scale annotated image collections. It can be treated as a bi-media multimodal mapping problem and modeled as a correlation distribution over multimodal feature representations. It integrates the multimodal feature generation with the Locality Linear Coding (LLC) and co-occurrence association network, multimodal feature fusion with CCA, and accelerated hierarchical k-means clustering, which aims to characterize the correlations between the inter-related visual features in images and semantic features in captions, and measure their association degree more precisely. Very positive results were obtained in our experiments using a large quantity of public data."
],
"cite_N": [
"@cite_14",
"@cite_25"
],
"mid": [
"2760228341",
"881606563"
]
} | MultiDEC: Multi-Modal Clustering of Image-Caption Pairs | In many science and engineering applications, images are equipped with free-text descriptions, but structured training labels are difficult to acquire. For example, the figures in the scientific literature are an important source of information (Sethi et al. 2018;Lee et al. 2017), but no training data exists to help models learn to recognize particular types of figures. These figures are, however, equipped with a caption describing the content or purpose of the figure, and these captions can be used as a source of (noisy) supervision. Grechkin et al. used distant supervision and co-learning to jointly train an image classifier and a text classifier, and showed that this approach offered improved performance (Grechkin, Poon, and Howe 2018). However, this approach relied on an ontology as a source of class labels. No consensus on an ontology exists in specialized domains, and any ontology that does exist will change frequently, requiring re-training. Our goal is to perform unsupervised learning using only the image-text pairs as input.
A conventional approach is to cluster the images alone, ignoring the associated text. Unsupervised image clustering has received significant research attention in computer vision recently (Xie, Girshick, and Farhadi 2016;Yang, Parikh, and Batra 2016). However, as we will show, these single-view approaches fail to produce semantically meaningful clusters on benchmark datasets. Another Copyright c 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. conventional solution is to cluster the corresponding captions using NLP techniques, ignoring the content of the images. However, the free-text descriptions are not a reliable representation of the content of the image, resulting in incorrect assignments.
Current multi-modal image-text models focus on matching images and corresponding captions for information retrieval tasks (Karpathy and Fei-Fei 2015;Dorfer et al. 2018;Carvalho et al. 2018), but there is less work on unsupervised learning for both images and text. Jin et al. (Jin et al. 2015) solved a similar problem where they utilized Canonical Correlation Analysis (CCA) to characterize correlations between image and text. However, the textual information for the model were explicit tag rather than long-form freetext descriptions. Unlike tags, free-text descriptions are extremely noisy: they always contain significant irrelevant information, and may not even describe the content of the image.
We propose MultiDEC, a clustering algorithm for imagetext pairs that considers both visual features and text features and simultaneously learns representations and cluster assignments for images. MultiDEC extends prior work on Deep Embedded Clustering (DEC) (Xie, Girshick, and Farhadi 2016), which is a method that simultaneously learns feature representations and cluster assignments using deep neural networks. DEC learns a mapping function from the data space to a lower-dimensional feature space in which it iteratively optimizes Kullback-Leibler divergence between embedded data distribution and a computed target distribution. DEC has shown success on clustering several benchmark datasets including both images and text (separately).
Despite its utility, in our experiments DEC may generate empty clusters or assigns clusters to outlier data points, which is a common problem in clustering tasks (Dizaji et al. 2017;Caron et al. 2018). We address the problem of empty clusters by introducing a regularization term to force the model to find a solution with a more balanced assignment.
We utilize a pair of DEC models to take data from image and text. Derived from the target distribution in (Xie, Girshick, and Farhadi 2016), we propose a joint distribution for both the embedded image features and the text features. MultiDEC simultaneously learns the image representation by iterating between computing the joint target distribution and minimizing KL divergence between the embed-ded data distribution to the computed joint target distribution. We evaluate our method with four benchmark datasets and compare to both single-view and multi-view methods. Our method shows significantly improvement over other algorithms in all large datasets.
In this paper, we make the following contributions: • We propose a novel model, MultiDEC, that considers semantic features from corresponding captions to simultaneously learn feature representations and cluster centroids for images. MultiDEC iterates between computing a joint target distribution from image and text and minimizing the regularized KL divergence between the soft assignments and the joint target distribution. • We run a battery of experiments to compare our method to multiple single-view and multi-view algorithms on four different datasets and demonstrate the superior performance of our model. • We conduct a qualitative analysis to show that MultiDEC separates the semantically and visually similar data points and is robust to noisy and missing text.
Method
Parameter Initialization
We initialize DNN parameters with two stacked autoencoders (SAE). A stacked autoencoder has shown success in generating semantically meaningful representation in several studies (c.f., (Vincent et al. 2010;Le 2013;Xie, Girshick, and Farhadi 2016)). We utilize a symmetric stacked autoencoder to learn the initial DNN parameters for each view by minimizing mean square error loss between the output and the input. After training the autoencoder, we discard the decoder, pass data X and T through trained encoder and apply K-means to the embeddings on Z and Z to obtain initial centroids µ j and µ j .
With the initialization of DNN parameters and centroids, MultiDEC updates the parameters and the centroids by iterating between computing a joint target distribution and minimizing a (regularized) KL divergence of both data views to it. In the first step, we compute soft assignments (i.e., a distribution over cluster assignments) for both views. The process stops when convergence is achieved.
Soft Assignment Following Xie et al. (Xie, Girshick, and Farhadi 2016), we model the probability of data point i being assigned to cluster j using the Student's t-distribution (Maaten and Hinton 2008), producing a distribution (q ij for images and r ij for text).
q ij = (1 + z i − µ j 2 /α) − α+1 2 j (1 + z i − µ j 2 /α) α+1 2
(1)
r ij = (1 + z i − µ j 2 /α) − α+1 2 j (1 + z i − µ j 2 /α) α+1 2(2)
where q ij and r ij are the soft assignments for image view and text view, respectively, and α is the number of degrees of freedom of the Student's t-distribution. z i is the embedding on latent space Z of data x i , which can be described as z i = f θ X (x i ). z i is the embedding on latent space Z of data t i , which can be illustrated as z i = g θ T (t i ). Following Xie et al., we set α to 1 because we are not able to tune it in an unsupervised manner.
Aligning Image Clusters and Text Clusters After calculating the soft assignments for both views, we need to align the two sets of k clusters. This cluster alignment is obtained from the highest probability cluster (i.e., image i is assigned to cluster arg max j q ij ). Next, to align image clusters and text clusters, we use the Hungarian algorithm to find the minimum cost assignment. We create a k × k confusion matrix where an entry (m, n) represents the number of data points being assigned to m-th image cluster and n-th text cluster. We then subtract the maximum value of the matrix from the value of each cell to obtain the "cost." The Hungarian algorithm is then applied to the cost matrix.
KL Divergence Minimization Xie et al. trained DEC by minimizing the KL divergence between the soft assignment q ij and a target distribution p ij (presented below in Eq. 8), with a goal of purifying the clusters by learning from high confidence assignments:
L = KL(p||q) = 1 N N i k j p ij log p ij q ij(3)
DEC fails to address the issue of trivial solutions and empty clusters which happen frequently in clustering problems (Dizaji et al. 2017;Caron et al. 2018). Dizaji et al. (Dizaji et al. 2017) used a regularization term to penalize non-uniform cluaster assignments. Following this concept, we define a target label distribution by averaging the joint target distribution from all data points.
h j = P (y = j) = 1 N N i p ij(4)
where h j can be interpreted as the prior frequency of clusters in the joint target distribution. To impose the preference of a balanced assignment, we add a term representing the KL divergence from a uniform distribution u. The regularized KL divergence is computed as L = KL(p||q) + KL(h||u) (5) where the first term aims to minimize the dissimilarity between soft assignment and joint target distribution and the second term is to force the model to prefer a balanced assignment. The uniform distribution can be replaced with other distribution if there is any prior knowledge of the cluster frequency.
MultiDEC is trained by matching the image distribution q ij to the joint distribution p ij , and similarly for the text distribution r ij .
L img = KL(p||q) + KL(h||u)(6)
L txt = KL(p||r) + KL(h||u) (7) At this point, we have presented half of the iteration: how MultiDEC generates the soft assignments, and the objective function of the image and text models. Next, we compute a joint target distribution from both views. (Xie, Girshick, and Farhadi 2016) which aims to improve cluster purity and to emphasize data points with high assignment confidence:
p ij = q 2 ij/β j j q 2 ij /β j(8)
where β j = i q ij . To fit the model with multi-view problem setting, we propose a joint target distribution:
p ij = q 2 ij/β j 2 × j q 2 ij /β j + r 2 ij/σ j 2 × j r 2 ij /σ j(9)
where β j = i q ij and σ j = i r ij are soft cluster frequencies for image view and text view, respectively. With this joint target distribution, MultiDEC is able to take both sources of information into account during training.
Some images do not have associated text; we want the model to be robust to this situation. Missing text causes the second term in equation (9) to be 0 and the data points with text would have higher value of p ij and contribute a larger gradient to the model. We will discuss this issue in more detail in Section 5.
Experiments
We evaluate our method with four datasets and compare to several single-view and multi-view algorithms. In these experiments, we aim to verify the effectiveness of MultiDEC on real datasets, validate that MultiDEC outperforms singleview methods and state-of-the-art multi-view methods.
Datasets
To evaluate our method, we use datasets that have images with corresponding captions as well as ground-truth labels to define the clusters. Our proposed model is tested with four datasets from three different sources and compared against several single-view and multi-view algorithms. We summarize the results in Table 1 We use ResNet-50 (He et al. 2016), pretrained on 1.2M ImageNet (Deng et al. 2009) corpus, for extracting 2048 dimensional image features and doc2vec (Le and Mikolov 2014), which is pre-trained on Wikipedia via skip-gram, to embed captions and obtain text features. Recent studies have shown image features embedded by ImageNet pretrained models improve general image clustering tasks and ResNet-50 features are superior than representation extracted from other state-of-the-art CNN architectures (Guérin et al. 2017;Guérin and Boots 2018). Doc2vec also has shown to produce effective representations for long text paragraphs (Lau and Baldwin 2016).
Competitive Methods
We compare our method to a variety of single-view and multi-view methods.
Single-view methods We run two single-view methods to serve as baseline comparison. (Xie, Girshick, and Farhadi 2016): DEC simultaneously learns feature representations and cluster assignments of the data by minimizing KL divergence between data distribution and an auxiliary target distribution. We then apply K-means to output. We show results for both text and image inputs. Table 2: Clustering performance of several single-view and mutli-view algorithms on four datasets. The results reported are the average of 10 iterations. MultiDEC outperforms the comparing methods on three datasets by a large margin. The insufficient performance from DNN models on Pascal dataset might be caused by insufficient amount of data.
Multi-view methods We evaluate three state-of-the-art multi-view methods. Current methods for matching image and text models are based on minimizing ranking loss or maximizing CCA between text and image.
Evaluation Metrics
All experiments are evaluated by three standard clustering metrics: clustering accuracy (ACC), normalized mutual information (NMI), and adjusted rand index (ARI). For all metrics, higher numbers indicates better performance. We use hyperparameter settings following Xie et al. (Xie, Girshick, and Farhadi 2016). For baseline algorithms, we use the same setting in their corresponding paper. All the results are the average of 10 trials. Table 4.1 displays the quantitative results for different methods on various datasets. MultiDEC outperforms other tools on almost every dataset by a noticeable margin with an exception of Pascal dataset. All the DNN models suffer from poor performance on Pascal dataset. Our interpretation is that there is not sufficient data (1,000 data points with 50 images per cluster) for the DNNs to converge. However, Mul-tiDEC still surpasses other DNN models on this dataset.
Experimental Results
Discussion
In this section, we discuss additional experiments to expand on MultiDEC's effectiveness.
Qualitative Comparison
The cluster metrics are difficult to interpret, so we are interested in exploring a qualitative comparison between Multi-DEC and the best single-view and multi-view competitors, DEC and CCA, respectively. Figure 2 is a visualization of the latent space of MultiDEC to illustrate its effectiveness in producing coherent clusters. We use t-SNE to visualize the embeddings from the latent space with perplexity = 30. The positions and shapes of the clusters are not meaningful due to the operation of t-SNE. Both DEC and MultiDEC are able to generate distinct clusters, but DEC appears to have many more false assignments. For example, DEC can struggle to differentiate giraffe from pizza and kite from airplane. CCA is able to gather semantically similar images together, but the cluster boundaries are much less distinct.
We further compare three algorithms by inspecting examples of the clusters. Figure 3 shows the top five images with highest confidence from each cluster from the Coco-cross dataset. The ten categories in Coco-cross are stop sign, airplane, suitcase, pizza, cell phone, person, giraffe, kite, toilet, and clock. The figure shows that DEC clusters are not always coherent. For example, cluster #1 and cluster #7 seem to include mostly airplane images and cluster #0 and cluster #4 are clock clusters. Cluster #9 from DEC is a fusion of giraffe and pizza, which are semantically significantly different. Our guess is that both giraffe and pizza share similar colors (yellow) and patterns (spots on the giraffe body and toppings on the pizza). MultiDEC, on the other hand, is easily able to distinguish these objects, because the text descriptions expose their semantic differences. Surprisingly, Multi-DEC is also able to distinguish airplane and kite, which are not only visually similar, but are also semantically related. However, we are still able to observe some errors from Mul-tiDEC, such as examples of suitcase and cellphone, which (Color encoding is based on groundtruth labels.) We can observe that MultiDEC successfully separates overlapped data points in original latent space and generates semantic meaningful clusters. DEC has trouble with separating kite from airplane and giraffe from pizza. While CCA is able to gather semantic similar images, but the latent space is still difficult for clustering analysis with unclear bondaries between clusters. are visually similar, assigned into the same cluster (cluster #1) and clock examples separated into two clusters: clocks on towers (cluster #2) and indoors clocks (cluster #9). As we saw in Figure 2, the cluster boundaries are indistinct in CCA latent space, and the qualitative results shown in Figure 3 corroborate this result. We can see several different clusters include similar objects; for example, both cluster #1 and cluster #9 include airplanes and cluster #0 and cluster #7 include giraffes.
Model Robustness
Model Sensitivity to Text Features We use learned embeddings for text as input to the model. To examine the model's sensitivity to the quality of the input text representations, we experiment with two other baseline text representations, TF-IDF and FastText (Bojanowski et al. 2017). TF-IDF ignores all co-occurrence relationships, and therefore has significantly less semantic content, so we expect performance to be worse. We produce a 2000-d vector for text features for each data point. FastText is a word embed- ) to represent paragraphs. Table 3 shows the results of MultiDEC trained using these different text features. MultiDEC produces similar results despite different text features, which demonstrates that the performance is the result of the MultiDEC algorithm itself rather than the quality of the input text features. Figure 4: Experiment results on model robustness to missing text. The clustering accuracy holds even with very little data, which verify our hypothesis in method section.
Robustness to Missing Text Incomplete views are a very common problem in multi-view clustering (Xu, Tao, and Xu 2015). In realistic settings, not all images will be equipped with text descriptions, and we want MultiDEC to degrade gracefully in these situations. To analyze the robustness of MultiDEC when text descriptions are missing, we remove the text from a random set of images at varying rates. We expect performance to degrade as we remove text labels; if it did not, then MultiDEC would not be making use of the information in the text labels. The results are shown in Figure 4 (caption: experiment results on model robustness to missing text; the clustering accuracy holds even with very little data, which verifies our hypothesis in the method section). We see that performance does indeed degrade, but not by a significant amount. This result shows that the joint target distribution can work with either or both sources of information (Equation 9). Images with missing text have a smaller value of $p_{ij}$ because the second term in Equation (9) is ignored, while images with captions have a larger value of $p_{ij}$ and contribute a larger gradient to the model.
We also ran an experiment to test the robustness to noisy text by scrambling the image-text pairs, such that a given percentage of the images is associated with the text of a different image. This change is more adversarial than missing text, as the incorrect labels could train the model to learn incorrect signals (Figure 5; caption: MultiDEC performance when swapping the text labels for a random portion of the image-text pairs; the model performance remains high until over 60% of the input text is scrambled). The performance of MultiDEC remains steady until almost 60% of the text is perturbed, indicating that MultiDEC is robust to incorrect labels.
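A sketch of the text-scrambling protocol used for the noisy-text experiment is shown below; retraining MultiDEC on each scrambled corpus and measuring accuracy is only indicated in a comment, since the training pipeline itself is not reproduced here.

```python
import random

def scramble_captions(captions, fraction, seed=0):
    """Swap the captions of `fraction` of the image-text pairs (a random permutation
    of the selected positions), leaving the remaining pairs untouched."""
    rng = random.Random(seed)
    out = list(captions)
    idx = rng.sample(range(len(out)), int(fraction * len(out)))
    permuted = idx[:]
    rng.shuffle(permuted)
    for dst, src in zip(idx, permuted):
        out[dst] = captions[src]          # read from the original list, write to the copy
    return out

captions = ["giraffe by a tree", "pizza with cheese", "airplane in the sky", "clock tower"]
print(scramble_captions(captions, fraction=0.5))
# For the robustness curve: for frac in [0.0, 0.2, 0.4, 0.6, 0.8], retrain MultiDEC on
# (images, scramble_captions(captions, frac)) and record the clustering accuracy.
```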
Conclusion
We present MultiDEC, a method that learns representations from multi-modal image-text pairs for clustering analysis.
MultiDEC consists of a pair of DEC models that take data from the image and text views, and works by iteratively computing a proposed joint target distribution and minimizing the KL divergence between the embedded data distribution and the computed joint target distribution. We also address the issue of empty clusters by adding a regularization term to our KL divergence loss. MultiDEC demonstrates superior performance on various datasets and outperforms single-view algorithms and state-of-the-art multi-view models. We further examine the robustness of MultiDEC to the choice of input text features and to missing and noisy text. Our experimental results indicate that MultiDEC is a promising model for image-text pair clustering. | 3,209
1901.01716 | 2908344088 | In this paper, we study Forman's discrete Morse theory in the context of weighted homology. We develop weighted versions of classical theorems in discrete Morse theory. A key difference in the weighted case is that simplicial collapses do not necessarily preserve weighted homology. We work out some sufficient conditions for collapses to preserve weighted homology, as well as study the effect of elementary removals on weighted homology. An application to sequence analysis is included, where we study the weighted ordered complexes of sequences. | Other papers involving usage of weights and discrete Morse theory include @cite_23 , where weights are applied to different colors in the Red-Green-Blue (RGB) encoding. Discrete Morse theory is then used in combination with persistent homology for data analysis. In @cite_15 , discrete Morse theory is used to extract the extremal structure of scalar and vector fields on 2D manifolds embedded in @math . Weights @math are assigned to the edges of the cell graph, followed by computing the sequence of maximum weight matchings. An algorithmic pipeline computes a hierarchy of extremal structures, where the hierarchy is defined by an importance measure and enables the user to select an appropriate level of detail. | {
"abstract": [
"This paper presents a computational framework that allows for a robust extraction of the extremal structure of scalar and vector fields on 2D manifolds embedded in 3D. This structure consists of critical points, separatrices, and periodic orbits. The framework is based on Forman's discrete Morse theory, which guarantees the topological consistency of the computed extremal structure. Using a graph theoretical formulation of this theory, we present an algorithmic pipeline that computes a hierarchy of extremal structures. This hierarchy is defined by an importance measure and enables the user to select an appropriate level of detail.",
"Understanding and comparing images for the purposes of data analysis is currently a very computationally demanding task. A group at Australian National University (ANU) recently developed open-source code that can detect fundamental topological features of a grayscale image in a computationally feasible manner. This is made possible by the fact that computers store grayscale images as cubical cellular complexes. These complexes can be studied using the techniques of discrete Morse theory. We expand the functionality of the ANU code by introducing methods and software for analyzing images encoded in red, green, and blue (RGB), because this image encoding is very popular for publicly available data. Our methods allow the extraction of key topological information from RGB images via informative persistence diagrams by introducing novel methods for transforming RGB-to-grayscale. This paradigm allows us to perform data analysis directly on RGB images representing water scarcity variability as well as crime variability. We introduce software enabling a a user to predict future image properties, towards the eventual aim of more rapid image-based data behavior prediction."
],
"cite_N": [
"@cite_15",
"@cite_23"
],
"mid": [
"2147683672",
"2787753804"
]
} | 0 |
||
1907.04385 | 2957560463 | We propose to study unweighted graphs of constant distance VC-dimension as a broad generalization of many graph classes for which we can compute the diameter in truly subquadratic-time. In particular for any fixed @math , the class of @math -minor free graphs has distance VC-dimension at most @math . Our first main result is that on graphs of distance VC-dimension at most @math , for any fixed @math we can either compute the diameter or conclude that it is larger than @math in time @math , where @math only depends on @math . Then as a byproduct of our approach, we get the first truly subquadratic-time algorithm for constant diameter computation on all the nowhere dense graph classes. Finally, we show how to remove the dependency on @math for any graph class that excludes a fixed graph @math as a minor. More generally, our techniques apply to any graph with constant distance VC-dimension and polynomial expansion. As a result for all such graphs one obtains a truly subquadratic-time algorithm for computing their diameter. Our approach is based on the work of Chazelle and Welzl who proved the existence of spanning paths with strongly sublinear stabbing number for every hypergraph of constant VC-dimension. We show how to compute such paths efficiently by combining the best known approximation algorithms for the stabbing number problem with a clever use of @math -nets, region decomposition and other partition techniques. | An early example of linear-time solvable special case for diameter computation is the class of interval graphs @cite_0 . For every interval graph @math and for any integer @math , if we first compute an interval representation for @math in linear-time @cite_31 then we can compute by dynamic programming, for every vertex @math , the contiguous segment of all the vertices at a distance @math from @math in @math . It takes almost linear-time and it implies a straightforward quasi linear-time algorithm for diameter computation. More efficient algorithms for diameter computation on interval graphs and related graph classes were proposed in @cite_23 . Nevertheless we will show in what follows that interval orderings are a powerful tool for diameter computation on more general geometric graph classes. | {
"abstract": [
"The computational problem of finding the center of a graph is motivated by a number of facility-location problems. We exploit a new characterization of interval graphs for the purpose of obtaining a linear-time algorithm for computing both the center and the diameter of an interval graph.",
"",
"Determining the diameter of a graph is a fundamental graph operation, yet no efficient (i.e. linear or quadratic time) algorithm is known. In this paper, we examine the diameter problem on chordal graphs and AT-free graphs and show that a very simple (linear time) 2-sweep LexBFS algorithm identifies a vertex of maximum eccentricity unless the given graph has a specified induced subgraph (it was previously known that a single LexBFS algorithm is guaranteed to end at a vertex that is within 1 of the diameter for chordal graphs and AT-free graphs). As a consequence of the forbidden induced subgraph result on chordal graphs, our algorithm is guaranteed to work optimally for directed path graphs (it was previously known that a single LexBFS algorithm is guaranteed to work optimally for interval graphs)."
],
"cite_N": [
"@cite_0",
"@cite_31",
"@cite_23"
],
"mid": [
"2102564932",
"9515613",
"2091612923"
]
} | 0 |
||
1901.01187 | 2763400571 | In this paper, we propose PopNetCod, a popularity-based caching policy for network coding enabled Named Data Networking. PopNetCod is a distributed caching policy, in which each router measures the local popularity of the content objects by analyzing the requests that it receives. It then uses this information to decide which Data packets to cache or evict from its content store. Since network coding is used, partial caching of content objects is supported, which facilitates the management of the content store. The routers decide the Data packets that they cache or evict in an online manner when they receive requests for Data packets. This allows the most popular Data packets to be cached closer to the network edges. The evaluation of PopNetCod shows an improved cache-hit rate compared to the widely used Leave Copy Everywhere placement policy and the Least Recently Used eviction policy. The improved cache-hit rate helps the clients to achieve higher goodput, while it also reduces the load on the source servers. | None of the approaches above consider the use of network coding @cite_25 , and all are evaluated in single-path scenarios. Given the benefits that network coding brings to multipath communications in NDN @cite_17 @cite_9 @cite_10 @cite_3 , some approaches have been proposed to improve the benefits of caching in network coding enabled NDN architectures @cite_26 @cite_24 @cite_15 . @cite_26 and @cite_24 propose optimal solutions to the problem of efficiently caching in network coding enabled NDN. However, both approaches need a central entity that is aware of the network topology and the Interests, which does not scale well with the number of network nodes. @cite_15 is an eviction policy in which routers, before evicting a Data packet, apply network coding to the Data packet by means of combining it with other Data packets with the same name prefix that will remain in the cache. Due to the increased Data packet diversity in the network, the cache-hit rate is improved. However, in Interest aggregation and Interest pipelining are problematic, limiting the benefits that network coding brings to the NDN architecture. | {
"abstract": [
"",
"Content-Centric Networking (CCN) naturally supports multi-path communication, as it allows the simultaneous use of multiple interfaces (e.g. LTE and WiFi). When multiple sources and multiple clients are considered, the optimal set of distribution trees should be determined in order to optimally use all the available interfaces. This is not a trivial task, as it is a computationally intense procedure that should be done centrally. The need for central coordination can be removed by employing network coding, which also offers improved resiliency to errors and large throughput gains. In this paper, we propose NetCodCCN, a protocol for integrating network coding in CCN. In comparison to previous works proposing to enable network coding in CCN, NetCodCCN permits Interest aggregation and Interest pipelining, which reduce the data retrieval times. The experimental evaluation shows that the proposed protocol leads to significant improvements in terms of content retrieval delay compared to the original CCN. Our results demonstrate that the use of network coding adds robustness to losses and permits to exploit more efficiently the available network resources. The performance gains are verified for content retrieval in various network scenarios.",
"We consider the benefits brought by network coding in Information-Centric Networks (ICNs) in the case of a video streaming application. Network coding, when combined with ICN, allows a data transfer session to use multiple sources for the content seamlessly. It permits the client fetching the content to use multiple interfaces at the same time in an asynchronous manner, while using their capacity in an additive manner. This allows to create a logical link between the user and the content. In the case of video streaming, this logical link allows the rate adaptation logic to find the proper streaming rate while using multiple links concurrently. We implemented a video streaming system which works using network coding and CCN. We have shown that this implementation performs satisfactorily, and has comparable performance to a system without network coding in the unicast single source, single path case, and delivers significant performance gain (better QoE, higher throughput) when the video client retrieves the stream from multiple concurrent sources. We hope to demonstrate that network coding and CCN provides seamless mobility for a video streaming application.",
"The increasing demand for media-rich content has driven many efforts to redesign the Internet architecture. As one of the major candidates, information-centric network (ICN) has attracted significant attention, where in-network cache is a key component in different ICN architectures. In this paper, we propose a novel framework for optimal cache management in ICNs which jointly considers caching strategy and content routing. Specifically, we propose a cache management framework for ICNs based on software-defined networking (SDN) where a controller is responsible for determining the optimal caching strategy and content routing via linear network coding (LNC). Under the proposed cache management framework, we formally formulate the problem of minimizing the network bandwidth cost by jointly considering caching strategy and content routing with LNC. We develop an efficient network coding based cache management (NCCM) algorithm to obtain a near-optimal caching and routing solution for ICNs. We further develop a lower bound of the problem and conduct extensive experiments to compare the performance of the NCCM algorithm with the lower bound. Simulation results validate the effectiveness of the NCCM algorithm and framework.",
"Content Centric Networking (CCN) performance by definition depends on the in-network caching efficiency. We propose CodingCache which utilizes network coding and random forwarding to improve caching efficiency under multipath forwarding. Its advantage is that existing caching strategies can be easily incorporated with it for better performance. We evaluate CodingCache by extensive simulation experiments with the China Telecom network topology and a unique dataset consisting of video access logs from the PPTV system. The results demonstrate that compared with the CCN caching strategy, CodingCache improves the cache hit rate by about 60 .",
"The fast and huge increase of Internet traffic motivates the development of new communication methods that can deal with the growing volume of data traffic. To this aim, named data networking (NDN) has been proposed as a future Internet architecture that enables ubiquitous in-network caching and naturally supports multipath data delivery. Particular attention has been given to using dynamic adaptive streaming over HTTP to enable video streaming in NDN as in both schemes data transmission is triggered and controlled by the clients. However, state-of-the-art works do not consider the multipath capabilities of NDN and the potential improvements that multipath communication brings, such as increased throughput and reliability, which are fundamental for video streaming systems. In this paper, we present a novel architecture for dynamic adaptive streaming over network coding enabled NDN. In comparison to previous works proposing dynamic adaptive streaming over NDN, our architecture exploits network coding to efficiently use the multiple paths connecting the clients to the sources. Moreover, our architecture enables efficient multisource video streaming and improves resiliency to Data packet losses. The experimental evaluation shows that our architecture leads to reduced data traffic load on the sources, increased cache-hit rate at the in-network caches and faster adaptation of the requested video quality by the clients. The performance gains are verified through simulations in a Netflix-like scenario.",
"We introduce a new class of problems called network information flow which is inspired by computer network applications. Consider a point-to-point communication network on which a number of information sources are to be multicast to certain sets of destinations. We assume that the information sources are mutually independent. The problem is to characterize the admissible coding rate region. This model subsumes all previously studied models along the same line. We study the problem with one information source, and we have obtained a simple characterization of the admissible coding rate region. Our result can be regarded as the max-flow min-cut theorem for network information flow. Contrary to one's intuition, our work reveals that it is in general not optimal to regard the information to be multicast as a \"fluid\" which can simply be routed or replicated. Rather, by employing coding at the nodes, which we refer to as network coding, bandwidth can in general be saved. This finding may have significant impact on future design of switching systems.",
"User behavior in the Internet has changed over the recent years towards being driven by exchanging and accessing information. Many advances in networking technologies have utilized this change by focusing on the content of an exchange rather than on the endpoints exchanging the content, in particular to better support mobility. Network coding and information-centric networking are two examples of these trends, each being developed largely independently thus far. This paper brings these areas together at the internetworking layer. We outline opportunities for applying network coding in a novel and performance-enhancing way that could push forward the case for information-centric networking itself."
],
"cite_N": [
"@cite_26",
"@cite_9",
"@cite_3",
"@cite_24",
"@cite_15",
"@cite_10",
"@cite_25",
"@cite_17"
],
"mid": [
"",
"2257611623",
"2575082881",
"1973550857",
"1996673765",
"2663509035",
"2105831729",
"2119861934"
]
} | PopNetCod: A Popularity-based Caching Policy for Network Coding enabled Named Data Networking | Data intensive applications, e.g., video streaming, software updates, etc., are the major sources of data traffic in the Internet, and their predominance is expected to further increase in the near future [1]. Moreover, nowadays Internet users are more concerned about what data they request, rather than where that data is located. To address the increased data traffic and the shift in interest from location to data, technologies like Content Delivery Networks (CDN) have been proposed. However, these solutions cannot fully exploit the network resources and deal effectively with the increasing amount of data traffic, since they work on top of the current Internet architecture, which is based on host-to-host communication. To address this issue, the Named Data Networking (NDN) architecture [2], [3] has been proposed, which replaces the addresses of the communicating hosts (i.e., IP addresses) with the name of the data being communicated. In the NDN architecture, clients request data by sending an Interest that contains the name of the requested data. Any network node that receives the Interest and holds a copy of the requested data can satisfy it by sending a Data packet back to the client.
Two of the main advantages that the NDN architecture has over the traditional host-to-host architectures are: (i) the inherent use of in-network caching, and (ii) the builtin support for multipath communications. The pervasive innetwork caching concept proposed by NDN reduces the number of hops that Interests and Data packets need to travel in the network. This reduces the delay perceived by the application retrieving the requested data. However, having caches in all the routers is not always necessary to yield the full benefits that caching brings to the data delivery process. Previous works [4]- [6] have shown that enabling caches only at the edge of the network may achieve performance improvements similar to those obtained when every router is equipped with a cache. Furthermore, NDN provides natural multipath support by allowing clients to distribute the Interests that they need to send to retrieve content objects over all their network interfaces (e.g., LTE, Wi-Fi), which enables the applications to better use the clients' network resources. However, in the presence of multiple clients and/or multiple data sources, the optimal use of multiple paths requires the nodes to coordinate where they forward each Interest in order to reduce the number of Data packet transmissions and the network load.
To optimally exploit the benefits brought by in-network caching and multipath communication, previous works [7], [8] had proposed the use of network coding [9]. In a network coding enabled NDN architecture, the network routers code Data packets by combining the Data packets available at their caches prior to forwarding them. The use of network coding (i) increases Data packet diversity in the network, hence, the use of in-network caches is optimized, and (ii) in multi-client and multi-source scenarios it removes the need for coordinating the faces where the nodes forward each Interest, which enables efficient multipath communication. Although there are works that consider the use of network coding in NDN, they do not consider that caching capacity is limited [7], [8], [10], [11] or they assume that a centralized node coordinates the caching decisions [12], [13], which is unrealistic or difficult to deploy.
In this paper, our goal is to develop a distributed caching policy that preserves the benefits that network coding brings to NDN for the realistic case when the caches have limited capacity. We propose PopNetCod, a popularity-based caching policy for network coding enabled NDN architectures. PopNet-Cod is a caching policy in which routers distributedly estimate the popularity of the content objects based on the received Interest. Based on this information, each router decides which Data packets to insert or evict from its cache. The decision to cache a particular Data packet is taken before the Data packet arrives at the router, i.e., while processing the corresponding Interest. Since the first routers to process Interests in their path to the source are the edge routers, this helps to cache the most popular Data packets closer to the network edges, which reduces the data delivery delay [4]- [6]. To avoid caching the same Data packet in multiple routers over the same path, routers communicate the Data packets that they decide to cache by setting a binary flag in the Interests to be forwarded upstream. This increases the Data packet diversity in the caches. When the cache of a router is full and a Data packet should be cached, the router decides which Data packet should be evicted from its cache based on the popularity information.
We implement the proposed caching policy on top of ndnSIM [14], based on the NetCodNDN codebase [8], [10]. We evaluate the performance of PopNetCod in a Netflix-like video streaming scenario, designed using parameters available in the literature [15]- [17]. In comparison with a caching policy that uses the NDN's default Leave Copy Everywhere (LCE) placement policy and the Least Recently Used (LRU) eviction policy, PopNetCod achieves a higher cache-hit rate, which translates into higher video quality at the clients and reduced load at the sources.
The remainder of this paper is organized as follows. Section II provides an overview of the related works. Section III describes the system architecture. Section IV introduces the problem of caching in network coding enabled NDN for data intensive applications. Then, Section V presents our caching policy, PopNetCod. A practical implementation of the PopNetCod caching policy is described in Section VI. Section VII presents the evaluation of the PopNetCod caching policy.
III. OVERVIEW OF NETWORK CODING ENABLED NDN
A. Data Model
We consider a set of content objects $P$ that is made available by a content provider to a set of end users. Each content object is uniquely identified by a name $n$. Clients use this name to request that particular content object. Each content object is divided into a set of Data packets $P_n$, such that the size of each Data packet does not exceed the Maximum Transmission Unit (MTU) of the network. The set of Data packets $P_n$ that compose a content object is divided into smaller sets of Data packets, which are known as generations [23]. The size of each generation $g$ is a design parameter chosen to enable network coding at scale. The set of Data packets that form generation $g$ is denoted as $\hat{P}_{n,g}$, and a network coded Data packet belonging to generation $g$ is represented by $\hat{p}_{n,g}$.
B. Router Model
The routers have three main tables: a Content Store (CS), where they cache Data packets to reply to future Interests, a Pending Interest Table (PIT), where they keep track of the Interests that have been received and forwarded, to know where to send the Data packets backward to the clients, and a Forwarding Information Base (FIB), which associates upstream faces with name prefixes, to route the Interests towards the sources. In order to enable the use of caching policies in the NetCodNDN architecture, we extend its design by adding a new module called Content Store Manager (CSM). The CSM manages the content store by enforcing a determined caching policy.
Whenever a router receives an Interestî n,g , it first verifies if it can reply to this Interest with the Data packets available in the CS. The router replies to the Interest if it is able to generate a network coded Data packet that has high probability of being innovative when forwarded on the path where the Interest arrived, i.e., if the generated Data packet is linearly independent with respect to all the Data packets that have been sent over the face where the Interest arrived. In this case, the router generates a new Data packet by randomly combining the Data packets in its CS and then sends it downstream over the face where the Interest arrived. Otherwise, the router forwards the Interest to its upstream neighbors to receive a new Data packet that enables it to satisfy this Interest. However, if the router has already forwarded one or multiple Interests with the same name prefix (n, g) and it expects to receive enough Data packets to reply to all the pending Interests stored in the PIT, the router simply aggregates this Interest in the PIT, and waits for enough innovative Data packets to arrive before replying to the Interest.
Whenever a router receives a Data packetp n,g , it first determines if the Data packet is innovative or not. A Data packetp n,g is innovative if it is linearly independent with respect to all the Data packets in the CS of the router, i.e., if it increases the rank ofP r n,g . Non-innovative Data packets are discarded. If the Data packetp n,g is innovative, the router sends the Data packet to the CSM, which decides to cache it or not according to the caching policy. Finally, the router generates a new network coded Data packet and sends it over every face that has a pending Interest to be satisfied.
C. Content Store Model
The Content Store (CS) is a temporary storage space in which a router $r$ can cache Data packets that it has received and considers useful to reply to future Interests. The maximum number of Data packets that can be cached in the CS is given by $M$, while the set of Data packets that are cached in the CS is denoted as $\hat{P}^r$. Thus, $|\hat{P}^r| \leq M$.
Data packets in the CS are organized in CS entries. Each CS entry contains a set of network coded Data packets, $\hat{P}^r_{n,g}$, that belong to the same generation $g$. Since the CS has a limited capacity of $M$ Data packets, $\sum_{n,g} |\hat{P}^r_{n,g}| \leq M$. The Data packets that compose a CS entry are stored in a matrix $\hat{P}^r_{n,g}$, where each row is a vector $\hat{p}_{n,g}$ that represents the network coded Data packet $\hat{p}_{n,g}$.
Router $r$ generates a network coded Data packet $\hat{p}_{n,g}$ by randomly combining the Data packets $\hat{P}^r_{n,g}$ in its CS. Thus,
$$\hat{p}_{n,g} = \sum_{j=1}^{|\hat{P}^r_{n,g}|} a_j \cdot \hat{p}^{(j)}_{n,g},$$
where $a_j$ is a randomly selected coding coefficient and $\hat{p}^{(j)}_{n,g}$ is the $j$-th Data packet in $\hat{P}^r_{n,g}$.
In addition to the matrix $\hat{P}^r_{n,g}$, each CS entry also stores a counter $\sigma^f_{n,g}$ for each face $f$ of router $r$. This counter measures the number of Data packets generated by applying network coding to the Data packets stored in the matrix $\hat{P}^r_{n,g}$ that have already been sent over face $f$, i.e., it measures the amount of information from the matrix $\hat{P}^r_{n,g}$ that has been transmitted from router $r$ to its neighboring node connected over face $f$. The counter $\sigma^f_{n,g}$ is used to compute the number of network coded Data packets with name prefix $(n, g)$ that the router can generate from the Data packets cached in its CS and that have a high probability of being innovative to its neighboring node connected over face $f$. This number is denoted as $\xi^f_{n,g}$ and is computed as follows:
$$\xi^f_{n,g} = \operatorname{rank}(\hat{P}^r_{n,g}) - \sigma^f_{n,g}. \qquad (1)$$
When a Data packet with name prefix $(n, g)$ is evicted from the CS of router $r$, the amount of information in the matrix $\hat{P}^r_{n,g}$ is reduced by 1. Correspondingly, the value of $\sigma^f_{n,g}$ is decreased by 1 for all faces.
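A heavily simplified sketch of this bookkeeping is given below: a CS entry keeps the coding vectors of the cached Data packets and a per-face counter $\sigma$, and $\xi$ of Eq. (1) is the rank of the stored vectors minus $\sigma$. For brevity the coding vectors are over GF(2) and encoded as Python integers, whereas the actual system operates over GF(2^8); the class and method names are illustrative, not the NetCodNDN API.

```python
class CSEntry:
    """Content Store entry for one name prefix (n, g)."""

    def __init__(self):
        self.coding_vectors = []   # rows of the matrix, one bitmask per cached coded packet
        self.sigma = {}            # face -> number of coded packets already sent (sigma^f)

    def rank(self):
        # Gaussian elimination over GF(2): keep one pivot per distinct leading bit.
        pivots = {}
        for v in self.coding_vectors:
            while v:
                lead = v.bit_length() - 1
                if lead not in pivots:
                    pivots[lead] = v
                    break
                v ^= pivots[lead]
        return len(pivots)

    def xi(self, face):
        # Eq. (1): coded packets that are still likely innovative for this face.
        return self.rank() - self.sigma.get(face, 0)

    def insert(self, coding_vector):
        self.coding_vectors.append(coding_vector)

    def record_sent(self, face):
        self.sigma[face] = self.sigma.get(face, 0) + 1

entry = CSEntry()
for vec in (0b1010, 0b0110, 0b1100):   # third vector depends on the first two
    entry.insert(vec)
entry.record_sent("eth0")
print(entry.rank(), entry.xi("eth0"))  # -> 2 1
```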
IV. CACHING IN NETWORK CODING ENABLED NDN
Whenever a router $r$ receives an Interest $\hat{i}_{n,g}$ over face $f$, it either (i) replies with a Data packet $\hat{p}_{n,g}$, if it can generate a network coded Data packet that has a high probability of being innovative to its neighboring node connected over face $f$, i.e., $\xi^f_{n,g} > 0$, or, otherwise, (ii) forwards the Interest $\hat{i}_{n,g}$ upstream. If at time $t$ router $r$ receives the Interest $\hat{i}_{n,g}$, a cache-hit is defined as:
$$h^f_{n,g}(t) = \begin{cases} 1, & \text{if } \xi^f_{n,g} > 0 \\ 0, & \text{otherwise.} \end{cases} \qquad (2)$$
Let us now assume that during a time period $[t, t + T]$ router $r$ receives a set of Interests $I(t, T)$. The cache-hit rate during this time period is defined as follows:
$$H(t, T) = \frac{1}{T} \sum_{t'=t}^{t+T} h^f_{n,g}(t'). \qquad (3)$$
The overall cache-hit rate seen by router $r$ at time $t$ can be computed as follows:
$$H(t) = \lim_{T \to \infty} H(t, T) = \lim_{T \to \infty} \frac{1}{T} \sum_{t'=t}^{t+T} h^f_{n,g}(t'). \qquad (4)$$
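An empirical counterpart of Eqs. (2)-(4) is easy to compute from a trace; the sketch below evaluates Eq. (3) over a window of length T, with a made-up trace of (timestamp, hit) pairs.

```python
def cache_hit_rate(trace, t, T):
    """Eq. (3): sum of the cache-hit indicators observed in [t, t + T), normalized by T.
    `trace` is a list of (timestamp, hit) pairs with hit in {0, 1}."""
    hits = sum(hit for ts, hit in trace if t <= ts < t + T)
    return hits / T

trace = [(0, 1), (1, 0), (2, 1), (3, 1), (5, 0), (6, 1)]
print(cache_hit_rate(trace, t=0, T=4))   # 3 hits in a window of length 4 -> 0.75
```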
To make optimal use of the limited CS capacity, the objective of each router is to maximize the number of Interests that it can satisfy with the Data packets available in its CS, i.e., maximize its overall cache-hit rate. Achieving a high cache-hit rate at the routers is beneficial for both clients and sources. For the sources, an increased cache-hit rate reduces their processing load and bandwidth needs, since the number of Interests that they receive is reduced. For the clients, the delivery delay is reduced, since the Interests are satisfied with Data packets cached at routers closer to them.
It is clear from (2), (3), and (4) that in order to maximize the overall cache-hit rate, routers should maintain the value of $\xi^f_{n,g}$ high enough so that most of the received Interests can be satisfied with the Data packets in their CS. However, since in this paper we consider that the routers' CS have limited capacity, it is unfeasible for a router to cache all the Data packets that it receives [7], [8], [10], [11]. Optimal solutions to this issue have been proposed in previous works [12], [13], which consider a central controller that knows the network topology and is aware of all the Interests received by the routers. However, these solutions do not scale well with the size of the network, since they require a high number of signaling messages and a powerful enough controller. Hence, in this work we consider that each router decides online and independently from other routers if a Data packet should be cached or not, and which Data packet should be evicted from the CS when it is full. This is achieved by using a distributed caching policy $\pi$ that maximizes the overall cache-hit rate $H(t)$ of each router,
$$\max_{\pi} H(t). \qquad (5)$$
The optimal caching policy π predicts which Interests will be received in the future, so that the router caches the Data packets that will be useful to satisfy those Interests.
V. THE POPNETCOD CACHING POLICY
In this section, we present our popularity-based caching policy for network coding enabled NDN, called PopNetCod. To increase the overall cache-hit rate, the PopNetCod caching policy exploits real-time data popularity measurements to determine the number of Data packets that each router should cache for each name prefix. In order to determine which Data packets to cache in and/or evict from the CS, such that the overall cache-hit rate is maximized, PopNetCod performs the following steps. First, it measures the popularity of the different name prefixes contained in the Interests that pass through it. Then, it uses this popularity to predict the Interests that it will receive. Finally, it uses this prediction to determine in an online manner the Data packets that should be cached and the ones that should be evicted from the CS.
A. Popularity Prediction
The popularity prediction in PopNetCod is based on the fact that the rate λ f n,g (t) at which Interests for a particular content object arrive at a router r over face f at time t tends to vary smoothly, as shown in Fig. 1. Thus, router r can predict the rate of the Interests that it will receive in the near future by observing the Interests that it recently received. Let us denote I f n,g (τ, t) as the set of Interests for the name prefix (n, g) that router r has received over face f in the past period [t − τ, t], where t is the current time and τ is the observation period. Let us also denote I f (τ, t) as the total set of Interests for all name prefixes received over face f during the period [t − τ, t].
Using the sets $I^f_{n,g}(\tau, t)$ and $I^f(\tau, t)$, router $r$ can compute the average Interest rate for the name prefix $(n, g)$ over face $f$ as follows:
$$\lambda^f_{n,g}(\tau, t) = \frac{|I^f_{n,g}(\tau, t)|}{|I^f(\tau, t)|}. \qquad (6)$$
Note that since the average Interest rate does not vary abruptly, the average Interest rate λ f n,g (τ, t) of the recent period [t − τ, t] will be very close to that expected in the near future, i.e., in the period [t, t + T ] where T is the length of the prediction period. Thus, λ f n,g (τ, t) = λ f n,g (t, T ), which hereafter we denote as λ f n,g (t). The PopNetCod caching policy uses λ f n,g (t) to predict the number of Interests with name prefix (n, g) that will be received over face f in the near future, and hence, to allocate more storage space in the CS to Data packets with higher cache-hit probability.
In order to prepare the CS for the Interests that the router may receive, the PopNetCod caching policy maps the received Interest rate to the capacity of the CS, such that name prefixes with a high rate are allocated more space in the CS. The number of network coded Data packets with name prefix $(n, g)$ that the router should cache in its CS at time $t$ to satisfy the Interests expected over face $f$ is denoted as $M^f_{n,g}(t)$ and computed as:
$$M^f_{n,g}(t) = \begin{cases} \lambda^f_{n,g}(t) \cdot M, & \text{if } \lambda^f_{n,g}(t) \cdot M < |\hat{P}_{n,g}| \\ |\hat{P}_{n,g}|, & \text{otherwise.} \end{cases} \qquad (7)$$
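The two quantities above reduce to a few lines of code. In the sketch below, the sliding window of recently received Interests is a list of (timestamp, prefix) pairs already restricted to [t − τ, t]; the numbers and names are illustrative.

```python
from collections import deque

def interest_rate(window, prefix):
    """Eq. (6): fraction of the recently received Interests that carry `prefix`."""
    window = list(window)
    if not window:
        return 0.0
    return sum(1 for _, p in window if p == prefix) / len(window)

def packets_to_cache(rate, cs_capacity, generation_size):
    """Eq. (7): CS space allocated proportionally to the rate, capped by the generation size."""
    return min(rate * cs_capacity, generation_size)

window = deque([(9.1, ("video1", 3)), (9.4, ("video2", 0)), (9.8, ("video1", 3))])
lam = interest_rate(window, ("video1", 3))                           # 2/3
print(packets_to_cache(lam, cs_capacity=300, generation_size=100))   # min(~200, 100) -> 100
```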
B. PopNetCod Placement
In the PopNetCod caching policy, the placement decision is taken following the reception of an Interest. Whenever a router decides to cache the Data packet that is expected as a reply to the received Interest, it sets a flag on the Interest signaling upstream routers about its decision. In the case of a set flag, the upstream nodes do not consider this Interest for caching. Since the edge routers (i.e., the routers that are directly connected to the clients) are the first ones that have the possibility to decide whether they will cache a Data packet, the PopNetCod caching policy naturally enables edge caching. This is inline with recent works [4]- [6] arguing that most of the gains from caching in NDN networks come from edge caches, and thus, it is natural to cache the most popular content at edge routers.
Whenever a router receives an Interest $\hat{i}_{n,g}$ over face $f_t$ at time $t$, the PopNetCod caching policy follows the next steps to decide if the Data packet $\hat{p}_{n,g}$ should be cached. First, it uses the popularity prediction to compute $M^f_{n,g}(t)$, i.e., the total number of Data packets that it aims to cache for name prefix $(n, g)$, as defined in (7). Then, it computes the number of Data packets that it should cache in order to satisfy the expected Interests as:
$$\delta^f_{n,g}(t) = M^f_{n,g}(t) - \xi^f_{n,g}(t), \quad \forall f \in F. \qquad (8)$$
Finally, the caching policy decides to cache the Data packet $\hat{p}_{n,g}$ that is expected as reply to the received Interest if the average number of Data packets needed by all the faces is greater than 0. However, it should be noted that the Data packet $\hat{p}_{n,g}$ will not be useful to the node connected over the downstream face $f_t$ over which the Interest arrived. This is because when the Data packet $\hat{p}_{n,g}$ arrives at the router, it is sent to face $f_t$ in order to satisfy the received Interest. Then, replying with the same Data packet to a subsequent Interest received over the same face $f_t$ does not add any innovative information, i.e., the Data packet is considered as duplicated. Instead, the expected Data packet $\hat{p}_{n,g}$ is potentially useful for all the nodes connected over all the other downstream faces of the router. For this reason, the average number of Data packets needed is measured only over the downstream faces different from the one over which the Interest arrived. It is computed as:
$$\Delta^+_{n,g}(t) = \frac{1}{|F_r| - 1} \sum_{\substack{f \in F_r \\ f \neq f_t}} \delta^f_{n,g}(t) > 0, \qquad (9)$$
where $F_r$ denotes the downstream faces of router $r$.
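The placement test of Eqs. (8)-(9) then amounts to the following check, where M_needed holds the per-face targets of Eq. (7) and xi the per-face values of Eq. (1); face names and numbers are illustrative.

```python
def should_cache(M_needed, xi, downstream_faces, arrival_face):
    """Return True if the Data packet expected as reply should be cached (Eqs. (8)-(9))."""
    others = [f for f in downstream_faces if f != arrival_face]
    if not others:
        return False
    delta = {f: M_needed[f] - xi[f] for f in others}      # Eq. (8)
    return sum(delta.values()) / len(others) > 0          # Eq. (9): average demand > 0

faces = ["wifi", "lte", "eth0"]
M_needed = {"wifi": 5, "lte": 2, "eth0": 4}
xi = {"wifi": 1, "lte": 2, "eth0": 5}
print(should_cache(M_needed, xi, faces, arrival_face="eth0"))   # (4 + 0) / 2 > 0 -> True
```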
C. PopNetCod Eviction
The steps followed by the PopNetCod caching policy to decide how many Data packets with name prefix $(n, g)$ can be evicted from the router's CS are the following. Similarly to the placement case, first, the caching policy uses the popularity prediction to compute $M^f_{n,g}(t)$, i.e., the number of Data packets that it aims to cache for name prefix $(n, g)$. Then, it computes the number of Data packets that it can evict from its CS and still satisfy the expected Interests as:
$$\bar{\delta}^f_{n,g}(t) = \operatorname{rank}(\hat{P}^r_{n,g}) - M^f_{n,g}(t), \quad \forall f \in F. \qquad (10)$$
Finally, the number of Data packets the router can evict from a particular name prefix (n, g) is computed as the minimum number of Data packets that it can evict over all the faces:
$$\Delta^-_{n,g}(t) = \min_{f \in F} \bar{\delta}^f_{n,g}(t). \qquad (11)$$
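Analogously, the eviction test of Eqs. (10)-(11) keeps the most demanding face as the binding constraint; a minimal sketch:

```python
def evictable_packets(cs_rank, M_needed, faces):
    """Eqs. (10)-(11): coded packets of a prefix that can be evicted without hurting any face."""
    return min(cs_rank - M_needed[f] for f in faces)

# A prefix whose CS entry has rank 8, while the faces still call for 3 and 6 packets:
print(evictable_packets(cs_rank=8, M_needed={"wifi": 3, "lte": 6}, faces=["wifi", "lte"]))  # -> 2
```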
VI. PRACTICAL IMPLEMENTATION OF POPNETCOD
In this section, we describe a practical implementation of the PopNetCod caching policy in the NetCodNDN architecture [10]. First, we describe the signaling between routers, which is used to prevent routers of the same path to cache duplicate Data packets. Next, we present the Interest processing algorithm, where placement decisions are made. Finally, we describe the Data packet processing algorithm for placement enforcement, eviction decision, and eviction enforcement.
A. Signaling Between Routers
The PopNetCod caching policy is distributed and requires very limited signaling between routers. The only signaling that exists between routers to implement the PopNetCod caching policy is a binary flag added to the Interest and Data packets that is used to inform neighbor routers that an expected Data packet will be cached or that a received Data packet has been cached. Distributed caching policy decisions help to keep the complexity of the system low and to make our system scalable to a large number of routers.
Each Interestî n,g carries a flag CachingDown, which is set to 1 by a router when it decides to cache the Data packet p n,g that is expected to come as reply to the Interest. This flag informs upstream routers that another router downstream has already decided to cache the Data packet that is expected to come as reply to this Interest. The routers receiving an Interest with the CachingDown flag set to 1 do not consider to cache the Data packet that is expected to come as reply to this Interest, therefore reducing the number of duplicated Data packets in the path and the processing load in the nodes.
Since Interests for network coded data do not request particular Data packets, but rather any network coded Data packet with the requested name prefix, the routers need a way to know that a Data packet has been already cached by another router, so that they avoid caching duplicated Data packets. For this reason, each Data packetp n,g has a flag CachedUp, which is set to 1 by a router when it caches this Data packet in its CS. This flag informs the downstream routers that another router has already cached this Data packet. A router receiving a network coded Data packet with the flag set to 1 does not consider it for caching. Instead, it waits for another Data packet with the same name prefix that has not been cached upstream. This ensures that a Data packet is cached by only one router on its way to the client.
B. Status Information at Routers
Each router implementing the PopNetCod caching policy should store information that assists to identify the Data packets that should be cached or evicted. In particular, the router needs to keep the Recently received Interests information to compute the popularity prediction. Moreover, since the placement decision takes place when the Interest is received, the router needs to remember the Names to be cached, such that the selected Data packets are cached when they arrive. Finally, since the popularity information can vary over time, the routers should keep a list with the Names to consider for eviction, which is used when they decide about eviction. Below, we describe the data structures used to store this information.
• Recently received Interests -The router maintains a list L f for each face f of the router, where it stores the names of the Interests I f (τ, t) received over face f during the period [τ, t]. The parameter τ controls how much into the past is observed by the router to compute the popularity prediction. Together with the name prefix, each element in L f also stores the time t i at which the Interest was received, such that it can be removed from L f at time t i + τ .
• Names to be cached - The router maintains a table A, where it stores the name prefixes (i.e., the content object name appended with the generation ID) and the number of Data packets that should be cached for each of them. When the router receives an Interest $\hat{i}_{n,g}$ and the PopNetCod caching policy decides that the network coded Data packet that is expected as reply should be cached, the router adds its name prefix $(n, g)$ to A. Then, whenever a network coded Data packet arrives, the router consults A to determine whether that Data packet should be cached.
• Names to consider for eviction - The router also maintains a queue E, where it stores the name prefixes of the CS entries that can be considered for Data packet eviction. When a name prefix $(n, g)$ is removed from the list $L_f$, the popularity of this name prefix decreases, i.e., it is a good candidate to consider for eviction. Thus, each time a name prefix is removed from $L_f$, it is added to E.
(Algorithm 1, excerpt: if $\xi^f_{n,g} > 0$ then $\hat{i}_{n,g}$ can be satisfied from the CS, so generate a Data packet $\hat{p}_{n,g}$ from the CS and return it; else if $\hat{i}_{n,g}$ will be aggregated by the PIT, aggregate it; if $\Delta^+_{n,g}(t) > 0$ then $\hat{p}_{n,g}$ should be cached and $(n, g)$ is inserted into A.)
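One possible in-memory layout for these three structures is sketched below (a deque of timestamped prefixes per face for L_f, a counter table for A, and a queue for E); the class and field names are ours, not those of the actual implementation.

```python
import time
from collections import defaultdict, deque

class PopNetCodState:
    def __init__(self, tau):
        self.tau = tau                   # observation period for the popularity window
        self.L = defaultdict(deque)      # face -> deque of (timestamp, prefix): recent Interests
        self.A = defaultdict(int)        # prefix -> number of Data packets still to be cached
        self.E = deque()                 # prefixes to consider for eviction

    def record_interest(self, face, prefix, now=None):
        self.L[face].append((time.time() if now is None else now, prefix))

    def expire(self, face, now=None):
        """Drop entries older than tau from L_f and queue their prefixes in E."""
        now = time.time() if now is None else now
        q = self.L[face]
        while q and now - q[0][0] > self.tau:
            _, prefix = q.popleft()
            self.E.append(prefix)

state = PopNetCodState(tau=10.0)
state.record_interest("eth0", ("video1", 3), now=0.0)
state.expire("eth0", now=20.0)
print(list(state.E))   # [('video1', 3)]
```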
C. Interest Processing
(Algorithm 2, excerpt: for each face $f$, for all expired entries $(n_l, g_l)$ in $L_f$ do: remove $(n_l, g_l)$ from $L_f$ and add $(n_l, g_l)$ to E.)
As depicted in Fig. 2, when a CSM configured with the PopNetCod caching policy receives an Interest $\hat{i}_{n,g}$ from downstream, it (i) determines if the Interest can be replied from the CS. Then, if the CSM could not reply to the Interest with the content of its CS, it (ii) updates the popularity information, and (iii) determines if the Data packet that is expected as reply to this Interest should be cached. The CSM should provide the NetCodNDN forwarder with either a Data packet that should be sent as reply to the Interest, or an Interest that should be forwarded upstream. Below we describe the details of this procedure, which is summarized in Algorithm 1.
After receiving an Interestî n,g , the CSM first checks the flag CachingDown to see if any previous node downstream in the path has decided to cache the Data packet that is expected as reply to this Interest (lines 2 to 8). If the flag CachingDown is set to 1, then the CSM only checks its CS to determine if the Interest can be satisfied from the CS. If this is possible, i.e., if ξ f n,g is greater than 0, it generates a network coded Data packet from the CS and provides it to the NetCodNDN forwarder, which sends it over face f . If the Interest can not be satisfied from the CS, the CSM provides the same Interest to the NetCodNDN forwarder, which forwards it upstream.
If the flag CachingDown is set to 0, the CSM first inserts the name prefix $(n, g)$ of the Interest into the list $L_f$ (line 10). Then, the CSM checks if it can satisfy the Interest with the content of the CS (lines 11 to 13). If this is possible, i.e., if $\xi^f_{n,g}$ is greater than 0, it generates a network coded Data packet from the CS and provides it to the NetCodNDN forwarder, which sends it over face $f$. Otherwise, the node needs to forward the Interest to its neighbor nodes. If the router does not send the Interest upstream, but aggregates it in the PIT with a previously received Interest, the CSM does not need to do anything else and provides the Interest to the NetCodNDN forwarder, which aggregates it (line 15). If the Interest will not be aggregated, then the CSM determines if it will cache the Data packet with name prefix $(n, g)$ that is expected as reply to this Interest, by computing $\Delta^+_{n,g}(t)$ using Eq. (9). In order to obtain an accurate value of $\Delta^+_{n,g}(t)$, the CSM first updates the popularity information, removing all the expired elements from $L_f$ and adding their name prefixes to the list E of name prefixes to be considered for eviction (line 17). This procedure is summarized in Algorithm 2. Then, the CSM computes the value of $\Delta^+_{n,g}(t)$. If $\Delta^+_{n,g}(t) > 0$, it means that the Data packet should be cached. In this case, the CSM inserts name prefix $(n, g)$ into the list A, sets the flag CachingDown on the Interest $\hat{i}_{n,g}$ to 1 and, finally, provides the modified Interest to the NetCodNDN forwarder, which forwards it upstream (lines 18 to 21). If $\Delta^+_{n,g}(t) \leq 0$, then the CSM provides the same Interest to the NetCodNDN forwarder, which forwards it upstream.
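The control flow just described condenses into a short routine. The sketch below keeps only the PopNetCod-specific decisions and replaces the Content Store, PIT and popularity estimates with plain dictionaries; it is not the NetCodNDN API, and the popularity bookkeeping of Algorithm 2 is omitted.

```python
from dataclasses import dataclass

@dataclass
class Interest:
    prefix: tuple              # (name, generation)
    caching_down: bool = False

def process_interest(interest, face, xi, can_aggregate, delta_plus, names_to_cache):
    """xi[(prefix, face)]: Eq. (1); delta_plus[prefix]: Eq. (9). Returns the action taken."""
    if xi.get((interest.prefix, face), 0) > 0:
        return "reply-from-CS"
    if interest.caching_down:                       # a downstream router already caches it
        return "forward"
    if can_aggregate.get(interest.prefix, False):
        return "aggregate-in-PIT"
    if delta_plus.get(interest.prefix, 0) > 0:      # placement decision of Eq. (9)
        names_to_cache[interest.prefix] = names_to_cache.get(interest.prefix, 0) + 1
        interest.caching_down = True                # signal the decision to upstream routers
    return "forward"

A = {}
i = Interest(prefix=("video1", 3))
action = process_interest(i, "eth0", xi={}, can_aggregate={},
                          delta_plus={("video1", 3): 1.5}, names_to_cache=A)
print(action, i.caching_down, A)   # forward True {('video1', 3): 1}
```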
D. Data Packet Processing
As depicted in Fig. 3, when a CSM configured with the PopNetCod caching policy receives a network coded Data packetp n,g from upstream, it (i) determines if the Data packet should be cached in the CS, by consulting A. If the Data packet should be cached, the CSM ensures that there is enough free space in the CS, (ii) updating the popularity information and (iii) executing the cache replacement procedure if needed. Finally, the CSM (iv) inserts the received Data packet into the CS, and (v) generates a new network coded Data packet that should be forwarded downstream. This procedure is detailed below and summarized in Algorithm 3.
After receiving a Data packetp n,g , the CSM first checks the flag CachedUp to determine if any router upstream has already cached this Data packet. If the flag CachedUp has been set to 1, then, the CSM understands that another router upstream has already cached this Data packet. In this case, the CSM returns the Data packet to the NetCodNDN forwarder, which replies to any matching pending Interest (line 1).
When the flag CachedUp is set to 0, then the CSM first verifies if any entry in A matches name prefix (n, g). If there is no matching entry, the CSM returns the Data packet to the NetCodNDN forwarder (line 3). If there is a match, the Data packet should be cached, and A is updated by increasing the counter of the matching entry by one (line 6). However, if the CS is full, the CSM first needs to release some space in the CS (lines 7 to 15). To evict Data packets, the CSM goes through the list E, each time selecting a name prefix (n e , g e ) and computing the number of Data packets that can be evicted for the name prefix using Eq. (11). If this number is greater than 0, then the CSM evicts the corresponding number of Data packets from the CS and interrupts the scan of the list. Note that, since the cached Data packets are network coded, the CSM does not need to decide which particular Data packets from the CS entryP n,g it should evict from the CS, but it can select randomly network coded Data packets from the CS entry and evict them. After evicting at least one Data packet, the CSM caches the received Data packetp n,g . Then, the router generates a new Data packetp * n,g by applying network coding to the cached Data packets with name prefix (n, g). Since the new Data packetp * n,g contains the cached Data packetp n,g , the router sets the flag CachedUp ofp * n,g to 1. Finally, the CSM provides Data packetp * n,g to the NetCodNDN forwarder, which uses it to reply to pending Interests with name prefix (n, g).
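A matching sketch of the Data-packet side is shown below: packets cached upstream or not listed in A are only forwarded; otherwise space is freed by walking the eviction-candidate queue E before inserting. The containers and the evictable() callback (Eqs. (10)-(11)) are simplified stand-ins for the real CSM structures.

```python
from collections import deque

def process_data(prefix, cached_up, cs, capacity, A, E, evictable):
    """cs maps a name prefix to the number of coded packets currently cached for it."""
    if cached_up or A.get(prefix, 0) == 0:
        return "forward-only"
    A[prefix] -= 1
    while sum(cs.values()) >= capacity and E:        # free space, scanning E for a victim
        candidate = E.popleft()
        k = min(evictable(candidate), cs.get(candidate, 0))
        if k > 0:
            cs[candidate] -= k
            break
    if sum(cs.values()) < capacity:
        cs[prefix] = cs.get(prefix, 0) + 1           # cache it; the forwarded copy gets CachedUp = 1
        return "cached"
    return "forward-only"

cs = {("video2", 0): 3}
E = deque([("video2", 0)])
print(process_data(("video1", 3), False, cs, capacity=3,
                   A={("video1", 3): 1}, E=E, evictable=lambda p: 2))
print(cs)   # {('video2', 0): 1, ('video1', 3): 1}
```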
VII. EVALUATION
In this section, we evaluate the performance of the PopNetCod caching policy in an adaptive video streaming architecture based on NetCodNDN [10]. First, we describe the evaluation setup. Then, we present the caching policies with which we compare the PopNetCod caching policy. Finally, we show the performance evaluation results.
A. Evaluation Setup
We consider a layered topology consisting of 1 source, 123 clients, and 45 routers connecting the clients and the sources. The routers are arranged in a two-tier topology, with 10 routers directly connected to the source and 35 edge routers directly connected to the clients. The links connecting the routers between them and the links connecting the routers to the source have a bandwidth of 20M bps. The bandwidth of the links connecting the clients to the routers follow a normal distribution, with mean 4M bps and standard deviation 1.5. These values are chosen based on the Netflix ISP Speed Index [17]. Each client is connected with two routers, considering that nowadays most end-user devices have multiple interfaces, e.g., LTE, Wi-Fi.
For the evaluation, we consider that the source offers 5 videos for streaming, each one composed of 50 video segments with a duration of 2 seconds each, i.e., in total, each video has a duration of 100 seconds. The video segments are available in three different representations, Q = {480p, 720p, 1080p} with bitrates {1750kbps, 3000kbps, 5800kbps}, respectively. These values for the representations and bitrates are according to the values that had been used by Netflix [15]. As presented in Section III-A, the content objects (i.e., the video segments in our evaluation scenario) are divided into Data packets and generations, in order to implement network coding. In particular, for the representations Q = {480p, 720p, 1080p}, each video segment is divided into {359, 615, 1188} Data packets of 1250 bytes each, and {4, 7, 12} generations, respectively. Thus, in total, the source stores 540, 500 Data packets. All the routers are equipped with content stores able to cache between 0.9% and 2.3% of the total Data packets available at the source.
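As a sanity check on these numbers, a 2-second segment at each bitrate, split into 1250-byte packets, gives payload packet counts close to the ones listed above (the reported counts are slightly higher, presumably due to packetization overhead), and the quoted total at the source follows directly:

```python
bitrates_kbps = {"480p": 1750, "720p": 3000, "1080p": 5800}
reported_packets = {"480p": 359, "720p": 615, "1080p": 1188}
segment_s, packet_bytes = 2, 1250

for q, kbps in bitrates_kbps.items():
    payload_packets = kbps * 1000 * segment_s / 8 / packet_bytes
    print(q, round(payload_packets), "payload packets vs", reported_packets[q], "reported")

total = 5 * 50 * sum(reported_packets.values())
print("Data packets at the source:", total)   # 540500, matching the total quoted above
```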
The clients randomly choose a video to request and start the adaptive video retrieval process at a random time during the first 5 seconds of the simulation. The network coding operations are performed in a finite field of size 2 8 . The clients use the dash.js adaptation logic [24] to choose the representation that better adapts to the current conditions, i.e., the measured goodput and the number of buffered video segments.
B. Benchmarks
We compare the performance of our caching algorithm with the following benchmarks:
• LCE-NoLimit -The placement policy is Leave Copy Everywhere (LCE). We assume that the CSs of the routers have enough space to store all the videos.
• LCE+LRU -The placement policy is LCE, while the eviction policy is Least Recently Used (LRU), which evicts Data packets with the least recently requested name.
• NoCache -In this setting, the routers do not have a CS, i.e., all the Data packets should be retrieved from the source.
C. Evaluation Results
We first evaluate the average cache-hit rate at the routers. In Fig. 4, we can see that by using the PopNetCod caching policy, the routers achieve a higher cache-hit rate than with LCE-LRU. This is because with PopNetCod the number of Data packets cached for a certain name prefix increases smoothly, according to the popularity. In comparison, with LCE+LRU all Data packets received by the router are cached, and the least recently used are evicted from the CS when the capacity is exceeded. Thus, if a router receives Data packets that are requested by a single client, the router still caches them, wasting storage capacity that could be used to cache more popular Data packets that are requested by multiple clients. We can also see that the LCE+NoLimit caching policy defines an upper bound to the cache-hit rate at the routers, since caching all the Data packets with unlimited CS capacity represents the best caching scenario. On the contrary, the NoCache case, where the routers do not have CS capacity, defines a lower bound to the cache-hit rate. Note that in our evaluation the NoCache policy has a non-zero cache-hit rate because our measurement of cache-hit rate also includes Interest aggregations, which is what is being measured in this case.
The increased cache-hit rate that the PopNetCod caching policy brings to the routers has two major consequences: (i) the goodput at the clients increases, which enables the adaptation logic to choose higher quality representations when bandwidth is sufficient, and (ii) the source receives less Interests, meaning that its processing and network load is reduced.
Let us first evaluate the impact that the increased cachehit rate at the routers has for the clients. In Fig. 5, it is shown that by using PopNetCod, the clients benefit from an increased goodput, compared to the LCE+LRU policy. This is a consequence not only of the increased cache-hit rate in the network, but also because PopNetCod caches the most popular content in the network edge, which reduces the content retrieval delay. The percentage of video segments delivered to the clients for each of the available representations (i.e., 480p, 720p, and 1080p) with the PopNetCod and LCE+LRU caching policies is shown in Figs. 6 and 7, respectively. We can see that, compared to the LCE+LRU policy, with the PopNetCod caching policy a higher percentage of video segments are delivered in the highest representation available, i.e., 1080p. This happens because the Data packet retrieval delay is reduced, since more Interests are being satisfied from the routers' content stores, which increases the goodput measured by the clients. The percentage of video segments delivered to the clients in each of the available representations with the upper bound LCE+NoLimit caching policy can be seen in Fig. 8.
Finally, we analyze the impact that the increased cache-hit rate in the routers has for the sources by measuring the load reduction at the source. This metric measures the percentage of Data packets received at the clients that have not been directly provided by the source. It is computed as $1 - N^{sent}_S / N^{rcvd}_C$, where $N^{sent}_S$ denotes the total number of Data packets sent by the source, and $N^{rcvd}_C$ denotes the total number of Data packets received by all the clients. In Fig. 9, we can see that by using the PopNetCod caching policy, the source load is reduced by up to 10% more than by using LCE+LRU, when the CS size is 12.5K Data packets. Note that the load reduction at the source in the NoCache scenario is larger than 0, even if no Data packet is being served from the CSs. This is because the Interest aggregation at the routers makes it possible to serve multiple Interests with the same Data packet, reducing the number of Data packets delivered by the source.
VIII. CONCLUSIONS
In this paper, we have presented PopNetCod, a popularity-based caching policy for data-intensive applications communicating over network-coding-enabled NDN. PopNetCod is a distributed caching policy, where each router aims at increasing its local cache-hit rate by measuring the popularity of each content object and using it to determine the number of Data packets of that content object that it caches in its content store. PopNetCod makes cache placement decisions when Interests arrive at the routers, which naturally enables edge caching. The evaluation of the PopNetCod caching policy is performed in a Netflix-like video streaming scenario. The results show that, in comparison with a caching policy that uses the LCE placement policy and the LRU eviction policy, PopNetCod achieves a higher cache-hit rate. The increased cache-hit rate reduces the number of Interests that the source has to satisfy, and also increases the goodput seen by the clients. Thus, our caching policy benefits content providers, by reducing the load on their servers and hence their operating costs, as well as end-users, who are able to watch higher-quality videos. | 6,885
1907.04978 | 2961482815 | Domain adaptation investigates the problem of leveraging knowledge from a well-labeled source domain to an unlabeled target domain, where the two domains are drawn from different data distributions. Because of the distribution shifts, different target samples have distinct degrees of difficulty in adaptation. However, existing domain adaptation approaches overwhelmingly neglect the degrees of difficulty and deploy exactly the same framework for all of the target samples. Generally, a simple or shadow framework is fast but rough. A sophisticated or deep framework, on the contrary, is accurate but slow. In this paper, we aim to challenge the fundamental contradiction between the accuracy and speed in domain adaptation tasks. We propose a novel approach, named agile domain adaptation , which agilely applies optimal frameworks to different target samples and classifies the target samples according to their adaptation difficulties. Specifically, we propose a paradigm which performs several early detections before the final classification. If a sample can be classified at one of the early stage with enough confidence, the sample would exit without the subsequent processes. Notably, the proposed method can significantly reduce the running cost of domain adaptation approaches, which can extend the application scenarios of domain adaptation to even mobile devices and real-time systems. Extensive experiments on two open benchmarks verify the effectiveness and efficiency of the proposed method. | A typical domain adaptation @cite_28 @cite_3 @cite_10 @cite_11 problem consists of two domains: a well-labeled source domain and an unlabeled target domain. The two domains generally have the same label space but different data distributions @cite_2 . Domain adaptation aims to mitigate the gap between the two data distributions, so that the knowledge, e.g., features and parameters, learned from the source domain can be transfered to the target domain. Recently, domain adaptation has been successfully applied to many real-world applications, such as image recognition @cite_21 @cite_25 , multimedia analysis @cite_26 @cite_31 and recommender systems @cite_5 @cite_0 . | {
"abstract": [
"In this paper, we study the heterogeneous domain adaptation (HDA) problem, in which the data from the source domain and the target domain are represented by heterogeneous features with different dimensions. By introducing two different projection matrices, we first transform the data from two domains into a common subspace such that the similarity between samples across different domains can be measured. We then propose a new feature mapping function for each domain, which augments the transformed samples with their original features and zeros. Existing supervised learning methods ( e.g., SVM and SVR) can be readily employed by incorporating our newly proposed augmented feature representations for supervised HDA. As a showcase, we propose a novel method called Heterogeneous Feature Augmentation (HFA) based on SVM. We show that the proposed formulation can be equivalently derived as a standard Multiple Kernel Learning (MKL) problem, which is convex and thus the global solution can be guaranteed. To additionally utilize the unlabeled data in the target domain, we further propose the semi-supervised HFA (SHFA) which can simultaneously learn the target classifier as well as infer the labels of unlabeled target samples. Comprehensive experiments on three different applications clearly demonstrate that our SHFA and HFA outperform the existing HDA methods.",
"In real-world applications of visual recognition, many factors — such as pose, illumination, or image quality — can cause a significant mismatch between the source domain on which classifiers are trained and the target domain to which those classifiers are applied. As such, the classifiers often perform poorly on the target domain. Domain adaptation techniques aim to correct the mismatch. Existing approaches have concentrated on learning feature representations that are invariant across domains, and they often do not directly exploit low-dimensional structures that are intrinsic to many vision datasets. In this paper, we propose a new kernel-based method that takes advantage of such structures. Our geodesic flow kernel models domain shift by integrating an infinite number of subspaces that characterize changes in geometric and statistical properties from the source to the target domain. Our approach is computationally advantageous, automatically inferring important algorithmic parameters without requiring extensive cross-validation or labeled data from either domain. We also introduce a metric that reliably measures the adaptability between a pair of source and target domains. For a given target domain and several source domains, the metric can be used to automatically select the optimal source domain to adapt and avoid less desirable ones. Empirical studies on standard datasets demonstrate the advantages of our approach over competing methods.",
"",
"Currently, unsupervised heterogeneous domain adaptation in a generalized setting, which is the most common scenario in real-world applications, is under insufficient exploration. Existing approaches either are limited to special cases or require labeled target samples for training. This paper aims to overcome these limitations by proposing a generalized framework, named as transfer independently together (TIT). Specifically, we learn multiple transformations, one for each domain (independently) , to map data onto a shared latent space, where the domains are well aligned. The multiple transformations are jointly optimized in a unified framework (together) by an effective formulation. In addition, to learn robust transformations, we further propose a novel landmark selection algorithm to reweight samples, i.e., increase the weight of pivot samples and decrease the weight of outliers. Our landmark selection is based on graph optimization. It focuses on sample geometric relationship rather than sample features. As a result, by abstracting feature vectors to graph vertices, only a simple and fast integer arithmetic is involved in our algorithm instead of matrix operations with float point arithmetic in existing approaches. At last, we effectively optimize our objective via a dimensionality reduction procedure. TIT is applicable to arbitrary sample dimensionality and does not need labeled target samples for training. Extensive evaluations on several standard benchmarks and large-scale datasets of image classification, text categorization and text-to-image recognition verify the superiority of our approach.",
"Zero-shot learning (ZSL) and cold-start recommendation (CSR) are two challenging problems in computer vision and recommender system, respectively. In general, they are independently investigated in different communities. This paper, however, reveals that ZSL and CSR are two extensions of the same intension. Both of them, for instance, attempt to predict unseen classes and involve two spaces, one for direct feature representation and the other for supplementary description. Yet there is no existing approach which addresses CSR from the ZSL perspective. This work, for the first time, formulates CSR as a ZSL problem, and a tailor-made ZSL method is proposed to handle CSR. Specifically, we propose a Low-rank Linear Auto-Encoder (LLAE), which challenges three cruxes, i.e., domain shift, spurious correlations and computing efficiency, in this paper. LLAE consists of two parts, a low-rank encoder maps user behavior into user attributes and a symmetric decoder reconstructs user behavior from user attributes. Extensive experiments on both ZSL and CSR tasks verify that the proposed method is a win-win formulation, i.e., not only can CSR be handled by ZSL models with a significant performance improvement compared with several conventional state-of-the-art methods, but the consideration of CSR can benefit ZSL as well.",
"A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"The number of \"hits\" has been widely regarded as the lifeblood of many web systems, e.g., e-commerce systems, advertising systems and multimedia consumption systems. However, users would not hit an item if they cannot see it, or they are not interested in the item. Recommender system plays a critical role of discovering interested items from near-infinite inventory and exhibiting them to potential users. Yet, two issues are crippling the recommender systems. One is \"how to handle new users\", and the other is \"how to surprise users\". The former is well-known as cold-start recommendation, and the latter can be investigated as long-tail recommendation. This paper, for the first time, proposes a novel approach which can simultaneously handle both cold-start and long-tail recommendation in a unified objective. For the cold-start problem, we learn from side information, e.g., user attributes, user social relationships, etc. Then, we transfer the learned knowledge to new users. For the long-tail recommendation, we decompose the overall interested items into two parts: a low-rank part for short-head items and a sparse part for long-tail items. The two parts are independently revealed in the training stage, and transfered into the final recommendation for new users. Furthermore, we effectively formulate the two problems into a unified objective and present an iterative optimization algorithm. Experiments of recommendation on various real-world datasets, such as images, blogs, videos and musics, verify the superiority of our approach compared with the state-of-the-art work.",
"Domain adaptation is a promising technique when addressing limited or no labeled target data by borrowing well-labeled knowledge from the auxiliary source data. Recently, researchers have exploited multi-layer structures for discriminative feature learning to reduce the domain discrepancy. However, there are limited research efforts on simultaneously building a deep structure and a discriminative classifier over both labeled source and unlabeled target. In this paper, we propose a semi-supervised deep domain adaptation framework, in which the multi-layer feature extractor and a multi-class classifier are jointly learned to benefit from each other. Specifically, we develop a novel semi-supervised class-wise adaptation manner to fight off the conditional distribution mismatch between two domains by assigning a probabilistic label to each target sample, i.e., multiple class labels with different probabilities. Furthermore, a multi-class classifier is simultaneously trained on labeled source and unlabeled target samples in a semi-supervised fashion. In this way, the deep structure can formally alleviate the domain divergence and enhance the feature transferability. Experimental evaluations on several standard cross-domain benchmarks verify the superiority of our proposed approach.",
"In real-world transfer learning tasks, especially in cross-modal applications, the source domain and the target domain often have different features and distributions, which are well known as the heterogeneous domain adaptation (HDA) problem. Yet, existing HDA methods focus on either alleviating the feature discrepancy or mitigating the distribution divergence due to the challenges of HDA. In fact, optimizing one of them can reinforce the other. In this paper, we propose a novel HDA method that can optimize both feature discrepancy and distribution divergence in a unified objective function. Specifically, we present progressive alignment , which first learns a new transferable feature space by dictionary-sharing coding, and then aligns the distribution gaps on the new space. Different from previous HDA methods that are limited to specific scenarios, our approach can handle diverse features with arbitrary dimensions. Extensive experiments on various transfer learning tasks, such as image classification, text categorization, and text-to-image recognition, verify the superiority of our method against several state-of-the-art approaches.",
"Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that have tried to either map representations between the two domains, or learn to extract features that are domain-invariant. In this work, we approach the problem in a new light by learning in an unsupervised manner a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.",
"Domain adaptation aims to leverage knowledge from a well-labeled source domain to a poorly labeled target domain. A majority of existing works transfer the knowledge at either feature level or sample level. Recent studies reveal that both of the paradigms are essentially important, and optimizing one of them can reinforce the other. Inspired by this, we propose a novel approach to jointly exploit feature adaptation with distribution matching and sample adaptation with landmark selection. During the knowledge transfer, we also take the local consistency between the samples into consideration so that the manifold structures of samples can be preserved. At last, we deploy label propagation to predict the categories of new instances. Notably, our approach is suitable for both homogeneous- and heterogeneous-domain adaptations by learning domain-specific projections. Extensive experiments on five open benchmarks, which consist of both standard and large-scale datasets, verify that our approach can significantly outperform not only conventional approaches but also end-to-end deep models. The experiments also demonstrate that we can leverage handcrafted features to promote the accuracy on deep features by heterogeneous adaptation."
],
"cite_N": [
"@cite_26",
"@cite_28",
"@cite_21",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_31",
"@cite_10",
"@cite_25",
"@cite_11"
],
"mid": [
"2170607218",
"2149466042",
"",
"2801477643",
"2950323881",
"2165698076",
"2765897485",
"2811444512",
"2892946488",
"2584009249",
"2955547856"
]
} | Agile Domain Adaptation | Conventional machine learning algorithms generally assume that the training set and the test set are drawn from the same data distribution (Pan, Yang, and others 2010). The assumption, however, cannot always be guaranteed in realworld applications (Long et al. 2015;Ding, Shao, and Fu 2014). To address this, domain adaptation (Pan et al. 2011;Gong et al. 2012;Bousmalis et al. 2017;Ding and Fu 2017) has been proposed to mitigate the data distribution shifts among different domains.
Existing domain adaptation approaches can be roughly grouped into traditional methods and deep learning methods. (Author note: the authors are with the School of Computer Science and Engineering, University of Electronic Science and Technology. Email to: [email protected].) [Fig. 1 caption: the source samples and target samples are from the webcam dataset and the amazon dataset, respectively. All the shown samples have the same label, Calculator. It is obvious that some target samples are easier to classify than others when the source samples are used as the classification reference.] Traditional methods (Gong et al. 2012; Pan et al. 2011; Li et al. 2018b) generally do not care how sample features are extracted. They focus only on the transfer techniques, such as distribution alignment (Pan et al. 2011), feature augmentation (Li et al. 2014) and landmark selection (Aljundi et al. 2015). Deep learning methods (Ganin et al. 2016; Ganin and Lempitsky 2014; Ding, Nasrabadi, and Fu 2018) take care of both feature extraction and knowledge adaptation via an end-to-end architecture. Yet, no matter which learning paradigm they take, previous domain adaptation methods deploy exactly the same learning framework for all of the target samples. Specifically, all of the target samples, both easy and hard, are processed via exactly the same pipeline, e.g., the same optimization steps in traditional learning and the same neural network layers in deep learning. Existing methods simply feed all the samples into a general formulation which outputs an average result. They neglect the inherent degrees of difficulty (as shown in Fig. 1) of the target samples.
In general, a simple or shadow framework is fast but rough. A sophisticated or deep framework, on the contrary, is accurate but slow. It is worth noting that a network shared by both easy and hard samples tends to get deeper and larger since the model has to handle hard samples. For a better understanding, thinking about a smart phone. Although one mostly uses the phone for calls and messages, it has to be powerful enough just in case one wants to play the Temple Run from time to time. As a result, the neglect of different adaptation difficulties makes the domain adaptation methods hard to be deployed in real-time and energy-sensitive applications. Fig. 1 has clearly shown that different adaptation difficulties do exist in real-world datasets. Intuitively, if the two domains are highly related, most of the target samples would be easy ones and hard samples tend to be less frequent. Otherwise, one should consider choosing a different source domain for adaptation. Since easy samples can be adapted by a simple or shadow framework and hard samples need more effort, so, why don't we finish the easy ones first with almost no effort and then fully focus on hard ones? Motivated by the above observations, we propose a novel domain adaptation paradigm which takes the degrees of classification difficulty into consideration. Specifically, we present a deep domain adaptation architecture which has multiple exits. Different exits are located after different layers along the backbone deep network. A common sense about deep neural network is that the features extracted from earlier layers are more general but coarse, while the features extracted from the latter layers are more fine but specific (Long et al. 2015). Therefore, the most easy samples are supposed to be classified by very few layers with coarse features, and then these samples can be finished via the first exit. Similarly, the medium hard samples can be handled by the second exit, third exit and so on. At last, the very hard samples are handled by the final exit.
Since deep learning is computing-intensive, more layers generally need more computing power, e.g., GPU, memory and electricity. The early exit paradigm can significantly reduce the computational costs. As a result, it is possible to deploy our solution on a distributed platform. For instance, the first exit can be deployed on edge device, e.g., mobile devices. The second exit on local servers and the final exit on cloud. In particular, we can agilely tailor the deep architecture according to the specific application scenarios. At last, the main contributions of this paper can be listed as follows: 1) We propose a novel learning paradigm for domain adaptation. It explicitly handles the degrees of adaptation difficulty by introducing multiple exits in the learning pipeline. The proposed paradigm can be easily incorporated into deep domain adaptation approaches and significantly reduce their computational costs.
2) We present a novel domain adaptation method, i.e., agile domain adaptation networks (ADAN), which puts the learning paradigm into practice. Extensive experiments on open benchmarks verify the effectiveness and efficiency of ADAN.
3) For deep transfer learning methods, it is confusing to choose how many layers should be used to extract features. The earlier layers are more transferable but the corresponding features are too coarse. On the contrary, the features extracted from the latter layers are more fine/distinctive but these layers tend to be task-specific and hard to be transfered. In our approach, we find a way out of the dilemma. Specifically, earlier layers are used to classify easy samples and latter layers to classify hard ones. Our formulation takes full advantages from both the early layers and the latter layers. It is a practical way to challenge the fundamental contradiction between the accuracy and speed in domain adaptation tasks.
The remainder of this paper is organized as follows. Section II briefly reviews related work and highlights the merits of our approach. Section III details the proposed learning paradigm and corresponding approach ADAN. Section IV reports the experiments and analyzes ADAN. At last, section V is the conclusion and future work.
in Fig. 1). It is so straightforward that everyone can imagine that some target samples are closer to source samples and some target samples are far away (Gong, Grauman, and Sha 2013). Intuitively, the closed samples are easier to be adapted than distant ones. Therefore, we advocate different pipelines for different samples. Specifically, simpler networks for easy and frequent samples, more complex networks for hard and rare ones. From the perspective of network structure, BranchyNet (Teerapittayanon, McDanel, and Kung 2016) and hard-aware deeply cascaded embedding (HDC) (Yuan, Yang, and Zhang 2017) are related with our work. BranchyNet leverages the insight that many test samples can be correctly classified early and therefore do not need the later network layers. HDC ensembles a set of models with different complexities in cascaded manner to mine hard examples at multiple levels. However, both BranchyNet and HDC are conventional machine learning approaches. They are limited to scenarios where the training set and the test set are drawn from the same data distribution. In transfer learning tasks, we have to address the distribution gaps along the network layers and exits.
Agile Domain Adaptation
Notations and Definitions
In this paper, we use a bold uppercase letter and a bold lowercase letter to denote a matrix and a vector, respectively. Subscripts s and t are used to indicate the source domain and the target domain, respectively. We investigate the unsupervised domain adaptation problem defined as follows.
Definition 1 A domain D consists of three parts: feature space X , its probability distribution P (X) and a label set Y, where X ∈ X .
Problem 1 Given a labeled source domain D s and an unlabeled target domain D t , unsupervised domain adaptation handles the problem of transferring knowledge, e.g., samples, features and parameters, from
$\mathcal{D}_s$ to $\mathcal{D}_t$, where $\mathcal{D}_s \neq \mathcal{D}_t$, $\mathcal{Y}_s = \mathcal{Y}_t$, $P(X_s) \neq P(X_t)$ and $P(y_s|X_s) \neq P(y_t|X_t)$.
Overall Idea
In unsupervised domain adaptation, we have a labeled source domain {X s , y s } with n s samples and an unlabeled target domain X t with n t samples. Our goal is to train a deep neural network f (x) which has multiple exits, e.g., exit 1 , exit 2 , ..., exit m . In the learned deep neural network, the very easy samples can be classified via the first exit, i.e., exit 1 , which is located after only a few early layers of the backbone network. In the same manner, more difficult samples are handled by subsequent exit 2 , ..., exit m−1 until the last remaining samples are handled by the final exit m . At the testing stage, if samples come in one by one instead of in batches, the sample x would be first handled by exit 1 . If exit 1 has enough confidence to classify x, the testing would finish. Otherwise, x will be successively handled by the following exits until one of the exit has high confidence to classify it or finally handled by the last exit. The learning pipeline of this idea is shown in Fig. 2. It is worth noting that the number of exits and the structure of each exit can be tailored according to specific tasks.
Problem Formulation
Since we have the labels of X s , we can train the deep model on the source domain data in a supervised fashion. Specifically, the empirical error of f (x) on X s can be written as:
$\mathcal{L}_{\text{sup}} = \frac{1}{n_s}\sum_{i=1}^{n_s} J(f(x_{s,i}), y_{s,i})$,    (1)
where J(·) is a loss metric. Minimizing L sup can train a model which is suitable for the source domain. However, the model cannot handle the target domain due to the data distribution shift. Therefore, we further introduce domain adaptation layers into the network, so that the trained model can be applied for the target domain. In a deep neural network, e.g., AlexNet (Krizhevsky, Sutskever, and Hinton 2012) and ResNet (He et al. 2016), deep features extracted from the earlier layers are more general and features from the latter layers tend towards specific (Long et al. 2015). In other words, the features vary from domaininvariant to domain-specific along the network. Activations in the domain-specific layers are hard to be transferred. Consequently, we remold the domain-specific layers to domain adaptation layers by optimizing a transfer loss L tran . As a result, the loss function of each exit ( = 1, · · · , m) which considers both the source domain and the target domain can be written as:
$\mathcal{L}_{\text{exit}_\ell} = \mathcal{L}_{\text{sup}} + \lambda \mathcal{L}_{\text{tran}}$,    (2)
where $\lambda > 0$ is the balancing parameter and $\ell$ indicates the index of network exits. In our agile domain adaptation networks, we have multiple exits, e.g., $\text{exit}_1, \text{exit}_2, ..., \text{exit}_m$. To train the whole ADAN in an end-to-end manner, we jointly optimize the loss functions of each exit. Formally, we optimize a weighted loss sum of $\mathcal{L}_{\text{exit}_\ell}$ ($\ell = 1, 2, \cdots, m$):
$\mathcal{L} = \sum_{\ell=1}^{m} w_\ell \mathcal{L}_{\text{exit}_\ell}$,    (3)
where $w_\ell > 0$ ($\ell = 1, 2, \cdots, m$) is the loss weight of $\text{exit}_\ell$. In our paper, we simply set $w_\ell = 1$ ($\ell = 1, 2, \cdots, m$).
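A minimal PyTorch-style sketch of this joint, multi-exit objective is given below; the backbone split, the exit-branch layout, and all layer sizes are illustrative assumptions rather than the authors' exact architecture, and `mmd_fn` stands for any transfer-loss estimator such as the MMD sketched later.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiExitNet(nn.Module):
    """Backbone split into stages, with a lightweight classifier ("exit") after each stage.
    Stage and exit sizes are placeholders, not the paper's exact layers."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4)),
            nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)),
        ])
        self.exits = nn.ModuleList([
            nn.Linear(32 * 8 * 8, num_classes),
            nn.Linear(64 * 4 * 4, num_classes),
            nn.Linear(128, num_classes),
        ])

    def forward(self, x):
        logits, feats = [], []
        for stage, exit_head in zip(self.stages, self.exits):
            x = stage(x)
            feat = torch.flatten(x, 1)
            feats.append(feat)               # activations fed to the transfer loss
            logits.append(exit_head(feat))   # per-exit predictions
        return logits, feats


def joint_loss(model, x_s, y_s, x_t, mmd_fn, lam=1.0, weights=None):
    """Eq. (3): weighted sum over exits of (source cross-entropy + lambda * transfer loss)."""
    logits_s, feats_s = model(x_s)
    _, feats_t = model(x_t)
    weights = weights or [1.0] * len(logits_s)
    total = 0.0
    for w, ls, fs, ft in zip(weights, logits_s, feats_s, feats_t):
        l_sup = F.cross_entropy(ls, y_s)   # Eq. (1) on the labeled source batch
        l_tran = mmd_fn(fs, ft)            # MMD-style divergence between domain activations
        total = total + w * (l_sup + lam * l_tran)
    return total
```

In training, `joint_loss` would be minimized by SGD over mini-batches drawn jointly from the source and target domains.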
In this paper, we deploy the cross-entropy loss for the labeled source data. If we use y to denote the one-hot groundtruth label vector of sample x, J(·) can be written as:
$J(\hat{y}, y) = -\frac{1}{|C|}\sum_{c \in C} y_c \log \hat{y}_c$,    (4)
where C is the set of all possible labels,ŷ is the predicted label vector of x:
$\hat{y} = \text{softmax}(f(x)) = \frac{\exp(f(x))}{\sum_{c \in C} \exp(f(x)_c)}$.    (5)
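The following small NumPy sketch spells out Eqs. (4)-(5) for a single sample; the logits and one-hot label are made-up values, and the $1/|C|$ scaling follows the formula exactly as written above.

```python
import numpy as np


def softmax(logits: np.ndarray) -> np.ndarray:
    """Eq. (5): turn the network output f(x) into class probabilities."""
    z = np.exp(logits - logits.max())   # subtract the max for numerical stability
    return z / z.sum()


def cross_entropy(y_hat: np.ndarray, y_onehot: np.ndarray, eps: float = 1e-12) -> float:
    """Eq. (4): cross-entropy averaged over the |C| classes, as in the paper's formulation."""
    return float(-np.mean(y_onehot * np.log(y_hat + eps)))


logits = np.array([2.0, 0.5, -1.0])   # hypothetical f(x) for three classes
y = np.array([1.0, 0.0, 0.0])         # one-hot ground-truth label
print(cross_entropy(softmax(logits), y))
```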
For the transfer learning part L tran , we deploy the multikernel MMD (Gretton et al. 2012) loss as our metric. Specifically, given two data distributions of the source and the target domain, their MMD can be computed as the square distance between the empirical kernel means as:
$\text{MMD}(X_s, X_t) = \frac{1}{n_s^2}\sum_{i=1}^{n_s}\sum_{j=1}^{n_s} k(x_{s,i}, x_{s,j}) - \frac{2}{n_s n_t}\sum_{i=1}^{n_s}\sum_{j=1}^{n_t} k(x_{s,i}, x_{t,j}) + \frac{1}{n_t^2}\sum_{i=1}^{n_t}\sum_{j=1}^{n_t} k(x_{t,i}, x_{t,j})$,    (6)
where k(·) is a kernel function which maps the data features into a reproducing kernel Hilbert space (RKHS). In this paper, we deploy the widely used Gaussian kernel.
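A rough PyTorch sketch of this Gaussian-kernel MMD on two batches of activations is shown below; the bandwidth values and the equal-weight multi-kernel mixture are common heuristics assumed here, not necessarily the authors' exact settings.

```python
import torch


def gaussian_mmd(zs: torch.Tensor, zt: torch.Tensor, sigmas=(1.0, 2.0, 4.0, 8.0)) -> torch.Tensor:
    """Estimate of Eq. (6) between source activations zs and target activations zt
    (both of shape [batch, dim]), using a mixture of Gaussian kernels."""
    z = torch.cat([zs, zt], dim=0)
    d2 = torch.cdist(z, z) ** 2                                    # pairwise squared distances
    k = sum(torch.exp(-d2 / (2.0 * s ** 2)) for s in sigmas) / len(sigmas)
    ns = zs.size(0)
    k_ss = k[:ns, :ns].mean()    # source-source term
    k_tt = k[ns:, ns:].mean()    # target-target term
    k_st = k[:ns, ns:].mean()    # cross-domain term
    return k_ss + k_tt - 2.0 * k_st


# Usage with the MultiExitNet sketch above, e.g. as the per-exit transfer loss:
# loss = joint_loss(model, x_s, y_s, x_t, mmd_fn=gaussian_mmd)
```

The same estimator can be applied to the activations of several adapted layers and combined, in the spirit of the layer-wise transfer loss introduced below.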
Algorithm 1. Agile Domain Adaptation Networks
Training:
  Learn the network parameters by optimizing Eq. (3).
Test:
  ℓ = 1;                      % Initialize the network exit index
  repeat
    z = f_exit_ℓ(x);          % Get the features from exit ℓ
    ŷ = softmax(z);           % Predict the label at exit ℓ
    e = En(ŷ);                % Calculate the sample entropy
    if e ≤ T then return ŷ;   % If the entropy is lower than the threshold, return the predicted label and finish
    ℓ = ℓ + 1;                % Otherwise, go to the next exit
  until ℓ > m;
  return ŷ;
Eq. (6) calculates the MMD on the original data feature x. However, minimizing Eq. (6) is implicit in matching the activations generated by the domain adaptation layers. In transfer learning, the source activations and target activations generated by the domain adaptation layers are encouraged to be well-aligned so that the layer parameters can be shared by the two domains. Therefore, we explicitly minimize the MMD on layer activations (Long et al. 2017). Let Z l s and Z l t denote the activations generated by layer l from the source domain and the target domain, respectively, the MMD with respect to layer activations can be calculated by:
$\mathcal{L}_{\text{tran}} = \frac{1}{n_s^2}\sum_{i=1}^{n_s}\sum_{j=1}^{n_s}\prod_{l=1}^{L} k^l(z_{s,i}^l, z_{s,j}^l) - \frac{2}{n_s n_t}\sum_{i=1}^{n_s}\sum_{j=1}^{n_t}\prod_{l=1}^{L} k^l(z_{s,i}^l, z_{t,j}^l) + \frac{1}{n_t^2}\sum_{i=1}^{n_t}\sum_{j=1}^{n_t}\prod_{l=1}^{L} k^l(z_{t,i}^l, z_{t,j}^l)$.    (7)
Furthermore, Eq. (7) can be rewritten as the following equivalent form to reduce the computational costs (Gretton et al. 2012):
$\mathcal{L}_{\text{tran}} = \frac{2}{n_s}\sum_{i=1}^{n_s/2}\left[\prod_{l=1}^{L} k^l(z_{s,2i-1}^l, z_{s,2i}^l) + \prod_{l=1}^{L} k^l(z_{t,2i-1}^l, z_{t,2i}^l)\right] - \frac{2}{n_s}\sum_{i=1}^{n_s/2}\left[\prod_{l=1}^{L} k^l(z_{s,2i-1}^l, z_{t,2i}^l) + \prod_{l=1}^{L} k^l(z_{t,2i-1}^l, z_{s,2i}^l)\right]$.    (8)
For the earlier detections, we need to estimate whether a sample should be finished at each exit. Therefore, we need a metric to measure the classification confidence. In this paper, we use the sample entropy as the metric, which is defined as:
$\text{En}(y) = -\sum_{c \in C} y_c \log y_c$,    (9)
where y is a label vector which consists of the probabilities of all possible labels computed at each exit. From the physical meaning of entropy, we know that a lower entropy indicates a more certain output. As a result, we compare the sample entropy with a threshold at each exit. If the entropy is lower than the threshold, the classification of the sample is finished. Otherwise, the sample proceeds to the next exit for prediction. For a better understanding, we show the main steps of our ADAN in Algorithm 1.
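A minimal Python sketch of this entropy-thresholded, early-exit inference (Algorithm 1) is given below, reusing the MultiExitNet sketch from above; the per-exit thresholds are illustrative values, not the paper's tuned settings.

```python
import torch
import torch.nn.functional as F


def entropy(probs: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Eq. (9): entropy of the predicted label distribution."""
    return -(probs * (probs + eps).log()).sum(dim=-1)


@torch.no_grad()
def agile_predict(model, x: torch.Tensor, thresholds=(0.2, 0.5)):
    """Stop at the first exit whose prediction entropy falls below its threshold;
    the layers after that exit are never executed, which is where the speed-up comes from."""
    h = x.unsqueeze(0)                                   # a single test sample, as in Algorithm 1
    for ell, (stage, exit_head) in enumerate(zip(model.stages, model.exits)):
        h = stage(h)                                     # run only the layers up to exit ell
        probs = F.softmax(exit_head(torch.flatten(h, 1)), dim=-1)
        is_last = ell == len(model.exits) - 1
        if is_last or entropy(probs).item() <= thresholds[ell]:
            return probs.argmax(dim=-1).item(), ell      # predicted class and the exit used
```

Easy samples thus leave through an early exit after only a few layers, while hard samples fall through to the final exit.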
Experiments
In this section, we verify the proposed method with both accuracy and efficiency results. Two widely used base architectures, e.g., the classical LeNet (LeCun et al. 1998) and the state-of-the-art ResNet (He et al. 2016), are tailored to work in the manner of agile domain adaptation. The source codes will be publicly available on our GitHub page.
Data Description
USPS and MNIST are two widely used datasets in domain adaptation. Both of them are comprised of images of handwritten digits. There are 9,298 and 70,000 samples in total in each dataset, respectively. Office-31 dataset (Saenko et al. 2010) consists of 4,652 samples from 31 categories. Samples in this dataset are from 3 subsets, e.g., Amazon (A), DSLR (D) and Webcam (W). Specifically, Amazon includes images downloaded from amazon.com. DSLR consists of samples captured by a digital SLR camera. Images in Webcam are shoot by a lowresolution web camera.
Implementation Details
Our proposed paradigm is independent from specific base architectures. It can be easily incorporated into any popular deep networks. Limited by space, we implement our ADAN based on two base architectures, e.g., LeNet (LeCun et al. 1998) for digits recognition and ResNet (He et al. 2016) for object classification. LeNet-5 contains three convolutional layers and two fully connected layers. In our ADAN, we add one earlier exit after the first convolutional layer. The earlier exit consists of a convolutional layer and a fully connected layer. For LeNet-5, we set batchsize as 128, learning rate as 0.001, optimizer as SGD with momentum 0.9 and weight decay 0.0001.
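For concreteness, the sketch below shows one way such an early-exit branch and the reported optimizer settings could be written in PyTorch; the channel sizes, pooling, and exact placement are assumptions beyond what is specified in the text.

```python
import torch
import torch.nn as nn

# Early-exit branch attached after LeNet-5's first convolutional layer:
# one extra convolutional layer followed by a fully connected classifier, as described above.
early_exit_branch = nn.Sequential(
    nn.Conv2d(6, 16, kernel_size=5),   # LeNet-5's first conv block outputs 6 feature maps
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(4),
    nn.Flatten(),
    nn.Linear(16 * 4 * 4, 10),         # 10 digit classes for MNIST/USPS
)

# Reported training settings for the LeNet-5 experiments (batch size 128 is set in the DataLoader).
params = list(early_exit_branch.parameters())   # in practice: backbone plus all exit branches
optimizer = torch.optim.SGD(params, lr=0.001, momentum=0.9, weight_decay=0.0001)
```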
For ResNet-50, we add two earlier exits into the backbone network. The first one is located after the 3rd layer, and the second one after the 39th layer. One can also tailor the earlier exits, either numbers or locations, for their own applications. In our implementation, we provide a general template, which makes it embarrassingly simple to add, remove and modify the earlier exits. With respect to the training parameters, we set batchsize to 24. We also use SGD with momentum of 0.9 to optimize the objective function. The learning rate is adjusted during SGD using the same formula as reported in (Long et al. 2017).
Metrics and Compared Methods
In the domain adaptation community, accuracy of the target domain is the most widely used metric. For the sake of fairness, we follow previous work (Tzeng et al. 2017;Long et al. 2017) and also report the accuracy of the target domain. Each of the reported results of our method is an average of 5 runs. The results of compared method are cited from the original paper. Or, if not available in the original paper, we report the best results we can achieve by running their codes. All of the experiments settings are following the previous work (Bousmalis et al. 2017;Long et al. 2017) to ensure fair comparison.
The following state-of-the-art methods are reported for comparison: Quantitative Results Table 1 and Table 2 report the experimental results of our method on MNIST and USPS datasets. Specifically, Table 1 verifies that our approach can achieve state-of-the-art accuracy. Table 2, several interesting observations can be drawn. At first, both the accuracy and the speed are improved by adding earlier exits. The backbone network achieves 91.27% in 2.42ms. With a earlier exit, however, ADAN can achieve 91.62% in only 0.83ms. In this case, , features extracted from either earlier layers or latter layers have their cons and pros in transfer learning tasks. Our approach, notably, can take advantage from every layer and maximize the value of deep networks. Table 3 and Table 4 show the results on Office-31 dataset. Similar to the results of digits recognition, we can observe that our approach achieves state-of-the-art performance and it can significantly reduce the running time by the earlier exits. Compared with the MNIST and USPS dataset, Office-31 is much more challenging. As a result, if the ratio of earlier exited samples goes high, the accuracy dropping is more ob-vious than in the digits recognition task. Notice that the baseline (backbone network) has 50 layers, while the first exit in our model is located after the 3rd layer, which is very shallow compared with ResNet-50. However, our model still get the accuracy of 97.23% compared with the baseline 99.8% on W→D when we set 20% and 60% samples exit from the first earlier exit and the second earlier exit, respectively. Notably, with the sacrifice of 1.6% accuracy, we can speed up the model 2.52 times! The results on D→W and other evaluations, e.g., A→W and A→D, draw the same conclusion. It is worth noting that Amazon is a very challenging dataset, the results on A→W and A→D, therefore, are not outstanding as the results on DSLR and Webcam. In our experiments, we tried that if we move the first exits backward several layers or if reduce the ratio of earlier detection, the results on A→W and A→D would improve. In our source code, we provide a general template to modify, add and remove earlier exits. One can tailor a personal ADAN with few lines of python codes.
Qualitative Results
Back to the Samples. The very basic motivation behind our formulation is that our approach handles the degrees of adaptation difficulty by introducing multiple earlier exits into the learning pipeline. To verify that our approach does perceive the adaptation difficulty and handle it with different strategies, we show some samples exited from different earlier exits in Fig. 4. From the results, it is crystal clear that our approach is able to identify the degrees of adaptation difficulty. For instance, let us take the last column, i.e., samples with the label monitor, on evaluation W→D as an example. We can see that the samples exited via the first earlier exit are very similar with the source samples. These samples are easy to be adapted. The target samples exited from the second earlier exit are slightly different from the source samples in view point. At last, the samples exited from the final exit have distinctive capture angles with the source samples. These target samples are hard to be adapted. For the easy samples, we only use few layers to speed up the model. At the same time, for the hard samples, we deploy more deep networks to guarantee the accuracy. Our approach is agile and resilient. Class-wise Visualization. Fig. 3 reports the visualized class-wise accuracy. Comparing the two lines of Fig. 3, it is clear that our approach is able to mitigate the distribution gaps between the source domain and the target domain.
Parameter Sensitivity. The main parameter in our model is λ which balances the weight of supervised loss and transfer loss. Fig. 5(a) reports the effects of different λ values. It can be seen that our model is not sensitive to the parameter when it chosen from [0,1]. However, the performance will degrade when λ > 2. Since the model is targeted at leveraging knowledge from the source domain to facilitate the target domain, a large λ would weaken the contribution of the source domain. The other hyper-parameter in our formulation is the weight w i , we simply set it to one throughout this paper. We also tested it for different settings, and find w i is not sensitive to the overall performance.
Convergence. Fig. 5(a) reports the overall error with respect to different epochs. The results reflect the convergence of our approach. From the results we can observe that our approach is able to achieve the stable result around 20 epochs. It is worth noting that the reported results are from the evaluations with 10% and 50% samples exited via the first and second earlier exit, respectively. Thus, the results are slightly worse than the backbone network. However, the results report the overall convergence rather than only the backbone.
Ablation Analysis. Our approach consists of the backbone networks and the earlier exits. If we remove the exits and only remain the backbone, the performance of our approach would be similar with results reported in Table 3 and the baseline in Table 4. However, the running cost would be an issue for mobile devices and real-time systems.
Conclusion
In this paper, we propose a novel learning paradigm for domain adaptation. In the proposed paradigm, several earlier detections are performed to identify the adaptation difficulty. At the same time, several earlier exit are added along the backbone network to reduce the running cost. The proposed paradigm can be easily incorporated into existing deep domain adaptation networks or even non-transfer deep architectures, e.g., LeNet, AlexNet and ResNet.
In addition, a novel domain adaptation approach, i.e., agile domain adaptation networks (ADAN), is implemented to verify the effectiveness and efficiency of the proposed paradigm. Extensive experiment results, both quantitative | 4,064 |
1907.04978 | 2961482815 | Domain adaptation investigates the problem of leveraging knowledge from a well-labeled source domain to an unlabeled target domain, where the two domains are drawn from different data distributions. Because of the distribution shifts, different target samples have distinct degrees of difficulty in adaptation. However, existing domain adaptation approaches overwhelmingly neglect the degrees of difficulty and deploy exactly the same framework for all of the target samples. Generally, a simple or shadow framework is fast but rough. A sophisticated or deep framework, on the contrary, is accurate but slow. In this paper, we aim to challenge the fundamental contradiction between the accuracy and speed in domain adaptation tasks. We propose a novel approach, named agile domain adaptation , which agilely applies optimal frameworks to different target samples and classifies the target samples according to their adaptation difficulties. Specifically, we propose a paradigm which performs several early detections before the final classification. If a sample can be classified at one of the early stage with enough confidence, the sample would exit without the subsequent processes. Notably, the proposed method can significantly reduce the running cost of domain adaptation approaches, which can extend the application scenarios of domain adaptation to even mobile devices and real-time systems. Extensive experiments on two open benchmarks verify the effectiveness and efficiency of the proposed method. | Since domain adaptation aims to mitigate the distribution gap between the source domain and the target domain, it is vital to find a metric which can measure the data distribution divergence. Maximum mean discrepancy (MMD) @cite_27 is widely considered as a favorable criteria in previous work. For instance, deep adaptation networks (DAN) @cite_4 generalizes deep convolutional neural networks to the domain adaptation scenario. In DAN, the general (task-invariant) layers are shared by the two domains and the task-specific layers are adapted by multi-kernel MMD. Furthermore, joint adaptation networks (JAN) @cite_18 extends DAN by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. | {
"abstract": [
"We propose a framework for analyzing and comparing distributions, which we use to construct statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS), and is called the maximum mean discrepancy (MMD).We present two distribution free tests based on large deviation bounds for the MMD, and a third test based on the asymptotic distribution of this statistic. The MMD can be computed in quadratic time, although efficient linear time approximations are available. Our statistic is an instance of an integral probability metric, and various classical metrics on distributions are obtained when alternative function classes are used in place of an RKHS. We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.",
"Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.",
"Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. Adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. Experiments testify that our model yields state of the art results on standard datasets."
],
"cite_N": [
"@cite_27",
"@cite_4",
"@cite_18"
],
"mid": [
"2212660284",
"2951670162",
"2408201877"
]
} | Agile Domain Adaptation | Conventional machine learning algorithms generally assume that the training set and the test set are drawn from the same data distribution (Pan, Yang, and others 2010). The assumption, however, cannot always be guaranteed in realworld applications (Long et al. 2015;Ding, Shao, and Fu 2014). To address this, domain adaptation (Pan et al. 2011;Gong et al. 2012;Bousmalis et al. 2017;Ding and Fu 2017) has been proposed to mitigate the data distribution shifts among different domains.
Existing domain adaptation approaches can be roughly grouped into traditional methods and deep learning methods. (Author note: the authors are with the School of Computer Science and Engineering, University of Electronic Science and Technology. Email to: [email protected].) [Fig. 1 caption: the source samples and target samples are from the webcam dataset and the amazon dataset, respectively. All the shown samples have the same label, Calculator. It is obvious that some target samples are easier to classify than others when the source samples are used as the classification reference.] Traditional methods (Gong et al. 2012; Pan et al. 2011; Li et al. 2018b) generally do not care how sample features are extracted. They focus only on the transfer techniques, such as distribution alignment (Pan et al. 2011), feature augmentation (Li et al. 2014) and landmark selection (Aljundi et al. 2015). Deep learning methods (Ganin et al. 2016; Ganin and Lempitsky 2014; Ding, Nasrabadi, and Fu 2018) take care of both feature extraction and knowledge adaptation via an end-to-end architecture. Yet, no matter which learning paradigm they take, previous domain adaptation methods deploy exactly the same learning framework for all of the target samples. Specifically, all of the target samples, both easy and hard, are processed via exactly the same pipeline, e.g., the same optimization steps in traditional learning and the same neural network layers in deep learning. Existing methods simply feed all the samples into a general formulation which outputs an average result. They neglect the inherent degrees of difficulty (as shown in Fig. 1) of the target samples.
In general, a simple or shadow framework is fast but rough. A sophisticated or deep framework, on the contrary, is accurate but slow. It is worth noting that a network shared by both easy and hard samples tends to get deeper and larger since the model has to handle hard samples. For a better understanding, thinking about a smart phone. Although one mostly uses the phone for calls and messages, it has to be powerful enough just in case one wants to play the Temple Run from time to time. As a result, the neglect of different adaptation difficulties makes the domain adaptation methods hard to be deployed in real-time and energy-sensitive applications. Fig. 1 has clearly shown that different adaptation difficulties do exist in real-world datasets. Intuitively, if the two domains are highly related, most of the target samples would be easy ones and hard samples tend to be less frequent. Otherwise, one should consider choosing a different source domain for adaptation. Since easy samples can be adapted by a simple or shadow framework and hard samples need more effort, so, why don't we finish the easy ones first with almost no effort and then fully focus on hard ones? Motivated by the above observations, we propose a novel domain adaptation paradigm which takes the degrees of classification difficulty into consideration. Specifically, we present a deep domain adaptation architecture which has multiple exits. Different exits are located after different layers along the backbone deep network. A common sense about deep neural network is that the features extracted from earlier layers are more general but coarse, while the features extracted from the latter layers are more fine but specific (Long et al. 2015). Therefore, the most easy samples are supposed to be classified by very few layers with coarse features, and then these samples can be finished via the first exit. Similarly, the medium hard samples can be handled by the second exit, third exit and so on. At last, the very hard samples are handled by the final exit.
Since deep learning is computing-intensive, more layers generally need more computing power, e.g., GPU, memory and electricity. The early exit paradigm can significantly reduce the computational costs. As a result, it is possible to deploy our solution on a distributed platform. For instance, the first exit can be deployed on edge device, e.g., mobile devices. The second exit on local servers and the final exit on cloud. In particular, we can agilely tailor the deep architecture according to the specific application scenarios. At last, the main contributions of this paper can be listed as follows: 1) We propose a novel learning paradigm for domain adaptation. It explicitly handles the degrees of adaptation difficulty by introducing multiple exits in the learning pipeline. The proposed paradigm can be easily incorporated into deep domain adaptation approaches and significantly reduce their computational costs.
2) We present a novel domain adaptation method, i.e., agile domain adaptation networks (ADAN), which puts the learning paradigm into practice. Extensive experiments on open benchmarks verify the effectiveness and efficiency of ADAN.
3) For deep transfer learning methods, it is confusing to choose how many layers should be used to extract features. The earlier layers are more transferable but the corresponding features are too coarse. On the contrary, the features extracted from the latter layers are more fine/distinctive but these layers tend to be task-specific and hard to be transfered. In our approach, we find a way out of the dilemma. Specifically, earlier layers are used to classify easy samples and latter layers to classify hard ones. Our formulation takes full advantages from both the early layers and the latter layers. It is a practical way to challenge the fundamental contradiction between the accuracy and speed in domain adaptation tasks.
The remainder of this paper is organized as follows. Section II briefly reviews related work and highlights the merits of our approach. Section III details the proposed learning paradigm and corresponding approach ADAN. Section IV reports the experiments and analyzes ADAN. At last, section V is the conclusion and future work.
in Fig. 1). It is so straightforward that everyone can imagine that some target samples are closer to source samples and some target samples are far away (Gong, Grauman, and Sha 2013). Intuitively, the closed samples are easier to be adapted than distant ones. Therefore, we advocate different pipelines for different samples. Specifically, simpler networks for easy and frequent samples, more complex networks for hard and rare ones. From the perspective of network structure, BranchyNet (Teerapittayanon, McDanel, and Kung 2016) and hard-aware deeply cascaded embedding (HDC) (Yuan, Yang, and Zhang 2017) are related with our work. BranchyNet leverages the insight that many test samples can be correctly classified early and therefore do not need the later network layers. HDC ensembles a set of models with different complexities in cascaded manner to mine hard examples at multiple levels. However, both BranchyNet and HDC are conventional machine learning approaches. They are limited to scenarios where the training set and the test set are drawn from the same data distribution. In transfer learning tasks, we have to address the distribution gaps along the network layers and exits.
Agile Domain Adaptation
Notations and Definitions
In this paper, we use a bold uppercase letter and a bold lowercase letter to denote a matrix and a vector, respectively. Subscripts s and t are used to indicate the source domain and the target domain, respectively. We investigate the unsupervised domain adaptation problem defined as follows.
Definition 1 A domain D consists of three parts: feature space X , its probability distribution P (X) and a label set Y, where X ∈ X .
Problem 1 Given a labeled source domain D s and an unlabeled target domain D t , unsupervised domain adaptation handles the problem of transferring knowledge, e.g., samples, features and parameters, from
$\mathcal{D}_s$ to $\mathcal{D}_t$, where $\mathcal{D}_s \neq \mathcal{D}_t$, $\mathcal{Y}_s = \mathcal{Y}_t$, $P(X_s) \neq P(X_t)$ and $P(y_s|X_s) \neq P(y_t|X_t)$.
Overall Idea
In unsupervised domain adaptation, we have a labeled source domain {X s , y s } with n s samples and an unlabeled target domain X t with n t samples. Our goal is to train a deep neural network f (x) which has multiple exits, e.g., exit 1 , exit 2 , ..., exit m . In the learned deep neural network, the very easy samples can be classified via the first exit, i.e., exit 1 , which is located after only a few early layers of the backbone network. In the same manner, more difficult samples are handled by subsequent exit 2 , ..., exit m−1 until the last remaining samples are handled by the final exit m . At the testing stage, if samples come in one by one instead of in batches, the sample x would be first handled by exit 1 . If exit 1 has enough confidence to classify x, the testing would finish. Otherwise, x will be successively handled by the following exits until one of the exit has high confidence to classify it or finally handled by the last exit. The learning pipeline of this idea is shown in Fig. 2. It is worth noting that the number of exits and the structure of each exit can be tailored according to specific tasks.
Problem Formulation
Since we have the labels of X s , we can train the deep model on the source domain data in a supervised fashion. Specifically, the empirical error of f (x) on X s can be written as:
$\mathcal{L}_{\text{sup}} = \frac{1}{n_s}\sum_{i=1}^{n_s} J(f(x_{s,i}), y_{s,i})$,    (1)
where J(·) is a loss metric. Minimizing L sup can train a model which is suitable for the source domain. However, the model cannot handle the target domain due to the data distribution shift. Therefore, we further introduce domain adaptation layers into the network, so that the trained model can be applied for the target domain. In a deep neural network, e.g., AlexNet (Krizhevsky, Sutskever, and Hinton 2012) and ResNet (He et al. 2016), deep features extracted from the earlier layers are more general and features from the latter layers tend towards specific (Long et al. 2015). In other words, the features vary from domaininvariant to domain-specific along the network. Activations in the domain-specific layers are hard to be transferred. Consequently, we remold the domain-specific layers to domain adaptation layers by optimizing a transfer loss L tran . As a result, the loss function of each exit ( = 1, · · · , m) which considers both the source domain and the target domain can be written as:
$L_{\mathrm{exit}_\ell} = L_{\mathrm{sup}} + \lambda L_{\mathrm{tran}},$ (2)
where λ > 0 is the balancing parameter and $\ell$ indicates the index of the network exits. In our agile domain adaptation networks, we have multiple exits, e.g., exit_1, exit_2, ..., exit_m. To train the whole ADAN in an end-to-end manner, we jointly optimize the loss functions of all exits. Formally, we optimize a weighted sum of the losses $L_{\mathrm{exit}_\ell}$ ($\ell = 1, 2, \cdots, m$):
$L = \sum_{\ell=1}^{m} w_\ell L_{\mathrm{exit}_\ell},$ (3)
where $w_\ell > 0$ ($\ell = 1, 2, \cdots, m$) is the loss weight of exit $\ell$. In our paper, we simply set $w_\ell = 1$ ($\ell = 1, 2, \cdots, m$).
In this paper, we deploy the cross-entropy loss for the labeled source data. If we use y to denote the one-hot ground-truth label vector of sample x, J(·) can be written as:
$J(\hat{y}, y) = -\frac{1}{|C|}\sum_{c \in C} y_c \log \hat{y}_c,$ (4)
where C is the set of all possible labels and $\hat{y}$ is the predicted label vector of x:
$\hat{y} = \mathrm{softmax}(f(x)) = \frac{\exp(f(x))}{\sum_{c \in C}\exp(f(x)_c)}.$ (5)
For the transfer learning part $L_{\mathrm{tran}}$, we deploy the multi-kernel MMD (Gretton et al. 2012) loss as our metric. Specifically, given the two data distributions of the source and the target domain, their MMD can be computed as the squared distance between the empirical kernel means:
$\mathrm{MMD}(X_s, X_t) = \frac{1}{n_s^2}\sum_{i=1}^{n_s}\sum_{j=1}^{n_s} k(x_{s,i}, x_{s,j}) - \frac{2}{n_s n_t}\sum_{i=1}^{n_s}\sum_{j=1}^{n_t} k(x_{s,i}, x_{t,j}) + \frac{1}{n_t^2}\sum_{i=1}^{n_t}\sum_{j=1}^{n_t} k(x_{t,i}, x_{t,j}),$ (6)
where k(·) is a kernel function which maps the data features into a reproducing kernel Hilbert space (RKHS). In this paper, we deploy the widely used Gaussian kernel.
Algorithm 1. Agile Domain Adaptation Networks
Training:
Learn the network parameters by optimizing Eq. (3).
Test:
l = 1; % Initialize the network exit index
while:
z = f_exit_l(x); % Get the features from exit l
y = softmax(z); % Predict the label at exit l
e = En(y); % Calculate the sample entropy
if e ≤ T: return y; % If the entropy is lower than a threshold, return the predicted label and finish
l = l + 1; % Otherwise, go to the next exit
until l > m;
return y;
Eq. (6) calculates the MMD on the original data feature x. However, minimizing Eq. (6) only implicitly matches the activations generated by the domain adaptation layers. In transfer learning, the source activations and target activations generated by the domain adaptation layers are encouraged to be well-aligned so that the layer parameters can be shared by the two domains. Therefore, we explicitly minimize the MMD on layer activations (Long et al. 2017). Let $Z_s^l$ and $Z_t^l$ denote the activations generated by layer l from the source domain and the target domain, respectively; the MMD with respect to the layer activations can be calculated by:
$L_{\mathrm{tran}} = \frac{1}{n_s^2}\sum_{i=1}^{n_s}\sum_{j=1}^{n_s}\prod_{l=1}^{L} k^l(z_{s,i}^l, z_{s,j}^l) - \frac{2}{n_s n_t}\sum_{i=1}^{n_s}\sum_{j=1}^{n_t}\prod_{l=1}^{L} k^l(z_{s,i}^l, z_{t,j}^l) + \frac{1}{n_t^2}\sum_{i=1}^{n_t}\sum_{j=1}^{n_t}\prod_{l=1}^{L} k^l(z_{t,i}^l, z_{t,j}^l).$ (7)
Furthermore, Eq. (7) can be rewritten in the following equivalent form to reduce the computational costs (Gretton et al. 2012):
$L_{\mathrm{tran}} = \frac{2}{n_s}\sum_{i=1}^{n_s/2}\left[\prod_{l=1}^{L} k^l(z_{s,2i-1}^l, z_{s,2i}^l) + \prod_{l=1}^{L} k^l(z_{t,2i-1}^l, z_{t,2i}^l)\right] - \frac{2}{n_s}\sum_{i=1}^{n_s/2}\left[\prod_{l=1}^{L} k^l(z_{s,2i-1}^l, z_{t,2i}^l) + \prod_{l=1}^{L} k^l(z_{t,2i-1}^l, z_{s,2i}^l)\right].$ (8)
For the earlier detections, we need to estimate whether a sample should be finished at each exit. Therefore, we need a metric to measure the classification confidence. In this paper, we use the sample entropy as the metric, which is defined as:
$En(y) = -\sum_{c \in C} y_c \log y_c,$ (9)
where y is a label vector which consists of the probabilities of all possible labels computed at each exit. From the physical meaning of entropy, we know that a lower entropy indicates a more certain output. As a result, we compare the sample entropy with a threshold at each exit. If the entropy is lower than the threshold, the classification of the sample is finished. Otherwise, the sample goes to the next exit for prediction. For a better understanding, we show the main steps of our ADAN in Algorithm 1.
Experiments
In this section, we verify the proposed method with both accuracy and efficiency results. Two widely used base architectures, i.e., the classical LeNet (LeCun et al. 1998) and the state-of-the-art ResNet (He et al. 2016), are tailored to work in the manner of agile domain adaptation. The source code will be publicly available on our GitHub page.
Data Description
USPS and MNIST are two widely used datasets in domain adaptation. Both of them consist of images of handwritten digits, with 9,298 and 70,000 samples in total, respectively. Office-31 dataset (Saenko et al. 2010) consists of 4,652 samples from 31 categories. Samples in this dataset are from 3 subsets, i.e., Amazon (A), DSLR (D) and Webcam (W). Specifically, Amazon includes images downloaded from amazon.com. DSLR consists of samples captured by a digital SLR camera. Images in Webcam are shot by a low-resolution web camera.
Implementation Details
Our proposed paradigm is independent from specific base architectures. It can be easily incorporated into any popular deep networks. Limited by space, we implement our ADAN based on two base architectures, e.g., LeNet (LeCun et al. 1998) for digits recognition and ResNet (He et al. 2016) for object classification. LeNet-5 contains three convolutional layers and two fully connected layers. In our ADAN, we add one earlier exit after the first convolutional layer. The earlier exit consists of a convolutional layer and a fully connected layer. For LeNet-5, we set batchsize as 128, learning rate as 0.001, optimizer as SGD with momentum 0.9 and weight decay 0.0001.
For ResNet-50, we add two earlier exits into the backbone network. The first one is located after the 3rd layer, and the second one after the 39th layer. One can also tailor the earlier exits, either numbers or locations, for their own applications. In our implementation, we provide a general template, which makes it embarrassingly simple to add, remove and modify the earlier exits. With respect to the training parameters, we set batchsize to 24. We also use SGD with momentum of 0.9 to optimize the objective function. The learning rate is adjusted during SGD using the same formula as reported in (Long et al. 2017).
Metrics and Compared Methods
In the domain adaptation community, the accuracy on the target domain is the most widely used metric. For the sake of fairness, we follow previous work (Tzeng et al. 2017; Long et al. 2017) and also report the accuracy on the target domain. Each of the reported results of our method is an average of 5 runs. The results of the compared methods are cited from the original papers or, if not available there, are the best results we could achieve by running their code. All of the experimental settings follow the previous work (Bousmalis et al. 2017; Long et al. 2017) to ensure a fair comparison.
The following state-of-the-art methods are reported for comparison: Quantitative Results Table 1 and Table 2 report the experimental results of our method on MNIST and USPS datasets. Specifically, Table 1 verifies that our approach can achieve state-of-the-art accuracy. Table 2, several interesting observations can be drawn. At first, both the accuracy and the speed are improved by adding earlier exits. The backbone network achieves 91.27% in 2.42ms. With a earlier exit, however, ADAN can achieve 91.62% in only 0.83ms. In this case, , features extracted from either earlier layers or latter layers have their cons and pros in transfer learning tasks. Our approach, notably, can take advantage from every layer and maximize the value of deep networks. Table 3 and Table 4 show the results on Office-31 dataset. Similar to the results of digits recognition, we can observe that our approach achieves state-of-the-art performance and it can significantly reduce the running time by the earlier exits. Compared with the MNIST and USPS dataset, Office-31 is much more challenging. As a result, if the ratio of earlier exited samples goes high, the accuracy dropping is more ob-vious than in the digits recognition task. Notice that the baseline (backbone network) has 50 layers, while the first exit in our model is located after the 3rd layer, which is very shallow compared with ResNet-50. However, our model still get the accuracy of 97.23% compared with the baseline 99.8% on W→D when we set 20% and 60% samples exit from the first earlier exit and the second earlier exit, respectively. Notably, with the sacrifice of 1.6% accuracy, we can speed up the model 2.52 times! The results on D→W and other evaluations, e.g., A→W and A→D, draw the same conclusion. It is worth noting that Amazon is a very challenging dataset, the results on A→W and A→D, therefore, are not outstanding as the results on DSLR and Webcam. In our experiments, we tried that if we move the first exits backward several layers or if reduce the ratio of earlier detection, the results on A→W and A→D would improve. In our source code, we provide a general template to modify, add and remove earlier exits. One can tailor a personal ADAN with few lines of python codes.
Qualitative Results
Back to the Samples. The very basic motivation behind our formulation is that our approach handles the degrees of adaptation difficulty by introducing multiple earlier exits into the learning pipeline. To verify that our approach does perceive the adaptation difficulty and handle it with different strategies, we show some samples exited from different earlier exits in Fig. 4. From the results, it is crystal clear that our approach is able to identify the degrees of adaptation difficulty. For instance, let us take the last column, i.e., samples with the label monitor, on evaluation W→D as an example. We can see that the samples exited via the first earlier exit are very similar with the source samples. These samples are easy to be adapted. The target samples exited from the second earlier exit are slightly different from the source samples in view point. At last, the samples exited from the final exit have distinctive capture angles with the source samples. These target samples are hard to be adapted. For the easy samples, we only use few layers to speed up the model. At the same time, for the hard samples, we deploy more deep networks to guarantee the accuracy. Our approach is agile and resilient. Class-wise Visualization. Fig. 3 reports the visualized class-wise accuracy. Comparing the two lines of Fig. 3, it is clear that our approach is able to mitigate the distribution gaps between the source domain and the target domain.
Parameter Sensitivity. The main parameter in our model is λ, which balances the weight of the supervised loss and the transfer loss. Fig. 5(a) reports the effects of different λ values. It can be seen that our model is not sensitive to the parameter when it is chosen from [0, 1]. However, the performance degrades when λ > 2. Since the model is targeted at leveraging knowledge from the source domain to facilitate the target domain, a large λ would weaken the contribution of the source domain. The other hyper-parameter in our formulation is the weight $w_\ell$; we simply set it to one throughout this paper. We also tested different settings and found that the overall performance is not sensitive to $w_\ell$.
Convergence. Fig. 5(a) reports the overall error with respect to different epochs. The results reflect the convergence of our approach. From the results we can observe that our approach is able to achieve the stable result around 20 epochs. It is worth noting that the reported results are from the evaluations with 10% and 50% samples exited via the first and second earlier exit, respectively. Thus, the results are slightly worse than the backbone network. However, the results report the overall convergence rather than only the backbone.
Ablation Analysis. Our approach consists of the backbone networks and the earlier exits. If we remove the exits and only keep the backbone, the performance of our approach would be similar to the results reported in Table 3 and the baseline in Table 4. However, the running cost would be an issue for mobile devices and real-time systems.
Conclusion
In this paper, we propose a novel learning paradigm for domain adaptation. In the proposed paradigm, several earlier detections are performed to identify the adaptation difficulty. At the same time, several earlier exits are added along the backbone network to reduce the running cost. The proposed paradigm can be easily incorporated into existing deep domain adaptation networks or even non-transfer deep architectures, e.g., LeNet, AlexNet and ResNet.
In addition, a novel domain adaptation approach, i.e., agile domain adaptation networks (ADAN), is implemented to verify the effectiveness and efficiency of the proposed paradigm. Extensive experiment results, both quantitative | 4,064 |
1907.04978 | 2961482815 | Domain adaptation investigates the problem of leveraging knowledge from a well-labeled source domain to an unlabeled target domain, where the two domains are drawn from different data distributions. Because of the distribution shifts, different target samples have distinct degrees of difficulty in adaptation. However, existing domain adaptation approaches overwhelmingly neglect the degrees of difficulty and deploy exactly the same framework for all of the target samples. Generally, a simple or shadow framework is fast but rough. A sophisticated or deep framework, on the contrary, is accurate but slow. In this paper, we aim to challenge the fundamental contradiction between the accuracy and speed in domain adaptation tasks. We propose a novel approach, named agile domain adaptation , which agilely applies optimal frameworks to different target samples and classifies the target samples according to their adaptation difficulties. Specifically, we propose a paradigm which performs several early detections before the final classification. If a sample can be classified at one of the early stage with enough confidence, the sample would exit without the subsequent processes. Notably, the proposed method can significantly reduce the running cost of domain adaptation approaches, which can extend the application scenarios of domain adaptation to even mobile devices and real-time systems. Extensive experiments on two open benchmarks verify the effectiveness and efficiency of the proposed method. | Recently, generative adversarial networks (GAN) @cite_14 has been introduced into domain adaptation. Compared with the distribution alignment methods, adversarial domain adaptation models @cite_6 @cite_25 are able to generate domain invariant features under the supervision of a discriminator. For instance, adversarial discriminative domain adaptation (ADDA) @cite_6 combines discriminative analysis, untied weight sharing and a GAN loss under a generalized framework. Coupled generative adversarial networks (CoGAN) @cite_15 minimize the domain shifts by simultaneously training two GANs to handle the source domain and the target domain. | {
"abstract": [
"We propose coupled generative adversarial network (CoGAN) for learning a joint distribution of multi-domain images. In contrast to the existing approaches, which require tuples of corresponding images in different domains in the training set, CoGAN can learn a joint distribution without any tuple of corresponding images. It can learn a joint distribution with just samples drawn from the marginal distributions. This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint distribution solution over a product of marginal distributions one. We apply CoGAN to several joint distribution learning tasks, including learning a joint distribution of color and depth images, and learning a joint distribution of face images with different attributes. For each task it successfully learns the joint distribution without any tuple of corresponding images. We also demonstrate its applications to domain adaptation and image transformation.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that have tried to either map representations between the two domains, or learn to extract features that are domain-invariant. In this work, we approach the problem in a new light by learning in an unsupervised manner a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.",
"Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task."
],
"cite_N": [
"@cite_15",
"@cite_14",
"@cite_25",
"@cite_6"
],
"mid": [
"2471149695",
"2099471712",
"2584009249",
"2593768305"
]
} | Agile Domain Adaptation | Conventional machine learning algorithms generally assume that the training set and the test set are drawn from the same data distribution (Pan, Yang, and others 2010). The assumption, however, cannot always be guaranteed in real-world applications (Long et al. 2015;Ding, Shao, and Fu 2014). To address this, domain adaptation (Pan et al. 2011;Gong et al. 2012;Bousmalis et al. 2017;Ding and Fu 2017) has been proposed to mitigate the data distribution shifts among different domains.
Existing domain adaptation approaches can be roughly grouped into traditional methods and deep learning methods. Traditional methods (Gong et al. 2012;Pan et al. 2011;Li et al. 2018b) generally do not care how to extract sample features. They only focus on the transfer techniques, such as distribution alignment (Pan et al. 2011), feature augmentation (Li et al. 2014) and landmark selection (Aljundi et al. 2015). Deep learning methods (Ganin et al. 2016;Ganin and Lempitsky 2014;Ding, Nasrabadi, and Fu 2018) take care of both feature extraction and knowledge adaptation via an end-to-end architecture. Yet, no matter what learning paradigm they take, previous domain adaptation methods deploy exactly the same learning framework for all of the target samples. Specifically, all of the target samples, both easy and hard, are processed via exactly the same pipeline, e.g., the same optimization steps in traditional learning and the same neural network layers in deep learning. Existing methods just feed all the samples into a general formulation which can output an average result. They neglect the inherent degrees of difficulty (as shown in Fig. 1) of the target samples. (Figure 1 caption: the source domain and target samples are from the webcam dataset and the amazon dataset, respectively. All the shown samples have the same label Calculator. It is obvious that some target samples are easier to classify than others when we use the source samples as the classification reference.)
In general, a simple or shallow framework is fast but rough. A sophisticated or deep framework, on the contrary, is accurate but slow. It is worth noting that a network shared by both easy and hard samples tends to get deeper and larger since the model has to handle hard samples. For a better understanding, think about a smartphone. Although one mostly uses the phone for calls and messages, it has to be powerful enough just in case one wants to play Temple Run from time to time. As a result, the neglect of different adaptation difficulties makes domain adaptation methods hard to deploy in real-time and energy-sensitive applications. Fig. 1 clearly shows that different adaptation difficulties do exist in real-world datasets. Intuitively, if the two domains are highly related, most of the target samples would be easy ones and hard samples tend to be less frequent. Otherwise, one should consider choosing a different source domain for adaptation. Since easy samples can be adapted by a simple or shallow framework and hard samples need more effort, why don't we finish the easy ones first with almost no effort and then fully focus on the hard ones? Motivated by the above observations, we propose a novel domain adaptation paradigm which takes the degrees of classification difficulty into consideration. Specifically, we present a deep domain adaptation architecture which has multiple exits. Different exits are located after different layers along the backbone deep network. It is common knowledge about deep neural networks that the features extracted from earlier layers are more general but coarse, while the features extracted from the latter layers are finer but more specific (Long et al. 2015). Therefore, the easiest samples are supposed to be classified by very few layers with coarse features, and these samples can be finished via the first exit. Similarly, the medium-hard samples can be handled by the second exit, third exit and so on. At last, the very hard samples are handled by the final exit.
Since deep learning is computing-intensive, more layers generally need more computing power, e.g., GPU, memory and electricity. The early exit paradigm can significantly reduce the computational costs. As a result, it is possible to deploy our solution on a distributed platform. For instance, the first exit can be deployed on edge devices, e.g., mobile devices, the second exit on local servers, and the final exit in the cloud. In particular, we can agilely tailor the deep architecture according to the specific application scenarios. Finally, the main contributions of this paper can be listed as follows: 1) We propose a novel learning paradigm for domain adaptation. It explicitly handles the degrees of adaptation difficulty by introducing multiple exits in the learning pipeline. The proposed paradigm can be easily incorporated into deep domain adaptation approaches and significantly reduce their computational costs.
2) We present a novel domain adaptation method, i.e., agile domain adaptation networks (ADAN), which puts the learning paradigm into practice. Extensive experiments on open benchmarks verify the effectiveness and efficiency of ADAN.
3) For deep transfer learning methods, it is confusing to choose how many layers should be used to extract features. The earlier layers are more transferable but the corresponding features are too coarse. On the contrary, the features extracted from the latter layers are finer and more distinctive, but these layers tend to be task-specific and hard to transfer. In our approach, we find a way out of this dilemma. Specifically, earlier layers are used to classify easy samples and latter layers to classify hard ones. Our formulation takes full advantage of both the early layers and the latter layers. It is a practical way to challenge the fundamental contradiction between accuracy and speed in domain adaptation tasks.
The remainder of this paper is organized as follows. Section II briefly reviews related work and highlights the merits of our approach. Section III details the proposed learning paradigm and the corresponding approach ADAN. Section IV reports the experiments and analyzes ADAN. Finally, Section V presents the conclusion and future work.
in Fig. 1). It is straightforward to see that some target samples are closer to the source samples while others are far away (Gong, Grauman, and Sha 2013). Intuitively, the closer samples are easier to adapt than the distant ones. Therefore, we advocate different pipelines for different samples: simpler networks for easy and frequent samples, and more complex networks for hard and rare ones. From the perspective of network structure, BranchyNet (Teerapittayanon, McDanel, and Kung 2016) and hard-aware deeply cascaded embedding (HDC) (Yuan, Yang, and Zhang 2017) are related to our work. BranchyNet leverages the insight that many test samples can be correctly classified early and therefore do not need the later network layers. HDC ensembles a set of models with different complexities in a cascaded manner to mine hard examples at multiple levels. However, both BranchyNet and HDC are conventional machine learning approaches. They are limited to scenarios where the training set and the test set are drawn from the same data distribution. In transfer learning tasks, we have to address the distribution gaps along the network layers and exits.
Agile Domain Adaptation
Notations and Definitions
In this paper, we use a bold uppercase letter and a bold lowercase letter to denote a matrix and a vector, respectively. Subscripts s and t are used to indicate the source domain and the target domain, respectively. We investigate the unsupervised domain adaptation problem defined as follows.
Definition 1 A domain $\mathcal{D}$ consists of three parts: a feature space $\mathcal{X}$, its probability distribution $P(X)$ and a label set $\mathcal{Y}$, where $X \in \mathcal{X}$.
Problem 1 Given a labeled source domain $\mathcal{D}_s$ and an unlabeled target domain $\mathcal{D}_t$, unsupervised domain adaptation handles the problem of transferring knowledge, e.g., samples, features and parameters, from $\mathcal{D}_s$ to $\mathcal{D}_t$, where $\mathcal{D}_s \neq \mathcal{D}_t$, $\mathcal{Y}_s = \mathcal{Y}_t$, $P(X_s) \neq P(X_t)$ and $P(y_s|X_s) \neq P(y_t|X_t)$.
Overall Idea
In unsupervised domain adaptation, we have a labeled source domain {X_s, y_s} with n_s samples and an unlabeled target domain X_t with n_t samples. Our goal is to train a deep neural network f(x) which has multiple exits, e.g., exit_1, exit_2, ..., exit_m. In the learned deep neural network, the very easy samples can be classified via the first exit, i.e., exit_1, which is located after only a few early layers of the backbone network. In the same manner, more difficult samples are handled by the subsequent exit_2, ..., exit_{m-1}, until the last remaining samples are handled by the final exit_m. At the testing stage, if samples come in one by one instead of in batches, a sample x is first handled by exit_1. If exit_1 has enough confidence to classify x, the testing finishes. Otherwise, x is successively handled by the following exits until one of the exits has high confidence to classify it, or it is finally handled by the last exit. The learning pipeline of this idea is shown in Fig. 2. It is worth noting that the number of exits and the structure of each exit can be tailored according to specific tasks.
Problem Formulation
Since we have the labels of X s , we can train the deep model on the source domain data in a supervised fashion. Specifically, the empirical error of f (x) on X s can be written as:
$L_{\mathrm{sup}} = \frac{1}{n_s}\sum_{i=1}^{n_s} J(f(x_{s,i}), y_{s,i}),$ (1)
where J(·) is a loss metric. Minimizing $L_{\mathrm{sup}}$ can train a model which is suitable for the source domain. However, the model cannot handle the target domain due to the data distribution shift. Therefore, we further introduce domain adaptation layers into the network, so that the trained model can be applied to the target domain. In a deep neural network, e.g., AlexNet (Krizhevsky, Sutskever, and Hinton 2012) and ResNet (He et al. 2016), deep features extracted from the earlier layers are more general and features from the latter layers tend to be more specific (Long et al. 2015). In other words, the features vary from domain-invariant to domain-specific along the network. Activations in the domain-specific layers are hard to transfer. Consequently, we remold the domain-specific layers into domain adaptation layers by optimizing a transfer loss $L_{\mathrm{tran}}$. As a result, the loss function of each exit $\ell$ ($\ell = 1, \cdots, m$), which considers both the source domain and the target domain, can be written as:
$L_{\mathrm{exit}_\ell} = L_{\mathrm{sup}} + \lambda L_{\mathrm{tran}},$ (2)
where λ > 0 is the balancing parameter and $\ell$ indicates the index of the network exits. In our agile domain adaptation networks, we have multiple exits, e.g., exit_1, exit_2, ..., exit_m. To train the whole ADAN in an end-to-end manner, we jointly optimize the loss functions of all exits. Formally, we optimize a weighted sum of the losses $L_{\mathrm{exit}_\ell}$ ($\ell = 1, 2, \cdots, m$):
$L = \sum_{\ell=1}^{m} w_\ell L_{\mathrm{exit}_\ell},$ (3)
where $w_\ell > 0$ ($\ell = 1, 2, \cdots, m$) is the loss weight of exit $\ell$. In our paper, we simply set $w_\ell = 1$ ($\ell = 1, 2, \cdots, m$).
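As a rough illustration of how Eqs. (2) and (3) combine the per-exit losses into a single training objective, consider the following Python sketch. It assumes that the per-exit supervised and transfer losses have already been computed elsewhere; the function and variable names are illustrative and are not taken from the released code.

def total_adan_loss(sup_losses, tran_losses, lam=1.0, weights=None):
    """Eqs. (2)-(3): L = sum_l w_l * (L_sup_l + lam * L_tran_l)."""
    if weights is None:
        weights = [1.0] * len(sup_losses)  # the paper simply sets w_l = 1
    per_exit = [ls + lam * lt for ls, lt in zip(sup_losses, tran_losses)]  # Eq. (2)
    return sum(w * le for w, le in zip(weights, per_exit))                 # Eq. (3)

# toy usage with three exits
print(total_adan_loss(sup_losses=[0.9, 0.7, 0.5], tran_losses=[0.2, 0.3, 0.4], lam=1.0))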
In this paper, we deploy the cross-entropy loss for the labeled source data. If we use y to denote the one-hot ground-truth label vector of sample x, J(·) can be written as:
$J(\hat{y}, y) = -\frac{1}{|C|}\sum_{c \in C} y_c \log \hat{y}_c,$ (4)
where C is the set of all possible labels and $\hat{y}$ is the predicted label vector of x:
$\hat{y} = \mathrm{softmax}(f(x)) = \frac{\exp(f(x))}{\sum_{c \in C}\exp(f(x)_c)}.$ (5)
For the transfer learning part $L_{\mathrm{tran}}$, we deploy the multi-kernel MMD (Gretton et al. 2012) loss as our metric. Specifically, given the two data distributions of the source and the target domain, their MMD can be computed as the squared distance between the empirical kernel means:
$\mathrm{MMD}(X_s, X_t) = \frac{1}{n_s^2}\sum_{i=1}^{n_s}\sum_{j=1}^{n_s} k(x_{s,i}, x_{s,j}) - \frac{2}{n_s n_t}\sum_{i=1}^{n_s}\sum_{j=1}^{n_t} k(x_{s,i}, x_{t,j}) + \frac{1}{n_t^2}\sum_{i=1}^{n_t}\sum_{j=1}^{n_t} k(x_{t,i}, x_{t,j}),$ (6)
where k(·) is a kernel function which maps the data features into a reproducing kernel Hilbert space (RKHS). In this paper, we deploy the widely used Gaussian kernel.
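The following NumPy sketch shows one way the empirical MMD of Eq. (6) can be estimated with a sum of Gaussian kernels at several bandwidths; the bandwidth values are assumptions, since the exact multi-kernel configuration is not spelled out here.

import numpy as np

def gaussian_kernel(a, b, gammas=(0.5, 1.0, 2.0)):
    """Sum of Gaussian kernels exp(-gamma * ||a_i - b_j||^2), evaluated pairwise."""
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return sum(np.exp(-g * d2) for g in gammas)

def mmd(xs, xt, gammas=(0.5, 1.0, 2.0)):
    """Empirical (biased) squared MMD between a source batch xs and a target batch xt, Eq. (6)."""
    ns, nt = len(xs), len(xt)
    k_ss = gaussian_kernel(xs, xs, gammas).sum() / ns**2
    k_st = gaussian_kernel(xs, xt, gammas).sum() / (ns * nt)
    k_tt = gaussian_kernel(xt, xt, gammas).sum() / nt**2
    return k_ss - 2.0 * k_st + k_tt

xs = np.random.randn(64, 256)        # source activations
xt = np.random.randn(64, 256) + 0.5  # shifted target activations
print(mmd(xs, xt))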
Algorithm 1. Agile Domain Adaptation Networks
Training:
Learn the network parameters by optimizing Eq. (3).
Test:
l = 1; % Initialize the network exit index
while:
z = f_exit_l(x); % Get the features from exit l
y = softmax(z); % Predict the label at exit l
e = En(y); % Calculate the sample entropy
if e ≤ T: return y; % If the entropy is lower than a threshold, return the predicted label and finish
l = l + 1; % Otherwise, go to the next exit
until l > m;
return y;
Eq. (6) calculates the MMD on the original data feature x. However, minimizing Eq. (6) only implicitly matches the activations generated by the domain adaptation layers. In transfer learning, the source activations and target activations generated by the domain adaptation layers are encouraged to be well-aligned so that the layer parameters can be shared by the two domains. Therefore, we explicitly minimize the MMD on layer activations (Long et al. 2017). Let $Z_s^l$ and $Z_t^l$ denote the activations generated by layer l from the source domain and the target domain, respectively; the MMD with respect to the layer activations can be calculated by:
$L_{\mathrm{tran}} = \frac{1}{n_s^2}\sum_{i=1}^{n_s}\sum_{j=1}^{n_s}\prod_{l=1}^{L} k^l(z_{s,i}^l, z_{s,j}^l) - \frac{2}{n_s n_t}\sum_{i=1}^{n_s}\sum_{j=1}^{n_t}\prod_{l=1}^{L} k^l(z_{s,i}^l, z_{t,j}^l) + \frac{1}{n_t^2}\sum_{i=1}^{n_t}\sum_{j=1}^{n_t}\prod_{l=1}^{L} k^l(z_{t,i}^l, z_{t,j}^l).$ (7)
Furthermore, Eq. (7) can be rewritten in the following equivalent form to reduce the computational costs (Gretton et al. 2012):
$L_{\mathrm{tran}} = \frac{2}{n_s}\sum_{i=1}^{n_s/2}\left[\prod_{l=1}^{L} k^l(z_{s,2i-1}^l, z_{s,2i}^l) + \prod_{l=1}^{L} k^l(z_{t,2i-1}^l, z_{t,2i}^l)\right] - \frac{2}{n_s}\sum_{i=1}^{n_s/2}\left[\prod_{l=1}^{L} k^l(z_{s,2i-1}^l, z_{t,2i}^l) + \prod_{l=1}^{L} k^l(z_{t,2i-1}^l, z_{s,2i}^l)\right].$ (8)
For the earlier detections, we need to estimate whether a sample should be finished at each exit. Therefore, we need a metric to measure the classification confidence. In this paper, we use the sample entropy as the metric, which is defined as:
$En(y) = -\sum_{c \in C} y_c \log y_c,$ (9)
where y is a label vector which consists of the probabilities of all possible labels computed at each exit. From the physical meaning of entropy, we know that a lower entropy indicates a more certain output. As a result, we compare the sample entropy with a threshold at each exit. If the entropy is lower than the threshold, the classification of the sample is finished. Otherwise, the sample goes to the next exit for prediction. For a better understanding, we show the main steps of our ADAN in Algorithm 1.
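A minimal NumPy sketch of the test-time procedure of Algorithm 1 is given below. The per-exit classifiers are assumed to be given as a list of callables that map an input to logits, and the exit functions and thresholds used in the toy example are placeholders.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    """Sample entropy En(y) = -sum_c y_c log y_c, Eq. (9)."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def agile_predict(x, exit_fns, thresholds):
    """Return the prediction of the first exit whose entropy falls below its threshold."""
    y = None
    for fn, T in zip(exit_fns, thresholds):
        y = softmax(fn(x))
        if entropy(y) <= T:   # confident enough: stop early
            return y
    return y                  # otherwise the last exit decides

# toy usage: two "exits" implemented as random linear classifiers
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(10, 32)), rng.normal(size=(10, 32))
x = rng.normal(size=32)
pred = agile_predict(x, [lambda v: W1 @ v, lambda v: W2 @ v], thresholds=[0.5, np.inf])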
Experiments
In this section, we verify the proposed method with both accuracy and efficiency results. Two widely used base architectures, i.e., the classical LeNet (LeCun et al. 1998) and the state-of-the-art ResNet (He et al. 2016), are tailored to work in the manner of agile domain adaptation. The source code will be publicly available on our GitHub page.
Data Description
USPS and MNIST are two widely used datasets in domain adaptation. Both of them consist of images of handwritten digits, with 9,298 and 70,000 samples in total, respectively. Office-31 dataset (Saenko et al. 2010) consists of 4,652 samples from 31 categories. Samples in this dataset are from 3 subsets, i.e., Amazon (A), DSLR (D) and Webcam (W). Specifically, Amazon includes images downloaded from amazon.com. DSLR consists of samples captured by a digital SLR camera. Images in Webcam are shot by a low-resolution web camera.
Implementation Details
Our proposed paradigm is independent of specific base architectures and can be easily incorporated into any popular deep network. Limited by space, we implement our ADAN based on two base architectures, i.e., LeNet (LeCun et al. 1998) for digit recognition and ResNet (He et al. 2016) for object classification. LeNet-5 contains three convolutional layers and two fully connected layers. In our ADAN, we add one earlier exit after the first convolutional layer. The earlier exit consists of a convolutional layer and a fully connected layer. For LeNet-5, we set the batch size to 128 and the learning rate to 0.001, and use SGD with momentum 0.9 and weight decay 0.0001 as the optimizer.
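The released implementation is not reproduced here; the following PyTorch sketch only illustrates the kind of structure described above, i.e., a LeNet-style backbone with one early exit (a convolutional layer plus a fully connected layer) branching off after the first convolutional block. The exact layer sizes are assumptions.

import torch
import torch.nn as nn

class MultiExitLeNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(1, 6, 5), nn.ReLU(), nn.MaxPool2d(2))
        # early exit: one convolutional layer plus one fully connected layer
        self.exit1 = nn.Sequential(
            nn.Conv2d(6, 16, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(16 * 4 * 4, num_classes))
        self.block2 = nn.Sequential(nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2))
        self.final_exit = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 4 * 4, 120), nn.ReLU(),
            nn.Linear(120, num_classes))

    def forward(self, x):
        h1 = self.block1(x)      # shared early layers
        out1 = self.exit1(h1)    # logits of the early exit
        h2 = self.block2(h1)     # deeper, domain-specific layers
        out2 = self.final_exit(h2)  # logits of the final exit
        return out1, out2

logits1, logits2 = MultiExitLeNet()(torch.randn(4, 1, 28, 28))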
For ResNet-50, we add two earlier exits into the backbone network. The first one is located after the 3rd layer, and the second one after the 39th layer. One can also tailor the earlier exits, in both number and location, for their own applications. In our implementation, we provide a general template, which makes it embarrassingly simple to add, remove and modify the earlier exits. With respect to the training parameters, we set the batch size to 24. We also use SGD with momentum of 0.9 to optimize the objective function. The learning rate is adjusted during SGD using the same formula as reported in (Long et al. 2017).
Metrics and Compared Methods
In the domain adaptation community, the accuracy on the target domain is the most widely used metric. For the sake of fairness, we follow previous work (Tzeng et al. 2017; Long et al. 2017) and also report the accuracy on the target domain. Each of the reported results of our method is an average of 5 runs. The results of the compared methods are cited from the original papers or, if not available there, are the best results we could achieve by running their code. All of the experimental settings follow the previous work (Bousmalis et al. 2017; Long et al. 2017) to ensure a fair comparison.
The following state-of-the-art methods are reported for comparison:
Quantitative Results
Table 1 and Table 2 report the experimental results of our method on the MNIST and USPS datasets. Specifically, Table 1 verifies that our approach can achieve state-of-the-art accuracy. From Table 2, several interesting observations can be drawn. First, both the accuracy and the speed are improved by adding earlier exits. The backbone network achieves 91.27% in 2.42ms. With an earlier exit, however, ADAN can achieve 91.62% in only 0.83ms. In this case, features extracted from either earlier layers or latter layers have their own pros and cons in transfer learning tasks. Our approach, notably, can take advantage of every layer and maximize the value of deep networks. Table 3 and Table 4 show the results on the Office-31 dataset. Similar to the results of digit recognition, we can observe that our approach achieves state-of-the-art performance and that it can significantly reduce the running time through the earlier exits. Compared with the MNIST and USPS datasets, Office-31 is much more challenging. As a result, if the ratio of early exited samples goes high, the accuracy drop is more obvious than in the digit recognition task. Notice that the baseline (backbone network) has 50 layers, while the first exit in our model is located after the 3rd layer, which is very shallow compared with ResNet-50. However, our model still gets an accuracy of 97.23%, compared with the baseline 99.8%, on W→D when we set 20% and 60% of the samples to exit from the first earlier exit and the second earlier exit, respectively. Notably, with the sacrifice of 1.6% accuracy, we can speed up the model 2.52 times! The results on D→W and other evaluations, e.g., A→W and A→D, draw the same conclusion. It is worth noting that Amazon is a very challenging dataset; the results on A→W and A→D, therefore, are not as outstanding as the results on DSLR and Webcam. In our experiments, we found that if we move the first exit backward by several layers or reduce the ratio of early detections, the results on A→W and A→D improve. In our source code, we provide a general template to modify, add and remove earlier exits. One can tailor a personal ADAN with a few lines of Python code.
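The exit ratios mentioned above (e.g., 20% and 60%) suggest that each entropy threshold can be calibrated so that a desired fraction of samples leaves at the corresponding exit. A minimal NumPy sketch of such a calibration is shown below, assuming the entropies of target samples at a given exit have already been collected; the variable names are illustrative.

import numpy as np

def calibrate_threshold(entropies, exit_fraction):
    """Choose T so that roughly `exit_fraction` of samples satisfy entropy <= T."""
    return float(np.quantile(entropies, exit_fraction))

# toy usage: entropies of 1,000 target samples measured at the first exit
entropies_exit1 = np.random.rand(1000) * 2.0
T1 = calibrate_threshold(entropies_exit1, 0.20)   # about 20% exit early
print(T1, (entropies_exit1 <= T1).mean())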
Qualitative Results
Back to the Samples. The very basic motivation behind our formulation is that our approach handles the degrees of adaptation difficulty by introducing multiple earlier exits into the learning pipeline. To verify that our approach does perceive the adaptation difficulty and handle it with different strategies, we show some samples exited from different earlier exits in Fig. 4. From the results, it is crystal clear that our approach is able to identify the degrees of adaptation difficulty. For instance, let us take the last column, i.e., samples with the label monitor, on the evaluation W→D as an example. We can see that the samples exited via the first earlier exit are very similar to the source samples. These samples are easy to adapt. The target samples exited from the second earlier exit are slightly different from the source samples in viewpoint. At last, the samples exited from the final exit have distinctly different capture angles from the source samples. These target samples are hard to adapt. For the easy samples, we only use a few layers to speed up the model. At the same time, for the hard samples, we deploy the deeper part of the network to guarantee the accuracy. Our approach is agile and resilient. Class-wise Visualization. Fig. 3 reports the visualized class-wise accuracy. Comparing the two lines of Fig. 3, it is clear that our approach is able to mitigate the distribution gaps between the source domain and the target domain.
Parameter Sensitivity. The main parameter in our model is λ, which balances the weight of the supervised loss and the transfer loss. Fig. 5(a) reports the effects of different λ values. It can be seen that our model is not sensitive to the parameter when it is chosen from [0, 1]. However, the performance degrades when λ > 2. Since the model is targeted at leveraging knowledge from the source domain to facilitate the target domain, a large λ would weaken the contribution of the source domain. The other hyper-parameter in our formulation is the weight $w_\ell$; we simply set it to one throughout this paper. We also tested different settings and found that the overall performance is not sensitive to $w_\ell$.
Convergence. Fig. 5(a) reports the overall error with respect to different epochs. The results reflect the convergence of our approach. From the results, we can observe that our approach is able to achieve a stable result after around 20 epochs. It is worth noting that the reported results are from the evaluations with 10% and 50% of the samples exited via the first and second earlier exit, respectively. Thus, the results are slightly worse than the backbone network. However, the results report the overall convergence rather than only that of the backbone.
Ablation Analysis. Our approach consists of the backbone networks and the earlier exits. If we remove the exits and only keep the backbone, the performance of our approach would be similar to the results reported in Table 3 and the baseline in Table 4. However, the running cost would be an issue for mobile devices and real-time systems.
Conclusion
In this paper, we propose a novel learning paradigm for domain adaptation. In the proposed paradigm, several earlier detections are performed to identify the adaptation difficulty. At the same time, several earlier exits are added along the backbone network to reduce the running cost. The proposed paradigm can be easily incorporated into existing deep domain adaptation networks or even non-transfer deep architectures, e.g., LeNet, AlexNet and ResNet.
In addition, a novel domain adaptation approach, i.e., agile domain adaptation networks (ADAN), is implemented to verify the effectiveness and efficiency of the proposed paradigm. Extensive experiment results, both quantitative | 4,064 |
1907.04758 | 2961619217 | With deep learning becoming a more prominent approach for automatic classification of three-dimensional point cloud data, a key bottleneck is the amount of high quality training data, especially when compared to that available for two-dimensional images. One potential solution is the use of synthetic data for pre-training networks, however the ability for models to generalise from synthetic data to real world data has been poorly studied for point clouds. Despite this, a huge wealth of 3D virtual environments exist which, if proved effective can be exploited. We therefore argue that research in this domain would be of significant use. In this paper we present SynthCity an open dataset to help aid research. SynthCity is a 367.9M point synthetic full colour Mobile Laser Scanning point cloud. Every point is assigned a label from one of nine categories. We generate our point cloud in a typical Urban Suburban environment using the Blensor plugin for Blender. | The need for outdoor labelled point clouds has been addressed by a range of researchers. , @cite_15 released the Paris-rue-Madame MLS dataset containing 20M points ( @math and reflectance), , @cite_10 the iQmulus dataset containing 300M points ( @math , time, reflectance and number of echoes) and , @cite_8 the Paris-Lille-3D containing 143.1M points ( @math , scanner @math , gps time, reflectance). However, many caveats exist within these datasets. For example, Paris-rue-Madame, whilst large enough for traditional machine learning algorithms (i.e. Support Vector Machines, Random Forest), does not meet the scale for a modern DNN, which number of parameters can easily exceed 10x the number of points available. The iQmulus is more suited in terms of size however due to a 2D semi-manual data labelling approach, many mislabelled ground truth points exist. | {
"abstract": [
"",
"The objective of the TerraMobilita iQmulus 3D urban analysis benchmark is to evaluate the current state of the art in urban scene analysis from mobile laser scanning (MLS) at large scale. A very detailed semantic tree for urban scenes is proposed. We call analysis the capacity of a method to separate the points of the scene into these categories (classification), and to separate the different objects of the same type for object classes (detection). A very large ground truth is produced manually in two steps using advanced editing tools developed especially for this benchmark. Based on this ground truth, the benchmark aims at evaluating the classification, detection and segmentation quality of the submitted results. Graphical abstractDisplay Omitted HighlightsVery rich data: high accuracy, high resolution, many attributes.Massive data: 160 million annotated points thanks to a performant web based annotation tool (and many hours of work).Rich semantics organized in a semantic tree with various levels of generalization.Very objective evaluation metrics.",
"This paper introduces a new Urban Point Cloud Dataset for Automatic Segmentation and Classification acquired by Mobile Laser Scanning (MLS). We describe how the dataset is obtained from acquisition to post-processing and labeling. This dataset can be used to learn classification algorithm, however, given that a great attention has been paid to the split between the different objects, this dataset can also be used to learn the segmentation. The dataset consists of around 2km of MLS point cloud acquired in two cities. The number of points and range of classes make us consider that it can be used to train Deep-Learning methods. Besides we show some results of automatic segmentation and classification. The dataset is available at: http: caor-mines-paristech.fr fr paris-lille-3d-dataset ."
],
"cite_N": [
"@cite_15",
"@cite_10",
"@cite_8"
],
"mid": [
"2552796391",
"2027710719",
"2964257316"
]
} | SynthCity: A large scale synthetic point cloud | One of the fundamental requirements for supervised deep learning is the availability of large, accurately labelled datasets. For this reason, progress in two-dimensional (2D) image processing is often largely credited to the wealth of very large, high quality datasets such as ImageNet [1] (classification), COCO [2] (object detection) and Pascal VOC [3] (segmentation). It is now common practice to pre-train Convolutional Neural Networks (CNN) on large datasets before fine-tuning on smaller domain specific datasets. Despite the large success of deep learning for 2D image processing, it is evident that automatic understanding for three-dimensional (3D) point cloud data is not as mature. We argue one of the reasons for this is the lack of training data at the scale of that available for 2D data.
A key reason for the lack of 3D training data is that naturally the amount of prepared labelled data decreases as the complexity of labelling increases. For example in 2D, single image classification (e.g. dog, car, cup, etc.) is generally trivial and can therefore be carried out by large communities of untrained workers. Object detection requires more skill and has an added level of subjectivity. Segmentation again requires further precision, delicacy and involves more subjectivity. Per-point 3D segmentation requires highly skilled users, and generating perfect labels is non-trivial even for the most advanced users. A potential solution to account for this is to synthetically generate training data (e.g. ShapeNet [4]). Despite general success when pre-training 2D image networks on synthetic data and fine-tuning on real-world data, there has been very little research on this topic with respect to point cloud classification.
More so than 2D, 3D data benefits from a wealth of synthetic data in the form of virtual 3D environments generated for the purpose of gaming, virtual reality and scenario training simulators to name a few. However, the ability for deep learning networks to generalise from synthetic point clouds to real-world data is poorly studied, and as such the community risks missing out on a massive resource of data. To help address this we introduce SynthCity an open, large scale synthetic point cloud of a typical urban/suburban environment. SynthCity is captured using a simulated Mobile Laser Scanner (MLS). MLS point cloud data capturing is being increasingly used due to its ability to easily cover large areas when compared to a Terrestrial Laser Scanner (TLS) and at a higher resolution than an Aerial Laser Scanner (ALS). However, whilst capturing large quantities of data is becoming more trivial, such large datasets are useless without the means to extract useful structured information from otherwise useless unstructured data. As such, progress in this field offers huge potential for a range of disciplines from city planning to autonomous driving.
The primary purpose of our dataset is therefore to offer an open dataset to aid further research assessing the potential of synthetic datasets for pre-training Deep Neural Networks (DNNs) for automatic point cloud labelling. We believe successful progression in this area could have potentially huge implications for the future of automatic point cloud labelling. Our dataset is available for download at http://www.synthcity.xyz.
Generation
The primary aim in generating our dataset is to produce a globally registered point cloud $P \in \mathbb{R}^{n \times 3}$. Additionally, each point is associated with a feature vector, giving a feature matrix $F \in \mathbb{R}^{n \times d}$, where n is the number of points such that n = 367.9M and the d features are red, green, blue, time, end of line, and a label l, where $l \in L$ such that $|L| = 9$.
The SynthCity data was modelled inside the open-source Blender 3D graphics software [15]. The initial model was downloaded from an online model database (Fig. 2). The model was subsequently duplicated, with the objects undergoing shuffling to ensure the two areas were not identical to one another. Road segments were duplicated to connect the two urban environments, leaving large areas of unoccupied space. To populate these areas, additional typical suburban building models were downloaded and placed along the road. With respect to model numbers, the dataset contains: 130 buildings, 196 cars, 21 natural ground planes, 12 ground planes, 272 pole-like objects, 172 road objects, 1095 street furniture objects and 217 trees (table 2). The total disk size of the model was 16.9GB. The primary restriction on the size of the dataset was the availability of Random Access Memory (RAM) required on the workstation used for creating the model. This was limited to 32GB in our case; however, with more RAM the model size could easily have been extended.
The open-source Blender Sensor Simulation plugin Blensor [17] was used for simulation of the MLS and thus point cloud generation. We used the following setup for scanning: a typical scan took ∼330s to render and a total of 75,000 key frames were rendered from a pre-defined trajectory. To increase realism and generate more variability in point density, the trajectory spline was moved by a random perturbation at random intervals in all x, y, z directions. The final rendering required (330 × 75000)/86400 = 286.46 days of CPU compute time. This was processed using the AWS cloud computing service. We launched 22 type r4.2xlarge Ubuntu 18.04 EC2 spot instances, each containing 8 virtual CPUs and 61GB RAM. These were selected as rendering typically required ∼50GB RAM. All data was read and written to an EFS file storage system to allow for joint access to a single model instance. The total rendering took ∼13 days on the 22 EC2 instances.
Each render node produces an individual file $s_t$ for the 2D scan at time frame t. To create the global 3D point cloud, each point must undergo a transformation T with respect to the scanner location $S_{x,y,z}$ and rotation $S_{\omega,\phi,\kappa}$. Blensor can export both $S_{x,y,z}$ and $S_{\omega,\phi,\kappa}$ at time t as a motion file. Each scan is passed through a global registration script, where the transformation T is computed from the rotation matrices as follows:
$R_x = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\omega & -\sin\omega \\ 0 & \sin\omega & \cos\omega \end{pmatrix}$ (1)
$R_y = \begin{pmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{pmatrix}$ (2)
$R_z = \begin{pmatrix} \cos\kappa & -\sin\kappa & 0 \\ \sin\kappa & \cos\kappa & 0 \\ 0 & 0 & 1 \end{pmatrix}$ (3)
$R = R_z R_y R_x$ (4)
$T = \begin{pmatrix} R_{1,1} & R_{1,2} & R_{1,3} & S_x \\ R_{2,1} & R_{2,2} & R_{2,3} & S_y \\ R_{3,1} & R_{3,2} & R_{3,3} & S_z \\ 0 & 0 & 0 & 1 \end{pmatrix}$ (5)
Finally, each transformed point $\hat{p}_t$ is computed as:
$\hat{p}_t = p_t \cdot T.$ (6)
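A NumPy sketch of Eqs. (1)-(6) is given below: it builds the rotation matrix from the exported scanner angles, assembles the 4x4 transformation, and applies it to the points of one scan in homogeneous coordinates. This is an illustrative re-implementation rather than the released registration script, and it applies the transform as T @ p, which is equivalent to the row-vector form up to transposition.

import numpy as np

def rotation_matrix(omega, phi, kappa):
    """R = Rz @ Ry @ Rx from the scanner rotation angles, Eqs. (1)-(4)."""
    cw, sw = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, cw, -sw], [0, sw, cw]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def register_scan(points, scanner_xyz, scanner_angles):
    """Transform an (n, 3) scan into the global frame, Eqs. (5)-(6)."""
    T = np.eye(4)
    T[:3, :3] = rotation_matrix(*scanner_angles)
    T[:3, 3] = scanner_xyz
    homo = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coordinates
    return (T @ homo.T).T[:, :3]

scan = np.random.rand(100, 3)
global_pts = register_scan(scan, scanner_xyz=(10.0, 5.0, 2.0), scanner_angles=(0.0, 0.0, np.pi / 2))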
In a separate post-processing stage we generate the features $x_n, y_n, z_n$, time and eol. To create $x_n, y_n, z_n$ we simply apply 0.005m Gaussian noise to each $p_x$, $p_y$ and $p_z$ independently, such that $p_{n,i} = (p_{x,i}+\sigma_1, p_{y,i}+\sigma_2, p_{z,i}+\sigma_3)$, where $\sigma_1, \sigma_2, \sigma_3$ are drawn independently from the noise distribution.
We choose to store our data in the parquet data format [18]. The parquet format is very efficient with respect to memory storage and is also very suitable for out-of-memory processing. The parquet format is designed to integrate with the Apache Hadoop ecosystem. It can be directly read into Python Pandas dataframes and also Python Dask dataframes, which allow for easy out-of-memory processing directly in the Python ecosystem.
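As an illustration, the following Python sketch shows how a sub-area file could be read with pandas or Dask and how noisy coordinates of this kind can be produced. The file names and column names are assumptions and should be replaced by those given in the released feature table.

import numpy as np
import pandas as pd
import dask.dataframe as dd

# in-memory reading of a single sub-area (file name is a placeholder)
df = pd.read_parquet("area1.parquet")

# out-of-core reading of all areas with Dask
ddf = dd.read_parquet("area*.parquet")

# re-creating noisy coordinates: independent Gaussian noise (sigma = 0.005 m) per axis
sigma = 0.005
noise = np.random.normal(0.0, sigma, size=(len(df), 3))
df["x_n"] = df["x"].to_numpy() + noise[:, 0]
df["y_n"] = df["y"].to_numpy() + noise[:, 1]
df["z_n"] = df["z"].to_numpy() + noise[:, 2]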
Data
The dataset is modelled from a completely fictional typical urban environment. In reality the environment would be most similar to that of downtown and suburban New York City, USA. This was due to the initial starting model, and not any design choices made by ourselves. Other buildings and street infrastructure are typical of mainland Europe. We classify each point into one category from; road, pavement, ground, natural ground, tree, building, pole-like, street furniture or car. To address the class imbalance issue, during construction of the model we aimed to bias the placement of small less dominant features in an attempt to reduce this as much as possible. As point cloud DNNs typically work on small subsets of the dataset we argue that this approach should not introduce any unfavourable bias, but instead help physically reduce the class imbalance.
The final feature list with the respective storage types is shown in the accompanying feature table. As the complete file is 27.5GB and a typical workstation would not be able to load this model into memory, we split the scan into 9 sub-areas. Each sub-area is split solely on horizontal coordinates and can therefore contain points from any scan at any key frame. The purpose of this is twofold; firstly, users of typical workstations can load an area directly into memory, and secondly, we can nominate a fixed test area. We propose that areas 1-2 and 4-9 be used for training and area 3 be reserved for model testing. This enables consistency if models trained on our dataset are to be compared with one another. We choose area 3 as it contains a good representation of all classes. As SynthCity is not designed as a benchmark dataset, we provide the ground truth labels for area 3 in the same manner as all other areas.
Discussion
Although SynthCity was modelled to be biased toward poorly represented categories (e.g. street furniture and pole-like objects), it is evident that a significant class imbalance still exists (Fig. 3). The reason for this is twofold. Firstly, continuous features such as road and pavement cover significantly larger areas than smaller discrete features. Secondly, due to the nature of MLS, objects closer to the scanner are sampled with a higher point density. As MLS systems are typically car mounted, road and pavement naturally have very high point densities. A sensible pre-processing approach to account for this issue is to first voxel downsample the point cloud to a regular point density. This technique has been shown to considerably improve classification accuracy for both outdoor and indoor point clouds [9]. As one of the primary benefits of a self-constructed synthetic model is the ability to choose the object placement distribution, it is evident from our dataset that this bias should be exaggerated even further.
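A simple NumPy sketch of voxel downsampling (keeping one point per occupied voxel) is given below. This is only one possible implementation and not necessarily the exact procedure used in the cited work [9].

import numpy as np

def voxel_downsample(points, voxel_size=0.05):
    """Keep the first point falling into each (voxel_size)^3 cell."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

pts = np.random.rand(100000, 3) * 10.0
down = voxel_downsample(pts, voxel_size=0.1)
print(len(pts), "->", len(down))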
SynthCity has been designed primarily to be used for semantic per-point classification. As such, each point contains a feature vector and a classification label. Whilst this is useful for a range of applications, currently the dataset does not contain instance IDs for individual object extraction. As each object is a discrete object within the Blender environment, instance IDs would be reasonably trivial to extract. Moreover, a simple post-processing script could be employed to convert instance IDs to 3D instance bounding boxes, which would enable the dataset to be used for 3D object localisation algorithms as well as per-point classification. As SynthCity is an ongoing project, we plan to implement this in future releases.
Blensor supports scanning with a range of scanners, most notably a simulated Velodyne scanner. Such scanners are commonly used for both MLS systems and autonomous vehicles. Re-rendering with a Velodyne scanner would only require the AWS instances to be run again to produce the equivalent point cloud. Furthermore, scanner properties can be changed to simulate a range of scanners that are not covered by the pre-defined settings. We argue that, as with 2D images, 3D point cloud processing should be sensor-invariant. Training on multiple sensors would likely be a very valuable augmentation technique.
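Pending such re-rendering, one cheap way to approximate sensor variation at training time is to randomise point density. The following sketch is an editorial illustration and not part of the SynthCity pipeline:

    import numpy as np

    def random_density_dropout(xyz, keep_range=(0.3, 1.0), rng=None):
        """Keep a uniformly random fraction of points drawn from keep_range."""
        rng = rng or np.random.default_rng()
        keep_fraction = rng.uniform(*keep_range)
        mask = rng.random(len(xyz)) < keep_fraction
        return xyz[mask]

    cloud = np.random.rand(100000, 3)
    print(len(random_density_dropout(cloud)))   # varies from call to call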
Conclusion
In this work we present SynthCity, an open, large-scale synthetic point cloud. We release this dataset to aid research into pre-training segmentation/classification models on synthetic data. We argue that the ability to generalise from synthetic data to real-world data would be immensely beneficial to the community, as a wealth of synthetic 3D environments already exists, most notably those generated by the gaming, virtual-environment and simulated-training industries. Our model contains 367.9M perfectly labelled points with 5 additional features: red, green, blue, time, eol. In addition, we present an identical point cloud perturbed with Gaussian-sampled noise, giving the point cloud a more realistic appearance. | 2,021
1907.04844 | 2959360491 | Given a family @math of graphs and a positive integer @math , a graph @math is called vertex @math -fault-tolerant with respect to @math , denoted by @math -FT @math , if @math contains some @math as a subgraph, for every @math with @math . Vertex-fault-tolerance has been introduced by Hayes [A graph model for fault-tolerant computing systems, IEEE Transactions on Computers, C-25 (1976), pp. 875-884.], and has been studied in view of potential applications in the design of interconnection networks operating correctly in the presence of faults. We define the Fault-Tolerant Complete Matching (FTCM) Problem in bipartite graphs of order @math : to design a bipartite @math , with @math , @math , @math , that has a FTCM, and the tuple @math , where @math and @math are the maximum degree in @math and @math , respectively, is lexicographically minimum. @math has a FTCM if deleting at most @math vertices from @math creates @math that has a complete matching, i.e., a matching of size @math . We show that if @math is integer, solutions of the FTCM Problem can be found among @math -regular bipartite graphs of order @math , with @math , and @math . If @math then all @math -regular bipartite graphs of order @math have a FTCM, and for @math , it is not the case. We characterize the values of @math , @math , @math , and @math that admit an @math -regular bipartite graph of order @math , with @math , and give a simple construction that creates such a graph with a FTCM whenever possible. Our techniques are based on Hall's marriage theorem, elementary number theory, linear Diophantine equations, properties of integer functions and congruences, and equations involving them. | Design of fault-tolerant bipartite graphs has potential applications in the design of flexible processes, where there are @math different request types and @math servers that should process them (see, for example, the work of @cite_0 for a review of the topic). | {
"abstract": [
"One of the most effective ways of minimizing supply demand mismatch costs, with little increase in operational costs, is to deploy valuable resources in a flexible and timely manner to meet the realized demand. This notion of flexible processes has significantly changed operations in many manufacturing and service companies. For example, a flexible production system is now commonly used by automobile manufacturers, and a workforce cross-training system is now a common practice in many service industries. However, there is a trade-off between the level of flexibility available in the system and the associated complexity and operating costs. The challenge is to have the “right” level of flexibility to capture the bulk of the benefits from a fully flexible system, while controlling the increase in implementation costs. This paper reviews developments in process flexibility over the past decade. In particular, we focus on the phenomenon, often observed in practice, that a slight increase in process flexibility can lead to a significant improvement in system performance. This review explores the issues from three perspectives: design, evaluation, and applications. We also discuss how the concept of process flexibility has been deployed in several manufacturing and service systems."
],
"cite_N": [
"@cite_0"
],
"mid": [
"2031039901"
]
} | MINIMUM k-CRITICAL BIPARTITE GRAPHS | 0 |
|
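To make the fault-tolerant complete matching (FTCM) definition in the abstract above concrete, the following editorial sketch brute-forces the check on a small bipartite graph. It is plain Python with a simple augmenting-path matching, and is an illustration of the definition only, not the construction proposed in the paper:

    from itertools import combinations

    def max_matching(adj, left):
        """Kuhn's augmenting-path maximum matching; adj maps u -> iterable of v."""
        match_v = {}
        def augment(u, seen):
            for v in adj.get(u, ()):
                if v in seen:
                    continue
                seen.add(v)
                if v not in match_v or augment(match_v[v], seen):
                    match_v[v] = u
                    return True
            return False
        return sum(augment(u, set()) for u in left)

    def has_ftcm(adj, U, V, k):
        """True iff for every S subset of U with |S| <= k, G - S has a matching of size |V|."""
        for r in range(k + 1):
            for S in combinations(U, r):
                rest = [u for u in U if u not in S]
                if max_matching({u: adj[u] for u in rest}, rest) < len(V):
                    return False
        return True

    # Toy graph: |U| = 4, |V| = 3, every u has degree 2, and k = |U| - |V| = 1.
    adj = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "a"}, 3: {"a", "b"}}
    print(has_ftcm(adj, list(adj), ["a", "b", "c"], k=1))   # True for this example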
1907.04844 | 2959360491 | Given a family @math of graphs and a positive integer @math , a graph @math is called vertex @math -fault-tolerant with respect to @math , denoted by @math -FT @math , if @math contains some @math as a subgraph, for every @math with @math . Vertex-fault-tolerance has been introduced by Hayes [A graph model for fault-tolerant computing systems, IEEE Transactions on Computers, C-25 (1976), pp. 875-884.], and has been studied in view of potential applications in the design of interconnection networks operating correctly in the presence of faults. We define the Fault-Tolerant Complete Matching (FTCM) Problem in bipartite graphs of order @math : to design a bipartite @math , with @math , @math , @math , that has a FTCM, and the tuple @math , where @math and @math are the maximum degree in @math and @math , respectively, is lexicographically minimum. @math has a FTCM if deleting at most @math vertices from @math creates @math that has a complete matching, i.e., a matching of size @math . We show that if @math is integer, solutions of the FTCM Problem can be found among @math -regular bipartite graphs of order @math , with @math , and @math . If @math then all @math -regular bipartite graphs of order @math have a FTCM, and for @math , it is not the case. We characterize the values of @math , @math , @math , and @math that admit an @math -regular bipartite graph of order @math , with @math , and give a simple construction that creates such a graph with a FTCM whenever possible. Our techniques are based on Hall's marriage theorem, elementary number theory, linear Diophantine equations, properties of integer functions and congruences, and equations involving them. | Nevertheless, the relation of our model with the process flexibility literature is still remote, since the process flexibility community has so far focused on systems where a server can process different kinds of compatible requests within the same time period. Moreover, only recently unbalanced systems, i.e., with @math , have started to be considered (see, @cite_21 and @cite_4 for examples). | {
"abstract": [
"Several design guidelines and flexibility indices have been developed in the literature to inform the design of flexible production networks. In this paper, we propose additional flexibility design guidelines for unbalanced networks, where the numbers of plants and products are not equal, by refining the well-known Chaining Guidelines. We study symmetric networks, where all plants have the same capacity and product demands are independent and identically distributed, and focus mainly on the case where each product is built at two plants. We also briefly discuss cases where (1) each product is built at three plants and (2) some products are built at only one plant. An extensive computational study suggests that our refinements work very well for finding flexible configurations with minimum shortfall in unbalanced networks.",
"Abstract A new methodology is presented for structuring the multiskilling characteristics of a set of initially single-skilled employees in the context of a multi-department service sector business, more specifically in the retail industry. Assuming stochastic demand under continuous modeling of the workforce, the methodology decomposes the multiskilling problem into three stages that are then solved sequentially. The first stage delivers a novel analytic expression for estimating the number of multiskilled employees required by each store department, the second stage applies constructive heuristics to these estimates to generate a feasible set of closed chaining structures, and the third stage uses Monte Carlo simulation and a linear programming model to evaluate the chains so generated. The results show that in unbalanced systems, the solutions involve closed long chains and closed short chains of different lengths, which are robust to demand variability, minimize the expected total cost of staff shortages, and exhibit the best cost-effective performance. The methodology provides a tool for company decision makers to address the three fundamental multiskilling issues in unbalanced systems, namely, where to add multiskilling, how much to add and how it should be added."
],
"cite_N": [
"@cite_21",
"@cite_4"
],
"mid": [
"2119277007",
"2903021497"
]
} | MINIMUM k-CRITICAL BIPARTITE GRAPHS | 0 |
|
1907.04844 | 2959360491 | Given a family @math of graphs and a positive integer @math , a graph @math is called vertex @math -fault-tolerant with respect to @math , denoted by @math -FT @math , if @math contains some @math as a subgraph, for every @math with @math . Vertex-fault-tolerance has been introduced by Hayes [A graph model for fault-tolerant computing systems, IEEE Transactions on Computers, C-25 (1976), pp. 875-884.], and has been studied in view of potential applications in the design of interconnection networks operating correctly in the presence of faults. We define the Fault-Tolerant Complete Matching (FTCM) Problem in bipartite graphs of order @math : to design a bipartite @math , with @math , @math , @math , that has a FTCM, and the tuple @math , where @math and @math are the maximum degree in @math and @math , respectively, is lexicographically minimum. @math has a FTCM if deleting at most @math vertices from @math creates @math that has a complete matching, i.e., a matching of size @math . We show that if @math is integer, solutions of the FTCM Problem can be found among @math -regular bipartite graphs of order @math , with @math , and @math . If @math then all @math -regular bipartite graphs of order @math have a FTCM, and for @math , it is not the case. We characterize the values of @math , @math , @math , and @math that admit an @math -regular bipartite graph of order @math , with @math , and give a simple construction that creates such a graph with a FTCM whenever possible. Our techniques are based on Hall's marriage theorem, elementary number theory, linear Diophantine equations, properties of integer functions and congruences, and equations involving them. | A related line of research is not to design the smallest fault tolerant graphs (in terms of any metric, like the ones given above), but to analyze the level of fault tolerance assured by prescribed topologies @cite_2 @cite_12 . This topic is of particular interest for algorithm design in high performance computing. Supercomputers are comprised of many processing nodes (with some local memory) that use an interconnection network to communicate during the execution of distributed algorithms. An algorithm delegates computational tasks to different nodes, and uses some logical topology for its message passing. This logical topology has to be somehow embedded in the interconnection network provided by the supercomputer. So it is of practical interest to study if the message passing topologies most common in algorithm design (like cycles, and trees of certain types) can still be embedded in interconnection topologies provided by supercomputers (often similar to hypercubes) when the system presents some faults @cite_2 . | {
"abstract": [
"Abstract A d-starlike tree (or a d-quasistar ) is a subdivision of a star tree of degree d . A family of hypercube-like interconnection networks, called restricted hypercube-like graphs , includes most non-bipartite hypercube-like networks found in the literature such as twisted cubes, crossed cubes, Mobius cubes, recursive circulant G ( 2 m , 4 ) of odd m , etc. In this paper, we prove that given an arbitrary fault-free vertex r in an m -dimensional restricted hypercube-like graph with a set F of faults (vertex and or edge faults) and d positive integers, l 1 , l 2 , … , l d , whose sum is equal to the number of fault-free vertices minus one, there exists a d -starlike tree rooted at r , each of whose subtrees forms a fault-free path on l i vertices for i ∈ 1 , 2 , … , d , provided | F | ≤ m − 2 and | F | + d ≤ m . The bounds on | F | and | F | + d are the maximum possible.",
"This paper showed the pancyclicity of Cartesian product graphs with faulty edges.This paper showed the bipancyclicity of Cartesian product graphs with faulty edges.Determining the edge-fault pancyclicity (bipancyclicity) of NQ m r , ? , m 1 efficiently.Determining the edge-fault pancyclicity (bipancyclicity) of GQ m r , ? , m 1 efficiently. Let r? 4 be an even integer. Graph G is r-bipancyclic if it contains a cycle of every even length from r to 2 ? n ( G ) 2 ? , where n ( G ) is the number of vertices in G. A graph G is r-pancyclic if it contains a cycle of every length from r to n ( G ) , where r ? 3 . A graph is k-edge-fault Hamiltonian if, after deleting arbitrary k edges from the graph, the resulting graph remains Hamiltonian. The terms k-edge-fault r-bipancyclic and k-edge-fault r-pancyclic can be defined similarly. Given two graphs G and H, where n ( G ) , n ( H ) ? 9, let k 1 , k 2 ? 5 be the minimum degrees of G and H, respectively. This study determined the edge-fault r-bipancyclic and edge-fault r-pancyclic of Cartesian product graph G × H with some conditions. These results were then used to evaluate the edge-fault pancyclicity (bipancyclicity) of NQ m r , ? , m 1 and GQ m r , ? , m 1 ."
],
"cite_N": [
"@cite_12",
"@cite_2"
],
"mid": [
"2943552721",
"2298465770"
]
} | MINIMUM k-CRITICAL BIPARTITE GRAPHS | 0 |
|
1907.04844 | 2959360491 | Given a family @math of graphs and a positive integer @math , a graph @math is called vertex @math -fault-tolerant with respect to @math , denoted by @math -FT @math , if @math contains some @math as a subgraph, for every @math with @math . Vertex-fault-tolerance has been introduced by Hayes [A graph model for fault-tolerant computing systems, IEEE Transactions on Computers, C-25 (1976), pp. 875-884.], and has been studied in view of potential applications in the design of interconnection networks operating correctly in the presence of faults. We define the Fault-Tolerant Complete Matching (FTCM) Problem in bipartite graphs of order @math : to design a bipartite @math , with @math , @math , @math , that has a FTCM, and the tuple @math , where @math and @math are the maximum degree in @math and @math , respectively, is lexicographically minimum. @math has a FTCM if deleting at most @math vertices from @math creates @math that has a complete matching, i.e., a matching of size @math . We show that if @math is integer, solutions of the FTCM Problem can be found among @math -regular bipartite graphs of order @math , with @math , and @math . If @math then all @math -regular bipartite graphs of order @math have a FTCM, and for @math , it is not the case. We characterize the values of @math , @math , @math , and @math that admit an @math -regular bipartite graph of order @math , with @math , and give a simple construction that creates such a graph with a FTCM whenever possible. Our techniques are based on Hall's marriage theorem, elementary number theory, linear Diophantine equations, properties of integer functions and congruences, and equations involving them. | A problem that is closely related to ours was presented by Perarnau and Petridis @cite_13 . The authors studied the existence of perfect matchings in induced balanced subgraphs of random biregular bipartite graphs. | {
"abstract": [
"We study the existence of perfect matchings in suitably chosen induced subgraphs of random biregular bipartite graphs. We prove a result similar to a classical theorem of Erdos and Renyi about perfect matchings in random bipartite graphs. We also present an application to commutative graphs, a class of graphs that are featured in additive number theory."
],
"cite_N": [
"@cite_13"
],
"mid": [
"2130745630"
]
} | MINIMUM k-CRITICAL BIPARTITE GRAPHS | 0 |